Why the Star Wars universe is decidedly anti-technology

The Star Wars universe is brimming with technology – drones, droids, force fields, holograms, tractor beams, and so much more. Technology is ubiquitous, even on the most backwater planets. And yet, in the same breath, Star Wars puts across quite an anti-technology mantra. Or at least, people don’t worship technology – there are no personal “mobile” devices, no streaming services – no nothing. If anything, the SW universe is one where people use technology every day, but it doesn’t overwhelm them, i.e. they don’t rely on technology to the point where they lose their ability to think for themselves.

Being anti-technology doesn’t mean technology doesn’t evolve, or that existing technology isn’t fantastic, but rather that it is not the central focus of people’s lives. I mean, SW has holograms, but no personal communication devices (beyond comlinks)? They probably didn’t perceive any problem with not having the equivalent of social media – or perhaps the lack of a galaxy-wide internet meant that such communication just wasn’t a viable notion (they did have HoloNet, but it was more for news and propaganda, controlled by whoever controlled the galaxy). There are also differences in how time affects perceptions of technology. For example, hyperdrive technology had supposedly been around for a million-odd years, long enough for it to be considered part of everyday life, in the same way that humans barely acknowledge the utility of the wheel.

The more interesting thing may be how technology stagnated in the Star Wars universe. We’re not talking about a small amount of time, but rather a long period – tens of thousands of years. That’s not really surprising, because technological evolution never follows a smooth curve: there are long stagnant stretches punctuated by small leaps. Perhaps technology has been around so long that its evolution has plateaued. So maybe, rather than chasing rainbows, people are content just to live their lives, choosing a level of technology that makes the most sense to them.

At some point many inventions become so ingrained in a society that they become normalized, and we no longer take much notice of them. Electricity was once considered by some to be magic, and now we only notice it when we lose power. It is distinctly possible that human society has evolved too quickly to fully understand how computer technology will impact us over the long term. We rely far too much on the utopian promise of technology, treating it as a means to solve everything.

A non-recursive ruler

In the previous posts we looked at a simple divide-and-conquer algorithm to mark a ruler. Is there an equivalent non-recursive solution? Yes, although maybe not as elegant. The program is written in Processing, and the algorithm is derived from Robert Sedgewick’s Algorithms in C. Here is the program set-up:

void setup() {
   size(260, 75);
   background(255);
   fill(0);
   smooth();
   noLoop();
}

void draw() {
   rule(0,64,8);
}

The draw() function runs the rule() function, which draws a ruler from unit 0 to unit 64, with a height parameter of 8. Next is the mark() function, which performs the actual marking of the lines on the ruler, using line() to draw each one. The x-positions are multiplied by 3 so that the lines aren’t all crowded next to each other, and each line is drawn upwards from the y-position at 50.

void mark(int pos, int ht) {
   line(pos*3, 50, pos*3, 50-4*ht);   // vertical line, 4 pixels per unit of height
}

Finally the function rule().

void rule(int l, int r, int h)
{ 
   mark(0,h-1);                            // left endpoint, the pair to the final mark at r
   for (int t=1, j=1; t<h; j=j+j, t=t+1)   // t is the height; j doubles each pass
      for (int i=0; l+j+i<=r; i=i+j+j)     // marks at l+j, l+3j, l+5j, ...
         mark(l+j+i, t);
}

The three parameters represent the left bound (l), the right bound (r), and the height (h) of a mark. The function uses two loops. The outer loop iterates through the heights t = 1 to h-1, and maintains a second variable j, which doubles on each pass (1, 2, 4, …) and determines both the position of the first mark at height t and half the spacing between consecutive marks at that height. The inner loop (over i) then calculates the positions of the marks at each height t, calling mark() for each new vertical line. Note that the initial call mark(0,h-1) draws the first vertical line (the pair to the final one at position r).
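To check the traversal without a Processing sketch, the same loop can be ported to plain Java, recording each (position, height) pair instead of drawing it. The class name and the recording scaffolding here are mine, not from the original sketch; the rule() loop itself is unchanged.

```java
import java.util.ArrayList;
import java.util.List;

public class RulerTrace {
    // Each mark recorded as {position, height} instead of being drawn.
    static List<int[]> marks = new ArrayList<>();

    static void mark(int pos, int ht) {
        marks.add(new int[]{pos, ht});
    }

    // Same loop structure as the Processing rule(): j doubles each pass
    // (1, 2, 4, ...) and gives both the first mark's offset and half the
    // spacing between marks at height t. Cleared first so repeated calls
    // are idempotent.
    static void rule(int l, int r, int h) {
        marks.clear();
        mark(0, h - 1);
        for (int t = 1, j = 1; t < h; j = j + j, t = t + 1)
            for (int i = 0; l + j + i <= r; i = i + j + j)
                mark(l + j + i, t);
    }

    public static void main(String[] args) {
        rule(0, 64, 8);
        // 65 marks in total: the endpoint plus 32+16+8+4+2+1+1 loop marks
        System.out.println("total marks: " + marks.size());
    }
}
```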

Here is what a run of the algorithm looks like:

height (t)   horizontal positions of vertical lines
1            1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63
2            2 6 10 14 18 22 26 30 34 38 42 46 50 54 58 62
3            4 12 20 28 36 44 52 60
4            8 24 40 56
5            16 48
6            32
7            64

Here is what the ruler looks like (with seven different levels of vertical lines on the ruler):

Image
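For comparison, the divide-and-conquer version from the previous posts presumably followed the classic midpoint recursion in Sedgewick’s style – mark the middle, then recurse into each half with a shorter mark. This is my own reconstruction, not the earlier post’s code verbatim, and the recording scaffolding is mine; it assumes r - l is a power of two, as in the example above.

```java
import java.util.HashMap;
import java.util.Map;

public class RulerRecursive {
    // position -> height of the mark at that position
    static Map<Integer, Integer> marks = new HashMap<>();

    static void mark(int pos, int ht) {
        marks.put(pos, ht);
    }

    // Mark the midpoint at height h, then recurse into each half with a
    // shorter mark height; bottoms out when h reaches 0.
    static void rule(int l, int r, int h) {
        if (h > 0) {
            int m = (l + r) / 2;
            mark(m, h);
            rule(l, m, h - 1);
            rule(m, r, h - 1);
        }
    }

    public static void main(String[] args) {
        rule(0, 64, 6);  // interior marks only; the endpoints are drawn separately
        // 63 marks: every position strictly between 0 and 64, each marked once
        System.out.println("interior marks: " + marks.size());
    }
}
```

Tracing rule(0, 64, 6) gives the same interior pattern as the loop version’s table: position 32 at the tallest height, 16 and 48 one level down, and every odd position at height 1.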

Is writing a book worth it?

I have talked before about how writing a textbook is not a path to becoming wealthy. Well, perhaps if you write books that a lot of people read (i.e. fiction), books that are popular because no-one else has written on the topic, or you are famous (and likely have the book ghost-written), you can make a living off it. When I wrote my first textbook on programming I spent the whole of my sabbatical doing so, way back in 2007. I wrote the book because I was interested in providing a textbook that covered all aspects of coding in C and included plenty of example programs. It was sold to students in my first-year programming class for a couple of years, but beyond that nothing really came of it. The reality is that writing programming books is likely a bit of a waste of time. That, and these textbooks would likely sell more copies if the publishers sold them at reasonable prices – I see my publisher still has the e-book version of my book in their catalog for US$87.60 – which is a complete rip-off. Even when it was sold to students it was a rip-off – the book store made C$50 off each book for basically stocking it on a shelf – I made C$6 a copy (basically 10% of the wholesale list price). After tax, that wouldn’t even buy you an espresso anymore.

Publishing in academia is an act of sheer folly – it’s a lot of effort for very little payback (note that writing books in the humanities is quite common, but academic books outside of textbooks usually have a very small market). My best piece of advice is that if someone approaches you to write an academic textbook, then you should run… fast! Although having said that, I doubt it is much better in other fields. I sometimes wonder how much cookbook authors make, considering the vast number of cookbooks published every year (and how similar some are in content). If you want to see a breakdown of a cookbook advance, check out this analysis by Kristin Donnelly – from a US$60K advance, she took home around $12K over four years (after all the costs) – not exactly an awe-inspiring number. Perhaps it is all about the exposure, and the real money is made giving workshops and the like. If you want to write an academic book then I suggest self-publishing it, perhaps even electronically (offering it to students at a reasonable cost).

I am going to spend my retirement writing some food-related books, and will likely self-publish them. Sure, if something came of one it would be a bonus, but I’m not bothered about making money off writing. At the end of the day, writing for me is an expression of creativity, a journey.

The slowness of software

“As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features. Everything works way below the possible speed. Ever wonder why your phone needs 30 to 60 seconds to boot? Why can’t it boot, say, in one second? There are no physical limitations to that. I would love to see that. I would love to see limits reached and explored, utilizing every last bit of performance we can get for something meaningful in a meaningful way.”

Nikita Prokopov, Software disenchantment (2018)

Will I miss programming?

I rarely code these days. I’ll do the odd something when I write a post that involves programming, but otherwise I try to avoid it. Why? Because I don’t really see much future need for it after I retire. Sure, I may do the odd thing, like writing a program to calculate the volume of the gugelhupf tin, probably in Fortran or something, but I’m not going to spend much energy on it (and I’m certainly not going to learn anything new). Programming has dominated my working life for over three decades, and I’m ready to move on. The thing is, it’s not like my life outside work ever revolved around programming; my hobbies are anything but. Some may involve the use of computers in one form or another, as a platform for writing, or manipulating photographs, but never programming.

I think I’ll be okay with a more Luddite-ish existence; I mean, it’s not like I really care about much of the new technology about. It was all fun while things were evolving, but now progression just seems to look like AI, and I’m not really a fan. Modern programming languages mostly suck, because they are so bloated in their quest to be everything to everyone. Their design also mostly seems to be community-driven, which is honestly as much of a disaster now as it was with Algol-68 (think what you like, the best languages were designed by 1-2 people). Where is the progress towards developing software that is efficient, maintainable, correct and usable? There isn’t any, is there? The same problems exist now as in 1968 when “software engineering” first became a thing, only now they are likely a lot bigger. How much software is a complete bloat-fest?

But I digress. It’s time for me to leave programming behind.

It’s just a tool, not a way of life.