Why you’re wrong if you think spaces are better than tabs

OK, time to finally throw my hat into the political ring.

But I want to state for the record as a disclaimer that I do so not based on opinion, but merely based on the facts:

Tabs are better than spaces.

Now, quiet, you python people. You’re just wrong. I know that your style guide PEP says you should use 4 spaces.

But it’s wrong.

Now, you’ve all heard the old unix greybeard argument about how your files will be 2% smaller if you switch to tabs instead of spaces, because you’ll use 1 tab character rather than 4 space characters. While this argument is correct, it has nothing to do with my argument (it’s just another benefit of using tabs, as far as I’m concerned). But a small saving in file size isn’t a reason to change how you do things.

The reason you should change how you do things and start using tabs instead of spaces is simple: it’s the correct answer. But that’s not actually my primary reason. My primary reason is that it’s better.

Now, this might sound arrogant or whatever, but allow me to explain what’s actually going on under the hood, and how you can configure your editor correctly and we can all live in peace and harmony and never worry about this whole indentation thing ever again.

A lengthy treatise about the history of text (and how it’s indented)

A long long time ago – even before Nirvana – there were mechanical typewriters. That’s where the tab key comes from, since our computer systems were originally used with teletypes, which were based on typewriters.

But typewriters didn’t just have a tab key – they also had tab stops – a bar along the back of the typewriter with several movable latches which allowed you to set the tabs at any position you like. The behaviour of the tab key and tab stops in a WYSIWYG word processor emulates this pretty faithfully (though it is a superset of typewriter functionality, e.g typewriters had a limited number of tab stops and afaik could only do left-aligned tab stops).

When we started using teletypes and terminals, we were originally using fixed-width (i.e the screen was typically 80 or sometimes 40 characters wide, and used a monospaced font) text-only monochrome displays. And back in the 60s IIRC the ASCII standard was developed as a descendant of the baudot code used on telegraphs.

This standard defines a bunch of characters, and a bunch of control characters. If you’re familiar with ASCII or unicode at all you’ll recognise some of them. Some common examples:
character 32* – space
character 10 – linefeed
character 13 – carriage return
and character 65 – an uppercase “A”.

(* I tend to think in decimal; all ascii values here are given in decimal for consistency)

If you’ve ever played with colours in your terminal prompt, you might also recognise escape as character 27.
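Those colours are nothing magic, by the way – a colour code is just the escape character followed by some ordinary printable characters. A quick sketch (assuming a typical ANSI terminal, where parameter 31 means red):

```shell
# ANSI colour codes are just ESC (ASCII 27, octal 033), '[',
# some parameters, and 'm' - sent in-stream like any other text:
printf '\033[31mthis is red\033[0m and this is back to normal\n'

# confirm that \033 really is character 27:
printf '%d\n' "'$(printf '\033')"   # prints 27
```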

There are a bunch of these available, and you can see the full list with a simple ‘man ascii’ (assuming you have the relevant packages installed, apt-get install man-pages should do it on debian).

In this table, we see my beloved tab sitting at position 9. And you’ll also see one that you probably haven’t used before – character 11 – “vertical tab”.

All of these things are there for a reason, even though we almost never use some of them (like vertical tab) today.

There are a few intricacies of the ascii table which aren’t mentioned or immediately obvious from reading the man page I pointed you to. They’re a little more obvious if you look at a 4-column ascii table with the hex and binary values (<-- I'd encourage you to open that in a new window so you can look at it while reading this lengthy tome).

With this layout, it becomes more obvious that the first 32 ascii characters are in a special class that you probably already know about - these are the control characters. There is one other control character which is outside of this range and a special case - 127 / DEL.

Less obvious is that this pattern of categorising the ascii table into sets of 32 applies to all four columns. The ASCII table was intended to be broken up this way: we have four broad categories of characters here: control characters, symbols and numbers, uppercase, and lowercase.

Note another correspondence when we break the ascii table up in this way: the lower word (i.e the last 4 binary digits) is the same for each character in both the uppercase and lowercase columns - we can think of the upper word / first four bits* as a "mode selector" which selects between columns on this table, while the lower word selects one of the rows, giving a particular character.

(* in reality it's only three bits in the upper / most significant word, because we're only talking about 7-bit "pure" ascii today, but I'll be referring to them as two 4-bit words here to make things clearer - the most significant bit is always 0 for our purposes today)

This idea is modelled on an earlier code (baudot? something else? the history is long), which was in turn modelled on typewriters and how the shift key worked. On a mechanical typewriter, the shift key worked by physically shifting the printing mechanism or head (versions differed). Each "letter-stamper-thingy" on the typewriter carried two characters - uppercase and lowercase - and the position of the shift mechanism selected between the two, giving each normal key two functions. Similarly, the number keys had symbols as their "uppercase character". (The names, in turn, come from a printing press operator's two cases of letters: uppercase tended to be used less often, so the operator would place that case in the upper position, further away from his working area.)

This design characteristic makes it pretty easy electronically to implement this "shift" mechanism for most of the keys on your keyboard without any special logic to handle upper/lowercase - each key has an encoded 4-bit value, and depending on the state of the shift key we set or unset bit 3 of the upper word (it's a little more complex than this these days, e.g capslock).

And that's part of why teletypes were fairly common already by the time computers were invented - they're a lot simpler devices, because the character table is designed to make them easy to implement electronically.

But it doesn't stop at the keyboard, it's also easier to interpret on the decoding end: if your bit 3 is set, you want to select a lowercase glyph. This is a very easy test that can be done with a few logic gates, and in very few instructions on most (all?) computer processors.
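You can check this "one bit selects case" property with nothing but shell arithmetic - a little sketch (the variable names are mine):

```shell
# 'A' is 65 (100 0001) and 'a' is 97 (110 0001): the only difference
# is the 0x20 bit - "bit 3 of the upper word" in the scheme above
A=$(printf '%d' "'A")
a=$(printf '%d' "'a")
echo $(( A ^ 0x20 ))         # toggle the case bit on 'A': prints 97 ('a')
echo $(( a ^ 0x20 ))         # toggle it on 'a': prints 65 ('A')
echo $(( (a & 0x20) != 0 ))  # "is it lowercase?" is a single AND: prints 1
```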

So this meant that when computers came around, and we wanted a system to have them represent text and interact with keyboards, adopting this table made a lot of sense due to the slow speed of those early machines - efficiency was everything. And so ASCII was born - people took clever ideas of their predecessors and expanded on them.

You'll also notice that in this layout, the symbol characters between the uppercase and lowercase and at values >=123 make more sense – if you’ve ever looked at an ascii chart and wondered why e.g the symbols or letters weren’t all in one contiguous region, this is why!

(Today, we’re not technically using ASCII anymore – these days, all modern operating systems use unicode. But unicode takes this compatibility thing that ascii did even further – you may know that unicode is byte-compatible with 7-bit ascii, so a pure ascii file (and most english text from other similar encodings, e.g iso-8859-1, too) is also a valid, identical unicode file)

So far we’ve only covered columns 2-4, but a simple glance at our ascii table shows that column 1 is special. And you already know why: none of these are printable characters – except, debatably, tab.

You probably know about nonprintable characters – unicode means that most computers have lots and lots of them today. But you might not know the distinction between a printable / nonprintable character and a control character. And that’s what this column actually is – these are the control characters, not the nonprintable characters.

There is one other control character – DEL – which doesn’t live in this column. I’m not sure where its position at the end of the table originated and how that decision came about. But it is also relatively easy to test for, electronically (a 7-way AND gate on your 7 bits) and in code. Putting it at the end of the table like that makes it a relatively simple exception that you need to accommodate.

They’re control characters because this encoding was invented to provide all the functionality of all the various teletype machines out there, providing “one encoding to rule them all”, which should be able to work with any teletype, providing interoperability.

Teletype machines needed to have a way to signal to each other that this should be the end of the line, for example, and so you have a linefeed character. Today you might think of a linefeed as “just another character”, but the term “control character” isn’t just a pretty name – in its original intent, “linefeed” is not a character but an in-stream instruction for the receiving device, which means “move the physical roller which controls the vertical position of the physical paper in the actual real world one line down”. Presumably on some teletypes it also meant “…and return the physical IRL print head to the first column”, and on some it didn’t. In order to support all the features of all the teletype machines out there, a bunch of control characters were needed.

No, I have no idea what half of them do, either.

I do know about a couple that you may not have heard of. For instance, there’s the one that I call “EOF” – end of file, but which the ascii table lists as “End Of Transmission”, at position 4. Unix implements this as its “End Of File” character – this is what your terminal sends down the line when you press CTRL-D. It’s why you can press CTRL-D to log out of your terminal. It’s also why you can do

$ cat - > /tmp/foo (enter)
$ cat /tmp/foo

to create a file which includes linefeeds from the unix prompt, using cat to read from stdin and then using ctrl-d to send the end-of-file character to tell the system that you’re done inputting data.

A more commonly known one, due to a decision by microsoft to be contrarian, is the difference between a linefeed (“move 1 line down”) and a carriage return (“return the carriage (or cursor) back to column 1”). Technically microsoft’s preference of doing both a carriage return and a linefeed is perhaps more historically accurate, since in almost all cases you would want to do both of these things when the enter/return key is pressed. Unix, on the other hand, says that a linefeed implies a carriage return, and interprets a lone carriage return as “*only* do a carriage return, not a linefeed”. This means that on unix, CR allows you to “echo over” the same line again – which is how you can draw bar charts in bash using echo -e “\r$barchart” in a loop.
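For instance, here’s a minimal sketch of that bar chart trick (the loop and the names are my own invention):

```shell
# redraw a growing bar on one line by returning to column 1 each time
for i in 1 2 3 4 5; do
	bar=$(printf '#%.0s' $(seq 1 "$i"))  # a string of i hashes
	printf '\r[%-5s]' "$bar"             # CR, then draw over the old bar
	sleep 0.2
done
printf '\n'
```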

I remember a time when *nix used LF, Windows used CR + LF, and Macs used CR just to be totally goddamn annoying. Apple adopted LF along with unix with the advent of Mac OS X, so that’s not a thing anymore unless you’re into retrocomputing.

You may have seen the good old ^H^H^H^H^H^H joke, where a person is deleting their code. This is because the backspace character/key at position 8 was traditionally mapped to CTRL-H, which could render on some terminals visibly as ^H rather than a backspace depending on a ton of hardware variations and compatibility settings on the terminal you were sitting at and the terminal you were talking to.

CTRL-L clears the screen on *nix because it’s mapped to the form feed character at position 12. Likewise CTRL-C is mapped to character 3 (end of text, I’ve always called it ‘interrupt’). The dreaded CTRL-S and CTRL-Q, which freeze/unfreeze output on your terminal, are mapped to control characters too: XOFF (DC3, at position 19) and XON (DC1, at position 17).

There’s also a fun one which doesn’t appear to be mapped on my modern linux machine – CTRL-G, to ring the terminal bell.

These control key sequences exist because when people started using different terminals to talk to unix systems, they quickly found that not all terminals were the same. E.g not all of them had a ‘backspace’ or a ‘clear screen’ key, but all of them had some kind of “control” or “modifier” key, so the control sequences were added for people who didn’t have the corresponding key. To this day, I have a ‘compatibility’ tab in my terminal which allows me to tell the terminal to send a CTRL-H key sequence for backspace, amongst other things.
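Incidentally, the mapping from key to control character isn’t arbitrary: as far as I know, CTRL-<key> simply sends the key’s character code with everything but the low 5 bits stripped off. You can check the examples above with shell arithmetic (the ctrl helper is my own):

```shell
# CTRL-<key> keeps only the low 5 bits of the key's character code
ctrl() { echo $(( $(printf '%d' "'$1") & 0x1F )); }
ctrl H   # prints 8  - backspace
ctrl L   # prints 12 - form feed
ctrl C   # prints 3  - end of text / interrupt
ctrl D   # prints 4  - end of transmission
ctrl G   # prints 7  - bell
```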

A short aside:

As I’ve demonstrated above, one of the pitfalls that we find ourselves running into on modern unix systems is that by the time you get to a terminal emulator in your gpu-accelerated, composited GUI, you’re running many layers of abstraction and compatibility deep: Your terminal is emulating and backwards-compatible with VT100 dumb-terminal hardware from perhaps the 1970s, patched to be able to support unicode, which is itself a backwards-compatible extension on top of the backwards-compatible extension of a previous code that is ascii, going all the way back to Baudot and the telegraph in the late 1800s. So, no, it’s not as straightforward as you’d expect to write code to say “move the cursor to position x,y” on a unix console.

This causes us a bunch of problems and causes us limitations on modern desktop unix systems perhaps more often than it helps the average user. If you read the unix-hater’s handbook, you’ll find an entire chapter on how /dev/tty and the terminal emulator is the worst thing in the entire universe. This is generally acknowledged as one of unix’s “foibles”.

So why hasn’t anyone done anything about all that legacy stuff?

Because one of the joys and beauties of unix is the deeply-ingrained principles of backwards compatibility and portability that came to embody the unix philosophy over the course of decades. Which means that I can still (relatively) easily connect my modern terminal emulator up to an antique teletype and have it be compatible to a pretty decent extent.

This is an important quality of unix. It’s important to keep these open, compatible standards around for the purpose of the preservation of information. If we had moved from ascii to an incompatible standard, we would have had to convert every single document ever written in ascii into that new standard, or potentially lose the information as the old and incompatible ascii standard became more and more rare and unknown.

And if you search youtube, you can find people hooking modern systems up to antique teletypes. For my money that makes it all worth it.

But finally, Let’s talk about tab.

Note that space is up at position 32, in column 2 with the printable characters. I’ve seen space categorised as a nonprintable character, but this is the wrong way of thinking about it. A better way is to think of space as a fully black glyph on an oldschool fixed-width text terminal (regardless of whether or not it was actually implemented this way). You want a space character to erase any pre-existing character at that position on the screen, for example. And you want that “move on to the next screen column with each keypress, so that the user can type left-to-right” functionality that you get from making it a fully-black glyph.

For example, in bash:

echo -e "12345 \r     67890"

doesn’t give you the output:

1234567890

it gives you:

     67890

- the spaces erase the previously-printed characters.

Space is a printable character.

Tab is a control character.

I was tempted to write “which means ‘print 4 spaces’ on my system”, but I thought I’d do another bash example/test/demonstration, and I surprised even myself. On my system, it’s not “print 4 spaces” at all:

$ echo -e "1234567890\r\tABCDEF"

I had expected this to echo:

    ABCDEF
But it turns out that the implementation of tab on my system is a bit more complicated than that. Instead it means “indent by one tab width”. If I did:

$ tabs -8
$ echo -e "1234567890\r\tABCDEF"

I’d get:

12345678ABCDEF
And if I do:

$ echo -e "\tsomething"

That’s not 4 spaces that it’s printed at the start of the line – try selecting that text – it’s a single tab character, and its width is whatever your tab width is set to (since it’s being displayed on your machine right now).
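If you don’t trust text selection, you can also ask od what bytes are actually there – it shows a single tab byte, not four spaces:

```shell
# dump the bytes: od -c renders the single tab character as \t
printf '\tsomething\n' | od -c | head -n 1
```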

I think this demonstrates pretty clearly that space is printable and tab is control :)

When fixed-width, monochrome teletypes and terminals were the norm (and for a long time they were the best way for humans to talk to computers – they beat the shit out of punchcards), and the ascii standard was adopted for use on a screen – with generally more capability than a teletype (a screen can easily delete characters / clear itself, and can emulate an infinite roll of paper by scrolling lines) – the question of indentation came up. This caused an issue at the time, because these devices didn’t have WYSIWYG word processors with an infinite number of center-aligned tabs that could do everything your typewriter could do. Instead, they had an atomic system – there was no physical way on these devices to have a ‘half-character-width’ tab, like you could on a typewriter. And there wasn’t a lot of memory or processor power for implementing fancy rules around kiiiiinda-trivial stuff like tabs. So the compromise that was reached was making a tab equal to a certain number of spaces.

But how many spaces? Some said 4, I think some said 8, and some said 2. This is what the ‘tab width’ setting of your text editor means. I’m sure others did more complex things with tab, like “indent to the same column as the next word from the line above”.

I’m not sure where the convention of “a tab equals 4 spaces” came from, but that’s certainly the one that became dominant at some point. Maybe it’s standardised somewhere, maybe it’s just a popular convention.

The point is, the way tabs were handled used to differ between different terminal hardware and/or settings. This is why tab settings are so seemingly-complicated in plaintext editors today – similarly to why ASCII has so many control characters, terminal emulators wanted to be able to emulate multiple types of terminal, so the tab settings had to be a superset of all of them.

The practical upshot of all this is that by correctly using your IDE’s “Tab width” setting, if you use tabs for indentation, you don’t need to have this argument about whether a tab should be 2 or 4 or 8 or 32 spaces: you simply set the tab width to your preference, tell your IDE to use tabs for indentation, and you’re set – you see the code indented however you like, and so does everybody else. We can all just use tabs correctly, and live in peace, and tolerate each other’s preferences for indenting.

(The correct IDE settings are: Tab width: whatever you prefer; Use tabs for indentation, never spaces; aggressively and automatically convert groups of spaces *at the start of the line* into tabs. Auto-indent. If your editor can’t do these things, you should use a better one. Scite and Geany are good).

And there are valid preferences, too – I personally use a tab width of 4 on a desktop or laptop machine, where characters are small and screen real estate is cheap, but if you’re coding on a small form-factor device whose screen can’t easily display long lines at a readable size (like my openpandora), an indent of 2 characters is much more workable.

Another still valid though less-relevant-today reason to have a preference about tab width is something i only touched on very briefly earlier – some of these fixed-width displays were 40 columns, and some were 80 columns. The most common 40 column displays you would see were on the 8-bit microcomputers of the 80s, which tended to be built to hook up to TVs via an RF modulator, typically leading to insufficient resolution to do 80 columns and be readable. On a 40 column device there’s a good argument for a smaller indent for the same reason as I have on my openpandora – screen real estate.

So, to start summing this all up and getting back to my original point: although I’ve spent a million words describing why tabs are more technically and semantically correct, my #1 argument for tabs is not actually based on any principle of technical or semantic correctness, or respecting the past, or anything like that.

I argue for tabs over spaces for indentation based on features: Done correctly, it removes the whole “How wide should an indent be?” question and allows users to decide based on their preference while still working together and having consistent code.

But I do also argue for it based on a nerdy “technical correctness” and “compliance with well-reasoned specifications” principles, too: In python, tab is even more explicitly semantically correct – in python we use indentation to signal a block of code to the interpreter. That’s the job of a control character, not of a printable character. That’s exactly what control characters are designed for. Those smart guys back in the 1960s or 1910s or whenever it was knew what they were doing when they put space in there with all the other printable characters.

However, note that when I say you should be using tabs for indentation, I do not mean they should also be used for formatting – that does cause issues, as many advocates of spaces have pointed out in the past. I think this is maybe the most common pitfall that people run into, and what makes them prefer spaces. But understanding these tab settings is not hard, there’s a benefit for all users, it’s the correct option, and it also saves you some space, because one tab character is one quarter the size of 4 space characters!*

(* this old argument for tabs is actually not really true anymore a lot of the time: if you’re transferring this as plaintext over http, you’re probably using a modern web browser which supports http2 and/or gzip compression, and it’s quite likely you’re talking to a server that also supports it, so there’s a very good chance that you’re getting those 4 space characters gzipped, even if you’re not minifying your javascript, and in that case those 4 spaces will take up perhaps 10 or 11 bits of data vs the 8 bits a single tab would use )
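If you’re curious, you can verify that compression claim with a quick and entirely unscientific experiment (the /tmp paths are throwaway examples):

```shell
# 1000 identical lines, indented with 4 spaces vs 1 tab
for i in $(seq 1 1000); do echo '    x = 1'; done > /tmp/spaces.txt
for i in $(seq 1 1000); do printf '\tx = 1\n'; done > /tmp/tabs.txt
wc -c < /tmp/spaces.txt            # 10000 bytes raw
wc -c < /tmp/tabs.txt              # 7000 bytes raw
gzip -c < /tmp/spaces.txt | wc -c  # gzipped, both shrink to a tiny,
gzip -c < /tmp/tabs.txt | wc -c    #   nearly-identical size
```

But I digress – back to indentation vs formatting.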

So, for example:

#!/usr/bin/env python3

def something():
	# this line is indented. You should use a single tab character to indent it.
	#    but if I want to indent this line inside the comment, this is formatting, 
	#    and I shouldn't use tab for that.
	#<-- tab
	#    <-- spaces      
	# so, for example, to make an ascii-art table outlining the characters on this line:
	#    ----
	# it would be:
	#  pos | character
	# -----------------
	#   1  | tab
	#   2  | hash
	#   3  | space
	#   4  | space
	#   5  | space
	#   6  | space
	#   7  | hyphen
	#   8  | hyphen
	#   9  | hyphen
	#   10 | hyphen        # note consistent column widths here, 10 is longer than 9, 
	#                      #   don't use tabs here between the hash and pipe characters


In the code world I've found that this formatting rule boils down to a pretty simple generalisation: left of the comment signifier (the hash character in python), that's indentation, right of it is formatting.

(yes, there are always weird edge cases, like heredocs, where formatting and indentation simply cannot be done well and unambiguously, but I've found this system to work pretty well. In these cases you should do what seems best and cleanest)

And now hopefully you know why tabs are correct and spaces are wrong. Please feel free to disagree and argue that the PEP says so, but just know in advance that if you do that you will be wrong.

More seriously, I would welcome discussion over some of the edge cases and pitfalls that people can run into with regard to this stuff. I find that a lot of the issues that people complain about with tabs also occur with spaces. It'd be cool to put together an exhaustive resource on the subject to document what is totally the empirically correct way to do it.

If you made it through these many thousand rambling words over something that many would consider trivial, thanks for reading :)

Command Of The Day

The other day I learned about a new command that I wish I’d known about years ago: mountpoint

I’ve done all kinds of things grepping /proc/mounts (or the output from ‘mount’) in the past to try to determine whether a directory is a mountpoint or not, and there was a simple command for it all along.

$ mount

/dev/sdb2 on / type ext4 (rw,relatime,discard,errors=remount-ro)
/dev/sda2 on /home type ext4 (rw,relatime)
/dev/sdd1 on /media/external type ext4 (rw)

$ mountpoint /home
/home is a mountpoint

$ mountpoint /home/antisol
/home/antisol is not a mountpoint

$ umount /media/external

$ mountpoint /media/external || echo "Dammit"
/media/external is not a mountpoint
Dammit

# A Better Example:
$ mountpoint -q /some/dir || { echo -e "\n** Setting up bind mount, sudo password may be required **\n"; sudo mount --bind /src/dir/ /some/dir; }

News from another century

A long, long time ago – 1999/06/03 – I was brave enough to try (and succeed!) at getting Max Reason’s XBasic running on Linux (Red Hat 5.1, to be precise). I remember thinking it was cool to see my name on someone else’s website when he thanked me. I didn’t even think about it at the time, but this is probably the first time I was able to contribute something back to a free software project.

It seems Max’s site has gone down recently, but here’s the wayback machine link.

(I’d just like to award Max’s parents the “best name evar” award – I think Max Reason even beats out Max Power, particularly for an engineer)

Recursively fixing indentation for a project

An interesting thing happened recently. My team had a discussion about various coding standards in order to come up with company guidelines. We all did a survey indicating our preferences on various questions.

One of the questions which came up was spaces vs tabs.

Now, having done a bunch of work with python in the last decade or so, it has seemed to me that spaces are preferred in the python community by the vast majority of people – projects with correct indentation seem to be few and far between, so I expected this question to be a slam-dunk for spaces.

But it wasn’t. It was split right down the middle. And in the end – tabs won out! :O

Maybe there’s still a fighting chance for doing indentation the right way in the python community?

If you, like me, have been stuck in a codebase with incorrect indentation, I’ve put together the incantation necessary to fix the situation:

find . -name \*.py -exec bash -c 'echo {} && unexpand --first-only -t 4 "{}" > "{}-tabs" && mv "{}-tabs" "{}" ' \;

* you may want to include more file extensions by doing e.g: find . -name \*.py -or -name \*.txt -exec blablabla
* You may want to change the -t 4 to another value if your project doesn’t use 4 spaces for its indentation width
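If you want to see exactly what unexpand will do before letting it loose on your codebase, try it on a sample line (GNU coreutils assumed – the --first-only flag limits the conversion to leading blanks, i.e. indentation only, and cat -A renders tabs visibly as ^I):

```shell
# 8 leading spaces become two tabs at width 4; the alignment spaces
# later in the line are left alone
printf '        x = 1  #  aligned  comment\n' \
	| unexpand --first-only -t 4 | cat -A
```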

This site requires IE6

Welcome to the new and wonderful world of the “modern” web, where you can be running a browser released 2 months ago and it’s considered OK to not support it because it’s not one of the big three.

I particularly adore the fearmongering security theatre used as an excuse for their not bothering with progressive enhancement and compatibility.

Now, I suppose to be fair I should concede that although I do label it as a “blatant lie”, it is possible that I might be wrong and this might not actually be a lie: It could just be that they’re simply incompetent at web development.

983 days uptime

bye bye hactar. You were a good server. It’s a pity we didn’t quite make it to 1000 days.

(the machine was decommissioned today by people who aren’t me. Interestingly, this action immediately caused a website outage. I’ll refrain from any more blatant “I told you so”‘s).

Nice work, youtube!

Aaaaaaaaaaaaaaand this is what happens when you have people who don’t understand the technology work on one layer of abstraction, using inefficient frameworks to build things that could be built better with just a little skill and hard work, with no incentive or curiosity to care about any of the thousand other layers of abstraction:

youtube.com is literally more than 50% invalid HTML.

Nice work!

I’m guessing their unit tests don’t include running the output through a HTML validator.

The State of Pulseaudio in 2021

Every now and then I like to revisit an old topic.

So, let’s revisit pulseaudio and my hatred for it, shall we?

Now, you’re not an old greybeard like me, so you’re probably saying to yourself right now “OMG still with pulseaudio?!? That attitude is sooooo 2008!”

Well, for all the people over the years who have repeated the line of pure bullshit propaganda that “pulseaudio is much better these days and almost sorta kinda works most of the time, if you squint”, I’d like to present my first, reflexive solution to the fact that today I had no audio in zoom on a laptop with a current version of pulseaudio. A solution which, I might add, solved my problem instantly:

$ sudo bash -c "while true; do pkill -9 pulseaudio; done" &

Sure, it might not be efficient or pretty, but it worked. And it serves as a perfect metaphor for the state of Linux audio for this past decade and change, which can be summed up as “If you have an audio problem on Linux, the fault lies with pulseaudio”.

There have been multiple occasions where I’ve been trying to figure out some weird audio behaviour, only to realise “Oh OF COURSE! How silly of me! this machine has pulseaudio installed!”, and disable pulseaudio, and the problem goes away.

I’d file bugs for all this stuff, but I’m sure the fault lies with gnome (which I don’t use), or KDE (which I don’t use), or nginx, or my distro, or perhaps Microsoft Office. I’m sure these things are not actually problems with pulseaudio, because Lennart’s software never has any bugs.

I for one welcome the next decade’s worth of “if there’s a weird issue on Linux, The problem lies with systemd”, and being told in my bug reports that the problem is in the default configuration that comes with Mac OS X Server.

Now it’s off to go read the documentation (yet again) on how to disable this godawful dreck to stop it from automatically starting itself. Unfortunately we’re not in the days where just removing it is a simple option anymore (thanks for the totally unnecessary hard dependency, mozilla!)

RIP John McAfee

John McAfee has been found dead in his cell hours after a court ruled he would be extradited to the US.

The article says “Authorities said the cause of death was being investigated”.

For once, I agree wholeheartedly with the Authorities. It sure was. Though I find it surprising that they would make such a candid admission.

Luckily, his important advice on how to uninstall McAfee Antivirus will be with us forever:

Say what you like about the man, but he was always entertaining.

RIP Arecibo

Arecibo Observatory has collapsed.

So sad. This awesome instrument has been an inspiration to me ever since I became aware of its existence via The X Files.

I was fortunate enough to be able to spend a few quintillion floating point operations processing data from Arecibo as part of the seti@home project.

I always wanted to visit it. Now that will never happen.

I’d bet good money that if they’d had the funding they needed for the last 15-20 years, they probably could have prevented the collapse.

But don’t worry, that funding totally went where it was needed: researching new ways to blow cunts up. So yay progress!

Self-Transforming Machine Elves

Excerpts from this article, where Terence McKenna describes the “Self-Transforming Machine Elves”, or “Jeweled, Self-Dribbling Basketballs” – nonhuman entities many people claim to have encountered during a DMT experience. The whole article is worth reading, but I’ve edited it down here to remove most of Terence’s trademark rambling (and delightful) style of talking to keep it to just a description of the entities. Mainly because I was reminded of the somewhat-famous ending quote and I wanted it on my blog:

DMT does not provide an experience that you analyze. Nothing so tidy goes on. The syntactical machinery of description undergoes some sort of hyper-dimensional inflation instantly, and then, you know, you cannot tell yourself what it is that you understand. In other words, what DMT does can’t be downloaded into as low-dimensional a language as English.

The place, or space, you’ve burst into—called “the dome” by some—seems to be underground, and is softly, indirectly lit. The walls are crawling with geometric hallucinations, very brightly colored, very iridescent with deep sheens and very high, reflective surfaces—everything is machine-like and polished and throbbing with energy.

But that is not what immediately arrests my attention. What arrests my attention is the fact that this space is inhabited—that the immediate impression as you break into it is there’s a cheer. [...] You break into this space and are immediately swarmed by squeaking, self-transforming elf-machines…made of light and grammar and sound that come chirping and squealing and tumbling toward you. And they say, “Hooray! Welcome! You’re here!” And in my case, “You send so many and you come so rarely!”

The elves, or “jeweled self-dribbling basketballs,” come running forward. They’re “singing, chanting, speaking in some kind of language that is very bizarre to hear, but what is far more important is that you can see it, which is completely confounding!” And also, something is “going on” that over the years McKenna has come to call luv—”not ‘light utility vehicle,’ but love that is not like Eros or not like sexual attraction,” something “almost like a physical thing,” “a glue that pours out into this space.”

Each “elf-machine creature” “elbows others aside, says, ‘Look at this, look at this, take this, choose me!’” They come toward you, and then—and you have to understand they don’t have arms, so we’re kind of downloading this into a lower dimension to even describe it, but—what they do is they offer things to you. You realize what you’re being shown—this “proliferation of elf gifts,” or “celestial toys,” which “seem somehow alive”—is “impossible.” This “state of incredible frenzy” continues for about three minutes, during which the elves are saying:

“Don’t give way to wonder. Do not abandon yourself to amazement. Pay attention. Pay attention. Look at what we’re doing. Look at what we’re doing, and then do it. Do it!”

Tips and tricks for registering python plug-ins with gimp – Number 4 will SHOCK you!

This is a write-up of some of the quirks and behaviours I’ve discovered writing Python plugins for Gimp, along with some quick reference material.

I did a bit of searching and didn’t find a good write-up or documentation of the register() method you need to use to register your Python plug-in with Gimp. I figured some stuff out and thought I’d write it down.

Normally, you don’t need much of a reference for gimp’s python library, because it has such wonderful built-in documentation: In Gimp, choose Filters -> Python-Fu -> Console. The Python console will open. Press the “Browse” button and you have a searchable library of gimp functions. If you select a function and press the Apply button, gimp will give you the python incantation on the console command-line, ready to be copy-pasted into your plug-in. This is super helpful and alleviates the need for a (seemingly-nonexistent? I can’t find it online) API reference, but it does have one drawback: It doesn’t give you any examples or tell you a whole lot about the available options. In the case of the register method used to register plug-ins with Gimp, I couldn’t find it in the browser at all.

So, here’s what I’ve learned about registering python plug-ins with Gimp:

  1. There is some documentation if you look around
    It’s not particularly easy, but you can find some documentation out there. Mostly, it’s tutorials on how to write gimp plugins with python. A web search for ‘gimp python plugin’ will give you a bunch.
    I pieced this info together by looking at multiple “how to write a gimp plugin with python” tutorials and examining the difference between their calls to register(), and by trying things out.

    • In the gimp python console, you can use:
      import gimpfu
      help(gimpfu.register)

      to get a very basic description of the register method. This gives you back something super useful:

      register(proc_name, blurb, help, author, copyright, date, label, imagetypes, params, results, function, menu=None, domain=None, on_query=None, on_run=None)
          This is called to register a new plug-in.
    • A couple of places have a list of available options for the register method:
      • The best documentation I’ve been able to find is now a 404, but is still available thanks to the Internet Archive here. It includes things like a helpful list of available parameter types, and lots of useful little notes on behaviour.
      • The Gimp Developer wiki has a “Hacking Plugins” page, which doesn’t mention python but which has a few useful links.
    • Here is a table of parameters for the register method, shamelessly copied from this tutorial:
      Parameter – Example – Description
      proc_name – “your_plugin_name” – The name of the command that you can call from the command line or from scripting
      blurb – “Some Text” – Information about the plug-in that displays in the procedure browser
      help – “Some Text” – Help for the plug-in
      author – “Some Person” – The plug-in’s author
      copyright – “Some Person” – The copyright holder for the plug-in (usually the same as the author)
      date – “2097” – The copyright date
      label – “<Image>/Image/_Do A Thing…” – The label that the plug-in uses in the menu. Put an underscore before a letter to set the accelerator key. Use <Image>/ for a plug-in which operates on an open image, or <Toolbox>/ for a plug-in which opens or creates an image.
      imagetypes – “RGB*, GRAY*” – The types of images the plug-in is made to handle (see below).
      params – [] – The parameters for the plug-in’s method (see below).
      results – [] – The results of the plug-in’s method
      function – myplugin – The method gimp should call to run your plugin. Not a string.
  2. Making sense of register()’s parameters
    I found myself having trouble with the imagetypes and label parameters. The first few plugins I wrote simply batched up a few gimp operations into one thing, working on an image that I had open. Then, I found myself wanting to write plugins that would perform batch operations, or generate a new image. These worked just fine, but there was one snag: I found that the menu items for my plugins were disabled if I didn’t have an image open. I decided to investigate.

    I discovered that imagetypes and label work together to control when your menu item is available, and whether your method needs to accept parameters for the currently open image and drawable.

    imagetypes takes a string argument telling gimp what types of images your plugin operates on. The acceptable arguments I’ve found so far are:

    • “RGB*” – if your plugin works on an image and requires colour.
    • “RGB*, GRAY*” – if your plugin also works on grayscale images.
    • “*” seems to be an easier synonym for the above.
    • None – This one is important, and it’s the one I couldn’t find anywhere and found by experimentation. You need to specify None (that is the python NoneType, not the string ‘None’) to have your plugin enabled when you have no image open in gimp, e.g. if you’re doing a batch operation on a directory of images, or generating a new image.
    • Maybe “GRAY*” – I haven’t tried this. Does it make sense? RGB has all the grays, too.

    label takes a string argument telling gimp where in the menu your plug-in should go. This has a couple of behaviours and implications that I had to figure out.

    • If your plugin will modify an open image, you should prefix your label with “<Image>/”. So your label might be “<Image>/Filters/Artistic/My _Plugin…”.
      Importantly, this is what determines whether your method will be passed timg and tdrawable parameters with the currently open image and drawable. So if your label does start with “<Image>/”, your method definition should look like this:
      def myplugin(timg, tdrawable, myfirstparam, myotherparams...):

      If your plugin will open or create image(s) itself (e.g. a batch operation or a plugin which creates a new image), you should prefix your label with “<Toolbox>/”. So your label might be “<Toolbox>/File/_Batch/_My Batch Operation…”.
      If you use “<Toolbox>”, your method definition should not have the timg and tdrawable parameters:

      def myplugin(myfirstparam, myotherparams...):
    • Note the underscores in my examples. These specify the accelerator key gimp will use in the menu. You should set accelerators – they make your stuff easier to use.
    • You can easily create submenus or even new menus “on-the-fly” just by specifying them with a slash. They can also have accelerators. So that label might actually be “<Image>/Filters/My _Menu/My _Plugin” or “<Image>/My _Menu/My _Plugin” to create a “My Menu” menu if you want to.
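To make the batch case concrete, here’s a sketch of registering a batch plug-in. All the names here (my_batch, the label, the menu path) are placeholders I’ve made up, and since gimpfu only exists inside Gimp’s Python-Fu interpreter, the import is guarded so the sketch stands on its own:

```python
# Hypothetical batch plug-in registration sketch. gimpfu is only
# importable inside Gimp's Python-Fu interpreter, hence the guard.
try:
    from gimpfu import *
    IN_GIMP = True
except ImportError:
    IN_GIMP = False

def my_batch(directory):
    # No timg/tdrawable parameters: the label starts with "<Toolbox>/",
    # so gimp won't pass the open image and drawable.
    return "processing images in %s" % directory

if IN_GIMP:
    register(
        "my_batch",                                      # proc_name
        "Batch-process a directory of images",           # blurb
        "Batch-process a directory of images",           # help
        "Some Person", "Some Person", "2097",            # author, copyright, date
        "<Toolbox>/File/_Batch/_My Batch Operation...",  # label
        None,          # imagetypes=None: menu item enabled with no image open
        [(PF_FILE, "directory", "Directory to process", "")],  # params
        [],            # results
        my_batch)      # function (not a string)
    main()
```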
  3. Here’s the list of data types you can use for plug-in parameters. Gimp will show nice, helpful selectors for them all. Use them! One I will note is PF_LAYER, which is useful if you want the user to select a specific layer to operate on or work with.
    • PF_INT8
    • PF_INT16
    • PF_INT32
    • PF_INT
    • PF_FLOAT
    • PF_VALUE
    • PF_COLOR
    • PF_IMAGE
    • PF_LAYER
    • PF_BOOL
    • PF_RADIO
    • PF_FONT
    • PF_FILE
    • PF_BRUSH
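From the tutorials I’ve pieced together, each entry in the params list appears to be a tuple of (type, name, description, default), with PF_RADIO taking an extra tuple of (label, value) choices. A sketch – the names and defaults here are purely illustrative:

```python
# Illustrative params list. The PF_* constants come from gimpfu inside
# Gimp; stand-in values are defined here so the sketch runs anywhere.
try:
    from gimpfu import *
except ImportError:
    PF_INT, PF_FLOAT, PF_BOOL, PF_RADIO = range(4)  # stand-ins outside Gimp

params = [
    (PF_INT,   "iterations", "Number of passes",      3),
    (PF_FLOAT, "strength",   "Effect strength (0-1)", 0.5),
    (PF_BOOL,  "flatten",    "Flatten when done?",    True),
    (PF_RADIO, "mode",       "Blend mode", "normal",
        (("Normal", "normal"), ("Multiply", "multiply"))),
]
```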


  4. Prepare to be shocked: This tip isn’t about registering plugins at all! Gasp. But since we’re talking about batch operations, it’s useful to note that you can easily have your plugin show and update a progress bar by using a couple of calls in your loop. There’s also another good practice that you should be aware of if you’re writing a plug-in that’s going to take a while to run: knowing when to update the display.
    • Use gimp.progress_init(“Some Text…”) to set up a progress bar. Do this at the start of your method, duh.
    • Use gimp.progress_update(floatval) in your loop to set progress on the progress bar. floatval should be a float between 0 and 1. You can also call gimp.progress_init(“Your message”) again in your loop to update the text.
    • By default, gimp won’t update its display while your plug-in is running unless you tell it to. So you may want to call gimp.displays_flush() periodically so that the user sees what is going on.
    • But be wary of calling these too often – updating the display is expensive and may slow you down! Use something like ‘if count % 5 == 0: gimp.displays_flush()’
    • While we’re talking about long-running plugins, it’s not advisable to operate on images on a pixel-by-pixel basis, i.e. looping through each pixel in the image, getting an RGB value, doing an operation, and changing a pixel. This is verrrry sloooooow. I assume there’s a faster way, probably retrieving the image as a multidimensional array, working with that, and then writing it back. But I haven’t managed to do that yet. I’ll update this if I do. Mail me if you figure it out!
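Those progress tips might look something like this in a batch loop. The gimp module is guarded for the same reason as before (it only exists inside Gimp); the throttling logic itself is plain Python:

```python
# Sketch of a batch loop with progress reporting and a throttled
# display flush. Outside Gimp, the gimp calls are simply skipped.
try:
    import gimp
except ImportError:
    gimp = None

def process_all(items, flush_every=5):
    if gimp:
        gimp.progress_init("Processing...")
    for i, item in enumerate(items):
        # ... do the actual per-image work here ...
        if gimp:
            gimp.progress_update(float(i + 1) / len(items))
            # Updating the display is expensive: only flush every Nth item
            if i % flush_every == 0:
                gimp.displays_flush()
    return len(items)
```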
  5. There Are Still Mysteries!
    Shocking as it is, I’m not omniscient, so I don’t have it all figured out. I haven’t had need of all the available options. I’ve discussed some unknowns already.
    For instance, I don’t know what gimp would do with your return value if you gave it results. That might make for an interesting experiment. I also don’t know what argument you’d use for imagetypes to work on indexed images. I don’t think this presents much of a problem, as it’s easy to switch between indexed and rgb modes. I would expect that you probably only really want indexed when you’re about to export, unless you’re doing pixel art, in which case I’d recommend checking out something like Aseprite.
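Putting the pieces together, here’s a minimal skeleton for an <Image> plug-in. Every name in it is a placeholder of mine, not gospel, and gimpfu is again guarded so the sketch is self-contained:

```python
# Hypothetical minimal <Image> plug-in skeleton. Because the label
# starts with "<Image>/", gimp passes the open image and drawable as
# the first two arguments. gimpfu only exists inside Gimp.
try:
    from gimpfu import *
    IN_GIMP = True
except ImportError:
    IN_GIMP = False

def my_plugin(timg, tdrawable, radius):
    # ... operate on tdrawable here ...
    return (timg, tdrawable, radius)

if IN_GIMP:
    register(
        "my_plugin",                              # proc_name
        "Blurb shown in the procedure browser",   # blurb
        "Longer help text",                       # help
        "Some Person", "Some Person", "2097",     # author, copyright, date
        "<Image>/Filters/My _Menu/My _Plugin...", # label
        "RGB*, GRAY*",                            # imagetypes
        [(PF_FLOAT, "radius", "Radius", 5.0)],    # params
        [],                                       # results
        my_plugin)                                # function
    main()
```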

So, there’s my wisdoms on that subject. I mostly just wanted to document what I’d learned about writing plugins to generate a new image vs working with an open image, but I find myself searching for gimp-python docs every now and then, so I figured this would be a good thing to write and come back to. I expect I’ll come back and edit it as I learn more. Hopefully somebody else might find it useful too! :)

End Of An Era

In the next day or so, the seti@home project goes into “hibernation”.

I’ve been contributing my spare CPU time to this project for over 20 years. More than half my life. A whole bunch of posts on this blog are about seti@home milestones.

I’m pretty avid about it, because I think that SETI is probably the single most important bit of science we can be doing. That’s a whole discussion, perhaps for another day.

I keep track of the statistics on an irregular basis. I contribute as much as possible, including donating CPU time of servers and workstations I control.

I’m the number 49 contributor in the country. I’m glad that I managed to crack the top 50 (this happened fairly recently) before the project shut down.

I also managed to crack the 99.9th percentile – I’ve accumulated more credit than 99.90051% of all SETI@Home Users. This is also a fairly recent development. I’m also glad that I managed to crack three-nines before the project shut down.

I’m ranked 1,797 out of 1,806,205 in the world.

I’ve contributed 28.91 quintillion floating-point operations.

Suffice to say that it’s something that I’m passionate about. My Drake Equation simulator is an example of that.

I’m… displeased… by this development.

The announcement that the project is going into “hibernation” came less than a month ago. Here are the stated reasons:

We’re doing this for two reasons:

1) Scientifically, we’re at the point of diminishing returns; basically, we’ve analyzed all the data we need for now.

2) It’s a lot of work for us to manage the distributed processing of data. We need to focus on completing the back-end analysis of the results we already have, and writing this up in a scientific journal paper.

With regard to point one: My drake equation simulator, and common sense, tells me one thing about SETI: it’s a long-haul game. Given the size of the galaxy and the delays in communication between stars, any communication with extraterrestrial intelligence is going to be a slow process. Another important factor is that given the size of the galaxy, if an alien civilisation starts broadcasting today, the likelihood is that it’s going to be thousands of years – perhaps even a hundred thousand – before we receive that transmission. And that’s only taking civilisations in our galaxy into account. The SETI project might run for hundreds of years and not find anything. And it should. A couple of decades for a project like this is an infinitesimal blip compared with the timespans we’re talking about with regard to extraterrestrial intelligence.

If you’re going to make the claim that “we’ve analyzed all the data we need for now”, then that can only mean one of two possibilities: 1: You’re not actually doing SETI, or 2: you don’t know what the fuck you’re talking about. There is new data coming in every second of every day. That first signal we detect could be tomorrow. Or it could be a thousand years from now. If we stop looking it’ll be never.

Some fuckwits – sorry, people – argue that running a project like SETI is expensive (blah blah blah). They seem to think that because we haven’t found anything in a few decades (well, nothing definite – we have found a couple of interesting and unexplained signals, the Wow! Signal being the most famous) that we should save our money and give up. This is ludicrously short-sighted thinking. The SETI project needs to be a LONG-term project. In the hundreds or thousands of years. It’ll take a hundred thousand years of SETI before we can say that we’re (probably) the only intelligence in the galaxy, and even then we could get a signal the next day. And no result is a result where SETI is concerned – not getting signals gives us some indication of the rarity of intelligence (or, at least, EM radio tech) in the galaxy.

As for point two, this basically boils down to “we’ve decided we can’t be bothered”. If it’s a lot of work then that means you haven’t automated it properly. Writing this up as a paper? What I’m hearing is “it’s more important that I get published than answering one of the most important and fundamental questions out there”. I’ll be expecting to see my name attributed on the paper.

There are nearly 2 million seti@home users. Lots of us are computer nerds. I’m sure you could have found some volunteers to do all that hard work you can’t be bothered with any more. I’d be happy to do as much as I can. But you didn’t ask, instead you just shut down a project that I’ve been invested in for most of my lifetime.

Obviously, seti@home isn’t all of SETI; there will be a bunch of other SETI being done. The Breakthrough Listen project is doing some great stuff. But this is a blow to science. Seti@home was a pioneer of distributed computing. And I think that the way it’s being shut down is a huge disservice to science and to all the people who have volunteered our processor time and electricity over the decades. I’m not impressed.

My machines, on the other hand, will be relieved. Their processors will be running much cooler from now on. I’ll go through processor fans much less quickly. And my wallet will probably appreciate the reduction in electricity consumption: I’ll be interested to see the difference in my power bill. I wouldn’t be surprised if it’s noticeable.

An Ode To The Orville

I fucking love The Orville.

If you haven’t seen it, here be spoilers. You should probably just go watch it if you haven’t. But, on the other hand, this is a fairly episodic show. It’s not serialised like so many things are these days. So while there will be spoilers, I think it’s probably not such a huge deal for a show like this. Still, you have been warned.

I think perhaps my favourite moment in the entire show (so far) is in Season 2’s “All the World is a Birthday Cake”, when the captain says “attention everybody, prepare to initiate… First Contact”.

And all of the crew cheers.

It’s fucking glorious.

Oh, optimistic sci-fi, I’ve missed you! It’s been so long! It’s so rare these days that I really can’t even remember the last time I saw any. I suppose there are a few movies that might count: Arrival, Interstellar, The Martian. Perhaps. But they’re all movies rather than TV series. Perhaps Stargate, but it’s now been over a decade since that ended. What I’m really talking about is obviously Trek.

I don’t want to talk about the current dumpster fires with Trek stickers slapped onto the side of them as they gang-rape Roddenberry’s corpse. I’m not here for that. I’d rather not think about them. I think the best thing is if I just stick my fingers in my ears and pretend they don’t exist. As far as I’m concerned, they’re absolutely definitely non-canon. but I kind of have to talk about them at least in passing. The comparison is inevitable, because the Orville is more Trek than any of that trash will ever be. And there’s one simple reason:

This is a show written by somebody who actually likes Science Fiction.

And I mean REAL Science Fiction, not braindead action crap set in space. Not heroic stories about wizards with laser swords. Not another frankenstein “oooh science bad!” story, or other hamfisted moralising (cough).

Now, real sci-fi doesn’t have to be optimistic. There’s lots and lots of great sci-fi that isn’t. Lexx is one of my favourite shows ever, and it oozes cynicism from every pore. Babylon 5 might be hopeful overall but it gets into some pretty dark territory, and I fucking adore it. But I think that probably the very best of it tends to be optimistic. Much of Asimov’s work (particularly the foundation and robot stories) springs immediately to mind. The Odyssey series. The Galactic Milieu series. All of these are favourites of mine.

But the point isn’t that you can’t make good sci-fi that isn’t optimistic. The point is that there’s basically no optimistic sci-fi these days. Certainly not on TV or movie screens. It’s all gritty, edgy stuff where people are cunts. And that’s a huge shame, because it’s the very core of the greatest sci-fi TV/Movie franchise ever. And I’ve missed it. So when the Orville’s crew cheers at First Contact, with comments from the crew like “This is why we’re out here!”, it just about brings tears of joy to my eyes. And I’d like to think that maybe Roddenberry’s corpse is at least taking some comfort, while being gang-raped, in the fact that some people paid attention, even if those people have seemingly been banned from working on anything with a Trek license because the people in charge of trek obviously hate Trek.

I’m starting to think, and this is a big statement, that The Orville might have the potential to be better than Trek. All of it, not just the current dumpster fires.

Wait, don’t close the tab, hear me out.

Firstly, I’m not saying that it IS better. It’s got some pretty huge boots to fill if it wants to take the crown. That’s 50 years of some of the best TV sci-fi ever that you’re going up against. A little 2-or-3-season run isn’t going to allow you to come close. We won’t be able to really consider whether it IS better until it’s at season 10, or movie 5, or something like that. Seth MacFarlane is going to have to keep it up to the same (or higher) levels of great for a LONG time to truly compete with the king.

But I can see that it might have potential.

Firstly, the comedic aspect. The Orville doesn’t have to always take itself so seriously. They’ve leaned hard into the drama and sci-fi side, and I think that’s for the best (it’s one of the things I love about season 2, the comedy has been dialled back and it’s gone 95% sci-fi), but they could do a very comedy-heavy episode and the audience wouldn’t bat an eye if it was done well. Trek really struggled to do that kind of thing. Yes, there is the odd outlier like The Trouble With Tribbles, but even with episodes like those, Trek can’t really be self-referential or examine itself. It has to take its premise seriously. But the Orville doesn’t have that limitation.

And that means that it has the potential to do something that we really don’t see enough of: The Orville is perfectly positioned to start examining sci-fi tropes. Oh how I’d love to see an episode that deals with the fact that sci-fi writers have no sense of scale. The comedic side of the show gives it the ability to do stuff like that, and I’d LOVE to see it. Deconstruct those tropes. Reconstruct them. Play them for laughs. Make the sci-fi fans chuckle. Reference classic stories and point out how absurd they are. Have somebody mention that we’re entering an asteroid belt, and have somebody say “all hands brace for impact!“, and somebody else say “What are you talking about? The average distance between asteroids is like a hundred thousand kilometers. The chance of hitting one is billions to one. We’ll be lucky if we come within visual range of anything larger than a grain of sand”.

Secondly, and this is going to be a bit contentious: Canon. Trek has 50 years of history sitting behind it, and there was no concept of “canon” when it started, it was just a sci-fi show. The idea of canon developed gradually over years. There’s really no canon to speak of in TOS: things tend to mostly be self-consistent, but the idea of canon didn’t really come about until TNG. So there are a bunch of things that are inconsistent, particularly in TOS. Hell, in the second (or first, depending on how you count) episode, “Where No Man Has Gone Before”, they talk right at the start about how they’re at the edge of the galaxy. According to later (and more consistent and realistic) canon, that’s a multi-decade journey. In Star Trek V they go into the galactic core, a similar distance. Another that springs to mind is that I recall a mention of travel at warp 13, but another episode establishes that warp 10 would be “infinite speed” and is basically impossible (if you travelled at infinite speed you would be at every point in the universe simultaneously). These are just two inconsistencies of many. Most of the time, they’re not a big deal, and we kind of just go with the one that makes the most sense and keeps things as internally consistent as possible.

But more than inconsistencies, this canon serves as a huge pile of restrictions for writers. If you want a trek story to feature regular travel to and from another galaxy, you’re probably going to have to explain that there’s been a huge leap forward in propulsion technology allowing travel literally millions of times faster than what we’ve previously seen. And it’s going to have huge implications for all future stories set in that universe, e.g. the delta quadrant is now suddenly a day or two away rather than 70 years. It’s not impossible, but not exactly simple either.

So, let’s come up with a totally ridiculous example: Say that I was a trek writer and I wanted to include some kind of, I don’t know, let’s say it’s a “spore drive” that allows instantaneous travel to pretty much anywhere in the galaxy via the power of magic mushrooms, or something. That would have all kinds of huge implications on the canon of the rest of the series. And if I was to put something stupid like that in, say, a prequel series set before other pre-existing shows, it’s going to be pretty unavoidable that I’m going to break canon pretty majorly, or I’m going to have to come up with some very contrived reason why the voyager crew doesn’t have knowledge of or access to any information about this ridiculous technology that could get them home in 15 minutes. And anyway, that’s a particularly absurd example because it doesn’t “feel” right for trek – it feels like magic, and trek has always been grounded in science. It would be similar to introducing magical powers into trek. Like, say, I don’t know, let’s go with the ability to telepathically communicate over interstellar distances. Something dumb like that would be very out of character for a trek show and only somebody with no understanding of and/or contempt for trek would contemplate adding something like that to the canon.

I’m not saying that canon is bad, or that there are no more interesting Trek stories to be written (I have like 5 different ideas). What I’m saying is that writing in the Trek universe is by definition fairly restrictive. You can’t, for example, suddenly declare that the Federation has become evil and… I don’t know, let’s go with something off-the-wall and totally absurd and say that they decide not to help an enemy when they’re in need due to some catastrophe, using it as an opportunity to start talking and potentially ushering in a new era of peace, like they did in Star Trek VI, because such an idea would be totally ridiculous and go against everything that Trek is about and destroy the very core of the concept.

Instead what I’m saying is that the canon is restrictive, and that it’s difficult to keep consistent with it. It makes the writer’s job harder. There’s a huge body of stuff that you need to know, and even somebody with the most encyclopaedic knowledge of Trek can make a mistake.

But that doesn’t mean it’s OK to just not try, or that it’s time for a reboot, or anything like that – if you want to set your story in the Trek universe, you’re taking on the responsibility to live by that canon. If you don’t like it, set your story in a separate canon and don’t slap a Trek label on it.

But The Orville doesn’t have that issue. They can make up their canon as they go. And because they have the benefit of Trek’s hindsight, they can stop and think about what they say before they say it, with an eye to future continuity. They can avoid doing things like saying “we’re at the edge of the galaxy” in an early episode. They can build a new canon, one which might be a bit more consistent.

I’ve heard that there are rumours that CBS has been thinking about selling its dumpster fire to Universal, and that they want to put Seth MacFarlane in charge. I don’t think they’re true. But even if they were, I say “why would he want that? He’s more free where he is, and he’s doing fucking brilliantly, thank you very much, and his property hasn’t been perhaps-irrevocably tarnished by people who hate science fiction”

To reiterate: All of this is speculative, and The Orville has got a LONG way to go before it can even reach for the crown. But I think I can see a potential there. There’s certainly a potential for a few classes of stories that Trek couldn’t do.

And we’ve already seen some of the best allegorical sci-fi in a long time come from The Orville: The arc about Bortus’ child is a very interesting meditation on some current trans and gender issues. Bortus’ porn addiction episode is a great bit of science fiction, and something that Trek would probably struggle to cover due to being so family-friendly (but that might also be a product of the times – there are some oblique references to various types of holosuite programs you can get from Quark in DS9, so perhaps a modern trek could do a story like that, it’s just a pity they’re not making any trek any more). Bortus seems to get a lot of the interesting stories. His race is almost purpose-built for a lot of great allegory about some pretty current stuff.

But there are others. I really really loved “Majority Rule”, which discusses social media and mob mentality, and “All The World is a Birthday Cake” which is an hour-long indictment of astrology, and “Mad Idolatry” where time passes quickly on the planet and Kelly is their god (which reminds me a lot of a really great Voyager episode, “Blink of an Eye”). And the final two episodes of season two (“Tomorrow, and Tomorrow, and Tomorrow” and “The Road Not Taken”) are a great time-travel story. And these are just the “real” sci-fi episodes which spring to mind. I also really liked the Kaylon arc in season 2, even though it was fairly standard stuff I thought it was well-executed. I think just about every single episode has been enjoyable, I certainly can’t think of one that sucked.

I was particularly struck by the first few episodes of season 2. I didn’t get around to watching season 2 until recently. I had just watched the first episode of a certain brand new dumpster fire that shits all over the core concepts of a certain 50-year old franchise, and needed to wash the taste of disgust out of my mouth, so season 2 of The Orville was particularly refreshing. I loved that the first episode was just a quiet little character study/drama thing. No explosions. No roundhouse kicks. Just a trip to Bortus’s homeworld so he can take a piss, and a couple of other little character things.

Nobody even fires a gun until episode 3. Though to be fair there is one isolated and ritualistic stabbing in episode 2.

It’s fucking glorious.

And then there are the references. And the guest cast. And the people behind the scenes. Brannon Braga, Jonathan Frakes, Robert Duncan McNeill, Robert Picardo, Marina Sirtis, Charlize Theron, Liam Neeson, Ted Danson.

It’s fucking glorious.

And then there’s the episodic nature of it. It’s not heavily serialised. If you miss an episode, it’s not the end of the world. If you just want to watch one episode in isolation, you can do that. You can jump in and watch a season 2 episode without having seen ten hours of backstory to understand what’s going on. Not that serialised stories are bad – I might have to write another ode one of these days for The Expanse. But there are definite advantages to smaller, self-contained, episodic stories.

Can I think of flaws? Sure, I guess, nothing’s perfect, but I don’t know that “flaws” is the right word, I think “finding its footing” might be more appropriate. It can be a bit derivative. Some of the episodes have strong flavours of certain episodes from other franchises. But that’s not necessarily a bad thing. You’ll be hard-pressed if you want to write completely original sci-fi, or any kind of story for that matter. And the show’s premise IS pretty derivative, that’s what it is intended to be: it’s not trying to be something totally new and unlike anything you’ve seen before. In fact it’s specifically NOT going for that. It’s trying to be like something great that you haven’t seen in 15 or 20 years, while also having its own feel. And I think it does a really great job at that. I think that the early episodes were a bit comedy-heavy and some of it didn’t really land for me. I’m glad that they seem to have shed that and gone for a mostly-serious tone with the odd joke thrown in. But on the other hand, Isaac cutting off Gordon’s leg was gold. I wouldn’t mind seeing perhaps the odd comedy episode.

“Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today – but the core of science fiction, its essence has become crucial to our salvation if we are to be saved at all.”
Isaac Asimov

As far as I’m concerned, season 2 cements The Orville in the pantheon of most worthy science fiction shows. I bought both seasons on DVD when I was midway through season 2 – I figured I should put my money where my mouth is. And I’ve got them sitting on the same shelf as my Trek box sets, where they belong. It was a nice feeling; I hadn’t added to that shelf in over a decade and didn’t think I’d be adding to it any time soon.

The Orville is fucking glorious. Go buy it. Let’s see if we can make it a big deal. Let’s see if we can get it to season 10.


As of today I’ve spent half my life contributing to SETI@home!

I’m ranked #2241 in the world in terms of total CPU time donated, and #62 in Australia. In terms of active users, I’m ranked #985 out of 1,738,452 in the world.

Bye bye github

Microsoft announces the ruination of github.

Because apparently destroying skype, linkedin, hotmail, etc etc etc wasn’t enough.

I can’t fathom the rationale behind this. Apparently there’s an accounting thing that having lots of users means you’re worth lots of money. So, 7.5 billion.

BUT surely there’s nobody out there who doesn’t think that MS buying github will immediately lead to an exodus of most of its users? As far as I’m concerned it’s a given: MS buys github, github users leave en-masse. I know it’s what I’ll be doing.

So basically MS is buying a website which will no longer have any users for 7.5 billion. Good luck with that.

I’d find it funny if it wasn’t so tragic. I liked github. Just like I liked skype.

Moving Linux to an SSD

The other day I needed to move my Linux install to an SSD, but there were a few issues:

  • I was moving to a 240GB SSD from a 1TB HDD
  • I wanted the OS to be installed on the SSD and to make the 1TB drive a /home partition. When installing initially I didn’t think to use a separate /home partition (I usually do, it’s a good idea)
  • I didn’t have enough free space anywhere to make a copy of everything on the 1TB drive (I was able to make a full backup to another machine, but doing so meant that there wasn’t enough free space anywhere to use as an intermediate staging area for repartitioning)

Here’s how I did it:

  1. Ensure that /etc/fstab is using uuids rather than device nodes (it was, should be the default these days)
  2. Perform a full backup of the entire system
  3. Reboot into a live environment
  4. Partition and format the SSD using gparted
  5. Mount both the old and new disks
  6. Copy the system over, excluding home directories, with:
    sudo rsync -aXS --exclude=/home /media/old/ /media/new

    (Note: I found that using ‘v’ (verbose) in the rsync command slows things down significantly, since there are many small files and most of the time is spent outputting and scrolling text when in verbose mode)

  7. Prepare the old disk as a home partition by moving the system into a subdirectory and then moving the contents of the /home directory into root:
    mkdir /media/old/old_system
    mv /media/old/* /media/old/old_system
    (mv will refuse to move old_system into itself, which is fine;
    check for hidden files/folders in root with ls -a and move them
    individually with mv)
    mv /media/old/old_system/home/* /media/old/
  8. use ‘blkid’ to find the uuid of the new disk
  9. edit fstab, adding a new entry for / using the uuid of the SSD (simply copied the ‘/’ line and changed the uuid), and changing the mount point for the 1tb drive to /home
  10. install grub on the new disk. This requires running from a chroot environment:
    for f in dev sys proc; do sudo mount --bind /$f /media/new/$f; done
    sudo chroot /media/new
    update-grub2
    grub-install /dev/sdb
  11. Exit chroot (CTRL-D), Reboot, enter BIOS, and change boot priority to boot from the SSD
  12. Save BIOS changes and reboot
  13. You are now running your existing Linux install, with everything (settings, installed programs, data) intact, from an SSD
  14. Now that my system is running from an SSD with my home directory on the 1TB drive, there were certain things in my home directory which I wanted to speed up by putting them on the SSD. To this end, I created an /SSD directory in the root and then moved and symlinked certain things there, so that they would appear in my home directory but really be on the SSD:
    sudo mkdir /SSD
    sudo chown -Rf username /SSD
    mv ~/Work/codebase /SSD/codebase
    ln -s /SSD/codebase ~/Work/codebase
    mv ~/workspace /SSD/workspace
    ln -s /SSD/workspace ~/workspace
    mv ~/VMs /SSD/VMs
    ln -s /SSD/VMs ~/VMs
    (repeat for anything you want to load faster)
  15. Since everything is working, clean up by removing the old_system directory (containing the system files from the 1TB disk), which will be located in /home: sudo rm -Rf /home/old_system
  16. This had a huge effect on many things: the system is much much faster, it boots in less than 10 seconds, and loading pages in my codebase is now faster than our production server (TODO: move our server to an SSD machine).

Most of the tutorials for moving linux from one machine to another assume that you’re moving to a bigger disk, or assume that you are using less than 50% of available disk space. In my case, where neither of these was true, I found this method of moving things around on the 1tb disk to be efficient in terms of time and space – moving files around on the same disk is very fast on Linux, since it doesn’t actually need to copy the data, so the ‘mv /media/old/* /media/old/old_system’ and ‘mv /media/old/old_system/home/* /media/old/’ took seconds. The slowest part of this entire procedure was the rsync – copying the OS onto the SSD. Overall I found this process to be fairly simple and painless – I was a little apprehensive about it, but it turned out that moving to a new disk really is quite simple.
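For reference, the end result in /etc/fstab looks something like this (the UUIDs here are placeholders – use the ones blkid reports for your disks):

```
# / is now the SSD (uuid from blkid)
UUID=aaaaaaaa-1111-2222-3333-444444444444  /      ext4  errors=remount-ro  0  1
# the old 1TB drive, remounted as /home
UUID=bbbbbbbb-5555-6666-7777-888888888888  /home  ext4  defaults           0  2
```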

I found this page quite helpful when doing this (in particular, installing grub on the new disk).

PHP: pretty print JSON as coloured HTML

Today I wanted a way to pretty-print a JSON string with colour highlighting. I went looking and found a bunch of ‘pretty print’ functions, but none with colour, so I implemented my own.


  1. Include the relevant CSS for formatting your prettified JSON. There’s example CSS returned by the jsonPrettyHtmlCSS() method in the code below.
  2. Call Convert::json2PrettyHTML() with your JSON string.

    ….And the code:

    class Convert {
    	/**
    	 * Helper for Convert::prettyJSON()
    	 * Returns a HTML <span> with a class matching the data type (integer,string,double,etc)
    	 * 	Add css to colour the values according to type.
    	 * autodetects numeric strings and treats them as numbers
    	 * runs htmlentities() and wordwrap() on values (wraps at 100 chars)
    	 * @param mixed $val	value to beautify
    	 * @param int $indents	number of indents
    	 * @param bool $isKey	true if this is a key name
    	 * @return HTML
    	 * @see Convert::prettyJSON()
    	 * @see Convert::json2PrettyHTML()
    	 */
    	private static function jsonColor($val,$indents=1,$isKey=false) {
    		//echo print_r($val,true) . ": " . gettype($val) . "\n";
    		$type = gettype($val);
    		if (($type == "string") && is_numeric($val)) {
    			//try to convert it to a number
    			$val = floatval($val);
    			if (intval($val) == $val)	//convert from float to int if it's a whole number
    				$val = intval($val);
    			$type = gettype($val);
    		}
    		$color = "";
    		switch($type) {
    			case 'string':
    				$val = '"' . $val . '"';
    				break;
    			case 'array':
    				$val = self::prettyJSON($val,$indents);
    				break;
    		}
    		$val = wordwrap(htmlentities($val),100,"<br />",true);
    		if ($isKey) $type = $type . " key";
    		return "<span class='$type'>" . //"' style='color:$color;'>"
    			"$val</span>"; // . " (" . gettype($val) . ")";
    	}

    	/**
    	 * Helper for Convert::json2PrettyHtml()
    	 * convert a value (i.e from json_decode) into a pretty colourised string
    	 * @param array|string|number $json		value to prettify
    	 * @param number $indents				indentation level (used for recursion)
    	 * @return string
    	 * @see Convert::json2PrettyHTML()
    	 */
    	private static function prettyJSON($json,$indents = 1) {
    		$ret = "";
    		$indent=str_repeat("<span class='indent'> </span>",$indents);
    		if (is_array($json) || is_object($json) ) {
    			foreach ($json as $k => $v) {
    				$k = htmlentities($k);
    				if (is_array($v) || is_object($v)) {
    					$v = self::prettyJson($v,$indents+1);
    					$ret .= ($ret ? ",<br />\n" : "") . $indent .
    						self::jsonColor($k,$indents,true) . ":\t<br />$v";
    				} else {
    					$ret .= ($ret ? ",<br />\n" : "") . $indent .
    						self::jsonColor($k,$indents,true) . ":\t" . self::jsonColor($v,$indents);
    				}
    			}
    			if (is_object($json)) {
    				$openbrace = "{";
    				$closebrace = "}";
    			} else {
    				$openbrace = "[";
    				$closebrace = "]";
    			}
    			$outdent=str_repeat("<span class='indent'> </span>",$indents-1);
    			$ret = "$outdent$openbrace<br />\n$ret<br />\n$outdent$closebrace";
    		} else
    			$ret = self::jsonColor($json,$indents);
    		return $ret;
    	}

    	/**
    	 * Return or add some CSS for json2PrettyHTML to the requirements
    	 * @param string $return	if true, return the CSS. Otherwise insert it using Requirements::customCSS()
    	 * @return string | void
    	 * @see Convert::json2PrettyHTML()
    	 */
    	public static function jsonPrettyHtmlCSS($return = true) {
    		return 'span.json .integer, span.json .double {
    				color: #700;
    				font-family: mono;
    			}
    			span.json .string {
    				color: #070;
    				font-family: mono;
    			}
    			span.json .key.string {
    				color: #007;
    			}
    			span.json .key.integer, span.json .key.double {
    				color: #707;
    			}
    			span.json .indent {
    				padding-left: 40px;
    			}';
    	}

    	/**
    	 * Converts a JSON string to pretty, readable HTML output which can be
    	 * 	colourised/customised via CSS
    	 * Also does other nice things, like word wrapping at 100 chars, running
    	 * 	values through htmlentities(), and treating numeric strings as numbers
    	 * Include CSS to style the output (set colours, indent width, etc)
    	 * Notes:
    	 * 		- everything will be wrapped in a span.json (i.e <span> with 'json'
    	 * 			as the class, css: span.json)
    	 * 		- keys will be spans with the 'key' class ( e.g span.key )
    	 * 		- values and keys will be spans and will have the datatype as the
    	 * 			class ( span.integer, span.key.integer)
    	 * 		- there will be empty spans with the 'indent' class in the
    	 * 			appropriate places. There may be more than one consecutively.
    	 * Example CSS is returned by the jsonPrettyHtmlCSS() function
    	 * @param string $json	the json to beautify
    	 * @return HTML
    	 * @see Convert::jsonPrettyHtmlCSS()
    	 */
    	public static function json2PrettyHTML($json) {
    		return "<span class='json'>" . self::prettyJSON(json_decode($json)) . "</span>";
    	}
    }

    I hope someone finds this useful! :)


Ladies and gentlemen, presenting: kgrep – kill-grep

This is a bash function which allows you to type in a search term and kill matching processes. You will be prompted to kill each matching process for your searchterm.

You can also optionally provide a specific signal to use for the kill commands (default: 15)

Usage: kgrep [<signal>] searchterm

Signal may be -2, -9, or -HUP (this could be generalised but I CBF).

search term is anything grep recognises.

kgrep() {
    #grep for processes and prompt whether they should be killed
    if [ -z "$*" ]; then
        echo "Usage: $0 [-signal] searchterm"
        echo -e "\nSearches for processes matching searchterm and prompts to kill them."
        echo -e "signal may be:\n\t-2\n\t-9\n\t-HUP\n to send a different signal (default: TERM)"
        return 0
    fi

    #yes, this could be more sophisticated
    SIG="-TERM"
    if [ "$1" == "-9" ] ||
        [ "$1" == "-2" ] ||
        [ "$1" == "-HUP" ]; then
        SIG="$1"
        shift
    fi

    #we need to unset the field separator if ^C is pressed:
    trap "unset IFS; return 0" KILL
    trap "unset IFS; return 0" QUIT
    trap "unset IFS; return 0" INT
    trap "unset IFS; return 0" TERM

    #split the ps output on newlines rather than spaces:
    IFS=$'\n'
    for l in `ps aux | grep "$*" | grep -v grep `; do
        echo $l
        pid=`echo $l | awk '{print $2}'`
        read -p "Kill $pid (n)? " a
        if [[ "$a" =~ [Yy]([Ee][Ss])? ]]; then
            echo kill $SIG $pid
            kill $SIG $pid
        fi
    done
    unset IFS
}

Securing Windows 10

How to make a Windows 10 VM secure with a Linux host

Simple! Restrict all intarwebs access to everything that you don’t absolutely need:

  1. run virtualbox with the vboxusers group:

    sudo -g vboxusers virtualbox
  2. allow access to the site you want:
    sudo iptables -A OUTPUT -m owner --gid-owner vboxusers -d [ip address] -j ACCEPT
  3. block everything else:
    sudo iptables -A OUTPUT -m owner --gid-owner vboxusers -j DROP
  4. In windows you’ll need to edit c:\windows\system32\drivers\etc\hosts to
    add an entry for the sites you want, since DNS won’t work. Or you could
    look at allowing DNS. But I wouldn’t.

If you follow these simple steps, you never have to worry about your testing VM reporting everything you do back to Microsoft.

For extra security, I recommend disconnecting the virtual network cable before you close the VM. That way if you accidentally start it without the vboxusers group it still won’t be able to access the internet.

If you’re running windows on bare metal in 2015 I have no advice for you, you deserve whatever happens.

New Horizons

Congratulations to NASA for the first ever Pluto flyby!

I’m days late in saying this, but I was watching events unfold live via NASA TV. I’ve been anticipating this all year, and it’s awesome to finally see Pluto up-close. Great Job! I can’t wait to see more as more data slowly streams back from nearly 5 billion km away.


  • DSN Now! – see what spacecraft the Deep Space Network is communicating with in realtime. While streaming NASA TV, I was also watching this for a signal from New Horizons as it phoned home.
  • New Horizons Website – had counters telling us when the flyby happened, then when the phone-home signal was expected, and now has a “time since flyby” counter. Also news and images.
  • NASA TV – it’s not just interesting to watch when there’s a major mission going on.

Click To Print

Here’s a nifty little piece of javascript I whipped up the other day in response to a client request.

With this code (and jQuery) on a web page, an element with the “clicktoprint” class becomes clickable. When clicked, that element – and only that element – is printed. In addition, the element will be scaled to the full width of the page.

I was asked to do this so that a client could have a voucher on their website which you could click on and have it printed. Their previous solution (using a ‘print’ media query in the site css) meant that the rest of the page could never be printed. This code injects a new piece of css for the duration of the special print and removes it afterwards, allowing the rest of the page to be printed by the regular means.

<script type="text/javascript">
jQuery(document).ready(function() {
	jQuery('.clicktoprint').click(function() {
		//mark this element's ancestors so the print css can keep them visible
		jQuery(this).parents().each(function(idx,i) {
			jQuery(i).addClass('clicktoprint-parent');
		});
		//inject print css: hide everything except the element and its parents
		jQuery('head').append('<style id="clicktoprint-style">@media print { * { display: none; } .clicktoprint, .clicktoprint-parent {display: block !important; width: 100% !important;} }</style>');
		window.print();
		//remove the injected css so normal printing works again afterwards
		jQuery('#clicktoprint-style').remove();
		jQuery('.clicktoprint-parent').removeClass('clicktoprint-parent');
		return false;
	});
});
</script>

Lunar Eclipse Photos

We had a lunar eclipse last month, and I happened to go outside for a smoke, looked up, and spotted it. Dutifully, I ran inside and grabbed my video camera and tripod and spent the next hour filming. And now I’ve finally gotten around to cutting together some reasonable-quality images:

I took these using my cheap video camera and tripod, then I extracted a bunch of frames from the video and aligned / stacked them.

This was a bit of a challenge, since the astrophotography software I have (CCDSoft) doesn’t seem to like noisy colour JPEG images very much and kept crashing, so I had to do everything with the GIMP, including manual alignment. Not fun – the third image above is a combination of 30 video frames, aligning that was a bitch. I might have to write a gimp plugin for that.

I did look at open-source astrophotography software, but the options seem to be a bit lacking. I tried a couple – siril looks like it’s the most promising, but it refused to stack my images, giving me a weird error for which teh google was no help. But it’s brand new – the latest update on the website was only a couple of weeks ago, so hopefully in a little while it’ll be useable.

For comparison, here’s a “before” shot – a raw frame of video, in all its noisy glory:

Apple knows best

A paraphrased version of a funny conversation I had once via SMS:

Me: “OMFG I simply cannot grok the iphone interface, it’s completely awful on so many levels. If I was going to go into detail I’d need a proper keyboard to type up the relevant hundred-thousand word thesis – even this phone’s physical keyboard has its limitations.”

Smartass iphone user: “What is ‘grok’? If you had an iphone, autocorrect would have picked that up for you.”

Me: “Yeah? Well autocorrect would have been wrong – type ‘define grok’ into a search engine.”

Iphone user: “…Oh.”

Me: “lol, pwnd.”

Future History of Init Systems

  • 2015: systemd becomes default boot manager in debian.
  • 2017: “complete, from-scratch rewrite”. In order to not have to maintain backwards compatibility, project is renamed to system-e.
  • 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as “project skynet”. Software behaves paradoxically and project is terminated.
  • 2033: systeml – “system lean” – a “back to basics”, from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, “system mean”, a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces “Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity”. Millions die. The survivors declare “thou shalt not make an init system in the likeness of the human mind” as their highest law.
  • 2147: systemv – a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of “keep it simple, stupid” and “do one thing, and do it well”. People’s computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.


I have fortune integrated into various scripts. Because I can.

Today, logwatch gave me one that made me chuckle:


This otherwise unremarkable language is distinguished by the absence of
an “S” in its character set; users must substitute “TH”. LITHP is said
to be useful in protheththing lithtth.

HOWTO: Power on your computer

In this latest entry in my series of helpful ‘how to’ articles, I’ll be teaching you how to power on your computer.

If you have a PC, follow these steps:
1. Ensure that the machine is plugged in
2. Ensure that the rear power switch is in the ‘on’ position
3. Press the ‘Power’ button on the front of the device.

If you have a mac, these are the steps you’ll need to follow:
1. Ensure that the machine is plugged in
2. Examine the machine, noting that there’s no power switch anywhere to be seen.
3. Unplug all cables from the machine
4. Pick up the machine and examine it from every angle, looking for the power switch. You’ll note that it’s in a location which is completely nonfunctional and unintuitive. But at least it doesn’t interfere with the nice brushed metal finish.
5. grab a permanent marker and put a mark on the front of the machine (preferably on the lovely brushed metal) where the power switch is, so that it’s possible to turn it on again without repeating this entire process.
6. Put machine back on desk
7. Plug all cables back in
8. Reach around and behind the monitor, through all the cables you just plugged in, and press the power switch
9. Kill yourself.

The biggest problem with Microsoft certification

The problems with Microsoft certification are myriad.

One really big problem is that people with Microsoft certification think that because they know how to use the Wizard provided by Exchange server, they know something about email, the internet, or networking.

Microsoft certification teaches you the practical knowledge you’ll need to run a variety of servers – on Microsoft tech. You’ll be able to do cool things fast – as long as Microsoft anticipated that need. And you’re basically taught a mantra that says that if it doesn’t have a Microsoft logo, it’s “Not Compatible”.

In reality, the opposite is true: pretty much everything is compatible, except for Microsoft. And usually everything is compatible with Microsoft, despite their best efforts to embrace and extend. The only thing that’s not compatible is Microsoft: their products aren’t compatible with non-Microsoft products.

But I think perhaps the biggest problem with Microsoft certification is that when people finish their course, they get a shiny certificate, and it says that they’re Microsoft certified. And Microsoft people spend lots of time impressing on their customers how respectable they are. So people get this impression that their Microsoft certification means that they deserve some kind of respect.

Firefox Demographics

Dear Firefox devs,

I’ve been using your browser for 10 years or so now – ever since I started to learn about open source software. The difference from IE was amazing – tabs!

Later, the difference became even more profound – Adblock! Firebug! and too many other add-ons to mention – eventually it got to the point where I had to limit the addons I use in order to not clutter and slow things down. Firefox really was the browser for power users.

I had my complaints – the CPU usage always seemed too high, and the memory usage was particularly absurd, but it did everything well.

Chrome happened. It closed the gap somewhat with its built-in developer tools and extensions. The one-process-per-tab idea was a good one. It was fast, and it didn’t require a gigabyte of memory to display one tab, but it just didn’t have the flexibility of firefox, so I could never quite make the switch.

There was one other thing about chrome I didn’t like – it had that sleek, minimalistic, “modern” interface. You know the type: they have pretty curved edges and nice animations for everything, but they tend to not be very configurable.

So it was with sadness that I updated my system the other day, only to see a shiny, chrome-lookalike interface on firefox.

I spent ages trying to turn the add-on bar back on and to remove the button which shows the awful new menu, to no avail.

Eventually I found the classic theme restorer add-on, which makes things sane again, but it’s not exactly awesome: Firefox is now using even more memory and I have yet another add-on installed just so that the interface isn’t terrible.

It seems that firefox is going for a new target demographic: they’ve decided to abandon the power users and go after the crowd who like chrome but think that it’s just too fast and doesn’t use enough memory.

Maybe they could use a new slogan: “Firefox: it’s just like chrome, only slower!”.

Personally, I think that this new demographic might be a limited market. If I wanted to use chrome, I’d…uh… use chrome.

Meanwhile, I wonder what the Opera team have been up to for the last 5 years…

The Neo Freerunner – A Review

I just emailed this to some guy who was asking about the freerunner on the openmoko lists, where I still lurk. I was proofreading it and thought to myself “hey, this is actually a pretty decent review of the device”. So here it is for all to see:

The freerunner is the worst phone ever made. It might nearly be usable as a phone now thanks to Radek and QTMoko, but you’re much better off buying an old feature phone or rooting an android phone. I think that while it might nearly be acceptable for a linux hacker, the freerunner software will never be a truly good user experience despite radek’s efforts – it’s too big a job for one person. I hope I’m wrong about that, but I don’t think I will be.

I was particularly appalled at the battery life. The battery used to last about 2 hours, but they have nearly solved all the power management bugs so if you’re lucky you might get ~6 hours out of it these days. It might even last all day if you keep it in suspend and don’t use it. In particular, using Wifi, Bluetooth, GPS, or having the screen on will significantly reduce the battery life you should expect to get.

It doesn’t have a camera, though I believe there’s a camera module for the GTA04.

An important thing to note is that due to a design flaw, the device is not capable of fully utilising its accelerated graphics, as bandwidth to the screen is limited. Therefore it’s not capable of playing fullscreen video at the native resolution of 480×640. It will play fullscreen video if you’re into extremely crap resolution – 240×320. You shouldn’t ever expect to see much more than 10-15fps at full resolution.

The company went out of business because they made a buggy phone and couldn’t figure out what they wanted to do software-wise – they seemed to think that making the UI themeable was more important than being able to receive phone calls or have working power management. The demise of Openmoko is a good thing.

If you’re looking for a phone, you do not want a freerunner.

If you’re looking for a hackable linux palmtop with a tiny screen, no keyboard, not very much power and a fairly awful battery life when you’re using it as a computer, then the freerunner might be an option for you, although you can probably buy something like a raspberry pi with 3 times the power for half as much money.

Nikolaus’ GTA04 project does seem much more promising and addresses a lot of the shortcomings of the freerunner and may be worth looking into. I have spoken to Nikolaus via email a few times and he seems like a very cool guy – I trust him and I’d buy a GTA04 in a heartbeat if I wasn’t put off by the price – I already spent $400 on a phone that doesn’t work, and I bought a nokia so that I’d have a working phone before Nick brought out the GTA04, so I can’t justify spending that much money to make my freerunner useful.

Spilt Milk and the Model M

Aah, how I love my Model M. I’ve written about it before. The click-click every time I press a key. It feels like I’m accomplishing something. I do this weird hybrid two-finger/semi-touch typing technique, a habit I can’t seem to break, but touch-typing is easier on the Model M for me – the keys have sharper edges and are therefore more distinct to the touch – my fingers just seem to fall into place. The other thing I love about my particular Model M is that it’s extra-awesome: it has a manufacture date in 1992 stamped on the back, and a real proper motherfucking IBM logo on the front – none of this modern USB stuff. It’s a real proper original IBM Model M, though 1992 is getting kinda late for original – it’s “only” 21 years old.

The Model M really is an example of engineering at its best: In this way it has something in common with Commodore hardware – they’re from the same era, and every commodore machine I have which wasn’t spare parts when I got it still works. Some of the ones I got as spare parts aren’t spare parts anymore – they’ve been turned back into working machines. My Amiga 2000 is one of my most treasured possessions. I hardly ever use it. But when I turn it on, it just works… For twenty years! It’ll still be going long after this Dual Core 3ghz lintel box I’m using now is dead. This stuff is designed to last, no planned obsolescence here! Can you imagine the testing these things went through? Automated machines pressing those buckling springs over and over again to find their point of failure. I don’t even know what it is but I’d bet they’re rated for millions of keystrokes. Per key. This is not a flimsy piece of junk which falls off your lap and breaks, though it could maybe break your toe if it lands on it. Old cars are like this too – they’re designed to last a lifetime. Barring violent destruction at the hands of nefarious third parties my Model M is the last keyboard I’ll ever need.

I’ve hardly used it in 2 years. It was plugged into a server which went pop a few months ago and which I haven’t bothered to resurrect. The server used to be my primary machine before I got my current primary machine. The new machine has its own new-fangled USB wireless non-Model-M keyboard with permanent ink blotting out the awful logo on the ‘super’ key. I use this new machine for games since it has a nice nvidia card, and I’ve found that the Model M isn’t the ideal gamer’s keyboard for action games – the only shortcoming I’ve discovered – those ultra-tough keys aren’t designed for being pressed in rapid succession. Or, perhaps more accurately, my fingers lack the dexterity to press the same clicky-style key quickly enough. So I’d never bothered to plug in the good old Model M, even after the old server died.

Enter spilt milk leading to a sticking tab key. Uber annoying in vim. The story should be pretty obvious from here – no more fear of spilt milk, certainly no crying over it…

…except for one detail: now I have a good, PS2, Model-M keyboard with its awesome 2-3 metre cable and its clicky keys and weight (it really feels like a piece of furniture sitting on your lap!), AND a mere wireless USB keyboard with noobish easy-to-press keys that are nice for gaming. Awesome. :)

Dear the entire world

You don’t need to quote database column and table names in queries unless they contain special characters like spaces.

This applies for every database engine and every dialect of SQL I’ve ever used – quoting column names is always optional.

So why the fuck do you insist on writing this in your php codez?

$query="SELECT \"some_ordinary_column\" from \"some_table\" where \"some_table\".\"some_column\" = \"some_value\""

Are you a masochist who loves escaping things or what?

How much more readable is this:
$query='SELECT some_ordinary_column from some_table where some_table.some_column = "some_value"'

The funny thing is that the type of people who write this garbage are the same type of people who tell you that using an if statement without braces is “bad style”. lol.

Logging is necessary

Unless you’re me, you’re less awesome than you think you are.

(I’m more awesome than I think I am. This is not a paradox)

Therefore, when you write a mission-critical piece of code, you need a logging system.

Your logging system needs to have different types of log message: error and debug at the bare minimum.

Your code needs to log every action it takes.

This might be expensive or difficult. Tough shit. If it’s important, it needs to be logged – you must be able to go back over a particular execution and determine what happened. This is not optional.

This is a good rule even for not-important code. It makes debugging SO much easier. There are approximately 100 billion logging systems available, use a library if you must. Or you could write your own in 10 minutes.
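As a rough illustration of the “write your own in 10 minutes” option, here’s a minimal shell sketch – the logfile path, the message format, and the DEBUG switch are all my own choices, not a prescription:

```shell
#!/bin/sh
# Minimal roll-your-own logger: "error" and "debug" levels,
# timestamped, appended to a logfile.
LOGFILE="${LOGFILE:-/tmp/myapp.log}"
DEBUG="${DEBUG:-1}"   # set to 0 to silence debug messages

log() {
    level="$1"; shift
    # skip debug messages unless debugging is on
    [ "$level" = "debug" ] && [ "$DEBUG" -ne 1 ] && return 0
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $*" >> "$LOGFILE"
}

log debug "starting up"
log error "something went wrong"
```

Grep-able, timestamped, and you can bolt more levels on trivially – which covers the “go back over a particular execution and determine what happened” requirement.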

Let’s discuss! Give me an example of a situation where logging is undesirable for important code, and I’ll tell you why you’re wrong… ;)

Things aren’t moving backwards

No, things aren’t moving backwards at all!

Let’s look at some of the awesome new features of a couple of current-gen Microsoft products:

Windows 7: One of my FAVOURITE features is the way it assumes that I, as a user, am too stupid to know how to resize a window: apparently, if I want to move a window mostly off the right-hand side of the screen, what I actually want to do is resize that window so that it takes up half the screen! Apparently I’m too fucking retarded to know that I can achieve the same result by simply moving my mouse to the top-left or bottom-left corner of the window and just resizing it. Of course I’m not sure how it thinks I intended to resize the window, given that the resizing corners at the right are off-screen.

Similarly, if I want to move a window to the top of the screen, that means I want to maximize! Apparently, I’m too fucking retarded to know to just press the maximize button like people have been doing for about 20 years. Apparently, after moving my small scite window to the top of the screen, I planned to use the resize corners to resize it so that it filled the whole screen, rather than just pressing maximize. It’s really great that I have this software to do my thinking for me: I’d been struggling with that whole ‘maximize’ notion for years.

So we’ve established that Microsoft thinks my intelligence lies somewhere between that of a Mac user and an inanimate carbon rod.

However, when I want to access the New-And-Improved(TM) ribbon interface and add a button to it programmatically via VBA – you know, so that my (retarded) users just get a new button they can click to make things happen, I find that:

(from A Blog Post):

You cannot create ribbon elements dynamically in VBA

It is not possible to create ribbon elements dynamically via code as 
with Office 2003, where you could manage your own CommandBars and 

In Excel 2007 each ribbon element (Tab, Group, Buttons, etc.) needs 
to be defined statically and embedded in the Excel document using a 
specially crafted XML file and with quite a few manual steps, 
including renaming and modifying contents of the Excel document —
factually a ZIP with the XLSM or XLAM extension.


(from This Book):

In previous versions of Excel, it was relatively easy for end users 
to change the user interface. They could create custom toolbars that
contained frequently used commands, and they could even remove menu 
items that they never used. Users could display any number of 
toolbars and move them wherever they liked. Those days are over.
The Quick Access Toolbar (QAT) is the only user-customizable UI 
element in Excel 2007. It's very easy for a user to add a command to 
the QAT, so the command is available no matter which ribbon tab is 
active. The QAT can't be moved, but Microsoft does allow users to 
determine whether the QAT is displayed above or below the ribbon.

The QAT is not part of the object model, so there is nothing you 
can do with it using VBA.

So, to boil it all down, there’s no way for me to programmatically add a new toolbar button using this wonderful new interface, which means that my users (who, as previously established, are assumed to be about as clever as sponges) are expected to add a toolbar button themselves by following a set of instructions which I have to put together for them. Never mind the fact that this will inherently create a bunch of issues just in terms of support (e.g: morons calling me up asking what I mean by ‘right-click’ in step 6; users choosing a different icon, or giving the new button a different caption, ruining the uniformity of the interface), how I’m supposed to convey a concept as complex as ‘add a toolbar’ to a retarded grasshopper is strangely omitted from the documentation I’ve looked through.

No, things aren’t moving backwards at all…

Watch out for the next installment of this series, where we’ll analyse why it’s a good thing to remove features from your program so that the interface isn’t cluttered anymore, because having a complex interface is a terrible, terrible thing, and menus are so unintuitive.

I hear that next year Microsoft is going to help the people at NASA Mission control replace their hideously complex systems (sometimes people have to TYPE THINGS at mission control!) with a (touchscreen) button (with round corners, of course!) that says “Launch Rocket” (in the tooltip, which you can’t see, because it’s a touchscreen – The icon will simply be a cartoony V2 rocket). It’s expected that this will lead to huge efficiency gains in the rocket launching process, and will probably only cause a 20-30% increase in catastrophes.

Routing everything via VPN

I have a VPN.

I have it set up in a pretty standard way: when a machine joins the VPN it effectively becomes part of my LAN. But I don’t route everything via the VPN; that would be inefficient and would waste my bandwidth. I haven’t bothered with doing DNS over VPN, as I usually just use IP addresses anyway (one of the advantages of using a 10.x.x.x network), and when you do that you run into all kinds of complexities and problems (like how to resolve names on the LAN you’re connected to).

But sometimes I’m somewhere where I don’t trust the owner of the network that I’m connected to: I don’t want to be spied on.

In these instances, it’s handy to be able to route everything out over the VPN connection.

But if you don’t stop to think for a minute and just try to add a default route which points to the VPN server, you’ll instantly lose your VPN connection and all internet access because there’s no longer any way to reach the VPN you’re trying to route through. Doh.

The solution is simple:

#delete existing default route:
sudo route del default
#traffic to the VPN server goes out via your existing connection:
sudo route add -host <internet-ip.of.vpn.server> gw <your.existing.untrusted.gateway>
#...and everything else gets routed out via the VPN (assuming your VPN server is
#the gateway at the other end of the tunnel):
sudo route add default gw <your.vpn.tunnel.gateway>

OK, that takes care of routing. Next you need to send your DNS requests out via the VPN, or you’ll still be pretty easily monitorable – overlords will still know every domain you visit. To do that, edit /etc/resolv.conf and change the ‘nameserver’ line to point to the nameserver for your VPN and/or LAN:
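Something like this – the 10.x address here is purely an example; substitute the actual nameserver on your VPN/LAN:

```
# /etc/resolv.conf
nameserver 10.0.0.1
```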


I recommend running your VPN on port 443. My reason is really simple: in oppressive environments, you can pretty much count on port 443 being open, since it’s used for https, and https is not something that a tyrannical sysadmin/policymaker can get away with blocking: it’s the backbone of e-commerce. In addition, https traffic is encrypted, as is VPN, so it’s less likely to be monitored by things like deep packet inspection, and any not-too-deep packet inspection is likely to come up with an outcome of ‘encrypted traffic, nothing unusual’ when looking at VPN traffic on port 443.
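Assuming your VPN is OpenVPN (the post’s advice applies to any VPN software; this snippet is just one concrete example), the server side of that recommendation is a two-line config change – note that masquerading as https means running over TCP:

```
# /etc/openvpn/server.conf
port 443    # the https port: almost never blocked
proto tcp   # https is TCP, so the VPN must be too
```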

It should be noted that while this is unlikely to set off automated alarm bells, it will look somewhat unusual to any human observer who notices – your overlords will see a bunch of “https” traffic, but nothing else (not even DNS), which may in itself raise suspicions.

It should also be noted that you very likely just added a couple of hundred milliseconds to your latency and have now effectively limited your available bandwidth somewhat, depending on network conditions.

But I know from experience that the Victorian government’s IT agency, Cenitex, is incapable of determining any difference between https traffic and VPN traffic going via port 443.

Though, of course, that doesn’t mean it’s impossible…

…In fact, that doesn’t even mean it’s difficult…

…but you should be reasonably safe from the spying eyes of your Microsoft certified sysadmin. :)

Goodbye AWN

It would appear that Avant Window Navigator is dead:

Stable release 	0.4.0 / April 11, 2010; 2 years ago

Which is a pity: I liked AWN. But the fact that its task manager doesn’t work properly has become a deal-breaker: after days of trying, I’ve finally given up on attempting to make AWN’s task manager realise that I have Eclipse running.

I have an eclipse launcher set up in awn. While Eclipse’s splash screen is showing, AWN recognises that eclipse is running, but as soon as the main window opens, awn does its “window closed” animation and refuses to acknowledge that the window which currently has focus deserves any kind of icon in the task manager.

If it came up with a duplicate icon – one in the taskbar in addition to the launcher – that’d be acceptable. But it doesn’t do that. Instead, it makes it impossible to switch to a minimised eclipse without using ALT-TAB, or the xfce-panel on my 2nd monitor (which isn’t always on). Clicking on the launcher launches a 2nd copy of eclipse, which then (rightfully) whinges that the eclipse workspace is in use and can’t be opened.

I found a couple of people who had similar issues. I tried a bunch of things to work around it.

On one forum, somebody explained that it’s hard to match windows to running processes…which is fair enough…

…except that the xfce panel seems to manage it just fine.
…and so does gnome panel
…and so does cairo-dock
…and, presumably, whatever KDE uses
…and, I expect, docky
…and, more than likely, dockbarx

And the awn devs would appear to have all been hit by a bus: domains have expired and are redirecting to spam sites; IRC is dead, and I’m yet to find any remnant of the awn community. Which is a real pity.

So I’ve installed cairo-dock. It looks nice, and somehow feels ‘more snappy’ than awn: maybe this is because cairo-dock’s ratio of plugins written in python is low: most of them would seem to be C++, with only a couple of the prepackaged ones being python. My only complaints about cairo-dock are:

  • The ‘Indicator old’ applet is retarded: it has drawing issues (draws a grey box where an icon used to be), and why can’t it just display in the dock, like every other applet in existence? Why should I need to click on its (nonexistent – it just renders as blackness) icon to actually see my systray? This is only a minor annoyance since the only things I have which refuse to cooperate with the ‘new’ indicator applet are the seldom-used fusion icon, and vmware (whose systray icon I don’t use)
  • The context menus on some items are behaving quite weirdly: they’re not tall enough, and sometimes they don’t register clicks… except sometimes they do, and sometimes they are tall enough. It seems very random.

This very likely means that in the near future I’ll be extending NodeUtil to have a cairo-dock applet, since this is the only thing I really miss from awn.


In today’s installment of “Awesome Open-Source Software”, I’m going to talk about Teeworlds.

A screenshot:

This game is a brilliantly playable, amazingly addictive, and hugely fun blend of a 2D platformer (a-la Mario) and a multiplayer FPS (a-la Quake3 or Unreal Tournament).

It’s not complicated: It’s multiplayer only, there are only 5 weapons, and the levels aren’t big or expansive – you won’t spend long looking for your enemy, you’ll spend more time lobbing grenades at him, and then running away frantically because you’re out of ammo and/or low on health.

That’s if you’re playing with only a few others. If there are lots of people in the game, it’ll just be frantic carnage, like any good deathmatch.

It takes its cues from “proper” deathmatch games – the old run-and-gun style: cover systems and regenerating health are for sissies; precision aiming is for people who don’t know about splash damage. Standing still is a VERY BAD IDEA. None of this “modern FPS” crap. This is evidenced most starkly in the fact that you can double-jump, and, perhaps coolest of all, you have a grappling hook, which you can use to climb and to swing yourself to/from places very quickly. If you play in a busy CTF server, you’ll see just how effective the grappling hook can be – these guys are SO FAST!

And it’s gorgeous and has a great atmosphere: cartoonish graphics and sounds. The sounds really do it for me: the cutesy scream your tee will make when he’s hit in the face with a grenade makes it fun to die, the maniacal yet cartoonish laugh your character will emit when your opponent cops a grenade to the face. It’s a really really fun atmosphere.

And I mean that: this is one of those games which is so much fun that you rarely feel like ragequitting, even when you’re losing badly: you will get killed mercilessly and repeatedly, but you’ll have a big smile on your face during the shootout, and when you die you’ll laugh.

And it’s quite well-balanced: none of the weapons are over-powerful or ridiculously weak. This is probably helped by the fact that the weapons have (very) limited ammo, even though running out of ammo sometimes annoys me slightly.

It’s not perfect, though:
  • There aren’t enough teeworlds players in Australia, so I find myself playing on servers where I have a ping of 300 or more. This means you sometimes have a laggy experience.
  • You’ll come across killer bots sometimes. These bots are inhumanly good and can drain the fun out of being repeatedly stomped on, but the game has a voting system which allows you to vote on kicking players, so these bots are rarely a nuisance for long.
  • It has an ‘auto-switch weapons’ feature which switches when you pick up a new weapon, but it lacks a ‘weapon preference’ order a-la Unreal Tournament, and it does not switch weapons automatically when you run out of ammo. This is sometimes frustrating because you’re firing at your opponent but you only get an ‘out of ammo’ click, and while you’re trying to switch weapons your opponent kills you. But it’s one of those things you learn and it also serves to add tactics to the game – you’re always keeping an eye on how much ammo you have.

TL;DR: Teeworlds is a really really fun and addictive game which cleverly combines cutesy graphics and 2d-platformer gameplay with the frantic action of a golden-age FPS. It’s one of the better open-source games out there. Go and buy it now! ;)

[EDIT: antisol.org now runs a Teeworlds Deathmatch server! :) ]

want to scp recursively without rsync?

rsync doesn’t work on the freerunner for some reason I can’t even be bothered investigating.

So I came up with this without even really thinking about it, googling, etc:

cd /destination/path;ssh user@host 'tar cv /source/path' | tar x

It’s when I do stuff like that without giving it a second thought that I feel like I’m justified in saying that I’m “familiar” with linux. I’ve achieved something since ~2005!

One could add a ‘z’ or a ‘j’ to the tar parameters for compression, but the freerunner’s CPU speed makes compression take longer than transferring the data uncompressed.

HOWTO: Write the worst piece of open-source software in the history of mankind

…it’s actually pretty easy: All you have to do is write an open-source emulator (OK, fine, “API Compatibility Layer”) for the worst piece of software in the history of mankind.

In case it’s not completely obvious at this point, I’m talking about WINE.

Wine is a piece of shit. The only reason I don’t rate it as “worse than windows” is that the wine devs don’t expect you to pay for their garbage, whereas Microsoft does.

No, wait, I don’t want you to misunderstand me, so I’ll clarify my statement: wine is a godawful piece of shit.

Legions of freetards will quickly jump to defend wine: They’ll tell me how I have no right to criticise the hard work of all these people who are giving me something for nothing, and they’ll talk about how well the wine team keep abreast of the latest developments and how they’re in an impossible situation because they’re aiming at a moving target and how Microsoft’s documentation leaves a lot to be desired in terms of reimplementing the entire Win32 API.

And they do have a point – the wine devs are not trying to do something trivial.

But that doesn’t change the fact that wine is a piece of shit.

I won’t dispute that there are some talented people working on the wine project – I don’t even want to think about how complex such an undertaking is, but if you can’t even make things work consistently when somebody upgrades to a new version of your product, it’s a shit product, and you’re useless, not doing enough testing, and not managing your project properly. Keeping existing features working is more important than adding new features.

Keeping existing features working is more important than adding new features!


Wine suffers from a completely retarded number of regression bugs: something which works just fine in version X of wine may or may not work in version X+1. This is an absolutely ridiculous situation.

Apparently “STABLE” doesn’t mean what I thought it meant: I thought it meant “Working, Usable, and tends to not crash horribly in normal use”. But the wine team seems to think that “STABLE” means “This alpha feature is almost feature-complete and almost works. Mostly. Except when it doesn’t”. I can’t fathom the decision to mark Wine 1.4 as “Stable” with its redesigned audio subsystem which lacks a fucking pulseaudio driver! And the attitude they take is “pulse has ALSA emulation, so we don’t really need to support pulse” – a weak cop-out at best. I mean, it’s not like the majority of distros these days default to using pulse… Oh, wait…

Application-specific tweaking. Oh, the essay I could write on the bullshit required to make any particular application work. Here’s the usual procedure:

  1. Go to appdb.winehq.org, see that it’s rated as “garbage”
  2. Note that the ‘Garbage’ Rating was with the current version of wine, and that there’s a “Gold” rating for the previous version of wine. Click on the gold rating to see that review
  3. Scroll down through the review to see if there are any helpful tips as to weird and wonderful esoteric settings you should change to make your app work
  4. Try all the tips and manipulating all the settings in the previous point, to no avail
  5. Revert wine to the earliest version available in your distro’s repos. It’s important to note here that you probably just broke every other wine app you have installed
  6. When this doesn’t work, download and attempt to compile the version which got the gold rating. If you manage to get it to compile and install correctly (it probably won’t – it’ll depend on an older version of at least one library, which will lead you straight into dependency hell), go back to fiddling with esoteric settings for a few days
  7. When you’re sure you’ve replicated all the tweaks, DLL overrides, and settings for your app as per the gold rating on the appdb and it STILL doesn’t work, scream loudly
  8. Install virtualbox and waste many resources just so that you can run your app
  9. Hope that you used checkinstall when you compiled the old version of wine, so that it’s possible to remove it without wanting to commit ritual suicide
  10. Install the version of wine you had installed from the repos. Hope that the other apps you spent days configuring and actually managed to get working still work.
  11. Hope that the few apps you actually got working don’t break horribly next time you do an apt-get upgrade

I can’t be fucked with any of this anymore, so here’s the process I use:

  1. Create a new wineprefix: ‘env WINEPREFIX=~/wineprefix_for_this_particular_app winecfg’. It’s important to create an entirely new wineprefix for each app, because that way it’s slightly less trivially easy to break all your other apps just by changing one setting.
  2. Run the installer with something like ‘env WINEPREFIX=~/wineprefix_for_this_particular_app wine some_installer.exe’
  3. When things fail horribly (and they will for about 99.999% of apps in my experience), type ‘rm -Rv ~/wineprefix_for_this_particular_app’ and install it in your VM.

The most hilarious and entertaining part about all of this is the way that even after all my experiences which indicate the contrary, sometimes I still tend to assume that things will work under wine – after all, the wine mantra is “if it’s old, it’s more likely to work”. Here are some examples:

  • “The old PC version of ‘Road Rash’ – the one with soundgarden in the soundtrack – it’s more than 10 years old! It’ll be using DirectX 3 API calls if it’s lucky! SURELY it will ‘Just Work’ in wine, right?”
  • “Oh, I know – I’ll Install Dungeon Keeper – it’s old – it should Just Work, right?”
  • “AHA! Deus Ex has a ‘Gold’ rating with THE CURRENT WINE VERSION! Surely that means it will Just Work?”
  • “Aah, JASC Animation Shop. Such a great little app. I used to use it all the time back in 1997. It should Just Work, right?”

And now, some questions for the wine developers:

  1. Why is it that FL Studio uses about 3 times as much processor time in the current version as it did in the previous version? For some of my tracks, FL Studio sits at 90% CPU Utilization when it’s NOT EVEN PLAYING. Trying to play will give you a horrible mess of stuttering as the CPU valiantly attempts to keep up with wine’s bullshit demands. With the previous version of wine and the exact same track on the exact same install of FL Studio with the exact same settings, this track played just fine. I have been tweaking for about 4 days now and have achieved ABSOLUTELY NOTHING in terms of improving this situation.
  2. Why has wine not absorbed PlayOnLinux (another awful piece of code – random hangs FTW!)? This functionality should be included in wine (i.e. a central, per-application/wine-version ‘tweaks’ database). You have a nice interface for installing apps. You choose the app from a list, and wine downloads the tweaks appropriate for that app and applies them automatically. This would not be hard to implement, and would solve a HUGE number of issues for a huge number of users.
  3. What genius decided that a version with a broken pulseaudio driver should be marked as “Stable”?
  4. Speaking of re-engineering your audio stack, can you please explain how this new, non-functional audio subsystem is superior to the working one which it replaced?
  5. Do you anticipate that one day wine will actually be able to do what it says on the tin? Or should I just give up on the whole concept? I feel like I am the butt of some huge, multi-year practical joke.

Now, to be fair, this is not all the fault of the wine devs – it’s also the fault of the people who manage repos for the various distros out there: these people should learn, and simply not upgrade the wine version in our repos until they’ve checked whether the fucking thing works or not – it’s a weird kind of synchronicity that choosing a wine version is kind of like using Microsoft products: You never, ever, ever install the initial release, you wait for R2, when it might actually stand a chance of being something other than utter shit.

Wine sucks. It’s the worst piece of open-source code in the history of mankind. There are two reasons why:

  1. Because they’re emulating the worst piece of code in the history of mankind.
  2. Because they’re amateur idiots who suck and can’t manage a project properly and don’t do enough testing. This is evident in that they regularly break backwards compatibility.

There is never a reason or excuse for breaking backwards compatibility other than laziness, and it’s therefore never acceptable to break backwards compatibility.

Wine is a piece of shit.

[EDIT: wine 1.5 is better than 1.4, but still not up to par with 1.2]
[EDIT 2: I'd really really love to hear an explanation of why opening a common dialog crashes metacity]

And Yet It Moves / Braid

No, this is not an Ad. Brokenrules are not paying me!

Everybody raves about Braid. It’s clever, with unique game mechanics, and very pretty.

But it’s too short – by the time you wrap your head around a concept, you’re not using that concept any more.

I’ve read reviews praising this, saying that there’s no repetition and not a single “wasted” puzzle.

But you know what? For all my complaints in the past about games being too repetitive, I can handle doing a couple of variations on the same puzzle if it means it’s going to take more than a couple of hours to get through the game.

Don’t get me wrong – Braid is a brilliant game, and the people who came up with those game mechanics are really clever, and the art style is very pretty… but it’s too short – it needs more levels or some different playmodes. It has near-zero replayability.

And Yet It Moves is a much, much better game – easily the most original and fun game I’ve played in years.

This game is fucking awesome in every respect – the game mechanic is deceptively simple – rotating the world – but it gets progressively more challenging and clever about how it uses it.

The game is a good length – it took me longer than Braid did. And it has different playmodes and an “epilogue” set of levels which you can play after you’ve finished the main game which will keep you interested. And achievements are always fun.

And it’s absolutely gorgeous. The “paper” art style is magnificent, and goes through enough variations to always be interesting.

And the music is fantastic – very distinctive and unusual. There’s one particular piece of music which usually plays in-game (usually when you’re jumping onto disappearing platforms) which is especially awesome.

Linux is supported. Steam for Linux is supported. I got it as part of one of the Humble Bundles. It’s going for $10 on Steam right now. You should buy it.

I’m trying to think back to the last time I loved a game this much. It was a long time ago – I’ve been bored with games for a long time. I think that probably the last time I was this impressed with a game was the first time I played Portal.

Here, watch the trailer:

Go and buy this game right now. You want it, you just don’t know it yet (or maybe now you do!). It’s cheap. And it’s a seriously awesome, awesome game.

foxtrotgps / landscape mode X apps on qtmoko / QX

I’ve been playing with the latest QTmoko on my freerunner after a couple of years of not updating my distro.

Some thoughts:

It’s great! Very snappy and responsive – congrats and thank-you to Radek and the other contributors, you’ve done a fantastic job and you’ve made some great strides over the last couple of years.

I haven’t tried using it as a phone yet (I’m still put-off by my previous experiences, and don’t have a second SIM), but it looks like it might be *gasp* almost usable! :O I’m tempted to try it out as a phone…

I particularly like what you’ve done with the keyboard – I think it’s about as good as an on-screen keyboard is going to get on this device. Very nice, though I wish I could have it default to qwerty mode.

But it’s not perfect – everything I want doesn’t “just work” yet (though it is very good – things like wifi and bluetooth seem to just work). But that means I get to have some fun tinkering!

I’ve been messing about with making foxtrotgps work under QX on qtmoko for a little while, and wanted to jot down some notes and tips:

  • When QX asks which X server to install, I recommend xorg – xglamo doesn’t seem to like being rotated. I’d love to make xglamo work, because it seems faster. (Performance with foxtrot on xorg is very usable, but faster == better.)
  • You very likely want to apt-get install gconf2, or foxtrot won’t save user prefs (e.g. mapset, position, etc) when you close it.
  • Rotating the X screen with xrandr doesn’t rotate the touchscreen input properly. To fix this, you need to use xinput to swap the x-axis.
  • I’m using ‘xrandr -o right’ for my landscape orientation. This means that the USB plug on the freerunner is at the top. If you want to use ‘-o left’ you’ll need to play around with the axis swapping.

  • There’s no onscreen keyboard for X apps. To fix this, apt-get install matchbox-keyboard matchbox-keyboard-im, and launch matchbox-keyboard --daemon before you start foxtrot. This will give you a keyboard which pops up when you select a textbox. After foxtrot closes, I kill matchbox-keyboard.
  • QX has a ‘display always on’ option, but X has its own screensaver and blanking/dpms stuff. you’ll want to use xset to turn these off if you want your display always on.
  • You need to start gpsd before you start foxtrot. I also kill gpsd when foxtrot closes. This means it can take a while to get a fix, but I haven’t done a huge amount of outdoor testing yet – all I’ve done is confirmed that it will get a fix.
  • Pressing the AUX button to multitask while X is rotated under QT is ugly – qtmoko will work, but its display will be broken – it looks kinda like QVGA mode and is incorrectly rotated. If you can manage to hit AUX a couple of times to get back to QX, and then press ‘resume’ or ‘stop’ in QX, qtmoko will revert to an un-broken state. Ideally I’d like to disable qtmoko’s AUX-button handler while foxtrot is running, or capture focus events to unrotate on lostfocus and rotate on gotfocus, but I haven’t yet found a way to do either of these.
  • The above ugliness will also happen if X dies while rotated, so you need to xrandr -o normal after foxtrot exits. This means you want to exit foxtrot gracefully. Since foxtrot doesn’t have an ‘exit’ menu item, this means you want to ‘use matchbox’ in the QX settings. You also want fullscreen.

I ended up doing the following to make a wrapper script for foxtrot. It’s a bit of a nasty hack, but it works for me. A slightly nicer way would be to use update-alternatives to use an alternate foxtrotgps launcher script, or saving the script as ‘foxtrot_launcher’, building a desktop entry for it, and setting up a QX favourite for it.

The script below could very easily be modified/generalised to run things other than foxtrotgps!

root@neo:~$ mv /usr/bin/foxtrotgps /usr/bin/foxtrotgps.bin
root@neo:~$ vi /usr/bin/foxtrotgps
              (insert content, below)
root@neo:~$ chmod a+x /usr/bin/foxtrotgps


#!/bin/sh
#Custom script for starting gpsd and foxtrotGPS in landscape mode:
#xinput stuff liberated from: http://lists.openmoko.org/nabble.html#nabble-td7561815

#ensure GPS is powered up:
om gps power 1
om gps keep-on-in-suspend 1

#service gpsd start
gpsd /dev/ttySAC1

#sleep 1
# we might have to wait some time before sending commands (I didn't)

xrandr -o right

#disable screen blanking:
xset s off -dpms 

#invert the x axis:
xinput set-int-prop "Touchscreen" "Evdev Axis Inversion" 8 1 0
#don't swap axes:
xinput set-int-prop "Touchscreen" "Evdev Axes Swap" 8 0
xinput set-int-prop "Touchscreen" "Evdev Axis Calibration" 32 98 911 918 107

#run the matchbox keyboard in daemon mode:
#with matchbox-keyboard-im this pops up automatically
matchbox-keyboard --daemon &

#run the real foxtrot:
foxtrotgps.bin --fullscreen

#foxtrot has closed, cleanup:

#kill keyboard:
killall matchbox-keyboard

xrandr -o normal

#stop gpsd:
#service gpsd stop
killall gpsd

Converting red-blue anaglyph to stereoscopic images

(EDIT: Updated to add black border between images – makes it easier to see the 3d, and makes the 3d image better defined)

I hate those red-blue anaglyphs. The red and blue fucks with my head – my brain refuses to interpret it properly, and the object does this weird “flashing” between red and blue.

Plus, I’m too cheap to buy (and too reckless to keep) a pair of those red-blue 3D glasses.

So, I installed Imagemagick and wrote myself a bash function:

stereo_convert () {
    in="$1"
    out="$2"
    if [ -z "$in" ] || [ -z "$out" ]; then
        echo -e "\nYou need to supply input and output files!\n"
        return 42
    fi
    convert \( "$in" -gravity east -background Black -splice 10x0 -gamma 1,0,0 -modulate 100,0 \) \( "$in" -gamma 0,1,1 -modulate 100,0 \) +append "$out"
    echo -e "\nConverted red-blue stereo image '$in' to side-by-side image '$out'.\n"
}

Here’s a demo image from NASA’s Pathfinder mission.


Anaglyph image of Pathfinder


Stereoscopic Pathfinder


  • This process removes all colour information, giving you greyscale output. Unfortunately there’s no way to restore full colour to anaglyphs, as the full colour information isn’t there. IMHO greyscale is better than red/blue.
  • The images may not be exactly perfect due to:
    • Red and cyan do not have the same intensity to the human eye – cyan seems brighter, so the right eye may appear slightly lighter. I’ve done my best to eliminate this, but I CBF reading into the science of colour wavelengths etc. right now.
    • Some images may be reversed – it appears that there’s no “hard” convention as to which eye should be red and which should be blue. But it appears that “most” are red==left.


“It’s very obvious to me that you don’t understand my question. Can I please talk to someone competent – someone who has some basic knowledge of networking?”

Vodafone Mobile Broadband Technical Support* consultant:
“No, you can’t: We’re an Internet provider, we don’t do networks.”

I was reminded later by a friend that the Internet is in fact a series of tubes. And here I was thinking it was a TCP/IP network. Silly me.

* I use the term “Technical Support” very, very loosely – I do not mean to imply that they provide support or are capable of being technical.

Stallman is Nucking Futs!

Stallman talks about Valve releasing a Steam client for Linux

Go, read. I’ll wait.

Back? Good.

Oh, Look! Valve got a mention by the mighty Stallman!

He asks what good and bad effects Valve’s release of a Steam client for Linux could have. Well, it might boost linux adoption, and that’s good. But…

Nonfree games (like other nonfree programs) are unethical because they deny freedom to their users. If you want freedom, one requisite for it is not having these games on your computer. That much is clear.

Wait, what?

Hang on a minute… If I want freedom, I’m not free to run these games? huh?

IMHO, having freedom means having the freedom to choose to run nonfree software if I want to. I’d rather play Half-Life or Portal than any open source game (It’s not that there are no great open source games, it’s just that Half-Life and Portal are better than all of them).

Stallman goes on to discourage Linux distros from offering the software to users – i.e. deb packages for Steam – and says:

If you want to promote freedom, please take care not to talk about the availability of these games on GNU/Linux as support for our cause.

Which is totally…fucking…insane.

I’ll be promoting freedom – freedom from Windows: “You don’t need windows anymore – Steam is available for Linux!”. I’ll be promoting the freedom to finally run good games on my chosen OS without any fucking around with wine. I’ll be (gasp) buying a bunch of games. Because a Steam client for Linux would be totally fucking awesome – I think it’d be the biggest event in gaming since Id released the source code for Doom. Just watch the Linux market share grow after the release.

Stallman says that Linux adoption isn’t the primary goal. That the primary goal is to bring freedom to users (But apparently not the freedom to run games they love). But I think that adoption of Linux at this point is more important than sticking to this (silly, BTW: nonfree != evil) principle – The more adoption we see, the more the community will grow, and the better the software will get. While this happens, more people will be exposed to Stallman’s (unrealistic) philosophy.

Stallman does concede that “My guess is that the direct good effect will be bigger than the direct harm”.

Direct harm? Really? I can finally delete that old windows XP partition, and you’re talking about Direct Harm? You think there’s anything at all bad about Valve’s monumental decision to embrace Linux?

You’re fucking crazy. Even distros that your foundation doesn’t endorse (Prepare to be amazed), like Ubuntu, go out of their way to tell the user that they’re about to install nonfree software. It’s always optional. It’s just been made easy because not everybody is as nuts as Stallman – some people, like me, actually want to use nonfree software. I should be free to do that, but apparently that’s not OK with the so-called “Free Software Foundation”. Apparently software should be free, but not people.

(Update: Late 2013: Valve refuse to give me a refund for the nonfunctional game Fez, in violation of Australian Consumer Protection Laws. They try to tell me that the laws don’t apply. I lodge a complaint with the ACCC and stop buying things on Steam. Maybe Stallman isn’t that nuts after all. No company can be trusted.)

(Update 2: 2014: The ACCC Sues Valve for violation of consumer protection laws. I love those guys.)

(Update 3: Jun 2015: Valve announces that they now allow refunds. This is because they’re really good, caring people, and has nothing at all to do with an Australian judge being about to hand out a $10,000/day fine)

(Update 4: comments disabled on this post due to spam bots following the link here from my squee when steam for linux was announced)

A few months of xfce

In mid-late 2011 my “fed-up-ness” with Gnome reached critical levels – it got to the point that Windows has been at for years: me sitting, staring at my screen in disbelief, screaming: “What, exactly, the fuck, are YOU DOING THAT TAKES SO GODDAMN LONG?!?!?!”, or the classic: “aah, good, a minute’s delay while you read something which you ALREADY HAVE CACHED!!!”.

I shit you not, there are tasks which gnome’s bloatware manages to slow down my dual-core >3000MHz machine with >8000MB of RAM to a point where performance is comparable to performing the same task on my 7MHz, 2MB RAM Amiga 600. This is not an exaggeration – for example, my Amiga will open up a directory containing thousands of files in a comparable time, and it’s reading from an MFM hard disk and has pretty much nothing cached. My Amiga will boot up in less time than it takes gnome to show me my desktop from the gdm login screen.

It’s not that Gnome changed or did anything differently, it’s just that I gradually became less and less tolerant of its godawful performance, and the point came where I finally snapped and said “Fuck Gnome”.

No, I haven’t tried Gnome 3. Given an option, I never will – the screenshots are enough to tell me it’s an ungodly abomination. I’m talking about Gnome 2.

The solution I went with was to switch to Xfce.

Since I’m addicted to my pretty wobbly windows, I’ve been running Xfce with compiz as the window manager on my powerful machine. I’m using xfwm on my less-powerful laptop.

I’ve been using it for a few months now, and thought I’d report on the experience.

So, without further ado, here’s an exhaustive list of features I miss from Gnome which are not in Xfce:


  • Coffee – Gnome had this wonderful feature which allowed me to drink much more coffee every day. You see, when I open nautilus in my home folder, there’s a nice ~40 second delay while nautilus does whatever the fuck it does that takes such a god damn long time to do. In this time I used to go make coffee – my workflow went: a) double-click directory b) make coffee c) have a cigarette d) browse through the folder I double-clicked (assuming, of course, that I didn’t open my “mp3s” or “audiobooks” directories – in that case, there’s a step e) – celebrate a few birthdays, grow older and wiser, earn a doctorate, solve the energy crisis, and read every novel ever written ). Unfortunately since thunar will open up my home directory in less than one second, my coffee intake has been greatly reduced.


This also marks my departure from using the standard Ubuntu distro. I loved it and I wish the Ubuntu team well – 10.04 has been a brilliant OS – it’s served me really well and I’ve been very happy with it in general, but I’m not ever using unity, and next time I need to install an OS I’ll be going with Xubuntu.

Fuck Gnome. Fuck Gnome right in the ear.

Now all I need is for someone to build a web browser that doesn’t completely suck ass. Maybe I’ll give Opera a proper try…


ASIMO

Is awesome. This is what Isaac Asimov was talking about… well, not quite, but he’s certainly the best thing we have so far: the form is all there, now we just need a mind.

I’d just assumed he was named for Isaac, but he’s not – ASIMO is an acronym for “Advanced Step in Innovative MObility”. Still, I think Isaac would have shed tears of joy at the sight of him dancing.

I want one.

iptables masquerading for freerunner

I find myself constantly going to the OpenMoko USB Networking page to find the commands to enable iptables masquerading – it’s the only part of the process I can’t remember.

It’s a bit obscure to find on the USB Networking page, so now it’s here, too:

sudo iptables -I INPUT 1 -s -j ACCEPT
sudo iptables -I OUTPUT 1 -s -j ACCEPT
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s
sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
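The network addresses after `-s` appear to have been lost somewhere along the way (probably eaten as HTML). For reference, here's a sketch of the complete commands, assuming the conventional Freerunner USB network of 192.168.0.0/24 – check what your usb0 interface is actually using before trusting these:

```shell
# assuming the usual usb0 network of 192.168.0.0/24 - adjust to suit
sudo iptables -I INPUT 1 -s 192.168.0.0/24 -j ACCEPT
sudo iptables -I OUTPUT 1 -s 192.168.0.0/24 -j ACCEPT
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.0.0/24
sudo bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
```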

creating a self-extracting bash script

You always see things like vmware and unreal tournament being installed via a self-extracting bash script – It would seem that this is the best way to provide an installer which will work on the widest selection of Linux distributions.

After some googlage, I came up with the following. Given a tarball and an installer script named ‘installer’, it will create a self-extracting bash script:

#!/bin/bash
# Self-extracting bash script creator
# By Dale Maggee
# Public Domain
#
# This script creates a self-extracting bash script
# containing a compressed payload.
# Optionally, it can also have the self-extractor run a
# script after extraction.

output_extract_script() {
    #echoes the extraction script which goes at the top of our self-extractor
    # $target - suggested destination directory (default: somewhere in /tmp)
    # $installer - name of installer script to run after extract
    # (if specified, $target is ignored and /tmp is used)

    #NOTE: odd things in this function due to heredoc:
    # - no indenting of heredoc contents
    # - things like $ and backticks need to be escaped to get into the destination script

cat <<EndOfHeader
#!/bin/bash
echo "Self-extracting bash script. By Dale Maggee."
CDIR=\`pwd\`
target=\`mktemp -d /tmp/XXXXXXXXX\`
echo -n "Extracting to \$target..."
EndOfHeader

    #here we put our conditional stuff for the extractor script.
    #note: try to keep it minimal (use vars) so as to make it nice and clean.
    if [ "$installer" != "" ]; then
        #installer specified
        echo 'INSTALLER="'$installer'"'
    fi
    if [ "$target" != "" ]; then
        echo 'echo "(temp dir: '$target')"'
    fi

cat <<EndOfFooter

#do the extraction...
ARCHIVE=\`awk '/^---BEGIN TGZ DATA---/ {print NR + 1; exit 0; }' \$0\`

tail -n+\$ARCHIVE \$0 | tar xz -C \$target

echo -en ", Done.\nRunning Installer..."

cd \$target
[ -n "\$INSTALLER" ] && ./\$INSTALLER

echo -en ", Done.\nRemoving temp dir..."
cd \$CDIR
rm -rf \$target
echo -e ", Done!\n\nAll Done!\n"

exit 0
---BEGIN TGZ DATA---
EndOfFooter
}

make_self_extractor() {
    # $1 - source tarball, $2 - output script
    # $3 - installer script to run after extraction (optional)
    # $4 - default target directory (optional)
    src="$1"
    dest="$2"
    installer="$3"
    target="$4"

    echo "Building Self Extractor: $2 from $1."

    if [ -f "$3" ]; then
        echo " - Installer script: $installer"
    fi
    if [ "$4" != "" ]; then
        echo " - Default target is: $target"
    fi

    #check input...
    if [ ! -f "$src" ]; then
        echo "source: '$src' does not exist!"
        exit 1
    fi
    if [ -f "$dest" ]; then
        echo "'$dest' will be overwritten!"
    fi

    #ext=`echo $src|awk -F . '{print $NF}'`

    #create the extraction script...
    output_extract_script > "$dest"
    cat "$src" >> "$dest"

    chmod a+x "$dest"

    echo "Done! Self-extracting script is: '$dest'"
}

show_usage() {
    echo "Usage:"
    echo -e "\t$0 src dest installer"
    echo -en "\n\n"
}

# Main

if [ -z "$1" ] || [ -z "$2" ]; then
    show_usage
    exit 1
fi
make_self_extractor "$1" "$2" "$3" "$4"
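The trick at the heart of the extractor – awk finds the marker line, tail pipes everything after it into tar – can be demoed standalone. This sketch builds a tiny self-extractor in a temp dir; the payload and filenames are made up for the demo:

```shell
#!/bin/sh
# Standalone demo of the marker/tail/tar self-extraction trick.
set -e
work=$(mktemp -d)
cd "$work"

# a payload tarball containing one file
mkdir payload
echo "hello from the payload" > payload/hello.txt
tar cz payload > payload.tgz

# extractor stub, then the marker line, then the raw tgz bytes
cat > selfextract.sh <<'EOF'
#!/bin/sh
ARCHIVE=`awk '/^---BEGIN TGZ DATA---/ {print NR + 1; exit 0; }' $0`
tail -n+$ARCHIVE $0 | tar xz -C ${1:-.}
exit 0
---BEGIN TGZ DATA---
EOF
cat payload.tgz >> selfextract.sh
chmod +x selfextract.sh

# run it against an empty directory and show the extracted file
mkdir out
./selfextract.sh out
cat out/payload/hello.txt
```

Note that the awk pattern is anchored to the start of the line, so it matches the marker itself but not the awk command line that mentions it – and `tail -n+N` passes the binary tarball data through byte-for-byte.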

You have FUD on your shoes

(Originally posted on myspace on 16-Sep-2008)

Here’s an example of unbiased and neutral journalism:


I find the fact that it’s on BBC somewhat strange, as I usually hold BBC to be one of the better commercial news sources.

Granted, their IT section is written by a bunch of clueless morons, but this just takes the cake.

Let’s analyse this article, shall we?

Firstly there’s all the mention of Linus Torvalds, and how this guy is cursing his name. No mention of the fact that there’s actually thousands/millions of people developing Linux. The closest he comes is “made the heart of his operating system absolutely free and open source”, which is pretty close to the mark – Linus built the Linux Kernel, and it got plugged into the GNU operating system. Linus is responsible for only a small part of the entire OS, although it’s a major part.

Xandros worked right out of the box. Like most distros it includes Open Office, an open source copycat of Microsoft Office. Word processing, spreadsheets and presentations are no problem.

Xandros connected to the net through my home wireless network at the first time of asking. And surfing was fast and easy.

So, everything worked. Cool.

There were a couple of things about Xandros which I didn’t like.

The music management program – its “iTunes”, if you like – let me listen to music and podcasts on my new laptop but wouldn’t sync anything I loaded on to my iPod. Big problem for a music and podcast junkie.

Plus the desktop – the way the screen looks, the icons it uses to open programs – looks like it’s been designed by a four-year-old with a fat crayon. It may be down to personal taste, but I just don’t like the way Xandros looks.

Well, before I’d think about switching away from the preinstalled distro where everything works, by your own admission, I’d consider doing a couple of quick google searches. Googling “Xandros Ipod” and “Xandros theme” immediately shows me 3 or 4 interesting links which would seem to warrant reading. And doing two google searches takes much less time than a) procuring and b) installing a new distro.

But linux is all about freedom, so you’re free to install a new distro, I suppose…

I’d also like to point out that if you don’t like the way windows looks, your options are to go and buy a Mac, or install Linux. I guess you could also go out and BUY a different version of windows, perhaps – i.e. upgrade from Home to Professional – although this probably won’t help with the look much.

Except now the internet wireless connection doesn’t work and the music management software still won’t let me sync with my iPod. Mmmmmm……

How is this any different to any other OS? Windows XP doesn’t recognise my wireless card or my UltraATA IDE controller – I can’t install any version of windows except for a prepackaged XP SP2 on my machine, because without detecting the UltraATA controller, it doesn’t know about my hard drive. I’d call this slightly more serious than having no wireless connection or Ipod connectivity. Granted, in today’s internet-centric world no network connectivity is an issue, but did it not occur to you to try plugging a cable into your ethernet port as an interim measure?

Like most journalists, I’ve the attention span and patience of a gnat. The air turns blue and I inform my wife loudly that Linus Torvalds has much to answer for (I paraphrase slightly).

A gnat would have a longer attention span – you didn’t even try plugging in an ethernet cable… and it’s obviously Linus’ fault, and has nothing to do with Canonical (who make and distribute Ubuntu)…

But I’m completely stumped by the instructions posted on these sites. The level of assumed knowledge is way above my head. I follow a couple of suggestions, try to connect to my router using an ethernet cable, download code that promises to set things right. And fail.

It’s probably worth mentioning one other important point about Linux here. It’s a text-based operating system, which means that a fair few of the things you may want to tell your computer to do – installing certain new software, for example – requires you to open up a “terminal window” and actually type text into the little window.

As someone used solely to double-clicking on pretty pictures to do most anything on a computer this is pretty hairy stuff.

How is this ‘hairy’? Somebody tells you to go into a terminal and type ‘X’, so you go into a terminal and type ‘X’. How is this difficult??? Are you telling me that you’re unable to copy something that you see written down? You’re unaware of copy and paste? As a Journalist, I suppose, expecting you to type is just a bit too much…

True to form when I’m too stupid to figure out how to do something in five minutes, I phone an expert.

Geek Squad, a tech support service partnered with the Carphone Warehouse, is more used to dealing with problems with broadband and e-mail but later that night, Agent Jamie Pedder walks me through it over the phone.

Download a couple of bits of code from one of the Linux help sites on to a memory stick. Whack the memory stick into the offending laptop.

Bang a couple of lines of code into the terminal window to tell the machine to install what we’ve downloaded. Bingo, we’re cooking on gas.

Ubuntu’s running my wireless network and I’m back on-line. Easy when you know how.

So it works. Cool. So you got it all sorted out reasonably quickly. How is this different to Windows, where you need to install drivers? Ever tried downloading windows drivers for your network card? It’s not easy, unless you have a spare working PC and a USB stick…

The fly in the ointment remains the music management software. I still can’t sync an iPod and Agent Pedder reckons that I probably won’t be able to – for now at least.

While Linux is founded on the philosophy of free and easy access to its code for anyone who’s interested, Apple is not. That means no iTunes for Linux, and nor is Apple likely to release such a version.

This, again, is obviously the fault of Linus Torvalds, and has nothing to do with Apple. It’s good to see you’re blaming the right people for your problem at least. Let’s just sum this up, for emphasis:

  • Apple makes Itunes for Windows and it enables you to use an Ipod on Windows
  • Apple makes Itunes for Mac OS and it enables you to use an Ipod on Mac OS
  • Apple doesn’t make Itunes for linux, and it takes a google search to get an Ipod working on linux

Yep, this is obviously because Linux sucks – I mean, expecting Apple to provide Ipod management tools for Linux is just asking for too much, isn’t it?

The iPod out of action is a major irritation, but I’ve not given up hope. There’s software out there – free for Linux users as always – that promises to do what I want. I just haven’t got round to downloading and playing with it yet.

So you’re reporting on how irritating it is, and how you’ve had a ‘torrid’ time with Linux, but you haven’t even bothered trying to install the software which will allow you to do what you want? Here’s a parable for you:

  • I Install windows
  • I post in a forum saying that windows sucks because I can’t play Half-Life 2.
  • People ask me what happens when I double-click on the Half-life 2 Icon
  • I say “There is no Half-Life 2 Icon”
  • People ask me if I’m sure I’ve got half-life 2 Installed correctly
  • I say “You need to INSTALL IT?!? OMFG WINDOWS IS TEH SUX!!!”

This is obviously an entirely fair observation on my part.

For the time being, it’s back to the trusty CD player. All this talk of hippy ideals has put me right in the mood for a bit of Sgt Pepper’s.

So your Ipod doesn’t work. Typing “ubuntu ipod” in google must be too hard, I guess – when I do that I see four links which would probably help, without even scrolling down.

So, to summarise: You bought a PC without windows, and it worked out of the box. You decided to install a different OS, and it broke. This is the fault of the OS, and has nothing to do with you being too lazy to do a couple of google searches to try to solve the things you don’t like about the working distro.

You were able to solve the problems, although you needed to call somebody to help you to do this, because you’re not able to follow instructions. I’m presuming this didn’t involve the 40 minute wait times or hideously excessive charges involved with Microsoft’s support line?

You bought an Ipod, which itself is an idiotic move – there are hundreds of MP3 players out there which present themselves as USB drives and are therefore supported natively by every operating system on earth without the need for any software at all, but you chose the one which requires special software and has ghastly DRM built into it, and this is Linus Torvalds’ fault?

I see.

And believing in freedom makes you a Hippie?

I see.

I think you’ve stepped in some FUD, and it’s stuck to your shoe, and you’re now dragging it across my carpet. Take your shoes off, or get out of my living room.

HP Marketing Techniques

(Originally posted on myspace on 20-Feb-2008)

Update: The only thing worse than a phone running windows? a Neo Freerunner. One day I might post a separate rant about that.

I’ve been using a HP iPAQ 6515 as my phone / mp3 player / GPS navigator / life support system for nearly 2 years now.

It’s a great little unit, in hardware terms – It’s got an SD card slot and a MiniSD slot, meaning you can give it a reasonable amount of storage space for playing MP3s. It’s a Quadband GSM mobile phone, so when I got it my old nokia 6210 got put in the cupboard. It’s got a builtin GPS receiver, and you can run TomTom on it. It’s (barely) powerful enough to play MP3s and Run TomTom at the same time, which is nice, since I haven’t gotten around to putting a reasonable stereo in my car yet – I haven’t needed to. It’s a PDA, not a SmartPhone, meaning you can run a whole heap of Windows CE applications on it – My favorites are Voice Command, which is brilliant (when you’re in a quiet room, and you don’t have any contacts which sound even remotely similar to each other), and SCUMMVM, a cross-platform SCUMM engine, allowing it to run some of the most classic games ever: I’ve got Maniac Mansion, Indiana Jones and the Fate of Atlantis, and Monkey Island loaded onto mine. Best of all, it has a QWERTY keyboard, which is brilliant for txting – I hate the onscreen keyboards on the majority of PDA-type devices, and I don’t think I could live with a device which can’t at least have a QWERTY keyboard attached to it.

It doesn’t have too many drawbacks: It lacks builtin Wifi, so I can’t run skype on it – the next model up (the 6965) has Wifi, but I couldn’t really justify spending $800 just for Wifi. You can buy an SD Wifi card for it reasonably cheaply, but the SD slots are in the side of the device, so a Wifi card would stick out the side and present a damage risk when you put it in your pocket (SD wifi cards necessarily poke out of their slot, as they need space for an antenna, although there are compact ones available which aren’t as bad). It also lacks support for certain bluetooth services, namely high quality Audio – you can’t use stereo bluetooth headphones with it. This is kinda annoying, considering that the bluetooth software which comes with it says it supports high quality Audio. But the wired headset which comes with it is stereo and provides pretty good sound, if not eardrum-burstingly loud. Also the camera on it is not even worth using at 1.3 megapixels, and the HP Photosmart camera software is horrible. It suffices in an emergency, though. Unless the emergency happens at night – the “flash” on the thing is laughable.

Another disadvantage is the godawful Operating system it runs – Windows Mobile 2003SE. It’s slow and horrible – you have to wait a couple of seconds for windows to do its thing whenever you take the thing out of standby. And you have to reset it far more often than you’d ever turn a mobile phone off and on. I think that perhaps the slowness is merely related to it not having quite enough memory for my kind of usage – I run a lot of programs on it above and beyond the standard Phone and Organiser functionality it provides. Maybe if I was a pleb user and didn’t load software onto it, or if it had more RAM, this wouldn’t be an issue.

I’ve looked into running a real OS on it, but the status of Linux support for this particular device is not great – I’d probably have to live without access to GPS, the phone functionality may or may not work, the Camera wouldn’t work (pfft), and it could very well take a lot of hacking and mucking about to get linux onto the thing – there doesn’t seem to be any HOWTO for linux on this particular model of Ipaq, and I don’t really have the time and energy to figure it all out, especially considering that this is my phone we’re talking about – a day’s downtime would be unacceptable – I’d need another one to be able to play with to figure out what I’m doing.

but it’s certainly been a good little unit, I’ve thought… so far.

About a month ago, it stopped making noise except through the headset. Obviously what’s happened is that the switch in the headphone jack has become stuck in the ‘headset inserted’ position, which cancels all noise (and microphone input), except for the ringtone, from coming out of the unit’s speaker. I can still use it just fine with the headset plugged in, and I can still use it fine with the headset unplugged, just as long as I don’t want to do anything that requires audio input or output. Like talking on the phone. So at the moment when my phone rings, I have to scramble to find and untangle the headset, plug it in, and press the answer button.

So I contacted HP about this, wanting to know how to go about getting it serviced. I specifically made mention of the fact that this was my primary phone / communication method we’re talking about, so it’s pretty urgent.

I could go through the ensuing catastrophe of customer relations blow-by-blow, but then this would be 800,000 words long, and I would probably end up smashing something. And I’m using a work laptop at the moment, so that’s probably not a great idea. Suffice to say that they take up to a week to even reply to your emails, which you’ve marked as urgent, and when they do it’s so unhelpful that they might as well have just kept playing Unreal Tournament, or whatever it is they do most of the time up there, rather than even replying. I just received an email yesterday, over a month since our last correspondence, which contained the exact same text as the previous email they sent me. Which I’d already replied to, over a month ago.

HP’s “support” team are without doubt the single most apathetic, indifferent, robotic, unhelpful bunch of bastards I’ve ever dealt with. And I’ve BEEN an indifferent, unhelpful tech support bastard before. But I at least used to try to project the appearance of caring about the customer’s problems – after all, it’s the company’s reputation at stake here. But HP’s “support” team doesn’t even seem to care about that.

It seems that HP are trying to sell Nokia products – a brilliant, novel, and innovative marketing strategy if I’ve ever seen one – There’s no amount of money Nokia could spend on advertising (short of having some cute chick giving blowjobs with every Nokia purchased) which would come close to what HP have done in terms of getting me to buy a Nokia handset.

After a couple of weeks of dealing with HP’s completely indifferent “support” team, I decided I’d just find myself an alternative device. It’ll cost me maybe $1200 extra to do this, but at least I won’t have to deal with these pricks. I wouldn’t know what Nokia’s tech support people are like, because I’ve never had a problem with a Nokia product, ever. And I’ve used a few Nokia devices in my time.

It’s a good thing HP don’t make defibrillators or heart/lung machines.

So, congratulations HP – you’ve managed to ensure that I never buy another HP product as long as I live. You’ve managed to ensure that When I’m reviewing devices at work and making purchasing recommendations (which does happen), I don’t recommend the HP device, regardless of its technical specifications. You’ve managed to make the process of finding myself a new device less painful – anything with a HP logo on it automatically gets excluded from my even looking at it, regardless of its capabilities. And most of all, you’ve managed to increase the yearly sales figures of one of your competitors. Whether that’s RIM, Palm, Nokia, Motorola, or somebody else I haven’t yet decided.

Congratulations, HP, and on behalf of Nokia, Motorola, RIM, and Palm: Thanks, HP.