Replacing the hard disk in a PowerBook G3 “Pismo”, and other fun with Mac OS 9

I posted nearly five years ago about my shiny new Power Mac G4 and how much I was enjoying the nostalgia. Unfortunately the power supply in it has since started to die, and the machine will randomly turn itself off after an increasingly short period of time. Additionally, I’d forgotten just how noisy those machines were, and how hot they ran! I’ve bought a replacement power supply for it, but fitting it involves rearranging the output pins from a standard ATX PSU to what the G4 needs, and that’s daunting enough that I still haven’t tackled it. I decided to go back to the trusty old PowerBook G3 instead; I’ve since gotten a new desk and computer setup that has much more room on it, and having a much more compact machine has been very helpful.

One thing I was a bit concerned about was the longevity of the hard disk in it, so I started investigating the possibility of putting a small SSD into it. Thankfully such a thing is eminently possible by way of a 128GB mSATA SSD and an mSATA to IDE adapter! I followed the iFixit guide — though steps 6 through to 11 were entirely unnecessary — and now have a shiny new and nearly entirely silent PowerBook G3 (though it’s disconcerting just how quiet it is for such an old machine… I realised I’m subconsciously used to hearing the clicking of the hard disk).

A photo of a black PowerBook G3 sitting on a desk, booted to the Mac OS 9 desktop. The machine is big and chunky, but also has subtle curves to it, and the trackpad is HILARIOUSLY tiny compared to modern Macs.

I even had the original install discs from the year 2000 when mum first bought this machine, and they worked perfectly (though a few years ago I’d had to replace the original DVD drive with a slot-loading one because it had died and stopped reading discs entirely).

Once I had it up and running, the next sticking point was actually getting files onto it. As I mentioned in my previous post, Macintosh Repository has a whole ton of old software, and if you load it up with a web browser from within Mac OS 9 it’ll load without HTTPS, but even so it’s pretty slow. Sometimes it’s nicer just to do all the searching and downloading from a fast modern machine and then transfer the resulting files over.

Mac OS 9 uses AFP for sharing files, and the AFP server that used to be built into Mac OS X was removed a few versions ago. Fortunately there’s an open-source implementation called Netatalk, and some kindly soul packed it all up into a Docker container.
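To give a sense of the shape of it, here’s a minimal docker-compose.yml sketch along the lines of what I ended up with (the image name and container-side path here are placeholders rather than the exact ones the container uses; the Mac OS 9 Toolbox repository mentioned below has the real configuration):

version: "3"
services:
  netatalk:
    image: netatalk/netatalk   # placeholder image name
    ports:
      - "548:548"              # AFP runs over TCP port 548
    volumes:
      # Directory on the host to expose to the Mac OS 9 machine
      - ./shared:/mnt/afpshare
    restart: unless-stopped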

I also stumbled across a project called Webone a while ago, which acts essentially as an SSL-stripping proxy that you run on a modern machine and point your old machine’s web browser to for its proxy setting. Old browsers are utterly unable to do anything with the modern web thanks to newer versions of encryption in HTTPS, but this lets you at least somewhat manage to view websites, even if they often don’t actually render properly.

Both Netatalk and Webone required a bit of configuration, and rather than setting them up and then forgetting how I did so, I’ve made a GitHub repository called Mac OS 9 Toolbox with docker-compose.yml files and setup instructions for both projects, plus a README so future-me knows what I’ve done and why. 😛 In particular, getting write access from the Mac OS 9 machine to the one running Netatalk was tricky.

I included a couple of other things in there too, and will continue to expand on it as I go. One is how to convert the PICT-format screenshots from Mac OS 9 into PNG, since basically nothing will read PICTs anymore. It also includes a Mastodon client called Macstodon:

A screenshot of a multi-pane Mac OS 9 application showing the Mastodon Home and Local Timelines and Notifications at the top, and the details of a selected toot at the bottom.

And also the game Escape Velocity: Override (which I’m very excited to note is getting a modern remaster from the main guy who worked on the original):

A screenshot of a top-down 2D space trading/combat game with quite basic graphics. A planet is in the middle of the screen along with several starships of various sizes.

I mentioned both the Marathon and Myth games in my previous post, but those actually run quite happily on modern hardware since Bungie was nice enough to open-source them many years ago. Marathon lives on with Aleph One, and Myth via Project Magma.

Upping my monitoring game with MQTT

Previously on Monitor All The Things:

(The display that used to show the air quality has been changed to show a clock instead, and the air quality monitoring is done via another ESP32 now. I’m also sensing a definite theme with my blog post titles here).

I hadn’t blogged about it, but I also have all of this (indoor and outdoor temperature and humidity, power usage and generation plus battery charge, and outdoor air quality) going into InfluxDB for visualising in Grafana. The dashboard I made looks like this:

Pretty spiffy, eh?

It had very much evolved organically as I went, though, with lots of different things on different hosts making HTTP calls all over the place. That included my own slightly dodgy system for getting the ESP32s that are connected to the temperature sensors to save their readings onto the local filesystem if my website couldn’t be contacted (for example if we had an internet outage), as well as two separate things hitting the Powerwall’s local API every five seconds to pull the power data (one for the little HyperPixel display at the front of the house, and one for the visualisation stuff above).

I figured there had to be a cleaner and more elegant way of doing this. At work I deal with Amazon’s Simple Queue Service (SQS) quite a lot and use it in one of the services I built, so I wondered if there was a way to accomplish something similar myself: have everything drop messages onto a queue, and have the things that need to read them pick those messages up from the queue.

Turns out there is, and it’s called MQTT!

It’s an absurdly simple and lightweight protocol: you have a central server called a “broker”, a publisher that sends messages to a given topic on the broker, and as many subscribers as you want that also connect to the broker and each listen on a topic or topics, and the broker ensures those messages get from the publisher to each subscriber. There are also quality-of-service settings where you can have it guarantee that a message is received by the subscriber at least once, and it’ll queue up messages for subscribers that drop offline and send them all once the subscriber comes back.
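The whole publish/subscribe flow is only a few lines of code. Here’s a minimal sketch using Python’s paho-mqtt library (the 1.x API; the broker address and topic name are just examples, not my actual setup):

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

# Publisher side: drop a single message onto a topic on the broker
publish.single("home/temperature/outdoor", '{"celsius": 21.5}',
               qos=1, hostname="localhost")  # QoS 1: at-least-once delivery

# Subscriber side: connect to the same broker and listen on that topic
def on_message(client, userdata, message):
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # 1883 is the default MQTT port
client.subscribe("home/temperature/outdoor", qos=1)
client.loop_forever()  # process incoming messages until interrupted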

Interestingly, you can also have a broker on one machine connect to a broker on another machine, and have it send messages on a particular topic to the remote broker, which seemed like it’d be a good way to get weather updates to my website.
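In Mosquitto, that bridging is just a few lines in the broker’s configuration file. A hedged sketch (the connection name, hostname, and topic are examples rather than my actual config):

# On the local broker: forward everything under weather/ to the remote broker
connection mechanise-bridge
address mechanise.example.com:1883
topic weather/# out 1

The last line says to send (“out”) all messages matching weather/# to the remote side at QoS 1.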

There’s a guy who wrote an MQTT client library in MicroPython for the ESP32, mqtt_as, so that would take care of the ESP32 side of things. For the broker I’d use the popular open-source Mosquitto, and there’s a Javascript MQTT client called MQTT.js that would cover my website and all the other TypeScript parts of the setup.

I did a bunch of brainstorming in draw.io and came up with this elaborate diagram:

(Mechanise is the hostname of my Linode, which my website runs on, and PVOutput is a website for sending your solar power generation data to; a bunch of people at work do the same, and we’re all in the same “team” so we can see how much we’ve all generated together).

After that, it just involved a whole bunch of coding (as well as ordering two spare ESP32s so I could test that my code worked without having to pull apart my existing setup), which I’ve uploaded to GitHub:

Despite having written up a careful plan and, I thought, gotten all my ducks in a row to make a quick switchover this morning, there were a number of things I ran into that caused it to take a few hours to get going (things like forgetting to configure PostgreSQL on the Raspberry Pi 4B to allow things running in Docker to access it, needing to add an extra published port on the Linode so my website could connect to Mosquitto, and most annoyingly of all, a recent VSCode update breaking Pymakr, requiring me to revert to old versions of both pieces of software). I got everything up and running in the end, and now if I add any new monitoring things, it’ll be quite simple to publish the data to Mosquitto and slurp it up into InfluxDB!

More fun with temperature sensors: ESP32 microcontrollers and MicroPython

I’ve blogged previously about our temperature/humidity sensor setup and how they’re attached to my Raspberry Pis, and they’ve been absolutely rock-solid in the three-and-a-half years since then. A few months ago a colleague at work mentioned doing some stuff with an ESP32 microcontroller, and just recently I decided to actually look up what that was and what one can do with it, because it sounded like it might be a fun new project to play with!

From Wikipedia: ESP32 is a series of low-cost, low-power system on a chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth.

So it’s essentially a tiny single-purpose computer that you write code for and then flash that code onto the board, rather than the Raspberry Pi way of having an entire Linux OS running on it. It runs at a blazing fast 240MHz and has 320KB of RAM. The biggest draw for me was that it has built-in wifi, so I could do networked stuff easily. There’s a ton of different boards and options and it was all a bit overwhelming, but I ended up getting two of Adafruit’s HUZZAH32s, which come with the headers for attaching our existing temperature sensors already soldered on. Additionally, they have 520KB of RAM and 4MB of storage.

Next up, I needed to find out how to actually program the thing. Ordinarily you’d write in C like with an Arduino, and I wasn’t too keen on that, but it turns out there’s a distribution of Python called MicroPython that’s written explicitly for embedded microcontrollers like the ESP32. I’ve never really done much with Python before, because the utter tyre fire that is its dependency/environment management always put me off (this xkcd comic is extremely relevant). However, with MicroPython on the ESP32 I wouldn’t have to deal with any of that; I’d just write the Python and upload it to the board! Additionally, it turns out MicroPython has built-in support for the DHT22 temperature/humidity sensor that I’ve already been using with the Raspberry Pis. Score!
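The built-in DHT22 support really is as simple as it sounds. A minimal MicroPython sketch (the pin number is an example; it depends on which GPIO the sensor’s data line is wired to):

import dht
import machine

sensor = dht.DHT22(machine.Pin(21))  # data line on GPIO 21, as an example

sensor.measure()             # trigger a reading
print(sensor.temperature())  # degrees Celsius
print(sensor.humidity())     # percent relative humidity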

There was a lot of searching over many different websites trying to find how to get all this going, so I’m including it all here in the hopes that maybe it’ll help somebody else in future.

Installing MicroPython

At least on macOS, first you need to install the USB to UART driver or your ESP32 won’t even be recognised. Grab it from Silicon Labs’ website and get it installed.

Once that’s done, follow the Getting Started page on the MicroPython website to flash the ESP32 with MicroPython, substituting /dev/ttyUSB0 in the commands for /dev/tty.SLAB_USBtoUART.
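For the record, the flashing process from that page boils down to erasing the flash and then writing the MicroPython firmware image, so on macOS the commands end up looking roughly like this (the firmware filename is a placeholder for whichever build you downloaded):

pip install esptool
esptool.py --chip esp32 --port /dev/tty.SLAB_USBtoUART erase_flash
esptool.py --chip esp32 --port /dev/tty.SLAB_USBtoUART write_flash -z 0x1000 esp32-firmware.bin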

Using MicroPython

With MicroPython, there are two files that are always executed when the board starts up: boot.py, which is run once at boot time and is generally where you’d put your connect-to-the-wifi-network code, and main.py, which is run after boot.py and will generally be the entry point to your code (a minimal boot.py is sketched below). To get these files onto the board, you can use a command-line tool called ampy, but it’s a bit clunky and also not supported anymore.
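A typical boot.py is just the connect-to-the-wifi dance, something like this (the SSID and password are obviously placeholders):

# boot.py -- runs once at startup, before main.py
import network

def connect_wifi(ssid, password):
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    if not wlan.isconnected():
        wlan.connect(ssid, password)
        while not wlan.isconnected():
            pass  # wait until the connection comes up
    print("network config:", wlan.ifconfig())

connect_wifi("my-ssid", "my-password")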

However, there is a better way!

Setting up the development environment

There are two additional tools that make writing your Python code in Visual Studio Code and uploading to the ESP32 an absolute breeze.

The first one is micropy-cli, which is a command-line tool to generate the skeleton of a VSCode project and set it up for full autocompletion and Intellisense of your MicroPython code. Make sure you add the ESP32 stubs before creating a new micropy project, as in the sketch below.
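From memory the workflow looks something like this (the exact stub package name may differ, so treat it as illustrative):

pip install micropy-cli
micropy stubs search esp32            # find the right stub package for your board
micropy stubs add esp32-micropython   # example name; use whatever the search returns
micropy init my-esp32-project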

The second is a VSCode extension called Pymakr. It gives you a terminal to connect directly to the board and run commands and read output, and also gives you a one-click button to upload your fresh code, and it’s smart enough not to re-upload files that haven’t changed.

There were a couple of issues I ran into when trying to get Pymakr to recognise the ESP32 though. To fix them, bring up the VSCode command palette with Cmd-Shift-P and find “Pymakr > Global Settings”. Update the address field from the default IP address to /dev/tty.SLAB_USBtoUART, and edit the autoconnect_comport_manufacturers array to add Silicon Labs.

Replacing the Raspberry Pis with ESP32s

After I had all of that set up and working, it was time to start coding! As I mentioned earlier, I’ve not really done any Python before, so it was quite the learning experience. It was a good few weeks of coding and learning and iterating, but in the end I fully replicated my Pi Sensor Reader setup with the ESP32s, with some additional bits besides.

One of the things my existing Pi Sensor Reader setup did was run a local webserver so I could periodically hit the Pi and display the data elsewhere. Under Node.js this is easily accomplished with Express, but with MicroPython the options were more limited. There are a number of little web frameworks that people have written for it, but they all seemed like overkill.

I decided to just use raw sockets to write my own. One thing I didn’t appreciate until this point was how much Node.js’s everything-is-asynchronous-and-non-blocking model makes this kind of thing easy: you don’t have to worry about a long-running function causing everything else to grind to a halt while it waits for that function to finish. Python has a thing called asyncio, but I was struggling to get my head around how to use it for the webserver part of things until I stumbled across this extremely helpful repository where someone had shown an example of how to do exactly that! (I even ended up making a pull request to fix an issue I discovered with it, which I’m pretty stoked with.)
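The gist of the approach, as a rough sketch (modern MicroPython’s uasyncio mirrors CPython’s asyncio stream API, so this is approximately what the webserver boils down to, not the repository’s exact code):

import uasyncio as asyncio  # plain `import asyncio` on CPython

async def handle_request(reader, writer):
    # Read and discard the request line and headers
    while True:
        line = await reader.readline()
        if line in (b"\r\n", b""):
            break
    writer.write(b"HTTP/1.0 200 OK\r\nContent-Type: application/json\r\n\r\n")
    writer.write(b'{"status": "ok"}')
    await writer.drain()
    writer.close()

async def main():
    await asyncio.start_server(handle_request, "0.0.0.0", 80)
    while True:
        # The server runs in the background; other tasks (like reading
        # the sensor) can be scheduled here without blocking it.
        await asyncio.sleep(60)

asyncio.run(main())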

One of the things I most wanted was some sort of log file accessible in case of errors. With the Raspberry Pi I can just SSH in and check the Docker logs, but once the ESP32s are plugged into power and running, there’s no easy equivalent. I ended up writing the webserver with several endpoints to read the log, clear it, reset the board, and view and clear the queue of failed updates.

The whole thing has been uploaded to GitHub with a proper README of how it works. The ESP32s have been running connected to the actual indoor and outdoor temperature sensors and posting data to my website for just under a week now, and they’ve been absolutely flawless!

More Raspberry Pi-powered monitoring: air quality!

Here in New South Wales, last year’s bushfires over late spring and into summer were astoundingly bad, and there were days where Sydney had the poorest air quality on the entire planet. Everyone was watching the PM2.5 values, and there were days where Kristina couldn’t go outside because of her asthma. I figured it’d be neat to set up a Raspberry Pi-powered air quality sensor and had ordered the sensor back in February but didn’t get around to putting it into service until now.

This is the bit that lives inside so we can easily see the latest reading:

A small 4" LCD display showing the air quality values for PM1.0, PM2.5, and PM10.

It uses the same sort of setup as my Pimoroni display, and I updated my pi-home-dashboard to add a second page to display the values from the air quality reader.

The sensor itself is a Plantower PMS5003 and is attached to the same Raspberry Pi that the outdoor temperature sensor is on. Adafruit’s instructions on getting it set up were pretty straightforward, and they also give some sample code for how to read it, but it’s in Python, which I intensely dislike (I don’t really have any strong feelings about the language itself one way or the other, but I’ve never had a good experience with the damn package management around it, so I do my damnedest to avoid it). I was able to write the same logic in TypeScript instead — though I had to consult the clever people on Ars Technica, because parsing the output from the sensor involves things like bit-shifting, which is quite low-level and something I’m utterly unfamiliar with — and chucked the whole thing up on GitHub. It takes ten readings and averages them, and has an HTTP endpoint for pulling the latest values.
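The bit-shifting in question is just reassembling big-endian 16-bit values from pairs of bytes in the sensor’s data frame. A rough sketch of the idea in Python (the byte offsets are from my reading of the PMS5003 datasheet, so treat them as illustrative rather than gospel):

def parse_pms5003_frame(frame):
    # Frames start with the two magic bytes 0x42 0x4D
    assert frame[0] == 0x42 and frame[1] == 0x4D

    def read_u16(offset):
        # Big-endian: high byte shifted left 8 bits, OR'd with the low byte
        return (frame[offset] << 8) | frame[offset + 1]

    return {
        "pm1.0": read_u16(4),
        "pm2.5": read_u16(6),
        "pm10": read_u16(8),
    }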

I’ve set the front-end up so the colour of the numbers will change to orange and red depending on how bad the air quality is, but hopefully it’s a long while before we actually see that in action!

Powering our house with a Tesla Powerwall 2 battery

I posted back in March about our shiny new solar panels and efforts to reduce our power usage, and as of two weeks ago our net electricity grid power usage is next to zero thanks to a fancy new Tesla Powerwall 2 battery!

A photo of a white Tesla Powerwall 2 battery and Backup Gateway mounted against a red brick wall inside our garage.
A side-on view of a white Tesla Powerwall 2 battery mounted against a red brick wall.

We originally weren’t planning on getting a battery back when we got our solar panels — and to be honest they still don’t make financial sense in terms of a return on investment — but we had nine months of power usage data, and I could see that for the most part the amount of energy the Powerwall can store would be enough for us to avoid having to draw nearly anything whatsoever from the grid*.

* Technically this isn’t strictly true, keep reading to see why.

My thinking was: we’re producing stonking amounts of solar power and feeding it back to the grid at 7c/kWh, but have to buy power from the grid after the sun goes down at 21c/kWh. Every kWh we store and use ourselves instead is effectively worth the 14c difference, so why not store as much of it as possible for use during the night?

The installation was done by the same people who did the solar panels, Penrith Solar Centre, and as before, I cannot recommend them highly enough. Everything was done amazingly neatly and tidily, it all works a treat, and they fully cleaned up after themselves when they were done.

We have 3-phase power, and the solar panels are connected across all three phases (a third of the panels on each phase). The Powerwall has only a single-phase inverter so it’s connected to just one phase, but the way it handles everything is quite clever: even though it can only discharge on one phase, it has current transformers attached to the other two phases so it can see how much is flowing through them, and it’ll discharge on its own phase an amount equal to the power being drawn on the other two (up to its maximum output of 5kW, anyway) to balance out what’s being used. The end result is that the electricity company sees us feeding in the same amount as we’re drawing, and thanks to the magic of net metering it all balances out to next to zero! This page on Solar Quotes is a good explanation of how it works.

The other interesting side-effect is that when the sun is shining and the battery is charging, it’s actually pulling power from the grid to charge itself, but only as much as we’re producing from the solar panels. Because the Enphase monitoring system doesn’t know about the battery, it gives us some amusing-looking graphs whereby the morning shows exactly the same amount of consumption as production up until the battery is fully charged!

We also have the Powerwall’s “Backup Gateway”, which is the smaller white box in the photos at the top of this post. In the event of a blackout, it’ll instantaneously switch over to powering us from the battery, so it’s essentially a UPS for the house! Again, 3-phase complicates this slightly and the Powerwall’s single-phase inverter means that we can only have a single phase backed up, but the lights and all the powerpoints in the house (which includes the fridge) are connected to the backed-up phase. The only things that aren’t backed up are the hot water system, air conditioning, oven, and stove, all of which draw stupendous amounts of power and will quickly drain a battery anyway.

We also can’t charge the battery off the solar panels during a blackout… it is possible to set it up like that, but there needs to be a backup power line going from a third of the solar panels back to the battery, which we didn’t get installed when we had the panels put in in February. There was an “Are you planning on getting a battery in the next six months?” question, which we said no to. 😛 If we’d said yes, they would have installed the backup line at the time; it’s still possible to install it now, but at the cost of several thousand dollars, because they need to come out and pull the panels up and physically add the wiring. Blackouts are not remotely a concern here anyway, so that’s fine.

In the post back in March, I included three screenshots of the heatmap of our power usage, and the post-solar-installation one had the middle of the day completely black. Spot the point in the graph where we had the battery installed!

We ran out of battery power on the 6th of November because the previous day had been extremely dark and cloudy and we weren’t able to fully charge the battery from the solar panels that day (it was cloudy enough that almost every scrap of solar power we generated went to just powering the house, with next to nothing left over to put into the battery), and the 16th and 17th were both days where it was hot enough that we had the aircon running the whole evening after the sun went down and all night as well.

Powershop’s average daily use graph is pretty funny now as well.

And even more so when you look all the way back to when we first had the smart meter installed, pre-solar!

For monitoring the Powerwall itself, you use Tesla’s very slick app where you can see the power flow in real time. When the battery is actively charging or discharging, there’s an additional line going to or from the Powerwall icon to wherever it’s charging or discharging to or from.

You can’t tell from a screenshot of course, but the dots on the lines connecting the Solar to the Home and Grid icons animate in the direction that the power is flowing.

It also includes some historical graph data, but unfortunately it’s not quite as nice as Enphase’s, and it doesn’t even have a website: you can only view it in the app. There’s a website called PVOutput that you can send your solar data to, and we have been doing that via Enphase since we got the solar panels installed, but the Powerwall also has its own local API you can hit to scrape the power usage and flows, plus the battery charge percentage. I originally found this Python script to do exactly that, but a) I always struggle to get anything related to Python working, and b) the SQLite database that it saves its data into kept intermittently getting corrupted, and the only way I’d know about it was by checking PVOutput and seeing that we hadn’t had any updates for hours.
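For the curious, the local API is just HTTP endpoints on the Powerwall’s gateway. A sketch of the idea in Python (the endpoint paths are from community documentation of the API and the address is an example; newer firmware has since added authentication requirements, so treat this as illustrative):

import requests

POWERWALL = "https://192.168.1.50"  # example local address of the gateway

# Instantaneous power flows for solar, site, load, and battery, in watts.
# verify=False because the gateway serves a self-signed certificate.
flows = requests.get(f"{POWERWALL}/api/meters/aggregates", verify=False).json()

# "State of energy", i.e. the battery charge percentage
charge = requests.get(f"{POWERWALL}/api/system_status/soe", verify=False).json()

print(flows["solar"]["instant_power"], charge["percentage"])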

So, I wrote my own in TypeScript! It saves the data into PostgreSQL, it’s all self-contained in a Docker container, and so far it’s been working a treat. The graphs live here, and to see the power consumption and grid and battery flow details, click on the right-most little square underneath the “Prev Day” and “Next Day” links under the graph. Eventually I’m going to send all this data to my website so I can store it all there, but for the moment PVOutput is working well.

It also won’t shock anybody to know that I updated my little Raspberry Pi temperature/power display to also include the battery charge and whether it’s charging or discharging (charging has a green upwards arrow next to it, discharging has a red downwards arrow).

My only complaint with the local API is that it’ll randomly become unavailable for periods of time, sometimes up to an hour. I have no idea why, but when this happens the data in the Tesla iPhone app itself is still being updated properly. It’s not a big deal, and doesn’t actually affect anything with regard to the battery’s functionality.

Overall, we’re exceedingly happy with our purchase, and it’s definitely looking like batteries in general are going to be a significant part of the electrical grid as we move to higher and higher percentages of renewables!

Fixing a Guitar Hero World Tour/Guitar Hero 5 guitar strum bar

Kristina and I had a date night last night in which we ate trashy food and then took the Xbox 360 out of storage and fired up Guitar Hero: Warriors of Rock. It was an excellent time except that my Guitar Hero World Tour guitar had stopped registering downward strums, and only upwards strums worked.

I figured I’d pull it apart today and see what was up, and thanks to this guide I figured it out, and am documenting it here for posterity (my problem wasn’t one of the ones described in that guide, but it was very handy to see how to disassemble the thing in the first place).

Tools needed for disassembly

  • Phillips-head #0 and #1 screwdrivers
  • Torx T9 screwdriver

Process

Firstly the neck needs to be removed, and the “Lock” button at the back towards the base of the guitar set to its unlocked position.

Next, the faceplate needs to be removed. This can be done by just getting a fingernail or a flathead screwdriver underneath either of the top bits of the body, pointed to with an arrow here, and gently prying it away from around the edges.

After that, there are twelve Torx T9 screws to remove, circled in red, and another four Phillips-head #0 ones, marked in green.

Once they’re all out, you can gently separate the front of the guitar where all the electronics live from the back of it.

Next, there are four Phillips-head #1 screws to remove to get the circuit board that contains the actual clicky switches away from the strum bar itself. Leave the middle two alone, as they attach the guides for the springs of the strum bar.

After this, it’s a bit of a choose-your-own-adventure, as what you do next really depends on what’s wrong with the strum bar. The switches are on the underside of the circuit board above; it’s definitely worth making sure they both click nice and solidly when you press on them directly. If they don’t, it’s apparently possible to source exact replacements (“275-016 SPDT Submini Lever Switch 5A at 125/250VAC”) and fix it with a bit of soldering, but thankfully this wasn’t necessary in my case.

In the next image, undoing the Phillips-head #1 screws circled in blue will allow you to take the strum bar assembly itself out and re-lubricate it (don’t use WD40, use actual proper lubricating grease) so it rocks back and forth a bit more smoothly. Another improvement you can make is adding a couple of layers of electrical tape to the areas I’ve circled in red. They’re where the strum bar physically hits the inside of the case, and the electrical tape can dampen the noise a bit.

What the strum bar problem ultimately ended up being in my case is that the middle indented section, where the switch rests against the strum bar to register a downstroke, had actually worn away and could no longer press the switch in far enough to click. My solution, circled in green, was to chop a tiny piece of plastic from a Warhammer 40,000 miniature sprue and glue it—with the same plastic glue I use for assembling plastic miniatures—to the strum bar. Then I reassembled everything and it’s as good as new!

More space: the Pimoroni HyperPixel4 display on a Raspberry Pi Zero W

Back at the start of 2018 I blogged about my Raspberry Pi temperature display setup and it’s been pretty excellent and utterly reliable since then, but because of its small size — the display is only 2 inches — it wasn’t particularly visible from across the room. That, combined with the discovery that the Envoy power consumption monitoring system we had installed with the solar panels has a locally-accessible API that you can use to get real-time production and consumption data (which lives at http://<ip-of-the-envoy-box>/production.json?details=1), made me start looking into larger displays so I could include both temperature/humidity data and our power consumption.
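As an aside, pulling the data from that Envoy endpoint is as simple as one HTTP GET. A hedged sketch in Python (the address is an example, and the field names are assumptions from my reading of the JSON it returns, so double-check them against your own Envoy’s output):

import requests

ENVOY = "http://192.168.1.40"  # example local address of the Envoy

data = requests.get(f"{ENVOY}/production.json?details=1").json()

# Current production and consumption in watts; the indices and field
# names here are assumptions based on the responses I've seen
production = data["production"][0]["wNow"]
consumption = data["consumption"][0]["wNow"]
print(f"producing {production}W, consuming {consumption}W")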

My first port of call was the 2.7-inch version of the original 2-inch PaPiRus display. I ordered it on the 6th of April and then… nothing showed up. I’d assumed the PaPiRus was MIA and had instead ordered a 4-inch, 800×480-pixel display in the form of Pimoroni’s HyperPixel4, the non-touch version. The Raspberry Pi registers it as a regular display, so you run a full desktop environment and windowing system on it, rather than drawing to it directly the way the PaPiRus works.

Of course, about a week after ordering the HyperPixel4, the PaPiRus finally arrived! The 2.7-inch version of the PaPiRus is 264 pixels wide by 176 pixels high, so not exactly high-resolution. There’s actually quite a lot of freedom to tweak the position of the elements on screen pixel-by-pixel, but I quickly discovered that that’s extremely tedious when doing it directly on the Raspberry Pi itself, because it takes several seconds for it to contact the required endpoints to pull in the data and then refresh the whole display. As well as writing text, the display can also show (1-bit) bitmap images, so I decided to change tack: instead of using the PaPiRus’s text API, I wrote a probably-slightly-overengineered Node.js application that would run on the Raspberry Pi 4B, fetch the data from the outdoor and indoor sensors as well as the Envoy, use the Javascript Canvas API to lay everything out, and then convert it to a bitmap image that the Python script on the Pi Zero W would fetch every minute and update the display with.

The biggest advantage of this system is that I could run it locally on my regular computer to quickly tweak the positioning without having to wait for the PaPiRus display to refresh each time, and I set it up so I could invert the colours to be white on black instead so I could clearly see the boundaries of the canvas. I put the code up on GitHub if anyone is interested in poking through it, and the end result looks like this:

Having over-engineered my Node.js solution, the HyperPixel4 display arrived maybe a couple of weeks later! It’s extremely slick-looking, but unfortunately the little plastic nubs that are meant to keep the screen in place aren’t actually big enough to hold it, and I managed to have the display pop out and crack some of the wires that feed it, which caused all sorts of display weirdness. I emailed the place that makes the HyperPixel display about it and they were super nice and helpful and sent me out a replacement with no questions asked! While I was waiting for the new one to arrive, the old broken one was still partially working, enough that I could at least get everything up and running how I wanted it, anyway.

Because using the HyperPixel is the same as if you’d hooked up an HDMI display and were using the Pi as a regular computer, I started from the full-blown Raspbian desktop image, not the Lite one. It was relatively straightforward to get everything going (mostly just installing and configuring the driver from Pimoroni’s GitHub repository), but there were some additional things I needed to do to get everything working as I wanted. I settled on a Node.js backend and React frontend setup (the separate backend was necessary because of CORS; I couldn’t hit the Envoy URL directly from the browser on the Pi, so I had to have the Node.js backend pull in the data and then feed it to the React app), both of which are running in a Docker image on the Raspberry Pi 4B.

  • By default the HyperPixel4 runs at full brightness, so I followed this to turn it way down, and also to set up a cron job to entirely turn the display off at midnight and turn it back on at 8am.
  • To get the Pi to open Chromium full-screen on boot, I followed these instructions.
  • To disable the annoying “Restore pages” dialog in Chromium, this on the Raspberry Pi Stack Exchange was helpful.
  • Raspbian comes by default with a VNC server installed, just not enabled. To enable it and allow access directly from macOS’s “Connect to Server” dialog in the Finder:
    • Run sudo raspi-config, go to Interface Options > VNC and enable it.
    • Run vncpasswd -service to set a VNC password (note if it’s longer than eight characters, only the first eight are used when connecting).
    • Create the file /etc/vnc/config.d/common.custom with the contents: Authentication=VncAuth
    • Then restart the VNC service with sudo systemctl restart vncserver-x11-serviced
  • And lastly, to disable the Pi from turning the screen off after activity, I followed these steps.

My ~/.config/lxsession/LXDE-pi/autostart ultimately ended up looking like this:

@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
point-rpi
@chromium-browser --start-fullscreen --start-maximized --app=http://fourbee:3003
@xset s off
@xset -dpms 
@xset s noblank
@sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 135000

And the whole setup looks like this:

A photo of a small LCD display showing outdoor and indoor temperature and current power consumption and production. The text is white on black.

It’s quite the improvement in visibility and I can easily read it from all the way in the kitchen! It updates itself automatically every 30 seconds, and there’s no e-ink full-display-refresh screen-blanking when it does.

Digital archeology: recovering ClarisWorks drawing files

Three years ago I posted about how I’d gone back and recovered all my old websites I’d published over the years and packed them up into a Docker image, and last year I’d idly mused that I should go back and recover the multitude of websites that I’d designed but never actually uploaded anywhere. I finally got around to doing that over the weekend, and they’re all up on archive.virtualwolf.org! Some are the original HTML source, some are just the Photoshop mockups, but it now contains almost the sum total of every single website I’d created (and there’s a lot of them). The only one missing is the very very first one… The Dire Marsh news updates are from early 1998, but I’d copied most of the layout from the previous site, as evidenced by the (broken) visitor counter at the left that says “<number> half-crazed Myth fanatics have visited this site since 21/12/97”.

Prior to building way too many websites, I’d been introduced to the Warhammer 40,000 and Dune universes when I was 13, and had immediately proceeded to totally rip them off get inspired and write my own little fictional universe along the same lines. This was all in 1996 and very early 1997. I even still have all the old files sitting in my home folder with the original creation dates and everything, but didn’t have anything that could open them, as they were a combination of ancient Microsoft Word writings — old enough that Pages didn’t recognise them — and ClarisWorks drawing documents (ClarisWorks had a vector-based drawing component to it as well as word processing). I ended up going down quite the rabbit hole in getting set up to bring them forwards into a modern readable format, and figured I’d document it here in case it helps anyone in future.

Running Mac OS 9 with SheepShaver

The very first hurdle was getting access to Mac OS 9 to begin with. I originally started out with my Power Mac G4 that I’ve posted about previously, but unfortunately it seems like the power supply is on the way out, as it kept shutting down (people have apparently had success resurrecting these machines using ATX power supplies, but I haven’t had a chance to look into it yet). Fortunately, there’s a Mac OS 9 emulator called SheepShaver that came to the rescue.

  1. Download the latest SheepShaver and the “SheepShaver folder” zip file from the emaculation forums.
  2. You need an official “Mac OS ROM” file that’s come from a real Mac or been extracted from the installer. Download the full New World ROMs archive from Macintosh Repository, extract it, rename the 1998-07-21 - Mac OS ROM 1.1.rom file to Mac OS ROM and drop it into the SheepShaver folder.
  3. Download the Mac OS 9.0.4 installer image from Macintosh Repository (SheepShaver doesn’t work with anything newer).
  4. Follow the SheepShaver setup guide to install Mac OS 9 and set up a shared directory with your Mac. Notes:
    • It defaults to assigning 16MB of RAM to the created virtual machine, be sure to increase it to something more than 32MB.
    • Disable the “Update hard disk drivers” box in the Options section of the Mac OS 9 installer or the installer will hang (this is mentioned in the setup guide but I managed to miss it the first time around).
    • When copying files from the shared directory, copy them onto the Macintosh HD inside Mac OS 9 directly, not just the Desktop, or StuffIt Expander will have problems decompressing files.

Recovering ClarisWorks files

This was the bulk of the rabbit hole, and if you’re running macOS 10.15 you’ve got some additional rabbit hole to crawl through, because the software needed to pull the ClarisWorks drawing documents into the modern era, EazyDraw Retro (scroll down to the bottom of the page to find the download link), is 32-bit only, which means it doesn’t run under 10.15, only 10.14 and earlier.

Step 1: Convert ClarisWorks files to AppleWorks 6

  1. Download the archive of QuickTime installers and install QuickTime 4.1.2, which is required to install AppleWorks 6.
  2. Download the AppleWorks 6 installer CD image (it has to be added in SheepShaver’s preferences as a CD-ROM device) and install it.
  3. Open each of the ClarisWorks documents in AppleWorks, you’ll get a prompt saying “This document was created by a previous version of AppleWorks. A copy will be created and ‘[v6.0]’ will be added to the filename”. Click OK and save the copy back onto the shared SheepShaver drive with a .cwk file extension.

Step 2: Install macOS 10.14 inside a virtual machine

This entire step can be skipped if you haven’t upgraded to macOS 10.15 yet as EazyDraw Retro can be run directly.

Installing 10.14 inside a virtual machine requires a bootable disk image of the installer, so that needs to be created first.

  1. Download DosDude1’s Mojave patcher and run it (you’ll likely need to right-click on the application and choose Open because Gatekeeper will complain that the file isn’t signed).
  2. Go into the Tools menu and choose “Download macOS Mojave” to download the installer package, save it into your Downloads folder.
  3. Open Terminal.app and create a bootable Mojave image with the following commands:
    1. hdiutil create -o ~/Downloads/Mojave -size 8g -layout SPUD -fs HFS+J -type SPARSE
    2. hdiutil attach ~/Downloads/Mojave.sparseimage -noverify -mountpoint /Volumes/install_build
    3. sudo ~/Downloads/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/install_build
    4. hdiutil detach /Volumes/Install\ macOS\ Mojave
    5. hdiutil convert ~/Downloads/Mojave.sparseimage -format UDTO -o ~/Downloads/Mojave\ Bootable\ Image
    6. mv ~/Downloads/Mojave\ Bootable\ Image.cdr ~/Downloads/Mojave\ Bootable\ Image.iso

Once you’ve got the disk image, fire up your favoured virtual machine software and install Mojave in it.

Step 3: Convert AppleWorks 6 files to a modern format

The final part to this whole saga is the software EazyDraw Retro which can be downloaded from their Support page. It has to be the Retro version because the current one doesn’t support opening AppleWorks documents (I’m guessing whatever library they’re using internally for this is 32-bit-only and can’t be updated to run on Catalina or newer OSes going forwards, so they dropped it in new versions of the software). It can export to a variety of formats, and has its own .eazydraw format that the non-Retro version can open.

Unfortunately EazyDraw isn’t free, but you can get a temporary nine-month license for US$20 (or pay full price for a non-expiring license if you’re going to be using it for anything other than this). It did work an absolute treat though: it was able to import every one of my converted AppleWorks 6 documents, and I saved them all out as PDFs. There were a few minor tweaks required to some of the text boxes, because the fonts were different between the original ClarisWorks document and the AppleWorks one and there were some overlaps between text and lines, but that was noticeable as soon as I’d opened them in AppleWorks and wasn’t the fault of EazyDraw’s conversions.

Converting Aldus SuperPaint files

There were only two of my illustration files that were done in anything but ClarisWorks, and they were from Aldus SuperPaint. Version 3.5 is available from Macintosh Repository, and pleasingly it’s able to export straight to TIFF, so I could convert them from that to PNG under current macOS. There were some minor tweaks required there as well, but it was otherwise quite straightforward.

Converting Microsoft Word files

All my non-illustration text documents were written with Microsoft Word 5.1 or 6, but the format they use is old enough that Pages under current macOS doesn’t recognise it. I wouldn’t be surprised if the current Word from Office 365 could open them, but I don’t have it, so I went the route of downloading Word 6 from Macintosh Repository, which can export directly to RTF. TextEdit under macOS opens RTF fine, and from there I saved them out as PDF.

History preserved!

Following the convoluted process above, I was able to convert all my old files to PDF and have chucked them into the Docker image at archive.virtualwolf.org as well (start at the What, even more rubbish? section), so you can marvel at my terrible fan fiction world-building skills!

I’m not deluding myself into thinking that this is any sort of valuable historical record, but it’s my record and as with the websites, it’s fun to look back on the things I’ve done from the past.

Installing OpenWRT on a Netgear D7800 (Nighthawk X4S) router

I blogged back in October of last year about setting up DNS over HTTPS, and it’s been very reliable, except that whenever I run Software Update on the Mac mini to pick up security updates, all of our DNS resolution stops working while it restarts! I’d come across OpenWRT a while back, which is an open-source and very extensible firmware for a whole variety of different routers, but I did a bunch of searching and hadn’t come across any reports of people fully successfully using it on our specific router, the Netgear D7800 (also known as the Nighthawk X4S), just people having various problems. One of the reasons I was interested in OpenWRT is that it’s Linux-based and extensible, so I would be able to move the DHCP and DNS functionality off the Mac mini back onto the router where it belongs, and in theory bring the encrypted DNS over as well.

I finally bit the bullet and decided to give installing it a go today, and it was surprisingly easy. I figured I’d document it here for posterity and in the hopes that it’ll help someone else out in the same position as I was.

Important note: The DSL/VDSL modem in the X4S is not supported under OpenWRT!

Installation

  1. Download the firmware file from the “Firmware OpenWrt Install URL” (not the Upgrade URL) on the D7800’s entry on OpenWRT.org.
  2. Make sure you have a TFTP client, macOS comes with the built-in tftp command line tool. This is used to transfer the firmware image to the router.
  3. Unplug everything from the router except power and the ethernet cable for the machine you’ll be using to install OpenWRT from (this can’t be done wirelessly).
  4. Set your machine to have a static IP address in the range of 192.168.1.something. The router will be .1.
  5. Reset the router back to factory settings by holding the reset button on the back of it in until the light starts flashing.
  6. Once it’s fully started up, turn it off entirely, hold the reset button in again and while still holding the button in, turn the router back on.
  7. Keep the reset button held in until the power light starts flashing white.

Now the OpenWRT firmware file needs to be transferred to the router via TFTP. Run tftp -e 192.168.1.1 (-e turns on binary mode), then put <path to the firmware file>. It’ll transfer the file and then install it and reboot; this will take several minutes.

Once it’s up and running, the OpenWRT interface will be accessible at http://192.168.1.1, with a username of root and no password. Set a password then follow the quick-start guide to turn on and secure the wifi radios — they’re off by default.

Additional dnsmasq configuration and DNS-over-TLS

I mentioned in my DNS-over-HTTPS post that I’d also set up dnsmasq to do local machine name resolution. This is trivially set up in OpenWRT: go to Network > DHCP and DNS, put in the MAC address and desired IP and machine name under the Static Leases section, then hit Save & Apply.

The other part I wanted to replicate was having my DNS queries encrypted. In OpenWRT this isn’t easily possible with DNS-over-HTTPS, but it is with DNS-over-TLS, which gets you to the same end state. It requires installing Stubby, a DNS stub resolver, which forwards DNS queries on to Cloudflare’s DNS.

  1. On the router, go to System > Software, install stubby.
  2. Go to System > Startup, ensure Stubby is listed as Enabled so it starts at boot.
  3. Go to Network > DHCP and DNS, and under “DNS Forwardings” enter 127.0.0.1#5453 so dnsmasq will forward DNS queries on to Stubby, which in turn reaches out to Cloudflare; Cloudflare’s DNS servers are configured by default. Stubby’s configuration can be viewed at /etc/config/stubby.
  4. Under the “Resolv and Hosts Files” tab, tick the “Ignore resolve file” box.
  5. Click Save & Apply.

Many thanks to Craig Andrews for his blog post on this subject!

Quality of Service (QoS)

The last thing I wanted to set up was QoS, which allows for prioritisation of traffic when your link is saturated. This was pretty straightforward as well, and just involved installing the luci-app-sqm package and following the official OpenWRT page to configure it!

Ongoing findings

I’ll update this section as I come across other little tweaks and changes I’ve needed to make.

Plex local access

We use Plex on the Xbox One as our media player (the Plex Media Server runs on the Mac mini), and I found that after installing OpenWRT on the router, the Plex client on the Xbox couldn’t find the server anymore despite being on the same LAN. I found a fix on Plex’s forums, which is to go to Network > DHCP and DNS, and add the domain plex.direct to the “Domain whitelist” field for the Rebind Protection setting.

Xbox Live and Plex Remote Access (January 2020)

Xbox Live is quite picky about its NAT settings and requires UPnP to be enabled, or you can end up with issues with voice chat or gameplay in multiplayer, and similarly Plex’s Remote Access requires UPnP as well. This isn’t provided by default with OpenWRT, but can be installed with the luci-app-upnp package, and the configuration shows up under Services > UPnP in the top navbar. It doesn’t start by default, so tick the “Start UPnP and NAT-PMP service” and “Enable UPnP” boxes, then click Save & Apply.

Upgrading to a new major release (February 2020)

When I originally wrote this post I was running OpenWRT 18.06, and now that 19.07 has come out I figured I’d upgrade, and it was surprisingly straightforward!

  1. Connect to the router via ethernet, make sure your network interface is set to use DHCP.
  2. Log into the OpenWRT interface and go to System > Backup/Flash Firmware and generate a backup of the configuration files.
  3. Go to the device page on openwrt.org and download the “Firmware OpenWrt Upgrade” image (not the “Firmware OpenWrt Install” one).
  4. Go back to System > Backup/Flash Firmware, choose “Flash image” and select your newly-downloaded image.
  5. In the next screen, make sure “Keep settings and retain the current configuration” is not ticked and continue.
  6. Wait for the router light to stop flashing, then renew your DHCP lease (assuming you’d set it up to be something other than 192.168.1.x like I did).
  7. Log back into the router at http://192.168.1.1 and re-set your root password.
  8. Go back to System > Backup/Flash Firmware and restore the backup of the settings you made (then renew your DHCP lease again if you’d changed the default range).

I had a couple of conflicts with files in /etc/config between my configuration and the new default file, so I SSHed in and manually checked through them to see how they differed and updated them as necessary. After that it was just a case of re-installing the luci-app-sqm, luci-app-upnp, and stubby packages, and I was back in business!

Cardiovascular health and a new shiny: Apple Watch Series 5

We bought a treadmill back at the start of 2014, and it came with a heart rate monitor that you wear around your chest, which is pretty cool. I gave the treadmill a pretty good go and was doing one of those Couch to 5K programs, but I kept having issues with my knees where running messes one of them up. We bought an elliptical in August last year — which apparently I didn’t post about here — and I’ve been thoroughly enjoying using it. The one we have has a tablet holder right at eye level, so I’ve been watching TV shows on Netflix while using it, and it really helps pass the time.

The downside was that I had no heart rate monitor, as the one that came with the treadmill only works with the treadmill (it shows your current heart rate right alongside the distance and estimated calories burned and such). I’d been going pretty hard on it, but had noticed that I was getting some heart palpitations, and had a couple of feeling-dizzy moments a while after I’d finished exercising. I went to the doctor and she suggested cutting down on caffeine to start with — I was on four (admittedly only instant) coffees a day — to see if that improves things, and if not we could get an EKG done.

Quite conveniently timed, the Apple Watch Series 5 was announced on the 10th of September this year, and it comes with an always-on display. Prior models had their display totally black and would only light up when you’d either raise your wrist or tap on the screen. I’d been eyeing the Apple Watch off for a couple of years, and finally decided I’d jump on board because it’d be usable as a regular watch even when the screen isn’t fully lit up. I got the 40mm stainless steel with the black leather Modern Buckle band and it looks classy as hell.

(I also realised after my first workout that I needed to get one of the cheaper non-leather bands as well because man do I get sweaty wrists when I’m exercising 😛).

Apple has been leaning pretty hard into the health thing with the Apple Watch in recent years, and as well as the heart rate monitor — which takes your heart rate periodically throughout the day, and constantly once you start a workout — it comes with an app called “Activity” on the iPhone to help motivate you to keep moving. The way it works is that there are three “rings” you should try to close each day, called Move, Exercise, and Stand. Move is just generally getting up and about and not sitting on your arse, and is set to 1422kJ for me based on my height and weight. Exercise is 30 minutes of brisk movement — I walk fast enough that I get a few minutes counted towards it each time I’m walking to or from the station or taking the stairs at work. The Stand goal is standing up and moving for at least a minute during 12 separate hours of the day, and if you’ve been sitting around for 50 minutes in a given hour, you get a little buzz on your wrist at ten minutes to the next hour reminding you to stand up and move around a bit.

Apple must have done a whole lot of psychological research into what’s most satisfying in terms of motivation, because god damn, closing those rings feels good. You get a little round fireworks animation in the given ring’s colour when you fully complete one for the day, and one with all three colours when you’ve finished all of them. I bought the Watch on the 23rd of September and every single day since then I’ve closed all three rings! You get little badges called “Awards” when you complete certain goals, like a full week of closing all three rings, which has meant that when I’ve been working from home I’ve been jumping on the treadmill or elliptical for just a quick half hour to get my exercise goal done. I also downloaded an app for the Watch called HeartWatch that gives you a little speedometer of heart rate when you’re exercising and helps you keep it in the correct zone — not too fast and not too slow — for what you’re trying to do, in my case just generally being fitter.

I completed October with every single day’s rings fully closed, which I’m pretty chuffed about!

A screenshot of the Activity app for Apple Watch showing every single day in October having all three rings closed

We’d also bought a set of smart scales last year that sync with the Health app on iOS. I’ve been weighing myself each morning, and as a result of all of this fitness I’m hovering around 70.2kg, which is a weight I don’t recall being for many years now; I was at 82kg a few years back. The heart palpitations have definitely decreased as well, and I haven’t had any dizziness since I’ve been monitoring my heart rate while exercising.

I don’t do much by way of outdoor exercising, but the Apple Watches all come with GPS as well so you can keep maps of the routes you’ve taken and see the speed you did during each section. Overall I’m wildly impressed with this bit of technology! I hadn’t worn a watch since about 2001 when I got a job and bought my first mobile phone, but now I feel naked without it, haha.

An artistic update

I posted back in February about some of the stuff I’d been doing in Procreate on my iPad, and I’m overdue for another post! I haven’t been doing as much in the intervening months, as there’s been lots of other things taking up my time and I haven’t felt as inspired, but I still managed to do a few.

I’ve quite enjoyed using Procreate’s Acrylic brush; you can get some really nice layer and lighting effects with it, and I used only that brush for this one:

A painting of a window at night, from inside a room. There's sheer curtains over the window, a candle is on a small table at the right casting light, and there's a tall cupboard at the left in the shadows.
The Window

I don’t actually remember the brush I used for this next one, but I definitely took full advantage of Procreate’s symmetry guides so I could get it properly even:

A painting of a cybernetic woman, her eyes look like blue glass and she has green and very shiny "skin". She has a purple hood over the back of her head.
Cybernetic Woman

This next one is interesting: I was intending the main structures that take up the top two-thirds of the image to look like a big craggy mountain range, but I showed it to Kristina and she can’t see it as anything but a tornado coming down!

A painting of a craggy grey mountain range in the top two-thirds of the image, with a river of fire making its way the whole way across the image, and a bunch of conifers at the bottom.
The River

I quite enjoy doing epic-looking landscapes, and this one ended up starting out in a very different place than it finished. It was much more brown, the feature in the middle was a river, and the sky was a sunset which I didn’t manage to get looking how I wanted. In the end it became very much inspired by the aesthetic of the Hive from Destiny!

A painting looking down a desolate grey rocky valley. A deep black rift runs down the middle with a sickly green glow at the bottom, at the left is a crystal embedded in the ground with the same green glow coming from it. At the right is a cave entrance in the valley wall with another glowing crystal. The sky is awash with stars, and the moon peeks from behind the valley peak at the far left.
The Emergence

The paintings above were all done from about March to half-way through May, then there was a bit of a break until July.

I decided to take advantage of Procreate’s drawing guide again, this time with the perspective guide. I was aiming for buildings in a futuristic city, but the thing that I always struggle with is details and a sense of scale, so it didn’t turn out to be anything but big blocks. Still pleased with the shadows and sense of lighting though.

A very clean geometric painting of grey and blue city buildings. The sky is purple and the light is coming from the very right, the buildings casting shadows to the left.
City Buildings

This next one was a “speed-painting”; I did it in about 45 minutes! It was a combination of the Acrylic brush and a palette knife brush from a big third-party brush pack I bought.

A painting of a volcano erupting atop a hill, the hill is surrounded by taller mountains all around, and the sky above is filled with striated dark orange clouds.
Volcano

Then lastly, this one was done in August, again with Procreate’s symmetry guide on! I was going to give her a witch’s hat but couldn’t get it looking right.

A head and shoulders portrait painting of a white woman with piercing green eyes, long red hair, and dark green lipstick. She’s wearing a dark purple top, and there’s a bright light shining behind her that’s lighting up her shoulders and the very edges of her hair.
The Witch

I also had a burst of inspiration and got some more miniature painting done! I’m still working my way through the Dark Imperium box set I got nearly two years ago, but the main impetus here was Games Workshop releasing their “Contrast” line of paints. They’re essentially a base coat plus wash combined into one single coat, and they’re seriously incredible. Dark Imperium comes with twenty poxwalkers (though only ten unique sculpts), which I was dreading having to paint, but the Contrast paints made them far quicker to deal with! I’ve done half of them so far.

As part of doing this, I also discovered how much better the miniatures look when you apply a varnish to them! Contrast paint specifically comes off a lot more easily than regular paint, so varnish is a necessity, but it also really makes the colours pop; they’re a lot more vibrant than without it.

Poxwalker 1
Poxwalker 2
Poxwalker 3
Poxwalker 4
Poxwalker 5

I also finally finished off the Plague Marine champion that’d been sitting there mostly-finished for months, and I’m really happy with the base I did. I had a bunch of really old Space Marines from a starter painting box that a friend had given me, so I sacrificed one of them and cut him up to adorn the base, and it looks absolutely fantastic.

Plague Marine Champion

It’s fascinating seeing the evolution of Games Workshop’s plastic miniatures: back when I started (*cough*24 years ago*cough*), plastic was the cheap and crappy option, and the pewter (or lead as they were back then!) miniatures were much more detailed. Nowadays it’s very much the reverse: the plastic is INSANELY detailed — have a look at the full-size poxwalkers on Flickr and zoom all the way in — and the pewter ones are a bit shit by comparison.

There’s also a small-scale Warhammer 40,000 game called Kill Team that I’ve started playing at work with some people, and I’ve bought the new box set that was released in September. It’s similar to Shadespire in that your squads only have a small number of miniatures, so it’s much more feasible to get them painted, but it comes with a bunch of absolutely amazing-looking terrain. I put it together and took a couple of photos prior to painting it, just to get a sense of the scale and what the terrain looks like.

A photo of some Death Guard and Space Wolves miniatures on the new Kill Team starter box terrain. The terrain itself is unpainted grey plastic but is towering over the miniatures and has a very steampunk aesthetic to it.
A photo of some Death Guard and Space Wolves miniatures on the new Kill Team starter box terrain. The terrain itself is unpainted grey plastic but is towering over the miniatures and has a very steampunk aesthetic to it.

I’ve finished painting a couple of pieces of it, but it’s so big that I don’t have a white backdrop large enough to fit the whole terrain piece! Photos will definitely be forthcoming once I do get said backdrop though.

Installing Linux Mint 19.1 on a Late-2010 MacBook Air

(Update December 2022: As suggested in the latest comments, this entire blog post is pretty much redundant now! Linux Mint 21.1 installs without a hitch, even using Cinnamon, and I have fully-functional brightness and sound keys straight out of the box.)

(Update December 2020: I successfully upgraded from Linux Mint 19.3 to Linux Mint 20 by following the official Linux Mint instructions. The only additional post-upgrade work I had to do was re-adding the Section "Device" bit to /usr/share/X11/xorg.conf.d/nvidia-drm-outputclass-ubuntu.conf as described below to get the brightness keys working again.)

(Update May 2020: I’ve re-run through this whole process using Linux Mint 19.3 and have updated this blog post with new details. Notably, no need to install pommed, and including the specific voodoo needed for the 2010 MacBook Air from Ask Ubuntu regarding PCI-E bus identifiers.)

We have a still perfectly usable Late-2010 MacBook Air (“MacBookAir3,2”, model number A1369), but with macOS 10.14 Mojave dropping support for Macs older than 2012 (it’s possible to extremely-hackily install it on older machines but I’d rather not go down that route), I decided I’d try installing Linux on it. The MacBook Air still works fine, if a bit slow, on macOS 10.13 but I felt like a bit of nerding!

Installation

My distribution of choice was Linux Mint, which is Ubuntu-based but without the constant changes that Canonical keeps making. The first hurdle right out of the gate was which “edition” to choose: Cinnamon, MATE, or Xfce. There was zero info on the website about which to pick, so I started with Cinnamon, but that kept crashing when booting from the installation ISO and giving me a message about being in fallback mode. It turns out Cinnamon is the one with all the graphical bells and whistles, and it appears that an eight-year-old ultralight laptop’s video card isn’t up to snuff, so I ended up on the MATE edition, which looks pretty much identical but works fine.

My installation method was using Raspberry Pi Imager to write the installation ISO to a spare SD card (despite the name, it can be used to write any ISO: scroll all the way down in the “Choose OS” dialog and select “Use custom”). Installing Linux requires partitioning the SSD first using Disk Utility; I added a 2GB partition for /boot, and another 100GB partition to install Linux itself onto. It doesn’t matter which format you choose, as they’ll be reformatted as part of the installation process.

After partitioning, reboot with the SD card in and the Option key held down, and choose the “EFI Boot” option. The installer is quite straightforward, but I chose the custom option when it asked how to format the drive: I formatted both the 2GB and 100GB partitions as ext4, with the 2GB one mounted at /boot and the 100GB one at /. The other important part is to install the bootloader onto that /boot partition, which makes it easy to get rid of everything if you want to go back to single-partition macOS and no Linux.

Post-install

The next hurdle was video card drivers. Mint comes with an open-source video card driver called “Nouveau”, which works but isn’t very performant, and there was lots of screen tearing as I’d scroll or move windows around. This being Linux, it was naturally not as simple as just installing the official Nvidia one and being done with it, because that resulted in a black screen at boot. 😛 I did a massive amount of searching and eventually stumbled across this answer on Ask Ubuntu, which worked where nothing else did, and I was able to successfully install the official Nvidia drivers without getting a black screen on boot!

(Update May 2020: I honestly don’t remember whether I had to go through Step 1 of Andreas’ instructions, “Install Ubuntu in UEFI mode with the Nvidia drivers”, but check for the existence of the directory /sys/firmware before running the rest of this. That directory is only created if you’ve booted in EFI mode. If it doesn’t exist, follow the link in Step 1).

I’m copying the details here for posterity, in case something happens to that answer, but all credit goes to Andreas there. These details are specifically for the Late 2010 MacBook Air with a GeForce 320M video card, so using this on something else might very well break things.

Create the file /etc/grub.d/01_enable_vga.conf and paste the following contents into it:

cat << EOF
setpci -s "00:17.0" 3e.b=8
setpci -s "02:00.0" 04.b=7
EOF

Then make the new file executable and update the grub config files:

$ sudo chmod 755 /etc/grub.d/01_enable_vga.conf
$ sudo update-grub

And then restart. Double-check that the register values have been set to 8 for the bridge device and 7 for the display device:

$ sudo setpci -s "00:17.0" 3e.b
08
$ sudo setpci -s "02:00.0" 04.b
07

Next, load up the “Driver Manager” control panel and set the machine to use the Nvidia drivers. Once it’s finished doing its thing — which took a couple of minutes — restart once more, and you’ll be running with the much-more-performant Nvidia drivers!

At this point I realised that the brightness keys on the keyboard didn’t work. Cue a whole bunch more searching, with the fix being to add the following snippet to the bottom of /usr/share/X11/xorg.conf.d/nvidia-drm-outputclass-ubuntu.conf:

Section "Device"
  Identifier     "Device0"
  Driver         "nvidia"
  VendorName     "NVIDIA Corporation"
  BoardName      "GeForce 320M"
  Option         "RegistryDwords" "EnableBrightnessControl=1"
EndSection

And now I have a fully-functioning Linux installation, with working sleep+wake, audio, wifi, and brightness!

I’m certainly not going to be switching to it full-time, and it feels a lot more fragile than macOS, but it’s fun to muck around with a new operating system. And with 1Password X, I’m able to use 1Password within Firefox under Linux too!

Nginx, PHP-FPM, and Cloudflare, oh my!

I use my Linode to host a number of things (this blog and Kristina’s, my website and Kristina’s, an IRC session via tmux and irssi for a friend and me, and probably another thing or two I’m forgetting). Kristina started up a travel blog a few months ago, which I’m also hosting on it, and shortly after that point I found that maybe once every two weeks or so my website and our blogs would stop running. I looked into it, and it was being caused by Linux’s Out-Of-Memory (OOM) Killer, which kicks in when the system is critically low on memory and needs to free some up; it was killing the Docker container that my website runs in, as well as MariaDB.
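
For anyone who hits the same thing: the OOM killer logs its victims to the kernel log, and you can also see at a glance which processes are using the most memory. A quick sketch (the exact log wording varies between kernel versions):

$ dmesg | grep -i "killed process"
$ ps aux --sort=-%mem | head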

The main cause was Apache and MariaDB using entirely too much memory for my little 1GB Linode; it was evidently sitting just on this side of stable with two WordPress blogs, and adding a third tipped it over the edge. The reason MariaDB and my website’s Docker container were the ones being killed is that although Apache was using a heap of memory, it was spread over a number of worker processes, so individually none of those were high, and MariaDB and my website were the largest on the list. There are lots of tweaks you can do, several of which I tried, but all that happened was that it delayed the inevitable rather than entirely resolving it. Apache is powerful, but low-resource-usage it ain’t. The primary low-resource-usage alternative to Apache is Nginx, so I figured this weekend I’d have a crack at moving over to that.

Overall it was pretty straightforward. This guide from Digital Ocean was a good starting point; the bits where it fell short were mostly just a case of looking up all of the equivalent directives for SSL, mapping to filesystem locations, and so on. (I have ~15 years of history of hosted images I’ve posted on the Ars Technica forums and my old LiveJournal—which is now this blog—and wanted to make sure those links all kept working.)

One difference is with getting WordPress going. WordPress is all PHP, and Apache by default runs PHP code inside the Apache process itself via mod_php, whereas with Nginx you have to use PHP-FPM or similar, which is an entirely separate process running on the server that Nginx hands PHP requests off to. I mostly followed this guide, also from Digital Ocean, though there were a couple of extra gotchas I ran into when getting it fully going with Nginx for WordPress (there’s a sketch of the resulting config just after this list):

  • Edit /etc/nginx/fastcgi_params and add a new line with this content, or you’ll end up with nothing but an empty blank page: fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
  • Remember to change the ownership of the WordPress installation directory to the nginx user instead of apache.
  • The default settings for PHP-FPM assume it’s running on a box with significantly more than 2GB of RAM; edit /etc/php-fpm.d/www.conf and change the line that says pm = dynamic to pm = ondemand. With ondemand, PHP-FPM will spin up worker processes as needed but will kill off idle ones after ten seconds rather than leaving them around indefinitely.
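
For reference, the bit of the server block that hands PHP off to PHP-FPM ended up looking something like the following sketch; the socket path should be whatever the listen directive in /etc/php-fpm.d/www.conf says, so double-check that rather than copying this verbatim:

location ~ \.php$ {
  # Hand anything ending in .php off to the PHP-FPM process
  include fastcgi_params;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_pass unix:/run/php-fpm/www.sock;
}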

Additionally, Nginx doesn’t support .htaccess files, so if you’ve got WordPress set up to use any of the “pretty”-type permalinks, you’ll end up with 404s when you try to view an individual post. The fix is to put the following into the server block:

location / {
  try_files $uri $uri/ /index.php?$args;
}

This passes the correct arguments to WordPress’ index.php file. You’ll also want to block access to any existing .htaccess files as well:

location ~ /\.ht {
  deny all;
}

The last thing I did with this setup was to put the entirety of my website, Kristina’s, and our respective blogs behind Cloudflare. I’d already had great success with their DNS over HTTPS service, and their original product is essentially a reverse proxy that caches static content (CSS, Javascript, images) at each of their points of presence around the world, so you load those from whichever server is geographically closest to you. For basic use it’s free and includes SSL; you just need to point your domain’s nameservers at the ones they provide. The only other thing I needed to do was to set up an additional DNS record so I could still SSH into my Linode, because the host virtualwolf.org now resolves to Cloudflare’s servers, which obviously don’t have any SSH running!
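
Conceptually the records end up looking something like this (the hostname and IP here are placeholders, not my real ones); the second record is set to “DNS only” in Cloudflare’s control panel so it bypasses their proxy and resolves straight to the Linode, and SSH goes to that hostname instead:

; example zone records, with placeholder values
virtualwolf.org.         A   203.0.113.50   ; proxied through Cloudflare
direct.virtualwolf.org.  A   203.0.113.50   ; "DNS only", used for SSH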

Overall, the combination of Nginx + PHP-FPM + Cloudflare has resulted in remarkably faster page loads for our blogs, and thus far significantly reduced memory usage as well.

GPG and hardware-based two-factor authentication with YubiKey

As part of having an Ars Technica Pro++ subscription, they sent me a free YubiKey 4, which is a small hardware token that plugs into your USB port and allows for a bunch of extra security on your various accounts, because you need the token physically plugged into your computer in order to authenticate. It does a number of neat things:

  • Generating one-time passwords (TOTP) as a second factor when logging in to websites;
  • Storing GPG keys;
  • Acting as a second factor with Duo;

And a bunch of other stuff as well, none of which I’m using (yet).

My password manager of choice is 1Password, and although it can store one-time passwords for websites itself, I wanted to lock access to the 1Password account itself down even further. Their cloud-based subscription already has strong protection by using a secret key in addition to your master password, but you can also set it up to require a one-time password the first time you log into it from a new device or browser, so I’m using the YubiKey for that.

I also generated myself GPG keys and saved them to the YubiKey. It was not the most user-friendly process in the world, though that’s a common complaint levelled at GPG; I found this guide that runs you through it all and, while long, it’s pretty straightforward. It’s all set up now, though: my public key is here, I can send and receive encrypted messages and cryptographically sign documents, and the master key is saved only on an encrypted USB stick. The GPG agent that runs on your machine and reads the keys from the YubiKey can also stand in for SSH, so I’ve got that set up with my Linode.
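
For posterity, the SSH part boils down to something like this sketch (these are the GPG 2.1+ commands and paths; see the guide linked above for the full detail):

# ~/.gnupg/gpg-agent.conf: have gpg-agent stand in for ssh-agent
enable-ssh-support

# In your shell profile: point SSH at gpg-agent's socket and make sure it's running
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent

# With the YubiKey plugged in, this should list the authentication key
ssh-add -L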

The last thing I’ve done is to set the YubiKey up as a hardware token with Duo and put my Linode’s SSH and this blog (and soon Kristina’s, though hers not with the YubiKey) behind that. With the Duo Unix module, even sudo access requires the YubiKey, and the way that’s set up is that you touch the button on the YubiKey itself and it generates a code and enters it for you.

It’s all pretty sweet and definitely adds a bunch of extra security around everything. I’m busily seeing what else I can lock down now!

Setting up DNS over HTTPS on macOS

Back in April, Cloudflare announced a privacy-focused DNS server running at 1.1.1.1 (and 1.0.0.1), and that it supported DNS over HTTPS. A lot of regular traffic goes over HTTPS these days, but DNS queries to look up the IP address of a domain are still unencrypted, so your ISP can still snoop on which servers you’re visiting even if they can’t see the actual content. We have a Mac mini that runs macOS Server and does DHCP and DNS for our home network, among other things, and with an upcoming version of macOS Server removing those functions in favour of regular non-UI tools, I figured now would be a good time to look into moving us over to use Cloudflare’s shiny new DNS server at the same time.

Turns out it wasn’t that difficult!

Overview

  1. Install Homebrew.
  2. Install cloudflared and dnsmasq: brew install cloudflare/cloudflare/cloudflared dnsmasq
  3. Configure dnsmasq to point to cloudflared as its own DNS resolver.
  4. Configure cloudflared to use DNS over HTTPS and run on port 54.
  5. Install both as services to run at system boot.

Configuring dnsmasq

Edit the configuration file located at /usr/local/etc/dnsmasq.conf, uncomment line 66, and change it from server=/localnet/192.168.0.1 to server=127.0.0.1#54 to tell it to pass DNS requests on to localhost on port 54, which is where cloudflared will be set up.
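
So the sum total of the dnsmasq configuration is this one directive:

# Pass all DNS queries to cloudflared, which will be listening on port 54
server=127.0.0.1#54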

Configuring cloudflared

Create the directory /usr/local/etc/cloudflared and create a file inside that called config.yml with the following contents:

no-autoupdate: true
proxy-dns: true
proxy-dns-port: 54
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Auto-update is disabled because that seems to break things when the update occurs, and the service doesn’t start back up correctly.

Configuring dnsmasq and cloudflared to start on system boot

dnsmasq: sudo brew services start dnsmasq will both start it immediately and also set it to start at system boot.

cloudflared: sudo cloudflared service install, which installs it for launchctl at /Library/LaunchDaemons/com.cloudflare.cloudflared.plist.
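
Before pointing anything else at them, you can check that both halves are answering (dig ships with macOS, and example.com is exactly that, an example):

$ dig +short @127.0.0.1 -p 54 example.com    # queries cloudflared directly
$ dig +short @127.0.0.1 example.com          # queries dnsmasq, which forwards to cloudflared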

Updating your DNS servers

Now that dnsmasq and cloudflared are running, you need to actually tell your machines to use them as their DNS server! Open up System Preferences > Network, hit Advanced, and in the DNS tab click the + button and add the local IP address of the machine running dnsmasq (the Mac mini, in my case; you’ll want to make sure it has a static IP address, of course). Repeat the process for everything else on your local network to have them all send their DNS traffic through to 1.1.1.1 as well.

You can confirm that all your DNS traffic is going where it should be with dnsleaktest.

And done!

I was surprised at how straightforward this was. I also hadn’t realised until I was doing all of this that dnsmasq does DHCP too, so with the assistance of this blog post I’ve also replaced the built-in DHCP server on the Mac mini and continue to have full local hostname resolution as well!

The spiritual successor to SimCity, Cities: Skylines

I first played the original SimCity Classic back in the early 1990s on our old Macintosh LC II, and absolutely loved it. Laying out a city and watching it grow was extremely satisfying, and the sequel, SimCity 2000, was even more detailed. I played a bit of SimCity 4, which came out in 2003, but the latest entry in the series, titled just “SimCity”, by all accounts sucked: the maps were significantly smaller in size, and it required an internet connection and was multiplayer to boot.

It’s actually possible to play SimCity 2000 on modern machines and I definitely got stuck into it a few years ago. This is a screenshot of my most recent city!

Screenshot of SimCity 2000, zoomed out and showing as much of my city as possible.

If you’re wanting a proper modern SimCity 2000-esque experience though, Cities: Skylines is what you’re after. It came out in March of 2015 on desktop and was ported to Xbox One in April of 2017, and they did a damned good job of it: the controls are all perfectly suited to playing on a controller as opposed to with a mouse and keyboard.

The level of detail of the simulation is fantastic, you can zoom all the way in and follow individual people (called “cims”, as opposed to SimCity’s “sims”) or vehicles and see where they’re going. There’s a robust public transport system and you can put in train lines (and buses, and trams, and a subway, and in the most recent expansion called Mass Transit, even monorails, blimps, and ferries!) and see the cims going to and from work, and how many are waiting at each station and so on.

We recently upgraded to the Xbox One X and a shiny new OLED 4K TV (quite the upgrade from our nine-year-old 37″ giant-bezeled LCD TV!), and it makes for some very nice screenshots. These are from my largest city, called Springdale, currently home to ~140k people!

Nostalgia and the Classic Mac OS

I’ve been a Mac user my entire life, originally just because my dad used them at his work and so bought them for home as well. My earliest memories are of him bringing his SE/30 home and playing around in MacPaint. We also had an Apple IIe that we got second-hand from my uncle that lived in my bedroom for a few years, though that doesn’t count as a Mac.

The first Mac my dad bought for us at home was the LC II in 1992 (I was 9!), and I can remember spending hours trawling through Microsoft Encarta being blown away at just how much information I could look up immediately. I also remember playing Shufflepuck Café and Battle Chess, and I’m sure plenty of others too that didn’t leave as large an impression. There was also an application that came with the computer called Mouse Practice that showed you how to use a mouse, and we had At Ease installed for a while as well until I outgrew it.

After the LC II we upgraded to the Power Macintosh 6200 in 1995, which among other things came with a disc full of demos on it including the original Star Wars: Dark Forces (which I absolutely begged my parents to get the full version of for Christmas, including promising to entirely delete Doom II which they were a bit disapproving of due to the high levels of gore), and Bungie’s Marathon 2: Durandal (which I originally didn’t even bother looking at for the first few months because I thought it was something to do with running!). Marathon 2 was where I first became a fan of Bungie’s games, and I spent many many hours playing it and the subsequent Marathon Infinity as well as a number of fan-made total conversions too (most notably Marathon:EVIL and Tempus Irae).

The period we owned the 6200 also marked the first time we had an internet connection (a whopping 28.8Kbps modem, no less!). The World Wide Web was just starting to take off around this time; I remember dialing into a couple of the local Mac BBSes, but at that point they were already dying out anyway and the WWW quickly took over. The community that sprang up around the Marathon trilogy was the first online community I was really a member of, and Hotline was used quite extensively for chatting. Marathon Infinity came with map-making tools, which I eagerly jumped into, and I made a whole bunch of maps and put them online. I was even able to dig up the vast majority of them; there’s only a couple that I’ve not been able to find. I have a vivid memory of when Marathon:EVIL first came out: it was an absolutely massive 20MB, and I can recall leaving the download going at a blazing-fast 2.7KB/s for a good two or three hours, constantly coming back to it to make sure it hadn’t dropped out or otherwise stopped.

After the Marathon trilogy, Bungie developed the realtime strategy games Myth: The Fallen Lords and its sequel Myth II: Soulblighter, both of which I also played the hell out of and was a pretty active member of the community in.

After the 6200 we then had a second-gen iMac G3 then a “Sawtooth” Power Mac G4 just for me as my sister and I kept arguing about who should have time on the computer and the Internet. 😛 The G4 was quite a bit of money as you’d imagine, so I promised to pay it back to dad as soon as I got a job and started working.

macOS (formerly Mac OS X, then OS X) is obviously a far more solid operating system, but I’ve always had a soft spot for the Classic Mac OS, even with its cooperative multitasking and general fragility. We got rid of the old Power Mac G4 probably eight years ago now (which I regret doing), and I wanted to have some machine capable of running Mac OS 9 just for nostalgia’s sake. Mum and dad still had mum’s old PowerBook G3, and I was able to get a power adapter for it and boot it up to noodle around in, but it was a bit awkwardly sized to fit on my desk, and the battery was so dead that it wouldn’t boot at all unless the power cord was plugged in.

There was a thread on Ars Technica a few months ago about old computers, and someone mentioned that if you were looking at something capable of running Mac OS 9 your best bet was to get the very last of the Power Mac G4s that could boot to it natively, the Mirrored Drive Doors model. I poked around on eBay and found a guy selling one in mint condition, and so bought it as a present to myself for my birthday.

Behold!

Power Mac G4 MDD

Dual 1.25GHz G4 processors, 1GB of RAM, 80GB of hard disk space, and a 64MB ATI Radeon 8500 graphics card. What a powerhouse. 😛

There’s a website, Macintosh Repository, where a bunch of enthusiasts are collecting old Mac software from yesteryear, so that’s been my main place to download all the old software and games I remember from growing up. It’s been such a trip down memory lane; I love it!

More miniatures: Warhammer 40,000 edition

Warhammer 40,000 used to be quite the complicated affair: lots of rules, looking things up on different tables to check what dice roll you needed for different effects, and needing many hours to finish a game. The 8th Edition of the game came out last year, was apparently extremely streamlined and simplified, and seems to have been received very well. Since I’d been doing well with Shadespire, I decided to get the 8th Edition core box set as well, and had almost exactly enough in Amazon gift card balance for it! It comes with Space Marines, as always, but the opposing side is Chaos this time: seven Plague Marines, a few characters, a big vehicle, and about twenty undead daemon things. I decided to alternate between painting a handful of each side at once, so as not to get bored, and have gone with Space Wolves (big surprise, I know) as the paint scheme for the Imperial side.

Space Wolves Intercessor

There’s another five of these Space Marines but they’re all identical apart from the poses so I didn’t take photos of all of them.

The Plague Marines are all unique though, so I’ve been taking photos of each of them; my first batch was four of them.

Plague Marine 1

Plague Marine 2

Plague Marine 3

Plague Marine 4

My mobile painting table has been a great success, but after the first batch of Space Marines I realised I was getting a sore neck and back from hunching over towards the miniatures as I was painting them because everything was too low. Another trip to Bunnings, and lo and behold…

Painting table from the side, showing the two vertical planks that give it some height

Problem solved!

I also realised the other day why I was enjoying painting my miniatures a lot more now than I used to… it’s thanks to being able to combine my hobbies of painting and photography. 😛 I can paint the miniatures and be happy with my work, but then also take professional-looking photos of them and share them with the world!

More Raspberry Pi adventures: the Pi Zero W and PaPiRus ePaper display

I decided I wanted to have some sort of physical display in the house for the temperature sensors so we wouldn’t need to be taking out our phones to check the temperature on my website if we were already inside at home. After a bunch of searching around, I discovered the PaPiRus ePaper display. ePaper means it’s not going to have any bright glaring light at night, and it also uses very little power.

The Raspberry Pi is hidden away under a side table, and already has six wires attached to the header for the temperature sensors, so I decided to just get a separate Raspberry Pi Zero W — which is absurdly small — and the PaPiRus display.

Setting it up

I flashed the SD card with the Raspbian Stretch Lite image, then enabled SSH and automatic connection to our (2.4GHz; the Zero W doesn’t support 5GHz) wifi network by doing the following:

1) Plug the flashed SD card back into the computer.

2) Go into the newly-mounted “boot” volume and create an empty file called “ssh” to turn on SSH at boot.

3) Also in the “boot” volume, create a file called “wpa_supplicant.conf” and paste the following into it:

country=AU
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="WIFI_SSID"
    psk="WIFI_PASSWORD"
    key_mgmt=WPA-PSK
}

4) Unmount the card, pop it into the Pi, add power, and wait 60-90 seconds and it’ll connect to your network and be ready for SSH access! The default username on the Pi is “pi” and the password is “raspberry”.

(These instructions are all thanks to this blog post but I figured I’d put them here as well for posterity).

The PaPiRus display connection was dead easy, I just followed Pi Supply’s guide after soldering a header into the Pi Zero W. If you want to avoid soldering, they also offer the Zero W with a header pre-attached.

Getting the Python library for updating the display was mostly straightforward; I just followed the instructions in the GitHub repository to manually install the Python 3 version.

I wrote a simple Python script to grab the current temperature and humidity from my website’s REST endpoints, and everything works! This script uses the “arrow” and “requests” libraries, which can be installed with “sudo apt-get install python3-arrow python3-requests”.
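
The script boils down to something like this sketch; the endpoint URL and JSON field names here are placeholders rather than my real ones:

#!/usr/bin/env python3
import arrow
import requests
from papirus import PapirusText

# Placeholder endpoint; the real thing hits my website's REST endpoints
ENDPOINT = 'https://example.org/weather/current'

data = requests.get(ENDPOINT, timeout=10).json()
updated = arrow.get(data['timestamp']).to('local').format('HH:mm')

# Render the current conditions on the ePaper display
PapirusText().write('Outside: {0}C {1}% at {2}'.format(
    data['temperature'], data['humidity'], updated))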

The next step is to have the Pi 3 with the sensors run a simple HTTP server that the Zero W can connect to, so that even if we have no internet connection for whatever reason, the temperatures will still be available at home. I’ve updated my Pi Sensor Reader to add HTTP endpoints.

Finally, some actual miniature painting

So despite having gotten the back room set up for miniature painting over three and a half years ago, I hadn’t actually done any of it since then. 😛 I also realised I hadn’t actually taken a photo of the setup.

I bought Games Workshop’s latest game, Shadespire, early last month. It does have miniatures to paint, but only eight in the core set, and it’s a board game where games last about half an hour or so versus the multi-hour affairs that are traditional Warhammer/Warhammer 40,000 games. I figured that with the holidays around and time to kill, and not having the prospect of endless amounts of miniatures to paint, I’d give it a go. I’m pleased to say that I clearly still have the painting skills!

I’ve finished five of them so far, so only three to go, and took some proper photos of them with the full external flash/umbrella setup.

Blooded Saek

Angharad Brightshield

Targor

Karsus the Chained

Obryn the Bold

(I’ll admit that I cheated slightly and didn’t actually paint any of these in the back room, however… during the week and a bit that I was doing them, the weather was really hot and the dinky little air conditioning unit in the back room wasn’t remotely up to keeping things cool, so I ended up bringing all the paints and bits inside and did them at the dining table).

The game Shadespire itself is really neat as well. I’ve only played a handful of games, but rather than just “Kill the other team” you also have specific objectives to accomplish as well. Have a read of Ars Technica’s review of it, they’re a lot more thorough and eloquent than I could be. 😛

Temperature sensors: now powered by Raspberry Pi

The Weather section on my website is now powered by my Raspberry Pi, instead of my Ninja Block! \o/

Almost exactly three years ago, I started having my Ninja Block send its temperature data to my website (prior to that, I was manually pulling the data from the Ninja Blocks API and didn’t have any historical record of it). Ninja Blocks the company went bust in 2015, and there was some stuff in the Ninja Blocks software that relied on their cloud platform to work; I ended up with no weather data for a couple of days because the Ninja Block couldn’t talk to the cloud platform, so I hacked at it, and the result was this very simple Node.js application as a replacement for their software. It always felt a bit crap, though, because if the hardware itself died I’d be stuck; yes, it was all built on “open hardware”, but I didn’t know enough about it all to be able to recreate it. I’d ordered a Raspberry Pi 3 in June last year, intending on replacing the Ninja Block and its sometimes-unreliable wireless temperature sensors with something newer, simpler, and hard-wired, but I found there was a frustrating lack of solid information regarding something that on the surface seemed quite simple.

I’ve finally gotten everything up and running, the Ninja Block has been shut down, and I’ve previously said I’d write up exactly what I did. So here we are!

Components needed

  • Raspberry Pi 3 Model B+
  • AM2302 wired temperature-humidity sensor (or two of them in my case)
  • Ethernet cable of the appropriate length to go from the Pi to the sensor
  • 6x “Dupont” female to either male or female wires (eBay was the best bet for these, just search for “dupont female”, and it only needs to be female on one end as the other end is going to be chopped off)
  • 1.5mm heatshrink tubing
  • Soldering iron and solder
  • Wire stripper (this one from Jaycar worked brilliantly; it automatically adjusts itself to the diameter of the insulation)

Process

  1. Cut the connectors off one end of the dupont cables, leaving the female connector still there, and strip a couple of centimetres of insulation off.
  2. Strip the outermost insulation off both ends of the ethernet cable, leaving a couple of centimetres of the internal twisted pairs showing.
  3. Untwist three of the pairs and strip the insulation off them, then twist them back together again into their pairs.
  4. Chop off enough heatshrink tubing to cover the combined length of the exposed ethernet plus dupont wire, plus another couple of centimetres, and feed each individual dupont wire through the tubing (there should be three separate bits of tubing, one for each wire).
  5. Solder each dupont wire together with one of the twisted pairs of ethernet cable, then move the heatshrink tubing up over the soldered section and use a hairdryer or kitchen blowtorch to activate the tubing and have it shrink over the soldered portion to create a nice seal.
  6. Repeat this feed-heatshrink-tubing/solder-wire/activate-heatshrink process again but with the cables that come out of the temperature sensor (ideally you should be using the same red/yellow/black-coloured dupont cables to match the ones that come out of the sensor itself, to make it easier to remember which is which).
  7. Install Raspbian onto an SD card and boot and configure the Pi.
  8. Using this diagram as a reference, plug the red (power) cable from the sensor into Pin 2 (the 5V power), the yellow one into Pin 7 (GPIO 4, the data pin), and the black one into Pin 6 (the ground pin).

AdaFruit has a Python library for reading data from the sensor; I’m using the node-dht-sensor library for Node.js myself. You can see the full code I’m using here (it’s a bit convoluted because I haven’t updated the API endpoint on my website yet and it’s still expecting the same data format as the Ninja Block was sending).
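
If you go the Python route, the actual reading part is tiny; a minimal sketch using the Adafruit library mentioned above (my real code is the Node.js version linked above):

import Adafruit_DHT

# The AM2302's data line is connected to GPIO 4 (physical pin 7)
humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.AM2302, 4)

if humidity is not None and temperature is not None:
    print('{0:.1f}C, {1:.1f}%'.format(temperature, humidity))
else:
    print('Failed to read from the sensor')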

I’d found a bunch of stuff about needing a “pull-up” resistor when connecting temperature sensors, but the AM2302 page on adafruit.com says “There is a 5.1K resistor inside the sensor connecting VCC and DATA so you do not need any additional pullup resistors”, and indeed, everything is working a treat!

Internet history

On Twitter recently, Mark had downloaded the whole archive of his Twitter account’s history and had been poking through it and randomly retweeting amusing old tweets. I downloaded my own Twitter history and quickly realised that a lot of the old things I’d linked to weren’t accessible, because I’d been using my own custom URL shortener (this was before the days of Twitter doing their own URL shortening) and it wasn’t running anymore. Fortunately I’d had the foresight to take a full copy of all of my data and databases from Dreamhost before I shut down my account, and one of those databases was the one that had been backing my URL shortener. A quick import into PostgreSQL and a hacky Node.js application later, it’s all up and running! I’m under no illusions that anyone but me is ever going to access it, but it’s nice to have another part of my internet history working. I’ve been hosting my own website and images and whatnot (things like pictures I’ve posted on my blog née LiveJournal, or in threads on Ars Technica) in one form or another since about 2002, and the vast majority of those links and images still work!

Speaking of my website, about four years ago I went and tried to collect all my old websites into a single archive so I could look back and see the progression. The majority of them I actually still had the original source code to, though my very first one or two have been totally lost. The earliest I still have is from March of 1998, when I was not quite fifteen years old! I started out with just HTML, then discovered CSS and Javascript rollover images, and then around 2001 I started using PHP. I had to go in and hack up some of the PHP-based sites in order to get them to work, and oh dear god, 18-year-old me was a FUCKING AWFUL coder. One of the sites consisted of a bit over three thousand lines in a single file, with all sorts of duplication and terribleness, and every single one of the sites that was hooked into MySQL had SQL injection vulnerabilities. I’m very proud of just how much my code has improved over the years.

I went back this weekend and managed to recover another handful of sites, and also included exports of the Photoshop files where the original site source wasn’t available. I’ve packed them all up into a Docker container (I’ll write another post about my experiences with Docker at some point soon) and chucked them up on archive.virtualwolf.org for the entire Internet to marvel at how terrible they all were! There’s a little bit more background there, but it’s a lot of fun just looking back at what I did.

Better Raspberry Pi audio: the JustBoom DAC HAT

I decided that the sound output from the Pi’s built-in headphone jack wasn’t sufficient after all and so went searching for better options (a DAC—digital-to-analog converter).

The Raspberry Pi Foundation created a specification called “HAT” (Hardware Attached on Top) a few years ago, which defines a standard way for the Pi to automatically identify and configure an add-on board and its drivers attached via the GPIO (General Purpose Input/Output) pins. There’s a number of DACs now that conform to this standard, and the one I settled on is the JustBoom DAC HAT. It’s from a UK company, but you can buy them locally from Logicware (with $5 overnight shipping, no less).

The setup is incredibly simple: connect the plastic mounting plugs and attach the DAC to the Pi, then start it up and edit /boot/config.txt to comment out the default audio setting:

#dtparam=audio=on

Then add three new lines in:

dtparam=audio=off
dtoverlay=i2s-mmap
dtoverlay=justboom-dac

Then reboot. (If you’re running Raspbian Stretch or newer, the i2s-mmap line should not be added).
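
After the reboot, you can quickly confirm that the DAC was picked up:

$ aplay -l    # the JustBoom card should show up in the list of playback devices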

To say that I’m impressed would be an understatement! I didn’t realise just how crappy the audio from the Pi’s built-in headphone jack was until I’d hooked up the new DAC and blasted some music out. I’m not an audiophile and it’s hard to articulate, but I’d compare it most closely to listening to really low-quality MP3s on cheap earbuds versus high-quality MP3s on a proper set of headphones.

If you’re going to be hooking your Pi into a good stereo system, I can’t recommend JustBoom’s DAC HAT enough!

Raspberry Pi project: AirPlay receiver

I bought a Raspberry Pi almost exactly a year ago, intending on eventually replacing my Ninja Block and its sometimes-unreliable wireless sensors with hardwired ones (apart from the batteries needing occasional changing, there’s something that interferes with the signal on occasion, and I just stop receiving updates from the sensor outside for several hours at a time before it suddenly starts working again). To do that, I need to physically run a cable from outside under the pergola to inside where the Raspberry Pi will live, and I don’t really want to go drilling holes through the house willy-nilly. I want to eventually get the electrician in to do some recabling, so I’m going to get him to do that as well, but until then the Pi was just sitting there collecting dust. I figured I should find something useful to do with it, but having a Linode meant that any sort of generic “have a Linux box handy to run some sort of server on” itch was already well-scratched.

I did a bit of Googling, and discovered Shairport Sync! It lets you use the Raspberry Pi as an AirPlay receiver to stream music to from iTunes or iOS devices, a la an Apple TV or AirPort Express. We already have an Apple TV but it’s plugged into the HDMI port on the Xbox One which means that to simply stream audio to the stereo we have to have the Xbox One, TV, and Apple TV all turned on (the Apple TV is plugged into the Xbox’s HDMI input so we can say “Xbox, on” and the Xbox turns itself on as well as the TV and amplifier, then “Xbox, watch TV” and it goes to the Apple TV; it works very nicely but is a bit of overkill when all you want to do is listen to music in the lounge room).

Installing Shairport Sync was quite straightforward, I pretty much just followed the instructions in the readme there then connected a 3.5mm to RCA cable from the headphone jack on the Raspberry Pi to the RCA input on the stereo. It’s mentioned in the readme, but this issue contains details on how to use a newer audio driver for the Pi that significantly improves the audio output quality.

The only stumbling block I ran into was the audio output being extremely quiet. Configuring audio in Linux is still an awful mess, but after a whole lot of googling I discovered the “alsamixer” tool (thanks to this blog post), which gives a “graphical” interface for setting the sound volume, and it turned out the output volume was only at 40%! I cranked it up to 100%, and while it’s still a bit quieter than what the Apple TV outputs, it doesn’t need a large bump on the volume dial to fix—there’s apparently no amplifier or anything on the Raspberry Pi, it’s straight line-level output. The quality isn’t quite as good as going via the Apple TV, but it gets the job done! I might eventually get a USB DAC or amplifier, but this works fine for the time being.

On macOS it’s possible to set the system audio output to an AirPlay device, so you can be watching a video but outputting the audio to AirPlay, and the system keeps the video and audio properly in sync. It works extremely well, but the problem we found with having the Apple TV hooked up to the Xbox One’s HDMI input is that there’s a small amount of lag from the connection. When the audio and video are both coming from the Apple TV there’s no problem, but watching video on a laptop while outputting the sound to the Apple TV meant that the audio was just slightly out of sync from the video. Having the Raspberry Pi as the AirPlay receiver solves that problem too!

UPDATE: Two further additions to this post. Firstly, and most importantly, make sure you have a 5-volt, 2.5-amp power supply for the Raspberry Pi. I’ve been running it off a spare iPhone charger which is 5V but only 1A, and the Pi will randomly reboot under load because it can’t draw enough power from the power supply.

Secondly, the volume changes done with the “alsamixer” tool are not saved between reboots. Once you’ve set the volume to your preferred level, you need to run “sudo alsactl store” to persist it (this was actually mentioned in the blog post I linked to above, but I managed to miss it).

Farewell Dreamhost

After 12 years of service, I’m shutting my Dreamhost account down (for those unaware, Dreamhost is a website and email hosting service).

My very first—extremely shitty—websites were hosted on whichever ISP we happened to be using at the time—Spin.net.au, Ozemail, Optus—with an extremely professional-looking URL along the lines of domain.com.au/~username. I registered virtualwolf.org at some point around 2001-2002 and had it hosted for free on a friend’s server for a few years, but in 2005 he shut it down so I had to go find some proper hosting, and that hosting was Dreamhost.

The biggest thing I found useful as I was dabbling in programming was that Dreamhost offered PHP and MySQL, so I was able to create dynamic sites rather than just static HTML. Of course, looking back at the code now is horrifying, especially the number of SQL injection vulnerabilities I had peppered my sites with.

Around the start of 2011, I started using source control—Subversion initially—and finally had a proper historical record of my code. I used PHP for the first year or so, then ended up outgrowing it and switched to a Perl web framework called Mojolicious. The only option for running a long-lived process on Dreamhost is FastCGI, which I never managed to get working with Mojolicious; fortunately Mojolicious can also run as a regular CGI script, so I was still able to use it with Dreamhost, albeit not at great speed.

At the same time I started using Subversion, I also signed up with Linode who offer an entire Linux virtual machine with which you can do almost anything you’d like as you have full root access. I originally used it mostly to run JIRA so I could keep track of what I wanted to do with my website and have the nifty Subversion/JIRA integration working to see my commits against each JIRA issue. I slowly started using the Linode for more and more things (and switched to Git instead of Subversion as well), until in 2014 I moved my entire website hosting over to the Linode.

At that point the only thing I was using Dreamhost for was hosting Kristina’s website and WordPress blog, and the email for our respective domains. Dreamhost’s email hosting wasn’t always the most reliable and towards the end of 2015 they had more than their usual share of problems, so we started looking for alternatives. Kristina ended up moving to Gmail and I went with FastMail (who I am extremely happy with and would very highly recommend!), I moved her blog and my previously-LiveJournal-but-now-Wordpress-blog over to the Linode, and that was that!

Moving my website hosting to the Linode also allowed me to move over to Node.js, and I’ve been going full steam ahead ever since. Since that post, I’ve moved from callbacks to Promises (so much nicer), written myself a HipChat add-on to keep an eye on the temperature that my Ninja Block reports, and moved my dodgy Twitter image-upload Perl script’s functionality into my site with a nice front-end added to it. Even looking back at my code from six months ago to now shows a marked increase in quality and readability.

So in summary, thanks for everything Dreamhost, but I outgrew you. 🙂

Missing history

So it turns out that the LiveJournal to WordPress Importer didn’t actually import everything. I’d been going through and updating links to old entries to point to their relevant entry here in WordPress, and there were several pages that didn’t actually make it across. (It also imported every single comment back to around 2010 twice, so I had to go through and delete all of those duplicates; prior to 2010 it was fine, for some weird reason.) That wouldn’t have been too bad, but in between my importing my LiveJournal originally and discovering this, Kristina’s old LiveJournal account was deleted, which also meant that every single comment of hers on my LiveJournal was now gone, and any fresh import I did directly from the LiveJournal API wouldn’t have them at all. 🙁 Looking back through my old entries was kind of sad, with just “Deleted comment” everywhere in place of Kristina’s actual comments.

I wanted to have the complete history of my LiveJournal here in WordPress, but I also didn’t want to have all of Kristina’s comments missing. I figured SOMETHING had to be able to be done!

I used ljdump to hit LiveJournal’s API and download each entry into a raw XML file, and that grabbed all the journal entries, so clearly something had fucked up in the WordPress import part.

The situation was this:

  • I had most but not all of my old LiveJournal entries imported into WordPress
  • Those entries that made it across did have Kristina’s old comments on them
  • I had all of my own entries downloaded to raw XML
  • ljdump also grabbed all the comments for each entry as well (sans Kristina’s, obviously)

I manually went through and compared the entries in WordPress to those on my LiveJournal month-by-month, and found that there were 68 missing ones in total. I hacked at the LiveJournal to WordPress Importer plugin until I was able to get it to read the raw XML files that’d come directly from ljdump, then spun up a new temporary WordPress install and was able to import just those missing entries. Next, I erased that temporary instance, imported the full backup from this blog, then ran the importer again to bring in just those 68 missing entries from XML, and it worked a treat.

Unfortunately there were a handful of those entries that also had had Kristina’s comments on them previously, so they were still missing. Thankfully, me being the digital hoarder that I am, I still had all of the email notifications that LiveJournal had sent me for each and every comment on my journal, and the LiveJournal API actually shows even deleted comments in their properly threaded state, just with no body or detail beyond the username who posted it. So I was able to copy the content and timestamp for each comment of Kristina’s that’d been on those missing entries that weren’t imported, and update the raw comment XML with that detail!

This is still a work in progress and my next step is to hack at the importer further to read the comments directly from XML (currently it’s reading the journal entries from the XML files, but the comments are still pulled from LiveJournal’s API directly). It’ll definitely be do-able, it may just take a little while because everything related to WordPress is in PHP and I’ve not done any PHPing for quite a number of years now!

This may seem a bit odd, but given I have 13 years of history in LiveJournal, and it’s where Kristina and I initially started chatting a lot more before she visited and we got together, I didn’t want to have these weird entries where Kristina was just essentially erased from my blog history.

I might even put my modifications to the plugin up on Bitbucket if it seems to be working well, given the current LiveJournal to WordPress Importer is a bit shit.

New computer!

I’d been looking to upgrade to a 27″ iMac from my mid-2010 MacBook Pro, and five years from a machine is not bad at all! I was hoping that the non-Retina iMacs would get one last upgrade, but it was not to be: I checked the Apple Store after the cheaper model of Retina iMac was released last week, and they had in fact removed all of the custom options from it other than RAM and storage, so the only video card available was the base-level 1GB one. Bleh.

However! There was fortuitously an almost-maxed-out 27″ Late 2013 iMac available on the refurb store: a 4GB Nvidia GeForce GTX 780M video card (top of the line) and a 3TB Fusion Drive. Only 8GB of RAM, but it’s user-replaceable in this model anyway, so I have another 16GB on the way. The thing with the refurb machines is that what you see is what you get, there are no upgrades or anything available, but that was okay. The only change I’d have made is a 1TB SSD (the Fusion Drive is a large hard disk combined with a relatively small 128GB SSD), but ah well. I saved probably more than a grand all up!

kungfupolarbear has inherited my old machine as it’s a decent step up from her MacBook Air, and we’re going to use the MacBook Air as a travel computer, and if I want to do some coding from the lounge or some such.

\o/

kungfupolarbear is in the US visiting her family; she left yesterday and arrives back in the country in two weeks. It’s weird not having her here, and I feel like I’m just biding my time until she’s back.

The whole stereotype of “The wife is away, I’m going to do all of these things that she doesn’t let me do normally!” just doesn’t apply to us at all. We both will happily play video games or putter around on the computer or whatever. I’m having gypocalypse, wobin, and some other friends over on Saturday to play this Warhammer 40,000-themed D&D-type thing, and that wouldn’t matter if kungfupolarbear was here, she’d happily hang out despite having zero interest in actually playing it and would absolutely not begrudge us spending time playing it. Everything is just so totally effortless, it still amazes me. <3

I had yesterday and today off (yesterday because we woke up at 3:30am in order to get to the airport on time D: ), and am working from home the rest of the time she’s gone since Beanie needs looking after and obviously can’t be by himself for 11 hours a day.

In other news, we bought an Xbox One; a colleague of mine was selling his as he’s moving to Paris. We’ve got Forza 5 (a racing game) to tide us over until the Diablo III expansion comes out, and man, the graphics are nice. This is only a launch title too, so they’ll be even more impressive once developers get a handle on squeezing the most out of the system. One very slick thing is that you can have the Xbox turn the TV and amplifier on when it turns on, and it’ll turn them off when the Xbox turns off too.

DIY, and some not so DIY

No posts for the last month, mostly because there’s not been anything of note happening. Yesterday, though… phew!

So, the path to the back room was pretty bloody awful.

kungfupolarbear had found a photo of a nice path design, so we decided to go to Bunnings and get the supplies for it. Long story short, about five hours and many sore muscles later, we have a new path!

As per pretty much everything so far, it took more work than expected.

And today we had an electrician come to replace the god-awful ceiling lights that were in the lounge room and kitchen, and to wire up some ethernet for us too. It’s so nice being able to put our own little touches on things now!

In other news, our annual bonus ended up being 12.5%! \o/ Of course, 45.6 cents in the dollar of that went to tax, but that was still a nice chunk of change. So I’ve ordered a NinjaBlock. Should be fun to play around with.

A new design

One of the things I’ve done with my website is add an entirely new design, along with the ability to switch between the current design and the new one. Check it out!

The “Dark” style is the current one, and the “Light” style is the new one. I’m really pleased with how it turned out design-wise, quite apart from managing to implement a style-switcher that stores the current selection in a session cookie so it’s remembered next time you visit. 😀