Replacing the hard disk in a PowerBook G3 “Pismo”, and other fun with Mac OS 9

I posted nearly five years ago about my shiny new Power Mac G4 and how much I was enjoying the nostalgia. Unfortunately the power supply in it has since started to die, and the machine will randomly turn itself off after an increasingly short period of time. Additionally, I’d forgotten just how noisy those machines were, and how hot they ran! I’ve bought a replacement power supply for it, but it involves rearranging the output pins from a standard ATX PSU to what the G4 needs, and that’s daunting enough that I still haven’t tackled it. I decided to go back to the trusty old PowerBook G3 instead, as I’ve since gotten a new desk and computer setup with much more room on it, and having a much more compact machine has been very helpful.

One thing I was a bit concerned about was the longevity of the hard disk in it, so I started investigating the possibility of putting a small SSD into it. Thankfully such a thing is eminently possible by way of a 128GB mSATA SSD and an mSATA to IDE adapter! I followed the iFixit guide — though steps 6 through to 11 were entirely unnecessary — and now have a shiny new and nearly entirely silent PowerBook G3 (though it’s disconcerting just how quiet it is for such an old machine… I hadn’t realised how subconsciously used I was to hearing the clicking of the hard disk).

A photo of a black PowerBook G3 sitting on a desk, booted to the Mac OS 9 desktop. The machine is big and chunky, but also has subtle curves to it, and the trackpad is HILARIOUSLY tiny compared to modern Macs.

I even had the original install discs from the year 2000 when mum first bought this machine, and they worked perfectly (though a few years ago I’d had to replace the original DVD drive with a slot-loading one because it had stopped reading discs entirely).

Once I had it up and running, the next sticking point was actually getting files onto it. As I mentioned in my previous post, Macintosh Repository has a whole ton of old software, and if you load it up with a web browser from within Mac OS 9 it’ll load without HTTPS, but even so it’s pretty slow. Sometimes it’s nicer just to do all the searching and downloading from a fast modern machine and then transfer the resulting files over.

Mac OS 9 uses AFP for sharing files, and the AFP server that used to be built into Mac OS X was removed a few versions ago. Fortunately there’s an open-source implementation called Netatalk, and some kindly soul has packaged it all up into a Docker container.

I also stumbled across a project called Webone a while ago, which essentially acts as an SSL-stripping proxy: you run it on a modern machine and point your old machine’s web browser at it via its proxy setting. Old browsers are utterly unable to do anything with the modern web thanks to the newer versions of TLS that HTTPS now requires, but this lets you at least somewhat manage to view websites, even if they often don’t actually render properly.

Both Netatalk and Webone required a bit of configuration, and rather than setting them up and then forgetting how I did so, I’ve made a GitHub repository called Mac OS 9 Toolbox with docker-compose.yml files and setup for both projects, plus a README so future-me knows what I’ve done and why. 😛 In particular, getting write access from the Mac OS 9 machine to the one running Netatalk was tricky.

I also included a couple of other things in there, and will continue to expand on it as I go. One is how to convert the PICT-format screenshots from Mac OS 9 into PNG, since basically nothing will read PICTs anymore. It also includes a Mastodon client called Macstodon:

A screenshot of a multi-pane Mac OS 9 application showing the Mastodon Home and Local Timelines and Notifications at the top, and the details of a selected toot at the bottom.

And also the game Escape Velocity: Override (which I’m very excited to note is getting a modern remaster from the main guy who worked on the original):

A screenshot of a top-down 2D space trading/combat game with quite basic graphics. A planet is in the middle of the screen along with several starships of various sizes.

I mentioned both the Marathon and Myth games in my previous post, but those actually run quite happily on modern hardware since Bungie was nice enough to open-source them many years ago. Marathon lives on with Aleph One, and Myth via Project Magma.

Upping my monitoring game with MQTT

Previously on Monitor All The Things:

(The display that used to show the air quality has been changed to show a clock instead, and the air quality monitoring is done via another ESP32 now. I’m also sensing a definite theme with my blog post titles here).

I hadn’t blogged about it, but I also have all of this (indoor and outdoor temperature and humidity, power usage and generation plus battery charge, and outdoor air quality) going into InfluxDB for visualising in Grafana. The dashboard I made looks like this:

Pretty spiffy, eh?

It had very much evolved organically as I went, though, with lots of different things on different hosts making HTTP calls all over the place: my own slightly dodgy system for getting the ESP32s that are connected to the temperature sensors to save their readings to the local filesystem if my website couldn’t be contacted (for example if we had an internet outage), plus two separate things hitting the Powerwall’s local API every five seconds to pull the power data (one for the little HyperPixel display at the front of the house, and one for the visualisation stuff above).

I figured there had to be a cleaner and more elegant way of doing this. At work I deal with Amazon’s Simple Queue Service (SQS) quite a lot and use it in one of the services I built, and I wondered whether I could accomplish something similar myself: have everything drop messages onto a queue, and have the things that need to read them pick those messages up from it.

Turns out there is, and it’s called MQTT!

It’s an absurdly simple and lightweight protocol: you have a central server called a “broker”, a publisher that sends messages to a given topic on the broker, and as many subscribers as you want, each of which also connects to the broker and listens on one or more topics, and the broker ensures those messages get from the publisher to each subscriber. There are also quality-of-service settings where you can have it guarantee that a message is received by the subscriber at least once, and it’ll queue up messages for subscribers that drop offline and send them all once the subscriber comes back.

Interestingly, you can also have a broker on one machine connect to a broker on another machine, and have it send messages on a particular topic to the remote broker, which seemed like it’d be a good way to get weather updates to my website.

Someone has written an MQTT client library in MicroPython for the ESP32 called mqtt_as, so that would take care of the ESP32 side of things; I’d use the popular open-source MQTT broker Mosquitto; and there’s a Javascript MQTT client called MQTT.js that I’d use for my website and all the other TypeScript parts of the setup.
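
To give a feel for how little code is involved on the TypeScript side, here’s a rough sketch of a publisher and subscriber using MQTT.js (the broker address and topic names are made-up examples rather than what I actually use):

import mqtt from "mqtt";

// Connect to the Mosquitto broker (hostname and port are placeholders)
const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => {
  // Listen to everything under a (made-up) "sensors" topic hierarchy
  client.subscribe("sensors/#", { qos: 1 });

  // Publish a reading; QoS 1 means "deliver at least once"
  client.publish(
    "sensors/outdoor/temperature",
    JSON.stringify({ celsius: 23.4, humidity: 61 }),
    { qos: 1 }
  );
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});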

I did a bunch of brainstorming in draw.io and came up with this elaborate diagram:

(Mechanise is the hostname of my Linode, which my website runs on, and PVOutput is a website for sending your solar power generation data to; a bunch of people at work do the same and we’re all in the same “team” so we can see how much we’ve all generated together).

After that, it just involved a whole bunch of coding (as well as ordering two spare ESP32s so I could test that my code worked without having to pull apart my existing setup), which I’ve uploaded to GitHub:

Despite having written up a careful plan and done what I thought was getting all my ducks in a row for a quick switchover this morning, I ran into a number of things that caused it to take a few hours to get going (things like forgetting to configure PostgreSQL on the Raspberry Pi 4B to allow things running in Docker to access it, needing to add an extra published port on the Linode so my website could connect to Mosquitto, and most annoyingly of all, a recent VSCode update breaking Pymakr and having to revert to an old version of both pieces of software). I got everything up and running in the end, and now if I add any new monitoring things, it’ll be quite simple to publish the data to Mosquitto and slurp it up into InfluxDB!
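
For what it’s worth, the “slurp it up into InfluxDB” side of things boils down to something like this minimal sketch, assuming InfluxDB 2.x and its official JavaScript client (the topic, org, bucket, and field names here are invented for illustration):

import mqtt from "mqtt";
import { InfluxDB, Point } from "@influxdata/influxdb-client";

// URL, token, org, and bucket are all placeholders
const writeApi = new InfluxDB({ url: "http://localhost:8086", token: "example-token" })
  .getWriteApi("home", "sensors");

const client = mqtt.connect("mqtt://localhost:1883");

client.on("connect", () => client.subscribe("sensors/#", { qos: 1 }));

client.on("message", (topic, payload) => {
  // e.g. topic "sensors/outdoor/temperature", payload {"celsius": 23.4}
  const reading = JSON.parse(payload.toString());
  const point = new Point("temperature")
    .tag("topic", topic)
    .floatField("celsius", reading.celsius);
  writeApi.writePoint(point);
});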

More fun with temperature sensors: ESP32 microcontrollers and MicroPython

I’ve blogged previously about our temperature/humidity sensor setup and how they’re attached to my Raspberry Pis, and they’ve been absolutely rock-solid in the three-and-a-half years since then. A few months ago a colleague at work had mentioned doing some stuff with an ESP32 microcontroller, and just recently I decided to actually look up what that was and what one can do with it, because it sounded like it might be a fun new project to play with!

From Wikipedia: ESP32 is a series of low-cost, low-power system on a chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth.

So it’s essentially a tiny single-purpose computer that you write code for and then flash that code onto the board, rather than the Raspberry Pi approach of running an entire Linux OS. It runs at a blazing-fast 240MHz and has 320KB of RAM. The biggest draw for me was that it has built-in wifi, so I could do networked stuff easily. There’s a ton of different boards and options and it was all a bit overwhelming, but I ended up getting two of Adafruit’s HUZZAH32s, which come with the headers already soldered on for attaching the temperature sensors we have. Additionally, they have 520KB of RAM and 4MB of storage.

Next up, I needed to find out how to actually program the thing. Ordinarily you’d write in C like with an Arduino and I wasn’t too keen on that, but it turns out there’s a distribution of Python called MicroPython that’s written explicitly for embedded microcontrollers like the ESP32. I’ve never really done much with Python before, because the utter tyre fire that is the dependency/environment management always put me off (this xkcd comic is extremely relevant). However, with MicroPython on the ESP32 I wouldn’t have to deal with any of that; I’d just write the Python and upload it to the board! Additionally, it turns out MicroPython has built-in support for the DHT22 temperature/humidity sensor that I’ve already been using with the Raspberry Pis. Score!

There was a lot of searching over many different websites trying to find how to get all this going, so I’m including it all here in the hopes that maybe it’ll help somebody else in future.

Installing MicroPython

At least on macOS, first you need to install the USB to UART driver or your ESP32 won’t even be recognised. Grab it from Silicon Labs’ website and get it installed.

Once that’s done, follow the Getting Started page on the MicroPython website to flash the ESP32 with MicroPython, substituting /dev/tty.SLAB_USBtoUART for /dev/ttyUSB0 in the commands.

Using MicroPython

With MicroPython, there are two files that are always executed when the board starts up: boot.py, which is run once at boot time and is generally where you’d put your connect-to-the-wifi-network code, and main.py, which runs after boot.py and is generally the entry point to your code. To get these files onto the board you can use a command-line tool called ampy, but it’s a bit clunky and also no longer supported.

However, there is a better way!

Setting up the development environment

There are two additional tools that make writing your Python code in Visual Studio Code and uploading to the ESP32 an absolute breeze.

The first one is micropy-cli, which is a command-line tool to generate the skeleton of a VSCode project and set it up for full autocompletion and Intellisense of your MicroPython code. Make sure you add the ESP32 stubs first before creating a new micropy project.

The second is a VSCode extension called Pymakr. It gives you a terminal to connect directly to the board and run commands and read output, and also gives you a one-click button to upload your fresh code, and it’s smart enough not to re-upload files that haven’t changed.

There were a couple of issues I ran into when trying to get Pymakr to recognise the ESP32 though. To fix them, bring up the VSCode command palette with Cmd-Shift-P and find “Pymakr > Global Settings”. Update the address field from the default IP address to /dev/tty.SLAB_USBtoUART, and edit the autoconnect_comport_manufacturers array to add Silicon Labs.

Replacing the Raspberry Pis with ESP32s

After I had all of that set up and working, it was time to start coding! As I mentioned earlier I’ve not really done any Python before, so it was quite the learning experience. It was a good few weeks of coding and learning and iterating, but in the end I fully-replicated my Pi Sensor Reader setup with the ESP32s, and with some additional bits besides.

One of the things my existing Pi Sensor Reader setup did was to have a local webserver running so I could periodically hit the Pi and display the data elsewhere. Under Node.js this is extremely easy to accomplish with Express, but with MicroPython the options were more limited. There are a number of little web frameworks that people have written for it, but they all seemed like overkill.

I decided to just use raw sockets and write my own, though one thing I didn’t appreciate until this point was how much Node.js’s everything-is-asynchronous-and-non-blocking nature makes this kind of thing easy: you don’t have to worry about a long-running function causing everything else to grind to a halt while it waits for that function to finish. Python has a thing called asyncio, but I was struggling to get my head around how to use it for the webserver part of things until I stumbled across this extremely helpful repository where someone had shown an example of how to do exactly that! (I even ended up making a pull request to fix an issue I discovered with it, which I’m pretty stoked about.)

One of the things I most wanted was some sort of log file accessible in case of errors. With the Raspberry Pi I can just SSH in and check the Docker logs, but once the ESP32s are plugged into power and running, there’s no easy equivalent. I ended up writing the webserver with several endpoints to read the log, clear it, reset the board, and view and clear the queue of failed updates.

The whole thing has been uploaded to GitHub with a proper README of how it works; the boards have been running connected to the actual indoor and outdoor temperature sensors and posting data to my website for just under a week now, and it’s been absolutely flawless!

More Raspberry Pi-powered monitoring: air quality!

Here in New South Wales, last year’s bushfires over late spring and into summer were astoundingly bad, and there were days where Sydney had the poorest air quality on the entire planet. Everyone was watching the PM2.5 values, and there were days where Kristina couldn’t go outside because of her asthma. I figured it’d be neat to set up a Raspberry Pi-powered air quality sensor and had ordered the sensor back in February but didn’t get around to putting it into service until now.

This is the bit that lives inside so we can easily see the latest reading:

A small 4" LCD display showing the air quality values for PM1.0, PM2.5, and PM10.

It uses the same sort of setup as my Pimoroni display, and I updated my pi-home-dashboard to add a second page to display the values from the air quality reader.

The sensor itself is a Plantower PMS5003 sensor and is attached to the same Raspberry Pi that the outdoor temperature sensor is on. Adafruit’s instructions on getting it set up were pretty straightforward, and they also give some sample code for how to read it, but it’s in Python which I intensely dislike (I don’t really even have any strong feelings about the language itself one way or the other, but I’ve never had a good experience with the damn package management around it, so I do my damnedest to avoid it). I was able to write the same logic in TypeScript instead — though had to consult the clever people on Ars Technica because parsing the output from the sensor involves things like bit-shifting which is quite low-level and something I’m utterly unfamiliar with — and chucked the whole thing up on GitHub. It takes ten readings and averages them, and has an HTTP endpoint for pulling the latest values.
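
For anyone curious about the bit-shifting: the PMS5003 spits out 32-byte frames where each value is a 16-bit big-endian integer, so parsing is mostly a matter of gluing pairs of bytes together and checking a checksum. A simplified sketch of the idea (the byte offsets are from my reading of the datasheet, so double-check them against the real code on GitHub):

// Parse one 32-byte PMS5003 frame into the three "atmospheric" PM values
function parseFrame(frame: Buffer): { pm1: number; pm25: number; pm10: number } {
  // Each frame starts with the two magic bytes 0x42 0x4d ("BM")
  if (frame[0] !== 0x42 || frame[1] !== 0x4d) {
    throw new Error("Not the start of a PMS5003 frame");
  }

  // Values are 16-bit big-endian: high byte shifted left 8 bits, OR'd with the low byte
  const word = (offset: number): number => (frame[offset] << 8) | frame[offset + 1];

  // The last two bytes are a checksum: the sum of all the preceding bytes
  const checksum = word(30);
  let sum = 0;
  for (let i = 0; i < 30; i++) {
    sum += frame[i];
  }
  if (sum !== checksum) {
    throw new Error("Checksum mismatch, discarding frame");
  }

  // Offsets 10/12/14 hold the "atmospheric environment" PM1.0/PM2.5/PM10 readings in µg/m³
  return { pm1: word(10), pm25: word(12), pm10: word(14) };
}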

I’ve set the front-end up so the colour of the numbers will change to orange and red depending on how bad the air quality is, but hopefully it’s a long while before we actually see that in action!

Powering our house with a Tesla Powerwall 2 battery

I posted back in March about our shiny new solar panels and efforts to reduce our power usage, and as of two weeks ago our net electricity grid power usage is now next to zero thanks to a fancy new Tesla Powerwall 2 battery!

A photo of a white Tesla Powerwall 2 battery and Backup Gateway mounted against a red brick wall inside our garage.
A side-on view of a white Tesla Powerwall 2 battery mounted against a red brick wall.

We originally weren’t planning on getting a battery back when we got our solar panels — and to be honest they still don’t make financial sense in terms of a return on investment — but we had nine months of power usage data and I could see that for the most part the amount of energy the Powerwall can store would be enough for us to avoid having to draw nearly anything whatsoever from the grid*.

* Technically this isn’t strictly true, keep reading to see why.

My thinking was: we’re producing stonking amounts of solar power and feeding it back to the grid at 7c/kWh, but have to buy power from the grid at 21c/kWh after the sun goes down. Why not store as much of it as possible for use during the night?

The installation was done by the same people who did the solar panels, Penrith Solar Centre, and as before, I cannot recommend them highly enough. Everything was done amazingly neatly and tidily, it all works a treat, and they fully cleaned up after themselves when they were done.

We have 3-phase power and the solar panels are connected to all three phases (⅓ of the panels are connected individually to each phase) and the Powerwall has only a single-phase inverter so is only connected to one phase, but the way it handles everything is quite clever: even though it can only discharge on one phase, it has current transformers attached to the other two phases so it can see how much is flowing through there, and it’ll discharge on its phase an amount equal to the power being drawn on the other two phases (up to its maximum output of 5kW anyway) to balance out what’s being used. The end result is that the electricity company sees us feeding in the same amount as we’re drawing, and thanks to the magic of net-metering it all balances out to next to zero! This page on Solar Quotes is a good explanation of how it works.

The other interesting side-effect is that when the sun is shining and the battery is charging, it’s actually pulling power from the grid to charge itself, but only as much as we’re producing from the solar panels. Because the Enphase monitoring system doesn’t know about the battery, it gives us some amusing-looking graphs whereby the morning shows exactly the same amount of consumption as production up until the battery is fully-charged!

We also have the Powerwall’s “Backup Gateway”, which is the smaller white box in the photos at the top of this post. In the event of a blackout, it’ll instantaneously switch over to powering us from the battery, so it’s essentially a UPS for the house! Again, 3-phase complicates this slightly and the Powerwall’s single-phase inverter means that we can only have a single phase backed up, but the lights and all the powerpoints in the house (which includes the fridge) are connected to the backed-up phase. The only things that aren’t backed up are the hot water system, air conditioning, oven, and stove, all of which draw stupendous amounts of power and will quickly drain a battery anyway.

We also can’t charge the battery off the solar panels during a blackout… it is possible to set it up like that, but there needs to be a backup power line running from a third of the solar panels back to the battery, which we didn’t get installed when we had the panels put in in February. There was an “Are you planning on getting a battery in the next six months?” question, which we said no to. 😛 If we’d said yes, they would have installed the backup line at the time; it’s still possible to install it now, but at the cost of several thousand dollars, because they’d need to come out, pull the panels up, and physically add the wiring. Blackouts are not remotely a concern here anyway, so that’s fine.

In the post back in March, I included three screenshots of the heatmap of our power usage, and the post-solar-installation one had the middle of the day completely black. Spot the point in the graph where we had the battery installed!

We ran out of battery power on the 6th of November because the previous day had been extremely dark and cloudy and we weren’t able to fully charge the battery from the solar panels that day (it was cloudy enough that almost every scrap of solar power we generated went to just powering the house, with next to nothing left over to put into the battery), and the 16th and 17th were both days where it was hot enough that we had the aircon running the whole evening after the sun went down and all night as well.

Powershop’s average daily use graph is pretty funny now as well.

And even more so when you look all the way back to when we first had the smart meter installed, pre-solar!

For monitoring the Powerwall itself, you use Tesla’s very slick app where you can see the power flow in real time. When the battery is actively charging or discharging, there’s an additional line going to or from the Powerwall icon to wherever it’s charging or discharging to or from.

You can’t tell from a screenshot of course, but the dots on the lines connecting the Solar icon to the Home and Grid icons animate in the direction that the power is flowing.

It also includes some historical graph data, but unfortunately it’s not quite as nice as Enphase’s and doesn’t even have a website; you can only view it in the app. There’s a website called PVOutput that you can send your solar data to, and we’ve been doing that via Enphase since we got the solar panels installed, but the Powerwall also has its own local API you can hit to scrape the power usage and flows, and the battery charge percentage. I originally found this Python script to do exactly that, but a) I always struggle to get anything related to Python working, and b) the SQLite database that it saves its data into kept intermittently getting corrupted, and the only way I’d know about it was by checking PVOutput and seeing that we hadn’t had any updates for hours.

So, I wrote my own in TypeScript! It saves the data into PostgreSQL, it’s all self-contained in a Docker container, and so far it’s been working a treat. The graphs live here, and to see the power consumption and grid and battery flow details, click on the right-most little square underneath the “Prev Day” and “Next Day” links under the graph. Eventually I’m going to send all this data to my website so I can store it all there, but for the moment PVOutput is working well.
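
The polling side of it boils down to something like the sketch below. The endpoint paths are the commonly-documented local API ones and may differ between firmware versions (newer versions also want you to log in first), so treat this as illustrative rather than a faithful copy of my code:

import https from "https";
import fetch from "node-fetch";

// The gateway serves a self-signed certificate, so don't try to verify it
const agent = new https.Agent({ rejectUnauthorized: false });
const gateway = "https://192.168.1.50"; // placeholder address for the Backup Gateway

async function poll(): Promise<void> {
  // Instantaneous power flows in watts for grid (site), solar, battery, and home (load)
  const aggregates: any = await (await fetch(`${gateway}/api/meters/aggregates`, { agent })).json();
  // Battery state of charge as a percentage
  const soe: any = await (await fetch(`${gateway}/api/system_status/soe`, { agent })).json();

  console.log({
    grid: aggregates.site?.instant_power,
    solar: aggregates.solar?.instant_power,
    battery: aggregates.battery?.instant_power,
    home: aggregates.load?.instant_power,
    charge: soe.percentage,
  });
}

// Poll every five seconds, the same cadence mentioned above
setInterval(() => poll().catch((err) => console.error(err)), 5000);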

It also won’t shock anybody to know that I updated my little Raspberry Pi temperature/power display to also include the battery charge and whether it’s charging or discharging (charging has a green upwards arrow next to it, discharging has a red downwards arrow).

My only complaint with the local API is that it’ll randomly become unavailable for periods of time, sometimes up to an hour. I have no idea why, but when this happens the data in the Tesla iPhone app itself is still being updated properly. It’s not a big deal, and doesn’t actually affect anything with regard to the battery’s functionality.

Overall, we’re exceedingly happy with our purchase, and it’s definitely looking like batteries in general are going to be a significant part of the electrical grid as we move to higher and higher percentages of renewables!

Visualising Git repository histories with Gource and ffmpeg

First, a disclaimer: this is entirely based on a blog post from a co-worker on our internal Confluence instance and I didn’t come up with any of it. 😛

Gource is an extremely cool tool for visualising the history of a Git repository (and other source control tools) via commits, and it builds up an animated tree view. When combined with ffmpeg you can generate a video of that history!

On a Mac, install Gource and ffmpeg with Homebrew:

$ brew install gource ffmpeg

Then cd into the repository you’re visualising, and let ‘er rip!

$ gource -1280x720 \
    --stop-at-end \
    --seconds-per-day 0.2 \
    -a 1 \
    -o - \
    | ffmpeg -y \
    -r 60 \
    -f image2pipe \
    -vcodec ppm \
    -i - \
    -vcodec libx264 \
    -preset fast \
    -crf 18 \
    -threads 0 \
    -bf 0 \
    -pix_fmt yuv420p \
    -movflags \
    +faststart \
    output.mp4

Phew! The main options to fiddle with are the resolution from gource (1280x720 in this case), and the crf setting from ffmpeg (increase the number to decrease the quality and make a smaller file, or lower the number to increase the quality and make a larger file).

I ran it over my original website repository that started its life out as PHP in 2011 and was moved to Perl:

And then my Javascript website that I started in 2016 and subsequently moved to TypeScript:

How cool is that?!

I also ran it over the main codebase at work that powers our internal PaaS that I support, and it’s even cooler because the history goes back to 2014 and there’s a ton of people working on it at any given time.

More space: the Pimoroni HyperPixel4 display on a Raspberry Pi Zero W

Back at the start of 2018 I blogged about my Raspberry Pi temperature display setup and it’s been pretty excellent and utterly reliable since then, but because of its small size — the display is only 2 inches — it wasn’t particularly visible from across the room. That, combined with the discovery that the Envoy power consumption monitoring system we had installed with the solar panels has a locally-accessible API that you can use to get real-time production and consumption data (which lives at http://<ip-of-the-envoy-box>/production.json?details=1), made me start looking into larger displays so I could include both temperature/humidity data and our power consumption.
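
As a taste of what that local Envoy endpoint gives you, pulling the current numbers out of it looks roughly like this (the field names are from memory of the JSON it returns, so treat them as approximate):

import fetch from "node-fetch";

// production.json?details=1 returns both production and consumption readings as JSON
const ENVOY_URL = "http://envoy.local/production.json?details=1"; // placeholder hostname

async function getPower(): Promise<{ production: number; consumption: number }> {
  const data: any = await (await fetch(ENVOY_URL)).json();

  // "eim" is the Envoy's built-in metering; wNow is the instantaneous value in watts
  const production = data.production.find((p: any) => p.type === "eim");
  const consumption = data.consumption.find(
    (c: any) => c.measurementType === "total-consumption"
  );

  return { production: production?.wNow ?? 0, consumption: consumption?.wNow ?? 0 };
}

getPower().then(console.log);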

My first port of call was the 2.7-inch version of the original 2-inch PaPiRus e-ink display. I ordered it on the 6th of April and then… nothing showed up. I’d assumed the PaPiRus was MIA and had instead ordered a 4-inch, 800×480-pixel display in the form of Pimoroni’s HyperPixel4 (the non-touch version). The Raspberry Pi registers it as a regular display, so you run a full desktop environment on it rather than driving it directly the way the PaPiRus works.

Of course, about a week after ordering the HyperPixel 4, the PaPiRus finally arrived! The 2.7-inch version of the PaPiRus is 264 pixels wide by 176 pixels high, so not exactly high-resolution. There’s actually quite a lot of freedom to tweak the position of the elements on screen pixel-by-pixel, but I quickly discovered that that’s extremely tedious when doing it directly on the Raspberry Pi itself because it takes several seconds for it to contact the required endpoints to pull in the data and then refresh the whole display. As well as writing text, the display can also show (1-bit) bitmap images, so I decided to change tack: instead of using the PaPiRus’s text API, I wrote a probably-slightly-overengineered Node.js application that would run on the Raspberry Pi 4B, fetch the data from the outdoor and indoor sensors as well as the Envoy, use the Javascript Canvas API to lay everything out, and then convert it to a bitmap image that the Python script on the Pi Zero W would fetch every minute and then update the display with.
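
The layout side of that Node.js application is nothing fancy; with the node-canvas package it’s essentially just absolute-positioned text drawing, along the lines of this stripped-down sketch (the real thing also converts the result to a 1-bit bitmap for the PaPiRus, which I’ve left out here):

import { createCanvas } from "canvas";

// The 2.7" PaPiRus is 264 pixels wide by 176 pixels high
const canvas = createCanvas(264, 176);
const ctx = canvas.getContext("2d");

// White background, black text (inverted to white-on-black while tweaking the layout locally)
ctx.fillStyle = "white";
ctx.fillRect(0, 0, 264, 176);
ctx.fillStyle = "black";

ctx.font = "16px sans-serif";
ctx.fillText("Outdoor: 23.4°C  61%", 10, 30);
ctx.fillText("Indoor: 21.8°C  55%", 10, 60);
ctx.fillText("Power: 1.2kW in / 3.4kW out", 10, 90);

// A PNG buffer the Pi Zero W's Python script could fetch; the real version goes on to
// convert this to a 1-bit bitmap for the e-ink display
const png = canvas.toBuffer("image/png");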

The biggest advantage of this system is that I could run it locally on my regular computer to quickly tweak the positioning without having to wait for the PaPiRus display to refresh each time, and I set it up so I could invert the colours to white-on-black to clearly see the boundaries of the canvas. I put the code up on GitHub if anyone is interested in poking through it, and the end result looks like this:

Having over-engineered my Node.js solution, the HyperPixel4 display arrived maybe a couple of weeks later! It’s extremely slick-looking, but unfortunately the little plastic nubs that are meant to keep the screen in place in its housing aren’t actually big enough to hold it in, and I managed to have the display itself pop out and crack some of the wires that feed it, which caused all sorts of display weirdness. I emailed Pimoroni about it and they were super nice and helpful and sent me out a replacement display with no questions asked! While I was waiting for the new one to arrive, the old broken one was still partially working, enough that I could at least get everything up and running how I wanted it, anyway.

Because using the HyperPixel is the same as if you’d hooked up an HDMI display and were using the Pi as a regular computer, I started from the full-blown Raspbian desktop image, not the Lite one. It was relatively straightforward to get everything going (mostly just installing and configuring the driver from Pimoroni’s GitHub repository), but there were some additional things I needed to do to get everything working as I wanted. I settled on a Node.js backend and React frontend setup (the separate backend was necessary because of CORS; I couldn’t hit the Envoy URL directly from the browser on the Pi, so I have the Node.js backend pull in the data and then feed it to the React app), both of which are running in a Docker container on the Raspberry Pi 4B.

  • By default the HyperPixel4 runs at full brightness, so I followed this to turn it way down, and also to set up a cron job to entirely turn the display off at midnight and turn it back on at 8am.
  • To get the Pi to open Chromium full-screen on boot, I followed these instructions.
  • To disable the annoying “Restore pages” dialog in Chromium, this on the Raspberry Pi Stack Exchange was helpful.
  • Raspbian comes by default with a VNC server installed, just not enabled. To enable it and allow access directly from macOS’s “Connect to Server” dialog in the Finder:
    • Run sudo raspi-config, go to Interface Options > VNC and enable it.
    • Run vncpasswd -service to set a VNC password (note if it’s longer than eight characters, only the first eight are used when connecting).
    • Create the file /etc/vnc/config.d/common.custom with the contents: Authentication=VncAuth
    • Then restart the VNC service with sudo systemctl restart vncserver-x11-serviced
  • And lastly, to stop the Pi from turning the screen off after a period of inactivity, I followed these steps.

My ~/.config/lxsession/LXDE-pi/autostart ultimately ended up looking like this:

@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
point-rpi
@chromium-browser --start-fullscreen --start-maximized --app=http://fourbee:3003
@xset s off
@xset -dpms 
@xset s noblank
@sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 135000

And the whole setup looks like this:

A photo of a small LCD display showing outdoor and indoor temperature and current power consumption and production. The text is white on black.

It’s quite the improvement in visibility and I can easily read it from all the way in the kitchen! It updates itself automatically every 30 seconds, and there’s no e-ink full-display-refresh screen-blanking when it does.

Memories redux: Flickr

I posted back in December that I’d created my own version of Facebook’s “Memories” feature for my formerly-Tumblr-and-now-Mastodon media posts, and even at the time I’d had the thought of doing it for Flickr as well, since that’s where all my Serious Photography goes.

Well, now I have!

It wasn’t quite as straightforward as my Media memories functionality, because there I could just do a single database call, but for Flickr I’m having to make multiple API calls each time. Fortunately two of the search parameters that flickr.photos.search offers are min_taken_date and max_taken_date, so my approach is to run a query for whatever the current day of the year happens to be for each year going back to 2007—this being when my account was created and when I first started posting photos to Flickr—with the min_taken_date set to 00:00 on that particular day, and max_taken_date set to 23:59 on that same day. It does mean that currently there are 13 API calls each time the Memories page is loaded, and this will increase by one with each year that goes past, but Flickr’s API docs say “If your application stays under 3600 queries per hour across the whole key (which means the aggregate of all the users of your integration), you’ll be fine”. That’s one query every single second for an entire hour, which absolutely isn’t going to be happening, so I ought to remain well under the limit.
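
Each of those per-year calls is dead simple; roughly speaking it looks like this (the parameter names are Flickr’s real ones, the API key and user ID obviously aren’t):

import fetch from "node-fetch";

const API_KEY = "your-api-key"; // placeholder
const USER_ID = "12345678@N00"; // placeholder

// Fetch my photos taken on a given day of a given year
async function photosTakenOn(year: number, month: number, day: number) {
  const date = `${year}-${String(month).padStart(2, "0")}-${String(day).padStart(2, "0")}`;
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: API_KEY,
    user_id: USER_ID,
    min_taken_date: `${date} 00:00:00`,
    max_taken_date: `${date} 23:59:59`,
    format: "json",
    nojsoncallback: "1",
  });

  const data: any = await (await fetch(`https://api.flickr.com/services/rest/?${params}`)).json();
  return data.photos.photo; // an array of photos, possibly empty
}

// e.g. everything taken on the 18th of June in each year from 2007 up to last year
const thisYear = new Date().getFullYear();
for (let year = 2007; year < thisYear; year++) {
  photosTakenOn(year, 6, 18).then((photos) => console.log(year, photos.length));
}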

I’m excited to see what forgotten gems from the past show up, and also being reminded of how terrible I was when I was first starting out taking photos. 😛

HomePod, Docker on Raspberry Pi, and writing Homebridge plugins

Apple announced the HomePod “smart speaker” in 2017, and started shipping them in early 2018. I had zero interest in the smart speaker side of things — I’d never have Google or Amazon’s voice assistants listening to everything I say, and despite trusting Apple a lot more with privacy compared to those two companies, the same goes for Siri — but the praise for the sound quality definitely piqued my interest, especially having set up shairplay-sync on the Raspberry Pi as an AirPlay target and enjoying the ease of streaming music to a good set of speakers. For AU$499 though, I wasn’t going to bother, as the setup for the stereo system in our home office did a reasonable enough job. It consisted of an amplifier sitting next to my desk, going into an audio switchbox that sat next to my computer and could be switched between the headphone cable attached to my computer and one that snaked across the floor to Kristina’s desk so she could plug into it, with the speakers sitting on the bookshelves on opposite sides of the room (you can see how it looked in this post; the speakers are the black boxes visible on the bottom shelves closest to our desks).

Fast-forward to last week, and someone mentioned that JB Hi-Fi were having a big sale on the HomePod and it was only AU$299! The space behind my desk was already a rat’s nest of cables, and with the standing desk I’ve ordered from IKEA I was wanting to reduce the number of cables in use; since replacing a bunch of them with a HomePod would do exactly that, I decided to get in on it (it’s possible to turn the “Listen for ‘Hey Siri'” functionality off entirely).

It arrived on Tuesday, and to say I’m impressed with the sound quality is a bit of an understatement, especially given how diminutive it is. It has no trouble filling the whole room with sound, the highs are crystal clear, and if the song is bassy enough you can feel it through the floor! It shows up just as another AirPlay target so it’s super-easy to play music to it from my phone or computer. I took a photo of our new setup and you can see the HomePod sitting on the half-height bookshelf right at the bottom-left of the frame (the severe distortion is because I took the photo on our 5D4 with the 8-15mm Fisheye I borrowed from a friend, which requires turning lens corrections on to avoid having bizarrely-curved vertical lines, which in turn distorts the edges of the image quite a bit).

The setup and configuration of the HomePod is done via Apple’s Home app, which uses a framework called HomeKit to do all sorts of home automation stuff, and the HomePod is one of the devices that can work as the primary “hub” for HomeKit. I have no interest in home automation as such, but a selling point of HomeKit is that it’s a lot more secure than random other automation platforms, and one of the things it supports is temperature sensors. Someone wrote a Node.js application called Homebridge that lets you run third-party plugins (and even write your own) that appear in HomeKit and can be interacted with from there, so I decided I’d see if I could hook up the temperature sensors that are attached to the Raspberry Pi(s)!

I’d ordered a 4GB Raspberry Pi 4B last month because I wanted to have a bit more grunt than the existing Pi 3B — which only has 1GB RAM — and to start using Docker with it, and it arrived on the 1st of this month. With that up and running inside in place of my original Raspberry Pi 3B, I moved the Pi 3B and the outside temperature sensor much further outside, attaching it to our back room in the backyard: the previous position of the sensor, underneath the pergola and next to the bricks of the house, meant that in summer the outdoor temperatures would register hotter than the actual air temperature, and because the bricks absorb heat throughout the day, the temperatures would remain higher for longer too.

Installing and configuring Homebridge

Next step was to set up Homebridge, which I did by way of the oznu/docker-homebridge image, which in turn meant getting Docker — and learning about Docker Compose and how handy it is, and thus installing it too! — installed first:

  1. Install Docker — curl -sSL https://get.docker.com | sh
  2. Install Docker Compose — sudo apt-get install docker-compose
  3. Grab the latest docker-homebridge image for Raspberry Pi — sudo docker pull oznu/homebridge:raspberry-pi
  4. Create a location for your Homebridge configuration to be stored — mkdir -p ~/homebridge/config

Lastly, write yourself a docker-compose.yml file inside ~/homebridge:

version: '2'
services:
  homebridge:
    image: oznu/homebridge:raspberry-pi
    restart: always
    network_mode: host
    volumes:
      - ./config:/homebridge
    environment:
      - PGID=1000
      - PUID=1000
      - HOMEBRIDGE_CONFIG_UI=1
      - HOMEBRIDGE_CONFIG_UI_PORT=8080

Then bring the Homebridge container up by running sudo docker-compose up --detach from ~/homebridge. The UI is accessible at http://<address-of-your-pi>:8080 and logs can be viewed with sudo docker-compose logs -f.

The last step in getting Homebridge recognised from within the Home app in iOS is to open the Home app, tap the plus icon in the top-right and choose “Add accessory”, then scan the QR code that the Homebridge UI displays.

Writing your own Homebridge plugins

Having Homebridge recognised within the Home app isn’t very useful without plugins, and there was a lot of trial and error involved here because I was writing my own custom plugin rather than just installing one that’s been published to NPM, and I didn’t find any single “This is a tutorial on how to write your own plugin” page.

Everything is configured inside ~/homebridge/config, which I’ll refer to as $CONFIG from now on.

Firstly, register your custom plugin so Homebridge knows about it by editing $CONFIG/package.json and adding your plugin to the dependencies section. It has to be named homebridge-<something> to be picked up at all; I called mine homebridge-wolfhaus-temperature, so my $CONFIG/package.json looks like this:

{
  "private": true,
  "description": "This file keeps track of which plugins should be installed.",
  "dependencies": {
    "homebridge-dummy": "^0.4.0",
    "homebridge-wolfhaus-temperature": "*"
  }
}

The actual code for the plugin needs to go into $CONFIG/node_modules/homebridge-<your-plugin-name>/, which is itself a Node.js package and so needs its own package.json file located at $CONFIG/node_modules/homebridge-<your-plugin-name>/package.json. You can generate a skeleton one with npm init — assuming you have Node.js installed, if not, grab nvm and install it — but the key parts needed for a plugin to be recognised by Homebridge are the keywords and engines sections in your package.json:

{
  "name": "homebridge-wolfhaus-temperature",
  "version": "0.0.1",
  "main": "index.js",
  "keywords": [
    "homebridge-plugin"
  ],
  "engines": {
    "homebridge": ">=0.4.53"
  }
}

index.js is your actual plugin code that will be run when Homebridge calls it.
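
To give an idea of the shape of it, my index.js ended up along these lines. This is a heavily simplified sketch of the older accessory-plugin API rather than my exact code, and it assumes the sensor endpoint returns JSON with a temperature field:

// index.js — registered against the accessory name used in config.json
const fetch = require('node-fetch');

let Service;
let Characteristic;

module.exports = (homebridge) => {
  Service = homebridge.hap.Service;
  Characteristic = homebridge.hap.Characteristic;
  homebridge.registerAccessory('homebridge-wolfhaus-temperature', 'WolfhausTemperature', WolfhausTemperature);
};

class WolfhausTemperature {
  constructor(log, config) {
    this.log = log;
    this.name = config.name; // e.g. "Outdoor Temperature"
    this.url = config.url;   // the sensor's REST endpoint from config.json
    this.service = new Service.TemperatureSensor(this.name);
  }

  getServices() {
    this.service
      .getCharacteristic(Characteristic.CurrentTemperature)
      .on('get', (callback) => {
        // Assumes the endpoint returns JSON along the lines of {"temperature": 23.4}
        fetch(this.url)
          .then((res) => res.json())
          .then((data) => callback(null, data.temperature))
          .catch((err) => callback(err));
      });
    return [this.service];
  }
}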

Once I got this out of the way, the last bit was a LOT of trial and error to actually get the plugin working with Homebridge and the Home app on my iPhone. The main sources of reference were these:

After several hours’ work, I had not the nicest code but working code (Update 2020-04-12 — moved to ES6 classes and it’s much cleaner), and I’ve uploaded it to GitHub.

The final bit of the puzzle is telling Homebridge about the accessories, which are the things that actually show inside the Home app on iOS. For this, you need to edit $CONFIG/config.json and edit the accessories section to include your new accessories, which will use the plugin that was just written:

{
    "bridge": {
        "name": "Traverse",
        [...]
    },
    "accessories": [
        {
            "accessory": "WolfhausTemperature",
            "name": "Outdoor Temperature",
            "url": "http://pi:3000/rest/outdoor"
        },
        {
            "accessory": "WolfhausTemperature",
            "name": "Indoor Temperature",
            "url": "http://fourbee:3000/rest/indoor"
        }
    ],
    "platforms": []
}

The url is the REST endpoint that my pi-sensor-reader runs for the indoor and outdoor sensors, and the name needs to be unique per accessory.

Homebridge needs restarting after all these changes, but once you’re done, you’ll have two new accessories showing in Home!

They initially appear in the “Default Room”; you can add “Indoor” and “Outdoor” rooms to put them into by tapping the Rooms icon in the bottom bar, tapping the hamburger menu at the top-left and choosing Room Settings > Add Room, then long-pressing on the temperature accessory itself, tapping the settings cog at the bottom-right, and selecting a different room for it to go into.

What’s next?

As part of doing all this, I moved all of my public Git repositories over to GitHub where they’re more likely to be actually seen by anybody and will hopefully help someone! I also updated my pi-sensor-reader to use docker-compose, and fully-updated the README to document all the various options.

Next on the Homebridge front is going to be tidying up the plugin code — including moving to async/await — and adding the humidity data to it!

Installing OpenWRT on a Netgear D7800 (Nighthawk X4S) router

I had blogged back in October of last year about setting up DNS over HTTPS, and it’s been very reliable except for when I’ve had to run Software Update on the Mac mini to pick up security updates: while it’s restarting, all of our DNS resolution stops working! I’d come across OpenWRT a while back, which is an open-source and very extensible firmware for a whole variety of different routers, but I did a bunch of searching and hadn’t come across any reports of people fully successfully using it on our specific router, the Netgear D7800 (also known as the Nighthawk X4S), just people having various problems. One of the reasons I was interested in OpenWRT is that it’s Linux-based and extensible, so I would be able to move the DHCP and DNS functionality off the Mac mini back onto the router where it belongs, and in theory bring the encrypted DNS over as well.

I finally bit the bullet and decided to give installing it a go today, and it was surprisingly easy. I figured I’d document it here for posterity and in the hopes that it’ll help someone else out in the same position as I was.

Important note: The DSL/VDSL modem in the X4S is not supported under OpenWRT!

Installation

  1. Download the firmware file from the “Firmware OpenWrt Install URL” (not the Upgrade URL) on the D7800’s entry on OpenWRT.org.
  2. Make sure you have a TFTP client, macOS comes with the built-in tftp command line tool. This is used to transfer the firmware image to the router.
  3. Unplug everything from the router except power and the ethernet cable for the machine you’ll be using to install OpenWRT from (this can’t be done wirelessly).
  4. Set your machine to have a static IP address in the range of 192.168.1.something. The router will be .1.
  5. Reset the router back to factory settings by holding the reset button on the back of it in until the light starts flashing.
  6. Once it’s fully started up, turn it off entirely, hold the reset button in again and while still holding the button in, turn the router back on.
  7. Keep the reset button held in until the power light starts flashing white.

Now the OpenWRT firmware file needs to be transferred to the router via TFTP. Run tftp -e 192.168.1.1 (-e turns on binary mode), then put <path to the firmware file>. It’ll transfer the file and then install it and reboot; this will take several minutes.

Once it’s up and running, the OpenWRT interface will be accessible at http://192.168.1.1, with a username of root and no password. Set a password then follow the quick-start guide to turn on and secure the wifi radios — they’re off by default.

Additional dnsmasq configuration and DNS-over-TLS

I mentioned in my DNS-over-HTTPS post that I’d also set up dnsmasq to do local machine name resolution; this is trivially done in OpenWRT under Network > DHCP and DNS by putting in the MAC address and desired IP and machine name under the Static Leases section, then hitting Save & Apply.

The other part I wanted to replicate was having my DNS queries encrypted. In OpenWRT this isn’t easily possible with DNS-over-HTTPS, but it is with DNS-over-TLS, which gets you to the same end-state. It requires installing Stubby, a DNS stub resolver that will forward DNS queries on to Cloudflare’s DNS.

  1. On the router, go to System > Software, install stubby.
  2. Go to System > Startup, ensure Stubby is listed as Enabled so it starts at boot.
  3. Go to Network > DHCP and DNS, under “DNS Forwardings” enter 127.0.0.1#5453 so dnsmasq will forward DNS queries on to Stubby, which in turn reaches out to Cloudflare; Cloudflare’s DNS servers are configured by default. Stubby’s configuration can be viewed at /etc/config/stubby.
  4. Under the “Resolv and Hosts Files” tab, tick the “Ignore resolve file” box.
  5. Click Save & Apply.

Many thanks to Craig Andrews for his blog post on this subject!

Quality of Service (QoS)

The last thing I wanted to set up was QoS, which allows for prioritisation of traffic when your link is saturated. This was pretty straightforward as well, and just involved installing the luci-app-sqm package and following the official OpenWRT page to configure it!

Ongoing findings

I’ll update this section as I come across other little tweaks and changes I’ve needed to make.

Plex local access

We use Plex on the Xbox One as our media player (the Plex Media Server runs on the Mac mini), and I found that after installing OpenWRT on the router, the Plex client on the Xbox couldn’t find the server anymore despite being on the same LAN. I found a fix on Plex’s forums, which is to go to Network > DHCP and DNS and add the domain plex.direct to the “Domain whitelist” field for the Rebind Protection setting.

Xbox Live and Plex Remote Access (January 2020)

Xbox Live is quite picky about its NAT settings and requires UPnP to be enabled, or you can end up with issues with voice chat or gameplay in multiplayer; Plex’s Remote Access similarly requires UPnP. This isn’t provided by default with OpenWRT but can be installed with the luci-app-upnp package, and the configuration shows up under Services > UPnP in the top navbar. It doesn’t start by default, so tick the “Start UPnP and NAT-PMP service” and “Enable UPnP” boxes, then click Save & Apply.

Upgrading to a new major release (February 2020)

When I originally wrote this post I was running OpenWRT 18.06, and now that 19.07 has come out I figured I’d upgrade, and it was surprisingly straightforward!

  1. Connect to the router via ethernet, make sure your network interface is set to use DHCP.
  2. Log into the OpenWRT interface and go to System > Backup/Flash Firmware and generate a backup of the configuration files.
  3. Go to the device page on openwrt.org and download the “Firmware OpenWrt Upgrade” image (not the “Firmware OpenWrt Install” one).
  4. Go back to System > Backup/Flash Firmware, choose “Flash image” and select your newly-downloaded image.
  5. In the next screen, make sure “Keep settings and retain the current configuration” is not ticked and continue.
  6. Wait for the router light to stop flashing, then renew your DHCP lease (assuming you’d set it up to be something other than 192.168.1.x like I did).
  7. Log back into the router at http://192.168.1.1 and re-set your root password.
  8. Go back to System > Backup/Flash Firmware and restore the backup of the settings you made (then renew your DHCP lease again if you’d changed the default range).

I had a couple of conflicts with files in /etc/config between my configuration and the new default file, so I SSHed in and manually checked through them to see how they differed and updated them as necessary. After that it was just a case of re-installing the luci-app-sqm, luci-app-upnp, and stubby packages, and I was back in business!

Coding my own personal version of Facebook’s Memories feature

I deleted my Facebook account way back somewhere around 2009, but the one thing that Kristina shows me that I think is a neat idea is the “memories” feature, where it shows posts from previous years on that day in particular. I realised I could very much code something up myself to accomplish the same thing, given I have Media posts going back to 2009.

And so I did! By default it’ll show all posts that were made on this exact same date in each previous year (if any), excluding today’s, and you can also pick an arbitrary date and view all posts on that date for each previous year as well.
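
Under the bonnet it’s essentially a single Postgres query: match the month and day, exclude the current year. Something along these lines, with the table and column names being stand-ins rather than my real schema:

import { Pool } from "pg";

const pool = new Pool(); // connection details come from the usual PG* environment variables

// All posts made on the given day and month in any year before the given one
async function memoriesFor(month: number, day: number, currentYear: number) {
  const { rows } = await pool.query(
    `SELECT *
       FROM media_posts
      WHERE EXTRACT(MONTH FROM posted_at) = $1
        AND EXTRACT(DAY FROM posted_at) = $2
        AND EXTRACT(YEAR FROM posted_at) < $3
      ORDER BY posted_at DESC`,
    [month, day, currentYear]
  );
  return rows;
}

The catch, of course, is which timezone that timestamp gets interpreted in, which is where the wrestling described below comes in.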

I was originally going to have it send me an email each day, but I quickly realised I couldn’t be bothered dealing with HTML emails and so it ended up in its current state. It’s not perfect; I’m still wrestling with timezones — if you view the main Memories page before 11am Sydney time, you’ll get yesterday’s date, because 11am Sydney time is currently when the date ticks over to the new day in UTC. If I do specify a Sydney timezone in my code, the automated tests fail on Bitbucket Cloud because they’re all running in UTC. I’m sure it’s fixable, I just haven’t had the brain capacity to sit down and work it out. 😛 Between this and my tag browser, it’s been pretty fun seeing old posts I’d forgotten about.

Update 21st December: I found these two posts about timezones in Postgres, and between them and firing up two Docker containers in UTC time for testing — one for Postgres and one for my code — I managed to get it fully working! 🎉

Of coding and a history of iPhone photo filter apps

I had last week off work, mostly due to being in desperate need of a holiday; I didn’t go anywhere, just chilled out at home. I did do a bunch of coding on my website though!

I’d been using Tumblr to post my random snaps from 2009 to about 2016 and cross-posting them to Twitter, before I found that Tweetbot had custom image posting functionality where you could post images to a URL that replied in a specific format and Tweetbot would use those image URLs in its tweets. I added functionality for that to my website and had been saving tweets and images directly since 2016.

Last year, it occurred to me that I should import my posts from Tumblr to my website in order to have everything in one place. I obsessively tag my Flickr photos and as a result am able to find almost anything I’ve taken a photo of very quickly, and while I hadn’t quite gone to those same levels of tagging with Tumblr, all my posts there had at least some basic tags on them that I wanted to preserve when bringing them in to my website, so I had coded up a tags system for my Media page and a script to scrape the Tumblr API and suck the posts, images, and tags in. I also wrote a very simple little React app to be able to continue adding tags to new posts I’m making directly to my website.

The one thing that was missing was the ability to see all of the current tags, and to search by tag, so this past week I’ve been doing exactly that! I have a page that shows all the tags that exist with links to view just the posts tagged with a given tag, and on the front page the tags that a post has are clickable as well.

I realised I had mucked up the tagging on a few posts so was going through re-tagging and updating them, and it struck me just how much I used to rely on those camera filter apps to hide how shit photos from old iPhones used to be. One of the ways I’d tagged my photos on Tumblr, and I’ve continued this even now with the direct-posting-via-a-custom-iOS-shortcut setup I’ve got on my iPhone, is with the name of the app I used to edit the photo. Going roughly chronologically as I started using each app:

Instagram was only a very brief foray, and VSCOCam was by far my most-used app. Unfortunately it went downhill a couple of years ago when they Androidified it: now all of the icons are utterly inscrutable, and you can’t get RAW files taken from within the app back out again in anything but JPEG. Apparently there’s a thing called a VSCO Girl, which I suspect is part of what happened there.

My most recent editing app prior to getting the iPhone 11 Pro has been Darkroom; it’s extremely slick, integrates directly with the regular photo library on your phone, and offers a similar style of film-esque presets to VSCOCam, though fewer in number.

With the iPhone 11 Pro, however, the image quality is good enough that I don’t even feel the need to add obviously-film-looking presets to the images. I take the photo, hit the “Auto” button in Photos.app to add a bit of contrast, and usually use the “Vivid” preset to bring the colours up a bit, but otherwise they’re pretty natural-looking.

That said, I’ll probably end up heading back to Darkroom at some point as I do like my film aesthetic!

More coding adventures: Migrating to TypeScript and Express.js

Three and a half years ago I blogged about learning Javascript and Node.js, and then again at the start of 2018 about my progress and also learning React, and I figured it was about time for another update! This time it’s been moving from Sails.js (which is a web framework based on Express.js) to using raw Express itself and moving the language from Javascript to TypeScript (TypeScript is basically Javascript, except with type-checking).

At work, we migrated the codebase of the server that runs our internal platform-as-a-service from Javascript to TypeScript, and I figured it seemed like a neat thing to learn. TypeScript ultimately gets compiled down to Javascript, and I started by trying to just write my Sails.js modules as TypeScript and have them compiled to Javascript in the locations that Sails expected them to be in, but this proved to be a fair bit of a pain so I figured I’d just go whole-hog and move to raw Express.js while I was at it.

I did a whole heap of reading, and ended up coming across this absolutely excellent series of blog posts that takes you through using Express and TypeScript step by step. It took about a month all up, and you can really see how much code was removed (this excludes Node’s package-lock.json file because it’s massive):

$ git diff --stat a95f378 47f7a56 -- . ':(exclude)package-lock.json'
[...]
 151 files changed, 2183 insertions(+), 4719 deletions(-)

My website looks absolutely no different in any way, shape, or form after all of this, but when writing code it’s quite nice having all of Visual Studio Code‘s smarts (things like complaining when you’ve missed a required parameter when calling a function, auto-completion, and so on).
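
For anyone who hasn’t seen TypeScript with Express before, a route handler ends up looking something like this (the route and types here are just an illustration, not lifted from my actual site):

import express, { Request, Response } from "express";

interface Post {
  id: number;
  title: string;
  postedAt: string;
}

// A stand-in for whatever actually fetches a post from the database
async function loadPost(id: number): Promise<Post | undefined> {
  return { id, title: "Hello", postedAt: new Date().toISOString() };
}

const app = express();

app.get("/api/posts/:id", async (req: Request, res: Response) => {
  // The compiler complains if loadPost is called with the wrong argument type
  const post = await loadPost(Number(req.params.id));

  if (!post) {
    res.status(404).json({ error: "Not found" });
    return;
  }
  res.json(post);
});

app.listen(3000);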

Having moved to raw Express.js from Sails.js means I have a much better understanding of how it all works under the bonnet… Sails is great for getting up and running quickly, but there’s a lot of magic that happens in order to accomplish that, and more than once I’ve run into the boundaries of where the magic ends and have had to try to hack my way around it. Express by itself is a lot more widely-used than Sails too, so if I run into problems I generally have an easier time finding an answer to it!

Configuring a virtual machine with Linode StackScripts

I’ve been using Linode to host myself a Linux virtual machine since 2011, originally so I could run Jira on it (now long since moved to a cloud-hosted instance), since my entire job was supporting it back then, and also just generally to dabble with Linux and the command line. I started out with CentOS 5 as that’s what we were using at work at the time, and slowly installed more and more random things on it.

When I decided it was time to upgrade to CentOS 7 in 2015, I put together a page in Confluence noting down each thing I was doing, as I was starting with a fresh new virtual machine and migrating only the bits and pieces I needed to it. That was better, but still ended up with a bit of a sprawling page and me forgetting to update it after I’d completed the initial migration. I eventually shut down my whole Dreamhost account and moved solely to having my website and blog and various miscellany (15 years worth of images from LiveJournal entries and posts on Ars Technica, as two examples) hosted on the Linode. Unfortunately I wrote down absolutely none of how I configured it all!

As part of playing around with my YubiKey and setting up GPG agent forwarding, I discovered that the version of GnuPG that CentOS 7 ships with is too old to support agent forwarding from newer versions, so I decided to spin up a new Linode but with Debian 9 instead (since that does support agent forwarding), and migrate everything to it. This time, however, I would do it programmatically!

Linode have a thing called StackScripts that let you start up a fresh VM and run a bunch of commands on boot to configure it how you need. Over the course of probably two months, I built up a Bash script that installs and configures all my various software packages on a fresh Debian 9 machine at boot, with everything stored in a Git repository. That also included adding Git repositories with my Nginx and systemd configurations, as well as running a script on my existing CentOS 7 VM to grab database dumps of my website and our respective blogs, plus the aforementioned 15 years’ worth of images and other files.
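The script itself is specific to my setup, but the general shape of a StackScript is just a shell script that runs as root on first boot, something along these lines (the package names and repository URL here are placeholders rather than what I actually use):

#!/bin/bash
# Runs once, as root, when the Linode first boots.
set -euo pipefail

# Install the base packages (placeholder list)
apt-get update
apt-get install -y nginx mariadb-server git

# Pull in version-controlled configuration (placeholder repository)
git clone https://example.com/virtualwolf/server-config.git /root/server-config
cp /root/server-config/nginx/*.conf /etc/nginx/conf.d/

systemctl enable --now nginx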

The end result is a ~500 line Bash script that’s version-controlled so I can see exactly what I did, with any new changes I’ve made since cutting over to the Debian VM saved in there as well, and the same goes for my systemd/Nginx/everything-else configuration! As long as I’m disciplined about remembering to update my StackScript when I make software changes, the next big move to a new VM should be a hell of a lot simpler.

Installing Linux Mint 19.1 on a Late-2010 MacBook Air

(Update December 2022: As suggested in the latest comments, this entire blog post is pretty much redundant now! Linux Mint 21.1 installs without a hitch, even using Cinnamon, and I have fully-functional brightness and sound keys straight out of the box.)

(Update December 2020: I successfully upgraded from Linux Mint 19.3 to Linux Mint 20 by following the official Linux Mint instructions. The only additional post-upgrade work I had to do was re-adding the Section "Device" bit to /usr/share/X11/xorg.conf.d/nvidia-drm-outputclass-ubuntu.conf as described below to get the brightness keys working again.)

(Update May 2020: I’ve re-run through this whole process using Linux Mint 19.3 and have updated this blog post with new details. Notably, no need to install pommed, and including the specific voodoo needed for the 2010 MacBook Air from Ask Ubuntu regarding PCI-E bus identifiers.)

We have a still perfectly usable Late-2010 MacBook Air (“MacBookAir3,2”, model number A1369), but with macOS 10.14 Mojave dropping support for Macs older than 2012 (it’s possible to extremely-hackily install it on older machines but I’d rather not go down that route), I decided I’d try installing Linux on it. The MacBook Air still works fine, if a bit slow, on macOS 10.13 but I felt like a bit of nerding!

Installation

My distribution of choice was Linux Mint, which is Ubuntu-based but without the constant changes that Canonical keep making. The first hurdle right out of the gate was which “edition” to choose: Cinnamon, MATE, or Xfce. There was zero info on the website about which to pick, so I started with Cinnamon, but that kept crashing when booting from the installation ISO and giving me a message about being in fallback mode. It turns out Cinnamon is the one with all the graphical bells and whistles, and it appears that an eight-year-old ultralight laptop’s video card isn’t up to snuff, so I ended up on the “MATE” edition, which looks pretty much identical but works fine.

My installation method was using Raspberry Pi Imager to write the installation ISO to a spare SD card (despite the name, it can be used to write any ISO: scroll all the way down in the “Choose OS” dialog and select “Use custom”). Installing Linux requires you to partition the SSD using Disk Utility first: I added a 2GB partition for /boot, and another 100GB one to install Linux itself onto. It doesn’t matter which format you choose as they’ll be reformatted as part of the installation process.

After partitioning, reboot with the SD card in and the Option key held down, and choose the “EFI Boot” option. The installer is quite straightforward, but I chose the custom option when it asked how to format the drive, and formatted both the 2GB and 100GB partitions as ext4, with the 2GB one mounted at /boot and the 100GB one at /. The other important part is to install the bootloader onto that /boot partition, which makes it easy to get rid of everything if you want to go back to single-partition macOS and no Linux.

Post-install

The next hurdle was video card drivers. Mint comes with an open-source video card driver called “Nouveau”, which works but isn’t very performant, and there was lots of screen tearing as I’d scroll or move windows around. This being Linux, it was naturally not as simple as just installing the official Nvidia one and being done with it, because that resulted in a black screen at boot. 😛 I did a massive amount of searching and eventually stumbled across this answer on AskUbuntu which worked where nothing else did: I followed those instructions and was able to successfully install the official Nvidia drivers without getting a black screen on boot!

(Update May 2020: I honestly don’t remember whether I had to go through Step 1 of Andreas’ instructions, “Install Ubuntu in UEFI mode with the Nvidia drivers”, but check for the existence of the directory /sys/firmware before running the rest of this. That directory is only created if you’ve booted in EFI mode. If it doesn’t exist, follow the link in Step 1).

I’m copying the details here for posterity, in case something happens to that answer, but all credit goes to Andreas there. These details are specifically for the Late 2010 MacBook Air with a GeForce 320M video card, so using this on something else might very well break things.

Create the file /etc/grub.d/01_enable_vga.conf and paste the following contents into it:

#!/bin/sh
# Run by update-grub; whatever this prints is added to the generated grub config.
cat << EOF
setpci -s "00:17.0" 3e.b=8
setpci -s "02:00.0" 04.b=7
EOF

Then make the new file executable and update the grub config files:

$ sudo chmod 755 /etc/grub.d/01_enable_vga.conf
$ sudo update-grub

And then restart. Double-check that the register values have been set to 8 for the bridge device and 7 for the display device:

$ sudo setpci -s "00:17.0" 3e.b
08
$ sudo setpci -s "02:00.0" 04.b
07

Next, load up the “Driver Manager” control panel and set the machine to use the Nvidia drivers. Once it’s finished doing its thing — which took a couple of minutes — restart once more, and you’ll be running with the much-more-performant Nvidia drivers!

At this point I realised that the brightness keys on the keyboard didn’t work. Cue a whole bunch more searching, with the fix being to add the following snippet to the bottom of /usr/share/X11/xorg.conf.d/nvidia-drm-outputclass-ubuntu.conf:

Section "Device"
  Identifier     "Device0"
  Driver         "nvidia"
  VendorName     "NVIDIA Corporation"
  BoardName      "GeForce 320M"
  Option         "RegistryDwords" "EnableBrightnessControl=1"
EndSection

And now I have a fully-functioning Linux installation, with working sleep+wake, audio, wifi, and brightness!

I’m certainly not going to be switching to it full-time, and it feels a lot more fragile than macOS, but it’s fun to muck around with a new operating system. And with 1Password X, I’m able to use 1Password within Firefox under Linux too!

More fun with Yubikey: Signed Git commits and GPG agent forwarding

I’ve been on a “What other neat things can I do with my Yubikey” kick after my last post, and it turns out one of those neat things is to cryptographically sign Git commits. This allows you to prove that the owner of a particular GPG key is actually the person who committed the code. 

Setting up signed Git commits locally is very easy: run git config --global user.signingkey "<ID of your GPG signing subkey>" (mine is C65E91ED24C34F59 as shown in the screenshot below), then run your Git commit normally but with the added flag -S to sign it.

Bitbucket Cloud doesn’t currently support displaying signed Git commits in the UI, but you can do it on GitHub and you get a shiny little “Verified” badge next to each one and this message when you click on it:

You can also show it locally with git log --show-signature.
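Putting that together (the key ID is the same signing subkey as above; commit.gpgsign is a standard Git option that signs every commit so you don’t have to remember -S each time):

$ git config --global user.signingkey "C65E91ED24C34F59"
$ git config --global commit.gpgsign true   # sign all commits by default
$ git commit -m "Some change"               # now signed automatically
$ git log --show-signature -1               # check the signature locally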

This is all well and good, but what if you want to sign something on a remote server that you’re connected to via SSH? Enter GPG agent forwarding!

Just like you can do SSH agent forwarding to have your private SSH key securely forwarded to a machine you’re connecting to, you can do the same with the GPG agent that stores your GPG keys and allow it to access your signing subkey. Setting up GPG agent forwarding is broadly straightforward, but make a note of which versions of GnuPG you’re running at each end. The “modern” version is 2.1 and higher; I’m running 2.2.x on my Macs, but my Linode runs CentOS 7 which only comes with GnuPG 2.0.x, and I wasn’t able to fully get agent forwarding working between it and 2.2.x on my Macs. I tested the latest Debian with 2.1 and that worked.

I followed this guide, but one extremely important note is that you can’t use a relative path for the local or remote sockets, they have to be the full absolute path. This becomes a pain when you’re connecting to and from different OSes or machines where your username differs. Thankfully, SSH has a Match exec option where you can run a command to match different hosts and use different host definitions (and thus put in different paths for the sockets) depending on your local and remote machines.

Mine looks like this:

# Source machine is a personal Mac, connecting to another personal Mac on my local network; the local network all uses the .core domain internally
Match exec "hostname | grep -F .core" Host *.core
RemoteForward /Users/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra

# Source machine is a personal Mac, connecting to my Linux box
Match exec "hostname | grep -F .core" Host {name of the Host block for my Linode}
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra

# Source machine is my work Mac, connecting to my Linux box
Match exec "hostname | grep -F {work machine hostname}" Host {name of the Host block for my Linode}
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/{work username}/.gnupg/S.gpg-agent.extra

(Yes, technically this doesn’t work as I mentioned at the start due to my Linode being on CentOS 7 and having GNUPG 2.0, but the socket forwarding bit works, just not when I actually want to do anything with it. :P)

Nginx, PHP-FPM, and Cloudflare, oh my!

I use my Linode to host a number of things (this blog and Kristina’s, my website and Kristina’s, an IRC session via tmux and irssi for a friend and me, and probably another thing or two I’m forgetting). Kristina started up a travel blog a few months ago which I’m also hosting on it, and shortly after that point I found that maybe once every two weeks or so my website and our blogs weren’t running anymore. I looked into it and it was being caused by Linux’s Out-Of-Memory Killer, which kicks in when the system is critically low on memory and needs to free some up, killing the Docker container that my website runs in as well as MariaDB.
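If you want to check whether the same thing is happening to you, the OOM killer logs which process it killed and why to the kernel log, so something along these lines will show it (the exact wording varies between kernel versions):

$ journalctl -k | grep -i "out of memory"
$ dmesg | grep -i "killed process"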

The main cause was Apache and MariaDB using up entirely too much memory for my little 1GB Linode: it was evidently sitting just on this side of stable with two WordPress blogs, but adding a third seems to have tipped it over the edge. The reason MariaDB and my website’s Docker container were the ones being killed is that although Apache was using up a heap of memory, it was spread over a number of worker threads, so individually none of those were high, and MariaDB and my website were the largest on the list. There are lots of tweaks you can do, several of which I tried, but all that happened was that it delayed the inevitable rather than entirely resolving it. Apache is powerful but low-resource-usage it ain’t. The primary low-resource-usage alternative to Apache is Nginx, so I figured this weekend I’d have a crack at moving over to that.

Overall it was pretty straightforward. This guide from Digital Ocean was a good starting point; the bits where it fell short were mostly just a case of looking up all of the equivalent directives for SSL, mapping to filesystem locations, etc. (I have ~15 years of history of hosted images I’ve posted on the Ars Technica forums and my old LiveJournal—which is now this blog—and wanted to make sure those links all kept working).

One difference is with getting WordPress going… WordPress is all PHP, and Apache by default runs PHP code inside the Apache process itself via mod_php, whereas with Nginx you have to use PHP-FPM or similar, which is an entirely separate process that runs on the server and that Nginx talks to in order to process the PHP code (there’s a sketch of the relevant Nginx configuration after the list below). I mostly followed this guide, also from Digital Ocean, though there were a couple of extra gotchas I ran into when getting it fully going with Nginx for WordPress:

  • Edit /etc/nginx/fastcgi_params and add a new line with this content, or you’ll end up with nothing but a blank page: fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
  • Remember to change the ownership of the WordPress installation directory to the nginx user instead of apache
  • The default settings for PHP-FPM assume it’s running on a box with significantly more than 2GB of RAM; edit /etc/php-fpm.d/www.conf and change the line that says pm = dynamic to be pm = ondemand; with ondemand PHP-FPM will spin up worker processes as needed but will kill off idle ones after ten seconds rather than leaving them around indefinitely.
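For reference, the Nginx side of that handoff is just a location block that passes anything ending in .php over to PHP-FPM. A minimal sketch (the fastcgi_pass address is a placeholder and needs to match the listen setting in /etc/php-fpm.d/www.conf):

location ~ \.php$ {
  try_files $uri =404;
  include fastcgi_params;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  # Placeholder: use whatever "listen" is set to in /etc/php-fpm.d/www.conf
  fastcgi_pass 127.0.0.1:9000;
}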

Additionally, Nginx doesn’t support .htaccess files so if you’ve got WordPress set up to use any of the “pretty”-type links, you’ll end up with 404s when you try to view an individual post instead. The fix is to put the following into the server block at the bottom:

location / {
  try_files $uri $uri/ /index.php?$args;
}

This makes Nginx pass the correct arguments to WordPress’ index.php file. You’ll also want to block access to any existing .htaccess files:

location ~ /\.ht {
  deny all;
}

The last thing I did with this setup was to put the entirety of my website, Kristina’s, and our respective blogs behind Cloudflare. I had great success with their DNS over HTTPS service, and their original product is essentially a reverse proxy that caches static content (CSS, Javascript, images) at each of their points of presence around the world, so you’ll load those from whichever server is geographically closest to you. For basic use it’s free and includes SSL; you just need to point your domain’s nameservers at the ones they provide. The only thing I needed to do was to set up another DNS record so I could actually SSH into my Linode, because the host virtualwolf.org now resolves to Cloudflare’s servers, which obviously don’t have any SSH running!

Overall, the combination of Nginx + PHP-FPM + Cloudflare has resulted in remarkably faster page loads for our blogs, and thus far significantly reduced memory usage as well.

GPG and hardware-based two-factor authentication with YubiKey

As part of having an Ars Technica Pro++ subscription, they sent me a free YubiKey 4, which is a small hardware token that plugs into your USB port and allows for a bunch of extra security on your various accounts because you need the token physically plugged into your computer in order to authenticate. It does a number of neat things:

  • Generating one-time passwords (TOTP) as a second factor when logging in to websites;
  • Storing GPG keys;
  • Acting as a second factor with Duo;

And a bunch of other stuff as well, none of which I’m using (yet).

My password manager of choice is 1Password, and although it allows saving one-time passwords for websites itself, I wanted to lock access to the 1Password account itself down even further. Their cloud-based subscription already has strong protection by using a secret key in addition to your strong master password, but you can also set it up to require a one-time password the first time you log into it from a new device or browser so I’m using the YubiKey for that.

I also generated myself GPG keys and saved them to the YubiKey. It was not the most user-friendly process in the world, though that’s a common complaint that’s levelled at GPG. I found this guide that runs you through it all and, while long, it’s pretty straightforward. It’s all set up now, though: my public key is here, I can send and receive encrypted messages and cryptographically sign documents, and the master key is saved only on an encrypted USB stick. You can also have the GPG agent that runs on your machine and reads the keys from the YubiKey act as your SSH agent, so I’ve got that set up with my Linode.
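The SSH part boils down to two bits of configuration. This is a sketch of my understanding of the setup, so double-check it against whichever guide you follow:

# In ~/.gnupg/gpg-agent.conf:
enable-ssh-support

# In your shell profile, point SSH at gpg-agent's socket instead of ssh-agent's:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpgconf --launch gpg-agent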

The last thing I’ve done is to set the YubiKey up as a hardware token with Duo and put my Linode’s SSH and this blog (and soon Kristina’s, though hers not with the YubiKey) behind that. With the Duo Unix module, even sudo access requires the YubiKey, and the way that’s set up is that you touch the button on the YubiKey itself and it generates a code and enters it for you.

It’s all pretty sweet and definitely adds a bunch of extra security around everything. I’m busily seeing what else I can lock down now!

Setting up DNS over HTTPS on macOS

Back in April, Cloudflare announced a privacy-focused DNS server running at 1.1.1.1 (and 1.0.0.1), and that it supported DNS over HTTPS. A lot of regular traffic goes over HTTPS these days, but DNS queries to look up the IP address of a domain are still unencrypted, so your ISP can still snoop on which servers you’re visiting even if they can’t see the actual content. We have a Mac mini that runs macOS Server and does DHCP and DNS for our home network, among other things, and with an upcoming version set to remove those functions in favour of regular non-UI tools, I figured now would be a good time to look into moving us over to use Cloudflare’s shiny new DNS server at the same time.

Turns out it wasn’t that difficult!

Overview

  1. Install Homebrew.
  2. Install cloudflared and dnsmasq: brew install cloudflare/cloudflare/cloudflared dnsmasq
  3. Configure dnsmasq to point to cloudflared as its own DNS resolver.
  4. Configure cloudflared to use DNS over HTTPS and run on port 54.
  5. Install both as services to run at system boot.

Configuring dnsmasq

Edit the configuration file located at /usr/local/etc/dnsmasq.conf and uncomment line 66 and change it from server=/localnet/192.168.0.1 to server=127.0.0.1#54 to tell it to pass DNS requests onto localhost on port 54, which is where cloudflared will be set up.

Configuring cloudflared

Create the directory /usr/local/etc/cloudflared and create a file inside that called config.yml with the following contents:

no-autoupdate: true
proxy-dns: true
proxy-dns-port: 54
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Auto-update is disabled because that seems to break things when the update occurs, and the service doesn’t start back up correctly.

Configuring dnsmasq and cloudflared to start on system boot

dnsmasq: sudo brew services start dnsmasq will both start it immediately and also set it to start at system boot.

cloudflared: sudo cloudflared service install, which installs it for launchctl at /Library/LaunchDaemons/com.cloudflare.cloudflared.plist.
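Before touching any network settings, it’s worth checking that both halves are actually answering (dig ships with macOS, and port 54 is where cloudflared was configured to listen above):

$ dig +short @127.0.0.1 -p 54 example.com   # query cloudflared directly
$ dig +short @127.0.0.1 example.com         # query dnsmasq, which forwards to cloudflared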

Updating your DNS servers

Now that dnsmasq and cloudflared are running, you need to actually tell your machines to use them as their DNS servers! Open up System Preferences > Network, hit Advanced, and in the DNS tab click the + button and put in the local IP address of the machine running dnsmasq (you’ll want to make sure it has a static IP address, of course). Repeat the process for everything else on your local network to have them all send their DNS traffic through dnsmasq and cloudflared to 1.1.1.1 as well.

You can confirm that all your DNS traffic is going where it should be with dnsleaktest.

And done!

I was surprised at how straightforward this was. I also didn’t realise until I was doing all of this that dnsmasq also does DHCP, so with the assistance of this blog post I’ve also replaced the built-in DHCP server on the Mac mini and continue to have full local hostname resolution as well!

Another year of Node.js (now also featuring React)

I posted last year about my progress with Node.js, and the last sentence included “I’m very interested to revisit this in another year and see what’s changed”.

So here we are!

There’s been a fair bit less work on it this year compared to last:

$ git diff --stat 6b7c737 47c364b
[...]
77 files changed, 2862 insertions(+), 3315 deletions(-)

The biggest change was migrating to Node 8’s shiny new async/await, which means that the code reads exactly as if it was synchronous (see the difference in my sendUpdate() code compared to the version above it). It’s really very nice. I also significantly simplified my code for receiving temperature updates thanks to finally moving over to the Raspberry Pi over the Christmas break. Otherwise it’s just been minor bits and pieces, and moving from Bamboo to Bitbucket Pipelines for the testing and deployment pipeline.
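The links above show the real thing, but as a generic illustration of why it’s so much nicer than chained promises (the function names here are made up):

// Before: a chain of .then() callbacks
function sendUpdateWithPromises() {
  return fetchTemperature()
    .then((reading) => saveReading(reading))
    .then(() => console.log("Saved"))
    .catch((err) => console.error("Failed to save reading", err));
}

// After: async/await reads top-to-bottom, as if it were synchronous
async function sendUpdateWithAwait() {
  try {
    const reading = await fetchTemperature();
    await saveReading(reading);
    console.log("Saved");
  } catch (err) {
    console.error("Failed to save reading", err);
  }
}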

I also did a brief bit of dabbling with React, which is a frontend framework for building single-page applications. I’d tried to fiddle with it a couple of years ago but there was something fundamental I wasn’t grasping, and ended up giving up. This time it took, though, and the result is virtualwolf.cloud! All it’s doing is pulling in data from my regular website, but it was still a good start.

There was a good chunk of time from about the middle of the year through to Christmas where I didn’t do any personal coding at all, because I was doing it at work instead! For my new job, the primary point of contact for users seeking help is a room on Stride, and we needed a way to be able to categorise those contacts to see what users were contacting us about and why. A co-worker wrote an application in Ruby a few years ago to scrape the history of a HipChat room and apply tags to it in order to accomplish this, but it didn’t scale very well (it was essentially single-tenanted and required a separate deployment of the application for each additional room it was installed in; understandable when you realise he wrote it entirely for himself and was the only one doing this for a good couple of years). I decided to rewrite it entirely from scratch to support Stride and multiple rooms, with the backend written in Node.js and the frontend in React. It really is a fully-fledged application, and it’s been installed into nearly 30 different rooms at work now, so different teams can keep track of their contact rate!

The backend periodically hits Stride’s API for each room it’s installed in, and saves the messages in that room into the database. There’s some logic around whether a message is marked as a contact or not (as in, it was someone asking for help), and there’s also a whitelist that the team who owns the room can add their team members to in order to never have their own messages marked as contacts. Once a message is marked as a contact, they can then add one or more user-defined tags to it, and there’s also a monthly report so you can see the number of contacts for each tag and the change from the previous month.

The backend is really just a bunch of REST endpoints that are called by the frontend, but that feels like I’m short-changing myself. 😛 I wrote up a diagram of the hierarchy of the frontend components a month or so ago, so you can see from this how complex it is:

And I’m in the middle of adding the ability to have a “group” of rooms, and have tags defined at the group level instead of the room level.

I find it funny how if I’m doing a bunch of coding at work, I have basically zero interest in doing it at home, but if I haven’t had a chance to do any there I’m happy to come home and code. I don’t think I have the brain capacity to do both at once though. 😛

Adventures with Docker

For a few years now, the new hotness in the software world has been Docker. It’s essentially a very-stripped-down virtual machine, where instead of each virtual machine needing to run an entire operating system as well as whatever application you’re running inside it, you have just your application and its direct dependencies and the underlying operating system handles everything else. This means you can package up your application along with whatever other crazy setup or specific versions of software is required, and as long as they have Docker installed, anyone in the world can run it on pretty much anything.

The process of converting something to run in Docker is called “Dockerising”, and I’d tried probably two or so years ago to Dockerise my website (which was at the time still in its Perl incarnation), but without success. Most of that was me not properly understanding Docker, but Docker’s terminology also wasn’t hugely clear, and information on Dockerising Perl applications was a bit thin on the ground at the time.

My new job involves quite a lot of Docker so I figured I should probably have another crack at it, so I sat down in June and managed to get my website running in a Docker container! The two-or-so-years between when I tried it last and now definitely helped, as did having had a little bit of experience with it in the new job.

I think the terminology was one of the bits that I struggled with most, so maybe this explanation will help someone… you have a Docker image, which is basically a blueprint for a piece of software and all its associated dependencies. From that image (blueprint), you start up one or more containers, which are the actual running form of the image. If one container dies (the application inside crashes or whatever), you don’t care: you just start up another one and it’s identical each time. To build your own image, you start with a Dockerfile that tells Docker exactly how to construct your application and all the different parts that are required to support it (see my Lessn Archive’s Dockerfile for an example). There really wasn’t any substitute for actually going in and doing it; by struggling and failing I eventually got there in the end.
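The Lessn Archive Dockerfile linked above is the real thing; as a generic illustration, a Dockerfile for a simple Node.js app doesn’t need much more than this (the base image tag, file names, and port are placeholders):

# Start from an existing image that already has Node.js installed
FROM node:8

# Copy the application in and install its dependencies
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .

# The port the application listens on, and the command to start it
EXPOSE 3000
CMD ["node", "index.js"]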

Since my initial success with my website, I’ve gone on to put both my old site archive and my URL shortener in Docker containers as well! Next stop is Kristina’s website, but that’s still using Perl and Mojolicious and my initial attempts have not been successful. 😛

Internet history

On Twitter recently, Mark had downloaded the whole archive of his Twitter account’s history and had been poking through it and randomly retweeting amusing old tweets. I downloaded my own Twitter history and quickly realised that a lot of the old things I’d linked to weren’t accessible because I’d been using my own custom URL shortener (this was before the days of Twitter doing their own URL shortening) and it wasn’t running anymore. Fortunately I’d had the foresight to take a full copy of all of my data and databases from Dreamhost before I shut down my account, and one of those databases was the one that had been backing my URL shortener. A quick import to PostgreSQL and a hacky Node.js application later, it’s all up and running! I’m under no illusions that anyone other than me is ever going to access it, but it’s nice to have another part of my internet history working. I’ve been hosting my own website and images and whatnot (things like pictures I’ve posted on my blog née LiveJournal, or in threads on Ars Technica) in one form or another since about 2002, and the vast majority of those links and images still work!

Speaking of my website, about four years ago now I went and tried to collect all my old websites into a single archive so I could look back and see the progression. The majority of them I actually still had the original source code to, though my very first one or two have been totally lost. The earliest I still have is from March of 1998 when I was not quite fifteen years old! I started out with just HTML, then discovered CSS and Javascript rollover images, and then around 2001 I started using PHP. I had to go in and hack up some of the PHP-based sites in order to get them to work, and oh dear god 18-year-old me was a FUCKING AWFUL coder. One of the sites consisted of a bit over three thousand lines in a single file, with all sorts of duplication and terribleness, and every single one of the sites that was hooked into MySQL had SQL injection vulnerabilities. I’m very proud of just how much my code has improved over the years.

I went back this weekend and managed to recover another handful of sites, and also included exports of the Photoshop files where the original site source wasn’t available. I’ve packed them all up into a Docker container (I’ll write another post about my experiences with Docker at some point soon) and chucked them up on archive.virtualwolf.org for the entire Internet to marvel at how terrible they all were! There’s a little bit more background there, but it’s a lot of fun just looking back at what I did.

A year of Node.js

Today marks one year exactly since switching my website from Perl to Javascript/Node.js! I posted back in March about having made the switch, but at that point my “production” website was still running on Perl. I switched over full-time to Node.js shortly after that post.

From the very first commit to the latest one:

$ git diff --stat 030430d 6b7c737
[...]
177 files changed, 11313 insertions(+), 2110 deletions(-)

Looking back on it, I’ve learnt a hell of a lot in that one single year! I have—

  • Written a HipChat add-on that hooks into my Ninja Block data (note the temperature in the right-hand column as well as the slash-commands; the button in the right-hand column can be clicked on to view the indoor and outdoor temperatures and the extremes for the day)
  • Refactored almost all of the code into a significantly more functional style, which has the bonus of making it a hell of a lot easier to read
  • Moved from callbacks to Promises, which also massively simplified things (see the progression of part of my Flickr- and HipChat-related code)
  • Completely overhauled my database schema to accommodate the day I eventually replace my Ninja Block with my Raspberry Pi (the Ninja Block is still running though, so I needed to have a “translation layer” to take the data in the format that the Ninja Block sends and convert it to what can be inserted in the new database structure)
  • Added secure, signed, HTTP-only cookies when changing site settings
  • Included functionality to replace my old Twitter image hosting script, and also added a nice front-end to it to browse through old images

Along with all that, I’ve been reading a lot of software engineering books, which have helped a great deal with the refactoring I mentioned above (there was a lot of “Oh god, this code is actually quite awful” after going through with a fresh eye having read some of these books)—Clean Code by Robert C. Martin, Code Complete by Steve McConnell, The Art of Readable Code by Dustin Boswell and Trevor Foucher.

I have a nice backlog in JIRA of new things I want to do in future, so I’m very interested to revisit this in another year and see what’s changed!

Farewell Dreamhost

After 12 years of service, I’m shutting my Dreamhost account down (for those unaware, Dreamhost is a website and email hosting service).

My very first—extremely shitty—websites were hosted on whichever ISP we happened to be using at the time—Spin.net.au, Ozemail, Optus—with an extremely professional-looking URL along the lines of domain.com.au/~username. I registered virtualwolf.org at some point around 2001-2002 and had it hosted for free on a friend’s server for a few years, but in 2005 he shut it down so I had to go find some proper hosting, and that hosting was Dreamhost.

The biggest thing I found useful as I was dabbling in programming was that Dreamhost offered PHP and MySQL, so I was able to create dynamic sites rather than just static HTML. Of course, looking back at the code now is horrifying, especially the amount of SQL injection vulnerabilities I had peppered my sites with.

Around the start of 2011, I started using source control—Subversion initially—and finally had a proper historical record of my code. I used PHP for the first year or so of it, then ended up outgrowing that and switched to a Perl web framework called Mojolicious. The only option to run a long-lived process on Dreamhost is to use FastCGI, which I never managed to get working with Mojolicious, but fortunately Mojolicious could also run as a regular CGI script so I was still able to use it with Dreamhost, albeit not at great speed.

At the same time I started using Subversion, I also signed up with Linode who offer an entire Linux virtual machine with which you can do almost anything you’d like as you have full root access. I originally used it mostly to run JIRA so I could keep track of what I wanted to do with my website and have the nifty Subversion/JIRA integration working to see my commits against each JIRA issue. I slowly started using the Linode for more and more things (and switched to Git instead of Subversion as well), until in 2014 I moved my entire website hosting over to the Linode.

At that point the only thing I was using Dreamhost for was hosting Kristina’s website and WordPress blog, and the email for our respective domains. Dreamhost’s email hosting wasn’t always the most reliable and towards the end of 2015 they had more than their usual share of problems, so we started looking for alternatives. Kristina ended up moving to Gmail and I went with FastMail (who I am extremely happy with and would very highly recommend!), I moved her blog and my previously-LiveJournal-but-now-Wordpress-blog over to the Linode, and that was that!

Moving my website hosting to the Linode also allowed me to move over to Node.js and I’ve been going full steam ahead ever since. Since that post I’ve moved over from callbacks to Promises (so much nicer), I wrote myself a HipChat add-on to keep an eye on the temperature that my Ninja Block is reporting, and I moved my dodgy Twitter image upload Perl script functionality into my site and added a nice front-end to it. Even looking back at my code from 6 months ago to now shows a marked increase in quality and readability.

So in summary, thanks for everything Dreamhost, but I outgrew you. 🙂

Stubbing services in other services with Sails.js

With all my Javascript learnings going on, I’ve also been learning about testing it. Most of my website consists of pulling in data from other places—Flickr, Tumblr, Last.fm, and my Ninja Block—and doing something with it, and when testing I don’t want to be making actual HTTP calls to each service (for one thing, Last.fm has a rate limit and it’s very easy to run into that when running a bunch of tests in quick succession which then causes your tests to all fail).

When someone looks at a page containing (say) my photos, the flow looks like this:

Request for page → PhotosController → PhotosService → jsonService → pull data from Flickr’s API

PhotosController is just a very thin wrapper that then talks to PhotosService, which is what calls jsonService to actually fetch the data from Flickr and then subsequently formats it all and sends it back to the controller, to go back to the browser. PhotosService is what needs the most tests due to it doing the most, but as mentioned above I don’t want it to actually make HTTP requests via jsonService. I read a bunch of stuff about mocks and stubs and a Javascript module called Sinon, but didn’t find one single place that clearly explained how to get all this going when using Sails.js. I figured I’d write up what I did here, both for my future reference and for anyone else who runs into the same problem! This uses Mocha for running the tests and Chai for assertions, plus Sinon for stubbing.
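The full details are in the rest of the post, but the core of it is stubbing jsonService inside the PhotosService tests so the real HTTP call never happens. Roughly like this (written in a more modern promise-based style for brevity; the method names and response shape are invented, and it leans on Sails exposing services as globals):

const sinon = require('sinon');
const { expect } = require('chai');

describe('PhotosService', () => {
  afterEach(() => {
    // Put the stubbed method back so other tests get the real jsonService
    sinon.restore();
  });

  it('formats the photos returned by jsonService', async () => {
    // No real request goes to Flickr: jsonService.fetch is replaced for this test
    sinon.stub(jsonService, 'fetch').resolves({ photos: [{ title: 'A wolf' }] });

    const result = await PhotosService.getPhotos();

    expect(result[0].title).to.equal('A wolf');
  });
});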

Continue reading “Stubbing services in other services with Sails.js”

Learning new things: Javascript and Node.js

We’ve used Node.js (specifically with a framework called Sails.js) at work for a number of projects but I never really felt I properly understood one of Node’s fundamental concepts, that of the callback. It’s absolutely pervasive throughout Node and I was able to muddle on through at work without totally grasping it, but it wasn’t ideal.

Back at the end of January I decided to try rewriting my website using Node.js (it’s currently written in Perl using the Mojolicious framework) as a learning experience. It’s now almost two months later and my site is actually completely rewritten with Node/Sails (sans tests, which are currently being written; I know about test-driven development but I wasn’t about to start bashing my head against failing to understand how to get the tests to do what I wanted on top of learning a whole new language :P) with all the same functionality of my Perl one, and although I’m still far from an expert I actually feel like I have a proper handle on what’s going on.

The problem I found when trying to find examples was that they were all very contrived; I felt like they were missing fundamental underlying parts that apparently everybody else was able to understand but I couldn’t. For me, the “ah ha” moment was this post on Stack Overflow about using callbacks in your own functions. It didn’t assume anything or use an example of some module that apparently everyone is already familiar with (the most common one was fs.read() to read data from the filesystem). Once I had that straight, it was full steam ahead. It’s also significantly easier to deal with Javascript objects compared to Perl’s array/hash references.
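In the spirit of that answer, the thing that finally clicked for me was that a callback is just a function you pass in, and your own function decides when (and with what) to call it. An entirely made-up example:

// A function that does something asynchronous and then hands the result
// (or an error) to whichever callback the caller provided.
function fetchGreeting(name, callback) {
  setTimeout(() => {
    if (!name) {
      return callback(new Error('No name given'));
    }
    callback(null, `Hello, ${name}!`);
  }, 100);
}

// The caller decides what happens once the result is ready.
fetchGreeting('VirtualWolf', (err, greeting) => {
  if (err) {
    return console.error(err);
  }
  console.log(greeting);
});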

My actual live website at virtualwolf.org is still on the old Perl version, but I don’t want to put the Node one up until I’ve actually got it properly covered with tests. Speaking of tests, I’m using a thing called Istanbul for code coverage, the reports it generates look like this, and it’s really satisfying having the numbers and bars go up as your coverage increases. It’s basically gamification of tests, really!

All in all, I’m pretty pleased!

Introducing the LiveJournal XML Importer

Continuing on from my previous post about my LiveJournal to WordPress experience, and how the importer managed to miss a bunch of entries, it turns out I didn’t have every notification email still around. The ones prior to February of 2004 I’d apparently deleted so sadly there’s no recovering Kristina’s really early comments from the missing posts, but from what I could see there weren’t too many of those anyway, thankfully.

However, I’m happy to say that I’ve been able to hack the importer to import all the entries and comments from an ljdump archive! I’ve put the code up on GitHub; I’m sure there are bugs and edge-cases and things that don’t work properly, but it worked perfectly for me. I’ve changed it from the original importer to still import comments from journals that have been deleted, so the threading remains intact and you don’t end up with weird comments seemingly replying to nothing. They’re easily identified by the fact that the date on the comment is set to the time you performed the import, so they show up at the top of the Comments section in WordPress’ admin.