Automating Raspberry Pi setup (and ESP32, and Linode) with Ansible

Back in 2017 when I first moved off the old and busted NinjaBlocks platform to a Raspberry Pi for my temperature sensor setup, I said:

[…] if the hardware itself died I’d be stuck; yes, it was all built on “open hardware” but I didn’t know enough about it all to be able to recreate it.

I definitely have no problems with the hardware now (and with my move to ESP32s and MQTT for the temperature sensors themselves it’s even simpler), but I recently realised that the software configuration on the Raspberry Pis was still a problem: everything had accumulated a steady stream of organic updates and tweaks, and if one of the SD cards died or had a problem and I had to reformat it, I’d have a hell of a time recreating it all afresh. On the “main” Raspberry Pi 4B alone I would need to:

  • Install all the various software packages (vim, git, tmux, Docker, etc.)
  • Add my custom shell configuration and bashmarks
  • Install Chrony and configure it to allow access from the ESP32s
  • Configure the drivers for the JustBoom DAC HAT and install and configure shairport-sync to allow for AirPlay to the big speakers in the lounge room
  • Configure and run the Mosquitto Docker container to allow all my temperature, humidity, air quality, and power data to flow to all the various places it needs to go
  • Configure and run the pi-home-dashboard Docker container so the Raspberry Pi Zero Ws can display our little temperature dashboards
  • Configure and run the powerwall-to-pvoutput-uploader Docker container so our power usage data can be both sent to PV Output and also be read by InfluxDB and the Raspberry Pi Zero W dashboard

In addition to all of that, the Raspberry Pi Zero Ws that display our little dashboards required a whole bunch of constant tweaks and changes to get them working properly (they run a full Linux desktop and display the dashboard in Chromium, and Chromium has an extremely irritating habit of giving a “Chromium didn’t shut down properly” message instead of loading the page you tell it to if there were basically any issues at all; fixing that required a whole lot of trial and error until I narrowed down the specific incantations to get it to stop).

Setting all of that back up from scratch would have been an absolute nightmare, so I figured I should probably automate it in some fashion. It also means I can keep future changes under version control, so no matter what I’ve tweaked I’ll be able to restore it if necessary. At work we use a piece of software called Ansible to bring up the required software and configuration on the Amazon EC2 instances we run, so I thought that’d be a good place to start.

Even though it’s developed by Red Hat and isn’t just some random open-source piece of software (although it is open-source), I still found the documentation not great at explaining everything step-by-step and building on that knowledge in subsequent steps. But after a bunch of reading and trial and error, plus several weeks of getting it all working, I have the setup for all the Raspberry Pis we have at home fully automated! I can wipe an SD card, reflash it with a fresh copy of Raspbian, then run Ansible and it gets everything installed and configured and working exactly how I need it.

I uploaded the whole shebang to GitHub to hopefully help others out as well. It’s obviously completely customised for our setup, but it at least gives a reasonable idea of how everything works.

It starts with an inventory, basically a list of the hostnames you want to run playbooks against (a “playbook” is a file that describes the list of steps to run in order to get to the desired state you need). Alongside that, you can group the hosts together so you can target a playbook at a specific set of hosts. For example, the server group consists only of the main Raspberry Pi 4B described above, while the dashboards group consists of all three of the Pi Zero Ws that are running the dashboards.

One of the really handy things with Ansible is that you can set variables that will be reused in various places, and you can configure them for all hosts, per group, or per individual host. I’m using a combination of those: the server group’s variables actually look up items from within 1Password so I can commit the code to source control without storing secrets in it, and per-host variables are what I use to specify the dashboard URL that each of the Pi Zero Ws should load.

The playbooks themselves live in the playbooks directory, and each one specifies a set of hosts to run against and a series of roles to run. The roles are reusable sets of tasks; for example, I run the initialisation role against all of the Raspberry Pis no matter what they’re ultimately going to be doing, and it handles the initial things like setting the hostname, updating all the software packages, configuring Git, and so on.

After the initialisation is done, the server playbook runs all the various steps to get Docker, NVM, and Node.js installed, then gets everything else that’s needed configured and installed. Compare that to the dashboards playbook, which runs the same initialisation steps but then runs the dashboard role: that installs the drivers for the HyperPixel display and various other things that need doing, and configures the autostart file so that on boot Chromium comes up with the correct URL depending on which Raspberry Pi it’s running on! The dashboard_url variable in that template file is set in the host_vars directory per specific hostname, so I can customise it for each one.

After my complete success here, I decided I wanted to do the same for when I needed to reflash my ESP32s, because previously it relied on me remembering to update a configuration file with the name of the ESP32 and I had definitely messed that up on a couple of occasions. That was relatively straightforward as well, and I added it to my esp32-sensor-reader-mqtt repository (and included a playbook to just erase a board, because I never did that quite often enough to remember the specific steps).

And then finally after that, I decided I should also automate my Linode setup. I’d posted back in 2019 about using Linode’s StackScripts to set everything up, but the problem with that is that you run it at the very beginning and your Linode is set up appropriately at that point, but any changes you make after that aren’t saved anywhere, so you’re essentially back to square one again. The final sentence in that blog post was this:

As long as I’m disciplined about remembering to update my StackScript when I make software changes, whenever the next big move to a new VM is should be a hell of a lot simpler.

But in news that will surprise nobody, I was not at all disciplined. 😅 The other problem is that you can’t test the StackScript as you go (they only run when you first spin up the Linode afresh); you have to update it and hope those steps work in future. With Ansible, the idea is that everything is idempotent: you can run it as many times as you want and anything that’s already configured won’t be changed, which makes it easy to test out parts of a playbook update without needing to wipe the whole damn thing. It’s taken a bit over a month of working on it after dinner and on weekends, but now the whole Linode setup is fully automated as well.

However, the other wrinkle is that whereas the Raspberry Pis don’t store any data on them and can just be wiped without problem, my Linode hosts this blog, my website and all its images, all the various images I’ve posted to Ars Technica over the years, and a bunch of other things too. I ended up splitting the Ansible process into two: there’s the configuration, and then a separate playbook that copies everything over from the old Linode onto the new one, registers a new SSL certificate with Let’s Encrypt, and updates Cloudflare to point the DNS of my website, blog, and the general server hostname at the new Linode.

That bit was a bit nerve-wracking. I tested the process a bunch of times and pulled the trigger for real last weekend, then had a minor panic attack when I realised that the database dump of my website hadn’t been reimported since I originally tested the process back on the 9th (I suspect it didn’t import because there was already data in the database, but unfortunately it didn’t actually error out so I didn’t know), so I didn’t have any of the posts I’d made nor any of the temperature data since then. 😬 Fortunately I realised this the morning after I’d done the migration and had wisely left the old Linode up and running, so I renamed the database on the new Linode so I could harvest the temperature data that had been sent since the migration, dumped the old database from the old Linode and imported it into the correct location on the new Linode, and then exported and reimported just the missing temperature data, and we were back in business.

This time I should be able to revisit this in three or four years when I next do a big upgrade and it should actually be quite painless (famous last words, I know, but I’m much more confident this time).

Replacing the hard disk in a PowerBook G3 “Pismo”, and other fun with Mac OS 9

I posted nearly five years ago about my shiny new Power Mac G4 and how much I was enjoying the nostalgia. Unfortunately the power supply in it has since started to die, and the machine will randomly turn itself off after an increasingly short period of time. Additionally, I’d forgotten just how noisy those machines were, and how hot they ran! I’ve bought a replacement power supply for it, but it involves rearranging the output pins from a standard ATX PSU to what the G4 needs, and that’s so daunting that I still haven’t tackled it yet. I decided to go back to the trusty old PowerBook G3, as I’ve since gotten a new desk and computer setup that has much more room on it, and having a much more compact machine has been very helpful.

One thing I was a bit concerned about was the longevity of the hard disk in it, so I started investigating the possibility of putting a small SSD into it. Thankfully such a thing is eminently possible by way of a 128GB mSATA SSD and an mSATA to IDE adapter! I followed the iFixit guide — though steps 6 through to 11 were entirely unnecessary — and now have a shiny new and nearly entirely silent PowerBook G3 (though it’s disconcerting just how quiet it is for an old machine… I realised I’m subconsciously so used to hearing the clicking of the hard disk).

A photo of a black PowerBook G3 sitting on a desk, booted to the Mac OS 9 desktop. The machine is big and chunky, but also has subtle curves to it, and the trackpad is HILARIOUSLY tiny compared to modern Macs.

I even had the original install discs from the year 2000 when mum first bought this machine, and they worked perfectly (though a few years ago I’d had to replace the original DVD drive with a slot-loading one because it had stopped reading discs entirely).

Once I had it up and running, another sticking point was actually getting files onto it. As I mentioned in my previous post, Macintosh Repository has a whole ton of old software, and if you load it up with a web browser from within Mac OS 9 it’ll load without HTTPS, but even so it’s pretty slow. Sometimes it’s nicer just to do all the searching and downloading from a fast modern machine and then transfer the resulting files over.

Mac OS 9 uses AFP for sharing files, and the AFP server that used to be built into Mac OS X was removed a few versions ago. Fortunately there’s an open-source implementation called Netatalk, and some kindly soul packed it all up into a Docker container.

I also stumbled across a project called Webone a while ago, which acts essentially as an SSL-stripping proxy that you run on a modern machine and point your old machine’s web browser to for its proxy setting. Old browsers are utterly unable to do anything with the modern web thanks to newer versions of encryption in HTTPS, but this lets you at least somewhat manage to view websites, even if they often don’t actually render properly.

Both Netatalk and Webone required a bit of configuration, and rather than setting them up and then forgetting how I did so, I’ve made a GitHub repository called Mac OS 9 Toolbox with docker-compose.yml files and setup for both projects, plus a README so future-me knows what I’ve done and why. 😛 In particular, getting write permissions so the Mac OS 9 machine can write to the one running Netatalk was tricky.

I also included a couple of other things in there too, and will continue to expand on it as I go. One thing is how to convert the PICT format screenshots from Mac OS 9 into PNG, since basically nothing will read PICTs anymore. It also includes a Mastodon client called Macstodon:

A screenshot of a multi-pane Mac OS 9 application showing the Mastodon Home and Local Timelines and Notifications at the top, and the details of a selected toot at the bottom.

And also the game Escape Velocity: Override (which I’m very excited to note is getting a modern remaster from the main guy who worked on the original):

A screenshot of a top-down 2D space trading/combat game with quite basic graphics. A planet is in the middle of the screen along with several starships of various sizes.

I mentioned both the Marathon and Myth games in my previous post, but those actually run quite happily on modern hardware since Bungie was nice enough to open-source them many years ago. Marathon lives on with Aleph One, and Myth via Project Magma.

Upping my monitoring game with MQTT

Previously on Monitor All The Things:

(The display that used to show the air quality has been changed to show a clock instead, and the air quality monitoring is done via another ESP32 now. I’m also sensing a definite theme with my blog post titles here).

I hadn’t blogged about it, but I also have all of this (indoor and outdoor temperature and humidity, power usage and generation plus battery charge, and outdoor air quality) going into InfluxDB for visualising in Grafana. The dashboard I made looks like this:

Pretty spiffy, eh?

It had evolved rather organically as I went, though, with lots of different things on different hosts sending HTTP calls all over the place: my own slightly dodgy system for getting the ESP32s that are connected to the temperature sensors to save their readings onto the local filesystem if my website couldn’t be contacted (for example if we had an internet outage), as well as two separate things hitting the Powerwall’s local API every five seconds to pull the power data (one for the little HyperPixel display at the front of the house, and one for the visualisation stuff above).

I figured there had to be a cleaner and more elegant way of doing this. At work I deal with Amazon’s Simple Queue Service (SQS) quite a lot and use it in one of the services I built, so I wondered if there was a way to accomplish something similar myself: have everything drop messages onto a queue, and have the things that need those messages pick them up from it.

Turns out there is, and it’s called MQTT!

It’s an absurdly simple and lightweight protocol: there’s a central server called a “broker”, publishers that send messages to a given topic on the broker, and as many subscribers as you want that also connect to the broker and each listen on one or more topics, and the broker ensures those messages get from the publishers to each subscriber. There are also quality-of-service settings where you can have it guarantee that a message is received by a subscriber at least once, and it’ll queue messages up for subscribers that drop offline and send them all once the subscriber comes back.
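
To make that concrete, here’s a tiny publish/subscribe sketch using the paho-mqtt library (which isn’t what my actual setup uses — that’s mqtt_as on the ESP32s and MQTT.js elsewhere — it’s just an easy way to see the model; this is the paho-mqtt 1.x style API, and the topic name is made up for the example):

import time
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

def on_message(client, userdata, message):
    # Called for every message arriving on a topic we've subscribed to
    print(message.topic, message.payload)

# Subscriber: connect to the broker and listen on a topic
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect('localhost', 1883)
subscriber.subscribe('home/outdoor/temperature', qos=1)
subscriber.loop_start()

# Publisher: send a reading to the same topic; QoS 1 means "delivered at least once"
publish.single('home/outdoor/temperature', '{"celsius": 17.2}', qos=1, hostname='localhost')

time.sleep(1)  # give the subscriber a moment to receive it before the script exits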

Interestingly, you can also have a broker on one machine connect to a broker on another machine, and have it send messages on a particular topic to the remote broker, which seemed like it’d be a good way to get weather updates to my website.

There’s a guy who wrote an MQTT client library in MicroPython for the ESP32, mqtt_as, so that would take care of the ESP32 side of things; I’d use a popular open-source MQTT broker called Mosquitto; and there’s a JavaScript MQTT client called MQTT.js that would be used for my website and all the other TypeScript parts of the setup.
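
For the ESP32 side, the publishing loop with mqtt_as ends up looking roughly like this (a from-memory sketch rather than my actual code, with placeholder wifi details and topic; check the library’s README for the exact config keys):

import uasyncio as asyncio
from mqtt_as import MQTTClient, config

# Placeholder network and broker details
config['ssid'] = 'my-network'
config['wifi_pw'] = 'my-password'
config['server'] = '192.168.1.10'

async def main(client):
    await client.connect()
    while True:
        # Read the sensor here, then publish the result with QoS 1
        await client.publish('home/outdoor/temperature', '{"celsius": 17.2}', qos=1)
        await asyncio.sleep(60)

client = MQTTClient(config)
try:
    asyncio.run(main(client))
finally:
    client.close()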

I did a bunch of brainstorming in draw.io and came up with this elaborate diagram:

(Mechanise is the hostname of my Linode, which my website runs on, and PVOutput is a website for sending your solar power generation data to; a bunch of people at work do the same and we’re all in the same “team” so we can see how much we’ve all generated combined).

After that, it just involved a whole bunch of coding (as well as ordering two spare ESP32s so I could test that my code worked without having to pull apart my existing setup), which I’ve uploaded to GitHub:

Despite having written up a careful plan and, I thought, gotten all my ducks in a row for a quick switchover this morning, I ran into a number of things that caused it to take a few hours to get going (forgetting to configure PostgreSQL on the Raspberry Pi 4B to allow things running in Docker to access it, needing to add an extra published port on the Linode so my website could connect to Mosquitto, and most annoyingly of all, a recent VSCode update breaking Pymakr and having to revert to an old version of both pieces of software). I got everything up and running in the end, and now if I add any new monitoring things, it’ll be quite simple to publish the data to Mosquitto and slurp it up into InfluxDB!

More fun with temperature sensors: ESP32 microcontrollers and MicroPython

I’ve blogged previously about our temperature/humidity sensor setup and how the sensors are attached to my Raspberry Pis, and they’ve been absolutely rock-solid in the three-and-a-half years since then. A few months ago a colleague at work mentioned doing some stuff with an ESP32 microcontroller, and just recently I decided to actually look up what that was and what one can do with it, because it sounded like it might be a fun new project to play with!

From Wikipedia: ESP32 is a series of low-cost, low-power system on a chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth.

So it’s essentially a tiny single-purpose computer that you write code for and then flash onto the board, unlike the Raspberry Pi which has an entire Linux OS running on it. It runs at a blazing fast 240MHz and has 320KB of RAM. The biggest draw for me was that it has built-in wifi so I could do networked stuff easily. There’s a ton of different boards and options and it was all a bit overwhelming, but I ended up getting two of Adafruit’s HUZZAH32s, which come with the headers already soldered on for attaching the temperature sensors we have. Additionally, they have 520KB of RAM and 4MB of storage.

Next up, I needed to find out how to actually program the thing. Ordinarily you’d write in C like with an Arduino, and I wasn’t too keen on that, but it turns out there’s a distribution of Python called MicroPython that’s written explicitly for embedded microcontrollers like the ESP32. I’ve never really done much with Python before, because the utter tyre fire that is the dependency/environment management always put me off (this xkcd comic is extremely relevant). With MicroPython on the ESP32, though, I wouldn’t have to deal with any of that, I’d just write the Python and upload it to the board! Additionally, it turns out MicroPython has built-in support for the DHT22 temperature/humidity sensor that I’ve already been using with the Raspberry Pis. Score!
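
(To give an idea of just how built in that support is, reading the sensor boils down to a handful of lines; the pin number here is just a placeholder for whichever GPIO pin the sensor’s data line is wired to.)

import dht
from machine import Pin

# DHT22 data line connected to a GPIO pin (placeholder pin number)
sensor = dht.DHT22(Pin(21))
sensor.measure()
print('Temperature:', sensor.temperature(), 'Humidity:', sensor.humidity())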

There was a lot of searching over many different websites trying to find how to get all this going, so I’m including it all here in the hopes that maybe it’ll help somebody else in future.

Installing MicroPython

At least on macOS, first you need to install the USB to UART driver or your ESP32 won’t even be recognised. Grab it from Silicon Labs’ website and get it installed.

Once that’s done, follow the Getting Started page on the MicroPython website to flash the ESP32 with MicroPython, substituting /dev/tty.SLAB_USBtoUART for /dev/ttyUSB0 in the commands.

Using MicroPython

With MicroPython, there are two files that are always executed when the board starts up: boot.py, which is run once at boot time and is generally where you’d put your connect-to-the-wifi-network code, and main.py, which is run after boot.py and is generally the entry point to your code. To get these files onto the board you can use a command-line tool called ampy, but it’s a bit clunky and isn’t maintained anymore.
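
(As an illustration, a minimal boot.py is really just the wifi connection; the network name and password here are obviously placeholders.)

# boot.py: connect to the wifi network before main.py runs
import network

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
if not wlan.isconnected():
    wlan.connect('my-ssid', 'my-password')
    while not wlan.isconnected():
        pass

print('Connected with IP address:', wlan.ifconfig()[0])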

However, for getting your code onto the board there is a better way!

Setting up the development environment

There are two additional tools that make writing your Python code in Visual Studio Code and uploading to the ESP32 an absolute breeze.

The first one is micropy-cli, which is a command-line tool to generate the skeleton of a VSCode project and set it up for full autocompletion and Intellisense of your MicroPython code. Make sure you add the ESP32 stubs first before creating a new micropy project.

The second is a VSCode extension called Pymakr. It gives you a terminal to connect directly to the board and run commands and read output, and also gives you a one-click button to upload your fresh code, and it’s smart enough not to re-upload files that haven’t changed.

There were a couple of issues I ran into when trying to get Pymakr to recognise the ESP32 though. To fix them, bring up the VSCode command palette with Cmd-Shift-P and find “Pymakr > Global Settings”. Update the address field from the default IP address to /dev/tty.SLAB_USBtoUART, and edit the autoconnect_comport_manufacturers array to add Silicon Labs.

Replacing the Raspberry Pis with ESP32s

After I had all of that set up and working, it was time to start coding! As I mentioned earlier I’ve not really done any Python before, so it was quite the learning experience. It was a good few weeks of coding and learning and iterating, but in the end I fully-replicated my Pi Sensor Reader setup with the ESP32s, and with some additional bits besides.

One of the things my existing Pi Sensor Reader setup did was run a local webserver so I could periodically hit the Pi and display the data elsewhere. Under Node.js this is extremely easily accomplished with Express, but with MicroPython the options were more limited. There are a number of little web frameworks that people have written for it, but they all seemed like overkill.

I decided to just use raw sockets and write my own, though one thing I didn’t appreciate until this point was how much Node.js’s everything-is-asynchronous-and-non-blocking nature makes doing this kind of thing easy: you don’t have to worry about a long-running function causing everything else to grind to a halt while it waits for that function to finish. Python has a thing called asyncio, but I was struggling to get my head around how to use it for the webserver part of things until I stumbled across this extremely helpful repository where someone had shown an example of how to do exactly that! (I even ended up making a pull request to fix an issue I discovered with it, which I’m pretty stoked with).
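
The overall shape of it ends up being something like the following (a heavily cut-down sketch rather than my actual code): the webserver and the sensor-reading loop both run as asyncio tasks, so neither blocks the other.

import uasyncio as asyncio

latest_reading = {'temperature': None, 'humidity': None}

async def serve_client(reader, writer):
    # Read and discard the request line and headers
    await reader.readline()
    while await reader.readline() != b'\r\n':
        pass
    body = '{{"temperature": {}, "humidity": {}}}'.format(
        latest_reading['temperature'], latest_reading['humidity'])
    writer.write(b'HTTP/1.0 200 OK\r\nContent-Type: application/json\r\n\r\n')
    writer.write(body.encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    await asyncio.start_server(serve_client, '0.0.0.0', 80)
    while True:
        # Read the sensor and update latest_reading here
        await asyncio.sleep(60)

asyncio.run(main())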

One of the things I most wanted was some sort of log file accessible in case of errors. With the Raspberry Pi I can just SSH in and check the Docker logs, but once the ESP32s are plugged into power and running you can’t easily do the same thing. I ended up writing the webserver with several endpoints to read the log, clear it, reset the board, and view and clear the queue of failed updates.
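
The logging itself is nothing fancy: conceptually it’s just a file on the ESP32’s flash filesystem that those endpoints read back or delete (a simplified sketch, with a made-up filename):

import os

LOG_FILE = 'log.txt'

def log(message):
    # Append a line to the log file on the board's flash storage
    with open(LOG_FILE, 'a') as f:
        f.write(message + '\n')

def read_log():
    try:
        with open(LOG_FILE) as f:
            return f.read()
    except OSError:
        return ''

def clear_log():
    try:
        os.remove(LOG_FILE)
    except OSError:
        pass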

The whole thing has been uploaded to GitHub with a proper README of how it works, and they’ve been running connected to the actual indoor and outdoor temperature sensors and posting data to my website for just under a week now, and it’s been absolutely flawless!

More Raspberry Pi-powered monitoring: air quality!

Here in New South Wales, last year’s bushfires over late spring and into summer were astoundingly bad, and there were days where Sydney had the poorest air quality on the entire planet. Everyone was watching the PM2.5 values, and there were days where Kristina couldn’t go outside because of her asthma. I figured it’d be neat to set up a Raspberry Pi-powered air quality sensor and had ordered the sensor back in February but didn’t get around to putting it into service until now.

This is the bit that lives inside so we can easily see the latest reading:

A small 4" LCD display showing the air quality values for PM1.0, PM2.5, and PM10.

It uses the same sort of setup as my Pimoroni display, and I updated my pi-home-dashboard to add a second page to display the values from the air quality reader.

The sensor itself is a Plantower PMS5003 sensor and is attached to the same Raspberry Pi that the outdoor temperature sensor is on. Adafruit’s instructions on getting it set up were pretty straightforward, and they also give some sample code for how to read it, but it’s in Python which I intensely dislike (I don’t really even have any strong feelings about the language itself one way or the other, but I’ve never had a good experience with the damn package management around it, so I do my damnedest to avoid it). I was able to write the same logic in TypeScript instead — though had to consult the clever people on Ars Technica because parsing the output from the sensor involves things like bit-shifting which is quite low-level and something I’m utterly unfamiliar with — and chucked the whole thing up on GitHub. It takes ten readings and averages them, and has an HTTP endpoint for pulling the latest values.
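
The bit-shifting turns out to be the whole trick: the sensor sends a 32-byte frame over serial where each value is two bytes, high byte first, so you shift the high byte up by eight bits and OR in the low one. Roughly (sketched in Python here purely for brevity, going from the datasheet as I remember it; my actual version is the TypeScript linked above):

def parse_pms5003_frame(frame):
    # Frames start with the two magic bytes 0x42 0x4D
    assert frame[0] == 0x42 and frame[1] == 0x4D, 'bad start bytes'

    def word(i):
        # Reassemble a big-endian 16-bit value from two bytes
        return (frame[i] << 8) | frame[i + 1]

    # The last two bytes are a checksum: the sum of every byte before them
    assert word(30) == sum(frame[0:30]), 'checksum mismatch'

    # The "atmospheric environment" PM values live at these offsets
    return {'pm1.0': word(10), 'pm2.5': word(12), 'pm10': word(14)}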

I’ve set the front-end up so the colour of the numbers will change to orange and red depending on how bad the air quality is, but hopefully it’s a long while before we actually see that in action!

Powering our house with a Tesla Powerwall 2 battery

I posted back in March about our shiny new solar panels and efforts to reduce our power usage, and as of two weeks ago our net electricity grid power usage is now next to zero thanks to a fancy new Tesla Powerwall 2 battery!

A photo of a white Tesla Powerwall 2 battery and Backup Gateway mounted against a red brick wall inside our garage.
A side-on view of a white Tesla Powerwall 2 battery mounted against a red brick wall.

We originally weren’t planning on getting a battery back when we got our solar panels — and to be honest they still don’t make financial sense in terms of a return on investment — but we had nine months of power usage data and I could see that for the most part the amount of energy the Powerwall can store would be enough for us to avoid having to draw nearly anything whatsoever from the grid*.

* Technically this isn’t strictly true, keep reading to see why.

My thinking was: we’re producing stonking amounts of solar power and feeding it back to the grid at 7c/kWh, but have to buy power from the grid after the sun goes down at 21c/kWh, so every kilowatt-hour we can store and use ourselves instead is effectively worth that 14c difference. Why not store as much of it as possible for use during the night?

The installation was done by the same people who did the solar panels, Penrith Solar Centre, and as before, I cannot recommend them highly enough. Everything was done amazingly neatly and tidily, it all works a treat, and they fully cleaned up after themselves when they were done.

We have 3-phase power, and the solar panels are connected to all three phases (⅓ of the panels are connected individually to each phase), while the Powerwall has only a single-phase inverter so is only connected to one phase. The way it handles everything is quite clever, though: even though it can only discharge on one phase, it has current transformers attached to the other two phases so it can see how much is flowing through them, and it’ll discharge on its own phase an amount equal to the power being drawn on the other two (up to its maximum output of 5kW, anyway) to balance out what’s being used. The end result is that the electricity company sees us feeding in the same amount as we’re drawing, and thanks to the magic of net-metering it all balances out to next to zero! This page on Solar Quotes is a good explanation of how it works.

The other interesting side-effect is that when the sun is shining and the battery is charging, it’s actually pulling power from the grid to charge itself, but only as much as we’re producing from the solar panels. Because the Enphase monitoring system doesn’t know about the battery, it gives us some amusing-looking graphs whereby the morning shows exactly the same amount of consumption as production up until the battery is fully-charged!

We also have the Powerwall’s “Backup Gateway”, which is the smaller white box in the photos at the top of this post. In the event of a blackout, it’ll instantaneously switch over to powering us from the battery, so it’s essentially a UPS for the house! Again, 3-phase complicates this slightly and the Powerwall’s single-phase inverter means that we can only have a single phase backed up, but the lights and all the powerpoints in the house (which includes the fridge) are connected to the backed-up phase. The only things that aren’t backed up are the hot water system, air conditioning, oven, and stove, all of which draw stupendous amounts of power and will quickly drain a battery anyway.

We also can’t charge the battery off the solar panels during a blackout… it is possible to set it up like that, but there needs to be a backup power line going from a third of the solar panels back to the battery, which we didn’t get installed when we had the panels put in in February. There was an “Are you planning on getting a battery in the next six months?” question which we said no to. 😛 If we’d said yes, they would have installed the backup line at the time; it’s still possible to install it now, but at the cost of several thousand dollars because they need to come out and pull the panels up and physically add the wiring. Blackouts are not remotely a concern here anyway, so that’s fine.

In the post back in March, I included three screenshots of the heatmap of our power usage, and the post-solar-installation one had the middle of the day completely black. Spot the point in the graph where we had the battery installed!

We ran out of battery power on the 6th of November because the previous day had been extremely dark and cloudy and we weren’t able to fully charge the battery from the solar panels that day (it was cloudy enough that almost every scrap of solar power we generated went to just powering the house, with next to nothing left over to put into the battery), and the 16th and 17th were both days where it was hot enough that we had the aircon running the whole evening after the sun went down and all night as well.

Powershop’s average daily use graph is pretty funny now as well.

And even more so when you look all the way back to when we first had the smart meter installed, pre-solar!

For monitoring the Powerwall itself, you use Tesla’s very slick app where you can see the power flow in real time. When the battery is actively charging or discharging, there’s an additional line going to or from the Powerwall icon to wherever it’s charging or discharging to or from.

You can’t tell from a screenshot of course, but the dots on the lines connecting the Solar to the Home and Grid icons animate in the direction that the power is flowing.

It also includes some historical graph data, but unfortunately it’s not quite as nice as Enphase’s, and it doesn’t even have a website; you can only view it in the app. There’s a website called PVOutput that you can send your solar data to, and we’ve been doing that via Enphase since we got the solar panels installed, but the Powerwall also has its own local API you can hit to scrape the power usage and flows, and the battery charge percentage. I originally found this Python script to do exactly that, but a) I always struggle to get anything related to Python working, and b) the SQLite database it saves its data into kept intermittently getting corrupted, and the only way I’d know about it was by checking PVOutput and seeing that we hadn’t had any updates for hours.

So, I wrote my own in TypeScript! It saves the data into PostgreSQL, it’s all self-contained in a Docker container, and so far it’s been working a treat. The graphs live here, and to see the power consumption and grid and battery flow details, click on the right-most little square underneath the “Prev Day” and “Next Day” links under the graph. Eventually I’m going to send all this data to my website so I can store it all there, but for the moment PVOutput is working well.
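
For anyone wanting to do something similar, the core of it is just polling a couple of endpoints on the Powerwall’s local API and reading the wattage and charge figures out of the JSON. This is a rough Python sketch of the idea rather than my actual TypeScript; the endpoints and field names are from memory and may differ between firmware versions, and newer firmware also requires logging in first.

import requests

POWERWALL = 'https://192.168.1.20'  # placeholder address for the Powerwall

# The Powerwall's certificate is self-signed, hence verify=False
meters = requests.get(POWERWALL + '/api/meters/aggregates', verify=False).json()
charge = requests.get(POWERWALL + '/api/system_status/soe', verify=False).json()

print('Solar:', meters['solar']['instant_power'], 'W')
print('Home:', meters['load']['instant_power'], 'W')
print('Grid:', meters['site']['instant_power'], 'W')
print('Battery:', meters['battery']['instant_power'], 'W')
print('Charge:', charge['percentage'], '%')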

It also won’t shock anybody to know that I updated my little Raspberry Pi temperature/power display to also include the battery charge and whether it’s charging or discharging (charging has a green upwards arrow next to it, discharging has a red downwards arrow).

My only complaint with the local API is that it’ll randomly become unavailable for periods of time, sometimes up to an hour. I have no idea why, but when this happens the data in the Tesla iPhone app is still being updated properly. It’s not a big deal, and doesn’t actually affect anything with regard to the battery’s functionality.

Overall, we’re exceedingly happy with our purchase, and it’s definitely looking like batteries in general are going to be a significant part of the electrical grid as we move to higher and higher percentages of renewables!

Fixing a Guitar Hero World Tour/Guitar Hero 5 guitar strum bar

Kristina and I had a date night last night in which we ate trashy food and then took the Xbox 360 out of storage and fired up Guitar Hero: Warriors of Rock. It was an excellent time except that my Guitar Hero World Tour guitar had stopped registering downward strums, and only upwards strums worked.

I figured I’d pull it apart today and see what was up, and thanks to this guide I figured it out, and am documenting it here for posterity (my problem wasn’t one of the ones described in that guide, but it was very handy to see how to disassemble the thing in the first place).

Tools needed for disassembly

  • Phillips-head #0 and #1 screwdrivers
  • Torx T9 screwdriver

Process

Firstly the neck needs to be removed, and the “Lock” button at the back towards the base of the guitar set to its unlocked position.

Next, the faceplate needs to be removed. This can be done by just getting a fingernail or a flathead screwdriver underneath either of the top bits of the body, pointed to with an arrow here, and gently prying it away from around the edges.

After that, there are twelve Torx T9 screws to remove, circled in red, and another four Phillips-head #0 ones, marked in green.

Once they’re all out, you can gently separate the front of the guitar where all the electronics live from the back of it.

Next there are four Phillips-head #1 screws to remove to get the circuit board that contains the actual clicky switches away from the strum bar itself. Leave the middle two alone, as they attach the guides for the springs of the strum bar.

After this, it’s a bit of a choose-your-own-adventure, as what you do next really depends on what’s wrong with the strum bar. The switches are on the underside of the circuit board above; it’s definitely worth making sure they both click nice and solidly when you press on them directly. If they don’t, it’s apparently possible to source exact replacements (“275-016 SPDT Submini Lever Switch 5A at 125/250VAC”) and fix it with a bit of soldering, but thankfully this wasn’t necessary in my case.

In the next image, undoing the Phillips-head #1 screws circled in blue will allow you to take the strum bar assembly itself out and re-lubricate it (don’t use WD40, use actual proper lubricating grease) so it rocks back and forth a bit more smoothly. Another improvement you can make is adding a couple of layers of electrical tape to the areas I’ve circled in red. They’re where the strum bar physically hits the inside of the case, and the electrical tape dampens the noise a bit.

What the strum bar problem ultimately ended up being in my case is that the middle indented section, where the switch rests against the strum bar to register a downstroke, had actually worn away and could no longer press the switch in far enough to click. My solution, circled in green, was to chop a tiny piece of plastic from a Warhammer 40,000 miniature sprue and glue it—with the same plastic glue I use for assembling plastic miniatures—to the strum bar. Then I reassembled everything and it’s as good as new!

More space: the Pimoroni HyperPixel4 display on a Raspberry Pi Zero W

Back at the start of 2018 I blogged about my Raspberry Pi temperature display setup and it’s been pretty excellent and utterly reliable since then, but because of its small size — the display is only 2 inches — it wasn’t particularly visible from across the room. That, combined with the discovery that the Envoy power consumption monitoring system we had installed with the solar panels has a locally-accessible API that you can use to get real-time production and consumption data (which lives at http://<ip-of-the-envoy-box>/production.json?details=1), made me start looking into larger displays so I could include both temperature/humidity data and our power consumption.
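
(As an aside, that Envoy endpoint is delightfully easy to poke at. Here’s a quick sketch of pulling the current figures out of it; the wNow field names are from memory and might differ between Envoy firmware versions.)

import requests

ENVOY = 'http://192.168.1.30'  # placeholder for <ip-of-the-envoy-box>

data = requests.get(ENVOY + '/production.json?details=1').json()
print('Producing:', data['production'][0]['wNow'], 'W')
print('Consuming:', data['consumption'][0]['wNow'], 'W')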

My first port of call was the 2.7-inch version of the 2-inch PaPiRus e-ink display I was already using. I ordered it on the 6th of April and then… nothing showed up. I’d assumed the PaPiRus was MIA and had instead ordered a 4-inch, 800×480-pixel display in the form of Pimoroni’s HyperPixel4 (the non-touch version). The Raspberry Pi registers it as a regular display, so you run a full desktop environment on it rather than drawing to it directly the way the PaPiRus works.

Of course, about a week after ordering the HyperPixel4, the PaPiRus finally arrived! The 2.7-inch version of the PaPiRus is 264 pixels wide by 176 pixels high, so not exactly high-resolution. There’s actually quite a lot of freedom to tweak the position of the elements on screen pixel-by-pixel, but I quickly discovered that that’s extremely tedious when doing it directly on the Raspberry Pi itself, because it takes several seconds to contact the required endpoints to pull in the data and then refresh the whole display. As well as writing text, the display can also show (1-bit) bitmap images, so I decided to change tack: instead of using the PaPiRus’s text API, I wrote a probably-slightly-overengineered Node.js application that would run on the Raspberry Pi 4B, fetch the data from the outdoor and indoor sensors as well as the Envoy, use the JavaScript Canvas API to lay everything out, and then convert it to a bitmap image that the Python script on the Pi Zero W would fetch every minute and update the display with.

The biggest advantage of this system is that I could run it locally on my regular computer to quickly tweak the positioning without having to wait for the PaPiRus display to refresh each time, and I set it up so I could invert the colours to be white on black instead so I could clearly see the boundaries of the canvas. I put the code up on GitHub if anyone is interested in poking through it, and the end result looks like this:

Having over-engineered my Node.js solution, the HyperPixel4 display arrived maybe a couple of weeks later! It’s extremely slick-looking, but unfortunately the little plastic nubs that are meant to hold the screen in place aren’t actually big enough to do so, and I managed to have the display pop out and crack some of the wires that feed it, which caused all sorts of display weirdness. I emailed Pimoroni about it and they were super nice and helpful and sent me out a replacement display, no questions asked! While I was waiting for the new one to arrive, the old broken one was partially working enough that I could at least get everything up and running how I wanted it, anyway.

Because using the HyperPixel is the same as if you’d hooked up an HDMI display and were using the Pi as a regular computer, I started from the full-blown Raspbian desktop image, not the Lite one. It was relatively straightforward to get everything going (mostly just installing and configuring the driver from Pimoroni’s GitHub repository), but there were some additional things I needed to do to get everything working as I wanted. I settled on a Node.js backend and React frontend (the separate backend was necessary because of CORS; I couldn’t hit the Envoy URL directly from the browser on the Pi, so I have the Node.js backend pull in the data and then feed it to the React app), both of which are running in a Docker image on the Raspberry Pi 4B.

  • By default the HyperPixel4 runs at full brightness, so I followed this to turn it way down, and also to set up a cron job to entirely turn the display off at midnight and turn it back on at 8am.
  • To get the Pi to open Chromium full-screen on boot, I followed these instructions.
  • To disable the annoying “Restore pages” dialog in Chromium, this on the Raspberry Pi Stack Exchange was helpful.
  • Raspbian comes by default with a VNC server installed, just not enabled. To enable it and allow access directly from macOS’s “Connect to Server” dialog in the Finder:
    • Run sudo raspi-config, go to Interface Options > VNC and enable it.
    • Run vncpasswd -service to set a VNC password (note if it’s longer than eight characters, only the first eight are used when connecting).
    • Create the file /etc/vnc/config.d/common.custom with the contents: Authentication=VncAuth
    • Then restart the VNC service with sudo systemctl restart vncserver-x11-serviced
  • And lastly, to disable the Pi from turning the screen off after activity, I followed these steps.

My ~/.config/lxsession/LXDE-pi/autostart ultimately ended up looking like this:

@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
point-rpi
@chromium-browser --start-fullscreen --start-maximized --app=http://fourbee:3003
@xset s off
@xset -dpms 
@xset s noblank
@sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 135000

And the whole setup looks like this:

A photo of a small LCD display showing outdoor and indoor temperature and current power consumption and production. The text is white on black.

It’s quite the improvement in visibility and I can easily read it from all the way in the kitchen! It updates itself automatically every 30 seconds, and there’s no e-ink full-display-refresh screen-blanking when it does.

Digital archeology: recovering ClarisWorks drawing files

Three years ago I posted about how I’d gone back and recovered all my old websites I’d published over the years and packed them up into a Docker image, and last year I idly mused that I should go back and recover the multitude of websites that I’d designed but never actually uploaded anywhere. I finally got around to doing that over the weekend, and they’re all up on archive.virtualwolf.org! Some are the original HTML source, some are just the Photoshop mockups, but it now contains almost the sum total of every single website I’ve created (and there’s a lot of them). The only one missing is the very very first one… The Dire Marsh news updates are from early 1998, but I’d copied most of the layout from the previous site, as evidenced by the (broken) visitor counter at the left that says “<number> half-crazed Myth fanatics have visited this site since 21/12/97”.

Prior to building way too many websites, I’d been introduced to the Warhammer 40,000 and Dune universes when I was 13 and had immediately proceeded to totally rip them off (er, “get inspired”) and write my own little fictional universe along the same lines. This was all in 1996 and very early 1997, and I even still have all the old files sitting in my home folder with the original creation dates and everything, but didn’t have anything that could open them, as they were a combination of ancient Microsoft Word writings — old enough that Pages didn’t recognise them — and ClarisWorks drawing documents (ClarisWorks had a vector-based drawing component as well as word processing). I ended up going down quite the rabbit hole in getting set up to bring them forwards into a modern readable format, and figured I’d document it here in case it helps anyone in future.

Running Mac OS 9 with SheepShaver

The very first hurdle was getting access to Mac OS 9 to begin with. I originally started out with my Power Mac G4 that I’ve posted about previously but unfortunately it seems like the power supply is on the way out, and it kept shutting down (people have apparently had success resurrecting these machines using ATX power supplies but I haven’t had a chance to look into it yet). Fortunately, there’s a Mac OS 9 emulator called SheepShaver that came to the rescue.

  1. Download the latest SheepShaver and the “SheepShaver folder” zip file from the emaculation forums.
  2. You need an official “Mac OS ROM” file that’s come from a real Mac or been extracted from the installer. Download the full New World ROMs archive from Macintosh Repository, extract it, rename the 1998-07-21 - Mac OS ROM 1.1.rom file to Mac OS ROM and drop it into the SheepShaver folder.
  3. Download the Mac OS 9.0.4 installer image from Macintosh Repository (SheepShaver doesn’t work with anything newer).
  4. Follow the SheepShaver setup guide to install Mac OS 9 and set up a shared directory with your Mac. Notes:
    • It defaults to assigning 16MB of RAM to the created virtual machine, be sure to increase it to something more than 32MB.
    • Disable the “Update hard disk drivers” box in the Options sections of the Mac OS 9 installer or the installer will hang (this is mentioned in the setup guide but I managed to miss it the first time around).
    • When copying files from the shared directory, copy them onto the Macintosh HD inside Mac OS 9 directly, not just the Desktop, or StuffIt Expander will have problems decompressing files.

Recovering ClarisWorks files

This was the bulk of the rabbit hole, and if you’re running macOS 10.15, you’ve got some additional rabbit hole to crawl through because the software needed to pull the ClarisWorks drawing documents into the modern era, EazyDraw Retro (scroll down to the bottom of the page to find the download link), is 32-bit only which means it doesn’t run under 10.15, only 10.14 and earlier.

Step 1: Convert ClarisWorks files to AppleWorks 6

  1. Download the archive of QuickTime installers and install QuickTime 4.1.2, which is required to install AppleWorks 6.
  2. Download the AppleWorks 6 installer CD image (it has to be added in SheepShaver’s preferences as a CD-ROM device) and install it.
  3. Open each of the ClarisWorks documents in AppleWorks; you’ll get a prompt saying “This document was created by a previous version of AppleWorks. A copy will be created and ‘[v6.0]’ will be added to the filename”. Click OK and save the copy back onto the shared SheepShaver drive with a .cwk file extension.

Step 2: Install macOS 10.14 inside a virtual machine

This entire step can be skipped if you haven’t upgraded to macOS 10.15 yet as EazyDraw Retro can be run directly.

Installing 10.14 inside a virtual machine requires a bootable disk image of the installer, so that needs to be created first.

  1. Download DosDude1’s Mojave patcher and run it (you’ll likely need to right-click on the application and choose Open because Gatekeeper will complain that the file isn’t signed).
  2. Go into the Tools menu and choose “Download macOS Mojave” to download the installer package, save it into your Downloads folder.
  3. Open Terminal.app and create a bootable Mojave image with the following commands:
    1. hdiutil create -o ~/Downloads/Mojave -size 8g -layout SPUD -fs HFS+J -type SPARSE
    2. hdiutil attach ~/Downloads/Mojave.sparseimage -noverify -mountpoint /Volumes/install_build
    3. sudo ~/Downloads/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/install_build
    4. hdiutil detach /Volumes/Install\ macOS\ Mojave
    5. hdiutil convert ~/Downloads/Mojave.sparseimage -format UDTO -o ~/Downloads/Mojave\ Bootable\ Image
    6. mv ~/Downloads/Mojave\ Bootable\ Image.cdr ~/Downloads/Mojave\ Bootable\ Image.iso

Once you’ve got the disk image, fire up your favoured virtual machine software and install Mojave in it.

Step 3: Convert AppleWorks 6 files to a modern format

The final part to this whole saga is the software EazyDraw Retro which can be downloaded from their Support page. It has to be the Retro version because the current one doesn’t support opening AppleWorks documents (I’m guessing whatever library they’re using internally for this is 32-bit-only and can’t be updated to run on Catalina or newer OSes going forwards, so they dropped it in new versions of the software). It can export to a variety of formats, and has its own .eazydraw format that the non-Retro version can open.

Unfortunately EazyDraw isn’t free, but you can get a temporary nine-month license for US$20 (or pay full price for a non-expiring license if you’re going to be using it for anything other than this). It worked an absolute treat though: it was able to import every one of my converted AppleWorks 6 documents, and I saved them all out as PDFs. There were a few minor tweaks required to some of the text boxes, because the fonts were different between the original ClarisWorks document and the AppleWorks one and there were some overlaps between text and lines, but that was noticeable as soon as I’d opened them in AppleWorks and wasn’t the fault of EazyDraw’s conversions.

Converting Aldus SuperPaint files

There were only two of my illustration files that were done in anything but ClarisWorks, and they were from Aldus SuperPaint. Version 3.5 is available from Macintosh Repository, and pleasingly it’s able to export straight to TIFF, which I could then convert to PNG under current macOS. There were some minor tweaks required there as well, but it was otherwise quite straightforward.

Converting Microsoft Word files

All my non-illustration text documents were written with Microsoft Word 5.1 or 6, but the format they use is old enough that Pages under current macOS doesn’t recognise it. I wouldn’t be surprised if the current Word from Office 365 could open them, but I don’t have it so I went the route of downloading Word 6 from Macintosh Repository which can export directly out to RTF. TextEdit under macOS opens them fine and from there I saved them out as PDF.

History preserved!

Following the convoluted process above, I was able to convert all my old files to PDF and have chucked them into the Docker image at archive.virtualwolf.org as well (start at the What, even more rubbish? section), so you can marvel at my terrible fan fiction world-building skills!

I’m not deluding myself into thinking that this is any sort of valuable historical record, but it’s my record and as with the websites, it’s fun to look back on the things I’ve done from the past.

Installing OpenWRT on a Netgear D7800 (Nighthawk X4S) router

I had blogged back in October of last year about setting up DNS over HTTPS, and it’s been very reliable, except when I’ve had to run Software Update on the Mac mini to pick up security updates: while it’s restarting, all of our DNS resolution stops working! I’d come across OpenWRT a while back, which is an open-source and very extensible firmware for a whole variety of different routers, but I did a bunch of searching and hadn’t come across any reports of people fully successfully using it on our specific router, the Netgear D7800 (also known as the Nighthawk X4S), just people having various problems. One of the reasons I was interested in OpenWRT was that it’s Linux-based and extensible, so I’d be able to move the DHCP and DNS functionality off the Mac mini back onto the router where it belongs, and in theory bring the encrypted DNS over as well.

I finally bit the bullet and decided to give installing it a go today, and it was surprisingly easy. I figured I’d document it here for posterity and in the hopes that it’ll help someone else out in the same position as I was.

Important note: The DSL/VDSL modem in the X4S is not supported under OpenWRT!

Installation

  1. Download the firmware file from the “Firmware OpenWrt Install URL” (not the Upgrade URL) on the D7800’s entry on OpenWRT.org.
  2. Make sure you have a TFTP client, macOS comes with the built-in tftp command line tool. This is used to transfer the firmware image to the router.
  3. Unplug everything from the router except power and the ethernet cable for the machine you’ll be using to install OpenWRT from (this can’t be done wirelessly).
  4. Set your machine to have a static IP address in the range of 192.168.1.something. The router will be .1.
  5. Reset the router back to factory settings by holding the reset button on the back of it in until the light starts flashing.
  6. Once it’s fully started up, turn it off entirely, hold the reset button in again and while still holding the button in, turn the router back on.
  7. Keep the reset button held in until the power light starts flashing white.

Now the OpenWRT firmware file needs to be transferred to the router via TFTP. Run tftp -e 192.168.1.1 (-e turns on binary mode), then put <path to the firmware file>. It’ll transfer the file and then install it and reboot, this will take several minutes.

Once it’s up and running, the OpenWRT interface will be accessible at http://192.168.1.1, with a username of root and no password. Set a password then follow the quick-start guide to turn on and secure the wifi radios — they’re off by default.

Additional dnsmasq configuration and DNS-over-TLS

I mentioned in my DNS-over-HTTPS post that I’d also set up dnsmasq to do local machine name resolution; this is very trivially done in OpenWRT by going to Network > DHCP and DNS, putting the MAC address, desired IP, and machine name under the Static Leases section, then hitting Save & Apply.

The other part I wanted to replicate was having my DNS queries encrypted. In OpenWRT this isn’t easily possible with DNS-over-HTTPS, but it is when using DNS-over-TLS, which gets you to the same end-state. It requires installing Stubby, a DNS stub resolver that forwards DNS queries on to Cloudflare’s DNS.

  1. On the router, go to System > Software, install stubby.
  2. Go to System > Startup, ensure Stubby is listed as Enabled so it starts at boot.
  3. Go to Network > DHCP and DNS, and under “DNS Forwardings” enter 127.0.0.1#5453 so dnsmasq will forward DNS queries on to Stubby, which in turn reaches out to Cloudflare; Cloudflare’s DNS servers are configured by default. Stubby’s configuration can be viewed at /etc/config/stubby.
  4. Under the “Resolv and Hosts Files” tab, tick the “Ignore resolve file” box.
  5. Click Save & Apply.

Many thanks to Craig Andrews for his blog post on this subject!

Quality of Service (QoS)

The last thing I wanted to set up was QoS, which allows for prioritisation of traffic when your link is saturated. This was pretty straightforward as well, and just involved installing the luci-app-sqm package and following the official OpenWRT page to configure it!

Ongoing findings

I’ll update this section as I come across other little tweaks and changes I’ve needed to make.

Plex local access

We use Plex on the Xbox One as our media player (Plex Media Server runs on the Mac mini), and I found that after installing OpenWRT on the router, the Plex client on the Xbox couldn’t find the server anymore despite being on the same LAN. I found a fix on Plex’s forums, which is to go to Network > DHCP and DNS and add the domain plex.direct to the “Domain whitelist” field for the Rebind Protection setting.

Xbox Live and Plex Remote Access (January 2020)

Xbox Live is quite picky about its NAT settings and requires UPnP to be enabled, or you can end up with issues with voice chat or gameplay in multiplayer, and similarly Plex’s Remote Access requires UPnP as well. This isn’t provided by default with OpenWRT but can be installed with the luci-app-upnp package, and the configuration shows up under Services > UPnP in the top navbar. It doesn’t start by default, so tick the “Start UPnP and NAT-PMP service” and “Enable UPnP” boxes, then click Save & Apply.

Upgrading to a new major release (February 2020)

When I originally wrote this post I was running OpenWRT 18.06, and now that 19.07 has come out I figured I’d upgrade, and it was surprisingly straightforward!

  1. Connect to the router via ethernet, make sure your network interface is set to use DHCP.
  2. Log into the OpenWRT interface and go to System > Backup/Flash Firmware and generate a backup of the configuration files.
  3. Go to the device page on openwrt.org and download the “Firmware OpenWrt Upgrade” image (not the “Firmware OpenWrt Install” one).
  4. Go back to System > Backup/Flash Firmware, choose “Flash image” and select your newly-downloaded image.
  5. In the next screen, make sure “Keep settings and retain the current configuration” is not ticked and continue.
  6. Wait for the router light to stop flashing, then renew your DHCP lease (assuming you’d set it up to be something other than 192.168.1.x like I did).
  7. Log back into the router at http://192.168.1.1 and re-set your root password.
  8. Go back to System > Backup/Flash Firmware and restore the backup of the settings you made (then renew your DHCP lease again if you’d changed the default range).

I had a couple of conflicts with files in /etc/config between my configuration and the new default file, so I SSHed in and manually checked through them to see how they differed and updated them as necessary. After that it was just a case of re-installing the luci-app-sqm, luci-app-upnp, and stubby packages, and I was back in business!