More fun with temperature sensors: ESP32 microcontrollers and MicroPython

I’ve blogged previously about our temperature/humidity sensor setup and how the sensors are attached to my Raspberry Pis, and they’ve been absolutely rock-solid in the three-and-a-half years since then. A few months ago, a colleague at work mentioned doing some stuff with an ESP32 microcontroller, and just recently I decided to actually look up what that was and what one can do with it, because it sounded like it might be a fun new project to play with!

From Wikipedia: ESP32 is a series of low-cost, low-power system on a chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth.

So it’s essentially a tiny single-purpose computer that you write code for and then flash that code onto the board, rather than running an entire Linux OS like the Raspberry Pi does. It runs at a blazing-fast 240MHz and has 320KB of RAM. The biggest draw for me was the built-in wifi, so I could do networked stuff easily. There are a ton of different boards and options and it was all a bit overwhelming, but I ended up getting two of Adafruit’s HUZZAH32s, which come with the headers for attaching the temperature sensors already soldered on. Additionally, they have 520KB of RAM and 4MB of storage.

Next up, I needed to find out how to actually program the thing. Ordinarily you’d write in C like with an Arduino and I wasn’t too keen on that, but it turns out there’s a distribution of Python called MicroPython that’s written explicitly for embedded microcontrollers like the ESP32. I’ve never really done much with Python before, because the utter tyre fire that is the dependency/environment management always put me off (this xkcd comic is extremely relevant). However, with MicroPython on the ESP32 I wouldn’t have to deal with any of that; I’d just write the Python and upload it to the board! Additionally, it turns out MicroPython has built-in support for the DHT22 temperature/humidity sensor that I’ve already been using with the Raspberry Pis. Score!

There was a lot of searching over many different websites trying to find how to get all this going, so I’m including it all here in the hopes that maybe it’ll help somebody else in future.

Installing MicroPython

At least on macOS, first you need to install the USB to UART driver or your ESP32 won’t even be recognised. Grab it from Silicon Labs’ website and get it installed.

Once that’s done, follow the Getting Started page on the MicroPython website to flash the ESP32 with MicroPython, substituting /dev/ttyUSB0 in the commands for /dev/tty.SLAB_USBtoUART.

Using MicroPython

With MicroPython, there are two files that are always executed when the board starts up: boot.py, which is run once at boot time and is generally where you’d put your connect-to-the-wifi-network code, and main.py, which runs after boot.py and is generally the entry point to your code. To get these files onto the board, you can use a command-line tool called ampy, but it’s a bit clunky and also no longer supported.
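For flavour, the usual boot.py wifi dance looks something like the sketch below. This is MicroPython-only code (the network module exists on the board, not in desktop Python), and the SSID and password are placeholders:

```python
# boot.py: runs once at power-on, before main.py.
# MicroPython-only sketch; SSID/password are placeholders.
import network
import time

wlan = network.WLAN(network.STA_IF)  # station (client) mode
wlan.active(True)
if not wlan.isconnected():
    wlan.connect("my-ssid", "my-password")
    # Block until the connection comes up
    while not wlan.isconnected():
        time.sleep(1)
print("network config:", wlan.ifconfig())
```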

However, there is a better way!

Setting up the development environment

There are two additional tools that make writing your Python code in Visual Studio Code and uploading to the ESP32 an absolute breeze.

The first one is micropy-cli, which is a command-line tool to generate the skeleton of a VSCode project and set it up for full autocompletion and Intellisense of your MicroPython code. Make sure you add the ESP32 stubs first before creating a new micropy project.

The second is a VSCode extension called Pymakr. It gives you a terminal to connect directly to the board and run commands and read output, and also gives you a one-click button to upload your fresh code, and it’s smart enough not to re-upload files that haven’t changed.

There were a couple of issues I ran into when trying to get Pymakr to recognise the ESP32 though. To fix them, bring up the VSCode command palette with Cmd-Shift-P and find “Pymakr > Global Settings”. Update the address field from the default IP address to /dev/tty.SLAB_USBtoUART, and edit the autoconnect_comport_manufacturers array to add Silicon Labs.

Replacing the Raspberry Pis with ESP32s

After I had all of that set up and working, it was time to start coding! As I mentioned earlier, I’ve not really done any Python before, so it was quite the learning experience. It took a good few weeks of coding and learning and iterating, but in the end I fully replicated my Pi Sensor Reader setup with the ESP32s, with some additional bits besides.

One of the things my existing Pi Sensor Reader setup did was run a local webserver so I could periodically hit the Pi and display the data elsewhere. Under Node.js this is extremely easy to accomplish with Express, but with MicroPython the options were more limited. There are a number of little web frameworks that people have written for it, but they all seemed like overkill.

I decided to just use raw sockets and write my own. One thing I didn’t appreciate until this point was how Node.js’s everything-is-asynchronous-and-non-blocking model makes this kind of thing very easy: you don’t have to worry about a long-running function causing everything else to grind to a halt while it waits to finish. Python has a thing called asyncio, but I was struggling to get my head around how to use it for the webserver part of things until I stumbled across this extremely helpful repository where someone had shown an example of how to do exactly that! (I even ended up making a pull request to fix an issue I discovered with it, which I’m pretty stoked about.)
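The general shape is something like this minimal sketch: one coroutine per connection, a hand-rolled request-line parser, and a plain response. (This is standard desktop Python with a hypothetical ROUTES table, not my actual on-board code; MicroPython’s asyncio differs in a few small spots, but the idea is the same.)

```python
import asyncio

# Hypothetical routing table; the real endpoints are whatever you define.
ROUTES = {"/": "hello", "/log": "(log contents would go here)"}

async def handle(reader, writer):
    # Read the request line, e.g. b"GET /log HTTP/1.1"
    request_line = await reader.readline()
    try:
        method, path, _ = request_line.decode().split()
    except ValueError:
        writer.close()
        return
    # Drain the remaining headers until the blank line
    while await reader.readline() not in (b"\r\n", b"\n", b""):
        pass
    body = ROUTES.get(path)
    status = "200 OK" if body is not None else "404 Not Found"
    response = "HTTP/1.0 {}\r\nContent-Type: text/plain\r\n\r\n{}".format(
        status, body if body is not None else "not found")
    writer.write(response.encode())
    await writer.drain()
    writer.close()

async def serve():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

# On the board you'd kick this off with asyncio.run(serve())
```

Because each connection is a coroutine, a slow sensor read elsewhere in the program doesn’t block the server, which was the whole point of going asyncio.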

One of the things I most wanted to do was to have some sort of log file accessible in case of errors. With the Raspberry Pi I can just SSH in and check the Docker logs, but once the ESP32s were plugged into power and running, there’s no easy equivalent. I ended up giving the webserver several endpoints to read the log, clear it, reset the board, and view and clear the queue of failed updates.

The whole thing has been uploaded to GitHub with a proper README explaining how it works. The boards have been running connected to the actual indoor and outdoor temperature sensors and posting data to my website for just under a week now, and they’ve been absolutely flawless!

(Update October 2021: The dodgy HTTP setup described in this post has been replaced by a much more elegant MQTT one, and all my development efforts have been put towards the MQTT version of my sensor reader code.)

More Raspberry Pi-powered monitoring: air quality!

Here in New South Wales, last year’s bushfires over late spring and into summer were astoundingly bad, and there were days where Sydney had the poorest air quality on the entire planet. Everyone was watching the PM2.5 values, and there were days where Kristina couldn’t go outside because of her asthma. I figured it’d be neat to set up a Raspberry Pi-powered air quality sensor and had ordered the sensor back in February but didn’t get around to putting it into service until now.

This is the bit that lives inside so we can easily see the latest reading:

A small 4" LCD display showing the air quality values for PM1.0, PM2.5, and PM10.

It uses the same sort of setup as my Pimoroni display, and I updated my pi-home-dashboard to add a second page to display the values from the air quality reader.

The sensor itself is a Plantower PMS5003 sensor and is attached to the same Raspberry Pi that the outdoor temperature sensor is on. Adafruit’s instructions on getting it set up were pretty straightforward, and they also give some sample code for how to read it, but it’s in Python which I intensely dislike (I don’t really even have any strong feelings about the language itself one way or the other, but I’ve never had a good experience with the damn package management around it, so I do my damnedest to avoid it). I was able to write the same logic in TypeScript instead — though had to consult the clever people on Ars Technica because parsing the output from the sensor involves things like bit-shifting which is quite low-level and something I’m utterly unfamiliar with — and chucked the whole thing up on GitHub. It takes ten readings and averages them, and has an HTTP endpoint for pulling the latest values.
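For the curious, the low-level part is mostly just reassembling big-endian 16-bit values out of the sensor’s 32-byte frames and verifying a checksum. Here’s a sketch in Python (my actual code is TypeScript, and the field layout follows Plantower’s datasheet as I understand it, so double-check against the real thing):

```python
import struct

def parse_pms5003_frame(frame):
    # A PMS5003 frame is 32 bytes: 0x42 0x4D header, a 16-bit frame
    # length, thirteen big-endian 16-bit data values, then a 16-bit
    # checksum that is the sum of every preceding byte.
    if len(frame) != 32 or frame[0:2] != b"\x42\x4d":
        raise ValueError("not a PMS5003 frame")
    values = struct.unpack(">15H", frame[2:])  # length + 13 data + checksum
    if values[-1] != sum(frame[:30]):
        raise ValueError("bad checksum")
    # values[0] is the frame length; values[1:7] are the PM readings
    return {
        "pm1.0": values[1], "pm2.5": values[2], "pm10": values[3],   # "standard"
        "pm1.0_env": values[4], "pm2.5_env": values[5], "pm10_env": values[6],
    }
```

The remaining data values are particle counts per size bucket, which I don’t use for the dashboard.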

I’ve set the front-end up so the colour of the numbers will change to orange and red depending on how bad the air quality is, but hopefully it’s a long while before we actually see that in action!

Powering our house with a Tesla Powerwall 2 battery

I posted back in March about our shiny new solar panels and efforts to reduce our power usage, and as of two weeks ago our net electricity grid power usage is next to zero thanks to a fancy new Tesla Powerwall 2 battery!

A photo of a white Tesla Powerwall 2 battery and Backup Gateway mounted against a red brick wall inside our garage.
A side-on view of a white Tesla Powerwall 2 battery mounted against a red brick wall.

We originally weren’t planning on getting a battery back when we got our solar panels — and to be honest they still don’t make financial sense in terms of a return on investment — but we had nine months of power usage data, and I could see that for the most part the amount of energy the Powerwall can store would be enough for us to avoid having to draw nearly anything whatsoever from the grid*.

* Technically this isn’t strictly true, keep reading to see why.

My thinking was, we’re producing stonking amounts of solar power and are feeding it back to the grid at 7c/kWh, but have to buy power from the grid after the sun goes down at 21c/kWh. Why not store as much as possible of that for use during the night?
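Back-of-the-envelope, every kWh shifted from evening grid power to stored solar is worth the difference between the two tariffs (the daily evening usage figure below is just an assumption for illustration):

```python
feed_in_tariff = 0.07   # $/kWh earned exporting to the grid
grid_price = 0.21       # $/kWh paid after the sun goes down
evening_usage_kwh = 8   # assumed evening usage, for illustration only

# Each shifted kWh saves the spread between buying and exporting
saving_per_day = evening_usage_kwh * (grid_price - feed_in_tariff)
print("saved per day: ${:.2f}".format(saving_per_day))  # prints: saved per day: $1.12
```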

The installation was done by the same people who did the solar panels, Penrith Solar Centre, and as before, I cannot recommend them highly enough. Everything was done amazingly neatly and tidily, it all works a treat, and they fully cleaned up after themselves when they were done.

We have 3-phase power, and the solar panels are connected to all three phases (⅓ of the panels on each phase), but the Powerwall has only a single-phase inverter so it’s connected to just one phase. The way it handles this is quite clever: even though it can only discharge on one phase, it has current transformers attached to the other two phases so it can see how much power is flowing through them, and it discharges on its own phase an amount equal to what’s being drawn on the other two (up to its maximum output of 5kW, anyway) to balance out what’s being used. The end result is that the electricity company sees us feeding in the same amount as we’re drawing, and thanks to the magic of net-metering it all balances out to next to zero! This page on Solar Quotes is a good explanation of how it works.
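A toy model makes the idea concrete (numbers in kW, and very much a simplification of what the Powerwall actually does):

```python
def powerwall_discharge_kw(phase_a_kw, phase_b_kw, phase_c_kw, max_output_kw=5.0):
    """Toy model of a single-phase Powerwall on a 3-phase house.

    It can only discharge on its own phase (A here), but current
    transformers on phases B and C let it see their draw, so it pushes
    out enough on phase A to offset the whole house, capped at 5kW.
    The excess exported on phase A cancels the imports on B and C
    under net-metering.
    """
    total_draw = phase_a_kw + phase_b_kw + phase_c_kw
    return min(total_draw, max_output_kw)
```

For example, draws of 1/2/1 kW mean a 4kW discharge: 1kW covers phase A directly, and the 3kW exported on A cancels the 3kW imported on B and C. Draws of 3/3/3 kW hit the 5kW cap, so 4kW still comes from the grid on net.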

The other interesting side-effect is that when the sun is shining and the battery is charging, it’s actually pulling power from the grid to charge itself, but only as much as we’re producing from the solar panels. Because the Enphase monitoring system doesn’t know about the battery, it gives us some amusing-looking graphs where the morning shows exactly the same amount of consumption as production, right up until the battery is fully charged!

We also have the Powerwall’s “Backup Gateway”, which is the smaller white box in the photos at the top of this post. In the event of a blackout, it’ll instantaneously switch over to powering us from the battery, so it’s essentially a UPS for the house! Again, 3-phase complicates this slightly and the Powerwall’s single-phase inverter means that we can only have a single phase backed up, but the lights and all the powerpoints in the house (which includes the fridge) are connected to the backed-up phase. The only things that aren’t backed up are the hot water system, air conditioning, oven, and stove, all of which draw stupendous amounts of power and will quickly drain a battery anyway.

We also can’t charge the battery off the solar panels during a blackout… it is possible to set it up like that, but there needs to be a backup power line running from a third of the solar panels back to the battery, which we didn’t get installed when we had the panels put in in February. There was an “Are you planning on getting a battery in the next six months?” question which we said no to. 😛 If we’d said yes, they would have installed the backup line at the time; it’s still possible to install it now, but at the cost of several thousand dollars, because they need to come out, pull the panels up, and physically add the wiring. Blackouts are not remotely a concern here anyway, so that’s fine.

In the post back in March, I included three screenshots of the heatmap of our power usage, and the post-solar-installation one had the middle of the day completely black. Spot the point in the graph where we had the battery installed!

We ran out of battery power on the 6th of November because the previous day had been extremely dark and cloudy and we weren’t able to fully charge the battery from the solar panels that day (it was cloudy enough that almost every scrap of solar power we generated went to just powering the house, with next to nothing left over to put into the battery), and the 16th and 17th were both days where it was hot enough that we had the aircon running the whole evening after the sun went down and all night as well.

Powershop’s average daily use graph is pretty funny now as well.

And even more so when you look all the way back to when we first had the smart meter installed, pre-solar!

For monitoring the Powerwall itself, you use Tesla’s very slick app, where you can see the power flow in real time. When the battery is actively charging or discharging, there’s an additional line running between the Powerwall icon and wherever the power is coming from or going to.

You can’t tell from a screenshot of course, but the dots on the lines connecting the Solar to the Home and Grid icons animate in the direction that the power is flowing.

It also includes some historical graph data, but unfortunately it’s not quite as nice as Enphase’s, and there isn’t even a website; you can only view it in the app. There’s a website called PVOutput that you can send your solar data to, and we’ve been doing that via Enphase since we got the solar panels installed, but the Powerwall also has its own local API you can hit to scrape the power usage and flows, and the battery charge percentage. I originally found this Python script to do exactly that, but a) I always struggle to get anything related to Python working, and b) the SQLite database that it saves its data into kept intermittently getting corrupted, and the only way I’d know about it was by checking PVOutput and seeing that we hadn’t had any updates for hours.
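The scraping itself mostly boils down to polling a couple of JSON endpoints and pulling out a handful of numbers. A sketch (the endpoint paths and JSON shape here are the community-documented ones, so treat them as assumptions rather than an official API):

```python
# Community-documented Powerwall local API endpoints (assumptions):
#   /api/meters/aggregates  -> power flows (site/battery/load/solar)
#   /api/system_status/soe  -> battery charge percentage

def extract_flows(aggregates, soe):
    # Pull out the instantaneous watts for each flow, plus charge %.
    return {
        "solar_w": aggregates["solar"]["instant_power"],
        "grid_w": aggregates["site"]["instant_power"],      # +import / -export
        "load_w": aggregates["load"]["instant_power"],      # house consumption
        "battery_w": aggregates["battery"]["instant_power"],
        "charge_pct": soe["percentage"],
    }

# Sample payloads in the assumed shape, for illustration
sample_aggregates = {
    "solar": {"instant_power": 4200.0},
    "site": {"instant_power": -3100.0},
    "load": {"instant_power": 900.0},
    "battery": {"instant_power": 200.0},
}
flows = extract_flows(sample_aggregates, {"percentage": 87.5})
```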

So, I wrote my own in TypeScript! It saves the data into PostgreSQL, it’s all self-contained in a Docker container, and so far it’s been working a treat. The graphs live here, and to see the power consumption and grid and battery flow details, click on the right-most little square underneath the “Prev Day” and “Next Day” links under the graph. Eventually I’m going to send all this data to my website so I can store it all there, but for the moment PVOutput is working well.

It also won’t shock anybody to know that I updated my little Raspberry Pi temperature/power display to also include the battery charge and whether it’s charging or discharging (charging has a green upwards arrow next to it, discharging has a red downwards arrow).

My only complaint with the local API is that it’ll randomly become unavailable for periods of time, sometimes up to an hour. I have no idea why, but when this happens the data in the Tesla iPhone app itself is still being updated properly. It’s not a big deal, and doesn’t actually affect anything with regard to the battery’s functionality.

Overall, we’re exceedingly happy with our purchase, and it’s definitely looking like batteries in general are going to be a significant part of the electrical grid as we move to higher and higher percentages of renewables!

Visualising Git repository histories with Gource and ffmpeg

First, a disclaimer: this is entirely based on a blog post from a co-worker on our internal Confluence instance and I didn’t come up with any of it. 😛

Gource is an extremely cool tool for visualising the history of a Git repository (and other source control tools) via commits, and it builds up an animated tree view. When combined with ffmpeg you can generate a video of that history!

On a Mac, install Gource and ffmpeg with Homebrew:

$ brew install gource ffmpeg

Then cd into the repository you’re visualising, and let ‘er rip!

$ gource -1280x720 \
    --stop-at-end \
    --seconds-per-day 0.2 \
    -a 1 \
    -o - \
    | ffmpeg -y \
    -r 60 \
    -f image2pipe \
    -vcodec ppm \
    -i - \
    -vcodec libx264 \
    -preset fast \
    -crf 18 \
    -threads 0 \
    -bf 0 \
    -pix_fmt yuv420p \
    -movflags \
    +faststart \
    output.mp4

Phew! The main options to fiddle with are the resolution from gource (1280x720 in this case), and the crf setting from ffmpeg (increase the number to decrease the quality and make a smaller file, or lower the number to increase the quality and make a larger file).

I ran it over my original website repository that started its life out as PHP in 2011 and was moved to Perl:

And then my Javascript website that I started in 2016 and subsequently moved to TypeScript:

How cool is that?!

I also ran it over the main codebase at work that powers our internal PaaS that I support, and it’s even cooler because the history goes back to 2014 and there’s a ton of people working on it at any given time.

Fixing a Guitar Hero World Tour/Guitar Hero 5 guitar strum bar

Kristina and I had a date night last night in which we ate trashy food and then took the Xbox 360 out of storage and fired up Guitar Hero: Warriors of Rock. It was an excellent time except that my Guitar Hero World Tour guitar had stopped registering downward strums, and only upwards strums worked.

I figured I’d pull it apart today and see what was up, and thanks to this guide I figured it out, and am documenting it here for posterity (my problem wasn’t one of the ones described in that guide, but it was very handy to see how to disassemble the thing in the first place).

Tools needed for disassembly

  • Phillips-head #0 and #1 screwdrivers
  • Torx T9 screwdriver

Process

Firstly the neck needs to be removed, and the “Lock” button at the back towards the base of the guitar set to its unlocked position.

Next, the faceplate needs to be removed. This can be done by just getting a fingernail or a flathead screwdriver underneath either of the top bits of the body, pointed to with an arrow here, and gently prying it away from around the edges.

After that, there are twelve Torx T9 screws to remove, circled in red, and another four Phillips-head #0 ones, marked in green.

Once they’re all out, you can gently separate the front of the guitar where all the electronics live from the back of it.

Next there are four Phillips-head #1 screws to remove to separate the circuit board that contains the actual clicky switches from the strum bar itself. Leave the middle two alone, as they attach the guides for the strum bar’s springs.

After this, it’s a bit of a choose-your-own-adventure, as what you do next really depends on what’s wrong with the strum bar. The switches are on the underside of the circuit board above; it’s definitely worth making sure they both click nice and solidly when you press on them directly. If they don’t, it’s apparently possible to source exact replacements (“275-016 SPDT Submini Lever Switch 5A at 125/250VAC”) and fix it with a bit of soldering, but thankfully this wasn’t necessary in my case.

In the next image, undoing the Phillips-head #1 screws circled in blue will allow you to take the strum bar assembly itself out and re-lubricate it (don’t use WD40, use actual proper lubricating grease) so it rocks back and forth a bit more smoothly. Another improvement you can make is adding a couple of layers of electrical tape to the areas I’ve circled in red. They’re where the strum bar physically hits the inside of the case, and the electrical tape dampens the noise a bit.

What the strum bar problem ultimately ended up being in my case is that the middle indented section, where the switch rests against the strum bar to register a downstroke, had actually worn away and could no longer press the switch in far enough to click. My solution, circled in green, was to chop a tiny piece of plastic from a Warhammer 40,000 miniature sprue and glue it—with the same plastic glue I use for assembling plastic miniatures—to the strum bar. Then I reassembled everything and it’s as good as new!

A bread update!

I blogged back in October last year about how I’d started making my own bread from scratch, and I haven’t stopped! I still haven’t graduated to sourdough yet (my one attempt was poorly timed in retrospect, it was right in the middle of everyone panic-buying flour and things around when coronavirus was really becoming a thing, and I didn’t want to waste a whole bunch of flour), but I’ve settled into a delicious routine with the “Saturday Overnight White Bread” recipe. I’m using 20% whole wheat flour, have bumped the water from 780mL to 790mL, and use a full quarter teaspoon of yeast (the recipe says “a scant quarter”). It’s at the point where I don’t even need to look at the recipe, I’ve got the timings and measurements totally memorised.

As well, Kristina bought me a pair of proofing baskets for Christmas last year, and MAN do they make a difference! They help keep the shape of the loaf and also wick moisture from the surface of it, which makes the crust come out even better. Feast your eyes on these beauties.

A photo of two golden brown loaves of bread sitting on a cooling rack.
A photo of two golden brown loaves of bread sitting on a cooling rack.
A photo of two golden brown loaves of bread sitting on a cooling rack.
A photo of two brown loaves of bread sitting on a cooling rack.

My associated lunches have been excellent as well, they’re primarily either cheese and tomato toasties with a recent addition of capsicum, or mushrooms cooked in a frying pan with garlic, sage, and rosemary, sitting on top of thickly-sliced tomato (also cooked in a frying pan) with crumbled up sharp cheddar cheese on top.

A photo of two cheese and tomato toasties sitting on a plate, with diced capsicum on top.
A photo of two slices of homemade bread sitting on a plate, topped with thick slices of tomato with mushrooms on them, and small pieces of crumbled up cheddar cheese atop it all.

Beanie has gotten to know the exact sound of a knife cutting through bread, because whenever I start he’s over into the kitchen like a shot waiting for crumb fallout and also the tiny useless end-piece we always give him.

Apple Watch, one year on: fitter and healthier than ever

I blogged last year about getting an Apple Watch and the 23rd of this month marked a full year of owning it! I’m extremely pleased to say that I’m now fitter, healthier, and stronger than I’ve ever been in my entire life. 💪🏻 I’ve also closed all three activity rings on every single day since I bought the Watch!

I mentioned in that post that I was down to 70kg, after that (but prior to COVID) I hit 69kg, which I haven’t been since I was in my very early 20s. After we started working from home full-time in mid-March, I settled into a really good exercise routine: during the week I finish work around 5:30pm, then on every day but Monday and Friday I’ll go for a burn on the elliptical while watching an episode of something on Netflix (Saturdays and Sundays are also elliptical time, but at whatever point I feel like doing it during the day). It’s amazing how quickly time passes when you’re not concentrating on the fact that you’re exercising. I’d started watching Star Trek: The Next Generation prior to COVID, but went through the episodes much more quickly after it, and finally finished it last month.

Fun side note, I thought I’d watched way more episodes than it turns out I have… as I was making my way through the episodes, I’d remembered seeing the pilot episode before, then a random smattering of maaaaybe five to ten episodes in the middle, and then the series finale, but that was it. It was pretty fun to watch the progression of the series, and I think at this point I enjoy Star Trek more than I do Star Wars. After I finished TNG, I took a brief Trek break to watch Warrior Nun (IMDB’s blurb says “After waking up in a morgue, an orphaned teen discovers she now possesses superpowers as the chosen Halo Bearer for a secret sect of demon-hunting nuns.” HIGHLY recommended, I loved it), and have now started watching Star Trek: Deep Space Nine. I’m only a few episodes in, but again, I’ve definitely seen the first episode before but none of the other four I’ve watched thus far.

The elliptical has its own calorie counter; I have no idea how accurate it is because it doesn’t take into account your weight, and the Apple Watch doesn’t take into account the fact that I’ve got the incline on the elliptical at 14 degrees and the resistance set to 14 out of 22, but regardless it’s a hell of a workout. After a typical 45-minute episode of a TV show, the elliptical says I’ve burnt around 650-700 calories, and the Apple Watch says 425-450 calories.

On Mondays and Fridays, instead of using the elliptical I do some weight training. I bought a pair of neoprene-coated 4kg dumbbells and have been doing twelve reps of each of the exercises from this article, three times each. I really like that Women’s Health Magazine article because it’s not pretentious and just says “Hey, if you want some toned arms, do these things.” Pretty much all the ones aimed at men that I found were of the “RARRRR GET RIPPPPPED BROOOO” variety, which I just… just, no. 😑

As a result of all of this, I’ve actually gained weight and am now 71kg, but it’s all muscle mass, aw yiss. I’ve never actually had properly visible muscles, so this is a new look for me. 😛

On the odd occasion that I’m feeling a bit pooped-out for whatever reason, rather than using the elliptical or doing weight training, I’ll just go for a brisk walk on the treadmill and listen to part of a podcast. (Speaking of podcasts, I’ve basically entirely stopped listening to them except for when I’m occasionally using the treadmill… my commute into the city was my podcast listening time, and with that gone, I just don’t really do it anymore).

My primary use for the Apple Watch is still definitely fitness, but I’ve also really enjoyed the little notification buzz you get on your wrist when you get an iMessage or SMS. It’s totally unobtrusive, and as someone who leaves all of their devices muted at all times, it means I don’t actually miss messages that need a timely reply, but if it’s not important I can just ignore it. The other handy thing I’ve found is using Siri to set custom timers. I don’t have any of the “always listening” Siri stuff enabled on any of my devices, but I have the Watch set to activate Siri when you hold the digital crown in for a couple of seconds. It’s super-handy when I’m timing my bread-making or barbecuing to just say “Set a timer for four minutes”, or whatever non-standard time it is, rather than having to futz around pressing things.

A year on, the Apple Watch is still one of my favourite gadgets and I’m keen to see what sort of additions Apple makes to it in the coming years!

Improvements in photographical skill

Back in June I blogged about the Flickr Memories functionality I’d added to my website, and finished the post off with this sentence:

I’m excited to see what forgotten gems from the past show up, and also being reminded of how terrible I was when I was first starting out taking photos. 

And oh boy, has it delivered on that second part in particular! I make a point of checking both my formerly-Tumblr-posts Memories page, where I post all my random iPhone photos, and the Flickr one each day. In August of both 2008 and 2012 we were in Boston (in 2008 it was the first time I was visiting Kristina overseas back when she was still living there, and in 2012 we went back for her birthday), and I’d posted all the trip photos to Flickr.

Looking back at the 2008 photos (and all the photos I’d taken prior to that), they’re very much just happy-snaps… there was zero processing done on them, they were awkwardly-framed and weren’t straightened — crooked horizons ahoy! — and there was no culling, I’d just post them allll. I have a collection created for that trip where you can see the full horror (check out the sheer number of photos in the Mt. Auburn Cemetery album for example… why did I need to post that many?!).

In 2009 I borrowed an old Canon EOS 400D from a co-worker at Apple for a while, which was my first exposure to a proper DSLR and properly-shallow depth of field, as well as shooting in RAW, and doing actual post-processing. I had way too much fun adding heavy vignetting to everything, but I was starting to cull my photos rather than just dumping everything online!

In 2010 we bought a Canon EOS 7D with a 35mm f/2 lens, and my photography and processing skills definitely improved (that link is every single photo I took with the 7D, in order of date posted from oldest to newest).

Going back to Boston, some sample photos from Rockport and Boston Public Library on 2012’s trip show the improvement from 2008. For some reason I was pretty heavy-handed with the post-processing specifically on the Boston trip photos, but you can see the greatly-improved framing and composition.

From there on my skill has been on an upward path, and thankfully I’ve stopped doing the EVERYTHING MUST BE REALLY HIGH CONTRAST thing I was doing for a while there. 😛 For travel photos specifically, the easiest way to see the progression is the “Travel” category of posts I have set up here! Some choice examples:

A while back I’d toyed with the idea of reprocessing my older photos, in particular the 2012 Boston ones, but I realised that way lies madness and never-ending editing, and I wouldn’t have a nice historical record of my photographical endeavours like I do now.

More space: the Pimoroni HyperPixel4 display on a Raspberry Pi Zero W

Back at the start of 2018 I blogged about my Raspberry Pi temperature display setup and it’s been pretty excellent and utterly reliable since then, but because of its small size — the display is only 2 inches — it wasn’t particularly visible from across the room. That, combined with the discovery that the Envoy power consumption monitoring system we had installed with the solar panels has a locally-accessible API that you can use to get real-time production and consumption data (which lives at http://<ip-of-the-envoy-box>/production.json?details=1), made me start looking into larger displays so I could include both temperature/humidity data and our power consumption.
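Pulling the interesting numbers out of that JSON is simple enough. A sketch (the field names below reflect what the endpoint appears to return, so treat the exact shape as an assumption):

```python
def envoy_watts(entries, source="eim"):
    # production.json has "production" and "consumption" lists; each
    # entry carries a "type" and a "wNow" (current watts) field.
    # The exact keys are assumed from poking at the endpoint.
    for entry in entries:
        if entry.get("type") == source:
            return entry["wNow"]
    return None

# Sample payload in the assumed shape, for illustration
sample = {
    "production": [{"type": "inverters", "wNow": 3100.0},
                   {"type": "eim", "wNow": 3050.0}],
    "consumption": [{"type": "eim", "wNow": 850.0}],
}
producing = envoy_watts(sample["production"])
consuming = envoy_watts(sample["consumption"])
```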

My first port of call was the 2.7-inch version of the original 2-inch PaPiRus e-paper display. I ordered it on the 6th of April and then… nothing showed up. I’d assumed the PaPiRus was MIA and had instead ordered a 4-inch, 800×480-pixel display in the form of Pimoroni’s HyperPixel4 display, the non-touch version. The Raspberry Pi registers the HyperPixel4 as a regular display, so you run a full desktop environment windowing system on it, rather than drawing to it directly the way the PaPiRus works.

Of course, about a week after I ordered the HyperPixel4, the PaPiRus finally arrived! The 2.7-inch version of the PaPiRus is 264 pixels wide by 176 pixels high, so not exactly high-resolution. There’s actually quite a lot of freedom to tweak the position of the elements on screen pixel-by-pixel, but I quickly discovered that doing that directly on the Raspberry Pi itself is extremely tedious, because it takes several seconds to contact the required endpoints to pull in the data and then refresh the whole display. As well as writing text, the display can also show (1-bit) bitmap images, so I decided to change tack: instead of using the PaPiRus’s text API, I wrote a probably-slightly-overengineered Node.js application that runs on the Raspberry Pi 4B, fetches the data from the outdoor and indoor sensors as well as the Envoy, uses the Javascript Canvas API to lay everything out, and then converts it to a bitmap image that the Python script on the Pi Zero W fetches every minute to update the display with.
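The 1-bit conversion at the end is the only mildly fiddly bit: each byte holds eight pixels, most-significant bit first. A Python sketch of just the packing step (the real converter lives in the Node.js app, so this is purely illustrative):

```python
def pack_1bit_rows(pixels):
    """Pack rows of 0/1 pixels into bytes, MSB first, the kind of
    1-bit framebuffer a monochrome e-paper display expects. Rows are
    padded out with white (0) to a whole byte."""
    rows = []
    for row in pixels:
        packed = bytearray()
        for i in range(0, len(row), 8):
            byte = 0
            for bit, px in enumerate(row[i:i + 8]):
                if px:
                    byte |= 0x80 >> bit  # MSB is the leftmost pixel
            packed.append(byte)
        rows.append(bytes(packed))
    return rows
```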

The biggest advantage of this system is that I could run it locally on my regular computer and quickly tweak the positioning without waiting for the PaPiRus display to refresh each time, and I set it up so I could invert the colours to white-on-black to clearly see the boundaries of the canvas. I put the code up on GitHub if anyone is interested in poking through it, and the end result looks like this:

Having over-engineered my Node.js solution, the HyperPixel 4 display arrived maybe a couple of weeks later! It’s extremely slick-looking, but unfortunately the little plastic nubs that are meant to keep the screen in place in its housing aren’t actually big enough to hold it, and the display popped out and cracked some of the wires that feed it, which caused all sorts of display weirdness. I emailed Pimoroni about it and they were super nice and helpful and sent me out a replacement display, no questions asked! While I waited for the new one to arrive, the broken one was partially working, enough that I could at least get everything up and running how I wanted it anyway.

Because using the HyperPixel is the same as if you’d hooked up an HDMI display and were using the Pi as a regular computer, I started from the full-blown Raspbian desktop image rather than the Lite one. It was relatively straightforward to get everything going (mostly just installing and configuring the driver from Pimoroni’s GitHub repository), but there were some additional things I needed to do to get it all working the way I wanted. I settled on a Node.js backend and React frontend, both running in Docker on the Raspberry Pi 4B. The separate backend was necessary because of CORS: I couldn’t hit the Envoy URL directly from the browser on the Pi, so the Node.js backend pulls in the data and feeds it to the React app.

  • By default the HyperPixel4 runs at full brightness, so I followed this to turn it way down, and also to set up a cron job to entirely turn the display off at midnight and turn it back on at 8am.
  • To get the Pi to open Chromium full-screen on boot, I followed these instructions.
  • To disable the annoying “Restore pages” dialog in Chromium, this on the Raspberry Pi Stack Exchange was helpful.
  • Raspbian comes by default with a VNC server installed, just not enabled. To enable it and allow access directly from macOS’s “Connect to Server” dialog in the Finder:
    • Run sudo raspi-config, go to Interface Options > VNC and enable it.
    • Run vncpasswd -service to set a VNC password (note if it’s longer than eight characters, only the first eight are used when connecting).
    • Create the file /etc/vnc/config.d/common.custom with the contents: Authentication=VncAuth
    • Finally, restart the VNC service with sudo systemctl restart vncserver-x11-serviced
  • And lastly, to disable the Pi from turning the screen off after activity, I followed these steps.
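From memory, the cron entries for the backlight schedule boil down to something like this, reusing the rpi-hardware-pwm invocation from my autostart file below (don’t take the duty-cycle value of 0 for “off” as gospel, but the shape is right):

```
# m h  dom mon dow  command
# Backlight off at midnight (duty cycle 0)
0 0 * * * sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 0
# Back on at 8am, to the dimmed level used in the autostart file
0 8 * * * sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 135000
```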

My ~/.config/lxsession/LXDE-pi/autostart ultimately ended up looking like this:

@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
point-rpi
@chromium-browser --start-fullscreen --start-maximized --app=http://fourbee:3003
@xset s off
@xset -dpms 
@xset s noblank
@sudo /home/pi/Source/rpi-hardware-pwm/pwm 19 1000000 135000

And the whole setup looks like this:

A photo of a small LCD display showing outdoor and indoor temperature and current power consumption and production. The text is white on black.

It’s quite the improvement in visibility and I can easily read it from all the way in the kitchen! It updates itself automatically every 30 seconds, and there’s no e-ink full-display-refresh screen-blanking when it does.

Memories redux: Flickr

I posted back in December that I’d created my own version of Facebook’s “Memories” feature for my formerly-Tumblr-and-now-Mastodon media posts, and even at the time I’d had the thought of doing it for Flickr as well, since that’s where all my Serious Photography goes.

Well, now I have!

It wasn’t quite as straightforward as my Media memories functionality, where I could get everything with a single database call; for Flickr I have to make multiple API calls each time. Fortunately, two of the search parameters that flickr.photos.search offers are min_taken_date and max_taken_date, so my approach is to run one query per year from 2007 (when my account was created and I first started posting photos to Flickr) up to the present, with min_taken_date set to 00:00 on the current day of the year and max_taken_date set to 23:59 on that same day. It does mean that there are currently 13 API calls each time the Memories page is loaded, and that number will increase by one with each year that passes, but Flickr’s API docs say “If your application stays under 3600 queries per hour across the whole key (which means the aggregate of all the users of your integration), you’ll be fine”. That’s one query every second for an entire hour, which absolutely isn’t going to be happening, so I ought to remain well under the limit.
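The date-range building amounts to something like this (sketched with a hypothetical helper; the API accepts MySQL-style datetimes for these two parameters):

```python
from datetime import date

FIRST_YEAR = 2007  # when my Flickr account was created

def taken_date_ranges(today, first_year=FIRST_YEAR):
    """One (min_taken_date, max_taken_date) pair per past year for today's
    day of the year, formatted as the MySQL-style datetimes that
    flickr.photos.search accepts.

    (A real version would need to special-case the 29th of February.)
    """
    ranges = []
    for year in range(first_year, today.year):
        day = today.replace(year=year)
        ranges.append((
            day.strftime("%Y-%m-%d 00:00:00"),
            day.strftime("%Y-%m-%d 23:59:59"),
        ))
    return ranges

# For the 1st of May 2020 this gives the 13 queries mentioned above,
# covering 2007 through 2019.
ranges = taken_date_ranges(date(2020, 5, 1))
print(len(ranges), ranges[0])
```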

I’m excited to see what forgotten gems from the past show up, and also being reminded of how terrible I was when I was first starting out taking photos. 😛