Nginx, PHP-FPM, and Cloudflare, oh my!

I use my Linode to host a number of things (this blog and Kristina’s, my website and Kristina’s, an IRC session via tmux and irssi for a friend and me, and probably another thing or two I’m forgetting). Kristina started up a travel blog a few months ago which I’m also hosting on it, and shortly after that point I found that maybe once every two weeks or so my website and our blogs weren’t running anymore. I looked into it and found the culprit was Linux’s Out-Of-Memory Killer, which kicks in when the system is critically low on memory and needs to free some up; it was killing the Docker container that my website runs in, as well as MariaDB.

The main cause was Apache and MariaDB using entirely too much memory for my little 2GB Linode; it was evidently just sitting on this side of stable with two WordPress blogs, but adding a third seems to have tipped it over the edge. The reason MariaDB and my website’s Docker container were the ones being killed is that although Apache was using a heap of memory overall, it was spread across a number of worker processes so no individual one was particularly large, leaving MariaDB and my website as the biggest processes on the list. There are lots of tweaks you can do, several of which I tried, but they only delayed the inevitable rather than resolving it entirely. Apache is powerful but low-resource-usage it ain’t. The primary low-resource-usage alternative to Apache is Nginx, so I figured this weekend I’d have a crack at moving over to that.

Overall it was pretty straightforward. This guide from Digital Ocean was a good starting point, and the bits where it fell short were mostly just a case of looking up the equivalent Nginx directives for SSL, mappings to filesystem locations, and so on (I have ~15 years of history of hosted images I’ve posted on the Ars Technica forums and my old LiveJournal—which is now this blog—and wanted to make sure those links all kept working).

One difference is in getting WordPress going… WordPress is all PHP, and Apache by default runs PHP code inside the Apache process itself via mod_php, whereas with Nginx you need PHP-FPM or similar: an entirely separate process running on the server that Nginx talks to in order to process the PHP code (there’s a sketch of the relevant Nginx configuration below the list of gotchas). I mostly followed this guide, also from Digital Ocean, though there were a couple of extra gotchas I ran into when getting it fully going with Nginx for WordPress:

  • Edit /etc/nginx/fastcgi_params and add a new line with this content, or you’ll end up with nothing but a blank page: fastcgi_param PATH_TRANSLATED $document_root$fastcgi_script_name;
  • Remember to change the ownership of the WordPress installation directory to the nginx user instead of apache
  • The default settings for PHP-FPM assume it’s running on a box with significantly more than 2GB of RAM; edit /etc/php-fpm.d/www.conf and change the line that says pm = dynamic to pm = ondemand. With ondemand, PHP-FPM will spin up worker processes as needed but will kill off idle ones after ten seconds rather than leaving them around indefinitely.
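
For reference, the PHP-handling part of an Nginx server block for this kind of setup looks something like the below. This is just a minimal sketch, not my exact config: the fastcgi_pass address has to match whatever the listen directive in /etc/php-fpm.d/www.conf says, which might be 127.0.0.1:9000 or a Unix socket depending on your distribution.

location ~ \.php$ {
  try_files $uri =404;
  include fastcgi_params;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  # Point this at whatever PHP-FPM's www.conf "listen" directive says
  fastcgi_pass 127.0.0.1:9000;
  fastcgi_index index.php;
}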

Additionally, Nginx doesn’t support .htaccess files, so if you’ve got WordPress set up to use any of the “pretty” permalink styles you’ll end up with 404s when you try to view an individual post. The fix is to put the following at the bottom of the server block:

location / {
  try_files $uri $uri/ /index.php?$args;
}

This passes the correct arguments to WordPress’ index.php file. You’ll also want to block access to any existing .htaccess files:

location ~ /\.ht {
  deny all;
}

The last thing I did with this setup was to put the entirety of my website, Kristina’s, and our respective blogs behind Cloudflare. I’d had great success with their DNS over HTTPS service, and their original product is essentially a reverse proxy that caches static content (CSS, Javascript, images) at each of their points of presence around the world, so you load those from whichever server is geographically closest to you. For basic use it’s free and includes SSL; you just need to point your domain’s nameservers at the ones they provide. The only other thing I needed to do was set up another DNS record so I could still SSH into my Linode, because the host virtualwolf.org now resolves to Cloudflare’s servers, which obviously don’t have any SSH running!

Overall, the combination of Nginx + PHP-FPM + Cloudflare has resulted in remarkably faster page loads for our blogs, and thus far significantly reduced memory usage as well. 👍

GPG and hardware-based two-factor authentication with YubiKey

As part of my Ars Technica Pro++ subscription, Ars sent me a free YubiKey 4, which is a small hardware token that plugs into your USB port and allows for a bunch of extra security on your various accounts, because you need the token physically plugged into your computer in order to authenticate. It does a number of neat things:

  • Generating one-time passwords (TOTP) as a second factor when logging in to websites;
  • Storing GPG keys;
  • Acting as a second factor with Duo;

And a bunch of other stuff as well, none of which I’m using (yet).

My password manager of choice is 1Password, and although it allows saving one-time passwords for websites itself, I wanted to lock access to the 1Password account itself down even further. Their cloud-based subscription already has strong protection by using a secret key in addition to your strong master password, but you can also set it up to require a one-time password the first time you log into it from a new device or browser so I’m using the YubiKey for that.

I also generated myself GPG keys and saved them to the YubiKey. It was not the most user-friendly process in the world, though that’s a common complaint levelled at GPG. I found this guide that runs you through it all and, while long, it’s pretty straightforward. It’s all set up now, though: my public key is here, I can send and receive encrypted messages and cryptographically sign documents, and the master key is saved only on an encrypted USB stick. The GPG agent that runs on your machine and reads the keys from the YubiKey can also be used for SSH, so I’ve got that set up with my Linode.
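
If it helps anyone, the gpg-agent side of the SSH setup boils down to something like this (a sketch for GnuPG 2.1 or later; paths and shell specifics may differ on your machine):

# In ~/.gnupg/gpg-agent.conf, tell gpg-agent to also act as an SSH agent:
enable-ssh-support

# In your shell profile, point SSH at gpg-agent's socket and make sure the agent is running:
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
gpgconf --launch gpg-agent

# With the YubiKey plugged in, `ssh-add -L` should then list its authentication key.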

The last thing I’ve done is to set the YubiKey up as a hardware token with Duo and put my Linode’s SSH and this blog (and soon Kristina’s, though hers not with the YubiKey) behind that. With the Duo Unix module, even sudo access requires the YubiKey, and the way that’s set up is that you touch the button on the YubiKey itself and it generates a code and enters it for you.

It’s all pretty sweet and definitely adds a bunch of extra security around everything. I’m busily seeing what else I can lock down now!

Setting up DNS over HTTPS on macOS

Back in April, Cloudflare announced a privacy-focused DNS server running at 1.1.1.1 (and 1.0.0.1), and that it supported DNS over HTTPS. A lot of regular traffic goes over HTTPS these days, but DNS queries to look up the IP address of a domain are still unencrypted, so your ISP can still snoop on which servers you’re visiting even if they can’t see the actual content. We have a Mac mini that runs macOS Server and does DHCP and DNS for our home network, among other things, and with an upcoming version of it removing those functions and suggesting they be replaced with regular non-UI tools, I figured now would be a good time to look into moving us over to use Cloudflare’s shiny new DNS server at the same time.

Turns out it wasn’t that difficult!

Overview

  1. Install Homebrew.
  2. Install cloudflared and dnsmasq: brew install cloudflare/cloudflare/cloudflared dnsmasq
  3. Configure dnsmasq to use cloudflared as its upstream DNS resolver.
  4. Configure cloudflared to use DNS over HTTPS and run on port 54.
  5. Install both as services to run at system boot.

Configuring dnsmasq

Edit the configuration file located at /usr/local/etc/dnsmasq.conf, uncomment line 66, and change it from server=/localnet/192.168.0.1 to server=127.0.0.1#54 to tell it to pass DNS requests on to localhost on port 54, which is where cloudflared will be set up.
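
For clarity, the relevant line in dnsmasq.conf ends up as just this (the rest of the file can stay as-is):

# Forward all DNS queries to cloudflared, which will be listening on localhost port 54
server=127.0.0.1#54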

Configuring cloudflared

Create the directory /usr/local/etc/cloudflared and create a file inside that called config.yml with the following contents:

port: 54
no-autoupdate: true
proxy-dns: true
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Auto-update is disabled because that seems to break things when the update occurs, and the service doesn’t start back up correctly.

Configuring dnsmasq and cloudflared to start on system boot

dnsmasq is easy: simply run sudo brew services start dnsmasq, which will both start it immediately and set it to start at system boot.

Due to a bug that isn’t fixed as of writing, the port for cloudflared has to be set via a launchctl environment variable. Install it as a service with sudo cloudflared service install, then run sudo launchctl unload /Library/LaunchDaemons/com.cloudflare.cloudflared.plist to temporarily turn off the service. Next, run sudo launchctl setenv TUNNEL_DNS_PORT 54 to set the environment variable so the launch script will pick it up, and lastly run sudo launchctl load /Library/LaunchDaemons/com.cloudflare.cloudflared.plist to start the service up again.
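
Putting those steps together, the whole sequence is:

sudo cloudflared service install
sudo launchctl unload /Library/LaunchDaemons/com.cloudflare.cloudflared.plist
sudo launchctl setenv TUNNEL_DNS_PORT 54
sudo launchctl load /Library/LaunchDaemons/com.cloudflare.cloudflared.plist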

This is the same thing as setting port: 54 in the configuration file above but works around the aforementioned bug where that setting is ignored (and so tries to start on the default port 53 which fails because dnsmasq is already running there).

And done!

Apart from a bunch of effort figuring out how to work around that cloudflared bug, I was surprised at how straightforward this was. I also didn’t realise until I was doing all of this that dnsmasq does DHCP too, so with the assistance of this blog post I’ve also replaced the built-in DHCP server on the Mac mini and continue to have full local hostname resolution as well!
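
The DHCP side of dnsmasq is only a couple of lines in dnsmasq.conf, along these lines (the address range and router here are just examples for a typical 192.168.0.x network, not my actual values):

# Hand out leases in this range for 24 hours
dhcp-range=192.168.0.10,192.168.0.200,24h
# Tell clients which address to use as their default gateway
dhcp-option=option:router,192.168.0.1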

The spiritual successor to SimCity, Cities: Skylines

I first played the original SimCity Classic back in the early 1990s on our old Macintosh LC II, and absolutely loved it. Laying out a city and watching it grow was extremely satisfying, and the sequel, SimCity 2000, was even more detailed. I played a bit of SimCity 4, which came out in 2003, but the latest entry in the series, titled just “SimCity“, by all accounts sucked: the maps were significantly smaller, and it required an internet connection and was multiplayer to boot.

It’s actually possible to play SimCity 2000 on modern machines and I definitely got stuck into it a few years ago. This is a screenshot of my most recent city!

Screenshot of SimCity 2000, zoomed out and showing as much of my city as possible.

If you’re wanting a proper modern SimCity 2000-esque experience though, Cities: Skylines is what you’re after. It came out in March of 2015 on desktop and was ported to Xbox One in April of 2017, and they did a damned good job of it; the controls are all perfectly suited to playing on a controller as opposed to with a mouse and keyboard.

The level of detail of the simulation is fantastic, you can zoom all the way in and follow individual people (called “cims”, as opposed to SimCity’s “sims”) or vehicles and see where they’re going. There’s a robust public transport system and you can put in train lines (and buses, and trams, and a subway, and in the most recent expansion called Mass Transit, even monorails, blimps, and ferries!) and see the cims going to and from work, and how many are waiting at each station and so on.

We recently upgraded to the Xbox One X and a shiny new OLED 4K TV (quite the upgrade from our nine-year-old 37″ giant-bezeled LCD TV!), and it makes for some very nice screenshots. These are from my largest city, called Springdale, currently home to ~140k people!

Nostalgia and the Classic Mac OS

I’ve been a Mac user my entire life, originally just because my dad used them at his work and so bought them for home as well. My earliest memories are of him bringing his SE/30 home and playing around in MacPaint. We also had an Apple IIe that we got second-hand from my uncle that lived in my bedroom for a few years, though that doesn’t count as a Mac.

The first Mac my dad bought for us at home was the LC II in 1992 (I was 9!), and I can remember spending hours trawling through Microsoft Encarta being blown away at just how much information I could look up immediately. I also remember playing Shufflepuck Café and Battle Chess, and I’m sure plenty of others too that didn’t leave as large an impression. There was also an application that came with the computer called Mouse Practice that showed you how to use a mouse, and we had At Ease installed for a while as well until I outgrew it.

After the LC II we upgraded to the Power Macintosh 6200 in 1995, which among other things came with a disc full of demos on it including the original Star Wars: Dark Forces (which I absolutely begged my parents to get the full version of for Christmas, including promising to entirely delete Doom II which they were a bit disapproving of due to the high levels of gore), and Bungie’s Marathon 2: Durandal (which I originally didn’t even bother looking at for the first few months because I thought it was something to do with running!). Marathon 2 was where I first became a fan of Bungie’s games, and I spent many many hours playing it and the subsequent Marathon Infinity as well as a number of fan-made total conversions too (most notably Marathon:EVIL and Tempus Irae).

The period we owned the 6200 also marked the first time we had an internet connection (a whopping 28.8Kbps modem, no less!). The World Wide Web was just starting to take off around this time; I remember dialing into a couple of the local Mac BBSes but at that point they were already dying out anyway and the WWW quickly took over. The community that sprang up around the Marathon trilogy was the first online community I was really a member of, and Hotline was used quite extensively for chatting. Marathon Infinity came with map-making tools which I eagerly jumped into, and I made a whole bunch of maps and put them online. I was even able to dig up the vast majority of them; there’s only a couple that I’ve not been able to find. I have a vivid memory of when Marathon:EVIL first came out: it was an absolutely massive 20MB and I can recall leaving the download going at a blazing-fast 2.7KB/s for a good two or three hours, constantly coming back to it to make sure it hadn’t dropped out or otherwise stopped.

After the Marathon trilogy, Bungie developed the realtime strategy games Myth: The Fallen Lords and its sequel Myth II: Soulblighter, both of which I also played the hell out of and was a pretty active member of the community in.

After the 6200 we had a second-gen iMac G3, and then a “Sawtooth” Power Mac G4 just for me, as my sister and I kept arguing about who should have time on the computer and the Internet. 😛 The G4 was quite a bit of money as you’d imagine, so I promised to pay it back to dad as soon as I got a job and started working.

macOS (formerly Mac OS X then OS X) is obviously a far more solid operating system, but I’ve always had a soft spot for the Classic Mac OS even with its cooperative multitasking and general fragility. We got rid of the old Power Mac G4 probably eight years ago now (which I regret doing), and I wanted to have some machine capable of running Mac OS 9 just for nostalgia’s sake. Mum and dad still had mum’s old PowerBook G3 and I was able to get a power adapter for it and boot it up to noodle around in, but it was a bit awkwardly-sized to fit on my desk and the battery was so dead that if the power cord wasn’t plugged in it wouldn’t boot at all.

There was a thread on Ars Technica a few months ago about old computers, and someone mentioned that if you were looking at something capable of running Mac OS 9 your best bet was to get the very last of the Power Mac G4s that could boot to it natively, the Mirrored Drive Doors model. I poked around on eBay and found a guy selling one in mint condition, and so bought it as a present to myself for my birthday.

Behold!

Power Mac G4 MDD

Dual 1.25GHz G4 processors, 1GB of RAM, 80GB of hard disk space, and a 64MB ATI Radeon 8500 graphics card. What a powerhouse. 😛

There’s a website, Macintosh Repository, where a bunch of enthusiasts are collecting old Mac software from yesteryear, so that’s been my main place to download all the old software and games that I remember from growing up. It’s been such a trip down memory lane, I love it!

More miniatures: Warhammer 40,000 edition

Warhammer 40,000 used to be quite the complicated affair, lots of rules and looking things up on different tables to check what dice roll you needed for different effects, and needing many hours to finish a game. The 8th Edition of the game came out last year, and was apparently extremely streamlined and simplified and seems to have been received very well. Since I’d been doing well with Shadespire, I decided to get the 8th Edition core box set as well, and had almost exactly enough in Amazon gift card balance for it! It comes with Space Marines, as always, but the opposing side is Chaos this time. 7 Plague Marines, a few characters, a big vehicle, and about 20 undead daemon things. I decided to alternate between painting a handful of each side at once, so as not to get bored, and have gone with Space Wolves (big surprise, I know) as the paint scheme for the Imperial side.

Space Wolves Intercessor

There’s another five of these Space Marines but they’re all identical apart from the poses so I didn’t take photos of all of them.

The Plague Marines are all unique though, so I’ve been taking photos of each of them; my first batch was four of them.

Plague Marine 1

Plague Marine 2

Plague Marine 3

Plague Marine 4

My mobile painting table has been a great success, but after the first batch of Space Marines I realised I was getting a sore neck and back from hunching over towards the miniatures as I was painting them because everything was too low. Another trip to Bunnings, and lo and behold…

Painting table from the side, showing the two vertical blanks to give it some height

Problem solved!

I also realised the other day why I’ve been enjoying painting my miniatures a lot more now than I used to… it’s thanks to being able to combine my hobbies of painting and photography. 😛 I can paint the miniatures and be happy with my work, but then also take professional-looking photos of them and share them with the world!

More Raspberry Pi adventures: the Pi Zero W and PaPiRus ePaper display

I decided I wanted to have some sort of physical display in the house for the temperature sensors so we wouldn’t need to be taking out our phones to check the temperature on my website if we were already inside at home. After a bunch of searching around, I discovered the PaPiRus ePaper display. ePaper means it’s not going to have any bright glaring light at night, and it also uses very little power.

The Raspberry Pi is hidden away under a side table, and already has six wires attached to the header for the temperature sensors, so I decided to just get a separate Raspberry Pi Zero W — which is absurdly small — and the PaPiRus display.

Setting it up

I flashed the SD card with the Raspbian Stretch Lite image, then enabled SSH and automatic connection to our (2.4GHz; the Zero W doesn’t support 5GHz) wifi network by doing the following:

  1. Plug the flashed SD card back into the computer
  2. Go into the newly-mounted “boot” volume and create an empty file called “ssh” to turn on SSH at boot
  3. Also in the “boot” volume, create a file called “wpa_supplicant.conf” and paste the following into it:

    country=AU
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    network={
        ssid="WIFI_SSID"
        scan_ssid=1
        psk="WIFI_PASSWORD"
        key_mgmt=WPA-PSK
    }

  4. Unmount the card, pop it into the Pi, add power, and wait 60-90 seconds and it’ll connect to your network and be ready for SSH access! The default username on the Pi is “pi” and the password is “raspberry”.

(These instructions are all thanks to this blog post but I figured I’d put them here as well for posterity).

The PaPiRus display connection was dead easy, I just followed Pi Supply’s guide after soldering a header into the Pi Zero W. If you want to avoid soldering, they also offer the Zero W with a header pre-attached.

Getting the Python library for updating the display was mostly straightforward, I just followed the instructions in the GitHub repository to manually install the Python 3 version.

I wrote a simple Python script to grab the current temperature and humidity from my website’s REST endpoints, and everything works! This script uses the “arrow” and “requests” libraries, which can be installed with “sudo apt-get install python3-arrow python3-requests”.
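
A sketch of what such a script can look like is below. The endpoint URLs and JSON field names here are placeholders rather than my actual API, and it uses the PaPiRus library’s PapirusText helper to write to the display.

#!/usr/bin/env python3
import arrow
import requests
from papirus import PapirusText

# Placeholder endpoints; the real ones are my website's REST API
INDOOR_URL = 'https://example.org/weather/indoor'
OUTDOOR_URL = 'https://example.org/weather/outdoor'

def fetch(url):
    # Assumes each endpoint returns JSON with 'temperature' and 'humidity' fields
    data = requests.get(url, timeout=10).json()
    return data['temperature'], data['humidity']

indoor_temp, indoor_humidity = fetch(INDOOR_URL)
outdoor_temp, outdoor_humidity = fetch(OUTDOOR_URL)

summary = 'In:  {}C {}%\nOut: {}C {}%\nUpdated {}'.format(
    indoor_temp, indoor_humidity,
    outdoor_temp, outdoor_humidity,
    arrow.now().format('HH:mm'))

# Write the summary to the ePaper display
PapirusText().write(summary)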

The next step is to have the Pi 3 that has the sensors run a simple HTTP server that the Zero W can connect to, so even if we have no internet connection for whatever reason, the temperatures will still be available at home. I’ve updated my Pi Sensor Reader to add HTTP endpoints.

Another year of Node.js (now also featuring React)

I posted last year about my progress with Node.js, and the last sentence included “I’m very interested to revisit this in another year and see what’s changed”.

So here we are!

There’s been a fair bit less work on it this year compared to last:

$ git diff --stat 6b7c737 47c364b
[...]
77 files changed, 2862 insertions(+), 3315 deletions(-)

The biggest change was migrating to Node 8’s shiny new async/await, which means that the code reads exactly as if it was synchronous (see the difference in my sendUpdate() code compared to the version above it). It’s really very nice. I also significantly simplified my code for receiving temperature updates thanks to finally moving over to the Raspberry Pi over the Christmas break. Otherwise it’s just been minor bits and pieces, and moving from Bamboo to Bitbucket Pipelines for the testing and deployment pipeline.
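
To illustrate what that migration looks like (with a made-up sendUpdate() and hypothetical helper functions that return Promises, not my actual code):

// Promise-chaining version
function sendUpdatePromises(reading) {
  return validateReading(reading)
    .then(valid => saveToDatabase(valid))
    .then(saved => notifyHipChat(saved))
    .catch(err => console.error('Update failed', err));
}

// async/await version of the same thing; it reads as if it were synchronous
async function sendUpdate(reading) {
  try {
    const valid = await validateReading(reading);
    const saved = await saveToDatabase(valid);
    return await notifyHipChat(saved);
  } catch (err) {
    console.error('Update failed', err);
  }
}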

I also did a brief bit of dabbling with React, which is a frontend framework for building single-page applications. I’d tried to fiddle with it a couple of years ago but there was something fundamental I wasn’t grasping, and ended up giving up. This time it took, though, and the result is virtualwolf.cloud! All it’s doing is pulling in data from my regular website, but it was still a good start.

There was a good chunk of time from about the middle of the year through to Christmas where I didn’t do any personal coding at all, because I was doing it at work instead! For my new job, the primary point of contact for users seeking help is a room on Stride, and we needed a way to be able to categorise those contacts to see what users were contacting us about and why. A co-worker wrote an application in Ruby a few years ago to scrape the history of a HipChat room and apply tags to it in order to accomplish this, but it didn’t scale very well (it was essentially single-tenanted and required a separate deployment of the application for each additional room it was installed in; understandable when you realise he wrote it entirely for himself and was the only one doing this for a good couple of years). I decided to rewrite it entirely from scratch to support Stride and multiple rooms, with the backend written in Node.js and the frontend in React. It really is a fully-fledged application, and it’s been installed into nearly 30 different rooms at work now, so different teams can keep track of their contact rate!

The backend periodically hits Stride’s API for each room it’s installed in, and saves the messages in that room into the database. There’s some logic around whether a message is marked as a contact or not (as in, it was someone asking for help), and there’s also a whitelist that the team who owns the room can add their team members to in order to never have their own messages marked as contacts. Once a message is marked as a contact, they can then add one or more user-defined tags to it, and there’s also a monthly report so you can see the number of contacts for each tag and the change from the previous month.

The backend is really just a bunch of REST endpoints that are called by the frontend, but saying that feels like I’m short-changing myself. 😛 I wrote up a diagram of the hierarchy of the frontend components a month or so ago, so you can see from this how complex it is:

And I’m in the middle of adding the ability to have a “group” of rooms, and have tags defined at the group level instead of the room level.

I find it funny how if I’m doing a bunch of coding at work, I have basically zero interest in doing it at home, but if I haven’t had a chance to do any there I’m happy to come home and code. I don’t think I have the brain capacity to do both at once though. 😛

Finally, some actual miniature painting

So despite having gotten the back room set up for miniature painting over three and a half years ago, I hadn’t actually done any of it since then. 😛 I also realised I hadn’t actually taken a photo of the setup.

I bought Games Workshop’s latest game, Shadespire, early last month. It does have miniatures to paint, but only eight in the core set, and it’s a board game where games last about half an hour or so versus the multi-hour affairs that are traditional Warhammer/Warhammer 40,000 games. I figured that with the holidays around and time to kill, and not having the prospect of endless amounts of miniatures to paint, I’d give it a go. I’m pleased to say that I clearly still have the painting skills!

I’ve finished five of them so far, so only three to go, and took some proper photos of them with the full external flash/umbrella setup.

Blooded Saek

Angharad Brightshield

Targor

Karsus the Chained

Obryn the Bold

(I’ll admit that I cheated slightly and didn’t actually paint any of these in the back room, however… during the week and a bit that I was doing them, the weather was really hot and the dinky little air conditioning unit in the back room wasn’t remotely up to keeping things cool, so I ended up bringing all the paints and bits inside and did them at the dining table).

The game Shadespire itself is really neat as well. I’ve only played a handful of games, but rather than just “Kill the other team” you also have specific objectives to accomplish as well. Have a read of Ars Technica’s review of it, they’re a lot more thorough and eloquent than I could be. 😛

Temperature sensors: now powered by Raspberry Pi

The Weather section on my website is now powered by my Raspberry Pi, instead of my Ninja Block! \o/

Almost exactly three years ago, I started having my Ninja Block send its temperature data to my website (prior to that, I was manually pulling the data from the Ninja Blocks API and didn’t have any historical record of it). Ninja Blocks the company went bust in 2015, and there was some stuff in the Ninja Blocks software that relied on their cloud platform to work; I ended up with no weather data for a couple of days because the Ninja Block couldn’t talk to the cloud platform. I ended up hacking at it and the result was this very simple Node.js application as a replacement for their software. It always felt a bit crap, though, because if the hardware itself died I’d be stuck; yes, it was all built on “open hardware” but I didn’t know enough about it all to be able to recreate it. I’d ordered a Raspberry Pi 3 in June last year, intending on replacing the Ninja Block and its sometimes-unreliable wireless temperature sensors with something newer and simpler and hard-wired, but I found there was a frustrating lack of solid information regarding something that on the surface seemed quite simple.

I’ve finally gotten everything up and running, the Ninja Block has been shut down, and I’ve previously said I’d write up exactly what I did. So here we are!

Components needed

  • Raspberry Pi 3 Model B+
  • AM2302 wired temperature-humidity sensor (or two of them in my case)
  • Ethernet cable of the appropriate length to go from the Pi to the sensor
  • 6x “Dupont” female to either male or female wires (eBay was the best bet for these, just search for “dupont female”, and it only needs to be female on one end as the other end is going to be chopped off)
  • 1.5mm heatshrink tubing
  • Soldering iron and solder
  • Wire stripper (this one from Jaycar worked brilliantly; it automatically adjusts itself to the diameter of the insulation)

Process

  1. Cut the connectors off one end of the dupont cables, leaving the female connector still there, and strip a couple of centimetres of insulation off.
  2. Strip the outermost insulation off both ends of the ethernet cable, leaving a couple of centimetres of the internal twisted pairs showing.
  3. Untwist three of the pairs and strip the insulation off them, then twist them back together again into their pairs.
  4. Chop off enough heatshrink tubing to cover the combined length of the exposed ethernet plus dupont wire, plus another couple of centimetres, and feed each individual dupont wire through the tubing (there should be three separate bits of tubing, one for each wire).
  5. Solder each dupont wire together with one of the twisted pairs of ethernet cable, then move the heatshrink tubing up over the soldered section and use a hairdryer or kitchen blowtorch to activate the tubing and have it shrink over the soldered portion to create a nice seal.
  6. Repeat this feed-heatshrink-tubing/solder-wire/activate-heatshrink process again but with the cables that come out of the temperature sensor (ideally you should be using the same red/yellow/black-coloured dupont cables to match the ones that come out of the sensor itself, to make it easier to remember which is which).
  7. Install Raspbian onto an SD card and boot and configure the Pi.
  8. Using this diagram as a reference, plug the red (power) cable from the sensor into Pin 2 (the 5V power), the yellow one into Pin 7 (GPIO 4, the data pin), and the black one into Pin 6 (the ground pin).

Adafruit has a Python library for reading data from the sensor; I’m using the node-dht-sensor library for Node.js myself. You can see the full code I’m using here (it’s a bit convoluted because I haven’t updated the API endpoint on my website yet and it’s still expecting the same data format as the Ninja Block was sending).
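
The core of reading the sensor with node-dht-sensor is tiny; a minimal sketch (22 is the sensor type for a DHT22/AM2302, and 4 is the GPIO data pin from step 8 above):

const sensor = require('node-dht-sensor');

// Read from an AM2302/DHT22 (type 22) on GPIO 4 (physical pin 7)
sensor.read(22, 4, (err, temperature, humidity) => {
  if (err) {
    console.error('Failed to read sensor:', err);
    return;
  }
  console.log(`Temperature: ${temperature.toFixed(1)}°C, humidity: ${humidity.toFixed(1)}%`);
});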

I’d found a bunch of stuff about needing a “pull-up” resistor when connecting temperature sensors, but the AM2302 page on adafruit.com says “There is a 5.1K resistor inside the sensor connecting VCC and DATA so you do not need any additional pullup resistors”, and indeed, everything is working a treat!

Adventures with Docker

For a few years now, the new hotness in the software world has been Docker. It’s essentially a very stripped-down virtual machine: instead of each virtual machine needing to run an entire operating system as well as whatever application you’re running inside it, you have just your application and its direct dependencies, and the underlying operating system handles everything else. This means you can package up your application along with whatever other crazy setup or specific versions of software are required, and as long as they have Docker installed, anyone in the world can run it on pretty much anything.

The process of converting something to run in Docker is called “Dockerising”, and I’d tried probably two or so years ago to Dockerise my website (which was at the time still in its Perl incarnation), but without success. Most of that was me not properly understanding Docker, but Docker’s terminology also wasn’t hugely clear, and information on Dockerising Perl applications was a bit thin on the ground at the time.

My new job involves quite a lot of Docker so I figured I should probably have another crack at it, so I sat down in June and managed to get my website running in a Docker container! The two-or-so-years between when I tried it last and now definitely helped, as did having had a little bit of experience with it in the new job.

I think the terminology was one of the bits that I struggled with most, so maybe this explanation will help someone… you have a Docker image, that’s basically a blueprint for a piece of software and all its associated dependencies. From that image (blueprint), you start up one or more containers which are the actual running form of the image. If one container dies (the application inside crashes or whatever), you don’t care and just start up another one and it’s identical each time. To build your own image, you start with a Dockerfile that tells Docker exactly how to construct your application and all the different parts that are required to support it (see my Lessn Archive’s Dockerfile for an example). There really wasn’t any substitute for actually going in and doing it; by struggling and failing I eventually got there in the end.
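
As a concrete (if simplified) example of what a Dockerfile for a small Node.js application can look like (this isn’t my site’s actual Dockerfile, just the general shape, with the entry point and port made up):

# Start from an official Node.js base image
FROM node:8

# All subsequent commands run inside /app in the image
WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package.json package-lock.json ./
RUN npm install --production

# Copy in the rest of the application source
COPY . .

# The port the application listens on, and the command to start it
EXPOSE 3000
CMD ["node", "app.js"]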

Since my initial success with my website, I’ve gone on to put both my old site archive and my URL shortener in Docker containers as well! Next stop is Kristina’s website, but that’s still using Perl and Mojolicious and my initial attempts have not been successful. 😛

Internet history

On Twitter recently, Mark had downloaded the whole archive of his Twitter account’s history and had been poking through it and randomly retweeting amusing old tweets. I downloaded my own Twitter history and quickly realised that a lot of the old things I’d linked to weren’t accessible, because I’d been using my own custom URL shortener (this was before the days of Twitter doing their own URL shortening) and it wasn’t running anymore. Fortunately I’d had the foresight to take a full copy of all of my data and databases from Dreamhost before I shut down my account, and one of those databases was the one that had been backing my URL shortener. A quick import to PostgreSQL and a hacky Node.js application later, it’s all up and running! I’m under no illusions that anyone but me is ever likely to access it, but it’s nice to have another part of my internet history working. I’ve been hosting my own website and images and whatnot (things like pictures I’ve posted on my blog née LiveJournal, or in threads on Ars Technica) in one form or another since about 2002, and the vast majority of those links and images still work!

Speaking of my website, about four years ago now I went and tried to collect all my old websites into a single archive so I could look back and see the progression. The majority of them I actually still had the original source code to, though my very first one or two have been totally lost. The earliest I still have is from March of 1998 when I was not quite fifteen years old! I started out with just HTML, then discovered CSS and Javascript rollover images, and then around 2001 I started using PHP. I had to go in and hack up some of the PHP-based sites in order to get them to work, and oh dear god 18-year-old me was a FUCKING AWFUL coder. One of the sites consisted of a bit over three thousand lines in a single file, with all sorts of duplication and terribleness, and every single one of the sites that was hooked into MySQL had SQL injection vulnerabilities. I’m very proud of just how much my code has improved over the years.

I went back this weekend and managed to recover another handful of sites, and also included exports of the Photoshop files where the original site source wasn’t available. I’ve packed them all up into a Docker container (I’ll write another post about my experiences with Docker at some point soon) and chucked them up on archive.virtualwolf.org for the entire Internet to marvel at how terrible they all were! There’s a little bit more background there, but it’s a lot of fun just looking back at what I did.

Better Raspberry Pi audio: the JustBoom DAC HAT

I decided that the sound output from the Pi’s built-in headphone jack wasn’t sufficient after all and so went searching for better options (a DAC—digital-to-analog converter).

The Raspberry Pi foundation created a specification called “HAT” (Hardware Attached on Top) a few years ago, which specifies a standard way for a device attached to the Pi via its GPIO (General Purpose Input/Output) pins to be automatically identified and configured, along with its drivers. There are a number of DACs now that conform to this standard, and the one I settled on is the JustBoom DAC HAT. JustBoom is a UK company but you can buy their boards locally from Logicware (with $5 overnight shipping no less).

The setup is incredibly simple: connect the plastic mounting plugs, attach the DAC to the Pi, then edit /boot/config.txt to comment out the default audio settings and add three new lines in, then reboot.
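
For reference, the shape of the /boot/config.txt change is something like the below; JustBoom’s guide lists the exact lines to add, so follow that rather than copying this verbatim:

# Comment out the Pi's built-in audio
#dtparam=audio=on

# Enable the JustBoom DAC HAT overlay (one of the lines from JustBoom's guide)
dtoverlay=justboom-dac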

To say that I’m impressed would be an understatement! I didn’t realise just how crappy the audio from the Pi’s built-in headphone jack was until I’d hooked up the new DAC and blasted some music out. I’m not an audiophile and it’s hard to articulate, but I’d compare it most closely to listening to really low-quality MP3s on cheap earbuds versus high-quality MP3s on a proper set of headphones.

If you’re going to be hooking your Pi into a good stereo system, I can’t recommend JustBoom’s DAC HAT enough!

Raspberry Pi project: AirPlay receiver

I bought a Raspberry Pi almost exactly a year ago, intending to eventually replace my Ninja Block and its sometimes-unreliable wireless sensors with hardwired ones (apart from the batteries needing occasional changing, there’s something that interferes with the signal on occasion and I just stop receiving updates from the sensor outside for several hours at a time, and then suddenly it starts working again). To do that, I need to physically run a cable from outside under the pergola to inside where the Raspberry Pi will live, and I don’t really want to go drilling holes through the house willy-nilly. I want to eventually get the electrician in to do some recabling so I’m going to get him to do that as well, but until then the Pi was just sitting there collecting dust. I figured I should find something useful to do with it, but having a Linode meant that any sort of generic “have a Linux box handy to run some sort of server on” itch was already well-scratched.

I did a bit of Googling, and discovered Shairport Sync! It lets you use the Raspberry Pi as an AirPlay receiver to stream music to from iTunes or iOS devices, a la an Apple TV or AirPort Express. We already have an Apple TV but it’s plugged into the HDMI port on the Xbox One which means that to simply stream audio to the stereo we have to have the Xbox One, TV, and Apple TV all turned on (the Apple TV is plugged into the Xbox’s HDMI input so we can say “Xbox, on” and the Xbox turns itself on as well as the TV and amplifier, then “Xbox, watch TV” and it goes to the Apple TV; it works very nicely but is a bit of overkill when all you want to do is listen to music in the lounge room).

Installing Shairport Sync was quite straightforward, I pretty much just followed the instructions in the readme there then connected a 3.5mm to RCA cable from the headphone jack on the Raspberry Pi to the RCA input on the stereo. It’s mentioned in the readme, but this issue contains details on how to use a newer audio driver for the Pi that significantly improves the audio output quality.

The only stumbling block I ran into was the audio output being extremely quiet. Configuring audio in Linux is still an awful mess, but after a whole lot of googling I discovered the “alsamixer” tool (thanks to this blog post), which gives a “graphical” interface for setting the sound volume, and it turned out the output volume was only at 40%! I cranked it up to 100% and while it’s still a bit quieter than what the Apple TV outputs, it doesn’t need a large bump on the volume dial to fix—there’s apparently no amplifier or anything on the Raspberry Pi, it’s straight line-level output. The quality isn’t quite as good as going via the Apple TV, but it gets the job done! I might eventually get a USB DAC or amplifier but this works fine for the time being.

On macOS it’s possible to set the system audio output to an AirPlay device, so you can be watching a video but outputting the audio to AirPlay, and the system keeps the video and audio properly in sync. It works extremely well, but the problem we found with having the Apple TV hooked up to the Xbox One’s HDMI input is that there’s a small amount of lag from the connection. When the audio and video are both coming from the Apple TV there’s no problem, but watching video on a laptop while outputting the sound to the Apple TV meant that the audio was just slightly out of sync from the video. Having the Raspberry Pi as the AirPlay receiver solves that problem too!

UPDATE: Two further additions to this post. Firstly, and most importantly, make sure you have a 5-volt, 2.5-amp power supply for the Raspberry Pi. I’ve been running it off a spare iPhone charger which is 5V but only 1A, and the Pi will randomly reboot under load because it can’t draw enough power from the power supply.

Secondly, the volume changes done with the “alsamixer” tool are not saved between reboots. Once you’ve set the volume to your preferred level, you need to run “sudo alsactl store” to persist it (this was actually mentioned in the blog post I linked to above, but I managed to miss it).

A year of Node.js

Today marks one year exactly since switching my website from Perl to Javascript/Node.js! I posted back in March about having made the switch, but at that point my “production” website was still running on Perl. I switched over full-time to Node.js shortly after that post.

From the very first commit to the latest one:

$ git diff --stat 030430d 6b7c737
[...]
177 files changed, 11313 insertions(+), 2110 deletions(-)

Looking back on it, I’ve learnt a hell of a lot in that one single year! I have—

  • Written a HipChat add-on that hooks into my Ninja Block data (note the temperature in the right-hand column as well as the slash-commands; the button in the right-hand column can be clicked on to view the indoor and outdoor temperatures and the extremes for the day)
  • Refactored almost all of the code into a significantly more functional style, which has the bonus of making it a hell of a lot easier to read
  • Moved from callbacks to Promises, which also massively simplified things (see the progression of part of my Flickr- and HipChat-related code)
  • Completely overhauled my database schema to accommodate the day I eventually replace my Ninja Block with my Raspberry Pi (the Ninja Block is still running though, so I needed to have a “translation layer” to take the data in the format that the Ninja Block sends and convert it to what can be inserted in the new database structure)
  • Added secure, signed, HTTP-only cookies when changing site settings
  • Included functionality to replace my old Twitter image hosting script, and also added a nice front-end to it to browse through old images

Along with all that, I’ve been reading a lot of software engineering books, which have helped a great deal with the refactoring I mentioned above (there was a lot of “Oh god, this code is actually quite awful” after going through with a fresh eye having read some of these books)—Clean Code by Robert C. Martin, Code Complete by Steve McConnell, The Art of Readable Code by Dustin Boswell and Trevor Foucher.

I have a nice backlog in JIRA of new things I want to do in future, so I’m very interested to revisit this in another year and see what’s changed!

Farewell Dreamhost

After 12 years of service, I’m shutting my Dreamhost account down (for those unaware, Dreamhost is a website and email hosting service).

My very first—extremely shitty—websites were hosted on whichever ISP we happened to be using at the time—Spin.net.au, Ozemail, Optus—with an extremely professional-looking URL along the lines of domain.com.au/~username. I registered virtualwolf.org at some point around 2001-2002 and had it hosted for free on a friend’s server for a few years, but in 2005 he shut it down so I had to go find some proper hosting, and that hosting was Dreamhost.

The biggest thing I found useful as I was dabbling in programming was that Dreamhost offered PHP and MySQL, so I was able to create dynamic sites rather than just static HTML. Of course, looking back at the code now is horrifying, especially the amount of SQL injection vulnerabilities I had peppered my sites with.

Around the start of 2011, I started using source control—Subversion initially—and finally had a proper historical record of my code. I used PHP for the first year or so of it, then ended up outgrowing that and switched to a Perl web framework called Mojolicious. The only option for running a long-lived process on Dreamhost is FastCGI, which I never managed to get working with Mojolicious, but fortunately Mojolicious could also run as a regular CGI script so I was still able to use it with Dreamhost, albeit not at great speed.

At the same time I started using Subversion, I also signed up with Linode who offer an entire Linux virtual machine with which you can do almost anything you’d like as you have full root access. I originally used it mostly to run JIRA so I could keep track of what I wanted to do with my website and have the nifty Subversion/JIRA integration working to see my commits against each JIRA issue. I slowly started using the Linode for more and more things (and switched to Git instead of Subversion as well), until in 2014 I moved my entire website hosting over to the Linode.

At that point the only thing I was using Dreamhost for was hosting Kristina’s website and WordPress blog, and the email for our respective domains. Dreamhost’s email hosting wasn’t always the most reliable and towards the end of 2015 they had more than their usual share of problems, so we started looking for alternatives. Kristina ended up moving to Gmail and I went with FastMail (who I am extremely happy with and would very highly recommend!), I moved her blog and my previously-LiveJournal-but-now-Wordpress-blog over to the Linode, and that was that!

Moving my website hosting to the Linode also allowed me to move over to Node.js and I’ve been going full steam ahead ever since. Since that post I’ve moved from callbacks to Promises (so much nicer), I wrote myself a HipChat add-on to keep an eye on the temperature that my Ninja Block is reporting, and I moved my dodgy Twitter image upload Perl script functionality into my site and added a nice front-end to it. Even looking back at my code from six months ago to now shows a marked increase in quality and readability.

So in summary, thanks for everything Dreamhost, but I outgrew you. 🙂

Stubbing services in other services with Sails.js

With all my Javascript learnings going on, I’ve also been learning about testing it. Most of my website consists of pulling in data from other places—Flickr, Tumblr, Last.fm, and my Ninja Block—and doing something with it, and when testing I don’t want to be making actual HTTP calls to each service (for one thing, Last.fm has a rate limit and it’s very easy to run into that when running a bunch of tests in quick succession which then causes your tests to all fail).

When someone looks at a page containing (say) my photos, the flow looks like this:

Request for page → PhotosController → PhotosService → jsonService → pull data from Flickr’s API

PhotosController is just a very thin wrapper that talks to PhotosService, which is what calls jsonService to actually fetch the data from Flickr and then formats it all and sends it back to the controller, to go back to the browser. PhotosService is what needs the most tests due to it doing the most, but as mentioned above I don’t want it to actually make HTTP requests via jsonService. I read a bunch of stuff about mocks and stubs and a Javascript module called Sinon, but didn’t find one single place that clearly explained how to get all this going when using Sails.js. I figured I’d write up what I did here, both for my future reference and for anyone else who runs into the same problem! This uses Mocha for running the tests and Chai for assertions, plus Sinon for stubbing.
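
The shape of what I ended up with is roughly the following. This is a simplified sketch: the method names getPhotos() and fetch() are placeholders rather than my real ones, and it relies on Sails having globalised the services when the app is lifted for testing.

// test/services/PhotosService.test.js
const sinon = require('sinon');
const { expect } = require('chai');

describe('PhotosService', () => {
  beforeEach(() => {
    // Replace the real HTTP call with a canned Flickr-style response
    sinon.stub(jsonService, 'fetch').yields(null, {
      photos: { photo: [{ id: '1', title: 'A test photo' }] }
    });
  });

  afterEach(() => {
    // Put the real method back so other tests aren't affected
    jsonService.fetch.restore();
  });

  it('formats the photos returned by jsonService', (done) => {
    PhotosService.getPhotos((err, result) => {
      expect(err).to.not.exist;
      expect(result[0].title).to.equal('A test photo');
      done();
    });
  });
});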


Learning new things: Javascript and Node.js

We’ve used Node.js (specifically with a framework called Sails.js) at work for a number of projects but I never really felt I properly understood one of Node’s fundamental concepts, that of the callback. It’s absolutely pervasive throughout Node and I was able to muddle on through at work without totally grasping it, but it wasn’t ideal.

Back at the end of January I decided to try rewriting my website using Node.js (it’s currently written in Perl using the Mojolicious framework) as a learning experience. It’s now almost two months later and my site is actually completely rewritten with Node/Sails (sans tests, which are currently being written; I know about test-driven development but I wasn’t about to start bashing my head against failing to understand how to get the tests to do what I wanted on top of learning a whole new language :P) with all the same functionality of my Perl one, and although I’m still far from an expert I actually feel like I have a proper handle on what’s going on.

The problem I found when trying to find examples was that they were all very contrived; I felt like they were missing fundamental underlying parts that apparently everybody else was able to understand but I couldn’t. For me, the “ah ha” moment was this post on Stack Overflow about using callbacks in your own functions. It didn’t assume anything or use an example of some module that apparently everyone is already familiar with (the most common one was fs.read() to read data from the filesystem). Once I had that straight, it was full steam ahead. It’s also significantly easier to deal with Javascript objects compared to Perl’s array/hash references.
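
For anyone else stuck at that same point, the pattern that finally clicked boils down to something like this (a contrived example of writing and calling your own callback-taking function):

// A function that does something asynchronous and reports back via a callback
function getTemperature(sensorName, callback) {
  // Pretend this is a slow operation like a database query or HTTP request
  setTimeout(() => {
    if (sensorName !== 'outdoor') {
      return callback(new Error('Unknown sensor: ' + sensorName));
    }
    // First argument is the error (null if none), then the result
    callback(null, 22.5);
  }, 100);
}

// The caller passes in the function to run once the work is done
getTemperature('outdoor', (err, temperature) => {
  if (err) {
    return console.error('Something went wrong:', err.message);
  }
  console.log('The temperature is', temperature);
});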

My actual live website at virtualwolf.org is still on the old Perl version, but I don’t want to put the Node one up until I’ve actually got it properly covered with tests. Speaking of tests, I’m using a thing called Istanbul for code coverage, the reports it generates look like this, and it’s really satisfying having the numbers and bars go up as your coverage increases. It’s basically gamification of tests, really!

All in all, I’m pretty pleased!

Introducing the LiveJournal XML Importer

Continuing on from my previous post about my LiveJournal to WordPress experience, and how the importer managed to miss a bunch of entries, it turns out I didn’t have every notification email still around. The ones prior to February of 2004 I’d apparently deleted so sadly there’s no recovering Kristina’s really early comments from the missing posts, but from what I could see there weren’t too many of those anyway, thankfully.

However, I’m happy to say that I’ve been able to hack the importer to import all the entries and comments from an ljdump archive! I’ve put the code up on Bitbucket; I’m sure there are bugs and edge-cases and things that don’t work properly, but it worked perfectly for me. I’ve changed it from the original importer to still import comments from journals that have been deleted, so the threading remains intact and you don’t end up with weird comments seemingly replying to nothing. They’re easily identified by the fact that the date on the comment is set to the time you performed the import, so they show up at the top of the Comments section in WordPress’ admin.

Missing history

So it turns out that the LiveJournal to WordPress Importer didn’t actually import everything. I’d been going through and updating links to old entries to point to their relevant entry here in WordPress, and there were several pages that didn’t actually make it across (it also imported every single comment back to around 2010 twice, so I had to go through and delete all of those duplicates; prior to 2010 it was fine, for some weird reason). That wouldn’t have been too bad, but between when I originally imported my LiveJournal and when I discovered this, Kristina’s old LiveJournal account was deleted, which also meant that every single comment of hers on my LiveJournal was now gone, and any fresh import I did directly from the LiveJournal API wouldn’t have them at all. 🙁 Looking back through my old entries was kind of sad, with just “Deleted comment” everywhere in place of Kristina’s actual comments.

I wanted to have the complete history of my LiveJournal here in WordPress, but I also didn’t want to have all of Kristina’s comments missing. I figured SOMETHING had to be able to be done!

I used ljdump to hit LiveJournal’s API and download each entry there into a raw XML file, and that had grabbed all the journal entries, so clearly something had fucked up in the WordPress import part.

The situation was this:

  • I had most but not all of my old LiveJournal entries imported into WordPress
  • Those entries that made it across did have Kristina’s old comments on them
  • I had all of my own entries downloaded to raw XML
  • ljdump also grabbed all the comments for each entry as well (sans Kristina’s, obviously)

I manually went through and compared the entries in WordPress to those on my LiveJournal month-by-month, and found that there were 68 missing ones in total. I hacked at the LiveJournal to WordPress Importer plugin until I was able to get it to read the raw XML files that’d come directly from ljdump, then spun up a new temporary WordPress install and was able to import just those missing entries. Next, I erased that temporary instance, imported the full backup from this blog, then ran the importer again to bring in just those 68 missing entries from XML, and it worked a treat.

Unfortunately there were a handful of those entries that also had had Kristina’s comments on them previously, so they were still missing. Thankfully, me being the digital hoarder that I am, I still had all of the email notifications that LiveJournal had sent me for each and every comment on my journal, and the LiveJournal API actually shows even deleted comments in their properly threaded state, just with no body or detail beyond the username who posted it. So I was able to copy the content and timestamp for each comment of Kristina’s that’d been on those missing entries that weren’t imported, and update the raw comment XML with that detail!

This is still a work in progress and my next step is to hack at the importer further to read the comments directly from XML (currently it’s reading the journal entries from the XML files, but the comments are still pulled from LiveJournal’s API directly). It’ll definitely be do-able, it may just take a little while because everything related to WordPress is in PHP and I’ve not done any PHPing for quite a number of years now!

This may seem a bit odd, but given I have 13 years of history in LiveJournal, and it’s where Kristina and I initially started chatting a lot more before she visited and we got together, I didn’t want to have these weird entries where Kristina was just essentially erased from my blog history.

I might even put my modifications to the plugin up on Bitbucket if it seems to be working well, given the current LiveJournal to WordPress Importer is a bit shit.

New computer!

I’d been looking to upgrade to a 27″ iMac from my mid-2010 MacBook Pro, and five years from a machine is not bad at all! I was hoping that the non-retina iMacs would get one last upgrade but it was not to be. I checked the Apple Store after the cheaper model of retina iMac was released last week, and they had in fact removed all of the custom options from it other than RAM and storage, so the only video card available was the base-level 1GB one. Bleh.

However! There was fortuitously an almost-maxed-out 27″ Late 2013 iMac available on the refurb store: a 4GB Nvidia GeForce GTX 780M video card (top of the line), and a 3TB Fusion Drive. Only 8GB of RAM, but it’s user-replaceable in this model anyway, so I have another 16GB on the way. The thing with the refurb machines is that what you see is what you get, there are no upgrades or anything available, but that was okay. The only change I’d have made is a 1TB SSD (the Fusion Drive is a large hard disk combined with a relatively small SSD—128GB) but ah well. I saved probably more than a grand all up!

kungfupolarbear has inherited my old machine as it’s a decent step up from her MacBook Air, and we’re going to use the MacBook Air as a travel computer, and if I want to do some coding from the lounge or some such.

\o/

kungfupolarbear is in the US visiting her family; she left yesterday and will be back in the country in two weeks. It’s weird not having her here, I feel like I’m just biding my time until she’s back.

The whole stereotype of “The wife is away, I’m going to do all of these things that she doesn’t let me do normally!” just doesn’t apply to us at all. We’ll both happily play video games or putter around on the computer or whatever. I’m having gypocalypse, wobin, and some other friends over on Saturday to play this Warhammer 40,000-themed D&D-type thing, and that wouldn’t matter if kungfupolarbear were here; she’d happily hang out despite having zero interest in actually playing it, and would absolutely not begrudge us the time spent playing it. Everything is just so totally effortless, it still amazes me. <3

I had yesterday and today off (yesterday because we woke up at 3:30am in order to get to the airport on time D: ), and am working from home the rest of the time she’s gone since Beanie needs looking after and obviously can’t be by himself for 11 hours a day.

In other news, we bought an Xbox One; a colleague of mine was selling his as he’s moving to Paris. We’ve got Forza 5 (a racing game) to tide us over until the Diablo III expansion comes out, and man, the graphics are nice. Forza is only a launch title too, so games will look even more impressive once developers get a handle on squeezing the most out of the system. One very slick thing is that you can have the Xbox turn the TV and amplifier on when it turns on, and it’ll turn them off again when the Xbox turns off too.

DIY, and some not so DIY

No posts for the last month, mostly because there’s not been anything of note happening. Yesterday, though… phew!

So, the path to the back room was pretty bloody awful.

kungfupolarbear had found a photo of a nice path design, so we decided to go to Bunnings and get the supplies for it. Long story short, about five hours and many sore muscles later, we have a new path!

As per pretty much everything so far, it took more work than expected.

And today we had an electrician come to replace the god-awful ceiling lights that were in the lounge room and kitchen, and to wire up some ethernet for us too. It’s so nice being able to put our own little touches on things now!

In other news, our annual bonus ended up being 12.5%! \o/ Of course, 45.6 cents in the dollar of that went to tax, but that was still a nice chunk of change. So I’ve ordered a NinjaBlock. Should be fun to play around with.

A new design

One of the things I’ve done with my website is add an entirely new design, along with the ability to switch between the current design and the new one. Check it out!

The “Dark” style is the current one, and the “Light” style is the new one. I’m really pleased with how it turned out, design-wise, even apart from managing to implement a style-switcher that stores the current selection in a session cookie so it’s remembered next time you visit. 😀
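For the curious, the switcher itself is conceptually very simple: read the choice from a cookie, fall back to the default, and set the cookie again whenever the visitor picks a different style. Here’s a minimal sketch of the idea in PHP; the cookie and parameter names are made up for illustration and it’s not my site’s actual code.

<?php
// Illustrative sketch of a style-switcher: remember the visitor's choice in a
// session cookie and emit the matching stylesheet. Names here are hypothetical.
$allowed_styles = array('dark', 'light');
$style = 'dark';  // default style

if (isset($_GET['style']) && in_array($_GET['style'], $allowed_styles, true)) {
    // The visitor clicked the switcher link, e.g. ?style=light
    $style = $_GET['style'];
    // No expiry argument makes this a session cookie (kept until the browser closes)
    setcookie('style', $style, 0, '/');
} elseif (isset($_COOKIE['style']) && in_array($_COOKIE['style'], $allowed_styles, true)) {
    // Otherwise use whatever was remembered from earlier in the visit
    $style = $_COOKIE['style'];
}
?>
<link rel="stylesheet" href="/css/<?php echo htmlspecialchars($style); ?>.css">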

Oops

Yes, I’ve forgotten to update LiveJournal for another several months. 😛 I keep remembering to update my LJ at random useless times (like when I’m nowhere near a computer or am about to go to bed). I’ve now added a category to Things on my computer and iPhone so I can hopefully at least quickly add a reminder to blog things, and there’ll be more frequent updates. 😛 (Yes, I’m a massive nerd.)

Biggest thing is that today is our third wedding anniversary! \o/ (Squee wedding photos squee). This is actually a Lily weekend, which is slightly annoying, but we went out to dinner last night after work, and I had some beautiful roses sent to kungfupolarbear’s work, which she spent the whole day squeeing about. 😀 Next weekend we’re going to Melbourne on Sunday and Monday, so we’ll eat lots of delicious food and take lots of photos. I can’t wait! It’s crazy, things just keep getting better and more awesome! BEST WIFE EVER.

In other news, Lily turned five last month. O_O And she’s already half-way through her first year of school. WHAT THE HELL HOW WHY WHAT.

On the subject of nerdy things, someone posted an epic rant back in April on how shit PHP is. My website was written in PHP, so that rant gave me the impetus to find something else. I discovered a Perl web framework called Mojolicious and have totally rewritten my site with it. It has all sorts of useful functions for reading JSON and such, so I’ve hooked my website into my Flickr and Tumblr accounts, and it pulls in my Last.fm stats too. 😀 It’s really been a lot of fun!

I was looking at my slightly older LJ entries and saw the one from Christmas about the guitar input… I’ve basically not touched it or the guitar for like four months now. 🙁 I think the DVD that I bought (learning the chords by themselves) just doesn’t work for me; I end up losing interest far too quickly. That’s probably the fifth or sixth time I’ve attempted to learn, and I just can’t do it. I suspect I need to learn actual full songs, but I also have far too many other things vying for my attention; I’d totally forgotten about the guitar input until I was writing this entry, heh.

…and holy shit, I also just realised that today not only marks our third wedding anniversary, but ten years TO THE DAY since my first LJ post! 😮 God damn. I’ve also been registered and posting on Ars Technica for over eleven years, and have been going by “VirtualWolf” online for a good thirteen years now. All of those numbers are more than a third of my life, which blows my mind.

Oops

I know I said I’d update this more often, but not too much update-worthy has been happening! Work has been crazy busy this week, as we had an update over the weekend, so of course there are random things not working and exciting bugs to deal with.

With regard to this post, I bought a book on Objective-C and am attempting to learn it so I can get my iPhone programming on! We play a lot of Magic at work, so I’m going to try to write an app to keep track of life totals. Dead simple, I know, but it’s a project for me to sink my teeth into, and it also shouldn’t be horrendously difficult. \o/ It might even end up on the App Store (if there aren’t already several apps like this I’ll be shocked), but that’s a lesser priority.

…and that’s about it, really.

All VCed up and nothing to code

I’ve been getting my head around version control and have been using it for my website; now I want to code something more, but I have absolutely nothing I can think of that I want to do! My website is pretty much just a bunch of links to other things (LJ, Twitter, Flickr, etc.), and there’s not too much more I can do with it there.

I’d learn Objective-C and make some Mac applications, but I really don’t have anything I want to create. I’ve found from previous experience that if I don’t actually have a pet project in mind, attempting to learn a programming language is doomed to failure. I don’t do anything outside of work that really would require any little scripts or anything, so that’s pretty ruled out too.

Of geekery, and bad movies

I’ve been getting my geek on in a big way.

Since I’m supporting JIRA Studio, and it’s a lot more sysadminy (lots of command-line work, SSHing into hosts, etc.), I’ve signed up with Linode and have installed CentOS on it (CentOS is what we use at work for just about all our server machines) to get my learnings on.

I’ve got my own JIRA instance running on it; it’s also running Subversion and email (Postfix and Dovecot), and I’m using SVN for my web development as well (this being my regular website, I test out everything on here). I’ve integrated JIRA with my Subversion repository, and have OpenLDAP set up to handle authentication for Subversion (via Apache), email, and JIRA.

In other news, we’ve been doing bad movie night with gypocalypse on a semi-regular basis, and oh, it’s such fun. So. Much. Snarking! We’ve covered a lot of video game movies (Super Mario Brothers, Mortal Kombat 1 and 2, Double Dragon, all the FUCKING AWFUL Uwe Boll films, etc.), and our movie of choice for tonight is Robowar, and if we have enough time, BIRDEMIC: SHOCK AND TERROR.

😀 😀

Still nothing on the job front.

In other news, I’ve installed VMware Fusion 3 on my Mac mini server and am going to install Windows Server 2008 on it, mostly for fiddling and learning purposes. A friend has a copy from MSDN with a legitimate key, so woo! Learnings are fun.

We’re busily sitting at work doing very little, and talking complete shit in the joint internal iChat server that the Tier 2s from both Sydney and Singapore are in. It’s fun!