Setting up DNS over HTTPS on macOS

Back in April, Cloudflare announced a privacy-focused DNS server running at 1.1.1.1 (and 1.0.0.1), and that it supported DNS over HTTPS. A lot of regular traffic goes over HTTPS these days, but DNS queries to look up the IP address of a domain are still unencrypted, so your ISP can still snoop on which servers you’re visiting even if they can’t see the actual content. We have a Mac mini that runs macOS Server and does DHCP and DNS for our home network, among other things, and with the impending removal of those functions in an upcoming version (the suggested replacements being regular non-UI tools), I figured now would be a good time to look into moving us over to Cloudflare’s shiny new DNS server at the same time.

Turns out it wasn’t that difficult!

Overview

  1. Install Homebrew.
  2. Install cloudflared and dnsmasq: brew install cloudflare/cloudflare/cloudflared dnsmasq
  3. Configure dnsmasq to point to cloudflared as its own DNS resolver.
  4. Configure cloudflared to use DNS over HTTPS and run on port 54.
  5. Install both as services to run at system boot.

Configuring dnsmasq

Edit the configuration file located at /usr/local/etc/dnsmasq.conf: uncomment line 66 and change it from server=/localnet/192.168.0.1 to server=127.0.0.1#54. This tells dnsmasq to pass DNS requests on to localhost on port 54, which is where cloudflared will be set up.
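The relevant line should end up looking like this (the exact line number may differ between dnsmasq versions):

# Forward all DNS queries to cloudflared, listening on localhost port 54
server=127.0.0.1#54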

Configuring cloudflared

Create the directory /usr/local/etc/cloudflared and create a file inside that called config.yml with the following contents:

no-autoupdate: true
proxy-dns: true
proxy-dns-port: 54
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Auto-update is disabled because it seems to break things: when an update occurs, the service doesn’t start back up correctly.

Configuring dnsmasq and cloudflared to start on system boot

dnsmasq: sudo brew services start dnsmasq will both start it immediately and also set it to start at system boot.

cloudflared: sudo cloudflared service install, which installs it for launchctl at /Library/LaunchDaemons/com.cloudflare.cloudflared.plist.
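Before pointing any other machines at it, it’s worth checking that both pieces are answering. A quick sanity check using dig (which ships with macOS) might look like this; the domain is just an example:

# Ask cloudflared directly on port 54
dig +short -p 54 @127.0.0.1 example.com

# Ask dnsmasq on the standard port 53
dig +short @127.0.0.1 example.com

Both should come back with the same addresses.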

Updating your DNS servers

Now that dnsmasq and cloudflared are running, you need to actually tell your machines to use them as their DNS servers! Open up System Preferences > Network, hit Advanced, and in the DNS tab click the + button and add the Mac mini’s local IP address (you’ll want to make sure it has a static IP address, of course). Repeat the process for everything else on your local network so that all of their DNS traffic goes through the Mac mini, and so on to 1.1.1.1 over HTTPS as well.
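If you’d rather do this from the terminal (handy for a headless machine), macOS’s networksetup tool can set the same thing. This is just a sketch assuming the network service is called “Wi-Fi” and the Mac mini is at 192.168.1.2, so substitute your own values:

sudo networksetup -setdnsservers "Wi-Fi" 192.168.1.2

# Confirm what's now set
networksetup -getdnsservers "Wi-Fi"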

You can confirm that all your DNS traffic is going where it should be with dnsleaktest.

And done!

I was surprised at how straightforward this was. I also didn’t realise until I was doing all of this that dnsmasq also does DHCP, so with the assistance of this blog post I’ve also replaced the built-in DHCP server on the Mac mini and continue to have full local hostname resolution as well!

Another year of Node.js (now also featuring React)

I posted last year about my progress with Node.js, and the last sentence included “I’m very interested to revisit this in another year and see what’s changed”.

So here we are!

There’s been a fair bit less work on it this year compared to last:

$ git diff --stat 6b7c737 47c364b
[...]
77 files changed, 2862 insertions(+), 3315 deletions(-)

The biggest change was migrating to Node 8’s shiny new async/await, which means that the code reads exactly as if it was synchronous (see the difference in my sendUpdate() code compared to the version above it). It’s really very nice. I also significantly simplified my code for receiving temperature updates thanks to finally moving over to the Raspberry Pi over the Christmas break. Otherwise it’s just been minor bits and pieces, and moving from Bamboo to Bitbucket Pipelines for the testing and deployment pipeline.
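As a rough illustration of what that migration looks like (the helper function names here are made up, not my actual sendUpdate() code):

// Before: chaining Promises
function sendUpdate(reading) {
  return formatReading(reading)
    .then((payload) => postToApi(payload))
    .then((response) => logResult(response))
    .catch((err) => console.error(err));
}

// After: async/await, which reads top-to-bottom like synchronous code
async function sendUpdate(reading) {
  try {
    const payload = await formatReading(reading);
    const response = await postToApi(payload);
    return logResult(response);
  } catch (err) {
    console.error(err);
  }
}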

I also did a brief bit of dabbling with React, which is a frontend framework for building single-page applications. I’d tried to fiddle with it a couple of years ago but there was something fundamental I wasn’t grasping, and ended up giving up. This time it took, though, and the result is virtualwolf.cloud! All it’s doing is pulling in data from my regular website, but it was still a good start.
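Just to give a flavour of the kind of component involved (the endpoint and field names here are invented for illustration, not the real virtualwolf.cloud code):

import React from 'react';

class LatestPosts extends React.Component {
  constructor(props) {
    super(props);
    this.state = { posts: [] };
  }

  componentDidMount() {
    // Pull data from the existing site's (hypothetical) JSON endpoint
    fetch('https://example.org/posts.json')
      .then((response) => response.json())
      .then((posts) => this.setState({ posts }));
  }

  render() {
    return (
      <ul>
        {this.state.posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
    );
  }
}

export default LatestPosts;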

There was a good chunk of time from about the middle of the year through to Christmas where I didn’t do any personal coding at all, because I was doing it at work instead! At my new job, the primary point of contact for users seeking help is via a room on Stride, and we needed a way to be able to categorise those contacts to see what users were contacting us about and why. A co-worker wrote an application in Ruby a few years ago to scrape the history of a HipChat room and apply tags to it in order to accomplish this, but it didn’t scale very well (it was essentially single-tenanted and required a separate deployment of the application for each additional room it was installed in; understandable when you realise he wrote it entirely for himself and was the only one doing this for a good couple of years). I decided to rewrite it entirely from scratch to support Stride and multiple rooms, with the backend written in Node.js and the frontend in React. It really is a fully-fledged application, and it’s been installed into nearly 30 different rooms at work now, so different teams can keep track of their contact rate!

The backend periodically hits Stride’s API for each room it’s installed in, and saves the messages in that room into the database. There’s some logic around whether a message is marked as a contact or not (as in, it was someone asking for help), and there’s also a whitelist that the team who owns the room can add their team members to in order to never have their own messages marked as contacts. Once a message is marked as a contact, they can then add one or more user-defined tags to it, and there’s also a monthly report so you can see the number of contacts for each tag and the change from the previous month.
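As a very rough sketch of the kind of rule involved (the function and field names here are invented, not the real schema):

// Decide whether a newly-saved message should be marked as a contact.
// Hypothetical field names, purely for illustration.
function isContact(message, room) {
  // Messages from the owning team's own members are never contacts
  if (room.whitelist.includes(message.senderId)) {
    return false;
  }
  // Everything else is treated as a potential contact, which the team
  // can then tag with one or more of their user-defined tags
  return true;
}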

The backend is really just a bunch of REST endpoints that are called by the frontend, but that feels like I’m short-changing myself. 😛 I wrote up a diagram of the hierarchy of the frontend components a month or so ago, which gives you an idea of just how complex it is.

And I’m in the middle of adding the ability to have a “group” of rooms, and have tags defined at the group level instead of the room level.

I find it funny how if I’m doing a bunch of coding at work, I have basically zero interest in doing it at home, but if I haven’t had a chance to do any there I’m happy to come home and code. I don’t think I have the brain capacity to do both at once though. 😛

Adventures with Docker

For a few years now, the new hotness in the software world has been Docker. It’s essentially a very stripped-down virtual machine: instead of each virtual machine needing to run an entire operating system as well as whatever application you’re running inside it, you have just your application and its direct dependencies, and the underlying operating system handles everything else. This means you can package up your application along with whatever other crazy setup or specific versions of software are required, and anyone in the world can run it on pretty much anything, as long as they have Docker installed.

The process of converting something to run in Docker is called “Dockerising”, and I’d tried probably two or so years ago to Dockerise my website (which was at the time still in its Perl incarnation), but without success. Most of that was me not properly understanding Docker, but Docker’s terminology also wasn’t hugely clear, and information on Dockerising Perl applications was a bit thin on the ground at the time.

My new job involves quite a lot of Docker so I figured I should probably have another crack at it, so I sat down in June and managed to get my website running in a Docker container! The two-or-so-years between when I tried it last and now definitely helped, as did having had a little bit of experience with it in the new job.

I think the terminology was one of the bits I struggled with most, so maybe this explanation will help someone: you have a Docker image, which is basically a blueprint for a piece of software and all its associated dependencies. From that image (blueprint), you start up one or more containers, which are the actual running form of the image. If one container dies (the application inside crashes or whatever), you don’t care; you just start up another one and it’s identical each time. To build your own image, you start with a Dockerfile that tells Docker exactly how to construct your application and all the different parts that are required to support it (see my Lessn Archive’s Dockerfile for an example). There really wasn’t any substitute for actually going in and doing it; by struggling and failing I eventually got there in the end.
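For a flavour of what a Dockerfile looks like, here’s a minimal, hypothetical one for a Node.js application (not the actual Lessn Archive one):

# Start from an existing image that already has Node.js installed
FROM node:8

WORKDIR /app

# Install the dependencies first so Docker can cache this layer
COPY package.json package-lock.json ./
RUN npm install --production

# Copy in the rest of the application source
COPY . .

EXPOSE 3000
CMD ["node", "app.js"]

Building and running it is then just docker build -t my-site . followed by docker run -p 3000:3000 my-site, and the resulting image runs the same way anywhere Docker is installed.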

Since my initial success with my website, I’ve gone on to put both my old site archive and my URL shortener in Docker containers as well! Next stop is Kristina’s website, but that’s still using Perl and Mojolicious and my initial attempts have not been successful. 😛

Internet history

On Twitter recently, Mark had downloaded the whole archive of his Twitter account’s history and had been poking through it and randomly retweeting amusing old tweets. I downloaded my own Twitter history and quickly realised that a lot of the old things I’d linked to weren’t accessible, because I’d been using my own custom URL shortener (this was before the days of Twitter doing their own URL shortening) and it wasn’t running anymore. Fortunately I’d had the foresight to take a full copy of all of my data and databases from Dreamhost before I shut down my account, and one of those databases was the one that had been backing my URL shortener. A quick import into PostgreSQL and a hacky Node.js application later, it’s all up and running! I’m under no illusions that it’s ever going to be accessed by anyone except me, but it’s nice to have another part of my internet history working. I’ve been hosting my own website and images and whatnot (things like pictures I’ve posted on my blog née LiveJournal, or in threads on Ars Technica) in one form or another since about 2002, and the vast majority of those links and images still work!

Speaking of my website, about four years ago now I went and tried to collect all my old websites into a single archive so I could look back and see the progression. The majority of them I actually still had the original source code to, though my very first one or two have been totally lost. The earliest I still have is from March of 1998 when I was not quite fifteen years old! I started out with just HTML, then discovered CSS and Javascript rollover images, and then around 2001 I started using PHP. I had to go in and hack up some of the PHP-based sites in order to get them to work, and oh dear god 18-year-old me was a FUCKING AWFUL coder. One of the sites consisted of a bit over three thousand lines in a single file, with all sorts of duplication and terribleness, and every single one of the sites that was hooked into MySQL had SQL injection vulnerabilities. I’m very proud of just how much my code has improved over the years.

I went back this weekend and managed to recover another handful of sites, and also included exports of the Photoshop files where the original site source wasn’t available. I’ve packed them all up into a Docker container (I’ll write another post about my experiences with Docker at some point soon) and chucked them up on archive.virtualwolf.org for the entire Internet to marvel at how terrible they all were! There’s a little bit more background there, but it’s a lot of fun just looking back at what I did.

A year of Node.js

Today marks one year exactly since switching my website from Perl to Javascript/Node.js! I posted back in March about having made the switch, but at that point my “production” website was still running on Perl. I switched over full-time to Node.js shortly after that post.

From the very first commit to the latest one:

$ git diff --stat 030430d 6b7c737
[...]
177 files changed, 11313 insertions(+), 2110 deletions(-)

Looking back on it, I’ve learnt a hell of a lot in that one single year! I have—

  • Written a HipChat add-on that hooks into my Ninja Block data (note the temperature in the right-hand column as well as the slash-commands; the button in the right-hand column can be clicked on to view the indoor and outdoor temperatures and the extremes for the day)
  • Refactored almost all of the code into a significantly more functional style, which has the bonus of making it a hell of a lot easier to read
  • Moved from callbacks to Promises, which also massively simplified things (see the progression of part of my Flickr- and HipChat-related code, and the rough sketch after this list)
  • Completely overhauled my database schema to accommodate the day I eventually replace my Ninja Block with my Raspberry Pi (the Ninja Block is still running though, so I needed a “translation layer” to take the data in the format that the Ninja Block sends and convert it into what can be inserted in the new database structure)
  • Added secure, signed, HTTP-only cookies when changing site settings
  • Included functionality to replace my old Twitter image hosting script, and also added a nice front-end to it to browse through old images
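As a rough illustration of that callbacks-to-Promises move (the function names here are invented, not my actual Flickr code):

// Callback style: every step nests inside the previous one
function getRecentPhotos(callback) {
  fetchFromFlickr({ perPage: 10 }, (err, photos) => {
    if (err) {
      return callback(err);
    }
    formatPhotos(photos, (err, formatted) => {
      if (err) {
        return callback(err);
      }
      callback(null, formatted);
    });
  });
}

// Promise style: the steps chain instead of nesting
function getRecentPhotos() {
  return fetchFromFlickr({ perPage: 10 })
    .then((photos) => formatPhotos(photos));
}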

Along with all that, I’ve been reading a lot of software engineering books, which have helped a great deal with the refactoring I mentioned above (there was a lot of “Oh god, this code is actually quite awful” after going through it with a fresh eye, having read some of these books)—Clean Code by Robert C. Martin, Code Complete by Steve McConnell, The Art of Readable Code by Dustin Boswell and Trevor Foucher.

I have a nice backlog in JIRA of new things I want to do in future, so I’m very interested to revisit this in another year and see what’s changed!

Farewell Dreamhost

After 12 years of service, I’m shutting my Dreamhost account down (for those unaware, Dreamhost is a website and email hosting service).

My very first—extremely shitty—websites were hosted on whichever ISP we happened to be using at the time—Spin.net.au, Ozemail, Optus—with an extremely professional-looking URL along the lines of domain.com.au/~username. I registered virtualwolf.org at some point around 2001-2002 and had it hosted for free on a friend’s server for a few years, but in 2005 he shut it down so I had to go find some proper hosting, and that hosting was Dreamhost.

The biggest thing I found useful as I was dabbling in programming was that Dreamhost offered PHP and MySQL, so I was able to create dynamic sites rather than just static HTML. Of course, looking back at the code now is horrifying, especially the amount of SQL injection vulnerabilities I had peppered my sites with.

Around the start of 2011, I started using source control—Subversion initially—and finally had a proper historical record of my code. I used PHP for the first year or so of it, then ended up outgrowing that and switched to a Perl web framework called Mojolicious. The only option for running a long-lived process on Dreamhost was to use FastCGI, which I never managed to get working with Mojolicious, but fortunately Mojolicious could also run as a regular CGI script so I was still able to use it with Dreamhost, albeit not at great speed.

At the same time I started using Subversion, I also signed up with Linode who offer an entire Linux virtual machine with which you can do almost anything you’d like as you have full root access. I originally used it mostly to run JIRA so I could keep track of what I wanted to do with my website and have the nifty Subversion/JIRA integration working to see my commits against each JIRA issue. I slowly started using the Linode for more and more things (and switched to Git instead of Subversion as well), until in 2014 I moved my entire website hosting over to the Linode.

At that point the only thing I was using Dreamhost for was hosting Kristina’s website and WordPress blog, and the email for our respective domains. Dreamhost’s email hosting wasn’t always the most reliable, and towards the end of 2015 they had more than their usual share of problems, so we started looking for alternatives. Kristina ended up moving to Gmail and I went with FastMail (who I am extremely happy with and would very highly recommend!), I moved her blog and my previously-LiveJournal-but-now-WordPress blog over to the Linode, and that was that!

Moving my website hosting to the Linode also allowed me to move over to Node.js, and I’ve been going full steam ahead ever since. Since that post I’ve moved from callbacks to Promises (so much nicer), written myself a HipChat add-on to keep an eye on the temperature that my Ninja Block is reporting, and moved my dodgy Twitter image upload Perl script’s functionality into my site with a nice front-end added to it. Even looking back at my code from six months ago to now shows a marked increase in quality and readability.

So in summary, thanks for everything Dreamhost, but I outgrew you. 🙂

Stubbing services in other services with Sails.js

With all my Javascript learnings going on, I’ve also been learning about testing it. Most of my website consists of pulling in data from other places—Flickr, Tumblr, Last.fm, and my Ninja Block—and doing something with it, and when testing I don’t want to be making actual HTTP calls to each service (for one thing, Last.fm has a rate limit and it’s very easy to run into that when running a bunch of tests in quick succession which then causes your tests to all fail).

When someone looks at a page containing (say) my photos, the flow looks like this:

Request for page → PhotosController → PhotosService → jsonService → pull data from Flickr’s API

PhotosController is just a very thin wrapper that talks to PhotosService, which calls jsonService to actually fetch the data from Flickr, then formats it all and sends it back to the controller to go back to the browser. PhotosService is what needs the most tests since it does the most, but as mentioned above I don’t want it to actually make HTTP requests via jsonService. I read a bunch of stuff about mocks and stubs and a Javascript module called Sinon, but didn’t find one single place that clearly explained how to get all this going when using Sails.js. I figured I’d write up what I did here, both for my future reference and for anyone else who runs into the same problem! This uses Mocha for running the tests and Chai for assertions, plus Sinon for stubbing.
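The gist of it looks something like the following sketch. (The jsonService and PhotosService method names and the response shape are assumptions for illustration, not the real code, and it assumes Sails has been lifted in a global before hook so the services are available as globals in the tests.)

const sinon = require('sinon');
const expect = require('chai').expect;

describe('PhotosService', () => {
  let jsonStub;

  beforeEach(() => {
    // Replace the real HTTP call with a stub that returns canned data
    jsonStub = sinon.stub(jsonService, 'getJson').returns(
      Promise.resolve({ photos: [{ id: '1', title: 'A test photo' }] })
    );
  });

  afterEach(() => {
    // Put the real method back so other tests aren't affected
    jsonStub.restore();
  });

  it('formats the photos from the Flickr response', () => {
    return PhotosService.getPhotos().then((photos) => {
      expect(jsonStub.calledOnce).to.be.true;
      expect(photos[0].title).to.equal('A test photo');
    });
  });
});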


Learning new things: Javascript and Node.js

We’ve used Node.js (specifically with a framework called Sails.js) at work for a number of projects but I never really felt I properly understood one of Node’s fundamental concepts, that of the callback. It’s absolutely pervasive throughout Node and I was able to muddle on through at work without totally grasping it, but it wasn’t ideal.

Back at the end of January I decided to try rewriting my website using Node.js (it’s currently written in Perl using the Mojolicious framework) as a learning experience. It’s now almost two months later and my site is actually completely rewritten with Node/Sails (sans tests, which are currently being written; I know about test-driven development but I wasn’t about to start bashing my head against failing to understand how to get the tests to do what I wanted on top of learning a whole new language :P) with all the same functionality as my Perl one, and although I’m still far from an expert I actually feel like I have a proper handle on what’s going on.

The problem I found when trying to find examples was that they were all very contrived; I felt like they were missing fundamental underlying parts that apparently everybody else was able to understand but I couldn’t. For me, the “ah ha” moment was this post on Stack Overflow about using callbacks in your own functions. It didn’t assume anything or use an example of some module that apparently everyone is already familiar with (the most common one was fs.read() to read data from the filesystem). Once I had that straight, it was full steam ahead. It’s also significantly easier to deal with Javascript objects compared to Perl’s array/hash references.
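To illustrate with a sketch of my own (the function here is made up, but this is the pattern that post finally made click for me): write a function that takes a callback as its last argument, and invoke that callback with an error first and the result second once the asynchronous work is done.

// A function that accepts a callback, following the Node convention of
// calling it with (error, result) when the asynchronous work finishes.
function getTemperature(sensorId, callback) {
  // setTimeout stands in for a real asynchronous operation, like a
  // database query or an HTTP request.
  setTimeout(() => {
    if (!sensorId) {
      return callback(new Error('No sensor ID given'));
    }
    callback(null, { sensorId: sensorId, celsius: 22.5 });
  }, 100);
}

// Calling it then looks just like calling fs.readFile() or any other
// callback-based Node API.
getTemperature('outdoor', (err, reading) => {
  if (err) {
    return console.error(err);
  }
  console.log('It is ' + reading.celsius + ' degrees outside');
});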

My actual live website at virtualwolf.org is still on the old Perl version, but I don’t want to put the Node one up until I’ve actually got it properly covered with tests. Speaking of tests, I’m using a thing called Istanbul for code coverage, the reports it generates look like this, and it’s really satisfying having the numbers and bars go up as your coverage increases. It’s basically gamification of tests, really!

All in all, I’m pretty pleased!

Introducing the LiveJournal XML Importer

Continuing on from my previous post about my LiveJournal to WordPress experience, and how the importer managed to miss a bunch of entries: it turns out I didn’t have every notification email still around. The ones prior to February of 2004 I’d apparently deleted, so sadly there’s no recovering Kristina’s really early comments from the missing posts, but from what I could see there weren’t too many of those anyway, thankfully.

However, I’m happy to say that I’ve been able to hack the importer to import all the entries and comments from an ljdump archive! I’ve put the code up on GitHub; I’m sure there are bugs and edge-cases and things that don’t work properly, but it worked perfectly for me. I’ve changed it from the original importer so that it still imports comments from journals that have been deleted, so the threading remains intact and you don’t end up with weird comments seemingly replying to nothing. They’re easily identified by the fact that the date on the comment is set to the time you performed the import, so they show up at the top of the Comments section in WordPress’ admin.