Running Pi-hole using Docker on your Mac

A colleague of mine mentioned an open source program called Pi-hole designed to act as a DNS resolver in your local network to blackhole trackers and ads. The biggest advantage is that it also covers devices that don’t support ad blockers natively or make them cumbersome to use.

So how does one trial something on their local Mac to see if it’s worthwhile? Turns out the project has a Dockerfile and it works quite well, and if you don’t expose the DHCP ports you can avoid breaking your work network with a rogue DHCP server. So assuming you already have Docker installed:

First, create an env file with the container’s settings (TZ and WEBPASSWORD are the usual Pi-hole variables; the values here are just examples):

cat <<EOF | tee ~/.piholeenv
TZ=America/Edmonton
WEBPASSWORD=changeme
EOF

docker pull pihole/pihole

docker run -d -p 80:80 -p 53:53 -p 53:53/udp -p 443:443 --restart=unless-stopped --env-file ~/.piholeenv --name pihole pihole/pihole


Next, point your Mac’s DNS at the Pi-hole container:

networksetup -setdnsservers Wi-Fi 127.0.0.1

And voilà: a quick, dirty, and SUPER ephemeral test that doesn’t mess with the current DHCP setup on your network. If you want to run it longer term, follow the docs properly and specify a volume to save the data.

To shut it all down (and point your DNS back at whatever DHCP hands out):

docker stop pihole
docker rm pihole
networksetup -setdnsservers Wi-Fi "Empty"

Unfortunately my home router provided by my ISP doesn’t offer the ability to change DNS. So I guess that’s the push necessary to get around to putting it in bridge mode and getting a proper router.


Using CloudFlare as a v6 to v4 Bridge

CloudFlare offers the ability for you to turn on CDN caching and present your service to the public without requiring a public IPv4 address (so long as you have a publicly accessible v6 address). To turn it on, add the DNS entry for your domain on CloudFlare, and then turn on the caching service (the coloured-in cloud logo).


The caveats with the CDN are the same as if you had a v4 address: only certain ports (e.g. 80, 8080, 443, 8443, etc.) work, and the output from your server is cached/proxied via CloudFlare’s CDN servers. So it’s not a full fix (no port 22 to SSH in, for example), but for running a web/HTTP-based service it can be quite useful.

SSH Key Types and Cryptography: The Short Notes

On nearly all current (less than 3 years old) operating systems there are four different SSH key types available, both for a client’s key and the host key:

  • DSA (No longer allowed by default in OpenSSH 7.0+)
  • RSA
  • ECDSA (OpenSSH 5.7+)
  • ed25519 (OpenSSH 6.5+)

So which one to use?

In general, the best practice is to use ed25519 if possible, and otherwise RSA (4096 bits), due to mistrust of the NIST curves used by ECDSA. Which host key is chosen/created is managed by HostKeyAlgorithms in sshd_config, and client keys are created by running ssh-keygen. So what about the other parts of an SSH connection, and can I use an ed25519 key anywhere?
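As a sketch, creating an ed25519 client key looks like this (the KDF rounds, key path via mktemp, and comment are just example choices; drop -N '' to be prompted for a passphrase):

```shell
# Pick a scratch path for the new key (use ~/.ssh/id_ed25519 in practice):
KEYFILE="$(mktemp -u)"
# -t selects the key type, -a sets the KDF rounds protecting the private key:
ssh-keygen -t ed25519 -a 100 -f "${KEYFILE}" -N '' -C "example@laptop"
```

ssh-keygen writes the private key to the given path and the public key alongside it with a .pub suffix.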

The key types are just one portion of an SSH connection: authentication. SSH connections have three major cryptographic phases: the key exchange, the authentication, and the negotiated symmetric encryption used by the rest of the connection. (If you want more detail, check out Digital Ocean’s or Cisco’s explanations.)

Unlike the SSH key type, the ciphers and key exchange are decided on between sshd and ssh depending on their feature set and what is defined in their config files.

If you’re running OpenSSH 6.3 or newer you can see which algorithms are supported by running one of three commands: ssh -Q [cipher|mac|kex], or by reading man ssh_config.

Key Exchange

A glossed-over version of the key exchange: the client and the server share some information (e.g. public keys) and use the Diffie-Hellman algorithm with an agreed curve to set up the cipher (symmetric key) and the MAC (message authentication code, used to confirm validity) for the rest of the connection.

Mozilla’s recommended list of kex choices (specified in sshd_config) per their wiki is a great starting point. The short version: anything using at least SHA-256 helps.
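For reference, the key exchange line from Mozilla’s modern guidelines looks roughly like this (check their wiki for the current list before copying):

```
# /etc/ssh/sshd_config
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
```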


Ciphers and MACs

The symmetric key created during the key exchange step is now used to encrypt and decrypt the rest of the connection.

Mozilla’s wiki again lists the recommended ciphers and MACs, with the newer chacha20-poly1305 first on the list.
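Again as a starting point, the cipher and MAC lines from Mozilla’s modern guidelines look roughly like this (verify against their wiki; older clients may need their intermediate list instead):

```
# /etc/ssh/sshd_config
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
```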

Key Type Reference

OS                            OpenSSH   Key types
Ubuntu 12.04                  5.9       dsa, rsa, ecdsa
Ubuntu 14.04                  6.6       dsa, rsa, ecdsa, ed25519
Ubuntu 16.04                  7.2       dsa*, rsa, ecdsa, ed25519
Fedora 23                     7.1       dsa*, rsa, ecdsa, ed25519
CentOS 7                      6.4       dsa, rsa, ecdsa
Mac OS X 10.11 (El Capitan)   6.9       dsa, rsa, ecdsa, ed25519
macOS 10.12 (Sierra DP)       7.2       dsa*, rsa, ecdsa, ed25519
Cmder                         7.1       dsa*, rsa, ecdsa, ed25519
Windows 10 (14342)            6.6.1     dsa, rsa, ecdsa, ed25519
PuTTY                         N/A       dsa, rsa, ecdsa[1], ed25519[1]

* - disabled by default for sshd
1 - PuTTY stable only supports dsa and rsa but the latest development snapshots support ecdsa and ed25519.


Unless your OpenSSH predates 6.5 (e.g. CentOS 7 or Ubuntu 12.04), use ed25519 keys and Mozilla’s config files to limit the preferred connection ciphers.


On April 8th I stopped redirecting this domain to Bike Calgary[1] and started showing off the aggregated data I was pulling together from the 3 Eco-Counter installations. With the source on GitHub, I thought it’d be worth explaining a little of the why and how.

At the start of January, the City of Calgary made public the web page for the bike counter on the Peace Bridge, with promises of more to come, including at least 10 counters during the upcoming cycle track pilot. The Peace Bridge counter had data stretching back to April 24th, 2014 and by default always showed the entire daily data set.

My first curiosity was whether I could have a bookmark to show just the last week or so’s worth of numbers, which led me to figuring out how the webapp worked. (Good ol’ WebKit developer tools.)

After that, in tandem with some projects I was looking into for work, I decided to start seeing about scraping the data and storing it somewhere to compare numbers (different installations, averages, weather) more easily. So a big thank you to the people at the City and Eco-Counter for not telling me to “get lost and stop using things inappropriately”.

As for how: the Python scripts just ask Environment Canada and the counters once a day for their last day’s worth of new data (if possible) and store it in Graphite. Interacting with the data is handled by Grafana 2 behind nginx. All of it is hosted on a tiny instance on some publicly available free compute resources that I happen to manage as part of my day job. Funnily, most of the script writing was done during an all-nighter at a Denny’s in Kamloops, waiting for 4 AM to roll around so I could swap some power cables in a maintenance window.
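To give a feel for how simple storing a data point is, here is a sketch of pushing one day’s count using Graphite’s plaintext protocol (the metric name, count, and Carbon host are all assumptions, not the actual script):

```shell
# Build a Graphite plaintext-protocol line: "metric value timestamp"
METRIC="bikes.peace_bridge.count"
COUNT=742                 # hypothetical daily total scraped from the counter
TS=$(date +%s)            # Graphite wants a Unix timestamp
LINE="${METRIC} ${COUNT} ${TS}"
echo "${LINE}"
# To actually submit it, pipe the line to Carbon's plaintext port (2003 by default):
# echo "${LINE}" | nc graphite.example.com 2003
```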

It’s nothing fancy but it’s fun to see what might come of it when data is made available.

1 - I had registered the domain last year and figured that was a good place to point until I had a better idea of how to use it.

Trying to make sense of when to use Docker vs. LXC

While working on some side projects the past couple of weeks I kept confusing myself about how things work behind the scenes between Linux Containers (LXC) and Docker. They both leverage the Linux kernel’s cgroups to function on Linux (and, in Docker’s case, similar technologies in other OSes), but differ completely in how you interact with them.

While a Linux Container can best be thought of as a super lightweight VM running a whole OS, Docker contains a slew of other features that blur the lines between it acting like a super lightweight VM and being a full platform to build off of. Docker plays closer to the idea of a process or group of processes (an application) under a chroot, versus LXC’s idea of a whole OS/machine in a chroot jail.

So it’s misleading to think of a Docker container the same way as an LXC container. Same technology behind the scenes but completely different approaches. For Docker it’s all in how you set up your container to run - you can have all the other services you normally get in a VM if you so wish.

For example, with LXC setting up MySQL would consist of making the container, running the command to install MySQL, and setting the service to go. You can then log in or attach and run other commands as well if necessary.

Docker, on the other hand, involves similar steps with the flexibility of having Docker do the install and run the service when the container starts (defined in the Dockerfile). However, if you want to attach to that container and run more commands you have to have set that access up ahead of time (e.g. supervisord, runit), create a new container with that command, or try to force your way into the container. (You can try lxc-attach, but if you want a new TTY and you’re attaching to a mysqld instance? Not going to work.)
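A minimal Dockerfile for the MySQL example might look like this (the image tag and single-process CMD are assumptions for illustration; a real setup would add volumes and configuration):

```
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y mysql-server
EXPOSE 3306
# The container lives only as long as this foreground process does:
CMD ["mysqld_safe"]
```

The CMD line is the crux of the difference: the container exists to run that one process, not to boot a whole system.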

After figuring that out - the use of Puppet in Docker started to make more sense. Have Puppet configure your image and then save/commit that state or kick off the supervisord process to keep the container “alive”. Docker lends itself more to recreating/iterating whenever a new update is needed over updating settings.

In summary: an LXC container is analogous to a VM, while a Docker container is a very supercharged sandbox for running a process or group of processes. Use LXC when you want a separate “server” without the extra overhead, and Docker when you want to run a “service”.

I also recommend reading the FAQ - primarily the section on what Docker “adds to LXC”. In the end it’s left me more leery of using Docker - it’s a bit of a paradigm shift I’m not ready to make just yet.

On one last sidenote, IPv6 support also looks like a lot of pain - but not any worse than LXC.


IPv6 and Systems

Last Monday I was part of a team presenting a workshop at BCNET’s 2014 conference about Configuring IPv6 for Networks and Systems. The network walkthrough and slides put together by BCNET are available on their wiki, while the Systems portion I worked on has the slides and workshop examples on GitHub.

Thanks to everyone who came out.

   presentation, ipv6, work

About Me

I am a Senior Systems Administrator at Cybera by day, a geek, father, and husband by night. My current role grants me a great deal of freedom to try out various different solutions, both as a learning exercise and as a way to improve how things run. I’ve used the pseudonym Chealion online since 1998 and have owned this domain, posting content off and on (more off than on), since 2006.

I’m writing this for myself so I apologize in advance if you find it not very focused.

You can contact me by emailing chealion AT chealion DOT ca

Other haunts:


Moving to Ghost

Alongside changing hosts (moving from TextDrive to my own VPS), adding IPv6 support for my websites, and taking far too long to do it, I’ve also swapped WordPress for Ghost. A slightly involved installation, but much nicer. Most importantly: no comment spam.


Calgary FCPUG: Outputting to Blu-ray, DVD and the Web

I’ve now uploaded the slides from the talk I gave to the Calgary FCPUG about outputting to Blu-ray, DVD and the Web. You can grab them from the following link:

I do have an audio recording of the first presentation I’ve given since high school, but I haven’t yet had a chance to listen and edit it as necessary.

I hope everyone found it useful.

   calgary, fcpug, presentation

Using Gmail as Your SMTP Server When Using Your ISP's Email

NOTE: You’re going to be using Google’s service to send the email, but for all intents and purposes it’s completely transparent to both you and your recipient. It’s also a whole lot better than using some random SMTP server (having to find the local one and constantly change it) or finding the email you sent never arrives in your recipient’s inbox because it was marked as spam due to the server used. For the long run, I’d recommend looking for an IMAP host instead.

For brevity I’m leaving out the exact steps to hook this up with your favourite mail client, but you can find them fairly easily as it’s only a matter of changing the SMTP server to smtp.gmail.com and using your Google login (or check my post about setting up Shaw’s SMTP service and substitute Gmail’s details in the section about changing your SMTP server).

  1. Set up a Google Account. If you have one you’re good to go.
  2. Log into Gmail
  3. Go to Settings (link is in the top right)
  4. Go to Accounts and Import
  5. Under “Send mail as:” section click “Send mail from another address”
  6. Enter the email address you want to use (eg. [email protected]) and press Next
  7. Choose to use Gmail’s servers, press Next and choose Send Verification
  8. Click on the link in the verification email. This will verify the email address so you can move onto step 9. You may need to check your Junk Mail folder.
  9. Back at the “Send mail as” section (you may need to refresh the browser) click the “make default” link for the email address you set up and be sure that below it “Always reply from default address” is selected.
  10. Now be sure to change the SMTP settings on your computer/mobile device accordingly. The exact steps vary from device to device, but this is the most important step: if not set correctly (eg. not turning off other SMTP servers on an iOS device), everything we’ve done will be for naught.
  11. Send an email to yourself to test, reply to it, and make sure it gets to the right address. The only times I’ve ever seen an error here are when the SMTP server wasn’t set up correctly, step 9 wasn’t followed, or the carrier’s SMTP server was enabled again (yes, it’s repeated because it accounts for 99% of the errors I’ve seen).
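For reference, the Gmail SMTP settings you’ll be entering on your device are:

```
Server: smtp.gmail.com
Port: 587 (STARTTLS) or 465 (SSL/TLS)
Authentication: required - your full Gmail address and password
```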

Not difficult, but something I can grab when writing an email on how to do it. :-)

   gmail, mail, smtp