Dev Diary - Community Development Map

The Problem

The City of Calgary has a number of resources available for finding out what development is happening in your community. The primary resource is the Development Map which, while comprehensive, is significantly off-putting because of the sheer amount of information presented. It also only covers the major developments - land use changes and development permits - that are open to some community consultation.

My particular community - Sunalta - has an additional problem: we joined a project with the City of Calgary that removes the need for most businesses to apply for a development permit for a change of use on our Main Streets (10th Ave. and 14th St.). While we are very supportive of the project, we've always struggled with knowing when a new business has moved in so we could welcome them to the community, and the lack of a development permit makes this a bit harder.

The Data

Thankfully, the City provides this data via their open data portal.

The City also releases information on more minor items, which helps answer residents' questions that are sometimes directed to us.

But can I use it?

So, cool, we have some data. How do we make it easier to use? The Open Data Portal does present the data and even allows you to do some visualization, but you're unable to share your visualizations and filtered datasets without logging in, so using data.calgary.ca directly is only good for exploration. It also does not provide a great way for the less technically minded folks on our board to get at this data or even glance at it.

So the next best thing is to look at setting something up to read the data and create the visualizations we want!

The “solution”

From here I'll share my notes about how I arrived at my solution. It's far from the only solution, but it is the result of trial, error, and realizing there are a LOT of options I'm simply not even aware of.

Filtering

The first step was to try and filter the data from the 4 datasets to just the area I cared about - it turns out this is easy with Socrata's Query Language. Adding ?communityname=SUNALTA to my queries, together with the Python py-soda library, and I'm off to the races. The land use dataset, however, does NOT include community fields, so I'm betting heavily that 1,000 land use items is enough to cover the last 365 days' worth of data.
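As a rough sketch of what that filtering looks like against the SODA API (the dataset identifier here is a placeholder, and I'm using plain requests rather than the actual client code):

    import requests

    # Placeholder dataset ID - substitute the real one from data.calgary.ca.
    DATASET_URL = "https://data.calgary.ca/resource/xxxx-xxxx.json"

    # Socrata lets you filter server-side by passing field=value query
    # parameters, plus SoQL extras like $limit.
    resp = requests.get(
        DATASET_URL,
        params={"communityname": "SUNALTA", "$limit": 1000},
    )
    resp.raise_for_status()
    permits = resp.json()
    print(f"{len(permits)} records for Sunalta")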

Visualization (Streamlit)

The next tech I dove into was how to show off this data on a map and in a table. At work another team had started using Streamlit, and it was incredibly easy to get started with. It also has a fantastic built-in data cache, which removed my worry about jumping through hoops to avoid hitting data.calgary.ca too many times or having to build a caching layer myself.

The biggest gotchas I have found with Streamlit are that the default map is nowhere near as featureful as using the Plotly library to put dots on the map, and that Plotly's table feature, while supported, has terrible performance and drags everything to a halt. As such I went with Plotly for the map, but used Streamlit's native table view to show off the data.
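Roughly, the app boils down to something like this (a sketch only - the dataset URL and column names are placeholders, and older Streamlit versions used @st.cache instead of st.cache_data):

    import pandas as pd
    import plotly.express as px
    import streamlit as st

    # Placeholder dataset URL and column names.
    DATA_URL = "https://data.calgary.ca/resource/xxxx-xxxx.json?communityname=SUNALTA"

    @st.cache_data(ttl=3600)  # cache for an hour so we don't hammer data.calgary.ca
    def load_permits():
        return pd.read_json(DATA_URL)

    df = load_permits()

    # Plotly for the map: far more control over the dots than the default st.map().
    fig = px.scatter_mapbox(df, lat="latitude", lon="longitude", hover_name="address", zoom=14)
    fig.update_layout(mapbox_style="open-street-map")
    st.plotly_chart(fig, use_container_width=True)

    # Streamlit's native table instead of Plotly's (much better performance).
    st.dataframe(df)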

Hosting

While Streamlit is easy to run and put a reverse proxy in front of (for SSL and caching), they also run a free hosting service for showing off your app. Streamlit Sharing is free for public repositories (like this project), and it hooks up to GitHub to redeploy your app whenever you push an update, without needing to code this in GitHub workflows yourself. Very convenient.

Dev Diary - yycbike_count - Python 3 and GitHub Actions

A couple of weeks back, a Twitter bot I run stopped working correctly. It's been on my list for a long time to revamp and clean up - if only because the server it was running on was very overdue for a rebuild. So when Eco-Counter changed their private API and broke the script, it provided a great chance for a rewrite.

Fun items learned along the way:

Python 3 Upgrade Notes

  • Changing print to a function call is straightforward.
  • f-strings are awesome (quick sketch after this list).
  • Cleaning up data is super time consuming, but tidies things up so nicely.
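The print and f-string changes in miniature (the variable names are made up):

    # Python 2 style:
    # print "Counted %d trips on %s" % (count, date)

    # Python 3 with an f-string:
    count, date = 1234, "2020-05-01"
    print(f"Counted {count} trips on {date}")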

GitHub Actions

As part of the rewrite I wanted to explore ways to stop managing a server in order to run the bot. As it turns out, GitHub Actions has a schedule feature and works quite well for both testing and running the script. They also provide a fantastic learning tutorial at https://lab.github.com/githubtraining/github-actions:-hello-world.
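A stripped-down sketch of what a scheduled workflow like this can look like (the cron time, action versions, and the requirements.txt step are placeholders - the actual workflow is linked below):

    name: tweet-counts
    on:
      schedule:
        - cron: "0 14 * * *"    # once a day; GitHub cron times are in UTC
      workflow_dispatch:         # allow manual runs while testing
    jobs:
      run-bot:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.x"
          - run: pip install -r requirements.txt
          - run: python twitterBot.py
            env:
              TWITTER_TOKEN: ${{ secrets.TWITTER_TOKEN }}
              TWITTER_TOKEN_KEY: ${{ secrets.TWITTER_TOKEN_KEY }}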

Setting this up gave me a great, easy chance to build a simplistic CD system for myself and get my hands dirty. Coupled with act, it was easy to test and run the script while it was being rewritten.

The whole process can be summarized in a sentence, but it took some time, and I hope it will pay off in spades in the future.

You can find the workflow used on GitHub.

Repurposing or Extending yycbike_count

For those who are interested in taking the work in yycbike_count and using it for their own city - please do. There are a couple of things to mention if you want to re-use most of the work I did.

Counter Config

The counter config is hard coded into twitterBot.py - sorry.

GitHub Actions

If you want to use the workflow you'll need to create 4 GitHub secrets corresponding to the environment variables (TWITTER_TOKEN, TWITTER_TOKEN_KEY, etc.). If you want to use act, you'll also need to add them to the .secrets file.
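For act, the .secrets file is just KEY=VALUE pairs in env-file style, something like (placeholder values):

    TWITTER_TOKEN=xxxxxxxx
    TWITTER_TOKEN_KEY=xxxxxxxx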

act

If you want to use act to run the workflow locally you’ll need to add the following to your .actrc file:

-P ubuntu-latest=catthehacker/ubuntu:act-latest

What's calgary.bike?

On April 8th I stopped redirecting calgary.bike to Bike Calgary[1] to start showing off the aggregated data that I was pulling together from the 3 Eco-Counter installations. With the source on GitHub, I thought it’d be worth explaining a little of the why and how.

At the start of January, the City of Calgary made public the web page for the bike counter on the Peace Bridge, with promises of more to come, including at least 10 more during the upcoming cycle track pilot. The Peace Bridge counter had data stretching back to April 24th, 2014, and by default always showed the entire daily data set.

My first curiosity was whether I could have a bookmark that showed just the last week or so worth of numbers, which led me to figuring out how the webapp worked. (Good ol' WebKit developer tools.)

After that, in tandem with some projects I was looking into for work, I decided to start looking at scraping the data and storing it somewhere so I could compare numbers (different installations, averages, weather) more easily. So a big thank you to the people at the City and Eco-Counter for not telling me to "get lost and don't use things inappropriately".

As for how: the Python scripts just ask Environment Canada and the counters once a day for the last day's worth of new data (if possible) and store it in Graphite. Interacting with the data happens through Grafana 2 behind nginx, all hosted on a tiny instance on some publicly available free compute resources that I just happen to also manage as part of my day job. Funnily enough, most of the script writing was done during an all-nighter at a Denny's in Kamloops, waiting for 4 AM to roll around so I could swap some power cables in a maintenance window.
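For the curious, getting a data point into Graphite is just its plaintext protocol over a socket; a minimal sketch (the host and metric name are made up, not what the scripts actually use):

    import socket
    import time

    GRAPHITE_HOST = "localhost"  # placeholder
    GRAPHITE_PORT = 2003         # Graphite's plaintext protocol port

    def send_metric(path, value, timestamp=None):
        """Send a single 'path value timestamp' line to Graphite."""
        ts = int(timestamp if timestamp is not None else time.time())
        line = f"{path} {value} {ts}\n"
        with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT)) as sock:
            sock.sendall(line.encode("ascii"))

    send_metric("yycbike.peace_bridge.daily_total", 1234)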

It’s nothing fancy but it’s fun to see what might come of it when data is made available.

1 - I had registered the domain last year and figured that was a good place to point until I had a better idea of how to use it.

Xdebug, Leopard, and 64-bit shenanigans.

In order to profile some in-house web application development and find a couple of bottlenecks that were appearing, I went looking for a decent PHP profiler and found Xdebug. Because I am using the built-in version of PHP on the Leopard Server, and a MacPorts version on Snow Leopard (we also have an application that fails miserably on PHP 5.3), it led to some fun installing Xdebug on the Leopard machine, since I didn't want to reconfigure apache2 to use PHP 5.3 from MacPorts.

The crux of it is to grab the Xdebug source and, when configuring, prefix the standard ./configure --enable-xdebug with CFLAGS='-arch ppc64' or CFLAGS='-arch x86_64' as necessary. Compiling Xdebug as a 64-bit extension means it will actually work with a 64-bit version of Apache. With MacPorts it was as simple as enabling the xdebug variant and letting it recompile PHP.
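In rough shell terms, the Leopard build looked something like this (the version number and install step are placeholders from memory, not a verified transcript):

    cd xdebug-2.0.5                  # whichever source tarball you grabbed
    phpize                           # uses the system PHP's build tooling
    CFLAGS='-arch x86_64' ./configure --enable-xdebug   # or -arch ppc64
    make
    # then point zend_extension in php.ini at the built modules/xdebug.so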

I highly recommend using webgrind. It gave me more useful results than MacCallGrind, and didn’t require an hour hoping MacPorts could compile all the dependencies for kcachegrind (which would require graphviz as an additional install). Now I have some numbers to tell me where to look for proof old me is a moron.

Chromium Mac Build Bot Updater Script

When I read on MacRumors that the Chromium builds for the Mac were available and actually launching, the "stupid" geek in me jumped at the opportunity to try the unfinished, unpolished software. By "stupid" I refer to the side of nearly any geek who sees the OOO SHINY and is fully prepared to deal with the instability and issues that crop up when using pre-alpha/alpha software. Naturally, when I saw how fast and furious the updates were happening, alongside the fact that there was no automatic update mechanism apparent in Chromium (I don't have the Google Updater installed and don't want to), I sought to automate the updates and make the pain of manual downloads, copies, and such go away.

Starting from my previous WebKit script, five minutes of looking at the Chromium application layout and how the builds are stored on the server was all it took to adapt the script. I fully expect it to break horribly in the future as Google changes folder layouts and/or adds Chromium support to their own "silent" updating mechanism.

Realistically you should be looking for the latest source at the link below:

As always this script can be found in my git repository on GitHub.

WebKit Nightly Update Script

This is a repost of a much older post that was lost when I transitioned from Movable Type 3 to 4.

I personally like using the WebKit nightlies as my main browser, as it's more or less Safari with a better, faster, and much more current rendering engine underneath. That, and the gold logo looks much better than the silver one. It does have bugs occasionally (for example, I was unable to post comments to Flickr with the WebKit nightlies for a couple of weeks), but all in all the experience is very positive. This is the script I use to keep WebKit updated whenever it bugs me for an update. I use an alias in my .bashrc file so I can just type 'wkupdate' to run the shell script. The best part is that if I copy it while I'm still using the program, the changes won't take effect until I restart, but the copy still goes through. (Not advisable, however.)
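The alias itself is just a one-liner in .bashrc (the script path here is a placeholder for wherever you saved it):

    alias wkupdate='~/bin/update-webkit.sh'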

EDIT: Updated 12/30/08 - Fixed URL check. Realistically you should be looking for the latest source at the link below:

As always this script can be found in my git repository on GitHub.

Opening/Copying Locked DVDs

At work we sometimes receive locked DVDs that we need footage off of. They play back fine in DVD Player, but we don't want them for playback. The resulting DVD volumes are always titled SONATA_VOLUME and are created by a stand-alone Sony DVD recorder.

In order to copy the VIDEO_TS folder off the DVD you need sudo access, but given that sudo access is limited to administrators (and not the editors), this gets to be a bit of a hassle: I have to be called in to copy the DVD, which adds unnecessary overhead just to get some footage off a simple DVD.

The solution? Create a droplet application (in this case AppleScript with shell scripts) so the editors don’t have to disrupt their workflow to get me to unlock the footage they need.

The source follows:

Tab Count to Growl for Safari

The following AppleScript snippet figures out how many tabs are open in the frontmost window and spits the count out to Growl. It helps when I don't feel like counting how many tabs I've opened in WebKit/Safari. Note: to get this to work with Safari, change WebKit to Safari in the script.