Archive for March 2012

Arch Linux Lanyards Are Back!

Demand for Arch Linux lanyards has been growing steadily. I had intended to have a new order in by New Year's, but I ended up dealing with a different company and decided to do a completely new design inspired by the updated Arch Linux website. There is a subtle gradient from dark grey to black in the background that looks very dynamic, and the logo itself is crisp and clear. Each lanyard is thin and very light to wear.

The lanyards are $6 for singles, and can be purchased from The Arch Schwag Store.

Add prominent links to your project in your project’s documentation

I’ve spent this weekend researching a wide variety of different python libraries for a pet project that may never come to fruition. To my delight, most of the projects I was interested in were documented with Sphinx, and the documentation had been posted online using Read The Docs or a self-hosted site.

Some of the projects only include API documentation, while others also include helpful guides or tutorials. However, none of the projects I visited had links back to the project website and source code repository in their documentation. Some had such links hidden on a contributing or download page.

A lot of the time, my Duck Duck Go results link me directly to the documentation for a project I haven’t heard of. I read the APIs and think, “hey, I’d like to try this out.” Then I have a bit of trouble actually finding the project’s home page.

So documentation authors: Please add prominent links to your project’s home page, as well as source code repository and issue trackers to your documentation.

Easily changing virtualenvs

The workon script available in virtualenvwrapper is a useful tool for switching virtual environments on the fly. However, it fails for the workflow I prefer, so I wrote my own.

I prefer to keep my virtualenvs in the same folder as whatever source code or project they apply to. Normally, I put them in a folder named venv, although sometimes I use different names, most often when I have multiple virtualenvs applying to the same source tree (e.g. venv27, venvpypy).

My directory structure therefore looks like this:

+ project1
| + src
| | + pkg1
| | + pkg2
| + doc
| + venv
+ project2
| + src
| + venv27
| + venvpypy
+ project3
| + venv

… and so on

The problem is, I may be anywhere in the folder tree when I want to activate a virtualenv. This means I’m doing stuff like . ../../venv/bin/activate one time and . venv/bin/activate another time. Trying to remember which parent directory I want often requires several tab completions to see the directory listing. This is annoying. So I wrote a bash function to activate the optionally named virtualenv in the current directory or any parent directory.

I avoid bash whenever I can, so this may not be the prettiest bash script you’ve ever seen. ;-)

function v {
  # activate a virtualenv in the current directory or any parent
  # usage: v
  #   activate the virtualenv named venv
  # usage: v venvname
  #   activate the virtualenv named venvname
  local name="venv"
  if [ -n "$1" ] ; then
    name="$1"
  fi
  deactivate &>/dev/null
  local dir="$PWD"
  while true ; do
    if [ -e "$dir/$name/bin/activate" ] ; then
      source "$dir/$name/bin/activate"
      return 0
    fi
    if [ "$dir" = '/' ] ; then
      echo "no virtualenv named $name found" >&2
      return 1
    fi
    dir="$(dirname "$dir")"
  done
}

Put this in your ~/.bashrc. Typing v on the command line will look in the current and all parent directories for a directory named venv, and will activate that venv. Typing v [venvname] will walk the same tree looking for a directory named [venvname] to activate.
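To see the upward search in action without a real virtualenv, here is a small self-contained demo; a fake activate script stands in for the real one, and all the paths and names are invented for illustration:

```shell
# Demo of searching parent directories for a venv, using a fake
# activate script (no real virtualenv required; paths are invented).
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/project1/src/pkg1" "$tmp/project1/venv/bin"
echo 'export DEMO_ACTIVATED=yes' > "$tmp/project1/venv/bin/activate"

# Walk upward from the current directory looking for venv/bin/activate,
# the same search the v function performs.
find_venv() {
  dir="$PWD"
  while true ; do
    if [ -e "$dir/venv/bin/activate" ] ; then
      echo "$dir/venv/bin/activate"
      return 0
    fi
    [ "$dir" = '/' ] && return 1
    dir="$(dirname "$dir")"
  done
}

# From deep inside the tree, the venv two levels up is still found.
cd "$tmp/project1/src/pkg1"
. "$(find_venv)"
echo "$DEMO_ACTIVATED"
```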

If you want to make it prettier, feel free to fork it on github.

Contributing to git projects on github

I spent some time this week helping a colleague learn the ins and outs of Git rebase. He mentioned that he could not find a git introduction that summarized the basic workflow you might use when contributing to an open source project on Github. This article attempts to fill the gap.

Note that there are many possible ways to interact with git and github and none of those ways are canonically correct. The steps here will help a newcomer to git or to collaborative editing on git. I outline the steps in short form first, then summarize alternatives below.

  1. Fork the remote project on github.
  2. Clone your new fork.
    git clone ssh://git@github.com/[yourname]/[reponame].git
    cd [reponame]
  3. Add a shortcut to upstream.
    git remote add upstream ssh://git@github.com/[developername]/[reponame].git
  4. Optionally choose a branch to hack on.
    git checkout development_branch_name
  5. Create a local copy of that branch.
    git checkout -b working_branch_name
  6. Hack.
  7. Commit your changes
    git add [-p] filenames
    git commit

    Repeat the hack, add, commit steps until your commits look the way you want them to.

  8. Grab any commits that were added upstream while you were working.
    git checkout development_branch_name
    git pull upstream development_branch_name
  9. Rebase your work onto the end of the development branch.
    git checkout working_branch_name
    git rebase development_branch_name
  10. Push your commits to your github fork for the world to see.
    git push origin working_branch_name
  11. Make a pull request on github. Pick development_branch_name as the one you want your code pulled into, and working_branch_name as the one you are pulling from in your repo.
  12. Oh nos! The upstream devs didn’t like your changes. That’s fine. Repeat the hack/commit step as needed. You may
    be able to just append commits to your previous commits. If you need to alter your history (e.g. because they don’t like how one of your commits looks), check the man pages for git commit --amend and git rebase -i.
  13. Once you have the working_branch in a state you like, repeat the remote pull and rebase steps to ensure your commits “look like” they happened at the end of the remote developer’s branch.
  14. Push your changes to your github fork, forcing an overwrite of any existing changes.
    git push -f origin working_branch_name
  15. Issue another pull request on github; your old pull request will be updated to the new working branch.
  16. Your code has been accepted upstream, congratulations on your contribution to an important open source project!
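The fork-branch-rebase cycle above can be simulated entirely on your own machine. The following sketch uses two local repositories in place of github; the repo, branch, and author names are all invented, and it assumes git 2.28 or later for `git init -b`:

```shell
# Simulate the workflow with local repos standing in for github.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "upstream": the original project you would fork on github
git init -q -b main upstream
cd upstream
git config user.email dev@example.com
git config user.name Dev
echo one > file.txt
git add file.txt
git commit -q -m 'initial commit'
cd ..

# steps 2-3: clone your "fork" and add an upstream remote
git clone -q upstream fork
cd fork
git config user.email you@example.com
git config user.name You
git remote add upstream ../upstream

# steps 5-7: create a working branch, hack, commit
git checkout -q -b working_branch
echo two >> file.txt
git commit -q -a -m 'my change'

# meanwhile, upstream gains a commit while you were working
cd ../upstream
echo three > other.txt
git add other.txt
git commit -q -m 'upstream change'
cd ../fork

# steps 8-9: grab the new upstream commits and rebase onto them
git fetch -q upstream
git rebase -q upstream/main

# your commit now sits at the tip, as a "tail" after the upstream commit
git log --oneline
```

After the rebase, upstream can fast-forward merge your branch, which is exactly the state the pull request steps aim for.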


  1. This represents a branch the upstream developer has pushed to their remote repository. The branch chosen really depends on where you want to develop. Some projects do all their development on master. Others push a remote branch named ‘devel’ or branches named after the feature being developed on that branch. Either way, your ultimate goal is to have your commits added to this branch on the remote repository.
  2. It is possible to do your development on a single branch and rearrange commits as needed. However, branching is free in git, and storing your changes on a different logical branch will really help you identify where everything fits together.
  1. Each commit should contain one logical change.
    A commit message should not contain the word “and”.
    If you have been working on two logical changes at the same time, you can stage only the relevant hunks using git add -p.
  2. You’re checking out the original branch here. You don’t want to pull all the foreign commits into the branch you’ve been developing on; that branch should only contain your commits at this time.
  3. This is the same as “moving” all your commits
    to the end of development. The idea is to have your commits as a “tail” that all come after the upstream changes. Upstream will love this because they can do a fast-forward merge of your commits. Hopefully
    there aren’t any conflicts. If there are, resolve them so that your commits can apply cleanly. You’ll want to review the rebase documentation before continuing. Also, use three-way merging; it makes life so much easier.
  1. There is a philosophical debate as to whether it is acceptable to rewrite public history, as Dieter has discussed in detail. In this case, since you have communicated (via the pull request) an intent to rewrite history, it’s not only acceptable, but desirable. Most upstream developers will consider it good form to edit older commits so that they reflect what the history “should have” looked like, so that people reading your history can understand it.
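As a concrete illustration of the simplest kind of history editing, here is a throwaway-repo sketch of git commit --amend; the interactive git rebase -i works on the same principle but opens an editor, so it isn't shown. All file and message names are invented:

```shell
# Fix the most recent commit message with --amend in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email you@example.com
git config user.name You
echo hello > greeting.txt
git add greeting.txt
git commit -q -m 'add greting'            # oops: typo in the commit message
git commit -q --amend -m 'add greeting'   # rewrite the most recent commit
git log --oneline
```

Note that the amended commit replaces the old one entirely, which is why a forced push is needed afterwards.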

Hacking on PyPy

In another great Pycon 2012 keynote, David Beazley asked the question, “is PyPy easily hackable?” After a great talk, he answered with a decisive, “I still don’t know.” Having sprinted on PyPy, I’d like to answer his question in a bit more detail.

I love David’s presentation style. He has a novel method of using phrases like “blow your mind” and “this is really scary” repeatedly until they lose their meaning and you no longer feel mindblown or scared. A variety of factors, including Beazley’s thorough keynote address, motivated me to join the PyPy team during the Pycon developer sprints.

I’d like to clear up one oversight in Dave’s otherwise unimpeachable talk. One of the PyPy devs, Holger Krekel, explained to me that PyPy does not have over 1 million lines of code. I don’t have exact numbers, but for “historical reasons”, a non-python file containing Base64 encoded data was given a .py extension. When that file is excluded from the line count, around half a million lines of actual Python code exist, and about a quarter of those are tests.

I was surprised how trivial it was to get started hacking on PyPy. I don’t really grok the many layers of the translation toolset and PyPy interpreter, but it’s pretty clear that the layers are well separated. I was hacking on the py3k branch of PyPy. I am happy to admit I was working primarily on changing print statements to print() functions and commas in exceptions to the as keyword.

Here are the steps to start hacking on PyPy. Notice that the hour-long translation step is not part of the procedure. PyPy has a solid test framework, and the PyPy crew are focused on a 100% test-driven-development paradigm.

  1. Clone pypy (this takes a while):
    hg clone
  2. Pick a branch to work on. There are about 80 branches. I don’t know what they all do. Popular ones during the sprints included py3k and numpy-ufuncs2.
  3. Pick a feature to work on. For py3k support, the list of failing tests in the buildbot is a good place to start. Numpy programmers had a list of functions that needed implementing, but I can’t find the link. Ask on IRC; the PyPy crew are very helpful. The bug tracker contains many features and issues that need addressing.
  4. Add /path/to/pypy/ to your path so you can run the command
  5. cd into the directory indicated in the buildbot output and run path/to/ -k testname
  6. The test will likely fail. Hack away and fix it.
  7. When the test passes, commit, push to a bitbucket repo, and issue a pull request.
  8. Repeat!
  9. There are quite a few cons to working on this project. If you run hg in the pypy/modules/ directory, it will try to pick standard library modules from pypy and choke horribly. The pypy developers don’t really believe in documenting their code. Being able to tell the difference between rpython and python (which have identical syntax) is important: in general, if a module starts with “interp_” it contains rpython, while if it starts with “app_” it contains python.

    If you are hacking on Python 3 support, you need to bear in mind that the PyPy interpreter is written in Python 2. You are working on a Python 2 application that executes Python 3 bytecode!

    On the positive side, RPython and Python are much easier to read and write than C. The PyPy devs are brilliant, but not intimidating. They are so confident in their test suite that they are comfortable programming in a “cowboy coding” style, hacking randomly until all the tests pass. Any one layer in the toolchain is easy to understand and develop. The IRC channel is full of friendly, knowledgeable, helpful people at any time of day.

    Overall, I am much less intimidated by this project than I was before I started the dev sprints. I still can’t answer the “Is PyPy easily hackable?” question fully. It’s certainly easy to get started, but I don’t know how easy it is to become intimate with the project. Dave Beazley’s keynote made PyPy more approachable, and I approached it. Hopefully this article will encourage you to do the same.

Three-way merge for hg using vim

I accidentally blew away my .hgrc and it took me some time to recover my 3-way merge setup, inspired by Dan McGee’s article on the subject. So, for my and your future reference, here’s what you need in your .hgrc to get hg merge to give you a three-way merge window in vim:

[ui]
merge = vimdiff

[merge-tools]
vimdiff.executable = vim
vimdiff.args = -f -d -c "wincmd J" "$output" "$local" "$base" "$other"

When running hg merge, the effect you get is best described by Dan McGee himself:

The end result is three panes along the top (local, base, and remote respectively), and the merge work-in-progress is on the bottom. In addition, all four panes are still locked together for scrolling.

Why we need Python in the Browser

In his Pycon 2012 keynote speech on Sunday, Guido van Rossum covered many of the open “Troll” questions related to the Python community. I’ve had occasion to either complain about or defend all of the topics he covered, and he did a wonderful job of addressing them. The issues could largely be divided into two categories: those that come from a fundamental misunderstanding of why Python is wonderful (e.g. whitespace), and those that are not an issue for the core python dev team, but are being actively worked on outside of core Python (e.g event loop).

And then there’s the question of supporting Python in the web browser. I honestly thought I was the only one that cared about this issue, but apparently enough people have complained about it that Guido felt a need to address it. His basic assertion is that the browsers aren’t going to support this because nobody uses it and that nobody uses it because the browsers don’t support it.

This is a political problem. Politics shouldn’t impact technical awesomeness.

The fallacious underlying assumption here is that modern HTML applications must be supported on all web browsers in order to be useful. This is no longer true. Web browser applications are not necessarily deployed to myriad unknown clients. In a way, HTML 5, CSS 3, and DOM manipulation have emerged as a de facto standard MVC and GUI system. For example, many mobile apps are developed with HTML 5 interfaces that are rendered by a packaged web library rather than an unknown browser. Client side local storage has enabled fully Javascript applications that require only optional network connectivity, or none at all. There are even situations where it may not be necessary to sandbox the code because it’s trusted. Many developers create personal or private projects using HTML 5 because it’s convenient.

Convenient. It would be more convenient if we could code these projects in Python. Web browsers can be viewed as a zero install interface, a virtual machine for these applications. Such a VM has no reason to be language dependent.

It is simply unfair to all the other programming languages and coders of those languages to say, “we can’t displace Javascript, so we won’t try.” Web browsers have evolved into a virtualization layer more like operating systems than single programs. While it is true that the most restrictive operating systems only permit us to code in Objective C, in general, it is not considerate to restrict your developers to a single language or environment.

It is time (in fact, long overdue) for Python to be supported in the browser, not necessarily as an equal to Javascript, but as an alternative. The web is a platform, and we must take Guido’s statement as a call to improve this platform, not to give up on it.

Update: I just stumbled across and I can’t wait to play with it!

Update 2: From the number of comments on this article, it appears that my article has hit some web aggregator. To those users suggesting python to javascript compilers, I’ve been a minor contributor to the pyjaco project, a rewrite of the pyjs library. It has the potential to be a great tool, and the head developer, Christian Iversen, is very open to outside contributions. Let’s make it better!

PyCon 2012

Hello Arch Linux community! If any of you are attending Pycon 2012 in Santa Clara, CA this week, make sure to bump into me. Or contact me to schedule a meetup.


When I chose to disable my Google plus account, the one complaint that hit home for me was a friend mentioning how interesting he found the sketches I had been posting there. I have been planning to upload my sketches to a self-hosted service or Deviant Art, but I simply never got around to it.

This week, I finally did upload my sketches. Drawspace is a terrific collection of tutorials that I highly recommend if you have any interest in learning to draw. I have been following the free online course for several months now and appreciate the gentle instruction and hands-on approach. I’m to the point now where I feel confident displaying my work publicly, and I hope you enjoy the album.