Posts tagged ‘Django’

When you use a Django query keyword as a field name

I need to model a location in the Alberta Township System coordinate space. The model is extremely simple:

class Location(models.Model):
    project = models.ForeignKey(Project)
    lsd = models.PositiveIntegerField(null=True, blank=True)
    section = models.PositiveIntegerField(null=True, blank=True)
    township = models.PositiveIntegerField(null=True, blank=True)
    range = models.PositiveIntegerField(null=True, blank=True)
    meridian = models.PositiveIntegerField(null=True, blank=True)

There’s a rather subtle problem with this model that only came up months after I originally defined it. When querying the foreign key model through a join on location, having a field named range causes Django to choke:

>>> Project.objects.filter(location__range=5)
------------------------------------------------------------
Traceback (most recent call last):
  File "", line 1, in
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/manager.py", line 141, in filter
    return self.get_query_set().filter(*args, **kwargs)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/query.py", line 556, in filter
    return self._filter_or_exclude(False, *args, **kwargs)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/query.py", line 574, in _filter_or_exclude
    clone.query.add_q(Q(*args, **kwargs))
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1152, in add_q
    can_reuse=used_aliases)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/query.py", line 1092, in add_filter
    connector)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/where.py", line 67, in add
    value = obj.prepare(lookup_type, value)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/sql/where.py", line 316, in prepare
    return self.field.get_prep_lookup(lookup_type, value)
  File "/home/dusty/code/egetime/venv/lib/python2.7/site-packages/django/db/models/fields/related.py", line 136, in get_prep_lookup
    return [self._pk_trace(v, 'get_prep_lookup', lookup_type) for v in value]
TypeError: 'int' object is not iterable

That’s a pretty exotic-looking error from Django’s internals, but it didn’t take long to figure out that location__range was making Django think I wanted the range field lookup applied to Location.id, instead of the field I defined on the model. I expect a similar problem would arise with a field named “in”, “gt”, or “exact”, for example.

The solution is simple enough, but it didn’t occur to me until searching Google and the Django documentation, and ultimately scouring the Django source code, had all failed to yield any clues. If you ever encounter this problem, simply specify an explicit exact lookup:

>>> Project.objects.filter(location__range__exact=5)
[<Project: abc>, <Project: def>]
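In hindsight, the traceback makes sense: Django’s built-in range lookup expects a two-element iterable (it maps to SQL BETWEEN), and location__range was being parsed as that lookup applied to Location’s primary key, so the integer 5 failed as soon as Django tried to iterate over it. For contrast, here is the lookup used as intended, on a field that isn’t named after a lookup:

>>> # range as a lookup: takes a (low, high) pair and maps to SQL BETWEEN
>>> Location.objects.filter(township__range=(1, 10))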

Django: don’t use distinct and order_by across relations

I needed to get a list of Project objects that had Time objects attached to them which had been updated by a specific user. I wanted the list ordered by the most recently updated Time object, and, importantly, I wanted the list of Project objects to be distinct (since multiple Time objects can be attached to any one project).

I was trying to make the following query work in Django:

Project.objects.filter(time__user=user).distinct().order_by('-time__date')

As a note in the Django documentation describes, this particular combination (distinct and order_by on a related field) doesn’t work so well: the related table’s columns (Time’s, in this case) are added to the query’s SELECT clause, giving me multiple copies of projects that I wanted to be distinct.

There is a Django feature request to support named fields in the call to distinct, but it has not been incorporated into trunk yet, mostly because of uneven database backend support.

After some searching and pondering, I was able to get the same list of projects using aggregates instead:

from django.db import models

Project.objects.filter(time__user=user).annotate(
                models.Max("time__date")).order_by('-time__date__max')

This solution to the problem doesn’t seem to be suggested often, so I thought I’d take the time to mention it.

Rendering Django models to JSON

I recently discussed a simple template tag that allows rendering “raw” jQuery templates inside a Django template, even though the syntax overlaps. A commenter asked for a PyPI package. It was such a simple template tag that I didn’t want to maintain a separate package for it, but it turns out the tag fits in very well with the next step in the project. The short story is that the tag is now available on PyPI.

The tag is distributed with a project I called Djason (named for Django, JSON, and former Arch Linux developer Jason Chu, and because djson was already taken). Djason’s primary purpose is to serialize Django models to JSON in a format that is friendly to client libraries, especially jquery-tmpl. It is essentially a fork of Django Full Serializers with a patch applied, plus some minor edits to simplify the output JSON. I install it as a Django serializer in settings.py using:

SERIALIZATION_MODULES = {
    'djason': 'djason.json'
}

And in my code:

from django.core.serializers import serialize
from django.http import HttpResponse

# Sketch: a view that has already fetched the project to serialize.
def project_json(request, project):
    content = serialize("djason", project,
            relations=["location_set", "company", "project_type"])
    return HttpResponse(content)

I’ll probably be adding some additional code to this package to fit in with my still cloudy ideas about the proper separation between client and server code. I’ll also likely be rewriting portions of the forked project to appease my personal sense of style.

Django and jquery.tmpl

Lately, I’ve been finding Django increasingly inappropriate for the web applications I develop. I have several complaints: the forms library doesn’t extend well to ajax requests; any extensive customization of the admin requires obtuse inspection of the admin source code; many admin customizations simply aren’t possible; the “reusable apps” philosophy has added a layer of complexity to a lot of things that really should not be there; and there are no obvious best practices for ajax support.

In spite of all this, Django is still better than the other frameworks (Python or not) that I have investigated or tested. I’ve considered writing my own web framework, but I wouldn’t maintain interest in it long enough to get it off the ground. So I’m letting my complaints bounce around in the back of my mind in the hope that I can improve Django so that it continues to work for me as a platform.

I’m currently trying to come up with a better system for ajax requests. I have nothing concrete in mind, but I’ve started from the premise that ajax requests should never return rendered HTML, only JSON (hence my issue with the Django forms library). With that in mind, I need a templating library for JSON data. jQuery is a must, and the officially supported jQuery templating library is jquery.tmpl (http://api.jquery.com/category/plugins/templates/).

The problem with jquery.tmpl is that it uses Django-like syntax. The following is a valid block that may be rendered in a jquery.tmpl page:

<script id="project_tmpl" type="text/x-jquery-tmpl">
    {{each projects}}<li>${$value}</li>{{/each}}
</script>

If you try to include this in a Django template, the {{ and }} tags will be replaced with (probably empty) variable output. Django has {% templatetag %} to render these individual items, but what we really need is a way to tell the Django templating system to leave complete sections of code alone. So I wrote the jqtmpl template tag. It allows us to wrap code in a block that tells Django not to render that block as template code. The above would show up in a Django template as follows:

<script id="project_tmpl" type="text/x-jquery-tmpl">
{% jqtmpl %}
    {{each projects}}<li>${$value}</li>{{/each}}
{% endjqtmpl %}
</script>

Here’s the template tag:

from django.template import Library, TextNode, TOKEN_BLOCK, TOKEN_VAR

register = Library()

@register.tag
def jqtmpl(parser, token):
    # Consume tokens until {% endjqtmpl %}, re-emitting each one as
    # literal text so Django doesn't try to render it.
    nodes = []
    t = parser.next_token()
    while not (t.token_type == TOKEN_BLOCK and t.contents == "endjqtmpl"):
        if t.token_type == TOKEN_BLOCK:
            # The lexer strips the tag markers; put them back.
            nodes.extend(["{%", t.contents, "%}"])
        elif t.token_type == TOKEN_VAR:
            nodes.extend(["{{", t.contents, "}}"])
        else:
            nodes.append(t.contents)

        t = parser.next_token()

    return TextNode(''.join(nodes))

This doesn’t handle Django’s {# comments #}, as token.contents doesn’t return a valid value for comment tokens. As far as I know, you wouldn’t use the comment construct inside a jquery.tmpl template anyway, so the tag is still functional.

Next on my list is a better forms validation library to suit my theory that validation should be done client-side. I’ve got a server-side system in mind that returns JSON responses and does not return user-facing error messages, since those should have been caught client-side. With these tools, and hopes for Grappelli to eventually create a decent admin, Django may continue to serve me.

Updating m2m after saving a model in Django admin

I wanted to ensure that, whenever my model was saved in the admin, a many-to-many field on it contained the value of one of the model’s foreign key fields, like so:

            obj.participants.add(obj.instructor)

I thought this was a trivial task. I added the code to a post_save signal connected to the model, but the participants list was not being updated. I was pretty sure the reason it didn’t work was that form.save_m2m() must be called somewhere in the admin after the object is saved, which would overwrite my m2m changes with the (empty) ones from the form. Reading the Django admin source code confirmed this, but it didn’t show an obvious way to circumvent the behavior.
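For reference, my first attempt looked roughly like this (a sketch; the ClassList model name is hypothetical, borrowed from the admin class shown below):

from django.db.models.signals import post_save

def add_instructor(sender, instance, **kwargs):
    # Fires when the admin saves the model, but *before* the admin
    # calls form.save_m2m(), so the change gets wiped out again.
    instance.participants.add(instance.instructor)

post_save.connect(add_instructor, sender=ClassList)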

It is possible to override the change_view and add_view functions on the ModelAdmin object, but I did not want to do that. I would have had to copy the entire change_view contents into the subclass, as there is no way to make a super call do what I wanted here. Here’s the section of code I needed to modify (it’s in django.contrib.admin.options.ModelAdmin.change_view if you want to look):

            if all_valid(formsets) and form_validated:
                self.save_model(request, new_object, form, change=True)
                form.save_m2m()
                # Right here is where I want to insert my call
                for formset in formsets:
                    self.save_formset(request, form, formset, change=True)
 
                change_message = self.construct_change_message(request, form, formsets)
                self.log_change(request, new_object, change_message)
                return self.response_change(request, new_object)

Obviously, I can’t override save_model because save_m2m() is called after that, which would still wipe out my changes. I really need to have a self.after_m2m() call at the point I have commented in the above code. But I don’t.

I really didn’t want to copy this entire method into my ModelAdmin subclass (in admin.py) just to add that one call… so instead, I overrode another method that happens to have access to the new object and is called after save_m2m(). See that call to self.log_change a few lines later? That method updates the admin log db table, but it also happens to receive the newly saved object. I want to emphasize that this is an ugly hack:

# in admin.py
from django.contrib import admin

class ClassListAdmin(admin.ModelAdmin):
    # ... the usual ModelAdmin options ...
    def log_change(self, request, object, message):
        super(ClassListAdmin, self).log_change(request, object, message)
        # The add and change views happen to call log_addition and
        # log_change after the object has been saved. I update the
        # m2m at this point (in update_participants) because the add
        # and change views call form.save_m2m(), which wipes out the
        # changes if I put them in self.save_model().
        self.update_participants(object)

self.update_participants(), of course, contains the code I originally wanted to run.
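In case it helps, a sketch of that method on the ModelAdmin; the body is just the single line from the top of this post:

    def update_participants(self, obj):
        # Make sure the instructor always appears in the participants m2m.
        obj.participants.add(obj.instructor)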

This isn’t the most proper way to do this, but if you’re looking for a quick, dirty but DRY hack, it might save you some time.

Great Big Crane now supports pip

This week, I started using Great Big Crane in real life to manage some of my buildout projects. I was surprised to discover how useful, slick, and bug-free it is. When we wrote it in a 48-hour sprint, I did not realize how functional and complete our final product had turned out to be.

I filed about a dozen issues on the project as I used it, but surprisingly few of them were bugs; most were feature requests and minor enhancements to make it more usable. I don’t think any of us were expecting to maintain this project when the contest was over. However, now that I see how useful it is, and because winning the dash has garnered a lot of interest in the project, I sat down for a few hours and added the one thing people have been requesting: pip support.

This new feature is fairly simple, and not fully tested, but the general idea is that you can run virtualenv, manage your requirements.txt, and install the dependencies from inside greatbigcrane. This required a fairly invasive refactor of certain commands that we had implemented as buildout-specific, but overall, it wasn’t a terribly difficult task.

What I have so far is certainly usable, but I suspect in the long run, it’s just a start!

Have a look at the sources here: http://github.com/pnomolos/Django-Dash-2010/

Django Dash 2010 winners

I was shocked, yesterday, to discover that my team had won the 2010 Django Dash for our buildout management project, Great Big Crane. Competition was extremely fierce, and there were a fair number of projects that I felt were contenders for first place. My personal goal was to move up from the fifth place we were awarded last year, and I would have been completely satisfied with third.

While Jason and I obviously did a terrific job of keeping our backend code elegant and organized, I think the major difference between our project and the competition was Phil’s amazing styling. His enthusiasm and attention to detail throughout the project turned a neat project into a work of art.

A lot of really interesting projects came out of the Dash. It’s amazing how much fifty teams can accomplish in 48 hours. I hope all the participants had as much fun as I did!

Get your head out of the clouds: Local web applications

I spent this weekend with two friends crazy enough to join me in a 48-hour coding sprint for the Django Dash. We competed in the dash last year and placed 5th. Our goal was to move up in the rankings this year (competition is stiff, wish us luck!). Our team had the highest number of commits, but I can’t say how many of them can be justified as quality commits… especially since we keep track of our TODO file inside the git repository!

This year, we created a project called Great Big Crane. (I don’t know why we called it this.) The code is stored on Github, and we set up a splash page at greatbigcrane.com. We don’t have a live demo for reasons I’ll get into shortly.

This project’s primary purpose is to help manage buildouts for Python projects, especially Django projects. It takes care of some of the confusing boilerplate in buildout configuration. It also allows one-click access to common commands, like running bootstrap or buildout, syncdb, migrate, or other manage.py commands, and running the test suite associated with a buildout. It performs most of these actions as background jobs and pops up a notification when each one completes. It even keeps track of the results of the latest test suite run, so you can see at a glance which of your projects are failing their tests.

One of the most intriguing things this application does is open a text editor, such as gvim, to edit a buildout if you need more control than our interface provides. It does this by queuing a job that executes the text editor command on the server.

Wait, what? It can be a bit creepy when clicking a button in a web application fires up an arbitrary program on your computer.

This entire app is designed to run on localhost. It’s set up for developers to manage their own projects. It doesn’t support authentication (this is why we don’t have a live demo), and the server has full access to the local filesystem. It’s meant to support your local IDE, not to provide an online IDE. The entire app is therefore super fast (no network delay), and switching from it to my text editor to several terminals became quite normal as I was developing on it (yes, the buildout for Great Big Crane runs just fine from inside Great Big Crane ;).

So yes, you’re expected to run this web app locally. Why would anybody want to do this? Is it a sensible thing to do?

The alternative to what we’ve done here would be to code the whole thing up as a GUI application of some sort. I have experience with most of the Python GUI toolkits, and I can’t say that I “enjoy” working in any of them. I’m not sure I enjoy working in HTML either, but I do a lot of it. HTML 5 with CSS 3 is certainly a powerful and reasonable alternative to modern graphical toolkits.

I’ve been coding HTML for so long that I don’t know what the learning curve is, but I’m definitely more comfortable working with it than I am with Tk, Qt, GTK, or wxWidgets, all of which take a long time to learn to use properly. Possibly I’m just stagnating, but I think I’d prefer to develop my next “desktop” app as a webapp intended to run locally rather than study those toolkits again. Indeed, because I suspected I’d prefer that, I started coding my last project in PyQt, just to fight the tendency to stagnate. PyQt is an incredibly sensible toolkit once you have learned how to make sense of it, but it’s not as sensible as the new web standards. Another advantage is that if you ever decide you want to make the app network enabled, you’re already running an app server and using standard web technologies, so pushing it to the cloud is straightforward.

So my gut feeling at this point is that yes, it is sensible to design “traditional” desktop apps using HTML 5, CSS, and javascript for the interface, and your choice of webserver and web framework for the backend. Perhaps it’s not any more sensible than using a GUI toolkit, but it’s certainly not insane.

If it makes sense to replace local desktop apps with a local server, does it also make sense to replace web applications with a local system?

I’m not a huge fan of web applications because they are slow for me. I have a good connection (by Canadian standards, which aren’t high…). Yet Gmail is slower than Thunderbird, Freshbooks is too slow for me to justify paying for it, and github, while fascinating, is also slow compared to local access. The only webapp I have tested that I consider responsive is Remember The Milk, a popular todo list. I’m not certain what they do to make it so responsive, but I suspect Google Gears or HTML 5 localstorage must be involved.

Local storage. I’ve written about this before (I must be getting repetitive). My idea then was that offline enabled webapps are just as responsive as desktop apps. But the current available paradigm, using HTML5 localstorage, requires a lot of overhead normally involving manual syncing between the browser data and the server. What if I was running the app locally instead? Then I could just design it as a “normal” web app, without having to put extra thought into designing and maintaining local storage in the browser. It would be super responsive when I access it from my computer. More interestingly, it would also be available from remote computers. If I accessed it across my LAN using another laptop or my phone’s wifi, it would still be acceptably responsive. And if I happen to need access from the library or my friend’s computer, I can log in remotely, and still have approximately the same level of responsiveness that I currently get by logging into a server in the cloud.

This isn’t a new idea. It’s been presented as a “gain control of your own data” alternative to the privacy and control fears that Google, Facebook, and Apple (among others) have been creating (this interview with Eben Moglen is a nice discussion: http://www.h-online.com/open/features/Interview-Eben-Moglen-Freedom-vs-the-Cloud-Log-955421.html). There are a lot of clear advantages to moving data local, but there are also disadvantages. The nice thing about cloud storage is not having to worry about data backup. The “access anywhere” paradigm is nice, too, although running a home webserver doesn’t rule that out. Zero install, and end users not having to think about dependencies, are also nice.

Overall, I’m finding more and more reasons to bring our apps home, where we have control of them. Such cycles are common in the technology industry: dumb terminals on mainframes, then personal computers, then business networks, then the Internet, then the cloud. Off-board video, then on-board video. Network cards, then on-board NICs. Hardware modems, then software modems. Personally, I think the cycle away from the cloud is just beginning. I think the company that finally conquers Google will do it by giving you back control of your data. I’ve never been totally comfortable with the whole web application idea (as a user; they’re fine to develop!). I’m still trying to identify my reasons, but in the meantime, we experimented with the idea by developing Great Big Crane as a local web application.

Packaging django management commands: Not Zip Safe

I had a devil of a time sorting this out; it must be documented in other places, but on the off chance that this information is useful to anyone, I’m posting it here.

I wrote a Django app that had a custom management command in it. When I ran this app in my development environment, it ran fine. But when I deployed it from PyPI as an egg, the command mysteriously disappeared: Django simply did not see it.

Django not seeing management commands is a common problem for me; it seems like I always have to say “please” in just the right tone of voice before my custom management commands will work. This particular problem, however, was a new one.

I ended up exploring the Django sources and eventually discovered that the imp module was unable to find anything inside a zipped egg. This struck me as odd, so I did some research on a nifty tool I discovered (over a decade ago) called Google.

I came across this message, which basically says that Django and Django apps are “flat-out not zip-safe and probably never will be”, making specific reference to custom management commands.

Then I had to do more research to figure out how to mark my app as not zip safe. I ended up switching the app’s setup.py from distutils.core to setuptools and adding a zip_safe=False argument to the setup call.
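A minimal sketch of the change (the name and version here are placeholders for the app’s real metadata):

# setup.py
from setuptools import setup, find_packages  # was: from distutils.core import setup

setup(
    name='my-django-app',      # hypothetical package name
    version='0.1',
    packages=find_packages(),
    zip_safe=False,            # tell setuptools this egg must be installed unzipped
)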

In addition, for my future and perpetual sanity, I discovered that buildout accepts an unzip = true option to ALWAYS unzip eggs. I placed this under the [buildout] section in my buildout.cfg.
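In buildout.cfg, that’s just:

[buildout]
unzip = true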

Converting a Django project for zc.buildout

Jacob Kaplan-Moss has written an article on using zc.buildout to develop a Django application. My goal is slightly different: I want to deploy an entire Django project, with numerous dependencies, using zc.buildout. The documentation seems scarce, so I’m trying to keep track of each step as I go, in the hope that it may be useful to someone someday.

I have an existing Django project that I’m having trouble deploying and sharing with other developers. It’s located in a private github repository. So my goal is not only to manage a Django project, but to manage an already mature project. This is, of course, harder than starting from scratch.

I do my development on Arch Linux, which currently runs Python 2.6 (and 3.1, but Django doesn’t support that yet, so I’m using 2.6 for this project). I have git version 1.7.1, and my project uses Django version 1.2.1.

Since I didn’t know what I was doing, I started by doing some exploring. I created an empty directory and ran:

wget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py

to install the buildout bootstrap. I then created a very simple buildout.cfg file based on the djangorecipe example:

[buildout]
parts = django
eggs = ipython
 
[django]
recipe = djangorecipe
version = 1.2.1
eggs = ${buildout:eggs}
project = my_project

I then ran:

python bootstrap.py
./bin/buildout

Suddenly, my directory containing only two files (bootstrap.py and buildout.cfg) looked like this:

bin
bootstrap.py
buildout.cfg
develop-eggs
downloads
eggs
my_project
parts

Jacob’s article has an excellent description of all these files. The main question for me was “where does my source go?” This example shows that the project source code goes in my_project. Djangorecipe had created the following structure in that directory:

development.py
__init__.py
media
production.py
settings.py
templates
urls.py

The development.py and production.py files both contain from my_project.settings import * calls, and then customize some variables. My habit has always been to have a localsettings.py in my .gitignore and include from localsettings import * in my main settings.py. For my project I had to decide whether to stick with my old habits, or modify my setup to parallel the djangorecipe version.
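For anyone unfamiliar with that pattern, it amounts to a couple of lines at the bottom of settings.py (a sketch; wrapping the import in try/except keeps a missing file from being fatal):

# at the bottom of settings.py
try:
    from localsettings import *  # developer-specific overrides, kept out of git
except ImportError:
    pass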

I see that djangorecipe has a way to select the settings to use for a given buildout, but if buildout.cfg is under version control, wouldn’t that make selecting settings a pain? And if each developer has a different database setup, would we require a different settings module for each developer? In my experience, it is better to do things the way the examples in the documentation say it should be done, because they know what they’re doing and I don’t. But in this case, I decided to keep my layout as is. I can always change it later.
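For reference, djangorecipe’s settings selection is a single option on the [django] part, which is exactly why a version-controlled buildout.cfg makes per-developer settings awkward:

[django]
recipe = djangorecipe
settings = development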

The thing I wanted to learn from that experiment was where my source goes; apparently it goes in a folder with my project’s name at the same level as buildout.cfg and bootstrap.py. Looks like I’m going to have to move my code around in my project’s version control.

First I checked out a new branch, because that is the thing to do in git, specifically because I want it to be easy to go back to the status quo if I decide, halfway through the process, that buildout is a pain to configure.

git checkout -b buildout

The first thing I want to do is move all my files into a new subdirectory with my project’s name, so buildout can have the top of the git tree for its own files:

mkdir my_project
git mv -k !(my_project) my_project
mv localsettings.py my_project
rm *.pyc
git commit

The git mv command essentially says “move anything that isn’t my_project into my_project”. The -k switch says “just ignore it if it isn’t under version control.” This left my localsettings.py and a few .pyc files in the main directory, since those files are in .gitignore, so I cleaned them up manually. Finally, I committed the changes, so the move happened in one commit.
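One caveat: the !(pattern) syntax is bash’s extended globbing, which may need to be switched on first:

shopt -s extglob
git mv -k !(my_project) my_project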

Now it’s time to start creating a new buildout, this time in the version-controlled directory. I ran the wget command to get bootstrap.py, and I copied the buildout.cfg from my exploration directory. Then I ran the bootstrap and bin/buildout commands to see what happened. They did the same thing as before, except for printing django: Skipping creating of project: my_project since it exists. That’s what I wanted. Running git status showed several patterns that needed to be added to my .gitignore:

.installed.cfg
bin
develop-eggs
downloads
eggs
parts

I also had to change the .gitignore file to ignore my_project/static/uploads instead of just static/uploads.

At this point, I decided to commit bootstrap.py and buildout.cfg:

git add bootstrap.py buildout.cfg
git commit

Now, I knew I was missing dozens of dependencies, but I wanted to see what would happen if I ran bin/django. My understanding is that this is supposed to be a wrapper similar to manage.py, but using the buildout’s Django environment. It failed, telling me that the development settings.py file didn’t exist. I modified buildout.cfg to add settings = settings to the django recipe. Then I ran bin/django again, and nothing had changed.
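At this point the [django] part looked like this (the settings line being the new addition):

[django]
recipe = djangorecipe
version = 1.2.1
settings = settings
eggs = ${buildout:eggs}
project = my_project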

Whenever you change buildout.cfg, you have to also run bin/buildout to create the new environment (rant: I hate compile steps!).

I was worried that my custom management commands (in my case, for py.test testing and running South migrations) would not show up, but there they were, listed in the help output that bin/django provided. This is especially surprising, since I had not installed South inside the buildout yet! It appears that bin/django is a drop-in replacement for manage.py.

Next, I ran bin/django shell, expecting to enter dependency hell. Not yet! Instead, I got the error “no module named my_project.settings”. Looking at the bin/django script, it prepends the project name to the settings module. I have a habit of not including an __init__.py in my project directory, preferring to think of a Django project as a collection of apps rather than as an independent package. I don’t want to write from my_project.my_app import something, because then the apps are no longer reusable. In my world, the project is not a package. Apparently, djangorecipe thinks it is. So touch my_project/__init__.py had to happen, since I definitely didn’t want to start hacking the recipe at this point!

Next came “no module named …” errors for each of my INSTALLED_APPS, because I list my apps as “x” instead of “my_project.x”. To fix this, I added extra-paths = my_project to the django part, which inserts the project directory into the path.

Then I ran bin/django shell and bin/django runserver only to discover that everything was working! Apparently my buildout had not installed to a private environment, and was still accessing the default site-packages on my system. Not quite what I wanted. I thought zc.buildout created an isolated environment, much like virtualenv, only portable across systems. My mistake.

zc.buildout does not create an isolated sandboxed environment by default.

I had to do a lot of Google searching to come to this conclusion. There are many statements out there suggesting that zc.buildout can and does create an isolated environment, but none of them turned out to be true. zc.buildout is all about reproducibility, while virtualenv is about isolation. They are not competing products, and the ideal environment uses both of them.

So I removed all the temp files and directories (including the hidden .installed.cfg) that buildout had created for me and started over to install them to a virtualenv:

virtualenv -p python2.6 --no-site-packages .
source bin/activate
python bootstrap.py
bin/buildout

I temporarily removed IPython from the eggs because it was refusing to download; the server must have been down. This time, when I ran bin/django shell, I got a proper dependency error for psycopg2. It looked like I was finally on the right track. I also had to add several directories virtualenv had created to my .gitignore.

Before buildout, I had a rather complicated dependencies.sh file that installed all my dependencies using a combination of easy_install, git checkout, hg checkout, etc. I started with the easy_install stuff, the things that can be installed from PyPI. I created a new eggs part in my buildout. The entire file now looked like this:

[buildout]
parts = eggs django
 
[eggs]
recipe = zc.recipe.egg
interpreter = python
eggs =
    psycopg2
    south==0.7
    django-attachments
    pil==1.1.7
    Markdown
    recaptcha-client
    django-registration-paypal
    python-dateutil
 
[django]
settings = settings
recipe = djangorecipe
version = 1.2.1
eggs = ${eggs:eggs}
project = my_project
extra-paths = my_project

Trying to run bin/buildout now causes a “Text file busy” error. At this point, I’m seriously considering that buildout is more of a pain than a help. It’s poorly documented and broken (some might say poorly documented IS broken). And I know I have an even harder task coming up when I have to patch a git clone.

But I’m obstinate, and I persevered. Google was quick to confirm my hypothesis that virtualenv and buildout were both trying to access the bin/python file. The solution was to change the interpreter = python line in my recipe; I called the buildout interpreter “py”.
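The relevant lines of the [eggs] part now read:

[eggs]
recipe = zc.recipe.egg
interpreter = py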

This time, when I ran bin/django shell I got an error pertaining to a module that needs to be installed from git. Time to look for a git recipe! Here’s how it eventually looked:

[django-mailer]
recipe = zerokspot.recipe.git
repository = git://github.com/jtauber/django-mailer.git
as_egg = True

I also had to add django-mailer to my parts in the [buildout] section, and arranged the [django] extra-paths section as follows:

extra-paths =
    ${buildout:directory}/my_project
    ${buildout:directory}/parts/django-mailer

I had a second git repository to apply, and this one was messy, because the code on the project was not working and my dependencies.sh had been applying a patch to it. I was considering whether I would have to hack the git recipe to support applying patches when I realized a much simpler solution: fork the project on GitHub. So I did that, applied my patch, and rejoiced at how simple it was.

Finally, I had to install an app from a Mercurial repository (because we can’t all use the One True DVCS, can we?). I found MercurialRecipe, but no examples of how to use it. It’s not terribly difficult:

[django-registration]
recipe = mercurialrecipe
repository = http://bitbucket.org/ubernostrum/django-registration

With all my dependencies set up, I was finally able to run bin/django shell without any errors.

Now I have to figure out how to make this thing work in production, but that’s another post. I hope it works flawlessly on my co-developer’s Mac, and hopefully the new pain will be less painful than the old pain. This was a huge amount of work; several hours went into it, and I won’t know for a while whether it was worth it.