Using filesystem transport with Celery

Celery is an asynchronous job queue. It is used to build distributed applications, i.e. applications that run on (potentially) multiple hosts, but it is also useful if your application runs on a single host.


Celery distributes and schedules work by passing messages around through an AMQP broker such as RabbitMQ or a key-value store such as Redis. While these solutions are fine for production, I prefer something lighter for my development environment: I'd rather not have databases or brokers running on my laptop.

A frequently suggested solution is to use djcelery. This package implements a transport based on Django's database. There are two complications, however: the project seems to be no longer maintained, and, more importantly, it does not work well with SQLite, my go-to database for development. The problem lies in the fact that SQLite does not handle concurrent write access gracefully, causing frequent crashes in the worker daemon.

File system

Celery uses Kombu to interface with its transport. While the documentation doesn't mention it, Kombu also supports the file system as a transport. Given that it's missing from the documentation, I suspect it is not quite ready for prime time, but in my development environment it performs just fine.
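Conceptually, a file-system transport is simple: each message becomes a file in a shared directory, and a consumed message is moved to a "processed" directory. The toy sketch below illustrates the idea using only the standard library; it is not Kombu's actual implementation, and the function names are made up.

```python
import json
import os
import uuid


def send(queue_dir, message):
    """Serialize a message into its own file in the queue directory."""
    os.makedirs(queue_dir, exist_ok=True)
    name = uuid.uuid4().hex + '.msg'
    with open(os.path.join(queue_dir, name), 'w') as f:
        json.dump(message, f)
    return name


def receive(queue_dir, processed_dir):
    """Pick up the oldest message file; move it aside once read."""
    os.makedirs(processed_dir, exist_ok=True)
    files = sorted(
        os.listdir(queue_dir),
        key=lambda n: os.path.getmtime(os.path.join(queue_dir, n)),
    )
    if not files:
        return None  # queue is empty
    path = os.path.join(queue_dir, files[0])
    with open(path) as f:
        message = json.load(f)
    # Moving (not deleting) the file is what the "processed" folder is for.
    os.rename(path, os.path.join(processed_dir, files[0]))
    return message
```

No broker process, no sockets: the filesystem itself is the shared state, which is exactly why this is attractive on a development laptop.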

To use it in Django, add this to your settings module:

# This assumes you have defined BASE_DIR, which is the case if you're using
# a generated project. If not, just set it to wherever you think fit.
import os

broker_dir = os.path.join(BASE_DIR, '.broker')

BROKER_URL = 'filesystem://'
BROKER_TRANSPORT_OPTIONS = {
    # "in" and "out" deliberately point at the same folder, so that
    # messages sent by the producer are picked up by the consumer.
    "data_folder_in": os.path.join(broker_dir, "out"),
    "data_folder_out": os.path.join(broker_dir, "out"),
    "data_folder_processed": os.path.join(broker_dir, "processed"),
}

Make sure you create the .broker/out and .broker/processed directories. To simplify setup, I just added code that creates these directories to my settings module (this will only work on Python 3.2 or later):

import os

# exist_ok was added in Python 3.2, hence the version requirement above.
os.makedirs(os.path.join(broker_dir, 'out'), exist_ok=True)
os.makedirs(os.path.join(broker_dir, 'processed'), exist_ok=True)
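The same transport also works outside Django. Below is a minimal standalone sketch, assuming Celery 4+ is installed (which accepts the lowercase setting names used here); the module name tasks and the broker_dir location are illustrative, not prescribed.

```python
import os

from celery import Celery

# Hypothetical layout: keep the broker folders next to this file.
broker_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), '.broker')
for subdir in ('out', 'processed'):
    os.makedirs(os.path.join(broker_dir, subdir), exist_ok=True)

app = Celery('tasks', broker='filesystem://')
app.conf.broker_transport_options = {
    # As in the Django settings above, "in" and "out" share a folder.
    'data_folder_in': os.path.join(broker_dir, 'out'),
    'data_folder_out': os.path.join(broker_dir, 'out'),
    'data_folder_processed': os.path.join(broker_dir, 'processed'),
}


@app.task
def add(x, y):
    return x + y
```

Start a worker with celery -A tasks worker, then call add.delay(2, 2) from another shell; the message travels through the .broker directory instead of a broker server.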