I recently found myself looking to switch our Celery message broker to Redis, as part of a wider task of improving the resilience of the asynchronous jobs in our Django-based web application.
Up to this point we’d been using the Django database as a message broker via the kombu.transport.django app (part of the kombu library). Although reasonable for development purposes and requiring no extra infrastructure other than the database, this message transport is not ideal for production and is really considered experimental. Support for it has been dropped in Celery 4.0.
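For reference, the database-transport setup amounts to little more than enabling the transport app and pointing Celery at the Django database; roughly speaking, something like this (a sketch, not our exact settings):

# settings.py (old setup, sketch)
INSTALLED_APPS = [
    # ... Django and project apps ...
    'kombu.transport.django',
]

# Route Celery messages through the Django database
BROKER_URL = 'django://'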
With our container-based architecture, I knew that introducing a Redis broker ought to be straightforward with Docker, but I had no idea how incredibly easy it would turn out to be. The hardest part was getting past our corporate proxy server.
Introducing Redis into the stack
We use Docker Compose for defining our application’s services, or containers. The docker-compose.yaml file specifies all of the containers comprising our application. There is a container for Gunicorn, a container for Nginx, one for PostgreSQL, and others that run Celery workers.
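To give a sense of the shape of the file before the change, it looked something like this (service names, images and commands here are illustrative, not our exact configuration):

application:
  build: .
  command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
  links:
    - database

nginx:
  image: nginx
  links:
    - application

database:
  image: postgres

celeryworker:
  build: .
  command: celery worker -A myproject
  links:
    - database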
So the first thing I had to do was configure a new container in the docker-compose.yaml for the Redis broker. Before doing this, I took the whole stack down with:
docker-compose down
I then edited the docker-compose.yaml and added:
broker:
  image: redis:3.2.8
  restart: always
Here, I added a container called broker, which is started from a Redis image tagged 3.2.8. I set the restart policy to always, which tells the Docker daemon to always restart the container regardless of its exit status.
After adding this, I ran the command to bring up the new broker container:
docker-compose up -d broker
Because the Redis image did not exist locally on our server, Docker tried to pull it down from the public Docker repository. However, the download fell over with the message:
x509: certificate signed by unknown authority
A quick search on this error revealed that it was likely caused by our corporate proxy server. The proxy does SSL inspection of HTTPS traffic and requires a special root certificate to be installed on every machine that wants to access the Internet. Although this certificate had previously been installed on the server, it had become out of date. I needed to update the certificate so that the Docker daemon could get out to the public repository.
Getting Docker past the proxy
The IT department pointed me to the location of our company’s up-to-date root certificate. It had a .crt extension, but files with .cer extensions can also be used. I copied it to the required location on the server, which was running Fedora 23:
sudo cp certificate.crt /etc/pki/ca-trust/source/anchors/
Then I ran the following command to update the trusted certificates:
sudo update-ca-trust extract
I then restarted the Docker daemon:
sudo systemctl restart docker
Running docker-compose up -d broker again, things looked better this time.
And running docker ps a few seconds later showed that the new container was up.
Finally, a quick tail of the container log with docker-compose logs --tail=100 -f broker confirmed that Redis had started up and was listening on port 6379.
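As an extra sanity check, with a reasonably recent Docker Compose it’s also possible to ping Redis directly inside the container:

docker-compose exec broker redis-cli ping

If the broker is healthy, this comes back with PONG.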
Well, that was easy, and it took about 15 minutes. It would have taken 90 seconds had it not been for the proxy. Still, 15 minutes wasn’t bad.
Linking the containers
So with the Redis container now part of the stack, I had to tell the other containers about it so they could talk to it over Docker’s internal network.
The containers that required updating were the one that ran the Celery worker processes and the application container that ran Gunicorn and was responsible for queueing tasks.
This was another simple update to the docker-compose.yaml. This time it was a case of adding the name of the broker container to the links sections of the dependent containers (additional config stripped for readability):
application:
  links:
    - database
    - broker

celeryworker:
  links:
    - database
    - broker
This made the broker container available to the application and celeryworker containers on the hostname broker.
Updating the Django settings
With the links to the broker container in place, the next task was to switch the broker URL in the Django configuration, which gets read by both the application and celeryworker containers.
The broker URL was specified in a single place in our Django settings.py, so it was simply a case of changing the entry from:
BROKER_URL = 'django://'
to:
BROKER_URL = 'redis://broker:6379/0'
I also had to add the Redis Python client to the requirements.txt, so the application and workers could talk to Redis:
redis==2.10.5
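For context, this is all the change amounted to on the configuration side, because in a typical Celery 3.x Django project the Celery app pulls its configuration from the Django settings. Such an integration looks roughly like this (an illustrative sketch with a made-up project name, not our actual module):

# myproject/celery.py (sketch of a typical Celery 3.x Django integration)
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')

# Pull BROKER_URL (and the rest of the Celery config) from the Django settings,
# so switching brokers only means editing settings.py.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)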
Rebuilding the containers
With the existing containers now reconfigured to talk to the new Redis broker, the final task was to rebuild them so they could pick up the new broker URL and client library.
The Docker image from which the containers are created is configured such that the application source code is copied into it when the image gets built from its Dockerfile. The Dockerfile also tells pip to read the requirements.txt and pull down the application dependencies into the image. Rebuilding the images was simply a case of running:
docker-compose build application celeryworker
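For illustration, a minimal Dockerfile along those lines might look something like this (a sketch with an assumed base image and paths, not our actual Dockerfile):

# Dockerfile (sketch)
FROM python:2.7

# Install the Python dependencies first, so this layer is cached
# until requirements.txt changes.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Copy the application source code into the image.
COPY . /app
WORKDIR /app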
Finally, with the containers rebuilt, I brought the whole stack back up with:
docker-compose up
I spent a few minutes tailing the celeryworker container log with docker-compose logs --tail=100 -f celeryworker to sanity check that tasks were being picked up without any problem, which they appeared to be.
I then sat back and marvelled over just how painless that whole process was. The more I use Docker, the more I like it.