Deploy Celery & RabbitMQ with Compose

Deepjyoti Barman @deepjyoti30
Apr 7, 2021 • 1:34 PM UTC

Of late, I have been working a lot with FastAPI. Recently, however, I had a requirement: I needed to run tasks in the background after a request was made. So, something like: when the request is received, add the task to a queue and return a response right away. The task gets done in the background.

I somewhat knew about Celery, but had never had the need to work with it. So I finally decided to use it and boy oh boy, was I surprised.

What is Celery?

So, Celery is a distributed task queue. What I described in the first paragraph is exactly what Celery does! We pass it a task and it runs it in the background.

It basically runs the task as an asynchronous function, which effectively makes the function run in the background. You can read more about it here.
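
For example, a task is just a decorated Python function, and calling .delay() on it queues the call instead of running it right away. Here is a minimal sketch (the task and the email example are purely illustrative, and the broker URI it uses is explained in the next section):

# tasks.py -- a minimal, illustrative Celery task
from celery import Celery

# the broker URI is covered below; this assumes RabbitMQ running locally
app = Celery("tasks", broker="amqp://localhost:5672")

@app.task
def send_email(address: str):
    # imagine some slow work here, e.g. talking to an SMTP server
    print(f"sending email to {address}")

# .delay() only pushes the task onto the queue and returns immediately;
# a worker process picks it up and runs it in the background
send_email.delay("user@example.com")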

However, in order for Celery to run properly, it needs a broker.

What is a broker?

A broker keeps a list of all the tasks that are to be executed and supplies them to Celery accordingly. Celery then picks up each task and works on it.

Directly from Google:

A broker is a person or firm who arranges transactions between a buyer and a seller for a commission when the deal is executed.

This pretty much sums it up. In our case, we are the seller, Celery is the buyer, and the broker sits in between to handle all the tasks. Brokers are also called message queues or task queues.

Some of the brokers that Celery works with are:

  • RabbitMQ
  • Redis

In this article, I will primarily use RabbitMQ as the broker. You can also read up on how to use Redis with Celery.

The easiest way to set up RabbitMQ is to use Docker. Using the following command, a container with RabbitMQ can be deployed within seconds.

docker run -d --rm -it --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

In the above command, the management image is used. You can check the other available images here.

Breaking down the above command:

  • We are mapping port 15672 of the container to our host.
  • We are mapping port 5672 of the container to our host.

This is because port 15672 serves the management GUI for RabbitMQ and port 5672 is how Celery will communicate with it.
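
If you want to quickly verify that the broker is reachable on port 5672 before wiring up Celery, a tiny script like the one below does the trick. It assumes pika is installed (pip install pika); pika is not required by Celery itself, it just speaks AMQP directly:

# check_broker.py -- sanity check that RabbitMQ accepts connections
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672)
)
print("connected to RabbitMQ:", connection.is_open)
connection.close()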

Worker

Now that we have our broker in place, let's use a Dockerfile to deploy Celery. Since Celery does the tasks in the background, it is referred to as a worker.

We will build the worker with the following Dockerfile:

FROM python:3.6

# copy the contents of the project into the image
COPY ./ /app/

WORKDIR /app

# We will use internal functions of the API,
# so install all dependencies of the API
RUN pip install -r requirements.txt

# run the Celery worker; -A worker points at the worker module
ENTRYPOINT celery -A worker worker --loglevel=INFO

Using the above Dockerfile, we can deploy the worker.
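
The ENTRYPOINT above runs celery -A worker worker, so it expects a module named worker that exposes a Celery app. A minimal sketch of what that module might look like (the add_numbers task is just an illustration):

# worker.py -- the module the -A worker flag points at
from celery import Celery

# broker hardcoded for local testing; the compose file below will pass
# it in as an environment variable instead
app = Celery("worker", broker="amqp://localhost:5672")

@app.task
def add_numbers(x: int, y: int) -> int:
    return x + y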

Using a compose file

Now that we have both of the services ready, we can write our Docker Compose file. Read more about Docker Compose here.

Usually, the worker is run alongside an API, and the API makes calls to the worker in order to run tasks in the background.

In our case, we will be creating two containers:

  • RabbitMQ container
  • Worker container

We want our worker to access the RabbitMQ container over the network and use it as the broker.

Most of the time, you'll probably also need an API container that interacts with the worker over the same network.
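
For instance, a FastAPI endpoint would hand work off to the worker roughly like this (it reuses the illustrative add_numbers task from the worker sketch above):

# api.py -- a hypothetical FastAPI endpoint that queues a task
from fastapi import FastAPI

from worker import add_numbers  # the task defined in worker.py

app = FastAPI()

@app.post("/add")
def add(x: int, y: int):
    # .delay() pushes the task onto the broker and returns immediately
    result = add_numbers.delay(x, y)
    # return the task id so the client can check on it later
    return {"task_id": result.id}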

Following is the compose file:

version: "3.7"

services:
  # Deploy the broker.
  rabbitmq_server:
    image: rabbitmq:3-management
    ports:
      # Expose the port for the worker to add/get tasks
      - 5672:5672
      # OPTIONAL: Expose the GUI port
      - 15672:15672

  # Deploy the worker
  worker:
    # Build using the worker Dockerfile
    build:
      context: .
      dockerfile: worker.Dockerfile
    # OPTIONAL: If your worker needs to access a database that is deployed
    # locally on the host, set the network mode to host.
    network_mode: host
    # Pass the rabbitmq_uri as an env variable in order to
    # connect to our broker service
    environment:
      # NOTE: Below we are using 127.0.0.1 because this container
      # runs on the host network, so it has access to the
      # host's ports.
      # If it were not running on the host network, we would have to
      # connect using the service name, like the following:
      # amqp://rabbitmq_server:5672
      rabbitmq_uri: amqp://127.0.0.1:5672
    # Make it wait for rabbitmq deployment
    depends_on: 
      - rabbitmq_server
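
On the worker side, the rabbitmq_uri variable set above can be picked up when creating the Celery app, with a localhost fallback for local runs. A small sketch, building on the worker.py from earlier:

# worker.py -- read the broker URI from the environment
import os

from celery import Celery

app = Celery(
    "worker",
    broker=os.environ.get("rabbitmq_uri", "amqp://localhost:5672"),
)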

With the above compose file, you can deploy everything as follows:

docker-compose -f docker-compose.yml up --detach --scale worker=2 --build

In the above command, we are scaling the worker service to have 2 containers.
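
You can check that both worker containers actually came up with docker-compose ps, and follow their logs with docker-compose logs -f worker.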

Gotchas to look out for

Connection URI for RabbitMQ

Let's say we have RabbitMQ deployed in a container called rabbitmq. Now, from our worker container we need to access RabbitMQ in order to add tasks. In this case, we will have to connect to RabbitMQ using a connection URI. This URI will be something like:

amqp://rabbitmq:5672

Note that we use the name of the container in the URI. Docker resolves the container name to that container's address on the shared network.

Typically, outside of Docker, this URI would be something like amqp://localhost:5672.

However, now let's say we need to run our container on the host network. This can easily be done using the network_mode: host field in the compose file or the --network=host argument to the docker run command.

In cases like this, our container shares the host's network, which means the RabbitMQ container is reachable the same way it is reachable from the host, that is at:

amqp://127.0.0.1:5672

Note that we exposed port 5672 when deploying the RabbitMQ container, which is why it is reachable from the host.
