Dockerize Django With Postgres, Gunicorn, And Nginx

In this article, we’re going to deploy a Python Django application with PostgreSQL as a database using Docker, Gunicorn, and Nginx.

Prerequisites

First, ensure the following are installed on your machine:

  1. Docker
  2. docker-compose
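You can quickly verify both installations:

$ docker --version
$ docker-compose --version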

Let’s jump directly into Dockerizing Django with Postgres. Also, make sure you have a Django project set up on your system.

You can find the source code of this article here.

1. Docker and docker-compose with Django

After installing Docker, add a Dockerfile to the root directory of your project:

FROM python:3.8.9-alpine

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN pip install --upgrade pip
COPY ./requirements.txt .

RUN pip install -r requirements.txt

COPY . .

Here, we used an Alpine-based Docker image for Python 3.8.9. Then we set two environment variables:

  1. PYTHONDONTWRITEBYTECODE (which prevents writing pyc files)
  2. PYTHONUNBUFFERED (which prevents buffering stdout and stderr)

Next, we upgraded pip, copied the requirements.txt file into the working directory, and installed the requirements. Finally, we copied our project into the working directory (/app).
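This article doesn't show the contents of requirements.txt; as a reference, a minimal file for this stack might look like the following (versions are illustrative, and your project will have its own additions such as django-summernote or Pillow):

Django>=3.0,<4.0
psycopg2>=2.8
gunicorn>=20.0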

Now, create a docker-compose.yml file in the project root and add the services:

version: '3.5'

services:
    app:
        build: .
        command: python manage.py runserver 0.0.0.0:8000
        volumes:
            - static_data:/vol/web
        ports:
            - "8000:8000"
        restart: always
        env_file:
            - ./.env

    app-db:
        image: postgres:12-alpine
        restart: always
        volumes:
            - postgres_data:/var/lib/postgresql/data:rw
        env_file:
            - ./.env

volumes:
    static_data:
    postgres_data:

Create a .env file at the root (the same directory containing docker-compose.yml). Since the app-db service reads its credentials from this file too, include the Postgres variables right away:

DEBUG=1
SECRET_KEY=your_secret_key_here
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=portfolio_db
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
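Django doesn't read the .env file itself; docker-compose injects these values into the container's environment via env_file, so settings.py can pick them up with os.environ. A minimal sketch, assuming the variable names above (the DB_ENGINE variable appears later in .env.prod, hence the fallback):

# settings.py -- configuration read from the environment variables
# injected by docker-compose's env_file
import os

SECRET_KEY = os.environ.get('SECRET_KEY')
DEBUG = int(os.environ.get('DEBUG', 0))
# space-separated list, e.g. "localhost 127.0.0.1 [::1]"
ALLOWED_HOSTS = os.environ.get('DJANGO_ALLOWED_HOSTS', '').split()

DATABASES = {
    'default': {
        'ENGINE': os.environ.get(
            'DB_ENGINE', 'django.db.backends.postgresql_psycopg2'),
        'NAME': os.environ.get('POSTGRES_DB'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': os.environ.get('POSTGRES_HOST'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}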

In the docker-compose file, build: . means the image will be built from the Dockerfile in the root directory that we created before. The app-db service runs Postgres, and the named volumes at the bottom are declared so both services can mount them.

Now, build the image:

$ docker-compose up -d --build

Then run the migrations:

$ docker-compose exec app python manage.py migrate --noinput

Output:

 Operations to perform:
  Apply all migrations: admin, auth, blogs, contenttypes, django_summernote, portfolio, sessions, works
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying blogs.0001_initial... OK
  Applying django_summernote.0001_initial... OK
  Applying django_summernote.0002_update-help_text... OK
  Applying portfolio.0001_initial... OK
  Applying sessions.0001_initial... OK
  Applying works.0001_initial... OK
  Applying works.0002_auto_20200325_1330... OK
  Applying works.0003_auto_20200325_1411... OK
  Applying works.0004_auto_20200325_1413... OK
  Applying works.0005_auto_20200325_1417... OK
  Applying works.0006_remove_work_image... OK
  Applying works.0007_work_image... OK

If anything goes wrong, you can run docker-compose down -v to remove the volumes along with the containers. Then re-build and run the migrations again.

Ensure the database tables were created (use the value of POSTGRES_USER as the username; the session below is from the author's setup, where it is sagar):

$ docker-compose exec app-db psql --username=sagar --dbname=portfolio_db
psql (12.7)
Type "help" for help.

portfolio_db=# \c portfolio_db
You are now connected to database "portfolio_db" as user "sagar".
portfolio_db=# \l
                               List of databases
     Name     | Owner | Encoding |  Collate   |   Ctype    | Access privileges
--------------+-------+----------+------------+------------+-------------------
 portfolio_db | sagar | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres     | sagar | UTF8     | en_US.utf8 | en_US.utf8 |
 template0    | sagar | UTF8     | en_US.utf8 | en_US.utf8 | =c/sagar         +
              |       |          |            |            | sagar=CTc/sagar
 template1    | sagar | UTF8     | en_US.utf8 | en_US.utf8 | =c/sagar         +
              |       |          |            |            | sagar=CTc/sagar
(4 rows)

portfolio_db=# \dt
                   List of relations
 Schema |             Name             | Type  | Owner
--------+------------------------------+-------+-------
 public | auth_group                   | table | sagar
 public | auth_group_permissions       | table | sagar
 public | auth_permission              | table | sagar
 public | auth_user                    | table | sagar
 public | auth_user_groups             | table | sagar
 public | auth_user_user_permissions   | table | sagar
 public | blogs_category_post          | table | sagar
 public | blogs_comment                | table | sagar
 public | blogs_post                   | table | sagar
 public | blogs_post_categories        | table | sagar
 public | django_admin_log             | table | sagar
 public | django_content_type          | table | sagar
 public | django_migrations            | table | sagar
 public | django_session               | table | sagar
 public | django_summernote_attachment | table | sagar
 public | portfolio_contact            | table | sagar
 public | works_category_work          | table | sagar
 public | works_work                   | table | sagar
 public | works_work_categories        | table | sagar
(19 rows)

portfolio_db=#

Now, add an entrypoint.sh script inside a scripts directory at the project root:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$POSTGRES_HOST" "$POSTGRES_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

# python manage.py flush --no-input
# python manage.py migrate


exec "$@"

Here, you can uncomment the flush and migrate commands in development mode (DEBUG=1), but running them automatically is not recommended for production. The final exec "$@" replaces the shell with whatever command the container was started with, i.e. the command from docker-compose.

Update the Dockerfile with the psycopg2 build dependencies and script file permissions, and also add the DATABASE variable to the .env file.

FROM python:3.8.9-alpine

WORKDIR /app

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# psycopg2 dependencies installation
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers

RUN pip install --upgrade pip
COPY ./requirements.txt .

RUN pip install -r requirements.txt

COPY . .
COPY ./scripts /scripts

RUN chmod +x /scripts/*

RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static

RUN chmod -R 755 /vol/web

ENTRYPOINT ["/scripts/entrypoint.sh"]

Edit the .env file, adding DATABASE for the entrypoint script. Keep comments on their own line; depending on the compose version, an inline # after a value can end up as part of the value:

DEBUG=1
SECRET_KEY=your_secret_key_here
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=portfolio_db
# host name = the app-db service name from docker-compose
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
DATABASE=postgres

Now, re-build and run, then try http://localhost:8000/.
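These are the same commands as before; bring everything down with -v first to clear the old volumes:

$ docker-compose down -v
$ docker-compose up -d --build
$ docker-compose exec app python manage.py migrate --noinput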

Here you go, you have successfully configured Docker and docker-compose for your application.




2. Production Grade Deployment with Gunicorn and Nginx


Gunicorn

Now, install Gunicorn. It’s a production-grade WSGI server.
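Gunicorn is installed inside the image via requirements.txt, so make sure it's listed there (the version below is illustrative):

gunicorn==20.1.0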

Since we don't want to use Django's built-in development server in production, create a production compose file (docker-compose.prod.yml):

version: '3.5'

services:
    app:
        build:
            context: .
        command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
        volumes:
            - static_data:/vol/static
        ports:
            - "8000:8000"
        restart: always
        env_file:
            - .env.prod
        depends_on:
            - app-db

    app-db:
        image: postgres:12-alpine
        ports:
            - "5432:5432"
        restart: always
        volumes:
            - postgres_data:/var/lib/postgresql/data:rw
        env_file:
            - .env.prod
volumes:
    static_data:
    postgres_data:

Here, we’re using the gunicorn command instead of Django's development-server command, and the static_data volume is now mounted at /vol/static, which the Nginx proxy will read from later. For now, let's create a .env.prod file for the production environment variables:

DEBUG=0
SECRET_KEY=your_production_secret_key
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DB_ENGINE=django.db.backends.postgresql_psycopg2
POSTGRES_HOST_AUTH_METHOD=trust
POSTGRES_USER=sagar
POSTGRES_PASSWORD=********
POSTGRES_DB=portfolio_db_prod
POSTGRES_HOST=app-db
POSTGRES_PORT=5432
DATABASE=postgres

Add both env files to .gitignore if you want to keep them out of version control, for example:
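.env
.env.prod

Now, bring down all the containers; the -v flag also removes the associated volumes: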

$ docker-compose down -v

Then, re-build images and run the containers:

$ docker-compose -f docker-compose.prod.yml up --build

Run with the -d flag if you want to run the services in the background. If anything errors out, inspect the logs with:

$ docker-compose -f docker-compose.prod.yml logs -f

Now, let's create a production Dockerfile as Dockerfile.prod, along with a production entrypoint.prod.sh file inside the scripts directory at the root. The entrypoint.prod.sh script:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$POSTGRES_HOST" "$POSTGRES_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

exec "$@"

Dockerfile.prod file with scripts permission:

FROM python:3.8.9-alpine as builder

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev libc-dev linux-headers

RUN apk add jpeg-dev zlib-dev libjpeg

RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /wheels -r requirements.txt


#### FINAL ####

FROM python:3.8.9-alpine

RUN mkdir /app
COPY . /app
WORKDIR /app

RUN apk update && apk add libpq
COPY --from=builder ./wheels /wheels
COPY --from=builder ./requirements.txt .
RUN pip install --no-cache /wheels/*
#RUN pip install -r requirements.txt


COPY ./scripts /scripts
RUN chmod +x /scripts/*

RUN mkdir -p /vol/media
RUN mkdir -p /vol/static

#RUN adduser -S user

#RUN chown -R user /vol

RUN chmod -R 755 /vol
#RUN chown -R user /app
#RUN chmod -R 755 /app

#USER user

ENTRYPOINT ["/scripts/entrypoint.prod.sh"]

Here we used a multi-stage build, since it reduces the final image size: builder is a temporary image used only to compile Python wheels for our dependencies, and those wheels are copied into the final stage. We can also create a non-root user, which is a best practice because it limits what an attacker can do inside a compromised container; the relevant lines are commented out above and sketched below.
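If you enable the non-root user, the uncommented lines would look like this (a sketch; adduser -S creates a system user on Alpine):

RUN adduser -S user
RUN chown -R user /vol
RUN chmod -R 755 /vol
RUN chown -R user /app
USER user

Note that the chown steps must come before USER, since afterwards the build no longer runs as root. Now, update the production compose file to use the production Dockerfile: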

version: '3.5'

services:
    app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
        volumes:
            - static_data:/vol/static
        ports:
            - "8000:8000"
        restart: always
        env_file:
            - .env.prod
        depends_on:
            - app-db

    app-db:
        image: postgres:12-alpine
        ports:
            - "5432:5432"
        restart: always
        volumes:
            - postgres_data:/var/lib/postgresql/data:rw
        env_file:
            - .env.prod
volumes:
    static_data:
    postgres_data:

Rebuild, and run:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec app python manage.py migrate --noinput

Nginx

Nginx gives you a lot of control over how requests are handled. Let's add Nginx to act as a reverse proxy for Gunicorn: it will serve static and media files directly and forward everything else to the app. Add the service to the docker-compose file (production):

version: '3.5'

services:
    app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
        volumes:
            - static_data:/vol/static
            - media_data:/vol/media
        ports:
            - "8000:8000"
        restart: always
        env_file:
            - .env.prod
        depends_on:
            - app-db

    app-db:
        image: postgres:12-alpine
        ports:
            - "5432:5432"
        restart: always
        volumes:
            - postgres_data:/var/lib/postgresql/data:rw
        env_file:
            - .env.prod

    proxy:
        build: ./proxy
        volumes:
            - static_data:/vol/static
            - media_data:/vol/media
        restart: always
        ports:
            - "8008:80"
        depends_on:
            - app
volumes:
    static_data:
    media_data:
    postgres_data:

Inside the root directory, create a proxy directory (name it whatever you like) and add a configuration file; in my case I created default.conf. Note that the unprivileged Nginx image we'll use below runs as a non-root user and therefore can't bind port 80, so we listen on 8080 instead:

server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location /media {
        alias /vol/media;
    }


    location / {
        proxy_pass http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

We use proxy_pass rather than uwsgi_pass here: Gunicorn speaks plain HTTP, while uwsgi_pass (and the uwsgi_params file that goes with it) is only for servers speaking the binary uwsgi protocol, such as uWSGI. So no uwsgi_params file is needed.

Also, add a Dockerfile inside the proxy directory for the Nginx image:

FROM nginxinc/nginx-unprivileged:1-alpine

COPY ./default.conf /etc/nginx/conf.d/default.conf

You can use expose instead of ports in the docker-compose.prod.yml file for the app service, since only the Nginx proxy needs to reach Gunicorn now:

app:
        build:
            context: .
            dockerfile: Dockerfile.prod
        command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
        volumes:
            - static_data:/vol/static
            - media_data:/vol/media
        expose:
            - 8000
        restart: always
        env_file:
            - .env.prod
        depends_on:
            - app-db

Again, re-build run and try:

$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec app python manage.py migrate --noinput
$ docker-compose -f docker-compose.prod.yml exec app python manage.py collectstatic --no-input --clear
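For collectstatic to put files where the proxy expects them, STATIC_ROOT and MEDIA_ROOT in settings.py should point at the shared volume paths; a minimal sketch, assuming the /vol/static and /vol/media mounts used above:

# settings.py -- static/media locations matching the shared Docker volumes
STATIC_URL = '/static/'
STATIC_ROOT = '/vol/static'

MEDIA_URL = '/media/'
MEDIA_ROOT = '/vol/media'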

Ensure the app is running at http://localhost:8008 (through the Nginx proxy); if you kept the published port on the app service, Gunicorn also answers directly at http://localhost:8000.
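A quick smoke test from the host:

$ curl -I http://localhost:8008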

And you have successfully configured Docker, docker-compose, Gunicorn, and Nginx for your Django application.
You can find the full source code of this article here.
