Django Deployment with Docker
The instructions here should help you get Django projects (or, with minimal adaptation, other Python-based web projects) running on Docker.
I'm assuming here that you have a working project: you can check it out, run it with runserver, etc.
Development-style Deployment
These instructions should be enough to get things running so you can work on the code, at least. They are not appropriate for production.
For this deployment, we will only need one container, but let's set it up with Docker Compose so we're ready to add more later. The docker-compose.yml file will be something like this:

```yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile-devel
    ports:
      - "8000:8000"
    volumes:
      - ./:/code:ro
    environment:
      DJANGO_MODE: devel  # look at os.environ['DJANGO_MODE'] to detect deployment
```
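As the comment suggests, your settings module can then branch on this variable. A minimal sketch (the production-only setting shown is just a placeholder; adapt it to your project):

```python
import os

# DJANGO_MODE is set by docker-compose.yml; default to devel when unset.
DJANGO_MODE = os.environ.get('DJANGO_MODE', 'devel')

# Only run with DEBUG on in the development deployment.
DEBUG = (DJANGO_MODE == 'devel')

if DJANGO_MODE == 'production':
    # production-only hardening goes here, for example:
    SECURE_CONTENT_TYPE_NOSNIFF = True
```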
This will (when everything is done) expose your app at http://localhost:8000/.
And the Dockerfile-devel will be something like this:
```dockerfile
FROM python:3.7
WORKDIR /code
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt
CMD python mysite/manage.py migrate \
    && python mysite/manage.py loaddata data.json \
    && python mysite/manage.py runserver 0:8000
```
This assumes you have a requirements.txt in the current directory with your pip requirements, and a data.json fixture with some data for development. Adjust accordingly.
You can create a Django fixture (like the data.json used here) with the dumpdata command.
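For reference, a fixture is just a JSON list of serialized model instances. A minimal data.json might look like this (the polls app and its Question model here are hypothetical; yours will differ):

```json
[
  {
    "model": "polls.question",
    "pk": 1,
    "fields": {
      "question_text": "Is the container up?",
      "pub_date": "2019-01-01T00:00:00Z"
    }
  }
]
```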
In your Django settings, make sure it will answer requests for the hostnames it's served under:

```python
ALLOWED_HOSTS = ['app', 'localhost']
```
And for now, we can use SQLite as the database. That means database settings like:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/db.sqlite3',
    }
}
```
Production-style Deployment
For a production-like environment, we'll have three services running: our application, a database server, and a frontend web server. The frontend server will be responsible for serving the static files (assembled by Django's collectstatic).
Docker Compose
My docker-compose.yml to do all of this is:

```yaml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile-app
    volumes:
      - static:/static:rw
    environment:
      DJANGO_MODE: production
  db:
    image: "postgres:latest"
    volumes:
      - db:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: project
      POSTGRES_PASSWORD: secret
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "8080:80"
    volumes:
      - static:/static:ro
volumes:
  static:
  db:
```
Database Container
The database setup should be fairly straightforward. Note that the Compose config sets a semi-permanent volume for the database data, so it will persist when the container is destroyed/recreated.
The above config suggests Django settings like:
```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'project',
        'USER': 'project',
        'PASSWORD': 'secret',
        'HOST': 'db',
        'PORT': 5432,
    }
}
```
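Since DJANGO_MODE already distinguishes the two deployments, the settings file can pick between this and the earlier SQLite config at startup. A sketch (the database_config helper is just for illustration; the credentials match the Compose file above):

```python
import os

def database_config(mode):
    """Return a Django DATABASES dict for the given deployment mode."""
    if mode == 'production':
        return {
            'default': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'project',
                'USER': 'project',
                'PASSWORD': 'secret',
                'HOST': 'db',  # the Compose service name resolves in-network
                'PORT': 5432,
            }
        }
    # devel (and anything else) falls back to SQLite
    return {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': '/db.sqlite3',
        }
    }

DATABASES = database_config(os.environ.get('DJANGO_MODE', 'devel'))
```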
Web Frontend Container
The static files that Django will assemble for us need to be stored somewhere: the above Compose config creates a volume static that will be populated by the application and served by the frontend server.
The Django config to make this work is:

```python
STATIC_URL = '/static/'
STATIC_ROOT = '/static'
```
You will need to update the web server config so that most URLs are handled by your app, while the static files are served directly by the frontend server. For nginx, that will be something like:

```nginx
location / {
    proxy_pass http://app:8000;
}
location /static {
    alias /static;
}
```
Make sure your config file is copied to /etc/nginx/conf.d/default.conf by the Dockerfile-web.
Application Container
For production, we will copy our code into the image, not mount it externally.
The Docker command for the main application will need to do a few things: (1) wait for the database to be ready, (2) have Django collect the static assets into a single location where the frontend server can find them, (3) run the app with a more production-ready server.
```dockerfile
FROM python:3.7
WORKDIR /code
ENV PYTHONUNBUFFERED=1
COPY wait.sh /wait.sh
RUN chmod +x /wait.sh
COPY requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt
COPY ./ /code
CMD /wait.sh db 5432 \
    && python mysite/manage.py collectstatic --noinput \
    && python mysite/manage.py migrate \
    && python mysite/manage.py loaddata data.json \
    && /usr/local/bin/uwsgi --ini /code/mysite/mysite/uwsgi.ini
```
I'm using uWSGI here, which seems to be the easiest to get working. You will need uwsgi in your requirements.txt.
This also uses a wait.sh script that pauses until a service is available: the Postgres database in this case.
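The wait.sh script itself isn't reproduced here; any small script that polls the port will do. The logic it needs, sketched in Python (a shell loop around netcat is just as common):

```python
import socket
import time

def wait_for(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to host:port succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service is accepting traffic.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```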
You will need a uwsgi.ini file which will serve as a recipe to start up the uWSGI process. This runs your WSGI processes as the www-data user: it's best practice for security to use a minimally-privileged user for that process.
```ini
[uwsgi]
chdir = /code/mysite
module = mysite.wsgi
master = true
processes = 4
http = app:8000
uid = www-data
```