Building a Simple Web App Part 3 of N: Docker

The first time I used Docker was back when the company was called dotCloud. It's come a long way since then and is now the heart of a whole ecosystem for deploying software. For me, what makes Docker great is that it gives you the ability to create the equivalent of an .exe for your service: you bundle together what you need into a Docker image, which can then run almost anywhere.

Making the Dockerfile

I wanted to create a simple Dockerfile to use for my web app. Here’s what I came up with. Nothing in here is really specific to the app (except, I suppose, the contents of the requirements.txt file), which is great since I can reuse the same file for different apps.

FROM bitnami/minideb:latest

#
# Install system dependencies
#
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx
RUN apt-get install -y gunicorn3
RUN apt-get install -y python3

# for installation of all dependencies in requirements.txt
RUN apt-get install -y python3-pip
RUN apt-get install -y curl
RUN apt-get install -y cmake

# These are only needed for debugging
RUN apt-get install -y git
RUN apt-get install -y postgresql-client
RUN apt-get install -y vim
RUN apt-get install -y lsof
RUN apt-get install -y zsh

# Set the working directory for the code
WORKDIR /code
#
# Install python dependencies
#
# COPY the requirements.txt file first 
# so that we can cache the pip install step
COPY requirements.txt /code

# See: https://veronneau.org/python-311-pip-and-breaking-system-packages.html
ENV PIP_BREAK_SYSTEM_PACKAGES=1
RUN pip install --upgrade pip
RUN pip install -r requirements.txt

#
# Set up user info
#
# Create a user group for django
RUN addgroup --system dj_group
# Create a django user in that group
RUN useradd django
RUN adduser django dj_group

# Push all the code into working directory
COPY . /code

# Set the config file for nginx
COPY deploy/conf/nginx.conf /etc/nginx/

# collect static files
RUN python3 manage.py collectstatic --noinput

# We don't put a CMD in here to run anything because 
# that is going to be done by the docker-compose.yml file

Most of the above is straightforward and just copied from other examples on the interwebs. A few notes:

  • I am using bitnami/minideb as the base image to build on, but there are several other options.
  • The PIP_BREAK_SYSTEM_PACKAGES setting is a minor annoyance. There are other ways around it, but this one works fine.
  • I add several system dependencies (git, vim, lsof, etc.) that make it nicer for me to debug things but aren’t strictly needed.
  • collectstatic is a Django command that copies all the static files to one place so that they can be served up by nginx.
  • I don’t include a CMD instruction since I prefer to control that from the docker compose file or the command line.

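Since collectstatic runs during the image build, the Django settings need a STATIC_ROOT for it to copy into. Those settings aren't shown in this post, so the following is just an illustrative sketch; the exact path is an assumption and has to line up with whatever directory the nginx.conf serves:

```python
# settings.py (illustrative sketch; the real path must match your nginx.conf)
STATIC_URL = "/static/"

# collectstatic gathers every app's static files into this one directory,
# which nginx can then serve directly without touching Django.
STATIC_ROOT = "/code/staticfiles"
```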
You can build this with something like:

docker build -t appname/web .

Or whatever you’d like your container to be named.

Setting up Docker Compose

Next we want to assemble an environment using docker compose. This lets us use an off-the-shelf Postgres image, which keeps things simple. I use two different compose files: one for local dev and one for the server. Each of them has the same basic format.

services:
  db:
       # stuff about the postgres db
  web:
       # stuff about the web server
volumes:
     # config about any named volumes defined

Let’s start by looking at the db service config for the local dev setup.

  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_DB=dj_app_name
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    # This is so manage.py runserver works on the dev machine
    ports:
      - '5432:5432'
    volumes:
      - db:/var/lib/postgresql/data

Some things to note:

  • dj_app_name is just a generic name for the django database for my application.
  • If I were really going to production I might care more about the specifics of the image, but for now it seems fine to just use the official Docker postgres image.
  • The ports directive is what allows the postgres server to be seen outside the network environment created by compose. When deploying on the server we don’t need this because the web app always runs from docker compose. But locally on my dev machine I still sometimes like to run the web app using manage.py runserver, and this allows that app to see the postgres running in docker on the expected port.
  • We set the volume here to be a named volume “db” which is managed by docker. So we will have to add this to the volumes section.
volumes:
  db:
    driver: local

If you want more details, there’s a good explainer here.

For the version on the server it’s much the same:

  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_DB=dj_app_name
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
     - /mnt/data/db:/var/lib/postgresql/data

The only real difference is that instead of a named volume we map the postgres data directly to /mnt/data/db on the host system, which is actually set up as volume storage right now.

Now let’s look at the web service definition for local:

  web:
    image: bricetebbs/app_name:latest
    command: sh deploy/docker_go_prod_async.sh
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - DB_CONFIG=compose
      - SECRET_KEY=va-m%%@-423lKrvz83y^uzj#03c&fqmrv59^y@jhmw%m!l)xw*
      - DEBUG=True
    depends_on:
      - db

There’s a few things going on here:

  • The image name here is the one I give it when I build the docker image locally on my laptop. Later we’ll see what happens when I’m using a CI pipeline to build it.
  • For the command I have a set of scripts that do different things depending on what I’m testing. More on that below.
  • Volumes – This maps the current folder to /code. This is nice during development because you can change the local files and see the results without rebuilding the container. But if your container is running on a server where the files exist only inside the container, this will not work: the bind mount hides the files you put in the image, and since there’s nothing at that path on the server’s filesystem you will spend some time pulling your hair out.
  • Ports – This makes it so that port 8000 (where the server is running) is available outside docker compose. For local development this means I can talk to the web service directly from the browser using port 8000. On the server I actually have the whole thing proxied behind apache but that’s another topic for a later post.
  • environment – Setting these environment variables lets me control some Django settings via env vars. I just wrote a simple function in Django that I use in my settings.py like this: SECRET_KEY = get_setting('SECRET_KEY')
  • depends_on just says this service needs the db service to be running before it starts.
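The post doesn’t show get_setting, but assuming it’s a thin wrapper around os.environ, a minimal sketch might look like this (the boolean coercion for things like DEBUG is my own assumption, since compose delivers every value as a string):

```python
import os

def get_setting(name, default=None):
    """Read a Django setting from an environment variable.

    docker compose passes everything as strings, so coerce the common
    boolean spellings; any other value is returned verbatim.
    """
    value = os.environ.get(name, default)
    if isinstance(value, str) and value.lower() in ("true", "false"):
        return value.lower() == "true"
    return value
```

Then settings.py can do SECRET_KEY = get_setting('SECRET_KEY') and DEBUG = get_setting('DEBUG', default=False), with the values coming from the environment block above.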

For the server version the only real difference is this line:

image: registry.gitlab.com/bricetebbs/app_name/web

I am using GitLab CI/CD to build the images for “production”, hence the difference in naming.

Command Scripts

For running the web service in the container, I have three different “go” scripts that I use in the compose file. They are designed to work both locally and on the server.

The simplest one is just docker_go.sh:

#!/bin/sh
python3 manage.py runserver 0.0.0.0:8000

This just runs the Django development server at 0.0.0.0:8000 and is only for testing.

Next is docker_go_prod.sh

#!/bin/sh
export DJANGO_SETTINGS_MODULE=cproject.settings
# start up nginx
service nginx start
# set up gunicorn behind the nginx proxy
gunicorn cproject.wsgi -b 0.0.0.0:9000

This one starts up nginx, which proxies to gunicorn for the Django side. I’ll cover the nginx config in a later post. cproject is just the startup Django folder for my app, where the settings.py and wsgi.py files live.

For async Django with ASGI there is docker_go_prod_async.sh:

#!/bin/sh
export DJANGO_SETTINGS_MODULE=cproject.settings
# start up nginx
service nginx start
# set up gunicorn behind the nginx proxy running as async
gunicorn cproject.asgi:application -k uvicorn.workers.UvicornWorker -b 0.0.0.0:9000 --workers=3 --access-logfile '-'

So now docker compose can be used to manage the setup.

docker compose up -d

will get everything started and detach. Generally I don’t ever need to restart the db service, so mostly I am doing:

docker compose up web

or

docker compose down web

