Manage static files for Django when deployed on Kubernetes
Posted on 2021-03-31 in Trucs et astuces
This is a continuation of my article Some tips to deploy Django in kubernetes. In that article, I talked about generic tips you should apply when deploying Django into Kubernetes. Here, I'll focus on static files.
In case you don't already know it: gunicorn (or any other application server) is not designed to serve your static files directly. You should use something else for that.
I think you have three main options:
Rely on WhiteNoise and let your application serve the files.
Put all your static files into an S3 bucket or something equivalent. For that, you will need to:
- Make this bucket public on the internet.
- Configure STATIC_URL in your settings so Django knows where to find these files (see the sketch after this list).
- Collect your static files and upload them to the bucket during your deployment process.
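For the bucket option, the Django side can boil down to a few settings. Here is a minimal sketch assuming django-storages with its S3 backend; the bucket name, region, and domain are placeholders, and any equivalent storage backend works the same way:

```python
# settings/prod.py -- sketch for the bucket option, assuming django-storages
# with its S3 backend. Bucket name, region and domain are placeholders.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3StaticStorage"

AWS_STORAGE_BUCKET_NAME = "myapp-static"  # placeholder
AWS_S3_REGION_NAME = "eu-west-3"          # placeholder
AWS_S3_CUSTOM_DOMAIN = f"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com"

# Django will then generate static URLs pointing at the bucket.
STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"
```

With something like this in place, python manage.py collectstatic uploads the files to the bucket as part of your deployment.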
Configure an nginx sidecar and let it serve the files. For that, you will need to:
Configure the nginx sidecar as described in my previous article.
Collect the static files in the Dockerfile with something like this (I updated my setup-django-run-as-non-root.sh script for that):
```bash
# Collect static
# Use dummy values just to allow the command to run.
export "DJANGO_SETTINGS_MODULE=myapp.settings.prod"
export "DB_NAME=postgres"
python manage.py collectstatic --no-input
```
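To situate that snippet in the image build, here is a hypothetical excerpt of the Dockerfile; the paths are illustrative and depend on your image layout:

```dockerfile
# Illustrative sketch: the snippet above lives in setup-django-run-as-non-root.sh,
# which runs at build time so the collected files end up in the image.
COPY setup-django-run-as-non-root.sh /app/
RUN /app/setup-django-run-as-non-root.sh
```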
Mount a shared folder in both the nginx and the Django containers:
Configure this volume mount in both containers:
```yaml
- name: staticfiles
  mountPath: /var/www/api/
  # The API must be able to copy the files to the volume.
  readOnly: false
```
Configure this emptyDir volume in the pod spec:
```yaml
- name: staticfiles
  emptyDir: {}
```
Use a script to copy the static files and then run the application. An emptyDir volume is always empty at pod startup, even if its mount path already contains files in the image. So if you mounted it directly over the static folder, the collected files would be hidden. Instead, we mount it somewhere else, copy the files into it at startup, and then run the application. I created run-django-production.sh for that:
```bash
#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

mkdir -p /var/www/api/
cp -R static /var/www/api/

gunicorn --bind :8000 --workers 5 myapp.wsgi
```
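To tie these pieces together, here is a simplified sketch of the relevant part of the Deployment. Image names and paths are placeholders, and the mount for the nginx configuration itself (see my previous article and the sketch after the configuration below) is reduced to a comment:

```yaml
# Simplified sketch of the pod spec; image names and paths are placeholders.
containers:
  - name: api
    image: myapp:latest  # placeholder
    # The script above copies the static files into the volume, then starts gunicorn.
    command: ["/app/run-django-production.sh"]
    ports:
      - containerPort: 8000
    volumeMounts:
      - name: staticfiles
        mountPath: /var/www/api/
        # The API must be able to copy the files to the volume.
        readOnly: false
  - name: nginx
    image: nginx:1.19  # placeholder
    ports:
      - containerPort: 80
    volumeMounts:
      - name: staticfiles
        mountPath: /var/www/api/
        readOnly: true  # nginx only needs to read; false (as above) also works
      # ... plus a mount for the nginx configuration itself ...
volumes:
  - name: staticfiles
    emptyDir: {}
```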
Configure nginx correctly, for instance with this configuration:
```nginx
upstream app_server {
    server 127.0.0.1:{{ .Values.container.port }} fail_timeout=0;
}


server {
    listen 80;
    root /var/www/api/;
    client_max_body_size 1G;

    access_log /dev/stdout;
    error_log stderr;

    location / {
        location /static {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Max-Age 3600;
            add_header Access-Control-Expose-Headers Content-Length;
            add_header Access-Control-Allow-Headers Range;

            if ($request_method = OPTIONS) {
                return 204;
            }

            try_files /$uri @django;
        }

        location /nghealth {
            return 200;
        }

        try_files $uri @django;
    }

    location @django {
        proxy_connect_timeout 30;
        proxy_send_timeout 30;
        proxy_read_timeout 30;
        send_timeout 30;
        # We have another proxy in front of this one. It will capture traffic
        # as HTTPS, so we must not set X-Forwarded-Proto here since it's already
        # set with the proper value.
        # proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
```
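The {{ .Values.container.port }} placeholder suggests this file is rendered by Helm. One common way to get it into the sidecar is to ship the rendered file in a ConfigMap and mount it over nginx's default configuration (my previous article describes the sidecar setup itself). A sketch, with assumed names:

```yaml
# Sketch: mount the rendered nginx configuration into the sidecar container.
# The ConfigMap name and key are placeholders.
volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/conf.d/default.conf
    subPath: default.conf
    readOnly: true
volumes:
  - name: nginx-config
    configMap:
      name: myapp-nginx-config  # placeholder, contains the file above as default.conf
```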
Now that we've seen the options, one question remains: which one should you use? I guess it depends:
- If you already need an nginx sidecar (to handle file uploads for instance, see my other article), you can let nginx serve the files, as long as they are small: you still want to keep the image as small as possible to speed up the deployment of your app. With this method, you avoid adding another component to your application. And since we are talking about static files here, mostly CSS and JS, you shouldn't have very big files, so this should work great. If your app has big static files, you should probably store them outside your repo anyway.
- If you have big files or can afford it, use a bucket. It's simple and reliable.
- If you are stuck and cannot use anything else, rely on good old WhiteNoise (a minimal setup is sketched below).
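If you end up with WhiteNoise, the minimal setup is short: add the middleware right after SecurityMiddleware and, optionally, enable the compressed manifest storage. A sketch of the relevant settings:

```python
# settings.py -- minimal WhiteNoise setup (see the WhiteNoise documentation).
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise should come right after SecurityMiddleware.
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]

# Optional: serve compressed files with hashed, cache-friendly names.
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```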