Django async
Posted on 2023-12-10 in Programming
Now that Django is fully async (views, middleware and ORM), I thought it was a good time to test how it behaves when run asynchronously. I’ll try to keep this article concise, with only relevant data and resources. The code is available in a sample project so you can check it and go further if you want. I won’t explain it, but I think it’s simple enough to understand if you already know Python and Django. I also provide a synthesis and conclusion at the end of the article. I checked the most common solutions to serve a project: the classic runserver included in Django (dev only), gunicorn, uvicorn, daphne and hypercorn.
Running a basic view
I started by checking the behavior of a basic view: it renders a template and loads a CSS file. I wanted to check how each solution behaves when the template and the static file are updated and whether everything is served correctly. This is mostly about the behavior of each solution during development: in production, you won’t update your templates on the fly and will rely on something else to serve your static assets. I still think it is interesting and useful if you want to stay as close as possible to your production environment in development.
- runserver: as expected, everything went smoothly, the static file is served and when the template is updated a simple page reload allowed me to see the newest version.
- daphne: when launched directly, template modifications are not picked up until I restart it, and the static file is not found. That’s not surprising, since handling this is a feature of runserver for ease of development.
- gunicorn: same as daphne.
- uvicorn: same as daphne.
- hypercorn: same as daphne.
What’s interesting is that you can change this default behavior with command line options for development:
- daphne: you can install it as an app in your Django project. It will then be picked up by runserver, so runserver will behave like an ASGI server while still serving static files and picking up template modifications immediately. You can make sure daphne is used if you see Starting ASGI/Daphne in the startup logs (see the settings sketch after this list).
- For all the other servers, you will need to rely on Whitenoise to serve static assets, use their reload option to restart when the source code changes, and their option to watch extra files to restart when HTML files are updated. Please note that during my tests, reloading in hypercorn seemed broken.
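For reference, here is a minimal sketch of the settings this implies, assuming a project named myproject and Whitenoise already installed (adapt the dotted paths to your own project):

```python
# settings.py (sketch, assuming a project called "myproject")
INSTALLED_APPS = [
    "daphne",  # must come before django.contrib.staticfiles so runserver switches to ASGI
    # ... your other apps ...
    "django.contrib.staticfiles",
]

# Tell Django (and daphne) which ASGI application to serve.
ASGI_APPLICATION = "myproject.asgi.application"

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # static file serving for the other servers
    # ... the rest of your middleware ...
]
```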
Regarding template reloading, you could also choose to disable template caching to always use the latest one. See this article for more.
Note
If you are in an async view, you must only use async operations and wrap sync operations (like a call to the ORM) in sync_to_async. You will get errors if you don’t. The same goes in reverse with async_to_sync. You can make sync and async views cohabit without any issue, whether you serve your project over WSGI or ASGI. You won’t really benefit from async on a WSGI-served app though.
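For illustration, a minimal sketch of what that wrapping looks like in an async view (the Article model is a placeholder, not something from the sample project):

```python
from asgiref.sync import sync_to_async
from django.http import JsonResponse

from .models import Article  # hypothetical model, just for the sketch


async def article_count(request):
    # Article.objects.count() is synchronous, so it must be wrapped
    # before being awaited from an async view.
    count = await sync_to_async(Article.objects.count)()
    return JsonResponse({"count": count})
```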
Note
You can hook into the autoreload_started signal to make runserver restart on any file you want. But it’s not documented and thus may break at any time. See here for more.
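A sketch of what such a hook can look like, relying on that undocumented API (the app name and watched directory are placeholders):

```python
# apps.py of one of your apps (sketch; autoreload internals are undocumented and may change)
from pathlib import Path

from django.apps import AppConfig
from django.utils.autoreload import autoreload_started


def watch_templates(sender, **kwargs):
    # "sender" is the reloader instance; ask it to also watch HTML files.
    sender.watch_dir(Path(__file__).resolve().parent / "templates", "**/*.html")


class CoreConfig(AppConfig):
    name = "core"  # placeholder app name

    def ready(self):
        autoreload_started.connect(watch_templates)
```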
Relevant commits:
- Do basic tests with API & template views
- Setup daphne
- Test with model
- Test app servers (includes a script to launch each server in a dev-like and prod-like fashion).
- Use whitenoise for static file serving (I basically followed the tutorial).
Sync vs async behavior
To spot any differences between sync and async behaviors, I created a very simple view that returns JSON. It then sleeps for 10s with time.sleep or asyncio.sleep.
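The two views are essentially this (a sketch; the view names are mine, not necessarily the sample project’s):

```python
import asyncio
import time

from django.http import JsonResponse


def slow_sync_view(request):
    time.sleep(10)  # blocks the worker (or thread) for the whole duration
    return JsonResponse({"kind": "sync"})


async def slow_async_view(request):
    await asyncio.sleep(10)  # yields back to the event loop while waiting
    return JsonResponse({"kind": "async"})
```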
- runserver: I passed the --nothreading option to avoid having multiple threads that could handle requests simultaneously. By default, if I launch two requests in parallel, the first completes in 10s and the second one in 20s, so they are handled one after the other. Whether none, one or both of the requests target the async view doesn’t change a thing. That was what I was expecting. I’ll call this the fully sync behavior.
- daphne: both requests always end after 10s, even if I target the sync view twice. I suppose it is using sync_to_async to run the non-async views, which, as far as I know, makes them run in a thread. Using it directly or through runserver doesn’t change the behavior. I’ll call this the fully async behavior.
- gunicorn: as expected, I get the fully sync behavior.
- uvicorn: as expected, I get the fully async behavior, whether I launch it directly or as a gunicorn worker as suggested in the documentation for production environment.
- hypercorn: as expected, I get the fully async behavior.
Relevant commits:
Getting more serious with async: using the StreamingHttpResponse
The StreamingHttpResponse is not new and allows us to stream a response, i.e. instead of sending it in one go, you send it chunk by chunk. The use case for this in the documentation is sending a big CSV file. As the documentation points out, under WSGI you will need a worker for the whole duration of the response. This worker won’t be able to serve any other clients. That’s where ASGI really comes in handy: your worker can still serve clients while it is waiting on IO. Let’s test this.
I created two new views to test this: one sync and one async. I used HTTPie like this to view the streaming: http 'http://localhost:8000/stream' --stream (sync stream) and http 'http://localhost:8000/astream' --stream (async stream).
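The two views look roughly like this (a sketch; the chunk contents and timings are illustrative, only the /stream and /astream URLs come from the test above):

```python
import asyncio
import time

from django.http import StreamingHttpResponse


def sync_chunks():
    # Sync iterator: the kind WSGI expects.
    for index in range(5):
        time.sleep(1)
        yield f"chunk {index}\n"


async def async_chunks():
    # Async iterator: the kind ASGI expects.
    for index in range(5):
        await asyncio.sleep(1)
        yield f"chunk {index}\n"


def stream(request):
    return StreamingHttpResponse(sync_chunks())


async def astream(request):
    return StreamingHttpResponse(async_chunks())
```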
When using gunicorn, I can serve both views. The sync one is correctly streamed. The async one responds but all in one go (i.e. without any streaming). And I got this warning: StreamingHttpResponse must consume asynchronous iterators in order to serve them synchronously. Use a synchronous iterator instead. It’s consistent with what the doc says:
When serving under WSGI, this should be a sync iterator. When serving under ASGI, then it should be an async iterator. […] Under WSGI the response will be iterated synchronously. Under ASGI the response will be iterated asynchronously. (This is why the iterator type must match the protocol you’re using.)
When using an ASGI server, I got the reversed behavior: the async view streamed its content while the sync one didn’t. And I got this warning: StreamingHttpResponse must consume synchronous iterators in order to serve them asynchronously. Use an asynchronous iterator instead (except for daphne which for some reason didn’t print anything).
Relevant commits:
Going further with async: Server Sent Events (SSE)
Instead of just streaming data, how about streaming it to a browser and allowing the browser to react? This could be handy to notify the browser of some changes (like new data being inserted). Like WebSockets, the client sees the update immediately. Unlike WebSockets, communication is unidirectional: from the server to the client. But it should be enough for many use cases and doesn’t require any extra lib.
This is done with StreamingHttpResponse and an async iterator.
How to get notified of events? You could use PostgreSQL directly thanks to its listen/notify feature or rely on the pub/sub feature of Redis.
There are several things you must pay attention to:
- Each message must be ended with two line breaks.
- The data must start with data: or you won’t be able to access the data in JS. So your payload must be like f"data: {data}".
- The content type of your response must be "text/event-stream".
- You can then create an EventSource in JS and parse each event data property.
For more details on this, please read this article.
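Putting those points together, an SSE view can look roughly like this (a sketch; the infinite counter stands in for a real event source such as PostgreSQL listen/notify or Redis pub/sub):

```python
import asyncio
import json

from django.http import StreamingHttpResponse


async def sse_events():
    counter = 0
    while True:
        await asyncio.sleep(1)
        counter += 1
        payload = json.dumps({"counter": counter})
        # Each event must start with "data:" and end with two line breaks.
        yield f"data: {payload}\n\n"


async def sse_view(request):
    return StreamingHttpResponse(sse_events(), content_type="text/event-stream")
```

On the browser side, creating an EventSource pointing at that URL and listening to its message events gives you access to each event’s data property.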
Relevant commits:
How about Websockets?
To use Websockets, you still need to use Django Channels. It works on all servers except gunicorn (you need ASGI for this, no workaround or compatibility with async_to_sync this time!).
It’s very easy to set up. To test it you don’t even need Redis (the only officially supported channel layer used to dispatch messages to all consumers) and can rely on a channel layer that works in memory. I simply followed the official tutorial.
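For reference, the in-memory channel layer is a one-setting sketch (the commented Redis variant from channels_redis is what you’d reach for in production):

```python
# settings.py (sketch)
CHANNEL_LAYERS = {
    # In-memory layer: fine for local testing, not shared between processes.
    "default": {"BACKEND": "channels.layers.InMemoryChannelLayer"},
}

# In production you would switch to the Redis-backed layer instead, e.g.:
# CHANNEL_LAYERS = {
#     "default": {
#         "BACKEND": "channels_redis.core.RedisChannelLayer",
#         "CONFIG": {"hosts": [("127.0.0.1", 6379)]},
#     },
# }
```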
Relevant commits:
HTTP2 support
HTTP2 is only supported by hypercorn out of the box. For daphne you need to install two extra packages: one for HTTP2 and one for TLS. That’s because daphne only does HTTP2 over TLS. Since your browser won’t open an HTTP2 connection unless it’s under TLS, it’s not a big deal. gunicorn doesn’t support HTTP2, and uvicorn decided not to add it because there are alternatives (hypercorn and daphne, as well as using a good old reverse proxy like nginx or Apache in front of the ASGI server).
It worked fine with both hypercorn and daphne.
Note
To test an HTTP2 connection, you can use curl with its --http2 option or Firefox (as far as I know Chrome doesn’t display the HTTP version of the connection).
Note
To test this in Firefox, I had to generate self-signed certificates. I used the method described here. See the script for how to launch hypercorn and daphne with HTTP2 and certificates.
Relevant commits:
How about the HTTP2 PUSH feature?
When HTTP2 was launched, I clearly remember that push was the feature everybody was excited about. I never bothered to dig in and enable it, but it sounded compelling: you could push assets to the browser before it even asked for them, to load pages faster! This test sounded like the perfect time to give it a try.
And… it turns out Chrome removed it a couple of years ago and nginx deprecated it in June 2023 in version 1.25.1 (the directives have no effect now, but don’t yet trigger an error). Ouch!
It turns out that it’s not easy to use, makes caching a lot harder and can lead to needless resources being pushed or the same resources being pushed more than once. For more, please read this article.
I still decided to test it for the sake of it. I managed to make it work easily with Apache. I’ll expand a bit since it’s not obvious:
I used the default httpd.conf file from the container as a basis.
Then I added this at the top of the file:
```apache
Listen 80
LoadModule http2_module modules/mod_http2.so
Protocols h2 http/1.1

LoadModule ssl_module modules/mod_ssl.so
SSLEngine on
Listen 443
SSLCertificateFile "/var/certs/server.crt"
SSLCertificateKeyFile "/var/certs/server.key"

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
ProxyPass /static !
ProxyPass "/" "http://172.17.0.1:80/"
ProxyPassReverse "/" "http://172.17.0.1:80/"

<VirtualHost *>
    DocumentRoot /var/www/html
    <Directory "/var/www/html">
        Require all granted
    </Directory>
    ServerName host
    ServerAlias *
</VirtualHost>
```
I then updated my view so it would add a Link header listing the resources to push:
```python
response["Link"] = "</static/home.css>; as=style; rel=preload, </static/list.css>; as=style; rel=preload"
```
You can see it in Firefox in the dev tools: the CSS files are loaded but no associated request is displayed in the network panel.
You can test it more easily with nghttp, a CLI tool dedicated to testing HTTP2 connections. Once you’ve installed it, you can launch it like this: nghttp -ans https://localhost:8080/async. It will list the resources loaded by the page, with an asterisk if they were pushed.
I never managed to make it work with hypercorn though with the same settings. Since it is being deprecated, I didn’t dig any further.
Relevant commits:
- Test with reverse proxy for benchmark & HTTP push <https://gitlab.com/Jenselme/dj-test-async/-/commit/42fa4713047291716637f3eb99709faa10b392d9>
Load testing
After testing all that, I decided to do some load testing to see how everything behaved. I did all my testing with h2load, which comes with nghttp, since it supports load testing over HTTP1 and HTTP2. I won’t post the detailed results and, as with any benchmark made by an amateur in unrealistic conditions, you should take them with a big grain of salt.
I tested over HTTP2 (always with TLS) and over HTTP1 (with and without TLS). When I used a reverse proxy, daphne over HTTP1 was always the application server. I tested with 1, 200 and 500 concurrent connections.
Here is a summary of my results:
- Adding HTTPS slows the app down. This is mostly visible when comparing HTTP1 with HTTP1 + TLS. No surprise there: TLS has a cost. Letting a reverse proxy handle TLS removes this slowness.
- In this test, the app was slower when served over HTTP2, whether directly or behind a reverse proxy. I guess HTTP2 has an extra cost (in addition to TLS). Relying on a reverse proxy, just like with HTTP1, greatly improved performance (note that the proxy and daphne communicated over HTTP1).
- I got some errors (between 25% and 44%) when using HTTP2 with 500 concurrent clients. I got more errors with Apache than with raw daphne. I think this case is very theoretical anyway since you wouldn’t hit it in a real production environment: your app would be much more complex and you would probably have more workers to process the load, thus preventing this issue.
- I found no performance difference between all servers. Nor between sync and async views.
- I got errors with hypercorn in HTTP2. I don’t know why and it seemed to work fine in a browser. I didn’t try to dig any further.
- I also got lots of errors with nginx over HTTP2. A quick search yielded this result, so it may be a protection against attacks. I tried some configurations to prevent this, without success.
Relevant commits:
Wrapping up!
Let’s summarize what I’ve learned so far:
- All solutions can serve both sync and async views quite efficiently. But you can only really benefit from async on ASGI and most notably from websockets and SSE.
- All solutions recommended in the Django documentation seem mature and performant.
- All solutions can be used in development.
- You probably still want a reverse proxy to handle HTTPS and HTTP2 connections. Letting the application server handle them will degrade performance quite a bit. But if you can’t or don’t want to, it’s not an obligation.
| | HTTP2 | SSE | Websockets | Usage in dev |
|---|---|---|---|---|
| Daphne | ✅ (with extra deps, TLS only) | ✅ | ✅ | ⚠️ (the easiest to integrate with Django but harder to watch for extra files) |
| Gunicorn | ❌ (not a very big deal if you have a reverse proxy anyway) | ❌ (requires ASGI) | ❌ (requires ASGI) | ✅ |
| Uvicorn (with Gunicorn in prod) | ❌ (not a very big deal if you have a reverse proxy anyway) | ✅ | ✅ | ✅ |
| Hypercorn | ✅ (couldn’t make server push work) | ✅ | ✅ | ✅ (had an issue with reloading, but should work) |
My recommendations
After all that, what would I recommend?
- If you don’t need async and thus ASGI, you can probably stick with your current stack. It’s solid and won’t go away.
- I’d still put a reverse proxy in front of my app, even for ASGI.
- For a pure ASGI project, I think I’d use daphne and install it as an app. I’d do it because it’s the easiest to integrate with Django, including with the runserver command I’m used to during development. It also makes static file serving and picking up template changes easier out of the box.
- uvicorn under gunicorn looks like a very good alternative. And you benefit from all the options of gunicorn in production.
- I’d reach for hypercorn only if I needed all of its features (like the ability to use uvloop or its experimental HTTP3 feature).
- If you are in the process of migrating to async, I think you should start by running the "old" views under WSGI and let a reverse proxy route traffic to an ASGI server for the async views. Once most of your app is migrated, I think you can switch to ASGI and let it serve your sync views until you change them (or until the end of time, because you’ll never have time to migrate something that works).
Having said all that, I’ll gladly hear what you think in the comments!