After the talk I got some interesting feedback. Some people disagreed on a few points for very valid reasons, so I thought I'd write up in more detail what I was talking about.
Settings across multiple environments, how do they work?
A historically famous pattern is the "local_settings trick": it simply consists of adding the following lines at the end of the project's settings:
```python
try:
    from local_settings import *
except ImportError:
    pass
```
This works combined with a local_settings.py that's kept out of source control and managed manually for development-specific or production-specific settings.
One issue with this technique is that it makes the base settings hard to extend: since local_settings.py is imported last and has no reference to the base values, adding something to INSTALLED_APPS or MIDDLEWARE_CLASSES means redefining the whole value.
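To make the limitation concrete, here's a sketch (the app list is hypothetical): a local_settings.py that only wants to add one app still has to repeat the entire base tuple, because the `from local_settings import *` at the bottom of settings.py gives it no way to reference the base value:

```python
# local_settings.py -- cannot see the base INSTALLED_APPS,
# so the whole tuple must be copied just to add one app
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    # ...every other base app, repeated verbatim...
    'debug_toolbar',  # the one actual addition
)
```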
To solve this problem, another pattern has emerged. Coined by Jacob Kaplan-Moss as The One True Way, it consists of reversing the import flow. Instead of importing local settings from the base settings, import the base settings from the environment-specific settings:
```python
# settings/local.py
from .base import *

INSTALLED_APPS += (
    'raven.contrib.django',
)
```
And with that you'd have a handful of settings files for each kind of environment: development, staging, production, etc.
The issue with that is that you end up with environment-specific code: when doing local development you're not running the code with production settings. This increases the chance of running into production-specific bugs when you update some code without updating the production settings accordingly.
How do we fix this?
First if you haven't yet, start by reading the 12factor website. It's essentially a bunch of good practices for developers. These practices might seem obvious or trivial but the Django community still has room for improvement in that respect.
Here's a quote from the 12factor chapter on configuration (emphasis mine):
Apps sometimes store config as constants in the code. This is a violation of twelve-factor, which requires strict separation of config from code. Config varies substantially across deploys, code does not.
In Django, the settings module is a weird mix of code and configuration: it contains application configuration (settings like TEMPLATE_DIRS or AUTHENTICATION_BACKENDS are very unlikely to change between local development and production) and environment-specific configuration (database settings, email credentials, secret key, etc.).
What I'd like to suggest instead is to clearly identify which of your settings vary between environments and expose them via configuration that's done outside of the settings file(s). One good way to do this is via environment variables, which is what pretty much all PaaS platforms use for exposing environment information.
This approach allows having a single settings file with a clear contract of what can be configured and what can't. In fact, looking at the configuration management code for this blog (I'm self-hosted), here is the full list of environment variables I need to configure my website:
```
ALLOWED_HOSTS
DATABASE_URL
DJANGO_SETTINGS_MODULE
MEDIA_ROOT
PORT
REDIS_URL
SECRET_KEY
SENTRY_DSN
STATIC_ROOT
```
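Most of these are plain strings, but DATABASE_URL packs the whole database configuration into one URL, a convention popularized by Heroku and usually parsed with a helper such as dj-database-url. As a minimal, postgres-only sketch of the idea using just the standard library (the function name is my own):

```python
import os
from urllib.parse import urlparse

def database_from_url(url):
    """Turn a postgres://user:pass@host:port/name URL into a
    Django-style DATABASES entry (minimal sketch, postgres only)."""
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username or '',
        'PASSWORD': parts.password or '',
        'HOST': parts.hostname or '',
        'PORT': parts.port or '',
    }

# Fall back to a local development database when the
# environment variable isn't set.
DATABASES = {
    'default': database_from_url(
        os.environ.get('DATABASE_URL',
                       'postgres://postgres@localhost:5432/project')),
}
```

A real project would rather depend on dj-database-url, which handles more engines and edge cases; the point is that one environment variable can carry a whole structured setting.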
This makes deployments extremely simple: set a couple of environment variables, run `gunicorn project.wsgi -b 0.0.0.0:$PORT` and done! You have a site ready to rock, with no messy settings file to deploy via Fabric or configuration management.
This also allows you to move stuff around without making any code changes. The database is moving somewhere else? Fine: update DATABASE_URL, restart the app and profit. No code change, less risk of breaking things.
When implementing this, be very careful to have secure and sane defaults. You don't want to deploy insecure stuff. For instance, instead of setting DEBUG to False in production, do the opposite: set it to True in development by using a DEBUG environment variable:
```python
DEBUG = bool(os.environ.get('DEBUG', False))
```
With that, DEBUG is False in production because you simply don't set the corresponding environment variable.
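One caveat with calling `bool()` on an environment variable: any non-empty string is truthy, so `DEBUG=false` in the environment would still enable debug mode. A slightly stricter sketch (the helper name and the set of accepted values are my choice):

```python
import os

def env_bool(name, default=False):
    """Read a boolean-ish environment variable; only a few explicit
    strings count as true, everything else as false."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ('1', 'true', 'yes', 'on')

DEBUG = env_bool('DEBUG')
```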
Some settings are critical: you don't want to (or in some cases you even can't) run with them being empty. In those cases, use direct dictionary access instead of os.environ.get to prevent the app from starting if the environment variable isn't present:
```python
ALLOWED_HOSTS = os.environ['ALLOWED_HOSTS'].split()
SECRET_KEY = os.environ['SECRET_KEY']
```
In 2 Scoops of Django, Danny and Audrey advocate a more elaborate pattern for having more useful error messages than KeyError. See the book for more information.
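The general shape of that kind of helper looks roughly like this (a sketch, not the book's exact code; I use a plain exception here to stay self-contained, where a Django project would raise ImproperlyConfigured):

```python
import os

def get_env_variable(name):
    """Return the environment variable, or fail with a message that
    names the missing variable instead of a bare KeyError."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError('Set the %s environment variable' % name)

# SECRET_KEY = get_env_variable('SECRET_KEY')
```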
One more pattern: enabling Sentry only when it's configured, since it's usually something you don't want to care about in development:
```python
INSTALLED_APPS = (
    # …
)

if 'SENTRY_DSN' in os.environ:
    INSTALLED_APPS += (
        'raven.contrib.django',
    )
```
And finally, here's how to set MEDIA_ROOT to a sane location for development while still leaving it configurable:
```python
# BASE_DIR has been added to the default project
# template in the upcoming Django 1.6
MEDIA_ROOT = os.environ.get('MEDIA_ROOT',
                            os.path.join(BASE_DIR, 'media'))
```
Environment variables in local development
This solution leaves us with one slight problem: environment variables are usually a pain to work with, particularly on Windows. The solution we implemented is completely transparent and OS-agnostic. It is based on the idea behind daemontools' envdir program: envdir takes a directory, reads all the files in it, and exposes one environment variable per file, named after the file and containing its contents. This is how you can use it:
```shell
mkdir envdir
echo "true" > envdir/DEBUG
echo "postgres://postgres@localhost:5432/project" > envdir/DATABASE_URL
# … etc etc
envdir envdir/ python manage.py shell  # <-- here we have access to the env
```
Nice, but prefixing each command with envdir envdir/ is tedious to say the least. No worries! We can actually implement envdir directly inside manage.py in a couple of lines of code. This is what our manage.py files look like:
```python
#!/usr/bin/env python
import glob
import os
import sys

if __name__ == "__main__":
    if 'test' in sys.argv:
        env_dir = os.path.join('tests', 'envdir')
    else:
        env_dir = 'envdir'
    env_vars = glob.glob(os.path.join(env_dir, '*'))
    for env_var in env_vars:
        with open(env_var, 'r') as env_var_file:
            os.environ.setdefault(env_var.split(os.sep)[-1],
                                  env_var_file.read().strip())
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```
This also allows having a separate envdir directory for test-specific settings. With this technique, ./manage.py uses environment variables completely transparently and there's nothing to export or re-source when a variable is changed. We use envdir in production and the custom manage.py in development.
If you can't have environment variables
In some deployment environments, environment variables may not be desirable for technical or security reasons. But all of this is still valid! Instead of environment variables, just use a config file, or some other configuration system, to serve the very same purpose. The point stands: the configuration contract of your app should be clearly defined, and configuration values should not be set in code.
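As a sketch of that idea (the file name, keys and helper are hypothetical), a small JSON file can feed the exact same contract, with the environment still taking part as a fallback:

```python
import json
import os

def load_config(path='config.json'):
    """Read configuration from a JSON file, layered over os.environ,
    so the settings module keeps reading from a single mapping."""
    config = dict(os.environ)
    if os.path.exists(path):
        with open(path) as config_file:
            config.update(json.load(config_file))
    return config

config = load_config()
# SECRET_KEY = config['SECRET_KEY']  # same contract as before
```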