Django + AWS Lightsail + GitHub Actions - How I Built Continuous Deployment
Building a continuous deployment pipeline with AWS, Django and GitHub Actions
On my side projects I need cheap, scalable hosting. I also want to be able to ship production changes quickly and often, with the flexibility to roll back if something goes wrong. To achieve this I built the following workflow:
Components
Bring in AWS Lightsail
AWS Lightsail offers AWS scale without all the bells and whistles. Its UI is simpler, but don't let that fool you into thinking it's less capable. You can take your pick from different server sizes and run up to 20 replicas. Other pluses: it keeps previous container images (break something? just revert to the previous image), and load balancing and certificates come straight out of the box.
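To make the replica point concrete, here is a minimal sketch (not from the repo) of changing a container service's node count with boto3. The service name is a placeholder, and the clamp reflects Lightsail's 1-20 node range mentioned above.

```python
# Sketch: scale a Lightsail container service with boto3 (hypothetical helper).
MAX_SCALE = 20  # Lightsail container services support up to 20 nodes


def clamp_scale(requested: int) -> int:
    """Keep the requested node count inside Lightsail's supported 1-20 range."""
    return max(1, min(MAX_SCALE, requested))


def set_scale(service_name: str, requested: int) -> None:
    """Ask Lightsail to run `requested` nodes of the service."""
    import boto3  # deferred import so the pure logic above is usable without AWS deps

    client = boto3.client("lightsail")
    client.update_container_service(
        serviceName=service_name,
        scale=clamp_scale(requested),
    )


# set_scale("mapmaker", 3)  # would call the AWS API; needs credentials configured
```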
Django + nginx
Both NGINX and Django have been around for roughly 20 years. They're battle-tested and fast enough for my needs. Need I say more? The other upside of NGINX is that it can answer the AWS Lightsail health check without the request ever hitting Django, and it can serve Django's static files and handle caching.
Automated testing and code quality
This workflow focuses on deployment, but it is very simple to add pytest, linting and other CI tools to it. I can highly recommend Snyk and Talisman for security testing and for catching spilled secret keys. I'm very much in prototyping mode right now, and those tools currently bother me more than they help me. Once I move to production I'll add them back in; leave a comment if you want me to explain how.
Github Actions
GitHub is perfect for more than just version control: I also use it for autoscaling and deployment with GitHub Actions. My workflow works as follows:
- Work on code
- Commit that code
- I get a Telegram message that the deployment started
- A GitHub Action is triggered that builds a new Docker container of my Django code for my test environment
- It also builds another container for NGINX
- If that all succeeds it creates a pull request
- If I accept the pull request the same workflow runs again, but targeting production
- If anything goes wrong and the server goes down: Lightsail watches the health checks, and if none come in it automatically reverts to a previous version.
- If no new version comes up I get a text message
- If a new version is detected I get a text message
- I intend to add the automated pytest runs in there too, which should be fairly simple from here on.
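The "did the new version come up?" check presumably lives in a script like checkdeploymentsuccess.py from the repo's folder structure. Here is a hypothetical sketch of that idea: the function names and message flow are mine, only the boto3 call is real. Sorting by createdAt avoids assuming anything about the order Lightsail returns deployments in.

```python
# Hypothetical sketch of a deployment-success check against Lightsail.
def latest_deployment_state(deployments: list) -> str:
    """Pick the newest deployment by createdAt and return its state."""
    if not deployments:
        return "UNKNOWN"
    latest = max(deployments, key=lambda d: d["createdAt"])
    return latest.get("state", "UNKNOWN")


def check_deployment(service_name: str) -> str:
    """Fetch deployments from Lightsail and classify the newest one."""
    import boto3  # deferred import: the pure logic above is testable without AWS

    client = boto3.client("lightsail")
    resp = client.get_container_service_deployments(serviceName=service_name)
    return latest_deployment_state(resp["deployments"])


# "ACTIVE" means the new version is serving traffic; "FAILED" means
# Lightsail kept (or reverted to) the previous deployment, at which point
# the workflow could fire the Telegram/text alert.
```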
Separate from that, I have a workflow that runs an autoscaler: it checks the Lightsail load and adjusts the power in Lightsail accordingly.
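A minimal sketch of what such an autoscaler could look like. The thresholds and the step-one-tier-at-a-time policy are illustrative assumptions, not the contents of the repo's autoscale.py; the power tier names and the boto3 metrics call are real Lightsail ones.

```python
# Illustrative autoscaler logic for a Lightsail container service.
POWERS = ["nano", "micro", "small", "medium", "large", "xlarge"]  # Lightsail power tiers


def next_power(current: str, avg_cpu: float, high: float = 70.0, low: float = 20.0) -> str:
    """Step one tier up when CPU is high, one tier down when it is low."""
    i = POWERS.index(current)
    if avg_cpu > high and i < len(POWERS) - 1:
        return POWERS[i + 1]
    if avg_cpu < low and i > 0:
        return POWERS[i - 1]
    return current


def average_cpu(service_name: str, minutes: int = 15) -> float:
    """Average CPUUtilization over the last `minutes`, via the Lightsail metrics API."""
    import datetime
    import boto3  # deferred so next_power() stays testable offline

    client = boto3.client("lightsail")
    end = datetime.datetime.now(datetime.timezone.utc)
    resp = client.get_container_service_metric_data(
        serviceName=service_name,
        metricName="CPUUtilization",
        startTime=end - datetime.timedelta(minutes=minutes),
        endTime=end,
        period=300,
        statistics=["Average"],
    )
    points = resp["metricData"]
    return sum(p["average"] for p in points) / len(points) if points else 0.0
```

The actual change would then be a call like `update_container_service(serviceName=..., power=next_power(...))`, run on a schedule (e.g. a cron-triggered workflow).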
Workflow steps:
- The workflow starts by getting all my code from GitHub; this is a standard GitHub Action
- I trigger a Telegram message with the appleboy/telegram-action workflow
- Then I update all services to the latest versions
- Then I create a .env file filled with GitHub Secrets. Basically, you add secrets to the repository and then reference them from the workflow. This way you don't have to expose your database settings or other keys, but the container can still be created.
- Install the AWS client --> we need this to create the deployments
- Then we log in with the AWS secrets --> this is done with an access key and secret key, both of which can be stored in GitHub Secrets
- Then I install the Django dependencies
- Then static files are collected --> this is also the moment to run migrations if you need to
- Then we bundle all of this up into a Docker container for Django
- Then we upload the image to the Lightsail container registry
- Then we do the same for NGINX
- Then we upload both images to AWS and kick off the deployment.
In the workflow YAML file this looks like this:
- Take note that some of the commands reference a path in my repository. For instance, in my root folder I keep:
- src for my Django files
- Infrastructure with all my DevOps code; this also holds my requirements, AWS files and NGINX config. Folder structure of Infrastructure:
├── src (All of my django app)
├── AWS
│ ├── autoscale.py
│ ├── checkdeploymentsuccess.py
│ ├── deploymentconfig.json
│ ├── publicendpoint.json
│ └── scaleupordown.py
├── Docker
│ ├── DockerfileDEV
│ └── DockerfilePROD
├── Mixpanel
│ └── addannotation.py
├── nginx
│ ├── Dockerfile
│ └── default.conf
└── requirements.txt
The GitHub Action
name: 🧪 Triage deployment
on:
  push:
    branches:
      - '**'      # matches every branch
      - '!master' # excludes master
permissions:
  contents: read
env:
  AWS_REGION: eu-central-1
  AWS_LIGHTSAIL_SERVICE_NAME: mapmaker
concurrency:
  group: '${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'
  cancel-in-progress: true
jobs:
  buildcontainers:
    name: 🌎 Deploying to Triage env
    runs-on: ubuntu-latest
    steps:
      - name: 🗂 Getting code from Github
        uses: actions/checkout@v2
      - name: ⚙️ Updating to the latest versions
        run: |
          sudo apt-get update
          sudo apt-get install -y jq unzip
      - name: 🤐 Make envfile
        uses: SpicyPizza/create-envfile@v1.3
        with:
          envkey_EMAIL_HOST_USER: ${{ secrets.EMAIL_HOST_USER_GMAIL }}
          envkey_EMAIL_HOST_PASSWORD: ${{ secrets.EMAIL_HOST_PASSWORD_GMAIL }}
          envkey_DJANGO_SECRET_KEY: ${{ secrets.DJANGO_SECRET_KEY }}
          envkey_DB_USER: ${{ secrets.DATABASE_USER }}
          envkey_DB_PASSWORD: ${{ secrets.DATABASE_PASSWORD }}
          envkey_DB_HOST: ${{ secrets.DATABASE_HOST }}
          envkey_DB_NAME: "mapmaker_dev"
          envkey_HCTI_API_KEY: ${{ secrets.HCTI_API_KEY }}
          envkey_HCTI_API_USER_ID: ${{ secrets.HCTI_API_USER_ID }}
          envkey_MYSQL_ATTR_SSL_CA: ${{ secrets.MYSQL_ATTR_SSL_CA }}
          envkey_SECURE_SETTINGS: True
          envkey_DEBUG: True
          envkey_MIXPANEL_TOKEN: ${{ secrets.MIXPANEL_TOKEN }}
          directory: src/core
          file_name: .env
          fail_on_empty: false
      - name: 🏢 Install Amazon Client
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install || true
          aws --version
          curl "https://s3.us-west-2.amazonaws.com/lightsailctl/latest/linux-amd64/lightsailctl" -o "lightsailctl"
          sudo mv "lightsailctl" "/usr/local/bin/lightsailctl"
          sudo chmod +x /usr/local/bin/lightsailctl
      - name: 🤐 Log in to AWS Lightsail with Secrets
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: ${{ env.AWS_REGION }}
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Installing dependencies
        run: |
          sudo pip3 install --upgrade pip
          sudo pip3 install -r Infrastructure/requirements.txt
      - name: 📦 Collecting all static files
        run: |
          python3 src/manage.py collectstatic --noinput
          python3 src/manage.py makemigrations --noinput
          python3 src/manage.py migrate --noinput
      - name: 🐳 Create a Docker Container for DJANGO
        run: docker build -t mapmakerdev:latest -f ./Infrastructure/Docker/DockerfileDEV .
      - name: 📬 Upload Backend image to AWS container register
        run: |
          service_name=${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}
          aws lightsail push-container-image \
            --region ${{ env.AWS_REGION }} \
            --service-name ${service_name} \
            --label mapmakerdev \
            --image mapmakerdev:latest
      - name: 🐳 Create a Docker Container for NGINX
        run: docker build -t nginx:latest -f ./Infrastructure/nginx/Dockerfile .
      - name: 📬 Upload NGINX image to AWS container register
        run: |
          service_name=${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}
          aws lightsail push-container-image \
            --region ${{ env.AWS_REGION }} \
            --service-name ${service_name} \
            --label nginx \
            --image nginx:latest
      - name: =========== All done. Cleaning up ♻️ ===========
        run: ls
      - name: Build Alerts
        if: ${{ failure() }}
        uses: appleboy/telegram-action@master
        with:
          to: ${{ secrets.TELEGRAM_CHAT }}
          token: ${{ secrets.TELEGRAM_TOKEN }}
          message: |
            🚨 Deployment failed 🚨
            Build ${{ github.run_id }} failed
            Something went wrong while building or uploading the containers to AWS. See the details here:
            ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
      - name: 🚀 Launching the Containers
        run: |
          aws lightsail create-container-service-deployment --service-name ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }} \
            --containers file://Infrastructure/AWS/deploymentconfig.json \
            --public-endpoint file://Infrastructure/AWS/publicendpoint.json
  pull-request:
    needs: [buildcontainers]
    name: 🔃 Creating Pull request to merge with Master
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: pull-request
        uses: repo-sync/pull-request@v2
        with:
          destination_branch: "master"
          assignees: "two-trick-pony-NL"
          pr_title: "🤖 Merge and Deploy ${{ github.ref }}"
          pr_body: "Verify the code is working on triage.mapmaker.nl. If you merge this pull request the code will be deployed to production. Check out the changes here: https://github.com/${{ github.repository }}/commit/${{ github.sha }}"
          pr_label: "automatic-pullrequest"
          github_token: ${{ secrets.GH_TOKEN }}
Lightsail-specific commands:
In the workflow file above you'll see two commands:
- name: 📬 Upload NGINX image to AWS container register
  run: |
    service_name=${{ env.AWS_LIGHTSAIL_SERVICE_NAME }}
    aws lightsail push-container-image \
      --region ${{ env.AWS_REGION }} \
      --service-name ${service_name} \
      --label nginx \
      --image nginx:latest
and
- name: 🚀 Launching the Containers
  run: |
    aws lightsail create-container-service-deployment --service-name ${{ env.AWS_LIGHTSAIL_SERVICE_NAME }} \
      --containers file://Infrastructure/AWS/deploymentconfig.json \
      --public-endpoint file://Infrastructure/AWS/publicendpoint.json
These are basically calls to the AWS API using the AWS CLI tool. AWS_LIGHTSAIL_SERVICE_NAME is set in the env block at the top of the workflow, and your AWS access keys are stored in GitHub Secrets so the workflow can actually log in.
Then we reference two files that specify how the containers should be deployed. See my folder structure above; basically it's two JSON files. The first one describes which images should be used and which ports to open. I have three containers: one prod, one dev, and one NGINX reverse proxy. But feel free to add Redis or a database.
{
  "prod-mapmaker-django": {
    "image": ":mapmaker.mapmaker.latest",
    "ports": {"8000": "HTTP"}
  },
  "dev-mapmaker-django": {
    "image": ":mapmaker.mapmakerdev.latest",
    "ports": {"8080": "HTTP"}
  },
  "mapmaker-nginx": {
    "image": ":mapmaker.nginx.latest",
    "ports": {"80": "HTTP"}
  }
}
And this second one tells Lightsail what the public endpoint should be, basically: where to send all traffic. I made that the NGINX reverse proxy so I can split traffic. You also specify which endpoint to use for the health check. Mine only talks to NGINX, so my Django logs don't get entries every few seconds.
{
  "containerName": "mapmaker-nginx",
  "containerPort": 80,
  "healthCheck": {
    "healthyThreshold": 2,
    "unhealthyThreshold": 10,
    "timeoutSeconds": 60,
    "intervalSeconds": 300,
    "path": "/healthcheck",
    "successCodes": "200-299"
  }
}
Drawbacks:
Consider that containers are ephemeral: basically, they are throwaway. So don't store data or images in them, or they will be gone the next time you deploy. Use S3 buckets and an external database for persistent information. It is also recommended to handle authentication in Django through the database, as this allows you to add nodes in Lightsail: each node can serve any user and check in the database whether they are logged in.
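In Django settings terms, that advice could look like the sketch below. SESSION_ENGINE shown here is Django's default database-backed engine; the storage lines assume the django-storages package, and the bucket name is a placeholder.

```python
# settings.py fragment (sketch): multi-node-friendly sessions and storage.

# Sessions live in the shared database, so any Lightsail node can
# authenticate any user. This is Django's default session engine.
SESSION_ENGINE = "django.contrib.sessions.backends.db"

# Keep user uploads out of the ephemeral containers; with django-storages
# installed, media files go to S3 instead of the container filesystem.
# (Django 4.2+ prefers the STORAGES dict over DEFAULT_FILE_STORAGE.)
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-media-bucket"  # placeholder
```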