Running Remote DB Migrations via GitHub Actions
Database migrations are an essential component in the life cycle of any relational database. We needed a way to trigger database migrations in our development and production environments within our existing CI/CD pipelines running in GitHub Actions.
We wanted to avoid provisioning separate compute for handling the migrations, and minimize the risk of errors due to duplicated code, environment variables, or containers.
At a high level, the process we settled on for triggering our migrations is as follows. Within a CD pipeline:
- Deploy the latest version of our API, which includes an ORM and migrations to run.
- Have an endpoint in the deployed API that can execute migrations. This eliminates the need to create or manage separate compute to run the migration.
- Send an HTTP request to the migration endpoint that actually runs the migration.
This logic can be employed with any number of API/ORM/CD Pipeline stacks. For the remainder of this post, we will be working through the logic with the following stack:
- API: Flask
- ORM: Flask-SQLAlchemy + Flask-Migrate
- CD Pipeline: GitHub Actions
- Cloud Infrastructure: Google Artifact Registry and Google Cloud Run
First, we created the API endpoint that runs the migrations. One thing to note is how authentication is handled: we add a function decorator that checks that the request carries an “admin” token in its header. This “admin” token is stored as an environment variable on the API’s compute instance and as a GitHub Actions secret.
In our GitHub action, we deploy the container containing our Flask API and the latest migrations to Google Cloud Run, and then trigger the migration via the API using our admin token.
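The relevant workflow steps can be sketched as follows. The service name, region, image path, Cloud Run URL, and secret names (`GCP_SA_KEY`, `MIGRATION_TOKEN`) are all placeholder assumptions to be replaced with your own values:

```yaml
# Sketch of the deploy-then-migrate CD steps; all names are placeholders.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}

      # Deploy the image (built and pushed to Artifact Registry earlier)
      # to Cloud Run.
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: my-flask-api
          region: us-central1
          image: us-central1-docker.pkg.dev/my-project/my-repo/api:${{ github.sha }}

      # Trigger the migration endpoint using the admin token secret.
      # --fail makes the step (and the pipeline) fail on a non-2xx response.
      - name: Run DB migrations
        run: |
          curl --fail -X POST \
            -H "Authorization: Bearer ${{ secrets.MIGRATION_TOKEN }}" \
            https://my-flask-api-abc123-uc.a.run.app/admin/migrate
```

Ordering matters here: the deploy step must complete before the migration request, so that the endpoint serving the request contains the latest migration scripts.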
And that is it: upon every deployment we now have an up-to-date database, without having to manage any additional infrastructure or tooling. All it took was a single extra endpoint and a single additional step in our CD pipeline!
We hope this helps you simplify your CI/CD process for running database migrations in remote cloud environments. While the example we showed here was specific to a Flask API/ORM and GitHub Actions, any ORM and CI/CD framework can accommodate the described process.
dragondrop.cloud’s mission is to automate developer best practices while working with Infrastructure as Code. Our flagship OSS product, cloud-concierge, allows developers to codify their cloud, detect drift, estimate cloud costs and security risks, and more — while delivering the results via a Pull Request. For enterprises running cloud-concierge at scale, we provide a management platform. To learn more, schedule a demo or get started today!