DB migrations should be backwards compatible in most cases. So don't delete columns; just mark them as nullable and remove them manually a few versions down the line. Make sure new columns have default values so that your old deployments can continue to work. That way you can always safely deploy and roll back without destroying your DB, and rolling deployments work now too. For more complex migrations, accept the downtime, but those cases should be rare. As for deploying migrations in k8s, you want to start a migration Job first, separate from the Deployment. Once that Job finishes successfully, update the Deployment. You definitely don't want race conditions in your DB migrations. That is scary af.
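A sketch of what a backwards-compatible (additive) migration looks like — the table and column names here are made up for illustration:

```sql
-- expand: old deployments ignore the new column, new ones can use it;
-- the DEFAULT means existing code inserting rows keeps working
ALTER TABLE users ADD COLUMN nickname text NOT NULL DEFAULT '';

-- contract, a few releases later, once no deployed version reads it:
-- ALTER TABLE users DROP COLUMN legacy_name;
```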
In terms of your migration race condition problem, I personally think it's because you're running the migrations on service startup. At my work we run our migrations as part of CI/CD: after all our tests have run, the Docker images get built, and the very last step is that migrations run and then the images get deployed to the cluster. That way it only happens once and doesn't even require a microservice.
But what about the version out of sync problem that he mentioned.... Like pod deployment fails but the migration is successful? Just curious to know what happens in that case
@@moveonvillain1080 We make our migrations backwards compatible, so that if our migration runs but the new pods don't spin up, the old ones still work as normal. For bigger changes in our database (like renaming a column) we do it in multiple steps and even support two different columns at the same time until the old one is decommissioned.
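The multi-step rename described above could look something like this, spread across several releases (hypothetical table and column names):

```sql
-- release 1 (expand): add the new column; app writes both old and new
ALTER TABLE orders ADD COLUMN customer_id bigint;

-- release 2: backfill rows that were written before the dual-write started
UPDATE orders SET customer_id = client_id WHERE customer_id IS NULL;

-- release 3 (contract): once nothing reads the old column, drop it
ALTER TABLE orders DROP COLUMN client_id;
```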
20:22 I don't know if you already found a solution, but in the AWS ECS world you can just define a task that executes a one-off command and then dies. AWS ECS tasks can be compared to Kubernetes Jobs in the context of special migration tasks. With Kubernetes, you can create Jobs that run to completion and are triggered manually rather than automatically alongside other services. This lets you run specific tasks like database migrations or batch processing without affecting the rest of your running services. It's a flexible way to handle one-time or periodic tasks separately from your main application workloads.
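A minimal Kubernetes Job for a migration task like the one described above — the image name and migrate command are assumptions, not anything from the episode:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 0            # fail fast rather than retry a half-applied migration
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:latest                 # hypothetical image
          command: ["/app/migrate", "up"]     # hypothetical migration command
```

You'd wait for this Job to complete successfully before rolling out the new Deployment.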
Why not gitignore the generated sqlc code and have air run the sqlc command? In any case I don't think sqlc is perfect but it definitely speeds up development. Thanks for all the go content.
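That setup might look something like this in air's `.air.toml` — a sketch assuming the default `tmp/main` output path:

```toml
[build]
  # regenerate sqlc output before every rebuild, then compile as usual
  cmd = "sqlc generate && go build -o ./tmp/main ."
  bin = "tmp/main"
```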
I tend to have a docker-compose file with 2 services: one that runs the migration script, and another that starts the app once the migrations have completed.
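Compose supports this ordering natively with the long form of `depends_on`. A sketch with made-up image and command names:

```yaml
services:
  migrate:
    image: myapp:latest                # hypothetical image
    command: ["/app/migrate", "up"]    # hypothetical migration command
  app:
    image: myapp:latest
    depends_on:
      migrate:
        condition: service_completed_successfully  # start only after migrate exits 0
```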
27:12 It's like a fake sum type, and it's a great pattern! This is how http errors are handled in Elm as well. The sucky thing about it in Go is you need to opt in to wrapping your values in these interfaces, much like the "Effect" package for javascript, a great pattern, but at the end of the day it's a wrapper.
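A sketch of that "fake sum type" pattern in Go — a sealed interface with an unexported method so only the listed variants can implement it (all the names here are made up, not from the episode):

```go
package main

import "fmt"

// HTTPResult is a "fake sum type": the unexported method seals the
// interface, so only the variants defined below can satisfy it.
type HTTPResult interface{ isHTTPResult() }

type Success struct{ Body string }
type Timeout struct{}
type BadStatus struct{ Code int }

func (Success) isHTTPResult()   {}
func (Timeout) isHTTPResult()   {}
func (BadStatus) isHTTPResult() {}

func describe(r HTTPResult) string {
	switch v := r.(type) {
	case Success:
		return "ok: " + v.Body
	case Timeout:
		return "timed out"
	case BadStatus:
		return fmt.Sprintf("bad status %d", v.Code)
	default:
		// unlike Elm, the compiler can't prove this switch is exhaustive
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(BadStatus{Code: 502})) // prints "bad status 502"
}
```

The wrapping cost mentioned above shows up here too: callers have to construct `Success{...}` etc. instead of returning plain values.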
For the DB race condition issue you can take a couple of approaches: 1. Bring down all services and start them one after the other (downtime). 2. Do a rolling deployment. There are multiple issues with this: two pods with different schemas can run simultaneously. You can maintain a schema version table to handle this at the app layer, so the new pods process requests only according to the correct schema version. 3. Offload schema handling to PostgreSQL's ALTER command, which keeps the old and new schemas active at the same time.
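The schema version table from approach 2 could be as simple as this (hypothetical names, just a sketch):

```sql
CREATE TABLE schema_version (
    version    integer PRIMARY KEY,
    applied_at timestamptz NOT NULL DEFAULT now()
);

-- each pod checks this at startup (or per request) and only serves
-- requests it knows how to handle for the current version
SELECT max(version) FROM schema_version;
```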
Migrations: your changes should not break the existing code; you should be able to have two code versions running at once. That goes beyond databases and applies to all external services.
20:51 For me it's the opposite. Manual migrations make me soo nervous. I'm slow, and what if I f up? I'll be slow to unwind it. Automated migrations are fast and precise… exactly as programmed.
I tried this guy's course and got a refund. Could have been me, could have been the teaching style — cool guy, but I'd recommend a trial first to make sure the style works for you. It's fine for YouTube videos, but I never expected that for a formal course. That's on me, though.
this is a weird podcast episode. you jump right into niche questions about the Go db migration tooling you use and problems you've run into in prod. you're trying to leverage his expertise to solve your own problems instead of learning about his experience
I don't see it like that. Go lacks opinions so it's cool to see how people handle things. I often wonder how others handle migrations, passing user context around, dependency injection, etc.