Hello Klaas,

Excellent question! :)

And I have 4 answers to it:

1.
It might be possible to do something like this after the deployment has finished, to rerun the data migration and keep all data in sync:

./manage.py migrate --fake your_app 0003_before_data_migration
./manage.py migrate your_app 0004_your_data_migration
./manage.py migrate --fake your_app

For this to work, the data migration must be idempotent (meaning it is safe to run multiple times), and it should be well tested beforehand.
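Just to illustrate what "idempotent" means here (model and field names are made up, this is only a rough sketch, not your actual migration 0004_your_data_migration):

from django.db import migrations
from django.db.models import F


def copy_old_to_new(apps, schema_editor):
    YourModel = apps.get_model("your_app", "YourModel")
    # Only fill rows where the new field is still empty, so rerunning the
    # migration after the blue/green switch cannot overwrite newer data.
    YourModel.objects.filter(new_name__isnull=True).update(new_name=F("old_name"))


class Migration(migrations.Migration):

    dependencies = [
        ("your_app", "0003_before_data_migration"),
    ]

    operations = [
        migrations.RunPython(copy_old_to_new, migrations.RunPython.noop),
    ]

Because the migration only touches rows it has not filled yet, running it a second time with the commands above is harmless.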

2.
We once developed a method to keep multiple versions of one model in sync over a longer period. We needed it because we wanted to switch out the entire model structure (not just do some renames here and there). We had a feature toggle in place so both models were active at the same time but kept in sync; a rough sketch of the idea follows below. The code for this is available here, but it is currently not usable as-is, since it was just copied and pasted out of a bigger codebase. Maybe it can still give you some ideas…
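To give you a feeling for the approach (these model and field names are placeholders, not the real code from the linked repository), one way to mirror writes from the old model into the new one is a post_save signal:

from django.db.models.signals import post_save
from django.dispatch import receiver

from your_app.models import LegacyOrder, Order


@receiver(post_save, sender=LegacyOrder)
def sync_legacy_order(sender, instance, **kwargs):
    # Mirror every write on the old model into the new model, so both
    # versions stay consistent while the feature toggle is being rolled out.
    Order.objects.update_or_create(
        legacy_id=instance.pk,
        defaults={"status": instance.status, "total": instance.total},
    )

Once the toggle is fully switched over and the old model no longer receives writes, the signal (and eventually the old model) can be removed.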

3.
It might make sense to "collect" field-rename requests across your team (e.g. with comments in the code) and then rename many fields at once in an exceptional, maintenance-window deployment.
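One simple convention for collecting them (just a suggestion, the marker and names are made up, pick whatever your team can grep for easily):

from django.db import models


class Invoice(models.Model):
    # RENAME-ME: customer_name -> billing_name (next maintenance deployment)
    customer_name = models.CharField(max_length=255)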

4.
Consider whether this is more of a theoretical than a practical problem for your use case.

Depending on

  • the number of renames you want to do,
  • the time that the blue/green process takes,
  • the number of users you impact and
  • the importance of the field for the application

it might be acceptable to just rename the fields, accept the risk of some data inconsistencies, and do some manual checks afterwards.

And again:
Try to avoid renaming if possible, since all of the methods above are somewhat complex to manage…

I hope that helped,

Felix

CTO at 3YOURMIND
