We can all agree that “it works on my machine” is one of the most common phrases in modern software development. It is quite annoying to open a pull request on GitHub (assuming you have some checks in place) and then see that your deployment failed. It is even more annoying when your code fails due to something that, in your head, falls under the category of “the DevOps people are the ones who need to deal with it”.
The truth is that growing as a developer does not mean you should only care about things that are directly related to you. It is common to hear “I am a frontend developer, why would I care about Docker?”. In my opinion, understanding the big picture, no matter how loosely related it is to what you do, will never cause any harm.
I recently got involved in a new project at Equals that taught me some great stuff about dealing with such issues, and I would like to share those learnings with you. Being more involved in the ops side of things gave me the opportunity to dive a bit deeper into how to use Docker “correctly”. The whole idea of Docker is to develop software in a flexible way that lets our applications move through the different stages of our deployment pipeline with ease. It is common to see approaches that sacrifice how clean the code is in order to handle every scenario of where our code is hosted. However, because this is a biiig topic that touches several different parts of our code, I am going to split it into several blog posts. Today I am going to talk about environment variables.
Managing multiple environments and their configs can be quite simple for small projects, but it gets wild at scale. Weekend projects and other small apps may have a couple of config variables, if any. Projects managed by big teams, however, may require a lot more, especially those with multiple stages before reaching production. In many cases the easy solution is to hardcode every stage inside the code. That usually looks like this:
...
ourApi: {
  local: 'http://localhost:1234/api',
  develop: 'http://develop.example.com/api',
  staging: 'http://staging.example.com/api',
  production: 'https://example.com/api'
}
...
The issues usually begin when there is a last-minute change to the production URL, which, by the way, happens a lot more often than I thought. We have to add a new commit just to change the environment URL. Then we have to open a pull request and ask our team to quickly approve it, and god forbid someone requests changes: the GitHub comments on the PR will turn into a war zone. Then we wait for it to be deployed to all N stages of our pipeline, hoping the CI is not going to fail this time. I hope you do not also have those days where you spend a few hours waiting for the CI to turn green. Wait, there’s more! While waiting for this to go through, the database team notices a malicious act. They have to change the password of the user that is connected to the database. Awesome! We have to go through the same process again.
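One way out of that cycle is to read the URL from the environment, so changing it requires a redeploy but no commit, no pull request, and no approval war. Here is a minimal sketch, assuming a Node.js runtime; the variable name API_URL is my own choice for illustration, not a standard:

```javascript
// Hypothetical sketch: the deployment pipeline injects API_URL per stage,
// so the code itself never hardcodes a production URL.
function getApiUrl(env = process.env) {
  // fall back to the local URL so the app still runs with zero setup
  return env.API_URL || 'http://localhost:1234/api';
}
```

With this in place, the last-minute URL change becomes a pipeline setting, not a code change.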
How about API keys, headless CMS tokens, and other sensitive info? We wouldn’t dare store them in a config file like that, right? RIGHT!?!?! Oh dear, no… Let’s all be honest, we’ve been there before. However, we are here to talk about how not to have a tough time when the boss realises this may be related to a recent cyber attack the company faced (runs away). The first thing we do is identify exactly what we are doing wrong. For example, if our code does this kind of thing:
if (window.location.href === api.production) {
  // execute production code
} else if (window.location.href === api.staging) {
  // execute staging environment code
} else {
  // execute local? what if there was also a QA env? another else if? LMFAO
}