It’s typical for a developer to have to wait on another person to get a new environment set up. For example, when a new project has its first acceptance test, a new server will probably be needed to run that test on. At your typical company, the developer depends on another team to set up that server. Usually, that team is busy with more pressing matters (e.g., a production server seems to be low on RAM), so the developer’s needs are relatively low priority.
This type of situation should not even exist. Developers should not have to interact with human beings to set up a new environment. There should be an internal application that a developer can use to spin up a new environment at the click of a button. Anything less should be treated as a massive business cost. Developers either waste time waiting for these servers or come up with elaborate workarounds, and those workarounds increase complexity and the chances of introducing bugs.
Sometimes there is pushback from the team that sets up the servers. A common complaint is that “resources are tight.” But if you weigh the cost of RAM and hard drives against the cost of developers and bugs, it’s almost always worth it to pay for more resources.
I’ll admit there is a balance to this. Sometimes developers ask for an environment, abandon it, and forget to tell anyone. It’s usually worth it to reclaim those resources instead of buying more RAM. That’s why your company should be able to run reports on all the developer environments to see the last time they were actually used. When “resources are tight”, ask the developers if they still need the environments that they haven’t used in months so they can be “recycled”.
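That report doesn’t need to be fancy. Here’s a minimal sketch of such an “abandoned environment” report as a shell function, assuming an inventory file whose lines are `<env-name> <last-used date>` in ISO format — the file format and function name are illustrative, not a real tool:

```shell
# Hypothetical "abandoned environment" report (illustrative sketch).
# Assumes an inventory file with lines: "<env-name> <YYYY-MM-DD last used>".
report_stale() {
  inventory=$1   # path to the inventory file (assumed format above)
  cutoff=$2      # environments last used before this date are flagged
  while read -r env last_used; do
    # ISO dates compare correctly as plain strings
    if [[ "$last_used" < "$cutoff" ]]; then
      echo "$env (last used $last_used) -- candidate for recycling"
    fi
  done < "$inventory"
}
```

With GNU date, a 90-day cutoff would look like `report_stale envs.txt "$(date -d '90 days ago' +%Y-%m-%d)"` (the `-d` flag is GNU-specific; BSD/macOS date spells it differently).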
Both of these projects (the “new environment at the click of a button” tool and the “abandoned environment” reporter) should be developed jointly by the two teams, since both have a stake in their success. Once they’re complete, developers can stop pestering the system administrators with low-priority requests, and they’ll have enough environments to thoroughly test their code.
This is about allowing developers to be professional. If they don’t have an environment to test their code, they can’t be confident they’re producing quality work. It’s like the story of the blind men and the elephant: If your developers can’t adequately test the project they’re developing, there’s no way for them to get a sense of the big picture. They’ll have no choice but to throw crap over the wall and hope for the best.
I don’t want to reduce this to a developer/system administrator issue. This “environment at the click of a button” is also extremely useful to Quality Assurance. Imagine they want to compare the functionality of the current deploy, the previous deploy, and the deploy before that, but there aren’t enough servers to deploy them on. They should not be hamstrung in this way. If they want 3 new servers, they should be able to get them within the hour.
Improve your company’s efficiency: Automate the process for setting up new environments.
Good read, Daniel!
That’s one of the main reasons I use Docker – setting up everything I need with a button click (or a bash script). Docker lets you provision your entire production infrastructure on your laptop.
One of my projects requires the following: nginx for a reverse proxy, node.js, nginx + php, a log service, mysql, and mongodb. I created 6 Docker containers that all run together, started by a bash script.
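Wiring like that is nowadays often written down declaratively instead of in a bash script. A hedged sketch of the stack described above as a Compose file — the service names, images, and ports here are illustrative assumptions, not the commenter’s actual setup:

```yaml
# Hypothetical docker-compose.yml for the six-container stack above
# (service names, images, and ports are illustrative assumptions)
services:
  proxy:
    image: nginx          # reverse proxy
    ports:
      - "80:80"
  app:
    image: node           # node.js application
  web:
    image: php:fpm        # the nginx + php site's PHP backend
  logs:
    image: busybox        # stand-in for the log service
  mysql:
    image: mysql
  mongodb:
    image: mongo
```

With a file like this, `docker compose up -d` replaces the bash script that starts all six containers.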
Hi Oren, thanks for the comment. Yeah, I’ve definitely been meaning to look into Docker, I just haven’t found the time. I must have listened to at least 3 podcasts on the topic though. What’s a “turn off” for me is that if you’re on a Mac or Windows, Docker can’t use the Linux container facilities, so you have to install a VM to use it on those OSes. At least that’s what I heard on one of those podcasts. Thoughts?
True, on Mac and Windows you’ll have to use Boot2Docker (http://boot2docker.io).
It’s not a big deal since it’s a very small Linux distro.
The reason for this added layer is that Docker containers use the kernel of the host machine, and currently that kernel must be Linux — not BSD or Windows.
Just make sure you have a recent Mac (with at least 8GB RAM) and you should be fine.