It has often been said that having too many choices can be as painful as having too few. When hiring a contractor to work on a project, I want to trust them to follow a set of best practices. Having to micromanage projects is exhausting and defeats the purpose of outsourcing the work. Enterprise IT departments are often laden with legacy products, a result of industry trends from 20 years ago. Yes, I mean Big Blue, mainframes, and Computer Associates, with multi-million-dollar licensing price tags each year. The challenge is adopting DevOps and new technology trends while keeping the lights on, meeting our companies’ changing needs, and ensuring our security.
Complexity and barriers are the enemies of functional IT teams. When we are able to standardize what works incredibly well, we can then scale it almost indefinitely while preserving efficiency and allowing our colleagues to refocus on important tasks.
Be Done with Drama
Ideally, we evolve our CI/CD to a boring, everyday routine by asking, “What would a human do?” in response to every failure and applying the solution immediately to all processes. Essentially, we strive to eliminate all the decision-making on the mechanics and processes of CI/CD and pre-build them for the application teams. The app teams are compelled to use them, not because we seek to limit choice, but because we have already evolved the process beyond any individual team’s or department’s capabilities, and we want them to benefit from both our intensive labor and our mistakes. In my experience, no two applications are alike, because no two codebases are exactly alike. But the mechanics of deploying code are exactly the same, based on only one thing: the technology. .NET or Docker deployments have the same logical steps regardless of whether they were made by NASA or a start-up developer in Brooklyn. So it makes sense to let us make decisions for the team based on our proven expertise.
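The claim that deployment mechanics follow from the technology alone, not from the team, can be sketched in a few lines of Python. The template names and step lists below are hypothetical illustrations, not any real platform’s configuration:

```python
from typing import Dict, List

# Hypothetical pre-built pipeline templates, keyed by technology.
# App teams pick a technology; the platform team owns the process.
PIPELINE_TEMPLATES: Dict[str, List[str]] = {
    "dotnet": ["restore", "build", "test", "publish", "deploy"],
    "docker": ["build-image", "test", "scan-image", "push", "deploy"],
}

def pipeline_for(technology: str) -> List[str]:
    """Return the standardized step list for a technology.

    Raises KeyError for a technology we have not pre-built yet, which
    sends the conversation back to the platform team instead of letting
    an app team improvise its own process.
    """
    return list(PIPELINE_TEMPLATES[technology])
```

Whether NASA or a Brooklyn start-up asks for the Docker pipeline, the steps come back identical; only the code that flows through them differs.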
The Domino Effect
With so much focus on shifting left, DevSecOps, and the need for security, CI/CD plays a crucial role. You can’t deploy or test code until it’s built; you can’t run code quality and vulnerability scans until the code is in the pipeline; and you can’t do anything at all if the code is unavailable or not committed. This is another opportunity to control the set of decisions available to developers: they can code only with our tools and deploy whatever they want, whenever they want, but always within our template. Trusting each team to do its own security scans is like trusting a fox to guard the henhouse. It’s not impossible, but the likelihood of a few chickens going missing here or there is almost guaranteed. The only way to avoid vendor lock-in and unnecessary special measures to make the code work is to compel the supplier to use the existing process.
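That ordering, where nothing runs before the code is committed and nothing deploys before it is built and scanned, is exactly what a shared template can enforce. A minimal Python sketch, using illustrative stage names rather than any specific CI tool’s vocabulary:

```python
from typing import List, Optional

# Hedged sketch: a template that enforces stage ordering, which makes
# the security scans non-optional. Stage names are illustrative.
REQUIRED_ORDER = [
    "commit", "build", "quality-scan", "vulnerability-scan", "test", "deploy",
]

def next_allowed_stage(completed: List[str]) -> Optional[str]:
    """Return the next stage the pipeline will run, or None when finished.

    Each stage may run only after every stage before it has succeeded,
    so there is no path to 'deploy' that skips the scans.
    """
    for stage in REQUIRED_ORDER:
        if stage not in completed:
            return stage
    return None
```

A team that has only committed and built is routed to the quality scan next, not to deploy, no matter what its own pipeline definition might prefer.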
As enterprises catch on to new technologies like Docker, K8s, and others, redirecting the focus away from the vendor’s black box is even more essential. Docker and FOSS are game-changing trends, and they are here to stay; those who adopt them early will reap the benefits. IT departments today face new challenges, and to meet them, we need to adapt, so give your teams the illusion of choice and let them refocus on technology adoption. We all have real problems to solve, like security, data analytics, and efficient automation; these are the tasks that will put IT back on the map as a valued partner. If we are honest with ourselves, copying the same file for the ten-thousandth time and editing it for no real reason should not ensure a paycheck or professional respect. And finally, superior automation gives us back our nights and weekends, a benefit that can’t be measured in metrics but one that’s of vital importance to most of us.