Today’s IT environment is moving and evolving at an unprecedented pace. So, all of a sudden, your 5-year-old software infrastructure can look more like it’s 50. Getting your software current – and keeping it there – requires flexibility. Moving to containers provides exactly that. There’s been lots of talk about containers over the past few years – so why aren’t you on the bandwagon yet?
Software containers are pretty much what they sound like.
No complicated technobabble here.
They’re modular, standalone executable packages that bundle everything the software needs to run, from code to settings. Because they include everything, they’re highly portable and flexible, and can be deployed in almost any environment. Moreover, they solve three challenges at once: distribution, configuration, and isolation. In short, containers save time and reduce the likelihood of mistakes, which is why they are conquering the IT world.
Docker Swarm, Kubernetes, and Mesos are some of the bigger names in open-source container management software. TechCrunch put it best: “…Instead of shipping around a full operating system and your software (and maybe the software that your software depends on), you simply pack your code and its dependencies into a container that can then run anywhere — and because they are usually pretty small, you can pack lots of containers onto a single computer.”
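To make that quote concrete, here is a minimal sketch of what “packing your code and its dependencies” looks like in practice – a hypothetical Dockerfile for a small Python web service (the file names and base image are illustrative, not from any specific project):

```dockerfile
# Start from a small base image that already contains the Python runtime.
FROM python:3.11-slim

# Install the dependencies inside the image, so the container
# carries everything the code needs with it.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and its settings.
COPY app.py settings.ini ./

# One well-defined entry point: how the software starts, every time.
CMD ["python", "app.py"]
```

Everything the service needs – runtime, libraries, code, configuration – now travels as one small, versioned unit.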
Why should I make the move? The short version is “everyone is doing it.” From terminals and mainframes to multi-tiered software, managing your infrastructure is just getting more complicated – and containers make it easier. But there are a few reasons behind this huge shift that you should be paying attention to:
Containers are a more efficient way to pack software into your existing infrastructure.
Moving to containers provides the flexibility needed in a modern IT environment.
They’re easier to deal with than virtual machines, since you deploy to a single uniform environment that abstracts away the differences between hardware, clouds, and so on.
A configuration that seemed incredibly complicated just a few short years ago (a hybrid system, for example) is suddenly within reach.
Build once and deploy anywhere you want.
Why duplicate your efforts for every environment?
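The “build once, deploy anywhere” workflow boils down to a handful of commands. A sketch, assuming a Dockerfile in the current directory and a hypothetical registry and image name:

```shell
# Build the image once, on your laptop or CI server.
docker build -t registry.example.com/myteam/myapp:1.0 .

# Push it to a registry.
docker push registry.example.com/myteam/myapp:1.0

# Run the identical image anywhere – a dev box, an on-premises
# server, or a cloud VM – with no rebuild and no surprises.
docker run -d -p 8080:8080 registry.example.com/myteam/myapp:1.0
```

The image that passed testing is the exact artifact that runs in production – there is no separate “install on the target machine” step to go wrong.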
What will containers do for me? You can isolate your IT headaches (systems that aren’t working as they should) by packaging the systems that do work into containers and clusters, so you can keep moving forward technologically. As long as your software systems are completely intertwined with one another, you can’t update any of them, because it’s never clear what else will be affected.
Containers create an abstraction layer that provides you with the flexibility needed to update and upgrade as you go.
Modern system architecture relies on a combination of (mostly open-source) technologies – containers; container management, orchestration, and coordination software; network and security technology (e.g., VPNs like strongSwan, overlay networks); and software-defined infrastructure (OpenStack, SDN) – to provide security, portability, multi-cloud support, high availability, scalability, and ease of use.
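As one small example of that stack, Docker’s built-in overlay networking lets containers on different hosts talk to each other as if they shared one network. A sketch, assuming a Docker Swarm has already been initialized (the network and service names are illustrative):

```shell
# Create an encrypted overlay network spanning the swarm's hosts.
docker network create --driver overlay --opt encrypted app-net

# Services attached to it can reach each other by name,
# regardless of which physical host they land on.
docker service create --name api --network app-net myapp:1.0
```

The physical network topology becomes an implementation detail the orchestrator handles for you.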
How do I implement? One of the best parts of the container approach is that you can take baby steps.
Start with systems that are working, package them into containers, and simplify their connections (i.e., one entry point). Then, you can experiment at your own pace (moving to the cloud, connecting to X, etc.). It’s a safe environment to test things out.
Next, create one abstraction layer that simplifies system interdependencies. With that layer in place, you’ll have much less to manage over time, even as your system grows more complex. All you’ll need to worry about is the abstraction.
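Steps like these can be captured in a single Compose file – the abstraction layer in practice. A sketch with hypothetical service names: only one entry point (the web port) is exposed, and services find each other by name rather than by hard-wired addresses:

```yaml
# docker-compose.yml – illustrative, not from a real project
services:
  web:
    image: myapp:1.0
    ports:
      - "8080:8080"      # the one entry point into the system
    environment:
      DB_HOST: db        # dependency referenced by name, not by IP
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

You manage this one file instead of a web of host names, ports, and install scripts – and swapping a component means changing a line, not untangling the system.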