POINT OF VIEW
Introduction
First, let’s understand who is who and what is what in the virtual world.
What makes a VM a VM?
To put it in layman’s terms, virtual machines (VMs) take one giant computer and carve its physical resources up into small chunks that behave like individual servers, each with its own CPU, memory, and virtual IO cards. Each chunk of hardware gets its own OS image installed, with drivers, libraries, etc., just like a real physical server would.
This is great for stretching your physical hardware further when on a tight budget. However, it has its limits, since each VM requires all the same types of resources as a real server, with each OS image taking up precious disk space.
What makes a container any different?
A container can run within a VM or on a bare-metal server. The biggest difference is that containers do not require their own full OS image, only a very small, streamlined base image. This stretches your hardware resources even further than a straightforward virtualised VM installation can; it is like the next level of virtualisation.
Getting Started
Let’s start by building a virtual machine (VM) in the traditional form, with its own CPU, memory, virtual network interface, a SAN/iSCSI interface to storage, and a portion of disk space allocated to the VM.
On this OS installation we will, for example, install the Docker engine to get the functionality to build and run containers. Once that is done, we have the capability to add containers with their own individual applications; each container carries its own binaries and libraries, is self-contained, and is therefore easy to move.
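As a rough sketch of what that looks like in practice once the Docker engine is installed (the image name `myapp` and its Dockerfile are hypothetical placeholders, not part of any specific solution):

```shell
# Build an image from a Dockerfile in the current directory,
# then run it as a detached, self-contained container.
# "myapp" is a placeholder image name, not from any real project.
docker build -t myapp:1.0 .
docker run -d --name myapp myapp:1.0
```

The resulting container carries its own binaries and libraries, which is what makes it easy to move between hosts.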
We can have several containers on a single VM and several VMs on the one physical server.
Are you starting to see any benefit?
Over and above this, each container can have its application “upgraded” independently of the other containers within the VM. The upgrade is really a replacement: add a new container running the new image, then trash the old container.
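In Docker terms, that replace-rather-than-upgrade step might look like this (image names and tags are illustrative):

```shell
# Pull the new image version, remove the old container,
# and start a replacement container from the new image.
docker pull myapp:2.0
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:2.0
```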
Sounds good so far, until the number of containers starts to grow, and it will.
Some applications that you containerise will be HA aware and will require a quorum to help keep services up and available. A quorum requires a minimum of three instances that stay aware of each other, to avoid what is known as split brain. Managing this beast can get very confusing.
This is where you will need to introduce an orchestrator, e.g., Kubernetes – the layer of complexity that makes you want to just sit tight in the past, on your VMs.
Take a simple example of an ESXi host with three virtual servers, each having its own guest OS install with its own set of libraries and binaries, all of which need to be maintained and kept up to date. Each VM may run a different part of the solution, such as an application, a database, and a UI. This takes up a fair chunk of hardware resources on the ESXi host, with each OS image requiring CPU, memory, and disk space. Then, if we want to extend the solution, we may have to build more VMs, using more hardware resources, if available.
If we take the same solution into the container world, we could keep the DB on its own separate VM, seeing that not all DBs out there currently scale so well; yes, there are exceptions, but we are trying to keep this general. This DB VM can be our persistent storage. Then we have a second VM with, for example, a Docker engine installed, giving us the ability to build two containers on this one guest OS: one for the application and the other for the UI.
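A minimal sketch of that second VM, assuming hypothetical `myorg/app` and `myorg/ui` images:

```shell
# Create a shared network so the two containers can talk to each other,
# then start the application and UI containers on the one guest OS.
# Image names and the host port mapping are illustrative only.
docker network create solution-net
docker run -d --name app --network solution-net myorg/app:1.0
docker run -d --name ui  --network solution-net -p 80:8080 myorg/ui:1.0
```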
Not really a big saving at this point for the extra effort, as we have only saved one OS image so far. But here is where the real savings come in: if we need to extend the solution this time around, we can simply spin up more instances of the application or UI within the same VM. This is known as scaling.
Each new instance now saves you one OS image and can be implemented in a couple of minutes. As mentioned earlier, when it comes time to update the application or UI, you can simply spin up a new container with the latest version and trash the old container once the new one comes online. No downtime for application upgrades.
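Scaling within the same VM can be as simple as starting extra instances of the same image, each mapped to its own host port (container names, image, and ports are all illustrative):

```shell
# Two more UI instances from the same image; each one saves a full OS image
# compared with building another VM, and starts in seconds rather than minutes.
docker run -d --name ui-2 -p 8081:8080 myorg/ui:1.0
docker run -d --name ui-3 -p 8082:8080 myorg/ui:1.0
```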
The orchestrator brings further advantages, like being able to nominate the number of replicas of a container in your cluster. The orchestrator keeps the running number of containers in sync with the manifest you provide, for example a Kubernetes Deployment config. This is all done from one central manifest describing the entire application.
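As a minimal sketch, a Kubernetes Deployment manifest nominating three replicas of a hypothetical UI container could look like this; Kubernetes will keep three copies running to match it:

```yaml
# Hypothetical Deployment manifest: Kubernetes keeps 3 replicas
# of the "ui" container running, replacing any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      labels:
        app: ui
    spec:
      containers:
        - name: ui
          image: myorg/ui:1.0   # illustrative image name
          ports:
            - containerPort: 8080
```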
Monitoring becomes critical as the numbers grow. With an orchestrator like Kubernetes in place, there are add-on tools you can use to get a dashboard of all containers in one central place, where their state of health can be displayed and errors easily spotted.
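Even before adding a dashboard (the Kubernetes Dashboard add-on is one option), a quick command-line view of that health information is available, assuming the metrics-server add-on is installed for the resource figures:

```shell
# List every pod in the cluster and its current state of health...
kubectl get pods --all-namespaces
# ...and, with the metrics-server add-on installed, CPU/memory usage per pod.
kubectl top pods --all-namespaces
```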
The question remains: when to choose a VM over a container? There is no right or wrong answer here, no golden number or silver bullet, but personally I would most definitely give containers the upper hand for cloud solutions, due to the agility and functionality surrounding them, while an in-house solution can go either way, or use a combination of both.
Like all IT problems, it will boil down to your business needs and what you are trying to achieve. Whether you are a giant corporate with an IT team the size of a medium-sized business, or a smaller business with just a handful of IT staff, there is a place for both, with benefits on both sides.
Conclusion
There’s no one-size-fits-all number when it comes to deciding when to transition from virtual machines to containers – but the tipping point usually comes when scalability, resource efficiency, and deployment speed start outweighing the simplicity of traditional VM management.
If your infrastructure is growing beyond a handful of VMs and your applications require frequent updates, faster boot times, and efficient resource usage, containers start to make a compelling case. Add in orchestration tools like Kubernetes, and the benefits of automation, scaling, and monitoring amplify significantly.
However, containers do introduce complexity in areas like networking and security, and may require more specialised skills. Ultimately, the decision hinges on your environment’s size, growth trajectory, and operational goals, but as a rule of thumb, once you’re managing more than a few VMs per application component or environment, it’s time to seriously consider containers as the next step forward.
Discover the power of containers with Responsiv