An Architect’s View on Containers

To truly understand the why and when of Containers, it is important to have some background on the world of development. Developers seldom truly trust other developers, sometimes with good reason. Twenty years ago, even though it would have been cheaper to run multiple applications on the same server, it was common to dedicate a separate set of servers to each application. Why? That way you didn't have to worry about one application impacting another. There are several ways they could have impacted each other:

  • Performance — one of them could hog all of the CPU time, memory, bandwidth, etc., or even crash the machine.
  • Security — one compromised application could attempt to look at the data of the other, or intentionally disrupt the other, perhaps by even altering data. A poorly written application can, unintentionally, be just as bad as a compromised one.
  • Incompatible Libraries — why write new code when you can use existing libraries that someone else maintains? It is very common for developers to leverage libraries, particularly those provided by the operating system or closely related to it (e.g. the .NET Framework). There are Java libraries available for just about anything you can imagine. But what if two different applications are counting on two different versions of the same library? (A quick sketch of that conflict follows this list.)
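
The conflict is easiest to see with a concrete, if contrived, example. Below is a minimal Python sketch, assuming Python 3.8+ and using pip purely as a stand-in for any shared library; the app names and version pins are hypothetical.

```python
# A minimal sketch of a shared-library version conflict.
from importlib.metadata import version

installed = version("pip")          # only one copy exists machine-wide
print(f"shared library version installed: {installed}")

requirements = {"app A": "24.", "app B": "9."}   # hypothetical version pins
for app, prefix in requirements.items():
    status = "ok" if installed.startswith(prefix) else "broken"
    print(f"{app} expects {prefix}x -> {status}")
```

Whichever application disagrees with the single installed copy is the one that breaks, which is exactly why teams reached for separate servers.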

Operationally, running different servers for different apps is a simpler model as well. You only have to worry about building out enough servers for the peak usage of a single app, which is an easier calculation. It is also easier to separate costs.

So it is easy to see why folks would tend towards separate servers for separate apps. The trade-off, though, is that it costs more: it just isn't a very efficient use of servers.

Compromises

Even 20 years ago, there were compromises, the most common of which was multiple applications sharing the same database server. By accepting a little more security risk and a little more performance risk, you could save a lot of money.

Virtualization has been a great compromise. The apps think they have the server all to themselves, but they are really sharing the underlying hardware. The security risk is all but eliminated. The performance risk is still there, perhaps slightly worse in some ways, but with a lot of tools available (vMotion, etc.) to try to reduce that risk. And each virtual server can have its own set of libraries.

In this light, Containers are just another compromise. Instead of a hypervisor creating multiple virtual machines, each complete with its own operating system, with Containers the operating system creates a separate sandbox for each app and attempts to isolate them from each other. You save money on operating systems because you need fewer copies. You also need fewer hypervisors and fewer resources, because you aren't trying to emulate an entire machine, hardware and all. Essentially, you use the servers more efficiently. In exchange, you accept a little more risk.
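
To make the sandbox idea concrete, here is a minimal sketch using the Docker SDK for Python (docker-py), assuming that package is installed and a local Docker daemon is running. Both containers share the host's kernel, yet each carries its own filesystem and its own library versions; the images chosen are just illustrations.

```python
# Run the same command in two isolated containers that share one host kernel.
import docker

client = docker.from_env()

# Two hypothetical app environments, each with a different Python runtime.
for image in ("python:3.8-slim", "python:3.12-slim"):
    output = client.containers.run(image, ["python", "--version"], remove=True)
    print(image, "->", output.decode().strip())
```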

Microsoft has added a new twist in Windows Server 2016, offering two flavors of Containers: the classic OS-based model (Windows Server Containers), and a new, sort of mini-hypervisor-within-the-OS model (Hyper-V Containers). The latter essentially tries to split the difference between Containers and virtualization.

Alternatives

Virtualization will work with most legacy applications. There are always exceptions…something somehow still running on an ancient Sun Solaris SPARCstation or Windows NT, for instance…but the exceptions are becoming increasingly rare. Containers will work with nearly as many legacy applications, and that is part of their attraction.

The main alternative is to take a fundamentally different approach to application development, preferring a service approach to a library approach and avoiding dependencies on a particular operating system or hardware platform. The services may be a mix of internally developed and third-party (e.g. Auth0 for authentication). “Serverless” apps and Platform as a Service (PaaS) apps are currently the most popular alternatives.

Most companies have no appetite to redevelop (or refactor) existing code, so while they might embrace these alternatives for new apps, they are likely to prefer virtualization or Containers for legacy apps.

Portability

Containers are dependent on the operating system (i.e. you can't run a Linux container on Windows, and vice versa), whereas a virtual machine (VM) carries the operating system (OS) with it and can run on any hypervisor that supports it or can convert it. Cross-hypervisor portability is not yet as good as it should be, but the trend is promising.

The big advantage of Containers is that they avoid cloud lock-in. So long as the underlying OS is the same, a container can run in any cloud.

Container apps are very OS-dependent. PaaS and Serverless apps are often completely OS-independent, but very cloud-specific (e.g. an “Azure Function” won't run as-is in AWS Lambda or as an Auth0 Webtask).
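
That lock-in is visible right at the entry point. The sketch below shows the same trivial logic written twice in Python: the AWS handler follows Lambda's documented (event, context) convention, while the Azure handler assumes the azure-functions package and its HTTP-trigger programming model. Neither will run on the other platform without being rewrapped.

```python
import azure.functions as func  # Azure-specific binding types

# AWS Lambda entry point
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "hello"}

# Azure Functions entry point: same logic, completely different signature
def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("hello", status_code=200)
```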

Scalability

VMs and Containers both require orchestration to scale. Containers can scale faster because they are lighter. Most PaaS implementations also require orchestration to scale, and the speed of scaling varies widely with the specific platform. Serverless apps require no orchestration (that is completely handled by the cloud) and they scale fastest of all. Their compromise is that they may take slightly longer to respond to an event than code that is always up and waiting on a dedicated server.
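
As a hedged sketch of what "orchestration to scale" looks like in practice, here is how you might ask Kubernetes, via its official Python client, for more container replicas. The deployment name and namespace are hypothetical, and a local kubeconfig is assumed to exist.

```python
from kubernetes import client, config

config.load_kube_config()            # assumes kubectl-style credentials
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="web-frontend",             # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 5}},  # scale out to five replicas
)
```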
