Containing the problem

by Simon Bisson

Simon Bisson explains what containers are and how they can help deliver your applications.

HardCopy Issue: 67 | Published: November 6, 2015

The modern data centre is a complex mix of hardware and software, simplified by new layers of abstraction that turn it into a private cloud of compute, network, and storage fabrics. Virtualisation makes it easier to deploy new servers, to assign storage, and to reconfigure networks on the fly. But there’s one piece of the story missing: the applications. How can we manage them like we manage our data centres, automating everything in the application lifecycle?

That’s where software containers come into their own, adding a new layer of virtualisation that abstracts the interface between software and operating system. Applications don’t need to be installed on the operating system: they just need to be built into containers that can then be loaded and run on any supported platform. That doesn’t mean Windows apps will run on Linux and vice versa: applications still need to access the OS features they’d normally use, but this access is now managed and protected. Where possible, containers offer abstractions of common OS services, so, for example, a network connection will look like a standard OS networking API but will in fact be a NAT connection routed over the host OS’s networking stack.
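
As a rough illustration of that networking abstraction, the sketch below uses the Docker SDK for Python (the ‘docker’ package) to start the public nginx image. Inside the container the web server simply listens on its usual port 80; on the host, Docker publishes it on port 8080 (an arbitrary choice here) and routes the traffic through its own networking stack.

    import time
    import urllib.request

    import docker  # the Docker SDK for Python

    client = docker.from_env()

    # Inside the container, nginx listens on port 80 as normal; Docker publishes
    # that port as 8080 on the host and NATs the traffic through to the container.
    web = client.containers.run("nginx", detach=True, ports={"80/tcp": 8080})

    time.sleep(2)  # give the server a moment to start
    print(urllib.request.urlopen("http://localhost:8080").status)  # expect 200

    web.stop()
    web.remove()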

[Diagram: Container technology helps isolate applications from one another.]

The popular face of modern containerisation is Docker, and it’s not hard to see why it’s become so popular: it’s easy to use, it’s open source, and it’s supported by an ecosystem of tools and products built around its APIs. Docker gives you a simple command-line tool for building and managing containers, with support for most major operating systems. A Docker container wraps the user-space for an application, allowing it to run isolated from other applications on a server, sharing system resources and mapping networking so that connections are routed through a local firewall.
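
A minimal sketch of that command-line workflow, driven here from Python’s subprocess module; the image and container names, and the assumption that a Dockerfile sits in the current directory, are purely illustrative.

    import subprocess

    def docker(*args):
        """Run a docker CLI command, raising an error if it fails."""
        subprocess.check_call(("docker",) + args)

    docker("build", "-t", "example/myapp", ".")       # package the application into an image
    docker("run", "-d", "--name", "myapp",
           "-p", "8080:80", "example/myapp")          # run it isolated, with its port mapped on the host
    docker("ps")                                      # list the containers running on this host
    docker("stop", "myapp")                           # stop the container...
    docker("rm", "myapp")                             # ...and remove it when finished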

There’s a lot more to containerisation than wrapping and running applications. If you’re going to use it in your data centre, then you need to automate everything. Docker’s product suite also includes Machine, a tool that automates the creation of container hosts; Swarm, a cluster manager; and Compose, a tool for defining and orchestrating multi-container applications. All have APIs for management tooling and can be driven from the command line, making it easy to script your container architectures.
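
To give a feel for that scripting, here is a minimal sketch that drives Machine and Compose from Python; the machine name ‘devbox’, the VirtualBox driver and the presence of a docker-compose.yml file are all assumptions for the purpose of the example.

    import subprocess

    def sh(*args):
        print("$ " + " ".join(args))
        subprocess.check_call(args)

    # Machine: stand up a fresh container host (here a local VirtualBox VM).
    sh("docker-machine", "create", "--driver", "virtualbox", "devbox")

    # Show the environment variables that point the Docker client at the new host
    # (a deployment script would normally evaluate this output before continuing).
    sh("docker-machine", "env", "devbox")

    # Compose: start every container described in docker-compose.yml.
    sh("docker-compose", "up", "-d")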

Docker’s tools are supported by many vendors, from dedicated Linux-based operating systems to OS X, and container support is now being built into the next release of Windows Server. Download the latest technical preview of Windows Server 2016 from TechNet and you’ll find that you’ve downloaded two files. One is a familiar ISO image for a full-blown Windows Server installation; the other is a WIM (Windows Imaging Format) image of Windows Server Core, preconfigured with support for Windows Server Containers. You can also download a PowerShell script that sets up a Windows Server Core VM, ready for you to try out Windows Server’s container support.

You build and use containers in Windows Server 2016 using either PowerShell or Docker (but not both, at present). The familiar Docker command line and API are built into Windows Server, while the PowerShell option allows you to manage your containers remotely with PowerShell remoting. Under the hood, both approaches use the same Docker container image format, making it easier to share images between systems. Microsoft is also planning a further option, Hyper-V Containers, which will deliver a thin Windows Server OS running in a VM that hosts Docker containers, increasing application isolation and allowing you to run nested virtual machines on top of Microsoft’s new Nano Server or on Azure.
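
On the Docker side of that story, the standard client and SDK can simply be pointed at a remote container host’s API endpoint. The sketch below assumes a made-up host name and that the Docker engine on it is listening on the conventional unencrypted port 2375.

    import docker  # the Docker SDK for Python

    # Point the standard Docker client at the remote host's API endpoint.
    client = docker.DockerClient(base_url="tcp://winserver:2375")

    # The same API calls work whether the host is running Linux or Windows Server.
    for image in client.images.list():
        print(image.tags)
    for container in client.containers.list():
        print(container.name, container.status)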


How it works

It’s easy to think of containers as new, but in fact the underlying technologies have been around since the mainframe days. The same concepts that let applications share mainframe resources without affecting each other underlie technologies like Docker and rkt, building on ideas familiar from Linux’s LXC container model and Solaris Zones. Best thought of as the direct descendants of the virtual private servers offered by hosting companies, containers are a modern form of operating-system-level virtualisation, providing applications with a secure, isolated user-space where they can run without affecting other code running on the same server and using the same operating system.

What’s new with technologies like Docker is that they define a set of APIs between the container and the host OS, along with a packaging format and a set of metadata describing the contents of the container and its requirements. There’s also the option of differential containers, which apply their contents on top of a base file system. That way you can have a base container with your preferred web server configuration, and a series of containers that layer web applications on top of it. It’s an approach that can save disk space by allowing reuse of core application infrastructure.
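
A minimal sketch of that layering, using the Docker SDK for Python: one base image carries the web server configuration, and each application image only adds its own content on top, so the shared layer is stored once. The image names and Dockerfile contents are illustrative assumptions.

    import pathlib
    import tempfile

    import docker

    client = docker.from_env()

    def build(tag, dockerfile_text):
        # Write the Dockerfile into a throwaway build context and build the image.
        ctx = pathlib.Path(tempfile.mkdtemp())
        (ctx / "Dockerfile").write_text(dockerfile_text)
        client.images.build(path=str(ctx), tag=tag)

    # Base layer: a web server with our preferred configuration.
    build("example/web-base", """\
    FROM nginx
    RUN echo '# tuned server configuration would go here' > /etc/nginx/conf.d/tuning.conf
    """)

    # Application layers: each web app only adds its own content on top of the base.
    build("example/app-one", """\
    FROM example/web-base
    RUN echo '<h1>App one</h1>' > /usr/share/nginx/html/index.html
    """)

    build("example/app-two", """\
    FROM example/web-base
    RUN echo '<h1>App two</h1>' > /usr/share/nginx/html/index.html
    """)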

So what does it all mean? There’s a reason why containers have become popular tools in the last couple of years, and it’s the rise of DevOps.

Containers are essentially an element of architectural abstraction. Much as a hypervisor abstracts the OS from hardware, so a container abstracts an application from the OS. As we move to a world of automated, programmable infrastructures, containers become the endpoint of a build process, encapsulating your services and their dependencies. Instead of deploying code, we just swap in a new container with the latest version, using tools like Docker controlled by modern build tools like Jenkins, and managed by configuration management tooling like Chef.
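
The “swap in a new container” step can be as simple as the sketch below, again using the Docker SDK for Python; the service name, image tag, port mapping and the assumption of a Dockerfile in the current directory are all illustrative, and in practice a build server such as Jenkins would drive this.

    import docker
    from docker.errors import NotFound

    client = docker.from_env()

    def deploy(version):
        tag = "example/myservice:" + version
        client.images.build(path=".", tag=tag)        # build this version's container image

        try:
            old = client.containers.get("myservice")  # find the version currently running...
            old.stop()
            old.remove()                              # ...and retire it
        except NotFound:
            pass                                      # first deployment: nothing to replace

        # Start the new version under the same name and port mapping as before.
        client.containers.run(tag, name="myservice", detach=True,
                              ports={"80/tcp": 8080})

    deploy("1.0.42")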

As cloud platforms become more important to developers, it’s clear that containers are a technology that simplifies the process of deploying applications at cloud scale. That’s why they’re at the heart of many new cloud services. Amazon’s AWS has its EC2 Container Service up and running, and Microsoft is looking at offering something similar on Azure, with Windows Server TP3 container hosts already available in the Azure VM gallery.

Tools for managing containers at a data centre or cloud level are already available. Perhaps best described as data centre operating systems, management frameworks like Apache Mesos and Google’s Kubernetes are able to deploy and manage container-hosted services, providing a data centre-scale scheduler and tools for handling available resources. Applications can be defined as groups of containers and deployed to individual servers, or across an entire virtual infrastructure.

There’s a lot to be said for working this way. You can build an application, configure its infrastructure, and deploy in minutes. Working with a continuous delivery model allows you to push code several times a day, encapsulating each build in its own versioned container. If an application update fails, you can quickly fall back to the last known good container, and just carry on working. Deploying a complete application becomes a matter of deploying all its containers, using tools like Docker Compose to manage placement in a server cluster.
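
As a sketch of that whole-application approach, the script below drives Docker Compose and assumes a docker-compose.yml that references a ${IMAGE_TAG} variable in its image names, so the same file can deploy any versioned build or fall back to the last known good one; the tags and variable name are assumptions.

    import os
    import subprocess

    def deploy(tag):
        # docker-compose substitutes ${IMAGE_TAG} inside docker-compose.yml, so the
        # same file can bring up every container of the application at this version.
        env = dict(os.environ, IMAGE_TAG=tag)
        subprocess.check_call(["docker-compose", "up", "-d"], env=env)

    deploy("1.0.42")   # push today's build
    deploy("1.0.41")   # ...or fall back to the last known good version if it misbehaves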

Switching to working with containers does mean changing the way you think about development and deployment, and it’s not surprising that the container model is at the heart of much modern DevOps thinking. Talk to the folk from Chef, and they’ll note that this is part of the process of “moving left”, bringing operations into the development workflow, and tying infrastructure and configuration management into the same tools that you use to build your applications. Decoupling application lifecycle management from server configuration management makes a lot of sense, simplifying the build process and reducing the risk of server configuration mismatches between development machines and production. Code that runs in a container on a development machine will run on a server without any changes.

Containers are also changing the way we build operating systems. There’s a concept in research operating systems called the Library OS, which configures OS modules to provide only the functions its applications need. It’s a flexible, lightweight and very secure way of working, and was used by Microsoft Research as the basis of its Drawbridge OS.

Containers are letting us deliver something that, while not quite a Library OS, is closer to the concept than anything in public use. For example, CoreOS’s Linux distribution is designed to support Docker and rkt containers, giving you a thin operating system layer that adds functionality via a library of functional containers. RancherOS goes even further, with a set of Docker containers for core OS functions, abstracting as much of Linux’s user-space as possible. These containers then host another set of higher-level containers for your applications.

The ideal would be an OS that configured itself based on the manifests and other metadata offered by a container. That’s still some way in the future, but with a little work you can have a set of containers that deliver only the services you need to run your applications, so simplifying configuration management and deployment while making your infrastructure a lot more flexible and a lot more secure.

If you’re building cloud-scale applications, with micro-services at the heart of your architecture, then it’s well worth considering the container as the standard deployment unit for your code. It simplifies both scaling and updating an application, as well as giving you a foundation for future continuous delivery models.
