Straight talking 70

by Tim Anderson

Tim Anderson reports back from Microsoft’s Ignite event, and tells us why Windows Server 2016 is still relevant to software developers.

HardCopy Issue: 70 | Published: November 4, 2016

Microsoft has released Windows Server 2016. Should developers care?

There are two reasons why they should. One is because the design of Windows Server 2016 speaks volumes about Microsoft’s general direction. The other is that this release has a big new feature aimed at developers, namely Docker and Windows Server containers.

First, a quick look at Microsoft’s direction. The company is gradually tilting away from its traditional role as a software supplier towards being a cloud platform and services vendor. The new release of Windows Server illustrates this with its focus on cloud-oriented features. Here is the headline summary of what is new:

Much improved Hyper-V virtualisation. Some 40 new features include nested virtualisation – the ability to run VMs (Virtual Machines) within VMs – security enhancements such as virtual TPM (Trusted Platform Module) enabling BitLocker encryption within a VM, and massive scalability improvements. You can now create a VM with up to 12TB RAM and 240 virtual processors, while the host can support up to 24TB RAM and 512 logical processors. There is also runtime memory resizing, hot add and remove of virtual network cards, production-supported checkpoints, faster networking and more. (A few of these features are sketched in PowerShell after this summary.)

A new edition of Windows Server, called Nano Server, which is designed as a lightweight Hyper-V or container host, or to run application workloads inside a VM. Nano Server has no local command prompt when you boot it up; you have to manage it remotely with PowerShell.

Containers, lightweight cousins of VMs, which are designed to be replaced rather than updated when you rebuild your application. Server 2016 supports two kinds: standard Windows Server containers and Hyper-V containers. Hyper-V containers are better isolated and run their own copy of the Windows kernel, but both are managed in the same way, using the Docker engine and tools.

Big improvements in virtual networking and storage management.

Security enhancements that enable a degree of fine-grained control that has not been seen before in the Windows world. This includes Shielded VMs, which address the risk of a hacker getting access to your Hyper-V host. In earlier versions, access to the host meant access to all the VMs it hosted, but that is not so with Shielded VMs, which are encrypted and cannot be run on any other host. You can also now manage administrator privileges, using temporary accounts and restricted PowerShell sessions so that the risks involved in having global administrative rights are much reduced.
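To make some of this concrete, here is a minimal PowerShell sketch of a few of the new features in action. It assumes a Hyper-V host with an existing VM named InnerHost, a virtual switch named External, a Nano Server machine at 192.168.1.50 and the Docker engine already installed; all of these names and addresses are hypothetical.

    # Nested virtualisation: expose the processor's virtualisation
    # extensions to a VM so it can itself run Hyper-V (the VM must be off)
    Set-VMProcessor -VMName 'InnerHost' -ExposeVirtualizationExtensions $true

    # Hot add a virtual network adapter while the VM is running
    Add-VMNetworkAdapter -VMName 'InnerHost' -SwitchName 'External'

    # Manage a headless Nano Server machine over PowerShell remoting
    # (interactive; leave the session again with Exit-PSSession)
    Enter-PSSession -ComputerName 192.168.1.50 -Credential (Get-Credential)

    # Run a container with standard isolation, then again as a
    # better-isolated Hyper-V container with its own copy of the kernel
    docker run --rm microsoft/nanoserver cmd /c echo hello
    docker run --rm --isolation=hyperv microsoft/nanoserver cmd /c echo hello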

Licensing Windows Server 2016

Windows Server has moved to a licensing structure based on the number of physical cores rather than the number of physical processors, in a fashion similar to that already adopted for SQL Server. However, unlike SQL Server, where client access is included within the cost of core licensing, Windows Server requires you to buy Client Access Licences (CALs) as well. As with SQL Server, Windows Server 2016 is licensed in twin-core packs, but with a minimum of eight cores per processor, which means purchasing at least four packs.
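To take a worked example (the configurations are hypothetical): a two-processor server with 12 cores per processor has 24 physical cores and so needs 12 twin-core packs, while a single-processor quad-core server must still be licensed for the eight-core minimum, meaning four packs.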

The virtualisation rights associated with each edition initially remain unchanged. However, there are a few key points to consider in the light of the new technology:

  1. Standard Edition allows you to run up to two Virtual Operating System Environments (VOSEs) or Hyper-V Containers. Multiple licences can be assigned to the same cores for additional virtualisation rights, where required. The Datacenter edition allows for an unlimited number of VOSEs and Hyper-V Containers.
  2. Hyper-Threading is no longer accounted for in Windows Server 2016, so you only need to cover the physical cores and can ignore virtual cores.
  3. If a processor is disabled for use by Windows, the cores on that processor do not need to be licensed. However, disabling hyper-threading, or restricting core utilisation for specific applications, does not reduce the licensing requirement.
  4. External Connectors are still licensed per server and should be applied to each server that is being accessed, regardless of the number of users or devices.
  5. Nano Server is a deployment option within Windows Server 2016. It is included as part of the edition that is deployed and is not licensed uniquely or separately.
  6. ‘Nesting’ of VMs (running one Virtual Machine inside another) is considered separately and licensed according to the number of VMs used: a VM embedded inside another VM counts as two, the primary VM and the embedded one. In such scenarios the Datacenter edition works better, as it has no cap on virtualisation rights.

All this sounds good, but one thing that struck me when installing the release build of Windows Server 2016 is that there is not much here for the small guys, the businesses that have just a few servers in a room at the office. The new security features are impressive, but require a substantial overhead of infrastructure and administration to manage.

The enhancements to Hyper-V will be useful for anyone using virtualisation, of course, but the truth is that Server 2016 is designed for large-scale cloud deployments, whether that is a private cloud in a datacentre, or a public cloud such as Microsoft Azure. In fact, Microsoft itself is probably the biggest single customer for Windows Server, bearing in mind its huge investment in Azure and Office 365. In these environments, the ability to make better use of hardware through the denser deployments enabled by Nano Server and containers is a huge advantage. More secure VMs, nested virtualisation, more scalable VMs, better software defined networking and storage management: all play well on cloud platforms.

This then is a cloud-oriented release, and you can conclude that a top priority for Microsoft is improving its cloud platform in order to compete with Amazon Web Services and to improve the foundation of Office 365, Dynamics Online and its other cloud services.

It is also making the necessary investments in physical infrastructure to run these services. Microsoft now has more than a million servers across over 100 datacentres supporting its global cloud infrastructure, and is building more.

In September 2016, at the Ignite event in Atlanta, CEO Satya Nadella stated that AI (Artificial Intelligence) is at the heart of Microsoft’s vision of the future. The link with the cloud is obvious, since it provides both the data and the processing power to analyse it. There was even a demonstration, though sadly only a ‘what-if’ one, of applying Azure’s entire set of FPGA (Field Programmable Gate Array) cards to a translation task and achieving a billion billion operations per second. That is an exaflop, a level of performance supercomputers are not expected to reach until around 2020.

 

Everyday development

But what has this to do with everyday software development? Nadella’s idea is that all of us should start writing bot applications, calling cloud services to create next-generation user interfaces based on natural language parsing: “Every business is going to build a bot interface.”

Another obvious use case is analysing IoT (Internet of Things) data, and Microsoft is ready for you with its Azure IoT Hub.

All of this though is still rather remote from what most developers work on every day. That said, the container support in Server 2016 is a big deal for Windows developers working at almost any scale, not only because of its scalability and reliability advantages, but also because it is so amenable to automation.

If there is one word that defines modern development trends it is not Agile; it is automation. Gone are the days when you would build up to a new release by listing and prioritising bugs and feature requests, then roll out alpha and beta builds for testing, before finally deploying a new release with a fanfare of trumpets. In today’s world you amend code and deploy a new build little and often, with automated tests before and after checking in changes, automated build, and automated deployment – a model known as ‘continuous delivery’. If a problem is discovered, the answer is a quick rollback to an earlier version.

Admittedly this model does not work for all kinds of software, and there may be marketing reasons for version upgrades and trumpet fanfares. It does make sense for custom business software though, as well as for web applications, or subscription software where users expect frequent small upgrades.
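To see how quick such a rollback can be once containers are involved, here is a minimal sketch using the Docker tooling discussed below; the container name, image name and tags are all hypothetical.

    # Replace the running container with the new build
    docker stop web; docker rm web
    docker run -d --name web myapp:1.4
    # A problem is found: roll back by running the previous image
    docker stop web; docker rm web
    docker run -d --name web myapp:1.3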

Containers are not essential for automated deployment, but they are a great enabler. In Server 2016, the official tool for managing containers is Docker, and a commercially supported version of the Docker engine is free for anyone to install, thanks to an agreement between Microsoft and Docker, though you will need PowerShell to install it.
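At the time of writing, the documented route to installing the engine is via the DockerMsftProvider package provider from the PowerShell Gallery, along these lines:

    # Install the Docker package provider from the PowerShell Gallery
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
    # Install the Docker engine and client, then reboot to finish
    Install-Package -Name docker -ProviderName DockerMsftProvider -Force
    Restart-Computer -Force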

The way you use Docker is by downloading or creating a base image, modifying it for your application using a script called a Dockerfile, and then running it. The Dockerfile can do things like copying files to the container image, running commands – which can include PowerShell scripts, setup files or adding features with DISM (the Deployment Image Servicing and Management tool) – and defining what happens when the container is deployed, such as starting an application.
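For instance, a Dockerfile for a Hello World application like the one in the screenshot below might look something like this minimal sketch, which assumes a self-contained .NET Core application already published to a local publish folder (the file and folder names are hypothetical):

    # Build on the Nano Server base image
    FROM microsoft/nanoserver
    # Copy the published application into the image
    COPY ./publish C:/app
    WORKDIR C:/app
    # Start the application when the container runs
    CMD ["C:/app/HelloWorld.exe"]

You would then build the image with docker build -t helloworld . and run it with docker run helloworld.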


Running Docker on Windows Server 2016, using a Nano Server base image to execute a Hello World .NET Core application.

Docker images are binary files, which you store in public or private repositories, but a Dockerfile is just text that can be version controlled like any other code.

What this means is that using containers is another path to infrastructure as code: the ability to define not only the instructions that form your application, but also the platform on which it runs, in text files that are managed and versioned. And containers are a more lightweight and accessible route to infrastructure as code than most alternatives, which means that any developer can take advantage.

You can also see why Microsoft has been so keen to reduce the minimum footprint of Windows Server. Small editions like Server Core and Nano Server are well suited to packaging as container images, especially Nano Server. At the time of writing, the microsoft/nanoserver image is just 652MB, whereas the microsoft/iis image (Server Core with IIS) is 7.58GB, which makes Nano Server better suited to container deployment than any other version of Windows Server.
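Once the engine is installed you can check those sizes for yourself:

    # Pull the two base images and compare their sizes
    docker pull microsoft/nanoserver
    docker pull microsoft/iis
    docker images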

There are a few caveats. Docker is mature on Linux, but brand new on Windows. The fact that the vast majority of Docker images, tools and documentation out there are for Linux is a source of considerable confusion, especially as most resources describing ‘Docker on Windows’ refer to running Linux Docker inside a VM. Another issue is that Nano Server does not run the full .NET Framework but only the cross-platform .NET Core, which means that existing ASP.NET applications will not run on it.

Docker also requires a change of mind-set for developers. For example, a container is essentially stateless: you can store files in it, but they will be gone the next time you deploy the container. If you need persistent storage, the answer is to mount a shared drive, use web storage such as Azure Blob storage, or use a database server instead.
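Mounting host storage into a container is straightforward; here is a minimal sketch, with a hypothetical image name and paths:

    # Map C:\data on the host to C:\data inside the container, so that
    # files written there survive when the container is redeployed
    docker run -d -v C:\data:C:\data myapp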

The tools for Windows are also in their infancy. The Docker engine is there, but Microsoft needs to bake support into Visual Studio so that building an application and deploying with Docker is fully integrated. No doubt this will come soon.

The bottom line though: despite Microsoft’s cloud obsession, containers and Nano Server are a big step forward for Windows developers at any scale, and well worth investigating.
