by Mary Branscombe
Does it make sense to host your applications on your own private cloud? Mary Branscombe investigates.
HardCopy Issue: 58 | Published: November 1, 2012
On first hearing, ‘private cloud’ sounds like an oxymoron. The advantages of the public cloud are scale and elasticity: you don’t need your peak capacity all year round, and with cloud you don’t have to pay for it all year round; nor do you have to do the work of procuring, installing and maintaining the infrastructure. A cloud provider is big enough to negotiate cheaper rates for electricity, or simply move to where power is cheaper, and can get volume discounts on hardware. Keep your IT in house and all these advantages disappear.
However, building your in-house systems around cloud principles can still bring advantages. As Microsoft puts it, “A private cloud enables organisations to deliver IT as services by providing a pool of computing resources delivered as a standard set of capabilities that are specified, architected, and managed based on requirements defined by a private organisation.”
The hybrid cloud
Every company should be looking at public cloud services, both because of the economics and because there are some areas where there’s no business advantage to having in-house expertise. However, public cloud isn’t right for every application, particularly if something is mission critical, you have regulatory issues, or the application just wouldn’t run well in the cloud. Once you start thinking of cloud as a style of infrastructure that gives you efficiency, abstraction and agility, you’ll stop wondering whether the cloud is right for your business and start thinking about what you want to run in a public cloud service, what you want to run in a private cloud, and how you link the two.
There are three main models for hybrid cloud. If you want to scale out applications from public to private cloud, or have the option of migrating from one to the other, then tools like System Center App Controller let you manage public Azure and private deployments in the same place, with support for other options such as Xen virtual machines on the way. AD federation and single sign-on give users access without caring whether they’re using an on-premise or cloud-hosted service. You can also use tools like the Azure Service Bus and Windows Azure Connect to build hybrid applications and services which run on premise and communicate with services in the public cloud. These can even run partly on premise for security or performance reasons, and partly in the cloud because of variable demand, or to simplify access to systems that are used mainly by partners, customers, or mobile and remote users.
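The split between on-premise and cloud components is easier to picture with a small sketch. The Python below models the relay pattern that the Azure Service Bus enables: a cloud-facing frontend hands queries to a broker, and a worker inside the firewall makes only outbound connections to collect and answer them. The in-process queue here is a stand-in for the Service Bus itself, and all names and figures are illustrative.

```python
import queue
import threading

# In-process stand-in for the broker (the Azure Service Bus plays this
# role in a real deployment, reachable by both halves of the application).
requests = queue.Queue()

def onpremise_worker():
    """Runs inside the firewall; only makes outbound connections to the broker."""
    while True:
        reply_to, order_id = requests.get()
        # Look up the order in an internal system that never faces the internet.
        reply_to.put(f"order {order_id}: shipped")

def cloud_frontend(order_id):
    """Runs in the public cloud; forwards the customer's query via the broker."""
    reply = queue.Queue()
    requests.put((reply, order_id))
    return reply.get(timeout=5)

threading.Thread(target=onpremise_worker, daemon=True).start()
print(cloud_frontend(1042))  # → order 1042: shipped
```

The point of the pattern is that the internal system initiates every connection, so no inbound firewall ports need opening, while customers and partners still get a cloud-hosted, internet-facing front end.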
Think of it as creating a layer of abstraction for pooled resources (compute, storage and networking) that you standardise as much as possible and run as efficiently as possible, with as little human intervention as possible. It’s also important to think more in terms of the applications that run on the infrastructure than of the infrastructure itself.
You won’t get the full cost savings of cloud, and indeed Microsoft estimates that public cloud can be ten times more cost effective than private cloud. You can’t get the full benefits of elasticity in your own cloud because if you want the hardware to be available you have to actually own it, but you can scale services up or down and you can share resources between different teams and groups.
Your peak selling season probably doesn’t coincide with your financial year end or your busiest product planning period. However, if you can standardise the services you deliver, and automate the way you provision and scale resources and applications based on demand and workflow, then it becomes much easier to share the same infrastructure between teams and so balance the load.
At worst, you can plan for the peak demand of the most demanding team plus a baseline for everyone else, rather than the combined peak requirements of all of them. At best, you can offload some of the peak demand with a hybrid cloud solution, which becomes far easier to implement if your internal architecture is similar to that of the cloud you want to scale out to. Windows Server and Windows Azure have an obvious advantage here because you can work with them coherently through System Center, and in particular System Center App Controller (formerly ‘Concero’) and the Azure Service Bus, including what used to be known as AppFabric.
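As a rough sketch of that capacity arithmetic, with entirely hypothetical teams and demand figures:

```python
# Hypothetical (baseline, peak) capacity needs per team, in server units.
teams = {"retail": (10, 60), "finance": (8, 30), "planning": (5, 25)}

# Separate infrastructure: each team must be sized for its own peak.
separate = sum(peak for _, peak in teams.values())

# Pooled infrastructure: the most demanding team's peak plus everyone
# else's baseline - valid when the teams' peaks fall in different periods.
busiest = max(teams, key=lambda t: teams[t][1])
pooled = teams[busiest][1] + sum(
    base for team, (base, _) in teams.items() if team != busiest)

print(separate, pooled)  # → 115 73
```

Even in this toy example the pooled plan needs roughly a third less hardware, and a hybrid scale-out to public cloud would shrink the on-premise figure further.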
With this kind of model it’s also much easier to agree on service levels and to move to a usage-based chargeback model, where internal teams pay for the IT resources and services they actually use.
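A minimal sketch of metered chargeback, using made-up teams and internal rates (in whole cents, to keep the arithmetic exact); in practice the metering data would come from your monitoring and reporting tools:

```python
# Illustrative internal rates, in cents per unit consumed.
RATES = {"vm_hour": 5, "gb_stored": 2}

# Hypothetical metering data collected over a billing period.
usage = [
    ("retail",  {"vm_hour": 1200, "gb_stored": 500}),
    ("finance", {"vm_hour": 300,  "gb_stored": 2000}),
]

def charge(meters):
    """Bill a team only for the units it actually consumed."""
    return sum(RATES[unit] * qty for unit, qty in meters.items())

bills = {team: charge(meters) for team, meters in usage}
print(bills)  # charges in cents per team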
Elasticity isn’t the only benefit of cloud architecture – if it were, cloud providers would need to charge very high margins because they have to buy the hardware necessary to cope with peak demand. For the IT team, instrumentation and automation make maintenance less of a chore as well as making your infrastructure more efficient. For users, self-service is key: once you can deliver applications and resources as services, you can give users a portal where they can request, configure and even partly manage those services themselves.
And of course if you have concerns about the security and privacy of public cloud services, running a private cloud gives you and your users – who may well be demanding cloud-style simplicity, self-service and specific service levels – many of the benefits without the risks. There’s also the option of hybrid cloud, where you mix in-house systems and public cloud services to get the balance you need.
The cloud trend is a reaction to high operational costs, complex and manual maintenance, low system utilisation, inconsistent availability, poor transparency and IT delivery that’s too slow to respond to business needs. Whatever you think of the term ‘private cloud’, the fundamentals are extending what you probably already call best practices, especially if you’ve been following ITIL recommendations or Microsoft’s dynamic IT principles. You may be able to run your system more efficiently and at lower cost, or simplify and speed up the way you provision services for your internal customers so you can give them the platform they need more quickly. Or you might switch the way you deliver IT to make it more of a self-service platform, freeing up the IT team to do more interesting things such as evaluating new technologies or coming up with new ways that existing technologies can be useful to the business.
Although Microsoft has been preaching the idea of private cloud for some time, it’s the new versions of Windows Server and in particular System Center that really deliver. Support for private cloud goes deeper than the obvious virtualisation and storage features in Windows Server 2012, important as they are.
Storage Spaces lets you aggregate storage across the range of technologies from arrays to standalone or clustered file servers to commodity disks, with SMB 3 support. Hyper-V 3 is far more scalable and includes network and storage virtualisation to give you “shared nothing” migration of virtual machines without needing either clustered servers or shared storage, while Hyper-V Replica gives you a business continuity and disaster recovery solution for virtualised workloads.
The Microsoft view
Steve Ballmer famously declared that Microsoft was “all in” the cloud back in 2010, and Windows Server, System Center, Visual Studio and other products are starting to show what Microsoft has learned from running its own cloud services. The Microsoft angle is always “cloud on your own terms” and different product teams have complementary views.
System Center director of product management Andrew Conway says your approach to private cloud depends on the maturity of your existing IT: “We see customers already with all the components for private cloud infrastructure today; they are delivering private cloud. Others are working with Configuration Manager and Operations Manager and starting to look at more tools on the infrastructure and VMM side. The increasing experience of public cloud is creating the imperative for IT teams to say ‘How do I do that with my own infrastructure?’ And it’s only possible if you have tools that let you do the automation.”
Neither public nor private cloud replaces everything you already have, points out Server and Cloud corporate vice president Bill Laing: “When mainframes started, it wasn’t that everything was replaced; it was an additive market and then we had the PC revolution adding to that market, then the Internet and now cloud. And cloud doesn’t mean everything will be replaced but it is something that is really shaping the way we think. There is technology in Windows Server 2012 to help people connect to public clouds, and to build their own private clouds. People will put part of their application in the cloud and relay the information back to on-premise systems to get the global reach; we will soon start to see that as a common model of deployment.”
Azure corporate vice president Scott Guthrie says multiple options don’t dilute the value of the cloud approach: “The broadening means more ways to find value. I can dip my toe in cloud and get cost savings or I can jump into the cloud and fundamentally reimagine my customer experience and business. That’s where Azure is kind of unique: it doesn’t require you to jump in; you can use the infrastructure for burst, for dev-and-test. There are companies that aren’t sure they’re ready to run mission critical apps in the cloud but are definitely comfortable doing dev-and-test of that app. We’re kind of a unique vendor: we have a great cloud, but we also have this great OS with Windows Server. You can literally use the same VMs; not import and export them but literally copy up and copy down.”
The DirectAccess VPN replacement needs much less network infrastructure than in previous versions of Windows Server and runs as part of the Remote Access role, so you have a more flexible way of giving remote and mobile users access to the private cloud you’re building.
Improvements in more fundamental networking features matter as well. The policy-based, software-controlled network virtualisation in Windows Server 2012 makes building and managing a dedicated IaaS cloud far easier, because you don’t have to deal with the limitations and complications of managing virtual LANs just to allow VM migration. IP address management is much improved in the new server OS as well, and you no longer need to lock VMs to physical IP subnets or over-provision because of the constraints of the physical network infrastructure. Instead an IP address can migrate with the VM, and like just about everything else in Windows Server 2012, you can automate it with WMI and PowerShell scripts.
This puts responsibilities where they belong. You can leave network infrastructure and traffic management to network administrators while server admins get on with managing servers and the services that run on top of them.
As network traffic in your private cloud increases, you can offload some of the work of processing that I/O traffic to the network cards to reduce the load on the CPU. You can also dynamically adjust the way the Virtual Machine Queue distributes incoming network traffic to VMs, using ‘dedicated’ virtual NICs assigned to VMs that put packets straight into their memory stack, based on the load on both the network and the server CPU. Combine that with the new QoS bandwidth management options, where you can set both bandwidth caps and bandwidth floors to ensure predictable network performance for the virtual machines running your private cloud. Add NIC Teaming of network cards from multiple vendors, with load balancing and bandwidth aggregation when everything is working well and failover when it isn’t, plus load balancing for DHCP failover, and it becomes much easier to run a resilient network that can actually deliver your private cloud.
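The interplay of caps and floors can be sketched as a toy allocator: floors are reserved first, then spare capacity is handed out without letting any VM exceed its cap or its actual demand. The policy and the figures below are illustrative only, not how Hyper-V’s scheduler is implemented.

```python
def allocate(capacity, vms):
    """Toy bandwidth allocator. vms maps name -> (floor, cap, demand) in Mbit/s.
    Floors are reserved first; spare capacity then goes to the busiest VMs,
    never exceeding a VM's cap or its actual demand."""
    alloc = {name: floor for name, (floor, cap, demand) in vms.items()}
    spare = capacity - sum(alloc.values())
    assert spare >= 0, "floors oversubscribe the link"
    for name, (floor, cap, demand) in sorted(
            vms.items(), key=lambda kv: kv[1][2], reverse=True):
        extra = min(cap, demand) - alloc[name]
        grant = min(max(extra, 0), spare)
        alloc[name] += grant
        spare -= grant
    return alloc

# A 1 Gbit/s link shared by three VMs (hypothetical figures):
shares = allocate(1000, {
    "web":    (200, 600, 900),   # busy: gets its floor plus all the spare
    "db":     (300, 500, 200),   # idle: held at its floor
    "backup": (100, 400, 400),   # spare is exhausted before it gets more
})
print(shares)  # → {'web': 600, 'db': 300, 'backup': 100}
```

The floor is what makes the performance predictable: however busy the web VM gets, the database VM’s 300 Mbit/s is never taken away from it.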
System Center 2012 takes the familiar infrastructure tools of configuration management, service management, health monitoring and reporting, fabric management, deployment provisioning, and network and security management, and adds a management layer to help you automate processes and offer them on demand. An on-demand service might be as simple as creating a user account or scheduling a software update; or it could be as complex as provisioning a service deployment package that includes content, configurations, procedures and service templates from your private cloud resource pools.
System Center decouples applications from operating systems to make it easier to deploy them as services, and to build packages of services from libraries of components. Instead of having to bring down a server to patch it because the application and the OS are so intertwined, you can deal with it as an abstraction. You combine the OS image, the hardware profile and the OS profile into a virtual machine template, add application profiles to create service templates and then use automation and workflow to deploy and manage those services – either yourself or by offering them through a portal to authorised users who select the services they need from a menu.
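The composition described above can be sketched in a few lines of Python. The class and profile names here are hypothetical, and System Center’s actual service template model is far richer, but the shape is the same: profiles combine into a VM template, and VM templates plus application profiles make a deployable service.

```python
from dataclasses import dataclass, field

@dataclass
class VMTemplate:
    os_image: str          # the OS image to deploy from
    hardware_profile: str  # CPU, memory, NICs
    os_profile: str        # domain join, sysprep settings and so on

@dataclass
class ServiceTemplate:
    vm_template: VMTemplate
    app_profiles: list = field(default_factory=list)

    def describe_deployment(self):
        # In System Center the deployment itself would be driven by
        # automation and workflow; here we just describe it.
        apps = ", ".join(self.app_profiles) or "no applications"
        return (f"{self.vm_template.os_image} on "
                f"{self.vm_template.hardware_profile} with {apps}")

# A hypothetical web tier offered through a self-service portal:
web_tier = ServiceTemplate(
    VMTemplate("WS2012-Standard.vhd", "2vCPU-4GB", "DomainJoin-Web"),
    app_profiles=["IIS", "OrderEntry-WebApp"],
)
print(web_tier.describe_deployment())
```

Because the application profiles sit in their own layer, patching the OS image updates every service built from it without touching the applications, which is exactly the decoupling the paragraph above describes.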
VMware and Red Hat
Windows is far from the only private cloud option. VMware’s vCloud Suite builds on its existing virtualisation framework to deliver what VMware calls “the software-defined datacentre.” Just like other private cloud solutions, vCloud is intended to change the way we use and manage data centres, letting IT departments move away from managing physical infrastructure and focus on delivering services to the business through a flexible, easy-to-configure virtual infrastructure. Aiming to turn data centre layers into software-defined services, vCloud mixes a wide selection of tools in a single suite, with automation tools to handle dynamic provisioning and self-service tools to help your users manage and deploy their own services.
The vSphere infrastructure tools provide much of the foundation needed to deliver the resources private cloud needs, with tools for handling compute and storage. Software-defined networking and security are handled by vCloud, while additional tools help you automate disaster recovery. With vCloud’s security tools you can continuously monitor your defined server pools for non-compliant behaviour, as well as wrapping groups of virtual servers into trust zones to control access.
Automation is an important part of the vCloud story, and the tools in vCloud should help you build virtual data centres that you can manage through policies, with tools to handle on-demand deployment of new and additional workloads. With vCloud you can encapsulate a set of linked virtual machines as a vApp and store them in a library. vApps can be deployed when required, provisioning storage and networking automatically.
Going open source with OpenStack
There’s another platform to bear in mind with regards to hybrid cloud: the open source OpenStack platform, originally developed by NASA and popularised and extended by Rackspace. The advantage of OpenStack is portability: you should be able to migrate from on-premises to public cloud, and then move a cloud workload of applications and services built to run on OpenStack from one OpenStack hosting provider to another with the minimum of changes, and there are already a number of providers beyond founder Rackspace. OpenStack could also form the foundation of a scalable hybrid cloud, mixing on-premises resources with services from multiple cloud providers.
Now run by a foundation with the code freely available under the Apache 2.0 licence, OpenStack has significant industry support from companies like Intel, HP, VMware, Red Hat and Cisco. It’s made up of a series of projects designed to fit together to deliver a cloud. Tools in the platform allow you to manage computing resources, networking and storage through a single dashboard. Recent work has added support for software-defined networking via OpenFlow, as well as object storage for managing services, data and virtual machines.
The issue is whether the platform itself is mature. For example, some rather basic security flaws have recently come to light and the security development for OpenStack is actually done through closed mailing lists, without any public disclosure mechanism for vulnerabilities that have been found and fixed. OpenStack has a great deal of potential but at this stage you should evaluate it thoroughly before committing to the platform.
VMware offers some hybrid cloud support too. If you’re working with a service provider that supports the underlying technologies, you can use vCloud Connector to move workloads from private to public clouds. As well as supporting links between private and public clouds, vCloud Director is designed for more complex implementations where you need to handle multiple tenants within a single infrastructure.
By building on the ESX hypervisor and vSphere, VMware takes a layered approach to delivering a foundation for private cloud. However as it stands, despite including tools for publishing a service catalogue, vCloud remains very much an infrastructure play and you’ll need to consider using VMware’s open source Cloud Foundry – or Microsoft System Center – to host and manage your applications.
Open source cloud platforms tend to be host agnostic. One exception to this is Red Hat’s cloud platform. Building on the virtualisation tools in Red Hat Enterprise, the Red Hat cloud lets you choose just how much of a cloud you want to implement, with layers that step up from hypervisor to platform-as-a-service, and from on-premise to service providers.
Building a Red Hat private cloud starts with the virtualisation tools built into Red Hat’s Linux servers, with support for storage virtualisation delivering what Red Hat calls ‘storage as a service’. You can use these to virtualise existing infrastructures using Red Hat CloudForms. With CloudForms you create and manage pools of resources, which can also include elements from a compatible public cloud. Application Blueprints let you manage applications with policies for how and where servers can be run, along with who can deploy and use services. The policy-driven approach of CloudForms supports self-service portals to simplify managing and deploying services while still ensuring that all the services running in your cloud are compliant with corporate and regulatory policies.
Another component of Red Hat’s cloud strategy is Red Hat Enterprise MRG. This lets you use a grid of virtualised servers for high-performance computing tasks with the framework providing tools for handling messaging between compute elements, giving you real-time connections and the ability to quickly scale a compute grid. At a higher level OpenShift lets you treat a private cloud as a platform. Taking advantage of tools like JBoss, it treats the underlying Red Hat Enterprise OS as a secure multitenant host, allowing you to deploy applications and services written in familiar languages, and using everyday techniques. It’s an approach that minimises learning curves and lets you quickly move existing code to a cloud, rather than having to develop everything from scratch.
While Red Hat’s cloud strategy may seem a little disjointed when compared to Microsoft and VMware, it’s an open approach that allows you to mix and match Red Hat technologies with those of other vendors. That means you can use Red Hat’s IaaS tools to host OpenStack, for example, simplifying interoperability with many large public cloud providers.
Realistically, though, private cloud will rarely give you everything you need, and all three vendors expect it to be just part of your infrastructure.