SQL Server 2014

by Tim Anderson

Tim Anderson checks out the latest version of Microsoft’s flagship database.

HardCopy Issue: 63 | Published: May 20, 2014

SQL Server is Microsoft’s flagship database product and has been a key part of the company’s platform for over 20 years. It comes in multiple editions ranging from a free Express edition, limited to 1GB RAM and 10GB maximum database size, up to the Enterprise edition which is limited only by the operating system and supports a wide range of features for scalability, high availability, management, integration, data analysis and reporting. The language of SQL Server is Transact-SQL (T-SQL), and on the programmability side there is support for .NET stored procedures. There is also PowerPivot for Excel for modelling and analysing data.

SQL Server has always been a robust and fast relational database engine, and Microsoft has carefully evolved the product to scale to today’s high data volumes and business analysis requirements. The new version has two areas of particular focus. One is a huge new feature, namely in-memory OLTP, which can speed up queries by a factor of 30 in certain scenarios. Real-world examples tend to give less spectacular but still worthwhile speed improvements. The other important feature is its extensive integration with Microsoft’s cloud platform, Azure.

 

In-Memory OLTP

Codenamed ‘Hekaton’, in-memory OLTP (On-line Transaction Processing) is more than just SQL Server with its data all held in memory. It is better to think of it as a new in-memory database engine that supports a substantial subset of T-SQL, and has the ability to compile stored procedures to native code for a further performance gain.

The reason Microsoft needed to create a new database engine is that such techniques require a different approach. A disk-based engine copies data from disk into memory, does something with it, and then saves it back to disk. By contrast, the Hekaton engine knows that all its data is already in memory and can be accessed directly. The engine is optimised to use new algorithms that no longer need to take disk reads and locks into account. Indexes are also in-memory and can be hash indexes rather than the usual B-trees. Optimistic concurrency means that waits and locks are minimised.

In-memory tables still comply with ACID (Atomicity, Consistency, Isolation, Durability) unless you specify non-durable tables. Non-durable tables are useful in certain scenarios, such as real-time reporting, where you might not need to store the data permanently. Otherwise, the in-memory engine persists data to disk in the form of a transaction log and checkpoint streams, which between them enable recovery in the event of a problem such as unexpected loss of power.

You can mix in-memory and disk-based tables in a single database, and query across them. The main requirement is that you add a memory-optimised filegroup to the database. Once you’ve done that you can create tables WITH (MEMORY_OPTIMIZED=ON). If you want a non-durable table, you can add DURABILITY=SCHEMA_ONLY.
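In outline, and as a minimal sketch (the database, table and procedure names here are invented for illustration), the T-SQL looks something like this:

    -- Add a memory-optimised filegroup and a container to an existing database
    ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB ADD FILE
        (NAME = 'SalesDB_mod_file', FILENAME = 'C:\Data\SalesDB_mod')
        TO FILEGROUP SalesDB_mod;

    -- Create a durable in-memory table; a primary key index is required,
    -- here a hash index sized with an estimated bucket count
    CREATE TABLE dbo.ShoppingCart (
        CartId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId INT NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

A natively compiled stored procedure against such a table, the source of the further performance gain mentioned earlier, follows this pattern:

    -- Natively compiled procedures require SCHEMABINDING, EXECUTE AS
    -- and an ATOMIC block
    CREATE PROCEDURE dbo.InsertCartItem @CartId INT, @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSUTCDATETIME());
    END;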

There are a few downsides to in-memory tables. One is that they have to fit in memory. Microsoft suggests having twice as much memory as the size of your data. If memory runs out, there is no failover to disk; transactions simply stop working. You can use the SQL Server Resource Governor to assign and control the amount of memory a database can use.
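As an illustration (a sketch; the pool and database names are invented), a database's in-memory allocation can be capped like this:

    -- Create a resource pool with a memory cap, then bind the database to it
    CREATE RESOURCE POOL Pool_InMemory
        WITH (MIN_MEMORY_PERCENT = 50, MAX_MEMORY_PERCENT = 50);
    ALTER RESOURCE GOVERNOR RECONFIGURE;

    -- Bind the database's memory-optimised tables to the pool;
    -- the binding takes effect when the database next comes online
    EXEC sys.sp_xtp_bind_db_resource_pool 'SalesDB', 'Pool_InMemory';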

In addition, the total size of all durable tables must not exceed 250GB, and Microsoft states that systems should have no more than 4 processor sockets and 60 cores for optimal performance.

Another issue is that only a subset of T-SQL is supported for in-memory tables. Most existing stored procedures will need some porting effort, as will tables that you move to in-memory. For example, IDENTITY columns are only partially supported; FOREIGN KEY, CHECK and UNIQUE constraints are not supported; ALTER INDEX does not work; SELECT INTO is not supported; and so on. Microsoft has a list of problem keywords and suggested workarounds at http://msdn.microsoft.com/en-us/library/dn246937(v=sql.120).aspx.

Finally, in-memory OLTP is only supported in the Enterprise or Developer editions of SQL Server. This makes the new features in SQL Server 2014 rather thin in the other editions.

Despite these limitations, the potential speed increase from in-memory OLTP is such that users will be able to live with some issues in this first release.

 

Integration with Azure

A key feature of SQL Server 2014 is its integration with Microsoft Azure, which takes several forms.

One, which sounds odd at first, is the ability to create and attach a database whose files reside on Azure blob storage. You can do this with SQL Server running either on-premise or in an Azure virtual machine (VM), although it is the latter scenario that Microsoft mainly has in mind, since attaching SQL Server files over the Internet gives poor latency. That said, smart caching means that performance can be tolerable, and this approach lets you easily transfer data in or out of Microsoft’s cloud.

Creating an Azure storage account for a SQL Server database.

Used in the context of SQL Server running on Azure VMs, blob storage does have advantages. This approach avoids the need to attach additional Azure data disks, and uses the virtual network driver directly to access the database files, increasing the available IOPS (I/O operations per second). It is also easy to detach database files from one SQL Server instance and attach them to another, enabling high availability, which is particularly useful since failover clusters are not supported on Azure VMs.

There are some limitations too. In-memory OLTP is not supported, Filestream data (which resides in external files rather than in the database files) is not supported, and Azure blob storage cannot use geo-replication. The maximum size of a single file in blob storage is 1TB, though you can use as many blobs as you need.

You implement SQL Server data files in Azure by generating a key string called a Shared Access Signature (SAS), which grants limited access to Azure storage, and storing this as a SQL Server credential. Once done, you can simply use the URL of the blob storage as the filename for database files in T-SQL. There is also an option to encrypt the database using Transparent Data Encryption (TDE).
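In outline it looks like this (the storage account, container and SAS token are placeholders):

    -- The credential name must match the URL of the blob container
    CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/data]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
        SECRET = '<SAS token for the container>';

    -- Database files are then addressed by URL
    CREATE DATABASE CloudDB
    ON (NAME = CloudDB_data,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/CloudDB.mdf')
    LOG ON (NAME = CloudDB_log,
        FILENAME = 'https://myaccount.blob.core.windows.net/data/CloudDB.ldf');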

SQL Server 2014 also has new support for instances running on Azure VMs, without the use of blob storage. A ‘Deploy SQL Server Database to Windows Azure VM’ wizard walks you through the steps and will even create a new VM. There are some preparatory steps, such as setting up an Azure management certificate or downloading a publish profile, and opening the necessary management port on Azure.

SQL Server licensing options

There are six editions of SQL Server, each of which can be licensed in various ways.

The first licensing model is server plus CAL, where you buy a license for each server installation and a Client Access License (CAL) for each user or device accessing the installation. One CAL covers any SQL Server version up to the version of the CAL, so a 2014 CAL allows you to access SQL Server 2012 or 2008, but not vice versa.

Alternatively, you can license SQL Server per CPU core. You buy a license for each core in the server, with each physical processor considered to have a minimum of 4 cores. No CALs are required. SQL Server Enterprise Edition can only be licensed per core.

Installing SQL Server on a virtual machine brings more options. You can license with server plus CAL (for editions which support this), or you can license per core, based on the number of virtual cores, again with a minimum of four per VM. Alternatively, you can fully license the host server or server farm with Enterprise Edition core licenses and Software Assurance (SA), in which case you can run unlimited VMs with SQL Server. This last option is the best for a private cloud.

For more complex scenarios, contact a licensing specialist such as Grey Matter for advice.

SQL Server Enterprise is the premier edition with many exclusive high availability and scalability features, including AlwaysOn Availability Groups, mirrored backups, hot-add memory and CPU, table and index partitioning, Resource Governor, and the hot new in-memory OLTP. You also need the Enterprise edition for high performance adapters for Oracle, Teradata, SAP and others, high performance transforms such as fuzzy grouping and lookup transformations, and data warehousing features including xVelocity memory-optimised columnstores.

Enterprise has a maximum database size of 524 petabytes (equivalent to 524,000 terabytes). Memory and compute capacity are defined as “operating system maximum”. Licensing is per core only.

The Business Intelligence edition is designed for data analysis and reporting, and has some features for which you need either this or the Enterprise edition. These include full Reporting Services, Master Data Services, Data Quality Services, Tabular BI Semantic Model features, PowerPivot for SharePoint, and data mining features such as cross-validation, advanced data mining configuration and tuning, parallel model processing and sequence prediction.

This edition is limited to 128GB memory per database engine instance, and the lesser of 4 CPU sockets or 16 cores, but there are no other compute limits. Licensing is server plus CAL only.

The Standard edition is designed for general purpose use, with the same core engine and T-SQL language and many of the same management tools as Enterprise. It has failover cluster support for 2 nodes only, replication support, full programmability, Multidimensional BI Semantic Model, basic data mining tools and basic Reporting Services.

The Standard edition is limited to 64GB memory per database engine instance and the lesser of 4 sockets or 16 cores. Licensing is either per core, or server plus CAL.

The Web edition is designed for third party service providers hosting web applications. It has the same compute and RAM limits as the Standard edition, and similar features, though with some restrictions such as subscriber-only replication, and no BI or data mining support. Licensing is only available through a Services Provider License Agreement (SPLA).

The free Express edition is for small-scale applications and websites, and is limited to a database size of 10GB, 1GB RAM per database engine instance, and the lesser of 1 socket or 4 cores.

The Express edition with Advanced Services includes full text and semantic search, SQL Server Data Tools and basic Reporting Services.

SQL Server Developer Edition has the same features as the Enterprise edition, but is licensed only for development use. Licensing is per developer. There is also ISV Royalty Licensing for ISVs and SIs building unified solutions on SQL Server.

Backup to Azure is a feature introduced in SQL Server 2012, but now enhanced. Using Backup to Azure, you can back up or restore using Azure blob storage. The main requirement is simply an Azure blob storage account. You can use T-SQL, or select ‘Back up to URL’ in the Management Studio Back Up Database wizard.
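In T-SQL it looks something like this (a sketch; the storage account, container and credential names are placeholders):

    -- Backup to URL authenticates with a credential holding the
    -- storage account name and access key
    CREATE CREDENTIAL AzureBackupCred
        WITH IDENTITY = 'mystorageaccount',
        SECRET = '<storage account access key>';

    BACKUP DATABASE SalesDB
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
        WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;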

A related feature is Managed Backup. This is a service accessed from SQL Server Management Studio, and is aimed at small businesses needing a simple solution for off-site backup. Managed Backup is built into SQL Server 2014 using stored procedures, and also relies on the SQL Server Agent. Presuming you have set up an Azure blob storage account, you can run the wizard and simply specify a retention period (the default is 30 days) in order to have the service back up all databases to Azure. There is also an encryption option.

The documentation states that “The type of backups scheduled and the backup frequency is determined based on the workload of the database.” Looking in more detail, this includes a full database backup at least weekly, and a transaction log backup every two hours, or more often if the database is particularly active.

You can monitor Managed Backup by running queries using the supplied stored procedures, and by enabling Database Mail and configuring email notifications. If you set it up using T-SQL, you can configure Managed Backup per database, rather than backing up all databases.
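As a sketch (the database and credential names are placeholders, and the parameter list is an assumption based on the smart_admin procedures that Managed Backup uses):

    -- Enable Managed Backup for a single database with 30-day retention
    EXEC msdb.smart_admin.sp_set_db_backup
        @database_name = 'SalesDB',
        @enable_backup = 1,
        @retention_days = 30,
        @credential_name = 'AzureBackupCred',
        @encryption_algorithm = 'NO_ENCRYPTION';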

Managed Backup works for both on-premise SQL Server and instances running in Azure virtual machines, but it is particularly recommended for Azure virtual machines since latency is low.

 

Clustered Columnstore Indexes

Microsoft introduced columnstore indexes in SQL Server 2012. Columnstore indexes are in-memory indexes that are oriented column-wise, which is more efficient for selecting only a few columns from a table. They are best suited for queries which scan the full table, typically for data warehouse applications, rather than queries which fetch only a small subset of data. In the right application, they can boost performance by up to ten times. Another advantage of column-wise indexes is better data compression, since values in each column tend to be similar.

The columnstore indexes in SQL Server 2012 are non-clustered and cannot be updated, other than by completely rebuilding the index. Non-clustered columnstore indexes can be combined with other indexes, and can be configured to index a subset of the columns in the table.
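For example (a sketch with invented table and column names), a nonclustered columnstore index covering a subset of columns:

    -- The underlying table remains row-wise and keeps its other indexes
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
        ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);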

New in SQL Server 2014 Enterprise edition is the clustered columnstore index. This is an updateable index that is also the primary storage method for the table. It must index all columns and cannot be combined with any other index. Like the nonclustered columnstore index, it is an in-memory technology capable of excellent data warehouse performance.

Another way of looking at this is to view a clustered columnstore index as a specialist in-memory table type suited for data warehouse queries and updates. You create a clustered columnstore index from an existing row-wise table, at which point the table is converted. If you then drop the index, SQL Server converts it back to a row-wise table.

Although they are updateable, clustered columnstore indexes become fragmented over time if updated, so a periodic index rebuild is recommended.
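In T-SQL the lifecycle looks like this (the table name is invented):

    -- Convert an existing row-wise table to clustered columnstore storage
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

    -- Periodic maintenance to remove fragmentation caused by updates
    ALTER INDEX cci_FactSales ON dbo.FactSales REBUILD;

    -- Dropping the index converts the table back to row-wise storage
    DROP INDEX cci_FactSales ON dbo.FactSales;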

 

Worth upgrading?

The ‘What’s new’ list for SQL Server 2014 is dominated by the in-memory OLTP feature, which can deliver remarkable performance benefits, provided that you have the Enterprise edition and a suitable application. This makes it a must-have upgrade for those who can use it, but less compelling otherwise, although improved Azure integration is also a significant benefit.

The updateable clustered columnstore index is an important data warehousing feature. Another feature worth a mention is the Buffer Pool Extension, which lets you use SSD (Solid State Drive) storage as extra memory.
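Enabling it is straightforward (a sketch; the file path and size are invented):

    -- Use an SSD-backed file as an extension of the buffer pool
    ALTER SERVER CONFIGURATION
        SET BUFFER POOL EXTENSION ON
        (FILENAME = 'E:\SSDCACHE\BufferPool.bpe', SIZE = 64 GB);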


The core of SQL Server is little changed in this release. That is no bad thing, considering what a solid product it is, but it does make this a minor release if you are not in a position to make good use of the big new features, especially for Standard edition users, for whom some of the new features are unavailable.

None of this takes away from the excellence of the product, which has a broad and deep range of features as well as strong performance.

Find Out More

More details on the Grey Matter website at www.greymatter.com. To discuss your database requirements further, call Grey Matter on 01364 654100 or email maildesk@greymatter.com.
