Inside Data 58

by Graham Keitch

Graham Keitch describes how new technologies are being used to improve database performance.

HardCopy Issue: 58 | Published: November 1, 2012

Getting the best database performance for intensive operations involves a wide range of factors, some of which have little to do with the database itself. The hardware infrastructure introduces a number of potential bottlenecks, disk access being one of them. For example, huge performance gains are possible if frequently used data can be loaded into high-speed memory first.
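
The in-memory products described below build this into the database engine itself, but the basic idea can be sketched in a few lines of Python: keep hot records in a memory-resident cache so that repeat reads never touch the disk. The record structure, function names and timings here are purely illustrative.

```python
import functools
import time

def read_order_from_disk(order_id: int) -> dict:
    """Simulate a slow disk read for a single order record."""
    time.sleep(0.01)  # stand-in for disk latency
    return {"order_id": order_id, "status": "shipped"}

@functools.lru_cache(maxsize=100_000)
def read_order(order_id: int) -> dict:
    """Serve repeat requests for hot records from memory instead of disk."""
    return read_order_from_disk(order_id)

if __name__ == "__main__":
    start = time.perf_counter()
    for _ in range(1000):
        read_order(42)  # the first call hits 'disk', the rest hit the cache
    print(f"1000 lookups of a hot record: {time.perf_counter() - start:.3f}s")
```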

Oracle’s TimesTen In-Memory Database is an add-on product available either as a complete in-memory database in its own right or as a cache for an existing Oracle database. It can also be used alongside Microsoft SQL Server.

Microsoft SQL Server 2012 Enterprise Edition now has its own set of in-memory technologies, known as xVelocity, whose origins can be traced back to PowerPivot and SQL Server 2008 R2. PowerPivot used an in-memory analytics engine that enabled Excel to process hundreds of millions of rows per second on a desktop.

Traditional relational database management systems store data in a row-oriented layout, which is appropriate for many workloads. However, it is not always ideal for the advanced analytical requirements of today’s BI and data warehousing systems, which often need an aggregate computed over a great many rows but involving only a few fields. In this situation a column-oriented layout is more efficient, because only the relevant columns have to be read rather than every complete row. It also helps compression, since many of the latest techniques work best on runs of similar adjacent values, which is exactly how serialised columnar data is stored.
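
A toy Python sketch, with entirely invented data and no claim to reflect how any real engine lays out its storage, makes the difference concrete: an aggregate over one column only needs to touch that column when the data is held column-wise.

```python
# The same four-column table held in row and column layouts.
rows = [
    # (order_id, customer, region, amount)
    (1, "Acme", "EU", 120.0),
    (2, "Bolt", "US", 75.5),
    (3, "Cargo", "EU", 310.25),
]

# Row-oriented aggregate: every field of every row is read.
total_row_layout = sum(r[3] for r in rows)

# Column-oriented layout: each column stored contiguously.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["Acme", "Bolt", "Cargo"],
    "region":   ["EU", "US", "EU"],
    "amount":   [120.0, 75.5, 310.25],
}

# Columnar aggregate: only the one relevant column is read, and its
# similar adjacent values (e.g. the repeated regions) compress well.
total_column_layout = sum(columns["amount"])

assert total_row_layout == total_column_layout
```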

Embarcadero’s DB Optimizer graphically portrays information about bottlenecks and other parameters that affect database performance.

SQL Server 2012 uses columnstore indexes for column storage, and users are reporting tenfold or greater performance gains on some queries when these are combined with improved compression and in-memory storage. BI and analytics are two areas where Microsoft has focused on delivering dramatic performance enhancements, and its database line-up has been reconfigured to reflect this. There is now a dedicated BI edition, although the most advanced features, such as proactive caching and xVelocity optimised columnstore indexes, are reserved for the Enterprise Edition.
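
As a rough illustration of what this looks like in practice, the sketch below uses Python with the pyodbc module to create a nonclustered columnstore index on a hypothetical dbo.FactSales table and then run the kind of aggregate query that benefits from it. The connection string, table and column names are all invented for the example.

```python
import pyodbc

# Connection details are placeholders; adjust driver, server and database.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=myserver;DATABASE=SalesDW;Trusted_Connection=yes"
)
cursor = conn.cursor()

# A nonclustered columnstore index covering the columns a typical
# warehouse aggregate touches. (In the 2012 release the indexed table
# becomes read-only while the columnstore index exists.)
cursor.execute("""
    CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales
    ON dbo.FactSales (OrderDateKey, RegionKey, SalesAmount);
""")
conn.commit()

# Aggregates over those columns can now be served from the
# compressed column segments rather than full row scans.
cursor.execute("""
    SELECT RegionKey, SUM(SalesAmount)
    FROM dbo.FactSales
    GROUP BY RegionKey;
""")
for region, total in cursor.fetchall():
    print(region, total)
```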

Oracle provides its high-end Enterprise Edition with a number of add-on products to improve the performance of data warehouse queries, including one for OLAP. Oracle Database 11g R2 introduced some advanced compression technologies that store related columns contiguously on data blocks and remove duplicate values. This reduces the amount of disk space needed as well as I/O and network traffic, allowing smaller and more affordable high-performance solid state disks to be deployed. The Partitioning Option for Oracle Database Enterprise Edition allows a table or index to be subdivided into smaller portions: you might choose to ‘remove’ old orders from a table, or create an optimised subset for a specific query or report. These leaner partitions can be managed individually or collectively to increase performance and availability.
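
The sketch below is a toy Python model of the partitioning idea, not Oracle’s implementation: rows are divided into monthly segments so that a monthly report scans only one segment, and old data can be removed a segment at a time rather than row by row.

```python
from collections import defaultdict
from datetime import date

# Orders partitioned by month, the way a range-partitioned table
# divides its rows among smaller segments.
partitions = defaultdict(list)

def insert_order(order_id: int, order_date: date, amount: float) -> None:
    key = (order_date.year, order_date.month)
    partitions[key].append((order_id, order_date, amount))

def monthly_total(year: int, month: int) -> float:
    # Partition pruning: only one partition is scanned, not the whole table.
    return sum(amount for _, _, amount in partitions[(year, month)])

def archive_before(year: int) -> None:
    # 'Removing' old orders becomes a cheap per-partition operation
    # rather than a row-by-row delete across the whole table.
    for key in [k for k in partitions if k[0] < year]:
        del partitions[key]

insert_order(1, date(2011, 3, 14), 99.0)
insert_order(2, date(2012, 7, 2), 250.0)
print(monthly_total(2012, 7))   # scans only the July 2012 partition
archive_before(2012)            # drops all 2011 partitions in one step
```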

 

Third party products

There are also a number of products from other companies that can help maximise database performance, including database drivers and tools for storage management and SQL code tuning.

Progress DataDirect has developed high-performance ODBC, JDBC and ADO.NET drivers for both Oracle Database 11g and SQL Server 2008. The company is also working on a JDBC driver for Microsoft SQL Server Parallel Data Warehouse, which will improve runtime performance for Java and J2EE applications connecting to it.

DataCore’s SANsymphony-V is a storage virtualisation solution whose advanced features can also help deliver improved performance. It automatically promotes frequently accessed disk blocks to faster devices in the storage subsystem, and creates SAN-wide caches to accelerate I/O.
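
The auto-tiering idea itself can be sketched in a few lines of Python; the thresholds, capacities and block numbers below are invented, and the real product does this at the storage layer rather than in application code.

```python
from collections import Counter

# A toy tiering policy: blocks that are read often enough are promoted
# from the slow tier to the fast tier, subject to capacity.
access_counts = Counter()
fast_tier, slow_tier = set(), set(range(1000))   # block numbers
PROMOTE_THRESHOLD = 5
FAST_TIER_CAPACITY = 100

def read_block(block: int) -> None:
    access_counts[block] += 1
    if (block in slow_tier
            and access_counts[block] >= PROMOTE_THRESHOLD
            and len(fast_tier) < FAST_TIER_CAPACITY):
        slow_tier.discard(block)
        fast_tier.add(block)      # promote the hot block to the fast device

for _ in range(10):
    read_block(42)                # block 42 becomes 'hot'
print(42 in fast_tier)            # True: served from the fast tier from now on
```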

The SQL code itself is often the cause of poor performance. Tools such as Embarcadero’s DB Optimizer XE3 and Idera’s SQL doctor v3.0 can help identify and correct problem code, as well as other factors that can compromise performance, such as hardware configuration.
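
These tools automate the analysis, but a hand-rolled example shows the kind of rewrite involved: a filter that wraps an indexed column in a function typically prevents an index seek, while expressing the same condition as a range restores it. The Python sketch below, again using pyodbc with a placeholder DSN and an invented dbo.Orders table, simply times both versions.

```python
import time
import pyodbc

# Two versions of the same report query against a hypothetical Orders table.
# The first wraps the indexed column in a function, which stops the optimiser
# using an index seek; the rewrite expresses the same filter as a range.
SLOW_SQL = "SELECT COUNT(*) FROM dbo.Orders WHERE YEAR(OrderDate) = 2012;"
FAST_SQL = ("SELECT COUNT(*) FROM dbo.Orders "
            "WHERE OrderDate >= '2012-01-01' AND OrderDate < '2013-01-01';")

def time_query(cursor, sql: str) -> float:
    start = time.perf_counter()
    cursor.execute(sql)
    cursor.fetchall()
    return time.perf_counter() - start

conn = pyodbc.connect("DSN=SalesDW;Trusted_Connection=yes")  # placeholder DSN
cursor = conn.cursor()
print("function on column:", time_query(cursor, SLOW_SQL), "s")
print("sargable rewrite:  ", time_query(cursor, FAST_SQL), "s")
```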

With the availability of more powerful multi-core CPUs, fewer processors need to be thrown at the task, which can potentially reduce the licensing overhead. Microsoft has adopted per-core licensing for SQL Server 2012, and Oracle also counts cores in the case of its Enterprise Edition. However, processor licensing for the Standard and Standard Edition One editions is based only on the number of occupied CPU sockets. Performance is paramount for some applications, but it usually has to be balanced against overall cost, which is where hardware, virtualisation and licensing considerations enter the equation.
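
As a purely illustrative piece of arithmetic (real licence terms vary by product, edition and agreement), the snippet below contrasts core-based and socket-based counting for the same two-socket, eight-core-per-socket server.

```python
# Illustrative only; actual licensing rules differ between vendors and editions.
sockets, cores_per_socket = 2, 8

per_core_licences   = sockets * cores_per_socket   # core-based counting
per_socket_licences = sockets                      # socket-based counting

print(per_core_licences, per_socket_licences)      # 16 vs 2

# Consolidating the same workload onto a single faster eight-core socket
# halves both figures, which is why core counts now feed into sizing decisions.
```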

Find Out More

Graham Keitch is the database pre-sales specialist at Grey Matter and has worked in IT for over 20 years. You can find out more about improving database performance by emailing him at grahamk@greymatter.com or you can speak to him or his colleagues on 01364 654100.
