Intel AI Day

by Kay Ewbank

Kay Ewbank investigates Intel’s recent excursions into Artificial Intelligence.

HardCopy Issue: 71 | Published: May 10, 2017

Intel hardware has played a role in just about every implementation of artificial intelligence (AI), but until recently few people would have described that role as particularly significant. Then last year Intel acquired deep learning specialist Nervana Systems, and has used a series of Intel AI Day events to demonstrate its intention to become a key player in the field of deep neural networks (DNNs).


Stephan Gillich, Director of Technical Computing, Analytics and Artificial Intelligence GTM for Intel’s EMEA Datacenter Group.

The task of producing a useful neural network is usually split into two workloads, namely training and inference. Training is the part where you teach the neural network its specialist knowledge, and at the moment this sector is dominated by NVIDIA GPUs (Graphics Processing Units). Intel intends to change that, and has used its Nervana portfolio to demonstrate how it is going about it. For developers, the news splits into several areas: the hardware that Intel says will replace GPUs; the software it is making available for developers; and the alliance it has announced with Google to accelerate the use of AI in the enterprise.

Intel’s Stephan Gillich, Director of Technical Computing, Analytics and Artificial Intelligence GTM for Intel’s EMEA Datacenter Group, says the important message is that AI is something that’s happening now, not a future goal to wait for: “AI used to be just a research subject, but various vectors coming together have changed that. Greater processing power means the algorithms running on the processors aren’t limited, and the research methods and deep learning techniques have advanced to the point where they’re really strong. The final element is the availability of endless data to work on, to train neural networks, to provide the underlying information.”


Hardware

At these events Intel has shown off and discussed several new hardware products designed for DNN workloads, all of which it claims will be faster than rival GPU-based offerings. The fact that Intel is developing an architecture specifically for AI use shows just how important it believes the AI market to be: this will be the first time one of the major semiconductor companies has targeted anything so specific.

Much of the technology behind the products is based on the Nervana Engine, a machine learning chip that was under development at Nervana prior to Intel's acquisition of the company in 2016. That work has continued at Intel, and the first product to come out of it has been previewed at the Intel AI Day events. Code-named Lake Crest, it is a deep learning accelerator purpose-built to train neural networks. The design includes up to 32GB of integrated high bandwidth memory (HBM2), capable of transferring data at 1TB per second with a 2GHz clock speed. Stephan Gillich says Lake Crest will power future best-in-class neural network performance, and the speakers at the AI events made clear that Intel's goal is for Nervana technology to achieve a 100-fold improvement for DNNs over today's "best GPU" solutions by 2020.

Lake Crest will use a numerical format called Flexpoint that has been designed specifically for deep learning code. Flexpoint sits somewhere between fixed and floating point: it approaches the precision of floating point while executing with close to the efficiency of an integer unit, which gives much higher computational density and lower power per operation on deep learning training workloads. This will enable the Lake Crest chip to deliver up to 10 times the parallelism of current GPUs. Lake Crest is being tested in the first half of this year and is due to become available later in the year.
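
The underlying idea, broadly, is that all the elements of a tensor share a single exponent, so each element needs only an integer mantissa. The Python sketch below illustrates shared-exponent encoding in general rather than Intel's actual implementation; the function names and the exponent-selection rule are our own for illustration:

    import numpy as np

    def flex_encode(tensor, mantissa_bits=16):
        """Encode a non-zero tensor as integer mantissas plus one shared exponent."""
        max_val = np.max(np.abs(tensor))
        # Pick the exponent that lets the largest magnitude just fit
        # in a signed mantissa of the requested width.
        exponent = int(np.ceil(np.log2(max_val))) - (mantissa_bits - 1)
        mantissas = np.round(tensor / 2.0 ** exponent).astype(np.int32)
        return mantissas, exponent

    def flex_decode(mantissas, exponent):
        """Recover approximate floating point values from the shared format."""
        return mantissas.astype(np.float64) * 2.0 ** exponent

    weights = np.random.randn(4) * 0.01
    m, e = flex_encode(weights)
    print(weights)
    print(flex_decode(m, e))   # close to the original values

Because every element shares the exponent, the multiply-accumulate operations at the heart of training reduce to integer arithmetic, which is where the claimed density and power advantages come from.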

A further product called Knights Crest was also announced at the AI Day events. This integrates Xeon processors with Nervana technology, and will be a commercial release, likely to come at a lower cost and be aimed at more general use than the specialist Lake Crest. The integration should be relatively easy from an engineering viewpoint, and could build on the experience Intel gained integrating the Altera FPGAs. Going forward, the Xeon, Core and Quark processors and the FPGAs will be optimised with Nervana technology and software, and will be available for use in inference processing, where the trained neural network is put to use.

Work is still continuing on the next generation of Intel Xeon Phi processors (code-named Knights Mill). These are aimed at deep learning applications and will deliver up to four times better performance than current options. They should be available in 2017.


Software

Alongside the hardware, Intel has software for use by data scientists and developers, starting with the Intel Deep Learning SDK. This is a free set of tools for developing, training and deploying deep learning solutions. The SDK includes a training tool and a deployment tool which can be used either separately or together. It is currently available in beta and supports the Intel Distribution for Caffe, with more frameworks and domains to come.

The SDK lets you visualise real-time data when you're creating DNNs, without the need to program. You can install deep learning frameworks and set up, tune and run deep learning algorithms. This lets you prepare training datasets, design models with automatically optimised hyperparameters, launch and monitor the training of multiple candidate models, and visualise training performance and accuracy.

Stephan Gillich says the SDK is a key element of Intel’s AI software strategy: “Developers and data scientists can develop, train and deploy analytics by using the Deep Learning SDK. The SDK delivers end-to-end performance, a rich user experience and tools to boost productivity. This is one key element of the software stack, along with the Intel Nervana Graph Compiler, that ensures front-ends are usable and optimised for the hardware the stack is running on. We’re actively providing tools to make it faster to get results.”

The Nervana Graph Compiler (ngraph) is currently in preview. The current release consists of an API that can be used to create computational graphs (ngraphs); two higher-level front-end APIs (TensorFlow and Neon) that use the ngraph API for common deep learning workflows; and a transformer API for compiling these graphs and executing them on both GPUs and CPUs.

Gillich describes the Graph Compiler as sitting on top of libraries and optimising processes so they run well on the platform: "If you take a front end [such as Neon] and you have a task you want to run optimally on a machine, that's the role of the Nervana Graph Compiler. It takes as input a graph and finds the optimal set of functions and data layouts to execute the functions."
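
To give a flavour of the preview, here is a minimal sketch that builds a trivial graph with the ngraph API and compiles it with a transformer. The module and function names follow the preview release's walkthrough examples, so treat them as indicative rather than final:

    import ngraph as ng
    import ngraph.transformers as ngt

    # Build the graph: a scalar placeholder and one arithmetic operation.
    x = ng.placeholder(())
    x_plus_one = x + 1

    # The transformer compiles the graph for the selected backend
    # (CPU by default) and hands back a callable computation.
    transformer = ngt.make_transformer()
    plus_one = transformer.computation(x_plus_one, x)

    for i in range(4):
        print(plus_one(i))   # 1.0, 2.0, 3.0, 4.0

The separation matters: the graph you build is hardware-neutral, and it is the transformer's job to map it onto whatever functions and data layouts suit the target machine.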

Neon also came to Intel as part of the Nervana acquisition. It is an open source, Python-based framework and set of libraries for developing deep learning models. The developers say it is more than twice as fast as other deep learning frameworks such as Caffe and Theano, and that this is achieved through assembler-level optimisation, multi-GPU support, optimised data loading, and use of the Winograd algorithm for computing convolutions.
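
As a flavour of what Neon looks like in practice, here is a minimal multi-layer perceptron loosely following the project's published MNIST example. The module and class names are taken from neon's documentation at the time and may differ between releases:

    from neon.backends import gen_backend
    from neon.callbacks.callbacks import Callbacks
    from neon.data import MNIST
    from neon.initializers import Gaussian
    from neon.layers import Affine, GeneralizedCost
    from neon.models import Model
    from neon.optimizers import GradientDescentMomentum
    from neon.transforms import CrossEntropyMulti, Rectlin, Softmax

    # Select a backend; neon also supports 'gpu' where available.
    be = gen_backend(backend='cpu', batch_size=128)

    mnist = MNIST(path='data/')
    train_set = mnist.train_iter

    # A small MLP: one hidden layer, softmax output over the ten digits.
    layers = [Affine(nout=100, init=Gaussian(scale=0.01), activation=Rectlin()),
              Affine(nout=10, init=Gaussian(scale=0.01), activation=Softmax())]
    mlp = Model(layers=layers)

    mlp.fit(train_set,
            optimizer=GradientDescentMomentum(0.1, momentum_coef=0.9),
            cost=GeneralizedCost(costfunc=CrossEntropyMulti()),
            num_epochs=10,
            callbacks=Callbacks(mlp))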

Alongside the software, Intel has introduced the Intel Nervana AI Academy, which provides online developer access to training and tools. The Academy will host meetups and offer onsite, online and event-based training, along with what Intel is describing as "expansion of enablement activities". It has three tracks aimed at students, professional developers and startup companies. The developer track includes certification, lecture series, competitions and access to tools, delivered through published training programs. The online training provides guides on using Intel's frameworks and technology, along with more general training on better ways to collect data, analyse it efficiently and present results.
Intel also announced a partnership with training company Coursera to provide a series of online AI courses.


Google Alliance

Alongside the hardware and tools, Intel announced a strategic alliance with Google focusing on Kubernetes (containers), machine learning, security and IoT. In the immediate term, the alliance is working on optimising both the TensorFlow library and the Kubernetes open source container management platform. TensorFlow is a software library that Google open-sourced in 2015. It was developed for conducting machine learning and DNN research, and expresses computations as data flow graphs. Kubernetes is another open source project from Google. It can be used to automate the deployment and management of application containers in general, including AI applications.
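
For readers who haven't met it, the data flow model separates defining a computation from running it. A minimal example using the TensorFlow 1.x API of the period:

    import tensorflow as tf

    # Define the graph: nodes are operations, edges carry tensors.
    a = tf.placeholder(tf.float32, name='a')
    b = tf.placeholder(tf.float32, name='b')
    c = tf.multiply(a, b, name='c')

    # Nothing has been computed yet; a Session executes the graph
    # on whatever hardware is available.
    with tf.Session() as sess:
        print(sess.run(c, feed_dict={a: 3.0, b: 4.0}))   # 12.0

It is precisely this split that makes the optimisation work possible: the same graph can be scheduled onto different processors without changing the model code.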


Intel’s Diane Bryant (left) shares the stage with Diane Greene from Google Enterprise at the first Intel AI Day event in San Francisco.

Gillich told us that the two companies are cooperating to improve both: “We’re working with Google on both TensorFlow and Kubernetes. Our aim is to accelerate TensorFlow on our processors to allow deeper parallelism, and to optimise Kubernetes for the Intel architecture in terms of both performance and improved infrastructure management.”

The work on optimising TensorFlow means deep learning applications should run much faster on Intel processors. The Intel Xeon Phi processor, for example, is designed to scale out in a near-linear fashion across cores and nodes, dramatically reducing the time taken to train machine learning applications. TensorFlow can then scale with future performance advancements as Intel continues to enhance its processors to handle even bigger and more challenging AI workloads.
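
Some of that CPU tuning is already exposed through TensorFlow's standard session options. The sketch below shows the threading knobs commonly adjusted for multi-core processors; the values are placeholders to match your own core count, not Intel's recommendations:

    import tensorflow as tf

    # intra_op: threads available inside a single op (one big matmul, say);
    # inter_op: how many independent ops may run concurrently.
    config = tf.ConfigProto(intra_op_parallelism_threads=16,
                            inter_op_parallelism_threads=2)

    x = tf.random_normal([2048, 2048])
    y = tf.matmul(x, x)   # a large op that benefits from intra-op threads

    with tf.Session(config=config) as sess:
        sess.run(y)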

When Intel bought Nervana, many analysts worried that Intel would fail to make full use of the possibilities it offered. The strategy and products announced at Intel AI Day have proved the doubters wrong. Intel intends to become the major player in the neural network market, and is putting a lot of effort into doing so. When mainstream companies like Intel think the time and money are well spent, it means that AI itself is entering the mainstream. As Stephan Gillich says, AI is something that's happening now, not a future goal that we need to wait for.

Find Out More

Intel's AI resources are available on the Intel website, and you can find out more about Intel from Grey Matter at www.greymatter.com/corporate/showcase/intel-software.
