Linley 2018 Fall Processor Conference on Halloween: Nothing Scary About AI Here

By Jay Walstrum - 2018-11-09

The Linley Fall 2018 Processor Conference took place on Halloween. There was nothing too scary happening, unless you are afraid of the performance requirements for moving information, or of needing to run faster than ever for markets like automotive, the industrial internet of things (IIoT) and data centers (cloud). Day one was full of interesting views, but it all comes back to the ability to move data around and the memory needed to support the processing that turns data into information, thus accelerating intelligence.

Linley Gwennap kicked off the morning, using artificial intelligence (AI) as his example of why architectures will need to change to deliver performance for the neural networks now being created. He outlined how coming machine learning and deep learning workloads will stress traditional processing architectures.

Linley then walked through market examples: what NVIDIA is doing in automotive with Xavier, how IIoT will be adopted in consumer applications (driving volume and adoption up), and how data centers are pushing interfaces to PAM4 at 400G, aided by the broader adoption of smart network interface controller (NIC) cards that offload costly processing that would otherwise burden the server CPU.

In the next session, Kevin Deierling, VP of Marketing at Mellanox, talked about how smart NICs can improve data center security using “BlueField.” He equated the old and new ways of protecting a data center to Halloween candy. It used to be like M&Ms, which are hard on the outside and soft on the inside. In the new data center with many different workloads, we now need to move to the jawbreaker model: hard on the outside and hard on the inside. This was a great setup for Salman Jiva of Micron to follow up.

Linley Conference 1

Salman Jiva, Micron Senior Business Manager

Salman Jiva, Micron's Senior Business Development Manager, provided an intriguing discussion of improving server efficiency by optimizing the network interface and its memory. He outlined the market trends, the need for speed in I/O and where standard server memories are trending. Salman described what the modern cloud looks like and what the applications running on it will require. He then introduced the value of smart NICs, which enable nimble solutions like packet forwarding so that the CPU can focus on the processing it needs to execute: a more efficient CPU, with more work offloaded to the smart NIC. Salman closed by outlining the benefits of different memory options and comparing the requirements of 400G network bandwidth when using DDR4 versus GDDR6.

Servers still need density, while smart NICs need higher bandwidth, with thought given to cost, implementation and power. GDDR6 addresses the power, board area and total cost of ownership concerns for NICs. Not too scary for Halloween, but eye-opening trends for the memory and processing architectures that will meet the coming demand for AI. It is all about accelerating intelligence while meeting power and total cost of ownership targets.
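To see why GDDR6 is attractive at 400G, a rough back-of-the-envelope calculation helps. The sketch below is illustrative only: the per-channel and per-device bandwidth figures are generic datasheet-style peaks I am assuming, not numbers from the conference, and the 2x buffering factor (each packet written to memory and read back out) is a simplifying assumption.

```python
from math import ceil

# Assumed figures for illustration, not from the presentation:
LINE_RATE_GBPS = 400                       # 400G Ethernet link
LINE_RATE_GBYTES = LINE_RATE_GBPS / 8      # 50 GB/s of traffic

# Assume each packet is buffered in memory (one write, one read),
# so memory traffic is roughly twice the line rate.
MEM_TRAFFIC_GBYTES = 2 * LINE_RATE_GBYTES  # 100 GB/s

DDR4_CHANNEL_GBYTES = 25.6   # assumed: DDR4-3200, 64-bit channel peak
GDDR6_DEVICE_GBYTES = 56.0   # assumed: 14 Gb/s/pin, x32 device peak

ddr4_channels = ceil(MEM_TRAFFIC_GBYTES / DDR4_CHANNEL_GBYTES)
gddr6_devices = ceil(MEM_TRAFFIC_GBYTES / GDDR6_DEVICE_GBYTES)

print(f"DDR4 channels needed: {ddr4_channels}")   # -> 4
print(f"GDDR6 devices needed: {gddr6_devices}")   # -> 2
```

Under these assumptions, a handful of GDDR6 devices can supply bandwidth that would take several full DDR4 channels, which is where the pin-count, board-area and power savings for a NIC come from.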

Day two opened with a riveting, energetic presentation from Google AI leader Cliff Young, who led a discussion of the tensor processing unit (TPU) architecture's history and the challenges of inference versus training. He pointed to the number of AI papers and projects as evidence of the exponentially growing interest in, and conversation around, AI.

These were just a few of the presentations at Linley. Other companies and topics included Synopsys comparing IP for LiDAR and radar, Arm on its newly released automotive IP core, Rambus on memory systems for AI, and Ryan Baxter of Micron on how AI is shaping the next generation of memory solutions.

Linley Conference 2

Ryan Baxter, Micron Director of Cloud & Networking

The Linley conference did a great job of covering the critical concerns of safety, security, power and performance, with very good dialogue and sometimes controversial approaches to compute architectures. One common theme is the need for appropriate memory and storage bandwidth to “feed the beasts” in applications like AI and accelerate intelligence.

Jay Walstrum

Jay Walstrum is a Senior System Architect for Micron’s Compute and Networking Business Unit, where he is focusing on new opportunities in the three to five-plus year timeframe. In his role, he also applies innovative technologies and memory architectures to solve customer system-level challenges.

Jay’s responsibilities include working closely with a market-strategy team to identify new technology, applications, product/architecture specifications, customers, and markets for new product concepts. He also actively works with customer system architects, technologists, third-party developers, and Micron’s Research and Development team to identify, define, and architect innovative memory solutions.

Before joining Micron in January 2013, Jay spent 22 years at Xilinx Corp., where he held positions ranging from Director of Quality to Senior Product Planning Manager to Strategic and Technical Marketing Leader.

Jay holds a BSEE from the University of Southern California and 11 patents in the areas of FPGA system solutions and memory interface architecture.