Jakob Morrison, from Micron's Applications Team, reports from the Linley Spring Processor Conference.
The Linley event is normally very processor-focused, but the first day held some great surprises. The first was the packed room: many more chairs had been added compared with previous years.
Linley Gwennap opened the keynote with artificial intelligence (AI) as the focus. He outlined a variety of companies offering AI intellectual property (IP), including CEVA, Synopsys, Videantis, AImotive (aiWare), Cadence, Imagination, Cambricon, VeriSilicon, and NVIDIA's open-sourced IP targeting automotive driving chips.
Someone astutely asked Linley, “Where is AI on the roadmap of rollout/adoption along the timeline to maturity?” Linley responded with a baseball analogy: “We are in the third inning” (of a nine-inning game). Paraphrasing his remarks, we now understand enough to know that the traditional fetch/compute model is not going to satisfy the analytics needs of these applications. Neural networks are well enough defined to develop hardware that achieves great improvements in accelerating the identified workloads. Continued development of use cases will further refine the value of hardware, tools, and IP as we move forward.
AI relates back to memory, and to a memory vendor, because memory is consistently the bottleneck. The repeating mantra, now seemingly the conclusion, is captured in my favorite quote of the day:
“AI and deep learning is actually a memory business, not a processing business.”
Although the greater part of the day was spent on processing architectures and IP for silicon solutions, the range of AI types and applications demands a variety of memory densities and, more importantly, high memory bandwidth. Several speakers outlined this for autonomous driving, where the different levels (0-5) require different memory characteristics: the simplest autonomous driving applications use LPDDR, moving up to GDDR6 and then to HBM for the highest bandwidth requirements. Thus, the application determines the memory bandwidth required and which memory best satisfies the power, cost, and performance targets for each app.
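As a rough illustration of that selection logic (my own sketch, not anything from the speakers' slides), the snippet below maps a required bandwidth to a memory class. The bandwidth thresholds and the level-to-bandwidth figures are hypothetical assumptions chosen for the example, not Micron or Linley numbers:

```python
# Illustrative sketch: picking a memory technology by required bandwidth.
# The thresholds and level-to-bandwidth mapping below are hypothetical
# examples, not figures from the conference presentations.

# Approximate per-device peak bandwidth classes (GB/s), ordered cheapest first.
MEMORY_CLASSES = [
    ("LPDDR5", 50),    # low power, modest bandwidth
    ("GDDR6", 500),    # mid-range bandwidth
    ("HBM2E", 1500),   # highest bandwidth, highest cost
]

def pick_memory(required_bandwidth_gbs: float) -> str:
    """Return the first (cheapest) memory class meeting the requirement."""
    for name, bandwidth in MEMORY_CLASSES:
        if bandwidth >= required_bandwidth_gbs:
            return name
    raise ValueError("No single memory class meets the requirement; "
                     "consider multiple stacks or channels.")

# Hypothetical bandwidth needs by autonomy level (GB/s), illustration only.
autonomy_bandwidth = {0: 10, 1: 20, 2: 60, 3: 200, 4: 800, 5: 1200}

for level, bw in autonomy_bandwidth.items():
    print(f"Level {level}: ~{bw} GB/s -> {pick_memory(bw)}")
```

The point of the sketch is simply that once an application's bandwidth requirement is known, the memory choice follows from the power/cost/performance trade-off, which is exactly the progression the speakers described.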
A second supporting point was the role of memory and storage: “Data is the new currency,” or “Data makes the king.” In other words, the company that owns the data is the “king,” or the wealthiest of all. For AI, this means training data sets: the larger or more accurate the data set for the application, the better the performance during inference. Hardware to implement AI is not useful without a data set to train the neural network.
To close, some thought-provoking quotes from today to ponder:
- “AI is about parallel computing using shared memory.”
- “AI and deep learning is actually a memory business, not a processing business.”
- “The new trend will be MORE open system designs coming about due to Moore's Law ending.”
- “Most of the power being burned is due to the need to move the data.”
- “Storage is more important for training in AI applications than for inference.”
I am looking forward to day 2! To reserve copies of the Linley material presented by Micron, please click on the following: