
Scaling memory and storage for big memory workloads

Ryan Baxter | October 2021

When VMware wanted to host leading memory technology partners for a discussion at VMworld 2021, Micron was there. Speaking about “Big Memory – An Industry Perspective on Customer Pain Points and Potential Solutions” gave me a great opportunity to bring Micron’s perspective on innovation within memory and storage to the industry's premier multi-cloud event. I invite you to view this panel discussion on how to improve the performance, resiliency and scalability of data center memory subsystems. (VMworld registration is free.)

Announcing Project Capitola

During the panel, VMware announced the launch of its “Project Capitola” initiative, a transformative new approach to building technology solutions that reaps the benefits of Big Memory while scaling individual solutions across use cases. By innovating ways to simplify the complexity of compute and memory subsystems, VMware, along with other technology leaders, will help customers adapt to the transformation of the data center and turn their growing data into valuable information.

The challenge for data center architects is the exponential growth of data, coupled with the demand to convert that data into insights faster than ever before. This requires a re-examination of current data center architectures. As I mentioned during the panel, compute workhorses like x86 architectures will continue to play a central role. But the evolving data center has shown us that one hammer is no longer the right tool for every job. Future data centers need to support and integrate heterogeneous compute, a re-imagined memory and storage hierarchy, and an open, agnostic interconnect — such as Compute Express Link (CXL) — to tie it all together and enable composable systems that can evolve with workloads.

With the industry moving toward heterogeneous compute models, memory subsystem innovations are needed to pair with evolving compute designs. As the selection of compute hardware (CPUs, GPUs, FPGAs, TPUs, etc.) expands and becomes differentiated and optimized for individual workloads, memory subsystems must adapt. Otherwise, the compute will starve, never reaching its full potential or doing the important work of transforming data into valuable insights.

The customer’s perspective

Another key takeaway from the panel discussion was the importance of listening to customers’ concerns about data center expenses and how memory and storage increasingly contribute to that expense. Customers understandably want to know how to maximize the value of their investment. They want to know how the ecosystem can increase their ROI and deliver big benefits without over-provisioning servers, reducing the effort it takes to reach the optimal balance of cost and performance.

Heterogeneous workloads, memory architecture tiering

With the expansion of heterogeneous compute, the industry will focus on memory-optimized subsystems, including tiered memory architectures. The engineering challenge is optimizing memory tiering to deliver on both performance and cost across many heterogeneous workloads, while fitting cleanly within the software stack.

Applications and software will lag behind infrastructure transformations by a significant amount of time, probably years; it takes that long for them to be optimized to take full advantage of hardware advances. Earlier engagement and collaboration among application developers, software vendors and VMware, built on a more complete and aligned understanding of end customers’ goals, will help those customers realize benefits more quickly.

Those benefits can come from managing innovations in tiered memory subsystems with a solution like CXL-attached memory from Micron, which is designed to run in the background and help keep data moving. Automatically identifying and moving hot, warm and cold data to the appropriate tiers helps improve performance and solution efficiency.
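To make the tiering idea concrete, here is a minimal sketch of an access-frequency-based placement policy of the kind a background memory manager might apply. Everything in it — the `Tier` and `Page` structures, the thresholds, and the `place` and `rebalance` functions — is an illustrative assumption for this post, not Micron’s or VMware’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    NEAR = "near"      # e.g., high-bandwidth memory close to the compute
    DIRECT = "direct"  # e.g., DDR5 DRAM on a local channel
    FAR = "far"        # e.g., CXL-attached memory expansion

@dataclass
class Page:
    address: int
    access_count: int  # accesses observed in the last sampling window

# Illustrative thresholds; a real policy would tune these per workload.
HOT_THRESHOLD = 100
WARM_THRESHOLD = 10

def place(page: Page) -> Tier:
    """Map a page to a tier by observed access frequency:
    hot -> near memory, warm -> direct-attached DRAM, cold -> far memory."""
    if page.access_count >= HOT_THRESHOLD:
        return Tier.NEAR
    if page.access_count >= WARM_THRESHOLD:
        return Tier.DIRECT
    return Tier.FAR

def rebalance(pages: list[Page]) -> dict[int, Tier]:
    """One background pass: return the new placement for each page."""
    return {page.address: place(page) for page in pages}

if __name__ == "__main__":
    sample = [Page(0x1000, 500), Page(0x2000, 25), Page(0x3000, 2)]
    for addr, tier in rebalance(sample).items():
        print(hex(addr), "->", tier.value)
```

Even in this toy form, the design choice is visible: the policy runs periodically in the background, so applications see only the performance effect of better placement, not the placement logic itself.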

Data centers are calling for and embracing innovation within memory subsystems — innovations like the broader implementation of near memory (high-bandwidth memory); advances in direct-attached memory like DDR5 DRAM; and the various techniques that CXL is enabling with far memory. Software-managed memory can also help deliver a flexible infrastructure with added composability. With this encouraging, evolving outlook, the industry is moving forward to deliver high value across a broad range of workloads.

Ryan Baxter
Sr. Director, Data Center Segment

Ryan Baxter is senior director of Cloud, Enterprise and Networking at Micron.