
Virtualized computing gets better on Microsoft Azure Stack HCI and Micron SSDs

Yasunori Ema | November 2020

Virtualized IT infrastructure is popular, as more than 80% of businesses virtualize their workloads to manage server sprawl and siloed storage, free up rack space, increase overall efficiency and flexibility, and improve disaster recovery.

Organizations considering the move to virtualized infrastructure must base their decisions on a combination of financial and technical criteria. Micron IT Engineering has worked with internal Micron teams that are interested in understanding these benefits but not yet ready to invest in a broad virtualized deployment. We also talk to customers who may already run a hybrid cloud on Microsoft Windows Server Enterprise, which often lets them investigate virtualized computing with no additional license costs. Read on to learn how Micron used Microsoft Azure Stack HCI to achieve high performance.

Evaluating Hyperconverged Infrastructure

Micron IT, like most other IT departments, emphasizes flexibility in the design and deployment of converged and hyperconverged infrastructure (HCI). We continue to evaluate HCI platform performance with the major hypervisor solutions (typically for design teams and lab environments). As a global IT manager at Micron, I help our teams provide the customized virtual machines that our engineers require.

For these teams, Microsoft Azure Stack HCI combined with Micron data center SSDs can offer significant benefits.

Microsoft’s software-defined storage (SDS) offering, Storage Spaces Direct, can use remote direct memory access (RDMA) technology, which is becoming mainstream across IT environments around the world. Its storage flexibility enabled me to seamlessly and economically combine the Micron 9300 NVMe™ SSD with the Micron 5300 SATA SSD into a high-performance solution. In addition, Storage Spaces Direct allowed me to use Micron’s latest persistent memory products designed for enterprise data centers. Persistent memory accelerates performance while delivering significant cost efficiencies.

We have the tests to prove it!

The Micron IT Engineering team worked with Hewlett-Packard Enterprise (HPE) Japan to confirm the performance capability of Azure Stack HCI using Micron products and HPE server hardware. The results: This configuration* achieved 544,000 IOPS with a 4KB I/O block on a 100% random workload (67% read / 33% write), using Micron 9300 NVMe and 5300 SATA SSDs on two HPE ProLiant® DL380 Gen10 servers, each with two PCIe 100GbE network interface cards (NICs). Figure 1 details the configuration.

 

*Two-node S2D cluster (2x NVMe 9300 + 4x SSD 5300 per node) with 2x MCX516A-CCAT 100GbE NIC dual port QSFP28; 80 virtual machines (40 VM per node) – Azure D1 size

Figure 1: Configuring Cache and Capacity in Azure Stack HCI
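To put the headline number in context, here is a quick back-of-the-envelope conversion of that result into aggregate throughput and a per-VM average. This is a minimal sketch using only the figures quoted above; treating 4KB as 4,096 bytes is my assumption.

```python
# Convert the headline result (544,000 IOPS at a 4KB block across 80 VMs)
# into aggregate throughput and a per-VM average.
# Assumes 4KB means 4,096 bytes, as is typical for 4K I/O testing.

BLOCK_SIZE_BYTES = 4 * 1024   # 4KB I/O block
TOTAL_IOPS = 544_000          # two-node cluster result above
VM_COUNT = 80                 # 40 VMs per node, Azure D1 size

throughput_gb_s = TOTAL_IOPS * BLOCK_SIZE_BYTES / 1e9
print(f"Aggregate throughput: {throughput_gb_s:.2f} GB/s")          # ~2.23 GB/s
print(f"Average per VM:       {TOTAL_IOPS / VM_COUNT:,.0f} IOPS")   # 6,800 IOPS
```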

We also tested other configurations

Impressed? Then you would probably like to see results from other combinations that Azure Stack HCI can support. These numbers are not a guarantee of results, but the actual Micron performance data may be a good reference for you:

  • 544K IOPS = 4 x NVMe 9300 as cache tier + 8 x SSD 5300 on 2 x HPE ProLiant DL380 Gen10 servers with 100GbE NICs
  • 462K IOPS = 4 x NVMe 9300 as cache tier + 8 x SSD 5300 on 2 x HPE ProLiant DL380 Gen10 servers with 25GbE NICs
  • 438K IOPS = 4 x NVMe 9300 as cache tier + 8 x SSD 5210 on 2 x HPE ProLiant DL380 Gen10 servers with 100GbE NICs
  • 431K IOPS = 4 x NVMe 9300 as cache tier + 8 x HPE HDD on 2 x HPE ProLiant DL380 Gen10 servers with 100GbE NICs
  • 113K IOPS = 4 x SSD 5300 as cache tier + 8 x HPE HDD on 2 x HPE ProLiant DL380 Gen10 servers with 100GbE NICs

Note 1: 80 VMs (40 VMs per node) – Azure D1 size (4KB I/O block, 100% random [67% read / 33% write])

Note 2: 100GbE NIC = Mellanox MCX516A-CCAT 100GbE dual-port QSFP28

Note 3: 25GbE NIC = HPE Ethernet 10/25Gb 2-port 621-SFP28 adapter [867328-B21]

Here’s My Advice to You

Thoughtfully design your network. For instance, size your Ethernet bandwidth to the storage traffic it must carry so the links are not oversubscribed. One NVMe SSD can saturate a 25Gb link. If you want to use four NVMe SSDs, have four 25Gb ports or one 100Gb port ready.
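If it helps to see that sizing rule as arithmetic, here is a minimal sketch. The ~3.5 GB/s sequential-read rate is my assumption for a datasheet-class NVMe SSD such as the 9300, not a number measured in this test, and mixed random workloads will draw less.

```python
# Rough network-sizing check: compare peak drive bandwidth against
# available Ethernet bandwidth. The ~3.5 GB/s per-SSD sequential read
# rate is an assumed datasheet-class value, not a measured result.

PER_SSD_GBPS = 3.5 * 8   # ~3.5 GB/s per NVMe SSD ≈ 28 Gb/s

def storage_demand_gbps(ssd_count: int) -> float:
    """Peak network bandwidth the NVMe tier could generate."""
    return ssd_count * PER_SSD_GBPS

# One NVMe SSD already exceeds a single 25Gb link (~28 vs 25 Gb/s).
print(f"1 SSD : {storage_demand_gbps(1):.0f} Gb/s vs a 25 Gb/s port")

# Four NVMe SSDs call for on the order of 100 Gb/s of headroom,
# hence four 25Gb ports or one 100Gb port, as recommended above.
print(f"4 SSDs: {storage_demand_gbps(4):.0f} Gb/s vs 4 x 25 Gb/s or 1 x 100 Gb/s")
```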

Also, consider the PCIe bus width for your NVMe storage. I recommend using 16 lanes (Figure 2) if you’re connecting your NVMe storage through an adapter. You can also attach the NVMe storage directly to the motherboard or through a PCIe switch.

Figure 2: Performance Metrics From Micron IT Testing (128K write throughput)
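As a rough illustration of why the x16 recommendation matters, the sketch below compares slot bandwidth against the aggregate bandwidth of the drives behind it. The figures are standard PCIe Gen3 numbers and the x4-per-drive link width is my assumption; the Micron test above did not measure PCIe link rates.

```python
# Why x16 matters: an NVMe data center SSD is typically a PCIe Gen3 x4
# device, so four drives behind one adapter need 16 lanes for full speed.
# Standard PCIe Gen3 figures; not measured in the Micron test above.

GEN3_GB_PER_LANE = 8 * (128 / 130) / 8   # ~0.985 GB/s usable per lane
LANES_PER_NVME = 4                       # typical NVMe SSD link width

def slot_headroom(drive_count: int, slot_lanes: int) -> float:
    """Slot bandwidth divided by aggregate drive bandwidth (1.0 = just enough)."""
    slot_bw = slot_lanes * GEN3_GB_PER_LANE
    drive_bw = drive_count * LANES_PER_NVME * GEN3_GB_PER_LANE
    return slot_bw / drive_bw

print(f"4 NVMe drives in an x16 slot: {slot_headroom(4, 16):.2f}x")  # 1.00 -> full bandwidth
print(f"4 NVMe drives in an x8 slot : {slot_headroom(4, 8):.2f}x")   # 0.50 -> bottlenecked
```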

Results show that the right platform and flexible flash storage for the data center can delay or eliminate the need for a more elaborate and expensive virtualized platform. I have been presenting this virtualization pilot program approach to groups of IT managers and engineers who must balance performance and cost as they decide how to future-proof their infrastructure.

If you plan to deploy virtual servers/virtual clients, Azure Stack HCI should be on your list to consider.

Want more information?

If you’d like to test the solution, download the “Micron NVMe SSD Best Practices on Microsoft Azure Stack HCI.” And keep the Micron 9300, Micron 5300 and Micron 5210 ION SSDs in mind.

Yasunori Ema

Sr. IT Domain Architect

Senior IT domain architect for Micron, Yasunori Ema has spent his career as a system manager of IT infrastructure, including server/client, storage, network and telecom. He is now a technical lead for Micron’s global environment, where he supports Windows servers, virtual servers and hypervisors for Micron’s sales, marketing and design center teams.