The Micron hub provides technology and workload insights on data center storage, direct from our technical experts.
Check out these insights from Micron's technical experts on workload testing.
What happens when AI experts combine innovative software-defined storage (SDS) with high-performance NVMe SSDs? Good things. Fast and scalable things.
Micron's 7500 NVMe™ SSD delivers low and consistent latency, enabling rapid, reliable responsiveness for demanding data center workloads. This blog examines performance and latency for the Micron 7500 SSD across a range of mixed read/write workloads and block sizes, demonstrating its best-in-class QoS versus competing SSDs.
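For readers who want to try this style of testing themselves, here is a minimal sketch (not the blog's actual test plan) that sweeps fio across a few read/write mixes and block sizes from Python; the device path, queue depth and runtime are assumptions.

    # Hypothetical sketch: sweep fio over read/write mixes and block sizes.
    # Device path, runtime and queue depth are assumptions, not Micron's test plan.
    # Note: raw block I/O to the device is destructive.
    import subprocess

    DEVICE = "/dev/nvme0n1"          # assumed test device

    for read_pct in (100, 70, 50):   # read percentage of the mixed workload
        for bs in ("4k", "16k", "128k"):
            cmd = [
                "fio",
                "--name=mixed",
                f"--filename={DEVICE}",
                "--ioengine=libaio",
                "--direct=1",
                "--rw=randrw",
                f"--rwmixread={read_pct}",
                f"--bs={bs}",
                "--iodepth=32",
                "--numjobs=4",
                "--time_based",
                "--runtime=300",
                "--group_reporting",
            ]
            print("Running:", " ".join(cmd))
            subprocess.run(cmd, check=True)   # fio prints IOPS and latency percentiles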
Growing SSD capacities, and the power consumed by ever-larger data maps, call for a more modern map granularity. In the past, the industry was hesitant to make any change that might negatively impact SSD life. Recent, focused data from real applications shows that a coarser map granularity need not do so, and that there is a path toward more efficient mapping.
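As a rough, back-of-the-envelope illustration of why map granularity matters (the ~4 bytes per map entry figure is a common rule of thumb, not a number from the blog), the sketch below shows how the logical-to-physical map shrinks as the indirection unit grows:

    # Rough illustration of how logical-to-physical (L2P) map size scales with
    # indirection unit (IU) size. The ~4 bytes/entry figure is an assumption.
    BYTES_PER_MAP_ENTRY = 4

    def map_dram_bytes(capacity_bytes: int, iu_bytes: int) -> int:
        """DRAM needed for the L2P map at a given IU granularity."""
        return (capacity_bytes // iu_bytes) * BYTES_PER_MAP_ENTRY

    capacity = 30_720 * 10**9          # a 30.72 TB class SSD
    for iu in (4096, 16384, 65536):    # 4 KB, 16 KB, 64 KB indirection units
        gb = map_dram_bytes(capacity, iu) / 10**9
        print(f"IU {iu // 1024:>2} KB -> ~{gb:.1f} GB of map DRAM")
    # Larger IUs shrink the map (and its power cost) but can raise write
    # amplification for writes smaller than the IU.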
Micron 9400 NVMe SSD is the top PCIe Gen4 SSD for AI storage
Characterizing the storage workload of AI training systems poses two unique challenges that the MLPerf Storage Benchmark Suite aims to address: the cost of AI accelerators and the small size of available datasets. This blog shows how the MLPerf Storage benchmark addresses both.
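To make the accelerator-cost point concrete, here is a toy sketch of the core idea behind accelerator emulation (illustrative Python, not MLPerf Storage code): read training samples at full speed, but replace GPU compute with a sleep, so storage can be stressed without the expensive hardware.

    # Toy illustration of accelerator emulation: read batches of training data,
    # then sleep for an assumed per-batch "compute" time instead of running a
    # real accelerator. Paths and timings are made up.
    import os, time

    DATA_DIR = "training_data"      # assumed directory of sample files
    BATCH_SIZE = 8
    COMPUTE_TIME_S = 0.05           # assumed per-batch accelerator time to emulate

    files = sorted(os.path.join(DATA_DIR, f) for f in os.listdir(DATA_DIR))
    start = time.time()
    bytes_read = 0

    for i in range(0, len(files), BATCH_SIZE):
        for path in files[i:i + BATCH_SIZE]:
            with open(path, "rb") as f:
                bytes_read += len(f.read())      # the storage work we measure
        time.sleep(COMPUTE_TIME_S)               # stand-in for accelerator compute

    elapsed = time.time() - start
    print(f"Read {bytes_read / 1e9:.2f} GB in {elapsed:.1f} s "
          f"({bytes_read / 1e9 / elapsed:.2f} GB/s delivered to the emulated accelerator)")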
Storage for AI training: MLPerf Storage on the Micron 9400 NVMe SSD
The MLPerf Storage tool is extremely helpful for benchmarking storage for various models because it reproduces realistic AI workloads. Read this blog to learn how Micron uses MLPerf Storage to test storage for AI workloads.
Identifying latency outliers in workload testing
When running and collecting workload traces of RocksDB, we sometimes see large latency spikes. In this blog, we talk about the methods used to identify the root cause of latency spikes in a mixed read and write workload.
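One simple way to surface such spikes in a captured trace is to flag samples far above a high percentile. The sketch below assumes a plain CSV of timestamps and latencies rather than any specific Micron tooling; the column names and threshold are assumptions.

    # Sketch: flag latency outliers in a workload trace. Assumes a CSV with
    # "timestamp_s,latency_us" columns; the ~p99.99 threshold is a choice.
    import csv
    import statistics

    latencies, rows = [], []
    with open("rocksdb_trace.csv", newline="") as f:      # assumed trace file
        for row in csv.DictReader(f):
            lat = float(row["latency_us"])
            latencies.append(lat)
            rows.append((float(row["timestamp_s"]), lat))

    threshold = statistics.quantiles(latencies, n=10000)[-1]   # ~p99.99
    print(f"p99.99 latency: {threshold:.0f} us")
    for ts, lat in rows:
        if lat > threshold:
            # Correlate these timestamps with compaction, flush or GC activity.
            print(f"spike at t={ts:.3f}s: {lat:.0f} us")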
Eliminating the I/O blender: The promise of flexible data placement
Google and Meta worked closely together to introduce Flexible Data Placement (FDP) mode into the NVMe specification. In this blog, our testing shows that FDP decreases write amplification by 60% for sequential workloads.
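As a reminder of the arithmetic, write amplification factor (WAF) is NAND bytes written divided by host bytes written. The sketch below shows what a 60% reduction looks like; the counter values are made up for illustration, not measurements from the blog.

    # Write amplification factor (WAF) = bytes written to NAND / bytes written by host.
    # The counters below are illustrative, not measured values.
    def waf(nand_bytes_written: float, host_bytes_written: float) -> float:
        return nand_bytes_written / host_bytes_written

    baseline = waf(nand_bytes_written=2.5e12, host_bytes_written=1.0e12)   # e.g. 2.5
    with_fdp = waf(nand_bytes_written=1.0e12, host_bytes_written=1.0e12)   # e.g. 1.0

    reduction = (baseline - with_fdp) / baseline
    print(f"WAF without FDP: {baseline:.2f}")
    print(f"WAF with FDP:    {with_fdp:.2f}")
    print(f"Reduction:       {reduction:.0%}")   # 60% in this illustrative case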
Comparing Micron 7450, Samsung PM9A3 and Solidigm D5-P5430
db_bench is Meta's preferred workload testing methodology because it emulates workloads well at the query level and generates the precise storage I/Os that RocksDB would issue. In this blog, we compare the performance of the Micron 7450, Samsung PM9A3 and Solidigm D5-P5430 for RocksDB.
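For reference, here is a hedged sketch of how such a db_bench comparison might be scripted; the flags are standard db_bench options, but the values (key/value sizes, operation counts, mount points) are assumptions rather than the blog's actual configuration.

    # Sketch: run the same db_bench workload against each drive's mount point.
    # Flag values (key/value size, op count, paths) are assumptions for illustration.
    import subprocess

    MOUNTS = {
        "micron_7450":      "/mnt/micron7450/db",
        "samsung_pm9a3":    "/mnt/pm9a3/db",
        "solidigm_d5p5430": "/mnt/d5p5430/db",
    }

    for name, db_path in MOUNTS.items():
        cmd = [
            "db_bench",
            "--benchmarks=fillrandom,readrandomwriterandom",
            f"--db={db_path}",
            "--num=100000000",
            "--key_size=16",
            "--value_size=400",
            "--threads=8",
            "--compression_type=zstd",
        ]
        print(f"=== {name} ===")
        subprocess.run(cmd, check=True)   # db_bench reports ops/sec and latency stats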
Digging in: Apache Cassandra performance with Micron 6500 ION SSD
Read our deeper dive into the Apache Cassandra workload featured in our recently published tech brief, which compares 6500 ION performance to a competitor's QLC drive. Bursty workloads require SSDs that perform well at high average disk I/O rates, an area where the Micron 6500 ION excels.
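A common way to generate a mixed Cassandra workload like the one described is the cassandra-stress tool. The sketch below is illustrative only; the read/write ratio, duration, thread count and node address are assumptions, not the tech brief's configuration.

    # Sketch: drive a mixed read/write Cassandra workload with cassandra-stress.
    # Ratio, duration, thread count and node address are assumptions.
    import subprocess

    cmd = [
        "cassandra-stress",
        "mixed", "ratio(write=1,read=3)",   # 25% writes, 75% reads
        "duration=10m",
        "-rate", "threads=128",
        "-node", "10.0.0.5",                # assumed Cassandra node address
    ]
    subprocess.run(cmd, check=True)   # reports op rate and latency percentiles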
Micron 6500 ION provides massive WEKA performance on AMD-based servers
Check out these test results for high-performance computing (HPC) and AI using WEKA Data Platform software, coupled with Supermicro servers built on 4th Gen AMD EPYC™ 9554 processors and the Micron 6500 ION SSD.
Drop HDDs from your object store with the 6500 ION SSD
In this blog, we compare Ceph object storage performance with Micron 6500 ION SSDs to that of hard disk drives (HDDs). Read on to see how the Micron 6500 ION wins on performance, power and cost over HDDs.
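For readers who want a quick, first-order comparison on their own cluster, the sketch below drives rados bench against two pools, one assumed to be backed by SSD OSDs and one by HDD OSDs. This exercises the RADOS layer rather than the full object (S3) path measured in the blog, and the pool names and runtime are assumptions.

    # Sketch: compare Ceph write/read throughput on two pools, e.g. one backed
    # by SSD OSDs and one by HDD OSDs. Pool names and runtime are assumptions.
    import subprocess

    POOLS = {"ssd_pool": "6500 ION OSDs", "hdd_pool": "HDD OSDs"}
    RUNTIME_S = "60"

    for pool, label in POOLS.items():
        print(f"=== {label} ({pool}) ===")
        subprocess.run(["rados", "bench", "-p", pool, RUNTIME_S, "write",
                        "--no-cleanup"], check=True)                     # write phase
        subprocess.run(["rados", "bench", "-p", pool, RUNTIME_S, "seq"],
                       check=True)                                       # sequential read phase
        subprocess.run(["rados", "-p", pool, "cleanup"], check=True)     # remove bench objects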