Massive MySQL® database performance on Ceph RBD

Ryan Meredith | December 2017

Hi everybody,

In my continuing quest to characterize the performance of Ceph® 12.2.1 (Luminous), I set up a test using the MySQL® database server in Docker containers on this new version of Ceph.

The goal of the test is to measure how performance scales with large databases when an RBD block device is used as the database storage. I also used Docker containers to encapsulate my MySQL configuration for easy deployment, and because Docker is cool.

Ceph Hardware: Micron IOPs Optimized Ceph Reference Architecture

I used the hardware from our Micron IOPs Optimized Ceph Reference Architecture for testing:

I installed the latest community edition of Ceph, Luminous 12.2.1, and configured it with one BlueStore OSD per Micron 9100 MAX 2.4TB NVMe® SSD and crc32c checksumming enabled.
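As a rough illustration (not the full tuning from the reference architecture), the relevant ceph.conf settings would look something like this:

```ini
[osd]
# One BlueStore OSD per NVMe SSD
osd objectstore = bluestore
# BlueStore data checksum algorithm used in this test
bluestore csum type = crc32c
```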

MySQL Server Configuration

On the MySQL server side, I used Docker to create a MySQL 5.7.19 image and copied it to 10 Supermicro 2028U servers acting as MySQL database servers.

A 1.5TB RBD image was presented to each MySQL server, storing a 1TB TPC-C-like MySQL database per RBD image. Each RBD image is mounted on its MySQL server and passed through to the MySQL Docker container, with one container per MySQL server.
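Mapping the RBD image and handing it through to the container might look roughly like the sketch below, in dry-run form. The pool, image, and mount-point names are my own illustrative assumptions, not details from the test setup:

```shell
#!/bin/sh
# Dry-run sketch: present a 1.5TB RBD image to a MySQL host, then to its container.
# Pool ("rbd"), image ("mysql01"), and mount point are hypothetical names.
POOL=rbd
IMG=mysql01
MNT=/mnt/${IMG}

run() { echo "+ $*"; }   # dry-run: echo each command; swap in "$@" to execute

run rbd create ${POOL}/${IMG} --size 1536G   # 1.5TB image
run rbd map ${POOL}/${IMG}                   # exposes /dev/rbd/rbd/mysql01
run mkfs.xfs /dev/rbd/${POOL}/${IMG}
run mkdir -p ${MNT}
run mount /dev/rbd/${POOL}/${IMG} ${MNT}
# All MySQL data files live on the RBD-backed mount:
run docker run -d --name ${IMG} -v ${MNT}:/var/lib/mysql mysql:5.7.19
```

Because all database files sit on the RBD-backed mount, stopping the container on one host and starting it on another with the same image mapped moves the whole database instance.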

All MySQL database files are stored on the RBD image so that the database instance is portable and protected by Ceph replication.

Each MySQL instance is sized to run a single large database per server, using a 224GB InnoDB buffer pool.
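In my.cnf terms, that sizing amounts to something like the following fragment; the buffer pool size is from the setup described here, while the datadir path is an illustrative assumption:

```ini
[mysqld]
# 224GB InnoDB buffer pool per instance (from the test configuration)
innodb_buffer_pool_size = 224G
# Data directory lives on the RBD-backed mount (illustrative path)
datadir = /var/lib/mysql
```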

Micron SSE TPC-C Tool

Our Storage Solutions Engineering team has built a TPC-C-like benchmark tool that stresses storage by using the entire database as the active dataset. We call it the “Micron SSE TPC-C Tool”; it is installed in each Docker container and kicked off on all MySQL instances simultaneously by an external script.

MySQL Performance on Ceph RBD

I scaled up from 1 MySQL server to 10 MySQL servers, each using a Ceph RBD image for MySQL storage. The TPC-C-like test ran for a 10-minute ramp-up period (to reach steady state), followed by a 30-minute test run, and was repeated with 1, 5, and 10 MySQL servers.

Ceph RBD MySQL Performance
                   Transactions Per Minute (TPM)   Avg. Transaction Response Time (ms)   99.9% Response Time (ms)
1 MySQL Server     124,840                         24                                    534
5 MySQL Servers    607,988                         24                                    549
10 MySQL Servers   1,043,093                       28                                    634

MySQL performance scaled nearly linearly from 1 MySQL server to 10. At 10 servers, the modified TPC-C benchmark exceeded 1 million transactions per minute with an average transaction response time of 28ms.
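As a quick sanity check on that scaling, the TPM numbers from the results table work out to a bit under perfect linear scaling (all inputs are from the table above):

```shell
#!/bin/sh
# TPM figures from the results table
TPM_1=124840
TPM_5=607988
TPM_10=1043093

# Integer percent of perfect linear scaling
# (100% = N servers delivering N x the 1-server TPM)
EFF_5=$(( TPM_5 * 100 / (TPM_1 * 5) ))
EFF_10=$(( TPM_10 * 100 / (TPM_1 * 10) ))

echo "5 servers:  ${EFF_5}% of linear"    # prints 97
echo "10 servers: ${EFF_10}% of linear"   # prints 83
```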

CPU utilization on the Ceph storage nodes is typically the limiting factor for Ceph small-block performance. In this case, the MySQL servers averaged 70%-80% CPU utilization, while Ceph storage node CPU utilization ramped from 7% (1 client) to 64% (10 clients).

There is headroom to add more MySQL clients, since Ceph is not fully utilized at this point. Sadly, I ran out of MySQL servers to push further. Based on this scaling, one could reasonably expect to add 1 to 5 more MySQL clients with similar results.

Would you like to know more?

We are working on blog content to share the methodology behind the Micron SSE TPC-C Tool, going into greater detail on how we achieved the TPM numbers shared here. With this successful test, Docker + MySQL will become a standard test we use for Ceph and other software-defined storage solutions. Stay tuned.

Have additional questions about our testing or methodology? Leave a comment below or email us.

Ryan Meredith
Director, Storage Solutions Architecture

Ryan Meredith is director of Data Center Workload Engineering for Micron's Storage Business Unit, testing new technologies to help build Micron's thought leadership and awareness in fields like AI and NVMe-oF/TCP, along with all-flash software-defined storage technologies.