In this blog post, I’ll look at MyRocks performance through some benchmark testing.

As the MyRocks storage engine (based on the RocksDB key-value store http://rocksdb.org ) is now available as part of Percona Server for MySQL 5.7, I wanted to take a look at how it performs on a relatively high-end server and SSD storage. I wanted to check how it performs for different amounts of available memory for the given database size. This is similar to the benchmark I published a while ago for InnoDB.

In this case, I plan to use the sysbench-tpcc benchmark, and I will execute it for both MyRocks and InnoDB, using InnoDB as the baseline.

For the benchmark, I will use 100 TPC-C warehouses with a set of 10 tables (to shift the bottleneck away from row contention). This should give roughly 90GB of data size (when loaded into InnoDB) and is roughly equivalent to the data size of 1,000 warehouses.
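
For reference, loading and running this workload with sysbench-tpcc looks roughly like the following. This is only a sketch: the database name and thread count are illustrative, connection options are omitted, and option names may differ between sysbench-tpcc versions.

    # load 100 warehouses x 10 table sets; --use_fk=0 disables FOREIGN KEYs
    # (illustrative invocation; adjust connection options for your setup)
    ./tpcc.lua --db-driver=mysql --mysql-db=sbt --threads=56 \
        --tables=10 --scale=100 --use_fk=0 prepare

    # run for 3600 seconds, reporting throughput every second
    ./tpcc.lua --db-driver=mysql --mysql-db=sbt --threads=56 \
        --tables=10 --scale=100 --use_fk=0 \
        --time=3600 --report-interval=1 run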

To vary the memory size, I will change innodb_buffer_pool_size from 5GB to 100GB for InnoDB, and rocksdb_block_cache_size over the same range for MyRocks.
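
In practice, that just means a different cache size in the config for each run; for example, the 20GB data point would use something like this (a sketch, with only the memory-related lines shown):

    [mysqld]
    # InnoDB runs
    innodb_buffer_pool_size = 20G
    # MyRocks runs
    # rocksdb_block_cache_size = 20G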

For MyRocks, we will use LZ4 as the default compression on disk. The data size in the MyRocks storage engine is 21GB. It is interesting to note that without compression, MyRocks takes 70GB on storage.

For both engines, I did not use FOREIGN KEYs, as MyRocks does not support them at the moment.

In the Percona Server for MySQL implementation, MyRocks does not support SELECT .. FOR UPDATE statements in REPEATABLE-READ mode. Since this benchmark uses SELECT .. FOR UPDATE, I had to run in READ-COMMITTED mode, which is supported.

The most important setting I used was to enable binary logs, for the following reasons:

  1. Any serious production deployment uses binary logs
  2. With binary logs disabled, MyRocks is affected by a suboptimal transaction coordinator

I used the following settings for binary logs (a combined my.cnf sketch follows the list):

  • binlog_format = ‘ROW’
  • binlog_row_image=minimal
  • sync_binlog=10000 (I am not using 0, as this causes serious stalls during binary log rotations, when the content of the binary log is flushed to storage all at once)
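
Putting this together with the READ-COMMITTED isolation level mentioned above, the relevant my.cnf fragment looked roughly like this (a sketch; the binary log base name is illustrative, and the complete configuration files are shared in the repository linked at the end of the post):

    [mysqld]
    transaction-isolation = READ-COMMITTED
    # log base name is illustrative
    log-bin               = mysql-bin
    binlog_format         = ROW
    binlog_row_image      = minimal
    sync_binlog           = 10000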

While I am not a full expert in MyRocks tuning yet, I used recommendations from this page: https://github.com/facebook/mysql-5.6/wiki/my.cnf-tuning. The Facebook MyRocks engineering team also provided me with input on the best settings for MyRocks.

Let’s review the results for different memory sizes.

This first chart shows throughput jitter, which helps us understand the distribution of throughput results. Throughput is measured every 1 second, and on the chart I show all measurements after 2000 seconds of a run (the total length of each run is 3600 seconds). So the chart covers the last 1600 seconds of each run, which removes the warm-up phase:

MyRocks Performance
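
For anyone reproducing the charts: with --report-interval=1, sysbench prints one line per second, so the post-warm-up samples can be pulled out with something along these lines (a sketch assuming the default sysbench 1.0 per-second output format; run.log is a hypothetical file name):

    # keep only the samples after 2000 seconds, printing "<second> <tps>"
    awk '/ tps: / { sec = $2; gsub(/s/, "", sec); if (sec + 0 > 2000) print sec, $7 }' run.log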

To better quantify the results, let’s take a look at them on a boxplot. The quickest way to read a boxplot is to look at the middle line: it represents the median of the measurements (see more here):

MyRocks Performance 2

Before we jump to the summary of results, let’s take a look at the variation of the throughput for both InnoDB and MyRocks. We will zoom in to a 1-second resolution chart for 100GB of allocated memory:

MyRocks Performance 3

We can see that there is a lot of variation, with periodic 1-second performance drops for MyRocks. At this moment, I do not know what causes these drops.

So let’s take a look at the average throughput for each engine for different memory settings (the results are in tps, and more is better):

Memory, GB | InnoDB (tps) | MyRocks (tps)
-----------|--------------|--------------
5          | 849.0664     | 4205.714
10         | 1321.9       | 4298.217
20         | 1808.236     | 4333.424
30         | 2275.403     | 4394.413
40         | 2968.101     | 4459.578
50         | 3867.625     | 4503.215
60         | 4756.551     | 4571.163
70         | 5527.853     | 4576.867
80         | 5984.642     | 4616.538
90         | 5949.249     | 4620.87
100        | 5961.2       | 4599.143

This is where MyRocks behaves differently from InnoDB. InnoDB benefits greatly from additional memory, up to the size of the working dataset; after that, there is no reason to add more memory.

Interestingly, MyRocks does not benefit much from additional memory.

Basically, MyRocks performs as expected for a write-optimized engine. You can refer to my article How Three Fundamental Data Structures Impact Storage and Retrieval for more details. 

In conclusion, InnoDB performs better (compared to itself) when the working dataset fits (or almost fits) into available memory, while MyRocks can operate (and outperform InnoDB) on small memory sizes.

IO and CPU usage

It is worth looking at resource utilization for each engine. I took vmstat measurements for each run so that we can analyze IO and CPU usage.
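
Collecting this is trivial; something like the following captures one sample per second for the whole run (a sketch; the log file name is illustrative):

    # one vmstat sample per second for the 3600-second run
    vmstat 1 3600 > vmstat-run.log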

First, let’s review writes per second (in KB/sec). Please keep in mind that these writes include binary log writes too, not just writes from the storage engine.

Memory, GB | InnoDB (KB/sec) | MyRocks (KB/sec)
-----------|-----------------|-----------------
5          | 244754.4        | 87401.54
10         | 290602.5        | 89874.55
20         | 311726          | 93387.05
30         | 313851.7        | 93429.92
40         | 316890.6        | 94044.94
50         | 318404.5        | 96602.42
60         | 276341.5        | 94898.08
70         | 217726.9        | 97015.82
80         | 184805.3        | 96231.51
90         | 187185.1        | 96193.6
100        | 184867.5        | 97998.26

We can also calculate how many writes per transaction each storage engine performs:

MyRocks Performance 4

This chart shows the essential difference between InnoDB and MyRocks. MyRocks, being a write-optimized engine, performs a roughly constant amount of writes per transaction.

For InnoDB, the amount of writes depends greatly on the memory size: the less memory we have, the more writes it has to perform.
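
To make this concrete with the numbers above: at the 5GB point, InnoDB writes about 244,754 KB/sec while delivering roughly 849 tps, or around 288 KB of writes per transaction, while MyRocks writes about 87,402 KB/sec at roughly 4,206 tps, or about 21 KB per transaction. At 100GB, InnoDB drops to roughly 31 KB of writes per transaction, while MyRocks stays at about 21 KB.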

What about reads?

The following table shows reads in KB per second.

Memory, GB | InnoDB (KB/sec) | MyRocks (KB/sec)
-----------|-----------------|-----------------
5          | 218343.1        | 171957.77
10         | 171634.7        | 146229.82
20         | 148395.3        | 125007.81
30         | 146829.1        | 110106.87
40         | 144707          | 97887.6
50         | 132858.1        | 87035.38
60         | 98371.2         | 77562.45
70         | 42532.15        | 71830.09
80         | 3479.852        | 66702.02
90         | 3811.371        | 64240.41
100        | 1998.137        | 62894.54

We can translate this to the number of reads per transaction:

MyRocks Performance 5

This shows MyRocks’ read amplification. Allocating more memory helps to decrease IO reads, but not nearly as much as it does for InnoDB.
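
Again with the numbers above: at 5GB, InnoDB reads about 218,343 KB/sec at roughly 849 tps, or around 257 KB of reads per transaction, versus about 41 KB per transaction for MyRocks. At 100GB, InnoDB reads well under 1 KB per transaction, while MyRocks still reads about 14 KB per transaction.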

CPU usage

Let’s also review CPU usage for each storage engine. Let’s start with InnoDB:

MyRocks Performance 6

The chart shows that for 5GB memory size, InnoDB spends most of its time in IO waits (green area), and the CPU usage (blue area) increases with more memory.

This is the same chart for MyRocks:

MyRocks Performance 7

In tabular form (vmstat CPU columns: us = user, sy = system, wa = IO wait, id = idle):

Memory, GB | Engine  | us | sy | wa | id
-----------|---------|----|----|----|----
5          | InnoDB  |  8 |  2 | 57 | 33
5          | MyRocks | 56 | 11 | 18 | 15
10         | InnoDB  | 12 |  3 | 57 | 28
10         | MyRocks | 57 | 11 | 18 | 13
20         | InnoDB  | 16 |  4 | 55 | 25
20         | MyRocks | 58 | 11 | 19 | 11
30         | InnoDB  | 20 |  5 | 50 | 25
30         | MyRocks | 59 | 11 | 19 | 10
40         | InnoDB  | 26 |  7 | 44 | 24
40         | MyRocks | 60 | 11 | 20 |  9
50         | InnoDB  | 35 |  8 | 38 | 19
50         | MyRocks | 60 | 11 | 21 |  7
60         | InnoDB  | 43 | 10 | 36 | 10
60         | MyRocks | 61 | 11 | 22 |  6
70         | InnoDB  | 51 | 12 | 34 |  4
70         | MyRocks | 61 | 11 | 23 |  5
80         | InnoDB  | 55 | 12 | 31 |  1
80         | MyRocks | 61 | 11 | 23 |  5
90         | InnoDB  | 55 | 12 | 32 |  1
90         | MyRocks | 61 | 11 | 23 |  4
100        | InnoDB  | 55 | 12 | 32 |  1
100        | MyRocks | 61 | 11 | 24 |  4

We can see that MyRocks uses a lot of CPU (in the us+sy states) no matter how much memory is allocated. This leads to the conclusion that MyRocks performance is limited more by CPU performance than by available memory.

MyRocks directory size

As MyRocks writes out all changes and compacts SST files down the road, it is interesting to see how the data directory size changes during the benchmark, so we can estimate our storage needs. Here is a chart of the data directory size:

MyRocks Performance 8

We can see that the data directory grows from 20GB at the start to 31GB during the benchmark. It is interesting to observe the data growing until compaction shrinks it.
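
Tracking this does not require anything special; sampling the directory size once a minute during the run is enough (a sketch, assuming the default .rocksdb subdirectory under the MySQL datadir):

    # sample the MyRocks data directory size (in MB) once a minute
    while true; do
        du -sm /var/lib/mysql/.rocksdb >> datadir-size.log
        sleep 60
    done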

Conclusion

In conclusion, I can say that MyRocks’ advantage grows as the ratio of dataset size to available memory increases, outperforming InnoDB by almost five times in the case of a 5GB memory allocation. Throughput variation is something to be concerned about, but I hope this gets improved in the future.

MyRocks does not require a lot of memory and shows constant write IO while using most of the CPU resources.

I think this potentially makes MyRocks a great choice for cloud database instances, where both memory and IO can cost a lot, making MyRocks deployments cheaper to run in the cloud.

I will follow up with further cloud-oriented benchmarks.

Extras

Raw results, scripts, and config

My goal is to provide fully repeatable benchmarks. To this end, I’m sharing all the scripts and settings I used in the following GitHub repo:

https://github.com/Percona-Lab-results/201803-sysbench-tpcc-myrocks

MyRocks Performance Settings

InnoDB settings

Hardware spec

Supermicro server:

  • CPU:
    • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
    • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM
  • Storage: SAMSUNG SM863 1.9TB Enterprise SSD
  • Filesystem: ext4
  • Percona-Server-5.7.21-20
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic

You May Also Like

For a detailed look at how MyRocks stacks up against typical InnoDB deployments, read my blog MyRocks Engine: Things to Know Before You Start. We go over the differences, major and minor, in the storage engine and discuss its implementation with Percona Server. MyRocks could also be beneficial for your cloud deployment. Saving With MyRocks in The Cloud shows how the storage engine performed under heavy I/O workloads in the cloud and what that means for your storage costs.

Comments
Brian Glover

Your innodb_buffer_pool_size is twice the maximum amount of memory (100G) you used for your benchmark. Wouldn’t double the amount of RAM for the buffer pool cause MySQL to go into swap?

Was that number static throughout the tests or did you tweak it based on the amount of memory available?

Brian Glover

Thanks for the clarification

Hi Vadim,

Your 2010 benchmark on RAM vs. fast SSD is my favourite benchmark ever 🙂 . Thank you for running it again in 2018 with variance included.

I would be curious to see 8.0 results for InnoDB… there were some redo log enhancements that made it in shortly before GA.

lawrence

hi Vadim,

looks good. I was wondering which charting app you used to make your charts with?

thanks
lawrence

Zviad Metreveli

Great article. The throughput heatmap is very interesting and a good way to visualize the higher variance of RocksDB.

One hidden danger with an LSM (RocksDB) is that you may not be catching all the more extreme edge cases of compactions with a test that only lasts for 1 hour.
As the data set on disk grows larger and data gets compacted to lower and lower levels, there can definitely be cases where a larger compaction runs after 6, 12, or even 24 hours of running the system, and it can introduce bigger variance in throughput while those background compactions are running. In some cases the throughput drops can even last for many minutes, which can definitely cause some unexpected issues for a database that is serving live low-latency traffic.

Mark Callaghan

With fast storage 1 hour is probably long enough to see steady state behavior.

RocksDB supports leveled and tiered compaction, but the default is leveled. While there might be long-running compactions with tiered, that doesn’t happen with leveled, as each compaction step consumes ~11 SST files, which should be a few hundred MB of data.

Arda Beyazoglu

Thanks for the great benchmark. Is MyRocks supported in Percona Cluster?

ovaistariq

Vadim, I would also suggest running the test which uses buffered IO with MyRocks. Read-ahead in RocksDB is currently not supported when direct IO is used, and depending on the query plan, that may be a big deal for some of the queries.

Bruno

Could you compare with TokuDB ? I think it is the main contender to MyRocks

Alex

Thank you very much for the interesting comparison. I have a question regarding performance with a low buffer pool size (InnoDB) and block cache size (MyRocks). For InnoDB, the buffer pool is the main consumer of memory, storing both data and indexes. However, for RocksDB, additional memory is allocated for bloom filters and for index blocks.
How large was this memory area in your experiments?

techwizardg

We are using Percona MySQL 5.7 and have both RocksDB and InnoDB tables, and during replication we are facing crashes for both the RocksDB and InnoDB tables. Does Percona MySQL 5.7 support replication for both storage engines?

Alkin Veysal

Some limitations of Using MyRocks:
Transportable Tablespace, Foreign Key, Spatial Index, and Fulltext Index are not supported

Alkin Veysal

Let’s look at some of the limitations of using the MyRocks engine…

MariaDB’s optimistic parallel replication may not be supported
MyRocks is not available for 32-bit platforms
MariaDB Cluster (Galera Cluster) doesn’t work with MyRocks (Only InnoDB or XtraDB storage engines)
The transaction must fit in memory
Requires special settings for loading data
SERIALIZABLE is not supported
Transportable Tablespace, Foreign Key, Spatial Index, and Fulltext Index are not supported

Zaiyang Li

As people have mentioned before, RocksDB uses memory for several things: memtables, block cache, sstable indexes/bloom filters. RocksDB can easily write at the top speed supported by SSD because it is write optimized.

Don’t mean to be negative about your blog post, but I felt the need to comment because I think you did your first benchmark wrong.

“rocksdb_block_cache_size” controls the size of how much memory RocksDB is using to cache data on the read path. In particular, block cache stores uncompressed data/index/bloom filter blocks being read from sst files. Here you increased block cache size and discovered that it didn’t affect write performance because block cache is only used for read path.

The correct memory-related parameter to tune which would affect write performance is “rocksdb-db-write-buffer-size” which controls the size of memtables, i.e., how much data to accumulate in memory before writing to disk.