This blog compares how PostgreSQL and MySQL handle millions of queries per second.

Anastasia: Can open source databases cope with millions of queries per second? Many open source advocates would answer “yes.” However, assertions aren’t enough for well-grounded proof. That’s why in this blog post, we share the benchmark testing results from Alexander Korotkov (CEO of Development, Postgres Professional) and Sveta Smirnova (Principal Technical Services Engineer, Percona). The comparative research of PostgreSQL 9.6 and MySQL 5.7 performance will be especially valuable for environments with multiple databases.

The idea behind this research is to provide an honest comparison of the two popular RDBMSs. Sveta and Alexander wanted to test the most recent versions of both PostgreSQL and MySQL with the same tool, under the same challenging workloads and using the same configuration parameters (where possible). However, because both PostgreSQL and MySQL ecosystems evolved independently, with standard testing tools (pgbench and SysBench) used for each database, it wasn’t an easy journey.

The task fell to database experts with years of hands-on experience. Sveta worked as a Senior Principal Technical Support Engineer in the Bugs Verification Group of the MySQL Support Group at Oracle for more than eight years, and since 2015 has worked as a Principal Technical Services Engineer at Percona. Alexander Korotkov is a PostgreSQL major contributor, and the developer of a number of PostgreSQL features, including the CREATE ACCESS METHOD command, the generic WAL interface, lockfree Pin/UnpinBuffer, index-based search for regular expressions and much more. So we have a pretty decent cast for this particular play!

Sveta: Dimitri Kravtchuk regularly publishes detailed benchmarks for MySQL, so my main task wasn’t confirming that MySQL can do millions of queries per second. As our graphs will show, we’ve passed that mark already. As a Support Engineer, I often work with customers who have heterogeneous database environments in their shops and want to know about the impact of migrating jobs from one database to another. So instead, I saw the chance to work with Postgres Professional and identify both the strong and weak points of the two databases as an excellent opportunity.

We wanted to test both PostgreSQL and MySQL on the same hardware, using the same tools and tests. We expected to test base functionality, and then work on more detailed comparisons. That way we could compare different real-world use case scenarios and popular options.

Spoiler: We are far from the final results. This is the start of a blog series.

OpenSource Databases on Big Machines, Series 1: “That Was Close…”

Postgres Professional together with Freematiq provided two modern, powerful machines for tests.

Hardware configuration:

Processors: physical = 4, cores = 72, virtual = 144, hyperthreading = yes
Memory: 3.0T
Disk speed: about 3K IOPS
OS: CentOS 7.1.1503
File system: XFS

I also used a smaller Percona machine.

Hardware configuration:

Processors: physical = 2, cores = 12, virtual = 24, hyperthreading = yes
Memory: 251.9G
Disk speed: about 33K IOPS
OS: Ubuntu 14.04.5 LTS
File system: EXT4

Note that machines with fewer CPU cores and faster disks are more typical for MySQL installations than machines with many cores.

The first thing we needed to agree on was which tool to use. A fair comparison only makes sense if the workloads are as close as possible.

The standard PostgreSQL tool for performance tests is pgbench, while for MySQL it’s SysBench. SysBench supports multiple database drivers and scriptable tests in the Lua programming language, so we decided to use this tool for both databases.
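To make this concrete, a sysbench invocation of this kind switches backends with a single option. This is a sketch using sysbench 1.x syntax; host names, credentials and table sizes are placeholders, not the values used in these tests:

```shell
# Sketch: the same built-in Lua workload driven against both servers.
# --db-driver selects the backend; all connection options here are placeholders.
sysbench oltp_point_select --db-driver=mysql \
  --mysql-host=db-mysql --mysql-user=sbtest --mysql-password=sbtest \
  --tables=8 --table-size=10000000 run

sysbench oltp_point_select --db-driver=pgsql \
  --pgsql-host=db-pgsql --pgsql-user=sbtest --pgsql-password=sbtest \
  --tables=8 --table-size=10000000 run
```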

The initial plan was to convert pgbench tests into SysBench Lua syntax, and then run standard tests on both databases. After initial results, we modified our tests to better examine specific MySQL and PostgreSQL features.

I converted pgbench tests into SysBench syntax, and put the tests into an open-database-bench GitHub repository.

And then we both faced difficulties.

As I wrote already, I also ran the tests on a Percona machine. For this converted test, the results were almost identical:

Percona machine:

Freematiq machine:

I started investigating. The only place where the Percona machine beat Freematiq’s was disk speed. So I ran the pgbench read-only test, which is identical to SysBench’s point select test with the full dataset in memory. This time, SysBench used 50% of the available CPU resources:

Alexander, in turn, had issues with SysBench, which could not create a high load on PostgreSQL when prepared statements were used:

We contacted SysBench author Alexey Kopytov, and he fixed the MySQL issue. The solution is to:

  • Use SysBench with the options --percentile=0 --max-requests=0  (reasonable CPU usage)
  • Use the concurrency_kit branch (better concurrency and Lua processing)
  • Rewrite Lua scripts to support prepared statements (pull request: https://github.com/akopytov/sysbench/pull/94)
  • Start both SysBench and mysqld with the jemalloc or tcmalloc library pre-loaded
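Put together, a run following those points might look like this (a sketch only: library paths and the script name are illustrative, and the options use the sysbench 0.5-era syntax the converted tests were written for):

```shell
# Start mysqld with jemalloc pre-loaded (mysqld_safe has a flag for this;
# the library path is illustrative and distribution-dependent).
mysqld_safe --malloc-lib=/usr/lib64/libjemalloc.so.1 &

# Run sysbench with jemalloc pre-loaded as well, percentile calculation
# disabled and no request limit, so the run is bounded by time only.
LD_PRELOAD=/usr/lib64/libjemalloc.so.1 sysbench \
  --test=oltp_ro_ps.lua --percentile=0 --max-requests=0 \
  --max-time=300 --num-threads=64 run
```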

A fix for PostgreSQL is on the way. For now, Alexander converted a standard SysBench test into pgbench format and we stuck with it. Not much new for MySQL, but at least we had a baseline for comparison.

The next difficulty I faced was the default operating system parameters. To make a long story short, I changed them to the recommended ones (described below):

The same parameters were better for PostgreSQL performance as well. Alexander set his machine similarly.

After solving these issues we learned and implemented the following:

  • We cannot use a single tool (for now)
  • Alexander wrote a test for pgbench, imitating the standard SysBench tests
  • We are still not able to write custom tests because we use different tools

But we could use these tests as a baseline. After the work Alexander did, we stuck with the standard SysBench tests. I converted them to use prepared statements, and Alexander converted them into pgbench format.

I should mention that I was not able to get the same results as Dimitri for the Read Only and Point Select tests. They are close, but slightly slower. We need to investigate whether this is the result of different hardware or my lack of performance testing abilities. The results from the Read-Write tests are similar.

Another difference was between the PostgreSQL and MySQL tests. MySQL users normally have many connections. It is not rare nowadays to set the max_connections variable to a value in the thousands, allowing that many parallel connections in total. While not recommended, people use this option even without the thread pool plugin. In real life, most of these connections are sleeping. But there is always a chance they will all get used during a spike in website activity.

For MySQL, I tested up to 1024 connections. I used powers of two and multiples of the number of cores: 1, 2, 4, 8, 16, 32, 36, 64, 72, 128, 144, 256, 512 and 1024 threads.

For Alexander, it was more important to test in smaller steps. He started from one thread and increased by 10 threads until 250 parallel threads were reached. So you will see a more detailed graph for PostgreSQL, but no results after 250 threads.
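The MySQL-side stepping can be scripted as a simple loop (a sketch; the test script name and connection options are placeholders):

```shell
# Sketch: run the same workload at each thread count from the list above,
# 300 seconds per step (connection options omitted for brevity).
for t in 1 2 4 8 16 32 36 64 72 128 144 256 512 1024; do
  sysbench --test=oltp_point_select_ps.lua --num-threads=$t \
    --percentile=0 --max-requests=0 --max-time=300 run
done
```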

Here are our comparison results.

Point SELECTs

PostgreSQL and MySQL

  • pgsql-9.6 is standard PostgreSQL
  • pgsql-9.6 + pgxact-align is PostgreSQL with this patch (more details can be found in this blog post)
  • MySQL-5.7 Dimitri is Oracle’s MySQL Server
  • MySQL-5.7 Sveta is Percona Server 5.7.15

OLTP RO

PostgreSQL and MySQL

OLTP RW

PostgreSQL and MySQL

Sync commit in PostgreSQL is a feature similar to innodb_flush_log_at_trx_commit=1 in InnoDB, and async commit is similar to innodb_flush_log_at_trx_commit=2.
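In configuration terms, the rough correspondence is as follows (a sketch of only the relevant settings; the semantics are similar rather than identical, since innodb_flush_log_at_trx_commit=2 still writes the log at commit and fsyncs about once per second):

```ini
# postgresql.conf
synchronous_commit = on    # roughly innodb_flush_log_at_trx_commit = 1: flush at every commit
synchronous_commit = off   # roughly innodb_flush_log_at_trx_commit = 2: commit may return before flush
```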

You can see that the results are very similar: both databases are developing very fast and work well with modern hardware.

MySQL results showing 1024 threads, for reference.

Point SELECT and OLTP RO

PostgreSQL and MySQL

OLTP RW with innodb_flush_log_at_trx_commit set to 1 and 2

PostgreSQL and MySQL

After receiving these results, we did a few feature-specific tests that will be covered in separate blog posts.

More Information

MySQL Options for OLTP RO and Point SELECT tests:

MySQL Options for OLTP RW:

MySQL SysBench parameters:


PostgreSQL pgbench parameters:
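For reference, a pgbench invocation of its built-in select-only test looks roughly like this (a sketch: client count, duration and database name are illustrative, and the custom scripts converted from SysBench live in the repository mentioned earlier):

```shell
# Sketch: built-in select-only test (-S), prepared-statement protocol (-M prepared),
# 72 client connections and 72 worker threads, 300-second run.
pgbench -S -M prepared -c 72 -j 72 -T 300 sbtest
```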

Features in MySQL 5.7 that significantly improved performance:

  • InnoDB: transaction list optimization
  • InnoDB: Reduce lock_sys_t::mutex contention
  • InnoDB: fix index->lock contention
  • InnoDB: faster and parallel flushing
    • Multiple page cleaner threads: WL #6642
    • Reduced number of pages which needs to be flushed: WL #7047
    • Improved adaptive flushing: WL #7868
  • MDL (Meta-Data Lock) scalability
    • Remove THR_LOCK::mutex for InnoDB: WL #6671
    • Partitioned LOCK_grant
    • Number of partitions is constant
    • Thread ID used to assign partition
    • Lock-free MDL lock acquisition for DML

Anastasia: The initial findings of this research were announced at Percona Live Amsterdam 2016. More findings were added to the second version of the same talk given at Moscow HighLoad++ 2016. Hopefully, the third iteration of this talk will be available at Percona Live Open Source Database Conference 2017 in Santa Clara.


Comments
Dane Miller

Sveta and Anastasia, Thanks for posting! I found a couple of broken links. Could you fix:
* sysbench ConcurrencyKit https://github.com/akopytov/sysbench/tree/concurrency_kit
* https://blogs.oracle.com/mysqlinnodb/entry/transaction_life_cycle_improvements_in links to “http://innodb%20team%20blog%20post/”

Sveta Smirnova

Thank you, Dane!

Links are fixed.

Pratap

Great post!
Any reason for testing with a different number of tables? (1 for Postgres vs 8 for mysql)
Is it just the price to pay for having to convert sysbench tests to pgbench?

Wondering how much of this could be re-used for an Aurora Mysql vs Aurora Pgsql benchmark…

Sveta Smirnova

> Any reason for testing with a different number of tables? (1 for Postgres vs 8 for mysql)
> Is it just the price to pay for having to convert sysbench tests to pgbench?

I believe the number of tables does not matter a lot for these tests. It certainly has no effect at all on the RO tests for MySQL (I tested). But this is a good point, and I will make sure that in the next round of tests we have the same number of tables.

> Wondering how much of this could be re-used for an Aurora Mysql vs Aurora Pgsql benchmark…

You can certainly use these tests for those benchmarks. They are standard MySQL tests for now anyway. But you will need to tune the Aurora instances differently.

Brad Buhrkuhl

I would love to see this test done w/ MS Sql Server as well. (And even Oracle, just for the fun of it)

Vadim Tkachenko

MS SQL and Oracle make it very complicated by having the DeWitt clause in their EULAs: http://sqlmag.com/sql-server/devils-dewitt-clause

Brad Buhrkuhl

That is a good (read: depressing) point.

I wonder if we could hack around this by providing a simple script to run on EC2/Azure/etc. that displays the numbers?

Sveta Smirnova

Well, the idea of these particular benchmarks is to compare databases set up by their experts. I did not touch PostgreSQL, and Alexander did not touch MySQL. If we decide to include MS SQL or Oracle (though they are not open source databases), we need to find partner(s) who are known modern experts with them. This person should also not be somebody who used MS SQL/Oracle 10 years ago =)

Sveta Smirnova

*used last time

Mark Callaghan

You did a great job tuning, running and documenting your work.

Can you summarize what is in the concurrency_kit sysbench branch?
Is prepared statement support in sysbench now?
What features do you and Alexey want to add to sysbench?

Sveta Smirnova

> Can you summarize what is in the concurrency_kit sysbench branch?

This branch uses LuaJIT instead of Lua and has scaling improvements. Actually, it looks like while this post was under review Alexey merged concurrency_kit into upstream. At least that is what the commit logs say.

> Is prepared statement support in sysbench now?

You can use prepared statements with any version of SysBench! The difference between the 0.4 and 0.5/1.0 versions is that 0.4 had the --oltp-ps-mode option, while in 0.5/1.0 you have to implement prepared statements in Lua code. Alexey wants to re-implement --oltp-ps-mode for modern versions: https://github.com/akopytov/sysbench/issues/95 I believe this is the reason why he does not advertise prepared statement support. But I did not wait and just re-wrote the standard tests. You may use them as a workaround for now: https://github.com/akopytov/sysbench/pull/94

> What features do you and Alexey want to add to sysbench?

To be able to continue the collaboration with PostgreSQL, we need to solve the SysBench+PostgreSQL issue. For MySQL, I am happy so far.

Adam Scott

Thank you Anastasia and Sveta.

Anastasia Raspopina

All the credits should go to Sveta, actually. All the tests are hers, my assistance was minor and was mainly about organizational things, intros and such. Thank you for supporting the idea though – hopefully we’ll see more open source databases tested under the umbrella of this project.

zhao

In the graph for “OLTP RW”, why is the performance of MySQL-5.7 trx=1 Dimitri better than MySQL-5.7 trx=2 Dimitri?
Why is the performance of MySQL-5.7 trx=2 Sveta better than MySQL-5.7 trx=1 Sveta?

Sveta Smirnova

Oh, it looks like Dimitri’s graphs got misplaced! trx=2 Dimitri is certainly faster than trx=1 Dimitri. Thank you for noticing this! I will ask to have the graph replaced with the correct one.

In any case, the reason is obvious: with trx=2, flushing occurs once per second, while with trx=1 it occurs at every transaction.

Inceptus

Nice explanation & thanks for sharing.

Yousef

A real-life test revealed that Percona is, at least, 3x faster than Postgres, not using any of those benchmarking tools.

Thank you for the post. Could you please disclose how long it took you to do those tests? At least approximately, if possible.

Sorry, I missed your comment. It is in the blog post: 300 seconds. But the data set was warmed up before running the tests.

Reiso

Hi, I have a doubt about OLTP_RW and even OLTP_RO. Looking at the graphs, from >150 clients onward, how many tests were done on the MySQL side? One? Postgres seems to have had 10-client increments. Is my assumption correct?

Sveta Smirnova

What do you mean by how many tests were done on the MySQL side? Everything is in the blog post: the sysbench job warmed up, then ran for 300 seconds for each number of clients.

Yes, PostgreSQL incremented the number of clients by 10, because my partners did not use PgBouncer. For MySQL, incrementing the number of clients by 10 does not make sense, because it can handle 1024 connections out of the box. The Thread Pool plugin is needed only if the number of connections exceeds 5K or even 10K.

Reiso

no need to answer.

Reiso

what I meant was you can delete my comment 🙂 my apologies

Reiso

Have you considered, or do you plan on, doing the comparison with MariaDB 10.4 and Postgres 11? Do you know of any benchmarks?

Sveta Smirnova

No to both questions. This project was an addition to my “normal” support job, and it seems I do not have time to repeat this effort right now. Maybe in the far future.

However, since Percona now supports PostgreSQL, there is hope that the benchmarking team at Percona will do something. Subscribe to posts by Vadim Tkachenko and Alexey Stroganov.

Bai Yang

It seems that the binary log of MySQL was disabled? I think we should look at the results after enabling it. After all, MySQL has two kinds of logs, and its master-slave replication relies on the binlog. PostgreSQL requires only the WAL to achieve both ACID compliance and replication.

Therefore, in a production environment MySQL almost always needs the binlog enabled, and a comparison with the binlog enabled is more realistic.

Holger Winkelmann

Hi, do you have any metrics about the latency behaviour while running the tests?

Sveta Smirnova

Not