These are some of my notes from sysbench in-memory read-only testing over the past day or so:
- In the ‘fetch data by primary key’ benchmark with a separate read snapshot for each statement, MySQL shines until ~200 concurrent threads, after which performance starts dropping slightly faster than one would want, I think mostly from LOCK_open contention in the table cache
- The auto-commit cost (establishing a read snapshot per statement) for SELECTs is ~10% for MySQL, but for PG it can be +50% in plain SQL mode and +130% (!!!!!!!) when using prepared statements (this can be seen in a graph; obviously the global lock PG holds during this operation is held for too long, and maybe it is too costly to acquire)
- Some benchmark results went up by 10% when using jemalloc
- MySQL could accept 10x more connections per second than PG (15000 vs 1500)
- The most confusing behavior MySQL exhibited was on 100-record range scans in PK order:
- At innodb_thread_concurrency=0 it did around 70k range reads/s, both when fetching data and when aggregating (SUM())
- At innodb_thread_concurrency>0 it did only 10k range reads/s when returning data, but was still able to do 70k aggregations/s
- PG was doing ~35k ops/s on that test
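For reference, the query shapes behind these tests look roughly like the following. Table and column names (sbtest, id, k, c) follow sysbench's default oltp schema; treat this as an illustrative sketch of the workload, not the exact generated SQL:

```sql
-- point select by primary key (the 'fetch data by primary key' test)
SELECT c FROM sbtest WHERE id = 42;

-- 100-record range scan in PK order, returning rows to the client
SELECT c FROM sbtest WHERE id BETWEEN 1000 AND 1099 ORDER BY id;

-- same range, but aggregated on the server (SUM())
SELECT SUM(k) FROM sbtest WHERE id BETWEEN 1000 AND 1099;

-- the auto-commit comparison above boils down to this: wrapping
-- several SELECTs in one transaction pays the snapshot cost once,
-- rather than once per statement
BEGIN;
SELECT c FROM sbtest WHERE id = 42;
SELECT c FROM sbtest WHERE id = 43;
COMMIT;
```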
It seems that, at least for systems that do lots of range scans (or joins), managed concurrency kills performance entirely by giving up tickets too often; I need to review this more. (Update: it seems that the offending stack is ha_release_temporary_latches being called way too early in select_send::send_data().)
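Given how badly managed concurrency behaved on data-returning range scans here, the obvious workaround for this kind of workload is to switch the ticket system off. A minimal my.cnf sketch (the innodb_concurrency_tickets value is illustrative, not something I tuned in these runs):

```
[mysqld]
# 0 disables InnoDB's managed concurrency entirely (no ticket
# acquisition), which avoids the 70k -> 10k range-read collapse above
innodb_thread_concurrency = 0

# alternatively, if managed concurrency is kept, a larger ticket
# allocation makes threads give up their slot less often
# innodb_concurrency_tickets = 5000
```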