What Limits MySQL’s Performance?
CPU saturation happens when MySQL works with data that either fits in memory or can be read from disk as fast as needed.
I/O saturation, on the other hand, generally happens when you need to work with much more data than you can fit in memory.
How to Select CPUs for MySQL
You can identify a CPU-bound workload by checking CPU utilization. Rather than looking only at how heavily the CPUs are loaded overall, look at the balance of CPU usage and I/O for your most important queries, and notice whether the CPUs are loaded evenly.
You can use tools such as mpstat, iostat, and vmstat.
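For example, you can sample CPU and disk activity at a fixed interval and compare per-core load against per-device I/O wait. The five-second interval below is only illustrative:

```
# Per-CPU utilization, to see whether the cores are loaded evenly
mpstat -P ALL 5

# Per-device utilization, queue depth, and wait times
iostat -dx 5

# Overall picture: runnable processes, swap activity, I/O wait, CPU split
vmstat 5
```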
Which Is Better: Fast CPUs or Many CPUs?
When you have a CPU-bound workload, MySQL generally benefits most from faster CPUs.
MySQL’s current architecture has scaling issues with multiple CPUs, and MySQL cannot run a single query in parallel across many CPUs.
Broadly speaking, there are two types of performance you might desire:
- Low latency (fast response time)
- High throughput
MySQL replication also works best with fast CPUs, not many CPUs. The I/O subsystem, not the CPU, is usually the bottleneck on a slave.
CPU Architecture
It’s a good idea to choose a 64-bit architecture for all new hardware you purchase.
Scaling to Many CPUs and Cores
One place where multiple CPUs can be quite helpful is an OLTP system.
There are actually two types of concurrency problems in databases, and you need different approaches to solve them:
- Logical concurrency issues
Contention for resources that are visible to the application, such as table or row locks.
- Internal concurrency issues
Contention for resources such as semaphores, access to pages in the InnoDB buffer pool, and so on. You can try to work around these problems by changing server settings, changing your OS, or using different hardware.
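As a sketch of the "changing server settings" approach, you can inspect and adjust InnoDB's concurrency throttle at runtime. The value shown is purely illustrative, and the best setting (if any) depends entirely on your workload:

```
# Check the current limit on threads running inside InnoDB (0 usually means no limit)
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_thread_concurrency';"

# Try a lower limit to reduce contention on internal resources; 8 is an example value
mysql -e "SET GLOBAL innodb_thread_concurrency = 8;"
```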
The CPU architecture (RISC, CISC, depth of pipeline, etc.), CPU model, and OS all affect MySQL’s scaling pattern.
Some systems can even give lower total performance with more processors.
Balancing Memory and Disk Resources
Random Versus Sequential I/O
Database servers use both sequential and random I/O, and random I/O benefits the most from caching.
Sequential reads don't benefit nearly as much from caching, for two reasons:
- Sequential I/O is faster than random I/O
Random accesses are some 2,500 times faster in memory than on disk, while sequential accesses are only about 10 times faster in memory than on disk.
- Storage engines can perform sequential reads faster than random reads
A random read generally requires index operations to locate the row, whereas a sequential read simply scans the data.
Adding memory is the best solution for random-read I/O problems if you can afford it.
Caching, Reads, and Writes
Writes benefit greatly from buffering, because it converts random I/O into more sequential I/O. Asynchronous (buffered) writes are typically handled by the OS and are batched so they can be flushed to disk more optimally. Synchronous (unbuffered) writes have to be written to disk before they finish. That’s why they benefit from buffering in a RAID controller’s battery-backed write-back cache.
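You can see how often MySQL itself demands a synchronous flush by checking two server variables. Both exist in stock MySQL, but the right values depend on your durability requirements and on whether you have a battery-backed write cache:

```
# 1 syncs the InnoDB log to disk at every commit; 0 and 2 relax this and let writes be buffered
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

# How often the binary log is synced to disk (0 leaves the timing to the OS)
mysql -e "SHOW GLOBAL VARIABLES LIKE 'sync_binlog';"
```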
What’s Your Working Set?
The working set and the cache unit
The working set contains both data and indexes, and you should count it in cache units. A cache unit is the smallest unit of data that the storage engine works with.
The size of the cache unit varies between storage engines, and therefore so does the size of the working set. For example, InnoDB always works in pages of 16 KB.
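The working set is only the data you actually touch, so the total size of your data and indexes is just an upper bound on it, but that total is a useful starting point. A minimal sketch using the mysql client (the schema filter is an assumption; adjust it for your server):

```
# Rough upper bound on the working set: total data + index size per storage engine
mysql -e "
  SELECT ENGINE,
         ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS total_gb
  FROM information_schema.TABLES
  WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema')
  GROUP BY ENGINE;"

# How often InnoDB misses the buffer pool and must read a 16 KB page from disk
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
```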
Finding an Effective Memory-to-Disk Ratio
Choosing Hard Disks
Choosing Hardware for a Slave
If you’re planning to use a replication slave for failover, it usually needs to be at least as powerful as the master.
RAID Performance Optimization
RAID Failure, Recovery, and Monitoring
Balancing Hardware RAID and Software RAID
RAID Configuration and Caching
The RAID stripe chunk size
The RAID cache
- Caching reads
- Caching read-ahead data
- Caching writes
- Internal operations
In general, the RAID controller’s memory is a scarce resource that you should try to use wisely. Using it for reads is usually a waste, but using it for writes is an important way to speed up your I/O performance.
Storage Area Networks and Network-Attached Storage
Storage area networks (SANs) and network-attached storage (NAS) are two related, but very different, ways to attach external file storage devices to a server. A SAN exposes a block-level interface that a server sees as being directly attached, while a NAS device exposes a file-based protocol such as NFS or SMB. A SAN is usually connected to the server via the Fibre Channel Protocol (FCP) or iSCSI, while a NAS device is connected via a standard network connection.
Storage Area Networks
The benefits of using a SAN include more flexible storage management and the ability to scale storage.
They permit a server to access a very large number of hard drives (often 50 or more) and typically have large, intelligent caches to buffer writes.
The block-level interface they export appears to the server as logical unit numbers (LUNs), or virtual volumes. Many SANs also allow multiple nodes to be “clustered” to get better performance.
SANs work well when you have a lot of concurrent requests and need high throughput.
A SAN is still ultimately a collection of hard drives that can do only a limited number of I/O operations per second, and because a SAN is external to the server and does its own processing, it adds latency to each I/O request. The extra latency makes SANs less efficient when you need very high performance for synchronous I/O, so keeping your transaction logs on a SAN is usually not as good as using a directly attached RAID controller.
In general, directly attached storage is faster than the LUNs on a SAN with the same number of similar hard drives.
SANs have one big disadvantage: their cost is typically much higher than the cost of comparable directly attached storage (especially internal storage).
Most web applications don’t use SANs, but they’re very popular for so-called enterprise applications. There are several reasons for this:
- Enterprise applications are usually less constrained by budget.
- Enterprises often run many applications, or many instances of a single application, and have unpredictable growth requirements. A SAN gives you the ability to buy a lot of storage, share it, and grow it on demand.
- A SAN’s large buffers can help absorb write spikes and provide fast access to “hot” data, and SANs typically balance load across a very large number of hard drives. All this is usually required for clustered applications that are vertically scaled, but it doesn’t help web applications much. Web applications usually don’t have periods of low activity followed by huge write spikes; most of them are writing a lot of data almost constantly, so buffering writes isn’t helpful. Read buffering isn’t needed either, because databases tend to have their own (large, smart) caches. And the most common and successful strategy for building a very large web application is to use application partitioning (sharding), so web applications are already balancing the load across a large number of hard drives.
Network-Attached Storage
A NAS device is essentially a stripped-down file server appliance, typically with a web interface instead of a physical mouse, monitor, and keyboard.
NAS devices are not very fast, because they’re mounted over the network. They also have a long history of problems with synchronous I/O support and locking, so we don’t recommend them for general database storage.
Using Multiple Disk Volumes
- Data and index files
- Transaction log files
- Binary log files
- General log files (e.g., for the error log, query log, and slow query log)
- Temporary files and tables
A typical disk layout is to have the operating system, swap partition, and binary logs on a RAID 1 volume, and a separate RAID 5 or RAID 10 volume that holds everything else.
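A sketch of pointing each class of file at its own volume; the mount points and config file path below are hypothetical, so adapt them to your own layout:

```
# Append a drop-in config fragment (hypothetical paths and filenames)
cat <<'EOF' >> /etc/mysql/conf.d/volumes.cnf
[mysqld]
datadir                   = /data/mysql         # data and index files (RAID 10)
innodb_log_group_home_dir = /txlogs/mysql       # InnoDB transaction logs
log_bin                   = /binlogs/mysql-bin  # binary logs (RAID 1 volume)
tmpdir                    = /tmp/mysql          # temporary files and tables
EOF
```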
Network Configuration
It’s a good idea to monitor network performance and errors on all network ports. Monitor every port on servers, on routers, and on switches. The Multi Router Traffic Grapher (MRTG) is the tried-and-true solution for device monitoring. Other common tools for monitoring network performance are Smokeping and Cacti.
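For a quick spot check between monitoring intervals, the kernel's own interface counters show errors and drops; the interface name here is a placeholder:

```
# Per-interface packet, error, and drop counters ("eth0" is a placeholder name)
ip -s link show eth0

# The same counters for every interface at once
netstat -i
```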
Choosing an Operating System
- GNU/Linux
- Solaris
- FreeBSD
- Windows: typically used for development and when MySQL is used with desktop applications.
Choosing a Filesystem
When possible, it’s best to use a journaling filesystem, such as ext3, ReiserFS, XFS, ZFS, or JFS. If you don’t, a filesystem check after a crash can take a long time.
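A quick way to see what you're currently running on; the data directory path is a common default and may differ on your system:

```
# Which filesystem backs the MySQL data directory, and with what mount options
df -T /var/lib/mysql
mount | grep mysql    # may print nothing if the datadir is not its own mount point
```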
Threading
As of version 5.0, MySQL uses one thread per connection, plus housekeeping threads, special-purpose threads, and any threads the storage engine creates.
GNU/Linux offers two thread libraries: LinuxThreads and the newer Native POSIX Threads Library (NPTL). NPTL is usually lighter and more efficient, and it doesn’t suffer from many of the problems LinuxThreads had.
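On a glibc-based system you can usually check which thread library is in use (assuming getconf exposes this variable on your distribution):

```
# Prints something like "NPTL 2.x" or "linuxthreads-x.y" depending on the thread library
getconf GNU_LIBPTHREAD_VERSION
```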
Swapping
Swapping occurs when the OS writes some virtual memory to disk because it doesn’t have enough physical memory to hold it.
Swapping is very bad for MySQL’s performance. It defeats the purpose of caching in memory, and it results in lower efficiency than using too little memory for the caches.
On GNU/Linux, you can monitor swapping with vmstat.
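A minimal sketch: watch the swap columns, then lower the kernel's eagerness to swap. The swappiness value of 0 is a common choice for dedicated database servers, not a universal rule:

```
# Nonzero si/so (swap-in/swap-out) columns under load mean the server is swapping
vmstat 5

# Check, then reduce, how aggressively the kernel swaps out memory
cat /proc/sys/vm/swappiness
sudo sysctl vm.swappiness=0
```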
Operating System Status
How to Read vmstat Output
How to Read iostat Output
A CPU-Bound Machine
An I/O-Bound Machine
A Swapping Machine
An Idle Machine