MySQL InnoDB Performance: Scaling Inserts

Posted On Mar-12

Over this past week, I noticed some slowness on one of my InnoDB setups. This one stored roughly 3 TB of data. With approximately 15 million new rows arriving per minute, bulk inserts were the way to go here. The InnoDB buffer pool was set to roughly 52 GB, and things had been running smoothly for almost a year. I restarted MySQL, and inserts seemed fast at first, at about 15,000 rows/sec, but dropped to a crawl (under 1,000 rows/sec) within a few hours.

MySQL InnoDB Slowness: What Changed? First Guess!

Things just don't work for a year and then break overnight (well, it turns out they do, but anyway...). I noticed that the insert volume had almost doubled. But the tunables of this instance were configured to scale well, and it wasn't supposed to break at just twice the typical load!

The reason I chose InnoDB over MyISAM while architecting this was simple: this database had a high number of concurrent writes. MyISAM takes a table-level lock for INSERTs, while InnoDB locks only at the row level. Transactions and rollback were a must too. In a database of this scale with a huge number of writes, InnoDB was the right choice after all.

As I mentioned earlier, this was a highly concurrent production database, so I took the rest of this research home for the weekend. I carved out a VM from my ESX box and gave it 8 GB of RAM, 4 cores, and enough NAS storage.
The idea was to replicate my production instance at a smaller scale, break it, and then figure out why it broke. I ran into some interesting observations.


Goal: Start from scratch and see where things break


  1. Keep innodb_buffer_pool_size at about 75% of total RAM
  2. Bulk-load with LOAD DATA INFILE to boost insert speeds (a sketch of one such load follows this list)
  3. sudo swapoff -a (keep MySQL's memory out of swap)
  4. Pump in random data until inserts slow down
  5. Keep monitoring the insertion rate, disk I/O, InnoDB buffer pool usage, etc.
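For reference, each bulk load boiled down to a statement along these lines; the file path and table name here are illustrative placeholders, not my actual production names:

-- Load one batch of generated rows; the columns match the (ID, Time, Value) layout described under Table Structure below
LOAD DATA INFILE '/tmp/batch_0001.csv'
INTO TABLE metrics_20120312
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(ID, Time, Value);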

These are the settings I changed in MySQL’s config file


innodb_file_per_table                  # one tablespace file per table, so dropping old tables frees disk space
innodb_buffer_pool_size = 2G           # buffer pool scaled down for the 8 GB test VM
innodb_write_io_threads = 8            # more background threads for write I/O
innodb_read_io_threads = 8             # more background threads for read I/O
innodb_thread_concurrency = 16         # cap on threads executing inside InnoDB at once
innodb_flush_log_at_trx_commit = 1     # full durability: flush the redo log on every commit
innodb_log_buffer_size = 8M            # in-memory buffer for redo log writes


Table Structure

Tables contained 3 columns (ID, Time, Value). To keep the MySQL tables relatively small and to scale well, I had built a framework around this whose job was to create a new dated table every day and delete tables older than a month. As older tables were deleted, their data would be summarized and rolled into another, lower-resolution summary table. Think RRD!

Inserts came in the order of 'Time', roughly 4 million every minute. To speed up table reads, the primary key was a compound one (ID + Time). There was also an index on the 'Time' field.
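Put together, the daily DDL looked roughly like the following; the table names, column types, and index name are illustrative guesses rather than the exact production schema:

-- Create today's dated table with the compound primary key and the index on Time
CREATE TABLE metrics_20120312 (
  ID    INT UNSIGNED NOT NULL,
  Time  TIMESTAMP    NOT NULL,
  Value DOUBLE       NOT NULL,
  PRIMARY KEY (ID, Time),   -- compound key (ID + Time) to speed up reads
  KEY idx_time (Time)       -- secondary index on the Time field
) ENGINE=InnoDB;

-- Once its rows have been summarized, drop the dated table from a month earlier
DROP TABLE metrics_20120212;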

Scripts to automate it
The scripts I used for generating and inserting data are checked in on GitHub.

Throwing things onto a graph helps

It didn't take long to break, especially since my ESXi box at home doesn't have the 96 GB of RAM or the 24 cores I have in production.

[Graph ib1: insert rate over time on the unpartitioned daily table]

Observations: Inserts start at around 10-20K rows/sec. Soon after the InnoDB buffer pool is depleted, they drop to roughly 5K/sec. Once the indexes reach a size where they can no longer fit in RAM, inserts become painfully slow, at about 500-1,000 rows/sec.

I repeated the above with several tuning parameters turned on. Although the exact insert numbers varied a bit, the end result was the same: after about an hour, my insert rate started dropping. Since the indexes couldn't fit in RAM, MySQL was doing a lot of disk I/O, which slowed things down. But we already knew that; hence the daily tables! It turned out, though, that these daily tables were getting too big for their indexes to fit in RAM.
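To watch this happen, I kept polling a few standard InnoDB status counters; something along these lines is enough to see the buffer pool filling up and reads spilling over to disk (the exact set you monitor may differ):

-- Rows inserted so far; the difference between two polls gives the insert rate
SHOW GLOBAL STATUS LIKE 'Innodb_rows_inserted';
-- If Innodb_buffer_pool_reads climbs quickly relative to read requests, pages are coming from disk, not RAM
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Free pages remaining in the buffer pool
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';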

And the solution!

Every large database needs careful design, and the solution for two different databases might be completely different. In this case, we were storing time-series data, flowing in with incrementing timestamps. My application was already splitting the data into daily tables, but by late afternoon the primary keys were outgrowing the RAM. So I knew I had to somehow make the primary keys fit in RAM.

Sure, I could throw more RAM at MySQL! But that would just postpone the problem.

The solution was to partition these tables, but in such a way that the data inserter would only need to touch the newest few partitions to load a file, so the amount of index data pulled into RAM would be minimal.

Perhaps I could just use MySQL's partitioning to partition this table by timestamp. Data came in with sequentially incrementing timestamps, so MySQL would probably have to keep just the latest partition's index in RAM... and once data overflows that partition onto the next, MySQL can load the next one into RAM and slowly flush out the older one. That was the thought process. It sounded good in theory; let's see how it actually did in testing...
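Concretely, the idea maps to range-partitioning each daily table on the Time column. The sketch below reuses the placeholder names from earlier, and the four six-hour boundaries are just an example split, not necessarily what production uses:

-- Range-partition the daily table by timestamp so inserts only touch the newest partition's index
ALTER TABLE metrics_20120312
PARTITION BY RANGE (UNIX_TIMESTAMP(Time)) (
  PARTITION p00 VALUES LESS THAN (UNIX_TIMESTAMP('2012-03-12 06:00:00')),
  PARTITION p01 VALUES LESS THAN (UNIX_TIMESTAMP('2012-03-12 12:00:00')),
  PARTITION p02 VALUES LESS THAN (UNIX_TIMESTAMP('2012-03-12 18:00:00')),
  PARTITION p03 VALUES LESS THAN (MAXVALUE)   -- catch-all for the rest of the day
);

This only works because Time is already part of the primary key; MySQL requires every unique key on a partitioned table to include the partitioning column.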

[Graph ib2: insert rate over time with the daily table partitioned by timestamp]

This time the graph showed an interesting pattern. My inserts started at 20K rows/sec, and as the current partition filled up, the insert rate dropped. However, once the data crossed over into the next partition, speeds went back up to 20K/sec. This pattern kept repeating, and the rate never dropped below 8K rows/sec. That's what we'd theorized. Time to put this in production.
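While matching the graph against partition boundaries, a quick way to see which partition the data is currently landing in is to look at the per-partition row counts in INFORMATION_SCHEMA (same placeholder schema and table names as above; the counts are estimates for InnoDB):

-- Approximate rows per partition; the newest partition is the one whose count is still climbing
SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'metrics_db'
  AND TABLE_NAME = 'metrics_20120312';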

Update: I've been running this in production for more than 6 months now. The data volume has since quadrupled, and the database is keeping up with the data it's fed!

