
How did Random I/Os Outperform Sequential I/Os?


Recently, when I was doing some I/O performance tests on an I/O path, I found that 8K random reads (and writes) significantly and consistently outperformed 8K sequential reads (and writes) in terms of I/O throughput (megabytes per second). I was puzzled.

With a traditional hard disk, which is made up of a stack of magnetic platters on a spindle, an electro-mechanical access arm, a printed circuit board, and a hard-case enclosure, the average seek latency is always significantly higher than the average rotational latency (e.g. 9.5ms vs. 4.2ms on a 7200 rpm 500GB disk). Random I/O throughput is therefore always expected to be significantly lower than sequential I/O throughput.
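To make that gap concrete, here is a back-of-the-envelope estimate in Python. The seek and rotational latencies are the figures above; the 100 MB/s sustained transfer rate is my own assumption for illustration, not a measured value:

```python
# Back-of-the-envelope throughput estimate for 8K I/Os on a
# traditional 7200 rpm disk. Transfer rate is an assumed figure.
block_kb = 8
seek_ms = 9.5          # average seek latency (from the post)
rotational_ms = 4.2    # average rotational latency (from the post)
transfer_mb_s = 100.0  # assumed sustained media transfer rate

# Time to move 8K off the platter at the sustained rate
transfer_ms = block_kb / 1024 / transfer_mb_s * 1000

# Random: the head seeks and waits for rotation on every I/O.
random_ms = seek_ms + rotational_ms + transfer_ms
# Sequential: the head stays on track, so only transfer time counts.
sequential_ms = transfer_ms

random_mb_s = block_kb / 1024 / (random_ms / 1000)
sequential_mb_s = block_kb / 1024 / (sequential_ms / 1000)

print(f"8K random:     {random_mb_s:.2f} MB/s")
print(f"8K sequential: {sequential_mb_s:.2f} MB/s")
```

Under these assumptions the sequential stream is two orders of magnitude faster, which is exactly why the result on the SAN LUN was so surprising.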

In fact, sequential I/Os have such a huge performance advantage over random I/Os that the computer industry has labored over the past few decades to reduce random I/Os and convert them into sequential I/Os with such techniques as caching, transaction logging, sorting, and log-structured file systems.

Granted, the I/O path I was working with was not a traditional hard disk. It was a LUN presented from a SAN with a large amount of cache, and to simplify to some extent, the LUN was a RAID 0 stripe set across 12 virtualized drives with a rather large stripe unit size (960K). But how should I explain why 8K random I/Os could outperform 8K sequential I/Os?

After some discussions with a storage professional, we came up with a theory consisting of the following three key factors:

  • Random I/Os were effectively hashed across the multiple drives that make up the RAID 0 device.
  • The relatively large RAID 0 stripe unit size of 960K caused 8K sequential I/Os to cluster on the same drives. Note that it would take 120 sequential 8K I/Os to fill a single 960K stripe.
  • A base amount of cache was assigned to each drive in the RAID 0 set. When random I/Os were hashed across all 12 drives, they benefited from a larger total amount of cache.
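The first two factors can be sketched with a small simulation. This is a minimal sketch that assumes a simple round-robin RAID 0 mapping (each 960K stripe unit goes to the next drive in turn); the SAN's virtualization layer may map stripes differently, so treat the exact mapping as an assumption:

```python
import random

STRIPE_KB = 960   # RAID 0 stripe unit size from the post
DRIVES = 12       # drives in the stripe set
IO_KB = 8
NUM_IOS = 240     # two stripe units' worth of 8K I/Os

def drive_for_offset(offset_kb):
    # Assumed round-robin RAID 0 mapping: each 960K stripe unit
    # lands on the next drive in the set.
    return (offset_kb // STRIPE_KB) % DRIVES

# Sequential workload: consecutive 8K offsets from the start of the LUN
seq_drives = {drive_for_offset(i * IO_KB) for i in range(NUM_IOS)}

# Random workload: 8K-aligned offsets anywhere on the LUN
lun_kb = STRIPE_KB * DRIVES * 100
rng = random.Random(42)  # fixed seed so the run is repeatable
rand_drives = {drive_for_offset(rng.randrange(0, lun_kb, IO_KB))
               for _ in range(NUM_IOS)}

print(f"{NUM_IOS} sequential 8K I/Os touched {len(seq_drives)} of {DRIVES} drives")
print(f"{NUM_IOS} random 8K I/Os touched {len(rand_drives)} of {DRIVES} drives")
```

The 240 sequential I/Os fit inside just two stripe units, so they queue up on two drives, while the random I/Os spread across essentially the whole set of spindles and their per-drive caches.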

Do I have solid proof that these three factors were the root cause of 8K random I/Os outperforming 8K sequential I/Os? No, I don't. But I do have some circumstantial evidence supporting the theory.

First of all, if the theory is correct, I should see the same behavior with smaller I/Os such as 1K reads and writes. Indeed, 1K random I/Os outperformed 1K sequential I/Os on the same I/O path.

Secondly, if the theory is correct, I should not see the same behavior with larger I/Os, especially with a block size that is not much smaller than 960K. Indeed, 128K random I/Os did not outperform 128K sequential I/Os on the same I/O path.
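The stripe-unit arithmetic lines up with this observation. A quick calculation, using the 960K stripe unit from the post, shows how many consecutive sequential I/Os of each block size land inside a single stripe unit (i.e. on the same drive) before the stream moves to the next drive:

```python
# How long a sequential stream dwells on one drive, per block size,
# given the 960K RAID 0 stripe unit from the post.
STRIPE_KB = 960
ios_per_stripe = {io_kb: STRIPE_KB / io_kb for io_kb in (1, 8, 128)}

for io_kb, n in ios_per_stripe.items():
    print(f"{io_kb:>3}K I/Os: {n:g} consecutive I/Os per 960K stripe unit")
```

A sequential 128K stream crosses a drive boundary every 7.5 I/Os, so it spreads across the spindles almost as well as a random stream does, whereas 8K and 1K streams dwell on one drive for 120 and 960 consecutive I/Os respectively.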

Thirdly, if the theory is correct, I should not see the same behavior on an I/O path that has fewer drives. Indeed, on a RAID 0 device with three drives in the same SAN, 8K random I/Os did not outperform 8K sequential I/Os.

Finally, if the theory is correct, I should not see the same behavior on a RAID 0 device with a much smaller amount of cache. Indeed, on a directly attached RAID 0 device, 8K random I/Os did not outperform 8K sequential I/Os.

Now as mentioned, I'm not 100% confident about this theory. I can't prove it beyond a reasonable doubt. Hopefully, some of you reading this blog post know exactly what caused or could have caused 8K random I/Os to outperform 8K sequential I/Os. And if my explanation doesn't match up with yours, I'd love to hear your comments.

