I'd recommend using fio with libaio and direct (unbuffered) disk reads/writes for this, and ioping for basic latency tests:
* Random 4k read test for flash storage:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randread
* And writes:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randwrite
Here's an example from a test storage unit I'm logged into at work right now (NOTE: THIS IS NOT ON INSTANTCLOUD!):
root@s1-san5:~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=/dev/md200 --bs=4k --iodepth=128 --size=5G --numjobs=12 --norandommap \
  --readwrite=randread
test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
...
While the test is running you'll see the storage performance (note the MB/s in the third [] and iops in the fourth []):
Jobs: 12 (f=12): [r(12)] [12.3% done] [2537MB/0KB/0KB /s] [650K/0/0 iops] [eta 00m:50s]
And when it's completed:
Run status group 0 (all jobs):
   READ: io=61440MB, aggrb=2339.7MB/s, minb=199652KB/s, maxb=200362KB/s, mint=26167msec, maxt=26260msec
Disk stats (read/write):
  md100: ios=15636294/0, merge=0/0, ticks=0/0, in_queue=1581403800, util=100.00%, aggrios=7864320/0, aggrmerge=0/0, aggrticks=128880/0, aggrin_queue=131372, aggrutil=100.00%
  nvme0n1: ios=7576253/0, merge=0/0, ticks=123812/0, in_queue=125748, util=100.00%
  nvme1n1: ios=8152387/0, merge=0/0, ticks=133948/0, in_queue=136996, util=100.00%
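As a sanity check, the throughput and IOPS fields in that status line should roughly agree with each other (values below are copied from the run above; fio's "MB" is binary, i.e. 1024 KB):

```shell
# 2537 MB/s of 4k requests should work out to roughly the 650K iops
# fio reports in the same status line.
awk 'BEGIN { mbps = 2537; bs_kib = 4; printf "%.0fK iops\n", mbps * 1024 / bs_kib / 1000 }'
# prints "649K iops", i.e. ~650K as shown in the Jobs line
```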
* Throughput: aggrb=2339.7MB/s
* IOs (Read in this case): ios=15636294/0
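If you want to script this rather than eyeball it, those two numbers are easy to pull out of a captured summary with grep. A minimal sketch (the sample strings are shortened copies of the output above):

```shell
# Extract aggregate bandwidth and total read IOs from captured fio summary lines.
summary='READ: io=61440MB, aggrb=2339.7MB/s, minb=199652KB/s, maxb=200362KB/s'
diskstats='md100: ios=15636294/0, merge=0/0'
echo "$summary"   | grep -o 'aggrb=[^,]*'   # aggrb=2339.7MB/s
echo "$diskstats" | grep -o 'ios=[^,]*'     # ios=15636294/0
```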
And then with ioping to test latency:
* Here's an example of really bad storage latency on my crappy old rotational RAID array at home:
root@nas:/mnt/raid# ioping /dev/md0
4 KiB from /dev/md0 (block device 7.28 TiB): request=1 time=27.2 ms
4 KiB from /dev/md0 (block device 7.28 TiB): request=2 time=15.7 ms
* Here's an example of pretty good storage latency on my new storage at work:
root@s1-san5:~ # ioping /dev/md200
4 KiB from /dev/md200 (block device 1.09 TiB): request=1 time=136 us
4 KiB from /dev/md200 (block device 1.09 TiB): request=2 time=124 us
4 KiB from /dev/md200 (block device 1.09 TiB): request=3 time=112 us
* And one more on a VM with storage provisioned over iSCSI to a very slow rotational storage array that's quite busy:
root@nagios:~ # ioping /dev/xvda
4096 bytes from /dev/xvda (device 15.0 Gb): request=1 time=11.6 ms
4096 bytes from /dev/xvda (device 15.0 Gb): request=2 time=0.2 ms
4096 bytes from /dev/xvda (device 15.0 Gb): request=3 time=7.1 ms
4096 bytes from /dev/xvda (device 15.0 Gb): request=4 time=1.2 ms
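ioping prints its own min/avg/max summary when it exits, but if you've only captured a few request lines, averaging them yourself is trivial. A sketch with the three samples from the work array above (note the units have to match before you average):

```shell
# Average the time= values from captured ioping request lines (all in us here).
printf '%s\n' \
  'request=1 time=136 us' \
  'request=2 time=124 us' \
  'request=3 time=112 us' |
awk '{ sub(/time=/, "", $2); sum += $2; n++ } END { printf "avg %.0f us\n", sum / n }'
# prints "avg 124 us"
```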
OK, so let's try this on instantcloud.io / Scaleway:
* IOP/s - Random 4k reads:
Jobs: 12 (f=12): [r(12)] [0.5% done] [31050KB/0KB/0KB /s] [7762/0/0 iops] [eta 37m:32s]
* IOP/s - Random 4k writes:
Jobs: 12 (f=12): [w(12)] [0.2% done] [0KB/7848KB/0KB /s] [0/1962/0 iops] [eta 02h:29m:10s]
* Latency:
root@instantcloud:~# ioping /dev/nbd0
4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=1 time=1.4 ms
4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=2 time=1.4 ms
4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=3 time=1.4 ms
4.0 KiB from /dev/nbd0 (device 46.6 GiB): request=4 time=2.8 ms
Conclusion: Good performance for a small ARM server, but not even close to a single entry-level consumer-grade SATA SSD.
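To put a rough number on that gap: an entry-level consumer SATA SSD is commonly quoted around 80K random 4k read IOPS (that figure is an assumption, not something measured here), versus the ~7762 IOPS measured above:

```shell
# Assumed ~80K IOPS for an entry-level SATA SSD vs. the 7762 IOPS measured.
awk 'BEGIN { printf "%.1fx\n", 80000 / 7762 }'
# prints "10.3x"
```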