Testing Virtual Machine Disk IO in a Virtualized Environment

When a virtual machine's backing storage is a SATA SSD, testing should take the characteristics of SATA SSDs into account (typical IOPS, latency, and throughput figures), tune the test parameters so the SSD can reach its full performance, and steer around potential bottlenecks in the virtualization layer. Below is a disk IO test plan for virtual machines backed by SATA SSDs.

I. Reference performance characteristics of SATA SSDs

Typical performance figures for a physical SATA SSD:

  • 4K random read IOPS: roughly 8,000-15,000
  • 4K random write IOPS: roughly 4,000-8,000
  • Sequential read/write throughput: roughly 450-550 MB/s (capped by the SATA 3.0 interface bandwidth)
  • Average latency: reads ≈ 0.1-0.5 ms, writes ≈ 0.2-1 ms
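
Before benchmarking, it is worth confirming from inside the guest that the virtual disk is presented as non-rotational. A minimal sketch (the device name sda is an assumption, substitute your own; the guest only sees what the hypervisor reports, so treat this as a hint rather than proof):

lsblk -d -o NAME,SIZE,ROTA          # ROTA=0 means non-rotational (SSD-backed)
cat /sys/block/sda/queue/rotational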

II. Testing with fio

1. 4K random read (the core read metric for a SATA SSD)
fio --name=ssd_randread_4k \
    --filename=/data/fio_test_ssd \
    --rw=randread \
    --bs=4k \
    --size=20G \
    --numjobs=4 \
    --runtime=120 \
    --iodepth=16 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting \
    --norandommap \
    --randrepeat=0

Test results

fio-3.7
Starting 4 processes
ssd_randread_4k: Laying out IO file (1 file / 20480MiB)
Jobs: 4 (f=4): [r(4)][100.0%][r=463MiB/s,w=0KiB/s][r=119k,w=0 IOPS][eta 00m:00s]
ssd_randread_4k: (groupid=0, jobs=4): err= 0: pid=1722: Mon Dec 8 18:27:30 2025
read: IOPS=116k, BW=455MiB/s (477MB/s)(53.3GiB/120001msec)
slat (usec): min=3, max=11378, avg= 8.22, stdev=29.56
clat (usec): min=87, max=42508, avg=540.39, stdev=330.43
lat (usec): min=97, max=43990, avg=548.87, stdev=336.41
clat percentiles (usec):
| 1.00th=[ 265], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 379],
| 30.00th=[ 420], 40.00th=[ 465], 50.00th=[ 510], 60.00th=[ 562],
| 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 725], 95.00th=[ 791],
| 99.00th=[ 1270], 99.50th=[ 1532], 99.90th=[ 2868], 99.95th=[ 7767],
| 99.99th=[13304]
bw ( KiB/s): min=94072, max=124736, per=24.99%, avg=116428.18, stdev=3521.56, samples=959
iops : min=23518, max=31184, avg=29107.01, stdev=880.38, samples=959
lat (usec) : 100=0.01%, 250=0.57%, 500=47.14%, 750=44.64%, 1000=5.64%
lat (msec) : 2=1.83%, 4=0.09%, 10=0.05%, 20=0.03%, 50=0.01%
cpu : usr=7.62%, sys=38.72%, ctx=8932045, majf=0, minf=202
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=13974987,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=455MiB/s (477MB/s), 455MiB/s-455MiB/s (477MB/s-477MB/s), io=53.3GiB (57.2GB), run=120001-120001msec
Disk stats (read/write):
dm-0: ios=13947459/160, merge=0/0, ticks=7344046/204, in_queue=7379595, util=100.00%, aggrios=13974846/66, aggrmerge=141/98, aggrticks=7400120/74, aggrin_queue=7415748, aggrutil=100.00%
sda: ios=13974846/66, merge=141/98, ticks=7400120/74, in_queue=7415748, util=100.00%

  • norandommap: skips fio's random-map bookkeeping so offsets are drawn freely rather than from a fixed map, which is closer to a real random workload;
  • randrepeat=0: uses a non-repeatable random sequence, generating a fresh random IO pattern on each run, as real business traffic does. A job-file form of this test is sketched below.
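
For repeated runs, the same parameters can be kept in a job file rather than a long command line. A minimal sketch of an equivalent job, under the same assumptions as the command above:

cat > ssd_randread_4k.fio <<'EOF'
# equivalent of the command-line invocation above, as a reusable job file
[ssd_randread_4k]
filename=/data/fio_test_ssd
rw=randread
bs=4k
size=20G
numjobs=4
runtime=120
iodepth=16
direct=1
ioengine=libaio
group_reporting
norandommap
randrepeat=0
EOF
fio ssd_randread_4k.fio
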
2. 4K random write (the SSD's write performance; watch the steady state)
fio --name=ssd_randwrite_4k \
    --filename=/data/fio_test_ssd \
    --rw=randwrite \
    --bs=4k \
    --size=20G \
    --numjobs=4 \
    --runtime=120 \
    --iodepth=16 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting \
    --norandommap \
    --randrepeat=0 \
    --write_bw_log=ssd_write_bw \
    --write_iops_log=ssd_write_iops

Test results

fio-3.7
Starting 4 processes
ssd_randwrite_4k: Laying out IO file (1 file / 20480MiB)
Jobs: 4 (f=4): [w(4)][100.0%][r=0KiB/s,w=195MiB/s][r=0,w=49.8k IOPS][eta 00m:00s]
ssd_randwrite_4k: (groupid=0, jobs=4): err= 0: pid=1740: Mon Dec 8 18:33:05 2025
write: IOPS=43.1k, BW=168MiB/s (176MB/s)(19.7GiB/120003msec)
slat (usec): min=3, max=17134, avg=52.54, stdev=195.57
clat (usec): min=10, max=21858, avg=1432.02, stdev=1369.64
lat (usec): min=46, max=21864, avg=1484.81, stdev=1381.15
clat percentiles (usec):
| 1.00th=[ 255], 5.00th=[ 437], 10.00th=[ 578], 20.00th=[ 758],
| 30.00th=[ 906], 40.00th=[ 1045], 50.00th=[ 1188], 60.00th=[ 1336],
| 70.00th=[ 1516], 80.00th=[ 1713], 90.00th=[ 2057], 95.00th=[ 2606],
| 99.00th=[ 8979], 99.50th=[10814], 99.90th=[13960], 99.95th=[15270],
| 99.99th=[18220]
bw ( KiB/s): min= 187, max=373722, per=2.45%, avg=4229.28, stdev=3292.45, samples=5169118
iops : min= 1, max= 1, avg= 1.00, stdev= 0.00, samples=5169118
lat (usec) : 20=0.01%, 50=0.01%, 100=0.06%, 250=0.86%, 500=6.08%
lat (usec) : 750=12.25%, 1000=17.48%
lat (msec) : 2=52.07%, 4=8.24%, 10=2.26%, 20=0.69%, 50=0.01%
cpu : usr=3.47%, sys=14.06%, ctx=2944742, majf=0, minf=12972
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,5169118,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=19.7GiB (21.2GB), run=120003-120003msec
Disk stats (read/write):
dm-0: ios=3/8225191, merge=0/0, ticks=0/7108612, in_queue=7117708, util=95.22%, aggrios=3/7711493, aggrmerge=0/513698, aggrticks=0/6642118, aggrin_queue=6649381, aggrutil=94.97%
sda: ios=3/7711493, merge=0/513698, ticks=0/6642118, in_queue=6649381, util=94.97%

  • --write_bw_log / --write_iops_log: record write bandwidth and IOPS over time, making it easy to observe the SATA SSD's steady-state performance rather than the inflated early figures produced by cache acceleration; a way to summarize the logs is sketched below.
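
The resulting logs can be summarized straight from the shell. A rough sketch, assuming fio wrote a per-job log named ssd_write_bw_bw.1.log (naming varies across fio versions) whose first column is time in ms and second is bandwidth in KiB/s:

# compare the first and last minute of the run to spot cache fall-off
awk -F', *' '$1 <  60000 { a += $2; n++ }
             $1 >= 60000 { b += $2; m++ }
             END { printf "first 60s: %.0f KiB/s, last 60s: %.0f KiB/s\n", a/n, b/m }' \
    ssd_write_bw_bw.1.log
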
3. 128K sequential read (peak sequential read of a SATA SSD)
fio --name=ssd_seqread_128k \
    --filename=/data/fio_test_ssd \
    --rw=read \
    --bs=128k \
    --size=40G \
    --numjobs=2 \
    --runtime=120 \
    --iodepth=32 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting

Test results

...
fio-3.7
Starting 2 processes
ssd_seqread_128k: Laying out IO file (1 file / 40960MiB)
fio: ENOSPC on laying out file, stopping
Jobs: 2 (f=2): [R(2)][89.7%][r=12.0GiB/s,w=0KiB/s][r=98.6k,w=0 IOPS][eta 00m:03s]
ssd_seqread_128k: (groupid=0, jobs=2): err= 0: pid=1824: Mon Dec 8 18:36:45 2025
read: IOPS=25.3k, BW=3164MiB/s (3318MB/s)(80.0GiB/25889msec)
slat (usec): min=13, max=475, avg=17.77, stdev= 5.63
clat (usec): min=3, max=35884, avg=2504.43, stdev=1674.08
lat (usec): min=18, max=35899, avg=2522.44, stdev=1673.58
clat percentiles (usec):
| 1.00th=[ 537], 5.00th=[ 619], 10.00th=[ 619], 20.00th=[ 627],
| 30.00th=[ 635], 40.00th=[ 824], 50.00th=[ 3458], 60.00th=[ 3720],
| 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4293],
| 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[11863],
| 99.99th=[29492]
bw ( MiB/s): min= 796, max= 6182, per=48.09%, avg=1521.67, stdev=1473.71, samples=102
iops : min= 6370, max=49462, avg=12173.24, stdev=11789.60, samples=102
lat (usec) : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.03%, 500=0.08%
lat (usec) : 750=39.56%, 1000=1.26%
lat (msec) : 2=0.43%, 4=41.14%, 10=17.44%, 20=0.01%, 50=0.04%
cpu : usr=2.89%, sys=26.88%, ctx=131493, majf=0, minf=69
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=655360,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=3164MiB/s (3318MB/s), 3164MiB/s-3164MiB/s (3318MB/s-3318MB/s), io=80.0GiB (85.9GB), run=25889-25889msec

  • For sequential reads, numjobs=2 is sufficient (the SATA 3.0 interface tops out around 600 MB/s, so a single job already approaches the ceiling; extra jobs mainly add virtualization-layer scheduling overhead). Also note the "fio: ENOSPC on laying out file" line above: /data did not have 40G free, so fio stopped laying out the file early, and the same shortage produces the "No space left on device" errors in the next test; a free-space check is sketched below.
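
A quick sketch for sizing --size against the space actually available (path as used above):

# check free space on the test filesystem before picking --size
df -h /data
# e.g. cap the test file at roughly 80% of the available space
avail_kb=$(df --output=avail /data | tail -1)
echo "suggested max --size: $(( avail_kb * 8 / 10 / 1024 / 1024 ))G"
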
4. 128K sequential write (peak sequential write of a SATA SSD)
fio --name=ssd_seqwrite_128k \
    --filename=/data/fio_test_ssd \
    --rw=write \
    --bs=128k \
    --size=40G \
    --numjobs=2 \
    --runtime=120 \
    --iodepth=32 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting

Test results

fio-3.7
Starting 2 processes
ssd_seqwrite_128k: Laying out IO file (1 file / 40960MiB)
ssd_seqwrite_128k: Laying out IO file (1 file / 40960MiB)
fio: io_u error on file /data/fio_test_ssd: No space left on device: write offset=25974407168, buflen=131072
fio: io_u error on file /data/fio_test_ssd: No space left on device: write offset=25974407168, buflen=131072
fio: io_u error on file /data/fio_test_ssd: No space left on device: write offset=25974538240, buflen=131072
fio: io_u error on file /data/fio_test_ssd: No space left on device: write offset=25974538240, buflen=131072
fio: pid=1833, err=28/file:io_u.c:1747, func=io_u error, error=No space left on device
fio: pid=1834, err=28/file:io_u.c:1747, func=io_u error, error=No space left on device
ssd_seqwrite_128k: (groupid=0, jobs=2): err=28 (file:io_u.c:1747, func=io_u error, error=No space left on device): pid=1833: Mon Dec 8 18:39:14 2025
write: IOPS=15.1k, BW=1886MiB/s (1978MB/s)(48.4GiB/26266msec)
slat (usec): min=8, max=1076, avg=129.89, stdev=47.57
clat (usec): min=511, max=7658, avg=4109.01, stdev=334.29
lat (usec): min=647, max=7854, avg=4239.18, stdev=341.72
clat percentiles (usec):
| 1.00th=[ 3720], 5.00th=[ 3752], 10.00th=[ 3785], 20.00th=[ 3818],
| 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 4113], 60.00th=[ 4178],
| 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4686],
| 99.00th=[ 4883], 99.50th=[ 5538], 99.90th=[ 6849], 99.95th=[ 7046],
| 99.99th=[ 7570]
bw ( KiB/s): min=737536, max=1026816, per=50.01%, avg=965923.34, stdev=51745.65, samples=104
iops : min= 5762, max= 8022, avg=7546.25, stdev=404.26, samples=104
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=46.65%, 10=53.33%
cpu : usr=6.94%, sys=23.12%, ctx=447588, majf=0, minf=133
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,396402,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=1886MiB/s (1978MB/s), 1886MiB/s-1886MiB/s (1978MB/s-1978MB/s), io=48.4GiB (51.9GB), run=26266-26266msec
Disk stats (read/write):
dm-0: ios=0/394470, merge=0/0, ticks=0/33497, in_queue=33471, util=69.42%, aggrios=0/396493, aggrmerge=0/3, aggrticks=0/33327, aggrin_queue=33155, aggrutil=68.65%
sda: ios=0/396493, merge=0/3, ticks=0/33327, in_queue=33155, util=68.65%

5. Mixed random read/write (simulating a real workload, e.g. a 7:3 read/write ratio)
fio --name=ssd_randrw_4k \
    --filename=/data/fio_test_ssd \
    --rw=randrw \
    --rwmixread=70 \
    --bs=4k \
    --size=20G \
    --numjobs=4 \
    --runtime=120 \
    --iodepth=16 \
    --direct=1 \
    --ioengine=libaio \
    --group_reporting \
    --norandommap \
    --randrepeat=0

Test results

fio-3.7
Starting 4 processes
ssd_randrw_4k: Laying out IO file (1 file / 20480MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=259MiB/s,w=111MiB/s][r=66.3k,w=28.4k IOPS][eta 00m:00s]
ssd_randrw_4k: (groupid=0, jobs=4): err= 0: pid=1849: Mon Dec 8 18:43:44 2025
read: IOPS=66.3k, BW=259MiB/s (271MB/s)(30.3GiB/120001msec)
slat (usec): min=3, max=9633, avg= 9.25, stdev=44.20
clat (usec): min=10, max=29191, avg=688.57, stdev=697.93
lat (usec): min=84, max=29197, avg=698.13, stdev=700.75
clat percentiles (usec):
| 1.00th=[ 219], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 343],
| 30.00th=[ 367], 40.00th=[ 400], 50.00th=[ 437], 60.00th=[ 490],
| 70.00th=[ 603], 80.00th=[ 947], 90.00th=[ 1369], 95.00th=[ 1663],
| 99.00th=[ 3589], 99.50th=[ 4113], 99.90th=[ 5342], 99.95th=[ 6521],
| 99.99th=[16319]
bw ( KiB/s): min=42944, max=83056, per=24.99%, avg=66226.89, stdev=8008.73, samples=957
iops : min=10736, max=20764, avg=16556.66, stdev=2002.21, samples=957
write: IOPS=28.4k, BW=111MiB/s (116MB/s)(13.0GiB/120001msec)
slat (usec): min=3, max=5359, avg= 9.62, stdev=45.33
clat (usec): min=35, max=28855, avg=611.71, stdev=581.85
lat (usec): min=71, max=28864, avg=621.64, stdev=585.21
clat percentiles (usec):
| 1.00th=[ 186], 5.00th=[ 225], 10.00th=[ 249], 20.00th=[ 289],
| 30.00th=[ 330], 40.00th=[ 371], 50.00th=[ 416], 60.00th=[ 486],
| 70.00th=[ 627], 80.00th=[ 832], 90.00th=[ 1188], 95.00th=[ 1532],
| 99.00th=[ 3064], 99.50th=[ 3687], 99.90th=[ 4883], 99.95th=[ 5538],
| 99.99th=[15795]
bw ( KiB/s): min=18288, max=35880, per=24.99%, avg=28401.76, stdev=3442.69, samples=957
iops : min= 4572, max= 8970, avg=7100.37, stdev=860.71, samples=957
lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=4.79%, 500=56.68%
lat (usec) : 750=13.84%, 1000=7.42%
lat (msec) : 2=13.76%, 4=3.00%, 10=0.47%, 20=0.04%, 50=0.01%
cpu : usr=5.36%, sys=34.74%, ctx=5569268, majf=0, minf=132
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=7950470,3409565,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: bw=259MiB/s (271MB/s), 259MiB/s-259MiB/s (271MB/s-271MB/s), io=30.3GiB (32.6GB), run=120001-120001msec
WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=13.0GiB (13.0GB), run=120001-120001msec
Disk stats (read/write):
dm-0: ios=7948658/3408770, merge=0/0, ticks=4716301/1863151, in_queue=6599137, util=100.00%, aggrios=7950437/3409561, aggrmerge=32/13, aggrticks=4734811/1869742, aggrin_queue=6610447, aggrutil=100.00%
sda: ios=7950437/3409561, merge=32/13, ticks=4734811/1869742, in_queue=6610447, util=100.00%
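
As a sanity check, the achieved read share can be recomputed from the "issued rwts" totals above; it lands on the requested 70%:

# 7950470 reads vs 3409565 writes, from the issued rwts line
awk 'BEGIN { printf "read share: %.1f%%\n", 100 * 7950470 / (7950470 + 3409565) }'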

III. Testing with sysbench

1. Prepare the test files

First, generate the temporary files used by the tests (this only needs to be done once; subsequent tests reuse the files, and a fresh round of testing should be preceded by a cleanup):

sysbench fileio --file-total-size=10G --file-test-mode=rndrw prepare

--file-total-size: total size of the test files (recommended to be at least twice the VM's memory so the page cache cannot hold it all, e.g. 20G for a VM with 8G of RAM; the 10G here is illustrative);

--file-test-mode: the test mode (specify the target mode up front; the prepare phase only lays out the files and runs no test).
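
A small sketch that derives the size from the guest's memory instead of hard-coding it, following the 2x advice above:

# size the file set at twice the guest RAM to defeat the page cache
mem_gb=$(free -g | awk '/^Mem:/ { print $2 }')
sysbench fileio --file-total-size=$(( mem_gb * 2 ))G --file-test-mode=rndrw prepare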

2. Random read/write test (the core scenario)

sysbench fileio --file-total-size=10G --file-test-mode=rndrw --file-block-size=4k --threads=4 --time=60 --report-interval=10 run

--file-test-mode=rndrw: random read/write mode (alternatives: rndrd / rndwr / seqrd / seqwr);

--file-block-size=4k: block size (simulating small-file workloads; 8k/16k/64k etc. are also options);

--threads=4: number of concurrent threads (best matched to the VM's CPU core count, e.g. 4 for a 4-core VM);

--time=60: test duration in seconds;

--report-interval=10: print intermediate results every 10 seconds.
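
To run the whole matrix in one go, a minimal wrapper sketch (the mode/block-size/thread pairings follow the examples in this section):

#!/bin/bash
# run each sysbench fileio mode with its own prepare/run/cleanup cycle
SIZE=10G
for mode in rndrd rndwr seqrd seqwr; do
    case $mode in
        rnd*) bs=4k;   threads=4 ;;
        seq*) bs=128k; threads=2 ;;
    esac
    sysbench fileio --file-total-size=$SIZE --file-test-mode=$mode prepare > /dev/null
    sysbench fileio --file-total-size=$SIZE --file-test-mode=$mode \
        --file-block-size=$bs --threads=$threads --time=60 --report-interval=10 run
    sysbench fileio --file-total-size=$SIZE --file-test-mode=$mode cleanup > /dev/null
done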

Test results

sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 4
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 80MiB each
10GiB total file size
Block size 4KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...

Threads started!

[ 10s ] reads: 198.07 MiB/s writes: 132.05 MiB/s fsyncs: 108159.45/s latency (ms,95%): 0.092
[ 20s ] reads: 199.70 MiB/s writes: 133.13 MiB/s fsyncs: 109063.10/s latency (ms,95%): 0.090
[ 30s ] reads: 203.09 MiB/s writes: 135.39 MiB/s fsyncs: 110914.97/s latency (ms,95%): 0.087
[ 40s ] reads: 203.67 MiB/s writes: 135.78 MiB/s fsyncs: 111238.50/s latency (ms,95%): 0.087
[ 50s ] reads: 203.74 MiB/s writes: 135.83 MiB/s fsyncs: 111260.53/s latency (ms,95%): 0.087

File operations:
    reads/s:                      51682.82
    writes/s:                     34455.27
    fsyncs/s:                     110264.65

Throughput:
    read, MiB/s:                  201.89
    written, MiB/s:               134.59

General statistics:
    total time:                          60.0023s
    total number of events:              11784370

Latency (ms):
         min:                                    0.00
         avg:                                    0.02
         max:                                    3.67
         95th percentile:                        0.09
         sum:                               233209.18

Threads fairness:
    events (avg/stddev):           2946092.5000/321.58
    execution time (avg/stddev):   58.3023/0.00

3. Random read test

sysbench fileio --file-total-size=10G --file-test-mode=rndrd --file-block-size=4k --threads=4 --time=60 --report-interval=10 run

Test results

sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 4
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 80MiB each
10GiB total file size
Block size 4KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Initializing worker threads...

Threads started!

[ 10s ] reads: 3404.67 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.006
[ 20s ] reads: 3474.04 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.006
[ 30s ] reads: 3479.52 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.006
[ 40s ] reads: 3494.23 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.006
[ 50s ] reads: 3533.76 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.005

File operations:
    reads/s:                      894161.84
    writes/s:                     0.00
    fsyncs/s:                     0.00

Throughput:
    read, MiB/s:                  3492.82
    written, MiB/s:               0.00

General statistics:
    total time:                          60.0002s
    total number of events:              53651123

Latency (ms):
         min:                                    0.00
         avg:                                    0.00
         max:                                    4.50
         95th percentile:                        0.01
         sum:                               191991.84

Threads fairness:
    events (avg/stddev):           13412780.7500/80855.18
    execution time (avg/stddev):   47.9980/0.25
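
3.5 GiB/s of 4K random reads is far beyond a SATA link: these reads are served from the guest page cache, since sysbench fileio uses buffered synchronous I/O by default. A hedged sketch that pushes reads to the virtual disk (--file-extra-flags=direct requests O_DIRECT; dropping caches needs root):

# flush the page cache, then re-run the read test with O_DIRECT
sync && echo 3 > /proc/sys/vm/drop_caches
sysbench fileio --file-total-size=10G --file-test-mode=rndrd \
    --file-block-size=4k --file-extra-flags=direct \
    --threads=4 --time=60 --report-interval=10 run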

4. Random write test

sysbench fileio --file-total-size=10G --file-test-mode=rndwr --file-block-size=4k --threads=4 --time=60 --report-interval=10 run

Test results

sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 4
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 80MiB each
10GiB total file size
Block size 4KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random write test
Initializing worker threads...

Threads started!

[ 10s ] reads: 0.00 MiB/s writes: 164.82 MiB/s fsyncs: 54008.60/s latency (ms,95%): 0.163
[ 20s ] reads: 0.00 MiB/s writes: 164.96 MiB/s fsyncs: 54042.97/s latency (ms,95%): 0.163
[ 30s ] reads: 0.00 MiB/s writes: 166.99 MiB/s fsyncs: 54727.02/s latency (ms,95%): 0.160
[ 40s ] reads: 0.00 MiB/s writes: 166.49 MiB/s fsyncs: 54549.99/s latency (ms,95%): 0.163
[ 50s ] reads: 0.00 MiB/s writes: 164.19 MiB/s fsyncs: 53802.27/s latency (ms,95%): 0.163
[ 60s ] reads: 0.00 MiB/s writes: 165.79 MiB/s fsyncs: 54356.93/s latency (ms,95%): 0.163

File operations:
    reads/s:                      0.00
    writes/s:                     42375.63
    fsyncs/s:                     54247.85

Throughput:
    read, MiB/s:                  0.00
    written, MiB/s:               165.53

General statistics:
    total time:                          60.0048s
    total number of events:              5797495

Latency (ms):
         min:                                    0.00
         avg:                                    0.04
         max:                                    1.27
         95th percentile:                        0.16
         sum:                               236684.42

Threads fairness:
    events (avg/stddev):           1449373.7500/3045.04
    execution time (avg/stddev):   59.1711/0.01

5. Sequential read test (large-file scenario)

sysbench fileio --file-total-size=10G --file-test-mode=seqrd --file-block-size=128k --threads=2 --time=60 --report-interval=10 run

Test results

sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 2
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 80MiB each
10GiB total file size
Block size 128KiB
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing sequential read test
Initializing worker threads...

Threads started!

[ 10s ] reads: 10296.09 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.028
[ 20s ] reads: 10471.65 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.026
[ 30s ] reads: 10477.87 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.026
[ 40s ] reads: 10478.95 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.026
[ 50s ] reads: 10462.13 MiB/s writes: 0.00 MiB/s fsyncs: 0.00/s latency (ms,95%): 0.027

File operations:
    reads/s:                      83494.31
    writes/s:                     0.00
    fsyncs/s:                     0.00

Throughput:
    read, MiB/s:                  10436.79
    written, MiB/s:               0.00

General statistics:
    total time:                          60.0003s
    total number of events:              5010046

Latency (ms):
         min:                                    0.02
         avg:                                    0.02
         max:                                    0.24
         95th percentile:                        0.03
         sum:                               117896.98

Threads fairness:
    events (avg/stddev):           2505023.0000/3739.00
    execution time (avg/stddev):   58.9485/0.01

6. Sequential write test

sysbench fileio --file-total-size=10G --file-test-mode=seqwr --file-block-size=128k --threads=2 --time=60 --report-interval=10 run

Test results

sysbench 1.0.17 (using system LuaJIT 2.0.4)

Running the test with following options:
Number of threads: 2
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Extra file open flags: (none)
128 files, 80MiB each
10GiB total file size
Block size 128KiB
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing sequential write (creation) test
Initializing worker threads...

Threads started!

[ 10s ] reads: 0.00 MiB/s writes: 1211.07 MiB/s fsyncs: 12390.71/s latency (ms,95%): 0.125
[ 20s ] reads: 0.00 MiB/s writes: 1002.50 MiB/s fsyncs: 10265.34/s latency (ms,95%): 0.068
[ 30s ] reads: 0.00 MiB/s writes: 1004.72 MiB/s fsyncs: 10288.13/s latency (ms,95%): 0.097
[ 40s ] reads: 0.00 MiB/s writes: 1004.03 MiB/s fsyncs: 10282.75/s latency (ms,95%): 0.104
[ 50s ] reads: 0.00 MiB/s writes: 1003.61 MiB/s fsyncs: 10283.46/s latency (ms,95%): 0.102
[ 60s ] reads: 0.00 MiB/s writes: 1004.11 MiB/s fsyncs: 10292.79/s latency (ms,95%): 0.108

File operations:
    reads/s:                      0.00
    writes/s:                     8306.47
    fsyncs/s:                     10635.73

Throughput:
    read, MiB/s:                  0.00
    written, MiB/s:               1038.31

General statistics:
    total time:                          60.0047s
    total number of events:              1136390

Latency (ms):
         min:                                    0.00
         avg:                                    0.10
         max:                                    29.09
         95th percentile:                        0.10
         sum:                               119295.90

Threads fairness:
    events (avg/stddev):           568195.0000/9776.00
    execution time (avg/stddev):   59.6479/0.00

7. Clean up the test files

After all tests are complete, remove the generated files:

sysbench fileio --file-total-size=10G --file-test-mode=rndrw cleanup

IV. Analyzing the results

These numbers far exceed what a single SATA SSD normally delivers. Likely causes include:

  1. Cache effects: the SSD's own SLC cache, or read/write caching in the virtualization layer, was not fully bypassed, inflating the results (to verify steady-state performance, grow the test file to at least 3x memory, or run several write passes before measuring reads);
  2. A backing storage array: the host's "SATA SSD" is not a single drive but a RAID 0/RAID 10 array, where multiple drives working in parallel raise IOPS and throughput;
  3. Virtualization-layer optimizations: advanced features such as virtio-scsi multi-queue or SSD passthrough greatly reduce the IO overhead of the virtualization layer.
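
To chase down cause 1, a steady-state re-test sketch along the lines suggested above (assuming 8G of guest RAM, hence a 24G file; ramp_time excludes the warm-up from the statistics and invalidate drops fio's cached pages for the file):

fio --name=ssd_steady_randwrite \
    --filename=/data/fio_test_ssd \
    --rw=randwrite --bs=4k --size=24G \
    --numjobs=4 --iodepth=16 \
    --direct=1 --ioengine=libaio \
    --runtime=300 --ramp_time=60 --time_based \
    --invalidate=1 --group_reporting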