RAID 10 vs RAID 5: dd Benchmarks With and Without RAID Card Cache

1. RAID without cache

Three file systems under test:

---u01: hdd 4T*4 raid10, RAID cache disabled

---u02: hdd 4T*4 raid5, RAID cache disabled

---u03: ssd 447G*1

Test results:

./testdd.sh /u01 /u02 /u03 > testdd.log.`date +%Y%m%d%H%M` 2>&1 &

vgraid10_local-lv01 7.3T 100G 7.2T 2% /u01 --- direct write: 38.7 MB/s, direct read: 151 MB/s, cached write: 328 MB/s, cached read: 511 MB/s

vgraid5_local-lv01 11T 88G 11T 1% /u02 --- direct write: 7.4 MB/s, direct read: 128 MB/s, cached write: 77.6 MB/s, cached read: 700 MB/s

vgssd_local-lv01 447G 65G 382G 15% /u03 --- direct write: 147 MB/s, direct read: 189 MB/s, cached write: 387 MB/s, cached read: 512 MB/s
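The four figures per filesystem come from dd runs with and without O_DIRECT. A minimal sketch of the two write variants, shrunk for illustration (the target directory and small count are placeholders, not the values the real script uses; tmpfs targets reject O_DIRECT, hence the fallback message):

```shell
#!/bin/bash
# Sketch of the "direct" vs "cached" write tests, shrunk for illustration
# (the real testdd.sh uses bs=8k count=200000 against /u01, /u02, /u03).
target=${1:-/var/tmp}

# oflag=direct opens the output file with O_DIRECT, bypassing the page
# cache, so the reported rate reflects the underlying volume itself.
dd if=/dev/zero of=$target/dd.direct bs=8k count=128 oflag=direct 2>&1 ||
    echo "O_DIRECT not supported on $target"

# Without oflag=direct, writes land in the page cache first and flush
# asynchronously, so the rate can far exceed what the disks sustain.
dd if=/dev/zero of=$target/dd.cached bs=8k count=128 2>&1
```

On /u02 the direct variant is where the RAID 5 write penalty shows up; the cached variant mostly measures RAM and flush behavior.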

Conclusion:

With caching disabled (e.g., the RAID card cache turned off, the card set to Write-Through mode, or a cacheless RAID card), and with the same four disks, RAID 10 direct-write throughput (38.7 MB/s) is roughly 5x that of RAID 5 (7.4 MB/s).
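The gap is consistent with the classic small-write penalty arithmetic: a RAID 10 write costs 2 physical writes (one per mirror side), while a RAID 5 small write costs 4 I/Os (read old data, read old parity, write new data, write new parity). A back-of-the-envelope sketch, where the per-drive IOPS figure is an assumption for illustration, not a measurement from this test:

```shell
#!/bin/bash
# Write-penalty arithmetic for a 4-disk array.
# DISK_IOPS is an assumed nominal figure for a 7.2K SATA HDD.
DISK_IOPS=150
DISKS=4

# RAID 10: penalty 2 (each logical write hits both halves of a mirror)
echo "RAID10 small-write IOPS ~ $(( DISK_IOPS * DISKS / 2 ))"

# RAID 5: penalty 4 (read data + read parity + write data + write parity)
echo "RAID5  small-write IOPS ~ $(( DISK_IOPS * DISKS / 4 ))"
```

That 2:1 ratio alone understates the measured ~5:1 gap; one plausible contributor is that every 8k direct write on this array forces a read-modify-write against a 256 KB stripe unit, with no cache to coalesce adjacent writes.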

2. RAID cache mode

First, confirm that u01/u02 in the tests above were running with the cache disabled:

[root@host2 ~]# lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdc                       8:32   0   7.3T  0 disk
└─vgraid10_local-lv01   253:3    0   7.3T  0 lvm  /u01
sdd                       8:48   0  10.9T  0 disk
└─vgraid5_local-lv01    253:4    0  10.9T  0 lvm  /u02
sdb                       8:16   0 447.1G  0 disk
└─vgssd_local-lv01      253:5    0   447G  0 lvm  /u03

Run arcconf getconfig 1 ld to display the configuration of Logical Device number 0 and 1:

[root@host2 ~]# arcconf getconfig 1 ld
Controllers found: 1
--------------------------------------------------------
Logical device information
--------------------------------------------------------
Logical Device number 0
   Logical Device name          : vd1
   Disk Name                    : /dev/sdc (Disk0) (Bus: 1, Target: 0, Lun: 0)
   Block Size of member drives  : 512 Bytes
   Array                        : 0
   RAID level                   : 10
   Status of Logical Device     : Optimal
   Size                         : 7630830 MB
   Stripe-unit size             : 256 KB
   Full Stripe Size             : 512 KB
   Interface Type               : Serial ATA
   Device Type                  : Data
   Boot Type                    : None
   Heads                        : 255
   Sectors Per Track            : 32
   Cylinders                    : 65535
   Caching                      : Disabled
   Mount Points                 : Not Mounted
   LD Acceleration Method       : None
   SED Encryption               : Disabled
   Volume Unique Identifier     : 600508B1001CF6173057FB8A85255004
   --------------------------------------------------------
   Logical Device segment information
   --------------------------------------------------------
   Segment : Availability (SizeMB, Protocol, Type, Connector ID, Location) Serial Number
   --------------------------------------------------------
   Group 0, Segment 0 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:2) WQB0BYF0
   Group 0, Segment 1 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:4) WQB0B5PV
   Group 1, Segment 0 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:3) WQB0B5V7
   Group 1, Segment 1 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:5) V302WXYF

Logical Device number 1
   Logical Device name          : vd2
   Disk Name                    : /dev/sdd (Disk0) (Bus: 1, Target: 0, Lun: 1)
   Block Size of member drives  : 512 Bytes
   Array                        : 1
   RAID level                   : 5
   Status of Logical Device     : Optimal
   Parity Initialization Status : Completed
   Size                         : 11446245 MB
   Stripe-unit size             : 256 KB
   Full Stripe Size             : 768 KB
   Interface Type               : Serial ATA
   Device Type                  : Data
   Boot Type                    : None
   Heads                        : 255
   Sectors Per Track            : 32
   Cylinders                    : 65535
   Caching                      : Disabled
   Mount Points                 : Not Mounted
   LD Acceleration Method       : None
   SED Encryption               : Disabled
   Volume Unique Identifier     : 600508B1001CE4C11BEB914107DF0141
   --------------------------------------------------------
   Array Physical Device Information
   --------------------------------------------------------
   Device ID : Availability (SizeMB, Protocol, Type, Connector ID, Location) Serial Number
   --------------------------------------------------------
   Device 14 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:6) V3039ZHF
   Device 15 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:7) WQB0BY59
   Device 16 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:8) WQB0AW76
   Device 17 : Present (3815447MB, SATA, HDD, Connector:CN0, Enclosure:1, Slot:9) VB00EL3F

Command completed successfully.

The key line, present for both logical devices, is:

Caching : Disabled

Use arcconf to set the logical drives' caching to write-back (WB) mode:

arcconf SETCACHE 1 LOGICALDRIVE 0 con
arcconf SETCACHE 1 LOGICALDRIVE 1 con

Query again; the drives have been switched to cached mode:

Caching : Enabled

Re-test the disk I/O performance:

./testdd.sh /u01 /u02 > testdd.log.`date +%Y%m%d%H%M` 2>&1 &

vgraid10_local-lv01 7.3T 100G 7.2T 2% /u01 --- direct write: 38.3 MB/s, direct read: 400 MB/s, cached write: 303 MB/s, cached read: 526 MB/s

vgraid5_local-lv01 11T 88G 11T 1% /u02 --- direct write: 7.1 MB/s, direct read: 357 MB/s, cached write: 33.7 MB/s, cached read: 350 MB/s

Conclusion:

With the RAID card cache enabled, RAID 5 direct-write performance is still poor (7.1 MB/s), but direct-read performance improves dramatically (128 MB/s → 357 MB/s). Reads soar while small direct writes stay slow, which matches the theoretical expectation.

Finally, revert to the original setting, because this RAID card has no backup battery and write-back caching therefore risks data loss on power failure:

arcconf SETCACHE 1 LOGICALDRIVE 0 coff
arcconf SETCACHE 1 LOGICALDRIVE 1 coff

3. Test script

vim testdd.sh

#!/bin/bash
if [ $# -lt 1 ]; then
    echo "usage: $0 /target1 /target2 /target3 ..."
    exit 1
fi

while [ $# -gt 0 ]; do
    target=$1
    echo "${target} direct write:"
    sync && echo 3 > /proc/sys/vm/drop_caches
    time dd if=/dev/zero of=${target}/dd.out bs=8k count=200000 oflag=direct
    echo "${target} direct read:"
    sync && echo 3 > /proc/sys/vm/drop_caches
    time dd if=${target}/dd.out of=/dev/null bs=8k count=200000 iflag=direct
    echo "${target} cached write:"
    sync && echo 3 > /proc/sys/vm/drop_caches
    time dd if=/dev/zero of=${target}/dd.out bs=8k count=200000
    echo "${target} cached read:"
    sync && echo 3 > /proc/sys/vm/drop_caches
    time dd if=${target}/dd.out of=/dev/null bs=8k count=200000
    shift
done

chmod +x testdd.sh

./testdd.sh /u01 /u02 > testdd.log.`date +%Y%m%d%H%M` 2>&1 &
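Each test in the resulting log is a label line printed by the script followed by dd's summary line. A small awk sketch pairs them up; the two printf lines here stand in for a real timestamped log file:

```shell
#!/bin/bash
# Pair each testdd.sh label with the MB/s figure from dd's summary line.
# The printf lines below are sample input mimicking a real testdd log.
printf '%s\n' \
  '/u01 direct write:' \
  '1638400000 bytes (1.6 GB) copied, 42.3 s, 38.7 MB/s' |
awk '/write:|read:/ { label = $0 }
     /copied/      { print label, $(NF-1), $NF }'
# prints: /u01 direct write: 38.7 MB/s
```

In practice you would feed it the timestamped log (testdd.log.*) instead of the inlined sample.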