======VMware NMP IOPS optimization======
After installing ESXi and enabling iSCSI datastores with round-robin multipathing, performance can be poor. By default, VMware's Round Robin path selection policy switches to the next path only after 1000 I/O operations have been sent down the current one. So I have done some measurements with different values.
=====iSCSI setup=====
The iSCSI target is FreeNAS 9.3 with four disks in a striped mirror, plus an SSD mirror for the log (ZIL) and SSD partitions for the cache (L2ARC).
<code>
$ zpool status
  pool: sata-disk
 state: ONLINE
  scan: scrub repaired 0 in 2h59m with 0 errors on Sun May  3 09:59:59 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        sata-disk                                       ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/f8c7b55e-4ef0-11e4-98c0-0cc47a0917b6  ONLINE       0     0     0
            gptid/f92b9f6b-4ef0-11e4-98c0-0cc47a0917b6  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/1558a0b6-4ef2-11e4-98c0-0cc47a0917b6  ONLINE       0     0     0
            gptid/15b29c40-4ef2-11e4-98c0-0cc47a0917b6  ONLINE       0     0     0
        logs
          mirror-1                                      ONLINE       0     0     0
            da0p1                                       ONLINE       0     0     0
            da4p1                                       ONLINE       0     0     0
        cache
          da4p2                                         ONLINE       0     0     0
          da0p2                                         ONLINE       0     0     0
        spares
          gptid/fac4aa65-4ef0-11e4-98c0-0cc47a0917b6    AVAIL

errors: No known data errors
</code>
The storage box is connected to the switch with 2x 1 Gb Ethernet. Each interface sits in its own VLAN.
=====VMware setup=====
The hypervisor is connected to the switch with 2x 1 Gb Ethernet. Multipathing is done with two VMkernel adapters, each in its own iSCSI VLAN and each bound to a dedicated physical NIC.
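The port binding itself can also be done from the CLI. A sketch, assuming the software iSCSI adapter is vmhba38 and the two VMkernel interfaces are vmk1 and vmk2 (check your own names first; these are examples from my setup):

```shell
# Find the software iSCSI adapter and VMkernel interface names
esxcli iscsi adapter list
esxcli network ip interface list

# Bind each iSCSI VMkernel interface to the software iSCSI adapter
# (vmhba38, vmk1 and vmk2 are examples; substitute your own)
esxcli iscsi networkportal add -A vmhba38 -n vmk1
esxcli iscsi networkportal add -A vmhba38 -n vmk2

# Verify the bindings
esxcli iscsi networkportal list -A vmhba38
```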
Get the available paths and device UIDs:
<code>
$ esxcli storage core path list
iqn.1998-01.com.vmware:supermicro1-1c5f6261-00023d000008,iqn.2011-03.nl.helux.istgt:target1,t,2-t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
   UID: iqn.1998-01.com.vmware:supermicro1-1c5f6261-00023d000008,iqn.2011-03.nl.helux.istgt:target1,t,2-t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
   Runtime Name: vmhba38:C3:T0:L1
   Device: t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
   Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________)
   Adapter: vmhba38
   Channel: 3
   Target: 0
   LUN: 1
   Plugin: NMP
   State: active
   Transport: iscsi
   Adapter Identifier: iqn.1998-01.com.vmware:supermicro1-1c5f6261
   Target Identifier: 00023d000008,iqn.2011-03.nl.helux.istgt:target1,t,2
   Adapter Transport Details: iqn.1998-01.com.vmware:supermicro1-1c5f6261
   Target Transport Details: IQN=iqn.2011-03.nl.helux.istgt:target1 Alias= Session=00023d000008 PortalTag=2
   Maximum IO Size: 131072
</code>
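The path selection policy that is currently active for a device can be confirmed as follows (the device ID is the one from my setup; substitute your own):

```shell
# Show the NMP configuration for one device, including the active
# path selection policy (e.g. VMW_PSP_RR) and its current settings.
esxcli storage nmp device list -d t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
```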
Check the current round-robin settings for the device:
<code>
$ esxcli storage nmp psp roundrobin deviceconfig get -d t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
   Byte Limit: 10485760
   Device: t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________
   IOOperation Limit: 1000
   Limit Type: Default
   Use Active Unoptimized Paths: false
</code>
Set the new IOPS value using the following command:
<code>
esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______001517bc5bf6001_________________ -t iops -I 1
</code>
Here the value is 1. I benchmarked four values: 1000 (the default), 100, 10 and 1.
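With several LUNs it is tedious to set this per device. A sketch that applies the setting to every FreeBSD-backed device in one go (it assumes all t10.FreeBSD devices on the host should get the same value):

```shell
# Apply IOPS=1 to every FreeBSD iSCSI device claimed by NMP.
# "esxcli storage nmp device list" prints each device ID on an
# unindented line, so grep for the t10.FreeBSD prefix.
for dev in $(esxcli storage nmp device list | grep '^t10.FreeBSD'); do
  esxcli storage nmp psp roundrobin deviceconfig set -d "$dev" -t iops -I 1
done
```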
=====SATP rule=====
To make the setting persist across reboots, add a SATP claim rule:
<code>
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -M "iSCSI Disk" -e "FreeNAS iSCSI custom SATP Claimrule"
</code>
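You can verify that the rule was registered (the exact output format may differ between ESXi versions):

```shell
# List SATP claim rules and filter for the custom FreeNAS rule;
# user-added rules show up alongside the system defaults.
esxcli storage nmp satp rule list | grep -i freenas
```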
=====Results=====
Default 1000 IOPS:
{{ :nmp_1000iops.png |}}
100 IOPS:
{{ :nmp_100iops.png |}}
10 IOPS:
{{ :nmp_10iops.png |}}
1 IOPS:
{{ :nmp_1iops.png |}}
Network usage graphs from FreeNAS:
storage path1 (iscsi1 VLAN)
{{ :storage_iscsi1.png |}}
storage path2 (iscsi2 VLAN)
{{ :storage_iscsi2.png |}}
=====Results with jumbo frames=====
{{:screenshot_from_2015-05-28_15-11-17.png|}}
=====Conclusion=====
An IOPS value of 1 gives the best performance in my configuration.
{{tag>vmware}}