EqualLogic Firmware 5.0.2, MEM, VAAI and ESX Storage Hardware Acceleration
I finally got this wonderful EqualLogic plugin working, and after intensive testing in IOMeter the speed improvement is HUGE.
At 100% sequential, read and write always top 400MB/sec; sometimes I see 450-460MB/sec sustained for 10 minutes on a single array box, at which point the PS6000XV starts complaining that all of its interfaces are saturated.
For IOPS, 100% random read and write easily reaches 4,000-4,500.
The other thing about EqualLogic's MEM script is that IT IS JUST TOO EASY to set up the whole iSCSI vSwitch/VMkernel stack with jumbo frames or a hardware iSCSI HBA!
There are NO MORE complex command lines such as esxcfg-vswitch, esxcfg-vmknic or esxcli swiscsi nic; life is as easy as a single setup.pl --config or --install command. Of course, you need to get VMware vSphere PowerCLI first.
Something worth mentioning is the set of MPIO parameters that you can actually tune and play with.
C:\>setup.pl --setparam --name=volumesessions --value=12 --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Setting parameter volumesessions = 12
Parameter Name   Value  Max   Min  Description
--------------   -----  ----  ---  -----------
reconfig         240    600   60   Period in seconds between iSCSI session reconfigurations.
upload           120    600   60   Period in seconds between routing table uploads.
totalsessions    512    1024  64   Max number of sessions per host.
volumesessions   12     12    3    Max number of sessions per volume.
membersessions   2      4     1    Max number of sessions per member per volume.
C:\>setup.pl --setparam --name=membersessions --value=4 --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Setting parameter membersessions = 4
Parameter Name   Value  Max   Min  Description
--------------   -----  ----  ---  -----------
reconfig         240    600   60   Period in seconds between iSCSI session reconfigurations.
upload           120    600   60   Period in seconds between routing table uploads.
totalsessions    512    1024  64   Max number of sessions per host.
volumesessions   12     12    3    Max number of sessions per volume.
membersessions   4      4     1    Max number of sessions per member per volume.
Yes, why not push it to the maximum: volumesessions=12 and membersessions=4. Each volume won't spread across more than 3 array boxes anyway, and the new firmware 5.0.2 allows 1,024 total sessions per pool, which is way more than enough. Say you have 20 volumes in a pool and 10 ESX hosts, each with 4 NICs for iSCSI: that's still only 800 iSCSI connections.
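That back-of-the-envelope math can be sketched in a few lines (a minimal sketch; the host/volume/NIC numbers are the example from this post, and the 1,024-session pool limit comes from the FW 5.0.2 release notes):

```python
# Rough iSCSI session count for an EqualLogic pool:
# every host opens `membersessions` sessions to each volume it uses
# (here membersessions matches the 4 iSCSI NICs per host).

POOL_SESSION_LIMIT = 1024  # per-pool limit in firmware 5.0.2

def pool_sessions(hosts: int, volumes: int, membersessions: int) -> int:
    """Total iSCSI sessions if every host connects to every volume."""
    return hosts * volumes * membersessions

total = pool_sessions(hosts=10, volumes=20, membersessions=4)
print(total)                        # 800
print(total <= POOL_SESSION_LIMIT)  # True
```

10 hosts × 20 volumes × 4 sessions = 800, comfortably under the 1,024-per-pool limit.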
Update Jan-21-2011
Do NOT over-allocate membersessions beyond the number of available iSCSI NICs. I ran into a problem where setting membersessions = 4 with only 2 NICs caused high TCP retransmits to occur!
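That rule of thumb is easy to encode (a sketch; `safe_membersessions` is a hypothetical helper, not part of the MEM tooling, and 4 is the parameter's documented maximum):

```python
MEMBERSESSIONS_MAX = 4  # hard maximum of the MEM membersessions parameter

def safe_membersessions(desired: int, iscsi_nics: int) -> int:
    """Cap membersessions at the number of iSCSI-bound NICs, so extra
    sessions don't pile onto the same links and drive TCP retransmits up."""
    return min(desired, iscsi_nics, MEMBERSESSIONS_MAX)

print(safe_membersessions(4, 2))  # 2 -> with only 2 NICs, cap at 2
print(safe_membersessions(4, 4))  # 4
```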
To check whether the EqualLogic MEM has been installed correctly, issue:
C:\>setup.pl --query --server=10.0.20.2
You must provide the username and password for the server.
Enter username: root
Enter password:
Found Dell EqualLogic Multipathing Extension Module installed: DELL-eql-mem-1.0.0.130413
Default PSP for EqualLogic devices is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06ba23424c914a0f1889d68 is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06b72405fc9b4a0f1880d96 is DELL_PSP_EQL_ROUTED.
Active PSP for naa.6090a078c06b722496c9c4a2f1888d0e is DELL_PSP_EQL_ROUTED.
Found the following VMkernel ports bound for use by iSCSI multipathing: vmk2 vmk3 vmk4 vmk5
One word to sum the whole thing up: "FANTASTIC"!
More about VAAI from the EQL FW 5.0.2 Release Notes:
Support for vStorage APIs for Array Integration
Beginning with version 5.0, the PS Series Array Firmware supports VMware vStorage APIs for Array Integration (VAAI) for VMware vSphere 4.1 and later. The following new ESX functions are supported:
• Hardware Assisted Locking – Provides an alternative means of protecting VMFS cluster file system metadata, improving the scalability of large ESX environments sharing datastores.
• Block Zeroing – Enables storage arrays to zero out a large number of blocks, speeding provisioning of virtual machines.
• Full Copy – Enables storage arrays to make full copies of data without requiring the ESX Server to read and write the data.
VAAI provides hardware acceleration for datastores and virtual machines residing on array storage, improving performance with the following:
• Creating snapshots, backups, and clones of virtual machines
• Using Storage vMotion to move virtual machines from one datastore to another without storage I/O
• Data throughput for applications residing on virtual machines using array storage
• Simultaneously powering on many virtual machines
Refer to the VMware documentation for more information about vStorage and VAAI features.
Update Aug-29-2011
I noticed there is a minor update for MEM (Apr-2011); the latest version is v1.0.1. Since I am not seeing the error it fixes, and as a rule of thumb, if nothing is broken, don't update, I won't update MEM for the moment.
Finally, I wonder if MEM will work with vSphere 5.0, as the release notes say "The EqualLogic MEM V1.0.1 supports vSphere ESX/ESXi v4.1".
Issue Corrected in This Release: Incorrect Determination that a Valid Path is Down
Under rare conditions when certain types of transient SCSI errors occur, the EqualLogic MEM may incorrectly determine that a valid path is down. With this maintenance release, the MEM will continue to try to use the path until the VMware multipathing infrastructure determines the path is permanently dead.
During the setup you are asked to install this on either a standard switch or a vDS (distributed switch). It looks like you used the standard switch method. I was wondering if you tried the vDS method to get a distributed port-bound iSCSI switch. That just seems as though it would be best in a multi-hypervisor environment.
Dan,
Thanks for dropping by.
Yes, I used a standard switch, and no, I haven't tried vDS, but I do remember there are some limitations when using it with a vDS; look it up in the MEM manual.
OK, I checked: it says the only supported DVS is the VMware vNetwork Distributed Switch (vDS), so you are fine.
Please do let me know your testing result on vDS.
Thanks.
Well, I installed the vDS iSCSI network. I have four NICs tied to the new vDS (just like my pre-MEM and pre-vDS ESX boxes), but I am only showing 2 active paths under the managed-path section. Any idea?
Yes, I included four vmnics during the install. I am running the Dell managed path for path selection, each VMkernel port is tied to a dvUplink (four of them, a 1:1 ratio), and each vmnic is showing green and active as well.
Yes, you can change it (BTW, I saw your post on VMTN as well). Please read the content above or the MEM manual; it's very easy to do. Just make sure you set everything to the maximum; there is no harm in doing it at all.
Setting parameter volumesessions = 12
Setting parameter membersessions = 4
Then you will see 4 paths to each volume.
Hope this helps.
Thank you a lot for all your wonderful EqualLogic blog posts.
I'm running the previously recommended VMware MPIO; is it really this easy to switch away from that?
Darking,
You mean 3:1 VMkernel, right? Yes, it's officially 1:1 now with the MEM plugin, and no, there's no need to change if you are already running it without any problem.