Category: Others

Finally Surrendering to the Smartphone – iPhone 4

By admin, October 14, 2010 12:21 pm

In the past few years, there have been numerous times I seriously considered buying a smartphone such as an HTC or an iPhone, but the technology wasn’t quite ready yet.

However, things changed with the rise of HSDPA and, in particular, the launch of the iPhone 4 earlier this year.

It’s time to CHANGE finally!

IMG_2934

Today I finally got an iPhone…technically speaking, it’s not for me, it’s actually for her. After comparing many plans from different service providers in Hong Kong, I settled on SmarTone, mainly for its network coverage and reliability; its HK$398 unlimited plan is the best deal to go for with the 32GB version. The order took about a week, and the customer service was excellent!

In addition, SmarTone bundles something called X-Power, which lets you view Flash video on YouTube and many other web sites. That’s definitely a PLUS, as normal users really have no idea how to “JailBreak”.

As a network guy, I found the most useful tool to be Wyse PocketCloud, which provides the best RDP mouse control in its category. I am going to manage a whole data center through a little device like the iPhone 4, wow…that’s really awesome! Basically this is the ONE AND ONLY reason I surrendered to the iPhone, nothing else really.

wyse

I will probably try Dropbox later, which would let me synchronize MP3s and documents between my desktop and iPhone, but putting sensitive information on their servers will be a big concern for many.

Finally, the 3G roaming service is still very disappointing: NO UNLIMITED usage when I travel to other countries. SmarTone/3/One2Free do provide something called a HK$168 daily UNLIMITED plan, but with HIDDEN clauses: you can ONLY use email and browse web pages, while anything else (RDP/VPN/FTP/Skype) will cost you HK$0.01/KB. So it’s absolutely USELESS when traveling abroad; only Wi-Fi works. But then, why would I buy an iPhone if I can’t use 3G? It’s like driving a HK$3M Ferrari only on country roads because you are not allowed on the highway.
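To see just how useless that HK$0.01/KB rate is for anything interactive, here is a quick back-of-the-envelope calculation (a sketch; the rate is as quoted above):

```python
# Cost of "excluded" roaming traffic (RDP/VPN/FTP/Skype) at HK$0.01 per KB.
rate_per_kb = 0.01                      # HK$ per KB, as quoted in the plan
kb_per_gb = 1024 * 1024                 # KB in one GB
cost_per_gb = rate_per_kb * kb_per_gb
print(f"HK${cost_per_gb:,.2f} per GB")  # → HK$10,485.76 per GB
```

A single GB of RDP traffic would cost more than sixty of those HK$168 daily passes.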

Mouse is very slow in Windows Server 2008 R2 under ESX 4.1

By admin, October 12, 2010 1:04 pm

Basically, all you need to do is update the SVGA driver to the WDDM driver, but why didn’t VMware include it in its latest VMware Tools?

Troubleshooting SVGA drivers installed with VMware Tools on Windows 7 and Windows 2008 R2 running on ESX 4.0

WDDM and XPDM graphics driver support with ESX 4.x, Workstation 7.0, and Fusion 3.0

Solution to Dell OEM Windows Server Requires Re-Activation in ESX 4.1

By admin, October 12, 2010 9:26 am

So you have been there and encountered that annoying thing. You called Dell Pro-Support and they replied there is DEFINITELY NO WAY; you also called Microsoft, who pointed the finger back at Dell, asking you to contact Dell directly since it’s an OEM product. You asked your local Microsoft distributor, and they also said there is no way to do it: you have to buy a boxed set or an Open License, because your existing Dell OEM license will not allow you to reactivate using the key printed on it.

dellkey

Well, THEY ARE ALL WRONG!!!

  • Dell’s Pro-Support is unprofessional in this case.
  • Microsoft is responsible for its own product, NOT!
  • Local Microsoft Distributor wants you to pay more, huh?

 

This is the Official Solution from Dell; I hope it’s useful for others. The key point is to use the Virtual Key to re-activate, then either activate online or by phone, and finally clone the result as a master gold image for further deployment.

You cannot automatically pre-activate the Windows Server 2008 operating system installed on VMs by using the product activation code in the Dell OEM installation media. You must use the virtual product key to activate the guest operating system. For more information, see the whitepaper Dell OEM Windows Server 2008 Installation on Virtual Machines using Dell OEM Media at dell.com.

 

I always thought the Virtual Key was for Microsoft’s own Hyper-V only and could not be used in a VMware environment, but I was wrong.

 

Alternatively, you can force the VM to load a custom BIOS containing the DELL SLIC 2.1 table (which supports Windows 7 and Windows Server 2008 R2), which tricks the VM into thinking it’s actually a PHYSICAL Dell server.

1. Simply add bios440.filename = “DELL.ROM” to the VM configuration parameters using the VC Client. Of course, you do need to upload DELL.ROM to your VM directory first; please don’t ask me where to get DELL.ROM, google it yourself. One drawback is that this VM can’t be vMotioned around, as DELL.ROM won’t get vMotioned with it. (Update: the solution for vMotion is to put DELL.ROM on every host, or simply on the SAN, e.g. bios440.filename = “/vmfs/volumes/san/DELL.ROM”. :)

2. Very importantly, you will also need to find the corresponding certificate dell.XRM-MS, then use slmgr.vbs -ilc c:\dell.XRM-MS to import the certificate.

3. Insert the key with slmgr.vbs -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

 

Finally, some say that simply adding SMBIOS.reflectHost = “true” will work, but I COULD NEVER get this method working!

Update: The reason I didn’t get it working is that I didn’t use Dell’s W2k8R2 installation disc; see this link from IBM, it sounds so simple! Really?

Solution

Edit the virtual machine’s .vmx file to contain the following line:

SMBIOS.reflectHost = “true”

Note: Encoding of the text added to the .vmx file must be in UTF8.

This updates the virtual machine BIOS with the IBM Original Equipment Manufacturer (OEM) information required to use IBM-provided Operating System (OS) installation media.

IBM-provided Microsoft Windows 2008 media must be “BIOS Locked” to ensure that the OS will only install on IBM hardware. Virtual machines use a virtual BIOS that does not contain information that identifies the system as being manufactured by IBM.

The installation of Microsoft Windows Server 2008 from IBM OEM media to such a virtual machine will fail until the virtual BIOS has been updated to include this information. Alteration of the virtual machine’s .vmx file to state SMBIOS.reflectHost = “true” performs this function for servers using VMware’s ESX/ESXi technology.

The workaround resolves this issue by using media that is not locked to a specific OEM.

The solution resolves this issue by adding IBM information to the virtual BIOS.

Update Apr-16

Tried again today: the SMBIOS.reflectHost = “true” method is DEFINITELY NOT working! Even loading Dell’s OEM W2k8R2 Std installation disc, with the server being a PowerEdge R710, it still asked for activation. In addition, I discovered I can install Dell’s OEM W2k8R2 Std disc on a VM even without SMBIOS.reflectHost = “true”, which means Dell’s W2k8R2 disc can be used on a non-Dell server.

So only the above two methods work, not the last one; if you got the last one working, please drop me a line, thanks.

Update Apr-17

Maybe the answer is that SMBIOS.reflectHost = “true” WILL ONLY WORK for ESX 3.5 or earlier, as VMware’s KB doesn’t indicate this method applies to ESX 4.0.

ESX 4.1, VM is Version 7, VMware PVSCSI and VMXNet3, Safely Remove Hardware?

By admin, October 11, 2010 11:15 am

After upgrading to ESX 4.1, my VM with the latest Version 7 hardware, VMware PVSCSI and VMXNET3 started showing the “Safely Remove Hardware” alert in the tray; but why would you want to remove your hard disk and NIC? Huh?

Then I found this useful link, case solved!

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012225

To Thin or Not To Thin? On Equallogic and/or ESX Datastore?

By admin, October 10, 2010 12:42 pm

About a month ago, an experienced Dell EqualLogic consultant told me to use Normal (non-thin) provisioning on the EQL array and Thin on the ESX VMFS; I wasn’t exactly sure what he meant at the time.

So I did a simple test on my EQL box:

Create a 10GB volume (non-thin), attach it to Windows, write 5GB, then delete 4GB, leaving 1GB; to EQL, 5GB is used.

Then I write another 4GB and EQL still reports 5GB; I write 1GB more, and now EQL reports 6GB.

However, in my thin-provisioning test of the same 10GB volume, the picture looks completely different.

Create a 10GB volume, attach it to Windows, write 5GB, then delete 4GB, leaving 1GB; to EQL, 5GB is used.

Then I write another 4GB, and somehow the EQL volume’s reported size keeps growing: 5GB, then 6GB, then finally, bang, 9GB. WHY? Why doesn’t it reuse the UNUSED SPACE? (Inside Windows it’s actually still 5GB, as you will see later.)

HOWEVER, please note THIS: as I continued adding another 4GB to the volume (EQL now reports 9GB, Windows reports 5GB), EQL reached the 10GB maximum (somehow the volume didn’t go offline? Why? I don’t know), yet I could still add the 4GB, and Windows reports 9GB of 10GB used.

So in a strange way, even though EQL reports the volume as fully used, we can still add data to it at the Windows level. It’s just TOTALLY CONFUSING, and the volume-nearly-full alarm fires constantly when using Thin Provisioning.
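The behaviour above can be captured in a toy model: the array counts logical blocks that have ever been written, and it never learns about filesystem-level deletions. (A sketch only; ThinVolume and its methods are illustrative, not an EqualLogic API, and 1 block stands for 1GB.)

```python
# Toy model: why a thin-provisioned volume keeps "growing" after deletions.
# The array only sees which logical blocks have ever been written; it cannot
# see filesystem-level frees. Illustrative names, not an EqualLogic API.

class ThinVolume:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.touched = set()      # logical blocks ever written (1 block = 1GB)

    def write(self, blocks):
        for b in blocks:
            self.touched.add(b)

    def used_gb(self):
        return len(self.touched)  # what Group Manager reports as "Volume Used"

vol = ThinVolume(10)

vol.write(range(0, 5))            # write 5GB -> array reports 5GB used
assert vol.used_gb() == 5

# Delete 4GB at the filesystem level: the array is never told, still 5GB.
# (Deletion only updates filesystem metadata, not the SAN.)

vol.write(range(5, 9))            # write 4GB more to FRESH LBAs -> grows to 9GB
assert vol.used_gb() == 9
```

If the filesystem had reused the freed LBAs instead of allocating fresh ones, the array’s figure would have stayed at 5GB; that choice is exactly what makes the thin and thick numbers diverge.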

That’s why I’d WAIT UNTIL FW 5.0.x or 5.x comes out with a REAL THIN RECLAMATION feature like the ones HDS and 3PAR shipped a year ago. (Yes, EQL is behind in this particular area.)

We are probably better off NOT using the array’s Thin Provisioning with ESX; what I mean is:

Use Thick Provisioning on the EQL, but Thin on the ESX VMFS; that would be the best way.

For snapshots, just set a smaller % during volume creation (10% would be good, as you can always grow it later). The same applies to the volume itself: make your own thin provisioning by setting the volume to a smaller size when you first create it, then gradually expanding it as needed, so you won’t waste a lot of space from the beginning.

 

Update:

I did another test, and it proved I was wrong above.

The GB figures below are as reported in EQL Group Manager under Volume Used Size:

Step  Action   Thick (20GB)   Thin (20GB)
-------------------------------------------
1.    +5GB     5GB            5GB
2.    +5GB     10GB           10GB
3.    -5GB     10GB           10GB
4.    +5GB     10GB           10GB
5.    -10GB    10GB           10GB
6.    +15GB    15GB           15GB   (warning: over the default 60%)
7.    -5GB     15GB           15GB
8.    +5GB     15GB           15GB

So I think we are safe to use Thin Provisioned VMFS now.

Btw, I also received a reply from EQL indicating they are working on the Re-Thin feature.

In response to “reclaiming unallocated array disk space” on the PS Series arrays:

An enhancement request for this feature (reclaim space that was previously used) has already been submitted.  Firmware version 5.0.2 does not introduce this feature.  Engineering has not updated support as to when such a feature will be available in future firmware releases.

Finally, I looked into the details of Hitachi HDS’s re-thin feature. A 3PAR guy points out that HDS’s re-thin is in fact a…migration that zeroes out the unused blocks, whereas 3PAR can REALLY, I MEAN REALLY, do the re-thin in real time, with no need to copy the volume to another copy and then zero out the unused blocks. I do hope EqualLogic gets this kind of feature instead of a “not so real” re-thin like HDS’s.

 

Oct 14, 2010: some updates from Dell Pro-Support regarding how NTFS/VMFS can somehow REUSE previously touched blocks.

=======================
A similar problem is when the initiator OS reports significantly more space in use than the array does. This can be pronounced in systems like VMWare that create large, sparse files. In VMWare, if you create yourself a 10GB disk for a VM as a VMDK file, VMWare does not write 10GB of zeros to the file. It creates an empty (sparse) 10GB file, and subtracts 10GB from free space. The act of creating the empty file only touches a few MB of actual sectors on the disk. So VMWare says 10GB missing, but the array says, perhaps, only 2MB written to.

Since the minimum volume reserve for any volume is 10%, the filesystem has a long way to go before the MB-scale writes catch up with the minimum reservation of a volume. For instance, a customer with a 100GB volume might create 5 VMs with 10GB disks. That’s 50GB used according to VMware, but only perhaps 5 x 2MB (10MB) written to the array. Until the customer starts filling the VMDK files with actual data, the array won’t know anything is there. It has no idea what VMFS is; it only knows what’s been written to the volume.

• Example: A file share is thin-provisioned with 1 TB logical size. Data is placed into the volume so that the physical allocation grows to 500 GB. Files are deleted from the file system, reducing the reported file system in use to 100 GB. The remaining 400 GB of physical storage remains allocated to this volume in the SAN.

• This issue can also occur with maintenance operations including defragmentation, database re-organization, and other application operations.

In most environments, file systems do not dramatically reduce in size, so this issue occurs infrequently. Also, some file systems will not make efficient re-use of previously allocated space and may not reuse deleted space until they run out of unused space (this is not an issue for NTFS or VMFS).
=======================
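The sparse-file effect Dell describes is easy to reproduce on any filesystem with sparse-file support; the snippet below is a generic illustration (not VMware-specific) of a file with a large logical size but almost no physical blocks allocated, just like a freshly created VMDK:

```python
# A sparse file: large logical size, almost nothing allocated on disk.
# Assumes a filesystem with sparse-file support (ext4, xfs, tmpfs, ...).
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(1 << 30)              # logical size: 1 GiB, nothing written
    path = f.name

st = os.stat(path)
logical = st.st_size                 # what the OS subtracts from free space
physical = st.st_blocks * 512        # sectors actually allocated on disk

print(f"logical: {logical} bytes, physical: {physical} bytes")
os.remove(path)
```

The OS reports 1 GiB gone, but only a few KB (if anything) is actually written, which is exactly why the array and the initiator disagree.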

 

Update Oct-15-2010

If you ask me again now, I would say THIN PROVISIONING (aka TP) ALL THE WAY, both on EqualLogic AND on the ESX datastore, is the BEST way to go, and I think it is going to be the trend in the storage management world, especially if EqualLogic releases its upcoming re-thin or space-reclaim feature in the coming 5.x firmware update. (So far only 3PAR is able to do it, I think.)


Update Sep-3-2011

Storage APIs for Array Integration (VAAI) has been enhanced to reclaim blocks when a virtual disk is deleted; previously, the storage array was not aware that blocks belonging to a deleted virtual disk no longer contained data.

There is a new feature in vSphere 5.0 that may finally solve the problem, but will it only work in vSphere 5.0? I really do hope ESX 4.1 can also get this VAAI enhancement after upgrading the EqualLogic firmware to one with such thin-provisioning reclaim capability.

Currently, the only way to reclaim a thin-provisioned (TP) volume on EqualLogic is to Storage vMotion all existing VMs to a new TP volume and then delete the existing one.

Impressive Equallogic PS6000XV IOPS result

By admin, October 9, 2010 11:41 am

I just performed the test again 3 times and confirmed the following. This is with the default 1 worker only; IOmeter is testing the VM’s VMFS directly, with no MPIO direct mapping to the EQL array; the VM is Version 7, the disk controller is Paravirtual (PVSCSI) and the NIC is VMXNET3.
SERVER TYPE: VM on ESX 4.1 with EQL MEM Plugin, VAAI enabled with Storage Hardware Acceleration
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Dell PE R710, 96GB RAM; 2 x XEON 5650, 2,66 GHz, 12 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K), / 14+2 600GB Disks / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : ESX Software iSCSI, Broadcom 5709C TOE+iSCSI Offload NIC

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……4.1913………13934.42………435.45

RealLife-60%Rand-65%Read……13.4110………4051.49………31.65

Max Throughput-50%Read………5.5166………10240.39………320.01

Random-8k-70%Read……………14.1525………3915.15………28.95

EXCEPTIONS: CPU Util. 67.82, 38.12, 56.80, 40.2158%;

##################################################################################
RealLife-60%Rand-65%Read at 4051 IOPS is really impressive for a single array with 14 15K RPM spindles!
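As a sanity check on the numbers, MB/s should equal IOPS times the request size. Assuming the request sizes customarily used in the unofficial thread (32 KB for the throughput tests, 8 KB for RealLife; these sizes are my assumption, not stated above), the figures line up:

```python
# Throughput (MB/s) = IOPS x request size. The request sizes are assumed
# from the common IOmeter config used in the unofficial thread.
def mb_per_sec(iops, block_kb):
    return iops * block_kb / 1024.0

print(round(mb_per_sec(13934.42, 32), 2))  # Max Throughput-100%Read  → 435.45
print(round(mb_per_sec(10240.39, 32), 2))  # Max Throughput-50%Read   → 320.01
print(round(mb_per_sec(4051.49, 8), 2))    # RealLife-60%Rand-65%Read → 31.65
```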

Windows Server Backup in Windows Server 2008 R2

By admin, October 8, 2010 1:17 pm

I’ve been using Windows Server Backup in Windows Server 2008 R2 for almost a month and found it can do everything Acronis True Image Server (TIS) does; in my own opinion, there is really no need to buy TIS in the future.

See what WSB can offer you (my requirement list):

  • WSB works at block level, so backups and snapshots are very fast, during restore as well.
  • Full server bare-metal backup.
  • Full backup the first time and incremental afterwards; files can be excluded during backup; compression.
  • System State backup is integrated with AD, so you won’t restore the server only to find it in a crash-consistent state where AD cannot start.
  • Individual folder/file restore.
  • Backup to network shared folders (with the limitation that you cannot keep incremental copies, only ONE copy: the later one overwrites the previous one, which does suck badly! However, I don’t store my backups on a network folder, so it’s fine for me).
  • Maximum of 64 copies (in my case almost 2 months, since I only have 1 scheduled backup running), or limited by your backup disk size.
  • The backup copies are hidden from the file system; in TIS you need to create a partition to hide the backup copies (Acronis Secure Zone).
  • WSB can back up Hyper-V VM images as well.

Best of all, Windows Server Backup (WSB) in Windows Server 2008 R2 IS FREE! TIS is over USD 1,200, and I don’t need features like Convert to VM, Universal Restore or Central Management, so WSB works perfectly for a standalone server.

Finally, you may ask: what about the rescue bootable DVD/CD-ROM? Well, you don’t get one. What? Yes, the Windows Server 2008 DVD is your ultimate rescue bootable DVD, fair enough? :)

A Possible Bug in VMware ESX 4.1 or EQL MEM 1.0 Plugin

By admin, October 8, 2010 12:37 am

This week, I encountered a strange problem during redundancy testing. All paths to our network switches, servers and EQL arrays had been set up correctly with redundancy. Each ESX host iSCSI VMkernel (or pNIC) has 16 paths to the EQL arrays, and we tested every single possible failure situation and found ONLY ONE scenario that doesn’t work: when we power off the master PC5448 switch, we are no longer able to ping the PS6000XV.

After two days of troubleshooting with local Pro-Support as well as US EQL support, we have narrowed the problem down to “a possible bug in VMware ESX 4.1 or the EQL MEM 1.0 plugin”.

During the troubleshooting, I found another EqualLogic user in Germany having a similar, though not identical, problem (see Failover problems between ESX and Dell EQL PS4000): he’s using a PS4000 with only two iSCSI paths, and his problem is more serious than ours.

Oct 5, 2010 3:16 AM
Fix-List from v5.0.2 Firmware:
iSCSI Connections may be redirected to Ethernet ports without valid network links.

Also, his problem is similar in that whatever iSCSI connections are left in the LAG won’t get redirected to the slave switch after shutting down the master switch. I have 4 paths and his PS4000 has two, so my iSCSI connections survived because there is an extra path to the slave switch, but somehow vmkping doesn’t work.

And if you look at comment #30:

Jul 27, 2010
Dell acknowledged that the known issue they report in the manual of the EqualLogic Multipathing Extension Module is the same one I am seeing.

They didn’t open a ticket at vmware for now, but they will, after some more tests.

I think this issue has existed since ESX 4.0. In VI3 they used only one vmkernel for sw-iSCSI with redundancy at layer 1/2, so there it should not have been the case.

My case number for this issue at vmware is 1544311161, the case number at dell is 818688246.

If vmware acknowledge this as a bug in 4.1, and don’t have a workaround, we will go with at least 4 logical paths for each volume and hope that at least one path is still connected after switch1 fails, until they fix it.
Finally, it could also be something related to EQL MEM Plugin for ESX which we have installed. (Comment #29 on page 2)

It indicates a known issue: when a network link fails (for example, because the master switch was shut down) and the physical NIC with the failure is the only uplink for the VMkernel port used as the default route for the subnet, several types of kernel network traffic are affected, including the ICMP pings the EqualLogic MEM uses to test for connectivity on the SAN.

Jul 23, 2010

from the dell eql MEM-User_Guide 4-1:

Known Issues and Limitations
The following are known issues for this release.

Failure On One Physical Network Port Can Prevent iSCSI Session Rebalancing
In some cases, a network failure on a single physical NIC can affect kernel traffic on other NICs. This occurs if the physical NIC with the network failure is the only uplink for the VMKernel port that is used as the default route for the subnet. This affects several types of kernel network traffic, including ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN. The result is that the iSCSI session management functionality in the plugin will fail to rebuild the iSCSI sessions to respond to failures or SAN changes.

Could it be the same problem I have? So they (DELL/VMware) already know about this problem?

Aside from this, it looks like the Dell MEM only makes sense in setups with more than one array per PS group, because the PSP selects a path to an interface of the array where the volume’s data is stored. And it has a lot of limitations. We only have one array per group for now, so I think I’ll skip it.

I still don’t understand why there is no way to prevent the connections from going through the LAG in the first place; it should be possible to prefer direct connections…

My last reply to EQL Support today:

Some updates, may be you can pass them to L3 for further analysis.

The problem seems to be due to a known issue in EQL MEM version 1.0 (User Manual, page 4-1):

==================================================
Failure On One Physical Network Port Can Prevent iSCSI Session Rebalancing

In some cases, a network failure on a single physical NIC can affect kernel traffic on other NICs. This occurs if the physical NIC with the network failure is the only uplink for the VMKernel port that is used as the default route for the subnet. This affects several types of kernel network traffic, including ICMP pings which the EqualLogic MEM uses to test for connectivity on the SAN. The result is that the iSCSI session management functionality in the plugin will fail to rebuild the iSCSI sessions to respond to failures or SAN changes.
==================================================
I’ve performed the test again, and it showed your prediction is correct: the path is always fixed to vmk2.

1. I restarted the 2nd ESX host and found the C0 to C1 paths showed correctly.
2. Then I rebooted the master switch; from the 1st or 2nd ESX host I cannot ping vCenter, vmkping the EQL or the iSCSI vmks on other ESX hosts, and this time CANNOT PING OTHER VMkernels such as VMotion or FT either.
3. But no VM crashed or restarted, so underneath, all iSCSI connections stayed online; that’s good news.
4. After the master switch comes back, under storage paths on the ESX hosts, I see those C4, C5 paths generated.

Could you please confirm with VMware and EQL whether ESX 4.1 has this bug? (i.e., that the path is somehow always fixed to vmk2)

I even ran a test switching off the iSCSI ports on the master switch one by one; the problem ONLY happens when I switch off the ESX host’s vmk2 port (the first iSCSI vmk port, i.e. the default route for the subnet?).

This confirmed that vmkping IS BOUND to the 1st iSCSI vmk port, vmk2, and this time all my vmkpings died, including VMotion and FT.

The good news is that beneath the surface everything works perfectly as expected: iSCSI connections are fine, VMs are still working and can be VMotioned around, and FT works fine too.

We do hope VMware and EQL release a patch soon to fix this vmkping problem: vmkping always goes out via the default vmk2 (the 1st iSCSI VMkernel in the vSwitch) rather than any other vmk, so when the master switch dies, vmkping dies with vmk2, since vmkping uses vmk2 for ICMP pings to the other VMkernel IP addresses.
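The failure mode can be sketched with a toy model (illustrative names only; this is not how ESX stores its routing state): all vmkernel ICMP traffic for the subnet exits via the vmk holding the default route, so if that vmk’s only uplink fails, every ping fails even though the other vmks are still up.

```python
# Toy model of the MEM known issue: vmkping always exits via the
# default-route vmk (vmk2 here), regardless of other healthy paths.
vmks = {
    "vmk2": {"uplink_up": True,  "default_route": True},
    "vmk3": {"uplink_up": True,  "default_route": False},
}

def vmkping_ok(vmks):
    # pings leave through the default-route vmk, never through the others
    route_vmk = next(v for v in vmks.values() if v["default_route"])
    return route_vmk["uplink_up"]

assert vmkping_ok(vmks) is True

vmks["vmk2"]["uplink_up"] = False       # master switch goes down
assert vmkping_ok(vmks) is False        # vmkping dies...
assert vmks["vmk3"]["uplink_up"]        # ...even though vmk3's path is fine
```

This matches the observation above: the iSCSI sessions bound to the surviving vmk stay up, but every ping-based health check fails with vmk2.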

Some interesting findings from the VMware Unofficial Storage Performance thread

By admin, October 6, 2010 5:17 pm

Some more interesting comments contributed by all the testers:

1) VMFS block size (1-8MB) seems to have little to no effect on performance.

2) Thin/thick provisioning doesn’t have much impact on performance.

3) RDM gives minimal performance gains over VMFS (except in 100% sequential tests, which just won’t happen in the real world); VMFS achieves approximately 98%+ of physical performance, so most testers suggest using VMFS all the way.

4) The most realistic test seems to be “RealLife-60%Rand-65%Read”; in normal life you have random and sequential accesses mixed (often 60% random vs 40% sequential).

5) Compared to a physical server, a VM on iSCSI loses more throughput and response time than its counterpart on a FC SAN (but not much, <5%, especially for sequential read/write).

6) The suggestion to disable Jumbo Frames applies only to the guest OS; leave them enabled on the switch and on the host’s vSwitch.

7) As a performance best practice, configure the VM with hardware Version 7, a Paravirtual disk controller and a VMXNET3 NIC.

Extract from the VMware Unofficial Storage Performance thread: Comparing Equallogic and Other SAN Vendors

By admin, October 6, 2010 4:47 pm

It’s not official, but after comparing the results, I would still say Equallogic ROCKS!

Finally, I wonder why there aren’t many results from Lefthand, NetApp, 3PAR and HDS?

My own result:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM on ESX 4.1 with EQL MEM Plugin
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Dell PE R710, 96GB RAM; 2 x XEON 5650, 2,66 GHz, 12 Cores Total
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS6000XV x 1 (15K), / 14+2 600GB Disks / RAID10 / 500GB Volume, 1MB Block Size
SAN TYPE / HBAs : ESX Software iSCSI, Broadcom 5709C TOE+iSCSI Offload NIC

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……..5.4673……….10223.32………319.48

RealLife-60%Rand-65%Read……15.2581……….3614.63………28.24

Max Throughput-50%Read……….6.4908……….4431.42………138.48

Random-8k-70%Read……………..15.6961……….3510.34………27.42

EXCEPTIONS: CPU Util. 83.56, 47.25, 88.56, 44.21%;
##################################################################################
Compares with other Equallogic user’s result:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.1
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE6850, 16GB RAM; 4x XEON 7020, 2,66 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS3600 x 1 / 14+2 SAS10k / R50
SAN TYPE / HBAs : iSCSI, QLA4050 HBA

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……..__17______……….___3551___………___111____

RealLife-60%Rand-65%Read……___21_____……….___2550___………____20____

Max Throughput-50%Read……….____10____……….___5803___………___181____

Random-8k-70%Read……………..____23____……….___2410___………____19____

EXCEPTIONS: VCPU Util. 60-46-75-46 %;
##################################################################################

 

SERVER TYPE: VM.
CPU TYPE / NUMBER: VCPU / 1 (JUMBO FRAMES, MPIO RR)
HOST TYPE: Dell PE2950, 32GB RAM; 2x XEON 5440, 2,83 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL:EQL PS5000 x 1 / 14+2 Disks (sata)/ R5

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……..____9,6___……….____5093___………___159,00_

RealLife-60%Rand-65%Read……____26,6___……….___1678___………___13,11__

Max Throughput-50%Read………._____8,5__……….____4454___………___139,20_

Random-8k-70%Read…………….._____31,3_……….____1483___………___11,58____

EXCEPTIONS: CPU Util.-XX%;
##################################################################################

 

SERVER TYPE: PHYS
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G3, 4GB RAM; 2X XEON 3.20 GHZ
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000XV / 14+2 DISK (15K SAS) / R10)
NOTES: 2 NIC, MS iSCSI, no-jumbo, flowcontrol on

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……..___13.60____……….___3788____………___118____

RealLife-60%Rand-65%Read…….___14.87____……….___3729____………___29.14__

Max Throughput-50%Read………___12.75____……….___4529____………___141____

Random-8k-70%Read…………..___15.42____……….___3580____………___27.97__

EXCEPTIONS: CPU Util.-XX%;
##################################################################################

 

SERVER TYPE: PHYS
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G3, 4GB RAM; 2X XEON 3.20 GHZ
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000XV / 14+2 DISK (15K SAS) / R50)
NOTES: 2 NIC, MS iSCSI, no-jumbo, flowcontrol off, ntfs aligned w/ 64k alloc, mpio-rr

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……..____9.84____……….___5677____………___177_____

RealLife-60%Rand-65%Read…….___13.20____……….___3712____………___29.00___

Max Throughput-50%Read………____8.39____……….___6742____………___211_____

Random-8k-70%Read…………..___13.91____……….___3783____………___29.55___

EXCEPTIONS: CPU Util.-XX%;
##################################################################################

 

SERVER TYPE: VM windows 2008 enterprise.
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Dell PE2950, 32GB RAM; 2x XEON 5450, 3,00 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS5000E x 1 / 14+2 Disks / R10 / MTU: 9000

####################################################################
TEST NAME——————-Av. Resp. Time ms—-Av. IOs/sek—-Av. MB/sek—–AV. CPU Utl.
Max Throughput-100%Read………….16,3……………..3638,3…………..113,7……………..35………
RealLife-60%Rand-65%Read………21,7………………2237,8…………….17,5……………..43………
Max Throughput-50%Read…………..17,7……………….2200,6…………….67,8……………..80………
Random-8k-70%Read………………….23,6………………2098,4…………….16,3……………..41………
####################################################################

 

SERVER TYPE: database server
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: Dell PowerEdge M600, 2*X5460, 32GB RAM.
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS5000E / 14*500GB SATA in RAID10

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek—
##################################################################################

Max Throughput-100%Read……___10.29____……._5694__………_177.94___
RealLife-60%Rand-65%Read…..___31.75____…….__1382__………__10.80___
Max Throughput-50%Read…….___10.51____…….__5664__………_177.02___
Random-8k-70%Read…………___34.34____…….__1345__………__10.51___

EXCEPTIONS: CPU Util. 20% – 15% – 10% – 13%;
####################################################################

 

SERVER TYPE: VM, VMDK DISK
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: DELL R610, 16GB RAM; 2 x Intel E5540, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EqualLogic PS5000XV / 14+2 DISK (15k SAS) / R50)
NOTES: 3 NIC, modified ESX PSP RR IOPS parameter, jumbo on, flowcontrol on

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….6.48……….9178.56……….286.83

RealLife-60%Rand-65%Read…….13.08……….3301.94……….25.8

Max Throughput-50%Read……….9.06……….6160.2……….192.51

Random-8k-70%Read…………….13.59……….3215.69……….25.12
##################################################################

 

SERVER TYPE: Windows XP VM w/ 1GB RAM on ESXi 4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Sun SunFire x4150, 48GB RAM; 2x XEON E5450, 2.992 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Two EQL PS6000E arrays / 14+2 SATA Disks each / R50

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….15.025……….3915.89……….122.37

RealLife-60%Rand-65%Read…….12.20……….3324.92……….25.97

Max Throughput-50%Read……….13.18……….4460.97……….139.40

Random-8k-70%Read…………….13.40……….3033.14……….23.69

EXCEPTIONS: CPU%= 44 – 66 – 40 – 63

Using iSCSI with the software initiator; 4 NICs, each with its own VMkernel port.
##################################################################################
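The response-time and IOPS columns are also linked by Little's law: average outstanding I/Os equal IOPS times average response time. The community IOmeter config is commonly run with 64 outstanding I/Os (an assumption here), so well-behaved rows should hover a little below 64. A rough check against the SunFire result above:

```python
def outstanding_ios(iops: float, resp_ms: float) -> float:
    """Little's law: concurrency = arrival rate * time in system."""
    return iops * (resp_ms / 1000.0)

# SunFire x4150 row above: 3915.89 IOPS at 15.025 ms average response time
print(round(outstanding_ios(3915.89, 15.025), 1))  # ~58.8, just under a 64-deep queue
```

A figure far from the configured queue depth usually means throttling somewhere in the path, or columns that were pasted out of order.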

 

Server Type: VM Windows Server 2008 R2 x64 Std. on VMware ESXi 4.1
CPU Type / Number: vCPU / 1
VM Hardware Version 7
Two vmxnet3 NICs (10 GBit) used for iSCSI Connection (10 GB LUN directly connected to VM, no VMFS/RDM)
MS iSCSI Initiator (integrated in 2008 R2)
SAN Type: EQL PS6000XV (14+2 SAS HDDs, 15K, RAID 50)
Switches: Dell PowerConnect 6224
ESX Host is equipped with four 1GBit NICs (only for iSCSI connection)
Jumbo Frames and Flow Control enabled.

##################################################################################
Test——————-Av. Resp. Time ms——Total IOs/sek——-Total MB/sek——
##################################################################################

Max Throughput-100%Read……….10.1929……….4967.06……….155.22

RealLife-60%Rand-65%Read…….12.6970……….3933.39……….30.73

Max Throughput-50%Read……….9.5941……….5115.05……….159.85

Random-8k-70%Read…………….12.9845……….4030.60……….31.49
##################################################################################
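The dotted and underscored lines all follow the same loose forum template, so they can be scraped into numbers with a small helper (a hypothetical parser, written for the variants seen in this thread: dot or underscore leaders, and comma decimals from European-locale posters):

```python
import re

NUM_RE = re.compile(r"\d+(?:[.,]\d+)?")
FILLER_RE = re.compile(r"[._\u2026]{3,}")  # runs of dots, underscores, or ellipses

def parse_result_line(line: str):
    """Split one result line into (test_name, [resp_ms, iops, mbps]).

    The test name ends at the first run of filler characters; commas in
    numbers are treated as decimal separators.
    """
    name, rest = FILLER_RE.split(line, maxsplit=1)
    values = [float(tok.replace(",", ".")) for tok in NUM_RE.findall(rest)]
    return name.strip(" ."), values

# Example, using the Max Throughput-50%Read row from the table above:
print(parse_result_line(
    "Max Throughput-50%Read...____9.5941____....____5115.05____....____159.85______"))
# → ('Max Throughput-50%Read', [9.5941, 5115.05, 159.85])
```

Splitting the name off first matters because the test names themselves contain digits ("100%Read", "8k"), which would otherwise pollute the numeric fields.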

 

SERVER TYPE: Dell NX3100
CPU TYPE / NUMBER: Intel 5620 x2 24GB RAM
HOST TYPE: Server 2008 64bit
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS4000XV-600 14 * 600GB 15K SAS @ R50 

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################
Max Throughput-100%Read 8.3 7163 223
RealLife-60%Rand-65%Read 11.4 4516 35
Max Throughput-50%Read 8.4 6901 215
Random-8k-70%Read 11.9 4415 34
##################################################################################

 

SERVER TYPE: W2K8 32bit on ESXi 4.1 Build 320137, 1 vCPU, 2GB RAM
CPU TYPE / NUMBER: Intel X5670 @ 2.93GHz
HOST TYPE: Dell PE R610 w/ Broadcom 5709 Dual Port w/ EQL MPIO PSP Enabled
NETWORK: Dell PC 6248 Stack w/ Jumbo Frames 9216
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS4000X 16 Disk Raid 50

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——Av. CPU Utl.——
##################################################################################
Max Throughput-100%Read 8.12 7410 231 29%
RealLife-60%Rand-65%Read 10.65 3347 26 59%
Max Throughput-50%Read 7.19 7861 245 34%
Random-8k-70%Read 11.37 3387 26 55%
##################################################################################
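For rows like the RealLife result above, a standard back-of-envelope converts front-end IOPS into per-disk load using the RAID write penalty (4 for parity RAID such as RAID 50). This is a rough model only, and ignores the controller's write cache:

```python
def backend_iops(front_iops: float, read_frac: float, write_penalty: int) -> float:
    """Estimate backend (disk-level) IOPS from front-end IOPS and RAID penalty."""
    reads = front_iops * read_frac
    writes = front_iops * (1.0 - read_frac)
    return reads + writes * write_penalty

# RealLife row above: 3347 IOPS at 65% read, on 16 disks in RAID 50
total = backend_iops(3347, 0.65, 4)
print(round(total / 16))  # ~429 per disk, far above the nominal ~140 IOPS of a
                          # 10K drive, so cache is clearly absorbing much of the load
```

The gap between the modeled per-disk figure and what a spindle can physically deliver is a useful hint about how much of a short IOmeter run is really being served from array cache.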

  

The following results compare other major iSCSI/FC SAN vendors:

SERVER TYPE: VM ON ESX 3.5 U3
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL380 G5, 24GB RAM; 4x XEON 5410 (Quad), 2.33 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-120 / 4+1 / R5 / 14+1 total disks
SAN TYPE / HBAs: 4Gb FC HP StorageWorks FC1142SR (QLogic)
MetaLUNs are configured with 200GB LUNs striped across all 14 disks for a total datastore size of 600GB

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….6……….9320……….291

RealLife-60%Rand-65%Read…….24……….1638……….13

Max Throughput-50%Read……….5……….11057……….345

Random-8k-70%Read…………….23……….1800……….14
####################################################################

 

SERVER TYPE: VM on ESX 3.5.0 Update 4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP ProLiant DL385C G5, 32GB RAM; 2x AMD 2.4 GHz Quad-Core
SAN Type: HP EVA 4400 / Disks: 4Gb FC 172GB 15k / RAID LEVEL: RAID 5 / 38+2 Disks / 8Gbit FC HBA

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….5……….10690……….334

RealLife-60%Rand-65%Read…….8……….5398……….42

Max Throughput-50%Read……….49……….1452……….45

Random-8k-70%Read…………….9……….5390……….42

EXCEPTIONS: NTFS 32k Blocksize

##################################################################################

 

SERVER TYPE: VM WIN2008 64bit SP2 / ESX 4.0 ON Dell MD3000i via PC 5424
CPU TYPE / NUMBER: VCPU / 2 (Jumbo Frames, MPIO RR)
HOST TYPE: Dell R610, 16GB RAM; 2x XEON 5540, 2.5 GHz, QC
ISCSI: VMware iSCSI software initiator, onboard Broadcom 5709 with TOE+iSCSI
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell MD3000i x 1 / 6 Disks (15K 146GB) / R10

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….14.55……….4133……….128.48
RealLife-60%Rand-65%Read…….22.69……….2085……….16.92
Max Throughput-50%Read……….14.13……….4289……….134.04
Random-8k-70%Read…………….21.7……….2272……….17.75

##################################################################################

 

####################################################################
SERVER TYPE: Windows Server 2003r2 x32 VM with LSI Logic controller, ESX 4.0
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP BL490c G6, 64GB RAM; 2x XEON E5540, 2,53 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6400 / 23 Disks / RAID5
Test name Avg resp time Avg IO/s Avg MB/s Avg % cpu
Max Throughput-100%Read 5.5 10831 338.47 38
RealLife-60%Rand-65%Read 10.8 4313 33.70 45
Max Throughput-50%Read 31.6 1822 56.95 17
Random-8k-70%Read 9.9 4613 36.04 47
SERVER TYPE: Windows Server 2008 x64 VM with LSI Logic SAS controller, ESX 4.0
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP BL490c G6, 64GB RAM; 2x XEON E5540, 2,53 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA6400 / 48 Disks / RAID10
Test name Avg resp time Avg IO/s Avg MB/s Avg % cpu
Max Throughput-100%Read 5.51 10905 340.8 32
RealLife-60%Rand-65%Read 8.20 6366 49.7 39
Max Throughput-50%Read 9.31 5279 165 43
Random-8k-70%Read 7.81 6734 52.6 39
####################################################################

 

SERVER TYPE: VM ON VI4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Supermicro, 64GB RAM; 4x XEON E5430, 2.66 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Sun 7410 / 11x 1TB + 18GB SSD write + 100GB SSD read

##################################################################################
SAN TYPE / HBAs: 1Gb NIC, NFS
##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——

Max Throughput-100%Read……….17……….3421……….106

RealLife-60%Rand-65%Read…….6……….7771……….60

Max Throughput-50%Read……….11……….5321……….166

Random-8k-70%Read…………….6……….2662……….60

##################################################################################

 

SERVER TYPE: VM Windows 2003, 1GB RAM
CPU TYPE / NUMBER: 1 VCPU
HOST TYPE: IBM x3650 M2, 34GB RAM, 2x X5550, 2.66 GHz QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (1024MB CACHE/Dual Cntr) 11x SAS 15k 300GB / R6 + EXP3000 (12x SAS 15k 300GB) for the tests
SAN TYPE / HBAs : FC, QLA2432 HBA

##################################################################################
RAID10- 10HDDs ——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################

Max Throughput-100%Read……….5.8……….9941……….310

RealLife-60%Rand-65%Read…….16.7……….3083……….24

Max Throughput-50%Read……….12.6……….4731……….147

Random-8k-70%Read…………….15.5……….3201……….25

##################################################################################

 

####################################################################
SERVER TYPE: 2008 R2 VM ON ESX 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 2GB Ram
HOST TYPE: HP BL460 G6, 32GB RAM; XEON X5520
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-240 / 3x 300GB 15K FC / RAID 5
SAN TYPE / HBAs: 8Gb Fiber Channel

Test Name Avg. Response Time Avg. I/O per Second Avg. MB per Second CPU Utilization
Max Throughput – 100% Read 5.03 12,029.33 375.92 21.87
Real Life – 60% Rand / 65% Read 42.81 1,074.93 8.39 19.57
Max Throughput – 50% Read 3.63 16,444.30 513.88 29.67
Random 8K – 70% Read 51.44 1,039.38 8.12 14.01
SERVER TYPE: 2003 R2 VM ON ESX 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram
HOST TYPE: HP DL360 G6, 24GB RAM; XEON X5540
STORAGE TYPE / DISK NUMBER / RAID LEVEL: LeftHand P4300 x 1 / 7+1 RAID 5 10K SAS Drives
SAN TYPE / HBAs: iSCSI, SW iSCSI, 2x 82571EB GbE NICs (one connection on each, MPIO enabled), Jumbo Frames enabled, 4 iSCSI connections to volume, 1x HP ProCurve switch
Test Name   Avg. Response Time   Avg. I/O per Second   Avg. MB per Second   CPU Utilization
Max Throughput – 100% Read   13.94   4289.95   134.06   22.17
Real Life – 60% Rand / 65% Read   18.95   1952.18   15.25   54.70
Max Throughput – 50% Read   41.95   1284.81   40.13   27.41
Random 8K – 70% Read   15.56   2132.71   16.66   60.32
####################################################################


SERVER TYPE: VMware ESX 4 U1
GUEST OS / CPU / RAM: Win2K3 SP2, 2 vCPU, 2GB
HOST TYPE: DELL R610, 32GB RAM, 2 x Intel E5520, 2.27GHz, QuadCore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Pillar Data AX500, 180 drives, 525GB SATA, RAID 5
SAN TYPE / HBAs: FCoE CNA Emulex LP21002C on Nexus 5010

####################################################################
TEST NAME———-Av. Resp. Time ms—Av. IOs/sek—Av. MB/sek——
##################################################################
Max Throughput-100%Read……….5.1609……….11275……….362.86 (CPU=22.84%)

RealLife-60%Rand-65%Read…….3.2424……….17037……….131.68 (CPU=32.6%)

Max Throughput-50%Read……….4.2503……….12742……….403.35 (CPU=26.45%)

Random-8k-70%Read…………….3.2759……….16824……….128.19 (CPU=30.39%)
##################################################################

 

SERVER TYPE: ESXi 4.1 / Windows Server 2008 R2 x64, 2 vCPU, 4GB RAM
CPU TYPE / NUMBER: Intel Xeon X5670 @ 2.93GHz
HOST TYPE: HP ProLiant BL460c G7
STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS6280 MetroCluster, Flash Cache / 80 Disks / RAID-DP

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——Av. CPU Utl.——
##################################################################################
Max Throughput-100%Read 4.07 11562 361 63%
RealLife-60%Rand-65%Read 1.67 22901 178 1%
Max Throughput-50%Read 3.93 11684 365 61%
Random-8k-70%Read 1.45 25509 199 1%
##################################################################

 

SERVER TYPE: HP Proliant DL360 G7
CPU TYPE / NUMBER: Intel Xeon 5660 @ 2.8GHz (2 processors)
HOST TYPE: Server 2008 R2, 4 vCPU, 12GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP P4500 SAN, 24x 600GB 15K in Network RAID 10. 4 paths to virtual iSCSI IP, Round Robin host IOPS policy set to 1, Jumbo Frames enabled, NetFlow enabled

##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——Av. CPU Utl.——
##################################################################################
Max Throughput-100%Read 8.45 7119 222 22%
RealLife-60%Rand-65%Read 15.68 2423 18 55%
Max Throughput-50%Read 9.75 6000 187 25%
Random-8k-70%Read 11.71 2918 22 61% 
##################################################################

EMC VNX5500, 200GB FAST Cache (4x 100GB EFD, RAID 1)
Pool of 25x 300GB 15k disks
Cisco UCS blades
##################################################################################
TEST NAME——————-Av. Resp. Time ms——Av. IOs/sek——-Av. MB/sek——
##################################################################################
Max Throughput-100%Read—— 1.71 —– 16068 —– 502
RealLife-60%Rand-65%Read—– 10.95 —– 3498 —– 27
Max Throughput-50%Read——– 0.885 —– 12697 —– 198
Random-8k-70%Read—————- 8.635 —– 4145 —– 32.38
##################################################################
