Monday, February 18, 2013

Dell MD3600/MD3620 Review

We recently built a new virtualization cluster using two new Dell PowerEdge R620s and a new Dell PowerVault MD3620f configured with dual controllers, 14x 300GB 10K RPM drives, and 3x 1TB 7,200 RPM drives. Each controller has four 8Gb FC ports.  With that, if you are using MPIO you can directly connect four hosts; if you are not using MPIO you can connect up to eight hosts directly via Fibre Channel without any FC switches.
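Those host counts are just port arithmetic; here is a quick sketch of my math, assuming a directly attached MPIO host uses one path to each controller while a non-MPIO host uses a single port:

# Port math behind the direct-attach host counts (my assumptions: an MPIO
# host uses one FC port on each controller for redundant paths, while a
# non-MPIO host uses a single FC port).
controllers = 2
fc_ports_per_controller = 4
total_fc_ports = controllers * fc_ports_per_controller  # 8 ports total

hosts_with_mpio = total_fc_ports // 2      # 2 ports per host -> 4 hosts
hosts_without_mpio = total_fc_ports        # 1 port per host -> 8 hosts

print(hosts_with_mpio, hosts_without_mpio) # 4 8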

The MD3620f is a 2U SAN that holds 24 drives; with expansion shelves it can grow to 192 drives (an additional license is required to go above 120).  Each controller has 2GB of cache.

There are also iSCSI and SAS versions available: the MD3620i for iSCSI and the plain MD3620 for SAS.  The MD3600 series is similar, except its shelves hold 12x 3.5" drives instead of the 24x 2.5" drives in the MD3620 series.

I connected a watt meter to it; on 110V AC it consumed about 200 watts at idle and never more than 240 watts under load.

As of firmware 7.84, Windows Server 2012 is supported, but earlier revisions do not support it, so make sure you upgrade to the latest firmware revision available.

Also of note: the Dell MD3620f does NOT work with Microsoft's System Center Data Protection Manager 2012 w/ SP1.  Dell's VSS provider crashes and app-faults, and this isn't something Dell is willing to fix.  Unfortunately, Dell has not supported DPM since 2007 and recommends you uninstall their VSS provider because they are too lazy to fix it.  You can simply tell DPM to bypass the hardware VSS provider by creating a new registry key here: [HKLM\Software\Microsoft\Microsoft Data Protection Manager\Agent\UseSystemSoftwareProvider]
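If it helps, here is a minimal sketch of creating that key with Python's built-in winreg module (run it elevated on the DPM server); reg.exe or PowerShell would do the same job:

# Minimal sketch: create the registry key that makes DPM fall back to the
# system (software) VSS provider instead of Dell's hardware provider.
# Run elevated on the DPM server, and use a 64-bit Python so the key isn't
# redirected to Wow6432Node.
import winreg

key_path = r"Software\Microsoft\Microsoft Data Protection Manager\Agent\UseSystemSoftwareProvider"

# CreateKey creates the key if it doesn't exist and opens it if it does;
# per the note above, the key simply needs to be present.
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    pass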


Moving on... The Dell SAN is managed with "Modular Disk Storage Manager" (MDSM), configuration software (rather than a web interface) that you must install on any computer you want to manage the SAN from.  In the latest version, 7.84, it is much more stable and reliable.  The software is still "clunky" and dated, but at least it is functional.  One of my biggest complaints is how long it takes to connect to the SAN, anywhere from 10 to 120 seconds.  Additionally, if one of the controllers is offline (an unplugged Ethernet cable, for instance) you are unable to manage LUNs, etc., making the redundancy of this unit questionable.

Here are some basic screenshots of the MDSM.


The host mappings page is a bit clunky to work with compared to our Compellent and HP systems, but again it works. 

The hardware overview is a nice little page.

The performance monitor setup is much nicer than on the HP and Compellent systems I've used, but I don't think the IO-per-second numbers are accurate: seven 300GB 10K drives can't do 2,500 IOPS, and three 1TB SATA drives can't do 1,707.
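As a rough sanity check (my own rule-of-thumb figures, not anything from Dell): a 10K SAS spindle is usually good for roughly 140 random IOPS and a 7,200 RPM SATA spindle for roughly 75, so the back-of-the-envelope math looks more like this:

# Back-of-the-envelope IOPS estimate using common per-spindle rules of thumb
# (~140 random IOPS for a 10K SAS drive, ~75 for a 7.2K SATA drive). These
# are assumptions, not measurements, and they ignore controller cache hits.
IOPS_10K_SAS = 140
IOPS_7200_SATA = 75

dg1_estimate = 7 * IOPS_10K_SAS    # ~980 IOPS vs. the 2,500 MDSM reported
dg3_estimate = 3 * IOPS_7200_SATA  # ~225 IOPS vs. the 1,707 MDSM reported

print(dg1_estimate, dg3_estimate)  # 980 225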


We provisioned our MD3620f as follows (rough usable capacities sketched below):
Disk Group 1 - 7x 300GB in RAID 5
Disk Group 2 - 8x 300GB in RAID 5
Disk Group 3 - 3x 1TB in RAID 5
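For reference, RAID 5 sacrifices one drive's worth of space to parity, so the usable capacities work out roughly like this (raw drive sizes, ignoring formatting overhead):

# Rough RAID 5 usable-capacity math for the three disk groups above.
# RAID 5 stores one drive's worth of parity, so usable space is (n - 1) * size.
groups = {
    "Disk Group 1": (7, 300),   # 7x 300GB 10K SAS
    "Disk Group 2": (8, 300),   # 8x 300GB 10K SAS
    "Disk Group 3": (3, 1000),  # 3x 1TB 7.2K SATA
}

for name, (drives, size_gb) in groups.items():
    print(f"{name}: {(drives - 1) * size_gb} GB usable")
# Disk Group 1: 1800 GB usable
# Disk Group 2: 2100 GB usable
# Disk Group 3: 2000 GB usable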

Each test was run a couple of times and averaged.  There may have been a small amount of other load on the SAN at the time, so the numbers aren't perfect science, but they are pretty good.  File read/write was done via file copies to and from the SAN and represents the real-world performance I've seen from this SAN over the last three months.  Raw read/write was done using the HD Tune Pro File Benchmark with a 20GB file length and a random data pattern.

Disk Group 1 (7x 300GB in RAID 5)
File Read - 229MB/s
File Write - 180MB/s
Raw Read - 590MB/s
Raw Write - 255MB/s

Disk Group 2 (8x 300GB in RAID 5)
File Read - 290MB/s
File Write - 209MB/s
Raw Read - 610MB/s
Raw Write - 240MB/s

Disk Group 3 (3x 1TB in RAID 5)
File Read - 115MB/s
File Write - 76MB/s
Raw Read - N/A
Raw Write - N/A

Then I tested Disk Group 1 & Disk Group 2 (owned by different controllers) at the same time using HD Tune Pro, and here are the results:

Disk Group 1 (7x 300GB in RAID 5)
Raw Read - 595MB/s
Raw Write - 225MB/s


Disk Group 2 (8x 300GB in RAID 5)
Raw Read - 640MB/s
Raw Write - 220MB/s

Combined Max:
Raw Read - 1,420MB/s
Raw Write - 457MB/s

1 comment:

  1. Hi Andrew,

    thank you for sharing your experience with Dell hardware.
    I'm also on a project where the customer uses an MD3620f, and I'm a bit disappointed with Dell. I have experience with Fujitsu hardware and I like their solutions. Which hypervisors do you use? I'm currently building an ESXi 5.5 cluster on top.

    Thanks
    Addnan Ramma
