The VNXe3150 replaces the VNXe3100 and offers a boost in performance over the previous array version. It is also a step up in terms of features and, in my experience, is a really good fit for SMB or ROBO (Remote Office/Branch Office) requirements. When people move from operating a Fibre Channel SAN to an iSCSI array like the VNXe3150, certain points need to be taken into consideration for a good infrastructure design.
- Disk Layout
Unlike the Clariion or VNX, the VNXe3150 has its first four drives reserved for low-level array operations but does not store the OS on those drives; the OS actually lives on a separate flash drive in the enclosure. So it's important to note that you can use these four drives to store servers or data (that is not I/O intensive), unlike the Clariion where you don't use the first five drives. The first four drives are also SAS and not NL-SAS, so larger data storage requirements need NL-SAS drives on a different DAE. You could mix and match drives, but that is not recommended: the drives spin at different speeds, and the best-practice recommendation is always to separate drives spinning at different speeds onto different enclosures, the reason being noise and vibration. The default configuration includes one DAE, so you need to buy a second one to scale up, and the ideal thing to do is use 2 TB drives. Note that the SAS drive configuration allows RAID 5, but the 2 TB drives will have to be configured as RAID 6. Some people might argue about the loss of space with RAID 6 versus RAID 5, but the purpose behind that is better data resiliency, so take it or leave it. The first package of disks with a VNXe3150 purchase is 8 x 600 GB drives. This translates to about 2.8 TB of disk space, and the last drive can be used as a hot spare. One good feature of all EMC arrays is the capability to have a global hot spare, and an ideal practice when you create hot spares (considering you have multiple enclosures) is to place separate hot spares on different enclosures.
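The capacity math above can be sketched quickly. This is a rough back-of-the-envelope helper, not the array's own accounting: real formatted capacities come out lower than the raw drive sizes (which is why 8 x 600 GB lands near 2.8 TB usable rather than the raw figure below), and the function names and numbers are mine, not EMC's.

```python
# Rough usable-capacity estimate for a drive group: subtract hot spares,
# then subtract RAID parity drives. Raw drive sizes only -- formatted
# capacity on the array will be noticeably lower.

def usable_tb(drive_count, drive_tb, raid_level, hot_spares=1):
    """Approximate usable TB after hot spares and RAID parity overhead."""
    data_drives = drive_count - hot_spares
    parity = {"RAID5": 1, "RAID6": 2}[raid_level]
    return round((data_drives - parity) * drive_tb, 2)

# 8 x 600 GB SAS, RAID 5, one hot spare -> 3.6 TB raw; after formatting
# overhead this shrinks toward the ~2.8 TB the array actually reports.
print(usable_tb(8, 0.6, "RAID5"))   # 3.6

# 2 TB NL-SAS drives must go in RAID 6 on this array:
print(usable_tb(8, 2.0, "RAID6"))   # 10.0
```

The two parity drives RAID 6 costs over RAID 5 are the "loss of space" trade-off mentioned above; the payback is surviving a second drive failure during a long NL-SAS rebuild.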
- Storage Pools and LUN creation
Then it gets down to creating pools and assigning storage space. The concept of pools is nice and friendly, but going bonkers and creating too many pools is going to spoil the benefits of a disk pool. Limit your storage pools to a few and allow more spindles in each pool; performance is then distributed evenly across all disks in a pool as slices of data on each disk. But try to distribute I/O-intensive workloads across at least two different pools if that is a possibility. The reason is that, due to data-slicing operations, it is possible for data from high-I/O servers to end up concentrated on one disk, and that could bog down operations. There is no automated data tiering in the VNXe, so data load balancing is not done on a scheduled or automatic basis. From a design standpoint I would configure the pools more carefully to accommodate the lack of automated data tiering. The VNXe3150 offers multiple data source formats: VMFS, NFS, CIFS, generic iSCSI storage, and so on. It even formats VMware datastores and presents them automatically to hosts configured on the iSCSI array. But a VMFS datastore might not serve every requirement, and if you need to present an RDM disk it is not directly available; you would need to present disk space from the generic iSCSI storage area, and that should serve RDM-like needs. Also, since NFS and CIFS shares are available, it becomes easier to deploy file shares right out of the array instead of presenting them from a file server. However, certain features, like moving a datastore to another storage controller, are not available. I think the reason is that the VNXe3150 offers cost savings compared to the Clariion or VNX and hence has to live with a few less features. But what it lacks it makes up for in other areas: de-duplication and datastore snapshot features are available, and they are an important factor in the array's operational capability.
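The "spread heavy workloads across pools" advice can be sketched as a simple round-robin placement, so no two consecutive I/O-heavy servers land behind the same spindles. The pool and server names here are hypothetical placeholders, and the array itself does nothing like this automatically (no automated tiering, as noted above); this is purely a planning aid.

```python
# Round-robin I/O-intensive servers across a small number of pools so
# their data slices are not concentrated behind one set of spindles.
# Pool and server names are illustrative only.
from itertools import cycle

pools = ["pool_a", "pool_b"]                      # few pools, many spindles each
heavy_servers = ["sql01", "exch01", "olap01", "sql02"]

placement = {server: pool for server, pool in zip(heavy_servers, cycle(pools))}

print(placement)
# sql01 and olap01 land on pool_a; exch01 and sql02 land on pool_b
```

Since there is no automatic rebalancing, a placement map like this is worth keeping and revisiting whenever a new heavy workload is provisioned.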
Pools can be expanded in size, and datastore sizes are limited to 2 TB (a VMware VMFS limit). Note that the VNXe3150 does allow spanning of RAID groups, so keep all RAID types equal in one storage pool, e.g. don't mix and match RAID groups in a pool. This is an overall recommendation for any storage array for better efficiency, and I am not sure (haven't tried) whether the array blocks you from doing it. I can try that out later and post here. A key piece of VMware functionality (VAAI) is not yet supported, and I am hoping it will come in a future software upgrade.
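Because of that 2 TB cap, exposing a larger pool to vSphere means carving it into several datastores. A trivial sizing helper, with the cap and pool size as illustrative inputs:

```python
import math

DATASTORE_CAP_TB = 2.0  # per-datastore VMFS limit discussed above

def datastores_needed(pool_tb):
    """How many <= 2 TB datastores it takes to expose a pool of pool_tb."""
    return math.ceil(pool_tb / DATASTORE_CAP_TB)

# A 10 TB pool has to be presented as at least five 2 TB datastores.
print(datastores_needed(10.0))  # 5
```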
(Update, Sep 16, 2013): VAAI is only available for Enterprise and Enterprise Plus licensing, and you should not be worrying about this feature if you are planning an all-iSCSI implementation from the VNXe, because as of now the VNXe supports VAAI only for NFS with ESXi 5 and above.
VAAI (vSphere APIs for Array Integration) enables offloading specific tasks to the storage array, and it requires the plugin from EMC to be installed on the ESXi host.
Please refer to this link for more info: Storage APIs
Where can I find VAAI?
So that was it for this post; I will provide more ideas for properly deploying and benefiting from a VNXe3150.
Part II (in next post)
- Data Protection and Replication
- Management and Maintenance