Continuing from my earlier post on VNXe 3150 design thoughts (Part 1), today I will discuss a few critical aspects of using an iSCSI array like the VNXe 3150.
– Data Protection and Replication
Unlike a traditional SAN, an array like the VNXe 3150 comes with certain limitations around data protection. That said, the VNXe 3150 works around these limitations quite well because it integrates with a variety of EMC products, and hopefully future software versions will add even more advanced features, either natively or through integration.
To be specific, the VNXe 3150 offers a data protection suite covering both local and remote protection. Local protection is by way of snapshots of data LUNs or datastores. The ability to present disks as NFS or CIFS shares also brings indirect data protection benefits; for example, those datastores can be backed up using your existing backup software. Even without backup software, each VNXe 3150 can be configured with a dedicated data replication port on a network interface. You could remove one network interface from the iSCSI server settings and reconfigure it on a backup VLAN. This does two things: it moves replication traffic off the production VLAN, and, more importantly, it allows remote replication over a different VLAN that might be isolated due to network design.
If the VNXe 3150 did not have this feature, it would have been difficult to accommodate requirements imposed by the network architecture. If you provision space to servers as VMFS datastores, note that remote replication cannot be performed unless you use the remote protection suite with EMC Replication Manager. If, instead, you use NFS or CIFS datastores presented as generic volumes, you can easily perform remote replication (assuming you have the remote replication license) without Replication Manager. From a design perspective, you can use this flexibility in disk presentation to consolidate multiple file servers down to a few and save costs.
The VNXe 3150 can replicate to a VNX, and I am told the replication works in a 1:5 arrangement: you can replicate from up to five VNXe arrays to one array, and vice versa. For remote offices this is a cool thing; you can replicate data from multiple VNXes to one array and just load that remote array with 2 TB disks, saving on disk costs.
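To make the remote-office idea concrete, here is a rough sizing sketch for the replication target. All figures (source capacities, 2 TB disks, a flat 25% RAID/hot-spare overhead) are illustrative assumptions, not numbers from a real deployment:

```python
import math

TARGET_DISK_TB = 2.0  # 2 TB disks on the remote/target array (assumption)

def disks_needed(source_capacities_tb, raid_overhead=0.25):
    """Raw disks needed at the replication target.

    source_capacities_tb: replicated data (TB) from each source VNXe.
    raid_overhead: fraction of raw capacity lost to RAID/spares
    (a simplification; the real figure depends on the pool layout).
    """
    usable_needed = sum(source_capacities_tb)
    raw_needed = usable_needed / (1 - raid_overhead)
    return math.ceil(raw_needed / TARGET_DISK_TB)

# Five remote-office VNXe arrays replicating to one target:
sources = [3.0, 2.5, 4.0, 1.5, 2.0]  # TB of replicated data each
print(disks_needed(sources))  # → 9
```

The point is simply that a single shelf of large, cheap disks at the target can absorb the replicas from several small remote arrays.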
The capability to de-dupe data is obviously important, but the de-duplication feature is limited in various ways. One thing I would like to see is multi-target de-duplication; currently, data is de-duped to only one other location. Still, the feature works well for its intended purposes, and I have seen really good de-dupe ratios. I suspect EMC leveraged their purchase of Data Domain and integrated software-level de-duplication as a result. In any case, use de-duplication together with remote replication to reduce bandwidth traffic.
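A quick back-of-the-envelope model shows why de-dupe plus replication matters for WAN bandwidth. The change rate and ratios below are hypothetical, purely to illustrate the shape of the savings:

```python
def replication_gb_per_day(changed_gb, dedupe_ratio):
    """Simplified model: data shipped over the WAN per day
    after de-duplication of the daily change set."""
    return changed_gb / dedupe_ratio

daily_change_gb = 200  # assumed daily change rate at the source
for ratio in (1, 3, 5):
    print(ratio, round(replication_gb_per_day(daily_change_gb, ratio), 1))
```

Even a modest 3:1 ratio cuts the daily replication traffic by two thirds, which is often the difference between fitting in the replication window and not.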
– Management and Maintenance
Finally, the management and maintenance of the VNXe 3150 is quite important. EMC has a service console built into the VNXe array, which you can access under the Settings menu. Once inside the maintenance menu you immediately see a few options for the array and then for the SPs. Note that shutting down the array can be performed through this menu. You can also put SPA or SPB (or both) into maintenance mode and perform a clean shutdown; this process stops all I/O to the array and flushes data in the write cache to disk. Alternately, if you want a quick restart, you can reboot or shut down the array from the service management area.
Software upgrades are released from time to time and can likewise be applied in a way that is non-intrusive to operations. Alerts can be configured as needed, but there is not a whole lot of granularity. Performance of the array can be observed in the graphical metrics dashboard, but this is not much help from a historical standpoint. There is another way to get historical performance logs and analyze them; that could be another post sometime down the road. If you need to know urgently, read about it on Henriwithani's blog (google it).
One thing to be aware of from a design perspective is that a LUN or datastore can move to another storage processor, so it is only at initial configuration that we know for certain where the LUN is allocated. If you have multiple datastores on one array and you observe performance issues, checking which SP each datastore currently sits on could be an area to look at.
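Since the initial placement is the main lever you control, it is worth spreading datastores deliberately across the two SPs up front. A toy greedy-placement sketch (datastore names and IOPS figures are entirely hypothetical):

```python
def assign_to_sp(datastores):
    """Greedy placement: put each datastore on the less-loaded SP.

    datastores: list of (name, expected_iops) tuples.
    Returns (placement dict, per-SP load dict).
    """
    load = {"SPA": 0, "SPB": 0}
    placement = {}
    # Place the heaviest datastores first for a better balance.
    for name, iops in sorted(datastores, key=lambda d: -d[1]):
        sp = min(load, key=load.get)
        placement[name] = sp
        load[sp] += iops
    return placement, load

stores = [("vmfs01", 1200), ("vmfs02", 800), ("nfs01", 500), ("cifs01", 300)]
placement, load = assign_to_sp(stores)
print(load)  # → {'SPA': 1500, 'SPB': 1300}
```

Nothing array-specific here; it just formalizes the instinct of not piling all the busy datastores onto one SP at provisioning time.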
Unlike other EMC support situations, I have found that when you open a support ticket with EMC for a failed hardware component, they will ask you to provide some information about the hardware (e.g. physical disk details). With other arrays, all you give them is the identity of the failed disk drive and its speed; you don't have to provide details of the actual disk. The process of collecting a support bundle has to be followed.
From a solution design perspective, configure LDAP in advance and also enable ESRS if you have purchased the support. For LDAP, one thing to be aware of is that you cannot add two separate groups (from an LDAP perspective). For example, if you have some resources from a remote office configured as storage admins, and another set of storage admins in a different OU, you cannot have a joint group to control permissions. This is a bit convoluted, but the use case for this requirement could be that your central resources are the super admins while the remote admins have fewer rights.
Overall, the VNXe 3150 array is very simple to configure; I set one up in less than half a day. But be very careful with the design components, because you will not have the flexibility to make important changes later. Those changes can be done, but will require a lot of extra disk space and some outages.