This is my first time administering anything other than NFS-connected NetApp storage, let alone a HUS 110 specifically. I've been reading through the Provisioning guide, older posts here, other forums, and some VMware best practices documents. Most recently I spoke with one of the engineers from the company I purchased the HUS 110 through, but I wanted to see if I could get additional opinions.
My HUS 110 is a small system with 17x1.2TB 10K SAS + 7x200GB SSD, currently configured with one DP pool consisting of 4x (2+2) 10K SAS RAID groups + spare (8.4TB usable) and 1x (5+1) SSD RAID-5 group + spare (900GB usable). Tiering is enabled. I am direct-connect FC to a pair of VMware hosts with dual-port 8Gb QLogic HBAs, and that is the only thing that will use this storage. I am licensed for HDT.
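In case anyone wants to sanity-check my numbers, here's the raw math I'm working from. I'm treating the 2+2 groups as RAID-1+0 (mirrored), using decimal TB as labeled on the drives, and ignoring the array's formatting overhead, so the raw figures land a bit above the "usable" numbers SNM2 reports:

```python
# Rough capacity check for the pool layout described above.
# Assumption: 2+2 groups are RAID-1+0, so half of each group is usable.
SAS_DRIVE_TB = 1.2
SSD_DRIVE_GB = 200

sas_raw_tb = 4 * (2 * SAS_DRIVE_TB)  # 4 groups x 2 data drives = 9.6 TB raw
ssd_raw_gb = 5 * SSD_DRIVE_GB        # 5+1 RAID-5: one drive's worth to parity

print(f"SAS tier raw: {sas_raw_tb:.1f} TB (vs ~8.4 TB usable after formatting)")
print(f"SSD tier raw: {ssd_raw_gb:.0f} GB (vs ~900 GB usable after formatting)")
```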
I have a typical Microsoft stack with Exchange, SharePoint, SQL, and all the usual stuff that goes along with it. From a data standpoint, there is about 360GB of SQL data+logs, 1.1TB of VM OS disks/small misc data, and 800GB of Exchange databases. The SQL/VM OS data size will stay fairly consistent overall, but we are going to be increasing Exchange limits and will allow for 3-5TB of Exchange databases over the next few years.
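To put the growth in context against the pool size, here's my back-of-the-envelope footprint math (rough decimal TB, my numbers from above):

```python
# Current vs. projected data footprint.
sql_tb = 0.36              # SQL data + logs
vm_os_tb = 1.1             # VM OS disks / small misc data
exchange_now_tb = 0.8
exchange_future_tb = (3.0, 5.0)  # planned Exchange growth range

now = sql_tb + vm_os_tb + exchange_now_tb
future_low = sql_tb + vm_os_tb + exchange_future_tb[0]
future_high = sql_tb + vm_os_tb + exchange_future_tb[1]

print(f"Current: ~{now:.2f} TB")
print(f"In a few years: ~{future_low:.1f}-{future_high:.1f} TB "
      f"(vs 8.4 TB usable SAS + 0.9 TB usable SSD)")
```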
DP Pool - Am I good with the single existing DP pool? There aren't a lot of disks/space here to break it up very much, and it doesn't seem like I'd gain anything by doing that?
DP VOL sizing - The recommendation has been not to exceed 2-2.5TB VOLs, but other than that, to break them up however I'd like in order to logically manage my chunks of data at the VMware datastore level. Is it as simple as this given my configuration, or are there other things I should consider, specifically as they may relate to performance and HDT?
DP VOL / New Page Assignment Tier - Should I put everything at Middle in order to maximize what HDT can do? A volume that holds SQL data would be the only one I could see potentially setting to High, but the I/O required by our SQL DBs is pretty low as a whole, so I'm wondering if I'm better off setting it to Middle so that the available space on Tier 1 is maximized without me specifically preferencing one set of data for Tier 1?
DP VOL / Accelerated Wide Striping - Given the overall size of my array and data set, would I benefit from this feature? It seems like I may fall within the range where this could help, but the Provisioning guide has one paragraph that says its use is not recommended.
DP VOL / Promptly Promote Mode - This is enabled by default on volume creation in SNM2. I've found some older posts where people recommend not using it, but if Hitachi has it enabled by default, I'm wondering why that is.
Queue Depth - VMware ESXi 5.5 has a default queue depth of 64; should I leave this alone or reduce it to 32 or lower?
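For context, here's how I've been thinking about the aggregate load. The per-port queue limit of 512 is my assumption for the HUS front-end ports (please correct me if that's wrong), and the four-datastore count is hypothetical; since I'm direct-connect, each host HBA port talks to exactly one array port:

```python
# Rough worst-case outstanding I/O per array port in a direct-connect setup.
# ASSUMPTION: ~512 commands per HUS front-end port (needs verifying).
PORT_QUEUE_LIMIT = 512
luns_per_path = 4        # hypothetical: four ~2TB datastores on one path

for qd in (64, 32):      # default vs. reduced per-LUN queue depth
    worst_case = luns_per_path * qd
    print(f"QD {qd}: up to {worst_case} outstanding I/Os per array port "
          f"(assumed limit ~{PORT_QUEUE_LIMIT})")
```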
Pinning Storage - If I had a workload I specifically wanted to pin to Tier 1, is this accomplished by simply setting "new page assignment tier" to High and then checking the box for "disable tier relocation"? If I did this AFTER a volume was created and in use (let's assume "new page assignment tier" was Middle), would this combination of settings effectively pin all of the volume's data to Tier 1 after the fact?
Lastly, I have been using SNM2. Is there any reason for me to switch to HiCommand given that this is my only array? I couldn't quite determine whether there would be configuration options available via HiCommand but not SNM2.