
HDT with three Tiers - High RG utilization on SSD tier

Question asked by Rahul Bhat on Sep 1, 2015

Hi Colleagues,

I am sure we have had multiple discussions about HDT and how it works. I have a general understanding of it, but I wanted to share some details and get feedback from the community in case I am missing some tricks here and need to change some settings.


Let me give you an idea of what it looks like - see the attached picture.

I have a 3-tier HDT pool made up of SSD, SAS 10K, and SAS 7.2K drives, all internal of course. We started using the new storage system with the default tiering policy of All Tiers, which means it should use all disk types for tiering, with new page assignment set to the middle tier.



Now the problem is that, after 3 months of migration, I see that the SSD RAID groups of 38.39 TB (2 RAID groups, 6D+2P) are already at 50-55% peak utilization, with 37.77 TB of SSD consumed out of the overall VSP G1000 used capacity of 54 TB. This means all new volumes start using SSD first, whereas SAS 10K is only 9% used of its 172 TB, and Tier 3 is still 0% used of its 106 TB.
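Just to sanity-check the numbers above, here is the quick arithmetic (all values are the TB figures from my observations):

```python
# Quick arithmetic check of the capacity numbers above (all values in TB).
ssd_total = 38.39      # 2 SSD RAID groups, 6D+2P
ssd_used = 37.77       # SSD capacity consumed
pool_used = 54.0       # overall VSP G1000 used capacity
sas10k_used = 0.09 * 172.0   # SAS 10K tier, 9% used of 172 TB
sas72k_used = 0.0            # SAS 7.2K tier, untouched

ssd_fill_pct = 100 * ssd_used / ssd_total
ssd_share_of_used = 100 * ssd_used / pool_used
print(f"SSD tier is {ssd_fill_pct:.1f}% full by capacity")
print(f"SSD holds {ssd_share_of_used:.1f}% of all used data")
```

So the SSD tier is essentially full by capacity (around 98%) while holding roughly 70% of all the data written so far, even though the lower tiers are nearly empty.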


Currently there is probably no competition between the IOPS on average, as the system is still fairly empty apart from some peaks on certain LDEVs during migrations. Most volumes are not doing much after migration, around 500-1200 IOPS, and the highest I saw was 2000 IOPS. But I still can't understand the logic of keeping those non-performing volumes on SSD while Tier 2 and Tier 3 are not utilized at all, and the SSD RAID groups run at 50-55% peak utilization with just 2000 IOPS peak or 270 MB/s max.


I would have expected that the RAID group utilization would have triggered some non-performing LUNs to move to the SAS drives, but that's not really happening. One of the test servers we checked is doing only 200 IOPS but is still sitting on SSD. On average the IOPS values may have no competition, but is there a way to get a better spread, so that my SSD RAID groups are not highly utilized while the rest of the system sits empty, waiting for more capacity allocation before heavier movement across the tiers is triggered?
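To illustrate the kind of relocation behavior I would have expected, here is a small sketch. This is just my mental model, not Hitachi's actual HDT algorithm (HDT works on page-level I/O frequency distributions per monitoring cycle), and the IOPS thresholds are invented for illustration:

```python
# Sketch of the IOPS-driven tier placement I would have expected.
# Thresholds are made up; real HDT builds a frequency distribution of
# page-level I/O counts per monitoring cycle and sets tier boundaries itself.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    iops: int
    tier: int  # 1 = SSD, 2 = SAS 10K, 3 = SAS 7.2K

def expected_tier(iops: int, hot: int = 1500, warm: int = 300) -> int:
    """Hot workloads on SSD, warm on SAS 10K, cold on SAS 7.2K."""
    if iops >= hot:
        return 1
    if iops >= warm:
        return 2
    return 3

volumes = [
    Volume("test-server", 200, 1),   # the 200 IOPS LUN still sitting on SSD
    Volume("busy-ldev", 2000, 1),    # the busiest LDEV I observed
]

for v in volumes:
    want = expected_tier(v.iops)
    if want != v.tier:
        print(f"{v.name}: expected tier {want}, currently tier {v.tier}")
```

Under this model the 200 IOPS test server would be flagged as a candidate for the bottom tier, which is exactly the movement I am not seeing.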


The fact is that I have high RAID group utilization on the 2 SSD RGs while the rest are not doing much, and this might continue until the whole of Tier 2, which is 172 TB, is filled?


I would like to hear other folks' opinions. Could I change some global settings?