Jul 29, 2014
In response to my recent post, George Crump offered a differing perspective on his blog, storageswiss.com. George is a highly regarded blogger and analyst whom I respect and with whom I share many points of view on the IT industry. However, we don't always agree, and sometimes we take different perspectives; that discussion is valuable for furthering knowledge in the industry. George disagreed with my prediction that the "AFA (All Flash Array) market will be displaced by Hybrid arrays that provide all the benefits of Flash performance integrated into an enterprise storage platform."
George responded in a blog post that "Not only will the all-flash array market not go away but I believe that over the next 5-6 years it will become the only way that production data is stored."
His reasoning is that once IT planners and "their users get hooked on the performance of all-flash it will spread throughout the data center especially as flash media approaches cost parity with hard disks."
George's argument rests primarily on the performance of flash and its eventual price parity with hard disks. My argument was aimed at the current crop of AFAs, which treat performance as one size fits all and lack integration into an enterprise pool of storage resources. As I pointed out in my post, most data goes stale very quickly; it should be tiered to lower-cost, more durable media and eventually archived so that it does not clog up other enterprise functions like data protection and data recovery. Integration into a pool of storage resources makes it easier to manage.
My main concern with flash media is its limited durability. We all know that writes degrade the durability of flash, but reads and the passage of time also cause degradation. During a write to flash, electrons are trapped in a floating gate; those electrons leak away over time, and read voltages drive out still more, so records must periodically be refreshed, which adds to write amplification. Disks don't have that problem. While MLC flash may support only 2,000 to 3,000 program/erase cycles, disk can support on the order of 10^16 writes. I feel more comfortable moving less active data to disk through automated tiering to reduce the wear on flash drives. Eventually we will see more durable non-volatile memory devices, such as phase-change or resistive random-access memory, that will reach the price point where they can replace hard disks. But flash technology will not take us there.
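To give a feel for what those program/erase cycle limits mean in practice, here is a minimal back-of-the-envelope sketch. All the inputs (drive capacity, rated P/E cycles, assumed write amplification factor) are illustrative assumptions for the sake of the arithmetic, not vendor specifications:

```python
# Rough lifetime-writes estimate for an MLC flash drive.
# Inputs are illustrative assumptions, not vendor specs.

def total_host_writes_bytes(capacity_gb: float, pe_cycles: int, write_amp: float) -> float:
    """Approximate total host bytes writable before wear-out.

    write_amp accounts for internal rewrites (garbage collection,
    refresh of leaking cells) that consume P/E cycles beyond the
    bytes the host actually sends.
    """
    return capacity_gb * 1e9 * pe_cycles / write_amp

# Hypothetical 1 TB MLC drive rated for 3,000 P/E cycles,
# with an assumed write amplification of 2x:
tbw = total_host_writes_bytes(1000, 3000, 2.0) / 1e12
print(f"~{tbw:.0f} TB of host writes before wear-out")
```

The point of the arithmetic is that every unit of write amplification, including the refresh traffic described above, comes straight out of the drive's usable lifetime, which is one reason tiering colder data off flash reduces wear.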
My other point is that flash drives obey the queuing delay theorem, otherwise known as Little's Theorem. As the number of IOPS and threads increases, response times in mixed read/write environments also increase. Is that what you want for your tier 1 applications when you mix them with a number of tier 2, tier 3, and even tier 4 applications? I think it would be better to tier those IOPS to a different queue. Flash performance is great for random I/O but not for sequential I/O or for RAID rebuilds. RAID rebuild performance for flash is about the same as for hard disks; the difference is that, given their greater durability, you would be less likely to need a RAID rebuild for hard disks.
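Little's result (L = lambda x W: the mean number of requests in the system equals the arrival rate times the mean residence time) can be sketched with a simple single-server M/M/1 queue model. The 100,000 IOPS service rate below is a hypothetical drive figure chosen for illustration; the shape of the curve, response time climbing steeply as the queue saturates, is the point:

```python
# Sketch: how response time grows with load on a single service queue
# (M/M/1 model). Little's Law: L = arrival_rate * residence_time.

def mm1_response_time(arrival_iops: float, service_iops: float) -> float:
    """Mean response time in seconds for an M/M/1 queue (arrival < service)."""
    assert arrival_iops < service_iops, "queue is unstable at or past saturation"
    return 1.0 / (service_iops - arrival_iops)

SERVICE = 100_000  # hypothetical drive capability, IOPS

for load in (0.5, 0.8, 0.9, 0.99):
    arrivals = load * SERVICE
    w = mm1_response_time(arrivals, SERVICE)   # residence time, seconds
    l = arrivals * w                           # Little's Law: outstanding I/Os
    print(f"utilization {load:.0%}: response {w * 1e6:.0f} us, mean queue {l:.1f}")
```

Mixing tier 2, 3, and 4 traffic into the same queue as tier 1 pushes utilization up that curve for everyone, which is why routing the lower tiers to a different queue protects tier 1 response times.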
George does agree with me that disks will be around for data that should never be on flash. I agree, with some reservation, to his statement that “Disk, like tape, will be with us for the long term but they will serve supporting roles to an all-flash primary storage tier.”
I would state this a little differently.
Disk, like tape, will be with us for the long term, and they will serve supporting roles to a non-volatile memory storage tier.
The key word here is "tier," as in a hybrid array, not "array," as in AFA.
Next week there will be an opportunity to continue the discussion. I am looking forward to talking with George when he chairs our panel on "How Flash Will Transform Enterprise Applications," Session 302A at the Flash Memory Summit in Santa Clara, CA. It is an open session on August 7, 9:45 to 10:50. Hope to see you there.