May 2, 2014
A lot can happen in 7 years. By 2021, we will all be 7 years older. The explosion of technology trends like mobility, cloud, social, and the Internet of Things will drive orders of magnitude more data and tremendous changes in our businesses and in our lives. In its predictions for 2014, IDC forecast that the average home will have 20 sensors by 2020, generating 200 TB of data per year per household. The amount of data we will need to manage may be orders of magnitude more than we have today in 2014.
In the face of this tremendous data growth and technology change, Gartner’s Roger Cox and Jimmie Chang predict that “Five to seven-year ECB disk storage refresh cycles will become the new normal” (ECB being Gartner’s term for external controller-based storage). This is in contrast to traditional storage refresh cycles of 3 to 5 years.
In the past, advances in magnetic recording technology doubled bit densities every two years, in line with Kryder’s Law, and disk drive prices eroded about 30% per year. At that rate it was cheaper to buy new disk capacity every three to five years than to maintain the old, with its higher maintenance and environmental costs. What has changed to prompt this prediction by Gartner?
One reason is that price erosion in disk drives has slowed from about 30% per year to less than 20%. The days of doubling bit densities every two years with new magnetic recording technologies are over, and the only way to increase the capacity of disk drives now seems to be increasing the number of disks from four to six or seven per disk module. While this may double the capacity of a disk drive, the added components, power, and cooling mean the cost reduction will not be nearly what doubling bit densities delivered. Price erosion is also driven by competition and volumes: the number of enterprise disk vendors is now down to two, and disk drive volumes are decreasing due to conversion to flash drives, especially in the high-volume PC server market, which is itself declining.

This dramatic slowdown in storage price erosion, from 30% to 20%, will have a major impact on storage economics. One way to compensate is to extend the life of the storage asset from the traditional three-to-five-year cycle to a longer cycle of five to seven years. As disks get larger and capacities grow into the multi-TB range, the disruption caused by migration to larger-capacity disks is another reason for extending the refresh cycle. Non-disruptive migration of storage systems will be a key requirement for everyone as disk capacities continue to increase.
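To see why slower price erosion argues for a longer refresh cycle, it helps to compound the two rates over time. Here is a minimal sketch: the 30% and 20% annual declines are the figures cited above, while the $100-per-TB starting price is purely illustrative.

```python
def price_per_tb(start_price, annual_erosion, years):
    """Price of a TB of disk after `years` of compound annual price erosion."""
    return start_price * (1 - annual_erosion) ** years

start = 100.0  # hypothetical starting price, $ per TB

# At the historical ~30% annual erosion, replacement capacity after
# 3 years cost roughly a third of the original price:
print(price_per_tb(start, 0.30, 3))  # ~34.3

# At ~20% erosion, 3 years buys a much smaller discount...
print(price_per_tb(start, 0.20, 3))  # ~51.2

# ...and it takes about 7 years to reach a comparable saving:
print(price_per_tb(start, 0.20, 7))  # ~21.0
```

In other words, the economic sweet spot for replacing a storage system shifts out by several years when annual price declines slow from 30% to 20%.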
The greater adoption of storage efficiencies, starting with storage virtualization and including thin provisioning, active archive, thin images, deduplication, and compression, has dramatically reduced the need for new capacity in recent years. New demand for storage capacity is expected to run about 30% per year through 2017, compared to about 60% per year during the 2005 to 2007 period, according to IDC.
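The difference between those two growth rates compounds quickly. A simple sketch, using a hypothetical 100 TB starting point (the 30% and 60% rates are the IDC figures above; everything else is illustrative):

```python
def capacity_after(start_tb, annual_growth, years):
    """Capacity needed after `years` of compound annual demand growth."""
    return start_tb * (1 + annual_growth) ** years

base = 100.0  # hypothetical starting capacity, TB

# At ~30% annual growth, capacity needs a bit more than double in 3 years:
print(capacity_after(base, 0.30, 3))  # ~219.7

# At the ~60% growth of 2005-2007, they would have roughly quadrupled:
print(capacity_after(base, 0.60, 3))  # ~409.6
```

Even the "slower" 30% rate still multiplies capacity needs roughly sixfold over a 7-year refresh cycle, which is why scalability matters so much in the argument that follows.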
This drive for storage efficiencies was triggered by the economic downturn of 2008/2011 and the supply shortages caused by the floods in Thailand in 2011. Eventually these efficiencies will play out, and by 2017 I expect demand for capacity to return to even higher growth rates, driven by the Internet of Things and Big Data. So if we expect our storage systems to scale to meet our needs over the next 5 to 7 years, beyond 2017, we need to invest today in systems that can scale to hundreds of petabytes and millions of IOPS, with the software and management tools to ensure that storage is always available, automated, and agile.
Hitachi is delivering a Continuous Cloud Infrastructure that is designed to scale with your business requirements over the next five to seven years and beyond.
The base is Hitachi’s Storage Virtualization Operating System (SVOS), which can extend virtualization across separate Hitachi storage systems up to 100 km apart for active/active availability and non-disruptive migration to and from current Hitachi enterprise storage systems. SVOS will also enable non-disruptive migration of heterogeneous storage systems once they are attached and virtualized behind Hitachi enterprise storage systems.
VSP G1000 is a new-generation VSP storage system that supports virtual pools of file, block, and object stores, scaling to 256 PB of total capacity and nearly 4 million IOPS.
Command Suite v8 – Integrated management for server, file, and block infrastructure that includes new SVOS capabilities and will add workflows such as nondisruptive migration, supporting the HDS services-delivered capability available today. It also brings improved application awareness, provisioning templates, a simplified and streamlined GUI, and common REST API interfaces.
Unified Compute Platform and UCP Director 3.5 – UCP Director adds support for SVOS and the VSP G1000, with enhanced server profile provisioning and disaster recovery support. UCP for VMware with Cisco UCS will also support SVOS and the VSP G1000.
The specifications of this platform go well beyond what many of our customers need today. For instance, many customers may need only several hundred thousand IOPS today, but in 5 to 7 years it is very likely they will need millions of IOPS to be competitive. Since the design of this platform is very modular, they can start with a Virtual Storage Director pair and a pair of Front End Directors without any internal storage, leveraging existing storage assets to provide the latest in active/active enterprise availability and performance. As their business needs grow, they can scale up to 16 Virtual Storage Directors, 2 TB of cache, and 192 FC ports without any disruption to their applications. And when they are ready for the next-generation platform beyond the VSP G1000 in 2020 or 2021, they will be able to move to it without any interruption. This will provide a continuous cloud infrastructure that enables them to leverage their storage assets and contain costs in the face of escalating demand.