VSP Gx00 a different perspective - Part 1

Document created by Cris Danci on Apr 29, 2015. Last modified by Cris Danci on May 1, 2015. Version 2.

With the recent announcements made at Hitachi Connect 2015 around the new storage offerings, I felt it was a good time to write a series of blog posts. There is (or at least there will be) plenty of technical information from all the usual bloggers writing reviews, so I thought I'd provide something different. The intent of this series is really the 'why' behind the announcement, from an industry perspective and a product-evolution perspective (post 2). This is probably a good time to disclose that I've known about these products for some time and know more about them than most people outside of HDS. This has given me time to think about the messaging HDS is providing and to understand the overall logic (which doesn't always present itself in the marketing material when dealing with a large US technology vendor).


As a recap: as part of its larger "software defined infrastructure" themed announcements, HDS announced a new 'unified storage' portfolio in the infrastructure layer to replace its existing midrange/modular (HUS 100 series) and entry-level enterprise (HUS VM) offerings. The new platforms in this portfolio are based on a hybrid architecture that combines the best of modular-based hardware with enterprise-level software. For those of you familiar with HDS' storage offerings, this is the next evolutionary step after the HUS VM, which adopted modular-based disk trays. Since the software for the new platforms is based on the enterprise code base, the new products are aligned with the same branding to complement the existing VSP G1000 and will be rolled out over four models: the VSP G200, G400, G600 and G800. For more information please refer to the general announcement here.


Industry perspective:


After reading this announcement, particularly the 'Storage Virtualization to the Masses' messaging, some might question HDS' direction and intended market position. There are a number of ways to interpret this. Traditionalists in the storage field might focus on the midrange/enterprise dichotomy. Here, HDS has always leaned toward the enterprise extreme, with performance, availability, functionality and scalability as its key value drivers. It is important to understand that this dichotomy has always been somewhat artificial: a by-product of an industry transition in which central storage - SAN, if you will - moved from a niche technology class to a common (not commodity) one. For a long time many vendors, including HDS, have used this scheme as a way to distinguish between capabilities and cost; this is true of both backend engineering efforts and frontend marketing messaging. The dichotomy, at some point, became a trichotomy with the introduction of entry-level enterprise systems (HUS VM, IBM XIV, VMAX 10K, etc.). This distinction was also artificial because it was based on the viewer's perspective - or, more importantly, on competitors' definitions of the products in the market. In reality the lines are blurred, and you can see this in the AFA (all-flash array) market, where start-ups are winning in traditional enterprise accounts using modular-style two-controller architectures, with specific capabilities (deduplication, compression, etc.) rolled into the messaging and value proposition.


With the introduction of the new VSP offerings, HDS has removed this distinction entirely with regards to capabilities. As of this release, all capabilities exist across the storage portfolio, from the smallest G200 to the largest G800 (which will be available later in the year) and, of course, the G1000. This creates a continuum in the product offering rather than distinct classes of products. That's not to say it makes sense to design for all the offerings in the same way - there are clearly physical hardware limitations (cache, CPU, ports, etc.) that need to be considered - but the capability is there. Of course, this statement isn't particularly new in the storage industry. For many years we've had to deal with these limitations from other storage vendors (you know who you are) who offered rich functionality on hardware architected to reach a price point - and in the midrange space that price point can be very low - but I digress…


Commonality in the infrastructure layer is what this announcement is really about.


This commonality, however, is more than just a vehicle to increase market penetration. It's easy to think about it that way, since it raises the capabilities of the existing midrange platform and provides many ancillary benefits, such as reduced development costs, simplified processes, a lower training load, etc. However, there is a bigger strategic picture to consider. Before diving in, there is a market-driven technology topic that has to be highlighted: "software-defined". Software-defined was a big part of the messaging for a big reason. As Hu recently blogged (beating me to the punch while I was waiting for the release…), software-defined is a vehicle for innovation because of its decoupling effect. Software-defined (through an application-centric view) focuses on how technology can be used, designed and consumed more effectively (which of course has a cascading effect), rather than on the process of managing technology or on innovation in the underlying hardware itself. This abstraction obviously has an effect from a solution perspective, helping shift requirements towards functional requirements rather than low-level technical ones.


That’s not to say the technical details aren’t relevant. Implementing software-defined effectively generally involves a middle layer between the software-definable components and the underlying hardware. For storage, this is normally an operating system or firmware that is exposed through an API. In software-defined terminology, this API lives in the component referred to as “the control plane”. HDS historically had two firmware branches, which also acted as control planes for its storage: one for the modular firmware (DF) and one for the enterprise firmware (RAID). There is, of course, commonality between them, and external management platforms offer APIs to glue things together, but this adds layers of complexity to the equation, which has a negative effect on software-defined innovation. Collapsing all storage platforms into a single OS (SVOS) obviously changes this: a common storage platform removes layers of complexity, which in turn allows HDS and its partners to focus their efforts. What they will focus on is actually a good segue into the bigger picture (and a shift back to the industry perspective).
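To make the decoupling argument concrete, here is a minimal sketch in Python. All class and method names here are invented for illustration - nothing below reflects HDS' actual firmware interfaces or SVOS APIs. The point is structural: two firmware branches force orchestration tools to maintain per-branch adapter glue, while a single control plane lets everything target one interface.

```python
from abc import ABC, abstractmethod

# Two firmware branches, each with its own management interface
# (method names are purely hypothetical).
class DFFirmware:                       # modular (midrange) branch
    def make_lun(self, size_gb: int) -> str:
        return f"DF-LUN-{size_gb}"

class RAIDFirmware:                     # enterprise branch
    def create_ldev(self, size_gb: int) -> str:
        return f"RAID-LDEV-{size_gb}"

# The common interface any orchestration code wants to program against.
class ControlPlane(ABC):
    @abstractmethod
    def provision_volume(self, size_gb: int) -> str: ...

# With two control planes, someone has to write and maintain an
# adapter per branch - the "glue" layer that adds complexity.
class DFAdapter(ControlPlane):
    def __init__(self) -> None:
        self.fw = DFFirmware()
    def provision_volume(self, size_gb: int) -> str:
        return self.fw.make_lun(size_gb)

class RAIDAdapter(ControlPlane):
    def __init__(self) -> None:
        self.fw = RAIDFirmware()
    def provision_volume(self, size_gb: int) -> str:
        return self.fw.create_ldev(size_gb)

# A single OS across the portfolio collapses this: one control
# plane, one API, no per-platform adapters to maintain.
class UnifiedControlPlane(ControlPlane):
    def provision_volume(self, size_gb: int) -> str:
        return f"VOL-{size_gb}"

def orchestrate(plane: ControlPlane, sizes: list[int]) -> list[str]:
    """Orchestration code only ever sees the ControlPlane interface."""
    return [plane.provision_volume(s) for s in sizes]

print(orchestrate(UnifiedControlPlane(), [100, 200]))  # ['VOL-100', 'VOL-200']
```

The orchestration function is identical in both worlds; what disappears with a unified control plane is the adapter layer itself, which is exactly the complexity the paragraph above describes.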


HDS, through its affiliation with Hitachi Ltd., is shifting its focus toward 'Social Innovation'. This is a fairly complex concept and I won't attempt to explain or define it here, but I suggest everyone do some background reading (here, here and here). Social Innovation is extremely important to the overall strategy because it sets HDS apart from its usual competitors. Social Innovation solutions are generally very complex and require a wide set of technology classes to work together; they go beyond the simple storage, compute, networking, etc. model. By focusing on Social Innovation, HDS/Hitachi Ltd. are positioning themselves in a market with very few competitors, a market where solutions are difficult to imitate, and a market that can have a positive global impact. Taking a step back: by adopting a common storage platform, HDS is simplifying the technology equation for delivering Social Innovation solutions (in the infrastructure layer) and enabling them to occur at all scales.


In my next post (probably tomorrow or the day after) I will drill down into the evolving architecture of the Virtual Storage Platform (the product-evolution perspective) and how (and why) Hitachi is balancing the benefits of specialised hardware with the power of the Intel roadmap to build the kind of solution that can handle the shift towards Social Innovation.





https://community.hitachivantara.com/docs/DOC-1004970 - Part 1

https://community.hitachivantara.com/docs/DOC-1005022 - Part 2

https://community.hitachivantara.com/docs/DOC-1005019 - Part 3