
Does Hardware Matter?

Blog post created by Bob Madaio on May 3, 2016

Last week, Hitachi Data Systems announced some significant enhancements to our storage portfolio that simplify, consolidate and accelerate file-based and virtual environments and enhance our customers’ use of cloud storage.


Aside from the very slick storage configuration and management tool (Hitachi Storage Advisor), new cross-platform analytics (Hitachi Infrastructure Analytics Advisor) and enhanced data protection, the announcement led with that sometimes most uncool of things: uniquely engineered hardware.


But that raises the question: in a software-defined world, does hardware still matter?


Before directly answering that question (hint: it’s a yes), let’s first be very clear on one key point: Hitachi is “All In” on Software-Defined Infrastructure. We view the struggle of many IT teams as one of modernizing, and modernizing fast enough to stay ahead of the massive pressures put on them by new business-led Digital Transformation initiatives.


That modernization is inevitably driving IT toward a more services-led model, which requires the type of flexibility and scale that a software-led environment can deliver.  In fact, a year ago we focused heavily on our belief that, despite the differences in the applications being supported, a new approach to software could help move customers forward.


Nowhere was this more obvious for us last year than when we announced the expanded Hitachi Virtual Storage Platform (VSP) family and the expansion of the Hitachi Storage Virtualization Operating System (SVOS), which drove common storage functionality from entry to ultimate and from open systems to the mainframe.


We also announced our Hitachi Hyper Scale-out Platform, which derives its amazing scale-out capabilities from a unique file system, embedded (open source) virtualization and “industry standard” hardware. Software-defined at its best.


To see what IDC thinks is important about software-defined infrastructure, click here.  And to see how we feel the pieces fit together to lead you to ITaaS, check out this video:



There are plenty more examples across our product line that show our commitment to moving value into software – for simplicity, for programmability and for automation – including our vast expansion of REST-based APIs.
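As a rough illustration of the programmability those APIs enable, here is a minimal Python sketch that provisions a volume through a REST call. The endpoint URL, payload fields and token are hypothetical placeholders, not the actual Hitachi API.

```python
# Minimal sketch of REST-driven storage automation.
# The endpoint, payload fields and token below are hypothetical
# placeholders, not the actual Hitachi REST API.
import requests

API_BASE = "https://storage.example.com/api/v1"   # hypothetical endpoint
TOKEN = "replace-with-session-token"              # hypothetical auth token

def create_volume(pool_id: str, size_gb: int, name: str) -> dict:
    """Request a new volume from the storage system via REST."""
    response = requests.post(
        f"{API_BASE}/volumes",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"poolId": pool_id, "sizeGiB": size_gb, "name": name},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    volume = create_volume(pool_id="pool-0", size_gb=512, name="vm-datastore-01")
    print("Provisioned volume:", volume)
```

The point is less the specific call than the pattern: anything an administrator can do in a GUI becomes a scriptable, repeatable operation.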


Why, then, would we have spent time designing a unique hardware architecture?  Is it even worth it today?


The industry thinks so.  Gartner, for instance, said this: "As [Software Defined Storage or SDS] becomes a strategic imperative, I&O leaders need to address the misconception that the underlying hardware does not matter as many of the features are now provided by SDS."  The research note, which discusses how hardware has a major impact on the success of software-defined initiatives, can be found here.


Because, to answer the original question, hardware matters. Intel has all but won the architectural battle for general-purpose processing, but for intelligent offloading of tasks, advanced hardware design and specialization can still provide a tremendous advantage. (Which, by the way, explains why Intel recently purchased Altera; they see this as well.)


In fact, being the research and development powerhouse that we are, we’ve patented such offloading of tasks in a number of places.


Our Hitachi Accelerated Flash modules, for instance, have patented compression offloading that leverages the expanded processing power embedded in each device, which far exceeds that of any traditional SSD.  This parallelization of flash operations and compression is why Hitachi all-flash systems maintain their performance as they fill and as workloads get more challenging.
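To make the idea of parallelized, offloaded compression concrete, here is a rough host-side analogy in Python: compressing data blocks serially on a single "shared controller" versus spreading them across a pool of workers, each standing in for the processor embedded in a flash module. It illustrates only the parallelization concept, not how the Hitachi hardware actually implements it.

```python
# Host-side analogy for parallelized compression offload.
# Each worker process stands in for the processor embedded in a
# flash module; this illustrates the concept only, not the actual
# Hitachi Accelerated Flash implementation.
import os
import time
import zlib
from concurrent.futures import ProcessPoolExecutor

def compress_block(block: bytes) -> bytes:
    return zlib.compress(block, level=6)

def main() -> None:
    # 64 blocks of 1 MiB of moderately compressible data.
    blocks = [os.urandom(512 * 1024) * 2 for _ in range(64)]

    start = time.perf_counter()
    serial = [compress_block(b) for b in blocks]          # one shared controller
    serial_s = time.perf_counter() - start

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:                   # one worker per "device"
        parallel = list(pool.map(compress_block, blocks))
    parallel_s = time.perf_counter() - start

    print(f"serial:   {serial_s:.2f}s")
    print(f"parallel: {parallel_s:.2f}s")
    assert [len(c) for c in serial] == [len(c) for c in parallel]

if __name__ == "__main__":
    main()
```

When the compression work is done where the data lives, the shared controller is free to keep serving I/O instead of burning cycles on every block.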


But flash storage is not the only area in which our research and development has created intelligent offloading.


Other NAS architectures simply ask more of a general-purpose processor that is already working hard to deliver block storage functionality.  Features work, but not at the performance levels customers may need long term.  Do they have things like deduplication?  Yes. Do customers often want to run it while there’s real work getting done?  No.


That's where innovation comes in. In fact, with over 4,000 patents (no, not a typo) related to unified storage, we sit atop the industry in unified storage innovation by a wide margin. Included in that list is the Hitachi-patented architecture of our NAS portfolio, and customers have long known that if they want NAS storage fast and scalable enough to get the most out of today’s modern flash-based storage, such a specialized architecture makes sense.
[Image: Hitachi NAS module]

 

Hitachi NAS delivered this by moving key file system, networking and deduplication operations from the shared Intel processor into dedicated, task-tuned Field Programmable Gate Arrays (FPGAs) that ensure operations happen at flash-like speeds. (Hu Yoshida talks about the historical architecture of Hitachi NAS and some of the inherent benefits our FPGA-based architecture brings in his recent blog post, here.)
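Deduplication is a good example of the kind of fixed, repetitive work that rewards specialized hardware: hash each block, keep one copy per unique fingerprint. The Python sketch below shows only that basic block-level technique; it is not the Hitachi NAS implementation, just the sort of hashing and lookup load that would otherwise land on a shared general-purpose processor.

```python
# Minimal block-level deduplication sketch: hash fixed-size blocks
# and store only one copy per unique fingerprint. This illustrates
# the general technique, not the Hitachi NAS implementation.
import hashlib

BLOCK_SIZE = 4096  # bytes

def deduplicate(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Split data into blocks, returning a unique-block store and a recipe."""
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy seen
        recipe.append(digest)
    return store, recipe

def reconstruct(store: dict[str, bytes], recipe: list[str]) -> bytes:
    return b"".join(store[digest] for digest in recipe)

if __name__ == "__main__":
    data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE * 2  # repetitive sample
    store, recipe = deduplicate(data)
    assert reconstruct(store, recipe) == data
    print(f"{len(recipe)} logical blocks stored as {len(store)} unique blocks")
```

Run constantly against every write, that hashing and lookup cycle is exactly the steady, parallelizable workload that dedicated silicon can absorb without stealing cycles from the rest of the system.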


And while these NAS gateways are popular with our customers, there was a need for a VSP-based solution that was fast, consolidated and fully integrated.  That’s what has been delivered: cards that leverage this advanced FPGA technology but plug directly into our PCI-based backplane and share the Intel processor of our block controllers, drawing on our years of experience delivering Intel-based LPAR technology on our Compute Blade server line.


So while the focus may be on fast and efficient tiering to cloud, advanced data protection, or deeply integrated VM understanding and protection, don’t miss one key fact that sets Hitachi unified storage solutions apart.


The realization that hardware still matters. And that hardware engineering still matters.

 

Read more at our Storage Systems Community page.
