Nathan Tran

Protect Microsoft® SQL Server® with Hitachi Application Protector in a Hyper-V Environment

Discussion created by Nathan Tran on Dec 18, 2013

This technical post shows how your organization can use Hitachi Application Protector (HAPRO) to protect Microsoft® SQL Server® 2012 in a Hyper-V environment. HAPRO protects your SQL data from the scenarios in Table 1.


Table 1. Test Case

Test Case: SQL database protection from the following:

§  Accidental deletion of data

§  Logical database file corruption

§  Physical file corruption caused by disk or hardware failure

Pass/Fail Criteria: Full database recovery from snapshot



Hitachi Application Protector allows you to do the following:

§  Schedule or perform backups of your SQL Server databases on an as-needed basis

§  Restore SQL Server databases rapidly in the event of accidental data deletion or data corruption

Hitachi Application Protector leverages Hitachi Thin Image snapshot technology and Microsoft Volume Shadow Copy Service (VSS) to provide the following application-consistent data protection:

§  Snapshot-based, application-consistent backup, recovery, and data protection software

§  Off-loaded host server backup overhead with storage snapshots

§  Disk-to-disk backup and recovery management, leveraging the Microsoft Volume Shadow Copy Service infrastructure and Hitachi file clone software

§  The ability to schedule single, daily, weekly, or monthly snapshots

This posting provides the following:

§  A proof point of the basic functionality of this solution

§  A high-level technical reference for considering this solution

§  A high-level reference for the use case implementation

This posting does not cover the following:

§  Performance measurement

§  Sizing information

§  Best practices

§  Implementation details



Note: Testing of this configuration was in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow the recommended practice of conducting proof-of-concept testing in a non-production, isolated test environment that otherwise matches your production environment before implementing this solution in production.

Use Case Overview

Microsoft SQL Server is a critical part of many businesses. Ensuring data protection is one of a database administrator's highest priorities.

Hitachi Application Protector protects your data and databases. It also lets you recover rapidly from a database failure, minimizing the loss of productivity associated with a service outage.

Tested Components

These hardware and software components were used in the Hitachi Application Protector testing environment.


Table 2. Hardware Components

Hitachi Unified Storage VM

§  Dual controllers

§  2 × 8 Gb/sec Fibre Channel ports used

§  166 GB cache memory

§  8 × 600 GB SAS disks

Hitachi Compute Blade 500 chassis (SVP: A0135-D-6829)

§  8-blade chassis

§  2 management modules

§  6 cooling fan modules

§  4 power supply modules

§  2 Brocade 5460 Fibre Channel switch modules

§  2 Brocade 10 GbE DCB switch modules

520HB1 server blade (BMC: 01-56)

§  Half blade

§  2 × 8-core Intel Xeon E5-2680 processors, 2.70 GHz

§  160 GB RAM

§  Emulex 10 GbE CNA onboard network adapter

§  Hitachi 8 Gb/sec Fibre Channel mezzanine card



Table 3. Software Components

§  Hitachi Application Protector

§  Hitachi Dynamic Provisioning (microcode dependent)

§  Hitachi Thin Image (licensed on Hitachi Unified Storage VM)

§  Command Control Interface

§  Hitachi VSS Hardware Provider

§  Microsoft Windows Server 2012 Standard Edition

§  Microsoft SQL Server 2012 Enterprise Edition

High Level Test Infrastructure

The testing of Hitachi Application Protector functionality used the following:

§  A Hitachi Compute Blade 500 chassis with a half-size server blade for the Microsoft SQL Server 2012 server

§  Hitachi Unified Storage VM for the SAN storage

Figure 1 illustrates the high-level test infrastructure.

Figure 1. High-Level Test Infrastructure

Test Results

These are the test results and success criteria for this testing.


Table 4. Test Results

Test Case: Backup and restore

This test case performs a backup of the SQL database and validates the ability to successfully restore the database in the event of data deletion or corruption of the SQL data or log files.

Expected result:

§  Successful backup of the SQL database

§  Successful restore of the SQL database

Using Hitachi Application Protector

Hitachi Application Protector uses storage system-based snapshot technology. It leverages Microsoft Volume Shadow Copy Service to give you application-consistent data protection.

Application Protector knows how and where the primary database file and its supporting files (such as log files) are stored. As a result, Application Protector tracks and backs up all application-related data changes at the requested point in time to guarantee recovery.

The storage system-based snapshot technology creates backup images quickly. The technology uses storage space efficiently to maintain changes to the database and log files.

Snapshot technology simplifies recovery because the protected data appears in a mirrored file system, just like the original. Right-click to recover the database and related files.

To learn more about Hitachi Application Protector, visit the Hitachi Data Systems website.

Using Hitachi Application Protector in a Microsoft SQL Server Clustered Environment

Using Hitachi Application Protector in a Microsoft SQL Server clustered environment requires manual steps to fail over the Hitachi Application Protector metadata from one node to the other in the event of a SQL node failure.

Hitachi Application Protector metadata falls into the following categories, based on location:

§  On-system metadata, such as WMI DB, Task Scheduler, and registry keys

§  On-disk metadata, such as on-disk objects, logs, and configuration

When using on-disk metadata, the following storage options are available through the configurable Hitachi Application Protector metadata directory. Set this configurable location to one of the following paths:

§  Non-shared volume (can be shared as a CIFS share during manual failover)

§  Shared non-cluster volume

§  Cluster resource volume

The following describes each option.

Non-Shared Volume

When hosting the metadata on a currently active, non-clustered, non-shared location on a given node, Hitachi Application Protector runs on that node.

Configure a separate, currently passive cluster node on which Hitachi Application Protector can run.

Fail Over (Move Metadata to the New Active Node)

In case of failover, perform the following steps:

1.     Copy the metadata from the volume of the old active node (Node 1) to the volume of the new active node (Node 2).

Example: To run from Node 2, when the metadata path of Node 1 is shared as \\Node1\HAPRO\Metadata, type the following:

robocopy \\Node1\HAPRO\Metadata H:\HAPRO\Metadata

2.     Run the HAPRO_SYNC command to sync the on-disk metadata into the on-system metadata on the new active node (Node 2):

HAPRO_SYNC -sync system
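The two failover steps above could be scripted. The following is a minimal Python sketch, assuming a hypothetical HAPRO_SYNC executable on the new active node's PATH; the share and drive paths are examples only:

```python
import shutil
import subprocess
from pathlib import Path


def fail_over_metadata(source_share: str, dest_dir: str, run_sync: bool = True) -> None:
    """Move HAPRO on-disk metadata to the new active node, then rebuild on-system metadata.

    source_share: the old active node's metadata share (for example \\\\Node1\\HAPRO\\Metadata).
    dest_dir: the configured metadata path on the new active node (for example H:\\HAPRO\\Metadata).
    HAPRO_SYNC is assumed to be on the PATH of the new active node.
    """
    src = Path(source_share)
    dst = Path(dest_dir)

    # Step 1: mirror the metadata directory (equivalent of the robocopy step).
    shutil.copytree(src, dst, dirs_exist_ok=True)

    # Step 2: sync the copied on-disk metadata into the on-system metadata.
    if run_sync:
        subprocess.run(["HAPRO_SYNC", "-sync", "system"], check=True)
```

Run this on the new active node, for example `fail_over_metadata(r"\\Node1\HAPRO\Metadata", r"H:\HAPRO\Metadata")`.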

Shared Non-Cluster Location

When using a non-clustered shared volume to store metadata, the shared volume may appear as a different drive on each system.

Note: Use different metadata paths when configuring Hitachi Application Protector on the various cluster nodes, as shown in Table 5.

Table 5. Valid Configuration Example

Shared Volume

Drive Letter

Metadata Path

Node 1

Node 2
Table 6 is an example of an invalid configuration.


Table 6. Invalid Configuration Example

Shared Volume

Drive Letter

Metadata Path

Node 1: invalid configuration, as it conflicts with Node 2 (same location).

Node 2: invalid configuration, as it conflicts with Node 1 (same location).
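The invalid pattern in Table 6, where two nodes use the same metadata location on the same shared volume, can be detected with a simple check. This is a hypothetical sketch, assuming each node's configuration is recorded as a (shared volume, metadata path) pair:

```python
def find_metadata_conflicts(configs):
    """Return pairs of nodes whose (shared volume, metadata path) collide.

    configs maps a node name to a (shared_volume, metadata_path) tuple. On a
    shared non-cluster volume, two nodes must not use the same metadata path
    on the same volume, or their on-disk metadata would overwrite each other.
    """
    seen = {}
    conflicts = []
    for node, (volume, path) in configs.items():
        # Windows volume names and paths are case-insensitive.
        key = (volume.lower(), path.lower())
        if key in seen:
            conflicts.append((seen[key], node))
        else:
            seen[key] = node
    return conflicts
```

For example, two nodes both configured with `S:\HAPRO\Metadata` on the same shared volume are reported as a conflict, while per-node subdirectories such as `S:\HAPRO\Metadata\Node1` and `S:\HAPRO\Metadata\Node2` pass the check.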

Fail Over

In case of a failover, perform the following step:

§  If Node 2 is the new active node, and it just took over from Node 1, then run the following command to sync on-system metadata using on-disk metadata:

HAPRO_SYNC -sync system

Cluster Resource Location

When using cluster resources, the disks are visible only to the active node. Switch the cluster resource to the new active node during failover.

The volume is visible under the same drive letter on all of the cluster nodes. In this case, configure the Hitachi Application Protector metadata path as in the case of Shared Non-Cluster Location; it can be the same path on all the nodes. Table 7 shows an example configuration.


Table 7. Cluster Resource Location Example

Shared Volume

Drive Letter

Metadata Path

Server 1

Server 2
Fail Over

In case of a fail over, perform the following step:

§  If Node 2 is the new active node, and it just took over from Node 1, then run the following command to sync on-system metadata using on-disk metadata:

HAPRO_SYNC -sync system

The following notes apply to all syncing of Hitachi Application Protector metadata:

§  The HAPRO_SYNC command syncs the on-system metadata using the on-disk metadata that was recently copied with robocopy. The snapshot metadata path need not be specified explicitly, as long as it conforms to the configured metadata path; during this operation, HAPRO_SYNC reads the on-disk metadata from the path configured as the snapshot metadata path.

§  If you also need to check the consistency of the on-system metadata, combine the {-check|-c} flag with the -sync flag and run the following:

HAPRO_SYNC -sync system -check

§  If you wish to perform only on-system metadata consistency checking, which is recommended after running HAPRO_SYNC -s on the system, run the following:

HAPRO_SYNC {-check|-c}

§  If you need to copy Hitachi Application Protector on-disk metadata from a source location to a destination location, use this command:

HAPRO_SYNC {-replicate|-r} {"source,destination"}
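The HAPRO_SYNC variants above can be wrapped in a small helper that assembles the command line before handing it to a process launcher. This is a hypothetical sketch; the flag spellings are taken from the examples above:

```python
def build_hapro_sync_cmd(sync=False, check=False, replicate=None):
    """Build a HAPRO_SYNC command line as an argv list for subprocess.run.

    sync: add "-sync system" to sync on-disk metadata into on-system metadata.
    check: add "-check" to verify on-system metadata consistency.
    replicate: a (source, destination) tuple for copying on-disk metadata.
    """
    cmd = ["HAPRO_SYNC"]
    if sync:
        cmd += ["-sync", "system"]
    if check:
        cmd.append("-check")
    if replicate is not None:
        source, destination = replicate
        cmd += ["-replicate", f"{source},{destination}"]
    return cmd
```

For example, `build_hapro_sync_cmd(sync=True, check=True)` produces the combined sync-and-check invocation shown above.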

General Statements

These general statements apply to using Hitachi Application Protector:

§  Although Hitachi Application Protector metadata can be hosted on a cluster resource, this may make manual metadata management a complex task.

§  Hitachi Data Systems recommends hosting metadata privately on a non-shared location, as Hitachi Application Protector is not yet fully cluster-aware.

§  As the command line interface can be used to copy the metadata, it should be possible to script the metadata movement in case of a failover. Hitachi Data Systems does not currently provide support for such scripts, but does support the correct functioning of the command line interface.