
The Mission-Critical Standards of Data Availability

Overview

Mission-critical data matters to an organization because it underpins the long-term health of the business. In a typical organization, mission-critical data is the frequently used information that supports strategic business processes, including accounting, revenue generation, customer service, logistics, and regulatory compliance. Losing access to, or the ability to recover, mission-critical data in any of these areas can damage the organization's reputation, cost it sales, revenue, and customers, and expose it to regulatory or financial penalties. Purpose-built mission-critical technologies matter; there simply is no substitute.

The Mission-Critical Difference

What makes technology mission-critical? Dedicated mission-critical networks, and every component in them, are designed to withstand the harshness, inflexibility, and environmental extremes of daily use. Availability, redundancy, reliability, and security are the bywords of mission-critical design, and this is where the difference begins: mission-critical technologies are expected to work when they are needed most, in the most difficult of times, with no ifs or buts. Mission-critical networks deliver coverage tailored to the needs of the community, the organization's daily operations, and the unexpected. And although mission-critical solutions require substantial investment, the returns remain incomparable: the capacity to handle growing traffic during a disaster, backups for the backup systems, and coverage wherever it is required.

However, if businesses do not know what data they have or how to classify it, they tend to treat all of it as vitally important, just in case. When deciding how best to protect data and applications, the key to a positive result is identifying what is mission-critical: what the business cannot live without.

Comprehensive site protection with a multiple-site SAN configuration

The multiple-site SAN feature builds on the core functionality of Network RAID 10 and 10+1, which maintain two full copies of the data in every volume, with each copy guaranteed to be stored on a different node. One clear consequence of Network RAID 10's data layout is that every other node in the cluster can go offline at the same time: half of the HP StoreVirtual cluster can fail, and the associated volumes and applications remain online without any intervention from the storage administrator. This degree of availability is hard to find elsewhere in the storage industry, and although it is a powerful feature on its own, its value increases further when the HP StoreVirtual cluster is split across different locations. The cluster is still presented as a single pool of storage, connected by the same networking protocols as a single-site deployment. Within a single data center, for example, half of the cluster can be placed on one power circuit and the other half on another; the standard networking protocols operate the storage nodes as a single cluster exactly as they would if all the nodes shared one power circuit.

With volumes configured for Network RAID 10, if a power circuit fails, the half of the cluster on the other circuit remains online with a complete copy of the data in the volumes; the applications relying on that data keep their access and stay online. Taking it a step further with two data centers, if half of the cluster sits in each data center and an entire data center goes down for any reason, such as a cooling failure, a power outage, or a natural disaster, the other half of the cluster keeps running with a complete copy of the data, and no administrator intervention is needed for the HP StoreVirtual Storage cluster to continue serving it. This is called a multiple-site cluster configuration. With Network RAID 10+1, a multiple-site cluster can span three data centers, with a copy of each volume in every data center, providing an even higher level of site availability and redundancy.
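The placement rule behind this behavior can be sketched in a few lines. The code below is an illustrative model, not the actual LeftHand OS placement algorithm: each block gets two copies on two different nodes, and a volume stays online as long as every block has at least one copy on a surviving node. Node names and the alternating site layout are assumptions for the example.

```python
def place_block(block_id, nodes):
    """Return the two distinct nodes holding copies of this block (Network RAID 10 style)."""
    primary = block_id % len(nodes)
    mirror = (primary + 1) % len(nodes)  # mirror always lands on a different node
    return nodes[primary], nodes[mirror]

def volume_available(blocks, nodes, offline):
    """A volume stays online if every block has at least one copy on a node that is up."""
    for b in blocks:
        copies = place_block(b, nodes)
        if all(n in offline for n in copies):
            return False
    return True

# Two sites, with nodes alternating between them ("A" and "B" prefixes).
nodes = ["A1", "B1", "A2", "B2"]
blocks = range(100)

# Take the whole of site B offline: every block still has a copy on site A.
print(volume_available(blocks, nodes, offline={"B1", "B2"}))  # True
```

Because adjacent nodes belong to different sites and the mirror copy always lands on the next node, losing an entire site never removes both copies of any block, which is the property the multiple-site cluster relies on.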

Note that Network RAID 5 and 6 are not supported in a multiple-site cluster.

HP StoreVirtual Storage

The fast adoption of virtualization, together with the rapid growth of data, demands shared storage that is reliably available at all times. This dependency on shared storage has raised customer expectations for data availability. An uptime of 99.99% (about 52 minutes of downtime per year) used to be the accepted standard for most organizations. Today, customers cannot afford lost time anywhere in their environments, and 99.999% availability (known as "five nines", about 5 minutes of downtime per year) is the new criterion for data availability.
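The downtime figures quoted above follow directly from the availability percentage. A quick sketch of the arithmetic:

```python
# Annual downtime implied by an availability percentage:
# downtime = (1 - availability) * minutes_per_year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability):
    return (1 - availability) * MINUTES_PER_YEAR

print(round(annual_downtime_minutes(0.9999), 1))   # 52.6  ("four nines")
print(round(annual_downtime_minutes(0.99999), 1))  # 5.3   ("five nines")
```

Each extra "nine" cuts the permissible annual downtime by a factor of ten, which is why the jump from four to five nines is a significant engineering commitment.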

HP conducts standard quality reviews of every reported instance of data loss or data unavailability for every storage system covered by a support agreement; the review applies to all HP enterprise-class storage, networking products, and servers. In the review process, HP tabulates the hours of data unavailability reported by customers, and from this figure and the number of systems under support it approximates field availability. Field availability is defined as the ability of a host or server to access the data on the HP StoreVirtual Storage cluster: whenever a host cannot reach the data because of connectivity problems, or the data itself is suspect, the storage cluster is counted as unavailable. Over the past two years, HP has established that HP StoreVirtual Storage delivers five-nines (99.999%) availability or better in the field when configured according to best practices.

Most availability challenges can be avoided by following established HP StoreVirtual Storage best practices. The Best Practice Analyzer (BPA) built into the Centralized Management Console (CMC) provides guidance on compliance with these recommendations. The BPA compares the current configuration against the best practices and highlights issues such as incorrect (or missing) NIC teaming, data protection with Network RAID, proper load balancing across the nodes of the cluster, and much more.
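The kind of rule checking a best-practice analyzer performs can be sketched as follows. The rule names and thresholds here are illustrative assumptions, not the actual BPA checks:

```python
def check_config(config):
    """Compare a node/cluster configuration against recommended practices
    and return a list of human-readable issues (empty list = compliant)."""
    issues = []
    # Redundant network paths: NICs should be teamed.
    if not config.get("nic_teaming", False):
        issues.append("NIC teaming is not configured")
    # Data protection: volumes should use Network RAID 10 or higher.
    if config.get("network_raid", 0) < 10:
        issues.append("volume is not protected by Network RAID 10 or higher")
    return issues

print(check_config({"nic_teaming": False, "network_raid": 0}))
print(check_config({"nic_teaming": True, "network_raid": 10}))  # []
```

The value of such a tool is that it turns tribal knowledge into an automated, repeatable audit that can be run after every configuration change.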

It is strongly recommended that the BPA be consulted on a regular schedule, and especially immediately after any change to the storage cluster, such as adding or removing nodes, creating new volumes, or modifying the networking configuration.

Migration of HP StoreVirtual volumes

Peer Motion on HP StoreVirtual Storage lets a storage administrator move an HP StoreVirtual volume from one cluster to another, online, without reconfiguring the applications or hosts. This is done simply by editing the volume properties, selecting the Advanced tab, and choosing a new cluster from the cluster drop-down box. The blocks that make up the volume begin migrating from the original cluster to the new one, and while the migration is in progress the LeftHand OS automatically proxies requests for each block to the cluster that currently holds it. When the migration completes, the host's iSCSI sessions automatically reconnect to the new cluster, provided the new cluster's IP address was added to the host's iSCSI configuration. A typical use of Peer Motion is a volume holding data for an application whose performance needs are growing. If the volume started out on an SAS MDL-based cluster, the storage administrator can use Peer Motion to move it to an SAS-based cluster. Likewise, if the volume is already on an SAS cluster, the administrator can either add nodes to the cluster to give the volume more performance, or move the volume to an even faster tier, such as an SSD-based cluster.
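The key idea, redirecting each request to whichever cluster currently owns a block while the migration runs, can be modeled in a few lines. This is a conceptual sketch; the class and method names are invented for illustration and are not the LeftHand OS API:

```python
class Cluster:
    """Stand-in for a storage cluster; reads are tagged with the cluster name."""
    def __init__(self, name):
        self.name = name

    def read(self, block):
        return f"{self.name}:{block}"

class MigratingVolume:
    """Model of a volume whose blocks are moving from source to destination."""
    def __init__(self, blocks, source, destination):
        self.data = {b: source for b in blocks}  # block -> cluster that owns it
        self.source, self.destination = source, destination

    def migrate_next(self):
        """Move one not-yet-migrated block to the destination cluster."""
        for b, cluster in self.data.items():
            if cluster is self.source:
                self.data[b] = self.destination
                return b
        return None  # migration complete

    def read(self, block):
        """Proxy the read to whichever cluster currently holds the block."""
        return self.data[block].read(block)

src, dst = Cluster("SAS-MDL"), Cluster("SSD")
vol = MigratingVolume(range(3), src, dst)
print(vol.read(0))   # SAS-MDL:0  (not yet migrated)
vol.migrate_next()
print(vol.read(0))   # SSD:0      (served from the new cluster)
print(vol.read(1))   # SAS-MDL:1  (still on the old cluster)
```

Because the proxying is per block, the host sees a single continuously available volume throughout: reads never fail, they just land on different hardware as the migration progresses.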

To take this a little further: because storage within the HP StoreVirtual cluster is virtualized, rules that tie data to specific physical hardware no longer apply. Virtualization allows volumes to be moved dynamically between different physical clusters, and it makes a cluster swap possible: existing storage nodes can be removed from a cluster and replaced with new ones, online, with no loss of data or availability. During this process, data from the outgoing nodes is moved onto the new nodes, and all I/O is directed to the correct node throughout. Upgrading to faster, larger, or newer storage nodes therefore requires no downtime at all, which gives a clear, well-defined path for future expansion and growth. For example, a customer may start with a cluster of 8-drive systems; as applications and workload are added and the nodes approach their performance or capacity limits, the data can be migrated to nodes with 12 or more drives to boost capacity and performance without taking any application offline.

Ultimately, the combination of the volume of data generated by practically every organization and the shifts being driven by the Internet of Things (IoT) will move data availability standards from the "nice to have" to the "mission-critical" category. For any industry that lacks these standards, or is slow to develop them, the clock is ticking toward the point of no return.

Upgrading online with the Upgrade Advisor

Online upgrades give storage administrators the ability to apply the latest firmware and software to their systems without taking the storage cluster down for maintenance. Some upgrades arrive as small individual pieces of software, or grouped into what is known as a patch set; other enhancements to HP StoreVirtual Storage nodes are only available through a firmware upgrade or an upgrade of the LeftHand OS (referred to as SAN/iQ in previous versions), and some only through major version upgrades, such as SAN/iQ 9.0 to SAN/iQ 9.5. Whatever the kind of upgrade, the Upgrade Advisor in the CMC automatically checks the software levels currently installed on the nodes of the storage cluster against the releases published by HP, and alerts the administrator when an upgrade is available. These software upgrades are intended to improve the availability of HP StoreVirtual systems. The Upgrade Advisor can also apply the upgrades, and it provides a checklist of dependencies that may need to be addressed elsewhere in the environment, such as verifying that the HP StoreVirtual Device Specific Modules (DSMs) for Multipath I/O are compatible with the upgraded storage nodes. It is therefore strongly recommended that any upgrades identified by the Upgrade Advisor be reviewed and applied when possible.
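The core check the advisor performs, comparing installed versions against the latest published release and flagging the nodes that are behind, can be sketched as follows. Version strings and node names are made up for illustration:

```python
def parse(version):
    """Turn a dotted version string like '9.5' into a comparable tuple (9, 5)."""
    return tuple(int(part) for part in version.split("."))

def pending_upgrades(installed, latest):
    """Return the nodes whose installed version is older than `latest`."""
    return [node for node, ver in installed.items() if parse(ver) < parse(latest)]

installed = {"node-1": "9.0", "node-2": "9.5", "node-3": "9.0"}
print(pending_upgrades(installed, "9.5"))  # ['node-1', 'node-3']
```

Parsing into tuples matters: comparing the raw strings would rank "9.10" below "9.5", while tuple comparison orders versions numerically, component by component.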

Zero downtime for maintenance

Unlike other storage systems, HP StoreVirtual requires zero downtime to alter the properties of a volume, for both the host accessing the volume and the volume itself, giving storage administrators the flexibility they need to fine-tune as requirements change. Any volume protected by a Network RAID level higher than 0 can withstand any single storage node going offline, whether for maintenance or due to an unforeseen failure. As a result, maintenance can be performed at any time in a live environment without bringing down applications or hosts: simply select a storage node and carry out the maintenance work. Even if a node becomes unavailable because a software installation requires a power-off, or because of a maintenance reboot, volumes protected with Network RAID higher than 0 remain available. In this scenario a storage maintenance window is no longer a requirement; by carrying out maintenance on one node at a time, Network RAID allows HP StoreVirtual Storage to simply keep serving the data.

Accordingly, if volumes must remain online while one or more nodes in an HP StoreVirtual cluster or management group need maintenance, every volume has to be protected by a Network RAID level higher than 0. Network RAID 10 is the recommended level, since it offers an excellent combination of availability and performance. After ensuring that all volumes are protected by Network RAID, simply proceed with the maintenance, one node at a time. When the maintenance on a node finishes, the node comes back online and its data is resynchronized; once the resynchronization is complete, the next node in the cluster can go through the same maintenance work.
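The rolling procedure described above, one node offline at a time with a resync before moving on, can be sketched as a simple loop. The function and event names are illustrative, not a real automation API:

```python
def rolling_maintenance(nodes, service):
    """Service each node in turn: take it offline, run the maintenance work,
    then wait for its data to resynchronize before touching the next node."""
    log = []
    for node in nodes:
        log.append(f"offline:{node}")
        service(node)                  # firmware install, reboot, etc.
        log.append(f"resync:{node}")   # node returns; Network RAID resyncs its data
        # Only after the resync completes does the next node go offline,
        # so at most one copy of any block is ever unavailable.
    return log

history = rolling_maintenance(["node-1", "node-2"], service=lambda n: None)
print(history)  # ['offline:node-1', 'resync:node-1', 'offline:node-2', 'resync:node-2']
```

The ordering is the whole point: because Network RAID keeps two copies of every block on different nodes, serializing the outages guarantees that a full copy of the data is online at every moment of the procedure.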

Virtualized storage allows administrators to alter practically every attribute of an HP StoreVirtual volume, including its Network RAID level, its size, and whether it is thinly provisioned. Because the underlying physical storage is virtualized, there are no hard rules about where the data must live or how the volumes must be configured. Many of these changes are purely accounting operations, adjustments to how many blocks are allocated to a particular volume, and they complete promptly with no impact on system performance.

