Items Tagged: Virtual Infrastructure
When it comes time to automate IT processes, we recommend taking stock of your existing trusted vendor relationships as well as your most glaring pain points: do you struggle to keep operating systems and servers patched to the right levels to meet compliance?
Last time, I discussed the growing need for end-to-end Virtual Infrastructure Performance Management (VIPM) solutions to help datacenter managers virtualize more performance-hungry and business-critical applications with confidence. While no vendor (in my view) has yet solved the VIPM challenge…
Storage for the Integrated Virtual Infrastructure: HP P4000 SAN Solutions
This paper examines the features and capabilities of the HP P4000 product family, and shows how these SAN solutions help businesses to overcome the storage-related growing pains they typically encounter as they deploy and scale out a virtual infrastructure.
Doubling VM Density with HP 3PAR Storage
This paper examines HP 3PAR Utility Storage and describes how the solution overcomes typical virtual infrastructure storage issues, enabling customers to increase VM density by at least two-fold as a result.
Scale Computing HC3: Ending complexity with a hyper-converged, virtual infrastructure
Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware, and enabling oversubscription of physical systems by many virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in what seems like decades, IT has also taken a serious leap forward in management, as this same virtual infrastructure wraps the virtualized workload with better capabilities than ever before – increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.
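To make the "packing more systems in" idea concrete, the consolidation math behind oversubscription can be sketched in a few lines. This is our own illustration, not part of any report discussed here, and all of the numbers below are hypothetical.

```python
# Illustrative sketch of virtualization consolidation math.
# All figures (VM counts, host counts, vCPU/core counts) are hypothetical.

def consolidation_ratio(vms: int, hosts: int) -> float:
    """Average number of VMs running per physical host."""
    return vms / hosts

def cpu_oversubscription(vcpus_assigned: int, physical_cores: int) -> float:
    """Ratio of virtual CPUs handed out to physical cores actually present.
    Values above 1.0 mean the hardware is oversubscribed."""
    return vcpus_assigned / physical_cores

# e.g. 120 VMs on 8 hosts, with 480 vCPUs assigned across 128 physical cores:
print(consolidation_ratio(120, 8))       # 15.0 VMs per host
print(cpu_oversubscription(480, 128))    # 3.75:1 oversubscription
```

Oversubscription ratios like the 3.75:1 above are only sustainable because most workloads are idle most of the time, which is exactly what makes dense consolidation pay off.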
But not all is love and roses with the virtual infrastructure. On the strength of these serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, that added capability has come at the cost of considerable complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. It can be complex indeed.
For certain, much of this complexity exists between the individual physical infrastructures that IT must touch, and the simultaneous duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.
With the challenges surrounding increasing virtual complexity driving their vision of a better way to do IT, Scale Computing, long a provider of scale-out storage for the SMB, recently introduced a new line of technology – a product labeled HC3, or Hyper Convergence 3. HC3 integrates scale-out storage and scale-out virtualized compute within a single building-block architecture that couples all of the elements of a virtual data center together inside one system. The promised result is a system that is simple to use, and does away with the management and complexity overhead associated with virtualization in the data center. By virtualizing and intermingling all compute and storage inside a system already designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex clusters, provision and manage storage, and perform a bevy of day-to-day administrative tasks. Provisioning additional resources – any resource – becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.
While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service – our hands-on lab service – to the task of evaluating whether Scale Computing’s HC3 could deliver on these promises in the real world. For this task, we put several HC3 clusters through the paces to see how they deployed, how they held up under use, and what specialized features they delivered that might go beyond the features found in traditional integrations of separate compute and storage systems.
Protocol Choices for Storage in the Virtual Infrastructure
Over the past few years, server virtualization has rapidly emerged as the de facto standard for today’s data center. But the path has not been an easy one, as server virtualization has brought with it near upheaval in traditional infrastructure integrations.
From network utilization to data backup, almost no domain of the infrastructure has been untouched, but by far, some of the deepest challenges have revolved around storage. It may well be that no single infrastructure layer has ever imposed as great a challenge to any single IT initiative as storage has cast before virtualization.
After experiencing wide-reaching initial rewards, IT managers have aggressively expanded their virtualization initiatives, and in turn the virtual infrastructure has grown faster than any other infrastructure technology ever before deployed. But with rapid growth, demands against storage rapidly exceed the level any business could have anticipated, requiring performance, capacity, control, adaptability, and resiliency like never before. In an effort to address these new demands, it quickly becomes obvious that storage cannot be delivered in the same old way. For organizations facing scale-driven, virtualization storage challenges, it quickly becomes clear that storage must be delivered in a more utility-like fashion than ever before.
What do we mean by utility-like? Storage must be highly efficient, more easily presented, scaled and managed, and more consistently delivered with acceptable performance and reliability than ever before.
In the face of these challenges, storage has advanced by leaps and bounds, but differences still remain between products and vendors. This is not a matter of performance, or even purely of interoperability, but rather one of suitability over time in the face of a growing and constantly changing virtual infrastructure – changes that don’t solely revolve around the number and types of workloads, but also include a constantly evolving virtualization layer. A choice is still routinely made today – typically at the time of storage system acquisition – between iSCSI, Fibre Channel (FC), and NFS. While the customer often sees this as a choice between block and file, there are substantial differences between these block and file architectures, and even between iSCSI and FC, that will define the process of presenting and using storage, and determine the customer’s efficiency and scale as they move forward with virtualization. Even minor differences will have long-ranging effects and ultimately determine whether an infrastructure can ever be operated with utility-like efficiency.
Recently, in this Technology in Depth report, Taneja Group set out to evaluate these protocol choices and determine what fits the requirements of the virtual infrastructure. We built our criteria with the expectation that storage was about much more than just performance or interoperability, or up-front ease of use – factors that are too often bandied about by vendors who conduct their own assessments while using their own alternative offerings as proxies for the competition. Instead, we defined a set of criteria that we believe are determinative in how customer infrastructure can deliver, adapt, and last over the long term.
We summarize these characteristics as four key criteria. They are:
• Efficiency – in capacity and performance
• Presentation and Consumption
• Storage Control and Visibility
• Scalable and Autonomic Adaptation
These are not inconsequential criteria, as a key challenge before the business is effectively realizing the intended virtualization gains as the infrastructure scales. Moreover, our evaluation is not a matter of performance or interoperability – the protocols themselves have comparable marks here. Rather, our assessment is a broader consideration of storage architecture suitability over time in the face of a growing and constantly changing virtual infrastructure. As we’ll discuss, mismatched storage can create a number of inefficiencies that defeat virtualization gains and create significant problems for the virtual infrastructure at scale, and these criteria highlight the alignment of storage protocol choices with the intended goals of virtualization.
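One way to picture how criteria like these feed a comparative assessment is a simple weighted tally. The sketch below is our own illustration, not Taneja Group’s actual methodology; the criterion names, weights, and scores are all hypothetical.

```python
# Minimal sketch of a weighted multi-criteria tally.
# Criterion names, weights, and scores are hypothetical illustrations only.

CRITERIA_WEIGHTS = {
    "efficiency": 0.30,                    # capacity and performance
    "presentation_and_consumption": 0.25,
    "control_and_visibility": 0.25,
    "scalable_adaptation": 0.20,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Hypothetical scores for one protocol option:
fc_scores = {
    "efficiency": 8,
    "presentation_and_consumption": 7,
    "control_and_visibility": 9,
    "scalable_adaptation": 8,
}
print(round(weighted_score(fc_scores), 2))   # 8.0
```

The value of such a tally is less the final number than the forced discipline of scoring each option against every criterion, rather than only the ones a vendor prefers to talk about.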
What did we find? Block storage solutions carry significant advantages today. Key capabilities such as VMware API integrations, and approaches to scaling, performance, and resiliency, make a difference. While NAS/NFS may hold advantages in initial deployment, architectural and scalability characteristics suggest this is a near-term advantage that does not hold up in the long run. Meanwhile, among block-based solutions, we see the differences today surfacing mostly at scale. At mid-sized scale, iSCSI may have a serious cost advantage, while “converged” form factors may let the mid-sized business or enterprise scale with ease far into the future. But for businesses facing serious IO pressure, or looking to build an infrastructure for long-term use that can serve an unexpected multitude of needs, FC storage systems deliver utility-like storage with a level of resiliency that likely won’t be matched without the FC SAN.
Dell AppAssure: Data Protection made simple
Over the past few years, backup has become a busy market. For the first time in many years, a new wave of energy hit this market as small innovators sprang forth to tackle pressing challenges around virtual server backup. The market has taken off because of a unique set of challenges and simultaneous opportunities within the virtual infrastructure: with large amounts of highly similar data, interesting APIs for automation, and a tightly limited pool of IO and processing resources, the data behind the virtual server can be captured and protected in entirely new ways. As innovators in turn attacked these opportunities, backup has been fundamentally changed. In many cases, backup has been put in the hands of the virtual infrastructure administrator, made lighter weight and vastly more accessible, and has become a powerful tool for data protection and data management.
In reality, the innovations with virtual backup have leveraged the unifying layer of virtualization to tackle several key backup challenges. These challenges have been long-standing in the practice of data protection, and include ever-tightening backup windows, ever more demanding recovery point objectives (RPO or the amount of tolerable data loss when recovering), short recovery time objectives (RTO or how long it takes to complete a recovery), recovery reliability, and complexity. Specialized data protection for the virtual infrastructure has made enormous progress in tackling these challenges, and simplifying the practice of data protection to boot.
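The RPO and RTO terms defined above can be made concrete with a little arithmetic. The following sketch is our own illustration (with hypothetical schedules and targets, not figures from any product discussed here): it checks a backup schedule against both objectives, using the rule of thumb that worst-case data loss equals the interval between backups.

```python
# Illustrative RPO/RTO check. Schedules and targets below are hypothetical.
from datetime import timedelta

def worst_case_rpo(backup_interval: timedelta) -> timedelta:
    """Worst-case data loss: a failure just before the next backup loses
    everything written since the last one, i.e. one full interval."""
    return backup_interval

def meets_objectives(backup_interval: timedelta, restore_time: timedelta,
                     rpo_target: timedelta, rto_target: timedelta) -> bool:
    """True only if both the data-loss and recovery-time targets are met."""
    return (worst_case_rpo(backup_interval) <= rpo_target
            and restore_time <= rto_target)

# Nightly backups with a 4-hour restore, against a 1-hour RPO / 2-hour RTO:
print(meets_objectives(timedelta(hours=24), timedelta(hours=4),
                       timedelta(hours=1), timedelta(hours=2)))    # False
# Snapshots every 15 minutes with a 30-minute restore, same targets:
print(meets_objectives(timedelta(minutes=15), timedelta(minutes=30),
                       timedelta(hours=1), timedelta(hours=2)))    # True
```

The gap between those two results is precisely why nightly tape-era schedules fail modern objectives, and why the frequent, lightweight capture that virtual-infrastructure backup enables matters so much.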
But we’ve often wondered what it would take to bring the innovation from virtual infrastructure protection to a full-fledged backup product that could tackle both physical and virtual systems. At the recent request of Dell, Taneja Group Labs had the opportunity to look at just such a product. That product is AppAssure – a set of technologies that seems destined to be the future architectural anchor for the many data protection technologies in Dell’s rapidly growing product portfolio. We jumped at the chance to run AppAssure through the paces in a hands-on exercise, as we wanted to see whether AppAssure had an architecture poised to change how datacenter-wide protection is typically done, perhaps by making it more agile and accessible.
Are Your Critical Business Services Protected?
In many ways, Information Technology (IT) has become the centerpiece of business operations across the globe. This dynamic is both an opportunity and a threat to IT organizations. On one hand, IT has a very important seat at the table as businesses decide where to invest or deploy new offerings and services. On the other hand, IT organizations now become responsible for ensuring that these business services, and the data that drives them, are always available.
To ensure availability, IT must have a comprehensive business continuity plan in place, especially for critical operations that the business requires. However, business critical services are no longer just a matter of managing a single application or workload running on a solitary server. Instead, business critical services are often sets of interwoven components made up of multiple physical and virtual servers that depend upon one another. Seldom does a business critical application stand alone, or act with complete independence from other systems in the data center.
This complexity introduces challenges and compromises that the business is ill-prepared to understand or recognize. Often, when it comes to business continuity, issues are not recognized until it is too late. Many systems had a more manageable approach to continuity in the physical world. Now, with the agility that virtualization introduces, viewing, controlling, and protecting the complete business service, especially when that service is made up of multiple physical and virtual components, becomes a larger challenge. Considering the intersection of the business-critical applications that run on physical and virtual infrastructure, IT needs a better capability for viewing and protecting the entire service being delivered to the business.
In this solution brief, we’ll look at what a Business Service is composed of, and the challenges and options for business continuity across disparate physical and virtual infrastructure.