Items Tagged: Applications
BakBone Introduces NetVault: TrueCDP – Integrated, Continuous Data Protection
In this profile we review the challenge of fast and granular data recoverability, the role of CDP, and the advantage of BakBone’s NetVault: TrueCDP for fast and consistent file recovery in environments running the BakBone suite of data protection products.
Asempra BCS - Continuous Availability
The reality today is that too many data protection platforms targeted at the small and medium enterprise (SME) market fail to deliver basic capabilities with the ease of use, speed, and flexibility necessary to provide the holistic, effective data protection these businesses demand. What the SME requires is a new approach to data protection that is easy to use and ensures that both applications and data remain available to the business. We’ve identified this approach to data protection as Continuous Availability.
The four-part cloud tips series by Arun Taneja, founder and president of Taneja Group, concludes with an explanation of which applications are best suited for the cloud. When moving applications to the cloud, what are the basic ground rules for determining, from a performance standpoint, which applications won’t work and which ones might?
VI - Top Six Physical Layer Best Practices: Maintaining Fiber Optics for the High Speed Data Center
Whether it’s handling more data, accelerating mission-critical applications, or ultimately delivering superior customer satisfaction, businesses are requiring IT to go faster, farther, and at ever-larger scales. In response, vendors keep evolving newer generations of higher-performance technology. It’s an IT arms race full of uncertainty, but one thing is inevitable – the interconnections that tie it all together, the core data center networks, will be driven faster and faster.
Unfortunately, many data center owners are under the impression that their current “certified” fiber cabling plant is inherently future-proofed and will readily handle tomorrow’s networking speeds. This is especially true for the high-speed, critical SANs at the heart of the data center. For example, most of today’s fiber plants supporting protocols like 2Gb or 4Gb Fibre Channel (FC) simply do not meet the physical layer specifications required to support upgrades to 8Gb or 16Gb FC. And faster speeds like 20Gb FC are on the horizon.
It is not just the plant design that’s a looming problem. Fiber cabling has always deserved special handling, but it is often robust enough to withstand a certain amount of dirt and mistreatment at today’s speeds. While lack of good cable hygiene and maintenance can and does cause significant problems today, at higher networking speeds the tolerance for dust, bends, and other optical impairments is much smaller. Careless practices need to evolve to a whole new level of best practice now, or future network upgrades are doomed.
In this paper we’ll consider the tighter requirements of higher speed protocols and examine the critical reasons why standard fiber cabling designs may not be “up to speed”. We’ll introduce some redesign considerations and also look at how an improperly maintained plant can easily degrade or defeat higher-speed network protocols, drawing on real-world experiences from field experts in SAN troubleshooting at Virtual Instruments. Along the way we will recommend the top six physical layer best practices we see as necessary for designing and maintaining fiber to handle whatever comes roaring down the technology highway.
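To make the physical layer argument concrete, here is a minimal sketch of the kind of optical loss-budget arithmetic that sits behind these upgrade decisions. The attenuation figures and per-protocol budgets below are hypothetical placeholders (they are not taken from this paper or from the FC specifications); the point is only that the allowable insertion loss shrinks as link speed rises, so a cabling run that passes comfortably at 4Gb can fail at 8Gb or 16Gb.

# Illustrative optical loss-budget check (Python).
# All numbers are hypothetical placeholders for illustration only;
# consult the actual FC physical layer specifications for real budgets.

FIBER_ATTENUATION_DB_PER_KM = 3.0   # assumed multimode attenuation at 850 nm
CONNECTOR_LOSS_DB = 0.5             # assumed loss per mated connector pair

# Hypothetical channel insertion-loss budgets by protocol (dB).
LOSS_BUDGET_DB = {"4Gb FC": 2.6, "8Gb FC": 2.1, "16Gb FC": 1.9}

def link_loss_db(length_m: float, connectors: int) -> float:
    """Estimated total insertion loss for one fiber link."""
    return (FIBER_ATTENUATION_DB_PER_KM * length_m / 1000.0
            + connectors * CONNECTOR_LOSS_DB)

if __name__ == "__main__":
    # A typical structured-cabling run through two patch panels
    # (four mated connector pairs end to end).
    loss = link_loss_db(length_m=120, connectors=4)
    for protocol, budget in LOSS_BUDGET_DB.items():
        verdict = "OK" if loss <= budget else "OVER BUDGET"
        print(f"{protocol}: link loss {loss:.2f} dB vs budget {budget:.2f} dB -> {verdict}")

The same arithmetic is why adding even one extra patch-panel connection, or letting connector end faces get dirty, can quietly push a previously “certified” link over budget at the next speed bump.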
Cloud storage can be a beast to wrangle. Deciding which applications to move into the cloud, selecting and dealing with a cloud storage provider, choosing among cloud storage solutions – none of these is easy. Is it worth it?
Increasing Virtualization Velocity with NetApp OnCommand Balance
Why do so many virtualization implementations stall out when it comes to mission-critical applications? Why do so many important applications still run on dedicated hardware? In one word – performance. Virtualization technologies have proven incredibly powerful in helping IT deliver agile “idealized” services, and doing so by efficiently sharing expensive physical resources. But mission-critical applications bring above-average requirements for performance service quality that can greatly challenge virtualized hosting.
Maintaining good performance (as well as availability and other service qualities) requires solid systems management. Hypervisor management solutions like VMware’s vCenter Operations Management Suite provide a significant advantage to virtualization administrators by centralizing and simplifying many traditionally disparate management tasks, including fundamental performance monitoring for system health and component utilization. Yet when it comes to assuring performance for mission-critical applications like transactional databases and email – the kinds of apps that depend heavily on multiple IT domains of resources – straight hypervisor-centric solutions can fall short. Solving complex cross-domain performance issues like resource contention and virtual-physical competition, and assuring sufficient “good performance” headroom, can require both deeper and wider analysis capabilities.
In this paper we’ll first review a high-level management perspective of performance and capacity to explore what it takes to support mission-critical application performance service levels. We’ll examine the management strengths of the best-known hypervisor management solution – VMware’s vCenter Operations Suite – to understand the scope and limitations of its performance and capacity management capabilities. Next, we will look at how the uniquely cross-domain (storage and server, virtual and physical) model-based performance management capabilities of NetApp’s OnCommand Balance complement a solution like vCenter Operations. The resulting combination helps the virtualization admin and/or storage admin become more proactive and ultimately elevate performance management enough to reliably virtualize mission-critical applications.
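As a purely illustrative aside on what “cross-domain” analysis means in practice, here is a minimal sketch (it does not use any vCenter Operations or OnCommand Balance API; metric names and values are invented) that correlates per-VM response times against the utilization of a shared datastore to flag a likely contention point spanning the server and storage domains.

# Hypothetical cross-domain correlation sketch (Python).
# Not based on any vendor API; data is invented for illustration.

# Hourly samples: application response time (ms) for one VM, and the
# utilization (%) of the shared datastore that VM sits on.
vm_response_ms = [12, 14, 13, 25, 41, 38, 15, 13]
datastore_util_pct = [35, 40, 38, 72, 93, 90, 45, 37]

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(vm_response_ms, datastore_util_pct)
if r > 0.8:
    print(f"Correlation {r:.2f}: VM slowdowns track shared-storage load; "
          "investigate datastore contention, not just the hypervisor.")
else:
    print(f"Correlation {r:.2f}: look elsewhere (CPU ready time, memory, network).")

A hypervisor-only view would show the VM slowing down; joining it with the storage-side metric is what points at the shared resource behind the slowdown, which is the kind of cross-domain reasoning the paper describes.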