Free Reports

Market Landscape Abstract: Survey of VVol Implementation by Various Storage Vendors

VMware Virtual Volumes (VVols) is one of the most important technologies affecting how storage interacts with virtual machines. In April and May 2015, Taneja Group surveyed eleven storage vendors to understand how each was implementing VVols in its storage arrays. The survey consisted of 32 questions that explored which storage array features were exposed to vSphere 6 and how VMs were provisioned and managed. We were surprised at the degree of difference and the variety of methods used to enable VVols. It was also clear from the analysis that the underlying limitations of an array constrain what is achievable with VVols. It is important to remember, however, that many other aspects of a storage array matter; the VVol implementation is only one major factor. VVol implementation is also a work in progress, and this survey represents only the first pass.

We categorized these implementations in three levels: Type 1, 2 and 3, with Type 3 delivering the most sophisticated VVol benefits. The definitions of the three types are shown below, as is a summary of the findings.

Most storage array vendors participated in our survey, but a few chose not to, often because they already delivered the most important benefit that VVols provide: the ability to provision and manage storage at the VM level rather than at the LUN, volume, or mount point level. That list included hyperconverged players such as Nutanix and SimpliVity, as well as vendors like Tintri.
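
To make the provisioning contrast concrete, the short Python sketch below models the difference between LUN-level provisioning, where every VM on a LUN inherits the same array capabilities, and VVol-style per-VM provisioning, where each VM carries its own policy that the array must satisfy individually. It is purely a conceptual illustration; the class and field names are ours and do not correspond to the vSphere or VASA APIs.

```python
# Conceptual sketch only: LUN-level vs. per-VM (VVol-style) provisioning.
# Names are illustrative and do not map to vSphere/VASA APIs.

from dataclasses import dataclass, field


@dataclass
class Lun:
    """Pre-carved LUN: its capabilities apply to every VM placed on it."""
    capacity_gb: int
    capabilities: set
    vms: list = field(default_factory=list)


@dataclass
class VVolContainer:
    """VVol-style storage container: each VM carries its own policy."""
    capacity_gb: int
    advertised_capabilities: set
    vm_policies: dict = field(default_factory=dict)

    def provision_vm(self, vm_name: str, policy: set) -> bool:
        # The array accepts the VM only if it can honor the requested policy,
        # so compliance is tracked per VM rather than per LUN.
        if not policy.issubset(self.advertised_capabilities):
            return False
        self.vm_policies[vm_name] = policy
        return True


# LUN model: every VM on "lun0" gets identical array-side treatment.
lun0 = Lun(capacity_gb=2048, capabilities={"snapshot", "thin"})
lun0.vms += ["web01", "db01"]

# VVol model: each VM states its own requirements.
container = VVolContainer(capacity_gb=8192,
                          advertised_capabilities={"snapshot", "thin", "replication"})
container.provision_vm("web01", {"thin"})
container.provision_vm("db01", {"snapshot", "replication"})
```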

Publish date: 06/08/15

TVS: Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don’t come around very often. Over the last two decades we have seen very few: ZFS was introduced in 2004, Isilon’s OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason for this: creating a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years we have seen seismic changes in the data center and storage industry. Today’s data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today’s data center runs virtual machines, and a one-to-one correlation between a server and its storage is now the exception. Databases have grown to span the largest single disk drives. Huge amounts of data are ingested by big data and social media applications, and data must be retained to meet government and corporate policy requirements. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASICs in power and performance, and software development and delivery methodologies such as agile have become mainstream. In the past we were concerned with how to manage the underlying storage; now we are concerned with how to manage the huge amount of data we have stored.

What could be accomplished if a new file system were created from the ground up to take advantage of the latest advances in technology and, more importantly, were built by an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with its Qumulo Core data-aware scale-out NAS software, powered by its new file system, QSFS (Qumulo Scalable File System). Qumulo’s three co-founders, Peter Godman, Neal Fachan, and Aaron Passey, were primary inventors of OneFS, and they assembled some of the brightest minds in the storage industry to create a modern file system designed to support the requirements of today’s datacenter, not the datacenter of decades ago.

From day one, Qumulo embraced an agile software development and release model, which allows it to push out fully tested and certified releases every two weeks. New features and bug fixes can be seamlessly introduced to the system as soon as they are ready, rather than on an arbitrary 6-, 12-, or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed to work with HDDs that could produce roughly 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Now flash is prevalent in the industry, and commodity flash devices can produce up to 250,000 IOPS. Traditional file systems were optimized for slower HDDs, not to take advantage of the lower latency and higher performance of today’s solid state drives. Many traditional file systems and storage arrays have devised ways to “bolt on” SSDs to boost performance, but their architectures were built around the capabilities of yesterday’s HDDs, not those of today’s flash technology.

An explosion in large-capacity scale-out file systems has empowered enterprises to do very interesting things, but it has also created some very interesting problems. Even one of the most trivial questions, determining how much space the files on a file system are consuming, is complicated to answer on first-generation file systems. Other questions that are difficult to answer without being aware of the data on a file system include who is consuming the most space, and which clients, files, or applications are consuming the most bandwidth. Second-generation file systems need to be data-aware, not just storage-aware.
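
To illustrate what “data-aware” means in practice, here is a minimal Python sketch of our own (not Qumulo’s implementation): the first function answers a space question the first-generation way, by walking every file, while the small tree class keeps per-directory rollup totals updated at write time so the same question becomes a constant-time metadata read.

```python
# Minimal illustration (not Qumulo's implementation): answering
# "how much space is under this directory?" two ways.

import os


def walk_and_sum(path: str) -> int:
    """First-generation answer: O(number of files) every time it is asked."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total


class DirNode:
    """Directory node that maintains a running total of bytes beneath it."""

    def __init__(self, parent=None):
        self.parent = parent
        self.total_bytes = 0

    def add_file(self, size: int) -> None:
        # Update this directory and every ancestor at write time, so a later
        # space query is just a constant-time read of total_bytes.
        node = self
        while node is not None:
            node.total_bytes += size
            node = node.parent


root = DirNode()
projects = DirNode(parent=root)
projects.add_file(4_000_000)
projects.add_file(1_500_000)
print(projects.total_bytes, root.total_bytes)   # 5500000 5500000
```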

In order to reach performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. ASICs can speed up certain storage-related operations, but that benefit comes at a heavy price in both dollars and flexibility: it can take years and millions of dollars to embed new features in an ASIC. By using powerful and relatively inexpensive x86 processors, new features can be introduced quickly via software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel Xeon E5-2600 v3 family has up to 18 cores) and more advanced features.

When Qumulo approached us to take a look at the world’s first data-aware, scale-out, enterprise-grade storage system, we welcomed the opportunity. Qumulo’s new storage system is not based on an academic project or designed around an existing storage system; it was designed and built on entirely new code that the principals at Qumulo developed based on what they learned in interviews with more than 600 storage professionals. What they came up with after these conversations was a new data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would work in today’s data center.

Publish date: 03/17/15

Free Report: Galileo’s Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions, ranging from big platform bundles bought from legacy vendors, through general-purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested cost of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively, and many just gather dust. But, if only because of the ongoing cost of keeping management tools current, only the solutions that get used are worth having.

When it comes to picking which tool to use day to day, what matters is not the theory of what it could do, but the actual value it delivers for the busy admin trying to focus on the tasks at hand. And among the myriad things an admin is responsible for, assuring performance requires the most support from management solutions. Performance-related tasks include checking on the health of the resources the admin is responsible for, improving utilization, finding lurking or trending issues before they become disastrous problems, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others in IT, the broader business, and even external stakeholders.

Admins responsible for infrastructure face huge challenges with these tasks in large, heterogeneous, complex environments. Vendor-specific device and element managers drill into each piece of equipment, but they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across IT domains (e.g., servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.

There is room for a new style of systems management that is agile, insightful, and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer, with its cloud-hosted collection and analysis, helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, implements and scales easily, fosters communication, and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.
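
As a rough sketch of what cloud-hosted collection can look like, the following Python example shows a lightweight agent that samples host metrics on an interval and posts them to a hosted analysis endpoint. This is our own simplified illustration, not Galileo’s agent or API; the endpoint URL and payload fields are hypothetical.

```python
# Simplified sketch of an agent for cloud-hosted performance collection.
# Not Galileo's agent or API: the endpoint URL and payload shape are
# hypothetical, chosen only to show the pattern.

import time

import psutil        # third-party: pip install psutil
import requests      # third-party: pip install requests

COLLECT_URL = "https://collector.example.com/v1/metrics"   # hypothetical endpoint
INTERVAL_SECONDS = 60


def sample() -> dict:
    """Gather a small set of host-level performance metrics."""
    disk = psutil.disk_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": disk.read_bytes,
        "disk_write_bytes": disk.write_bytes,
    }


def run() -> None:
    while True:
        payload = sample()
        try:
            # Ship the sample off-host; analysis, trending, and cross-domain
            # correlation happen in the hosted service, not on the admin's box.
            requests.post(COLLECT_URL, json=payload, timeout=10)
        except requests.RequestException:
            pass    # keep collecting even if one upload fails
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    run()
```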

Publish date: 01/01/15

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years, integrated computing systems, which combine compute, networking, and storage, have burst onto the scene and been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for those particular workloads. Example workloads being integrated today include cloud, big data, virtualization, database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains, and integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor seems to be touting one integrated system or another focused on solving a particular workload problem. The promised business benefits delivered by these new systems fall into these key areas:

· Implementation efficiency that accelerates time to value from integrated systems
· Operational efficiency through optimized workload density and ideally right-sized infrastructure
· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
· Scale and agility efficiency unlocked through a repeatable building-block deployment approach
· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem, a family of systems that includes a specifically designed virtualization offering. ConvergedSystem was designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to herald an entirely new level of agility in the speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of the traditional order-build-deploy customer process, and we’ll also evaluate HP’s latest advancement of these types of systems.

Publish date: 10/16/14

Executive Summary: VCE and Nutanix in the Real World

Taneja Group prepared a Field Report for Nutanix on the real-world customer experience of seven Nutanix hyperconvergence customers and seven VCE convergence customers. We did not cherry-pick customers for dissatisfaction or delight; we were interested in typical customers’ honest reactions.

The same conclusions kept emerging: VCE users see convergence as a benefit over traditional do-it-yourself infrastructure, but an expensive one. Their concerns include high prices, infrastructure and management complexity, expensive support contracts, and worries over the long-term viability of the partnership between EMC, VMware, and Cisco. The Nutanix users, in contrast, cited valuable hyperconvergence benefits: simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion is that VCE convergence is an improvement over traditional architecture, but Nutanix hyperconvergence is an evolutionary improvement over VCE. 

Publish date: 09/29/14

HP StoreVirtual VSA and VMware Virtual SAN - A Closer Look

The age of the software-defined datacenter (SDDC) and converged infrastructure is upon us. The benefits of abstracting, pooling, and running compute, storage, and networking functions together on shared commodity hardware bring unprecedented agility and flexibility to the datacenter while driving actual costs down. The tectonic shift in the datacenter caused by software-defined storage and networking will prove to be as great as, and may prove greater than, the shift to virtualized servers during the last decade. While software-defined networking (SDN) is still in its infancy, software-defined storage (SDS) has been developing for quite some time.

LeftHand Networks (now HP StoreVirtual) released its first iSCSI VSA (virtual storage appliance) in 2007, bringing the advantages of software-based storage to small and midsize company environments. The LeftHand VSA was a virtual machine that hosted a software implementation of LeftHand’s well-regarded iSCSI hardware storage array. Since that time many other vendors have released VSAs, but none have captured the market share of HP’s StoreVirtual VSA. The release of VMware Virtual SAN (VSAN) in March of 2014 could change that: VSAN, with the backing of the virtualization giant, is poised to be a serious contender in the SDS marketplace. Taneja Group thought it would be interesting to take a closer look at how a mature, well-regarded, and widely deployed SDS product such as HP StoreVirtual VSA compares to the newest entry in the SDS market, VMware’s VSAN.

The observations we have made for both products are based on hands-on lab testing, but we do not consider this a Technology Validation exercise because we were not able to conduct an apples-to-apples comparison between the offerings, primarily due to the limited hardware compatibility list (HCL) for VMware VSAN. However, the hands-on testing we were able to conduct gave us a very good understanding of both products. Both products surprised us and, more often than not, did not disappoint. In an ideal world without budgetary constraints, both products may have a place in your datacenter, but they are not by any means interchangeable. We found that one of the products would be useful for a wide variety of datacenter storage needs, including some tier 1 use cases, while the other is better suited today to supporting the needs of some tier 2 and tier 3 applications.

Publish date: 08/21/14