Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Recently Added

Report

The Importance of Being Flexible: Adaptive Data Backup and Recovery

The pace of IT development is fast and relentless. What was bleeding edge just a few short years ago becomes old hat, replaced by the next big thing, and generally no one can predict in detail what that next advancement will be. Since no one wants to replace everything in their data center to accommodate the newest technology, maintaining flexibility is extremely important. Remaining flexible means that you can continue to utilize the capital investment you’ve already made while rolling in the next wave of new products and services with minimal cost and disruption.

Most companies don’t have the luxury of starting from scratch every few years and building out a new IT infrastructure from the ground up. Instead, they have to work with what they’ve got. That means dealing with an existing array of operating systems, hypervisors, servers and storage. It means adding in cloud connections and services to existing architectures, systems and processes. It means moving forward into an ever more virtualized world while still supporting the status quo.

Publish date: 07/31/15
Report

Redefining the Economics of Enterprise Storage (2015 Update)

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptionally high price, putting it out of reach for many use cases and customers. Most recently, Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that continues to leverage proven Dell Compellent technology, now on Intel-based hardware, in an all-new form factor. The new SC4020 is also the densest Compellent product ever: an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units of space. While the Intel-powered SC4020 offers more modest scalability than current Compellent products, it marks a radical shift in the pricing of Dell’s enterprise technology and aims to open up Dell Compellent storage to an entire market of smaller customers, as well as to large-customer use cases where enterprise storage was previously too expensive.

Publish date: 06/30/15
Report

The Promise of VM-Centric Storage and VVols: Tintri VMstore Delivers the Future Promise Now

The din surrounding VMware vSphere Virtual Volumes (VVols) is deafening. It began in 2011, when VMware announced the VVols concept to an enthusiastic storage industry, and culminated with its introduction as part of the vSphere 6 release in April 2015. Viewed simply, VVols is an API that lets supporting storage arrays provision and manage storage at the granularity of an individual VM, rather than at the level of LUNs, volumes, or mount points as they do today. Without question, VVols is an incredibly powerful concept that will fundamentally change the interaction between storage and VMs in a way not seen since server virtualization itself first came to market. No surprise, then, that virtually every storage vendor is feverishly building in VVols support and competing on the superiority of its implementation.
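
To make the shift concrete, here is a minimal, purely illustrative Python sketch of the difference between LUN-granular and VM-granular storage management. The class and field names are hypothetical stand-ins, not part of the vSphere, VASA, or any array vendor's API.

```python
# Hypothetical illustration only -- these types are NOT from the vSphere/VASA APIs.
# The point is the unit of management: one shared LUN versus per-VM virtual volumes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LUN:
    """Traditional model: many VMs share one LUN, so array data services
    (snapshots, replication, QoS) apply to the whole LUN at once."""
    name: str
    size_gb: int
    vms: List[str] = field(default_factory=list)  # every VM here shares one policy

@dataclass
class VirtualVolume:
    """VVols-style model: each VM object (config, data disk, swap) becomes its
    own volume on the array, so data services can be applied per VM."""
    vm_name: str
    role: str             # e.g. "config", "data", "swap"
    size_gb: int
    snapshot_policy: str  # set for this VM alone, not for a shared LUN
    replicated: bool

# Traditional: snapshotting "lun01" captures every VM on it, needed or not.
lun01 = LUN("lun01", size_gb=2048, vms=["web01", "db01", "build01"])

# VM-granular: db01's volumes carry their own policies, independent of web01.
db01_volumes = [
    VirtualVolume("db01", "config", 1,   snapshot_policy="hourly", replicated=True),
    VirtualVolume("db01", "data",   500, snapshot_policy="hourly", replicated=True),
]
```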

Yet one storage player, Tintri, has been delivering products with VM-centric features for four years without the benefit of VVols. How can this be so, and what does it mean for Tintri now that VVols are here? To do justice to these questions, we will briefly look at what VVols are and how they work, then dive into how Tintri has delivered the benefits of VVols for several years. We will also look at what a Tintri buyer gets today and how Tintri plans to integrate VVols. Read on…

Publish date: 06/26/15
Report

Journey Towards Software Defined Data Center (SDDC)

While IT has always had to respond to increasing business demands, competitive requirements are now forcing it to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these opposing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability, and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps toward the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already broadly deployed and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, thanks to the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, because a business cannot exist without its data; there can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment with no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old environment into the new. And finally, the cost of moving into the virtualized storage world is a major, if not the primary, consideration: despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, a move to virtualized storage is hard to justify. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage (SDS).

In this Technology Brief we first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15