

Free Reports


HP StoreVirtual VSA and VMware Virtual SAN - A Closer Look

The age of the software-defined datacenter (SDDC) and converged infrastructure is upon us. The benefits of abstracting, pooling, and running compute, storage and networking functions together on shared commodity hardware bring unprecedented agility and flexibility to the datacenter while driving costs down. The tectonic shift in the datacenter caused by software-defined storage and networking will prove to be as great as, and may prove greater than, the shift to virtualized servers during the last decade. While software-defined networking (SDN) is still in its infancy, software-defined storage (SDS) has been developing for quite some time.

LeftHand Networks (now HP StoreVirtual) released its first iSCSI VSA (virtual storage appliance) in 2007, bringing the advantages of software-based storage to small and midsize company environments. LeftHand Networks’ VSA was a virtual machine that hosted a software implementation of LeftHand’s well-regarded iSCSI hardware storage array. Since that time many other vendors have released VSAs, but none have captured the market share of HP’s StoreVirtual VSA. The release of VMware Virtual SAN (VSAN) in March 2014 could change that: with the backing of the virtualization giant, VSAN is poised to be a serious contender in the SDS marketplace. Taneja Group thought it would be interesting to take a closer look at how a mature, well-regarded and widely deployed SDS product such as HP StoreVirtual VSA compares to the newest entry in the SDS market, VMware’s VSAN.
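Because a VSA of this kind presents itself to hosts as an ordinary iSCSI target, attaching one from a Linux initiator follows the standard open-iscsi workflow. The sketch below simply wraps the iscsiadm command-line tool; the portal address and the sample IQN in the comment are hypothetical, and the script assumes open-iscsi is installed and run with sufficient privileges:

```python
import subprocess

# Hypothetical portal address of a VSA's iSCSI interface.
PORTAL = "192.168.10.50:3260"

def discover_targets(portal: str) -> list[str]:
    """Ask the portal for its target IQNs (sendtargets discovery)."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each output line looks like:
    # "192.168.10.50:3260,1 iqn.2003-10.com.example:target0"
    return [line.split()[-1] for line in out.splitlines() if line.strip()]

def login(portal: str, iqn: str) -> None:
    """Log the initiator into one target; its LUNs then appear as block devices."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    for iqn in discover_targets(PORTAL):
        print("logging into", iqn)
        login(PORTAL, iqn)
```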

The observations we have made about both products are based on hands-on lab testing, but we do not consider this a Technology Validation exercise because we were not able to conduct an apples-to-apples comparison between the offerings, primarily due to the limited hardware compatibility list (HCL) for VMware VSAN. However, the hands-on testing we were able to conduct gave us a very good understanding of both products. Both products surprised us and, more often than not, did not disappoint. In an ideal world without budgetary constraints, both products may have a place in your datacenter, but they are by no means interchangeable. We found that one of the products would be more useful for a variety of datacenter storage needs, including some tier 1 use cases, while the other is better suited today to supporting the needs of some tier 2 and tier 3 applications.

Publish date: 08/21/14

Redefining the Economics of Enterprise Storage

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But enterprise storage has always come at an exceptional price, putting it out of reach for many use cases and customers.

Most recently Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that continues to leverage proven Dell Compellent technology, now using Intel technology in an all-new form factor. The SC4020 is also the densest Compellent product ever: an all-in-one storage array that fits 24 drive bays and dual controllers into only 2 rack units of space. While the Intel-powered SC4020 has more modest scalability than current Compellent products, it marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open up Dell Compellent storage technology to an entire market of smaller customers, as well as to large-customer use cases where enterprise storage was previously too expensive.
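The density claim is easy to quantify from the figures above. A back-of-the-envelope sketch; the 42U full-rack extrapolation is our own illustrative assumption and ignores space needed for switching and power:

```python
# Back-of-the-envelope density math for the SC4020 form factor.
DRIVE_BAYS = 24      # drive bays in one SC4020 enclosure (from the report)
RACK_UNITS = 2       # enclosure height, dual controllers included
FULL_RACK_U = 42     # a standard full-height rack (our assumption)

density = DRIVE_BAYS / RACK_UNITS                          # 12 drives per U
rack_capacity = (FULL_RACK_U // RACK_UNITS) * DRIVE_BAYS   # 504 bays per rack

print(f"{density:.0f} drive bays per rack unit")
print(f"up to {rack_capacity} drive bays in a {FULL_RACK_U}U rack")
```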

Publish date: 05/05/14

Data Defined Storage: Building on the Benefits of Software Defined Storage

At its core, Software Defined Storage decouples storage management from the physical storage system. In practice, Software Defined Storage vendors implement the solution using a variety of technologies: orchestration layers, virtual appliances, and server-side products are all on the market now. They are valuable for storage administrators who struggle to manage multiple storage systems in the datacenter as well as remote data repositories.
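That decoupling can be pictured as a thin, uniform management interface sitting in front of whatever physically holds the data. A minimal sketch of the idea follows; the backend names and URI scheme are hypothetical illustrations, not any vendor's API:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Uniform management interface, independent of the physical system."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Provision a volume and return an opaque volume ID."""

class ArrayBackend(StorageBackend):
    """Stands in for a traditional hardware array (hypothetical)."""
    def create_volume(self, name, size_gb):
        return f"array://{name}/{size_gb}"

class VsaBackend(StorageBackend):
    """Stands in for a virtual storage appliance (hypothetical)."""
    def create_volume(self, name, size_gb):
        return f"vsa://{name}/{size_gb}"

def provision(backend: StorageBackend, name: str, size_gb: int) -> str:
    # Callers manage storage through the interface, never the device itself.
    return backend.create_volume(name, size_gb)

print(provision(VsaBackend(), "tier2-data", 500))
```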

What Software Defined Storage does not do is yield more value for the data under its control, or address global information governance requirements. To that end, Data Defined Storage yields the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we will explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits and also drives reduced risk and added business value for distributed unstructured data with Data Defined Storage. 

Publish date: 03/17/14

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file-based NAS protocols. Others have looked to InfiniBand, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.
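One way to make such a scorecard concrete is a simple weighted evaluation. The criteria come from the paragraph above; the weights and scores below are purely illustrative assumptions, not measured data, and note that raw line rate carries no weight at all:

```python
# Illustrative buyer's scorecard: weights and per-protocol scores are made up.
WEIGHTS = {"reliability": 0.4, "latency": 0.3, "tier1_track_record": 0.3}

def score(protocol: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 0-10 scale)."""
    return sum(WEIGHTS[c] * protocol[c] for c in WEIGHTS)

candidates = {
    "Fibre Channel": {"reliability": 9, "latency": 9, "tier1_track_record": 10},
    "iSCSI":         {"reliability": 7, "latency": 7, "tier1_track_record": 6},
}

for name, scores in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(scores):.1f}")
```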

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14

Storage That Turns Big Data Into a Bigger Asset: Data-Defined Storage With Tarmin GridBank

UPDATED FOR 2014: Today’s storage industry is as stubbornly media-centric as it has always been: SAN, NAS, DAS; disk, cloud, tape. This centricity forces IT to deal with storage infrastructure on media-centric terms. But the storage infrastructure should really serve its customers data, not media; it’s the data that yields business value, while the media should be an internal IT architectural choice.

Storage-media-focused solutions support the business only indirectly, by providing optimized storage infrastructure for data. Intelligent data services, on the other hand, provide direct business value by optimizing data utility, availability, and management. The shift from traditional thinking here is about first providing logically ideal data storage for the people who own and use the data, while freeing up underlying storage infrastructure designs to be optimized for efficiency as desired. Ideal data storage would be global in access and scalability, secure and resilient, and would inherently support data-driven management and applications.

Done well, this data-centric approach would yield significant competitive advantage by leveraging an enterprise’s valuable intellectual property: its vast and growing amounts of unstructured data. If this can be done by building on the company’s existing data storage and best practices, the business can quickly increase profitability, achieve faster time-to-market, and gain tremendous agility for innovation and competitiveness.

Tarmin, with its GridBank Data Management Platform, is a leading proponent of the data-centric approach. It is firmly focused on managing data for global accessibility, protection, and strategic value. In this product profile, we’ll explore how a data-centric approach drives business value. We’ll then examine how GridBank was architected expressly around the idea that data storage should be a means of extracting business value from data, not a dead-end data dump.

Publish date: 02/17/14

Glassbeam SCALAR: Making Sense of the Internet of Things

In this new era of big data, sensors can be embedded in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, when deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically anything else useful, you can now potentially derive deep intelligence that can be used to improve your products and services, better satisfy your customers, improve your margins, and grow market share.

For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only can you come to know more about a customer’s implementation of your solution than the customer does, but you can also now make decisions about new features and capabilities based on hard data.

The key to gaining value from this “Internet of Things” is the ability to make sense of the kind of big data it generates. One set of current solutions addresses data about internal IT operations, including logfile analysis tools like Splunk and VMware Log Insight. These are designed for technical users focused on recent time-series and event data, with the goal of improving tactical problem time-to-resolution. However, the big data derived from customer implementations is generally multi-structured, spread across streams of whole “bundles” of complexly related files that can easily grow to petabytes over time. The business users and analysts involved (e.g., in marketing, support, and sales) are not necessarily IT-skilled, yet to be useful the resulting analysis must at once be more sophisticated and capable of handling dynamic changes to incoming data formats.
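To make the multi-structured bundle problem concrete, consider a dispatcher that routes each file in a support bundle to a format-specific parser and degrades gracefully on formats it has never seen. Everything here (file names, formats, fields) is a hypothetical illustration, not Glassbeam's implementation:

```python
import csv, io, json

def parse_events(text):
    """Plain 'timestamp level message' event lines."""
    return [dict(zip(("ts", "level", "msg"), line.split(" ", 2)))
            for line in text.splitlines() if line.strip()]

def parse_config(text):
    """JSON configuration snapshot."""
    return [json.loads(text)]

def parse_sensors(text):
    """CSV sensor readings with a header row."""
    return list(csv.DictReader(io.StringIO(text)))

# One parser per known file type; unknown types are kept raw, not dropped.
PARSERS = {"events.log": parse_events,
           "config.json": parse_config,
           "sensors.csv": parse_sensors}

def parse_bundle(bundle: dict[str, str]) -> dict[str, list]:
    out = {}
    for fname, text in bundle.items():
        parser = PARSERS.get(fname)
        out[fname] = parser(text) if parser else [{"raw": text}]
    return out

bundle = {
    "events.log": "2013-10-21T09:00:00 WARN fan speed low",
    "sensors.csv": "ts,temp_c\n2013-10-21T09:00:00,71",
    "debug.bin": "<opaque vendor blob>",
}
print(parse_bundle(bundle))
```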

Click "Available Now" to read the full analyst opinion.

Publish date: 10/21/13