
Free Reports

Data Defined Storage: Building on the Benefits of Software Defined Storage

At its core, Software Defined Storage decouples storage management from the physical storage system. In practice, Software Defined Storage vendors implement the solution with a variety of technologies: orchestration layers, virtual appliances, and server-side products are all in the market now. These solutions are valuable for storage administrators who struggle to manage multiple storage systems in the data center as well as remote data repositories.

What Software Defined Storage does not do is yield more value from the data under its control, or address global information governance requirements. Data Defined Storage closes that gap: it delivers the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits while also driving reduced risk and added business value for distributed unstructured data through Data Defined Storage.

Publish date: 03/17/14

Storage That Turns Big Data Into a Bigger Asset: Data-Defined Storage With Tarmin GridBank

UPDATED FOR 2014: Today’s storage industry is as stubbornly media-centric as it has always been: SAN, NAS, DAS; disk, cloud, tape. This centricity forces IT to deal with storage infrastructure on media-centric terms. But the storage infrastructure should really serve data to customers, not media; it’s the data that yields business value, while the media should be an internal IT architectural choice.

Storage media-focused solutions only support the business indirectly, by providing optimized storage infrastructure for data. Intelligent data services, on the other hand, provide direct business value by optimizing data utility, availability, and management. The shift from traditional thinking here is about providing logically ideal data storage for the people who own and use the data first, while freeing up underlying storage infrastructure designs to be optimized for efficiency as desired. Ideal data storage would be global in access and scalability, secure and resilient, and would inherently support data-driven management and applications.

Done well, this data-centric approach would yield significant competitive advantage by leveraging an enterprise’s valuable intellectual property: its vast and growing amounts of unstructured data. If this can be done by building on the company’s existing data storage and best practices, the business can quickly increase profitability, achieve faster time-to-market, and gain tremendous agility for innovation and competitiveness.

Tarmin, with its GridBank Data Management Platform, is a leading proponent of the data-centric approach. It is firmly focused on managing data for global accessibility, protection, and strategic value. In this product profile, we’ll explore how a data-centric approach drives business value. We’ll then examine how GridBank was architected expressly around the concept that data storage should be a means of extracting business value from data, not a dead-end data dump.

Publish date: 02/17/14
Free Reports

Glassbeam SCALAR: Making Sense of the Internet of Things

In this new era of big data, sensors can be embedded in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, once deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically any other useful information, you can now potentially derive deep intelligence from it. That intelligence can be used to improve your products and services, better satisfy your customers, improve your margins, and grow market share.

For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only do you come to know more about a customer’s implementation of your solution than the customer does, but you can now make decisions about new features and capabilities based on hard data.

The key to gaining value from the Internet of Things is the ability to make sense of the kind of big data it generates. One set of current solutions addresses data about internal IT operations, including “logfile” analysis tools like Splunk and VMware Log Insight. These are designed for a technical user focused on recent time-series and event data, with the goal of improving tactical problem time-to-resolution. However, the big data derived from customer implementations is generally multi-structured, spread across streams of whole “bundles” of complexly related files that can easily grow to petabytes over time. The business users and analysts who need it are not necessarily IT-skilled (e.g. marketing, support, sales), so to be useful the analysis must at once be more sophisticated and capable of handling dynamic changes to incoming data formats.
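
As a rough illustration of what “handling dynamic changes to incoming data formats” means in practice, here is a minimal sketch in Python. It is our own hypothetical example, not Glassbeam SCALAR’s implementation, and the record fields are invented; it simply normalizes a small bundle of device records whose schemas drift between firmware releases.

```python
# Hypothetical illustration only: a tiny normalizer for "bundles" of device
# records whose schemas drift across firmware releases. It shows why
# fixed-schema logfile tooling struggles with multi-structured machine data.
import json

def normalize_bundle(lines):
    """Parse newline-delimited JSON records, tolerating fields that appear,
    disappear, or change from one record to the next."""
    records = [json.loads(line) for line in lines if line.strip()]
    # Discover the schema dynamically as the union of all fields seen.
    fields = sorted({key for rec in records for key in rec})
    # Emit uniform rows, filling gaps so downstream analytics see one table.
    return fields, [[rec.get(f) for f in fields] for rec in records]

if __name__ == "__main__":
    bundle = [
        '{"device": "A1", "temp_c": 41, "status": "ok"}',               # old format
        '{"device": "A2", "temperature": {"c": 44}, "fan_rpm": 900}',   # new format
    ]
    fields, rows = normalize_bundle(bundle)
    print(fields)
    for row in rows:
        print(row)
```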

Click "Available Now" to read the full analyst opinion.

Publish date: 10/21/13

Enterprise Online Backup: What Administrators Need to Know

We define online backup as using the cloud to provide users with a highly scalable and elastic repository for their backup data. This is true across all online backup users, but the enterprise has specific requirements, and some risks, that consumer and SMB customers do not share. Consumer and SMB customers – including education and small government agencies – primarily require acceptable backup and restore performance, plus security and compliance reporting, from their online backup. The enterprise needs these things too, but it also faces the added pressures of backing up larger data sets across multiple remote sites, storage systems, and applications. Here is what to know when you consider cloud backup vendors for your enterprise backup system.

Publish date: 10/03/13

Market Landscape Abstract: Enterprise Hadoop Infrastructure for Big Data IT

Hadoop is coming to enterprise IT in a big way. The competitive advantage to be gained from analyzing big data is just too “big” to ignore. And the amount of data available to crunch is only growing, whether from new sensors, from capturing the “data exhaust” of people, systems, and processes, or simply from longer retention of available raw or low-level detail. It’s clear that enterprise IT practitioners everywhere will soon have to operate scale-out computing platforms in the production data center, and as the first and most mature solution on the scene, Hadoop is the likely choice. The good news is that there is now a plethora of Hadoop infrastructure options to fit almost every practical big data need; the challenge for IT is to implement the best solution for its business clients’ needs.

Apache Hadoop as originally designed had a relatively narrow application: certain kinds of batch-mode parallel algorithms applied over unstructured (or semi-structured, depending on your definition) data. But because of its widely available open source nature, its commodity architecture approach, and its ability to extract new kinds of value from previously discarded or ignored data sets, the Hadoop ecosystem is rapidly evolving and expanding. With recent capabilities like YARN, which opens the main execution platform to applications beyond batch MapReduce, plus the integration of structured data analysis, real-time streaming and query support, and the rollout of virtualized enterprise hosting options, Hadoop is quickly becoming a mainstream data processing platform.

There has been much talk that deriving top value from big data efforts requires rare and potentially expensive data scientists to drive them. On the other hand, an abundance of higher-level analytical tools and pre-packaged applications is emerging to support existing business analysts and users with familiar tools and interfaces. While completely new companies have been founded on the exciting information and operational intelligence gained from exploiting big data, we expect wider adoption by existing organizations that augment traditional lines of business with new insight and revenue-enhancing opportunities. In addition, a Hadoop infrastructure serves as a great data capture and ETL base for extracting more structured data to feed downstream workflows, including traditional BI/DW solutions (see the sketch below). No matter how you slice it, big data is becoming a common enterprise workload, and enterprise IT infrastructure teams will need to deploy, manage, and provide Hadoop services to their businesses.
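
To make the ETL point concrete, here is a minimal, hypothetical sketch of the kind of batch job Hadoop was originally built for: a Hadoop Streaming mapper and reducer written in Python. It is our own illustration, not from the report; the field layout, paths, and file names are assumptions. It extracts structured (customer, amount) pairs from raw semi-structured log lines and aggregates them for a downstream BI/DW load.

```python
#!/usr/bin/env python
# Hypothetical Hadoop Streaming ETL sketch: extract structured fields from raw
# log lines (mapper) and aggregate them (reducer) for a downstream BI/DW load.
# Example invocation (paths and layout are assumptions, not from the report):
#   hadoop jar hadoop-streaming.jar \
#     -input /raw/clickstream -output /etl/revenue_by_customer \
#     -mapper "etl.py map" -reducer "etl.py reduce" -file etl.py
import sys

def mapper():
    # Each raw line is assumed to look like: timestamp|customer_id|event|amount
    for line in sys.stdin:
        parts = line.rstrip("\n").split("|")
        if len(parts) == 4 and parts[2] == "purchase":
            print("%s\t%s" % (parts[1], parts[3]))   # key: customer, value: amount

def reducer():
    # Hadoop delivers mapper output sorted by key, so totals can be streamed.
    current, total = None, 0.0
    for line in sys.stdin:
        customer, amount = line.rstrip("\n").split("\t")
        if customer != current:
            if current is not None:
                print("%s\t%.2f" % (current, total))
            current, total = customer, 0.0
        total += float(amount)
    if current is not None:
        print("%s\t%.2f" % (current, total))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```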

Publish date: 10/01/13

Top Performance on Mixed Workloads, Unbeatable for Oracle Databases

There is a storm brewing in IT today that will upset the established ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressure to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan that many see primarily as an application/database vendor: Oracle.

When Oracle bought Sun in 2009, one of the areas of expertise it acquired was ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought the acquisition would essentially fold any key IP into narrow solutions that would only effectively support Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for “database” and “database backup” specific storage.

But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmarks that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest Oracle ZFS Storage Appliances, the new ZS3 models, become a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, and are positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.

In this report we will take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and on its broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll also explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities, such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and indicate the TCO factors driving the ZS3’s market-leading price/performance.
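
As a purely conceptual sketch of heat-map-driven tiering (our own simplification in Python, not Oracle’s ADO/HCC implementation; the class names and 30-day threshold are assumptions), the idea is to track when each segment was last touched and demote segments that have gone cold to a more heavily compressed tier.

```python
# Conceptual sketch only (not Oracle's implementation): how heat-map-driven
# storage tiering can work in principle. Segments untouched for a threshold
# period become candidates for a more aggressive compression tier, much as
# ADO policies move cold segments to Hybrid Columnar Compression.
import time

COLD_AFTER_SECONDS = 30 * 24 * 3600   # assumed policy: cold after 30 days

class Segment:
    def __init__(self, name):
        self.name = name
        self.tier = "row-store"          # hot tier
        self.last_access = time.time()   # the "heat map" entry for this segment

    def read(self):
        self.last_access = time.time()   # every access re-warms the segment

def apply_tiering_policy(segments, now=None):
    """Demote segments whose heat-map entry shows no recent access."""
    now = now or time.time()
    for seg in segments:
        if seg.tier == "row-store" and now - seg.last_access > COLD_AFTER_SECONDS:
            seg.tier = "columnar-compressed"   # stand-in for an HCC-style tier
    return segments
```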

Publish date: 09/10/13