Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBAs, CNAs, Routers, Components, Semiconductors, and Server Blades.
Taneja Group analysts cover storage arrays of every form and manner: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, whether FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have particularly deep backgrounds in the file systems area. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.
With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services that include “extreme” applications, such as massive voice and image processing or complex financial analysis modeling, that can push storage systems to their limits. Examples of high-visibility, market-impacting solutions include applications based on large-scale image pattern recognition and financial risk management based on high-speed decision-making.
These ground-breaking solutions, made up of very different activities but sharing similar data storage challenges, create incredible new lines of business representing significant revenue potential. Every day here at Taneja Group we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems - of the kind that most enterprise data centers (or cloud providers) have racks and racks of - simply can’t handle the performance requirements.
There are already great enterprise storage solutions for applications that need raw throughput, high capacity, parallel access, low latency, or high availability – maybe even two or three of those at a time. But when an “extreme” application needs all of those capabilities at once, only supercomputing-class storage in the form of parallel file systems provides a functional solution. The problem is that most commercial enterprises simply can’t afford or risk basing a line of business on an expensive research project.
The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door for revolutionary services creation, enabling mainstream enterprise datacenters to support the exploitation of new extreme applications.
Object storage has long been pigeonholed as a necessary overhead expense for long-term archive storage, a data purgatory one step before tape or deletion. In our experience, many IT shops view object storage more as something exotic they have to implement to meet government regulations than as a competitive strategic asset that can help their businesses make money.
Normally when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways, like accelerating the performance of market-competitive applications or efficiently consolidating data centers. Maybe they are even starting to analyze big data to find better ways to run the business. There are far more opportunities to be sure, but these kinds of “money-making” initiatives have been mainly associated with “file” and “block” types of storage – the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects. But that’s about to change. If you’ve intentionally dismissed or just overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms and developing new revenue streams.
Object storage has been evolving from its historical second-tier data dumping ground into a value-building primary storage platform for content and collaboration. And the latest high performance cloud storage solutions could transform the whole nature of enterprise data storage. To really exploit this new generation of object storage, it is important to understand not only what it is and how it has evolved, but also to start thinking about how to harness its emerging capabilities in building net new business.
Since its founding in 2004, Vembu Technologies has maintained a two-fold mission: innovate cloud information management for business users and accelerate the growth of the channel partners who serve them.
Vembu StoreGrid is the flagship product that offers simplified, flexible and cost-effective data protection services in the cloud. Its innovative architecture enables integrated backup and recovery across multiple operating systems, physical and virtual environments, and multiple applications. Vembu SyncBlaze builds on these capabilities with a cloud file collaboration solution that solves the growing problem of file sharing in a mobile workforce.
Vembu completes its product portfolio with customized editions of StoreGrid and SyncBlaze for Managed Service Providers (MSPs), Managed Hosting Providers (MHPs), Cloud Service Providers (CSPs) and Value Added Resellers (VARs). These specialized editions enable Vembu partners to offer multi-platform hybrid cloud data protection and content management to their SMB and mid-sized customers. Vembu’s customized support and business management offerings enable these partners to grow their customer base, increase revenues, and speed up go-to-market initiatives.
Reliable storage is the lifeblood of every data-driven business, and operational storage capabilities like non-disruptive scalability, continuous data protection, capacity optimization, and disaster recovery are not just desired, but required. But enterprise-class storage features have long been out of reach of organizations that don't have enterprise-sized budgets, storage experts and large data centers. Instead, they make do with low-end disk arrays or even just a box of disks patched together with a minimal amount of data protection in the form of manual backups. The problem is that disks fail, organizations change, and data continues to grow – organizations that pile up disks under the desktop are courting significant business failure, while those that pay up for traditional arrays and even cloud storage incur significant cost and management overhead.
Having to step up to deliver these advanced storage capabilities challenges growing organizations with big adoption hurdles, not the least of which is cost, both OPEX and CAPEX. Far too many organizations struggle along with high-risk storage or feel forced to pour significant energy, cost, and staff time into acquiring, deploying, and operating high-touch storage arrays with layers of complex add-on software. Even larger enterprises with expert storage gurus and big data centers can feel the weight of managing complex SANs for departmental, ROBO, and other practical rubber-meets-road storage scenarios. What’s really needed is a new approach to storage - an affordable, expandable array solution with advanced storage capabilities baked in. Ideally it should be simpler to operate than even setting up a file system on raw drives, and it should be available at a justifiable cost for even small data-driven businesses.
In this solution brief we are going to look at what SMB and departmental storage buyers should both require and expect from storage solutions to meet their business goals, and how traditional mid-market storage based on old technologies can fall short. We will then introduce Exablox’s new OneBlox storage array to highlight how purposefully designing storage from the ground up can lead to a simple but powerful hardware design and software architecture that features built-in high availability, easy scalability, and great data protection. Along the way we’ll see how two real-world OneBlox customers experience its benefits, cost effectiveness and ease of management in their live deployments.
Deduplication took the market by storm several years ago, and backup hasn’t been the same since. With the ability to eradicate duplicate data in duplication-prone backups, deduplication made it practical to store large amounts of backup data on disk instead of tape. In short order, a number of vendors marched into the market spotlight offering products with tremendous efficiency claims, great throughput rates, and greater tolerance for the too-often erratic throughput of backup jobs that had been a thorn in the side of traditional tape. Today, deduplicating backup storage appliances are a common sight in data centers of all types and sizes.
But deduplicating data is a tricky science. It is often not as simple as just finding matching runs of similar data. Backup applications and modifications to data can sprinkle data streams with mismatched bits and pieces, making deduplication much more challenging. The problem is worst for Virtual Tape Libraries (VTLs) that emulate traditional tape. Since they emulate tape, backup applications use all of their traditional tape formatting. Such formatting is designed to compensate for tape shortcomings and allow faster and better application access to data on tape, but it creates noise for deduplication.
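To make the problem concrete, here is a minimal Python sketch under invented assumptions: the fixed 4 KiB chunk size, the toy “HDR” record format, and the job tags are all made up for illustration, standing in for the real formatting a backup application would emit. It is not any vendor’s actual algorithm, just a demonstration of why per-job metadata is “noise” to a naive chunk-hashing engine.

```python
# Illustrative only: invented chunk size and tape record format, not any
# vendor's real deduplication algorithm.
import hashlib
import os

CHUNK = 4096   # fixed-size chunking, the simplest deduplication scheme
BLOCK = 65536  # payload bytes between metadata records in the toy tape format

def chunk_hashes(stream: bytes) -> set:
    """One SHA-256 per chunk; a dedupe store keeps one copy per unique hash."""
    return {hashlib.sha256(stream[i:i + CHUNK]).digest()
            for i in range(0, len(stream), CHUNK)}

def wrap(stream: bytes, tag: bytes) -> bytes:
    """Interleave a toy metadata record (marker, length byte, job tag) per block."""
    out = bytearray()
    for i in range(0, len(stream), BLOCK):
        out += b"HDR" + bytes([len(tag)]) + tag + stream[i:i + BLOCK]
    return bytes(out)

payload = os.urandom(1 << 20)      # 1 MiB standing in for backup data
print(len(chunk_hashes(payload)))  # 256 chunks; a clean second copy matches all

# The same payload written by two different backup jobs shares almost nothing,
# because the different-length job tags misalign every chunk that follows them.
job1 = wrap(payload, b"job-one")
job2 = wrap(payload, b"job-twenty-two")
shared = chunk_hashes(job1) & chunk_hashes(job2)
print(f"chunks in common across the two jobs: {len(shared)}")  # almost surely 0
```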
The best products on the market recognize this challenge and have built “parsers” for every backup application – technology that recognizes the metadata within the backup stream and enables the backup storage appliance to read around it.
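Staying with the same invented record format, the sketch below illustrates the parser idea in miniature. The “HDR” marker, the length byte, and the helper names are all hypothetical, but the principle is the one described above: recognize the metadata records and read around them, so the deduplication engine hashes only the data.

```python
# Hypothetical parser for the toy tape format above; real parsers must
# understand each backup application's actual on-media format.
import os

BLOCK = 65536  # payload bytes per record in the toy tape format

def wrap(stream: bytes, tag: bytes) -> bytes:
    """Emit toy tape records: b'HDR' + length byte + job tag + payload block."""
    out = bytearray()
    for i in range(0, len(stream), BLOCK):
        out += b"HDR" + bytes([len(tag)]) + tag + stream[i:i + BLOCK]
    return bytes(out)

def read_around_metadata(stream: bytes) -> bytes:
    """Parse each record, skip its metadata, and keep only the payload."""
    out, pos = bytearray(), 0
    while pos < len(stream):
        assert stream[pos:pos + 3] == b"HDR", "unexpected record marker"
        tag_len = stream[pos + 3]
        pos += 4 + tag_len               # read around the metadata
        out += stream[pos:pos + BLOCK]   # keep the payload block
        pos += BLOCK
    return bytes(out)

payload = os.urandom(1 << 20)
job1 = wrap(payload, b"job-one")         # identical data...
job2 = wrap(payload, b"job-twenty-two")  # ...different job metadata

assert job1 != job2                                           # raw streams differ
assert read_around_metadata(job1) == read_around_metadata(job2) == payload
print("with the parser, the two backup streams deduplicate perfectly")
```

Because every backup application formats its stream differently, a parser has to be built per application – which is exactly why vendors ship one for each major backup product.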
In 2012, IBM introduced a parser for its leading backup application, Tivoli Storage Manager (TSM), in the ProtecTIER line of backup storage solutions. TSM has long had a reputation for a noisy tape format. That format enables richer data interaction than many competitors, but it creates enormous challenges for deduplication.
At IBM’s invitation, in November of 2012, Taneja Group Labs put ProtecTIER through its paces to evaluate whether this parser for the ProtecTIER family makes a difference. Our findings: clearly it does; in our highly structured lab exercise, ProtecTIER looked fully poised to deliver advertised deduplication for TSM environments. In our case, we observed a reasonable 10X to 20X deduplication range for real-world Microsoft Exchange data – at 10X, ten terabytes of logical backup data occupy just one terabyte of physical disk.
Over the past few years, backup has become a busy market. For the first time in many years, a new wave of energy hit this market as small innovators sprang forth to tackle pressing challenges around virtual server backup. The market has taken off because of a distinctive set of challenges and simultaneous opportunities within the virtual infrastructure – with large amounts of highly similar data, rich APIs for automation, and a tightly constrained set of IO and processing resources, the data behind the virtual server can be captured and protected in entirely new ways. As innovators in turn attacked these opportunities, backup has been fundamentally changed. In many cases, backup has been put in the hands of the virtual infrastructure administrator, made lighter weight and vastly more accessible, and has become a powerful tool for data protection and data management.
In reality, the innovations in virtual backup have leveraged the unifying layer of virtualization to tackle several key backup challenges. These challenges have been long-standing in the practice of data protection, and include ever-tightening backup windows, ever more demanding recovery point objectives (RPO, the amount of tolerable data loss when recovering), short recovery time objectives (RTO, how long it takes to complete a recovery), recovery reliability, and complexity. Specialized data protection for the virtual infrastructure has made enormous progress in tackling these challenges, and simplifying the practice of data protection to boot.
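To put those two metrics in concrete terms, here is a small worked example with invented timestamps; RPO is measured backward from the failure to the last good recovery point, and RTO forward from the failure to restored service.

```python
# Invented timestamps, purely to illustrate how RPO and RTO are measured.
from datetime import datetime

last_backup = datetime(2013, 4, 1, 2, 0)    # last successful recovery point
failure     = datetime(2013, 4, 1, 14, 30)  # storage failure occurs
restored    = datetime(2013, 4, 1, 18, 45)  # application back online

rpo_achieved = failure - last_backup  # data written in this window is lost
rto_achieved = restored - failure     # outage experienced by users

print(f"data loss window (RPO): {rpo_achieved}")  # 12:30:00
print(f"recovery time (RTO):    {rto_achieved}")  # 4:15:00
```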
But we’ve often wondered what it would take to bring the innovation from virtual infrastructure protection to a full-fledged backup product that could tackle both physical and virtual systems. At the recent request of Dell, Taneja Group Labs had the opportunity to look at just such a product. That product is AppAssure – a technology set that seems destined to be the future architectural anchor for the many data protection technologies in Dell’s rapidly growing product portfolio. We jumped at the chance to run AppAssure through its paces in a hands-on exercise, as we wanted to see whether AppAssure had an architecture poised to change how datacenter-wide protection is typically done, perhaps by making it more agile and accessible.