Storing digital data has long been a perilous task. Not only are stored digital bits subject to the catastrophic failure of the devices they rest on, but shared digital data is also exposed to error and even intentional destruction. In the virtual infrastructure, the dangers and challenges subtly shift. Data is more highly consolidated, and more systems depend wholly on shared data repositories, which increases data risk. With many virtual machines connecting to a single shared storage pool, IO and storage performance have become incredibly precious resources; this complicates backup and means that backup IO can cripple a busy infrastructure. Backup is a more important operation than ever before, but it is also fundamentally more challenging than ever before.
Fortunately, the industry learned this lesson quickly in the early days of virtualization and has aggressively innovated to bring tools and technologies to bear on the challenge of backup and recovery for virtualized environments. APIs have unlocked more direct access to data, and products have finally come to market that make protection easier to use and more compatible with the dynamic, mobile workloads of the virtual data center. Nonetheless, differences abound among product offerings, often rooted in the subtleties of architecture; those architectures ultimately determine whether a backup product is best suited to SMB-sized needs or whether a solution can scale to support the large enterprise.
Moreover, within the virtual data center, TCO centers on resource efficiency, and a backup strategy can be one of the most significant determinants of that efficiency. On one hand, traditional backup simply does not work and can cripple efficiency; there is too much IO contention and application complexity in trying to carry a legacy physical-infrastructure backup approach over to the virtual infrastructure. On the other hand, a number of specialized point solutions are designed to tackle some of the challenges of virtual infrastructure backup, but too often these products do not scale sufficiently, lack consolidated management, and impose tremendous operational overhead as the customer's environment and data grow. Viewed strategically, the available backup approaches often appear to fly directly in the face of resource efficiency.
In this new era of big data, sensors can be embedded in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, once deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically any other useful attribute, you can now potentially derive deep intelligence that can be used to improve your products and services, better satisfy your customers, improve your margins, and grow market share.
For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only do you come to know more about a customer’s implementation of your solution than the customer does, but you can now make decisions about new features and capabilities based on hard data.
The key to gaining value from this “Internet of Things” is the ability to make sense of the big data it generates. One set of current solutions addresses data about internal IT operations, including “logfile” analysis tools such as Splunk and VMware Log Insight. These are designed for technical users focused on recent time-series and event data to improve tactical problem time-to-resolution. However, the big data derived from customer implementations is generally multi-structured, arriving as streams of whole “bundles” of complexly related files that can easily grow to petabytes over time. The business users and analysts who need it are not necessarily IT-skilled (e.g. marketing, support, sales), and to be useful the resulting analysis must be both more sophisticated and capable of handling dynamic changes to incoming data formats.
Click "Available Now" to read the full analyst opinion.
We define online backup as using the cloud to provide users with a highly scalable and elastic repository for their backup data. This holds across all online backup users, but enterprises have specific requirements and risks that consumer and SMB customers do not share. Consumer and SMB customers, including education and small government agencies, primarily require acceptable backup and restore performance plus security and compliance reporting in their online backup. The enterprise needs these things too, but it also faces the added pressure of backing up larger data sets across multiple remote sites, storage systems, and applications. Here is what to know when you consider cloud backup vendors for your enterprise backup system.
Hadoop is coming to enterprise IT in a big way. The competitive advantage that can be gained from analyzing big data is just too “big” to ignore. And the amount of data available to crunch keeps growing, whether from new sensors, the capture of “data exhaust” from people, systems, and processes, or simply longer retention of raw or low-level detail. It is clear that enterprise IT practitioners everywhere will soon have to operate scale-out computing platforms in the production data center, and as the first and most mature solution on the scene, Hadoop is the likely target. The good news is that there is now a plethora of Hadoop infrastructure options to fit almost every practical big data need; the challenge for IT is to implement the solutions that best fit their business clients’ needs.
Apache Hadoop as originally designed had a relatively narrow application: certain kinds of batch-mode parallel algorithms applied over unstructured (or semi-structured, depending on your definition) data. Yet because of its widely available open-source nature, its commodity architecture approach, and its ability to extract new kinds of value from previously discarded or ignored data sets, the Hadoop ecosystem is rapidly evolving and expanding. With recent capabilities like YARN, which opens the main execution platform to applications beyond batch MapReduce, along with the integration of structured data analysis, real-time streaming and query support, and the rollout of virtualized enterprise hosting options, Hadoop is quickly becoming a mainstream data processing platform.
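To make that original batch-mode model concrete, here is a minimal word-count sketch in the classic Hadoop Streaming style, written in Python; the file names, paths, and invocation are illustrative assumptions rather than a prescribed implementation. The two scripts would normally live in separate files and be wired together by the Hadoop Streaming jar, which sorts the mapper output by key before the reducer sees it.

# mapper.py (illustrative): emit "word<TAB>1" for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(word + "\t1")

# reducer.py (illustrative): sum the counts per word; because the framework
# sorts mapper output by key, all counts for a given word arrive consecutively
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(current_word + "\t" + str(current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print(current_word + "\t" + str(current_count))

On a cluster these would be submitted with the Hadoop Streaming jar (for example, hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /raw/logs -output /out/wordcount, with placeholder paths); the same pair can be smoke-tested locally by piping a text file through the mapper, sort, and then the reducer.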
There has been much talk that deriving top value from big data efforts requires rare and potentially expensive data scientists to drive them. On the other hand, an abundance of higher-level analytical tools and pre-packaged applications is emerging to support existing business analysts and users with familiar tools and interfaces. While completely new companies have been founded on the exciting information and operational intelligence gained from exploiting big data, we expect wider adoption by existing organizations that augment traditional lines of business with new insight and revenue-enhancing opportunity. In addition, a Hadoop infrastructure serves as a great data capture and ETL base for extracting more structured data to feed downstream workflows, including traditional BI/DW solutions, as sketched below. No matter how you slice it, big data is becoming a common enterprise workload, and enterprise IT infrastructure teams will need to deploy, manage, and provide Hadoop services to their businesses.
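As a rough illustration of that capture-and-ETL role, the following PySpark sketch (one of several engines that can run on a Hadoop/YARN cluster) reads raw, semi-structured event files from HDFS, extracts a structured subset, and writes columnar output for downstream BI/DW tools to consume; all field names and paths are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("raw-event-etl").getOrCreate()

# Raw, multi-structured event "bundles" previously landed in HDFS (placeholder path)
raw = spark.read.json("hdfs:///landing/telemetry/*.json")

# Pull out a structured subset suitable for a downstream warehouse load
curated = (raw
    .select(
        col("device_id"),
        col("customer_id"),
        col("event_type"),
        col("event_ts").cast("timestamp"))
    .filter(col("event_type").isNotNull()))

# Columnar files that BI tools or a data warehouse loader can consume directly
curated.write.mode("overwrite").parquet("hdfs:///curated/events/")

spark.stop()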
There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressure to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan that many see primarily as an application/database vendor: Oracle.
When Oracle bought Sun in 2009, one of the areas of expertise it acquired was ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought the acquisition would essentially fold any key IP into narrow solutions that would effectively support only Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for database- and database-backup-specific storage.
But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmarks that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest Oracle ZFS Storage Appliances, the new ZS3 models, become a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, and they position Oracle to enter enterprise storage architectures on a much broader basis going forward.
In this report we take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and on its broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll also explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and indicate the TCO factors behind its market-leading price/performance.
Taneja Group and InfoStor jointly ran a survey asking IT managers about their experience with corporate file sharing. Taneja Group defines corporate file sharing as the ability to share large numbers of files between business users across networks and mobile devices.
File sharing heavily intersects with Bring Your Own Device (BYOD) and the cloud. BYOD is the phenomenon of employees using personal mobile devices for both personal and business applications and data access. Business file sharing is closely associated with BYOD, as end users seek to easily share files across their own and others’ multiple computing devices.
File sharing is also bound up with cloud usage. File sharing on mobile devices does not strictly require cloud-based file sharing services; basic secure sharing can be done via VPN, just as one would email a file or share its pathname over the LAN. However, this approach is less than ideal because it scales poorly and lacks any file sharing application functionality.
In contrast, most file sharing products use the cloud because the environment is highly scalable and delivers application functionality such as file versioning and locking. Many file sharing products also use the cloud to host a shared file repository, and most integrate with Active Directory and SAML-based access management systems. Given the huge growth in data files and in mobile access needs, this approach is far superior to simply sending files over VPN connections.
This is no surprise to end users, who happily use file sharing applications like Dropbox to easily share files. Yet not all file sharing applications are created equal, and consumer-grade file sharing applications can threaten corporate data security. Vendors are quickly developing business- and enterprise-level file sharing applications in response to valid concerns about file sharing security, scalability, management, usability, and compliance.
These are serious issues and should be serious concerns for IT in businesses of any size. However, our survey found that although some respondents have file sharing solutions and policies already in place, many do not. Some respondents have solid short-term plans to put them in place, but others have no plans at all. Why? Taneja Group has observed that when IT denies the need for secure file sharing in a BYOD environment, it usually lacks the time, sense of urgency, executive support, and/or budget to deal effectively with the problem.
For more on file collaboration/BYOD issues and vendors, download Taneja Group’s File Collaboration Landscape Market Report.