Since its founding in 2004, Vembu Technologies has maintained a two-fold mission: innovate cloud information management for business users and accelerate the growth of the channel partners who serve them.
Vembu StoreGrid is the flagship product, offering simplified, flexible, and cost-effective data protection services in the cloud. Its innovative architecture enables integrated backup and recovery across multiple operating systems, physical and virtual environments, and multiple applications. Vembu SyncBlaze builds on these capabilities with a cloud file collaboration solution that solves the growing problem of file sharing in a mobile workforce.
Vembu completes its product portfolio with customized editions of StoreGrid and SyncBlaze for Managed Service Providers (MSPs), Managed Hosting Providers (MHPs), Cloud Service Providers (CSPs) and Value Added Resellers (VARs). These specialized editions enable Vembu partners to offer multi-platform hybrid cloud data protection and content management to their SMB and mid-sized customers. Vembu’s customized support and business management offerings enable these partners to grow their customer base, increase revenues, and speed up go-to-market initiatives.
Choosing a cloud vendor in today’s market can be a challenge. This market profile from Taneja Group provides an objective review of current cloud infrastructure-as-a-service (IaaS) and management, platform-as-a-service (PaaS), end-user computing (EUC), and public cloud IaaS offerings from eleven leading vendors, and includes key takeaways for IT leaders to consider when selecting a vendor.
Attached is the executive summary only. To access the full Taneja Group Landscape Report, you can download it directly from VMware.
The past few years have seen virtualization rapidly move into the mainstream of the data center. Today, virtualization is often the de facto standard in the data center for deploying any application or service, including the important operational and business systems that are the lifeblood of the business.
For mission-critical systems, customers necessarily demand a broader level of services than is common in the test and development environments where virtualization often gains its foothold in the data center. It goes almost without saying that availability is topmost in customers’ minds.
Availability is a spectrum of technology that offers businesses many different levels of protection – from general recoverability to uninterruptible applications. At the most fundamental level are mechanisms that protect the data and the server beneath applications. While in the past these mechanisms were often hardware and secondary storage systems, VMware has steadily advanced the capabilities of its vSphere virtualization offering, which now includes a long list of features: vMotion, Storage vMotion, vSphere Replication, VMware vCenter Site Recovery Manager, vSphere High Availability, and vSphere Fault Tolerance. While VMware is clearly serious about the mission-critical enterprise, each of these offerings has retained a VMware-specific orientation toward protecting the “compute instance”.
The challenge is that protecting a compute instance does not go far enough. It is the application that matters, and detecting VM failures may fall short of detecting and mitigating application failures.
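To make that distinction concrete, here is a minimal, hypothetical sketch – not VMware or Symantec code, and the host name and ports below are assumptions – of the gap between a VM-level liveness check and an application-level health check:

```python
# Hypothetical illustration: a VM-level heartbeat can report "healthy"
# while the application inside the guest has actually failed.
import socket


def vm_is_alive(host: str, timeout: float = 2.0) -> bool:
    """VM-level check: is the guest reachable at all (e.g., SSH port open)?"""
    try:
        with socket.create_connection((host, 22), timeout=timeout):
            return True
    except OSError:
        return False


def app_is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Application-level check: is the service itself accepting connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def assess(host: str, app_port: int) -> str:
    if not vm_is_alive(host):
        return "VM failure: restart or fail over the virtual machine"
    if not app_is_alive(host, app_port):
        # The gap VM-centric HA can miss: the guest is up, the app is down.
        return "Application failure: recover the service, not just the VM"
    return "Healthy"


if __name__ == "__main__":
    # Hypothetical database host and port, for illustration only.
    print(assess("db01.example.com", 5432))
```

The middle case – VM up, application down – is precisely the failure mode that application-aware availability tooling is designed to catch.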
With this in mind, Symantec has steadily advanced a range of solutions for enhancing availability protection in the virtual infrastructure. Today this includes ApplicationHA – developed in partnership with VMware – and its gold-standard offering of Veritas Cluster Server (VCS) enhanced for the virtual infrastructure. We recently turned an eye toward how these solutions enhance virtual availability in a hands-on lab exercise, conducted remotely from Taneja Group Labs in Phoenix, AZ. Our conclusion: VCS is the only HA/DR solution that can monitor and recover applications on VMware while remaining fully compatible with typical vSphere management practices such as vMotion, Distributed Resource Scheduler (DRS), and Site Recovery Manager, and it can make a serious difference in the availability of important applications.
Microsoft officially acquired StorSimple on November 15, 2012. StorSimple was a relative startup that had been shipping products for about 18 months. Why did Microsoft buy StorSimple? What is the strategy behind the purchase? Where will Microsoft take this newly acquired technology? These are among the questions we are being asked at present. Here is our view…
Taneja Group and InfoStor jointly ran a survey asking IT managers about their big data experiences and roadmaps. We concluded that there is a great deal of uncertainty around big data: what it is, how to manage it, and whether it even belongs in the IT domain rather than with specialized application administrators.
Storing and managing large volumes of data certainly involves IT. However, “big data” is its own class: large data sets that are subjected to ongoing analytics and/or massive re-use. Some big data is structured into databases; most of it is unstructured. Big data operations continuously act upon large and growing volumes of data, which generates fast and frequent data movement between servers, networks, and storage. Big data analytics in particular needs fast and large feedback loops for decision-making, as specialized software tools analyze and reshape data into a variety of views, reports, and derived data sets.
IT is rarely involved at the analytics administration level, but it is deeply involved at the storage level. Big data needs both high capacity and high performance, which requires storage with high-capacity disk and the ability to process storage IO very quickly. It must also be highly available, since big data is by definition active and important data. And it should be cost-effective as well, though it will not be inexpensive.
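As a rough illustration of why capacity alone is not enough – all figures below are assumed for the sketch, not survey data – consider how long a single full pass over a large data set takes at different degrees of storage parallelism:

```python
# Back-of-envelope sketch with assumed numbers: big data storage needs
# parallel IO performance, not just raw capacity.
DATASET_TB = 100     # assumed working set: 100 TB
STREAM_MB_S = 500    # assumed throughput of one storage stream, in MB/s


def full_scan_hours(dataset_tb: float, streams: int, mb_per_s: float) -> float:
    """Hours to read the whole dataset once across N parallel streams."""
    total_mb = dataset_tb * 1024 * 1024
    return total_mb / (streams * mb_per_s) / 3600


for streams in (1, 8, 64):
    hours = full_scan_hours(DATASET_TB, streams, STREAM_MB_S)
    print(f"{streams:>3} streams: {hours:6.1f} hours per full scan")
# With these assumptions: 1 stream takes ~58 hours, 64 streams under 1 hour,
# which is why analytics feedback loops push storage toward scale-out designs.
```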
[Taneja Group discusses scale-out storage as a best practice solution to big data analytics in our report: “Big Data, Big Storage: Scale-Out NAS for Big Data Environments.” (http://bit.ly/UGCVjm)]
Big data means different things to different people. A database administrator might insist that big data means large databases; a 100-server SharePoint administrator might classify content blobs as big data; a storage administrator in a hospital radiology lab may define big data as digitized X-rays for 100,000 patients a year. In fact, they are all right: each administrator’s data is large, active, and must be kept protected and highly available to applications. In other words, big data.
It is the business units’ responsibility to decide how to use and analyze this data; it is IT’s job to store the data in a way that provides the required service levels of availability and performance. IT frequently turns to NAS to do this, citing its familiarity, file-based architecture, and general ease of use. However, the very simplicity of traditional NAS can limit its usability as big data capacity and growth needs mount. Given fast data growth and more active data than ever before, this model soon devolves into poorly managed storage sprawl and forced data migrations in the name of balancing workloads.
There are several storage choices for big data, depending on your environment: projected growth, data types, performance, capacity, and scalability. One excellent option for many big data storage environments is scale-out NAS. This report will briefly discuss scale-out and suggest important questions to ask when researching vendors.