
Profiles/Reports

Recently Added

Free Reports

TVS: Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don’t come around very often. Over the last two decades we have seen very few show up: ZFS was introduced in 2004, Isilon’s OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason for this: creating a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years, we have seen seismic changes in the data center and storage industry. Today’s data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today’s data center runs virtual machines, and it is the exception when there is a one-to-one correlation between a server and storage. Databases have grown beyond the capacity of the largest single disk drives. Huge amounts of data are ingested by big data applications and social media, and data must be retained to meet government and corporate policy requirements. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASIC chips in power and performance, and new software development and delivery methodologies such as “agile” have become mainstream. In the past, we were concerned with how to deal with the underlying storage; now we are concerned with how to deal with the huge amount of data we have stored.

What could be accomplished if a new file system were created from the ground up to take advantage of the latest advances in technology and, more importantly, were built by an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with the Qumulo Core data-aware scale-out NAS software, powered by its new file system, QSFS (Qumulo Scalable File System). Qumulo’s three co-founders, Peter Godman, Neal Fachan, and Aaron Passey, were the primary inventors of OneFS, and they assembled some of the brightest minds in the storage industry to create a modern file system designed to support the requirements of today’s data center, not the data center of decades ago.

From day one, Qumulo embraced an agile software development and release model, which allows it to push out fully tested and certified releases every two weeks. New features and bug fixes can be introduced seamlessly as soon as they are ready, rather than on an arbitrary 6-, 12-, or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed to work with hard disk drives that could produce roughly 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Now flash is prevalent in the industry, and commodity flash devices can produce up to 250,000 IOPS. Traditional file systems were optimized for slower hard disk drives, not to take advantage of the lower latency and higher performance of today’s solid state drives. Many traditional file systems and storage arrays have devised ways to “bolt on” SSD devices to boost performance. However, their architectures were built around the capabilities of yesterday’s hard disk drives, not the capabilities of today’s flash technology.
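
To put those figures in perspective, here is a rough back-of-the-envelope comparison, a sketch using only the nominal 150 IOPS and 250,000 IOPS numbers cited above (real devices vary):

    # Rough IOPS comparison using the nominal figures cited above (actual devices vary).
    HDD_IOPS = 150            # conventional spinning disk
    HDD_SHORT_STROKED = 300   # roughly double, at the cost of usable capacity
    FLASH_IOPS = 250_000      # commodity flash device

    print(FLASH_IOPS // HDD_IOPS)           # ~1,666 conventional HDDs to match one flash device
    print(FLASH_IOPS // HDD_SHORT_STROKED)  # ~833 short-stroked HDDs to match one flash device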

An explosion in scale-out, large-capacity file systems has empowered enterprises to do very interesting things, but it has also brought some very interesting problems. Even one of the most basic questions, how much space the files on a file system are consuming, is surprisingly hard to answer on first-generation file systems. Other questions that are difficult to answer without being aware of the data on a file system include who is consuming the most space, and which clients, files, or applications are consuming the most bandwidth. Second-generation file systems need to be designed to be data-aware, not just storage-aware.
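
To see why such questions are so hard, consider the brute-force approach a first-generation file system forces on administrators: crawl the entire directory tree and total up file sizes per owner, a job that can take hours or days at large scale. The minimal Python sketch below (paths and details are hypothetical, and it assumes a Unix host) illustrates that approach; a data-aware file system instead keeps this kind of aggregate continuously up to date in its metadata.

    # Brute-force space accounting on a conventional file system: walk every file
    # and total bytes per owning user. Illustrative only; at billion-file scale this
    # crawl is exactly the kind of work a data-aware file system makes unnecessary.
    import os
    import pwd
    from collections import defaultdict

    def space_by_owner(root="/mnt/share"):   # hypothetical mount point
        totals = defaultdict(int)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue                  # file vanished or permission denied
                try:
                    owner = pwd.getpwuid(st.st_uid).pw_name
                except KeyError:
                    owner = str(st.st_uid)    # uid with no local account
                totals[owner] += st.st_size
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    for owner, nbytes in space_by_owner():
        print(f"{owner}: {nbytes / 1e12:.2f} TB")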

In order to reach performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. An ASIC can speed up some storage-related operations, but this benefit comes at a heavy price, both in dollars and in flexibility: it can take years and millions of dollars to embed new features in an ASIC. By using very powerful and relatively inexpensive x86 processors, new features can be introduced quickly via software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel Xeon E5-2600 v3 family offers up to 18 cores) and more advanced features.

When Qumulo approached us to take a look at the world’s first data-aware, scale-out, enterprise-grade storage system, we welcomed the opportunity. Qumulo’s new storage system is not based on an academic project or designed around an existing storage system; it has been designed and built on entirely new code that the principals at Qumulo developed based on what they learned in interviews with more than 600 storage professionals. What they came up with after these conversations was a new data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would work in today’s data center.

Publish date: 03/17/15
Profile

Acaveo Smart Information Server: Bringing Dark Data into Light

In 2009, storage made up about 20% of the components in a fully burdened computing infrastructure. By 2015, that share has surged to 40% (and counting) as companies pour in more and more data. Most of this data is hard-to-manage unstructured data, which typically represents 75% to 80% of corporate data. This burdened IT infrastructure has two broad and serious consequences: it increases capital and operating expenses, and it cripples unstructured data management. Capital and operating expenses scale up sharply with the swelling storage tide. Today’s storage costs alone include buying and deploying storage for file shares, email, and ECMs like SharePoint. Additional services such as third-party file sharing and cloud-based storage add further cost and complexity.

Growing storage and complexity also make managing unstructured data extraordinarily difficult. A digital world is delivering more data to more applications than ever before. IT’s inability to visualize and act upon widely distributed data impacts retention, compliance, value, and security. In fact, this visibility (or invisibility) problem is so prevalent that it has gained its own name: dark data. Dark data plagues IT with hard-to-answer questions: What data is on those repositories? How old is it? What application does it belong to? Which users can access it?

IT may be able to answer those questions on a single storage system with file management tools. But across a massive storage infrastructure that includes the cloud? No. Instead, IT must do what it can to tier aging data, safely delete data when possible, and try to keep up with application storage demands across the map. The status quo is not going to get any better in the face of data growth: enterprise data is growing at 55% or more per year, and the energy ramifications alone of storing that much data are sobering. Data growth is reaching the point where it is overrunning the storage budget’s capacity to pay for it, and managing that data for cost control and business processes is harder still.
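
To see how quickly that rate compounds, here is a quick sketch using only the 55% annual growth figure cited above: capacity roughly doubles every 19 months and approaches nine times today's footprint within five years.

    # Compounding 55% annual data growth (illustrative arithmetic only).
    growth = 1.55
    capacity = 1.0   # normalize today's capacity to 1
    for year in range(1, 6):
        capacity *= growth
        print(f"year {year}: {capacity:.1f}x today's capacity")
    # Prints roughly 1.6x, 2.4x, 3.7x, 5.8x, and 8.9x.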

Conventional wisdom would have IT simply move data to the cloud. But conventional wisdom is mistaken. The problem is not how to store all of that data – IT can solve that problem with a cloud subscription. The problem is that once stored, IT lacks the tools to intelligently manage that data where it resides.

This is where highly scalable, unstructured file management comes into the picture: the ability to find, classify, and act upon files spread throughout the storage universe. In this Product Profile we’ll present Acaveo, a file management platform that discovers and acts on data-in-place, and federates classification and search activities across the enterprise storage infrastructure. The result is highly intelligent and highly scalable file management that cuts cost and adds value to business processes across the enterprise. 

Publish date: 02/27/15
Profile

Enterprise Flash - Scalable, Smart, and Economical

There is a serious re-hosting effort going on in data center storage as flash-filled systems replace large arrays of older spinning disks for tier 1 apps. Naturally, as costs drop and the performance advantages of flash-accelerated IO services become irresistible, these systems begin pulling in a widening circle of applications with varying QoS needs. Yet this extension leads to a wasteful tug-of-war between high-end flash-only systems that can’t effectively serve a wide variety of application workloads and so-called hybrid solutions, originally architected for HDDs, that are often challenged to provide the highest performance required by those tier 1 applications.

Someday, all-flash storage could theoretically drop in price enough to outright replace all other storage tiers, even at the largest capacities, although that is certainly not true today. Here at Taneja Group we think storage tiering will always offer a better way to deliver varying levels of QoS by balancing the latest performance advances appropriately against the most efficient capacities. In any case, the best enterprise storage solutions today need to offer a range of storage tiers, often even when catering to a single application’s varying storage needs.

There are many entrants in the flash storage market, with the big vendors now rolling out enterprise solutions upgraded for flash. Unfortunately, many of these systems are shallow retreads of older architectures, perhaps souped up a bit to better handle some hybrid flash acceleration but not able to take full advantage of it. Others are new dedicated flash-only point products with big price tags, immature or minimal data services, and limited ability to scale out or serve a wider set of data center QoS needs.

Oracle saw an opportunity for a new type of cost-effective flash-speed storage system that could meet the varied QoS needs of multiple enterprise data center applications – in other words, to take flash storage into the mainstream of the data center.

Oracle decided they had enough storage chops (from Exadata, ZFS, Pillar, Sun, etc.) to design and build a “flash-first” enterprise system intended to take full advantage of flash as a performance tier, but also incorporate other storage tiers naturally, including slower “capacity” flash, performance HDD, and capacity HDD. Tiering by itself isn’t a new thing – all the hybrid solutions do it and there are other vendor solutions that were designed for tiering – but Oracle built the FS1 Flash Storage System from the fast “flash” tier down, not by adding flash to a slower or existing HDD-based architecture working “upwards.” This required designing intelligent automated management to take advantage of flash for performance while leveraging HDD to balance out cost. This new architecture has internal communication links dedicated to flash media with separate IO paths for HDDs, unlike traditional hybrids that might rely solely on their older, standard HDD-era architectures that can internally constrain high-performance flash access.
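
For a concrete mental model of heat-based tier placement in general, the short sketch below scores each data extent by recent I/O activity and places the hottest extents on performance flash, cooler ones on capacity flash, and cold ones on capacity HDD. It is purely our own illustration of the technique, with made-up names and thresholds, and does not represent Oracle's actual FS1 placement logic.

    # Illustrative heat-based tier placement (not Oracle's actual FS1 algorithm).
    # The most active extents land on the fastest media, within rough tier budgets.
    from dataclasses import dataclass

    @dataclass
    class Extent:
        extent_id: int
        recent_io_count: int   # I/Os observed in the last sampling window

    def place(extents, flash_share=0.2, cap_flash_share=0.3):
        ranked = sorted(extents, key=lambda e: e.recent_io_count, reverse=True)
        n = len(ranked)
        placement = {}
        for i, ext in enumerate(ranked):
            if i < n * flash_share:
                placement[ext.extent_id] = "performance_flash"
            elif i < n * (flash_share + cap_flash_share):
                placement[ext.extent_id] = "capacity_flash"
            else:
                placement[ext.extent_id] = "capacity_hdd"
        return placement

    demo = [Extent(i, io) for i, io in enumerate([900, 5, 120, 0, 3000, 42])]
    print(place(demo))   # the hottest extents (ids 4 and 0) land on performance flash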

Oracle FS1 is a highly engineered SAN storage system with key capabilities that set it apart from other all-flash storage systems. It includes built-in QoS management that incorporates business priorities, best-practices provisioning, and application-aware storage alignment, for Oracle Database naturally, but also for a growing body of other key enterprise applications (such as Oracle JD Edwards, PeopleSoft, Siebel, MS Exchange/SQL Server, and SAP). A “service provider” capability can carve out multi-tenant virtual storage “domains” while online, enforced at the hardware partitioning level for top data security isolation.
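
To make the business-priority QoS idea concrete, here is a minimal, purely hypothetical sketch of how a priority level might translate into a workload's share of the array's I/O capacity. The names and weights are our own invention for illustration and do not represent Oracle FS1's actual interface or internals.

    # Hypothetical illustration of business-priority QoS weighting (not Oracle FS1's real API).
    PRIORITY_WEIGHTS = {"premium": 40, "high": 30, "medium": 20, "low": 10}

    def io_share(workloads):
        """Return each workload's fraction of total I/O, weighted by business priority."""
        total = sum(PRIORITY_WEIGHTS[w["priority"]] for w in workloads)
        return {w["name"]: PRIORITY_WEIGHTS[w["priority"]] / total for w in workloads}

    workloads = [
        {"name": "oracle_db_prod", "priority": "premium"},
        {"name": "exchange_mail", "priority": "high"},
        {"name": "file_archive", "priority": "low"},
    ]
    print(io_share(workloads))   # {'oracle_db_prod': 0.5, 'exchange_mail': 0.375, 'file_archive': 0.125}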

In this report, we’ll dive in and examine some of the great new capabilities of the Oracle FS1. We’ll look at what really sets it apart from the competition in terms of its QoS, auto-tiering, co-engineering with Oracle Database and applications, delivered performance, capacity scaling and optimization, enterprise availability, and OPEX-reducing features, all at a competitive price point that will challenge the rest of the increasingly flash-centric market.

Publish date: 02/02/15