Flash Memory Summit 2013 Reveals Future of NAND Flash, Predicts the End of Hard Disk Drives

In the relatively short and fast-paced history of data storage, the buzz around NAND Flash has never been louder, the product innovation from manufacturers and solution providers never more electric. Thanks to mega-computing trends, including analytics, big data, cloud and mobile computing, along with software-defined storage and the consumerization of IT, the demand for faster, cheaper, more reliable, more manageable, higher-capacity and more compact Flash has never been greater. But how long will the party last?

In this modern era of computing, the art of dispensing predictions, uncovering trends and revealing futures is de rigueur. To quote that well-known trendsetter and fashionista, Cher, “In this business, it takes time to be really good – and by that time, you’re obsolete.” While meant for another industry, Cher’s ruminations seem just as appropriate for the data storage space.

At a time when industry pundits and Flash solution insiders are predicting the end of mass data storage as we have known it for more than 50 years, namely the mechanical hard disk drive (HDD), storage futurists, engineers and computer scientists are paving the way for the next generation of storage beyond NAND Flash – even before Flash has had a chance to become a mature, trusted, reliable, highly available and ubiquitous enterprise class solution. Perhaps we should take a breath before we trumpet the end of the HDD era or proclaim NAND Flash as the data storage savior of the moment.

The Flash Memory Summit (FMS), held over three-plus days in August at the Santa Clara Convention Center, brought together nearly 200 exhibitors and speakers who regaled roughly 4,000 attendees with visions of Flash – present and future. FMS has grown significantly over the past 8 years, very recently attracting more than its traditional engineering and computer geek crowd. The Summit now embraces CIOs and other business executives cleaving to the Flash bandwagon, including Wall Street types looking to super-charge trading algorithms, web-based application owners seeking lower latencies for online transactions and a growing number of government and healthcare-related entities that need to sift through mountains of data more quickly.

Short History of Flash

Flash has been commercially available since its invention and introduction by Toshiba in the late 1980s. NAND Flash is at least an order of magnitude faster than HDDs, and because it is a form of non-volatile memory (NVM) with no moving parts, it requires far less power. NAND Flash is found in billions of personal devices, from mobile phones, tablets, laptops and cameras to thumb drives (USB sticks), and over the last decade it has become more powerful, compact and reliable even as prices have dropped, making enterprise-class Flash deployments much more attractive.

At the same time, IOPS-hungry applications such as database queries, OLTP (online transaction processing) and analytics have pushed traditional HDDs to the limits of the technology. To maintain performance, measured in IOPS or read/write speeds, enterprise IT shops have employed a number of HDD workarounds such as short stroking, thin provisioning and tiering. While HDDs can still meet the performance requirements of most enterprise-class applications, organizations pay a huge penalty in additional power consumption and data center real estate (it takes 10 or more high-performance HDDs to match the performance of even the slowest enterprise-class Flash or solid-state drive (SSD)), plus additional administrator, storage and associated equipment costs.
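
To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python. The IOPS and wattage figures are illustrative assumptions, not benchmarks from the Summit, chosen only to show the shape of the math.

```python
# Back-of-the-envelope comparison: how many high-performance HDDs does it take
# to match one slow enterprise SSD on random IOPS, and what does that cost in
# power? All figures below are illustrative assumptions.

HDD_IOPS = 200        # assumed random IOPS for a 15K RPM enterprise HDD
HDD_WATTS = 10        # assumed active power draw per HDD, in watts
SSD_IOPS = 2_500      # assumed random IOPS for a low-end enterprise SSD
SSD_WATTS = 6         # assumed active power draw for the SSD, in watts

hdds_needed = -(-SSD_IOPS // HDD_IOPS)   # ceiling division
print(f"HDDs needed to match one SSD on IOPS: {hdds_needed}")
print(f"HDD array power: {hdds_needed * HDD_WATTS} W vs SSD power: {SSD_WATTS} W")
```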

Flash is becoming pervasive throughout the compute cycle. It is now found on DIMM (dual inline memory module) memory cards to help solve the in-memory data persistence problem and improve latency. There are Flash cache appliances that sit between the server and a traditional storage pool to help boost access times to data residing on HDDs as well as server-side Flash or SSDs, and all-Flash arrays that fit into the SAN (storage area network) storage fabric or can even replace smaller, sub-petabyte, HDD-based SANs altogether.

There are at least three different grades of Flash drives, starting with the top-performing, longest-lasting – and most expensive – SLC (single-level cell) Flash, followed by MLC (multi-level cell), which stores two bits of data per cell, and TLC (triple-level cell), which stores three. As Flash manufacturers continue to push the envelope on drive capacity, the individual cells have gotten smaller; they are now below 20 nm (one nanometer is a billionth of a meter) in width, smaller than a human virus at roughly 30-50 nm.
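
A quick sketch of what those extra bits per cell mean in practice – the capacity multiplier and the number of charge levels each cell must reliably hold follow directly from the bit count:

```python
# Sketch of the SLC/MLC/TLC distinction: each additional bit per cell doubles
# the number of charge levels the cell must distinguish, which is where the
# capacity gain (and the endurance and error-rate penalty) comes from.

for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3)):
    levels = 2 ** bits  # distinct charge states per cell
    print(f"{name}: {bits} bit(s) per cell, {levels} charge levels, "
          f"{bits}x the capacity of SLC for the same number of cells")
```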

Each cell can endure only a finite number of writes and erasures before its performance starts to degrade; drive endurance is typically expressed in TBW, or total bytes written. This program/erase (P/E) cycling wears Flash and SSDs out because the oxide layer that holds each cell's charge degrades a little with every cycle. However, Flash management software increases longevity by striping data across drives, performing garbage collection and using wear-leveling to distribute writes evenly across the cells.
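
Here is a minimal sketch of the endurance arithmetic. The TBW rating, daily write volume and write-amplification factor are made-up inputs, used only to illustrate how the numbers combine:

```python
# Minimal sketch of how a TBW endurance rating translates into drive lifetime.
# All three inputs below are illustrative assumptions, not vendor figures.

TBW_RATING_TB = 1_000        # assumed endurance rating: 1,000 TB written
DAILY_HOST_WRITES_TB = 0.5   # assumed host writes per day, in TB
WRITE_AMPLIFICATION = 1.5    # assumed overhead from garbage collection

effective_daily_writes = DAILY_HOST_WRITES_TB * WRITE_AMPLIFICATION
lifetime_days = TBW_RATING_TB / effective_daily_writes
print(f"Expected endurance: {lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")
```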

Honey, I Shrunk the Flash!

As cells shrink below 20 nm, more bit errors occur. New 3D architectures announced and discussed at FMS by a number of vendors hold the promise of replacing the traditional NAND Flash floating gate architecture. Samsung, for instance, announced the availability of its 3D V-NAND, which leverages a Charge Trap Flash (CTF) technology in place of the traditional floating gate to help prevent interference between neighboring cells and improve performance, capacity and longevity.

Samsung claims the V-NAND offers an “increase of a minimum of 2X to a maximum 10X higher reliability, but also twice the write performance over conventional 10nm-class floating gate NAND flash memory.” If 3D Flash proves successful, it is possible that the cells can be shrunk to the sub-2nm size, which would be equivalent to the width of a double-helix DNA strand.

Enterprise Flash Futures and Beyond

Flash appears headed for use in every part of the server and storage fabric, from DIMM to server cache and storage cache and as a replacement for HDD across the board – perhaps even as an alternative to tape backup. The advantages of Flash are many, including higher performance, smaller data center footprint and reduced power, admin and storage management software costs.

As Flash prices continue to drop concomitant with capacity increases, reliability improvements and gains in drive longevity – which for the vast majority of applications already exceeds that of mechanical HDDs – the argument for Flash, or tiers of Flash (SLC, MLC, TLC), replacing HDD is compelling. The big question for NAND Flash is not when all Tier 1 apps will be running on Flash at the server and storage layers, but when Tier 2 and even archived data will be stored on all-Flash solutions.

Much of the answer resides in the growing demands for speed and data accessibility as business use cases evolve to take advantage of higher compute performance capabilities. The old adage that 90%-plus of data that is more than two weeks old rarely, if ever, gets accessed no longer applies. In the healthcare ecosystem, for example, longitudinal or historical electronic patient records now go back decades, and pharmaceutical companies are required to keep clinical trial data for 50 years or more.

Pharmacological data scientists, clinical informatics specialists, hospital administrators, health insurance actuaries and a growing number of physicians regularly plumb the depths of healthcare-related Big Data that is both newly created and perhaps 30 years or more in the making. Other industries, including banking, energy, government, legal, manufacturing, retail and telecom are all deriving value from historical data mixed with other data sources, including real-time streaming data and sentiment data.

All data may not be useful or meaningful, but that hasn't stopped business users from including all potentially valuable data in their searches and queries. More data is apparently better, and faster is almost always preferred, especially for analytics, database and OLTP applications. Even backup windows shrink with Flash, and recovery operations and other batch jobs often run much faster.

What Replaces DRAM and Flash?

Meanwhile, engineers and scientists are working hard on replacements for DRAM (dynamic random-access memory) and Flash, introducing MRAM (magnetoresistive), PRAM (phase-change), SRAM (static) and RRAM – among others – to the compute lexicon. RRAM or ReRAM (resistive random-access memory) could replace both DRAM and Flash, which use electrical charges to store data; RRAM instead uses resistance to store each bit of information. According to wiseGEEK, “The resistance is changed using voltage and, also being a non-volatile memory type, the data remain intact even when no energy is being applied. Each component involved in switching is located in between two electrodes and the features of the memory chip are sub-microscopic. Very small increments of power are needed to store data on RRAM.”

And according to Wikipedia, RRAM or ReRAM “has the potential to become the front runner among other non-volatile memories. Compared to PRAM, ReRAM operates at a faster timescale (switching time can be less than 10 ns), while compared to MRAM, it has a simpler, smaller cell structure (less than 8F² MIM stack). There is a type of vertical 1D1R (one diode, one resistive switching device) integration used for crossbar memory structure to reduce the unit cell size to 4F² (F is the feature dimension). Compared to flash memory and racetrack memory, a lower voltage is sufficient and hence it can be used in low power applications.”
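
For a sense of scale, here is a quick bit of arithmetic on the cell sizes quoted above; memory cell area is commonly expressed in multiples of F², where F is the process feature size. The 20 nm value of F below is an assumption chosen only to make the numbers concrete.

```python
# Quick arithmetic on the quoted cell-size figures (4F^2 crossbar cell vs.
# a sub-8F^2 MIM stack). The feature size F is an illustrative assumption.

F_NM = 20  # assumed process feature size, in nanometers
for label, multiple in (("ReRAM crossbar cell (4F^2)", 4), ("MIM-stack cell (8F^2)", 8)):
    area_nm2 = multiple * F_NM ** 2
    print(f"{label}: about {area_nm2} nm^2 at F = {F_NM} nm")
```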

Then there’s atomic storage, ostensibly a nanotechnology that IBM scientists and others are working on today. The approach is to see whether it is possible to store a bit of data on a single atom; to put that in perspective, a single grain of sand contains billions of atoms. IBM is also working on racetrack memory, a type of non-volatile memory that holds the promise of storing 100 times the capacity of current SSDs.

Flash Lives Everywhere! … for Now

Just as paper and computer tape drives continue to remain relevant and useful, HDD will remain in favor for certain applications, such as sequential processing workloads or when massive, multi-petabyte data capacity is required. And lest we forget, HDD manufacturers continue to improve the speed, density and cost equation for mechanical drives. Also, 90% of data storage manufactured today is still HDD, so it will take a while for Flash to outsell HDD and even for Flash management software to reach the level of sophistication found in traditional storage management solutions.

That said, there are Flash proponents who can’t wait for the changeover to happen and don’t want or need Flash to reach parity with HDD on features and functionality. One of the most talked-about keynote presentations at FMS was given by Facebook’s Jason Taylor, Ph.D., Director of Infrastructure and Capacity Engineering and Analysis. Facebook and Dr. Taylor’s point of view is: “We need WORM or Cold Flash. Make the worst Flash possible – just make it dense and cheap, long writes, low endurance and lower IOPS per TB are all ok.”

Other presenters, including Violin Memory CEO Don Basile and Pure Storage CEO Scott Dietzen, made relatively bold predictions about when Flash would take over the compute world. Basile showed a 2020 Predictions slide in his deck that stated: “All active data will be in memory.” He anticipates “everything” (all data) will be in memory within 7 years, except for archive data on disk. Meanwhile, Dietzen is an articulate advocate for all-Flash storage solutions because “hybrids (arrays with Flash and HDD) don’t disrupt performance. They run at about half the speed of all-Flash arrays on I/O-bound workloads.” Dietzen also suggests that with compression and data deduplication capabilities, Flash has reached or dramatically improved on cost parity with spinning disk.
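
Dietzen’s cost-parity argument reduces to simple division, sketched below. The prices and data-reduction ratio are illustrative assumptions, not figures from any vendor.

```python
# Sketch of the cost-parity argument: compression and deduplication divide the
# raw price per GB of Flash by the data-reduction ratio. All numbers below are
# illustrative assumptions.

FLASH_RAW_PRICE_PER_GB = 3.00    # assumed raw $/GB for an all-Flash array
HDD_RAW_PRICE_PER_GB = 0.75      # assumed raw $/GB for performance disk
DATA_REDUCTION_RATIO = 5.0       # assumed combined compression + dedupe ratio

effective_flash_price = FLASH_RAW_PRICE_PER_GB / DATA_REDUCTION_RATIO
print(f"Effective Flash cost: ${effective_flash_price:.2f}/GB "
      f"vs HDD: ${HDD_RAW_PRICE_PER_GB:.2f}/GB")
```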

Disruptive Flash Technology Vendors and Solution Providers

There are almost 100 companies now delivering products in the Flash data storage market, including more than 30 vendors offering all-Flash storage arrays. The companies represent a cross-section of Flash solution providers, from SSD drive and controller manufacturers to system integrators and software companies.

[Chart: Disruptive Flash Data Storage Providers]

Some companies, such as IBM and Intel, defy classification, as each is at once a manufacturer or fabricator, system integrator, storage software provider, nanotechnology developer and more. While the following categories are broad, they are indicative of the breadth and strength of the enterprise Flash solutions provider landscape as it stands today, represented by established, global technology firms as well as by startups looking to disrupt the enterprise data storage market.

ALL-FLASH PROVIDERS

This group consists of smaller, mostly private equity- or investor-backed companies that are primarily in the business of supplying all-Flash storage appliances to enterprises of all sizes. The success of these all-Flash providers hinges on their ability to exploit the advantages of inexpensive MLC NAND Flash, whether through proprietary hardware improvements or the development and delivery of a rich software feature set that improves Flash longevity, manageability and, of course, speed. Some version of MLC NAND Flash manufactured by a handful of providers, including Intel, Micron, Samsung, SanDisk, SK hynix and Toshiba, is included in all of these Flash-based storage solutions.

In smaller enterprises, Flash arrays have become affordable and functional enough to replace an organization’s entire HDD storage stack. In larger companies, all-Flash solutions co-exist with the legacy SAN fabric (and increasingly NAS as well) or sit closer to the application on a PCIe card within the server, providing the performance needed for mission-critical Tier 1 applications. Now that all-Flash vendors have succeeded in scaling their solutions up and/or out economically, it has become feasible for organizations to consider migrating away entirely from multi-tiered HDD storage strategies in favor of a single, performance-centric Flash storage tier.

Here is a link to related Wikibon research to view briefs of All-Flash Solution Providers including Astute Networks, Kaminario, Pure Storage, Skyera, SolidFire, Tegile, Virident and WHIPTAIL.

FLASH & HDD COMPONENT MANUFACTURERS

Companies in this category supply manufactured and/or fabricated components from Flash on DIMM and in PCIe cards used inside servers (PCIe cards are also being modified for use in Flash appliances that sit between the server and a SAN) to multiple grades of Flash (SLC, MLC, and TLC) used for enterprise-class storage arrays. Four of the manufacturers are also major suppliers of HDDs, and two are among the leading designers of semiconductors and software (controllers) that accelerate storage functionality in the data center.

Here is a link to related Wikibon research to view briefs of Flash and HDD Component Manufacturers including Diablo Technologies, Intel, LSI, Marvell, Samsung, SanDisk and Toshiba.

SOFTWARE, FLASH AND SYSTEMS INTEGRATORS 

Companies in this broad category range from investment-backed startups to some of the world’s largest and most admired technology companies. What they all have in common is a passion for integrating their own proprietary software with largely commodity storage hardware components, whether they be HDD, NAND Flash or PCIe-based solutions – or a combination of all the above. The “secret sauce” is in how these storage solution providers interweave their own software into an enterprise’s new and existing storage fabric, whether providing additional performance for mission critical applications or enhancing backup and recovery capabilities. Software-defined, application and policy-driven storage are key messages for this group, placing the emphasis on available storage software services and capabilities such as compression, deduplication, replication, snapshotting, policy-based data management and security rather than prioritizing the hardware.

Here is a link to related Wikibon research to view briefs of Software, Flash and HDD Systems Integrators including Coraid, Dell, IBM, NetApp and Permabit. 

Bottom Line

NAND Flash has definitively demonstrated its value for mainstream enterprise performance-centric application workloads. When, how and if Flash replaces HDD as the dominant media in the data storage stack remains to be seen. Perhaps some new technology will leapfrog over Flash and signal its demise before it has time to really mature.

For now, HDD is not going anywhere, as it represents over $30 billion of new sales in the $50-billion-plus total storage market – not to mention the enormous investment that enterprises have in spinning storage media that will not be replaced overnight. But Flash is gaining, and users and their IOPS-intensive apps want faster, cheaper, more scalable and manageable alternatives to HDD.

At least for the next five to seven years, Flash and its adherents can celebrate the many benefits of Flash over HDD. Users take note: For the vast majority of performance-centric workloads, Flash is much faster, lasts longer and costs less than traditional spinning disk storage. And Flash vendors already have their sights set on Tier 2 apps, such as email, and Tier 3 archival applications. Fast, reliable and inexpensive is tough to beat.

About Gary MacFadden

Gary's career in the IT industry spans more than 25 years, starting in software sales and marketing for IBM partners DAPREX and Ross Data Systems, then moving to the IT advisory and consulting industry with META Group, Giga and IDC. He is a co-founder of The Robert Frances Group, where his responsibilities have included business development, sales, marketing, senior management and research roles. For the past several years, Gary has been a passionate advocate for the use of analytics and information governance solutions to transform business and customer outcomes, and he regularly consults with both end-user and vendor organizations that serve the banking, healthcare, insurance, high tech and utilities industries. Gary is also a frequent contributor to the Wikibon.org research portal, a sought-after speaker for industry events, and he blogs regularly on Healthcare IT (HIT) topics.