Flash memory’s density surpasses hard drives for first time

“NAND flash memory has surpassed hard disk drive (HDD) technology in areal density for the first time, according to a new report from a market research firm,” Lucas Mearian reports for Computerworld.

“During a presentation at the 2016 IEEE International Solid State Circuits Conference (ISSCC) in San Francisco last week, Micron shared data showing NAND flash has moved past HDDs in areal density, according to Coughlin Associates,” Mearian reports. “Micron revealed it had demonstrated areal densities in its laboratories of up to 2.77Tbpsi for its 3D NAND. That compares with the densest HDDs of about 1.3Tbpsi.”

“Because of 3D NAND’s greater density, manufacturers such as Micron and Intel are opening new plants or are revamping older NAND facilities to increase their 3D production, which is driving prices down,” Mearian reports. “According to a recent report by DRAMeXchange, a division of market research firm TrendForce, the plummeting prices of SSDs have also driven their recent adoption in laptops. This year, SSDs will be used in around one-quarter of laptops. Next year, SSDs are expected to be in 31% of new consumer laptops, and by 2017 they’ll be in 41%, according to DRAMeXchange senior manager Alan Chen.”

Much more in the full article here.

MacDailyNews Take: This is an important inflection point for the storage industry.


    1. HDDs won’t go away for mass storage (think hundreds of TB, PB, or EB) for many years to come. The price of mass storage on HDDs is far below that of SSDs and will likely remain so for at least another decade.

      1. Only because of supply. Not because they are cheaper to make.

        Now that SSD is denser, demand will skyrocket as data centers attempt to reduce the cost per square foot of storage.

        If I was an analcyst, I would predict SSD prices dropping by 50% in the next 12 months as manufacturing surges to meet demand.

  1. MDN: This is an important inflection point for the storage industry.

    It’s also an inaccurate comparison: it pits a lab-only component that won’t be in the field for two to three (or more) years against shipping hard disk drives. HGST (and probably others) has 15 and 20 TB (and possibly larger) HDDs running in the lab today, with densities equal to (or higher than) those quoted by Micron for its lab-only SSD chips.

    Hopefully, SSD offerings very soon will surpass (and double as claimed) HDD offerings both in the lab and on the street, but not today.

  2. SSDs are still not ready for archiving prime time. Unless regularly “energized” they can lose data, like hard drives (which I’ve had fail after sitting unused on a shelf). So far no reasonably priced panacea is available for long-term storage of lots of digital data. SSDs, though, are great for day-to-day computing use (I don’t have to tell you).

    1. There is no need to regularly “energize” SSDs; in fact, there is no such thing. Don’t confuse DRAM refresh with flash memory, which does not refresh. FYI, hard drives left sitting fail due to bearings seizing.

        1. iDontDoWindows also don’tKnowFlash. A flash block is written once and immediately read back to ensure correct readability. If data in the same block of flash needs to be changed, a merge of the current data and the new data is written to a new block. If the data in a block does not need to be changed, it is never rewritten. Failures occur during writes, so writes need to be kept to a minimum.
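The out-of-place update described above (merge current and new data, program a fresh block, leave the old one untouched) can be sketched with a toy model. Everything here — the `FlashSim` class, its block layout — is invented for illustration and ignores real-world details like pages, erase cycles, and garbage collection:

```python
# Toy sketch of out-of-place NAND writes; not a real flash translation layer.
class FlashSim:
    def __init__(self, num_blocks):
        self.physical = [None] * num_blocks   # block contents (None = erased/free)
        self.stale = [False] * num_blocks     # old blocks awaiting erase
        self.map = {}                         # logical block -> physical block
        self.writes = 0                       # program operations performed

    def _free_block(self):
        for i, data in enumerate(self.physical):
            if data is None and not self.stale[i]:
                return i
        raise RuntimeError("no free blocks; garbage collection needed")

    def write(self, logical, new_data):
        # Merge with current contents, then program a *new* physical block;
        # the old block is only marked stale, never rewritten in place.
        old = self.map.get(logical)
        merged = dict(self.physical[old]) if old is not None else {}
        merged.update(new_data)
        target = self._free_block()
        self.physical[target] = merged
        self.writes += 1
        if old is not None:
            self.stale[old] = True
        self.map[logical] = target

    def read(self, logical):
        return self.physical[self.map[logical]]

fs = FlashSim(num_blocks=4)
fs.write(0, {"a": 1})
fs.write(0, {"b": 2})          # update: merge + program a fresh block
print(fs.read(0))              # {'a': 1, 'b': 2}
print(fs.map[0], fs.stale[0])  # 1 True  (data moved; original block is stale)
```

Note how the second write costs a full block program even though only one key changed, which is why the comment stresses keeping writes to a minimum.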

        2. OK. My point is that flash is the current best. HDs must be spun up and tape must be re-spooled every few years, and both suffer from high temperatures. Flash does not need maintenance during storage; the biggest worry is inadvertent loss due to its small size. I think a USB interface will survive a reasonably long period of time. I have already seen punch cards and 9-track tape become useless (I can no longer read them).

        3. Charge decay (often referred to as one form of “bit rot”) is a real thing with SSDs. The stored charge (especially with multi-level cells) does decay, and the stored data does become unreliable. Using something like RAID-60 (often called RAID 6-0) extends the life of a large SSD array. There are even error-correction algorithms in large-array-specific file systems that rival RAID-60, but even then storage reliability is nowhere near decades. The only real option with SSDs is to read the data out well before significant bit rot sets in, apply appropriate correction algorithms, and write the data to another SSD array.

          In a way it is not unlike massive tape archives of the past few decades. It’s just that the time scales and storage amounts are significantly larger (but then the amount that is being archived today is significantly larger too).
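The read-out-and-rewrite cycle described above is essentially a scrub pass. As a minimal sketch (a hypothetical illustration, not any vendor’s tooling), each block carries a checksum; blocks that still verify are rewritten to refresh their charge, while failures are flagged for ECC or redundant-copy recovery:

```python
# Hypothetical scrub pass: verify, refresh good blocks, flag bad ones.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def scrub(blocks):
    """blocks is a list of (data, stored_digest) pairs."""
    refreshed, bad = [], []
    for data, digest in blocks:
        if checksum(data) == digest:
            refreshed.append((data, digest))  # rewriting refreshes the charge
        else:
            bad.append((data, digest))        # recover via ECC / redundant copy
    return refreshed, bad

blocks = [(b"payload-1", checksum(b"payload-1")),
          (b"payload-2", checksum(b"original"))]  # second digest no longer matches: simulated bit rot
good, bad = scrub(blocks)
print(len(good), len(bad))  # 1 1
```

The key point is the timing: the scrub must run well before decay exceeds what the array’s correction algorithms can repair, just as tape archives had to be re-spooled on a schedule.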

        4. danzaph

          Don’t think I am dissing SSD’s. You should just understand the limitations and internals of how they work if you want to make claims based on that.

          BTW – I design SSD controllers for a living. I live, breathe, and eat this stuff. It’s what keeps me awake at night.

          And no matter what – back up your data.

        5. From http://bit.ly/1Xei4mn: “Spansion two-bit-per-cell MirrorBit flash devices are designed to provide 20 years of data retention after initial programming when exposed to a 55°C (131°F) environment.” Lower storage temperatures provide longer retention. DVD-RWs are also good for up to 25 years at 25°C (77°F) and 50% relative humidity in the dark, but have a small storage density by today’s standards. In any event, in 10 to 20 years you will probably want to move the data to a newer storage device.

        6. You don’t build SSDs out of that type of NAND. Those are in a 90nm process. SSD flash (read: cost-effective) is built on roughly 15nm nodes.

        7. danzaph

          NAND flash is full of bit flips. The RBER (raw bit error rate) of flash is around 1e-6; you would not get any data back without ECC. But at some point the ECC is overwhelmed and your data is no longer correct. Due to the cost pressure on SSDs, this problem is only getting worse, not better.

          Things have moved from simple cyclic BCH (Galois field) codes to iterative low-density parity-check (LDPC) codes.
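To get a feel for why ECC is indispensable at an RBER near 1e-6, a back-of-envelope binomial model helps. The codeword size and correction strength `t` below are assumptions for illustration, not figures from the thread:

```python
# Binomial model: probability that a codeword has more raw bit errors
# than a t-bit-correcting code can fix, with i.i.d. errors at rate rber.
from math import comb

def p_uncorrectable(n_bits, rber, t):
    p_ok = sum(comb(n_bits, k) * rber**k * (1 - rber)**(n_bits - k)
               for k in range(t + 1))
    return 1 - p_ok

n = 4096 * 8  # assume one 4 KiB codeword
print(p_uncorrectable(n, 1e-6, 0))  # no ECC: roughly 3% of codewords fail
print(p_uncorrectable(n, 1e-6, 8))  # 8-bit-correcting ECC: vanishingly small
```

Even this crude model shows the pattern the commenter describes: raw flash is unusable on its own, a modest code makes it dependable, and as RBER worsens with shrinking geometries the required correction strength climbs — hence the move toward LDPC.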

    1. Yep, love the pops and clicks. In the radio biz in the “vinyl days” we would get a dozen copies and then play each one to find the pressing with the least noise, pops, and clicks. Then there was the noise from all the back-cueing to get right to the start of the song. There were also 16″ discs so warped they had to be taped down to the turntable.
