It seems like the industry is getting hyped up over Solid State Drives (SSDs) again. Analysts have been predicting the death of electromechanical disk drives and the coming of solid state drives for close to three decades now. The first ferrite-memory SSD devices emerged during the era of vacuum tube computers, but were discontinued with the introduction of cheaper drum storage units. In the 1970s and 1980s, SSDs were implemented in semiconductor memory for early supercomputers from IBM, Amdahl and Cray, but high prices kept them out of mainstream storage products, which is still the case today. As of mid-2008, Flash SSD prices remain considerably higher per gigabyte than comparable conventional hard drives: typically $2 to $3.50 per GB for flash drives, compared to less than $0.15 per GB for hard drives. With the advent of MLC (multi-bit-per-cell) flash, SSD cost in $/GB is falling by 50% annually, and analysts expect prices to become competitive with Hard Disk Drives (HDDs) by 2012.
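The 2012 projection follows from simple compounding. A quick sketch, using the midpoint of the quoted $2–$3.50 range as a starting point and (my assumption, for simplicity) holding the HDD price flat:

```python
# Hypothetical projection of the 50% annual $/GB decline for MLC flash.
# The $2.75 midpoint and the flat HDD price are illustrative assumptions.
ssd_cost_2008 = 2.75   # $/GB, midpoint of the $2-$3.50 range
hdd_cost_2008 = 0.15   # $/GB for conventional hard drives in mid-2008

for year in range(2008, 2014):
    ssd_cost = ssd_cost_2008 * 0.5 ** (year - 2008)
    print(f"{year}: flash ${ssd_cost:.3f}/GB vs HDD ${hdd_cost_2008:.2f}/GB")
# By 2012 flash reaches ~$0.17/GB, within striking distance of 2008 HDD prices.
```

In reality HDD $/GB keeps falling too, which is why the crossover keeps receding; but at a 50% annual decline, flash closes most of an order-of-magnitude gap in four years.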
In the meantime, the disk drive guys have blown past many predictions of when they would reach the superparamagnetic limit (the theoretical maximum number of bits that can be recorded per square inch). They have already hit 1 trillion bits per square inch (1 terabit/in²), which means we will see 5TB 3.5” drives before long. Physics will catch up with them sooner or later; my prediction is that they will hit the wall within the next 10 years and SSDs will begin to take over. Seagate is already working on hybrid drives (solid state memory combined with a traditional disk drive), so the industry will go through a hybrid period, which promises increased performance from a front-end read/write cache combined with economical high-density disk.
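The 5TB figure checks out as back-of-the-envelope geometry. A rough sketch, where the platter count and usable recording band are my assumptions, not figures from any vendor:

```python
import math

# Back-of-the-envelope capacity of a 3.5" drive at 1 terabit/in^2.
# Geometry assumptions (mine): recording band from 1.0" to 1.8" radius,
# 4 platters with both sides in use.
areal_density = 1e12          # bits per square inch
outer_r, inner_r = 1.8, 1.0   # usable recording band, inches
surfaces = 8                  # 4 platters x 2 sides

area = math.pi * (outer_r**2 - inner_r**2) * surfaces   # ~56 in^2 total
raw_bytes = areal_density * area / 8
print(f"raw capacity ~ {raw_bytes / 1e12:.1f} TB")      # ~7 TB before formatting/ECC overhead
```

A raw figure around 7TB leaves comfortable headroom for servo, ECC and formatting overhead on the way to a shipping 5TB product.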
Today’s major impediments to the widespread use of SSDs are limited Flash write endurance, data loss from non-recoverable bit errors, and poor random write performance. Flash memory cells wear out after 1,000 to 10,000 write cycles for MLC, and 100,000 write cycles for SLC (single-bit-per-cell). Some high-endurance cells may withstand 1–5 million write cycles, but log files, file allocation tables, and other frequently rewritten parts of the file system can easily exceed this over the lifetime of a computer. Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device (so-called wear leveling), rather than rewriting files in place. In 2008, wear leveling was just beginning to be incorporated into devices.
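The core idea of wear leveling can be shown in a few lines. This is a toy model of my own, not any vendor's firmware: on each logical write, the controller redirects the data to the least-worn physical block instead of rewriting in place, so a "hot" location like a log file spreads its erase cycles across the whole device.

```python
# Minimal sketch of dynamic wear leveling (toy model, illustrative only).
class ToyFTL:
    """A toy flash translation layer that maps logical blocks to physical blocks."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.mapping = {}                    # logical block -> physical block

    def write(self, logical_block):
        # Direct the write to the physical block with the fewest erases.
        physical = min(range(len(self.erase_counts)),
                       key=self.erase_counts.__getitem__)
        self.erase_counts[physical] += 1     # rewriting flash requires an erase
        self.mapping[logical_block] = physical

ftl = ToyFTL(num_blocks=4)
for _ in range(100):
    ftl.write(0)                             # a "hot" logical block, e.g. a log file
print(ftl.erase_counts)                      # wear spreads evenly: [25, 25, 25, 25]
```

Without the remapping, all 100 erases would land on one block; with it, no block sees more than its fair share, which is exactly why wear leveling multiplies effective endurance by roughly the number of blocks in the device.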
In the case of HDDs, the drive industry took twenty years to perfect disk controllers that overcome electromechanical and magnetic-media imperfections for reliable storage. Similarly, SSD controllers will have to evolve and mature to solve the endurance and reliability problems created by MLC Flash, and by process technology as it scales from the current 65nm to smaller nodes over the next few years. The existing SSDs that EMC and other storage vendors have announced use expensive SLC (single-bit-per-cell) Flash chips, and their controllers offer an incomplete solution to the performance, endurance and data loss problems. Global NAND Flash fabrication capacity is shifting completely to MLC owing to its higher density and lower cost, so SLC drives will be short-lived.
All of this suggests SSDs are at least a generation away from full usability, especially in the enterprise. However, IDC forecasts that deployment of SSDs in enterprise computing will pick up by 2010, and that enterprise computing applications will grow from 12% of SSD revenue in 2007 to more than 50% by 2011. I believe SSDs will predominantly be used for things like boot applications, or for applications requiring high random read performance, but won’t be used for primary storage intended for high-duty-cycle workloads, or for applications requiring high write performance and reliability, until the cost comes down further and the error correction technology for MLC flash matures.