Modern SSDs offer a much better GB-per-dollar ratio than they did a few years ago. When SSDs first came to market, they were generally in the 64GB or 128GB capacity range and still cost more than multi-terabyte HDDs. For years, the assumption was that if you wanted lots of storage without paying high prices, you needed an HDD and had to accept the lower performance.
Things are a little different now. Yes, SSDs are still more expensive per GB than HDDs, but the gap has narrowed considerably. The current pricing sweet spot is a 2TB SSD, which costs around twice as much as a 2TB HDD, and for that extra cost you get a far more significant performance advantage than before.
It’s still true that HDDs are cheaper if you want many terabytes of storage; a large RAID array, for example, is still cheaper to build from HDDs. But if you’re only dealing with everyday home-user levels of storage, a one- or two-terabyte SSD is more than enough and won’t break the bank.
How Did the Price Come Down?
So what changed? What brought the price down to reasonable levels? First of all, the technology has simply matured; it gets cheaper to make these things over time. Some technological breakthroughs have been real game-changers, though. 3D NAND (marketed by Samsung as V-NAND) allowed significant increases in storage density by stacking memory cells on top of each other rather than squeezing them ever closer together on a single plane. This is not dissimilar to how a multi-story car park fits more cars into the same footprint as a flat parking lot.
Modern SSDs now generally use TLC flash memory. TLC stands for Triple-Level Cell, meaning that each memory cell stores three bits of data. That triples the storage capacity of the same number of memory cells compared to the Single-Level Cell (SLC) memory used in early SSDs.
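As a rough illustration of why storing more bits per cell multiplies capacity, and why each extra bit makes a cell harder to program, here is a small Python sketch. The cell count is an arbitrary round number chosen purely for the example.

```python
# Capacity from the same number of flash cells at different bits-per-cell.
# The cell count is an arbitrary round number chosen for illustration.
CELLS = 1_000_000_000_000  # one trillion flash cells

for name, bits_per_cell in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    voltage_levels = 2 ** bits_per_cell            # charge states each cell must distinguish
    capacity_gb = CELLS * bits_per_cell / 8 / 1e9  # bits -> bytes -> decimal GB
    print(f"{name}: {bits_per_cell} bit(s)/cell, {voltage_levels} levels, {capacity_gb:.0f}GB")
```

The same trillion cells hold three times as much data as TLC than as SLC, but each cell must reliably hold one of eight charge levels instead of two, which is where the write-speed trouble described below comes from.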
These three changes explain the majority of the price improvement in SSDs. However, there have been plenty of other developments too. The thing is, TLC comes with some pretty big caveats.
What’s the Problem with TLC?
The problem with packing multiple bits of data into a single memory cell is that writing becomes significantly more complex: a TLC cell has to be programmed to one of eight distinct voltage levels rather than the two an SLC cell needs, and hitting those levels precisely takes longer. This is a problem because SSDs are supposed to be fast; the industry has been driving new generations of interface standards to double and redouble bandwidth for speedier storage.
While the latest PCIe 5 SSDs can still read from TLC at a blazing 16GB/s or so, you definitely can’t write to it that fast. In fact, direct TLC write speeds are generally somewhere around 2,000MB/s. That’s still much faster than an HDD, but slower than the peak speeds of PCIe 3 SSDs.
Note: TLC isn’t the only type of flash memory in use. A relatively small number of Quad-Level Cell (QLC) SSDs are on the market, and Penta-Level Cell (PLC) SSDs are in development, storing 4 and 5 bits of data per cell, respectively. Direct write speeds to QLC memory are currently in the region of 350MB/s, which puts them in the same ballpark as an HDD.
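To put those figures in perspective, here is a small back-of-the-envelope Python sketch showing how long a 100GB write would take at each rate. The speeds are the approximate ballpark numbers quoted above, not benchmarks of any specific drive.

```python
# Rough transfer-time comparison using the approximate speeds quoted above.
# These are illustrative ballpark figures, not measurements of any specific drive.
SPEEDS_MB_PER_S = {
    "PCIe 5 read (peak)": 16000,   # ~16GB/s sequential read
    "TLC direct write":    2000,   # ~2,000MB/s once any cache is exhausted
    "QLC direct write":     350,   # ~350MB/s
    "Typical HDD write":    200,   # ~200MB/s, for comparison
}

WRITE_SIZE_GB = 100  # e.g. restoring a large backup

for label, mb_per_s in SPEEDS_MB_PER_S.items():
    seconds = (WRITE_SIZE_GB * 1000) / mb_per_s
    print(f"{label:20s}: {seconds / 60:5.1f} minutes for {WRITE_SIZE_GB}GB")
```

The gap between what the interface can move and what the flash can actually absorb is exactly what the SLC cache, described next, is there to hide.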
Enter the SLC Cache
SSD manufacturers developed SLC caching to get around these heavily reduced write speeds. The trick is simple: incoming data is first written to super-fast SLC flash, then copied across to the slower TLC flash in the background as soon as possible. This enables the advertised fast write speeds of the SSD, as long as there is SLC cache space left to write into. That isn’t a problem in most cases, but it can be if you’re writing a large amount of data in one go; restoring a backup, for example, typically involves writing to a large percentage of the drive.
The SLC cache typically comes in two distinct parts: a static SLC cache and a dynamic pseudo-SLC cache. The static cache is generally tiny, often less than 10GB even on large 2TB drives, but it is always available, even when the drive is almost full. The dynamic cache, as the name suggests, varies in size based on the remaining space on the drive.
Larger SSDs have larger pseudo-SLC caches and can therefore sustain peak speeds for larger writes. It’s important to note that the dynamic cache size is based on the remaining free space, not the total drive capacity, so it shrinks as the drive fills up. Many SSDs can offer roughly a third of their free space as dynamic SLC cache, which works out to around 600GB on a largely empty 2TB drive.
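As a rough illustration of how the dynamic cache shrinks as a drive fills, the sketch below assumes the controller runs its free TLC capacity in one-bit-per-cell mode, yielding a cache of about a third of the free space, plus a small fixed static cache. The one-third policy and the 6GB static figure are assumptions for the example; the real policy is controller-specific.

```python
def dynamic_slc_cache_gb(capacity_gb: float, used_gb: float,
                         static_cache_gb: float = 6.0) -> float:
    """Estimate the available SLC cache on a TLC drive.

    Assumes free TLC flash can be run in one-bit-per-cell (pseudo-SLC)
    mode, so free space yields roughly a third of its size as cache,
    on top of a small fixed static SLC cache. Illustrative only.
    """
    free_gb = max(capacity_gb - used_gb, 0)
    dynamic_gb = free_gb / 3  # 1 bit per cell instead of 3
    return static_cache_gb + dynamic_gb


# An empty 2TB drive vs. the same drive 75% full:
print(dynamic_slc_cache_gb(2000, 0))     # roughly 670GB of cache when empty
print(dynamic_slc_cache_gb(2000, 1500))  # roughly 170GB when 75% full
```

The exact numbers vary by model, but the trend is the point: the fuller the drive, the smaller the window in which it can accept data at full speed.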
The SSD controller writes incoming data to the SLC cache because it is fast: the host can supply data faster than the much slower TLC flash could absorb it directly. When the SSD is sitting idle, the controller copies that data across to the TLC memory at its slower write speed, storing it more space-efficiently and freeing up the SLC cache to accept further high-speed writes. As long as there is space in the SLC cache, the SSD can operate at its peak advertised speeds; once the cache is full, the drive has to slow down, which is why a large SLC cache is useful.
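That write path can be captured in a toy model: bursts land in the SLC cache at full speed while it has room, overflow drops to direct-to-TLC speed, and an idle-time flush drains the cache back out to TLC. The class below is a minimal sketch with assumed speeds and sizes, not any vendor’s firmware logic.

```python
class SlcCachedDrive:
    """Toy model of SLC write caching. Speeds and sizes are illustrative only."""

    SLC_WRITE_MB_S = 6500   # assumed cached (advertised) write speed
    TLC_WRITE_MB_S = 2000   # assumed direct-to-TLC write speed

    def __init__(self, cache_gb: float):
        self.cache_capacity_mb = cache_gb * 1000
        self.cache_free_mb = self.cache_capacity_mb

    def write(self, size_mb: float) -> float:
        """Accept `size_mb` of incoming data and return the seconds it takes."""
        cached = min(size_mb, self.cache_free_mb)   # fast path: lands in the SLC cache
        direct = size_mb - cached                   # overflow: written straight to TLC
        self.cache_free_mb -= cached
        return cached / self.SLC_WRITE_MB_S + direct / self.TLC_WRITE_MB_S

    def idle_flush(self, seconds: float) -> None:
        """While idle, migrate cached data into TLC, freeing the cache for new bursts."""
        freed = seconds * self.TLC_WRITE_MB_S
        self.cache_free_mb = min(self.cache_capacity_mb, self.cache_free_mb + freed)


drive = SlcCachedDrive(cache_gb=600)
print(drive.write(200_000))    # 200GB burst fits entirely in the cache: full speed
print(drive.write(700_000))    # only 400GB of cache is left; the rest drops to TLC speed
drive.idle_flush(seconds=300)  # idle time lets the controller reclaim cache space
```

This is also why sustained-write benchmarks show a characteristic cliff: the drive runs at its headline speed until the cache runs out, then falls back to the native TLC (or QLC) rate.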
Potential Future
No SSDs make use of it at the moment, but there is a potential use case for an MLC cache too. MLC stands for Multi-Level Cell, a somewhat confusingly named scheme that stores two bits of data per cell rather than one or three. It is slower than SLC but faster than TLC, so while an SLC cache offers speeds MLC couldn’t match, an MLC cache built from the same flash would offer twice the capacity.
Theoretically, this would be an excellent middle ground: the drive could run at peak SLC caching speed until the SLC cache is consumed, then drop to an MLC cache if more data still needs to be written. That would still be faster than writing directly to TLC or QLC memory, though it would likely require more complicated controller logic.
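If a secondary MLC cache were ever used, the fallback order might look something like this purely hypothetical sketch: SLC first, then MLC, then direct TLC. The tier sizes and the MLC speed are assumptions made up for the illustration.

```python
# Purely hypothetical tiered write caching: SLC first, then MLC, then direct TLC.
# Tier sizes and speeds are assumptions for illustration only.
TIERS = [
    ("SLC cache", 100_000, 6500),      # (name, capacity in MB, write speed in MB/s)
    ("MLC cache", 200_000, 4000),      # assumed intermediate speed and size
    ("Direct TLC", float("inf"), 2000),
]


def tiered_write_time(size_mb: float) -> float:
    """Fill each tier in order and return the total seconds for the write."""
    total_s = 0.0
    remaining = size_mb
    for name, capacity_mb, speed_mb_s in TIERS:
        chunk = min(remaining, capacity_mb)
        total_s += chunk / speed_mb_s
        remaining -= chunk
        if remaining <= 0:
            break
    return total_s


print(tiered_write_time(400_000))  # a 400GB burst spilling through all three tiers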
While TLC write speeds have remained reasonably fast, this hasn’t been necessary. As QLC and PLC SSDs become more common, bringing further write speed reductions with them, a secondary MLC cache might be one way the technology develops to alleviate the problem.
Conclusion
SLC caching is a clever method of write caching on SSDs. It allows writes of hundreds of gigabytes at high transfer speeds on flash memory that nominally can’t be written to anywhere near that fast. Data written to the cache is then flushed to the TLC or QLC flash as quickly as possible, freeing the cache back up for peak-speed transfers.
The amount of SLC cache varies depending on the remaining free space on the drive. This means larger and emptier drives can write more data at peak speeds than smaller SSDs or drives closer to capacity. What do you think? Let us know in the comments below.