Synchronous DRAM, or SDRAM, is the current standard for DRAM. Its primary use is for system RAM, though it's also used as VRAM on graphics cards and in virtually every other application of DRAM. It is so dominant in its field that the "S" is typically dropped, and it's simply referred to as DRAM. The synchronization of SDRAM is critical to its performance and was instrumental in its rise over its predecessor, asynchronous DRAM.
Working In Sync
Synchronous refers to the fact that SDRAM has an internal clock whose speed is known to the system. That isn't to say it runs at the same clock speed as the CPU, but the CPU knows exactly how fast the RAM's clock ticks. This allows interactions with the RAM to be precisely scheduled, so the I/O bus is fully utilized rather than being left idle just to guarantee that commands don't interfere with one another.
Part of the problem is that when writing data to DRAM, the data must be provided at the same time as the write command. When reading data, however, the data is returned two or three clock cycles after the read command is issued. This means the DRAM controller needs to allow enough time for read operations to complete before a write operation happens. With asynchronous DRAM, this was handled by simply allowing more than enough time for each operation to complete. That practice, however, left the I/O bus idle while the controller waited long enough to be sure, which was an inefficient use of resources.
Synchronous DRAM uses an internal clock to synchronize the transfer of data and the execution of commands. This lets the memory controller time operations to make optimum use of the I/O bus, ensuring higher performance levels.
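To see why a predictable clock matters, here is a minimal scheduling sketch in Python. It is purely illustrative: the CAS latency of 3 cycles and the worst-case guard of 6 cycles are assumptions, not figures from any real DRAM datasheet. With a known latency, the controller issues the next command the moment the bus frees up; without it, every operation is padded with a worst-case delay.

```python
# Toy comparison of synchronous vs. asynchronous command scheduling.
# All cycle counts here are illustrative assumptions.

CAS_LATENCY = 3   # read data appears this many cycles after a READ command
GUARD_TIME = 6    # worst-case padding an asynchronous controller might use

def synchronous_schedule(commands):
    """Issue each command as soon as the clock says the bus is free."""
    cycle, schedule = 0, []
    for cmd in commands:
        schedule.append((cycle, cmd))
        # In this toy model, a read occupies the bus until its data
        # returns; a write supplies data with the command, so the bus
        # frees up on the next cycle.
        cycle += CAS_LATENCY if cmd == "READ" else 1
    return schedule

def asynchronous_schedule(commands):
    """Pad every operation with a worst-case delay to be safe."""
    return [(i * GUARD_TIME, cmd) for i, cmd in enumerate(commands)]

ops = ["READ", "WRITE", "READ", "WRITE"]
print("sync: ", synchronous_schedule(ops))   # finishes in 8 cycles
print("async:", asynchronous_schedule(ops))  # last command issues at cycle 18
```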
Improvements Over Asynchronous DRAM
Outside of the improved control that better timing allows, the main improvement of SDRAM is the ability to have multiple banks of memory within the DRAM. Each bank essentially operates independently. Within a bank, only one row can be open at once, but a second row can be opened in a different bank, allowing read and write operations to be pipelined. This design keeps the I/O bus from sitting idle while a new read or write operation is being queued up, increasing efficiency.
One way to think about this is as adding a third dimension to a two-dimensional array. You can still only read or write data from one place at a time, but you can prepare another row in a different bank while the current one is being accessed.
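A rough sketch of that pipelining in Python (the two-bank layout, the request stream, and the idea that activating a row fully overlaps with the previous read are all simplifying assumptions):

```python
# Illustrative two-bank interleaving; bank count, rows, and the
# request stream are made up for the example.

class Bank:
    def __init__(self, name):
        self.name = name
        self.open_row = None

    def activate(self, row):
        """Open a row; only one row per bank can be open at a time."""
        self.open_row = row

banks = [Bank("A"), Bank("B")]
requests = [("A", 0), ("B", 4), ("A", 7), ("B", 2)]  # (bank, row) pairs

for i, (bank_name, row) in enumerate(requests):
    # While this request's data is on the bus, activate the row the
    # *next* request needs in the other bank so it is ready in time.
    if i + 1 < len(requests):
        nxt_bank, nxt_row = requests[i + 1]
        next(b for b in banks if b.name == nxt_bank).activate(nxt_row)
    print(f"reading row {row} from bank {bank_name}")
```

Real hardware juggles more banks and many more timing constraints, but overlapping the "activate" of one bank with the data transfer of another is the core of the win.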
Another benefit of SDRAM comes from the inclusion of timing data on a small chip on the memory module, known as the SPD (Serial Presence Detect) chip. Some modern RAM sticks allow performance faster than the official DRAM standards by encoding their specific timing information on that chip. It can also be possible to manually override these settings, allowing the RAM to be "overclocked." This is often an involved process, as many timing values can be configured, and it tends to provide only a minimal performance benefit. Overclocking RAM also comes with a risk of instability but can offer advantages in some workloads.
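One way to see why the benefit is usually small is to convert the CAS latency from cycles into nanoseconds. The formula below is the standard conversion for DDR-style memory, where the command clock runs at half the transfer rate; the example modules are hypothetical:

```python
def cas_latency_ns(cl_cycles, transfer_rate_mts):
    """Absolute CAS latency in nanoseconds for a DDR-style module.

    The command clock ticks once per two transfers, so one cycle
    lasts 2000 / transfer_rate nanoseconds.
    """
    return cl_cycles * 2000 / transfer_rate_mts

# Hypothetical modules: tightening CL 16 -> 14 at the same transfer
# rate saves little more than a nanosecond of absolute latency.
print(cas_latency_ns(16, 3200))  # 10.0 ns
print(cas_latency_ns(14, 3200))  # 8.75 ns
```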
Improvements Over Time
The actual memory clock speed hasn't increased much since the release of SDRAM. The first iteration of SDRAM received the retronym SDR, short for Single Data Rate, to distinguish it from the later DDR, or Double Data Rate, memory. These types, as well as many other forms of DRAM, are all examples of SDRAM. The clock cycle of the DRAM chip sets the minimum time between the fastest operations of the DRAM. For example, reading a column from an open row takes a single clock cycle.
It's important to note that there are two distinct clock speeds for SDRAM: the internal memory clock and the I/O bus clock. Both can be controlled independently and have been increased over time. The internal clock is the speed of the memory arrays themselves and directly influences latency. The I/O clock controls how often data that has been read from – or will be written to – the SDRAM can be transmitted. This clock speed, combined with the width of the I/O bus, determines the bandwidth. The two clocks are linked, and both are critical to the high performance of SDRAM.
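As a sketch of how those numbers combine, peak bandwidth is simply the I/O clock times the number of transfers per clock times the bus width. The figures below are illustrative, assuming the 64-bit bus of a typical memory module:

```python
def peak_bandwidth_gbs(io_clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak bandwidth in GB/s: I/O clock x transfers/clock x bus width."""
    bytes_per_transfer = bus_width_bits / 8
    return io_clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

# A DDR-style device transfers twice per I/O clock, so a 1600 MHz
# I/O clock on a 64-bit bus gives 25.6 GB/s peak.
print(peak_bandwidth_gbs(1600, 2, 64))  # 25.6
```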
How Speeds Have Increased
The official JEDEC standard for the first generation of DDR SDRAM specified memory clocks between 100 and 200MHz. DDR3 still offered 100MHz memory clocks, though it also standardized clock speeds up to 266.6MHz. Despite this, internal changes to the I/O clock speed and the amount of data fetched per read operation meant that even at the same 100MHz memory clock, DDR3 quadrupled the available bandwidth compared to DDR.
DDR4 changed the upgrade pattern and doubled the memory clock, with a range between 200 and 400MHz, again achieving a doubling of available bandwidth while reducing latency. The DDR5 standard also starts with a memory clock of 200MHz but reaches up to 450MHz, reverting to doubling the amount of data transferred per cycle to double the bandwidth.
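Putting those generational changes together: the transfer rate is the memory clock multiplied by the prefetch length, the number of bits fetched per column access. The prefetch values below are the standard JEDEC figures (2n for DDR, 8n for DDR3 and DDR4, 16n for DDR5), and the clocks are the lower bounds mentioned above:

```python
# Transfer rate = memory clock x prefetch length. Prefetch values are
# the standard JEDEC figures; clocks are each generation's base speed.
generations = {
    "DDR":  (100, 2),   # (memory clock MHz, prefetch)
    "DDR3": (100, 8),
    "DDR4": (200, 8),
    "DDR5": (200, 16),
}

for name, (clock_mhz, prefetch) in generations.items():
    print(f"{name}: {clock_mhz * prefetch} MT/s")
# DDR: 200 MT/s, DDR3: 800 MT/s, DDR4: 1600 MT/s, DDR5: 3200 MT/s
```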
Conclusion
Synchronous DRAM is the primary type of DRAM in use today. It is the basis for system RAM and for VRAM in graphics applications. By synchronizing the actions of the DRAM with clocks, the actual timing of the DRAM is known, allowing operations to be efficiently queued up for execution. This is much more efficient than the asynchronous approach of leaving more than enough time, which was necessary because there was no direct way to know when a specific command had completed.
The clocks that control SDRAM are critical to its high performance. They determine how often commands can be run and how fast data can be read from or written to the DRAM. Because these timings are known, they can be optimized for peak performance.