If you’ve ever played a competitive video game and won, you’ll probably have seen your opponent blame “lag” for their loss. Lag is one form of latency, though not the only one, as latency can have multiple definitions.
Latency is a measure of the time difference between cause and effect. In the real world, the time it takes for an arrow to fly from the bow that fired it to its target is an excellent example of latency. Another way to define it is travel time, or propagation delay.
Latency in Computer Networking
Computer networking is where the term latency is primarily used. In non-trivial networks, it has four primary components: transmission, propagation, processing, and queueing delays. The transmission delay is the time between the first bit of a transmission being put on the wire and the last bit being put on the wire.
The propagation delay is the time a bit (typically the first) takes to travel down the wire from one end to the other. The processing delay is the time the receiving device takes to process the transmission, generally deciding whether to pass it on to the next hop toward its true destination. The queueing delay is the time the transmission spends waiting in a queue before being put on the next wire.
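The four components above can be sketched numerically. The link parameters here are illustrative assumptions, not measurements of any real network; only the transmission and propagation formulas are standard:

```python
# A minimal sketch of the four delay components for a single hop.
# All link parameters below are assumed values for illustration.

PACKET_BITS = 1500 * 8          # a typical 1500-byte Ethernet frame
BANDWIDTH_BPS = 100e6           # assumed 100 Mbit/s link
LINK_LENGTH_M = 5_000_000       # assumed 5,000 km cable
PROPAGATION_SPEED = 2e8         # roughly 2/3 the speed of light in fibre/copper
PROCESSING_S = 10e-6            # assumed 10 microseconds to process the packet
QUEUEING_S = 50e-6              # assumed 50 microseconds waiting in a queue

# Time to put every bit of the packet onto the wire.
transmission_s = PACKET_BITS / BANDWIDTH_BPS
# Time for a single bit to travel from one end of the wire to the other.
propagation_s = LINK_LENGTH_M / PROPAGATION_SPEED

total_s = transmission_s + propagation_s + PROCESSING_S + QUEUEING_S
print(f"transmission: {transmission_s * 1e3:.3f} ms")
print(f"propagation:  {propagation_s * 1e3:.3f} ms")
print(f"total one-way delay: {total_s * 1e3:.3f} ms")
```

With these assumed numbers, propagation dominates: roughly 25 ms for the bit to cross the wire versus about 0.12 ms to transmit the packet, which is why distance matters so much for long-haul links.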
In modern computing devices, all of these times are typically very short, as devices can perform billions of operations per second. These nanosecond-scale delays add up, though, especially on transmissions that have to travel further. The typical latency of internet traffic between the UK and the US is on the order of 100 milliseconds. Someone living near the server they’re communicating with may see latencies as low as ten or even eight milliseconds; over the internet, this is typically the lowest latency achievable due to the amount of infrastructure involved. Local area networks can see sub-millisecond latencies.
The Other Form of Latency
Actual latency is simply the time between cause and effect. In the case of computer networks, the cause is the transmission of network traffic, and the effect is its receipt and processing by the intended recipient. This isn’t particularly easy to measure, and for interactive systems with a human involved, it doesn’t tell the whole story.
Round Trip Time, often shortened to RTT, is the time it takes for a transmission to be sent and the response to be received by the original sender. This value is typically twice the actual latency between the two devices, as the signal needs to make the journey twice: once there, once back. Minor variations can occur because the route taken may not be identical, and some component delays may differ slightly between the two trips.
Internet users, especially gamers, refer to this round trip time as “ping.” Ping is a networking tool that measures the round trip time between the sender and a recipient. It sends a simple message that generates a standard “echo” response from the recipient. While Ping is the tool’s name, it has also become the general term for this type of round trip time measurement.
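The send-and-time-the-echo idea can be sketched in a few lines. This isn’t real ICMP ping (which usually needs raw-socket privileges); instead it bounces a message off a throwaway local TCP echo server, but the measurement principle is the same:

```python
# A rough sketch of a ping-style round trip measurement: send a message,
# wait for the echo, and time the gap. Uses a local TCP echo server
# rather than ICMP, so it runs without special privileges.
import socket
import threading
import time

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(64))  # echo the message straight back

# Start a throwaway echo server on a random local port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(server.getsockname()) as client:
    start = time.perf_counter()
    client.sendall(b"ping")   # the "cause"
    reply = client.recv(64)   # the "effect", arriving back at the sender
    rtt = time.perf_counter() - start

print(f"round trip time: {rtt * 1000:.3f} ms")
```

Over loopback the result is well under a millisecond; point the same timing logic at a distant host and you’d see the internet-scale round trip times discussed above.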
While the round trip time or ping might not be true latency, it is the latency the user perceives, because it is when the user can first see the result of their action. This is particularly important in reaction-based scenarios such as most competitive video games, where a ping of 100 milliseconds can be a devastating disadvantage. Other activities, like web browsing, are much less sensitive to ping; even a 500-millisecond ping would be a small part of a page load time.
A Gaming Example
“Peeker’s advantage” is an example of the effect of latency from video games. In shooter games, a common defensive strategy is to find a location with good cover and good sight lines and then lie in wait for an enemy. While it may seem like the defender has a big advantage because they can hide while maintaining good sight lines, the attacker has a range of options.
Some are tactical, such as using utility items like flashbangs and smoke cover to deny visibility, audio cues to distract defenders, or even fake moves to draw the defenders away. The attacker’s other advantage is the peeker’s advantage, which exists thanks to ping.
Because there’s a round trip delay to the game server and back to the other players, no move is perfectly synchronized across player computers. Instead, everyone has a window of opportunity, the length of the round trip time, in which they can act before the other players can see it.
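The timeline can be sketched with a few lines of arithmetic. The round trip time and human reaction time below are assumed round numbers for illustration, not measurements from any particular game:

```python
# An illustrative timeline of peeker's advantage.
# All numbers are assumptions chosen for illustration.
RTT_S = 0.100  # assumed 100 ms round trip between the players, via the server

attacker_peeks_at = 0.0                        # attacker steps out from cover
defender_sees_at = attacker_peeks_at + RTT_S   # defender's screen updates only now
DEFENDER_REACTION_S = 0.200                    # assumed ~200 ms human reaction time

defender_fires_at = defender_sees_at + DEFENDER_REACTION_S
attacker_window_s = defender_fires_at - attacker_peeks_at

print(f"attacker can act for {attacker_window_s * 1000:.0f} ms "
      f"before the defender can respond")
```

Even before anyone reacts, the attacker has the full round trip time of unseen movement, which is why higher pings make peeking disproportionately strong.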
Peeker’s advantage is the concept of using this delay when peeking around a corner into a sight line likely held by a defender. The defender should have the advantage, as they are already looking in the right place and can react to movement. The attacker has to check multiple locations for a defender who may be partially hidden or not there at all, then aim and shoot if necessary.
The attacker can step out from around the corner to gain visibility, but the defender can’t see them do so until the round trip time has passed, because their computer hasn’t received that information yet. The person acting has the advantage because the round trip time delays when the enemy can start reacting to their action.
Conclusion
Latency is the delay between a cause and effect. Technically, it is the delay until the actual effect; the wait until the perceived effect is often also referred to as latency in computer networking, though it is more appropriately called round trip time. The latency of a connection depends primarily on the distance between the two ends, though the number of intermediate hops also has an effect.