Network latency as a KPI has gained more focus in recent years with the introduction of 5G networks. It is commonly understood as the time it takes for an IP packet to travel from the source to the destination, and it is measured in milliseconds (ms). The lower the latency, the faster data reaches the end point to deliver a given digital service. The bandwidth (bit rate, capacity) available in the network further determines the number of IP packets that can be transmitted at the same time.
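To make the interplay between packet timing and bandwidth concrete, here is a small illustrative calculation (the link speeds are assumptions for illustration, not figures from this article): the time needed just to serialize a single 1500-byte IP packet onto links of different speeds.

```python
# Illustrative only: the serialization (transmission) delay of one IP packet,
# i.e. the time to push its bits onto the wire at a given link speed.

PACKET_BYTES = 1500  # a typical MTU-sized IP packet

for label, bits_per_second in [("10 Mbit/s", 10e6),
                               ("100 Mbit/s", 100e6),
                               ("1 Gbit/s", 1e9)]:
    delay_ms = PACKET_BYTES * 8 / bits_per_second * 1000
    print(f"{label:>10}: {delay_ms:.3f} ms per packet")
# 10 Mbit/s: 1.200 ms, 100 Mbit/s: 0.120 ms, 1 Gbit/s: 0.012 ms
```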
With 4G, latency dropped significantly compared to good old 3G, the first real mobile data service. 5G non-standalone and 5G standalone networks promise to push latency even further down, to as low as 1 ms, along with higher bit rates. But what latency do consumers really experience today in commercial mobile networks? How should you measure and understand latency? Can you interpret latency metrics in a novel way and better understand the performance of your network?
Where does latency come from?
While the transfer bit rate reflects the weakest link between the data source and the destination, e.g. the bottleneck speed between a server and a mobile user, latency builds up along the entire network path. The longer the path, the higher the latency typically is. Naturally, physical and link-layer technologies matter: every node on the path adds some amount of latency, and the available network capacity ultimately dictates how quickly IP packets get through a router, switch, or cellular base station.
In modern cellular networks, latency is primarily a sum of three components:
- Distance to the vantage point,
- Performance of the end points themselves, and
- Capacity of the wireless link.
The vantage point affects the absolute latency. If we measure latency from the mobile device to the base station hardware, in favorable conditions we can get as low as 1 ms. Yet, the further from the mobile device we measure, the more the latency grows, both due to the physical distance and due to all the nodes on the path that need to process the IP packets. Still, it is worth noting that the distance dictates the lower bound of the latency; it does not directly affect the fluctuation of the latency (often referred to as jitter) or the upper bound.
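As a back-of-the-envelope sketch of that lower bound, the propagation delay can be estimated from the route length alone, assuming light travels at roughly two thirds of its vacuum speed in optical fiber. The route distances below are illustrative assumptions, not measured fiber paths.

```python
# Back-of-the-envelope propagation delay: light in optical fiber travels at
# roughly 2/3 of its vacuum speed, i.e. about 200 km per millisecond.

SPEED_IN_FIBER_KM_PER_MS = 200.0

def min_rtt_ms(route_km: float) -> float:
    """Lower bound on round-trip time from propagation alone
    (no queuing, no processing): there-and-back distance over signal speed."""
    return 2 * route_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical route lengths; real fiber paths exceed the great-circle distance.
print(f"Helsinki-Frankfurt, ~1,900 km route: {min_rtt_ms(1900):.0f} ms")  # ~19 ms
print(f"Metro-area server, ~50 km route: {min_rtt_ms(50):.1f} ms")        # ~0.5 ms
```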
Core and access networks are primarily built with optical links, and they are extremely fast. The resulting latency comes in most cases from the physical distance to the vantage point. Still, if a part of the network becomes congested, it will increase the latency and potentially cause packet loss.
The end points themselves can also affect the latency, depending on how they, e.g. the server and the mobile device, handle the data. In the context of crowd-sourced measurements, today's smartphones are rather powerful and should not add any significant latency through data processing. Servers, however, can become a bottleneck when overloaded and thereby introduce very significant latency into the measurements.
In a cellular network, the provisioning of radio connectivity is the most complex and difficult part. As end users are mobile, they can be virtually anywhere within reach of the antenna signal, which forces the radio link to behave differently depending on the circumstances. Any number of end users may need to be served with data; they move around, triggering handovers, and their apps and data transfer needs differ. In many situations, modern cellular technologies handle this complexity and serve the end users well. But at times, the sheer load of these users and their apps can congest the radio interface and force the base station to buffer incoming data before it can be transmitted on the downlink. The uplink behaves similarly, except that the data is buffered on the mobile device before it can be transmitted over the radio link.
Because modern cellular networks are built to offer reliable transfer of IP packets, there has to be adequate buffer space to hold user data until it can be transmitted on the radio link. If capacity becomes an issue at a given base station, it will buffer users' data and thereby increase the latency. This buffering-induced latency can reach a full second even in 5G networks, a thousandfold higher than the advocated 1 ms.
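A minimal sketch shows how quickly buffering reaches the second range: the queueing delay a new packet experiences is roughly the amount of data already buffered ahead of it divided by the rate at which the radio link drains the buffer. The buffer levels and link rate below are illustrative assumptions, not measured values.

```python
# Illustrative queueing delay at a congested base station: a new packet waits
# for everything already buffered ahead of it to drain over the radio link.

def queueing_delay_ms(buffered_bytes: float, link_mbps: float) -> float:
    """Time to drain the queue ahead of a newly arriving packet."""
    return buffered_bytes * 8 / (link_mbps * 1e6) * 1000

# Assumed buffer levels and drain rate; a megabyte or so quickly means a second.
print(f"{queueing_delay_ms(125_000, 10):.0f} ms")    # 125 kB at 10 Mbit/s -> 100 ms
print(f"{queueing_delay_ms(1_250_000, 10):.0f} ms")  # 1.25 MB at 10 Mbit/s -> 1000 ms
```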
How to measure the latency that matters?
With 5G, the mobile community has been advocating 1 ms latency for services. What is typically left unsaid is that such low latency refers to the connection between the mobile device and a base station running some form of edge service. In essence, this is the radio link latency. With Wi-Fi, we get the same latency from the radio link.
Many legacy network measurement platforms separate data transfers and latency from each other: they measure latency on an empty radio link and then test data transfer speeds. This mode of operation seeks to show an optimistic latency, a theoretical lower bound that a customer could experience if they had no ongoing data transfer. Yet people seldom use apps on their smartphones without any data transfer. Moreover, these platforms place the measurement vantage point physically as close to the consumer as possible, to further lower the measured latency.
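To illustrate the difference, here is a minimal sketch that compares idle latency with latency measured while a bulk transfer is running, using TCP handshake time as a crude RTT proxy. The host and download URL are placeholders, and a real setup would use dedicated echo servers and saturate both directions separately.

```python
# Minimal sketch: idle latency vs. latency under load ("working latency").
# TCP handshake time serves as a crude RTT proxy; HOST and BULK_URL are
# placeholders, not real measurement infrastructure.

import socket
import statistics
import threading
import time
import urllib.request

HOST, PORT = "example.com", 443           # placeholder latency target
BULK_URL = "https://example.com/big.bin"  # placeholder large download

def tcp_rtt_ms(host: str, port: int) -> float:
    """Approximate RTT as the time to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        return (time.perf_counter() - start) * 1000

def sample(n: int) -> list[float]:
    return [tcp_rtt_ms(HOST, PORT) for _ in range(n)]

idle = sample(10)                          # latency on a quiet link

loader = threading.Thread(                 # saturate the downlink in background
    target=lambda: urllib.request.urlopen(BULK_URL).read(), daemon=True)
loader.start()
time.sleep(1)                              # let the transfer ramp up
loaded = sample(10)                        # latency while data is flowing

print(f"idle:   median {statistics.median(idle):.1f} ms")
print(f"loaded: median {statistics.median(loaded):.1f} ms")
```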
With Netradar, our customers do not seek to simply measure this best-case latency. They want to understand the real latency their end users experience throughout their daily network usage. Netradar calculates over ten different latency-related metrics and can store individual latency samples, with downlink and uplink studied separately. All this is coupled with extensive contextual information about the radio network to enable extremely accurate and detailed analysis of cellular network quality.
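Netradar's actual metric set is proprietary, but as an illustration of the kind of statistics one can derive from stored per-direction samples, here is a sketch computing a few common ones (minimum, median, tail percentile, jitter, and the share of high-latency samples); the sample values are made up.

```python
# Illustrative statistics over raw latency samples for one direction
# (not Netradar's actual metric set, which is proprietary).

import statistics

def latency_metrics(samples_ms: list[float]) -> dict[str, float]:
    q = statistics.quantiles(samples_ms, n=100)       # cut points p1..p99
    return {
        "min": min(samples_ms),                       # distance-driven lower bound
        "median": statistics.median(samples_ms),
        "p95": q[94],                                 # tail latency
        "jitter": statistics.pstdev(samples_ms),      # fluctuation of the samples
        "share_over_100ms": sum(s > 100 for s in samples_ms) / len(samples_ms),
    }

downlink = [28, 31, 27, 45, 120, 30, 29, 33, 210, 31]  # made-up samples (ms)
print(latency_metrics(downlink))
```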
Moreover, as highlighted earlier, base station capacity and congestion increase the latency. Netradar's proprietary algorithms (a form of AI) use the momentary bit rate of the app traffic, coupled with latency and contextual information, to indicate network capacity issues. The system is highly optimized: a full month of detailed latency and capacity analysis consumes merely 2-3 MB of data per user.
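Netradar's algorithms themselves are proprietary, so the following is only a conceptual sketch of the general idea of correlating bit rate with latency: if the round-trip time inflates far above the idle baseline while throughput stays flat, the bottleneck is likely a congested, queue-building link rather than distance. All thresholds and sample values are assumptions for illustration.

```python
# Conceptual sketch only (Netradar's real algorithms are proprietary):
# flag a likely capacity issue when RTT inflates while throughput stays flat.

import statistics

def capacity_issue(samples: list[tuple[float, float]],
                   baseline_rtt_ms: float,
                   inflation: float = 3.0) -> bool:
    """samples: (throughput_mbps, rtt_ms) pairs taken during active traffic."""
    rates = [t for t, _ in samples]
    rtts = [r for _, r in samples]
    rtt_inflated = statistics.median(rtts) > inflation * baseline_rtt_ms
    rate_flat = statistics.pstdev(rates) < 0.1 * statistics.mean(rates)
    return rtt_inflated and rate_flat      # queuing delay, not distance

# Made-up trace: throughput plateaus near 8 Mbit/s while RTT keeps climbing.
trace = [(8.1, 95), (7.9, 160), (8.0, 240), (8.2, 310), (7.8, 280)]
print(capacity_issue(trace, baseline_rtt_ms=30))  # True
```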
Real-world Examples
Netradar develops the core technology and AI to analyze the quality of cellular and Wi-Fi networks. Our customers understand the difference between buying legacy third-party crowd-sourced data and collecting detailed private data from their own network. When the real performance and development of the cellular network is critical, the data has to be reliable and extensive.
To support our technology development, we do our own data collection around the world, using a network of distributed measurement points and a load balancer to measure latency to various vantage points.
Let's take Finland and Germany as examples. I selected data measured against the same vantage point, located in Frankfurt, and filtered it to consider only 4G and 5G connections across all mobile operators. The analysis shows that:
- The lowest latency in Finland was around 26 ms, while in Germany it was 12 ms. The difference is natural given the longer physical distance from Finland to the Frankfurt vantage point.
- The average latency experienced by consumers was 89 ms in Finland and 81 ms in Germany. Considering the roughly 14 ms the extra distance adds, Finns actually experience a slightly lower latency than German consumers (89 - 14 = 75 ms vs. 81 ms).
- The highest latencies in both countries go way beyond one second.
- Looking at measured latencies over 100 ms for Germans and over 114 ms for Finns (100 ms + 14 ms to account for the physical distance), Finns encounter them 15% of the time and Germans 17% of the time (see the sketch after this list).
- When comparing 4G and 5G NSA, we see surprising numbers. The lowest latencies are slightly higher for 5G than for 4G, while one would expect the opposite. There is no significant difference in average or worst-case latencies. Because the data is aggregated across all national operators, operator-level differences exist, and one low-performing provider will drag down the national results. For example, in the German data we see one provider with systematically higher latencies in both 4G and 5G compared to the competition.
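As a sketch of the distance-adjusted comparison above, the computation boils down to summarizing each country's samples against a threshold shifted by its propagation baseline. The sample lists below are made up for illustration; the real analysis runs over far larger sample sets with radio-context filtering.

```python
# Sketch of the distance-adjusted country comparison on made-up sample lists.

def summarize(samples_ms: list[float], distance_offset_ms: float,
              base_threshold_ms: float = 100.0) -> dict[str, float]:
    threshold = base_threshold_ms + distance_offset_ms  # e.g. 100 + 14 for Finland
    return {
        "min": min(samples_ms),
        "avg": sum(samples_ms) / len(samples_ms),
        "share_over_threshold": sum(s > threshold for s in samples_ms)
                                / len(samples_ms),
    }

finland = [26, 40, 55, 61, 75, 89, 130, 300]  # hypothetical samples (ms)
germany = [12, 30, 42, 48, 70, 81, 150, 450]
print("FI:", summarize(finland, distance_offset_ms=14))
print("DE:", summarize(germany, distance_offset_ms=0))
```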
In summary, the mobile industry has not yet truly come to grips with measuring the real latency of a cellular network as end users and their smartphones experience it. Moreover, Netradar uses its deep understanding of latency and its behavior to analyze network capacity shortages. Hopefully this article will trigger some new thinking about how cellular network performance should be measured and understood. You can always email me at jukka.manner@netradar.com if you have any thoughts on the topic, and join our forthcoming webinar to learn more. Register here: www.netradar.com/webinar