
What is Latency?


The speed of information is a high priority in a world that relies on web-based applications. Many applications require near real-time responsiveness, which in turn depends on the network's latency. Much recent networking research focuses on overcoming delays in data transmission so that applications can send and receive data faster than ever.

Latency is defined as the time it takes for a data packet to travel from its source to a destination. It can be measured in two ways, one-way or round trip, depending on the use case. One-way latency is the time for a packet to travel from the source to the destination. Round-trip latency is the time for the packet to reach the destination and for an acknowledgement to return to the source. Moving data has two fundamental costs: a time cost, which is latency, and a capacity cost, which is bandwidth.
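As a rough illustration, round-trip latency to a host can be estimated by timing a TCP connection handshake, since the three-way handshake requires one full round trip before the connection is established. Below is a minimal sketch in Python; the target host and port are placeholders, and the measurement includes a small amount of OS overhead:

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Estimate round-trip latency by timing a TCP handshake.

    connect() does not return until the three-way handshake completes,
    so the elapsed time approximates one network round trip.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only need the elapsed time
    return (time.perf_counter() - start) * 1000.0

# Example with a placeholder host:
print(f"RTT estimate: {tcp_round_trip_ms('example.com'):.1f} ms")
```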

How to measure latency

Latency plays a vital role in how we use web-based applications, data storage, edge computing, video conferencing, streaming services, and gaming applications, just to name a few. High network latency can have a profoundly negative effect on how these applications perform. A latency of around 100ms or less is generally considered good for a responsive web application.

Latency is the time it takes for data to travel from the user to the destination network and for acknowledgements to return to the originating network. Latency is commonly reported as percentiles such as p90, p95, and p99. A p99 latency, for instance, is the value at or below which 99 out of 100 HTTP requests complete; only 1 in 100 is slower. Likewise, p90 and p95 are the thresholds under which 90% and 95% of requests complete, with only the remaining fraction of requests observing a higher latency.
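To make the percentile idea concrete, here is a minimal sketch that computes p90, p95, and p99 from a list of measured request latencies using the nearest-rank method; the sample values are made up for illustration:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Return the value at or below which `pct` percent of samples fall.

    Nearest-rank method: sort the samples, then pick the value at the
    ceiling of pct% of the sample count.
    """
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds.
latencies_ms = [12, 15, 14, 90, 18, 16, 250, 17, 13, 19]

for pct in (90, 95, 99):
    print(f"p{pct}: {percentile(latencies_ms, pct)} ms")
```

In this sample, p99 is 250 ms: almost all requests finish quickly, but the slowest 1% dominate the tail, which is why percentiles are preferred over averages for latency reporting.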

What causes high latency?

Several factors can contribute to latency, such as data packet loss, packet size, database storage delays, and the transmission medium (coaxial cable, fiber optics, WAN links, etc.). Latency can also accumulate at any point between the user and the end server, and may be caused by any or even all of these factors along the way.

Bandwidth is important for performance, but it refers only to the capacity of a link: how much data can be transferred and received in a given time from a source to a destination. Throughput, by contrast, is the amount of data that actually reaches the destination per unit of time, and it is directly constrained by latency.
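One way to see that relationship: for a single TCP connection, achievable throughput is capped at roughly the window size divided by the round-trip time, no matter how much bandwidth the link has. A quick back-of-the-envelope calculation (the window and RTT values below are illustrative assumptions):

```python
# Illustrative TCP throughput ceiling for one connection:
#   throughput <= window_size / round_trip_time
window_bytes = 64 * 1024   # assumed 64 KiB TCP receive window
rtt_seconds = 0.100        # assumed 100 ms round-trip latency

max_throughput_bps = (window_bytes * 8) / rtt_seconds
print(f"Max throughput: {max_throughput_bps / 1e6:.2f} Mbit/s")
# ~5.24 Mbit/s -- far below a gigabit link's capacity, showing how
# latency rather than bandwidth can become the bottleneck.
```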

Latency has become a critical factor for growth at the enterprise level. Businesses and applications transmit massive amounts of data and demand a rapid network response to minimize the impact on the user experience. 

Conclusion

Quick network response is a key enabling factor for technology providers, users, and the industry as a whole. Reducing latency is widely seen as a primary goal for improving how data is stored and retrieved. That said, latency is not the only challenge in data transmission; it is simply one of the most important, because high latency directly degrades user experience and network accessibility.

Learn more about Macrometa’s Global Data Network (GDN) and how it offers a P99 round trip latency of <50ms with ready-to-go industry solutions to accelerate business outcomes.

Related reading:

Driving Low Latency With Global PoPs
