Latency and Its Importance During Performance Tests

What is Latency?
Suppose a request is sent from an application to a server. There is a certain time lag before the request reaches the server and before the server starts the actual processing of that request. This time lag is called latency or network latency.

In other words: Response time of a request = Processing time + Latency
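
As a purely illustrative sketch of that formula (the processing time and latency figures below are assumptions, not measurements), the same server-side work yields very different response times once latency changes:

```python
# Illustrative numbers only: response time = processing time + latency.
processing_time_ms = 50   # assumed time the server spends handling the request

for latency_ms in (1, 300):                      # local load source vs. distant user
    response_time_ms = processing_time_ms + latency_ms
    print(f"latency={latency_ms:>3} ms -> response time={response_time_ms} ms")
# latency=  1 ms -> response time=51 ms
# latency=300 ms -> response time=350 ms
```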

Why consider Latency?
The majority of performance tests conducted within organizations are run from a local load source, using a performance testing tool such as JMeter or LoadRunner, in the same availability zone as the application. Because the tool resides in the same network, the latency is extremely low or negligible, perhaps a few milliseconds. In the real world, however, an application will never see this kind of latency, because the networks involved are different. As observed in multiple projects, network latency can add even up to 750 ms, depending on the mix of global traffic the application receives. Under those conditions the application can fall far short of user expectations, resulting in SLA non-compliance.

Normally, a web server spends less time waiting when latency is low, say 1 ms, which allows it to handle a much larger volume of traffic spread over far fewer threads. Now consider raising the latency to 300 ms. If the application has a 1:1 thread-to-transaction ratio, this can mean an increase of roughly 300% in concurrency at any given point in time, as the sketch below illustrates. That jump can leave the server running out of threads or memory, and it will never reach the performance levels seen during low-latency tests.
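
A rough way to see the scale of that effect is Little's law (concurrent requests ≈ throughput × response time). The throughput and processing-time figures below are assumed purely for illustration:

```python
# Rough sketch of why higher latency inflates concurrency (Little's law):
# concurrent requests ~= throughput (req/s) x response time (s).
throughput_rps = 1000          # assumed steady request rate hitting the server
processing_time_ms = 100       # assumed server-side processing time per request

for latency_ms in (1, 300):
    response_time_s = (processing_time_ms + latency_ms) / 1000.0
    concurrency = throughput_rps * response_time_s
    print(f"latency={latency_ms:>3} ms -> ~{concurrency:.0f} requests in flight")

# ~101 vs ~400 in-flight requests: with a 1:1 thread-per-transaction model,
# each in-flight request holds a thread, so this is roughly the 300% increase
# in concurrency mentioned above, all absorbed by the thread pool and its memory.
```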

Latency issues can also highlight probable bottlenecks in the code where the application blocks processing while waiting for other threads to complete.
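
As a hedged sketch of that pattern (the pool size and the 300 ms delay are assumptions), a fixed worker pool that blocks synchronously on a slow downstream call caps throughput no matter how little CPU work each request needs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Simulated synchronous call to a downstream service: under ~300 ms of
    # added latency this worker thread does nothing but wait.
    time.sleep(0.3)
    return f"request {request_id} done"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:   # assumed fixed-size pool
    results = list(pool.map(handle_request, range(100)))
elapsed = time.perf_counter() - start
print(f"100 requests in {elapsed:.1f} s with 10 threads "
      f"(~{100 / elapsed:.0f} req/s ceiling)")
```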

How to Introduce Latency during Tests
One can mimic production latency in the performance testing environment to ensure that the application is not only tested for its ideal performance, but also stressed at production-like concurrency levels instead of the artificially low ones seen under negligible latency. To mimic a production-like environment, generate the load for the tests remotely from a cloud provider such as AWS, or emulate various bandwidths in-house.
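
One in-house option, sketched below under the assumption of a Linux load generator (the interface name and delay value are placeholders, and root privileges are required), is to shape the load injector's own network interface with tc/netem before the test and clean it up afterwards:

```python
# Minimal sketch: add artificial network latency on a Linux load generator
# using tc/netem, then remove it after the test run.
import subprocess

IFACE = "eth0"        # assumed network interface on the load-injection box
DELAY = "300ms"       # assumed delay to emulate

def add_latency() -> None:
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem", "delay", DELAY],
        check=True,
    )

def remove_latency() -> None:
    subprocess.run(
        ["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
        check=True,
    )

if __name__ == "__main__":
    add_latency()
    try:
        pass  # run the JMeter/LoadRunner scenario here
    finally:
        remove_latency()
```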

There are also tools in the market that can help emulate various bandwidths during performance tests; the tc/netem approach sketched above is one such option on a Linux load generator.

How to Monitor Latency and Impact during Tests
There are multiple tools that can help monitor the application while the performance tests are being run. Some of them are:

  • Dynatrace – can be used to perform application monitoring and identify bottlenecks arising due to latency
  • Latencies Over Time JMeter plugin – can help in identifying latency trends at the individual transaction level during test runs (a small result-parsing sketch follows this list)
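
For a quick look at the latency figures JMeter itself records, a small post-processing sketch like the one below can be run against a CSV .jtl results file. It assumes the default save configuration, which includes a Latency column, and the file name is a placeholder:

```python
# Compute basic latency statistics from a JMeter CSV results (.jtl) file,
# assuming the default header that includes a "Latency" column.
import csv
import statistics

latencies_ms = []
with open("results.jtl", newline="") as jtl:
    for row in csv.DictReader(jtl):
        latencies_ms.append(int(row["Latency"]))

latencies_ms.sort()
p95 = latencies_ms[max(int(len(latencies_ms) * 0.95) - 1, 0)]
print(f"samples={len(latencies_ms)}")
print(f"mean latency={statistics.mean(latencies_ms):.1f} ms")
print(f"p95 latency={p95} ms")
```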

Do share your experiences, with and without considering latency, in your performance tests.