In the days before the ubiquitous Internet, understanding latency was relatively straightforward. You simply counted the number of router hops between you and your application. Network latency was essentially the sum of the delays that data packets experienced as they travelled from the source, hop by hop, to your application.
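As a back-of-the-envelope illustration of that hop-counting view, the minimal sketch below simply adds up assumed per-hop delays; the hop names and millisecond figures are invented for illustration, not measured values.

```python
# Minimal sketch: in the hop-counting view, end-to-end latency is roughly
# the sum of the per-hop delays along the path. Hop names and delay figures
# below are purely illustrative assumptions, not measurements.
hop_delays_ms = {
    "campus-switch": 0.3,
    "core-router": 0.8,
    "wan-edge": 2.5,
    "datacenter-router": 1.1,
    "app-server": 0.4,
}

one_way_latency_ms = sum(hop_delays_ms.values())
print(f"hops: {len(hop_delays_ms)}")
print(f"approx. one-way latency: {one_way_latency_ms:.1f} ms")
print(f"approx. round-trip time: {2 * one_way_latency_ms:.1f} ms")
```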
Large enterprises had this largely under their control: you would own most, if not all, of the routers. There were still network delays, but they were measurable and predictable, so you could work on reducing them while setting expectations.

The internet changed this. On shared, off-premise infrastructure, calculating network latency is now complex. The subtleties, especially those involving the cloud service provider's infrastructure and your link to the data center, play a huge role, and they can affect latency in ways we do not readily appreciate. At the same time, managing latency is becoming crucial. Users who live and breathe technology take fast connectivity as a given. With easy access to high-speed wired and wireless broadband at home, consumers expect enterprise networks to perform just as well.
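Since you no longer own most of the path, a practical first step is simply to measure it from where your users sit. The sketch below is a minimal illustration, with a placeholder hostname, port and sample count: it times TCP connection setup to a remote endpoint and reports basic statistics, a rough stand-in for the round-trip latency your users would experience.

```python
# Minimal sketch: estimate round-trip latency to a remote endpoint by timing
# TCP connection setup. HOST, PORT and SAMPLES are placeholder assumptions.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "example.com", 443, 20

rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connection established; we only care about setup time
    rtts_ms.append((time.perf_counter() - start) * 1000)

print(f"min    : {min(rtts_ms):.1f} ms")
print(f"median : {statistics.median(rtts_ms):.1f} ms")
print(f"max    : {max(rtts_ms):.1f} ms")
```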
Cloud has made the subject even more pressing. As many enterprises look to benefit from public shared infrastructure for cost-efficiency, scalability and agility, they are shifting their in-house, server-oriented IT infrastructure to a network-oriented one that is often managed and hosted by a service provider. With the rise of machine-to-machine decision making, automation, cognitive computing and high-speed businesses such as high-frequency trading, network latency is in the spotlight, with adoption, reputation, revenue and customer satisfaction now tied to it.
As end users show near-zero tolerance for lag and delays, applications themselves are becoming latency sensitive, and network latency now influences how they are designed and developed.
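One way this influence shows up in code is as an explicit latency budget. The sketch below is a minimal illustration, assuming a 200 ms budget and a placeholder URL: the call is capped at the budget, and the application degrades gracefully instead of letting the user wait.

```python
# Minimal sketch: a latency-sensitive application can make its tolerance
# explicit as a per-call latency budget. The 200 ms budget and the URL are
# illustrative assumptions, not recommendations.
import time
import urllib.request

LATENCY_BUDGET_S = 0.200  # assumed end-user tolerance for this call
URL = "https://example.com/api/health"  # placeholder endpoint

start = time.perf_counter()
try:
    # Cap the wait at the budget so a slow network cannot stall the user.
    with urllib.request.urlopen(URL, timeout=LATENCY_BUDGET_S) as resp:
        resp.read()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"completed in {elapsed_ms:.0f} ms "
          f"(budget {LATENCY_BUDGET_S * 1000:.0f} ms)")
except OSError:
    # Budget exceeded or network error: serve a degraded response instead.
    print("missed the latency budget; falling back to cached data")
```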