April 27, 2021
It fell to me, as Cloud Business Development Manager for Dataquest Group, to explain to our sales team not only the advantages of Private Cloud in general, but the specific benefits of ours, so they had all the information they needed to do the clever things they do to ensure a stream of new enquiries.
Whilst working through the many features of the various Cloud models available to businesses and the unique benefits of the Dataquest Private Cloud as part of a Hybrid Cloud solution, the word latency cropped up and I was asked on more than one occasion to explain its importance.
Latency is defined as the time between a user action and the resulting response, which in the case of Clouds running applications means how long it takes for the application to perform the action requested of it by the user.
Strictly speaking, latency is measured in milliseconds as the time it takes for a request to be sent, serviced and the response returned to the user, a figure more properly known as the Round Trip Time (RTT).
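As a rough illustration of what RTT means in practice, you can approximate it by timing how long a TCP connection takes to open, since the connection handshake is essentially one round trip between user and server. This is a minimal sketch, not a production measurement tool, and any host you point it at is purely illustrative:

```python
import socket
import time

def measure_rtt(host: str, port: int = 443) -> float:
    """Time a single TCP connection to approximate round-trip latency in ms."""
    start = time.perf_counter()
    # Opening the connection completes the TCP three-way handshake,
    # which takes roughly one round trip to the server.
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000  # milliseconds
```

Real tools such as ping measure the same round trip at a lower level, but the principle is identical: the clock runs from the moment the request leaves until the answer comes back.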
Latency should always be minimised, but it can never be eliminated completely: even on fibre-optic networks, where data travels at roughly two-thirds of the speed of light in a vacuum, the distance between the user and the Cloud, along with delays introduced by internet infrastructure equipment, imposes a hard floor on response times.
High latency results in a poor user experience. For many working from home and relying on video conference apps hosted on Public Clouds, this has become evident in dropped calls, horrible audio performance, picture pixelating, video freezing and poor network performance messages.
One of the main causes of network latency is the distance travelled by the request and the data coming back to the user, whether it’s a website loading or the conversation between users via a hosted telecommunications solution.
Although we have clients across the UK, we have a concentration across London and the South East as you might expect, which is why our Private Cloud sits in Docklands, more specifically in Telehouse West, recognised as one of the world’s best-of-breed data centres.
Importantly, the data centre is carrier neutral with points of presence from different providers, ensuring low latency and diverse connectivity, whilst reducing the distance between the majority of our clients and the servers on which their data is stored.
Typical latency experienced by our clients is around 5-10 milliseconds, or better, which is effectively unnoticeable to users. If our data centre were thousands of miles away in the US, the RTT could easily be as high as 50-60 milliseconds, or worse.
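The physics behind those figures is simple back-of-envelope arithmetic. Light in optical fibre travels at roughly 200,000 km per second, so distance alone sets a lower bound on the round trip, before any equipment delays are added. The distances below are rough assumptions for illustration:

```python
# Light in optical fibre covers roughly 200 km per millisecond
# (about two-thirds of its speed in a vacuum).
FIBRE_SPEED_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over a straight fibre run of this length."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

print(min_rtt_ms(50))    # London user to a Docklands data centre: 0.5 ms
print(min_rtt_ms(5600))  # London to the US east coast: 56.0 ms
```

Real routes are never straight lines and every hop adds delay, so observed latency is always higher than this floor, but the gap between half a millisecond and tens of milliseconds explains why the location of the data centre matters.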
But there is rarely a single exchange: latency is compounded by all the back-and-forth communication needed for the client and server to connect successfully, by the size of the data being transferred and its load time, and by any problems with the network equipment the data passes through.
And of course, data traversing the Internet usually has to cross multiple networks, with more opportunities for delays. When the data packets cross between networks, routers have to process and route them and may break them into smaller packets, all of which adds a few milliseconds.
By reducing the distance between user and servers, we are reducing latency; we’re cutting the delay between an action and an application’s response and improving the user experience, which is more critical now than ever before with dispersed workforces.
Raw network speed now seems to be less of an issue, as organisations look to combine edge computing with local Private Cloud solutions in a determined drive to eliminate the latency affecting their users' experiences.
If you would like to understand more about Dataquest's low-latency Private Cloud solutions, please get in touch with me, Chris Baker, on 0333 800 8800 or email me at [email protected]