Cloud-based SD-WAN: The optimal approach to WAN latency
A recent Tech Research Asia study found that on average, “network problems” lead to 71 hours of productivity loss. This stat struck a chord with me as it helps to quantify a common problem the Cato team works with customers to solve: reducing WAN latency. With the growing popularity of cloud services like Unified Communications-as-a-Service (UCaaS) and the surge in mobile users thanks to Bring Your Own Device (BYOD) and the ubiquity of smartphones, low latency has become more important than ever.
However, keeping WAN latency in check while using traditional solutions, like MPLS or VPN, with cloud services has become impractical. As a result, many enterprises, like Centrient Pharmaceuticals, are turning to cloud-based SD-WAN providers to deliver WAN connectivity and WAN optimization that meets the demands of modern networks.
But why is it that cloud-based SD-WAN is so much more effective at addressing the WAN latency problem? We’ll answer that here.
Understanding WAN Latency
Before we explore the solution, let’s review the problem. At a high level, we’re all familiar with what latency is: the time data takes to traverse a network. Traditionally, the main drivers of WAN latency have been distance, inefficient routing, hardware limitations, and network congestion. The higher the latency, the worse application performance becomes.
For serving web pages, latency measured in milliseconds (ms) generally isn’t an issue. Real-time applications like Voice over IP (VoIP) and videoconferencing are where latency can make or break performance and productivity. At what levels can you expect to see performance degradation? In this blog post, Phil Edholm pointed out that the natural pause in human voice conversations is about 250-300 ms. If the round-trip latency (a.k.a. Round Trip Time or RTT) is longer than that, call quality degrades. For many UCaaS services, performance demands are even higher. For example, Skype for Business requires latency of 100 ms or less.
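To make these thresholds concrete, here is a minimal Python sketch that approximates RTT by timing a TCP handshake and then classifies the measurement against the voice-quality budgets discussed above. The function names and classification bands are illustrative assumptions for this post, not part of any vendor tooling, and a TCP connect time is only a rough proxy for true RTT:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Approximate round-trip time (ms) by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about handshake time
    return (time.perf_counter() - start) * 1000.0

def classify(rtt_ms: float) -> str:
    """Map an RTT measurement to the rough voice-quality bands above."""
    if rtt_ms <= 100:
        return "within a strict UCaaS budget (e.g. the 100 ms Skype for Business target)"
    if rtt_ms <= 250:
        return "acceptable for most VoIP conversations"
    return "likely to degrade real-time calls"
```

Running `classify(tcp_rtt_ms("example.com", 443))` against a real endpoint gives a quick sense of whether a path can sustain real-time traffic.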
Addressing WAN latency: why legacy WAN solutions come up short
Apart from cloud-based SD-WAN, enterprises have three main options for WAN connectivity: appliance-based Do-It-Yourself (DIY) SD-WAN, VPN, and MPLS (for a crash course on the differences, see SD-WAN vs. MPLS vs. Public Internet). All three come up short in tackling the WAN latency problem for several reasons.
Both DIY SD-WAN and VPN have proven inadequate at keeping latency within acceptable levels for a simple reason: neither offers a private network backbone, and the public Internet doesn’t make for a reliable WAN backbone. As this SD-WAN Experts report demonstrated, WAN latency is very much a middle-mile problem. The study showed that while the last mile was significantly more erratic, the middle mile was the main driver of network latency.
On the surface, MPLS seems to solve this problem. It eliminates the public Internet from the equation and provides a low-latency backbone. However, MPLS creates challenges for enterprises because it is notoriously expensive and inefficient at meeting the demands of cloud and mobile.
As bandwidth demands increase, MPLS costs become increasingly prohibitive. Agility, however, may be an even larger problem with MPLS. It was designed to reliably transport data between a few static locations, but WAN traffic is becoming increasingly dynamic: cloud and mobile are now the norm.
When the paradigm changed, enterprises using MPLS encountered the trombone routing problem. By forcing enterprises to inefficiently backhaul Internet-bound traffic through corporate datacenters for inspection, trombone routing adds WAN latency and degrades the performance of real-time applications.
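The latency cost of tromboning is simple arithmetic: backhauled traffic pays for every extra leg on every packet. A toy calculation with illustrative, hypothetical figures (not measurements from any real deployment):

```python
def path_latency_ms(*legs_ms: float) -> float:
    """One-way latency of a path is the sum of its legs."""
    return sum(legs_ms)

# Direct breakout: branch office -> nearby cloud on-ramp.
direct = path_latency_ms(30.0)

# Trombone route: branch -> corporate datacenter (for inspection) -> cloud.
tromboned = path_latency_ms(45.0, 35.0)

# The backhaul penalty paid by every packet, in both directions.
extra_ms = tromboned - direct
```

With these example figures the detour adds 50 ms each way, which by itself can consume half of a 100 ms UCaaS latency budget.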
How cloud-based SD-WAN solves the WAN latency problem
Cato’s cloud-based SD-WAN efficiently solves WAN latency because of its affordable, private, SLA-backed global backbone; its intelligent, agile routing; and its optimized mobile and cloud connectivity.
Instead of relying on the public Internet, Cato provides customers access to its private backbone of over 45 Points of Presence (PoPs) across the globe. This means Cato bypasses the latency and congestion common to the public Internet core.
Dynamic path selection and end-to-end route optimization for WAN and cloud traffic complement the inherent advantages of a private backbone, further reducing WAN latency. Cato PoPs monitor the network for latency, jitter, and packet loss, routing packets across the optimum path.
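Dynamic path selection of this kind can be sketched as a scoring problem: each candidate path is continuously measured for latency, jitter, and packet loss, and traffic moves to the path with the lowest cost. The dataclass, weights, and path names below are illustrative assumptions for this post, not Cato’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str
    latency_ms: float
    jitter_ms: float
    loss_pct: float

def path_score(m: PathMetrics) -> float:
    # Lower is better. Jitter and loss hurt real-time traffic more than
    # raw latency does, so they carry heavier (illustrative) weights.
    return m.latency_ms + 2.0 * m.jitter_ms + 50.0 * m.loss_pct

def best_path(paths: list[PathMetrics]) -> PathMetrics:
    """Pick the candidate path with the lowest composite score."""
    return min(paths, key=path_score)

paths = [
    PathMetrics("backbone", latency_ms=40.0, jitter_ms=2.0, loss_pct=0.0),
    PathMetrics("public-internet", latency_ms=30.0, jitter_ms=15.0, loss_pct=0.5),
]
chosen = best_path(paths)
```

Note that the lower-latency public-Internet path loses here: its jitter and loss make it the worse choice for real-time traffic, which is exactly the trade-off a metric-aware selector is meant to capture.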
Furthermore, PoPs on the Cato backbone are colocated in the same physical datacenters as the Internet exchange points (IXPs) of the leading cloud providers, such as AWS. The result: low-latency connections comparable to private cloud datacenter connection services, such as AWS Direct Connect. For a deeper dive on how Cato helps optimize cloud connectivity, see How To Best Design Your WAN for Accessing AWS, Azure, and the Cloud.
Proving the concept: the real-world WAN latency benefits of Cato Cloud
Conceptually, understanding why cloud-based SD-WAN provides an optimal approach to addressing WAN latency is important. But proving the benefits in the real-world is what matters. Cato customers have done just that.
For example, after switching from MPLS to Cato Cloud, Matthieu Cijsouw, Global IT Manager at Centrient Pharmaceuticals, touted the cost and performance benefits: “The voice quality of Skype for Business over Cato Cloud has been about the same as with MPLS but, of course, at a fraction of the cost. In fact, if we measure it, the packet loss and latency figures appear to be even better.” Similarly, performance testing between Singapore and Virginia demonstrated Cato’s ability to reduce latency by 10%. While a 10% reduction may not sound like much, it can be the difference between a productive VoIP call and an incomprehensible one.
Author: Dave Greenfield (secure networking evangelist)
Source: Cato Networks