
April 29, 2009 9:36 AM

Broadband Speed Is About Latency, Not Bandwidth

Whenever anyone's talking about broadband they invariably refer to its speed, asking questions like how fast your service is. The answers are then given in terms of Kbps, Mbps, and Gbps.

But in reality these measurements have nothing to do with speed. Instead they refer to bandwidth, which is more accurately described as carrying capacity. Think of it this way: Kbps equals a motorcycle, Mbps equals a car, and Gbps equals a semi-truck. All these vehicles can travel at roughly the same speed, but they each can carry a different amount of stuff.

The importance of bandwidth grows with the size of the files you need to transfer. Video files, which are typically the heaviest, require the most bandwidth. And if your vehicle can't carry the whole thing in one trip, it has to take it in pieces, which is why you sometimes have to wait for a video to download before it starts playing.

So bandwidth does not equal speed, it equals capacity.
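To put rough numbers on the analogy, here's a quick back-of-the-envelope sketch in Python (the file size and link rates are illustrative assumptions, not figures from any particular provider) showing that the same file ties up links of different carrying capacities for very different amounts of time:

```python
# Rough sketch: the same file takes wildly different amounts of time to
# transfer depending on carrying capacity, even though the individual
# bits travel at roughly the same speed. All numbers are illustrative.

FILE_SIZE_MB = 700  # e.g. a standard-definition movie

links = {
    "256 Kbps": 256e3,   # the motorcycle
    "5 Mbps":   5e6,     # the car
    "1 Gbps":   1e9,     # the semi-truck
}

for name, bits_per_second in links.items():
    seconds = (FILE_SIZE_MB * 8e6) / bits_per_second
    print(f"{name:>8}: about {seconds / 60:.1f} minutes for a {FILE_SIZE_MB} MB file")
```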

But the speed of broadband does make a difference, and it's quantified more directly by the measurement of latency, which tells you how much time it takes for a user's actions to make it through the network.
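The usual way to see your own latency is a tool like ping; as a rough sketch of the same idea, the snippet below (the host name is just a placeholder) approximates round-trip time by timing how long a TCP handshake takes:

```python
# Minimal sketch: approximate round-trip latency by timing a TCP handshake.
# Real tools like ping use ICMP echo instead; the host below is a placeholder.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 80, samples: int = 5) -> float:
    """Average time, in milliseconds, to open a TCP connection to host:port."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; we only care how long it took
        total += time.perf_counter() - start
    return (total / samples) * 1000

if __name__ == "__main__":
    print(f"~{tcp_rtt_ms('example.com'):.1f} ms round trip")
```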

There are multiple instances where the latency of a broadband connection matters.

The first is in online gaming. In this case the lower the latency of a user's connection the shorter the time between when they try to move their character and when the character actually moves. Put another way, the lower your latency the faster you're able to move, so whoever has the lowest latency playing a game like World of Warcraft will have a clear advantage over those playing on higher latency connections.

The second is remote desktop applications. Here the same principle applies in that lower latency narrows the gap between when you move your mouse and when the cursor actually moves in the remote application. So low latency is vitally important for creating a desktop-like experience that responds to user input in real-time.

The third is live two-way video communications. The more latency in a network, the longer the pauses between speakers and therefore the more unnatural the conversation. If you've ever used a video calling app over a mediocre broadband connection you know what I mean, as you can see how long it takes for what you're saying to actually make it over the network. This high latency prevents conversations from flowing naturally, with users often talking over each other. So the lower the latency the better the conversation that can be had.

So now that we've laid out how the speed of broadband refers more to latency than bandwidth, and that low latency is vitally important to a wide range of applications, it's important to note that not only does fiber have exponentially more bandwidth than other broadband technologies, it also features far lower latency.

Just take a look at this graph:

[Graph: WAN latency over time, before and after switching from Cox to LUSFiber]

I got this from Nicholas Istre, who's a proud new customer of LUSFiber down in Lafayette, LA. This graph shows the latency of his broadband connection. The first half of it is when he was on Cox, with latency averaging 10ms and spiking up over 20ms. Then you'll notice his latency dropped down below 2ms and held steady; that's when he switched over to LUSFiber.

While the difference between 2ms and 10ms (ms = millisecond) may not seem like much, imagine if everything you did were ever so slightly delayed, whether that's hitting the brakes in your car, catching a baseball, or anything else. And making matters worse, these delays can be cumulative, so the longer you're trying to use real-time apps on a high latency network the worse the experience gets.
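To make the cumulative point concrete, here's a back-of-the-envelope sketch (the round-trip count is an assumption for illustration, not a measurement from the graph above): anything that needs several sequential round trips, like loading a page full of small resources, multiplies that per-trip delay.

```python
# Rough sketch: small per-trip latency differences multiply when an
# application needs many sequential round trips (DNS, handshakes, requests).
# The round-trip count is an illustrative assumption.

ROUND_TRIPS = 30  # e.g. a page that fetches many small resources in sequence

for label, rtt_ms in [("~2 ms (fiber)", 2), ("~10 ms (cable)", 10)]:
    total = ROUND_TRIPS * rtt_ms
    print(f"{label}: {total} ms spent just waiting on the network")
```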

The reason I'm taking the time to point out this issue is that it's vitally important to understand in terms of how broadband is used yet it's a technical enough topic that most average users and policymakers have no understanding of what this is or why it matters. That's why we so often hear people (myself included) conflating speed with bandwidth.

Yet not only is this an important issue for enabling real-time applications to run better, it also adds another layer to our case for why we need fiber, beyond fiber's near infinite capacity.

So I encourage everyone who cares about fiber to start talking about latency so that it becomes as well-recognized an issue as bandwidth.



Comments (1)

Saw this from the pfSense user on Twitter who provided the graph. If you want your own pretty WAN latency graph out of the box, check out pfSense - www.pfsense.org. :)

Definitely agree, latency makes a big performance difference. Though latency to the gateway is only part of the puzzle, it isn't necessarily indicative of cross-Internet latency, which is what really matters. But that will certainly be better with a fiber connection as well, as long as the ISP's network is well managed.

Posted by Chris Buechler on April 29, 2009 6:50 PM
