Friday 26 December 2008

Does Edge Caching Hurt Network Neutrality?

Over at the Official Google Blog they've been talking about Net neutrality and the benefits of caching. There were a load of comments on the same post at their Public Policy Blog, arguing both ways. I figured I'd give my take too...

The aim of net neutrality is to ensure that Internet service providers favour no single content provider's traffic. For example, the packets containing Google's search results to you shouldn't be prioritised any higher than the packets from BBC News. Edge caching, meanwhile, involves big content providers (like Google) having servers closer to their customers (at the edge of the Internet) in order to provide speedier service. In some cases, those servers (caches) could be at individual service providers' facilities.

On the face of it, these two principles sound incompatible: surely edge caching means that content is "prioritised" over other content? In reality, I think it depends on what measure of "speed" or "priority" is being used.

Let's assume the core network between the content provider and the Internet service provider (ISP) is well-provisioned, so there's no contention between concurrent flows from multiple content providers. This is reasonable, given that ISPs have lots of customers. The constraint on the flow rate, the throughput, is then the customer's connection to the ISP, say over ADSL. The network switch connecting the core network to the ADSL line has to buffer packets when the rate from the core destined for the customer exceeds the capacity of the ADSL line. If the buffers are about to overflow, packets need to be dropped. The choice of which packets to drop is up to the switch: it could pick at random, or try to be clever.
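To make that concrete, here's a minimal sketch of such a switch in Python. The buffer size and the packet mix are made-up numbers, and I'm ignoring the line draining the buffer (assume arrivals outpace it); the random-drop behaviour is the part that matters:

    import random

    BUFFER_SIZE = 8   # packets; an illustrative figure, not a real switch's
    buffer = []

    def enqueue(packet):
        """Buffer a packet bound for the ADSL line; if the buffer is
        full, drop one already-buffered packet chosen at random."""
        if len(buffer) >= BUFFER_SIZE:
            victim = random.randrange(len(buffer))
            print(f"dropped a packet from flow {buffer.pop(victim)}")
        buffer.append(packet)

    # Flow A sends twice as many packets as flow B, so it occupies more
    # of the buffer and is more likely to lose a packet when drops occur.
    arrivals = ["A", "A", "B"] * 8
    random.shuffle(arrivals)
    for source in arrivals:
        enqueue(source)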

Traffic prioritisation would mean that one content provider's packets would be dropped in preference to another's; for TCP flows, the lower-priority flow would then lower its send rate in response. If no prioritisation takes place and random drop is used, the flow sending the greatest number of packets per unit time is the most likely to have one of its packets dropped, as it will be occupying more of the network switch's buffers (assuming, for simplicity, that all flows have similarly sized packets).
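That feedback loop can be sketched too: below is a toy additive-increase/multiplicative-decrease model of two flows sharing the bottleneck, with drops assigned in proportion to buffer occupancy. The capacity and starting rates are invented; the point is that the faster flow gets hit more often, so the two rates drift towards a similar share:

    import random

    CAPACITY = 100                          # packets per tick (illustrative)
    rates = {"flow A": 80, "flow B": 20}    # starting send rates (illustrative)

    random.seed(1)
    for tick in range(200):
        if sum(rates.values()) > CAPACITY:
            # Random drop: a flow is hit in proportion to its share of
            # the buffer, i.e. its current rate...
            victim = random.choices(list(rates), weights=list(rates.values()))[0]
            # ...and, TCP-style, it responds by halving its rate.
            rates[victim] = max(1, rates[victim] // 2)
        else:
            # No congestion: each flow probes for more bandwidth.
            for flow in rates:
                rates[flow] += 1

    print(rates)  # both rates end up hovering around a similar share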

The edge caching proposed by Google will involve their content servers being located in the ISPs' networks, but they haven't said they're using traffic prioritisation. So the flow from inside the ISP's network to the customer's ADSL line will be treated the same as the flow from a thousand miles away to the same ADSL line, from the point of view of packet drop when congestion occurs.

Thus the flow from a content provider using edge caching and a concurrent flow from a provider further away are treated the same, so the throughput you get from each should be comparable.

However, the latency you experience will obviously be different, as a packet from an edge cache will always reach the customer sooner than a packet from a server a thousand miles away.
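The size of that difference is easy to estimate: signals in fibre propagate at roughly two-thirds of the speed of light, so (using illustrative distances for the cache and the distant server):

    SPEED_IN_FIBRE = 2e8   # metres per second, roughly two-thirds of c

    for name, metres in [("edge cache, 50 km away", 50e3),
                         ("distant server, 1600 km away", 1600e3)]:
        one_way_ms = metres / SPEED_IN_FIBRE * 1000
        print(f"{name}: ~{one_way_ms:.2f} ms one way, "
              f"~{2 * one_way_ms:.2f} ms round trip")

    # Queuing and serialisation delays come on top of these figures,
    # but the propagation gap alone is well over an order of magnitude.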

(An illustration of the difference between latency and throughput: a 56 kbit/s modem has low throughput but low latency, since its signals propagate at close to the speed of light, whereas a lorry carrying a load of DVDs has very high throughput but a latency determined by the 100 km/hour or so the lorry can manage.)
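Putting made-up but plausible numbers on that analogy:

    # Illustrative figures: 10,000 single-layer DVDs (4.7 GB each)
    # driven 500 km at 100 km/hour, versus a 56 kbit/s modem.
    payload_bits = 10_000 * 4.7e9 * 8
    journey_hours = 500 / 100
    lorry_throughput_mbit = payload_bits / (journey_hours * 3600) / 1e6

    print(f"lorry: ~{lorry_throughput_mbit:,.0f} Mbit/s, "
          f"latency {journey_hours:.0f} hours")
    print("modem:  0.056 Mbit/s, latency of a few tens of milliseconds")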

Two other thoughts:
  • Edge caching reduces load on core networks, as requests can be served from local caches, leaving more core capacity available for other traffic. And because requests are served from many distributed locations, content providers need less connectivity to the Internet core at any single site.
  • If the connection from the edge cache to the ADSL line has higher throughput than the connection across the core from an alternative content provider, then whilst the higher-throughput flow is more likely to be penalised with packet drop, I'm not certain that it will settle at equal throughput with other TCP flows (a back-of-the-envelope sketch follows this list). I very much suspect this work has already been done, so I'll try and have a look for it.
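On that last point, the well-known approximation by Mathis et al. for steady-state TCP throughput, rate ≈ (MSS / RTT) × (C / √p), suggests the flows won't be equal even at the same loss rate: throughput scales inversely with round-trip time, which favours the nearby cache. The RTTs and loss probability below are illustrative:

    from math import sqrt

    MSS = 1460 * 8   # segment size in bits (a typical Ethernet payload)
    C = 1.22         # the constant in the Mathis et al. model
    p = 0.01         # shared packet loss probability (illustrative)

    for name, rtt_s in [("edge cache, 5 ms RTT", 0.005),
                        ("distant server, 100 ms RTT", 0.100)]:
        rate_mbit = MSS / rtt_s * C / sqrt(p) / 1e6
        print(f"{name}: ~{rate_mbit:.1f} Mbit/s")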
