[MUSIC]
Okay [INAUDIBLE].
So, I learned a couple of things from that.
We've got Content Distribution Networks, CDNs, that are actually carrying
a significant fraction of the Internet's traffic, and
what they give the cloud is a way to reach out to the world,
to be physically close to end users so that you get low latency.
Because otherwise, if you're only located in those big hyperscale clouds,
you're fundamentally limited by the speed of light and
by the performance irregularities and problems of the wide-area Internet.
>> True. CDNs do reduce physical distance, and thus latency, and
they sidestep some of the wide-area networking issues, but
they also lower latency by using transport-layer techniques and
routing techniques based on the performance that they observe.
And this is true not just for applications whose content is cacheable
close to the edge; it also works for content that needs to be fetched,
for example, from other CDN nodes or from the origin, the content publisher.
>> Right.
And that's interesting, actually, because a lot of those optimizations are
possible in the CDN context:
you have this large-scale infrastructure.
>> Correct.
>> Global infrastructure that's managed, owned, and architected by one entity.
>> Mm-hm.
>> So if you want to implement your new overlay routing protocol, or
your new version of TCP that maintains persistent connections,
then you can go ahead and do that.
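To make the persistent-connection idea concrete, here's a minimal sketch (not from the lecture; the class and its names are illustrative) of why controlling both endpoints helps: an edge node can keep a pool of already-established TCP connections to upstream nodes and reuse them, avoiding a fresh handshake on every request.

```python
import socket

class PersistentConnectionPool:
    """Reuse TCP connections to upstream nodes so each request doesn't
    pay handshake (and slow-start) latency -- the kind of optimization
    a CDN can deploy because one operator controls both endpoints."""

    def __init__(self):
        self._conns = {}  # (host, port) -> open socket

    def get(self, host, port):
        # Return the existing connection if we have one, else open it.
        key = (host, port)
        sock = self._conns.get(key)
        if sock is None:
            sock = socket.create_connection((host, port))
            self._conns[key] = sock
        return sock

    def close_all(self):
        for sock in self._conns.values():
            sock.close()
        self._conns.clear()
```

A second `get()` for the same upstream returns the same live socket rather than reconnecting, which is exactly what a new transport design inside one operator's infrastructure can guarantee.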
It's actually kind of similar to data centers, where we've seen new protocols
and optimizations deployed inside the data center, since there's also one operator.
>> Absolutely.
It does come down to flexibility, and
in fact, one of the interesting things about all this is that
decisions here are being made in response to measurements that are being collected.
So you're monitoring performance and making decisions based on that, for
example with the overlay routing, right?
This is not something that's available in the Internet's protocols.
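As a rough illustration of that measurement-driven decision loop (a hypothetical sketch, not any real CDN's implementation): keep a smoothed latency estimate per candidate path and route over whichever currently looks best.

```python
class OverlayRouter:
    """Pick the path (direct, or via some relay) with the lowest smoothed
    observed latency. The Internet's standard routing protocols don't
    make decisions from measured performance like this."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # EWMA smoothing factor for RTT samples
        self.est = {}       # path name -> smoothed RTT estimate (ms)

    def record(self, path, rtt_ms):
        # Fold a new RTT measurement into the running estimate.
        prev = self.est.get(path)
        if prev is None:
            self.est[path] = rtt_ms
        else:
            self.est[path] = (1 - self.alpha) * prev + self.alpha * rtt_ms

    def best_path(self):
        # Route over whichever path currently has the lowest estimate.
        return min(self.est, key=self.est.get)
```

For example, after recording a 120 ms sample on the direct path and 80 ms then 90 ms samples via a relay, `best_path()` would pick the relay, and it would keep adapting as new measurements arrive.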