Some very nice broken IPv6 networks at Google and Akamai

Lorenzo Colitti lorenzo at google.com
Tue Nov 11 02:18:22 CET 2014


On Sun, Nov 9, 2014 at 8:10 PM, Jeroen Massar <jeroen at massar.ch> wrote:

> > Another fun question is why folks are relying on PMTUD instead of
> > adjusting their MTU settings (e.g., via RAs).
>
> Because why would anybody want to penalize their INTERNAL network?
>

Lowering the MTU from 1500 to 1280 costs only about 1% in throughput. I'd
argue that that 1% is way less important than the latency penalty you pay
when PMTUD has to kick in.
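
To put a number on the throughput side (a back-of-the-envelope sketch of my
own, assuming 40 bytes of IPv6 header and 20 bytes of TCP header per packet;
TCP options would shift the figures only slightly):

# Rough illustration of the ~1% cost of dropping the MTU from 1500 to 1280
# for a bulk TCP transfer over IPv6.
IPV6_HEADER = 40
TCP_HEADER = 20
OVERHEAD = IPV6_HEADER + TCP_HEADER

def efficiency(mtu: int) -> float:
    """Fraction of each packet that carries application payload."""
    return (mtu - OVERHEAD) / mtu

e1500 = efficiency(1500)        # ~0.960
e1280 = efficiency(1280)        # ~0.953
penalty = 1 - e1280 / e1500     # ~0.007, i.e. roughly 1%
print(f"MTU 1500 efficiency: {e1500:.3f}")
print(f"MTU 1280 efficiency: {e1280:.3f}")
print(f"Relative throughput penalty: {penalty:.1%}")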

> Because you can't know if that is always the case.
>

I'm not saying that PMTUD shouldn't work. I'm saying that if you know that
your Internet connection has an MTU of 1280, setting an MTU of 1500 on your
host is a bad idea, because you know for sure that you will experience a
1-RTT delay every time you talk to a new destination.
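
If you do know the path MTU, one way to apply it network-wide without
touching every host is to advertise it in Router Advertisements. Here is a
minimal sketch, using scapy, of what such an RA looks like on the wire (the
interface name is a placeholder, and in practice you would simply set the
corresponding option in your RA daemon, e.g. AdvLinkMTU in radvd):

# Sketch only: advertise a 1280-byte link MTU to all nodes on the LAN.
# Requires root; "eth0" is a placeholder interface name.
from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptMTU, send

ra = (
    IPv6(dst="ff02::1")            # all-nodes multicast
    / ICMPv6ND_RA(routerlifetime=1800)
    / ICMPv6NDOptMTU(mtu=1280)     # hosts that honour the option use 1280 on this link
)
send(ra, iface="eth0")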

> As you work at Google, ever heard of this QUIC protocol that does not
> use TCP?
>
> Maybe you want to ask your colleagues about that :)
>

Does QUIC work from behind your tunnel? If so, maybe my colleagues have
already solved that problem.


> > (Some parts of) Google infrastructure do not do
> > PMTUD for the latency reasons above and for reasons similar to those
> > listed
> > in https://tools.ietf.org/html/draft-v6ops-jaeggli-pmtud-ecmp-problem-00.
>
> As such, you are ON PURPOSE breaking PMTUD, instead trying to fix it
> with some other bandaid.
>

The draft explains some of the reasons why infrastructure is often built
this way.
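
Roughly, the problem the draft describes is that a stateless ECMP hop or
load balancer hashes the outer header of each packet, so an ICMPv6 Packet
Too Big generated by a router on the path does not necessarily reach the
server that holds the connection it refers to. A toy illustration (my own
sketch, not code from the draft; addresses and backend names are made up):

# Simplified ECMP-style hash over the outer header only.
import hashlib

BACKENDS = ["server-a", "server-b", "server-c", "server-d"]

def pick_backend(src: str, dst: str, proto: int) -> str:
    key = f"{src}|{dst}|{proto}".encode()
    return BACKENDS[int(hashlib.sha256(key).hexdigest(), 16) % len(BACKENDS)]

# The client's TCP segments (next header 6) land on one backend ...
flow_backend = pick_backend("2001:db8::1", "2001:db8:cafe::80", 6)

# ... but a Packet Too Big (ICMPv6, next header 58) is sourced from an
# intermediate router's address, so it hashes on its own outer header and
# often ends up on a backend that has no state for the flow.
ptb_backend = pick_backend("2001:db8:ffff::1", "2001:db8:cafe::80", 58)

print(flow_backend, ptb_backend)   # frequently not the same machine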