Some very nice broken IPv6 networks at Google and Akamai (Was: Some very nice IPv6 growth as measured by Google)

Jeroen Massar jeroen at massar.ch
Sat Nov 8 20:39:47 CET 2014


On 2014-11-08 18:38, Tore Anderson wrote:
> * Jeroen Massar
> 
>> The only link: they are all using IPv6.
>>
>> You are trying to make this OTE link. I have never stated anything
>> like that. Though, you likely take that from the fact that the reply
>> followed in that thread.
> 
> Yannis: «We're enabling IPv6 on our CPEs»
> Jeroen: «And then getting broken connectivity to Google»
> 
> I'm not a native speaker of English, but I struggle to understand it
> any other way than you're saying there's something broken about
> Yannis' deployment. I mean, your reply wasn't even a standalone
> statement, but a continuation of Yannis' sentence. :-P

That statement is correct, though: as Google's and Akamai's IPv6 is
currently broken, enabling IPv6 breaks connectivity to those sites.

Not enabling IPv6 is thus the better option in such a situation.

But it was just a hook into that subject. Don't worry about it further.

> Anyway, I'm relieved to hear that there's no reason to suspect IPv6
> breakage in OTE. As we host a couple of the top-10 Greek sites, one of
> which has IPv6, we're dependent on big Greek eyeball networks like OTE
> not screwing up their IPv6 deployment - it is *I* who get in trouble
> if they do. :-)

But your network was not involved in the above statement.

And if you monitor your sites properly, also from non-native setups,
then you should be fine.

>> PMTUD is fine.
>>
>> What sucks is 'consultants' advising blocking ICMPv6 "because that is
>> what we do in IPv4" and that some hardware/software gets broken once
>> in a while.
> 
> PMTUD is just as broken in IPv4, too.

No, PMTUD is fine in both IPv4 and IPv6.

What is broken is people wrongly recommending blocking and/or filtering
ICMP, and thus indeed breaking PMTUD.
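
A quick way to check whether such filtering is breaking PMTUD towards a
given destination is to probe it with don't-fragment pings of increasing
size; if the large probes silently time out instead of producing a
"Packet Too Big" error, something on the path is eating the ICMPv6. A
minimal sketch, assuming Linux iputils ping6 and an example hostname:

    import subprocess

    def probe(host, payload):
        """One don't-fragment ICMPv6 echo with the given payload size."""
        # -M do: forbid fragmentation, -s: payload bytes, -W 2: reply timeout.
        # On-wire size = payload + 40 (IPv6 header) + 8 (ICMPv6 header).
        r = subprocess.run(["ping6", "-M", "do", "-c", "1",
                            "-W", "2", "-s", str(payload), host],
                           capture_output=True)
        return r.returncode == 0

    # 1232+48 = 1280 (IPv6 minimum MTU), 1452+48 = 1500 (native Ethernet).
    for payload in (1232, 1392, 1432, 1452):
        print(payload + 48, "ok" if probe("www.example.org", payload) else "FAIL")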

> PMTUD has *never* been «fine»,
> neither for IPv4 nor for IPv6. That's why everyone who provides links
> with MTUs < 1500 resorts to workarounds such as TCP MSS clamping and

I am one of the people on this planet providing a LOT of "links with
MTUs < 1500", and we will really never resort to clamping MSS.

It does not fix anything. It only hides the problem and makes diagnosing
issues harder, as one does not know whether that trick is being applied
or not.
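
For reference, the MSS values being clamped to follow directly from the
link MTU: an IPv6 TCP segment loses 40 bytes to the IPv6 header and 20
bytes to the basic TCP header, which is where the oft-quoted 1220 (the
IPv6 minimum MTU of 1280, minus 60) comes from. A worked illustration:

    # MSS corresponding to a given IPv6 link MTU:
    # MTU minus 40 bytes IPv6 header minus 20 bytes basic TCP header.
    def mss_for_mtu(mtu):
        return mtu - 40 - 20

    print(mss_for_mtu(1500))  # 1440: native Ethernet
    print(mss_for_mtu(1480))  # 1420: e.g. a 6in4 tunnel over 1500
    print(mss_for_mtu(1280))  # 1220: the IPv6 minimum MTU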

I also have to note that in the 10+ years of having IPv6, we rarely saw
PMTU issues, and when we did, contacting the site that was filtering
fixed the issue.

> reducing MTU values in LAN-side RAs,

That is an even worse offender than MSS clamping, though it is at least
visible in tracepath6. Note that you are then limiting packet sizes on
your whole local network because somewhere some person is filtering ICMP.
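
For illustration, this is what that RA-based clamp looks like on the
wire; a minimal sketch, assuming scapy, that builds (but does not send)
a Router Advertisement carrying the MTU option:

    # Requires scapy. Every host on the LAN that honours this RA will cap
    # its packet size at 1280, regardless of the actual link MTU.
    from scapy.layers.inet6 import IPv6, ICMPv6ND_RA, ICMPv6NDOptMTU

    ra = (IPv6(dst="ff02::1")          # all-nodes multicast
          / ICMPv6ND_RA()
          / ICMPv6NDOptMTU(mtu=1280))  # the LAN-wide clamp
    ra.show()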

> so that reliance on PMTUD
> working is limited as much as possible. If you want to deliver an
> acceptable service (either as an ISP or as a content hoster), you just
> *can't* depend on PMTUD.

The two 'workarounds' you mention are either on the *USER* side (RA MTU)
or in-network (MSS clamping), where you do not know whether the *USER*
has a smaller MTU.

Hence, touching it in the network is a no-no.

> Even when PMTUD actually works as designed it sucks, as it causes
> latency before data may be successfully transmitted.

I fully agree with that statement. But this Internet thing is global,
and one cannot enforce small or large packets on the whole world just
because some technologies do not support them.

Note that the PMTU is typically cached per destination. Unfortunately,
there is no way for a router to securely say "this MTU applies to the
whole /64 or /32, etc.", which would have been beneficial.
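
On Linux, that per-destination cache can be inspected directly; a
minimal sketch, assuming iproute2 and a hypothetical destination
address:

    import subprocess

    def cached_pmtu(dst):
        """Return the cached path MTU towards dst, or None if none is cached."""
        out = subprocess.run(["ip", "-6", "route", "get", dst],
                             capture_output=True, text=True).stdout.split()
        # Once a Packet Too Big has been received, the route (cache) entry
        # grows an "mtu <value>" attribute pair.
        return int(out[out.index("mtu") + 1]) if "mtu" in out else None

    print(cached_pmtu("2001:db8::1"))  # e.g. 1400, or None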

>> See the threads I referenced, they are still in the above quoted text.
>>
>> Note that the Google case is consistent: (as good as) every IPv6
>> connection breaks.
>>
>> The Akamai case is random: sometimes it just works as you hit good
>> nodes in the cluster, sometimes it breaks.
> 
> I see in the referenced threads statements such as:
> 
> «this must be a major issue for everybody using IPv6 tunnels»
> «MTU 1480 MSS 1220 = fix»
> «the 1480MTU and 1220MSS numbers worked for my pfsense firewall»
> «The only thing that worked here is 1280 MTU / 1220 MSS»
> «clamping the MSS to 1220 seems to have fixed the problem for me»
> «I changed the MSS setting [...] for the moment Google pages are
> loading much better»
> 
> This is all perfectly consistent with common PMTUD malfunctioning /
> tunnel suckage.

NOTHING to do with tunnels, everything to do with somebody not
understanding PMTUD and breaking it, be that on purpose or not.

Note that both Google and Akamai know very well about PMTUD, and up to
about a week ago both were running perfectly fine.

> I'm therefore very sceptical that this problem would
> also be experienced by users with 1500 byte MTU links.

Tested: it also fails on MTU=1500 links.

> (Assuming there's only a single problem at play here.)

That is indeed an assumption, as we can't see the Google/Akamai end of
the connection.

Note that in the Akamai case it is a random thing: it happens on random
nodes inside the cluster (at least, that is my assumption...).

>> In both cases, it is hard to say what exactly breaks as only the
>> people in those networks/companies have access to their side of the
>> view.
>>
>> As such... here is for hoping they debug and resolve the problem.
> 
> Having some impacted users do a Netalyzr test might be a good start.

Netalyzr does not talk to Google/Akamai from your host.

As such, it will report no issues, except possibly for some warnings.

Yes, Netalyzr is a great tool and can indicate some issues, but only on
the path between their servers and the user; it cannot be used to figure
out issues with a specific destination.

> Like I said earlier, WFM, but then again I'm not using any tunnels.

As you are wearing the hat of a hoster, though, you should care about
them: there are eyeballs that you want to reach that sit behind tunnels
and other link types with an MTU lower than 1500.

Hence, I can only suggest doing your testing also from behind a node
that has a lower MTU, e.g. by configuring a monitoring node with a
tunnel into your own network and setting the MTU lower, or by doing the
MTU-in-RA trick for that interface.
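
As a starting point for such a monitoring node, a minimal sketch,
assuming an example hostname: it fetches a page over IPv6 and flags the
classic PMTUD-black-hole symptom, where the handshake (small packets)
succeeds but the full-sized response packets never arrive:

    import socket

    def fetch_check(host, timeout=10):
        """True if a full-sized HTTP response makes it back over IPv6."""
        addr = socket.getaddrinfo(host, 80, socket.AF_INET6,
                                  socket.SOCK_STREAM)[0][4]
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect(addr)  # small packets: works even when PMTUD is broken
            s.sendall(("GET / HTTP/1.0\r\nHost: %s\r\n\r\n" % host).encode())
            try:
                while s.recv(4096):  # big packets: these get blackholed
                    pass
                return True
            except socket.timeout:
                return False  # stall after the handshake: PMTUD breakage

    print(fetch_check("www.example.org"))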

Better to be aware than to get bitten, as two large and typically
well-managed networks are apparently experiencing right now.

Greets,
 Jeroen



