World IPv6 Day? [included bonus report on brokenness studies]

Tore Anderson tore.anderson at redpill-linpro.com
Thu Jan 13 00:44:44 CET 2011


* Martin Millnert

> I did, I googled "IPv6 brokenness presentation" and basically came up
> with Redpill/Tore's data / presentations, and a few iterations of
> Lorenzo/Google presos. This is not "well-studied" (in terms of
> distinct research) in my book, but maybe I apply a too academic weight
> to the term. I'm not at all saying that these two groups haven't been
> studying it at great length, just that the availability of similar-
> scope and detailed presentations of the data is a bit scarce.

Hi,

For what it's worth, I'm neither an academic nor a statistician.  I
believe my data gave a fairly good indication of brokenness, but the
point was never to produce a very accurate percentage.  It was rather to
figure out *what* was going wrong most of the time, so that I could go
on to attempt to get those issues fixed.  I think I was fairly
successful at that, to be honest.  However, there are still unfixed
issues, and issues that are only fixed in newer versions of software
than what people are actually using.  Have a look here:

http://getipv6.info/index.php/Customer_problems_that_could_occur#.C2.ABBroken.C2.BB_users_unable_to_access_dual-stacked_content
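
To make concrete what "broken" means on that list: a broken user is one
whose host prefers the AAAA record but cannot actually complete the IPv6
connection, so a dual-stacked site stalls even though IPv4 would have
worked.  A quick self-test sketch (this is not my actual test rig; the
hostname, port and timeout below are just placeholders):

import socket

HOST = "www.example.com"   # placeholder dual-stacked hostname
PORT = 80
TIMEOUT = 5.0              # seconds; a real browser will wait far longer

def can_connect(family):
    # Returns True if a TCP connection succeeds over the given address family.
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False       # no A/AAAA record for this family
    for fam, socktype, proto, _cname, sockaddr in infos:
        s = socket.socket(fam, socktype, proto)
        s.settimeout(TIMEOUT)
        try:
            s.connect(sockaddr)
            return True
        except OSError:
            continue
        finally:
            s.close()
    return False

v4_ok = can_connect(socket.AF_INET)
v6_ok = can_connect(socket.AF_INET6)
if v4_ok and not v6_ok:
    print("IPv4 works but IPv6 does not -- dual-stacked content would break this host")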

> RIPE 61, Rome, 15-19 November 2010
> 
> http://ripe61.ripe.net/presentations/162-ripe61.pdf , Redpill Linpro,
> presented by Tore Anderson
> 
> * Went from 0.2 - 0.3% brokenness to, "over the last seven days",
> 0.058% (well before the change mentioned below)
> * Looking at http://www.fud.no/ipv6/ I'm reading 0.003% client loss
> now for the current week, but this might be too low due to recently
> introduced error handling of broken users, catching them before they
> reach the test rig.
> Could Tore comment on this a little perhaps? The 'Overall client
> loss' graph dips well *before* 2010-12-21... Perhaps brokenness
> numbers from the brokenness catcher can be included into the data?

The brokenness went up for two reasons.  First and foremost, summer
holidays ended.  Since university and enterprise networks are
overrepresented in the brokenness stats, this makes a lot of sense.
Second, there was a defective native IPv6 deployment at the University
of Oslo at the end of October (that's the small spike you see at the end
of October in the second-to-last slide).

After that, however, things improved.  Mac OS X 10.6.5 was rolling out
to more and more users, and the University of Oslo fixed the broken
network.  On the day we deployed (the 21st of December), the last sliding
7-day average brokenness number was at 0.024% and dropping fast.  At
that point it was probably also helped by people taking extended Christmas
holidays;  the timing of the deployment, right before a major holiday,
was very intentional.
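
For reference, the "sliding 7-day average" I keep quoting is simply
broken clients as a percentage of all measured clients over the trailing
seven days.  A quick sketch with invented figures (not my real data):

def sliding_brokenness(daily, window=7):
    # daily is a list of (clients_measured, clients_broken) tuples, one per day.
    # Yields (day_index, percent_broken) over a trailing window of days.
    for i in range(window - 1, len(daily)):
        span = daily[i - window + 1 : i + 1]
        total = sum(measured for measured, _ in span)
        broken = sum(b for _, b in span)
        yield i, 100.0 * broken / total

# Ten days of made-up numbers: 100000 clients measured, 25 broken each day.
daily = [(100000, 25)] * 10
for day, pct in sliding_brokenness(daily):
    print("day %d: %.3f%% broken" % (day, pct))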

The current 0.003% brokenness number on my page is way too biased
towards non-brokenness to be even remotely useful.  Ignore it, please.
I'll look into removing it when I have time.  I doubt I would be able
to use the data from the javascript test to make any useful statistics
either.  Its main use is to get in touch with broken users so I can
help them become unbroken, and to learn what is causing the breakage so
I can document it for all of you.

> So in summary, this well-studied issue has two current data points,
> one  at 0.082% and one at max 0.058% (or better). This is far from
> 0.3%. In fact, it's even outside 0.2% ±0.1%. :)

It's >0%, which makes it a hard business decision regardless.  How hard,
exactly, probably depends largely on the size and culture of the
corporation in question...

> And *if* these numbers can be positively improved by content providers
> having return 6to4 gateways *very* close, I think that is a point that
> can speak for itself in the debate of whether or not content providers
> should do that.

One nice thing is that since the release of Mac OS X 10.6.5, broken
6to4 users are really going away.  Not many users have that problem
any longer.  The problems I see most now are buggy CPEs, defective
IPv6 deployments, and HE/SixXS/etc tunnels that have broken, probably
because the user has gotten a new DHCP lease or something like that.
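
If you want to see which of those categories your own broken clients
fall into, the transition mechanisms are easy to tell apart by source
address: 6to4 is 2002::/16 and Teredo is 2001::/32, while tunnel-broker
and native prefixes need a whois lookup or a known-prefix list to split
further.  A quick sketch (the sample addresses are invented):

import ipaddress

SIXTO4 = ipaddress.ip_network("2002::/16")
TEREDO = ipaddress.ip_network("2001::/32")

def classify(addr):
    # Bucket a client source address by well-known transition prefixes.
    ip = ipaddress.ip_address(addr)
    if ip.version != 6:
        return "ipv4"
    if ip in SIXTO4:
        return "6to4"
    if ip in TEREDO:
        return "teredo"
    return "native-or-tunnel"   # needs whois/prefix list to split further

for a in ("2002:c000:204::1", "2001:0:53aa:64c::1", "2a02:c0::1"):
    print(a, "->", classify(a))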

> As evident by Lorenzos reported improvement by changing from multiple
> returned AAAA:s to a single A, it is apparent that there is still more
> work to be done on the content service side of this issue.

Reducing the number of AAAAs from two to one will halve the timeouts, but
the timeouts will still be unacceptable for the most part.  For example,
if www.arin.net had one AAAA record instead of two, it would take my
(intentionally) v6-broken Linux host only 12½ minutes to start rendering
the page, rather than 25.  But is there really any difference between
the two?
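
The arithmetic behind that: each unreachable IPv6 address costs a full
TCP connect timeout before the fallback to IPv4, so the stall scales
linearly with the number of AAAAs.  The numbers below are assumptions
picked to land roughly on the figures above (the classic ~189-second
Linux SYN retry budget, and a guess at how many sequential connections
the browser needs before it can render anything), not measurements of
www.arin.net:

PER_ADDRESS_TIMEOUT = 189    # seconds; assumed Linux TCP connect() give-up time
SEQUENTIAL_CONNECTS = 4      # assumed connections needed before first render

for aaaa_records in (2, 1):
    stall = aaaa_records * PER_ADDRESS_TIMEOUT * SEQUENTIAL_CONNECTS
    print("%d AAAA record(s): ~%.1f minutes before rendering starts"
          % (aaaa_records, stall / 60.0))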

Best regards,
-- 
Tore Anderson
Redpill Linpro AS - http://www.redpill-linpro.com/
Tel: +47 21 54 41 27


