Tunnel overhead [On killing IPv6 transition mechanisms]

nick hatch nicholas.hatch at gmail.com
Wed Mar 17 22:31:08 CET 2010


On Wed, Mar 17, 2010 at 1:27 PM, Ted Mittelstaedt <tedm at ipinc.net> wrote:
> OK, I see, however from a users perspective I actually like this, because it discourages content providers from sticking in a bunch of
> extra advertisements and such that crap up their site.
>
> I still maintain, though, that for 90% of the machines out there, the browser rendering engine is a far larger contributor to "unsnappy"
> websites.  Based on our customer feedback most people have older systems or newer systems that are crapped-up with adware.

Yes, the rendering engine can be a performance bottleneck. But that
bottleneck is shaped by how the site is authored: download latency,
for example, delays DOM construction, which stalls the rendering
engine, which slows the whole page. The WebKit blog has a great post
with more details [1]:

"Introducing just 50ms of additional latency doubled the page loading
time in the high bandwidth case (from ~3200ms to ~6300ms)."
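
To make the arithmetic concrete: when a page needs many sequential
round trips, a small per-request latency penalty multiplies. Here's a
back-of-the-envelope sketch in Python (every number in it is an
illustrative assumption, not a measurement):

    # Back-of-the-envelope model of sequential resource fetches.
    # Every number below is an illustrative assumption, not a measurement.
    REQUESTS = 40        # hypothetical resource count for one page
    TRANSFER_MS = 30     # assumed per-resource transfer time
    BASE_RTT_MS = 20     # assumed baseline round-trip latency
    ADDED_MS = 50        # the extra latency from the WebKit post

    baseline = REQUESTS * (BASE_RTT_MS + TRANSFER_MS)
    degraded = REQUESTS * (BASE_RTT_MS + ADDED_MS + TRANSFER_MS)
    print("baseline: %d ms, with +50 ms RTT: %d ms" % (baseline, degraded))
    # baseline: 2000 ms, with +50 ms RTT: 4000 ms -- once a page needs
    # many round trips, latency dominates, not bandwidth.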

I strongly disagree that junky PCs are the limiting factor for 90% of
people, though I don't have empirical data to back that up. In any
case, what you're describing is the lower bound for website
performance.

Website designers can't change several of those factors, and
"advertisements and such that crap up their site" isn't the biggest
problem. Designers focus on raising the upper bound of performance,
which improves things for everyone.

Yahoo has a great introduction to the topic [2]. Note that they don't
say "reduce complexity", "don't use ads", or "code for crap
computers!", but rather "reduce the number of HTTP requests".

> The Google toolbar alone adds an extra 2-3 seconds for most systems,

I've never seen performance issues like this with the Google toolbar,
ever. As stated earlier, Google hates the slow web, and has data to
back up why [3]. Does it really make sense to you that Google would
release a tool that slows down the web that much?

>> There's a cute tool which allows one to test it:
>>
>> http://www.alphaworks.ibm.com/tech/pagedetailer
>>
>> "A graphical tool that enables Web content providers to rapidly and
>> accurately measure client side performance of Web pages."
>>
>
> Cute tool, I wonder how many content providers actually read the
> advice in the help to put their sites on a diet.

These tools aren't just cute; they're very useful, and common:

http://code.google.com/speed/page-speed/
http://developer.yahoo.com/yslow/

Good web developers don't talk about "putting sites on diets"; they
talk about progressive enhancement, caching, compression, CSS sprites,
JavaScript include order, and so on. This topic has been studied
empirically from just about every angle, for obvious reasons.
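
Two of those, compression and caching, can be spot-checked in a few
lines. A quick sketch using Python's standard library (point it at a
URL of your choosing; the tools linked above do this properly):

    # Spot-check two of those optimizations: compression and caching.
    # A sketch only; real audits belong to the tools linked above.
    import sys
    import urllib.request

    req = urllib.request.Request(sys.argv[1],
                                 headers={"Accept-Encoding": "gzip"})
    resp = urllib.request.urlopen(req)
    print("Content-Encoding:", resp.headers.get("Content-Encoding", "none"))
    print("Cache-Control:", resp.headers.get("Cache-Control", "not set"))
    # "gzip" plus a sensible max-age means fewer bytes on the wire and
    # fewer repeat requests.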

Getting back to IPv6: Poorly performing transition mechanisms can
affect user experience in serious ways.
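
A tunneled path adds round trips, and those show up directly in
connection setup. A minimal sketch comparing TCP connect time over v4
and v6 to the same host (the hostname is a placeholder; substitute a
dual-stacked host, and take more than one sample before drawing
conclusions):

    # Compare TCP connect time over IPv4 vs. IPv6 to the same host.
    # A sketch: one sample each, minimal error handling.
    import socket
    import time

    HOST = "www.example.com"   # placeholder; pick a dual-stacked host
    PORT = 80

    for family, label in ((socket.AF_INET, "IPv4"),
                          (socket.AF_INET6, "IPv6")):
        try:
            addr = socket.getaddrinfo(HOST, PORT, family,
                                      socket.SOCK_STREAM)[0][4]
            start = time.time()
            sock = socket.socket(family, socket.SOCK_STREAM)
            sock.connect(addr)
            elapsed_ms = (time.time() - start) * 1000
            sock.close()
            print("%s connect: %.1f ms" % (label, elapsed_ms))
        except socket.error as e:
            print("%s: failed (%s)" % (label, e))
    # If the v6 number is consistently worse, users on that path pay the
    # difference on every new connection.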

-Nick

[1] http://webkit.org/blog/166/optimizing-page-loading-in-web-browser/
[2] http://developer.yahoo.com/performance/rules.html
[3] http://code.google.com/speed/

