Greenfield IPv4 + IPv6 broadband deployment

Martin Millnert martin at
Sun Feb 27 00:30:16 CET 2011

Hi Mark,

I realize I might have given the impression that what I described is
already rolling.  It is not.  The design only exists on paper at the
moment, and equipment is only now being delivered.  Your feedback is
appreciated.

On Sun, 2011-02-27 at 09:31 +1030, Mark Smith wrote:
> Hi Martin,
> What benefits are there of taking a /64 from the delegated prefix for
> this purpose? I generally like the idea of saying to the customer (via
> DHCPv6-PD), "here's your delegated prefix, use it how you want, I'll
> use this different separate /64 that I choose and manage for the link
> between us." 

Well, yeah, keeping the number of routes down would be the motivation.
But you are correct that you could just as well use a /64 from a
separate range for the RA prefixes.  (Aggregatable per PE box, as well.)
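As a rough illustration of the two options, a sketch using Python's ipaddress module (the prefix values are made-up examples, not from the actual design):

```python
import ipaddress

# Hypothetical delegated prefix for one customer (example value).
delegated = ipaddress.IPv6Network("2001:db8:1234::/48")

# Option 1: carve the PE-CE link /64 out of the customer's delegation;
# the link route aggregates into the delegated prefix, so it adds no
# extra route beyond the delegation itself.
link_from_delegation = next(delegated.subnets(new_prefix=64))

# Option 2: take the /64 from a separate operator-managed range; link
# prefixes then aggregate per PE box instead of per customer.
pe_link_pool = ipaddress.IPv6Network("2001:db8:ff00::/56")
link_from_pe_pool = next(pe_link_pool.subnets(new_prefix=64))

print(link_from_delegation)  # 2001:db8:1234::/64
print(link_from_pe_pool)     # 2001:db8:ff00::/64
```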

> If I understand you, you're using an IGP to push these per customer
> routes around. I think BGP would make this scale a lot further if
> necessary. Depending on the sorts of possible outages you have, and how
> many customer connections are impacted by them, BGP might be worth
> using anyway, because it uses TCP: if a BGP peer is struggling
> with temporary processing load, it can use TCP windows to tell its
> peers to back off for a while.

Possibly.  It is entirely a topic of its own, though. :)  Keep in mind,
the "PE" switches in question are 24- or 48-port switches: there are a
lot of them.  How would you set it up?  (My personal experience with
larger-scale shops is limited.)

A full iBGP mesh with that many devices requires very clever
configuration management, and has inherent scaling problems.
Ways to avoid those scaling problems include, I guess, "hacks" such as
confederations (each cross-connect room could in theory be its own
private ASN, peering with other cross-connect rooms and/or the core -
an interesting idea, actually), or route reflectors (not a very
attractive idea IMO).
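To put numbers on the full-mesh scaling concern, a back-of-the-envelope sketch (the device count of 500 PE switches is a made-up assumption, not from the actual deployment):

```python
# An iBGP full mesh needs a session between every pair of speakers:
# n*(n-1)/2 sessions in total, n-1 sessions configured per device.
def full_mesh_sessions(n):
    return n * (n - 1) // 2

# With route reflectors, each client peers only with the reflectors:
# roughly clients * reflectors sessions, plus the small mesh among
# the reflectors themselves.
def rr_sessions(clients, reflectors):
    return clients * reflectors + full_mesh_sessions(reflectors)

# Hypothetical deployment: 500 PE switches, 2 route reflectors.
print(full_mesh_sessions(500))  # 124750 sessions in a full mesh
print(rr_sessions(500, 2))      # 1001 sessions with route reflection
```

The configuration-management pain follows the same curve: every new PE added to a full mesh means touching every existing PE, whereas with route reflectors (or per-room confederation sub-ASes) only a handful of devices need new sessions.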


More information about the ipv6-ops mailing list