Question about IPAM tools for v6

Fernando Gont fgont at
Fri Jan 31 18:13:51 CET 2014


On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:
>>>> It's as straightforward as this: whenever you're coding something,
>>>> enforce limits. And set it to a sane default. And allow the admin to
>>>> override it when necessary.
>>> I tend to agree, but I think you talk about a different kind of limit.
>>> This kind of limit to avoid memory overflow, thrashing, is not the same
>>> as to protect against security attacks.
>> What's the difference between the two? -- intention?
> Mostly intention, yes, but there are some differences.
> For example, if we talk limits of data structures then we talk mostly
> implementations on the end nodes, the Hosts.

Enforce, say, 16K, 32K, or 64K. And document it.
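The pattern being argued for -- a hard limit with a sane default that the admin can override -- is trivial to sketch. The names and the 64K default below are illustrative, not taken from any real stack:

```python
# Illustrative sketch of a bounded neighbor cache: inserts beyond the
# configured limit are refused instead of growing without bound.
# DEFAULT_ND_CACHE_LIMIT and the class name are made up for this example.

DEFAULT_ND_CACHE_LIMIT = 64 * 1024  # sane default; admin may override

class NeighborCache:
    def __init__(self, limit=DEFAULT_ND_CACHE_LIMIT):
        self.limit = limit
        self.entries = {}

    def add(self, address, link_layer_addr):
        # Updating an existing entry never grows the cache, so allow it.
        if address in self.entries:
            self.entries[address] = link_layer_addr
            return True
        # Enforce the limit: refuse new entries rather than exhaust memory.
        if len(self.entries) >= self.limit:
            return False
        self.entries[address] = link_layer_addr
        return True
```

The point is simply that the refusal path exists at all; what to do on refusal (drop, evict, log) is a separate policy decision.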

> For ND, if one puts a limit on the ND cache size on the end Host, one
> would need a different kind of limit for same ND cache size but on the
> Router.  The numbers would not be the same.

64K probably accommodates both, and brings a minimum level of sanity.

>>> The protocol limit set at 64 (subnet size) is not something to prevent
>>> attacks.  It is something that allows new attacks.
>> What actually allows attacks are bad programming habits.
> We're too tempted to put that on the back of the programmer.

It's the programmer's fault not to think about limits. And it's our
fault (the IETF's) that we do not make the programmer's life easy -- he
shouldn't have to figure out what a sane limit would be.

> But a
> kernel programmer (where the ND sits) can hardly be supposed to be
> using bad habits.

The infamous "blue screen of death" would suggest otherwise (and this is
just *one* example)...

> If one looks at the IP stack in the kernel one notices that
> people are very conservative and very strict about what code gets there.

.. in many cases, after... what? 10? 20? 30 years?

>  These are not the kinds of people to blame for stupid errors such as
> forgetting to set some limits.

Who else?

And no, I don't just blame the programmer. FWIW, it's a shame that some
see the actual implementation of an idea as less important stuff. A good
spec goes hand in hand with good code.

>> You cannot pretend to be something that you cannot handle. I can
>> pretend to be Superman... but if after jumping out the window somehow
>> I don't start flying, the thing ain't working... and it won't be funny
>> when I hit the floor.
>> Same thing here: Don't pretend to be able to handle a /32 when you
>> can't. In practice, you won't be able to handle 2**32 in the NC.
> I'd say depends on the computer?  The memory size could, I believe.

References, please :-)
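Back-of-the-envelope arithmetic makes the point: even with a modest per-entry cost, a fully populated 2**32-entry neighbor cache is enormous (the 64 bytes per entry is an assumption for illustration; real stacks vary):

```python
# Memory needed to hold 2**32 neighbor-cache entries.
# 64 bytes/entry is an assumed figure for illustration only.
entries = 2 ** 32
bytes_per_entry = 64
total = entries * bytes_per_entry
print(total // 2 ** 30, "GiB")  # 256 GiB
```

That is 256 GiB for a /32 -- and a /64 is 2**32 times larger still, which is why "be able to handle 2**64 addresses" cannot be the requirement.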

>> Take the /64 as "Addresses could be spread all over this /64" rather
>> than "you must be able to handle 2**64 addresses on your network".
> It is tempting.  I would like to take it so.
> But what about the holes?  Will the holes be subject to new attacks?
> Will the holes represent address waste?

"Unused address space". In the same way that the Earth's surface is not
currently accommodating as many many as it could. But that doesn't meant
that it should, or that you'd like it to.

> If we come up with a method to significantly distribute these holes such
> that us the inventors understand it, will not another attacker
> understand it too, and attack it?

Play both sides. And attack yourself. scan6 exploits current
addressing techniques. draft-ietf-6man-stable-privacy-addresses is meant
to defeat it.

Maybe one problem is the usual disconnect between the two: folks
building stuff as if nothing wrong is ever going to happen, and folks
breaking stuff without ever thinking about how things could be made
better. -- But not much of a surprise: pointing out weaknesses usually
hurts egos, and in the security world fixing stuff doesn't get as much
credit as breaking it.

Fernando Gont
SI6 Networks
e-mail: fgont at
PGP Fingerprint: 6666 31C6 D484 63B2 8FB1 E3C4 AE25 0D55 1D4E 7492

More information about the ipv6-ops mailing list