Question about IPAM tools for v6

Alexandru Petrescu alexandru.petrescu at gmail.com
Fri Jan 31 18:30:49 CET 2014


On 31/01/2014 18:13, Fernando Gont wrote:
> Alex,
>
> On 01/31/2014 01:47 PM, Alexandru Petrescu wrote:
>>>>> It's as straightforward as this: whenever you're coding something,
>>>>> enforce limits. And set them to sane defaults. And allow the admin
>>>>> to override them when necessary.
>>>>
>>>> I tend to agree, but I think you're talking about a different kind
>>>> of limit.  A limit meant to avoid memory overflow or thrashing is
>>>> not the same as one meant to protect against security attacks.
>>>
>>> What's the difference between the two? -- intention?
>>
>> Mostly intention, yes, but there are some differences.
>>
>> For example, if we talk limits of data structures then we talk mostly
>> implementations on the end nodes, the Hosts.
>
> Enforce, say, 16K, 32K, or 64K. And document it.

Well, it would be strange to enforce a 16K limit on a sensor which only 
has 4K of memory.  Enforcing that limit already means writing new code 
(and the 'if' checks it requires are among the most cycle-consuming 
operations).

On the other hand, the router which connects to that sensor may very 
well need a higher limit.

And there's only one stack.

I think this is why it would be hard to come up with a single such 
limit.
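
Concretely, the kind of limit being discussed might look like this 
minimal sketch (hypothetical code: the names, the 16K default and the 
eviction policy are all illustrative, not from any real stack):

    # Hypothetical bounded neighbor cache: one shared implementation,
    # a sane default limit, and an admin override per deployment.
    DEFAULT_ND_CACHE_LIMIT = 16384

    class NeighborCache:
        def __init__(self, limit=DEFAULT_ND_CACHE_LIMIT):
            self.limit = limit          # admin override point
            self.entries = {}

        def add(self, addr, lladdr):
            # Evict instead of growing without bound.
            if addr not in self.entries and len(self.entries) >= self.limit:
                self.entries.pop(next(iter(self.entries)))  # drop oldest
            self.entries[addr] = lladdr

    # The same stack serves both ends of the wire, with different limits:
    sensor_nc = NeighborCache(limit=64)      # 4K-RAM sensor
    router_nc = NeighborCache(limit=65536)   # access router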

>> For ND, if one puts a limit on the ND cache size on the end Host, one
>> would need a different kind of limit for the same ND cache on the
>> Router.  The numbers would not be the same.
>
> 64K probably accommodates both, and brings a minimum level of sanity.

Depends on whether it's Host or Router... sensor or server, etc.
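
For what it's worth, Linux already treats this as a tunable rather than 
a constant: the neighbour table ceiling is a runtime knob, so a router 
admin can raise it without touching the shared stack code.  Something 
along these lines in sysctl.conf (the 1024 default is from memory):

    # default hard ceiling on neighbour cache entries
    net.ipv6.neigh.default.gc_thresh3 = 1024
    # a busy router might raise it, e.g.:
    # net.ipv6.neigh.default.gc_thresh3 = 65536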

>>>> The protocol limit set at 64 (subnet size) is not something to prevent
>>>> attacks.  It is something that allows new attacks.
>>>
>>> What actually allows attacks are bad programming habits.
>>
>> We're too tempted to put that on the programmer's shoulders.
>
> It's the programmer's fault not to think about limits. And it's our
> fault (the IETF's) that we do not make the programmer's life easy -- he
> shouldn't have to figure out what a sane limit would be.

:-)

>> But a
>> kernel programmer (which is where ND sits) can hardly be suspected of
>> bad habits.
>
> The infamous "blue screen of death" would suggest otherwise (and this
> is just *one* example)...

The blame for the blue-screen-of-death is put on the _other_ programmers 
(namely the unapproved device-driver programmers). :-)  Hell is other 
people.

>> If one looks at the IP stack in the kernel one notices that
>> people are very conservative and very strict about what code gets there.
>
> .. in many cases, after... what? 10? 20? 30 years?
>
>
>>   These are not the kinds of people to blame for stupid errors such as
>> forgetting to set some limits.
>
> Who else?
>
> And no, I don't just blame the programmer. FWIW, it's a shame that some
> see the actual implementation of an idea as the less important part. A
> good spec goes hand in hand with good code.

I agree.

>>> You cannot claim to be something that you cannot handle being. I can
>>> pretend to be Superman... but if, after jumping out of the window, I
>>> somehow don't start flying, the thing ain't working... and it won't
>>> be funny when I hit the floor.
>>>
>>> Same thing here: Don't pretend to be able to handle a /32 when you
>>> can't.  In practice, you won't be able to handle 2**32 entries in the
>>> NC.
>>
>> I'd say it depends on the computer?  The memory size could handle it,
>> I believe.
>
> References, please :-)

Well, I am thinking of a simple computer with RAM, virtual memory and 
terabyte disks.  That would accommodate a 2^64-entry NC, no?

>>> Take the /64 as "Addresses could be spread all over this /64" rather
>>> than "you must be able to handle 2**64 addresses on your network".
>>
>> It is tempting.  I would like to read it that way.
>>
>> But what about the holes?  Will the holes be subject to new attacks?
>> Will the holes represent address waste?
>
> "Unused address space". In the same way that the Earth's surface is not
> currently accommodating as many many as it could. But that doesn't meant
> that it should, or that you'd like it to.

Hmm, intriguing... I could talk about the Earth and its resources, the 
risks, how long we must stay here together, the rate of population 
growth, and so on.

But this 'unused address space' is something one can't simply live with.

Without much fanfare, there are predictions talking of 80 billion 
devices arriving soon -- something like the QR codes on objects, etc. 
These would be connected directly or through intermediaries.  If one 
compares these figures, one realizes that such holes may not be welcome: 
they'd be barriers to deployment.

>> If we come up with a method to significantly spread out these holes,
>> such that we, the inventors, understand it, won't an attacker
>> understand it too, and attack it?
>
> Play both sides. And attack yourself. scan6
> (http://www.si6networks.com/tools/ipv6toolkit) exploits current
> addressing techniques. draft-ietf-6man-stable-privacy-addresses is
> meant to defeat it.
>
> Maybe one problem is the usual disconnect between the two: folks
> building stuff as if nothing wrong is ever going to happen, and folks
> breaking stuff without ever thinking about how things could be made
> better.  -- But not much of a surprise: pointing out weaknesses usually
> hurts egos, and in the security world fixing stuff doesn't get as much
> credit as breaking it.

It's food for thought.
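
For the record, the draft's idea is to derive the interface identifier 
from a hash of the prefix and a per-host secret key: addresses stay 
stable within a subnet, but look random to a scanner.  Roughly, as a 
simplified sketch (not the draft's exact algorithm; the names are 
illustrative):

    import hashlib

    def stable_iid(prefix, iface, secret, dad_counter=0):
        # Hash the prefix together with a per-host secret (plus a
        # counter bumped on DAD collisions); the draft allows any PRF.
        data = "{}|{}|{}|{}".format(prefix, iface, dad_counter, secret)
        digest = hashlib.sha256(data.encode()).digest()
        return digest[:8].hex()   # 64 bits as the interface identifier

    # Same host + same prefix => the same address every time; a
    # different host or prefix => an unrelated address, so scan6-style
    # guessing of patterned addresses fails.
    print(stable_iid("2001:db8:1::/64", "eth0", "per-host-secret"))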

Alex

>
> Cheers,
>




