Re: Memory overcommit misbehaves

"jon wayne" <jon.wayne.01@xxxxxxxxx> writes:


> I was always under the assumption that Linux always overcommits memory
> by default - but I'm getting unexpected results when requesting a large
> amount of memory using new (C++).

Linux does not overcommit unconditionally. The default heuristic mode
(vm.overcommit_memory = 0) still refuses requests that are obviously
impossible to satisfy.

> In the sense, say I try to dynamically allocate a large array p (int *):
>
> p = (int *) malloc(N * sizeof(int)); // -- 1

> and replace it with
>
> p = new int[ N * sizeof(int) ]; // -- 2
>
> where N = 1000000000000000
>
> the second statement always generates a bad_alloc exception.

I would hope so. Your system does not have that much memory, in RAM, swap,
or anywhere else. Also, N * sizeof(int) = 4x10^15 bytes, roughly 2^52, is
probably far more than the address bus on your system can address.
Just why are you trying to do such a ridiculous thing?

> Agreed that if you try to access p it'd give a SIGSEGV - but why
> should a plain allocation give a bad_alloc? "C" doesn't seem to mind
> it - shouldn't C++ behave the same?
>
> I suspect it could be because C++ uses a different memory management
> library - could someone please clarify?

> (When I do an strace, I find both of the above versions end up
> calling mmap().)

Maybe if you told us what it is that you really want to do, rather than
this totally made-up example, you might get better help.


> gcc 3.4.3
>
> linux - 2.4.21-40.EL
>
> I'd really appreciate some info on this.