Re: prevent out of memory
From: Jean-David Beyer (jdbeyer_at_exit109.com)
Date: Tue, 03 Aug 2004 08:53:20 -0400
Juhan Leemet wrote:
> On Fri, 30 Jul 2004 21:35:26 -0400, Jean-David Beyer wrote:
>>Juhan Leemet wrote:
>>>On Fri, 30 Jul 2004 09:30:51 -0400, Jean-David Beyer wrote:
>>>>If you have users who run you out of swap space...
>>>I have run out of swap, rebuilding large packages (with maybe some kind
>>>of memory leak in a tool?).
>>I have heard of memory leaks but, like ghosts, I have never seen one.
>>I think I would have seen at least one if they were easy to produce. I
>>write C++ programs; since 1990 almost all the programs I have written
>>have been in C++, and I have never experienced memory leaks. In the old
>>C days, with malloc() and free(), I could get memory leaks... I do not
>>see why rebuilding large packages would have anything to do with it,
>>because the compiler compiles only one file at a time. Just where in
>>the rebuilding process do you run out of memory?
> It may not be a memory leak? I just cannot understand why so much memory
> is needed for building these packages.
Someone complained earlier, when I mentioned your reference to building
packages, that no one was talking about that.
The reason I very much doubt a memory leak in the process of building any
packages is that a great deal of package building is surely done by the
users of the GNU/Linux OS. If the kernel had memory-leak problems, they
would surely have come to the attention of the kernel developers by now,
and the problem would have been worked on and fixed, probably very quickly.
Similarly, if the compilers, linkers, loaders, etc., had memory leaks,
they would have been brought to the attention of the compilation-system
developers by now.
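To make concrete what I mean by the malloc()/free() kind of leak I
mentioned above, here is a hedged sketch (an illustration only, not a
claim about any particular tool you are using):

    #include <cstdlib>
    #include <cstring>
    #include <string>

    /* C style: the caller must remember to free() the result; any caller
       that forgets loses that memory until the process exits -- a leak. */
    char *c_style_copy(const char *s) {
        char *p = (char *)std::malloc(std::strlen(s) + 1);
        if (p != NULL)
            std::strcpy(p, s);
        return p;
    }

    /* C++ style: std::string owns its memory and releases it in its
       destructor, so there is nothing for the caller to forget. */
    std::string cxx_style_copy(const std::string &s) {
        return s;
    }

    int main() {
        for (int i = 0; i < 1000; ++i)
            c_style_copy("hello");  /* nobody free()s these: 1000 leaks */
        return 0;
    }

That is the whole mystery: in C it takes discipline, in C++ the
destructors do the bookkeeping for you.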
I do not know how you are building these packages. If you have a bunch of
makefiles, one of make's options allows many parts to be built in parallel
instead of sequentially. I forget how, since I never do that, but if you
tried to build the entire Linux kernel and all the supporting modules and
libraries, I can easily imagine the top-level make forking off a lot of
CPU- and memory-hungry processes at once.
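(Someone on this list can correct me, but I believe the option in question
is make's -j flag: "make -j4", say, allows up to four jobs at once, while
a bare "make -j" places no limit at all, which on a big source tree could
easily start the kind of pile-up I am describing. Check the make manual
before taking my word for it.)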
> Maybe some memory leak(s) together
> with large-scale memory use, e.g. when it's trying to link a big program
> with all of its libraries together? There are a lot of symbols required
> for that, etc. I'm not really sure why they get so big, but there have
> been some application packages that actually do complete, after a long
> time and a large amount of memory. Things like gcc, XFree86, OpenOffice
> are likely candidates?
If you are building packages like gcc, XFree86, OpenOffice, etc., there
should be next to no problems with memory leaks, because you are surely
not the first person to build them; the first people to attempt it would
have hit any such problem and, most likely, reported it very promptly. In
fact, the developers of gcc would notice it themselves, I expect, and not
release it until the problem was fixed.
> Personally, I find it hard to imagine any build of an application that
> requires >1GB memory+swap, but I guess I'm an old timer and remember
> running a (multipass) FORTRAN compiler on a PDP-8 with only 4K words of
> memory. Where does memory GO? Why does everything get so big? Bloat?
Perhaps a little bloat. But linking a large C++ program can use quite a
bit more than 4K words, because the mangled function names can be very long.
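As a hedged illustration of why the symbols get long (the exact mangled
spelling depends on the compiler's ABI, so I will not guess at it):

    #include <map>
    #include <string>
    #include <vector>

    /* Each instantiation of a template gets its own symbol, and the
       mangled name must encode the full template arguments -- here a map
       from string to a vector of strings -- so the names in the object
       files grow very long.  "nm -C" on the object file shows the
       demangled forms. */
    template <class Table>
    typename Table::mapped_type &lookup(Table &t,
                                        const typename Table::key_type &k) {
        return t[k];
    }

    int main() {
        std::map<std::string, std::vector<std::string> > symbols;
        lookup(symbols, std::string("main")).push_back("crt0.o");
        return 0;
    }

The linker has to read, compare, and store all of those strings, and a
big program has a great many of them.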
I wonder if there is something wrong with your installation. Some
misconfiguration, bad defaults, or something.
See, if there is a memory leak, or just a greedy demand for memory, in a
single program, then at some point, when the program does an sbrk() to get
more, it will get an error return rather than more memory. So it should
not cripple the system; it just will not complete normally.
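Behind the scenes, malloc() and operator new do the sbrk() (or mmap()) for
you, so a minimal sketch of the polite failure I mean looks like this
(hedged: exactly when the refusal arrives depends on your malloc and your
kernel settings):

    #include <cstdio>
    #include <new>

    int main() {
        unsigned long mb = 0;
        try {
            for (;;) {
                new char[1024 * 1024];  /* deliberately leak 1 MB at a time */
                ++mb;
            }
        } catch (std::bad_alloc &) {
            std::fprintf(stderr, "refused after about %lu MB\n", mb);
            return 1;   /* this program fails; the rest of the system goes on */
        }
    }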
If, instead, you run a whole lot of programs, each of which takes 2 GBytes
of RAM, none of them is harming the system by itself, but together they
all are. And, IIRC, what the Linux kernel will do is kill off processes
more or less arbitrarily so that it does not deadlock, though the machine
may slow to a crawl from disk thrashing first.
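My understanding, and I am hedging here because it depends on the kernel's
overcommit settings, is that the kernel does not really commit the memory
until a program touches it, which is why the failure can show up as a
killed process rather than as an error return. A sketch that actually
touches what it allocates makes the difference visible; do not run it on a
machine you care about:

    #include <cstdlib>
    #include <cstring>

    int main() {
        for (;;) {
            void *p = std::malloc(64 * 1024 * 1024);
            if (p == NULL)
                return 1;   /* the polite case: an error return */
            /* Writing to the block forces the kernel to commit the pages;
               on an overcommitting kernel this is the point where some
               process (not necessarily this one) may get killed instead. */
            std::memset(p, 1, 64 * 1024 * 1024);
        }
    }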
In the old days, when there was a fixed-size process table, the system
would refuse to fork a new process once the proc table filled up. I do not
know whether the proc table has a hard size limit these days, so perhaps
you never get a cannot-fork error anymore.
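For completeness, the cannot-fork error I mean is just fork() returning -1
with errno set; a minimal sketch (hedged: which errno you get depends on
which limit you hit, EAGAIN for process limits, ENOMEM when the kernel is
out of memory):

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <sys/types.h>
    #include <unistd.h>

    int main() {
        pid_t pid = fork();
        if (pid == -1) {
            /* e.g. EAGAIN when the process table / per-user limit is full,
               ENOMEM when the kernel cannot allocate for the new process */
            std::fprintf(stderr, "cannot fork: %s\n", std::strerror(errno));
            return 1;
        }
        if (pid == 0)
            _exit(0);   /* child does nothing */
        return 0;       /* parent */
    }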
--
 .~.  Jean-David Beyer          Registered Linux User 85642.
 /V\  Registered Machine        241939.
/( )\ Shrewsbury, New Jersey    http://counter.li.org
^^-^^ 08:30:00 up 7 days, 17:32, 3 users, load average: 4.13, 4.15, 4.10