Re: large binary immediately SEGV's

From: Nick Landsberg (hukolau_at_att.net)
Date: 02/21/04


Date: Sat, 21 Feb 2004 16:22:46 GMT


Jean-David Beyer wrote:

> Nick Landsberg wrote:
>
> > Ah, good. Now we have another reason (as opposed to the plain
> > statement by another poster that 91 MB was "idiotic").
> >
> > All good points, Jean-David.
>
> Thank you.

[ snip ]

> >
> > There are advocates of huge monolithic process structures on
> > single-CPU machines who claim that it lets the process itself control
> > the thread of execution, rather than rely on the vagaries of the
> > scheduler. It also avoids context switch overhead, as every message
> > (whether it be SysV or Sockets) involves a system call, and thus a
> > context switch into and out of kernel space.
>
> Fine if they can run without an OS. Otherwise, they are going to have to
> deal with an OS in any case. There will be a context switch every time
> the OS needs to do something anyway. Right now my SMP machine is doing
> nothing other than running two instances of setiathome, a
> compute-limited process, and this composer in Mozilla.
>
>  procs       swap       io      system      cpu
>  r  b  w    si  so    bi  bo    in   cs   us sy id
>  2  0  0     0  13     0  20   204  548   95  5  0
>  2  0  0     0   0     0   5   138  334   93  7  0
>
> Even if you think you are avoiding context switches, you probably are
> not. Now you do not just walk into a design review and throw different
> bubbles arbitrarily into different UNIX (or Linux) processes. You must
> give each process a descriptive name that pretty much tells people what
> the process does (but hides how it does it). Normally, you arrange each
> process to do a lot of processing with little input or output to keep
> message passing and context switching overhead low.

Agreed
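
For anyone following along, a rough sketch of what a single SysV
message costs: msgget, msgsnd and msgrcv are all system calls, so
every message means at least one kernel entry/exit on the sending
side and another on the receiving side (plus, usually, a context
switch to wake the receiver). The key, buffer size and message type
below are made up purely for illustration.

    /* One SysV message = one msgsnd() syscall for the sender and one
     * msgrcv() syscall for the receiver; each is a trip into and out
     * of kernel space.  Key, buffer size and type are arbitrary. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct req { long mtype; char text[64]; };

    int main(void)
    {
        struct req r;
        int qid = msgget((key_t)0x1234, IPC_CREAT | 0600);

        if (qid == -1) { perror("msgget"); return 1; }

        r.mtype = 1;
        strcpy(r.text, "hello");

        /* syscall #1: copy the message into the kernel's queue */
        if (msgsnd(qid, &r, sizeof r.text, 0) == -1)
            perror("msgsnd");

        /* syscall #2 (normally done in the peer process): block
         * until a type-1 message arrives and copy it back out */
        if (msgrcv(qid, &r, sizeof r.text, 1, 0) == -1)
            perror("msgrcv");

        return 0;
    }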

> >
> > In addition, if you have never worked with communicating processes
> > (and the possible error conditions when messages get lost or
> > delivered out of sequence), you will usually get it wrong the first
> > time (or even the second or third time).
>
> I have done it a lot. I find systems designed that way are easier to
> design, work better the first time, and are easier to modify (because
> the repercussions of implementation detail changes do not propagate
> beyond the process boundary unless you violate the interface
> specification), etc.
>

I used "you" in the generic sense, not about you personally.

> My favorite system came up with about 100 processes, but as more and
> more users came on line, other processes were spawned dynamically. A
> monolithic program would be unmanageable.
>
> We arranged that the sequence of messages did not matter. Each process
> was a finite state machine, and the messages could come in any order. If
> the messages got lost, the response expected by the sender timed out and
> the sender could send it again, or do something else. The biggest
> problem, IIRC, was when the input queues of processes filled up (we were
> using System V IPC at the time), and it was a lot of bother to design
> the system so that could not happen.

I think you have just made my point for me. If a developer
has never dealt with this in the past, it is just those
kinds of things which will bite him or her in the anatomy.
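
To make it concrete for anyone who hasn't been bitten yet: with SysV
queues, msgsnd() called with IPC_NOWAIT fails with EAGAIN when the
receiver's queue is full, and the sender suddenly needs a policy --
retry, drop, or raise an alarm. A sketch only; the retry count and
back-off interval are arbitrary.

    /* Sketch of the "queue full" case: with IPC_NOWAIT, msgsnd()
     * returns -1 with errno == EAGAIN when the destination queue is
     * full, and the sender must decide what to do.  Retry count and
     * back-off interval below are arbitrary. */
    #include <errno.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct req { long mtype; char text[64]; };

    int send_with_retry(int qid, struct req *r)
    {
        int tries;

        for (tries = 0; tries < 5; tries++) {
            if (msgsnd(qid, r, sizeof r->text, IPC_NOWAIT) == 0)
                return 0;               /* delivered */
            if (errno != EAGAIN)
                return -1;              /* real failure, give up */
            usleep(10000);              /* queue full: back off, retry */
        }
        return -1;  /* receiver is not draining its queue fast enough */
    }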

>
> As it was designed, we could even change the implementation of running
> processes and test them without taking the system down. Certain users
> could run the new implementation and the rest (most of them) got the
> default implementation. We just had to obey the interface specification,
> and if we goofed that up, only our own stuff failed. This was a
> wonderful way to test new algorithms and stuff on live data without
> hurting the innocent.

Ah, this is where our worlds diverge. It has been many
years since I worked on systems which dealt directly
with end-users. Rather, I work on systems which
process large amounts of requests from other systems.
These other systems are always connected. This is
what I meant by "it's ok at startup", below. We chose
a "subsystem" style of architecture, e.g. "COMM handlers"
for the external communications, "LOGIC handlers"
to handle the various types of requests, and a DBMS
abstraction layer (which can be either linked in
or run as client-server). There are about 20
processes doing the "heavy lifting" and another
80 or so, also started at init time, which
hang around to do periodic maintenance activities.
A few processes are transients invoked by cron, very few.

We don't have the luxury of testing a new version
of a LOGIC handler on a few guinea pig users.

>
> > There are tradeoffs to any
> > design. You have correctly mentioned some of the benefits of a
> > loosely coupled design and have mentioned some of the benefits of the
> > monolithic design. (I am NOT a proponent of the monolithic design,
> > BTW, I am just playing devil's advocate.)
>
> Understood.
> >
> > It's up to the OP and his organization to make the choice, and they
> > seem to have settled on the monolithic, for better or for worse.
> >
> Yes, and by now, even if the OP tried to convince his organization to
> change, they would argue that it is too late: they have too much
> invested in what they already have. Reminds me of a motto I ascribed to
> a former employer, though they HATED it: "We haven't time to stop for
> gas; we're late already!"

Well put!

>
> Ah well, time to re-read Fred Brooks' book, "The Mythical Man-Month",
> again, I suppose.
> >
> > P.S. - I would tend to disagree that process creation is "cheap" on
> > ANY O/S. If it is ONLY done on startup, then it's OK. If programs
> > fork()/exec() others willy-nilly, it's a real pig.
> >
> A pig compared to sqrt(integer), no doubt, but not compared to UNIX in
> about 1970. It is much, much better now, when we do paging instead of
> process swapping and the processors have memory management units at
> their disposal, ... .
>
> Well, for most operating systems, process creation is costly, when it
> is allowed at all, and I have run on systems where you could not do it
> at all: a privileged user had to introduce a new process to the system
> before it could be run. But in Linux, fork|exec are pretty good
> compared to the average OS. In the systems I worked on with dynamically
> created processes, there tended to be less than a dozen created when a
> new user logged into the system (typically a user did that at most once
> a day). Recall that these days, if there are many instances of a process
> (as was typical of our system), they all share the same instruction
> space, so the memory consumption is reduced to only the necessary data
> and stack space. I do not know whether fork, these days, notices that
> the parent and child are the same process and so allocates only the
> data space, or not. But exec should know that what it is invoking is
> just another instance of a running program and need not remanage the
> memory for redundant copies of the instruction space.
>

Agreed. But in the system I described, we budget the CPU cycles
very carefully. The LOGIC handlers must complete in <100
microseconds and the DBMS lookup in <500 microseconds.
A fork/exec pair is still measured in milliseconds of CPU
time.
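
(If anyone wants a rough number for their own box, something along
these lines will do -- the child program and the iteration count are
arbitrary, and it reports elapsed time per pair rather than CPU time
charged.)

    /* Rough measurement of a fork/exec/wait round trip.  /bin/true
     * and the loop count are arbitrary; this reports elapsed wall
     * clock per pair, not CPU time. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct timeval t0, t1;
        double usec;
        int i, n = 100;
        pid_t pid;

        gettimeofday(&t0, NULL);
        for (i = 0; i < n; i++) {
            pid = fork();
            if (pid == 0) {
                execl("/bin/true", "true", (char *)NULL);
                _exit(127);             /* exec failed */
            }
            waitpid(pid, NULL, 0);
        }
        gettimeofday(&t1, NULL);

        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%.0f microseconds per fork/exec/wait\n", usec / n);
        return 0;
    }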

That's another place where our experiences diverge.

(No, we do not use a disk database; it's all in memory,
all 23 GB of it, with another 23 GB reserved for schema
updates on the fly, and the remaining 2 GB for executables.
We design not to page during steady state.)

-- 

"It is impossible to make anything foolproof because fools are so 
ingenious" - A. Bloch