Re: [OT] Interview with Con Kolivas on Linux failures

On Tue, Jul 24, 2007 at 04:02:39PM -0500, Mike McCarty wrote:
Andrew Sackville-West wrote:


The modularity has some positives: a failure in one module will
not bring down the whole system. Of course, this is pretty rare in
Linux these days too, but it is certainly possible. It also provides some
serious security bonuses, because a security failure in one
user-inserted module does not mean that the rest of the system is
compromised the way it would be in the monolithic kernel model. I guess
some of these ideas are working their way into Linux with the
inclusion of user-space drivers.

What you list as advantages of the microkernel approach are not
all obvious to me. All kernel services which are necessary should
be part of the kernel. Maybe it should be a daemon or
My bad language; what I was referring to here was drivers, I
guess. Though I think the terms are a little different with a
microkernel. So, Wikipedia says that microkernels essentially only
provide address management, thread management, and inter-process
communication. Everything else gets moved to userspace using
servers. So that means, the article further states, that any crash can
be corrected by restarting the appropriate server to bring up whatever
service it was providing (network, display, device access) without
rebooting the machine.
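The restart-a-server idea can be sketched as an ordinary userspace supervisor loop. This is only a toy illustration of the principle (a real microkernel restarts servers via its own IPC machinery, not subprocess); the restart limit and the stand-in "server" command are made up:

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Keep a 'server' process running, restarting it when it crashes.

    The microkernel analogy: a crashed userspace server (network
    stack, display, device access) gets restarted the same way,
    without rebooting the kernel itself.
    """
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.run(cmd)
        if proc.returncode == 0:   # clean exit: nothing to recover
            return restarts
        restarts += 1              # crash: bring the server back up
    return restarts

if __name__ == "__main__":
    # A stand-in "server" that always crashes with exit status 1.
    crashes = supervise([sys.executable, "-c", "raise SystemExit(1)"])
    print(f"restarted {crashes} times")
```

The kernel-side state (address spaces, threads, IPC channels) survives the crash; only the failed service is rebuilt.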

OTOH, one serious lack (IMO) with Linux is that drivers cannot
be started, stopped, uninstalled, and installed on a running
system. LynxOS supported that, ...

Sounds very much like, at the user level, the same concept as the
microkernel approach. In fact, LynxOS is mentioned (along with Minix)
as an example of a microkernel (with some added device drivers to aid
in booting).

I am not an expert in microkernel architecture.

AOL big time!

There are also negatives: there is overhead in the communication
between the modules that might not be there in the monolithic
model. And, I suppose, having the system remain up when all the

I have certainly read literature to this effect.

modules for the input methods go down is only of minor convenience,
but I really don't know what I'm talking about here.



A parallel conversation on /. (I know, I know, it's an addiction) was
discussing implementation of different lines for MS again, splitting
between a desktop-user oriented release and a more stable business
release. Who knows what that all means, but it's an intriguing parallel
to the ck situation. He wanted a better desktop while Linux is
pushing for more server-oriented priorities.


Umm, you seem to have the impression that there are scheduling
algorithms which are "good for desktop apps and bad for other
types of apps" and scheduling algorithms which are "good for
server apps, but bad for desktop or other apps". The truth is that there
are predictable algorithms which give guaranteed results, but
require tuning to get the results you want, including assigning
what priorities are required by each app, and there are ad-hoc
scheduling algorithms which try to give time to apps based on
what kind of app they are. The former results in somewhat difficult
to tune systems which give predictable performance. The latter
gives systems which seem easier to tune because they mostly
work acceptably, but which are actually impossible to tune to get any
kind of guaranteed behavior.

I'm not sure actually which of my emails you're responding to, but I
went into more details on how *I* think a system should behave in a
rant about desktop OSes...

I understand what you are saying about the different types of
schedulers. Thanks for the insight.

IOW, the former requires the user to characterize his needs, but
then guarantees that those needs get met (or fails to, and lets
you know if it doesn't have enough resources), the latter tries to
guess what the user needs, and when it fails gives the user no recourse.

The former also requires that apps be written to realize that,
just because they are necessary, they shouldn't just run forever
without either blocking or yielding. If they have latency requirements,
then they need high priority, but shouldn't hog the CPU.
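The "predictable algorithms which give guaranteed results" family can be made concrete. Rate-monotonic scheduling (fixed priorities, shortest period wins) comes with the classic Liu & Layland utilization bound as a quick feasibility check; the task set below is invented purely for illustration:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) schedulability test for
    rate-monotonic fixed-priority scheduling.

    tasks: list of (worst_case_exec_time, period) pairs, same units.
    If total utilization stays under the Liu & Layland bound
    n*(2^(1/n) - 1), every task is guaranteed to meet its deadline.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Hypothetical task set: (CPU time, period) in milliseconds.
tasks = [(1, 10), (2, 20), (5, 50)]   # utilization = 0.1 + 0.1 + 0.1
print(rm_schedulable(tasks))
```

This is exactly the "characterize your needs up front" trade-off: you must supply worst-case costs and periods, but in exchange you get a guarantee instead of a guess.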

ISTM that some sort of merging between the two is where it's
at. Certain things you want definable behavior for; other things, you
just don't care about. For example, since this was all predicated on a
discussion of ck's interface responsiveness ideas: in a user
interface, it's probably easy to quantify what the response time should
be. A human being can only type so fast or react to input so
quickly... a mouse only needs to show up so many times a second for a
human being to see it all the time... If you want a perfectly
responsive human interface, then, you merely have to be looking for and
responding to input at that frequency. This should be easily doable
and tunable to get a response the user likes, as that is predictable
behavior. Your keyboard interface only needs to be able to respond to,
say, at most 8 key presses per second (a really fast typist), and that
only in bursts. So how many CPU cycles does it take to respond to a
keystroke... do the math and you're done. Similar things can be done
for mouse response, menu drawing, etc.
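The "do the math" step is short. All the numbers here are illustrative assumptions (a 1 GHz CPU and a guessed per-keystroke cost), not measurements:

```python
# Back-of-the-envelope budget for keyboard responsiveness.
# Every figure below is an assumption for illustration only.
cpu_hz = 1_000_000_000          # assumed 1 GHz CPU
cycles_per_keystroke = 100_000  # assumed cost to handle one key event
keystrokes_per_sec = 8          # the "really fast typist" from above

cycles_needed = cycles_per_keystroke * keystrokes_per_sec
fraction = cycles_needed / cpu_hz
print(f"keyboard handling needs {fraction:.4%} of the CPU")
```

Even with generous assumptions, input handling is a tiny, bounded load, which is what makes a guaranteed-latency budget for the interface plausible in the first place.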

And in my opinion, one would desire this kind of behavior from the
interface. The desktop interface is there to serve me on my
machine. There is plenty of downtime for it to do other stuff. So in
that respect, the predictable, though fussy to set up, scheduler is a
good thing. When I sit down at the machine, I want it to respond "now"
within the confines of "now" as determined above.

So then the question is, what do you do with the rest of your
time... That's where the guessing scenario comes in. The rest of your
time just needs to be divided up somehow, and guessing is as good as
anything else at that point (this assumes the user hasn't somehow
indicated that process X must be done now...). The user is already
satisfied by the machine acknowledging their input in some reasonable
way, and the rest is just allocating resources efficiently to get
done everything that's been started. I know it's not that simple, but I
think from a user's perspective it should be...

BTW, not all real time scheduling algorithms are priority
preemptive, and there is active research going on in RTOS scheduling,

When multithreaded apps enter in, then a whole host of other

Sounds like I need to go back to school...

AIUI, the scheduler in Linux is integrated (someone correct me
if I'm wrong) but supports more than one policy. My machine reports

$ apropos scheduler
sched_setscheduler (2) - set and get scheduling algorithm/parameters

this gets discussed at the bottom of the page:

but I don't think it's really apropos to the discussion of desktop
system responsiveness. Still interesting though... it's how to set
priorities on a per-task basis, essentially. I don't think, without
installing some packages and reading, that it actually changes the
scheduling policy of the machine overall. Maybe this is what's needed,
though: some method for the user to effect changes in scheduling
policy in an overall manner to get the performance they desire.
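For the curious, Python wraps the same per-task interface the man page describes. A small sketch (Linux-only; it just inspects the current policy and re-applies the default time-sharing one, since switching to a realtime policy like SCHED_FIFO normally requires root):

```python
import os

# Inspect the scheduling policy of the current process (pid 0 = self).
policy = os.sched_getscheduler(0)
names = {os.SCHED_OTHER: "SCHED_OTHER",
         os.SCHED_FIFO: "SCHED_FIFO",
         os.SCHED_RR: "SCHED_RR",
         os.SCHED_BATCH: "SCHED_BATCH",
         os.SCHED_IDLE: "SCHED_IDLE"}
print("current policy:", names.get(policy, str(policy)))

# Re-applying the default time-sharing policy needs no privileges;
# SCHED_OTHER takes static priority 0.  Requesting SCHED_FIFO or
# SCHED_RR here would raise PermissionError without CAP_SYS_NICE.
os.sched_setscheduler(0, os.SCHED_OTHER, os.sched_param(0))
```

So the per-task knob exists today; what it doesn't give you is the overall policy change discussed above.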

I am not a Linux scheduler expert.

AOL again.

[*] Except for Internet Explorer, of course.

heh heh...
