Memory Resource Management
From: Alvin (replay_at_to_the_newsgroup.thanks)
Date: Thu, 01 Sep 2005 12:25:56 GMT
I have a program that links to a static library. I am trying to determine
how long a particular routine from the static library takes to execute. To
do this I am using gettimeofday() to calculate the elapsed time.
What I have noticed (and somewhat expected) is that each time I run the
program, the elapsed time gets shorter, until it levels off at a roughly
constant value. I expect that there is some caching
occurring on the OS side (SuSE 9.3 Pro - kernel 2.6.11.4-21.8-default).
However, what I find strange is that this "caching" seems to cross applications.
For example, I have a second program that links to the same static library.
What I have done is hacked that program to calculate the elapsed time in
exactly the same way as in the first program (using a couple of
gettimeofday() calls).
What I have noticed is that if I run the first program, I get that initial
lag. However, if I then run the second program, the elapsed time is
minimal. I have also done this test in the opposite order (after running
OpenOffice and Firefox to sort of reset the scheduler and memory resource
manager) and have seen the same thing: the second program has the initial
lag, while the first then runs smoothly.
What I think is happening is that the routine I am timing allocates various
arrays and data structures. The OS's memory manager then has to find
appropriate regions to accommodate the requested sizes, etc., which would
account for the initial lag. On the second run, the allocations occur
again, but the OS's memory manager just reuses the same spots in memory. Is
this basically correct? If so, can this occur between two separate
processes? Or is this happening only because I am timing the same routine
from the same static library?
Any explanations, hints, tips or URLs where I can read up more on this would
be greatly appreciated.