Re: [SLE] Doubt about SMP's and parallel jobs
From: Rikard Johnels (rikard.j_at_rikjoh.com)
To: email@example.com
Date: Wed, 14 Sep 2005 18:56:41 +0200
On Wednesday 14 September 2005 17.05, Randall R Schulz wrote:
> On Wednesday 14 September 2005 07:49, Chaitanya Krishna A wrote:
> > Hi,
> > This could be a bit off the list, but still ...
> > The output of uname -a of my machine is as below
> > Linux achala 126.96.36.199-21.8-smp #1 SMP Tue Jul 19 12:42:37 UTC 2005
> > i686 i686 i386 GNU/Linux, and I guess SMP stands for Shared Memory
> > Processor. I have two processors on my motherboard.
> SMP: "Symmetric Multi-Processor"; "Symmetric" because all processors are
> co-equal in their capabilities and ability to access shared resources
> such as memory and I/O ports / devices.
> > I am doing my work in Molecular Dynamics simulations. So most of the
> > time I would be doing a lot of number crunching. Now if start a job
> > on my machine, does it automatically run using both the processors on
> > my machine, or will I have to use a message passing library like MPI
> > to use both the processors?
> Nothing truly automatically parallelizes. Depending on the language used
> to implement the application, it can be more or less work to exploit
> multi-processor hardware. E.g., if the application is written in Java
> and you're using the latest JVM from Sun, then at a minimum you get
> parallelization of I/O and garbage collection (w.r.t. the main
> thread or threads that perform the work of your application).
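[To make Randall's point concrete, here is a small sketch of my own (not from his mail): in Python, a plain loop runs on a single CPU regardless of how many are installed, and the second processor is only used because we explicitly split the work across worker processes with the standard-library multiprocessing module.]

```python
# Sketch: parallelism must be asked for explicitly -- nothing is automatic.
from multiprocessing import Pool

def crunch(n):
    """A stand-in for one chunk of number crunching."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [100_000] * 4

    # Serial: stays on one CPU no matter how many the box has.
    serial = [crunch(n) for n in chunks]

    # Parallel: uses the second CPU only because we explicitly
    # request 2 worker processes and split the work by hand.
    with Pool(processes=2) as pool:
        parallel = pool.map(crunch, chunks)

    assert serial == parallel
```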
> > I experimented with this some time back, I ran the same job with
> > ./executable and also mpirun -n 2 ./executable on my machine (no
> > clustering or anything). The second one gave marginally better results
> > and top showed two processes running. Can someone explain what's
> > happening?
> Clearly you're referring to some specific MPI system (probably
> <http://www-unix.mcs.anl.gov/mpi/>?) of which I'm not aware, so I
> cannot say definitively whether it can exploit your multiprocessor x86
> system. Are you certain your application is written to use this MPI
> system?
> Keep in mind that depending on the nature of the algorithms that
> dominate the application in question the magnitude of any speed-up
> possible _in principle_ varies. In practice, of course, one rarely sees
> the full speed-up that is possible because of the various overheads in the
> software that provides the parallelism (your MPI system, e.g.).
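[The in-principle limit Randall refers to is usually stated as Amdahl's law: if a fraction p of a program's work can run in parallel, the best possible speed-up on n processors is 1 / ((1 - p) + p/n). A quick sketch of my own, not from his mail:]

```python
def amdahl_speedup(p, n):
    """Best-case speed-up when a fraction p of the work
    can run in parallel on n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel program tops out below 2x on 2 CPUs,
# which is one reason 'mpirun -n 2' rarely doubles throughput.
print(round(amdahl_speedup(0.9, 2), 2))  # -> 1.82
```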
> > Regards,
> > Chaitanya.
> Randall Schulz
> Check the headers for your unsubscription address
> For additional commands send e-mail to firstname.lastname@example.org
> Also check the archives at http://lists.suse.com
> Please read the FAQs: email@example.com
I have found that calculations, image processing, and other CPU-intensive
tasks only benefit if the code is parallelized from the start.
The one thing that DOES speed up is the overall responsiveness of the system.
(The number-crunching thread uses one CPU and the rest of the system the other.)
But if the code is written to utilize SMP, then it will be a lot faster.
An example was my Dual Celeron 433 MHz (God rest it), which was a lot more
responsive under heavy load while processing images than my current
single-CPU P3/733 MHz running the same setup/system.
(GIMP isn't written to utilize SMP systems.)
--
/Rikard

---------------------------------------------------------------
 Rikard Johnels
 email : firstname.lastname@example.org
 Web   : http://www.rikjoh.com
 Mob   : +46 (0)763 19 76 25
 PGP   : 0x461CEE56
---------------------------------------------------------------