Re: IA64 Linux VM performance woes
From: Satoshi Oshima (oshima_at_sdl.hitachi.co.jp)
Date: Wed, 21 Apr 2004 21:39:23 +0900 To: email@example.com
Hello, Michael and all.
We have observed the same kind of performance issue.
In our case it is not a huge-scale IA64 system but an IA32 server.
In our experiments we see file I/O throughput decline on
servers with more than 8 GB of memory. The kernel versions we use
are 2.6.0 and Red Hat AS3. Our experiment is described below.
Below is our hardware configuration and test bench.
CPUs: Xeon 1.6 GHz, 4-way
Storage: ATA, 120 GB
The file I/O workload generator consists of 1024 processes and
generates file writes of 100 KB to 5 MB. Using the "mem=" boot
option, we vary the recognized memory from 2 GB to 12 GB.
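As a rough illustration (not the actual tool we used), the behavior of such a workload generator can be sketched in Python; the process and file counts below are scaled down, and all names are hypothetical:

```python
import os
import random
import tempfile
import time
from multiprocessing import Pool

MIN_SIZE = 100 * 1024        # 100 KB, smallest file written
MAX_SIZE = 5 * 1024 * 1024   # 5 MB, largest file written

def write_one_file(dirname):
    """Write one file of random size; return (bytes written, elapsed seconds)."""
    size = random.randint(MIN_SIZE, MAX_SIZE)
    data = os.urandom(size)
    fd, path = tempfile.mkstemp(dir=dirname)
    start = time.monotonic()
    try:
        os.write(fd, data)
        os.fsync(fd)             # push the data to disk, not just to the cache
    finally:
        os.close(fd)
        os.unlink(path)
    return size, time.monotonic() - start

def run(dirname, nprocs=8, files_per_proc=4):
    """Aggregate throughput in MB/sec over all workers (toy scale,
    not the 1024 processes of the real experiment)."""
    with Pool(nprocs) as pool:
        results = pool.map(write_one_file, [dirname] * (nprocs * files_per_proc))
    total_bytes = sum(s for s, _ in results)
    total_time = sum(t for _, t in results)
    return total_bytes / (1024 * 1024) / max(total_time, 1e-9)
```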
Below are the results (unit: MB/sec).

        2GB    4GB    8GB    12GB
2.6.0   13.1   18.5   18.4   16.1
AS3     11.0   11.3   10.3    8.92
The results show that the throughput decline occurs when the server
has more than 8 GB of memory.
We agree that your proposal is a good idea. Setting an upper bound
on the number of page cache pages reduces the cost of reclaiming
cache memory.
Generally it is very difficult to build one system that handles
various types of workload well, so we hope Linux will provide a
kernel-parameter tuning interface. We would be very happy to share
information about managing large-scale memory.
Systems Development Laboratory
>>We are trying to deploy a 128 PE SGI Altix 3700 running Linux, with 265GB
>>memory and 10TB RAID disk (TP9500) :
>># cat /etc/redhat-release
>>Red Hat Linux Advanced Server release 2.1AS (Derry)
>># cat /etc/sgi-release
>>SGI ProPack 2.4 for Linux, Build 240rp04032500_10054-0403250031
>># uname -a
>>Linux c 2.4.21-sgi240rp04032500_10054 #1 SMP Thu Mar 25 00:45:27
>>PST 2004 ia64 unknown
>>We have been experiencing bad performance and downright bad behavior when
>>we are trying to read or write large files (10-100GB).
>>File Throughput Issues
>>At first, the throughput we are getting without file cache bypass is at
>>440MB/sec MAX. This specific file system has LUNs whose primary FC paths
>>are spread over all four 2Gb/sec FC channels, so the max throughput should
>>have been close to 800MB/sec.
>>I've also noticed that the FC adapter driver threads are running at 100%
>>utilization when they are pumping data to the RAID for a long time. Is
>>any data copying taking place in the drivers? The HBAs are from QLogic.
>>VM Untoward Behavior
>>A more disturbing issue is that the system does NOT clean up the file
>>cache, and eventually all memory gets occupied by FS pages. Then the
>>system crashes. We tried enabling / removing bootCPUsets, bcfree and
>>anything else available to us. The crashes just keep coming. Recently we
>>started seeing a lot of 'Cannot do kernel page out at address' messages
>>from the bdflush and kupdated threads as well. This complicates any
>>attempt to tune the FS in a way that maximizes the throughput, and to
>>finally set up sub-volumes on the RAID so that different FS performance
>>objectives can be attained.
>>Tuning bdflush/kupdated Behavior
>>One of our main objectives at our center is to maximize file throughput on
>>our file systems. We are a medium-size Supercomputing Center where compute-
>>and I/O-intensive numerical computation code runs in batch sub-systems.
>>Several programs expect and often generate very large files, on the order
>>of 10-100GB. Minimizing file access time is important in a batch
>>environment, since processors remain allocated and idle while data is
>>shuttled back and forth from the file system.
>>Another common problem is the competition between file cache and
>>computation pages. We definitely do NOT want file cache pages being kept
>>cached while computation pages are reclaimed.
>>As far as I know, the only place in Linux where the VM / file cache
>>behavior can be tuned is the 'bdflush/kupdated' settings. We need a good
>>way to tune the 'bdflush' parameters. I have been trying very hard to find
>>in-depth documentation on this.
>>Unfortunately I have only gleaned some general and abstract advice on the
>>bdflush parameters, mainly in the kernel source documentation tree.
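For reference, on 2.4-series kernels the bdflush settings appear as a single line of nine integers in /proc/sys/vm/bdflush; Documentation/sysctl/vm.txt in the source tree names the first two as nfract (percentage of dirty buffers that activates bdflush) and ndirty (maximum number of buffers written per wakeup). A minimal sketch that parses that line, assuming that layout:

```python
def parse_bdflush(line):
    """Parse the one-line contents of /proc/sys/vm/bdflush (2.4 kernels).

    Only the first two fields are labeled here; see
    Documentation/sysctl/vm.txt in the 2.4 source tree for the full list.
    """
    fields = [int(tok) for tok in line.split()]
    named = {
        "nfract": fields[0],  # % of dirty buffers that wakes bdflush
        "ndirty": fields[1],  # max number of buffers written per wakeup
    }
    return fields, named

# Example use on a live 2.4 system (this file is absent on modern kernels):
# with open("/proc/sys/vm/bdflush") as f:
#     fields, named = parse_bdflush(f.read())
```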
>>For instance, what is a 'buffer'? Is it a fixed-size block (e.g., a VM
>>page), or can it be of any size? This is important, as bdflush uses
>>numbers and percentages of dirty buffers. A small number of large buffers
>>requires more data to get transferred to the disks than a large number of
>>small ones.
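To make the buffer-size question concrete: if the flush limit is expressed as a count of buffers, the buffer size directly scales the data moved per wakeup. A toy calculation (the 500-buffer limit and the two sizes are illustrative, not kernel defaults):

```python
def flush_bytes(ndirty_buffers, buffer_size):
    """Data written per flush wakeup if `ndirty_buffers` buffers are flushed."""
    return ndirty_buffers * buffer_size

# With a count-based limit of, say, 500 buffers per wakeup, the buffer size
# determines how much data one wakeup moves:
small = flush_bytes(500, 512)        # 512 B buffers  -> 256,000 bytes (~250 KB)
large = flush_bytes(500, 16 * 1024)  # 16 KB buffers  -> 8,192,000 bytes (~7.8 MB)
```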
>>Controls that are Needed
>>Ideally we need to:
>>1. Set an upper bound on the number of memory pages ever caching FS data.
>>2. Control the amount of data flushed out to disk in set time periods;
>>we need to be able to match the long-term flushing rate with the service
>>rate that the I/O subsystem is capable of delivering, tolerating possible
>>short spikes. We also need to be able to control the amount of read-ahead
>>and write-behind, or even hint that data are only being streamed through,
>>never to be reused.
>>3. Specify different parameters for 2., above, per file system: we have
>>file systems that are meant to transfer wide stripes of sequential data,
>>vs. systems that need to perform well with smaller-block, random I/O, vs.
>>systems that need to provide access to numerous smaller files. Also, cache
>>limits per file system would be useful.
>>4. Specify, if all else fails, what parts of the FS cache should be flushed.
>>5. Provide in-depth technical documentation on the internal workings of
>>the file system cache and its interaction with the VM.
>>6. We do operate IRIX Origins and IBM Regatta SMPs where all these issues
>>have been addressed to a far more satisfying degree than on Linux. Is the
>>IRIX file system cache going to be ported to Altix Linux? There is already
>>a LOT of experience in IRIX for these types of matters that should NOT
>>remain untapped.
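As an aside on point 2: 2.6-era kernels expose exactly this kind of streaming hint through posix_fadvise(2). A hedged sketch of use-once streaming I/O with it (not available on the 2.4 kernels discussed above; the function and file names here are illustrative):

```python
import os
import tempfile

def stream_write_once(path, data):
    """Write a file while telling the kernel its pages will not be reused,
    so the page cache can drop them instead of crowding out computation pages."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        # Declare sequential access so read-ahead/write-behind can be tuned.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        os.write(fd, data)
        os.fsync(fd)
        # Data is on disk; tell the kernel the cached pages are not needed.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)
    return os.path.getsize(path)
```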
>>Any information, hints, or pointers for in-depth discussion of the bugs
>>and tuning of the VM/FS and I/O subsystems, or other relevant topics,
>>would be appreciated.
>>We are willing to share our experience with anyone who is interested in
>>improving any of the above kernel sub-systems, and to provide feedback
>>with experimental results and insights.
>>Texas A&M University
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/