Re: Hard disk speed - Maybe OT


Thanks for your feedback. I tried to keep the question simple, but I see I need to elaborate!

I have certainly allocated 2x RAM to swap, but note that it is barely used. To repeat the numbers:

              total     used     free   shared  buffers
Mem:         255416   138680   116736        0    90028
Swap:        522072     8384   513688

Only a very small amount of swap is used and there is still 90MB of buffer space. Depending on the weather that buffer space goes up to about 120MB. This will of course all be disk cache.

I am sorry I didn't explain the system well enough. It isn't video editing and it isn't interactive; it is more of a surveillance system. I capture from 6 IP cameras: jpg images at around 5-6 FPS and 640x480. Each compressed jpg is on the order of 50-100K in size.

The current capture uses wget and works surprisingly well. Since I use noclobber I then have to rename the files to timestamped names. (I use an input and an output directory and swap them when the renaming is finished.) This obviously adds processing that has to happen in real time, so I shell/fork out (&) to do it. The renaming process is set to inhibit the wget process if it runs overtime, which it does as the I/O load gets higher. The result is that there are gaps in the capture stream.

The right way is of course to write code that captures and writes directly to a stamped filename. (I am aware that some GPL projects are available that already do this.) That will be the next step if I can't resolve the I/O bottleneck.
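Capturing direct to a stamped filename might look something like the sketch below. Everything in it is hypothetical (camera URL, directories, naming scheme); it just shows wget -O fetching straight to a timestamped name so the separate noclobber/rename pass disappears.

```shell
#!/bin/sh
# Sketch only: the camera URL and directories are made up, not the
# real setup. wget -O writes directly to the name we choose, so no
# rename pass is needed afterwards.
CAM_URL="http://192.168.0.101/snapshot.jpg"   # hypothetical camera
OUTDIR="/ramdisk/cam1"                        # hypothetical ramdisk path

n=0
while :; do
    n=$(( (n + 1) % 10 ))                     # sub-second suffix, since at
                                              # 5-6 FPS a 1 s stamp collides
    stamp=$(date +%Y%m%d-%H%M%S)              # e.g. 20040115-143052
    wget -q -O "$OUTDIR/cam1-$stamp-$n.jpg" "$CAM_URL"
done
```

(GNU date's %N would give real sub-second resolution, but a loop counter avoids depending on it.)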

After capturing I use mjpegtools (mpeg2enc etc.) to create 1 FPS MPEG2 videos from the individual images. (It also does motion detection.) These are used for a rough first pass at catching thieving events, after which the 5 FPS images are checked and/or a 5 FPS MPEG is created for the date/time in question. We have some classic shots of people tucking shop items into their shorts, pants, pockets and even socks! (It's a non-profit "thrift" store.) Whether 1 FPS or 5 FPS, the MPEG videos are between 20 and 50MB in size. (They are of varying time lengths.)
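For reference, the usual mjpegtools pipeline for turning a numbered JPEG sequence into MPEG2 looks roughly like the line below. Filenames are hypothetical and the flags are a sketch from the manual pages, not the author's actual script.

```shell
# Hypothetical filenames. jpeg2yuv decodes the numbered JPEGs into a
# YUV4MPEG stream; -I p marks the frames progressive, and mpeg2enc
# -f 3 selects a generic MPEG-2 profile. Note that MPEG-2 defines only
# a fixed set of legal frame rates, so a true 1 FPS stream may need
# the input rate faked and the playback speed adjusted accordingly.
jpeg2yuv -f 25 -I p -j cam1-%06d.jpg | mpeg2enc -f 3 -o cam1.m2v
```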

So it isn't as badly loaded as you might think. At any one time there are a lot of images on the server, maybe 1-2 million covering 2-3 days. They are broken up by camera to make finds/sorts a little faster, and I think NFS also suffers with a large number of files per directory. I can't do an ls, for example, as shell expansion runs out of space. (I use find and xargs.) During the development (one might say playing) process I hit the MPEG-creation limit first: I needed 30 hours to process a day's data. I then clustered two more machines in and read up on and implemented ways to reduce the CPU grunt required by mpeg2enc. This was all originally at 1 FPS. I now find I have hit an I/O rather than a CPU limit trying to increase the frame rate.
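The "shell expansion runs out of space" failure is the kernel's argument-size limit on exec(); a quick sketch of the find/xargs workaround (directory name hypothetical):

```shell
# "ls /data/cam1/*.jpg" dies with "Argument list too long" once the
# glob expands past the exec() argument limit. find streams the names
# instead, and xargs batches them into runs that fit.
find /data/cam1 -name '*.jpg' | wc -l                  # count the images
find /data/cam1 -name '*.jpg' -print0 | xargs -0 ls -l > /dev/null
```

(-print0/-0 keeps filenames with odd characters from splitting.)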

Apologies for the length.

I don't think RAM will help all that much. The actual I/O rate numbers look like this:

The capture side is maybe 3Mbytes/sec, but that goes to the ramdisk. The moving/renaming will be around the same rate but of course writes to the real RAID0 disk. The 1 FPS MPEG creation for an hour's worth of data per camera takes say 15 minutes. That's about 350MB of reads, or roughly 0.4Mbytes/sec. There are three of these running, plus a bit of other I/O that would on average add maybe an extra 50%. (I have to recreate reference images and that uses "convert".) Say all up around 2Mbytes/sec, in the same ballpark as the capture rate. hdparm -tT for one of the RAID0 devices shows about 20Mbytes/sec of buffered disk reads. Maybe there is a lot more I/O going on than I thought? I'll have to look at the MPEG creation script some more..
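Sanity-checking those estimates with the numbers above (350MB per camera-hour read in 15 minutes, three concurrent encodes, roughly 50% miscellaneous I/O on top):

```shell
# 350 MB read per camera-hour, processed in 15 minutes (900 s):
awk 'BEGIN { printf "%.2f MB/s per encode\n", 350/900 }'
# three concurrent encodes, plus ~50% other I/O on top:
awk 'BEGIN { printf "%.2f MB/s of encode-side reads\n", 3 * (350/900) * 1.5 }'
```

So the encode-side reads come out under 2MB/s, well inside the 20MB/s hdparm figure in isolation. Worth remembering, though, that hdparm -tT measures a single sequential read, while three encoders plus the renamer hitting the same RAID0 set at once is a far more seek-bound workload.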

Would still like to hear your ideas.

Cheers Bob

Nikos Chantziaras wrote:
> With 256Mb RAM, I'm surprised that it will stagger along even without the
> RAM disk!
> What kind of video processing are we talking about here exactly? Videos
> usually mean multi-GB file sizes.