Re: How I built a 2.8TB RAID storage array
From: Folkert Rienstra (see_reply-to_at_myweb.nl)
Date: Thu, 3 Mar 2005 01:04:43 +0100
"Jon Forrest" <firstname.lastname@example.org> wrote in message news:4224E4B9.email@example.com
> John-Paul Stewart wrote:
> > Yes. If you look at the CPUs on RAID cards, they're a lot less
> > powerful than the host CPU (even on the most expensive $1000+ cards).
> That's because, other than performing the XOR operations
> for writes, they don't have to do very much.
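[Not part of the original thread; just an illustration. The XOR work a
RAID-5 card offloads is simple enough to sketch in a few lines of Python,
with made-up toy "blocks" standing in for stripe units:]

```python
def xor_parity(blocks):
    """XOR equal-sized blocks byte-by-byte to produce the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three toy data blocks (real stripe units would be kilobytes each).
data_blocks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
parity = xor_parity(data_blocks)

# XOR is its own inverse: if one block is lost, XORing the surviving
# blocks with the parity block reconstructs it. That is all a degraded
# RAID-5 read has to do.
lost = data_blocks[1]
recovered = xor_parity([data_blocks[0], data_blocks[2], parity])
assert recovered == lost
```

[Which is the poster's point: per write, this is cheap for a modern host
CPU, so the card's slower CPU mostly just needs to keep up with the disks.]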
> > I haven't seen any benchmarks comparing software RAID to hardware
> > RAID where the host CPU was heavily used. They always seem to be done
> > on otherwise unloaded systems. But then, everything I've read agrees with
> > the previous poster's assessment that hardware RAID will win when the
> > host CPU is otherwise occupied.
> Right. Even when a server is busy, satisfying read requests and
> non-RAID-5 requests shouldn't add much to the load. Most of the work is
> done by the intelligence built into the ATA or SCSI electronics on the disk.
> The latency imposed by the movement of the arms and platters dominates
> the latency caused by a busy CPU.
> For a while I was a big fan of those cheap IDE pseudo-RAID 0 and 1
> controllers, but I now realize that they really don't provide much benefit
> compared to just adding more IDE channels, since those controllers do so
> little themselves; the RAID logic lives in the driver. That's one reason
> why you can convert one of those Promise IDE boards into a RAID controller
> by simply adding a resistor.
That was only possible with the original Ultra 66 and 100 boards.
After that they used different PCI IDs for the Ultra and FastTrak boards.