Re: Which partitioning scheme gives best performance?
From: Floyd L. Davidson (floyd_at_barrow.com)
Date: Thu, 24 Jun 2004 20:33:59 -0800
Nick Landsberg <firstname.lastname@example.org> wrote:
>Floyd L. Davidson wrote:
>> Nick Landsberg <email@example.com> wrote:
>>>In an attempt to put some mathematics into the mix (rather
>>>than just opinion) -
>>>Given a particular piece of hardware (PC, Workstation, etc.)
>>>let us assume:
>>>1 - all *critical* data resides on two machines (because that's
>>>the purpose of having two machines in this case, isn't it?).
>> It *should* be, but that is not necessarily the way people do
>> it. The system described with NFS mounted filesystems did not
>> have that characteristic. It split essentials between two
>> systems, thus requiring *both* to be available for real work.
>Your points above (and below) are well taken, Floyd.
>I was not trying to play sides, just injecting simple
>arithmetic into the discussion.
Yes, exactly. On the other hand, I *am* taking sides and
being an advocate! :-)
I really appreciated your running the numbers (I'm not much into
statistics), but since you didn't take sides, I figure it should
be put into perspective.
>> The original description had the "server" holding the /home
>> directory and other data, which is NFS mounted on the
>> "workstation". As described, if either machine fails in almost
>> any way (or if the ethernet link fails) then *both* machines are
>> basically useless until the other is fixed.
>> The justification was that the down time was so low that it
>> simply was not a problem. That logic is impossible to debate
>> because, like beauty, it is in the eye of the beholder.
>And I tried to inject a bit of simple arithmetic to take it
>out of the "eye of the beholder" category.
Impossible. The arithmetic categorizes it very well, but for some
people 99.7% reliability is fabulous, while others look at 99.997%
as barely something they can live with.
(My background is with long distance telecommunications,
including administration of toll switching systems. Circuit
reliability at 99.997% wouldn't get anybody a bonus... :-)
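To put those percentages in concrete terms, here is a quick
back-of-the-envelope sketch in Python; the percentages are just the
ones mentioned above, nothing measured:

HOURS_PER_YEAR = 365 * 24   # 8760

def downtime_hours(availability_percent):
    # expected unavailable hours per year at a given availability
    return HOURS_PER_YEAR * (1.0 - availability_percent / 100.0)

for pct in (99.7, 99.997):
    print("%.3f%% available -> about %.2f hours down per year"
          % (pct, downtime_hours(pct)))

# 99.7% works out to roughly 26 hours a year; 99.997% to about
# 16 minutes a year.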
>> I said it doubled the risk, but if you run the numbers you'll
>> see that isn't precisely true. The number of failures doubles,
>> but the down time, calculated the way you've done above, will
>> no doubt be significantly more than double.
>Hmmm... the Markov models don't say so, as far as I recall,
>but it's too late in the evening to go and do arithmetic
>with this ancient gray matter. Let's just say that, without
>redundancy of critical data, the failure rate is roughly
>double the failure rate of a single system. In the case
I'll take your word for all of that!! (Early in the morning
is too late in the day for me on that topic.)
>of my example, the total down time for the year would
>be roughly 48 hours. To some people this is chicken-feed,
>to our customers, this is unacceptable. (To our customers
>the loss of 5 minutes worth of billing data is unacceptable,
>but that's a whole 'nother can of worms.)
Exactly! (A down time of 48 hours in one year for a toll
switching system would be a major disaster! It would be cause
for serious notoriety, of the wrong kind...)
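For what it's worth, the 48-hour figure falls out of very simple
arithmetic. Here is a rough sketch, assuming two identical,
independent machines at roughly 24 hours of down time a year each
(an assumption for illustration only):

HOURS_PER_YEAR = 365 * 24

# one machine, assumed to be down about 24 hours a year
p_down = 24.0 / HOURS_PER_YEAR
p_up = 1.0 - p_down

# Real work needs *both* machines (the NFS-split case): the pair
# is only "up" when both are up, so the up probabilities multiply.
serial_down_hours = HOURS_PER_YEAR * (1.0 - p_up * p_up)

# True redundancy (either machine alone will do): the pair is down
# only when both happen to be down at the same time.
redundant_down_hours = HOURS_PER_YEAR * (p_down * p_down)

print("both required: about %.1f hours down per year" % serial_down_hours)
print("either will do: about %.2f hours down per year" % redundant_down_hours)

# The first number comes out just under 48 hours, roughly double
# the single machine; the second is only a few minutes a year.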
>>>at exactly the same time goes way down. (Unless, of
>>>course, you bought all your disk drives from the
>>>same manufacturer and they were from the same
>>>"batch", manufactured at the same time, in which case
>>>they tend to have "clustered failures", but that's a
>>>different computation. :)
>> Not funny! A few years ago I purchased 3 different hard disks
>> made by one manufacturer. Two were identical, the third was a
>> different model but the same basic technology. *All* *three*
>> died within one month of each other about 18 months later.
>Only 13,000 hours MTBF? Were these Seagate drives by any
Might have been, though they weren't labeled as such. Who
bought out Seagate? Western Digital... ???
This was about 5-6 years ago. It was two 8 GB drives and
a 6 GB drive. I did discover that not too long after I
bought the pair of 8Gb drives they were all pulled from
the shelves at Office Max, which is where I got them. I
don't recall now where I bought the other drive.
>chance? (We had a disk farm with about 100 drives 10
>or so years ago on a large system. About 70 or so failed
>within a month or two of the others. Our customer
>was not a happy camper.) Sorry if I hit a sore spot in
However, this *does* make a very good point for this discussion.
A few years ago it was hard disks. Maybe next week it will be
CDROMs or motherboards or disks again. Shit happens, and
denying it is asking for trouble, even on a home system.
>the above, but these things are known phenomena, and the
>experts at computing reliability in our shop (I'm not
>one of them) actually take this into account before
>shipping systems to a customer (they specify the number
>of spares to keep on hand).
Exactly. And it is *not* based on price. Rather, the total
cost is based on the reliability requirements. That is the
opposite of what most of us do at home... juggling what we
know of reliability against the prices for quality and our
perceived needs (and that spells greed for some functionality
in most cases), we take risks, or not, very haphazardly. A
large company simply cannot afford to guess, though, and it
is well worth the cost of hiring experts who can compute
those risks.
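For the home version of that spares calculation, a crude sketch
goes something like this (all the numbers are made up for
illustration, and it assumes independent failures at a constant
rate of 1/MTBF, which the bad-batch story above shows is not
always true):

fleet_size = 100         # drives in the farm (assumed)
mtbf_hours = 500000      # the vendor's claimed MTBF (assumed)
period_hours = 365 * 24  # one year

expected_failures = fleet_size * period_hours / float(mtbf_hours)
print("expect about %.1f drive failures per year" % expected_failures)

# A real reliability engineer would pad that with a confidence
# margin (Poisson statistics) before deciding how many spares
# to stock.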
--
Floyd L. Davidson  <http://web.newsguy.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska)  firstname.lastname@example.org