Re: [help] 1 cpu to rule them all
From: David Wright (david_c_wright_at_hotmail.com)
Date: Mon, 12 Jul 2004 11:34:54 +0200
Juhan Leemet wrote:
> On Mon, 12 Jul 2004 00:31:20 +0200, David Wright wrote:
>> Desi Cortez wrote:
>>> Timberwoof wrote:
>>>> Time was when that was a viable way to do things, but nowadays CPUs
>>>> really are cheap enough that every user can have his own...
>>> I don't understand how this 4-user HP machine can be so 'cheap'...
>> It depends on your definition of cheap... Maintaining 100 central
>> application servers in an air-conditioned computer room and keeping the
>> configuration and maintenance in one place is a lot more economical than
>> having 1000 PC's dispersed around an office block...
> How are the users going to access these application servers? They will
> need a small/cheap thin client, i.e. a PC (or workstation?) which is the
> cheapest commodity access point for network computing. You're not going
> to run long cables for screens and/or keyboards. The only thing that makes
> sense to snake around a building is ethernet (these days).
A thin client usually has the display protocol in ROM (X-Windows, or the remote
desktop protocols Citrix and Microsoft offer), a small amount of RAM and no local
storage devices. That also makes it difficult for users to install applications
locally - though this is usually locked down by policy anyway.
> I'm in favor of putting shared services (home directories, shared
> resources, special purpose computing devices) in a "glass house". However,
> I cannot see any advantage in a multi-screen multi-user cluster machine.
> We used to do that kind of thing 20 years ago, back in mainframe or
> minicomputer days, when CPU was expensive and HAD to be shared. Nowadays
> CPU is dirt cheap. PCs are too. Even workstations are not that expensive.
As I've said elsewhere, it isn't the price of the hardware that makes it
expensive or cheap. The cost of a PC or server is often the smallest part
of the equation in a reasonably sized firm - excluding 1-man-bands and
small companies here, who can't afford dedicated technical staff and
usually don't bother with maintenance contracts, which leaves them hanging
when something does fail. It is the ongoing support and maintenance that
costs the money. Having all the "moving" parts centralised keeps the costs
down. Not having CD-ROMs and hard disks out in the workplace means there
is much less to go wrong on the desktop.
>> If you cut the number of physical machines you have, you cut the number
>> of support staff you need to maintain them as well...
> You can make things easier by having lots of machines that are virtually
> identical for "workstations". Put all the customized stuff (e.g. home
> directories) on servers. Lock down the workstations. In an office, no one
> needs "root" for their own workstation. Then they cannot screw things up.
> They can still personalize their own setup, which is in their home dir. If
> they screw that up, they get to buy the sysadmin lunch! 8^)
It isn't the users re-configuring things that causes most problems - yes, you
lock down everything they can f**k up. But whether you have PC's out there
acting as stand-alone machines with network-stored data, or PC's just acting as
X-Terminals, you have the same problems. The hardware can fail, and most
often that is the hard drives getting corrupted or dying. Then you need a
few hours to re-build the system. Your backup policy needs to be expanded
as well; even if users are told to store everything on their home drive,
they have a tendency to store it locally "because it is quicker."
If they have an X-Terminal, they can't store locally. The terminal dies? You
slap a new one on their desk and turn it on, no fuss, no muss. And as they
don't have as many moving parts, they are less likely to fail. The most likely
culprits will be old monitors and coffee in keyboards... A simple, quick swap
and the user is back at work.
> [cut a whole bunch of arguing... I don't really agree with it]
>> With a site of 500 users with 6 mini computers, we had a helpdesk of 2
>> people. With 500 PC's, we had 10-15 support staff running around the
>> building. Plus the hardware maintenance costs go up exponentially with
>> more PC's.
> I think you could manage your PCs better. What O/S? Did users "tinker"?
> The only reason someone should have to "run around" is to do a new
> installation or replace a busted PC with another complete working PC. You
> might have classes of users requiring bigger/smaller PCs, but you
> shouldn't have 1000 totally different configuration PCs.
> Ten years ago, I had a client who ran a banking type application on 500
> OS/2 PCs with about 50 OS/2 servers scattered across Canada with a help
> desk of 4 people in Toronto. Some additional application developers were
> 2nd tier support. They must have had a couple of "hardware guys" but no
> one was "running around" (across Canada?!?). We did diagnostics and
> software updates (including versions of O/S components, such as Comms
> Mgr) "across the wire". It took some special scripting and a small test
> lab to make absolutely sure that the update package was good and reliable.
> Never lost a machine as "vegetable on a wire" while I was there (1+ years
> of updates!). The next crowd got cocky/slack and "blew up" a server tho.
> I believe one could do the same thing with Solaris or Linux. Dunno (and
> don't care) about Windoze. I don't think we need bizarre H/W
Yes, they did have standardised hardware, although not all 500 were the
same: the original supplier went bust, so as machines were upgraded they
were replaced by machines from a second supplier. It wasn't that important,
though - the difference in the actual machines didn't make much odds on
the support front.
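And yes, pushing updates "across the wire" with a test lab behind you is
exactly the right way to do it, and it is still how I'd do it today. Something
like the rough sketch below captures the idea - Python and ssh/scp standing in
for whatever tooling you actually used, with made-up hostnames, package name
and install command:

import hashlib
import subprocess

# Checksum recorded when the package was signed off in the test lab.
EXPECTED_SHA1 = "0123456789abcdef0123456789abcdef01234567"
PACKAGE = "updates/commsmgr-1.2.pkg"
HOSTS = ["branch-001", "branch-002", "branch-003"]   # hypothetical names


def sha1_of(path):
    """Return the SHA-1 hex digest of a local file."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def push_and_verify(host):
    """Copy the package to a host, re-checksum it there, and install it
    only if the remote copy matches what the test lab approved."""
    subprocess.run(["scp", PACKAGE, f"{host}:/tmp/update.pkg"], check=True)
    remote = subprocess.run(
        ["ssh", host, "sha1sum /tmp/update.pkg"],
        check=True, capture_output=True, text=True,
    ).stdout.split()[0]
    if remote != EXPECTED_SHA1:
        return f"{host}: checksum mismatch, not installed"
    # "installer" here is a placeholder for whatever applies the package.
    subprocess.run(["ssh", host, "installer /tmp/update.pkg"], check=True)
    return f"{host}: updated"


if __name__ == "__main__":
    if sha1_of(PACKAGE) != EXPECTED_SHA1:
        raise SystemExit("local package does not match the lab-approved build")
    for host in HOSTS:
        print(push_and_verify(host))

The transport doesn't matter much; what matters is never installing anything
you can't prove is identical to what the lab signed off.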
Yes, some of the running around was moving PC's between offices and
installing new machines, but a lot of hardware breaks, and the more
breakable hardware you install in the field, the more time you have to run
around behind it - and the more data you are likely to lose, even with a
good backup policy. Lose a disk on a desktop and you lose productivity, and
possibly data; lose a disk in a RAID array or SAN and you just swap it out
and carry on.
Most sites are Windows, so you have to care about Windows support - at least
we did. With the number of viruses, worms and trojans going round, cleaning
and recovering machines takes a lot of work, plus crashing machines taking
out the installation, corrupting the disks etc. And as a 3rd-party support
supplier, you don't get to decide what OS or hardware the customer uses; you
just have to support it.
I stick by my argument: there are cases for server-based computing, whether
it be under Windows, *nix or whatever. As I said earlier, the hardware cost
is negligible compared with the total running costs. A couple of hundred
bucks saved on the purchase price can add up to thousands wasted over the
life of the product if it is unreliable.
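To put rough numbers on that - a back-of-the-envelope sketch, with every
figure invented purely for illustration:

# All numbers below are made up, purely to show the arithmetic;
# plug in your own figures.
cheap_box_saving = 200        # purchase price saved per machine ($)
failures_per_year = 0.5       # extra failures per year from the cheaper kit
repair_visit_cost = 150       # technician call-out per failure ($)
lost_hours_per_failure = 8    # user idle / rebuild time per failure
loaded_hourly_rate = 50       # cost of that lost time ($/hour)
service_life_years = 4

extra_cost = service_life_years * failures_per_year * (
    repair_visit_cost + lost_hours_per_failure * loaded_hourly_rate)

print(f"saved up front:        ${cheap_box_saving}")
print(f"extra cost over {service_life_years} years: ${extra_cost:.0f}")
# With these made-up figures the $200 "saving" costs about $1100 in
# support time and lost work, before counting any lost data.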