Re: Dual hosting SCSI devices

Alan Adams wrote:
In message <enodmg$k9b$1@xxxxxxxxxxxxxxxxxx>
Juha Laiho <Juha.Laiho@xxxxxx> wrote:

none <""dada\"@(none)"> said:

Does anyone know if dual hosting SCSI drives is supported under linux ?

Yes, but it depends on your SCSI adapters. My answer conflicts with Mike's
here, but I claim that no special drive hardware is required. The
requirement is that your adapters must allow you to specify the SCSI ID
of the adapter itself. Adapters typically sit at ID 7, but you must
change one of them, because you'll be connecting two adapters to a
single SCSI bus. (A SCSI bus is a single, unbranched set of signal
channels running between two bus terminators. Different SCSI
implementations place different limits on bus length and on the
specifics of termination, as well as on the maximum number of devices
allowed on a single bus.)

However, if you're serious about building HA, then special (intelligent)
drive arrays do become a requirement. The seriousness comes from not
trusting your SCSI adapters (or bus elements such as cabling). At that
point you need multiple SCSI buses to a single set of disks, and that is
something you cannot do without an array containing its own controller
(in other words, you cannot directly connect a single drive to two
separate SCSI buses).

This is a common VMS cluster configuration. There is a requirement that each drive on the bus support "tagged command queueing". I don't know whether that is a basic requirement of sharing the bus or something specific to the design of VMS clustering.

In this configuration there also has to be a network connection between the hosts (Ethernet, Fibre Channel, T1 or something), because the lock-management traffic and general cluster housekeeping don't run over SCSI.

If either connection is interrupted, one of the hosts will bugcheck and reboot to maintain "cluster sanity". (I have made use of that to reboot a totally frozen cluster member.)

Thanks Adam,

Yes, I have experience doing this (almost 20 years ago) with VAX workstations and VMS. There was an (open source) SCSI driver that you needed in order to do it with SCSI rather than DSSI or RA type drives (you needed a cluster interconnect (CI?) to use DSSI or RA, if I recall correctly).
Someone mentioned ocfs2; I will look into it. Also, I was expecting to use a dedicated Ethernet segment, not my LAN, to try this. I am looking for a starting point. I am surprised that there seems to be so little Linux application of this type of scenario.
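As a starting point for the ocfs2 route, the cluster membership is described in /etc/ocfs2/cluster.conf on each host. The sketch below is purely illustrative: the cluster name, node names, IP addresses, and the idea of running it over a dedicated Ethernet segment are assumptions, not a tested configuration; check your distribution's ocfs2 documentation for the exact format it expects.

```
# /etc/ocfs2/cluster.conf -- hypothetical two-node example
cluster:
	node_count = 2
	name = scsicluster

node:
	ip_port = 7777
	ip_address = 10.0.0.1
	number = 0
	name = hosta
	cluster = scsicluster

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = hostb
	cluster = scsicluster
```

With the o2cb cluster stack brought online on both hosts, the shared device would (if I have this right) be formatted once with mkfs.ocfs2, using -N to reserve a slot per node, and then mounted with "mount -t ocfs2" on each host; the device path for the shared disk is again an assumption.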

(I actually still have 2 VMS boxes on the shelf; maybe when I get things cleaned up I'll get them set up and use them to serve my disks instead :)

Thanks to all for responses !
Have a good weekend !