RE: Problem with Raid Array persistence across reboots.

-----Original Message-----
From: Damon L. Chesser [mailto:damon@xxxxxxxxxx]
Sent: 25 August 2006 14:56
To: Chandler, Alan
Cc: debian-user@xxxxxxxxxxxxxxxx
Subject: Re: Problem with Raid Array persistence across reboots.

Chandler, Alan wrote:
[Apologies if this has already been sent. My home computer system seems
to be falling around my ears as I have changed all the disks around,
and I desperately need to ask the question below and get an answer, so
I can rebuild my desktop system and thus release some disks acting as
backup on my server. I was trying to send this via sqwebmail from my
home server, but it died in the process and my ssh session terminated
and could not be re-established. I think I have probably run out of
disk space, because until I do release the space I am running with a
very restricted root filesystem. Until I get home I can't fix it, but I
want to ask this question quickly to get answers to start me on my way.]


I created a raid array with mdadm, thus

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]4

and then turned /dev/md0 into a LVM physical volume, volume group and
some logical volumes.
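For reference, the sequence described above would look something like
this (the volume group and logical volume names here are illustrative,
not taken from the original setup):

```shell
# Create the mirror (note: --create takes the array device as an
# argument, not as part of the option itself)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4

# Turn /dev/md0 into an LVM physical volume, then build a volume
# group and a logical volume on top of it
pvcreate /dev/md0
vgcreate vg_data /dev/md0            # "vg_data" is an illustrative name
lvcreate -L 50G -n lv_home vg_data   # "lv_home" and the size likewise
```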

This worked great until I rebooted, at which point the start-up
scripts failed to recreate the raid array, and I got into tricky
problems with duplicate LVM PVs with the same UUID. [And ironically,
since I used raid to avoid it, some data loss - although fortunately I
DO have backups.]

Two questions:

1) In the Debian world, how do you make raid arrays persistent across
reboots?

[It appears that Debian does not use raidtools and /etc/raidtab as the
linux raid howto says.]
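A sketch of the usual Debian answer (assuming the mdadm package is
installed): record the array in /etc/mdadm/mdadm.conf so the init
scripts can assemble it at boot. The UUID below is a placeholder.

```shell
# Append the definition of every currently running array to mdadm's
# config file, so the boot scripts know what to assemble
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# This adds a line of roughly this form:
#   ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```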

2) If I do manage to create the array, what stops vgscan during LVM
startup from picking up three physical volumes (/dev/md0, /dev/sda4 and
/dev/sdb4) with the same UUID, rather than finding only /dev/md0?
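One common way to keep vgscan away from the raw raid members is a
device filter in /etc/lvm/lvm.conf; a sketch (the exact regexes depend
on your device names, and the filter shown here is an assumption, not
the poster's actual config):

```shell
# /etc/lvm/lvm.conf (fragment)
# Accept md devices, reject the underlying sd[ab]4 members, accept
# everything else. Patterns are tried in order; first match wins.
devices {
    filter = [ "a|^/dev/md.*|", "r|^/dev/sd[ab]4$|", "a|.*|" ]
}
```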

(Sent from work e-mail )



I do not use SATA (I am soooo 20th century) and would follow Clive's
advice there. I have played with raid arrays and wrote a down-and-dirty
step-by-step on how I did it AND copied the data from /boot and mirrored
that as well, complete with dual grub installs onto both parts of the
mirror. I hope this will help you:


I looked at your web site and read through your instructions, which
concentrate mainly on installing grub. Unfortunately that doesn't apply
to me because I am booting off a single IDE drive (hda), which has its
own 32MB /boot partition on hda1 and a root partition on hda3 (hda2 is
swap). This is because I can't seem (yet) to tell my motherboard to
boot off a SATA drive.

So the two SATA devices on which I have constructed the raid array do
not need the raid array assembled during boot - nor should they need
raid or SATA modules available in the initramfs. Also, just to be
clear, I AM NOT USING THE SOFTWARE RAID on the SATA interface card. I
just want to create a partition on which I can store data that I would
particularly like not to lose due to a single device failure (I do also
plan to take backups!).

What I do take from the responses so far is that you SHOULDN'T HAVE TO
specify how the raid array is built; mdadm should just assemble it from
(?) the md superblock it wrote on each partition - a partition scan
thing?
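If that is right, it should be possible to confirm it by hand; a
sketch of the usual checks, using the device names from this thread:

```shell
# Show the md superblock that mdadm wrote on each raid member -
# if these print sensible array details, the metadata is there
mdadm --examine /dev/sda4
mdadm --examine /dev/sdb4

# Ask mdadm to scan all partitions and assemble any arrays it finds
mdadm --assemble --scan
```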

What is clear in my case is that this isn't happening, and I don't know
why. Clive's comment about loaded modules sounds the most promising
avenue for debugging.
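A quick way to pursue that avenue (module names assumed for a 2.6
kernel; yours may differ):

```shell
# Check whether the raid personality modules are loaded
lsmod | grep -E 'raid1|md_mod'

# Load the mirror personality by hand if it is missing
modprobe raid1

# Look for kernel messages about md autodetection and assembly
dmesg | grep -i md
```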