Re: DANGER!!! Problems with 10.04 installer (RAID devices *will* get corrupted)

Somehow, my reply to Alvin's original post ended up tacked on to the
spinoff thread... so here it is, hopefully attached to the correct
thread. (I blame Gmail's wonky thread handling.)

Long reply below:

On Wed, Apr 21, 2010 at 00:30, Alvin Thompson <alvin@xxxxxxxxxxxxxxxxx> wrote:
> Long story short: the only way to be safe right now is to physically
> remove drives with important data during the install.
>
> I figured out the cause of my RAID problems, and it's a problem with
> Ubuntu's installer. This will cost people their data if not fixed.
> Sorry about the length of this post, but the problem takes a while to
> explain.

FWIW, this is what I just went through, step by step, to try to
recreate a loss of data on an existing software RAID array:

1: Installed a fresh Karmic system on a single disk with three partitions:
/dev/sda1 = /
/dev/sda2 = /data
/dev/sda3 = swap

All three were primary partitions.

2: After installing 9.10, I created some test "important data" by
copying the contents of /etc into /data.
3: For science, rebooted and verified that /data automounted and the
"important data" was still there.
4: Shut the system down and added two disks. Rebooted the system.
5: Moved the contents of /data to /home/myuser/holding/
6: Created partitions on /dev/sdb and /dev/sdc (the two new disks, one
partition each).
7: Installed mdadm, xfsprogs, and xfsdump.
8: Created /dev/md0 with mdadm using /dev/sda2, /dev/sdb1, and
/dev/sdc1 in a RAID5 array (the commands for steps 6-11 are sketched
just after this list).
9: Formatted the new RAID device as xfs.
10: Configured mdadm.conf and fstab to start and automount the new
array at /data at boot time.
11: Mounted /data (my new RAID5 array) and moved the contents of
/home/myuser/holding to /data (essentially moving the "important data"
that used to reside on /dev/sda2 to the new RAID5 array).
12: Rebooted the system and verified that A: RAID started, B: /data
(md0) mounted, and C: my data was there.
13: Rebooted the system from the Lucid install media.
14: Installed Lucid, choosing manual partitioning as you described.
**Note: the partitioner showed all partitions, but did NOT show the
RAID partitions as ext4.
15: Configured the partitioner so that / was installed to /dev/sda1
and the original swap partition was used. DID NOT DO ANYTHING with the
RAID partitions.
16: Installed. The installer showed formatting only /dev/sda1 as ext4,
just as I'd specified.
17: Booted the newly installed Lucid system.
18: Checked with fdisk -l and saw that all RAID partitions still
showed as "Linux raid autodetect".
19: mdadm.conf was autoconfigured and showed md0 present.
20: Edited fstab to add the md0 entry again so it would mount at /data.
21: Did an mdadm --assemble --scan and waited for the array to rebuild
(see the second sketch below).
22: After the rebuild/re-assembly was complete, mounted /data (md0).
23: Verified that all the "important data" was still there, in my
array, on my newly installed Lucid system.
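
For reference, steps 6 through 11 boil down to commands along these
lines. This is a sketch from memory rather than a paste of my shell
history, and the device names obviously match my particular setup:

    # Partition the two new disks (one primary partition each,
    # type fd, "Linux raid autodetect")
    sudo fdisk /dev/sdb    # n, p, 1, accept defaults, t, fd, w
    sudo fdisk /dev/sdc    # same sequence

    # Install the tools
    sudo apt-get install mdadm xfsprogs xfsdump

    # Create the RAID5 array from the three partitions
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/sda2 /dev/sdb1 /dev/sdc1

    # Format it as xfs
    sudo mkfs.xfs /dev/md0

    # Record the array in mdadm.conf so it starts at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

    # fstab entry so the array mounts at /data
    echo '/dev/md0  /data  xfs  defaults  0  2' | sudo tee -a /etc/fstab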

The only thing I noticed was that when I did the assembly, the array
started degraded, with sda2 and sdb1 active and sdc1 marked as a spare
while the rebuild was in progress.

Only once the rebuild was done did I mount the array and verify my
data was still present.
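
The checking and re-assembly in steps 18 through 22 amounted to
roughly the following (again a from-memory sketch, with my device
names):

    # Confirm the partition types survived the install
    sudo fdisk -l

    # Re-assemble the array and watch the rebuild
    sudo mdadm --assemble --scan
    cat /proc/mdstat               # showed sdc1 rebuilding
    sudo mdadm --detail /dev/md0   # array state, active vs. spare disks

    # Once the rebuild finished, mount and check the data
    sudo mount /data
    ls /data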

So... what did I miss in recreating this failure?
