Hi All,
I had installed CentOS 6.6 on sda; the RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they were mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything works fine if the mount points are selected carefully during OS reinstallation and only the right partitions are formatted.
Then I thought of reinstalling the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the installation, intending to set them up manually afterwards; I only selected /, /others, and /var to be formatted. When the system booted, I ran "cat /proc/mdstat", and the array had come up as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
52396032 blocks super 1.2 [2/2] [UU]
unused devices: <none>
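From what I have read, the md127 name usually appears when an array is auto-assembled at boot without a matching entry in /etc/mdadm.conf (which my reinstall would have wiped out). If that is right, I guess an entry like the following would pin the name to md0 — note the UUID below is only a placeholder, not my real array's:

```shell
# Sketch of an /etc/mdadm.conf entry; the UUID is a placeholder, not my array's.
# The real line can be generated with: mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```

Please correct me if this is not the actual cause.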
So now I want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it; it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?
I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 3.5G 11G 26% /
tmpfs 376M 0 376M 0% /dev/shm
/dev/sda2 4.7G 9.8M 4.5G 1% /others
/dev/sda3 2.9G 133M 2.6G 5% /var
But the LVM setup is still active:
Code:
# pvdisplay
--- Physical volume ---
PV Name /dev/md127
VG Name data
PV Size 49.97 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 12791
Free PE 5111
Allocated PE 7680
PV UUID IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
# vgdisplay
--- Volume group ---
VG Name data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 19
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 49.96 GiB
PE Size 4.00 MiB
Total PE 12791
Alloc PE / Size 7680 / 30.00 GiB
Free PE / Size 5111 / 19.96 GiB
VG UUID 982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
# lvdisplay
--- Logical volume ---
LV Path /dev/data/home
LV Name home
VG Name data
LV UUID OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
LV Write Access read/write
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/data/backup
LV Name backup
VG Name data
LV UUID Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
LV Write Access read/write
LV Status available
# open 0
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
Since LVM is active on /dev/md127, mdadm is not allowed to stop the array. As I am new to RAID/LVM, I would appreciate your kind help to deactivate the LVM setup without any data loss, restart the RAID array as /dev/md0, and then reactivate the LVM setup.
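Piecing things together from the mdadm and LVM man pages, the sequence I am thinking of trying is below — please tell me if it is wrong or unsafe before I run it on my data:

```shell
# Deactivate the volume group so nothing holds /dev/md127 open
vgchange -an data

# With the VG inactive, the array should now stop cleanly
mdadm --stop /dev/md127

# Reassemble the same member disks under the name I want
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

# Record the array in mdadm.conf so it keeps the md0 name across reboots
mdadm --detail --scan >> /etc/mdadm.conf

# Reactivate the volume group and its logical volumes
vgchange -ay data
```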
Thanks in advance for your kind reply.