Question About Setting Up A RAID Array Using Mdadm

Hi All;
I am new to Linux and Ubuntu. I am setting up a RAID for a media server and want to be sure I do it right and in the proper order.

Based upon feedback received on this forum, I think it makes the most sense to partition my 4 TB disks into two partitions of 2 TB each.

So am I correct in running mdadm first on both (unpartitioned) 4 TB disks to create the RAID 1 array, and then partitioning (using parted) the resulting single 4 TB device that mdadm creates into two 2 TB partitions? Thanks.

Tim
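
For reference, here is a minimal sketch of that order of operations, assuming the two blank 4 TB disks show up as /dev/sdb and /dev/sdc (the device names are placeholders; adjust them for your system):
Code:
# create the RAID 1 array from the two whole (unpartitioned) disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# then partition the resulting md device into two roughly 2 TB halves
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart primary ext4 0% 50%
parted -s /dev/md0 mkpart primary ext4 50% 100%

# finally, create filesystems on the two new partitions
mkfs.ext4 /dev/md0p1
mkfs.ext4 /dev/md0p2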


Similar Content



Question About RAID Disk

Hello All;
I was able to create a RAID 1 array using two 4 TB drives, but the resulting array (md0) is listed as "4.0 TB Unknown" in the Disks GUI.

I am wondering whether it is correct to leave it as-is, or whether I have to set a partition/filesystem type (ext4?) on it for it to work properly?

Thanks.

Tim
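
For reference, the new array has no partition table or filesystem until you create one, which is typically why Disks shows it as "Unknown". A minimal sketch of the two usual options, assuming the array is /dev/md0:
Code:
# option 1: put a filesystem directly on the array (no partition table)
mkfs.ext4 /dev/md0

# option 2: partition the array first, then format the partition
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart primary ext4 0% 100%
mkfs.ext4 /dev/md0p1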

Issues With RAID - Created As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have it specified to set up the RAID as /dev/md0. But every time I reboot, my /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.

Does anyone have any ideas?
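
For what it's worth, on Debian the array name used at boot comes from the copy of mdadm.conf stored inside the initramfs, so one commonly suggested step (a sketch, not a guaranteed fix) is to rebuild the initramfs after editing the config:
Code:
# with the desired ARRAY line already in /etc/mdadm/mdadm.conf,
# rebuild the initramfs so the early-boot copy of the config matches
update-initramfs -u

# then reboot and check whether the array assembles as md0
cat /proc/mdstat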

Creating RAID Array With Mdadm Question

Hi all,

Question for all: what's the difference between creating an array with mdadm using partitions (/dev/sdb1, /dev/sdc1, /dev/sdd1, etc.) versus whole, unpartitioned disks (/dev/sdb, /dev/sdc, /dev/sdd, etc.)? What are the benefits? A performance boost?

Thanks,
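
For reference, the two forms being compared look like this (a sketch; the device names and RAID level are placeholders):
Code:
# members are partitions, so each disk carries a partition table
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# members are whole disks, with no partition table on them at all
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd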

Are Multiple Partitions Under /var Along With A Partition For /var Itself Useful?

I'm setting up a new Linux server and trying to get the best performance from it, and also to make sure that if any one directory fills up, I won't have to worry about the whole OS.

I want to split up /var into various partitions, such as the following:

/var/cache

/var/log

/var/software_install

Will there be a problem with having these separate partitions and still having a separate partition for /var too?

Or should I plan on just a single separate partition for /var only?

thanks
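
For illustration, a layout like that would end up in /etc/fstab roughly as follows (a sketch; the device names are placeholders):
Code:
# /etc/fstab (excerpt) - parent /var must be listed before its children
/dev/sda5   /var                    ext4   defaults   0 2
/dev/sda6   /var/cache              ext4   defaults   0 2
/dev/sda7   /var/log                ext4   defaults   0 2
/dev/sda8   /var/software_install   ext4   defaults   0 2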

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a state of "clean, degraded", while md1 is active in auto-read-only mode and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
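
For context, the usual repair for this situation (a sketch only; /dev/sdb1 is assumed to be the missing member of md0, and the disk should be health-checked before re-adding it) is:
Code:
# check the health of the absent disk first (requires smartmontools)
smartctl -a /dev/sdb

# re-add the missing partition to the degraded array and watch it resync
mdadm /dev/md0 --add /dev/sdb1
watch cat /proc/mdstat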

MDADM: How To Mirror Installed LM 17 Disk

Computer: i5, 8 GB RAM, 2x 500 GB hard drives. One 500 GB hard drive, /dev/sda, has Linux Mint Rebecca installed with a 20 GB boot partition, a 470 GB ext4 /home partition, and 10 GB of swap. I want to add the other 500 GB hard drive, /dev/sdb, to create a RAID 1 with the old one. Is it possible, and what are the commands?
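
It is possible, but it means converting a disk that is in use, so a full backup first is strongly advised. The usual outline (a sketch only; the partition numbers are placeholders) is to build a degraded RAID 1 on the new disk, copy the data over, and then add the old disk's partition as the second member:
Code:
# copy the partition layout from the old disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb

# create a degraded RAID 1 on the new disk, leaving a slot for the old one
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2

# after copying the data onto /dev/md0 and updating fstab/grub,
# add the old partition as the second mirror member
mdadm /dev/md0 --add /dev/sda2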

Mdadm: Do You Still Have To Set Fdisk Types

If you want to set up a RAID 1 system, will modern mdadm programs on Debian/Ubuntu automatically set the partition type to 0xFD (Linux RAID autodetect) on a new/zeroed disk?

Fred.
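
If it turns out you do need to set the type yourself, a minimal sketch (with /dev/sdb as a placeholder disk) is:
Code:
# interactively with fdisk: press t, enter the type fd, then w to write
fdisk /dev/sdb

# or non-interactively with parted, flag partition 1 as a RAID member
parted -s /dev/sdb set 1 raid on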

Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I reinstalled the OS. During the first reinstallation, I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly. There was also no data loss on the RAID/LVM partitions. So I confirmed that everything will be fine if the mount points are chosen carefully during OS reinstallation and only the right partitions are formatted.

Now I thought of reinstalling the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the reinstallation, intending to set them up manually afterwards. So I selected only the /, /others, and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array had come up as /dev/md127 (read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it, as it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, it will not let me stop the /dev/md127 RAID array. As I am new to RAID/LVM, I would appreciate your help in making the LVM inactive without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup.
Expecting your kind reply. Thanks.
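
A sketch of the usual sequence for this situation, assuming the volume group is named data as in the output above and that nothing from it is mounted:
Code:
# deactivate the volume group so nothing holds /dev/md127 open
vgchange -an data

# stop the array and re-assemble it under the desired name
mdadm --stop /dev/md127
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

# record the array in mdadm.conf so it keeps that name on reboot
mdadm --detail --scan >> /etc/mdadm.conf

# reactivate the volume group and its logical volumes
vgchange -ay data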

Is LVM/btrfs/ZFS "pooling" Worth The Trouble?

I've long wanted to delve into these methods of HD manipulation, but here's the thing: I only have one hard drive, a 1 TB, and the more I read, the more it seems the main point of using these techniques is to use that extra layer of abstraction to bridge HDDs in some version of a RAID setup.

Of course I've also read the performance is better, along with snapshot capability, on-the-fly partition resizing, striping, etc. These prospects excite me. So finally, two questions:

1) With just one physical drive, is it worth creating a new partition table to include these technologies?
2) With all of the above methods, there is no way I can keep the data on any part of this PV if I want to venture into LVM, ZFS or Btrfs, correct?

P.S. I've got 12 partitions: one swap, one extended, and one very large logical partition that serves as a store of media, documents, music, etc. for the 9 Linux distros which fill the remaining partitions.

Ddrescue RAID

Hello,
I have a server with hardware RAID, so both drives show up as a single volume.
If I create a clone of this with ddrescue to a single disk, then ddrescue it back, will it still be bootable? I did attempt to boot the single disk on another computer, but it failed, so I'm a bit concerned about whether my clone is any good.
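
For reference, a minimal ddrescue round trip looks like this (a sketch; /dev/sda stands for the hardware RAID volume, /dev/sdb for the single target disk, and the map file names are placeholders):
Code:
# clone the RAID volume to the single disk, keeping a map file for resuming
ddrescue -f /dev/sda /dev/sdb rescue.map

# later, clone it back onto the (repaired) RAID volume
ddrescue -f /dev/sdb /dev/sda restore.map

Note that booting the cloned single disk on a different computer can fail for reasons unrelated to the clone itself (different controller, firmware, or bootloader setup), so that test alone does not necessarily mean the image is bad.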