Creating RAID Array With Mdadm Question

Hi all,

Question for all: what's the difference between creating an array with mdadm using partitions (/dev/sdb1, /dev/sdc1, /dev/sdd1, etc.) versus whole, unpartitioned disks (/dev/sdb, /dev/sdc, /dev/sdd, etc.)? What are the benefits of one over the other? Is there a performance boost?
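
To make the question concrete, here are the two forms I mean (just a sketch; the device names and RAID level are only examples):
Code:
# whole-disk members
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# versus partition members (one partition spanning each disk)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1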

Thanks,


Similar Content



Question About Setting Up A RAID Array Using Mdadm

Hi All;
I am new to Linux and Ubuntu. I am setting up a RAID for a media server and want to be sure I do it right and in the proper order.

Based upon feedback received on this forum I think it makes most sense to partition my 4 TB disks into 2 partitions of 2 TB each.

So am I correct in running mdadm first on both unpartitioned 4 TB disks to create the RAID 1, and then partitioning (using parted) the resulting single 4 TB device that mdadm creates into two 2 TB partitions? Thanks.
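
Just so the order I have in mind is concrete, here is a sketch of the plan (not something I have run yet; the device names are assumptions):
Code:
# 1) mirror the two whole disks first
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# 2) then partition the resulting md device into two roughly 2 TB pieces
#    (the partitions should appear as /dev/md0p1 and /dev/md0p2)
sudo parted /dev/md0 mklabel gpt
sudo parted /dev/md0 mkpart primary 0% 50%
sudo parted /dev/md0 mkpart primary 50% 100%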

Tim

Issues With RAID - Creating As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have it specified to set up the RAID as /dev/md0. But every time I reboot, my /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.
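
For reference, this is the ARRAY line I get from the scan output once the device path is changed to /dev/md0, plus the one step I have not tried yet (an assumption on my part: Debian's initramfs keeps its own copy of mdadm.conf, so it would need regenerating after any edit):
Code:
# in /etc/mdadm/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5
# then regenerate the initramfs so early boot sees the new config
update-initramfs -u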

Does anyone have any ideas?

Question About RAID Disk

Hello All;
I was able to create a RAID 1 array using two 4 TB drives, but the resulting array (md0) is listed as "4.0 TB Unknown" in the Disks GUI.

I am wondering whether it is correct to leave it as-is, or whether I have to format it with a filesystem (ext4?) for it to work properly?
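
In case it clarifies what I am asking, this is the step I suspect might be missing (a sketch only; the mount point is just an example):
Code:
# put a filesystem directly on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/media
sudo mount /dev/md0 /mnt/media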

Thanks.

Tim

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a state of "clean, degraded", while md1 is active (auto-read-only) and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
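
If the second disk is actually healthy and its partition simply dropped out of the mirror, I gather the usual recovery looks something like the sketch below (the device name /dev/sdb1 is an assumption based on the layout above, so the disk should be checked first):
Code:
# inspect the candidate partition's RAID superblock before touching anything
mdadm --examine /dev/sdb1
# re-add it to the degraded mirror and let it resync
mdadm /dev/md0 --add /dev/sdb1
# watch the rebuild progress
cat /proc/mdstat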

How To Force Our 500 GB Drive To Be The SDA Drive Instead Of Our Raid Array

We are going in circles trying to get our Ubuntu installation (14.04.1) to let us load Ubuntu's operating system onto the 500 GB drive, which always ends up somehow as sdc. We do not want the boot loader on the RAID arrays, but on the 500 GB drive. We have two sets of RAID arrays in one box/enclosure.

/dev/sda - 18 TB RAID Array
/dev/sdb - 12 TB RAID Array
/dev/sdc - 500 GB Raptor Drive <-- We want this one to be the sda drive, and it is also where we'd put the boot loader.

How do we do this?
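
For what it's worth, the direction we have been considering is to stop fighting the kernel's sdX ordering and refer to the drive by its stable name instead, installing the boot loader to that device explicitly. A rough sketch (the by-id name below is hypothetical):
Code:
# list the stable names for all drives
ls -l /dev/disk/by-id/
# install the boot loader onto that specific disk rather than "whatever is sda"
sudo grub-install /dev/disk/by-id/ata-WDC_WD5000HHTZ-EXAMPLE
sudo update-grub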

How To Determine If Syntax Array=( Str1 Str2 Str3 ) To Declare Array Is Valid

Hi, I am creating a Korn shell script and need to create an array where each element corresponds to one line of an input file. Being able to do the following would be awesome: Code:
array=( $(cat file.txt) )

However, I'm finding that not all of my development boxes allow this. Some return this: Code:
ksh: syntax error: `(' unexpected

I suspect this has something to do with the Korn shell version. Does anyone know how I can find out what version is required in order to be able to declare arrays, as above? Or is this an OS version issue?

I would really appreciate any tips on how I can find out the minimum requirements for this syntax to be valid.
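
In case it helps frame the question, this is the fallback I am considering if the array=( ... ) form turns out not to be available (a sketch; it assumes the goal is one element per line and uses only indexed assignment, which I believe even older ksh88 supports):
Code:
i=0
while IFS= read -r line; do
    arr[i]=$line          # indexed assignment works in ksh88 as well as ksh93
    i=$((i + 1))
done < file.txt
echo "read ${#arr[@]} lines"

Note also that array=( $(cat file.txt) ) splits on whitespace rather than newlines, so even where it is accepted it only yields one element per line if the lines contain no spaces (or IFS is adjusted).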

Thanks.

Is LVM/btrfs/ZFS "pooling" Worth The Trouble?

I've long wanted to delve into these methods of disk management, but here's the thing: I only have one hard drive (a 1 TB), and the more I read, the more it seems the main point of using these techniques is to use that extra layer of abstraction to bridge multiple HDDs in some version of a RAID setup.

Of course I've also read that performance is better, along with snapshot capability, on-the-fly partition resizing, striping, etc. These prospects excite me. So finally, two questions:

1) With just one physical drive, is it worth creating a new partition table to include these technologies?
2) With all of the above methods, there is no way I can keep the data on any part of this PV if I want to venture into LVM, ZFS or Btrfs, correct?

P.S. I've got 12 partitions: one swap, one extended, and one very large logical partition that holds media, documents, music, etc. for the 9 Linux distros filling the remaining partitions.
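
For what it's worth, this is the kind of minimal single-drive LVM layout I have in mind for question 1 (a sketch with hypothetical device and volume names; it assumes a spare partition can be donated, since an existing filesystem can't simply be converted in place):
Code:
# /dev/sda9 stands in for a spare partition being handed over to LVM
sudo pvcreate /dev/sda9
sudo vgcreate vg0 /dev/sda9
# carve out a resizable logical volume and put a filesystem on it
sudo lvcreate -L 50G -n media vg0
sudo mkfs.ext4 /dev/vg0/media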

Mdadm: Do You Still Have To Set Fdisk Types

If you want to set up a RAID 1 system on a new/zeroed disk, will modern mdadm programs on Debian/Ubuntu automatically set the partition type to 0xfd (Linux raid autodetect)?
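
A sketch of what I mean by setting the type by hand, in case the question is unclear (the device name is just an example):
Code:
# mark the first partition on /dev/sdb as a RAID member
parted /dev/sdb set 1 raid on
# check what type is currently recorded
fdisk -l /dev/sdb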

Fred.

Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID 1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I went ahead and reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I satisfied myself that everything stays intact as long as the mount points are selected carefully during the reinstallation and only the right partitions are formatted.

Now I have reinstalled the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the installation, planning to set them up manually afterwards; I only selected the /, /others, and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array came up as /dev/md127 (read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it, as it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, it will not let me stop the /dev/md127 RAID array. Since I am new to RAID/LVM, I would appreciate your help in making LVM inactive without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup.
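
For reference, this is the rough sequence I imagine is needed (a sketch only, not something I have run; it assumes nothing else is holding the "data" VG open and that the array really is built from the whole disks sdb and sdc, as mdstat shows):
Code:
# deactivate the volume group so nothing holds the array open
vgchange -an data
# stop the misnamed array, then reassemble it under the wanted name
mdadm --stop /dev/md127
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
# record it in the config so the name survives reboots (CentOS keeps this in /etc/mdadm.conf)
mdadm --detail --scan >> /etc/mdadm.conf
# bring the volume group back up
vgchange -ay data
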
Expecting your kind reply, Thanks.

Problem With Creating Partitions For Bootable SD Card

Hi everybody,

I am trying to set up my SD card with an embedded Ubuntu I built recently, following this link: https://eewiki.net/display/linuxonar...-Ubuntu14.04.1 (topic: Setup microSD/SD card).
I am doing this on Ubuntu running in VirtualBox. I am quite new to Linux and barely understand the command that causes the first warning. Here is what happened:

ubuntu@ubuntu-VirtualBox:~$ sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__ --force
> 1,12,0xE,*
> ,,,-
> __EOF__
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/sdc1: 1019 cylinders, 246 heads, 62 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdc1: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End MiB #blocks Id System
/dev/sdc1p1 * 1 12 12 12288 e W95 FAT16 (LBA)
/dev/sdc1p2 13 7591 7579 7760896 83 Linux
/dev/sdc1p3 0 - 0 0 0 Empty
/dev/sdc1p4 0 - 0 0 0 Empty
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
ubuntu@ubuntu-VirtualBox:~$ sudo mkfs.vfat -F 16 ${DISK}p1 -n BOOT
mkfs.fat 3.0.26 (2014-03-07)
/dev/sdc1p1: No such file or directory


So after I entered "sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__ --force
> 1,12,0xE,*
> ,,,-
> __EOF__"
everything looked fine, since it said it had created the partition sdc1p1 etc., but as soon as I try to format the partition with "sudo mkfs.vfat -F 16 ${DISK}p1 -n BOOT", it tells me that there is no partition called /dev/sdc1p1.
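
One thing I am wondering about (purely an assumption on my part): the output says "Disk /dev/sdc1" and shows partitions like /dev/sdc1p1, which makes me think ${DISK} may be pointing at the first partition instead of the whole card. With the whole device, the tutorial's commands would look roughly like this (and for a /dev/sdX device the first partition is ${DISK}1, not ${DISK}p1):
Code:
# point DISK at the whole device, not a partition (sdc taken from the output above)
export DISK=/dev/sdc
sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__
1,12,0xE,*
,,,-
__EOF__
sudo mkfs.vfat -F 16 ${DISK}1 -n BOOT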

I would be very grateful if somebody could help me out on this. I thought about trying a different tutorial, but they all look really different, so I tried to stick with the one that has worked well for me so far.

Thank you very much!

Regards,

Lenni