Mdadm: Do You Still Have To Set Fdisk Types?

If you want to set up a RAID1 system with modern mdadm on Debian/Ubuntu, will mdadm automatically set the partition type to 0xfd (Linux raid autodetect) on a new/zeroed disk, or do I still have to set it myself?
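For reference, setting the type by hand with fdisk would look something like this (a sketch; /dev/sdb is just an example of a new disk):
Code:
fdisk /dev/sdb
# inside fdisk:
#   t   - change a partition's type
#   fd  - Linux raid autodetect
#   w   - write the table and quit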

Fred.


Similar Content



Issues With RAID - Creating As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have it specified to set up the RAID as /dev/md0. But every time I reboot, my /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.

Does anyone have any ideas?
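A likely culprit is that the initramfs carries its own (stale) copy of mdadm.conf, so changes under /etc don't take effect at boot until it is rebuilt. A minimal sketch of the usual Debian-side fix, assuming the ARRAY line above is otherwise correct:
Code:
# append the current array definition, then edit the new line to read ARRAY /dev/md0 ...
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so it picks up the updated config
update-initramfs -u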

Question About Setting Up A RAID Array Using Mdadm

Hi All;
I am new to Linux and Ubuntu. I am setting up a RAID for a media server and want to be sure I do it right and in the proper order.

Based upon feedback received on this forum I think it makes most sense to partition my 4 TB disks into 2 partitions of 2 TB each.

So am I correct in running mdadm first on both (unpartitioned) 4 TB disks to create the RAID 1, and then partitioning the resulting single 4 TB device with parted into two 2 TB partitions, as in the sketch below? Thanks.
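A minimal sketch of that order, assuming the two blank disks are /dev/sdb and /dev/sdc:
Code:
# mirror the two whole disks first
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# then partition the resulting md device into two halves
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary 0% 50%
parted /dev/md0 mkpart primary 50% 100%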

Tim

What's The Problem With LINUX MINT Installation On RAID1?

I am installing Linux Mint Rebecca on this computer: CPU: i7, motherboard: ASUS X99-A, HD: 2x 2 TB in motherboard BIOS RAID1, so the total hard drive size is 2 TB.

While I was installing Linux Mint on the RAID1 drive, the installer popped up an error window at the Where Are You step. After I clicked OK, it jumped to Installation Type, but no matter which type I selected (Erase disk and install Linux Mint, or Something else), the error message popped up again at Where Are You.

And if I click either of the two disks, it shows "Unable to mount location".

However, I can double-click and see the contents of the 2 TB RAID1 disk.

Question: what's the problem with installing Linux Mint on RAID1?
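One diagnostic step that might help, assuming the live session gives you a terminal: check whether the installer's kernel actually sees the firmware RAID set.
Code:
# list the BIOS/firmware RAID sets dmraid can detect
sudo dmraid -r
# and check whether a device-mapper node exists for the mirror
ls /dev/mapper/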

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a state of clean, degraded, while md1 is active but auto-read-only and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
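From the [U_] line it looks like /dev/sdb1 has dropped out of md0. A minimal sketch of the usual recovery, assuming /dev/sdb itself is healthy:
Code:
# re-add the missing mirror half and let it resync
mdadm /dev/md0 --add /dev/sdb1
# take md1 out of auto-read-only mode
mdadm --readwrite /dev/md1
# watch the rebuild progress
cat /proc/mdstat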

Creating RAID Array With Mdadm Question

Hi all,

Question for all: what's the difference between creating an array with mdadm using partitions (/dev/sdb1, /dev/sdc1, /dev/sdd1, etc.) and using whole, unpartitioned disks (/dev/sdb, /dev/sdc, /dev/sdd, etc.)? What are the benefits? A performance boost?
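For concreteness, the two forms in question (device names and RAID level purely as examples):
Code:
# built from whole disks
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# built from one partition per disk
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1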

Thanks,

1st Post, MDADM Issue

First off, hello all. I'm Gimpacause and I am just starting to use Linux again after about 12 years.

I recently bought a Backblaze storage pod and installed 15x 6 TB hard drives (in RAID6 + LVM) using this guide:


Recently, due to a faulty UPS and power issues at my place, I fried the mainboard and boot drive.

I don't have anything backed up from my previous Ubuntu install.


I have replaced the boot drive, installed Ubuntu 14.04 LTS (including mdadm and xfsprogs), and installed a replacement mainboard (exact same model/revision). Can someone please point me to a guide, or advise how I should proceed with recovering the RAID and LVM so my data stays intact? I have not attempted anything yet for fear of data loss.
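A minimal sketch of the usual non-destructive first steps, assuming the fifteen data disks are intact (assemble from the existing superblocks; never --create over an array you are recovering):
Code:
# scan for and assemble any arrays found in the members' superblocks
mdadm --assemble --scan
# bring the LVM volume group(s) back online
vgscan
vgchange -ay
# mount read-only first to verify the data (vg/lv names are placeholders)
mount -o ro /dev/mapper/vgname-lvname /mnt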

If you require any logs/information, I am happy to provide them.

Many thanks

Partition Using Fdisk Gone Wrong.

Code:
Disk /dev/sda (Sun disk label): 255 heads, 63 sectors, 60801 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sda1             0     60795 488335837+  83  Linux native
/dev/sda2  u      60795     60801     48195   82  Linux swap
/dev/sda3             0     60801 488384032+   5  Whole disk

Actually, I created a 100 MB LVM partition using fdisk, but after checking in fdisk this is what I am getting. I don't know whether my system will boot next time or not.

What should I do to recover from this situation?

I am using CentOS 6 x64.
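Before changing anything else, one prudent step is to back up the first sector, which holds the Sun disk label, so the current table can be restored later; a sketch, assuming the disk is /dev/sda:
Code:
# save the Sun disk label (sector 0); restore later by reversing if=/of=
dd if=/dev/sda of=/root/sda-label.bin bs=512 count=1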

Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything will be fine if the mount points are chosen carefully during reinstallation and only the right partitions are formatted.

Now I reinstalled the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the reinstallation, intending to set them up manually afterwards. I again selected only the /, /others and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array had come up as /dev/md127 (read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it, as it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, the /dev/md127 RAID array cannot be stopped. As I am new to RAID/LVM, I would appreciate your kind help in deactivating the LVM without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup, as sketched below.
Looking forward to your kind reply. Thanks.
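A minimal sketch of that sequence, assuming the volume group is named data and the members are /dev/sdb and /dev/sdc as shown above:
Code:
# deactivate the volume group so the array is no longer in use
vgchange -an data
# stop the misnamed array and reassemble it under the desired name
mdadm --stop /dev/md127
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
# record the array in mdadm.conf so the name survives reboots
mdadm --detail --scan >> /etc/mdadm.conf
# reactivate the volume group
vgchange -ay data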

Problem With Creating Partitions For Bootable SD Card

Hi everybody,

I am trying to set up my SD card with an embedded Ubuntu I built lately, according to this link: https://eewiki.net/display/linuxonar...-Ubuntu14.04.1 (topic: Setup microSD/SD card).
I am doing this on Ubuntu running in VirtualBox. I am quite new to Linux and barely understand the command that causes the first warning. Here is what happened:

Code:
ubuntu@ubuntu-VirtualBox:~$ sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__ --force
> 1,12,0xE,*
> ,,,-
> __EOF__
Checking that no-one is using this disk right now ...
BLKRRPART: Invalid argument
OK

Disk /dev/sdc1: 1019 cylinders, 246 heads, 62 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdc1: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End MiB #blocks Id System
/dev/sdc1p1 * 1 12 12 12288 e W95 FAT16 (LBA)
/dev/sdc1p2 13 7591 7579 7760896 83 Linux
/dev/sdc1p3 0 - 0 0 0 Empty
/dev/sdc1p4 0 - 0 0 0 Empty
Successfully wrote the new partition table

Re-reading the partition table ...
BLKRRPART: Invalid argument

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
ubuntu@ubuntu-VirtualBox:~$ sudo mkfs.vfat -F 16 ${DISK}p1 -n BOOT
mkfs.fat 3.0.26 (2014-03-07)
/dev/sdc1p1: No such file or directory


So after I entered the sfdisk command shown above, everything looked fine, since it said that it had created the partition sdc1p1 etc. But as soon as I try to format this partition with "sudo mkfs.vfat -F 16 ${DISK}p1 -n BOOT", it tells me that there is no partition called /dev/sdc1p1.
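Looking at the output again, sfdisk reported "Disk /dev/sdc1", so ${DISK} may have been set to the first partition rather than the whole device. A sketch of what was probably intended, assuming the card itself is /dev/sdc (note that partitions of /dev/sdX devices are named sdc1, sdc2, ... without the "p"):
Code:
export DISK=/dev/sdc        # the whole card, not /dev/sdc1
sudo sfdisk --in-order --Linux --unit M ${DISK} <<-__EOF__
1,12,0xE,*
,,,-
__EOF__
sudo mkfs.vfat -F 16 ${DISK}1 -n BOOT   # i.e. /dev/sdc1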

I would be very grateful if somebody could help me out with this. I thought about trying a different tutorial, but they all look really different, so I tried to stick with the one that has worked well for me so far.

Thank you very much!

Regards,

Lenni

MDADM: How To Mirror Installed LM 17 Disk

Computer: i5, 8 GB RAM, 2x 500 GB HD. One of the 500 GB hard drives, /dev/sda, has Linux Mint Rebecca installed, with a 20 GB boot partition, a 470 GB EXT4 home partition, and 10 GB of swap. I want to add the other 500 GB hard drive, /dev/sdb, to create a RAID 1 with the old one. Is it possible, and what are the commands?
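Converting an already-installed disk to RAID 1 is possible, but the usual approach is to build a degraded mirror on the new disk first, copy the system over, and only then add the original disk. A sketch of the key steps, assuming /dev/sdb has been partitioned to match /dev/sda:
Code:
# create a one-disk (degraded) mirror on the new drive; 'missing' reserves the second slot
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# after copying the data across and booting from the array, attach the old disk
mdadm /dev/md0 --add /dev/sda1

Getting the bootloader installed on both disks is the fiddly part, so treat this as an outline rather than a full recipe.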