Question About RAID Disk

Hello All;
I was able to create a RAID 1 array using two 4 TB drives, but the resulting array (md0) is listed as "4.0 TB Unknown" in the Disks GUI.

I am wondering whether it is correct to leave it as-is, or whether I have to put a filesystem (ext4?) on it for it to work properly?
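
For reference, an md array doesn't take a partition type; the usual next step is to create a filesystem directly on the array device and mount it. A minimal sketch, where ext4 and the mount point are assumptions:
Code:
# Create an ext4 filesystem directly on the RAID 1 array device:
mkfs.ext4 /dev/md0

# Mount it (the mount point is just an example):
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid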

Thanks.

Tim


Similar Content

Question About Setting Up A RAID Array Using Mdadm

Hi All;
I am new to Linux and Ubuntu. I am setting up a RAID for a media server and want to be sure I do it right and in the proper order.

Based upon feedback received on this forum, I think it makes the most sense to partition my 4 TB disks into two partitions of 2 TB each.

So am I correct in running mdadm FIRST on both (unpartitioned) 4 TB disks to create the RAID 1 array, and then partitioning (using parted) the resulting single 4 TB device into two 2 TB partitions? Thanks.
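
For reference, a minimal sketch of that order of operations, where the device names /dev/sdb and /dev/sdc are assumptions:
Code:
# 1) Build the RAID 1 array from the two whole, unpartitioned disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2) Partition the resulting single 4 TB md device into two 2 TB halves:
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary 0% 50%
parted /dev/md0 mkpart primary 50% 100%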

Tim

Creating RAID Array With Mdadm Question

Hi all,

Question for all: what's the difference between creating an array with mdadm using the partitions (/dev/sdb1, /dev/sdc1, /dev/sdd1, etc.) versus the whole, unpartitioned disks (/dev/sdb, /dev/sdc, /dev/sdd, etc.)? What are the benefits? A performance boost?
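
For reference, the two invocations being compared look like this (a sketch; the RAID level and device names are assumptions). The usual argument for partitions is administrative rather than performance-related: a partition table marks the disk as in use to other tools, and a slightly undersized partition leaves slack for replacement disks that are not byte-for-byte identical.
Code:
# Array built from one partition on each disk:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Array built from the whole, unpartitioned disks:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd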

Thanks,

How To Force Our 500 GB Drive To Be The sda Drive Instead Of Our RAID Array

We are going in circles trying to get our Ubuntu installation (14.04.1) to let us load Ubuntu's operating system onto the 500 GB drive, which somehow always ends up as sdc. We do not want the boot loader on the RAID arrays, but on the 500 GB drive. We have two RAID arrays in one box/enclosure.

/dev/sda - 18 TB RAID Array
/dev/sdb - 12 TB RAID Array
/dev/sdc - 500 GB Raptor Drive <-- We want this one to be the sda drive, and it's also where we'd install the boot loader.

How do we do this?
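
For reference, drive letter order is assigned at boot and isn't guaranteed, so the usual approach is not to force the Raptor to be sda but to install the boot loader onto it explicitly and reference filesystems by UUID. A sketch, assuming the Raptor currently enumerates as /dev/sdc:
Code:
# Install GRUB to the MBR of the 500 GB drive, whatever letter it has:
grub-install /dev/sdc
update-grub

# Find the UUIDs to use in /etc/fstab; UUIDs survive sda/sdb/sdc
# reshuffling between boots:
blkid /dev/sdc1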

Issues With RAID - Creating As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have specified that the RAID should be set up as /dev/md0. But every time I reboot, my /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.

Does anyone have any ideas?
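
For reference, the usual culprit on Debian is the copy of mdadm.conf embedded in the initramfs: the array is assembled early in boot from that stale copy, and an array the initramfs doesn't recognize gets the fallback name md127. A sketch of the common fix, assuming the ARRAY line above is otherwise correct:
Code:
# Regenerate the ARRAY line from the running array, then edit the
# device name in /etc/mdadm/mdadm.conf to /dev/md0:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so its embedded mdadm.conf matches:
update-initramfs -u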

Ddrescue RAID

Hello,
I have a server with hardware RAID, so both drives show up as a single volume.
If I create a clone of this with ddrescue to a single disk, then ddrescue it back, will it still be bootable? I did attempt to boot the single disk on another computer, but it failed, so I'm a bit concerned about whether my clone is any good.
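
For reference, a sketch of the clone-and-restore described, where /dev/sda as the hardware RAID volume and /dev/sdb as the spare disk are assumptions. Note that booting the clone on another computer is a separate question, since the boot loader and fstab may reference the original controller's volume:
Code:
# Clone the RAID volume to the single disk; -f forces writing to a
# block device, and the map file lets the copy resume after errors:
ddrescue -f /dev/sda /dev/sdb clone.map

# Restoring later reverses source and destination (fresh map file):
ddrescue -f /dev/sdb /dev/sda restore.map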

Second Hard Drive Showing First Drive / When Mounted

On my current system I have drives sda, sdb, and sdc: 1 TB, 1.5 TB, and 1.5 TB respectively.
SATA drive a is where I installed the OS; SATA drives b and c are from an old system where they formed a Linux software mirror, something I was using to play and learn. Before adding b and c to my current system, I stopped the mirror and removed the partitions. I then added the drives, used cfdisk to create one partition on each, and mounted the new drives. When I run ls, I see bin/ ...etc. listed in the new drive. Is this normal, or should I use Logical instead of Primary when I create the new drive partitions?

Is there a way to get my drives back to non-RAID that I have missed?
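
For reference, a sketch of how leftover md metadata is usually stripped from re-used disks, assuming the old mirror members were the whole disks /dev/sdb and /dev/sdc (if the superblocks were written to partitions instead, point the commands at those):
Code:
# Wipe the md superblocks left over from the old mirror:
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc

# Or remove every known filesystem/RAID signature in one go:
wipefs -a /dev/sdb /dev/sdc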

Thanks

Re-assembled Software Raid5 Now With No File System

Hi All

Could someone please assist me with a software RAID 5 array issue I am having?

About a week or so ago I was asked to have a look at a mate's company server that had crashed after a power failure - no UPS, and close to 9 GB of data with no backup.
It's an HP ProLiant mini server running OpenMediaVault v1.9 with a software RAID 5 setup consisting of 4x 3 TB Western Digital hard drives.


The 80 GB boot drive crashed (hardware malfunction), so I replaced it and installed the server with OpenMediaVault 1.9, as was previously installed. On starting up, however, I noticed that none of the shares were available, and upon closer inspection I discovered that the RAID had crashed too. I therefore re-assembled the RAID, but now have no file system nor any partitions on any of the four drives.

This is about the extent of my knowledge when it comes to Linux, and again, I don't know that I'd want to risk it with no backup of the data.

I have run fsck /dev/md1, but this reports an "error 2 while executing fsck.linux_raid_member" (not found).
Please, I am desperate for some advice!
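
For reference, that error usually means blkid still identifies the device's contents as a "linux_raid_member" signature rather than a filesystem, so fsck tries to run a non-existent fsck.linux_raid_member helper. A sketch of the read-only diagnostics that usually come next, where /dev/sd[a-d] for the four data drives is an assumption:
Code:
# See what signature is actually on the assembled device:
blkid /dev/md1
file -s /dev/md1

# Inspect the md superblocks on the member drives (read-only):
mdadm --examine /dev/sd[a-d]

# Show how the array was assembled:
mdadm --detail /dev/md1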

Thanks
Wayne

My Computer No Longer Boots Live Disks

A week ago, I realized that I could not boot live disks. At first I thought the optical drive was not writing the disks correctly, but when I tried two live disks that I've used many times before, nothing happened with those either.

I have the CD/DVD drive listed first in my BIOS boot order, and I even tried selecting the optical drive in the boot menu when the computer started; it still does not boot a disk. Thankfully, my somewhat old hard drives are still chugging along, but I need to be able to use a live disk in case one or both die on me.

My question is: how can I know for sure what the problem is? Is there a way to test the optical drive (a Samsung Super WriteMaster, which has a bad rep)? Could it be my motherboard? It was bought new two years ago, and there are no other issues with it. Also, I have been able to create playable DVDs on my computer with the optical drive that work perfectly on my Blu-ray player, yet I cannot play the movies or open a data disc in the file manager on my computer. What does that mean?

I want to know that it's worth buying another burner before shelling out the money, since I'm already practically broke from Christmas gift purchases. Any suggestions on how to test the optical drive would be appreciated.
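
For reference, one way to test the drive's reading side without buying anything is to checksum a burned disc against the original image (a sketch; the device name /dev/sr0 and the file name ubuntu.iso are assumptions):
Code:
# Checksum the original image:
md5sum ubuntu.iso

# Read back the same number of bytes from the disc and checksum them;
# a mismatch points at the burn or the drive:
size=$(stat -c %s ubuntu.iso)
dd if=/dev/sr0 bs=2048 count=$((size / 2048)) | md5sum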

Backups And External Drives

Hello everyone,

I recently had an issue where I lost my whole backup server: an electrical overload caused my server to literally explode and fried all four of my terabyte drives... needless to say, I have no more backups because of this. Everywhere I read about backups said that setting up a RAID array would give me good backups... boy, did I learn the hard way that I also need some sort of external backup option, which brings me to this post and my questions:

I'm using Ubuntu 14.04 LTS Server on an older Dell PowerEdge 600SC, and I was thinking of using WD Passport 1 TB external drives as my "offsite" backup option. I don't have a lot of data, and my current backup schedule is only a weekly backup, so I'm thinking that if I have two of these Passport drives, I can keep one drive offsite and one attached to the server, and rotate them every four weeks so as not to lose all my data.

Here's my question: Ideally, I would love to just be able to unplug the current drive, plug in the new drive and have everything work. However, I don't see this actually working, but if there's a way to do this, that would be totally awesome.... ;-)

So, realistically, I know I will have to unmount the one drive, unplug it, then plug in the new drive and mount it on the system. Is there a way to mount the new drive at the same mount point automatically, so that I don't have to rewrite my backup script each time I swap drives out and the backups keep going to the same place? Or will the UUIDs get messed up each time I do this?
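
For reference, one common trick is to mount by filesystem label rather than device name or UUID: give both Passport drives the same label, and whichever one is attached lands on the same mount point. A sketch, where ext4 on the externals, the partition name, and the /mnt/backup mount point are all assumptions:
Code:
# Give each external drive the same label (run once per drive):
e2label /dev/sdd1 backup

# /etc/fstab entry; nofail keeps boot from hanging when no backup
# drive is attached:
LABEL=backup  /mnt/backup  ext4  defaults,nofail  0  2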

Hopefully this makes sense and an easy solution can be found to accommodate this idea...

Thanks again for all your help. This site is awesome for newbies such as myself........

Mikey

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a state of "clean, degraded", while md1 is mounted read-only and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
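
For reference, with one member shown as "removed", the usual recovery sketch is to re-add the missing partition and let the mirror resync; /dev/sdb1 as the absent member is an assumption (by analogy with md1 using sda2/sdb2), and the disk's health should be verified first:
Code:
# Check the health of the disk that dropped out of md0:
smartctl -a /dev/sdb

# Re-add the missing member; mdadm will resync it from /dev/sda1:
mdadm /dev/md0 --add /dev/sdb1

# md1's auto-read-only state clears on first write, or explicitly:
mdadm --readwrite /dev/md1

# Watch the rebuild:
cat /proc/mdstat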