Issues With RAID - Creating As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have it configured to set up the RAID as /dev/md0. But every time I reboot, /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.

Does anyone have any ideas?
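
EDIT: One thing I still have to try, based on what I've read (hedged, not yet verified on this box): Debian copies /etc/mdadm/mdadm.conf into the initramfs, and that embedded copy is what governs array names at early-boot assembly, so a stale initramfs will keep producing md127 regardless of what the on-disk config says. A minimal sketch:
Code:
# Regenerate the ARRAY line; edit the name to /dev/md0 before saving
# (the sed below is just one way to do that):
mdadm --detail --scan | sed 's|/dev/md/maples-server:0|/dev/md0|'
# Put the resulting ARRAY line into /etc/mdadm/mdadm.conf, then rebuild
# the initramfs so the early-boot copy matches, and reboot:
update-initramfs -u
reboot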


Similar Content



Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I simply reinstalled the OS. During the first reinstallation, I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything will be fine if the mount points are selected carefully during OS reinstallation, taking care over which partitions get formatted.

Now I reinstalled the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the reinstallation, intending to set them up manually afterwards. So I selected only the /, /others, and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array had come up as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0. But I am not able to stop it, as it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active:
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Since LVM is active on /dev/md127, it does not allow stopping the /dev/md127 RAID array. As I am new to RAID/LVM, I would appreciate your kind help in making LVM inactive without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup.
Thanks in advance.
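
For reference, the sequence I am imagining (a sketch pieced together from the man pages, assuming the VG is named data and the members are /dev/sdb and /dev/sdc as shown above):
Code:
vgchange -an data                  # deactivate the VG so nothing holds md127
mdadm --stop /dev/md127
# --update=name rewrites the name stored in the v1.2 superblock so the
# array assembles as md0 from now on:
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc --update=name
vgchange -ay data                  # reactivate the VG
mdadm --detail --scan >> /etc/mdadm.conf   # persist for the next boot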

Apache Not Working Properly After Update To Debian 8

Hi,
I just upgraded my server from Debian 7 to Debian 8. Everything seemed to go fine. However, Apache doesn't seem to be able to see the web files. If I go to my server's IP, I get an empty directory listing, as if the document root pointed to an empty directory. However, my apache.conf points to /var/www, and there are indeed files there:
Code:
root@maples-server:~# ls -la /var/www/
total 624
drwxr-xr-x  5 www-data www-data   4096 Apr 28 19:35 .
drwxr-xr-x 13 root     root       4096 Mar 28 11:43 ..
lrwxrwxrwx  1 www-data www-data     18 Jan 10 20:47 anthony -> /home/anthony/web/
-rw-------  1 www-data www-data   1455 Apr 23 21:41 .bash_history
-rw-r--r--  1 www-data www-data   3388 Jan 21 19:34 .bashrc
drwxr-xr-x 11 www-data www-data   4096 Apr 23 21:41 chat
lrwxrwxrwx  1 www-data www-data     14 Mar 23 16:20 dad -> /home/dad/web/
drwxr-xr-x  2 root     root       4096 Mar 15 05:52 html
-rw-r--r--  1 www-data www-data    323 Mar 26 18:35 index.htm
drwx------  2 www-data www-data   4096 Jan 21 19:50 Mail
-rw-r--r--  1 anthony  anthony  592795 Apr 23 19:52 phpfreechat-1.7.tar.gz
-rw-r--r--  1 www-data www-data     41 Apr 15 21:52 robots.txt
-rw-------  1 www-data www-data   1541 Apr 23 21:41 .viminfo

Here's my apache.conf (with the comments stripped; there were no "end of line" comments):

Code:
root@maples-server:~# cat /etc/apache2/apache2.conf | grep -v "#"

Mutex file:${APACHE_LOCK_DIR} default

PidFile ${APACHE_PID_FILE}

Timeout 300

KeepAlive On

MaxKeepAliveRequests 100

KeepAliveTimeout 5


User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

HostnameLookups Off

ErrorLog ${APACHE_LOG_DIR}/error.log

LogLevel warn

IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf

Include ports.conf


<Directory />
	Options FollowSymLinks
	AllowOverride None
	Require all denied
</Directory>

<Directory /usr/share>
	AllowOverride None
	Require all granted
</Directory>

<Directory /var/www/>
	Options Indexes FollowSymLinks
	AllowOverride None
	Require all granted
</Directory>

AccessFileName .htaccess

<FilesMatch "^\.ht">
	Require all denied
</FilesMatch>


LogFormat "v:p h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent


IncludeOptional conf-enabled/*.conf

IncludeOptional sites-enabled/*.conf

I also checked sites-enabled/000-default, and everything seems to be fine there:
Code:
root@maples-server:~# cat /etc/apache2/sites-enabled/000-default 
<VirtualHost *:80>

	DocumentRoot /var/www
	<Directory />
		Options FollowSymLinks
		AllowOverride All
	</Directory>
	<Directory /var/www/>
		Options Indexes FollowSymLinks MultiViews
		AllowOverride All
		Order allow,deny
		allow from all
	</Directory>

	ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
	<Directory "/usr/lib/cgi-bin">
		AllowOverride All
		Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
		Order allow,deny
		Allow from all
	</Directory>

	ErrorLog ${APACHE_LOG_DIR}/error.log

	# Possible values include: debug, info, notice, warn, error, crit,
	# alert, emerg.
	LogLevel warn

	CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Additionally, connections are no longer showing up in /var/log/apache2/access.log; the last access time in that file is from before the upgrade. I don't know enough about systemd to tell whether it is redirecting the logs somewhere else...

At this point, I have no idea why it's not working. If anyone could point me in the right direction, I would really appreciate it.
Thanks!

EDIT: After looking around some more, it seems that the output of "apachectl -S" is helpful. So here it is:
Code:
root@maples-server:~# apachectl -S
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
VirtualHost configuration:
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/var/log/apache2/error.log"
Mutex ssl-stapling: using_defaults
Mutex proxy: using_defaults
Mutex ssl-cache: using_defaults
Mutex default: dir="/var/lock/apache2" mechanism=fcntl 
Mutex mpm-accept: using_defaults
Mutex watchdog-callback: using_defaults
PidFile: "/var/run/apache2/apache2.pid"
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
Define: ENABLE_USR_LIB_CGI_BIN
User: name="www-data" id=33
Group: name="www-data" id=33

It appears that it's looking in an html subdirectory, which was not the case before the upgrade. I currently have an (ugly but usable) workaround using a symlink:
Code:
root@maples-server:~# cd /var/www/
root@maples-server:/var/www# rm -r html/
root@maples-server:/var/www# ln -s /var/www/
root@maples-server:/var/www# mv www html
root@maples-server:/var/www# ls -l html
lrwxrwxrwx 1 root root 9 Apr 28 22:36 html -> /var/www/

While this does work, I'd like to find the proper way of doing it. Any ideas?
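
EDIT 2: A possible explanation I want to verify (hedged): Apache 2.4 on Debian 8 only includes sites-enabled/*.conf (see the IncludeOptional at the bottom of apache2.conf above), so the old wheezy-style site file named 000-default, with no .conf suffix, is silently ignored; with no VirtualHost loaded, the compiled-in default of /var/www/html wins, which matches the apachectl -S output. Renaming the site file should restore the old DocumentRoot:
Code:
# Drop the stale symlink, rename the site file, re-enable, and reload:
rm /etc/apache2/sites-enabled/000-default
mv /etc/apache2/sites-available/000-default /etc/apache2/sites-available/000-default.conf
a2ensite 000-default
apache2ctl configtest && systemctl reload apache2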

Question About Setting Up A RAID Array Using Mdadm

Hi All;
I am new to Linux and Ubuntu. I am setting up a RAID for a media server and want to be sure I do it right and in the proper order.

Based upon feedback received on this forum I think it makes most sense to partition my 4 TB disks into 2 partitions of 2 TB each.

So am I correct in running mdadm first on both (unpartitioned) 4 TB disks to create the RAID 1, and then partitioning (using parted) the resulting single 4 TB md device into two 2 TB partitions? Thanks.

Tim
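
For concreteness, a sketch of that order, with hypothetical device names (/dev/sda and /dev/sdb standing in for the two 4 TB disks); note the md device needs a GPT label, since it is larger than 2 TB:
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart primary 0% 50%
parted -s /dev/md0 mkpart primary 50% 100%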

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a clean, degraded state, while md1 is clean but auto-read-only. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
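
A sketch of the usual recovery, assuming /dev/sdb is actually healthy and its sdb1 member simply dropped out (worth checking dmesg and SMART first; if the disk is failing, replace it instead):
Code:
mdadm /dev/md0 --add /dev/sdb1    # re-add the missing half; a resync starts
watch cat /proc/mdstat            # monitor the rebuild
# md1 is only auto-read-only; it flips to read-write on first write, or force it:
mdadm --readwrite /dev/md1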

Sed: Transforming 'ls -laR' Output Into A List With Absolute Paths

Hello, this is my first post.
First, I would like to thank you all for answering other people's questions, because I've been able to learn a lot from this forum.

I need your help with something.
I have the standard output of the 'ls -laR /etc' command, which looks like this:
Code:
/etc/X11/xorg.conf.d:
total 4
drwxr-xr-x. 2 root root  29 Apr  1 00:46 .
drwxr-xr-x. 5 root root  54 Apr  1 00:43 ..
-rw-r--r--. 1 root root 232 Apr  1 00:46 00-keyboard.conf

/etc/xdg:
total 12
drwxr-xr-x.  4 root root   36 Apr  1 00:43 .
drwxr-xr-x. 87 root root 8192 Apr 12 13:53 ..
drwxr-xr-x.  2 root root    6 Jun 10  2014 autostart
drwxr-xr-x.  2 root root   17 Apr  7 01:25 systemd

Using this sed command:
Code:
sed -e '/./!d' -e '/^total/d' -e '/\.$/d' -e 's/:$/\//' list.txt

I have transformed it to the following form:

Code:
/etc/X11/xorg.conf.d/
-rw-r--r--. 1 root root 232 Apr  1 00:46 00-keyboard.conf
/etc/xdg/
drwxr-xr-x.  2 root root    6 Jun 10  2014 autostart
drwxr-xr-x.  2 root root   17 Apr  7 01:25 systemd

and now I would like to end up with the absolute path at the end of each row:

Code:
-rw-r--r--.  1 root root  232 Apr  1 00:46 /etc/X11/xorg.conf.d/00-keyboard.conf
drwxr-xr-x.  2 root root    6 Jun 10  2014 /etc/xdg/autostart
drwxr-xr-x.  2 root root   17 Apr  7 01:25 /etc/xdg/systemd


How do I join (merge) the filenames with the absolute path of their parent directory?



I know how to extract filenames using awk and get this:
Code:
00-keyboard.conf

autostart
systemd

but I don't know what to do next. Should I use some high-tech sed option, a for loop, or arrays? Help. Heeeelp!
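
One approach that avoids sed state tricks entirely: awk can remember the most recent "directory:" header and prepend it to the last field of each entry. A sketch, assuming no filenames contain spaces (symlink lines with " -> target" would also need extra handling):
Code:
ls -laR /etc | awk '
  /:$/                      { dir = substr($0, 1, length($0)-1) "/"; next }  # directory header
  /^total/ || /^$/          { next }                                         # skip totals, blanks
  $NF == "." || $NF == ".." { next }                                         # skip dot entries
  { $NF = dir $NF; print }                                                   # absolute path last
'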

Creating RAID Array With Mdadm Question

Hi all,

Question for all: what's the difference between creating an array with mdadm using partitions (/dev/sdb1, /dev/sdc1, /dev/sdd1, etc.) versus whole disks (/dev/sdb, /dev/sdc, /dev/sdd, etc.)? What are the benefits? A performance boost?

Thanks,
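
For concreteness, the two forms are just different member arguments to the same command; the usual advice (hedged, since opinions differ) is that the difference is administrative rather than performance: a partition table marks the disk as in use to other tools and lets you leave a little slack for replacement disks of slightly different sizes, at the cost of a sliver of capacity.
Code:
# Whole-disk members:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# Partition members (one full-size RAID partition per disk):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1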

Trying To Dd A Server With LVM To Another Ext HD, Then To Another Server

I have Linux Enterprise Server 11 SP3 with three 250 GB WD Blue drives in a RAID 5 configuration.

Server “A” (external drive not plugged in):
Code:
Disk /dev/sda: 499.0 GB, 499021512704 bytes
255 heads, 63 sectors/track, 60669 cylinders, total 974651392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1028095      513024   83  Linux
/dev/sda2         1028096    21993471    10482688   82  Linux swap / Solaris
/dev/sda3        21993472   974651391   476328960   8e  Linux LVM

Disk /dev/mapper/VG_SYSTEM-ROOT: 487.8 GB, 487755612160 bytes
255 heads, 63 sectors/track, 59299 cylinders, total 952647680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VG_SYSTEM-ROOT doesn't contain a valid partition table

I am trying to clone this machine to another server. Both servers are Dell PowerEdge 1900s with three 250 GB WD drives (the only difference is that the 'B' server has WD Caviar drives); they are pretty much identical machines, same processor and RAM. I have a 2 TB external hard drive that I am using to store the output of dd. I booted from the CD into a rescue system, then mounted the 2 TB external drive and did the following:
Code:
    # dd if=/dev/sda conv=sync,noerror bs=64k | gzip -c | split -a3 -b 2G --verbose - /mnt/exthd/

This gives me the following files on my external hard drive:
Code:
    
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:00 aaa
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:31 aab
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:53 aac
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:05 aad
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:10 aae
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:17 aaf
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:24 aag
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:31 aah
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:37 aai
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:43 aaj
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:50 aak
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:56 aal
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:02 aam
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:06 aan
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:12 aao
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:32 aap
-rwxr-xr-x 1 root root  324998512 Jan 10 23:35 aaq

Now, I boot to the rescue system on server 'B' with the external drive plugged in, and run fdisk:
Code:
Disk /dev/sda: 498.8 GB, 498753077248 bytes
255 heads, 63 sectors/track, 60636 cylinders, total 974127104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdb: 2000.4 GB, 2000398933504 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029167 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015a3d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907028991  1953513472    c  W95 FAT32 (LBA)

Notice sda is only 498.8 GB, where on server 'A' it was 499.0 GB. When I try to restore my files from the dd image, I get an out-of-space error. To restore, I use the following:
Code:
    # cat /mnt/exthd/aa* | gunzip -c | dd of=/dev/sda
    
dd: writing to ‘/dev/sda’:  No space left on device
974127105+0 records in
974127104+0 records out
498753077248 bytes (499 GB) copied, 37067.3 s, 13.5MB/s

My guess is that although the drives are the same nominal capacity (three 250 GB drives in a RAID 5 array), the number of cylinders is different because they are a different model, and that is where it runs out of space, although I wouldn't have thought it would.

Please correct me if I am wrong, as I am a newbie, but if I do "dd if=/dev/sda", that will take all the partitions with it (sda1, sda2, sda3), correct?
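
For reference: imaging all of /dev/sda does capture the MBR, the partition table, and every partition, but the image can only be restored onto a disk at least as large as the source, and the two fdisk outputs above differ by exactly 268435456 bytes (256 MiB). Comparing exact byte counts up front avoids discovering this at the end of a long dd run:
Code:
# Run on both servers from the rescue system before cloning:
blockdev --getsize64 /dev/sda
# Server A: 499021512704 bytes; server B: 498753077248 bytes -> image will not fit.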

Space Disk "used" In Df Is Nowhere To Be Found With Du

Hello,

I am facing an issue with a filesystem (/dev/sda3); I see space used on it (around 365 GB) when I look at the host with the "df -h" command.

Code:
[root@srv_omega /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             443G  365G   56G  87% /
tmpfs                  95G   56K   95G   1% /dev/shm
/dev/sda1             484M   39M  421M   9% /boot
/dev/sdb1             3.6T  1.3T  2.2T  36% /hadoop/disk1
/dev/sdc1             3.6T  1.3T  2.2T  37% /hadoop/disk2
/dev/sdd1             3.6T  1.3T  2.2T  36% /hadoop/disk3
/dev/sde1             3.6T  1.3T  2.2T  37% /hadoop/disk4
/dev/sdf1             3.6T  1.3T  2.2T  36% /hadoop/disk5
/dev/sdg1             3.6T  1.3T  2.2T  36% /hadoop/disk6
/dev/sdh1             3.6T  1.3T  2.2T  36% /hadoop/disk7
/dev/sdi1             3.6T  1.3T  2.2T  36% /hadoop/disk8
/dev/sdj1             3.6T  1.3T  2.2T  36% /hadoop/disk9
/dev/sdk1             3.6T  1.3T  2.2T  36% /hadoop/disk10
/dev/sdl1             3.6T  1.2T  2.3T  36% /hadoop/disk11
/dev/sdm1             3.6T  1.3T  2.2T  36% /hadoop/disk12
/dev/sdn1             3.6T  1.3T  2.2T  36% /hadoop/disk13
/dev/sdo1             3.6T  1.3T  2.2T  37% /hadoop/disk14
/dev/sdp1             3.6T  1.1T  2.4T  30% /hadoop/disk15
cm_processes           95G  8.2M   95G   1% /var/run/cloudera-scm-agent/process

I have looked to see whether any hidden files might be causing the issue; no joy.

Code:
[root@srv_omega /]# pwd
/
[root@srv_omega /]#  ls -lrtha
total 121K
drwxr-xr-x    2 root root 4.0K Jun 28  2011 srv
drwxr-xr-x    2 root root 4.0K Jun 28  2011 mnt
drwxr-xr-x    2 root root 4.0K Jun 28  2011 media
drwxr-xr-x    2 root root 4.0K Dec 20  2012 cgroup
drwx------    2 root root  16K Jun  2  2014 lost+found
drwxr-xr-x    2 root root 4.0K Jun  2  2014 selinux
-rw-r--r--    1 root root    0 Jun  3  2014 .autorelabel
drwxr-xr-x   18 root root 4.0K Jun  5  2014 hadoop
drwxr-xr-x   21 root root 4.0K Jun  5  2014 var
dr-xr-xr-x    9 root root  12K Jun 20  2014 lib64
dr-xr-xr-x    2 root root  12K Jun 21  2014 sbin
dr-xr-xr-x    2 root root 4.0K Jun 21  2014 bin
dr-xr-xr-x    5 root root 1.0K Jun 22  2014 boot
dr-xr-x---    5 root root 4.0K Jun 22  2014 root
drwxr-xr-x    6 root root 4.0K Jun 22  2014 opt
drwxr-xr-x    3 root root 4.0K Dec 10 19:11 home
dr-xr-xr-x   13 root root 4.0K Dec 12 16:18 lib
dr-xr-xr-x 1140 root root    0 Apr 30 15:11 proc
drwxr-xr-x   13 root root    0 Apr 30 15:11 sys
-rw-r--r--    1 root root    0 Apr 30 15:11 .autofsck
drwxr-xr-x    2 root root    0 Apr 30 15:11 misc
drwxr-xr-x    2 root root    0 Apr 30 15:11 net
drwxr-xr-x   15 root root 4.0K Apr 30 15:12 usr
drwxr-xr-x   19 root root 4.6K Apr 30 15:12 dev
dr-xr-xr-x   27 root root 4.0K Apr 30 15:12 ..
dr-xr-xr-x   27 root root 4.0K Apr 30 15:12 .
drwxr-xr-x  122 root root  12K May  4 03:33 etc
drwxrwxrwt   16 root root 4.0K May  7 06:14 tmp

So I tried to find where the space is used with a "du -sh" command:

Code:
[root@srv_omega /]# pwd
/
[root@srv_omega /]# du -sh *
7.8M    bin
29M     boot
4.0K    cgroup
280K    dev
26M     etc
19T     hadoop
124K    home
144M    lib
26M     lib64
16K     lost+found
4.0K    media
0       misc
4.0K    mnt
0       net
7.9G    opt
du: cannot access `proc/9170/task/27326/fdinfo/538': No such file or directory
du: cannot access `proc/45119/task/45119/fd/4': No such file or directory
du: cannot access `proc/45119/task/45119/fdinfo/4': No such file or directory
du: cannot access `proc/45119/fd/4': No such file or directory
du: cannot access `proc/45119/fdinfo/4': No such file or directory
du: cannot access `proc/45160': No such file or directory
0       proc
3.8M    root
17M     sbin
4.0K    selinux
4.0K    srv
0       sys
3.9M    tmp
2.6G    usr
16G     var

So as far as I understand, only /hadoop is a plausible suspect (the cumulative size of all the other folders on "/" is well below the 365 GB).

Code:
[root@srv_omega hadoop]# cd /
[root@srv_omega /]# cd /hadoop
[root@srv_omega hadoop]# ls -lrtha
total 72K
drwxr-xr-x  2 root root 4.0K Jun  5  2014 disk16
drwxr-xr-x 18 root root 4.0K Jun  5  2014 .
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk1
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk11
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk10
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk13
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk12
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk14
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk2
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk4
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk3
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk6
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk5
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk8
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk7
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk9
drwxr-xr-x  5 root root 4.0K Nov 19 20:02 disk15
dr-xr-xr-x 27 root root 4.0K Apr 30 15:12 ..

The folders disk1 through disk15 are mount points for separate filesystems, so the disk16 folder seems to be the only candidate, but there is nothing in it.

Code:
[root@srv_omega hadoop]# cd disk16/
[root@srv_omega disk16]# ls -lrtha
total 8.0K
drwxr-xr-x 18 root root 4.0K Jun  5  2014 ..
drwxr-xr-x  2 root root 4.0K Jun  5  2014 .
[root@srv_omega disk16]#

I just don't get it; no folder seems responsible for the 365 GB...

Any idea how I could find out where those 365 GB are?
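
Two common explanations fit this picture: files deleted while a process still holds them open (df counts the blocks, but du has no path to see), and data written into a mount-point directory before a filesystem was mounted over it. A sketch of how to check both; /mnt/rootonly is a hypothetical scratch mount point:
Code:
# 1) Open-but-deleted files (link count 0):
lsof +L1
# 2) Data hidden under a mount point: bind-mount / and measure the bare tree:
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly
du -sh /mnt/rootonly/hadoop       # size on the root fs itself, ignoring mounts
umount /mnt/rootonly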

Mdadm: Do You Still Have To Set Fdisk Types

If you want to set up a RAID1 system, will modern mdadm programs on Debian/Ubuntu automatically set the partition type to 0xfd (Linux raid autodetect) on a new/zeroed disk?

Fred.

Problems With Mounting Drive At Boot

Hi,

I have problems mounting my second drive at boot automatically.
(sorry, I am a noob)

When I use the mount command, it works fine.
Code:
mount -t ext3 /dev/sdb2 /mnt/HD/HD_b2

But when I try to add one of the following lines to /etc/fstab, it will not mount the drive at boot, nor with:
Code:
mount -a

Also, fstab is empty after reboot (is that normal??)

Code:
/dev/sdb2 /mnt/HD/HD_b2 ext3 defaults,errors=remount-ro 0 1

Code:
UUID=553afede-fa45-4cdc-9972-c0a9aa899509 /mnt/HD/HD_b2 ext3 errors=remount-ro 0 1

Code:
/dev/sdb2 /mnt/HD/HD_b2 ext3 rw 0 0

Code:
/dev/sdb2 /mnt/HD/HD_b2 ext3 defaults 0 1

Output of blkid:

Code:
/dev/sda1: UUID="e67e5c15-7b8b-9389-c311-e5d4c61326f9" TYPE="linux_raid_member"
/dev/sda2: UUID="09e0e365-0aa6-4214-b571-2bc6b027fd9f" TYPE="ext3"
/dev/sda4: UUID="64038414-136c-4939-bd14-9871a20290bd" TYPE="ext3"
/dev/sdb1: UUID="e67e5c15-7b8b-9389-c311-e5d4c61326f9" TYPE="linux_raid_member"
/dev/sdb2: UUID="553afede-fa45-4cdc-9972-c0a9aa899509" TYPE="ext3"
/dev/sdb4: UUID="bf594be6-ffb6-469d-a3a8-246be66a4d90" TYPE="ext2"

/etc/mtab:

Code:
rootfs / rootfs rw 0 0
/dev/root / ext2 rw,relatime,errors=continue 0 0
sysfs /sys sysfs rw,relatime 0 0
proc /proc proc rw,relatime 0 0
squash /usr/local/tmp ramfs rw,relatime,size=38m 0 0
/dev/loop0 /usr/local/modules squashfs ro,relatime 0 0
/dev/mtdblock5 /usr/local/config jffs2 rw,relatime 0 0
/dev/sda4 /mnt/HD_a4 ext3 rw,relatime,errors=continue,data=writeback 0 0
/dev/sdb4 /mnt/HD_b4 ext2 rw,relatime,errors=continue 0 0
none /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda2 /mnt/HD/HD_a2 ext3 rw,relatime,errors=continue,user_xattr,data=writeb$
/dev/sdb2 /mnt/HD/HD_b2 ext3 rw,relatime,errors=continue,user_xattr,data=writeb$
/dev/sda2 /mnt/HD/HD_a2/squeeze/mnt/HD/HD_a2 ext3 rw,relatime,errors=continue,u$
/dev/root /mnt/HD/HD_a2/squeeze/mnt/root ext2 rw,relatime,errors=continue 0 0
/dev/root /mnt/HD/HD_a2/squeeze/dev ext2 rw,relatime,errors=continue 0 0
sysfs /mnt/HD/HD_a2/squeeze/sys sysfs rw,relatime 0 0
proc /mnt/HD/HD_a2/squeeze/proc proc rw,relatime 0 0

When the mount command is used:
Code:
mount -t ext3 /dev/sdb2 /mnt/HD/HD_b2

the following line is added to mtab:
Code:
/dev/sdb2 /mnt/HD/HD_b2 ext3 rw 0 0

I don't know what I am doing wrong; the mount for HD_a2 works fine (the other disk, which worked by default). I have a DNS-325 NAS on which I installed Debian; I used this tutorial to install it.

The strange thing is, I had to reinstall my NAS, and before that it worked fine after I installed Debian two years ago; I just don't remember how I fixed this.
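
A hedged observation: /etc/fstab coming up empty after every reboot suggests the root filesystem is rebuilt at boot (common on NAS firmware that unpacks its rootfs into RAM; the squashfs and jffs2 entries in mtab above point the same way), in which case no fstab edit can survive unless it is made on the persistent copy the firmware restores from. An entry can at least be tested without rebooting, using the UUID from the blkid output above:
Code:
echo 'UUID=553afede-fa45-4cdc-9972-c0a9aa899509 /mnt/HD/HD_b2 ext3 defaults 0 2' >> /etc/fstab
mount -a                          # applies everything in fstab
mount | grep HD_b2                # confirm it mounted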