Duplicate VG Names And Missing Devices

This has to do with Ubuntu Server, but I'm a Linux newbie, so please talk to me like I'm an eight year old.

I have a home Linux server running Ubuntu Server 14.04.2. A few months ago I ran into some trouble with it, and long story short, only one of the original drives remains and there are two new ones. I'm having a hard time accessing and mounting the logical volumes on the original drive. The problems are that I have two volume groups with the same name, as well as missing devices.

Code:
mike@server:~$ lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                8:0    0   1.8T  0 disk
└─sda3                             8:3    0   1.8T  0 part
sdb                                8:16   0 111.8G  0 disk
├─sdb1                             8:17   0   243M  0 part /boot
├─sdb2                             8:18   0     1K  0 part
└─sdb5                             8:21   0 111.6G  0 part
  ├─server--vg-root (dm-2)       252:2    0  43.4G  0 lvm  /
  └─server--vg-swap_1 (dm-3)     252:3    0   3.2G  0 lvm  [SWAP]
sdc                                8:32   0   1.8T  0 disk
├─server--vg-PrivateMedia (dm-0) 252:0    0  1000G  0 lvm
└─server--vg-PublicMedia (dm-1)  252:1    0   750G  0 lvm

The current boot drive is a 120 GB SSD, sdb. sda is a blank 2 TB drive that I plan on allocating to logical volumes as needed. sdc has the logical volumes I would like to access.

Code:
mike@server:~$ sudo pvdisplay
  WARNING: Duplicate VG name server-vg: Existing 7MjKjV-5R7c-Qnj1-f3Wh-FR3h-lbom-H4ZEMt (created here) takes precedence over pyCHBp-1KjX-FdhK-hT1V-15uP-7GDn-A1hSRj
  Couldn't find device with uuid QJxFW4-777H-shDs-C2QH-1gWo-Pb0d-56BLPs.
  Couldn't find device with uuid LxIoum-Zjvg-WZ4i-vO8c-ib4O-ldwI-G82TsZ.
  WARNING: Duplicate VG name server-vg: Existing 7MjKjV-5R7c-Qnj1-f3Wh-FR3h-lbom-H4ZEMt (created here) takes precedence over pyCHBp-1KjX-FdhK-hT1V-15uP-7GDn-A1hSRj
  --- Physical volume ---
  PV Name               unknown device
  VG Name               server-vg
  PV Size               465.02 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              119045
  Free PE               119045
  Allocated PE          0
  PV UUID               QJxFW4-777H-shDs-C2QH-1gWo-Pb0d-56BLPs

  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               server-vg
  PV Size               1.82 TiB / not usable 1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               28932
  Allocated PE          448000
  PV UUID               t1291H-bZqa-8ksX-qYDg-Jix6-CmC3-tVR0qX

  --- Physical volume ---
  PV Name               unknown device
  VG Name               server-vg
  PV Size               465.02 GiB / not usable 2.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              119045
  Free PE               119045
  Allocated PE          0
  PV UUID               QJxFW4-777H-shDs-C2QH-1gWo-Pb0d-56BLPs

  --- Physical volume ---
  PV Name               /dev/sdb5
  VG Name               server-vg
  PV Size               111.55 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              28556
  Free PE               16636
  Allocated PE          11920
  PV UUID               ad5UcZ-E7s6-SPg5-RUqL-qitV-qJ5t-XHFr0Q

I've tried vgrename and vgreduce to no avail. I'm not really sure what else I can do. Any help would be greatly appreciated.
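
For anyone who lands here with the same symptoms: vgrename refuses an ambiguous name, but it does accept a VG UUID in place of the old name, and the two missing 465 GiB PVs hold no extents (Free PE equals Total PE), so they can be dropped safely afterwards. A hedged sketch, assuming pyCHBp-… is the old VG on sdc (the one not marked "created here" in the warning), that its LVs hold mountable filesystems, and that media-vg is just an example new name; verify the UUIDs first:

Code:
# Confirm which VG UUID owns which PVs
sudo pvs -o pv_name,vg_name,vg_uuid

# Rename the old VG by UUID so the names no longer clash
sudo vgrename pyCHBp-1KjX-FdhK-hT1V-15uP-7GDn-A1hSRj media-vg

# Drop the PVs whose drives are gone (they hold no allocated extents)
sudo vgreduce --removemissing media-vg

# Activate the renamed VG and mount its LVs
sudo vgchange -ay media-vg
sudo mount /dev/media-vg/PublicMedia /mnt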


Similar Content



Trying To Dd A Server With LVM To Another Ext HD, Then To Another Server

I have SUSE Linux Enterprise Server 11 SP3 with three 250 GB WD Blue drives in a RAID 5 configuration.

Server "A" (external drive not plugged in):
Code:
Disk /dev/sda: 499.0 GB, 499021512704 bytes
255 heads, 63 sectors/track, 60669 cylinders, total 974651392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1028095      513024   83  Linux
/dev/sda2         1028096    21993471    10482688   82  Linux swap / Solaris
/dev/sda3        21993472   974651391   476328960   8e  Linux LVM

Disk /dev/mapper/VG_SYSTEM-ROOT: 487.8 GB, 487755612160 bytes
255 heads, 63 sectors/track, 59299 cylinders, total 952647680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VG_SYSTEM-ROOT doesn't contain a valid partition table

I am trying to clone this machine to another server. Both servers are Dell PowerEdge 1900s with three 250 GB WD drives (the only difference is that the 'B' server has WD Caviar drives); they are pretty much identical machines, with the same processor and RAM. I have a 2 TB external hard drive that I am using to store the output of dd. I booted from the CD into a rescue system, mounted the 2 TB external drive, and did the following:
Code:
    # dd if=/dev/sda conv=sync,noerror bs=64k | gzip -c | split -a3 -b 2G --verbose - /mnt/exthd/

This gives me the following files on my external hard drive:
Code:
    
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:00 aaa
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:31 aab
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:53 aac
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:05 aad
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:10 aae
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:17 aaf
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:24 aag
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:31 aah
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:37 aai
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:43 aaj
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:50 aak
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:56 aal
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:02 aam
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:06 aan
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:12 aao
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:32 aap
-rwxr-xr-x 1 root root  324998512 Jan 10 23:35 aaq

Now, I boot to the rescue system on server 'B' with the external drive plugged in, and run fdisk:
Code:
Disk /dev/sda: 498.8 GB, 498753077248 bytes
255 heads, 63 sectors/track, 60636 cylinders, total 974127104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdb: 2000.4 GB, 2000398933504 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029167 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015a3d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907028991  1953513472    c  W95 FAT32 (LBA)

Notice sda is only 498.8 GB, where on server 'A' it was 499.0 GB. When I try to restore my files from the dd image, I get an out-of-space error. To restore, I use the following:
Code:
    # cat /mnt/exthd/aa* | gunzip -c | dd of=/dev/sda
    
dd: writing to ‘/dev/sda’:  No space left on device
974127105+0 records in
974127104+0 records out
498753077248 bytes (499 GB) copied, 37067.3 s, 13.5MB/s

My guess is that although the arrays have nominally the same capacity (three 250 GB drives in a RAID 5 array), the number of cylinders differs because the drives are a different model, and that is why it runs out of space, although I wouldn't have expected it to.

Please correct me if I am wrong, as I am a newbie, but if I do "dd if=/dev/sda", that will take all the partitions with it, such as sda1, sda2, and sda3, correct?
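
For what it's worth, the size mismatch can be confirmed before imaging by querying the two virtual disks directly; a short sketch, assuming blockdev (from util-linux) is available in the rescue environment:

Code:
# Run on each server's rescue system: exact disk size in bytes,
# then in 512-byte sectors
blockdev --getsize64 /dev/sda
blockdev --getsz /dev/sda

The fdisk listings above already show the difference (974651392 sectors on 'A' versus 974127104 on 'B'), so a raw whole-disk image of 'A' can never fit on 'B'; the sda3 LVM volume would have to be shrunk below the smaller size before imaging, or each partition copied separately. And yes, dd if=/dev/sda reads the whole disk, partition table and all partitions included.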

Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I simply reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically, everything was mounted properly, and there was no data loss on the RAID/LVM partitions. So I confirmed that everything works out fine if you carefully select the mount points during the OS reinstallation and watch which partitions get formatted.

Now I reinstalled the OS once again, but this time I didn't select mount points for the RAID/LVM partitions during the installation, intending to set them up manually afterwards. Again I selected only the /, /others, and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array had come up as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it; it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, it will not let me stop the /dev/md127 RAID array. Since I am new to RAID/LVM, I would appreciate your help with making LVM inactive without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup.
Thanks in advance.
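
For reference, the usual sequence here is to deactivate the volume group so mdadm can get exclusive access, stop the array, and reassemble it under the wanted name. A hedged sketch, assuming the VG and device names from the output above (back up first; --update=name needs a v1.x superblock, which this array has):

Code:
# Deactivate the VG that sits on top of the array
vgchange -an data

# Now the array can be stopped
mdadm --stop /dev/md127

# Reassemble as md0, rewriting the name stored in the superblock
mdadm --assemble /dev/md0 --update=name --name=0 /dev/sdb /dev/sdc

# Record the array so it assembles as md0 on boot, then reactivate LVM
mdadm --detail --scan >> /etc/mdadm.conf
vgchange -ay data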

LVM Mount Via USB

My father passed away about 6 months ago and I'm just getting around to going through all 18 of his hard drives... I have been using Ubuntu 14.04 for over a year now, and while I'm not a pro, I can handle most things.

One drive has LVM on it. I have attached it via a USB 2.0 to SATA cable. I followed the instructions here: http://linuxers.org/howto/how-mount-...rtitions-linux

Code:
michael [ ~ ]$ sudo fdisk -l
[sudo] password for michael: 

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x38431b10

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048  1464322047   732160000    7  HPFS/NTFS/exFAT
/dev/sda2      1464322048  2930272255   732975104    7  HPFS/NTFS/exFAT

Disk /dev/sdb: 256.1 GB, 256060514304 bytes
255 heads, 63 sectors/track, 31130 cylinders, total 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000656a3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    33560575    16779264   82  Linux swap / Solaris
/dev/sdb2   *    33560576    75505663    20972544   83  Linux
/dev/sdb3        75505664   500117503   212305920   83  Linux

Disk /dev/sdc: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000da346

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048      499711      248832   83  Linux
/dev/sdc2          501758  1465147391   732322817    5  Extended
/dev/sdc5          501760  1465147391   732322816   8e  Linux LVM
michael [ ~ ]$ sudo pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sdc5  ubuntu1-vg lvm2 a--  698.39g    0 
michael [ ~ ]$ sudo lvdisplay /dev/ubuntu1-vg
  --- Logical volume ---
  LV Path                /dev/ubuntu1-vg/root
  LV Name                root
  VG Name                ubuntu1-vg
  LV UUID                StErnb-Mtop-NaEy-faw4-wNk0-bY5I-6ARBmI
  LV Write Access        read/write
  LV Creation host, time ubuntu1, 2014-08-12 18:45:21 +0900
  LV Status              NOT available
  LV Size                694.39 GiB
  Current LE             177765
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/ubuntu1-vg/swap_1
  LV Name                swap_1
  VG Name                ubuntu1-vg
  LV UUID                62DxMC-qLAo-6cU9-kA5H-kNoL-7B4t-1ZCQcK
  LV Write Access        read/write
  LV Creation host, time ubuntu1, 2014-08-12 18:45:21 +0900
  LV Status              NOT available
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
michael [ ~ ]$ mount /dev/ubuntu1-vg/root /mnt
mount: only root can do that
michael [ ~ ]$ sudo mount /dev/ubuntu1-vg/root /mnt
mount: special device /dev/ubuntu1-vg/root does not exist

So where did I go wrong?
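
The clue is in the lvdisplay output: both LVs show "LV Status NOT available", meaning the volume group has not been activated, so the device nodes under /dev/ubuntu1-vg were never created. Activating the VG first should make the mount work; a minimal sketch using the names from the output above:

Code:
# Create the device nodes for every LV in the volume group
sudo vgchange -ay ubuntu1-vg

# The LVs should now show as ACTIVE
sudo lvscan

# Then the mount should succeed
sudo mount /dev/ubuntu1-vg/root /mnt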

Difference Between Child THREAD And Child PROCESS

Hello,

I am troubleshooting something and I got this problem.

If I do "pstree -p"

It shows,

Code:
        ├─soffice.bin(7734)─┬─{soffice.bin}(7735)
        │                   ├─{soffice.bin}(7736)
        │                   ├─{soffice.bin}(7737)
        │                   └─{soffice.bin}(7743)

However, it does NOT show up in "ps -elf"

Code:
ps -elf | grep soffi
0 S whho      7734     1  0  80   0 - 36435 -      11:14 pts/2    00:00:03 /usr/lib/openoffice/program/soffice.bin -splash-pipe=5
0 S whho      7833  7759  0  80   0 -   751 -      11:21 pts/3    00:00:00 grep soffi

I was wondering whether 7735, 7736, 7737, and 7743 were really processes. Then I checked /proc: I could cd into /proc/7735, /proc/7736, etc., but they did not show up when I ran ls on /proc.

I looked at the man page of "pstree", it says,

Code:
Child threads  of a process are found under the parent process and are shown with the process name in curly braces, e.g.

           icecast2---13*[{icecast2}]

So, what does all this mean? Does it mean that 7735, 7736, 7737, and 7743 are just threads and not processes? If so, why could I cd to /proc/<id> but not see them in "ps -elf"?

Would somebody please help me?

Thanks!

whho
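
Those IDs are threads rather than full processes: the kernel deliberately hides per-thread directories from a directory listing of /proc, which is why you can cd into /proc/7735 even though ls /proc never shows it. Threads live under the parent's /proc/<pid>/task directory, and ps can display them directly; a short sketch using the PID from the output above:

Code:
# Show threads: LWP is the thread ID, NLWP the thread count
ps -eLf | grep soffice.bin

# Or list the threads of PID 7734 straight from /proc
ls /proc/7734/task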

Resize LVM Partitions

UPDATED:
I installed a Debian 7 server with LVM and the following partitions:
The end product should become a mail server (Citadel) and in time also a Web server.
Code:
 df -hT 
Filesystem                Type      Size  Used Avail Use% Mounted on
rootfs                    rootfs    322M  141M  165M  46% /
udev                      devtmpfs   10M     0   10M   0% /dev
tmpfs                     tmpfs     100M  260K  100M   1% /run
/dev/mapper/deb--srv-root ext4      322M  141M  165M  46% /
tmpfs                     tmpfs     5,0M     0  5,0M   0% /run/lock
tmpfs                     tmpfs     200M     0  200M   0% /run/shm
/dev/sda1                 ext2      228M   18M  199M   9% /boot
/dev/mapper/deb--srv-home ext4      233G  188M  221G   1% /home
/dev/mapper/deb--srv-tmp  ext4      368M   11M  339M   3% /tmp
/dev/mapper/deb--srv-usr  ext4      8,3G  481M  7,4G   6% /usr
/dev/mapper/deb--srv-var  ext4      2,8G  236M  2,4G   9% /var

 fdisk -l 
Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064033

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   524285951   261892097    5  Extended
/dev/sda5          501760   524285951   261892096   8e  Linux LVM

Disk /dev/mapper/deb--srv-root: 348 MB, 348127232 bytes
255 heads, 63 sectors/track, 42 cylinders, total 679936 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-root doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-swap_1: 2143 MB, 2143289344 bytes
255 heads, 63 sectors/track, 260 cylinders, total 4186112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-swap_1 doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-usr: 8996 MB, 8996782080 bytes
255 heads, 63 sectors/track, 1093 cylinders, total 17571840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-usr doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-var: 2998 MB, 2998927360 bytes
255 heads, 63 sectors/track, 364 cylinders, total 5857280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-var doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-tmp: 398 MB, 398458880 bytes
255 heads, 63 sectors/track, 48 cylinders, total 778240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-tmp doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-home: 253.3 GB, 253289824256 bytes
255 heads, 63 sectors/track, 30794 cylinders, total 494706688 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-home doesn't contain a valid partition table

 pvs 
PV         VG      Fmt  Attr PSize   PFree
/dev/sda5  deb-srv lvm2 a--  249,76g    0

 lvs 
LV     VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
home   deb-srv -wi-ao-- 235,89g
root   deb-srv -wi-ao-- 332,00m
swap_1 deb-srv -wi-ao--   2,00g
tmp    deb-srv -wi-ao-- 380,00m
usr    deb-srv -wi-ao--   8,38g
var    deb-srv -wi-ao--   2,79g

Now I want to _shrink_ the home logical volume so I can expand my var logical volume. I've looked for guides but haven't found a useful site so far.

I tried to find a way to do this when I installed the system, but the installer didn't seem to offer it, even though I looked around for a while.

How do I do this shrinking of home and extending of var? Please be fairly specific, as I'm not a pro yet.
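
Since home and var are ext4 logical volumes in the same volume group, the usual sequence is: unmount /home, shrink its filesystem and LV together, then grow var into the freed space. A hedged sketch using the LV names from the lvs output above; the 200G target is only an example, and shrinking is the risky half, so back up /home first (run from a rescue system if /home refuses to unmount):

Code:
umount /home

# Shrink filesystem + LV in one step (-r drives resize2fs for you)
e2fsck -f /dev/deb-srv/home
lvreduce -r -L 200G /dev/deb-srv/home

# Hand all the freed space to var and grow its filesystem online
lvextend -r -l +100%FREE /dev/deb-srv/var

mount /home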

Virtual CentOS 6.4 Server Expand Disk For Splunk Instance

I'm sure there are other posts about this, but I'm terrified to mess with disks in Linux, as I'm very green. I looked at one other post which suggested running the following, so I've done the same. The drive as originally provisioned had 400 GB, but it currently has 500 GB allocated in VMware. Thanks so much in advance!

Code:
fdisk -l
pvs
vgs
lvs
df -h


Code:
[root@uspk10splunk ~]# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c255c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502        8192     7875584   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/sdb: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x69437664

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       52216   419424988+  8e  Linux LVM

Disk /dev/mapper/vg_uspk10vsp03-lv_root: 3833 MB, 3833593856 bytes
255 heads, 63 sectors/track, 466 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_uspk10vsp03-lv_swap: 4227 MB, 4227858432 bytes
255 heads, 63 sectors/track, 514 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_opt-lv_opt: 429.5 GB, 429488340992 bytes
255 heads, 63 sectors/track, 52215 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Code:
[root@uspk10splunk ~]# pvs
  PV         VG             Fmt  Attr PSize   PFree
  /dev/sda2  vg_uspk10vsp03 lvm2 a--    7.51g    0
  /dev/sdb1  vg_opt         lvm2 a--  399.99g    0

Code:
[root@uspk10splunk ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  vg_opt           1   1   0 wz--n- 399.99g    0
  vg_uspk10vsp03   1   2   0 wz--n-   7.51g    0

Code:
[root@uspk10splunk ~]# lvs
  LV      VG             Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync                     Convert
  lv_opt  vg_opt         -wi-ao--- 399.99g                                                          
  lv_root vg_uspk10vsp03 -wi-ao---   3.57g                                                          
  lv_swap vg_uspk10vsp03 -wi-ao---   3.94g

Code:
[root@uspk10splunk ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_uspk10vsp03-lv_root
                      3.6G  1.5G  1.9G  45% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/mapper/vg_opt-lv_opt
                      394G  374G  422M 100% /opt
/dev/sda1             485M   32M  428M   7% /boot
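
Since sdb grew from 400 GB to 500 GB in VMware but /dev/sdb1 still ends at the old boundary, the extra ~100 GB has to be partitioned and handed to LVM before /opt can grow. A hedged sketch of the simplest route (a new sdb2 partition), assuming the names from the output above and an ext4 filesystem on lv_opt; snapshot the VM first:

Code:
# Create a second partition of type 8e (Linux LVM) in the free space
fdisk /dev/sdb        # n -> new partition, t -> type 8e, w -> write
partprobe /dev/sdb    # may need a reboot if the kernel refuses to re-read

# Turn it into a PV and add it to the existing VG
pvcreate /dev/sdb2
vgextend vg_opt /dev/sdb2

# Grow the LV and the mounted filesystem
lvextend -l +100%FREE /dev/vg_opt/lv_opt
resize2fs /dev/vg_opt/lv_opt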

Trouble With Fdisk

I got this configuration.

fdisk -l

Disk /dev/sda: 493.9 GB, 493921239040 bytes
255 heads, 63 sectors/track, 60049 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002ac38

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2               64       32636   261630976   8e  Linux LVM

Disk /dev/mapper/vg_gazduire-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_gazduire-lv_swap: 5100 MB, 5100273664 bytes
255 heads, 63 sectors/track, 620 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_gazduire-lv_home: 209.1 GB, 209119608832 bytes
255 heads, 63 sectors/track, 25424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_gazduire-lv_root   50G   21G   27G  44% /
tmpfs                            2.4G     0  2.4G   0% /dev/shm
/dev/sda1                        485M  161M  299M  35% /boot
/dev/mapper/vg_gazduire-lv_home  192G   17G  166G   9% /home
/usr/tmpDSK                      2.0G  981M  955M  51% /tmp


I want to use the approximately 200 GB that remains unallocated on the disk. My plan is to create a /dev/sda3 and put it into vg_gazduire!

  VG Name               vg_gazduire
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               249.51 GiB
  PE Size               4.00 MiB
  Total PE              63874
  Alloc PE / Size       63874 / 249.51 GiB
  Free  PE / Size       0 / 0



When I type fdisk /dev/sda3, the system gives me "Unable to open /dev/sda3"!

Could anyone tell me the solution?
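
fdisk fails there because it edits whole disks, not partitions, and sda3 does not exist yet; it has to be created by running fdisk against /dev/sda itself. A hedged sketch of the whole flow, using the names from the output above (the disk is in use, so the kernel may insist on a reboot before it sees the new partition):

Code:
fdisk /dev/sda        # n -> create sda3 in the free space, t -> 8e, w
partprobe /dev/sda    # or reboot if the re-read fails

pvcreate /dev/sda3
vgextend vg_gazduire /dev/sda3

# Then grow whichever LV needs the space plus its filesystem, e.g.:
lvextend -r -L +100G /dev/vg_gazduire/lv_home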

Doesn't Contain A Valid Partition Table? Working For Months...

Everything is working great on this server; however, I am seeing "Disk doesn't contain a valid partition table" messages. I have no issues reading and writing. Is this something that can be ignored?


Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065    62910539    31447237+  83  Linux

Disk /dev/xvdf: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdf doesn't contain a valid partition table

Disk /dev/xvdh: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdh doesn't contain a valid partition table

Disk /dev/xvdg: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdg doesn't contain a valid partition table

Disk /dev/xvdi: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdi doesn't contain a valid partition table

Disk /dev/mapper/vgebs-lvebs: 4398.0 GB, 4398029733888 bytes
255 heads, 63 sectors/track, 534696 cylinders, total 8589901824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 2097152 bytes / 4194304 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vgebs-lvebs doesn't contain a valid partition table

Disk /dev/xvdj: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders, total 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdj doesn't contain a valid partition table
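
In this layout the message is expected: xvdf through xvdj carry no partition table because they are used as whole-disk devices (the vgebs PVs among them), so fdisk finds nothing to parse and the warning is harmless. A quick sketch to confirm what each disk actually holds:

Code:
# Which whole disks are LVM physical volumes (no partition table needed)
pvs

# What signature, if any, each block device carries
blkid /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj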

Issues With RAID- Creating As /dev/md127 Instead Of What's In The Config

Hi,
Recently, I decided to change my partition scheme for my home server. I had a RAID0 that previously spanned three disks and now I only want it to span two. Getting rid of the old one was easy. But getting the new one to work has been a real pain.

It's running Debian Jessie.

For starters, here's my /etc/mdadm/mdadm.conf:
Code:
root@maples-server:~# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE /dev/sdb1 /dev/sdc1

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

ARRAY /dev/md0 metadata=1.2 UUID=032e4ab2:53ac5db8:98806abd:420716a5 devices=/dev/sdb1,/dev/sdc1

As you can see, I have specified that the RAID should be set up as /dev/md0. But every time I reboot, /proc/mdstat shows:
Code:
root@maples-server:~# cat /proc/mdstat 
Personalities : [raid0] 
md127 : active raid0 sdc1[1] sdb1[0]
      488016896 blocks super 1.2 512k chunks
      
unused devices: <none>

I can confirm that it's actually md127 by looking at /dev:
Code:
root@maples-server:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 127 May  2 20:17 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 May  2 20:17 maples-server:0 -> ../md127

And here's a bit more info:
Code:
root@maples-server:~# mdadm --detail --scan
ARRAY /dev/md/maples-server:0 metadata=1.2 name=maples-server:0 UUID=032e4ab2:53ac5db8:98806abd:420716a5

I've tried adding all sorts of options to /etc/mdadm/mdadm.conf, ranging from just the output of the above command (only changing "/dev/md/maples-server:0" to "/dev/md0") to what you see at the top. Nothing seems to be making a difference.

Does anyone have any ideas?
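
On Debian a common culprit is the initramfs: the array is assembled early in boot from the copy of mdadm.conf baked into the initramfs, so edits to /etc/mdadm/mdadm.conf have no effect until it is regenerated. A hedged sketch (the --update=name step only matters if the superblock name keeps winning):

Code:
# After settling on the ARRAY line in /etc/mdadm/mdadm.conf:
update-initramfs -u

# If it still comes up as md127, rewrite the name in the superblock
# while reassembling, then regenerate the initramfs again:
mdadm --stop /dev/md127
mdadm --assemble /dev/md0 --update=name --name=0 /dev/sdb1 /dev/sdc1
update-initramfs -u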

UEFI Vs Debian - Cannot Boot Into Grub After Installation

The installation of the latest stable release on my Lenovo G505s went just fine in UEFI mode, but once the installation finished I never reach GRUB during the boot process. I put rEFInd media onto a USB stick and booted my computer with it; after that I tried to follow this guide, https://wiki.debian.org/GrubEFIReinstall, but I'm having some trouble.

When I start the computer with the rEFInd media it gives me three options, the first of which is:

"Boot EFI\debian\grubx64.efi from 510 MiB FAT volume". If I choose this option it takes me to GRUB via UEFI(?) and then I can get into my computer as usual.

How can I make my computer understand that it should recognize and boot from EFI\debian\grubx64.efi on that 510 MiB FAT volume by itself?

I found https://wiki.debian.org/EFIStub, and since my /boot/efi/EFI/debian contains grubx64.efi, initrd.img-3.16.0-4-amd64, and vmlinuz-3.16.0-4-amd64, I tried the following:
Code:
# efibootmgr -c -g -L "Debian (EFI stub)" -l '\EFI\debian\grubx64.efi' -u 'root=UUID=$UUID ro quiet rootfstype=ext4 add_efi_memmap initrd=\\EFI\\debian\\initrd.img-3.16.0-4-amd-64'
efibootmgr: Could not set variable Boot0005: No such file or directory
efibootmgr: Could not prepare boot variable: No such file or directory

Code:
    [ -d /sys/firmware/efi ] && echo "EFI boot on HDD" || echo "Legacy boot on HDD"
    EFI boot on HDD


Code:
     apt-get install --reinstall grub-efi

went just fine, and I have /dev/sda1 mounted on /boot/efi, but when I try grub-install I get:

Code:
grub-install /dev/sda
Installing for x86_64-efi platform.
efibootmgr: Could not set variable Boot0005: No such file or directory
efibootmgr: Could not prepare boot variable: No such file or directory
Installation finished. No error reported.

What is going wrong here?

Code:
    update-grub
    Generating grub configuration file ...
    Found background image: .background_cache.png
    Found linux image: /boot/vmlinuz-3.16.0-4-amd64
    Found initrd image: /boot/initrd.img-3.16.0-4-amd64
    Adding boot menu entry for EFI firmware configuration
    done

Code:
file /boot/efi/EFI/debian/grubx64.efi
/boot/efi/EFI/debian/grubx64.efi: PE32+ executable (EFI application) x86-64 (stripped to external PDB), for MS Windows

but efibootmgr --verbose | grep debian gives me nothing. If I run efibootmgr without grep I get:

Code:
    efibootmgr --verbose
    BootCurrent: 0002
    Timeout: 0 seconds
    BootOrder: 2001,0003,0004,0001,0000,2002,2003
    Boot0000* USB HDD     : SanDisk U3 Cruzer Micro    BIOS(2,500,00)..................3.......1...5........................................
    Boot0001* USB ODD     : SanDisk U3 Cruzer Micro    BIOS(3,500,00)..................;.......9...=........................................
    Boot0002* EFI USB Device   ACPI(a0341d0,0)PCI(10,0)USB(2,0)0311050000HD(1,800,2d5f,55fc859c-5227-4cd5-bd64-a4fd678ba8b6)RC
    Boot0003* SATA HDD    : ST1000LM014-1EJ164                         BIOS(2,0,00).......................................................................
    Boot0004* SATA ODD    : MATSHITADVD-RAM UJ8C2                      BIOS(3,0,00).......................................................................
    Boot2001* EFI USB Device   RC
    Boot2002* EFI DVD/CDROM   RC
    Boot2003* EFI Network   RC

Here is some additional information which might be relevant.

Code:
    # df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    /dev/dm-1                  9.1G  2.5G  6.2G  29% /
    udev                        10M     0   10M   0% /dev
    tmpfs                      1.4G  9.1M  1.4G   1% /run
    tmpfs                      3.5G   68K  3.5G   1% /dev/shm
    tmpfs                      5.0M  4.0K  5.0M   1% /run/lock
    tmpfs                      3.5G     0  3.5G   0% /sys/fs/cgroup
    /dev/sda2                  237M   35M  190M  16% /boot
    /dev/sda1                  511M  132K  511M   1% /boot/efi
    /dev/mapper/ludo--vg-home  893G  103M  848G   1% /home
    tmpfs                      713M  4.0K  713M   1% /run/user/117
    tmpfs                      713M  8.0K  713M   1% /run/user/1000

Code:
fdisk -l

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0080BBEA-1B46-4EAE-9471-CC523BFCAD44

Device       Start        End    Sectors   Size Type
/dev/sda1     2048    1050623    1048576   512M EFI System
/dev/sda2  1050624    1550335     499712   244M Linux filesystem
/dev/sda3  1550336 1953523711 1951973376 930.8G Linux filesystem

GPT PMBR size mismatch (13695 != 2009726) will be corrected by w(rite).

Disk /dev/sdb: 981.3 MiB, 1028980224 bytes, 2009727 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E4A93BB9-2F9C-4487-B090-91B620879E4C

Device     Start   End Sectors  Size Type
/dev/sdb1   2048 13662   11615  5.7M EFI System

Disk /dev/mapper/sda3_crypt: 930.8 GiB, 999408271360 bytes, 1951969280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/ludo--vg-root: 9.3 GiB, 9999220736 bytes, 19529728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/ludo--vg-swap_1: 14.4 GiB, 15439233024 bytes, 30154752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/ludo--vg-home: 907.1 GiB, 973967720448 bytes, 1902280704 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
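
A pattern worth noting in the efibootmgr output: there is no Debian boot entry at all, and every attempt to write one fails with "Could not set variable", which usually means the EFI variable store is not writable from the running session (or the firmware is rejecting new entries, which some consumer laptops do). A hedged sketch of the usual checks before retrying grub-install, plus the removable-media fallback path most firmware tries by default:

Code:
# Can the kernel see EFI variables at all?
ls /sys/firmware/efi/efivars | head

# On some systems efivarfs needs mounting by hand
mount -t efivarfs efivarfs /sys/firmware/efi/efivars

# Then retry
grub-install /dev/sda

# If the firmware still refuses new entries, the fallback loader path works:
mkdir -p /boot/efi/EFI/boot
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi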