New To LVM, How To Bridge Between /dev/sd* And /dev/mapper/VolGroup/LogVol?

So LVM has taken me by surprise, especially working with all of these virtual servers.

I have Linux servers in VMware, and I know how to grow the virtual hard disks there and how they map back to /dev/sd*. What I'm not sure about is how to tell which Volume Group and Logical Volume they are tied to.

If I issue df -ha, I can see that the various filesystems are tied to /dev/mapper/VolGroup/LogicalVol,

and if I issue fdisk -l, I can see the sizes and what is tied to /dev/sd*. But how do I tie the two together, so I know which volume has what space and how to grow or shrink that space?

I found an older thread here:

http://www.linuxquestions.org/questi...ds-for-699073/

However I still don't understand how to bridge between the two.
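From what I can gather, the standard LVM reporting commands are what bridge the two views. A minimal sketch (VolGroup00/LogVol00 is a placeholder; substitute a name from your own df output):

Code:
pvs                     # each /dev/sd* partition that is an LVM physical volume, and its VG
vgs                     # each volume group, its total size, and its free space
lvs -o +devices         # each logical volume plus the /dev/sd* device(s) backing it
lvdisplay -m /dev/VolGroup00/LogVol00   # segment-by-segment mapping of one LV to its PVs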

thanks


Similar Content

LVM / Device Mapper

Hello,
We have been having a nightmare of a time with a crash on one of our file shares.

We were finally able to recover the volume group and get its path to return. However, it appears its device-mapper mapping is off.

Code:
dmsetup status
vg_flood-sff_sha
vg_os-lv_tmp: 0 10485760 linear
vg_os-lv_usr: 0 20971520 linear
vg_os-lv_var: 0 10485760 linear
vg_os-lv_opt: 0 41943040 linear
vg_os-lv_swap: 0 16777216 linear
vg_os-lv_root: 0 10485760 linear


My guess is that dm-6 is where /dev/mapper/vg_flood-sff_share needs to be mapped, as it is the only one without a mapping. As you can see, dmsetup also shows this logical volume differently from the rest.

I am trying to figure out how to fix the mapping for this so that I can activate the logical volume and get our data back.
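If the on-disk LVM metadata is still intact, the usual route is to let LVM rebuild the device-mapper table rather than editing it by hand. A minimal sketch, using the VG/LV names as they appear above (adjust to the real names):

Code:
vgscan                            # rescan all devices for volume groups
lvscan                            # list LVs and whether each is ACTIVE or inactive
lvchange -ay vg_flood/sff_share   # activate the LV, re-creating its dm table
dmsetup table vg_flood-sff_share  # verify the mapping now exists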

Are Logical Volumes In LVM Separate From Each Other, Like A Partition?

Even though they are in the same Volume Group?

I'm still having a hard time understanding this.

Let's say, for example, that I have the following:

Code:
# df -ha

VolGroup00/LogVol2     /tmp
VolGroup00/LogVol5     /opt
VolGroup00/LogVol3     /var

I have three different Logical Volumes where /tmp, /opt, and /var live. Are these separated from each other like partitions placed on a physical disk, keeping in mind that they are all in the same Volume Group?
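For what it's worth, each LV is an independent block device with its own filesystem, much like a partition; the Volume Group is just the shared pool of extents they are carved from. A quick way to see this, using the names from the example above:

Code:
ls -l /dev/VolGroup00/          # each LV gets its own block device node
blkid /dev/VolGroup00/LogVol2   # each carries its own filesystem and UUID
lvs VolGroup00                  # all drawing extents from the same VG pool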

Vgchange -an Command Fails To Deactivate VG || Umount Of Logical Volumes Successful

Hi All,

I unmounted all logical volumes of tomcatvg successfully, but when I tried to deactivate the volume group using vgchange, it still shows the logical volumes as active. I need help with how to force-deactivate the volume group.


Actions performed
================
vgchange -an tomcatvg


[root@porsche ~]# vgs
VG #PV #LV #SN Attr VSize VFree
tomcatvg 1 10 0 wz--n- 95.38g 8.88g


[root@porsche ~]# cat /etc/fstab | grep -i fs_opt_tomcat
/dev/tomcatvg/fs_opt_tomcat /opt/tomcat ext4 defaults 1 2
[root@porsche ~]#

fuser -km /opt/tomcat
umount /opt/tomcat

The logical volume still shows as active:

[root@porsche ~]# lvs | grep -i appvg
fs_opt_tomcat tomcatvg -wi-ao---- 5.00g
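For reference, the "o" in the lvs attribute string (-wi-ao----) means the LV is still open, so something is still holding the device even after the umount. A minimal sketch for tracking down the holder (standard tools; exact output will vary):

Code:
dmsetup info -c | grep tomcat           # the Open column shows held devices
lsof /dev/tomcatvg/fs_opt_tomcat        # processes with the device open
fuser -vm /dev/tomcatvg/fs_opt_tomcat   # same check via fuser
lvchange -an tomcatvg/fs_opt_tomcat     # then deactivate the single LV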

Regards
Arun

How To Convert Logical Partition To Primary?

Hello,

I am using Slackware 14.1, and my partition table is given below:
Code:
sda1                    Primary   ext4                             60003.42
sda2                    Primary   swap                              8998.46
sda3                    Primary   ext4                            119998.61
                        Logical   Free Space                           0.10*
sda5        NC          Logical   Linux                           120739.34*
sda6        NC          Logical   ntfs                            190356.39*
                        Logical   Free Space                          11.56

and I want to change
Code:
sda5        NC          Logical   Linux                           120739.34*

from a logical to a primary partition. I don't know how to do it; I searched the internet, but I couldn't understand what I found. Could anybody please guide me on how to do it? Thanks in advance.
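There is no in-place "convert" in fdisk; the usual technique is to note the partition's exact start sector and size, delete it, and recreate it as a primary partition over the identical sectors, which leaves the data itself untouched. A sketch under those assumptions (MBR allows only four primary slots, so one must be free, and a table backup is essential first):

Code:
sfdisk -d /dev/sda > sda-table.bak   # save the table; restorable with sfdisk
fdisk -lu /dev/sda                   # note sda5's exact start/end sectors
# in fdisk: d(elete) 5, n(ew) primary at the SAME start sector and size,
# t to set the partition type back, then w(rite)
fdisk /dev/sda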

Allocating Space To Mounts

My real objective is to wipe the existing nodes of a Hadoop cluster, reclaim the space, and do a fresh installation of another distro.

Below is the disk-space layout of one of the masters (the namenode):

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  739M  1.2G  39% /
tmpfs                  24G     0   24G   0% /dev/shm
/dev/mapper/mpathap1  194M   65M  120M  36% /boot
/dev/mapper/vg00-home
                      248M   12M  224M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   79M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.6G  305M  85% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  820M  1.1G  43% /var
/dev/mapper/vg00-FSImage
                      917G  3.3G  867G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  200M  870G   1% /opt/hadoop-Zookeeper

And here is one of the slaves (datanodes); the others also have 4-8 local disks and some mounted NFS drives:

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  411M  1.5G  22% /
tmpfs                  36G     0   36G   0% /dev/shm
/dev/mapper/mpathap1  194M   67M  118M  37% /boot
/dev/mapper/vg00-home
                      248M   11M  226M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   82M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.2G  690M  64% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  1.5G  392M  80% /var
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

During the new installation, I ran into space issues on all the nodes/hosts:

Code:
Not enough disk space on host (l1032lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1033lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1034lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1035lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.

The installation requires a lot of space under the /, /usr, /var, and /home mounts, which is scarce on both the master and the slaves, but there is plenty of space on the master under these two volumes:

Code:
/dev/mapper/vg00-FSImage
                      917G  200M  870G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  6.5G  864G   1% /opt/hadoop-Zookeeper

and on the slaves under these eight volumes:

Code:
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

I'm a bit confused about how to proceed. I was wondering if I can shrink one or more of these volumes, allocate the freed space under /, /usr, etc., and then proceed with the installation. But would that resizing and mounting/unmounting corrupt the existing /, /usr, etc. mounts?
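Since every one of these paths is /dev/mapper/vg00-*, the space can be shuffled within the vg00 volume group without touching the physical disks. A minimal sketch, using the LV names implied by the df output and illustrative sizes; note that ext filesystems grow online but only shrink while unmounted:

Code:
# shrink one big data LV to free extents in vg00
umount /opt/hadoop-00
e2fsck -f /dev/vg00/00           # mandatory check before a shrink
resize2fs /dev/vg00/00 500G      # shrink the filesystem first
lvreduce -L 500G /dev/vg00/00    # then shrink the LV to match
mount /opt/hadoop-00

# grow / and /usr online with the freed extents
lvextend -L +2G /dev/vg00/root && resize2fs /dev/vg00/root
lvextend -L +2G /dev/vg00/usr  && resize2fs /dev/vg00/usr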

Automatic LVM Names Include Hyphens

I am trying to set up an Ubuntu server and have been running into trouble with the automatic LVM naming including a hyphen, e.g., HomeServer-vg.

I have gotten errors when partitioning in Webmin and have seen the volume group written with two hyphens instead of one (HomeServer--vg), but I haven't figured out the proper way to deal with it. I finally reinstalled the server from the beginning and followed some instructions to rename the volume group, taking out the hyphen. No more errors in Webmin, but I have to assume I am missing a normal bit of knowledge that everyone else knows about this. I just installed a virtual machine, and it also created a volume group name with a hyphen.

So, what am I missing?
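Most likely the missing bit is this: device-mapper joins the VG and LV names with a single hyphen and escapes any hyphen that is part of a name by doubling it. So the double hyphen is correct, not an error, and both of the following refer to the same volume (assuming the stock "root" LV name):

Code:
ls -l /dev/HomeServer-vg/root           # LVM's symlink; name shown unescaped
ls -l /dev/mapper/HomeServer--vg-root   # dm node; the VG's own hyphen doubled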

Xubuntu MBR Partitioning Question

I was wondering if I can repair my current partitioning setup using GParted, or if I should just reload Xubuntu. Basically, I screwed up by making the primary partition only 256M, making a massive extended/logical partition for everything else, and not leaving any swap space. I am doing this on an older PC with MBR: dual processor, 2 GB RAM per processor, 160 GB of hard drive space. It is single-boot, no Windows. I would like the partitioning to be as follows, leaving empty disk space for other Linux flavors:

/ 13GB ext4
/home 50GB ext4
swap 8GB swap

sudo parted /dev/sda print all
Code:
Model: ATA ST3160812AS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type      File system  Flags
 1      1049kB  256MB  255MB  primary   ext2         boot
 2      257MB   160GB  160GB  extended
 5      257MB   160GB  160GB  logical
                                                                       
Error: /dev/mapper/xubuntu--vg-swap_1: unrecognised disk label

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/xubuntu--vg-root: 158GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  158GB  158GB  ext4
                                                                          
Error: /dev/mapper/sda5_crypt: unrecognised disk label

df -hT
Code:
Filesystem                   Type      Size  Used Avail Use% Mounted on
/dev/mapper/xubuntu--vg-root ext4      145G  6.5G  131G   5% /
none                         tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         devtmpfs  989M  4.0K  989M   1% /dev
tmpfs                        tmpfs     201M  1.1M  200M   1% /run
none                         tmpfs     5.0M     0  5.0M   0% /run/lock
none                         tmpfs    1003M   88K 1003M   1% /run/shm
none                         tmpfs     100M   24K  100M   1% /run/user
/dev/sda1                    ext2      236M  120M  104M  54% /boot
/home/mbrk/.Private          ecryptfs  145G  6.5G  131G   5% /home/mbrk
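Worth noting from the output above: the root filesystem lives on LVM inside an encrypted container (sda5_crypt, the xubuntu--vg group), so GParted alone cannot rearrange it; the resizing has to happen at the LV layer from live media. A sketch under those assumptions, using the names from the parted output and the target sizes listed above:

Code:
# from a live session (the root LV cannot be shrunk while mounted)
sudo cryptsetup luksOpen /dev/sda5 sda5_crypt   # unlock the container
sudo vgchange -ay xubuntu-vg
sudo e2fsck -f /dev/xubuntu-vg/root
sudo resize2fs /dev/xubuntu-vg/root 13G    # shrink the filesystem first
sudo lvreduce -L 13G /dev/xubuntu-vg/root  # then the LV
sudo lvcreate -L 50G -n home xubuntu-vg    # carve out /home
sudo lvcreate -L 8G -n swap xubuntu-vg     # and swap
sudo mkfs.ext4 /dev/xubuntu-vg/home
sudo mkswap /dev/xubuntu-vg/swap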

Allot Free Space From One Partition To Another

I have a RHEL 5.3 machine with the following partitions and free space:

Free space on the partitions
/ : 74GB
/boot : 81MB
/var : 73GB
/home : 37GB
/icat : 758MB
/opt : 1.5GB

Now, is it possible to allot free space from some other partitions to /opt? I want around 100 GB more space on /opt. If it is possible, I could move 50 GB from /var and 20 GB from /home to /opt, which would make at least 70 GB available.
What are other possible ways to increase space on /opt?
Thanks in advance !!!
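If these mount points are LVM logical volumes (the default layout on a RHEL 5 install), the space can be moved without repartitioning. A sketch assuming LVM and ext3, with hypothetical LV names since they are not shown; shrinking needs the filesystem unmounted, while growing works online:

Code:
umount /home
e2fsck -f /dev/VolGroup00/lv_home         # hypothetical LV name
resize2fs /dev/VolGroup00/lv_home 17G     # shrink the filesystem first
lvreduce -L 17G /dev/VolGroup00/lv_home   # then the LV
mount /home
lvextend -L +20G /dev/VolGroup00/lv_opt   # hand the space to /opt
resize2fs /dev/VolGroup00/lv_opt          # ext3 grows while mounted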

Installing CentOS 6.5 To Multiple VHDX's On Hyper-V 2012. Is It Possible?

Hello all

I am trying to create a Hyper-V 2012 virtual machine running CentOS 6.5 for a client. Currently, I have the VM set up with two SCSI disks, one of 25 GB and one of 60 GB. The 25 GB disk will be the OS disk, and the 60 GB disk will be the data disk holding the database's data. I want to create an LVM volume on the 60 GB disk and increase it over time as the database gets bigger. As soon as the disk gets to about 1 TB, I will create a third disk and extend the volume group to include that third disk as well. The third disk will eventually grow past 1 TB too (but not 2 TB).

Overall, my question is: would this type of setup be possible using an MBR/BIOS setup, or would I HAVE to use GPT?

all help is greatly appreciated!
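One point that may settle it: MBR's 2 TiB ceiling applies per partition, and only the boot disk needs a partition table for BIOS booting. The data disks can skip partitioning entirely, since a whole device can be an LVM PV with no MBR limit. A sketch of that approach (device names assumed):

Code:
pvcreate /dev/sdb             # the bare 60 GB data disk as a PV; no MBR involved
vgcreate vg_data /dev/sdb
lvcreate -l 100%FREE -n lv_db vg_data
mkfs.ext4 /dev/vg_data/lv_db
# later, after growing the VHDX in Hyper-V:
pvresize /dev/sdb
lvextend -l +100%FREE /dev/vg_data/lv_db && resize2fs /dev/vg_data/lv_db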

Trying To Dd A Server With LVM To Another Ext HD, Then To Another Server

I have SUSE Linux Enterprise Server 11 SP3 with three 250 GB WD Blue drives in a RAID 5 configuration.

Server “A” (external drive not plugged in):
Code:
Disk /dev/sda: 499.0 GB, 499021512704 bytes
255 heads, 63 sectors/track, 60669 cylinders, total 974651392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1028095      513024   83  Linux
/dev/sda2         1028096    21993471    10482688   82  Linux swap / Solaris
/dev/sda3        21993472   974651391   476328960   8e  Linux LVM

Disk /dev/mapper/VG_SYSTEM-ROOT: 487.8 GB, 487755612160 bytes
255 heads, 63 sectors/track, 59299 cylinders, total 952647680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VG_SYSTEM-ROOT doesn't contain a valid partition table

I am trying to clone this machine to another server. Both servers are Dell PowerEdge 1900s with three 250 GB WD drives (the only difference is that the ‘B’ server has WD Caviar drives); they are pretty much identical machines with the same processor and RAM. I have a 2 TB external hard drive that I am using to store the output of dd. I booted from the CD into a rescue system, mounted my 2 TB external drive, and did the following:
Code:
    # dd if=/dev/sda conv=sync,noerror bs=64k | gzip -c | split -a3 -b 2G --verbose - /mnt/exthd/

This gives me the following files on my external hard drive:
Code:
    
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:00 aaa
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:31 aab
-rwxr-xr-x 1 root root 2147483648 Jan 10 21:53 aac
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:05 aad
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:10 aae
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:17 aaf
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:24 aag
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:31 aah
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:37 aai
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:43 aaj
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:50 aak
-rwxr-xr-x 1 root root 2147483648 Jan 10 22:56 aal
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:02 aam
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:06 aan
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:12 aao
-rwxr-xr-x 1 root root 2147483648 Jan 10 23:32 aap
-rwxr-xr-x 1 root root  324998512 Jan 10 23:35 aaq

Now, I boot to the rescue system on server ‘B’ with the external drive plugged in, and run fdisk:
Code:
Disk /dev/sda: 498.8 GB, 498753077248 bytes
255 heads, 63 sectors/track, 60636 cylinders, total 974127104 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00059fd2

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdb: 2000.4 GB, 2000398933504 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029167 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00015a3d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  3907028991  1953513472    c  W95 FAT32 (LBA)

Notice sda is only 498.8 GB, where on server ‘A’ it was 499.0 GB. When I try to restore my files from the dd images, I get an out-of-space error. To restore, I use the following:
Code:
    # cat /mnt/exthd/aa* | gunzip -c | dd of=/dev/sda
    
dd: writing to ‘/dev/sda’:  No space left on device
974127105+0 records in
974127104+0 records out
498753077248 bytes (499 GB) copied, 37067.3 s, 13.5MB/s

My guess is that although the drives are the same capacity (three 250 GB drives in a RAID 5 array), the number of cylinders is different because it is a different model, and that is where it is running out of space, although I wouldn't have thought it would.

Please correct me if I am wrong, as I am a newbie, but if I do “# dd if=/dev/sda”, that will take all the partitions with it, such as sda1, sda2, and sda3, correct?
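For what it's worth: yes, imaging the whole device captures the MBR, the partition table, and all three partitions, LVM PV included. The restore ran out of space simply because server B's disk is slightly smaller (974,127,104 sectors vs. 974,651,392 in the fdisk output above), so the image cannot fit. A quick check to run on both machines before imaging, plus the usual workaround (a sketch with an illustrative size):

Code:
blockdev --getsz /dev/sda    # disk size in 512-byte sectors; compare A vs. B
# to make the image fit, shrink the filesystem and LV on server A first, e.g.
e2fsck -f /dev/VG_SYSTEM/ROOT
resize2fs /dev/VG_SYSTEM/ROOT 400G
lvreduce -L 400G /dev/VG_SYSTEM/ROOT
# then shrink the PV and the sda3 partition to match before re-imaging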