LVM / Device Mapper

Hello,
We have been having a nightmare of a time with a crash on one of our file shares.

We were finally able to recover the volume group and get the path to return; however, it appears its mapping is off.

dmsetup status
vg_flood-sff_sha
vg_os-lv_tmp: 0 10485760 linear
vg_os-lv_usr: 0 20971520 linear
vg_os-lv_var: 0 10485760 linear
vg_os-lv_opt: 0 41943040 linear
vg_os-lv_swap: 0 16777216 linear
vg_os-lv_root: 0 10485760 linear


dm-6, I would guess, is where /dev/mapper/vg_flood-sff_share needs to be mapped, as it is the only one without a mapping. As you can see, dmsetup also shows this logical volume differently than the rest.

I am trying to figure out how to fix the mapping for this so that I can activate the logical volume and get our data back.
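
A rough sketch of one way to approach this, assuming the LVM metadata itself is intact and the names really are vg_flood / sff_share (adjust to match your setup, and back up /etc/lvm before touching anything):

Code:
# see how LVM thinks the LV is laid out and whether it is flagged active
lvs -a -o +devices vg_flood

# if the mapper node exists but has no table loaded, remove the stale node
# so LVM can recreate it cleanly on activation
dmsetup remove vg_flood-sff_share

# try activating the LV again
lvchange -ay vg_flood/sff_share

# if activation still fails, list the archived metadata; vgcfgrestore can
# bring back a known-good copy, but read up on it before running a restore
vgcfgrestore --list vg_flood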


Similar Content



New To LVM, How To Bridge Between /dev/sd* And /dev/mapper/vol Group/log Vol?

So LVM has taken me by surprise, especially working with all of these virtual servers.

I have Linux servers in VMware, and I know how to grow hard disks there and how they are tied back to /dev/sd*; however, what I'm not sure about is how do I know which Volume Group and Logical Volume they are tied to?

If I issue df -ha I can see where the various partitions are tied to /dev/mapper/VolGroup/LogVol,

and if I issue fdisk -l I can see the space and what is tied to /dev/sd*; however, how do I tie the two together so I know who has what space and how to grow or shrink that space?

I found an older thread here:

http://www.linuxquestions.org/questi...ds-for-699073/

However I still don't understand how to bridge between the two.
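
For what it's worth, a minimal sketch of how to tie the two together: pvs shows which /dev/sd* devices back each volume group, and lvs -o +devices shows which physical volumes each logical volume actually sits on.

Code:
# physical volumes -> which volume group each /dev/sd* partition belongs to
pvs

# logical volumes -> the underlying /dev/sd* devices they are laid out on
lvs -a -o +devices

# on newer distros, lsblk draws the whole disk -> partition -> LV stack at once
lsblk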

thanks

vgchange -an Fails To Deactivate VG || Unmounting Logical Volumes Was Successful

Hi All,

I unmounted all logical volumes of tomcatvg successfully, but when I tried to deactivate the volume group using vgchange, it showed that the logical volumes are still in an active state. I need help on how to force-deactivate the volume group.


Actions performed
================
vgchange -an tomcatvg


[root@porsche ~]# vgs
VG #PV #LV #SN Attr VSize VFree
tomcatvg 1 10 0 wz--n- 95.38g 8.88g


[root@porsche ~]# cat /etc/fstab | grep -i fs_opt_tomcat
/dev/tomcatvg/fs_opt_tomcat /opt/tomcat ext4 defaults 1 2
[root@porsche ~]#

fuser -km /opt/tomcat
umount /opt/tomcat

I can still see the logical volume in an active state:

[root@porsche ~]# lvs | grep -i tomcatvg
fs_opt_tomcat tomcatvg -wi-ao---- 5.00g
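
A sketch of the checks that usually explain this; something (a process, an NFS export, swap, or a leftover mount) is still holding an LV in the group open:

Code:
# confirm nothing from the VG is still mounted or used as swap
grep tomcat /proc/mounts
grep tomcat /proc/swaps

# see open counts on the device-mapper devices
dmsetup info -c

# find any process still holding the device itself open
lsof /dev/tomcatvg/fs_opt_tomcat
fuser -vm /dev/tomcatvg/fs_opt_tomcat

# once nothing holds it open, deactivation should succeed
lvchange -an tomcatvg
vgchange -an tomcatvg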

Regards
Arun

Are Logical Volumes In LVM Separate From Each Other, Like A Partition?

Even though they are on the same Volume Group?

I'm still having a hard time understanding this.

Let's say for example, that I have the following:

Code:
# df -ha

VolGroup00/LogVol2     /tmp

VolGroup00/LogVol5     /opt

VolGroup00/LogVol3     /var

I have three different Logical Volumes where /tmp, /opt and /var live. Are these separate like a partition that one would place on a physical disk?

Keep in mind that they are all on the same Volume Group as well.
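
One way to see what is going on: each logical volume shows up as its own block device with its own independent filesystem, much like a partition would; they just allocate their space from the same pool (the volume group). A small sketch using the names from the df output above:

Code:
# each LV is a separate block device under /dev/mapper
ls -l /dev/mapper/VolGroup00-LogVol2 /dev/mapper/VolGroup00-LogVol3 /dev/mapper/VolGroup00-LogVol5

# and each one carries its own filesystem, independent of the others
blkid /dev/VolGroup00/LogVol2
blkid /dev/VolGroup00/LogVol5

# the volume group is just the shared pool of space they draw from
vgs VolGroup00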

Xubuntu MBR Partitioning Question

I was wondering if I can repair my current partitioning setup using GParted, or if I should just reload Xubuntu. Basically I screwed up by making the primary partition only 256M, made a massive extended logical partition for everything else, and did not leave swap space. I am doing this on an older PC with MBR, dual processor, 2 GB RAM per processor, 160 GB hard drive space. It is single boot, no Windows. I would like the partitioning to be as follows, leaving empty disk space for other Linux flavors:

/ 13GB ext4
/home 50GB ext4
swap 8GB swap

sudo parted /dev/sda print all
Code:
Model: ATA ST3160812AS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type      File system  Flags
 1      1049kB  256MB  255MB  primary   ext2         boot
 2      257MB   160GB  160GB  extended
 5      257MB   160GB  160GB  logical
                                                                       
Error: /dev/mapper/xubuntu--vg-swap_1: unrecognised disk label

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/xubuntu--vg-root: 158GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  158GB  158GB  ext4
                                                                          
Error: /dev/mapper/sda5_crypt: unrecognised disk label

df -hT
Code:
Filesystem                   Type      Size  Used Avail Use% Mounted on
/dev/mapper/xubuntu--vg-root ext4      145G  6.5G  131G   5% /
none                         tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         devtmpfs  989M  4.0K  989M   1% /dev
tmpfs                        tmpfs     201M  1.1M  200M   1% /run
none                         tmpfs     5.0M     0  5.0M   0% /run/lock
none                         tmpfs    1003M   88K 1003M   1% /run/shm
none                         tmpfs     100M   24K  100M   1% /run/user
/dev/sda1                    ext2      236M  120M  104M  54% /boot
/home/mbrk/.Private          ecryptfs  145G  6.5G  131G   5% /home/mbrk
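
A very rough sketch of the shrink-and-carve-up route for the layout above, assuming the work is done from a live USB with the encrypted container unlocked and nothing mounted, and that everything is backed up first (names taken from the output above):

Code:
# unlock the encrypted container and activate the VG (from the live system)
cryptsetup open /dev/sda5 sda5_crypt
vgchange -ay xubuntu-vg

# shrink the root filesystem and LV together down to ~13G
lvreduce --resizefs -L 13G /dev/xubuntu-vg/root

# carve /home out of the freed space
lvcreate -L 50G -n home xubuntu-vg
mkfs.ext4 /dev/xubuntu-vg/home

# swap_1 appears to exist already as an LV (see the parted output above);
# if it is not valid or in use, (re)initialize it
mkswap /dev/xubuntu-vg/swap_1

After that, /etc/fstab would need entries for the new /home and swap, and the current contents of /home copied across before mounting over it; whether that is less work than a clean reinstall with the layout you want is a judgment call, especially with the ecryptfs home directory in the mix.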

Creating A Large Tar File From A Sequence Of Small Tar Files And One File Is Missing

I am trying to put back together a big tar file from some smaller tar files that I created several years ago. The issue is that in order to rebuild this large file, I must feed each volume back in using the command

tar -xMf cd-1.tar
Prepare volume #2 for 'cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for 'cd-2.tar' and hit return: n cd-3.tar
and so forth.

I have fourteen files, cd-1.tar through cd-15.tar. The cd-9.tar file is missing and I assume that it is gone. Now when I type the commands in, I get the following:

Code:
-linux tarfile]$ tar -xMf cd-1.tar
Prepare volume #2 for `cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for `cd-2.tar' and hit return: n cd-3.tar
Prepare volume #4 for `cd-3.tar' and hit return: n cd-4.tar
Prepare volume #5 for `cd-4.tar' and hit return: n cd-5.tar
Prepare volume #6 for `cd-5.tar' and hit return: n cd-6.tar
Prepare volume #7 for `cd-6.tar' and hit return: n cd-7.tar
Prepare volume #8 for `cd-7.tar' and hit return: n cd-8.tar
Prepare volume #9 for `cd-8.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return: 
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)

As you can see, I do not have cd-9.tar. That stops the untarring cold. However, I have cd-10.tar, cd-11.tar, cd-12.tar, cd-13.tar, cd-14.tar, and cd-15.tar. I may have these files, but they cannot be put back into the main file because cd-9.tar is missing and everything must be put in sequentially.

Is there a way to complete this sequence of steps and add all fourteen files to the file bigbackup, leaving out cd-9.tar? That means that the bigbackup file will be incomplete, but that is better than no file, or than having bigbackup missing six files on the back end.
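
I'm not aware of a way to make -M skip a missing volume, but as a rough, untested sketch, the members that are wholly contained in the surviving later volumes can often be pulled out by opening each volume on its own; the member that was split across cd-9.tar will be lost either way:

Code:
# volumes 1-8 restore normally with -M as before; at the volume #9 prompt,
# answer q to stop
tar -xMf cd-1.tar

# then try each surviving later volume individually; tar will likely complain
# about the member continued from the missing cd-9.tar, but members that
# begin inside the volume usually extract
for v in cd-10.tar cd-11.tar cd-12.tar cd-13.tar cd-14.tar cd-15.tar; do
    tar -xvf "$v"
done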

Any help appreciated.

Thanks in advance.

Respectfully,


Newport_j

Virtual CentOS 6.4 Server Expand Disk For Splunk Instance

I'm sure there are other posts about this, but I'm terrified to mess with disks in Linux as I'm very green. I've looked at one other post which suggests running the following, so I'm going to do the same. The drive originally had 400 GB provisioned, but currently has 500 GB allocated in VMware. Thanks so much in advance!

Code:
fdisk -l
pvs
vgs
lvs
df -h


Code:
[root@uspk10splunk ~]# fdisk -l

Disk /dev/sda: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c255c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502        8192     7875584   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/sdb: 536.9 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x69437664

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       52216   419424988+  8e  Linux LVM

Disk /dev/mapper/vg_uspk10vsp03-lv_root: 3833 MB, 3833593856 bytes
255 heads, 63 sectors/track, 466 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_uspk10vsp03-lv_swap: 4227 MB, 4227858432 bytes
255 heads, 63 sectors/track, 514 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_opt-lv_opt: 429.5 GB, 429488340992 bytes
255 heads, 63 sectors/track, 52215 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Code:
[root@uspk10splunk ~]# pvs
  PV         VG             Fmt  Attr PSize   PFree
  /dev/sda2  vg_uspk10vsp03 lvm2 a--    7.51g    0
  /dev/sdb1  vg_opt         lvm2 a--  399.99g    0

Code:
[root@uspk10splunk ~]# vgs
  VG             #PV #LV #SN Attr   VSize   VFree
  vg_opt           1   1   0 wz--n- 399.99g    0
  vg_uspk10vsp03   1   2   0 wz--n-   7.51g    0

Code:
[root@uspk10splunk ~]# lvs
  LV      VG             Attr      LSize   Pool Origin Data%  Move Log Cpy%Sync                     Convert
  lv_opt  vg_opt         -wi-ao--- 399.99g                                                          
  lv_root vg_uspk10vsp03 -wi-ao---   3.57g                                                          
  lv_swap vg_uspk10vsp03 -wi-ao---   3.94g

Code:
[root@uspk10splunk ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_uspk10vsp03-lv_root
                      3.6G  1.5G  1.9G  45% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/mapper/vg_opt-lv_opt
                      394G  374G  422M 100% /opt
/dev/sda1             485M   32M  428M   7% /boot
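
For what it's worth, a rough sketch of the usual sequence for this layout, assuming the extra ~100 GB shows up as free space at the end of /dev/sdb and gets carved into a new partition (the /dev/sdb2 below is that new, not-yet-existing partition); take a VM snapshot or backup first:

Code:
# make the kernel notice the resized disk (or reboot)
echo 1 > /sys/block/sdb/device/rescan

# create a new partition of type 8e (Linux LVM) in the free space
fdisk /dev/sdb        # n -> new primary partition 2, t -> 8e, w
partprobe /dev/sdb

# add it to vg_opt and grow lv_opt plus its filesystem
pvcreate /dev/sdb2
vgextend vg_opt /dev/sdb2
lvextend -l +100%FREE -r /dev/vg_opt/lv_opt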

Need Help Running Fsck On Mounted Logical Volume

Hi ALL,
One of the filesystems went read-only. I need help on how to run a filesystem check on a mounted logical volume to make the filesystem read-write again.

[root@porsche ~]# cd /opt/apps
[root@porsche apps]# touch 1
touch: cannot touch `1': Read-only file system
[root@porsche apps]#



[root@porsche ]# df -h /opt/apps
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/apps-vg
798G 687G 71G 91% /opt/apps
[root@porsche ]#
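
fsck should not be run on a filesystem that is mounted and in use; the usual path is to find the underlying error, stop whatever uses the mount, unmount, check, then remount. A rough sketch, assuming the device really is /dev/mapper/apps-vg as shown in the df output:

Code:
# look for the I/O or journal error that forced the read-only remount
dmesg | tail -50

# stop the applications using /opt/apps, then unmount it
fuser -vm /opt/apps
umount /opt/apps

# check and repair while it is NOT mounted
fsck -y /dev/mapper/apps-vg

# mount it read-write again
mount /opt/apps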

Symlink Or A Better Option

Hello,

I have a file server that was built before my time here. Unfortunately it was not built very well, and of course it became more and more important to production.

Using LVM the following was created:
/dev/mapper/vg_weather-sff_share 34T 30T 3.4T 90% /sff_share

This is currently made up of 30+ different volumes. I do not want to continue growth here. Our developers need to be able to write to the data in /sff_share. The data is too large to be moved or copied.

I was thinking about creating a fresh new mount, /sff_share1, and building a symlink into /sff_share so that the existing data will still be accessible, but new data will be written to the fresh file system / volume attached to /sff_share1. That way, to the scripts, everything can still be accessed under /sff_share.

Does this sound like a good route? Or might there be a better option?

1. I need to get write to stop on the current /sff_share
2. New data needs to write to /sff_share1
3. Both /sff_share and /sff_share1 need to be accessible under /sff_share
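
A rough sketch of the symlink route, assuming a fresh LV has already been created and formatted (the vg_weather/sff_share1 name below is just a placeholder); the symlink has to be created while /sff_share is still writable, since the read-only remount comes afterwards:

Code:
# mount the fresh volume
mkdir /sff_share1
mount /dev/vg_weather/sff_share1 /sff_share1

# expose the new space under the old path while /sff_share is still rw
ln -s /sff_share1/new /sff_share/new

# then stop any new writes to the old share
mount -o remount,ro /sff_share

A bind mount of /sff_share1 onto a directory inside /sff_share would achieve much the same thing; which is cleaner mostly depends on how the scripts resolve paths.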

Backup Plan For SAN Migration. lvconvert, 2-Leg Mirror Scenario.

Hello,

So I am going to move an LVM volume from one SAN to another.
It's on CentOS 5.

so "lvm1" from "sanA" to "sanB"
I do

Code:
>lvconvert -m1 --corelog VG/lvm1   /dev/disk_on_sanB
check:
>lvdisplay -m
...
..
Logical volume lvm1_mimage_0
......
Logical volume lvm1_mimage_1

then I break the mirror, taking off lvm1 from old sanA
>lvconvert -m0 VG/lvm1  /dev/disk_on_sanA

At this point, if something goes wrong, is there a way to bring the original lvm1 (from sanA) back to the server?
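
One commonly used safety net, sketched under the assumption that the mirror is fully in sync before any leg is removed: keep a metadata backup, and note that "undoing" before the sanA leg is dropped just means breaking the mirror the other way.

Code:
# keep a metadata backup before starting (LVM also archives automatically
# under /etc/lvm/archive)
vgcfgbackup -f /root/VG-before-migration.vg VG

# while both legs still exist, keeping the original sanA copy is simply:
lvconvert -m0 VG/lvm1 /dev/disk_on_sanB

# after the sanA leg has already been removed, the only fallback is a careful
# vgcfgrestore of the saved metadata, and only if the sanA disk is untouched
vgcfgrestore -f /root/VG-before-migration.vg VG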

Thanks

Trouble With Fdisk

I got this configuration.

fdisk -l

Disk /dev/sda: 493.9 GB, 493921239040 bytes
255 heads, 63 sectors/track, 60049 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002ac38

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 32636 261630976 8e Linux LVM

Disk /dev/mapper/vg_gazduire-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_gazduire-lv_swap: 5100 MB, 5100273664 bytes
255 heads, 63 sectors/track, 620 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_gazduire-lv_home: 209.1 GB, 209119608832 bytes
255 heads, 63 sectors/track, 25424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


df -h

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_gazduire-lv_root
50G 21G 27G 44% /
tmpfs 2.4G 0 2.4G 0% /dev/shm
/dev/sda1 485M 161M 299M 35% /boot
/dev/mapper/vg_gazduire-lv_home
192G 17G 166G 9% /home
/usr/tmpDSK 2.0G 981M 955M 51% /tmp


I want to use the approximately 200 GB that remains on the disk. My plan is to create /dev/sda3 and add it to vg_gazduire!

VG Name vg_gazduire
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 249.51 GiB
PE Size 4.00 MiB
Total PE 63874
Alloc PE / Size 63874 / 249.51 GiB
Free PE / Size 0 / 0



When I type fdisk /dev/sda3, the system gives me "Unable to open /dev/sda3"!

Could anyone tell me the solution?
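
For the record, fdisk is run against the whole disk, not against a partition that does not exist yet, which is why /dev/sda3 cannot be opened. The extra space cannot be folded into /dev/sda1 directly, but it can be added to vg_gazduire as a new physical volume; a rough sketch (back up first):

Code:
# create the new partition on the disk itself
fdisk /dev/sda       # n -> new primary partition 3, t -> 8e (Linux LVM), w
partprobe /dev/sda   # or reboot so the kernel sees /dev/sda3

# turn it into a PV and add it to the volume group
pvcreate /dev/sda3
vgextend vg_gazduire /dev/sda3

# then grow whichever LV needs the space, e.g. lv_home, plus its filesystem
lvextend -l +100%FREE -r /dev/vg_gazduire/lv_home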