Vgchange -an Command Fails To Deactivate VG || Logical Volumes Unmounted Successfully

Hi All,

I unmounted all logical volumes of tomcatvg successfully, but when I tried to deactivate the volume group using vgchange, it reported that logical volumes are still in an active state. I need help with how to force-deactivate the volume group.


Actions performed
================
vgchange -an tomcatvg


[root@porsche ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  tomcatvg   1  10   0 wz--n- 95.38g 8.88g


[root@porsche ~]# cat /etc/fstab | grep -i fs_opt_tomcat
/dev/tomcatvg/fs_opt_tomcat /opt/tomcat ext4 defaults 1 2
[root@porsche ~]#

fuser -km /opt/tomcat
umount /opt/tomcat

The logical volume still shows as active:

[root@porsche ~]# lvs | grep -i tomcatvg
  fs_opt_tomcat tomcatvg -wi-ao---- 5.00g
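
For reference, the 'o' in the Attr string (-wi-ao----) means the device is still open (mounted or otherwise in use). This is the checklist I plan to work through next; the commands are from the lvs/dmsetup man pages, and the swap check is just a guess on my part:

Code:
lvs -o lv_name,lv_attr tomcatvg    # 'o' in the attr string = LV is open
grep tomcatvg /proc/mounts         # any filesystem from this VG still mounted?
swapon -s                          # an LV used as swap also counts as open
dmsetup info -c | grep tomcatvg    # Open column > 0 identifies held devices
vgchange -an tomcatvg              # should succeed once every Open count is 0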

Regards
Arun


Similar Content



Need Help Running Fsck On Mounted Logical Volume

Hi ALL,
One of the filesystems went read-only. I need help with how to run a filesystem check on the mounted logical volume to make the filesystem read-write again.

[root@porsche ~]# cd /opt/apps
[root@porsche apps]# touch 1
touch: cannot touch `1': Read-only file system
[root@porsche apps]#



[root@porsche ]# df -h /opt/apps
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/apps-vg
                      798G  687G   71G  91% /opt/apps
[root@porsche ]#
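
This is the sequence I am considering, assuming /dev/mapper/apps-vg from the df output above is the right device and that /opt/apps can briefly go offline:

Code:
dmesg | tail                     # first find out why the kernel went read-only
fuser -km /opt/apps              # kill anything holding the mount (disruptive)
umount /opt/apps                 # the filesystem must not be checked while mounted
fsck -f /dev/mapper/apps-vg      # force a full check and repair
mount /opt/apps                  # remount read-write via the fstab entry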

Are Logical Volumes In LVM Separate From Each Other, Like A Partition?

Even though they are on the same Volume Group?

I'm still having a hard time understanding this.

Let's say for example, that I have the following:

Code:
# df -ha
VolGroup00/LogVol2     /tmp
VolGroup00/LogVol5     /opt
VolGroup00/LogVol3     /var

I have three different Logical Volumes where /tmp, /opt and /var live. Are these separate like a partition that one would place on a physical disk?

Keep in mind that they are all on the same volume group as well.
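
For instance, would something like the following behave like an independent partition? (Made-up names, just to illustrate the question.)

Code:
lvcreate -L 1G -n LogVolTest VolGroup00   # carve a new LV from the VG's free space
mkfs.ext4 /dev/VolGroup00/LogVolTest      # it gets its own independent filesystem
mount /dev/VolGroup00/LogVolTest /mnt     # filling /mnt cannot fill /tmp, /opt or /var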

How To Auto-Mount Logical Volume Before Transmission-daemon Starts At Start Up

I have been using an old computer to download my torrents and this has been my usual routine:

1. Press the power button.
2. Connect to the computer through Putty
3. Log in
4. Gain su privileges

Startup Commands:
Code:
/etc/init.d/transmission-daemon stop
apt-get update
mount -t ext4 /dev/vgTransmission/lvTransmission /mnt/transmissionVault
/etc/init.d/transmission-daemon start

Shutdown Commands:
Code:
/etc/init.d/transmission-daemon stop
umount /dev/vgTransmission/lvTransmission
shutdown -h now

I was wondering if there is a way to configure a computer to automatically mount a logical volume “before” the transmission-daemon starts at boot up (note my first Startup Command).

I think the instructions here are related to what I want to do, but I want to get some advice before I attempt to do anything dangerous:
http://tille.garrels.be/training/tld...#sect_04_02_04


My perfect scenario would be:

Startup Routine:
1. Send WakeOnLan magic packet to computer to turn it on.
2. Computer boots up, mounts the transmissionVault logical volume
3. Wait 10 seconds to ensure that the logical volume has finished mounting
4. Start transmission-daemon

Shutdown Routine:
Send WakeOnLan magic packet to turn off computer, i.e.: Execute “shutdown -h now”
The shutdown command should include:
1. Stop transmission-daemon
2. Wait 10 seconds to ensure that transmission-daemon has finished shutting down
3. Umount transmissionVault logical volume




Recently the Ethernet port of my transmission box has stopped working, so I went ahead and swapped the motherboard with another old motherboard, added a few hard drives, installed a fresh copy of Debian, and re-created the LVM logical volume for transmissionVault. I pretty much have a fairly stock system running at the moment.

Running a headless 3.2.0-4-amd64
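
From what I have read, a single line in /etc/fstab might already do what I want, since local filesystems are mounted early in the boot sequence, before rc2.d daemons such as transmission-daemon start. The device and mount point below are taken from my commands above; this is untested on my part:

Code:
# /etc/fstab
/dev/vgTransmission/lvTransmission  /mnt/transmissionVault  ext4  defaults  0  2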

LVM / Device Mapper

Hello,
We have been having a nightmare of a time with a crash on one of our file shares.

We finally were able to recover the logical volume group and get the path to return. However, it appears its mapping is off.

dmsetup status
vg_flood-sff_sha
vg_os-lv_tmp: 0 10485760 linear
vg_os-lv_usr: 0 20971520 linear
vg_os-lv_var: 0 10485760 linear
vg_os-lv_opt: 0 41943040 linear
vg_os-lv_swap: 0 16777216 linear
vg_os-lv_root: 0 10485760 linear


dm-6, I would guess, is where /dev/mapper/vg_flood-sff_share needs to be mapped, as it is the only one without a mapping. As you can see, dmsetup also shows this logical volume differently from the rest.

I am trying to figure out how to fix the mapping for this so that I can activate the logical volume and get our data back.
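
What I have sketched out so far, guessing from the dmsetup output that the VG is vg_flood and the LV is sff_share, is:

Code:
lvs -a -o +devices vg_flood        # does LVM still know the LV and its PV segments?
lvchange -ay vg_flood/sff_share    # ask LVM to (re)load the device-mapper table
dmsetup table vg_flood-sff_share   # verify a linear mapping now exists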

New To LVM, How To Bridge Between /dev/sd* And /dev/mapper/VolGroup/LogVol?

So LVM has taken me by surprise, especially working with all of these virtual servers.

I have Linux servers in VMware, and I know how to grow hard disks there and how they are tied back to /dev/sd*; however, what I'm not sure about is how to know which volume group and logical volume they are tied to.

If I issue df -ha I can see where the various partitions are tied to /dev/mapper/VolGroup/LogVol,

and if I issue fdisk -l I can see the space and what is tied to /dev/sd*. However, how do I tie the two together so I know who has what space and how to grow or shrink that space?

I found an older thread here:

http://www.linuxquestions.org/questi...ds-for-699073/

However I still don't understand how to bridge between the two.
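
From what I can gather so far, commands like these might show the bridge; these are untested guesses on my part, and the LV path is hypothetical:

Code:
pvs                                   # which /dev/sd* devices are PVs, and in which VG
lvs -o +devices                       # which PV segments each logical volume sits on
lvdisplay -m /dev/VolGroup00/LogVol2  # per-segment mapping of one LV
lsblk                                 # tree view disk -> partition -> LV, if installed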

thanks

Creating A Large Tar File From A Sequence Of Small Tar Files And One File Is Missing

I am trying to put back together a big tar file from some smaller tar files that I created several years ago. The issue is that in order to rebuild this large file, I must feed each volume back in using the command

tar -xMf cd-1.tar
Prepare volume #2 for 'cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for 'cd-2.tar' and hit return: n cd-3.tar
and so forth.

I have fourteen files, cd-1.tar through cd-15.tar; the cd-9.tar file is missing and I assume that it is gone. Now when I type the commands in, I get the following:

Code:
-linux tarfile]$ tar -xMf cd-1.tar
Prepare volume #2 for `cd-1.tar' and hit return: n cd-2.tar
Prepare volume #3 for `cd-2.tar' and hit return: n cd-3.tar
Prepare volume #4 for `cd-3.tar' and hit return: n cd-4.tar
Prepare volume #5 for `cd-4.tar' and hit return: n cd-5.tar
Prepare volume #6 for `cd-5.tar' and hit return: n cd-6.tar
Prepare volume #7 for `cd-6.tar' and hit return: n cd-7.tar
Prepare volume #8 for `cd-7.tar' and hit return: n cd-8.tar
Prepare volume #9 for `cd-8.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return: n cd-10.tar
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)
Prepare volume #9 for `cd-10.tar' and hit return: 
tar: This volume is out of sequence (10755138772 - 4889670868 != 6598651392)

As you can see, I do not have cd-9.tar. That stops the untarring cold. I do have cd-10.tar, cd-11.tar, cd-12.tar, cd-13.tar, cd-14.tar, and cd-15.tar, but they cannot be put back into the main file because cd-9.tar is missing and everything must be put back in sequentially.

Is there a way to complete this sequence of steps and add all fourteen files to the file bigbackup, leaving out cd-9.tar? That means the bigbackup file will be incomplete, but that is better than no file, or than having bigbackup missing six files on the back end.
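
One idea I have not dared to try yet, so please treat it as an untested guess: from what I have read, each volume of a GNU multi-volume archive is itself a readable tar archive, so members wholly contained in a single volume might be pulled out one volume at a time, giving up only the files that span the missing cd-9.tar or a volume boundary:

Code:
# extract whatever is recoverable from each surviving volume after the gap
for f in cd-10.tar cd-11.tar cd-12.tar cd-13.tar cd-14.tar cd-15.tar; do
    tar -xf "$f"    # expect a warning for the leading split member
done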

Any help appreciated.

Thanks in advance.

Respectfully,


Newport_j

Script Persistently Lowering Mplayer Volume

I have a similar script as follows

Code:
(sleep 5m; echo "set_property volume 85" > fifo_file) &

mplayer -volume 95 -quiet -slave -input file="fifo_file" song1 song2 song3

This makes the current song change volume to 85 after the 5 minute sleep period, but any songs after that revert to 95.

I want it to start at 95 until changed, and then stay at volume 85 until the mplayer command completes.

Is this possible?
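
One workaround I have been thinking about, untested, which assumes fifo_file was created with mkfifo before mplayer starts: keep re-sending the slave command after the initial five minutes, so every new song gets pushed back down to 85.

Code:
mkfifo fifo_file 2>/dev/null
(sleep 5m
 while pgrep -x mplayer > /dev/null; do
     echo "set_property volume 85" > fifo_file
     sleep 2
 done) &
mplayer -volume 95 -quiet -slave -input file=fifo_file song1 song2 song3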

Backup Plan For SAN Migration. Lvconvert -- 2-Leg Scenario.

Hello,

So I am going to move an LVM volume from one SAN to another.
It's on CentOS 5.

so "lvm1" from "sanA" to "sanB"
I do

Code:
>lvconvert -m1 --corelog VG/lvm1   /dev/disk_on_sanB
check:
>lvdisplay -m
...
Logical volume lvm1_mimage_0
...
Logical volume lvm1_mimage_1

then I break the mirror, taking lvm1 off the old sanA:
>lvconvert -m0 VG/lvm1  /dev/disk_on_sanA

At this point, if something goes wrong, is there a way to bring the original lvm1 (from sanA) back to the server?
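
One fallback I am considering instead of plain -m0, if this lvm2 build supports it (check lvconvert(8) on the CentOS 5 host): split the sanA leg off as its own logical volume, so it survives as a point-in-time copy until the migration is verified.

Code:
lvconvert --splitmirrors 1 --name lvm1_old VG/lvm1 /dev/disk_on_sanA
# if the migration later goes bad, VG/lvm1_old still holds the data as of the
# split and can be mounted read-only for recovery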

Thanks

Automatic LVM Names Include Hyphens

I am trying to set up an Ubuntu server and have been running into trouble with the automatic LVM naming including a hyphen, e.g. HomeServer-vg.

I have gotten errors when partitioning in webmin and have seen the volume group written with two hyphens instead of one (HomeServer--vg), but haven't figured out the proper way to deal with it. I finally reinstalled the server from the beginning and followed some instructions to rename the volume group, taking out the hyphen. No more errors in webmin, but I have to assume I am missing a normal bit of knowledge that everyone else has about this. I just installed a virtual machine and it created a volume group name with a hyphen also.

So, what am I missing?
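
For what it is worth, this is the behaviour I think I am seeing: device-mapper uses a single hyphen to separate the VG name from the LV name, so a hyphen inside either name is escaped by doubling it. Both paths below should point at the same device (a root LV is assumed just for illustration):

Code:
ls -l /dev/HomeServer-vg/root               # LVM's symlink keeps the single hyphen
ls -l /dev/mapper/HomeServer--vg-root       # dm node: '--' is the escaped hyphen
dmsetup splitname HomeServer--vg-root LVM   # decodes back to VG=HomeServer-vg, LV=root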

Get RAID1 And LVM Back After Reinstalling The OS

Hi All,
I had installed CentOS 6.6 on sda. The RAID1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I went ahead and reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they were mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything will be fine if the mount points are selected carefully during OS reinstallation, taking care over which partitions get formatted.

Now I thought of reinstalling the OS once again, but this time I didn't select mount points for the RAID/LVM partitions during the reinstallation, intending to set them up manually afterwards, so I selected only the /, /others, and /var partitions to format. When it booted, I ran "cat /proc/mdstat", but the array came up as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I just want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it, as it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that no RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, it does not allow the /dev/md127 RAID array to be stopped. Since I am new to RAID/LVM, I would appreciate your help making LVM inactive without any data loss, restarting the RAID array as /dev/md0, and then reactivating the LVM setup.
Thanks.
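
Here is the sequence I have pieced together from the mdadm and LVM man pages, so please correct me if it is wrong (device names taken from the /proc/mdstat output above):

Code:
vgchange -an data                             # deactivate the LVs so the PV is released
mdadm --stop /dev/md127                       # the array can be stopped once LVM lets go
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc   # reassemble under the wanted name
mdadm --detail --scan >> /etc/mdadm.conf      # record it so the next boot assembles md0
vgchange -ay data                             # reactivate LVM; the data is untouched
# rebuilding the initramfs (dracut -f on CentOS 6) may also be needed so the
# md0 name sticks across reboots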