Map Allocated Space From Storage Device To Existing Partition

Dear Team,

I just received some new space from storage and verified with the multipath command that it is allocated on the server. Now I want to resize the existing partition. Please help me with this.

Below is the multipath command output:

multipath -ll
app2 (360060e80105ed650057075f500000011) dm-2 HITACHI,DF600F
[size=400G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:17 sdf 8:80 [active][ready]
\_ 2:0:1:17 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:17 sdg 8:96 [active][ready]
\_ 2:0:0:17 sdh 8:112 [active][ready]
app1 (360060e80105ed650057075f50000000a) dm-0 HITACHI,DF600F
[size=600G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:10 sdb 8:16 [active][ready]
\_ 2:0:0:10 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:1:10 sdc 8:32 [active][ready]
\_ 2:0:1:10 sde 8:64 [active][ready]

Here, 600 GB is already in use, and I want to add another 400 GB to the same existing partition.

Below are the partition details:

df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 97G 9.9G 82G 11% /
/dev/sda5 49G 180M 46G 1% /backup
/dev/sda2 97G 21G 72G 23% /opt
/dev/sda1 99M 12M 83M 13% /boot
tmpfs 16G 0 16G 0% /dev/shm
/dev/mapper/app1p1 591G 117G 445G 21% /app

ls /dev/mapper/
app1 app1p1 app2 control

Please guide me on how I can add the 400 GB of space to the same 600 GB (/app) partition.

Many Thanks !!
Jignesh Dholakiya
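
A note for anyone landing here with the same layout: app2 is a separate 400 GB LUN, and a plain partition such as app1p1 cannot span two devices, so the new space cannot simply be appended to /app as-is. The two usual routes are to have the storage team grow the existing app1 LUN to 1 TB and then resize the partition and filesystem, or to migrate /app onto LVM so that several LUNs can be pooled into one volume. Below is a rough sketch of the grow-the-LUN route, assuming an ext3/ext4 filesystem that supports online growth. It is illustrative only, needs root, and is worth rehearsing on a non-production host first; device names are taken from the output above.

```shell
# 1. After storage grows the app1 LUN, rescan each underlying path device
for dev in sdb sdc sdd sde; do
    echo 1 > /sys/block/$dev/device/rescan
done

# 2. Have multipathd pick up the new size of the map
multipathd -k"resize map app1"

# 3. Grow the partition table entry on the map, then refresh the
#    app1p1 device-mapper partition mapping. On older tool versions
#    this step means deleting and recreating the partition with fdisk
#    at the SAME start sector (the data itself is untouched).
kpartx -u /dev/mapper/app1

# 4. Grow the filesystem online (ext3/ext4)
resize2fs /dev/mapper/app1p1
```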


Similar Content



Need Help Regarding Multipaths On RHEL

Hello Sirs,

In the output below, some of the hosts/paths have four links and some have two. Why is that? How do I find out how many paths in total are connected and in use? We are using native multipathing. How do I know which file systems are on which devices? Please explain, or direct me to a link where I can learn about multipathing. Also, please explain the difference between "multipath -ll" and "multipath -l".




root@xxXXxxXX [/root]
# multipath -ll
mpathr (820060e8006d945444444d9458282183e) dm-21 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:41 sdbi 67:192 active ready running
`- 2:0:4:41 sdfd 129:240 active ready running
mpathak (8238a95f2258000748844444444444416) dm-15 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:6 sdh 8:112 active ready running
| |- 2:0:1:6 sdds 71:160 active ready running
| |- 1:0:1:6 sdt 65:48 active ready running
| `- 2:0:0:6 sddg 70:224 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:6 sdaf 65:240 active ready running
|- 2:0:3:6 sdeq 129:32 active ready running
|- 1:0:3:6 sdar 66:176 active ready running
`- 2:0:2:6 sdee 128:96 active ready running
data16 (820060e8006d945444444d94582821833) dm-67 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:30 sdcw 70:64 active ready running
`- 2:0:5:30 sdgr 132:112 active ready running
mpathe (820060e8006d945444444d94582821f17) dm-45 HITACHI,OPEN-V
size=35G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:12 sdce 69:32 active ready running
`- 2:0:5:12 sdfz 131:80 active ready running
data0 (820060e8006d945444444d94582821815) dm-43 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:14 sdcg 69:64 active ready running
`- 2:0:5:14 sdgb 131:112 active ready running
mpathq (820060e8006d945444444d9458282184a) dm-34 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:53 sdbu 68:128 active ready running
`- 2:0:4:53 sdfp 130:176 active ready running
mpathaj (8238a95f2258000748844444444444415) dm-13 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:5 sdae 65:224 active ready running
| |- 2:0:3:5 sdep 129:16 active ready running
| |- 1:0:3:5 sdaq 66:160 active ready running
| `- 2:0:2:5 sded 128:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:5 sdg 8:96 active ready running
|- 2:0:0:5 sddf 70:208 active ready running
|- 1:0:1:5 sds 65:32 active ready running
`- 2:0:1:5 sddr 71:144 active ready running
data15 (820060e8006d945444444d94582821832) dm-58 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:29 sdcv 70:48 active ready running
`- 2:0:5:29 sdgq 132:96 active ready running
mpathd (820060e8006d945444444d94582821f15) dm-37 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:4 sdby 68:192 active ready running
`- 2:0:5:4 sdft 130:240 active ready running
flash4 (820060e8006d945444444d94582822033) dm-44 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:9 sdcc 69:0 active ready running
`- 2:0:5:9 sdfx 131:48 active ready running
mpathp (820060e8006d945444444d94582821842) dm-24 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:45 sdbm 68:0 active ready running
`- 2:0:4:45 sdfh 130:48 active ready running
mpathai (8238a95f2258000748844444444444414) dm-9 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:4 sdf 8:80 active ready running
| |- 2:0:0:4 sdde 70:192 active ready running
| |- 1:0:1:4 sdr 65:16 active ready running
| `- 2:0:1:4 sddq 71:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:4 sdad 65:208 active ready running
|- 2:0:2:4 sdec 128:64 active ready running
|- 1:0:3:4 sdap 66:144 active ready running
`- 2:0:3:4 sdeo 129:0 active ready running
data14 (820060e8006d945444444d94582821831) dm-55 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:28 sdcu 70:32 active ready running
`- 2:0:5:28 sdgp 132:80 active ready running
mpathc (820060e8006d945444444d94582821f12) dm-41 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:1 sdbv 68:144 active ready running
`- 2:0:5:1 sdfq 130:192 active ready running
flash3 (820060e8006d945444444d94582822032) dm-42 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:8 sdcb 68:240 active ready running
`- 2:0:5:8 sdfw 131:32 active ready running
mpatho (820060e8006d945444444d9458282183a) dm-18 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:37 sdbe 67:128 active ready running
`- 2:0:4:37 sdez 129:176 active ready running
mpathah (8238a95f2258000748844444444444413) dm-5 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:3 sdac 65:192 active ready running
| |- 2:0:2:3 sdeb 128:48 active ready running
| |- 1:0:3:3 sdao 66:128 active ready running
| `- 2:0:3:3 sden 128:240 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:3 sde 8:64 active ready running
|- 2:0:0:3 sddd 70:176 active ready running
|- 1:0:1:3 sdq 65:0 active ready running
`- 2:0:1:3 sddp 71:112 active ready running
data13 (820060e8006d945444444d94582821830) dm-54 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:27 sdct 70:16 active ready running
`- 2:0:5:27 sdgo 132:64 active ready running
mpathb (820060e8006d945444444d94582821f14) dm-82 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:3 sdbx 68:176 active ready running
`- 2:0:5:3 sdfs 130:224 active ready running
mpathag (8238a95f2258000748844444444444419) dm-29 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:9 sdai 66:32 active ready running
| |- 2:0:3:9 sdet 129:80 active ready running
| |- 1:0:3:9 sdau 66:224 active ready running
| `- 2:0:2:9 sdeh 128:144 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:9 sdk 8:160 active ready running
|- 2:0:1:9 sddv 71:208 active ready running
|- 1:0:1:9 sdw 65:96 active ready running
`- 2:0:0:9 sddj 71:16 active ready running
mpathn (820060e8006d945444444d94582821844) dm-27 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:47 sdbo 68:32 active ready running
`- 2:0:4:47 sdfj 130:80 active ready running
data12 (820060e8006d945444444d9458282182f) dm-66 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:26 sdcs 70:0 active ready running
`- 2:0:5:26 sdgn 132:48 active ready running
mpatha (820060e8006d945444444d94582821f13) dm-62 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:2 sdbw 68:160 active ready running
`- 2:0:5:2 sdfr 130:208 active ready running
data9 (820060e8006d945444444d9458282181c) dm-52 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:21 sdcn 69:176 active ready running
`- 2:0:5:21 sdgi 131:224 active ready running
mpathz (820060e8006d945444444d9458282183b) dm-16 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:38 sdbf 67:144 active ready running
`- 2:0:4:38 sdfa 129:192 active ready running
mpathm (820060e8006d945444444d94582821840) dm-22 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:43 sdbk 67:224 active ready running
`- 2:0:4:43 sdff 130:16 active ready running
mpathaf (8238a95f2258000748844444444444417) dm-10 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:7 sdag 66:0 active ready running
| |- 2:0:3:7 sder 129:48 active ready running
| |- 1:0:3:7 sdas 66:192 active ready running
| `- 2:0:2:7 sdef 128:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:7 sdi 8:128 active ready running
|- 2:0:1:7 sddt 71:176 active ready running
|- 1:0:1:7 sdu 65:64 active ready running
`- 2:0:0:7 sddh 70:240 active ready running
data8 (820060e8006d945444444d9458282182c) dm-68 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:23 sdcp 69:208 active ready running
`- 2:0:5:23 sdgk 132:0 active ready running
data11 (820060e8006d945444444d9458282182e) dm-57 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:25 sdcr 69:240 active ready running
`- 2:0:5:25 sdgm 132:32 active ready running
mpathy (820060e8006d945444444d9458282183f) dm-25 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:42 sdbj 67:208 active ready running
`- 2:0:4:42 sdfe 130:0 active ready running
mpathl (820060e8006d945444444d94582821848) dm-33 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:51 sdbs 68:96 active ready running
`- 2:0:4:51 sdfn 130:144 active ready running
mpathae (8238a95f2258000748844444444444410) dm-2 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:0 sdb 8:16 active ready running
| |- 2:0:0:0 sdda 70:128 active ready running
| |- 1:0:1:0 sdn 8:208 active ready running
| `- 2:0:1:0 sddm 71:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:0 sdz 65:144 active ready running
|- 2:0:3:0 sdek 128:192 active ready running
|- 1:0:3:0 sdal 66:80 active ready running
`- 2:0:2:0 sddy 128:0 active ready running
data7 (820060e8006d945444444d9458282181d) dm-56 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:22 sdco 69:192 active ready running
`- 2:0:5:22 sdgj 131:240 active ready running
data10 (820060e8006d945444444d9458282182d) dm-53 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:24 sdcq 69:224 active ready running
`- 2:0:5:24 sdgl 132:16 active ready running
mpathaq (8238a95f22580007488444444444444b3) dm-38 IBM,2145
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:3:11 sdaw 67:0 active ready running
| |- 2:0:2:11 sdej 128:176 active ready running
| |- 1:0:2:11 sdak 66:64 active ready running
| `- 2:0:3:11 sdev 129:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:11 sdm 8:192 active ready running
|- 2:0:0:11 sddl 71:48 active ready running
|- 1:0:1:11 sdy 65:128 active ready running
`- 2:0:1:11 sddx 71:240 active ready running
mpathx (820060e8006d945444444d94582821841) dm-23 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:44 sdbl 67:240 active ready running
`- 2:0:4:44 sdfg 130:32 active ready running
mpathk (820060e8006d945444444d9458282183c) dm-19 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:39 sdbg 67:160 active ready running
`- 2:0:4:39 sdfb 129:208 active ready running
mpathad (8238a95f2258000748844444444444411) dm-3 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:1 sdaa 65:160 active ready running
| |- 2:0:3:1 sdel 128:208 active ready running
| |- 1:0:3:1 sdam 66:96 active ready running
| `- 2:0:2:1 sddz 128:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
|- 2:0:0:1 sddb 70:144 active ready running
|- 1:0:1:1 sdo 8:224 active ready running
`- 2:0:1:1 sddn 71:80 active ready running
data6 (820060e8006d945444444d9458282181b) dm-65 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:20 sdcm 69:160 active ready running
`- 2:0:5:20 sdgh 131:208 active ready running
mpathw (820060e8006d945444444d94582821849) dm-30 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:52 sdbt 68:112 active ready running
`- 2:0:4:52 sdfo 130:160 active ready running
mpathac (820060e8006d945444444d94582821843) dm-26 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:46 sdbn 68:16 active ready running
`- 2:0:4:46 sdfi 130:64 active ready running
mpathj (820060e8006d945444444d94582821838) dm-14 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:35 sdbc 67:96 active ready running
`- 2:0:4:35 sdex 129:144 active ready running
data5 (820060e8006d945444444d9458282181a) dm-48 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:19 sdcl 69:144 active ready running
`- 2:0:5:19 sdgg 131:192 active ready running
data23 (820060e8016726a828201726a8282172f) dm-8 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:57 sdba 67:64 active ready running
`- 2:0:6:57 sdgy 132:224 active ready running
mpathv (820060e8006d945444444d94582821845) dm-28 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:48 sdbp 68:48 active ready running
`- 2:0:4:48 sdfk 130:96 active ready running
mpathi (820060e8006d945444444d94582822031) dm-40 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:7 sdca 68:224 active ready running
`- 2:0:5:7 sdfv 131:16 active ready running
mpathab (820060e8006d945444444d94582821847) dm-32 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:50 sdbr 68:80 active ready running
`- 2:0:4:50 sdfm 130:128 active ready running
data4 (820060e8006d945444444d94582821819) dm-51 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:18 sdck 69:128 active ready running
`- 2:0:5:18 sdgf 131:176 active ready running
data22 (820060e8016726a828201726a8282172d) dm-7 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:55 sday 67:32 active ready running
`- 2:0:6:55 sdgw 132:192 active ready running
mpathan (8238a95f2258000748844444444444418) dm-20 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:8 sdj 8:144 active ready running
| |- 2:0:0:8 sddi 71:0 active ready running
| |- 1:0:1:8 sdv 65:80 active ready running
| `- 2:0:1:8 sddu 71:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:8 sdah 66:16 active ready running
|- 2:0:2:8 sdeg 128:128 active ready running
|- 1:0:3:8 sdat 66:208 active ready running
`- 2:0:3:8 sdes 129:64 active ready running
mpathu (820060e8006d945444444d94582821839) dm-12 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:82 sdbd 67:112 active ready running
`- 2:0:4:82 sdey 129:160 active ready running
data19 (820060e8006d945444444d94582821882) dm-61 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:33 sdcz 70:112 active ready running
`- 2:0:5:33 sdgu 132:160 active ready running
mpathh (820060e8006d945444444d94582822030) dm-39 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:6 sdbz 68:208 active ready running
`- 2:0:5:6 sdfu 131:0 active ready running
mpathaa (820060e8006d945444444d94582821837) dm-11 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:34 sdbb 67:80 active ready running
`- 2:0:4:34 sdew 129:128 active ready running
data3 (820060e8006d945444444d94582821817) dm-47 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:16 sdci 69:96 active ready running
`- 2:0:5:16 sdgd 131:144 active ready running
data21 (820060e8016726a828201726a8282172c) dm-6 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:54 sdax 67:16 active ready running
`- 2:0:6:54 sdgv 132:176 active ready running
mpatht (820060e8006d945444444d9458282183d) dm-17 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:40 sdbh 67:176 active ready running
`- 2:0:4:40 sdfc 129:224 active ready running
mpatham (8238a95f2258000748844444444444412) dm-4 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:2 sdd 8:48 active ready running
| |- 2:0:1:2 sddo 71:96 active ready running
| |- 1:0:1:2 sdp 8:240 active ready running
| `- 2:0:0:2 sddc 70:160 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:2 sdab 65:176 active ready running
|- 2:0:2:2 sdea 128:32 active ready running
|- 1:0:3:2 sdan 66:112 active ready running
`- 2:0:3:2 sdem 128:224 active ready running
mpathg (820060e8006d945444444d94582822037) dm-64 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:13 sdcf 69:48 active ready running
`- 2:0:5:13 sdga 131:96 active ready running
data18 (820060e8006d945444444d94582821835) dm-60 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:32 sdcy 70:96 active ready running
`- 2:0:5:32 sdgt 132:144 active ready running
data20 (820060e8016726a828201726a8282172e) dm-50 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:56 sdaz 67:48 active ready running
`- 2:0:6:56 sdgx 132:208 active ready running
data2 (820060e8006d945444444d94582821818) dm-49 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:17 sdcj 69:112 active ready running
`- 2:0:5:17 sdge 131:160 active ready running
mpathal (8238a95f225800074884444444444441a) dm-35 IBM,2145
size=589G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:10 sdl 8:176 active ready running
| |- 2:0:1:10 sddw 71:224 active ready running
| |- 1:0:1:10 sdx 65:112 active ready running
| `- 2:0:0:10 sddk 71:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:10 sdaj 66:48 active ready running
|- 2:0:3:10 sdeu 129:96 active ready running
|- 1:0:3:10 sdav 66:240 active ready running
`- 2:0:2:10 sdei 128:160 active ready running
mpaths (820060e8006d945444444d94582821846) dm-31 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:49 sdbq 68:64 active ready running
`- 2:0:4:49 sdfl 130:112 active ready running
mpathf (820060e8006d945444444d94582822034) dm-63 HITACHI,OPEN-V
size=275G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:10 sdcd 69:16 active ready running
`- 2:0:5:10 sdfy 131:64 active ready running
data17 (820060e8006d945444444d94582821834) dm-59 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:31 sdcx 70:80 active ready running
`- 2:0:5:31 sdgs 132:128 active ready running
data1 (820060e8006d945444444d94582821816) dm-46 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:15 sdch 69:80 active ready running
`- 2:0:5:15 sdgc 131:128 active ready running
root@xxXXxxXX [/root]
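
On the `-l` vs `-ll` question: `multipath -l` prints the map topology from cached state without probing the paths, while `multipath -ll` additionally runs the path checkers, so it shows current per-path status and priority. As for counting: each indented line carrying a Host:Channel:Target:Lun tuple (e.g. `1:0:5:41`) is one path, so paths per map and in total can be tallied with awk. A sketch over a small embedded sample so it is self-contained; on a real host you would pipe in the live output (`multipath -ll | awk '…'`) instead:

```shell
#!/bin/sh
# Tally paths per multipath map from `multipath -ll`-style output.
out=$(awk '
  # A map header starts at column 1 with the map name, then "(wwid)".
  /^[a-z]/ && $2 ~ /^\(/ { map = $1; next }
  # Each path line carries an H:C:T:L tuple like 1:0:5:41.
  / [0-9]+:[0-9]+:[0-9]+:[0-9]+ / { count[map]++; total++ }
  END {
    for (m in count) printf "%s: %d paths\n", m, count[m]
    printf "total: %d paths\n", total
  }
' <<'EOF'
mpathr (820060e8006d945444444d9458282183e) dm-21 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 1:0:5:41 sdbi 67:192 active ready running
  `- 2:0:4:41 sdfd 129:240 active ready running
EOF
)
printf '%s\n' "$out"
```

To see which filesystems sit on which maps, `lsblk` (where available) or `dmsetup ls` plus `df` will show the layering.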

Linux Native Multipath IO Timeout

My Oracle RAC 12.1.0.1 runs on ASM. ASM has an I/O timeout of 15 seconds. How do I find my Linux native multipath I/O timeout? I am running Red Hat Enterprise Linux 6, x86-64.
This is the devices section of my multipath.conf:

devices {
    device {
        vendor                     "NETAPP"
        product                    "LUN.*"
        path_grouping_policy       group_by_prio
        getuid_callout             "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector              "round-robin 0"
        path_checker               tur
        features                   "3 queue_if_no_path pg_init_retries 50"
        hardware_handler           "0"
        prio                       ontap
        failback                   immediate
        rr_weight                  uniform
        rr_min_io                  128
        rr_min_io_rq               1
        flush_on_last_del          yes
        fast_io_fail_tmo           5
        dev_loss_tmo               infinity
        retain_attached_hw_handler yes
        detect_prio                yes
    }
}
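
For the question itself: with `queue_if_no_path` in `features` and no `no_path_retry` override, I/O to a map whose paths are all down is queued indefinitely rather than failed after a timeout, which is worth checking against ASM's 15-second expectation. `fast_io_fail_tmo 5` is the per-path transport timer (I/O on a failed path is errored back to multipath after 5 s), and `dev_loss_tmo infinity` means the remote port is never removed. On a live box, `multipathd show config` prints the merged effective settings. As a small self-contained sketch, the timeout-related knobs can be pulled out of a conf fragment like the one above with awk (an embedded copy of the stanza is used here):

```shell
#!/bin/sh
# Extract timeout-related settings from a multipath.conf device stanza.
# On a live system prefer `multipathd show config` for effective values.
out=$(awk '$1 ~ /^(features|no_path_retry|fast_io_fail_tmo|dev_loss_tmo)$/ {
  key = $1; $1 = ""; sub(/^ /, ""); print key "=" $0
}' <<'EOF'
device {
    vendor "NETAPP"
    product "LUN.*"
    features "3 queue_if_no_path pg_init_retries 50"
    fast_io_fail_tmo 5
    dev_loss_tmo infinity
}
EOF
)
printf '%s\n' "$out"
```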

Thanks,
keith

Output Of "netstat -s | Egrep '(active|passive)'"?

If you run this command on your computer, how many active and passive connection openings are you typically supposed to have? Specifically, for a firewalled home PC behind a combined wifi router/cable modem. Can someone explain the purpose of this command?
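
For context: `netstat -s` prints per-protocol counters accumulated since boot, not current connections. "Active connection openings" counts outbound TCP connections your machine initiated (every web request bumps it), and "passive connection openings" counts inbound connections it accepted, so there is no single "correct" number; a firewalled home PC typically shows many active opens and very few passive ones. A sketch of the filtering over a sample of the output (the counter values below are made up for illustration; on a real host run `netstat -s` directly):

```shell
#!/bin/sh
# Pull the active/passive TCP open counters out of `netstat -s`-style text.
out=$(grep -E '(active|passive)' <<'EOF'
Tcp:
    5231 active connection openings
    12 passive connection openings
    88 failed connection attempts
EOF
)
printf '%s\n' "$out"
```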

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array: md0 is in a "clean, degraded" state, while md1 is mounted read-only and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
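
In the mdstat output above, `[U_]` on md0 means its second member has dropped out (by symmetry with md1's sda2/sdb2 pairing, that would be sdb1). Once the disk is known to be healthy, the usual fix is `mdadm /dev/md0 --add /dev/sdb1` and then watching the resync in /proc/mdstat; if sdb is actually failing, replace it first. (md1's "auto-read-only" is harmless: it flips to read-write on first write.) A degraded array can also be spotted programmatically by looking for an underscore in the `[UU]` status field, sketched here over an embedded mdstat sample:

```shell
#!/bin/sh
# Flag degraded md arrays: "_" inside the [UU] field means a missing member.
out=$(awk '
  /^md/       { dev = $1 }                      # remember the array name
  /\[[U_]+\]/ { if ($0 ~ /_/) print dev " degraded" }
' <<'EOF'
Personalities : [raid1]
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]

unused devices: <none>
EOF
)
printf '%s\n' "$out"
```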

Is There A Currently Active Linux Mint Sub-forum? Please Indicate Its Link

I found a dated link, from 2010. Is there an active one now?

Thank you

What Is The /dev/mapper Partition On Ubuntu 12.05.4 And How Did I Install It?

I am running two Linux web servers; one I am just setting up. On the other, I noticed the following partition:
Code:
>df -h
Filesystem                      Size  Used Avail Use Mounted on
/dev/mapper/Machinename-root     27G  4.1G   21G  17 /
udev                            996M   12K  996M   1 /dev
tmpfs                           202M  792K  201M   1% /run
none                            5.0M     0  5.0M   0% /run/lock
none                           1006M  224K 1006M   1% /run/shm
/dev/sda1                       228M   90M  126M  42% /boot
/home/Amy/.Private               27G  4.1G   21G  17% /home/Amy

But on the one that I am setting up, so far I get this:
Code:
>df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                    996M   12K  996M   1% /dev
tmpfs                   202M  792K  201M   1% /run
none                    5.0M     0  5.0M   0% /run/lock
none                   1006M  224K 1006M   1% /run/shm
/dev/sda1               228M   90M  126M  42% /boot

How did I end up with a /dev/mapper partition?
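
Entries under /dev/mapper are virtual block devices created by the kernel's device-mapper. The Ubuntu installer sets them up when you choose the LVM (or encrypted LVM) partitioning option, so `Machinename-root` is the logical volume `root` inside a volume group named `Machinename`; the `/home/Amy/.Private` mount is eCryptfs, from the "encrypt home directory" option. `lsblk`, `dmsetup ls`, and `lvs` will show the layering on a live system. One detail worth knowing when reading these names: a literal hyphen inside a VG or LV name is doubled in the mapper name, while a single hyphen separates VG from LV. A small sketch of decoding that convention (the helper function and example names are just for illustration):

```shell
#!/bin/sh
# Decode a /dev/mapper LVM name into VG and LV parts: a single "-"
# separates VG from LV; a real hyphen inside either name appears as "--".
decode() {
  printf '%s\n' "$1" | awk '{
    gsub(/--/, "\001")        # protect escaped hyphens with a placeholder
    split($0, part, "-")      # the lone "-" is the VG/LV separator
    gsub(/\001/, "-", part[1])
    gsub(/\001/, "-", part[2])
    printf "vg=%s lv=%s\n", part[1], part[2]
  }'
}
decode deb--srv-root        # -> vg=deb-srv lv=root
decode Machinename-root     # -> vg=Machinename lv=root
```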

Get RAID1 And LVM Back After Re Installating The OS

Hi All,
I had installed CentOS 6.6 on sda; the RAID 1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I reinstalled the OS. During the first reinstallation I selected all the mount points, including the RAID/LVM partitions, exactly as they had been mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything works if the mount points are selected carefully during reinstallation and only the right partitions are formatted.

Then I reinstalled the OS once more, but this time I did not select mount points for the RAID/LVM partitions during installation, intending to set them up manually afterwards; again I formatted only /, /others, and /var. When it booted, "cat /proc/mdstat" showed the array as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it; it gives the following error:
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted:
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Because LVM is active on /dev/md127, it will not let me stop the /dev/md127 RAID array. As I am new to RAID/LVM, I would appreciate your help to deactivate the LVM without any data loss, restart the RAID array as /dev/md0, and then reactivate the LVM setup.
Thanks in advance.
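
For reference, the usual sequence is to deactivate the volume group so the array is no longer held open, stop the array, record it in mdadm.conf under the wanted name, and reassemble. This is an illustrative sketch only: it needs root, the member devices are taken from the mdstat output above, and names should be double-checked against your own vgdisplay output before running anything.

```shell
vgchange -an data                          # close the PV so md127 is no longer busy
mdadm --stop /dev/md127
mdadm --examine --scan >> /etc/mdadm.conf  # then edit the ARRAY line to say /dev/md0
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
vgchange -ay data
# dracut -f                                # on CentOS 6, rebuild the initramfs so
                                           # the md0 name persists across reboots
```

The md127 name appears in the first place because no mdadm.conf entry matched the array at boot, so the kernel fell back to the first free high minor; persisting the ARRAY line is what makes the fix stick.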

Resize LVM Partitions

UPDATED:
I installed a Debian 7 server with LVM and the following partitions. The end product should become a mail server (Citadel) and, in time, also a web server:
Code:
 df -hT 
Filesystem                Type      Size  Used Avail Use% Mounted on
rootfs                    rootfs    322M  141M  165M  46% /
udev                      devtmpfs   10M     0   10M   0% /dev
tmpfs                     tmpfs     100M  260K  100M   1% /run
/dev/mapper/deb--srv-root ext4      322M  141M  165M  46% /
tmpfs                     tmpfs     5,0M     0  5,0M   0% /run/lock
tmpfs                     tmpfs     200M     0  200M   0% /run/shm
/dev/sda1                 ext2      228M   18M  199M   9% /boot
/dev/mapper/deb--srv-home ext4      233G  188M  221G   1% /home
/dev/mapper/deb--srv-tmp  ext4      368M   11M  339M   3% /tmp
/dev/mapper/deb--srv-usr  ext4      8,3G  481M  7,4G   6% /usr
/dev/mapper/deb--srv-var  ext4      2,8G  236M  2,4G   9% /var

 fdisk -l 
Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064033

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   524285951   261892097    5  Extended
/dev/sda5          501760   524285951   261892096   8e  Linux LVM

Disk /dev/mapper/deb--srv-root: 348 MB, 348127232 bytes
255 heads, 63 sectors/track, 42 cylinders, total 679936 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-root doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-swap_1: 2143 MB, 2143289344 bytes
255 heads, 63 sectors/track, 260 cylinders, total 4186112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-swap_1 doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-usr: 8996 MB, 8996782080 bytes
255 heads, 63 sectors/track, 1093 cylinders, total 17571840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-usr doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-var: 2998 MB, 2998927360 bytes
255 heads, 63 sectors/track, 364 cylinders, total 5857280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-var doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-tmp: 398 MB, 398458880 bytes
255 heads, 63 sectors/track, 48 cylinders, total 778240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-tmp doesn't contain a valid partition table

Disk /dev/mapper/deb--srv-home: 253.3 GB, 253289824256 bytes
255 heads, 63 sectors/track, 30794 cylinders, total 494706688 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/deb--srv-home doesn't contain a valid partition table

 pvs 
PV         VG      Fmt  Attr PSize   PFree
/dev/sda5  deb-srv lvm2 a--  249,76g    0

 lvs 
LV     VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
home   deb-srv -wi-ao-- 235,89g
root   deb-srv -wi-ao-- 332,00m
swap_1 deb-srv -wi-ao--   2,00g
tmp    deb-srv -wi-ao-- 380,00m
usr    deb-srv -wi-ao--   8,38g
var    deb-srv -wi-ao--   2,79g

Now I want to _shrink_ the home LV so I can expand my var LV. I have looked for guides but have not found a useful site so far.

I tried to find a way to do this during installation, but the installer did not seem to offer it, even though I looked around for a while.

How do I shrink home and extend var? Please be fairly specific, as I am not a pro yet.
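One possible sequence, given the sizes shown by lvs. The target sizes (200G/+10G) are illustrative. ext4 can only be shrunk offline, so /home must be unmounted first, and shrinking is the one LVM operation that can destroy data if the filesystem step is skipped, so back up before starting. The -r flag makes lvreduce/lvextend drive resize2fs themselves:

```shell
# Unmount /home so the ext4 filesystem can be shrunk offline
umount /home

# Shrink home from ~236G to 200G; -r resizes the filesystem too
# (a clean fs is required, hence the forced fsck first)
e2fsck -f /dev/deb-srv/home
lvreduce -r -L 200G /dev/deb-srv/home

# Grow var by 10G using the freed extents; ext4 grows online,
# so /var can stay mounted
lvextend -r -L +10G /dev/deb-srv/var

mount /home
```

Any split works, as long as home is shrunk before var is grown by the corresponding amount.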

Allocating Space To Mounts

My real objective is to wipe the existing nodes of a Hadoop cluster, reclaim the space, and do a fresh installation of another distro.

Below is the disk usage of one of the masters (the namenode):

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  739M  1.2G  39% /
tmpfs                  24G     0   24G   0% /dev/shm
/dev/mapper/mpathap1  194M   65M  120M  36% /boot
/dev/mapper/vg00-home
                      248M   12M  224M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   79M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.6G  305M  85% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  820M  1.1G  43% /var
/dev/mapper/vg00-FSImage
                      917G  3.3G  867G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  200M  870G   1% /opt/hadoop-Zookeeper

And one of the slaves (a datanode; the others also have 4-8 local disks and some mounted NFS drives):

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  411M  1.5G  22% /
tmpfs                  36G     0   36G   0% /dev/shm
/dev/mapper/mpathap1  194M   67M  118M  37% /boot
/dev/mapper/vg00-home
                      248M   11M  226M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   82M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.2G  690M  64% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  1.5G  392M  80% /var
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

During the new installation, I ran into space issues on all the nodes/hosts:

Code:
Not enough disk space on host (l1032lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1033lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1034lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1035lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.

Now the installation requires a lot of space under the /, /usr, /var and /home mounts, which is short on both the master and the slaves, but there is plenty of space on the master in these two volumes:

Code:
/dev/mapper/vg00-FSImage
                      917G  200M  870G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  6.5G  864G   1% /opt/hadoop-Zookeeper

and on the slaves in these eight volumes:

Code:
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

I'm a bit confused about how to proceed. I was wondering if I can repartition one or more of these volumes, allocate the space to /, /usr, etc., and proceed with the installation - but will that repartitioning and mounting/unmounting corrupt the existing /, /usr, etc. mounts?
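Since everything shown in df already lives in the single volume group vg00, no repartitioning should be needed and the other mounts are not touched: removing (or shrinking) the Hadoop LVs returns their extents to vg00, and the small LVs can then be grown online. A sketch for one slave, assuming the filesystems are ext3/ext4 (resize2fs-compatible), that /dev/mapper/vg00-00 corresponds to LV "00" in VG vg00, and that the hadoop-NN contents are no longer needed (this destroys them):

```shell
# Free the extents held by one of the Hadoop data volumes
umount /opt/hadoop-00
lvremove -y /dev/vg00/00    # LV "00" in vg00, per /dev/mapper/vg00-00

# Grow the mounts the installer complained about; ext4 grows
# online, so no downtime is needed (-r runs resize2fs)
lvextend -r -L +2G /dev/vg00/usr
lvextend -r -L +4G /dev/vg00/root
```

The sizes here are just enough to satisfy the quoted minimums; with ~559G freed per volume there is room to be generous.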

Xubuntu MBR Partitioning Question

I was wondering if I can repair my current partitioning setup using GParted, or if I should just reinstall Xubuntu. Basically I made a mistake: the primary partition is only 256M, everything else went into one massive extended logical partition, and I left no swap space. This is an older PC with MBR: dual processor, 2G RAM per processor, 160GB of hard drive space. It is single boot, no Windows. I would like the partitioning to be as follows, leaving empty disk space for other Linux flavors:

/ 13GB ext4
/home 50GB ext4
swap 8GB swap

sudo parted /dev/sda print all
Code:
Model: ATA ST3160812AS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type      File system  Flags
 1      1049kB  256MB  255MB  primary   ext2         boot
 2      257MB   160GB  160GB  extended
 5      257MB   160GB  160GB  logical
                                                                       
Error: /dev/mapper/xubuntu--vg-swap_1: unrecognised disk label

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/xubuntu--vg-root: 158GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  158GB  158GB  ext4
                                                                          
Error: /dev/mapper/sda5_crypt: unrecognised disk label

df -hT
Code:
Filesystem                   Type      Size  Used Avail Use% Mounted on
/dev/mapper/xubuntu--vg-root ext4      145G  6.5G  131G   5% /
none                         tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         devtmpfs  989M  4.0K  989M   1% /dev
tmpfs                        tmpfs     201M  1.1M  200M   1% /run
none                         tmpfs     5.0M     0  5.0M   0% /run/lock
none                         tmpfs    1003M   88K 1003M   1% /run/shm
none                         tmpfs     100M   24K  100M   1% /run/user
/dev/sda1                    ext2      236M  120M  104M  54% /boot
/home/mbrk/.Private          ecryptfs  145G  6.5G  131G   5% /home/mbrk
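The df and parted output show the root filesystem already sits in LVM on top of the encrypted sda5_crypt, so the layout can be reshaped without touching the MBR partitions: shrink the root LV and carve /home and swap out of the freed space inside the VG. Root cannot be shrunk while mounted, so this has to be done from a live USB after unlocking the encrypted PV. A hedged sketch, with sizes from the wish list above and LV/VG names inferred from the device-mapper names shown:

```shell
# From a live session: unlock the encrypted PV, activate the VG
cryptsetup open /dev/sda5 sda5_crypt
vgchange -ay xubuntu-vg

# Shrink root to ~13G; -r drives resize2fs (offline shrink)
e2fsck -f /dev/xubuntu-vg/root
lvreduce -r -L 13G /dev/xubuntu-vg/root

# Create /home and enlarge swap from the freed extents
lvcreate -L 50G -n home xubuntu-vg
mkfs.ext4 /dev/xubuntu-vg/home
lvextend -L 8G /dev/xubuntu-vg/swap_1
mkswap /dev/xubuntu-vg/swap_1
```

Afterwards /etc/fstab needs entries for the new /home and the re-made swap. One caveat: this reclaims space inside the encrypted VG, not unencrypted raw-disk space; freeing disk space for other distros would also mean shrinking the LUKS container and sda5, which GParted cannot do, so for that goal a reinstall with a smaller encrypted partition may genuinely be simpler.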