Need Help Regarding Multipaths On RHEL

Hello Sirs,

In the output below, some of the devices show four links per path group and others show two. Why is that? How do I find out how many paths are connected and in use in total? We are using native multipathing. How do I know which file systems are on which devices? Please explain, or direct me to a link where I can learn about multipathing. Also, please explain the difference between "multipath -ll" and "multipath -l".




root@xxXXxxXX [/root]
# multipath -ll
mpathr (820060e8006d945444444d9458282183e) dm-21 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:41 sdbi 67:192 active ready running
`- 2:0:4:41 sdfd 129:240 active ready running
mpathak (8238a95f2258000748844444444444416) dm-15 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:6 sdh 8:112 active ready running
| |- 2:0:1:6 sdds 71:160 active ready running
| |- 1:0:1:6 sdt 65:48 active ready running
| `- 2:0:0:6 sddg 70:224 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:6 sdaf 65:240 active ready running
|- 2:0:3:6 sdeq 129:32 active ready running
|- 1:0:3:6 sdar 66:176 active ready running
`- 2:0:2:6 sdee 128:96 active ready running
data16 (820060e8006d945444444d94582821833) dm-67 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:30 sdcw 70:64 active ready running
`- 2:0:5:30 sdgr 132:112 active ready running
mpathe (820060e8006d945444444d94582821f17) dm-45 HITACHI,OPEN-V
size=35G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:12 sdce 69:32 active ready running
`- 2:0:5:12 sdfz 131:80 active ready running
data0 (820060e8006d945444444d94582821815) dm-43 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:14 sdcg 69:64 active ready running
`- 2:0:5:14 sdgb 131:112 active ready running
mpathq (820060e8006d945444444d9458282184a) dm-34 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:53 sdbu 68:128 active ready running
`- 2:0:4:53 sdfp 130:176 active ready running
mpathaj (8238a95f2258000748844444444444415) dm-13 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:5 sdae 65:224 active ready running
| |- 2:0:3:5 sdep 129:16 active ready running
| |- 1:0:3:5 sdaq 66:160 active ready running
| `- 2:0:2:5 sded 128:80 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:5 sdg 8:96 active ready running
|- 2:0:0:5 sddf 70:208 active ready running
|- 1:0:1:5 sds 65:32 active ready running
`- 2:0:1:5 sddr 71:144 active ready running
data15 (820060e8006d945444444d94582821832) dm-58 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:29 sdcv 70:48 active ready running
`- 2:0:5:29 sdgq 132:96 active ready running
mpathd (820060e8006d945444444d94582821f15) dm-37 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:4 sdby 68:192 active ready running
`- 2:0:5:4 sdft 130:240 active ready running
flash4 (820060e8006d945444444d94582822033) dm-44 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:9 sdcc 69:0 active ready running
`- 2:0:5:9 sdfx 131:48 active ready running
mpathp (820060e8006d945444444d94582821842) dm-24 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:45 sdbm 68:0 active ready running
`- 2:0:4:45 sdfh 130:48 active ready running
mpathai (8238a95f2258000748844444444444414) dm-9 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:4 sdf 8:80 active ready running
| |- 2:0:0:4 sdde 70:192 active ready running
| |- 1:0:1:4 sdr 65:16 active ready running
| `- 2:0:1:4 sddq 71:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:4 sdad 65:208 active ready running
|- 2:0:2:4 sdec 128:64 active ready running
|- 1:0:3:4 sdap 66:144 active ready running
`- 2:0:3:4 sdeo 129:0 active ready running
data14 (820060e8006d945444444d94582821831) dm-55 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:28 sdcu 70:32 active ready running
`- 2:0:5:28 sdgp 132:80 active ready running
mpathc (820060e8006d945444444d94582821f12) dm-41 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:1 sdbv 68:144 active ready running
`- 2:0:5:1 sdfq 130:192 active ready running
flash3 (820060e8006d945444444d94582822032) dm-42 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:8 sdcb 68:240 active ready running
`- 2:0:5:8 sdfw 131:32 active ready running
mpatho (820060e8006d945444444d9458282183a) dm-18 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:37 sdbe 67:128 active ready running
`- 2:0:4:37 sdez 129:176 active ready running
mpathah (8238a95f2258000748844444444444413) dm-5 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:3 sdac 65:192 active ready running
| |- 2:0:2:3 sdeb 128:48 active ready running
| |- 1:0:3:3 sdao 66:128 active ready running
| `- 2:0:3:3 sden 128:240 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:3 sde 8:64 active ready running
|- 2:0:0:3 sddd 70:176 active ready running
|- 1:0:1:3 sdq 65:0 active ready running
`- 2:0:1:3 sddp 71:112 active ready running
data13 (820060e8006d945444444d94582821830) dm-54 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:27 sdct 70:16 active ready running
`- 2:0:5:27 sdgo 132:64 active ready running
mpathb (820060e8006d945444444d94582821f14) dm-82 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:3 sdbx 68:176 active ready running
`- 2:0:5:3 sdfs 130:224 active ready running
mpathag (8238a95f2258000748844444444444419) dm-29 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:9 sdai 66:32 active ready running
| |- 2:0:3:9 sdet 129:80 active ready running
| |- 1:0:3:9 sdau 66:224 active ready running
| `- 2:0:2:9 sdeh 128:144 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:9 sdk 8:160 active ready running
|- 2:0:1:9 sddv 71:208 active ready running
|- 1:0:1:9 sdw 65:96 active ready running
`- 2:0:0:9 sddj 71:16 active ready running
mpathn (820060e8006d945444444d94582821844) dm-27 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:47 sdbo 68:32 active ready running
`- 2:0:4:47 sdfj 130:80 active ready running
data12 (820060e8006d945444444d9458282182f) dm-66 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:26 sdcs 70:0 active ready running
`- 2:0:5:26 sdgn 132:48 active ready running
mpatha (820060e8006d945444444d94582821f13) dm-62 HITACHI,OPEN-V
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:2 sdbw 68:160 active ready running
`- 2:0:5:2 sdfr 130:208 active ready running
data9 (820060e8006d945444444d9458282181c) dm-52 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:21 sdcn 69:176 active ready running
`- 2:0:5:21 sdgi 131:224 active ready running
mpathz (820060e8006d945444444d9458282183b) dm-16 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:38 sdbf 67:144 active ready running
`- 2:0:4:38 sdfa 129:192 active ready running
mpathm (820060e8006d945444444d94582821840) dm-22 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:43 sdbk 67:224 active ready running
`- 2:0:4:43 sdff 130:16 active ready running
mpathaf (8238a95f2258000748844444444444417) dm-10 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:7 sdag 66:0 active ready running
| |- 2:0:3:7 sder 129:48 active ready running
| |- 1:0:3:7 sdas 66:192 active ready running
| `- 2:0:2:7 sdef 128:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:7 sdi 8:128 active ready running
|- 2:0:1:7 sddt 71:176 active ready running
|- 1:0:1:7 sdu 65:64 active ready running
`- 2:0:0:7 sddh 70:240 active ready running
data8 (820060e8006d945444444d9458282182c) dm-68 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:23 sdcp 69:208 active ready running
`- 2:0:5:23 sdgk 132:0 active ready running
data11 (820060e8006d945444444d9458282182e) dm-57 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:25 sdcr 69:240 active ready running
`- 2:0:5:25 sdgm 132:32 active ready running
mpathy (820060e8006d945444444d9458282183f) dm-25 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:42 sdbj 67:208 active ready running
`- 2:0:4:42 sdfe 130:0 active ready running
mpathl (820060e8006d945444444d94582821848) dm-33 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:51 sdbs 68:96 active ready running
`- 2:0:4:51 sdfn 130:144 active ready running
mpathae (8238a95f2258000748844444444444410) dm-2 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:0 sdb 8:16 active ready running
| |- 2:0:0:0 sdda 70:128 active ready running
| |- 1:0:1:0 sdn 8:208 active ready running
| `- 2:0:1:0 sddm 71:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:0 sdz 65:144 active ready running
|- 2:0:3:0 sdek 128:192 active ready running
|- 1:0:3:0 sdal 66:80 active ready running
`- 2:0:2:0 sddy 128:0 active ready running
data7 (820060e8006d945444444d9458282181d) dm-56 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:22 sdco 69:192 active ready running
`- 2:0:5:22 sdgj 131:240 active ready running
data10 (820060e8006d945444444d9458282182d) dm-53 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:24 sdcq 69:224 active ready running
`- 2:0:5:24 sdgl 132:16 active ready running
mpathaq (8238a95f22580007488444444444444b3) dm-38 IBM,2145
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:3:11 sdaw 67:0 active ready running
| |- 2:0:2:11 sdej 128:176 active ready running
| |- 1:0:2:11 sdak 66:64 active ready running
| `- 2:0:3:11 sdev 129:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:11 sdm 8:192 active ready running
|- 2:0:0:11 sddl 71:48 active ready running
|- 1:0:1:11 sdy 65:128 active ready running
`- 2:0:1:11 sddx 71:240 active ready running
mpathx (820060e8006d945444444d94582821841) dm-23 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:44 sdbl 67:240 active ready running
`- 2:0:4:44 sdfg 130:32 active ready running
mpathk (820060e8006d945444444d9458282183c) dm-19 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:39 sdbg 67:160 active ready running
`- 2:0:4:39 sdfb 129:208 active ready running
mpathad (8238a95f2258000748844444444444411) dm-3 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:2:1 sdaa 65:160 active ready running
| |- 2:0:3:1 sdel 128:208 active ready running
| |- 1:0:3:1 sdam 66:96 active ready running
| `- 2:0:2:1 sddz 128:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:0:1 sdc 8:32 active ready running
|- 2:0:0:1 sddb 70:144 active ready running
|- 1:0:1:1 sdo 8:224 active ready running
`- 2:0:1:1 sddn 71:80 active ready running
data6 (820060e8006d945444444d9458282181b) dm-65 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:20 sdcm 69:160 active ready running
`- 2:0:5:20 sdgh 131:208 active ready running
mpathw (820060e8006d945444444d94582821849) dm-30 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:52 sdbt 68:112 active ready running
`- 2:0:4:52 sdfo 130:160 active ready running
mpathac (820060e8006d945444444d94582821843) dm-26 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:46 sdbn 68:16 active ready running
`- 2:0:4:46 sdfi 130:64 active ready running
mpathj (820060e8006d945444444d94582821838) dm-14 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:35 sdbc 67:96 active ready running
`- 2:0:4:35 sdex 129:144 active ready running
data5 (820060e8006d945444444d9458282181a) dm-48 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:19 sdcl 69:144 active ready running
`- 2:0:5:19 sdgg 131:192 active ready running
data23 (820060e8016726a828201726a8282172f) dm-8 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:57 sdba 67:64 active ready running
`- 2:0:6:57 sdgy 132:224 active ready running
mpathv (820060e8006d945444444d94582821845) dm-28 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:48 sdbp 68:48 active ready running
`- 2:0:4:48 sdfk 130:96 active ready running
mpathi (820060e8006d945444444d94582822031) dm-40 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:7 sdca 68:224 active ready running
`- 2:0:5:7 sdfv 131:16 active ready running
mpathab (820060e8006d945444444d94582821847) dm-32 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:50 sdbr 68:80 active ready running
`- 2:0:4:50 sdfm 130:128 active ready running
data4 (820060e8006d945444444d94582821819) dm-51 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:18 sdck 69:128 active ready running
`- 2:0:5:18 sdgf 131:176 active ready running
data22 (820060e8016726a828201726a8282172d) dm-7 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:55 sday 67:32 active ready running
`- 2:0:6:55 sdgw 132:192 active ready running
mpathan (8238a95f2258000748844444444444418) dm-20 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:8 sdj 8:144 active ready running
| |- 2:0:0:8 sddi 71:0 active ready running
| |- 1:0:1:8 sdv 65:80 active ready running
| `- 2:0:1:8 sddu 71:192 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:8 sdah 66:16 active ready running
|- 2:0:2:8 sdeg 128:128 active ready running
|- 1:0:3:8 sdat 66:208 active ready running
`- 2:0:3:8 sdes 129:64 active ready running
mpathu (820060e8006d945444444d94582821839) dm-12 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:82 sdbd 67:112 active ready running
`- 2:0:4:82 sdey 129:160 active ready running
data19 (820060e8006d945444444d94582821882) dm-61 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:33 sdcz 70:112 active ready running
`- 2:0:5:33 sdgu 132:160 active ready running
mpathh (820060e8006d945444444d94582822030) dm-39 HITACHI,OPEN-V
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:6 sdbz 68:208 active ready running
`- 2:0:5:6 sdfu 131:0 active ready running
mpathaa (820060e8006d945444444d94582821837) dm-11 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:34 sdbb 67:80 active ready running
`- 2:0:4:34 sdew 129:128 active ready running
data3 (820060e8006d945444444d94582821817) dm-47 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:16 sdci 69:96 active ready running
`- 2:0:5:16 sdgd 131:144 active ready running
data21 (820060e8016726a828201726a8282172c) dm-6 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:54 sdax 67:16 active ready running
`- 2:0:6:54 sdgv 132:176 active ready running
mpatht (820060e8006d945444444d9458282183d) dm-17 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:40 sdbh 67:176 active ready running
`- 2:0:4:40 sdfc 129:224 active ready running
mpatham (8238a95f2258000748844444444444412) dm-4 IBM,2145
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:2 sdd 8:48 active ready running
| |- 2:0:1:2 sddo 71:96 active ready running
| |- 1:0:1:2 sdp 8:240 active ready running
| `- 2:0:0:2 sddc 70:160 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:2 sdab 65:176 active ready running
|- 2:0:2:2 sdea 128:32 active ready running
|- 1:0:3:2 sdan 66:112 active ready running
`- 2:0:3:2 sdem 128:224 active ready running
mpathg (820060e8006d945444444d94582822037) dm-64 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:13 sdcf 69:48 active ready running
`- 2:0:5:13 sdga 131:96 active ready running
data18 (820060e8006d945444444d94582821835) dm-60 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:32 sdcy 70:96 active ready running
`- 2:0:5:32 sdgt 132:144 active ready running
data20 (820060e8016726a828201726a8282172e) dm-50 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:4:56 sdaz 67:48 active ready running
`- 2:0:6:56 sdgx 132:208 active ready running
data2 (820060e8006d945444444d94582821818) dm-49 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:17 sdcj 69:112 active ready running
`- 2:0:5:17 sdge 131:160 active ready running
mpathal (8238a95f225800074884444444444441a) dm-35 IBM,2145
size=589G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 1:0:0:10 sdl 8:176 active ready running
| |- 2:0:1:10 sddw 71:224 active ready running
| |- 1:0:1:10 sdx 65:112 active ready running
| `- 2:0:0:10 sddk 71:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 1:0:2:10 sdaj 66:48 active ready running
|- 2:0:3:10 sdeu 129:96 active ready running
|- 1:0:3:10 sdav 66:240 active ready running
`- 2:0:2:10 sdei 128:160 active ready running
mpaths (820060e8006d945444444d94582821846) dm-31 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:5:49 sdbq 68:64 active ready running
`- 2:0:4:49 sdfl 130:112 active ready running
mpathf (820060e8006d945444444d94582822034) dm-63 HITACHI,OPEN-V
size=275G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:10 sdcd 69:16 active ready running
`- 2:0:5:10 sdfy 131:64 active ready running
data17 (820060e8006d945444444d94582821834) dm-59 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:31 sdcx 70:80 active ready running
`- 2:0:5:31 sdgs 132:128 active ready running
data1 (820060e8006d945444444d94582821816) dm-46 HITACHI,OPEN-V
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:6:15 sdch 69:80 active ready running
`- 2:0:5:15 sdgc 131:128 active ready running
root@xxXXxxXX [/root]
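For what it's worth, the path-count difference above is expected: each IBM 2145 LUN shows eight paths split across two priority groups, while each Hitachi OPEN-V LUN shows two, because the count depends on how many HBA-to-array-port combinations the SAN zoning presents for a given LUN. The total path count can be pulled out of saved output, since every path line carries a Host:Channel:Target:LUN tuple followed by an sd device; the two sample lines below stand in for a real capture. As for the two commands: "multipath -l" prints the topology from cached sysfs/device-mapper state only, while "multipath -ll" additionally runs the path checkers, so it reflects current path health.

```shell
# Sketch: count path lines in saved "multipath -ll" output.
# A two-line sample stands in for the real capture; on the host you
# would first run: multipath -ll > /tmp/mpath.out
cat > /tmp/mpath.out <<'EOF'
  |- 1:0:5:41 sdbi 67:192 active ready running
  `- 2:0:4:41 sdfd 129:240 active ready running
EOF
# Each path line has an H:C:T:L tuple followed by an sd device name:
grep -Ec '[0-9]+:[0-9]+:[0-9]+:[0-9]+ +sd[a-z]+' /tmp/mpath.out   # -> 2
```

To map file systems to devices, "df" together with "ls -l /dev/mapper" (or "lsblk" where available) shows which mounts sit on which dm devices.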


Similar Content



Map Allocated Space From Storage Device To Existing Partition

Dear Team,

I just received some new space from storage and verified with the multipath command that it is allocated on the server. Now I want to resize the existing partition. Please help me with this.

Below is the multipath command output:

multipath -ll
app2 (360060e80105ed650057075f500000011) dm-2 HITACHI,DF600F
[size=400G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:1:17 sdf 8:80 [active][ready]
\_ 2:0:1:17 sdi 8:128 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:17 sdg 8:96 [active][ready]
\_ 2:0:0:17 sdh 8:112 [active][ready]
app1 (360060e80105ed650057075f50000000a) dm-0 HITACHI,DF600F
[size=600G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
\_ 1:0:0:10 sdb 8:16 [active][ready]
\_ 2:0:0:10 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:1:10 sdc 8:32 [active][ready]
\_ 2:0:1:10 sde 8:64 [active][ready]

Here 600 GB is already in use, and I want to add another 400 GB to the same existing partition.

Below are the partition details:

df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 97G 9.9G 82G 11% /
/dev/sda5 49G 180M 46G 1% /backup
/dev/sda2 97G 21G 72G 23% /opt
/dev/sda1 99M 12M 83M 13% /boot
tmpfs 16G 0 16G 0% /dev/shm
/dev/mapper/app1p1 591G 117G 445G 21% /app

ls /dev/mapper/
app1 app1p1 app2 control

Please guide me on how I can add the 400 GB of space to the existing 600 GB /app partition.

Many Thanks !!
Jignesh Dholakiya
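Just a sketch, not thread advice: /app lives on /dev/mapper/app1p1, a partition carved directly on the app1 multipath device, while the new 400 GB arrived as a separate LUN (app2), so the extra space cannot simply be appended to that partition. The usual way to pool two LUNs into one growable file system is LVM. The block below only prints the commands it would run (a dry-run wrapper); the names appvg/applv and the ext3 file system are placeholders, and it presumes the existing /app data is backed up and copied over afterwards, which is not shown.

```shell
# Dry-run sketch: one way to pool app1 + app2 under LVM (prints only).
# appvg/applv are hypothetical names; migrating the existing /app data
# into the new logical volume is a separate step not shown here.
run() { echo "+ $*"; }                  # replace the body with "$@" to execute
run pvcreate /dev/mapper/app1 /dev/mapper/app2
run vgcreate appvg /dev/mapper/app1 /dev/mapper/app2
run lvcreate -l 100%FREE -n applv appvg
run mkfs.ext3 /dev/appvg/applv          # file-system type is an assumption
run mount /dev/appvg/applv /app
```

If /app had already been on LVM, the simpler path would have been pvcreate on app2, vgextend, lvextend -l +100%FREE, and resize2fs.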

Linux Os Clustering?

Hi,
I am new to linuxquestion.org; however, I am a mid-level Linux administrator.

In my scenario, I want to cluster Python and Java services across two nodes running RHEL 6.4. So far, the tutorials I have watched cover application-level clustering on Linux, e.g. web servers, MySQL databases, and so on; I haven't found any that cover Python and Java clustering.
I am already familiar with Windows clustering, where the two nodes share a clustered IP. In my scenario both options are open: active-active or active-passive. So I would like some idea of how to achieve Linux OS clustering and thereby obtain a clustered IP.
Also, how does a service float from one node to another? And I cannot get my head around the fencing mechanism in Linux; any explanation of that would be helpful.

Hope to get a positive response ASAP. Thanks in advance to those who help me with this.
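A sketch of how this usually looks with Pacemaker (one of the cluster stacks available for RHEL 6): the "clustered IP" is just an IPaddr2 resource that the cluster moves between nodes, and any daemon with an init script (your Python/Java services) can be wrapped as an lsb resource; when a node fails, the resources "float" to the survivor. Fencing (STONITH) is the mechanism that power-cycles an unresponsive node, typically through its IPMI/iLO/DRAC interface, so it cannot keep writing to shared resources while the other node takes over. The commands below are only printed, not executed; vip, 192.168.1.50, myapp, and the fencing credentials are placeholders, and pcs availability should be checked on your exact minor release.

```shell
# Dry-run sketch (prints only): floating IP + init-script service in Pacemaker.
run() { echo "+ $*"; }                  # replace the body with "$@" to execute
# Floating "clustered IP" that moves between nodes with the service:
run pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.1.50 cidr_netmask=24
# Any daemon with an init script can be clustered via the lsb class:
run pcs resource create myapp lsb:myapp
# Keep the service on the same node as its IP, and start the IP first:
run pcs constraint colocation add myapp with vip
run pcs constraint order vip then myapp
# Fencing via IPMI (all values here are placeholders):
run pcs stonith create fence1 fence_ipmilan ipaddr=10.0.0.1 login=admin passwd=secret pcmk_host_list=node1
```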

Is There A Currently Active Linux Mint Sub-forum? Please Indicate Its Link

I found a link, but it is dated (from 2010). Is there an active one now?

Thank you

Linux Native Multipath IO Timeout

My Oracle RAC 12.1.0.1 runs on ASM. ASM has an I/O timeout of 15 seconds. How do I find my Linux native multipath I/O timeout? I am running Red Hat Enterprise Linux 6, x86-64.
This is the device section of my multipath.conf:

devices {
    device {
        vendor "NETAPP"
        product "LUN.*"
        path_grouping_policy group_by_prio
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector "round-robin 0"
        path_checker tur
        features "3 queue_if_no_path pg_init_retries 50"
        hardware_handler "0"
        prio ontap
        failback immediate
        rr_weight uniform
        rr_min_io 128
        rr_min_io_rq 1
        flush_on_last_del yes
        fast_io_fail_tmo 5
        dev_loss_tmo infinity
        retain_attached_hw_handler yes
        detect_prio yes
    }
}

Thanks,
keith
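For what it's worth, with this configuration there is no single multipath I/O timeout to compare with ASM's 15 seconds: fast_io_fail_tmo 5 is how long after a transport problem the FC layer fails outstanding I/O back to the multipath layer, dev_loss_tmo infinity keeps a lost remote port around indefinitely, and the queue_if_no_path feature means that once all paths are down, I/O queues forever rather than erroring out, which can easily exceed 15 seconds. The effective values on a running system can be shown with "multipathd show config" (via the interactive "multipathd -k" shell on RHEL 6). A tiny sketch pulling the *_tmo knobs out of a conf fragment:

```shell
# Sketch: extract the timeout-related knobs from a multipath.conf fragment.
# The two lines below are copied from the device section in the post.
cat > /tmp/devsection.conf <<'EOF'
fast_io_fail_tmo 5
dev_loss_tmo infinity
EOF
awk '$1 ~ /_tmo$/ {print $1 "=" $2}' /tmp/devsection.conf
```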

Output Of "netstat -s | Egrep '(active|passive)'"?

If you run this command on your computer, how many active and passive connection openings are you typically supposed to have? Specifically, for a firewalled home PC behind a wifi router/cable modem combination. Can someone explain the purpose of this command?
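Those counters come from the TCP section of "netstat -s" and are cumulative since boot, so there is no single number you are "supposed" to have: active connection openings counts outbound TCP connects your machine initiated (ordinary browsing grows this steadily), while passive connection openings counts inbound connections your machine accepted, which on a firewalled home PC typically stays near zero unless local services accept connections. A sketch filtering the two lines from sample output (the figures below are made up):

```shell
# Sketch: pull the two counters out of (sample) "netstat -s" output.
# On a real host you would pipe netstat -s directly instead of using a file.
cat > /tmp/netstat.out <<'EOF'
Tcp:
    1523 active connections openings
    4 passive connection openings
EOF
grep -E '(active|passive) connection' /tmp/netstat.out
```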

Update Manager Doesn't Work With Linux Mint 17.1 Mate

This is what I get after running "inxi -r":
Repos: Active apt sources in file: /etc/apt/sources.list.d/clipgrab-team-ppa-trusty.list
deb http://ppa.launchpad.net/clipgrab-team/ppa/ubuntu trusty main
deb-src http://ppa.launchpad.net/clipgrab-team/ppa/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/ferramroberto-lffl-trusty.list
deb http://ppa.launchpad.net/ferramroberto/lffl/ubuntu trusty main
deb-src http://ppa.launchpad.net/ferramroberto/lffl/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/getdeb.list
deb http://archive.getdeb.net/ubuntu trusty-getdeb apps
Active apt sources in file: /etc/apt/sources.list.d/gnome3-team-gnome3-trusty.list
deb http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu trusty main
deb-src http://ppa.launchpad.net/gnome3-team/gnome3/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/kalakris-okular-trusty.list
deb http://ppa.launchpad.net/kalakris/okular/ubuntu trusty main
deb-src http://ppa.launchpad.net/kalakris/okular/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/kilian-f_lux-trusty.list
deb http://ppa.launchpad.net/kilian/f.lux/ubuntu trusty main
deb-src http://ppa.launchpad.net/kilian/f.lux/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/libreoffice-ppa-trusty.list
deb http://ppa.launchpad.net/libreoffice/ppa/ubuntu trusty main
deb-src http://ppa.launchpad.net/libreoffice/ppa/ubuntu trusty main
Active apt sources in file: /etc/apt/sources.list.d/official-package-repositories.list
deb http://packages.linuxmint.com rebecca main upstream import
deb http://extra.linuxmint.com rebecca main
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu trusty-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ trusty-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ trusty partner
Active apt sources in file: /etc/apt/sources.list.d/ricotz-testing-trusty.list
deb http://ppa.launchpad.net/ricotz/testing/ubuntu trusty main
deb-src http://ppa.launchpad.net/ricotz/testing/ubuntu trusty main

Thanks in advance, ScottG

Degraded RAID 1 Mdadm

I have a degraded software RAID 1 array. md0 is in a state of "clean, degraded", while md1 is active (auto-read-only) and clean. I'm not sure how to go about fixing this. Any ideas?

cat /proc/mdstat
Code:
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      3909620 blocks super 1.2 [2/2] [UU]
      
md0 : active raid1 sda1[0]
      972849016 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

mdadm -D /dev/md0
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jun 21 21:31:58 2011
     Raid Level : raid1
     Array Size : 972849016 (927.78 GiB 996.20 GB)
  Used Dev Size : 972849016 (927.78 GiB 996.20 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jun  2 02:21:12 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 3678064

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

mdadm -D /dev/md1
Code:
/dev/md1:
        Version : 1.2
  Creation Time : Tue Jun 21 21:32:09 2011
     Raid Level : raid1
     Array Size : 3909620 (3.73 GiB 4.00 GB)
  Used Dev Size : 3909620 (3.73 GiB 4.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:17:56 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 
           UUID : 
         Events : 116

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
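The mdstat above shows md0 running on /dev/sda1 only, with the second member removed, while md1 still has both sda2 and sdb2, so the sdb disk itself is present. The usual fix, sketched below as a dry run (commands are printed, not executed), is to verify the disk's health and add the missing partition back so the mirror resyncs; /dev/sdb1 is inferred from the partition layout and should be double-checked first.

```shell
# Dry-run sketch (prints only): restore md0's missing mirror half.
# /dev/sdb1 is inferred from the layout in the post; verify before running.
run() { echo "+ $*"; }              # replace the body with "$@" to execute
run smartctl -a /dev/sdb            # check the disk before trusting it again
run mdadm /dev/md0 --add /dev/sdb1  # adds the member and triggers a resync
run cat /proc/mdstat                # watch the rebuild progress
```

md1's "active (auto-read-only)" state is harmless; it switches to read-write on the first write, or immediately with "mdadm --readwrite /dev/md1".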

Get RAID1 And LVM Back After Re Installating The OS

Hi All,
I had installed CentOS 6.6 on sda; the RAID 1 and LVM setup was on sdb and sdc. To practice recovering RAID and LVM after an OS reinstallation, I reinstalled the OS. During the first reinstallation, I selected all the mount points, including the RAID/LVM partitions, exactly as they were mounted before, but chose to format only /, /others, and /var. After booting, /dev/md0 and the LVM partitions were activated automatically and everything was mounted properly, with no data loss on the RAID/LVM partitions. So I confirmed that everything works if the mount points are selected carefully during reinstallation and only the right partitions are formatted.

Then I thought of reinstalling the OS once again, but this time I did not select mount points for the RAID/LVM partitions during the reinstallation, intending to set them up manually afterwards; I only selected /, /others, and /var to be formatted. When it booted, I ran "cat /proc/mdstat", but the array came up as /dev/md127 (auto-read-only) instead of /dev/md0.
Code:
# cat /proc/mdstat 
Personalities : [raid1] 
md127 : active (auto-read-only) raid1 sdc[1] sdb[0]
      52396032 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

So now I want to stop this RAID array and restart it as /dev/md0, but I am not able to stop it; it gives the following error.
Code:
# mdadm --stop --force /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that none of the RAID/LVM partitions are mounted.
Code:
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        15G  3.5G   11G  26% /
tmpfs           376M     0  376M   0% /dev/shm
/dev/sda2       4.7G  9.8M  4.5G   1% /others
/dev/sda3       2.9G  133M  2.6G   5% /var

But LVM is active
Code:
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               data
  PV Size               49.97 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12791
  Free PE               5111
  Allocated PE          7680
  PV UUID               IJ2br8-SWHW-cf1d-89Fr-EEw9-IJME-1BpfSj
   
# vgdisplay 
  --- Volume group ---
  VG Name               data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  19
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.96 GiB
  PE Size               4.00 MiB
  Total PE              12791
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       5111 / 19.96 GiB
  VG UUID               982ay8-ljWY-kiPB-JY7F-pIu2-87uN-iplPEQ
   
# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/data/home
  LV Name                home
  VG Name                data
  LV UUID                OAQp25-Q1TH-rekd-b3n2-mOkC-Zgyt-3fX2If
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/data/backup
  LV Name                backup
  VG Name                data
  LV UUID                Uq6rhX-AvPN-GaNe-zevB-k3iB-Uz0m-TssjCg
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As LVM is active on /dev/md127, the RAID array cannot be stopped. Since I am new to RAID/LVM, I would appreciate your help to deactivate LVM without any data loss, restart the RAID array as /dev/md0, and then reactivate the LVM setup.
Thanks in advance for your kind reply.
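From what I have read so far, something like the following sequence might work; this is an untested sketch, and the device names (/dev/sdb, /dev/sdc) and the VG name ("data") are taken from my output above, so please correct me if any step is wrong:

```shell
# Deactivate the volume group so nothing holds /dev/md127 open
vgchange -an data

# Now the array can be stopped
mdadm --stop /dev/md127

# Reassemble it under the old name
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

# Record the array in mdadm.conf so it keeps the name md0 across reboots
mdadm --detail --scan >> /etc/mdadm.conf

# Reactivate the volume group
vgchange -ay data
```

My understanding is that md127 appears in the first place because the freshly installed OS has no /etc/mdadm.conf entry for the array, so appending the `--detail --scan` output should make the name stick.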

Iptables Not Active/firewalld Is - My Web Server Is Working But I Have No Idea Why.

This is a copy of my /etc/sysconfig/iptables.conf (w/o comments):
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 21 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

I added the port 80 and 21 entries myself, and vsftpd does work.

"iptables-save | grep 80" returns nothing.

My web server works (internal and external).

"systemctl is-active iptables" shows "inactive"
I have "just" gotten firewalld up and running thanks to questions answered here.

iptables is truly a mystery to me.

Can someone explain why my web server/vsftpd are up and working w/o iptables being active? How can I get my network and security both up and working safely together?

If I enable/activate iptables, is this going to break my web server?
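My current (possibly wrong) understanding is that firewalld and the iptables *service* both program the same kernel netfilter rules; since the iptables service is inactive, my /etc/sysconfig/iptables file is simply never loaded, and firewalld's own rules are what is letting traffic through. Here is a sketch of what I think I should run to check that, and to open the same services in firewalld instead; the service names are standard firewalld ones, but please correct me:

```shell
# Confirm which firewall is actually loading rules
systemctl is-active firewalld   # should say "active"
iptables -L -n                  # these rules are programmed by firewalld itself

# See what the default zone currently allows
firewall-cmd --list-all

# Open the services my old iptables file allowed (permanent, then reload)
firewall-cmd --permanent --add-service=ftp
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
```

(ssh is allowed by default in firewalld's public zone, I believe, which would explain why port 22 already works.)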

Is this the appropriate forum for this question?

As always, thank you for your time and patience,

Skip

Recursively Move Files Of One Type And Create Destination Sub-folders

I'm just getting into Bash scripting, and would appreciate some help with this question. My music collection is split into a smaller "active" set, kept on my laptop, and a much larger collection on an external hard drive. I've just converted some of the larger file types in my active set to *.mp3, and now want to move all the original *.flac files to the external hard drive. I need help putting together a command or script that will recursively search my active music set for *.flac files and move them, while preserving the source directory structure. Some or all of these subdirectories may not exist on the destination.

eg. On the active music set, I may have:

/Music/artist1/album1/(a mix of *.mp3 and *.flac files)
/Music/artist2/album1/(a mix of *.mp3 and *.flac files)

and on the hard drive

/Music 2/artist1/album2/(the contents of the album)

So when moving, it'll need to create "album1" under "artist1" on the destination, and also the whole "artist2/album1/" path.

Thanks in advance!
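For what it's worth, here is my first attempt at a script, based on `find` plus `mkdir -p`; the function takes the source and destination roots as arguments, so the "/Music" and "/Music 2" paths below are just examples from my layout. It handles spaces in names but not newlines:

```shell
#!/bin/sh
# move_flac SRC DEST
# Move every *.flac under SRC to the same relative path under DEST,
# creating destination sub-folders as needed.
move_flac() {
    find "$1" -type f -name '*.flac' | while IFS= read -r f; do
        rel=${f#"$1"/}                      # path relative to the source root
        mkdir -p "$2/$(dirname "$rel")"     # create the destination sub-folder
        mv "$f" "$2/$rel"
    done
}

# Example usage (paths are illustrative):
# move_flac /Music "/Music 2"
```

I've also read that rsync can do this in one line with filter rules, something like `rsync -av --remove-source-files --include='*/' --include='*.flac' --exclude='*' /Music/ "/Music 2/"`, though I haven't tested that myself. Does either approach look right?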