My actual objective is to wipe the existing nodes of a Hadoop cluster, reclaim the space, and do a fresh installation of another distribution.
Below is the disk-space layout of one of the masters (the namenode):
Code:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 2.0G 739M 1.2G 39% /
tmpfs 24G 0 24G 0% /dev/shm
/dev/mapper/mpathap1 194M 65M 120M 36% /boot
/dev/mapper/vg00-home 248M 12M 224M 5% /home
/dev/mapper/vg00-nsr 248M 11M 226M 5% /nsr
/dev/mapper/vg00-opt 3.1G 79M 2.8G 3% /opt
/dev/mapper/vg00-itm 434M 11M 402M 3% /opt/IBM/ITM
/dev/mapper/vg00-tmp 2.0G 68M 1.9G 4% /tmp
/dev/mapper/vg00-usr 2.0G 1.6G 305M 85% /usr
/dev/mapper/vg00-usr_local 248M 11M 226M 5% /usr/local
/dev/mapper/vg00-var 2.0G 820M 1.1G 43% /var
/dev/mapper/vg00-FSImage 917G 3.3G 867G 1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper 917G 200M 870G 1% /opt/hadoop-Zookeeper
And here is one of the slaves (datanodes); the others similarly have 4-8 local disks plus some NFS mounts:
Code:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 2.0G 411M 1.5G 22% /
tmpfs 36G 0 36G 0% /dev/shm
/dev/mapper/mpathap1 194M 67M 118M 37% /boot
/dev/mapper/vg00-home 248M 11M 226M 5% /home
/dev/mapper/vg00-nsr 248M 11M 226M 5% /nsr
/dev/mapper/vg00-opt 3.1G 82M 2.8G 3% /opt
/dev/mapper/vg00-itm 434M 11M 402M 3% /opt/IBM/ITM
/dev/mapper/vg00-tmp 2.0G 68M 1.9G 4% /tmp
/dev/mapper/vg00-usr 2.0G 1.2G 690M 64% /usr
/dev/mapper/vg00-usr_local 248M 11M 226M 5% /usr/local
/dev/mapper/vg00-var 2.0G 1.5G 392M 80% /var
/dev/mapper/vg00-00 559G 33M 559G 1% /opt/hadoop-00
/dev/mapper/vg00-01 559G 33M 559G 1% /opt/hadoop-01
/dev/mapper/vg00-02 559G 33M 559G 1% /opt/hadoop-02
/dev/mapper/vg00-03 559G 33M 559G 1% /opt/hadoop-03
/dev/mapper/vg00-04 559G 33M 559G 1% /opt/hadoop-04
/dev/mapper/vg00-05 559G 33M 559G 1% /opt/hadoop-05
/dev/mapper/vg00-06 559G 33M 559G 1% /opt/hadoop-06
/dev/mapper/vg00-07 559G 33M 559G 1% /opt/hadoop-07
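Since every one of these filesystems is a logical volume in the same vg00 volume group, the first thing worth checking is whether the group still has any unallocated extents. These are read-only inspection commands, so they are safe to run:

```shell
# Inspect the LVM layout (read-only; changes nothing).
# vg00 is the volume group visible in the df output above.
pvs                                   # physical volumes backing the group
vgs vg00                              # total size vs. free extents in vg00
lvs -o lv_name,lv_size,lv_path vg00   # every logical volume and its size
```

If `vgs` reports free extents, the starved mounts (/, /usr, /var) can be grown without touching the Hadoop volumes at all; if VFree is 0, space would first have to be shrunk out of one of the large volumes.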
During the new installation, I ran into space issues on all the nodes/hosts:
Code:
Not enough disk space on host (l1032lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1033lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1034lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1035lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Now, the installation needs considerable space under the /, /usr, /var and /home mounts, which is scarce on both the master and the slaves. There is, however, lots of free space on the master in these two volumes:
Code:
/dev/mapper/vg00-FSImage 917G 200M 870G 1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper 917G 6.5G 864G 1% /opt/hadoop-Zookeeper
and on the slaves in these eight volumes:
Code:
/dev/mapper/vg00-00 559G 33M 559G 1% /opt/hadoop-00
/dev/mapper/vg00-01 559G 33M 559G 1% /opt/hadoop-01
/dev/mapper/vg00-02 559G 33M 559G 1% /opt/hadoop-02
/dev/mapper/vg00-03 559G 33M 559G 1% /opt/hadoop-03
/dev/mapper/vg00-04 559G 33M 559G 1% /opt/hadoop-04
/dev/mapper/vg00-05 559G 33M 559G 1% /opt/hadoop-05
/dev/mapper/vg00-06 559G 33M 559G 1% /opt/hadoop-06
/dev/mapper/vg00-07 559G 33M 559G 1% /opt/hadoop-07
I'm a bit confused about how to proceed. Can I shrink one or more of these volumes, reallocate that space to /, /usr, etc., and then proceed with the installation? And would that repartitioning and mounting/un-mounting corrupt the existing /, /usr, etc. mounts?
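To make the question concrete, the kind of reallocation I have in mind would look roughly like this. This is only a sketch, assuming the filesystems are ext4 and that /opt/hadoop-00 can be unmounted; all sizes below are illustrative, not the exact values I would use:

```shell
# DANGER: sketch only. Shrinking a filesystem/LV destroys data if the
# order or the sizes are wrong. Back up and double-check every number.

umount /opt/hadoop-00                # free one of the big Hadoop volumes

# Shrink the filesystem FIRST, then the LV, leaving the filesystem a
# little smaller than the LV as a safety margin (illustrative sizes).
e2fsck -f /dev/mapper/vg00-00
resize2fs /dev/mapper/vg00-00 540G
lvreduce -L 550G /dev/vg00/00

# Grow the starved volumes from the freed extents; -r resizes the
# mounted filesystem in the same step (ext4 can be grown online).
lvextend -r -L +2G /dev/vg00/root
lvextend -r -L +2G /dev/vg00/usr
lvextend -r -L +2G /dev/vg00/var
```

My worry is whether the shrink/extend steps above are safe for the volumes that stay mounted, or whether I risk corrupting them in the process.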