How To Set A Time Limit Of Ssh

I have a hostlist and there are several hosts. I want to use ssh to connect to them. I want to see whether the host is available or not by the time length of ssh takes. If it take long than 5s (which means it is not available). Then stop it, and ssh to next host. Once a host is available, then output the name of host. I previously use nmap, but the IT security told it is not allowed to install it on the desktop on campus.
Code:
host_list="/home/campus27/zwang10/Desktop/cluster/program/hostlist"
HOSTS=`cat $host_list`
for line in $HOSTS
do 
timeout -5s `ssh $line`
done

The above script is all I can do.


Similar Content



How To Redirect The Error Message?

I have a hostlist and there are several hosts. I want to use ssh to connect to them. I want to see whether the host is available or not by the time length of ssh takes. If it take long than 5s (which means it is not available). Then stop it, and ssh to next host. Once a host is available, then output the name of host. I previously use nmap, but the IT security told it is not allowed to install it on the desktop on campus.
Code:
    HOSTS=`cat $host_list`
    for line in $HOSTS
    do 
    ssh -o ConnectTimeout=5s $line true >> /dev/null
    RESULT=$?
    if [ $RESULT -eq 0 ]
    then
    echo $line > succeed.txt
    else
    echo $line > fail.txt
    fi
    done

The above script is all I can do.
The problem is when it is failed, I want to redirect the error message
Code:
ssh: connect to host c28-0112-05.ad.mtu.edu port 22: No route to host

. But I still get the error message.

Need Help To Get The Available Hosts Among Many Hosts

I have many hosts as following. But those hosts are dual operating system (Linux and Windows). I always run program background. If someone is using Linux, it is fine. But if someone is using windows or the host is offline. Then, I cannot use ssh. The way I do preiviously is first ssh one by one. And then find the hosts which are offline or windows. And write them down one by one. And then ssh to hosts except them. Let us assume the number of programs is less than the total available hosts. Can someone write a shell script to output the all available hosts to a file like "host_available"?
Here is the host file.
https://www.dropbox.com/s/vbz6w864y3...tlist.txt?dl=0
I am using ssh to connect the computers on campus. If the computer I am trying to connect is offline or using windows, the ssh will take long time, and finally it will failed. I write a shell script to generate the hostlist
Code:
#!/bin/bash
for i in `seq -w 1 28`
do
echo "c15-0330-$i.ad.mtu.edu"
# I would like to add a command here to see whether ssh c15-0330-$i.ad.mtu.edu it succeeds or not. And then output a #file which contains the all available host.
done
for i in `seq -w 1 20`
do
echo "c28-0112-$i.ad.mtu.edu"
done
for i in `seq -w 1 20`
do
echo "c28-112a-$i.ad.mtu.edu"
done

I do not how to set a certain time to see whether connection is successful or not (see the comment in the shell script).

Get The Time When Nohup R Program Finishes

I want to get the time when program starts and finishes. But it always show the same time. Here is my shell script. But it always show that `START TIME` is same as `END TIME`.
Code:
host_list=("c15-0330-01.ad.mtu.edu" "c15-0330-02.ad.mtu.edu" "c15-0330-03.ad.mtu.edu" "c15-0330-04.ad.mtu.edu")
program=("L_1" "L_4" "L_3" "L_4")
subject="The job is finished"
START=$(date +"r")
address="/home/campus27/zwang10/Desktop/AWRR/program/power/vmodel_1/nprot/K_10"
ssh -f "${host_list[0]}" "cd '$address' && nohup Rscript '${program[0]}.R' > '${program[0]}_sh.txt';echo 'The job\n $address\n${program[0]} is finished\nSTART TIME = $START\n' END TIME =`date +"%r"` | mutt zwang10@mtu.edu -s '${host_list[0]} - Job ${program[0]}.R finished' -a '$address/${program[0]}_sh.txt';"

How To Define Variable In `ssh`

I need to use `ssh` to send command to different computers to run programs. Can I define a variable of `Directory` and `program` in front of this shell script? Then, I do not need to write them every time? The following is my shell script.

`host_list=("c15-0330-10.ad.mtu.edu" "c15-0330-11.ad.mtu.edu" "c15-0330-12.ad.mtu.edu")`
`# I have multiple programs`
`# program=c("L_1","L_2","L_3")`
`ssh -f "${host_list[0]}" 'set Directory="/home/campus27/zwang10/Desktop/AWRR/program/power/vmodel_1/nprot/K_10"; cd $Directory && nohup Rscript L_1.R> L_1_sh.txt;echo "The job L_1 is finished" |mutt "zwang10@mtu.edu" -s "The job L_1 is finished"';
ssh -f "${host_list[1]}" 'set Directory="/home/campus27/zwang10/Desktop/AWRR/program/power/vmodel_1/nprot/K_10"; cd $Directory && nohup Rscript L_2.R> L_2_sh.txt;echo "The job L_2 is finished" |mutt "zwang10@mtu.edu" -s "The job L_2 is finished"';
ssh -f "${host_list[2]}" 'set Directory="/home/campus27/zwang10/Desktop/AWRR/program/power/vmodel_1/nprot/K_10"; cd $Directory && nohup Rscript L_3.R> L_3_sh.txt;echo "The job L_3 is finished" |mutt "zwang10@mtu.edu" -s "The job L_3 is finished"';`

How To Copy File From Remote Host To Local Host Then Delete From Remote Host

I have an expect script to SSH to a remote host and obtain some user inputs and information about the server/network configuration. The responses are saved in a text file that I then need to copy to my local host so that I can read the lines into variables for use in the parent shell script.

Is there a way to do this without needing to enter the username and password for the local host to use function scp? I have tried the following in my expect script to no avail:
Code:
spawn scp $usr@$host:$flnm .
expect {
	-re "(.*)assword:" { 
		send -s "$pswd\r"
	}
}

I have also tried to directly scp the file and enter the username and password to try to debug the issue, and that doesn't work either:
Code:
spawn scp file.txt user@host:file.txt
expect {
	-re "(.*)assword:" {
		send -s "password\r"
	}
	"you sure you want to continue connecting" {
		send -s "yes\r"
		exp_continue
	}
}

In both scenarios I have used exp_internal 1, and there are no errors. But I do not end up with the file on my local host.

Following the copy, I would like to delete the file from the remote host. Any suggestions on how to accomplish this?

Access A Host From A Different Subnet In Linux

Hi all,

I have a host#1 with ip=192.168.3.100 and a host#2 with ip=192.168.2.100. Both hosts are connected to some linux device with 2 interfaces : eth0 with ip=192.168.2.1 and eth1 with ip=192.168.3.1.

So host#1 is connected to eth1 and host#2 to eth0. I would like to ping host#2 from host#1 and vice versa. How can I do that ?

I tried :
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

but it didn't work

PS
This is my first post here, so please don't be very strict to me
Looking forward to hearing from anybody as I'm out of ideas...

BR,
Dmitry

Mutt Does Not Show Full Subject

I use following shell script to send email. But the subject of email always shows "This" instead of "This is L_1.R is finished". You may refer http://www.linuxquestions.org/questions/linux-newbie-8/how-to-define-variable-in-%60ssh%60-4175540566/
Code:
host_list=("c15-0330-01.ad.mtu.edu" "c15-0330-02.ad.mtu.edu" "c15-0330-03.ad.mtu.edu" "c15-0330-04.ad.mtu.edu")
program=("L_1" "L_2" "L_3" "L_4")
subject="The job is finished"
ssh -f c15-0330-01.ad.mtu.edu 'echo' "the job ${program[0]} is finished" '|' 'mutt "zwang10@mtu.edu" -s' "This is "${program[0]}".R is finished";

SSH Error

I am trying to login to my linux server. I was initially doing "ssh hostname". The login did not work, so I tried "ssh username@IP", which still prompted me with the yes/no prompt, but I received this error:

Code:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
XX:XX...:XX.
Please contact your system administrator.
Add correct host key in /Users/user/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/user/.ssh/known_hosts:5
RSA host key for 192.168.1.3 has changed and you have requested strict checking.
Host key verification failed.

Setting Up Apache2 Virtual Host - Getting URL Not Found Error

Trying to set up Virtual Host on Ubuntu 14.04.

Any help to solve this is greatly appreciated!!!

Here is info:

Directory: /var/www/mydb.com/public_html (owner set to $USER:$USER)

Permissions: sudo chmod -R 755 /var/www/

Sample Page: /var/www/mydb.com/public_html/index.html (Shows Message)

Virtual Host Files:

Sites Available: mydb.com.conf
set ServerAdmin => admin@mydb.com
set ServerName => mydb.com
set ServerAlias => www.mydb.com
set DocumentRoot => /var/www/mydb.com/public_html

Sites Enabled: mydb.com.conf
ServerName mydb.com
ServerAlias www.mydb.com
ServerAdmin admin@mydb.com
DocumentRoot /var/www/mydb.com/public_html


I: disabled 000-default.conf w/a2dissite
enabled mydb.com.conf w/a2ensite

HOSTS File /etc/hosts:

127.0.1.1 localhost mydb.com
127.0.0.1 localhost
127.0.1.1 rick-Latitude-E6510

Result of localhost/mydb.com Same with www.mydb.com.

404 Not Found.
The requested URL /mydb.com was not found on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80

This from /var/log/apache2/access.log

127.0.0.1 - - [03/Apr/2015:13:19:08 -0700] "GET /mydb.com HTTP/1.1" 404 496 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:37.0) Gecko/20100101 Firefox/37.0"

Allocating Space To Mounts

My real objective is to clean existing nodes from a Hadoop cluster, claim the space etc. and do a fresh installation of another distro.

Below is the space descr. of one of the master(namenode):

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  739M  1.2G  39% /
tmpfs                  24G     0   24G   0% /dev/shm
/dev/mapper/mpathap1  194M   65M  120M  36% /boot
/dev/mapper/vg00-home
                      248M   12M  224M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   79M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.6G  305M  85% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  820M  1.1G  43% /var
/dev/mapper/vg00-FSImage
                      917G  3.3G  867G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  200M  870G   1% /opt/hadoop-Zookeeper

And one of the slave(datanodes)(the others too have 4-8 local disks and some mounted NFS drives):

Code:
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-root
                      2.0G  411M  1.5G  22% /
tmpfs                  36G     0   36G   0% /dev/shm
/dev/mapper/mpathap1  194M   67M  118M  37% /boot
/dev/mapper/vg00-home
                      248M   11M  226M   5% /home
/dev/mapper/vg00-nsr  248M   11M  226M   5% /nsr
/dev/mapper/vg00-opt  3.1G   82M  2.8G   3% /opt
/dev/mapper/vg00-itm  434M   11M  402M   3% /opt/IBM/ITM
/dev/mapper/vg00-tmp  2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-usr  2.0G  1.2G  690M  64% /usr
/dev/mapper/vg00-usr_local
                      248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var  2.0G  1.5G  392M  80% /var
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

During the new installation, I have run into space issues for all the nodes/hosts

Code:
Not enough disk space on host (l1032lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1033lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1034lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.
Not enough disk space on host (l1035lab.se.com). A minimum of 1GB is required for "/usr" mount. A minimum of 2GB is required for "/" mount.

Now the installation requires a lot of space under /, /usr, /var and /home mounts which on both master and the slaves is less but there is lots of space in master under these two disks:

Code:
/dev/mapper/vg00-FSImage
                      917G  200M  870G   1% /opt/hadoop-FSImage
/dev/mapper/vg00-Zookeeper
                      917G  6.5G  864G   1% /opt/hadoop-Zookeeper

and on slaves under these 8 disks.

Code:
/dev/mapper/vg00-00   559G   33M  559G   1% /opt/hadoop-00
/dev/mapper/vg00-01   559G   33M  559G   1% /opt/hadoop-01
/dev/mapper/vg00-02   559G   33M  559G   1% /opt/hadoop-02
/dev/mapper/vg00-03   559G   33M  559G   1% /opt/hadoop-03
/dev/mapper/vg00-04   559G   33M  559G   1% /opt/hadoop-04
/dev/mapper/vg00-05   559G   33M  559G   1% /opt/hadoop-05
/dev/mapper/vg00-06   559G   33M  559G   1% /opt/hadoop-06
/dev/mapper/vg00-07   559G   33M  559G   1% /opt/hadoop-07

I'm a bit confused as how to proceed - I was wondering if I can partition one or more of these disks and allocate those under /, /usr etc. and proceed with the installation but then will these disk partitioning and mounting/un-mounting corrupt the existing /, /usr etc. mounts ?