Cron.daily Symlink (double) Does Not Seem To Be Executing?

Hello,

I cannot understand why the symlink I have put in /etc/cron.daily won't work. It is very possible I am wrong, but my understanding is that cron.{daily,weekly,monthly} work fine with symlinks.

Basically it is doubly symlinked. ls -la on /etc/cron.daily looks like this:

Code:
... 
lrwxrwxrwx  1 root root    49 Nov 27 18:26 rsync_mysql_backups.sh -> /home/myuser/scripts/bash/rsync_mysql_backups.sh
...

Now, ls -la on /home/myuser/scripts looks like this:

Code:
...
lrwxrwxrwx 1 myuser myuser    26 Sep 20  2013 scripts -> /media/md1_storage/scripts
...

I couldn't see anything suspicious in syslog, so I installed postfix in the hope of getting some information by mail. Nothing... I also redirected the output of the script to a file at /home/myuser/log.txt, but there is nothing there either; the file was not even created.

I am not doing anything mad in the script; I am just synchronising a local directory with a remote one like this:

Code:
/usr/bin/rsync -avzx -e 'ssh -i "/home/myuser/.ssh/myremotehost/id_rsa"' /media/md1_storage/backups/stuff/ myuser@myremotehost:/srv/backups/stuff/ >> /home/myuser/log.txt 2>&1;

As other people suggested in similar threads, I have verified that
Code:
test -x /usr/sbin/anacron

returns false (anacron is not installed), which means the second part of the entry in /etc/crontab should run:

Code:
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )

Any input will be much appreciated. I know I am doing something wrong, but I just cannot see it right now... How can I gather more debugging information to help me understand what's going wrong?
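
One thing I have not tried yet, and which might be a sensible next debugging step, is asking run-parts itself which files it would execute, something like:

Code:
# dry run: list what run-parts would execute in cron.daily, without running anything
run-parts --test /etc/cron.daily

I have read that run-parts can skip files whose names contain characters it does not like (dots, for example), so I am guessing this would at least show whether my symlink is being considered at all.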

Thanks!


Similar Content

Cron Ignoring Changes Made In File

Hi,

I have a small problem on my system... I recently wrote a small bash script and added it via crontab -u root -e, and it's executed as it should.
Then a few days later I edited the bash script, but once it is run by cron it doesn't reflect the changes I made. I manually restarted crond, tried sync, moved my script, renamed it, even restarted the server - no success.
If I run my script manually it returns the expected output.
part of my script (/root/sd.sh):
Code:
#!/bin/sh

echo -n "Checking Raid... "
RAID=`tw-cli /c0/u0 show status | cut -c17-31 | grep -ci OK`
RAIDSTATUS=`tw-cli /c0/u0 show status | head -n1 | cut -c17-31`
if [ $RAID -eq 0 ]; then
  VERIFYING=`tw-cli /c0/u0 show status | cut -c17-31 | grep -ci VERIFYING`
  REBUILDING=`tw-cli /c0/u0 show status | cut -c17-31 | grep -ci REBUILDING`
  if [ $VERIFYING -eq 1 ]; then
    CURRENTSTATUS=`tw-cli /c0/u0 show verifystatus | head -n1 | cut -c46-49`
  fi
  if [ $REBUILDING -eq 1 ]; then
    CURRENTSTATUS=`tw-cli /c0/u0 show rebuildstatus | head -n1 | cut -c47-50`
  fi
  echo "Raid is busy (Raid $RAIDSTATUS$CURRENTSTATUS)"
  exit 1
fi
echo "Raid is idle (Raid $RAIDSTATUS)"

echo -n "Checking for logged in Users... "
USERS=`who | wc -l`
if [ $USERS -gt 0 ]; then
  echo "Users online"
  exit 1
fi
echo "None logged in"

crontab -l :
Code:
root@monster:~# crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
*/10 * * * * /root/sd.sh | logger
*/30 * * * * /root/dropbox.sh > /dev/null

syslog:
Code:
Mar 21 09:10:01 monster /USR/SBIN/CRON[3998]: (root) CMD (/root/sd.sh | logger)
Mar 21 09:10:01 monster logger: Checking Raid... Raid is busy (Raid )
Mar 21 09:17:01 monster /USR/SBIN/CRON[4045]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Mar 21 09:20:01 monster /USR/SBIN/CRON[4105]: (root) CMD (/root/sd.sh | logger)
Mar 21 09:20:01 monster logger: Checking Raid... Raid is busy (Raid )
Mar 21 09:30:01 monster /USR/SBIN/CRON[4266]: (root) CMD (/root/sd.sh | logger)
Mar 21 09:30:01 monster /USR/SBIN/CRON[4265]: (root) CMD (/root/dropbox.sh > /dev/null)
Mar 21 09:30:01 monster logger: Checking Raid... Raid is busy (Raid )

When I run sh /root/sd.sh:
Code:
root@monster:~# sh /root/sd.sh
Checking Raid... Raid is idle (Raid OK)
Checking for logged in Users... Users online
root@monster:~#

Any ideas why? Any help or thoughts appreciated.
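
One idea I had while writing this up, but have not tried yet, is to dump the environment cron gives my jobs into a scratch file and compare it with my interactive shell, in case something like PATH differs under cron (a rough sketch, not tested; /tmp/cron_env.txt is just a temporary location I picked):

Code:
# temporary crontab entry to capture the environment cron runs jobs with;
# my guess is that PATH under cron may not include wherever tw-cli lives
* * * * * env > /tmp/cron_env.txt 2>&1

Then I can compare that file against the output of env from a normal root shell.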

regards
Pat

Perplexing Cron Audio Problem

I'm running LinuxLite 2.0 32bit on a Dell 3000.

I have never come across anything like this and, to tell you the truth, I am stumped.

Here are the contents of my crontab file:

Code:
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
*/10 * * * * /usr/bin/arecord -t wav -f cd -d 42 /home/randy/Music/lanting$(date "+\%^b\%d\%y").wav


If I run this from the terminal, everything is fine. It properly records the audio:

Code:
/usr/bin/arecord -t wav -f cd -d 42 /home/randy/Music/lanting$(date "+\%^b\%d\%y").wav

If I run it as shown in my crontab file, it records, but there is no audio in the recording:


Code:
*/10 * * * * /usr/bin/arecord -t wav -f cd -d 42 /home/randy/Music/lanting$(date "+\%^b\%d\%y").wav

What could be causing this? I tried different cron schedules, for example 15 14 * * 2.

This recorded at 2:15pm on Tuesday (today), but with no audio. Yet if I run the command as mentioned above, from the terminal without the cron scheduling, the recording is fine.

Any ideas what I should do?
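
The only other thing I can think of trying is naming the capture device explicitly, in case cron's environment ends up with a different default ALSA device than my desktop session (just a guess; the device name below is a placeholder I would first confirm with arecord -l):

Code:
*/10 * * * * /usr/bin/arecord -D plughw:0,0 -t wav -f cd -d 42 /home/randy/Music/lanting$(date "+\%^b\%d\%y").wav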

Ln With -t Option, To Create Relative Symlink Not Working

I'm trying to recreate a relative symlink, to link "asymlink" to "somedir/actualfile" in /root/test/, but it's creating two symlinks instead.

Code:
[root]# ln -t /root/test/ -s somedir/actualfile asymlink
[root]# ll /root/test/
total 4
lrwxrwxrwx 1 root root   18 Feb 16 06:15 actualfile -> somedir/actualfile
lrwxrwxrwx 1 root root    8 Feb 16 06:15 asymlink -> asymlink
drwxr-xr-x 2 root root 4096 Feb 16 06:15 somedir

I can do it with
Code:
cd /root/test
ln -s somedir/actualfile asymlink

but I'm trying to avoid the cd, and also avoid using the full path.

Does anyone know why the -t flag isn't working as expected?
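
For what it's worth, the closest I have come without the cd is giving ln the destination path as the link name instead of using -t, since the link text itself is stored verbatim anyway (this seems to work here, but I would still like to understand the -t behaviour):

Code:
# create /root/test/asymlink pointing at the relative string "somedir/actualfile"
ln -s somedir/actualfile /root/test/asymlink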

Cronjob Created Empty Files

Hi All,

I am facing a problem with a cron job.
On the client server there are some cron jobs scheduled
as follows:

@daily sh_file.sh 2>&1>> /dev/null #Contact: vali.zachia@gmail.com
@hourly php demo.php 2>&1 >> /dev/null
10 2 * * * wget --no-check-certificate url 2>&1 > /dev/null #Cron IE
10 2 * * * wget --no-check-certificate url 2>&1 > /dev/null #cron EN
@daily sh script.sh
@daily sh script.sh
@daily sh script.sh
@hourly wget --no-check-certificate url 2>&1 > /dev/null
@daily wget --no-check-certificate url 2>&1 > /dev/null #Cron Poland
@daily wget --no-check-certificate url 2>&1 > /dev/null #Cron Norway
@daily wget --no-check-certificate url 2>&1 > /dev/null #Cron Portugal
10 2 * * * wget --no-check-certificate url 2>&1 > /dev/null #Cron GR
@daily wget --no-check-certificate url 2>&1 > /dev/null
@daily wget --no-check-certificate url 2>&1 > /dev/null
@daily wget --no-check-certificate url 2>&1 > /dev/null #Denmark



The cron jobs are scheduled daily/hourly.

But what happens is that after every hour/day a blank file is created in a folder,

like:
hourly.1

hourly.2
hourly.3
hourly.4

and so on, and similarly for the daily scripts.

I am not sure whether these are created by the cron jobs,
so please clarify whether cron produces something like this or whether it is a problem in the code.
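
One thing I noticed while writing this up: the entries redirect in the order 2>&1 > /dev/null, and the wget lines have no -O option, so (if I understand wget correctly) each run saves the downloaded page in the working directory, appending .1, .2, ... when the name already exists. A guess at what the entries were perhaps meant to look like, if the output is just supposed to be thrown away:

Code:
@hourly wget -q --no-check-certificate -O /dev/null url > /dev/null 2>&1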

Execute Script On Mysql Exit

Hello,

In Debian I created a new user, "myUser".

In /home/myUser/.profile

I added "mysql -u user -password"

So automatically after logging in to the Linux terminal, the user lands in the mysql command line.

What I need is for the terminal to exit when the user exits mysql, so he won't have access to the command line.
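
From what I have read, exec might be the simplest way, since it replaces the login shell with the mysql client so there is no shell left to return to. A sketch of what I mean for /home/myUser/.profile (untested, keeping whatever mysql options are already in use):

Code:
# replace the login shell with the mysql client; when mysql exits, the session ends
exec mysql -u user -p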

Problem With (instalation Of?) Mysql.h On C

Hi, I've just recently installed MySQL Connector/C from source code on my Slackware 14.1 x64.

I read the official instructions for the connector, but I felt a bit disoriented when I read:
Code:
1 -Change location to the top-level directory of the source distribution.

I interpreted that as having to go to the "highest" directory:
Code:
/

So I wrote:
Code:
# cd /
root@- /# tar xzvf /home/normal/Downloads/mysql-connector-c-6.1.6-src.tar.gz
root@- /# cd /mysql-connector-c-6.1.6-src/
root@- /mysql-connector-c-6.1.6-src# cmake -G "Unix Makefiles"
root@- /mysql-connector-c-6.1.6-src# make
root@- /mysql-connector-c-6.1.6-src# make install

Then I did:
Code:
ln -s /usr/local/mysql-5.6.25/include /usr/include

But when I try to compile a C program with #include <mysql.h>, I get this error:
Code:
# gcc ctemp.c 
In file included from ctemp.c:2:0:
/usr/include/mysql.h:57:27: fatal error: mysql_version.h: No such file or directory
 #include "mysql_version.h"
                           ^

What can I do? Thanks a lot, and sorry for my poor English.
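
What I was planning to try next, in case it is just a matter of pointing gcc at the connector's own install tree rather than relying on my symlink, is asking mysql_config (which I believe the connector installs) for the right include and library flags, assuming it ended up somewhere on my PATH:

Code:
# let the connector's own helper supply -I/-L/-l flags
gcc ctemp.c $(mysql_config --cflags --libs) -o ctemp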

PD: If you need the official instructions I paste here the link: https://dev.mysql.com/doc/connector-...on-source.html

Best Way To Run Two Interdependent Scripts

Hi All,

I have two scripts. The aim of these two scripts is to check whether a particular script is working or not, and if it is not, to send a mail.

How I achieved this is: I wrote a first script.

I created an infinite while loop which performs the steps below:

1. It creates a touch file.
2. It triggers the script which needs to be monitored (to see whether it is working or not).
3. It removes the touch file.

If the second step fails, then the remove-file command will not happen and the script will get stuck there.

I created another script which checks the creation time of the touch file, and if it is more than ten minutes old, it means the second step in the first script has hung, which also means that particular script is not working.

So if the creation time is more than 2 minutes old, the second script will send a mail.

Below are the two scripts.

Code:
#!/bin/ksh

userid="chansd"
filename="/apps/log/check.txt"

while true; do
  touch $filename
  pass=`/apps/eDMZ/call_st.ksh $userid`
  sleep 20
  rm $filename
done

The script below checks the file creation time and sends an email if the file is older than 2 minutes:
Code:
#!/bin/ksh

filename="/apps/log/check.txt"

if [ -f "${filename}" ]; then
  if test "`find $filename -mmin +2`"; then
    echo "script is not working ! Please act on it" | mail -s "Script is not working" Example@mail.com
  fi
else
  exit 1
fi


What I'm going to do is:

1. I am going to run the first script in the background so it runs forever.
2. I am going to run the second script from cron every 5 minutes to check the file creation time.
3. So if the first script hangs, I will kill the process using its process id, and after the issue with the inner script is resolved, I will run the main script again.

I am new to Linux; please let me know if this approach will work as expected.
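
In case it makes the plan clearer, this is roughly how I intend to wire it up (the script paths below are placeholders for wherever I end up putting the two scripts):

Code:
# start the first (monitoring loop) script in the background, detached from the terminal
nohup /apps/scripts/first_script.ksh > /dev/null 2>&1 &

# crontab entry for the second (checker) script, every 5 minutes
*/5 * * * * /apps/scripts/second_script.ksh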

Setfacl Help

I can't believe I wrote a looong message and it logged me out when I tried to submit it.

So anyway, in short lines:

- I have a network of sites where all sites share same "images" folder
- I have created /home/_images/entities and symlinked it from all websites
- It works great with Apache: when I open /images/ on any of the sites I get a list of images and can view them

The problem is suPHP, which changes the process ID of the PHP script to the file owner's ID, so when I load site1.com, all scripts are executed as user1 (and files/folders created by those scripts belong to user1:user1). When I load site2.com, all scripts are executed as user2 (and files/folders created by those scripts belong to user2:user2). All these users do NOT belong to the same group, and I wouldn't like to change that, as it is a cPanel/WHM server and I'm afraid I'll screw something up if I change the (primary?) group of all users.

Therefore I need to set it up in such way that all newly created folders and files under /home/_images/entities (owned by root) have read/write permissions for everyone.

Here's the command I used:

Code:
setfacl -Rdm o::rwx /home/_images/entities

To check it:
Code:
root@server1 [~]# getfacl /home/_images/entities/
getfacl: Removing leading '/' from absolute path names
# file: home/_images/entities/
# owner: root
# group: root
user::rwx
group::rwx
other::rwx
default:user::rwx
default:group::rwx
default:other::rwx

This looks fine, however when I try to upload an image via site1.com it looks like this:

Code:
root@server1 [/home/_images/entities]# ls -l
total 24
drwxrwxrwx+ 5 root    root    4096 Jan 14 06:25 ./
drwxrwxrwx  5 root    root    4096 Jan 12 13:08 ../
drwxrwxr-x+ 3 user1   user1   4096 Jan 14 06:25 1/

And in folder "1" is the image (and thumbs folder):

Code:
root@server1 [/home/_images/entities/1]# ls -l
total 236
drwxrwxr-x+ 3 user1   user1     4096 Jan 14 06:25 ./
drwxrwxrwx+ 5 root    root      4096 Jan 14 06:25 ../
-rw-rw-rw-  1 user1   user1   225569 Jan 14 06:25 689048f221ab7c556f4d482a9d92b2d6.jpg
drwxrwxr-x+ 2 user1   user1   4096 Jan 14 06:25 thumbs/

My questions:

1) Why do newly created folders not have "write" permission for everyone else (not the user and/or group)? If I upload the first image from site1.com, then I can't upload other images from any other site, although all sites can display them.

2) What is the + at the end of the permissions list? (drwxrwxr-x+)

3) Why do newly created files have only "rw" permissions for user, group AND everyone else, and no execute permission? I don't actually need the execute flag set here, but from my command you can see I've set "o::rwx", so it should be there (or not?).

Actually the real problem is #1 - other users can't write to this folder, so users can't upload images from other sites, nor can other sites create (missing) thumbnails.
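
One check I still plan to do, in case it narrows things down, is to look at the ACL that actually ended up on one of the folders PHP created, since my understanding is that the mode the creating application asks for (PHP's mkdir mode, for instance) is combined with the default ACL, so the result can end up more restrictive than the defaults suggest (not sure if that applies here):

Code:
# inspect the effective ACL on a folder created through site1.com
getfacl /home/_images/entities/1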

Script Not Getting Executed Via CRON

I have a small script (script1) scheduled via cron to run every 20 minutes and invoke script2 if script2 is not running.
When I run script1 from the command line it works fine, but when it is scheduled via cron it doesn't work. Not sure what I am doing wrong here.
I even tried using absolute paths.

Script 1
Code:
#!/bin/sh
/bin/ps -ef |/bin/grep script2.sh |/bin/grep -v $$ > /dev/null 2>&1
if [ $? -ne 0 ]
then
/usr/bin/nohup /home/user1/script2.sh &
fi

CRON LOG :
Feb 9 17:00:01 server1 crond[29771]: (user1) CMD (/home/user1/chk_script1.sh >/dev/null 2>&1)
Feb 9 17:20:01 server1 crond[21095]: (user1) CMD (/home/user1/chk_script1.sh >/dev/null 2>&1)
Feb 9 17:40:01 server1 crond[11218]: (user1) CMD(/home/user1/chk_script1.sh >/dev/null 2>&1)
Feb 9 18:00:01 server1 crond[29961]: (user1) CMD(/home/user1/chk_script1.sh >/dev/null 2>&1)

CRON JOB :
00,20,40 * * * * /home/user1/chk_script1.sh >/dev/null 2>&1
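
For completeness, this is the variant of script1 I was planning to test next, using pgrep instead of ps piped through grep, since I have read that the grep itself can show up in the ps output and confuse the check (untested, just an idea; I would switch to pgrep's full path once I confirm where it lives on this box):

Code:
#!/bin/sh
# check for a running script2.sh by full command line; start it if absent
if ! pgrep -f script2.sh > /dev/null 2>&1
then
    /usr/bin/nohup /home/user1/script2.sh &
fi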

Command Manual Working But Not On Cron

Hi

When I run this command manually on CentOS 6.6 it works:

Code:
/usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf

but as a cron job it doesn't, as I can see a folder there with files from Mar 28:

Code:
55 5 * * * /usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf

And here are the logs from cron showing that it executes at the correct time:

Code:
Mar 30 05:55:01 server CROND[9526]: (root) CMD (/usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf)

Any ideas why?
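
The next thing I plan to do is capture the job's own output, so I can see whether find or xargs/rm prints any errors when cron runs it (the log path is just a placeholder I picked):

Code:
55 5 * * * ( /usr/bin/find /backup/ -type d -mtime +1 -print0 | xargs -0 rm -rf ) >> /tmp/backup_cleanup.log 2>&1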

Thanks