How To Rename A File In Linux

Hello,

I am trying to rename a file by adding a .txt extension; before renaming, I also want to replace the . in the file name with _.

Right now the file name looks like this: mdm.201504021628

After my script runs, the file name should be: mdm_201504021628.txt


#!/bin/bash

# Read all file paths (skipping directories) from the HDFS directory.
files=$(hadoop fs -ls /dl/data/landing/hivedb/lnd_attunity_kpi_db_backup/auth_master | awk '!/^d/ {print $8}')

for f in $files; do
    # Replace every . with _ (the dot must be escaped in sed, or it
    # matches any character), then append the .txt extension.
    new=$(echo "$f" | sed 's/\./_/g').txt
    hadoop fs -mv "$f" "$new"
done
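For the rename itself, bash parameter expansion can do the substitution without sed; a minimal sketch, assuming (as here) that the directory part of the path contains no dots:

Code:
# ${f//./_} replaces every . in $f with _ before .txt is appended.
hadoop fs -mv "$f" "${f//./_}.txt"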

Thanks for your help in advance.


Similar Content

Linux Bulk Renaming Files

Hello Folks.

I'm searching for an easy way to rename multiple files from the CLI, but I haven't found one that works for me, so I'm reaching out to you guys for help.

This is what I want to do (from the CLI or a script). I want to move files with a sequence number in the name (msg0000, msg0001, msg0002 and so on) so that they continue from, let's say, msg0066, msg0067 and so on. Each sequence number covers several files (msg0000.wav, msg0000.WAV and msg0000.txt).

The idea is to move them from one directory to another, renumbering the file names in sequence as they go. Is there a way I can do this pain-free?

Any help on this matter will be greatly appreciated; I'm talking about over 100 files I need to move following the sequence of the receiving directory.
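If it helps, here is a minimal sketch of one way to do it; the source and destination paths and the starting number (66) are placeholders you would adjust:

Code:
#!/bin/bash
src=/path/to/source        # hypothetical source directory
dst=/path/to/destination   # hypothetical receiving directory
n=66                       # first free number in the receiving directory

# Collect the unique msgNNNN prefixes, in order.
for base in $(ls "$src" | grep -o '^msg[0-9]\{4\}' | sort -u); do
    new=$(printf 'msg%04d' "$n")
    # Move every file that shares the prefix (.wav, .WAV, .txt).
    for ext in wav WAV txt; do
        [ -f "$src/$base.$ext" ] && mv "$src/$base.$ext" "$dst/$new.$ext"
    done
    n=$((n + 1))
done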

Thanks!

Script To Recursively Enter Subdirectories And Rename Files Sequentially From Scratch

I am new to Bash scripting.

I have a main directory called Photos which has many subdirectories like People, Places and Things. Each of these subdirectories is populated by other subdirectories and lots of JPG photo images.

The digital cameras name the files in a way that is difficult to manage with web hosting.

I would like to go to each directory and subdirectory and rename the photos 1.jpg, 2.jpg, 3.jpg, etc. so that I can use a simple XML template to access them by specifying only a hosting directory.

I tried to use the following script:

#! /bin/bash

cd /home/paul/test

find . -name "*.jpg" -print0 | rename -v 's/.+/our $i; sprintf("%d.jpg", 1+$i++)/e' * -vn

exit 0

It successfully renames all of the files in all of the directories, but it does not restart the numbering for each new subdirectory. So first it goes through Photos and renames the three JPG files there 1.jpg, 2.jpg and 3.jpg, and then it opens the first subdirectory People and names the three JPG files there 4.jpg, 5.jpg and 6.jpg. Next it moves to the next subdirectory and continues the sequential renaming until it is done.

I want it to restart sequential renaming with each new subdirectory, so that after renaming the three JPG files in Photos to 1.jpg, 2.jpg and 3.jpg, it moves to the first subdirectory and renames the JPG files there starting with 1.jpg again.

That way I use the links 1.jpg, 2.jpg, 3.jpg, etc in the XML template and just change the directory name to download the photos from the web.
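For what it's worth, a minimal sketch of a per-directory counter, assuming the Photos tree from above and that no directory already contains files named 1.jpg, 2.jpg, ... that could collide mid-rename:

Code:
#!/bin/bash
# Visit every directory under the tree and number its JPGs from 1.
find /home/paul/test -type d | while read -r dir; do
    i=1
    for f in "$dir"/*.jpg; do
        [ -e "$f" ] || continue    # directory has no .jpg files
        mv "$f" "$dir/$i.jpg"
        i=$((i + 1))
    done
done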

Thanks for any help you can give me.

Disk Space "Used" In Df Is Nowhere To Be Found With Du

Hello,

I am facing an issue with a filesystem (/dev/sda3): I see around 365GB of space used on it when I look at the host with the "df -h" command.

Code:
[root@srv_omega /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             443G  365G   56G  87% /
tmpfs                  95G   56K   95G   1% /dev/shm
/dev/sda1             484M   39M  421M   9% /boot
/dev/sdb1             3.6T  1.3T  2.2T  36% /hadoop/disk1
/dev/sdc1             3.6T  1.3T  2.2T  37% /hadoop/disk2
/dev/sdd1             3.6T  1.3T  2.2T  36% /hadoop/disk3
/dev/sde1             3.6T  1.3T  2.2T  37% /hadoop/disk4
/dev/sdf1             3.6T  1.3T  2.2T  36% /hadoop/disk5
/dev/sdg1             3.6T  1.3T  2.2T  36% /hadoop/disk6
/dev/sdh1             3.6T  1.3T  2.2T  36% /hadoop/disk7
/dev/sdi1             3.6T  1.3T  2.2T  36% /hadoop/disk8
/dev/sdj1             3.6T  1.3T  2.2T  36% /hadoop/disk9
/dev/sdk1             3.6T  1.3T  2.2T  36% /hadoop/disk10
/dev/sdl1             3.6T  1.2T  2.3T  36% /hadoop/disk11
/dev/sdm1             3.6T  1.3T  2.2T  36% /hadoop/disk12
/dev/sdn1             3.6T  1.3T  2.2T  36% /hadoop/disk13
/dev/sdo1             3.6T  1.3T  2.2T  37% /hadoop/disk14
/dev/sdp1             3.6T  1.1T  2.4T  30% /hadoop/disk15
cm_processes           95G  8.2M   95G   1% /var/run/cloudera-scm-agent/process

I have checked whether any hidden files might explain the usage; no joy.

Code:
[root@srv_omega /]# pwd
/
[root@srv_omega /]#  ls -lrtha
total 121K
drwxr-xr-x    2 root root 4.0K Jun 28  2011 srv
drwxr-xr-x    2 root root 4.0K Jun 28  2011 mnt
drwxr-xr-x    2 root root 4.0K Jun 28  2011 media
drwxr-xr-x    2 root root 4.0K Dec 20  2012 cgroup
drwx------    2 root root  16K Jun  2  2014 lost+found
drwxr-xr-x    2 root root 4.0K Jun  2  2014 selinux
-rw-r--r--    1 root root    0 Jun  3  2014 .autorelabel
drwxr-xr-x   18 root root 4.0K Jun  5  2014 hadoop
drwxr-xr-x   21 root root 4.0K Jun  5  2014 var
dr-xr-xr-x    9 root root  12K Jun 20  2014 lib64
dr-xr-xr-x    2 root root  12K Jun 21  2014 sbin
dr-xr-xr-x    2 root root 4.0K Jun 21  2014 bin
dr-xr-xr-x    5 root root 1.0K Jun 22  2014 boot
dr-xr-x---    5 root root 4.0K Jun 22  2014 root
drwxr-xr-x    6 root root 4.0K Jun 22  2014 opt
drwxr-xr-x    3 root root 4.0K Dec 10 19:11 home
dr-xr-xr-x   13 root root 4.0K Dec 12 16:18 lib
dr-xr-xr-x 1140 root root    0 Apr 30 15:11 proc
drwxr-xr-x   13 root root    0 Apr 30 15:11 sys
-rw-r--r--    1 root root    0 Apr 30 15:11 .autofsck
drwxr-xr-x    2 root root    0 Apr 30 15:11 misc
drwxr-xr-x    2 root root    0 Apr 30 15:11 net
drwxr-xr-x   15 root root 4.0K Apr 30 15:12 usr
drwxr-xr-x   19 root root 4.6K Apr 30 15:12 dev
dr-xr-xr-x   27 root root 4.0K Apr 30 15:12 ..
dr-xr-xr-x   27 root root 4.0K Apr 30 15:12 .
drwxr-xr-x  122 root root  12K May  4 03:33 etc
drwxrwxrwt   16 root root 4.0K May  7 06:14 tmp

So I tried to find where the space is used with the "du -sh" command:

Code:
[root@srv_omega /]# pwd
/
[root@srv_omega /]# du -sh *
7.8M    bin
29M     boot
4.0K    cgroup
280K    dev
26M     etc
19T     hadoop
124K    home
144M    lib
26M     lib64
16K     lost+found
4.0K    media
0       misc
4.0K    mnt
0       net
7.9G    opt
du: cannot access `proc/9170/task/27326/fdinfo/538': No such file or directory
du: cannot access `proc/45119/task/45119/fd/4': No such file or directory
du: cannot access `proc/45119/task/45119/fdinfo/4': No such file or directory
du: cannot access `proc/45119/fd/4': No such file or directory
du: cannot access `proc/45119/fdinfo/4': No such file or directory
du: cannot access `proc/45160': No such file or directory
0       proc
3.8M    root
17M     sbin
4.0K    selinux
4.0K    srv
0       sys
3.9M    tmp
2.6G    usr
16G     var

So as far as I understand, /hadoop is the only suitable suspect, as the cumulative size of all the other folders on "/" is well below 365GB.

Code:
[root@srv_omega hadoop]# cd /
[root@srv_omega /]# cd /hadoop
[root@srv_omega hadoop]# ls -lrtha
total 72K
drwxr-xr-x  2 root root 4.0K Jun  5  2014 disk16
drwxr-xr-x 18 root root 4.0K Jun  5  2014 .
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk1
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk11
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk10
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk13
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk12
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk14
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk2
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk4
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk3
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk6
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk5
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk8
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk7
drwxr-xr-x  4 root root 4.0K Jun 22  2014 disk9
drwxr-xr-x  5 root root 4.0K Nov 19 20:02 disk15
dr-xr-xr-x 27 root root 4.0K Apr 30 15:12 ..

All the folders from disk1 to disk15 are mount points for other filesystems, so the folder disk16 seems to be the only option, but there is nothing in it.

Code:
[root@srv_omega hadoop]# cd disk16/
[root@srv_omega disk16]# ls -lrtha
total 8.0K
drwxr-xr-x 18 root root 4.0K Jun  5  2014 ..
drwxr-xr-x  2 root root 4.0K Jun  5  2014 .
[root@srv_omega disk16]#

I just don't get it; no folder seems responsible for those 365GB...

Any idea how I could find out where those 365GB are being used?
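In case it helps, two classic explanations when df and du disagree like this are data hidden underneath a mount point (written there while the disk was not mounted) and files that were deleted while a process still holds them open. A sketch of the usual checks (lsof may need to be installed):

Code:
# -x keeps du on the / filesystem, so the mounted /hadoop disks
# are not counted at all this time.
du -xsh /* 2>/dev/null | sort -h

# Files with a link count of 0 are deleted but still held open;
# their space is only released when the owning process exits.
lsof +L1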

Script To Rename Files

Dear all,
I want to create a script to rename multiple files, converting the Unix timestamp in each name to a readable date. For example:

filename_1421907815_department.txt

renames to

filename_2015_01_22_08_23_department.txt
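A minimal sketch, assuming the names always look like prefix_EPOCH_suffix.txt and that GNU date is available (its -d @epoch form does the conversion):

Code:
#!/bin/bash
for f in *_[0-9]*_*.txt; do
    epoch=$(echo "$f" | cut -d_ -f2)            # the Unix timestamp field
    stamp=$(date -d "@$epoch" +%Y_%m_%d_%H_%M)  # e.g. 2015_01_22_08_23
    mv "$f" "${f/_${epoch}_/_${stamp}_}"
done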

Python Ftplib

Hello all,

Please help me with Python's ftplib. I was trying to copy files from my Linux machine to a Windows server using ftplib, and everything was working well, but I'm only able to copy files from the same directory the script is in. How do I copy files from a different directory? I always get a "file not found" error message. Here's my code:

Code:
import ftplib
import os
import socket

tester_name = str(socket.gethostname())

def upload(ftp, path):
    # Send text files line by line, everything else in binary mode.
    ext = os.path.splitext(path)[1]
    if ext in (".txt", ".htm", ".html"):
        ftp.storlines("STOR " + os.path.basename(path), open(path))
    else:
        ftp.storbinary("STOR " + os.path.basename(path), open(path, "rb"), 1024)


parse_source_path = '/path/to/where/i/go/'
parse_source_file_list = os.listdir(parse_source_path)

ftp = ftplib.FTP("server_IP")
ftp.login("username", "pass")

folder_list = []

ftp.dir(folder_list.append)

if str(tester_name) not in str(folder_list):
    ftp.mkd(tester_name)
    ftp.cwd(tester_name)
    for name in parse_source_file_list:
        print name
        # os.listdir() returns bare file names; join each one with the
        # source directory, otherwise open() looks in the script's own
        # working directory and raises "file not found".
        upload(ftp, os.path.join(parse_source_path, name))


else:
    print "later"

Exporting Log Data To A File That Matches Stdout

hey guys,

Let's say I want to find out which log files have NTP-related information in them. I use cat and grep to search through the files in /var/log and then redirect the output to a file. This is the command...

# cat /var/log/* | grep ntp > /home/log.txt

The file created by this command will not include the log files the entries are a part of. Why not? For example, if you run this same command without redirecting to /home/log.txt, stdout shows you which file each log entry is in. Hope I'm making sense here. My question is: is there a clever way to redirect to a file so that the file created is structured exactly like the stdout of the command below?

# cat /var/log/* | grep ntp
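For what it's worth, grep only prefixes each match with a "filename:" when it is handed the file names itself; cat merges everything into one anonymous stream first, so grep has no names to print, on screen or in a file. A sketch that keeps the prefixes in the redirected file:

Code:
# grep prints "file:match" when given multiple files;
# -H forces the prefix even if the glob matches only one file.
grep -H ntp /var/log/* > /home/log.txt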

Inquiry On A Bash Script Using Sed And Grep -c

Hi Everyone,

I need some help with my bash script. I'm trying to rename a certain line in a file, which may occur one or more times.

IDXCOUNT=$(grep -c 'index .* on ' "$FILENAME")

for n in $(seq 1 "$IDXCOUNT"); do
    timestamp=$(date +"%s")
    echo "Renaming index idx_$timestamp.."
    if [ "$n" -eq 1 ]; then
        # 0,/regex/ limits sed to the first matching line; the empty
        # s// pattern reuses that same regex (both are GNU sed features).
        sed -i "0,/^index [^)]* on /s//index idx_$timestamp on /" "$FILENAME"
    fi
done

My problem is that when the sed target occurs two or more times, the loop can generate the same idx_$timestamp more than once, which leaves duplicate index names when the script finishes. My goal is to have the script recognize that there are multiple index lines in the file and rename them one by one.
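If it helps, here is a sketch of a different approach: let awk rewrite every matching line in a single pass, appending a counter so each name is unique even when several lines are renamed within the same second (systime() is GNU awk):

Code:
# Rewrite every "index ... on" line, numbering the generated names.
awk 'BEGIN { ts = systime() }
     /^index .* on / { sub(/^index [^)]* on /, "index idx_" ts "_" (++n) " on ") }
     { print }' "$FILENAME" > "$FILENAME.tmp" && mv "$FILENAME.tmp" "$FILENAME"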

I'm new to bash so I'm not sure if I explained my issue well but I will appreciate any help!!

Thanks!

Need Help In Bash Scripting

I have two files which have exactly the same number of lines.
For each N, I want line N of the first file to become the name of a new file, and line N of the second file to become that file's content: line 1 of the first file names a file holding line 1 of the second file, line 2 names a file holding line 2 of the second file, and so on.
I am trying to do it using a for loop, but I cannot see how to iterate over both lists at once.
This is what I have done
Code:
IFS=$'\n'
var=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\".*/\2/g' 1797.csv) # filenames of all files
var2=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\"\(.*\)\"$/\3/g' 1797.csv) # contents of all files
for j in $var;
do
#Here I do not know how to use $var2
done
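A sketch of one way to finish it: paste pairs line N of the names with line N of the contents (tab-separated, assuming the contents themselves contain no tabs), so a single while-read loop can consume both:

Code:
paste <(printf '%s\n' "$var") <(printf '%s\n' "$var2") |
while IFS=$'\t' read -r name content; do
    printf '%s\n' "$content" > "$name"
done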

Please help.

Using Sed To Rename Files Under Multiple Directories

Hi all,

I am trying to use sed to rename multiple files under multiple directories.

So let's say, for example, under /root I have 2 directories as follows:

# ll {test1,test2}
test1:
total 0
-rw-r--r--. 1 root root 0 Apr 10 19:16 authkey.apollo

test2:
total 0
-rw-r--r--. 1 root root 0 Apr 10 19:16 authkeys.apollo

If I want to change apollo to jupiter, I used this:
for i in `ls {test1,test2} | grep -i 'apollo'`; do echo $i; sed -i 's/apollo/jupiter/g'; done

but it seems the file path gets missed in the sed call. Is there an easier way or a better approach to make this work?
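For what it's worth, sed -i edits a file's contents, not its name, so a rename needs mv; a minimal sketch with bash's ${var/old/new} expansion, assuming the two test directories from the example:

Code:
for f in /root/test1/*apollo* /root/test2/*apollo*; do
    [ -e "$f" ] || continue              # glob matched nothing
    mv "$f" "${f/apollo/jupiter}"        # replace "apollo" in the path
done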

Thanks in advance.

Converting Pdf Files To Jpg Files

I have just downloaded ImageMagick, and when I try to convert a PDF file to a JPG file I get the message "no such file or directory". The file I want to convert is in Home/Downloads.
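That error usually just means convert was run from a different directory than the file; a sketch, using a hypothetical file name (ImageMagick also needs Ghostscript installed to read PDFs):

Code:
cd ~/Downloads                 # go to where the file actually is
convert myfile.pdf myfile.jpg  # or give full paths instead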