Removing Multiple Lines From Cell Data In A .csv File

I am trying to process some .csv files with Linux as follows:

Some fields have data with newline characters embedded, like so:

"Bob Smith
531 Pennsylvania Avenue
Washington, DC"

(I verified the existence of the quotation marks via WordPad. The file is too large to easily edit in WordPad to get all the data for each row onto a single line.)

What Linux command would I use on the files to get the data in each cell onto one line?

I have tried:

1. awk -v RS="" '{gsub (/\n/,"")}1' file > newfile

but the cell data was still being read as if "531 Pennsylvania Avenue" were a brand-new row in the CSV file.

2. Command 1 followed by awk -v RS="" '{gsub (/\r/,"")}1' newfile > finalFile

but that resulted in all of the data in the file being put onto a single line.

3. awk -v RS="" '{gsub (/\r\n/,"")}1' file > newFile

But that result was the same as attempt number 2.

How can I preprocess the file so that:

"Bob Smith
531 Pennsylvania Avenue
Washington, DC"

is read as a single field on a single line, as part of the row it belongs to, like this:

"Bob Smith 531 Pennsylvania Avenue Washington, DC"


Similar Content



How Can I Print A Specific Range Of Numbers From A File

hello,

I am trying to make a table from some files. I used this to count how many "RD_" fields I have in my file. Quote:
grep -o 'RD_' $f|grep -c 'RD_'
For example, I got 5 "RD_" fields; now I want to print that many fields from another file, starting from the 2nd field. I did it manually like Quote:
awk 'NR==1{print"{"$2","$3","$4","$5","$6","0.0000",""0.0000""}"","}' $file
I want to make the two steps work together and be a bit more automatic, roughly like Code:
awk 'NR==1{print"{"$2" to "$5"," append zeros to make it total 7 fields"}"","}' $file
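
For reference, a minimal sketch of one way to chain the two steps (the padding to seven entries with 0.0000 is taken from the manual example above; $f and $file are the same variables used there):

Code:
# count the RD_ occurrences, then print that many fields from line 1 of $file,
# starting at field 2, padded with 0.0000 up to seven entries in total
n=$(grep -o 'RD_' "$f" | grep -c 'RD_')
awk -v n="$n" 'NR==1{
    out = ""
    for (i = 2; i <= n + 1; i++) out = out $i ","
    for (j = n + 1; j <= 7; j++) out = out "0.0000,"
    sub(/,$/, "", out)                    # drop the trailing comma
    print "{" out "},"
}' "$file"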


Your comments would be appreciated.
Thanks a lot.

Need Help In Bash Scripting

I have two files which have exactly the same number of lines.
I want the first line of the first file to be the filename of a new file, and the content of that new file to be the first line of the second file.
Then the second line of the first file should be the filename of another new file, and the content of that file should be the second line of the second file.
Then the third line of the first file should be the filename of yet another new file, with the third line of the second file as its content,
and so on...
I am trying to do it with a for loop, but I cannot see how to drive two loops at once.
This is what I have done
Code:
IFS=$'\n'
var=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\".*/\2/g' 1797.csv) # filenames of all files
var2=$(sed 's/\"http\(.*\)\/\(.*\).wav\"\,\"\(.*\)\"$/\3/g' 1797.csv) # contents of all files
for j in $var;
do
#Here I do not know how to use $var2
done
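
For reference, a minimal sketch of one way to pair the two lists line by line without nested loops; it assumes the extracted names are safe to use as filenames and that neither list contains tab characters:

Code:
# walk both lists in lockstep: line N of $var names the new file,
# line N of $var2 becomes its content
paste <(printf '%s\n' "$var") <(printf '%s\n' "$var2") |
while IFS=$'\t' read -r name content; do
    printf '%s\n' "$content" > "$name"
done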

Please help.

Search For A Character In Specific Word In File And Replace It In The Word

Hi all,
I have a requirement where I have a file. The contents of the file are:
#comments
VAR="abg"
RES=123
#comments
IC6790ABG="https://www.abc.com"
IC5678-vg="https://www.bhy.com"
IC-gy_567:78="https://www.gyt.com"
#comments
Variable names cannot contain characters like - or :, so
in this file I have to find words starting with IC and remove characters like - and : from them.
I want to change only the variable name, not the whole line.
I have used this sed command:

sed -i '/^IC/s/[^0-9 a-z A-Z _]*//g' file

When I use this command, it strips characters from the whole line. The output becomes:

#comments
VAR="abg"
RES=123
#comments
IC6790ABGhttpswwwabccom
IC5678vghttpswwwbhycom
ICgy56778httpswwwgytcom
#comments


But I want the output like this :

#comments
VAR="abg"
RES=123
#comments
IC6790ABG="https://www.abc.com"
IC5678vg="https://www.bhy.com"
ICgy_56778="https://www.gyt.com"
#comments

How can I get the desired output? Thanks for your help in advance.
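
For what it's worth, a minimal sketch of one way to restrict the substitution to the part of the line before the first =, so the URL is left alone (this assumes GNU sed, since -i is already being used):

Code:
# on lines starting with IC, repeatedly delete a "-" or ":" that occurs before the first "="
sed -i -e ':a' -e 's/^\(IC[^=]*\)[-:]/\1/' -e 'ta' file

The loop label makes sed keep removing one offending character at a time until none are left before the =; the characters inside the quoted URL are never touched because the match cannot cross the = sign.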

Grayhole Attack In NS2 Using AODV Protocol

I am getting the following error while running the Tcl script:
can't read "source1": no such variable
while executing
"set source1"
(file "gray.tcl" line 109)
abhishek@ubuntu:~/Ns2$ ns gray.tcl
num_nodes is set 9
INITIALIZE THE LIST xListHead
can't read "source=1": no such variable
while executing
"set source=1"
(file "gray.tcl" line 109)

Extract Info And Find/count Strings From Blocks Inside Text File

Hello

I have a text file which has blocks like
Code:
dir1/dir2/dir3/name_run_number1:
line1_run_number1_part1
line2_run_number1_part2
line3_run_number1_part3...

Each block is separated by a blank line, there is a ":" in the "header" of each one, and each block carries the same "number1" after the "run_" prefix.
What I want to do is, for each block, extract the "number1" shown in the first line and then, for the lines below, check parts 1-20 and give a message if a "partX" line is missing. Any bash or Python would be fine.
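
For reference, a minimal sketch in awk (gawk is assumed for the whole-array delete; the file name blocks.txt is a placeholder, and the part range 1-20 comes from the description above):

Code:
# read one blank-line-separated block at a time, pull the run number out of
# the header line, then report any missing part1 ... part20
awk -v RS='' '
{
    match($1, /_run_[^:]*/)                       # header like dir1/dir2/dir3/name_run_number1:
    run = substr($1, RSTART + 5, RLENGTH - 5)
    delete seen
    for (i = 2; i <= NF; i++)                     # the remaining fields are the block lines
        if (match($i, /_part[0-9]+/))
            seen[substr($i, RSTART + 5, RLENGTH - 5)] = 1
    for (p = 1; p <= 20; p++)
        if (!(p in seen)) printf "run %s: part%d is missing\n", run, p
}' blocks.txt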

Thanks

Can No Longer Mount Data DVDs

For whatever reason, when I try to mount a data DVD I get the following message:

Unable to mount [disk name]
Error mounting: mount exited with exit code 1: helper failed with:
mount: mount point /media/cdrom does not exist

Yet if I put a video DVD in, I can play it in any of my video apps, though I also can't mount those in a file browser. I have no idea what has caused this. I haven't backed up data to discs in a long time; I only did so today because I'm running low on space on one of my drives.

I was told elsewhere that I needed to alter a file: /etc/udev/rules.d/70-persistent-cd.rules

But I have no idea how to do this, and what I found there does not look like what was shown. This is what my file looks like:

Code:
# This file maintains persistent names for CD/DVD reader and writer devices.
# See udev(7) for syntax.
#
# Entries are automatically added by the 75-cd-aliases-generator.rules
# file; however you are also free to add your own entries provided you
# add the ENV{GENERATED}=1 flag to your own rules as well.
# TSSTcorp_CDDVDW_SH-S223C (pci-0000:00:1f.2-scsi-1:0:1:0)
SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:1f.2-scsi-1:0:1:0", SYMLINK+="cdrom", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:1f.2-scsi-1:0:1:0", SYMLINK+="cdrw", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:1f.2-scsi-1:0:1:0", SYMLINK+="dvd", ENV{GENERATED}="1"
SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", ENV{ID_PATH}=="pci-0000:00:1f.2-scsi-1:0:1:0", SYMLINK+="dvdrw", ENV{GENERATED}="1"

Any suggestions would be appreciated.
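
For what it's worth, the error itself only complains that the mount point is missing, so a minimal manual workaround might look like this (the device node /dev/sr0 is an assumption; on your system it may be /dev/cdrom or similar):

Code:
# recreate the missing mount point, then mount the disc by hand
sudo mkdir -p /media/cdrom
sudo mount /dev/sr0 /media/cdrom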

Python Ftplib

Hello all,

Please help me with Python ftplib. I was trying to copy files from my Linux machine to a Windows server using ftplib. Everything was working well, but I'm only able to copy files from the same directory the script is in. How do I copy files from a different directory? I always get a "file not found" error message. Here's my code:

Code:
import ftplib
import os
import socket

tester_name = str(socket.gethostname())
def upload(ftp, file):
    ext = os.path.splitext(file)[1]
    if ext in (".txt", ".htm", ".html"):
        ftp.storlines("STOR " + file, open(file))
    else:
        ftp.storbinary("STOR " + file, open(file, "rb"), 1024)



parse_source_path = ('/path/to/where/i/go/')
parse_source_file_list = os.listdir(parse_source_path)

ftp = ftplib.FTP("server_IP")
ftp.login("username", "pass")

folder_list = []

ftp.dir(folder_list.append)

if str(tester_name) not in str(folder_list) :
    ftp.mkd("%s"%tester_name)
    ftp.cwd("%s"%tester_name)
    for files in parse_source_file_list :
        print files
        upload(ftp, files)


else :
    print "later"

Diffing The Line Numbers

Hi guys,

I am trying to find the "size" of a "block" of data in LARGE data files; the example below, test_data.txt, is very simplified. By "size" I mean the difference in line numbers between blocks, and the "size" will be constant throughout the file. So:

1234 6.600000 4321
1234 8.500000 4321
1234 1.800000 4321
1234 2.300000 4321
1234 8.500000 4321
1234 2.800000 4321

If I define a block as starting whenever I find 8.500000 in the second column, then in the example the block size would be 3, because 8.500000 occurs on the 5th line and on the 2nd. Right now I am using

Code:
 grep -n "8.500000" test_data.txt | cut -f1 -d:

and/or

Code:
 awk '/8.500000/ {print FNR}' test_data.txt

Obviously I don't remember how to tag text as code?

BTW, the grep command is much, much faster.

Both of these commands give an entire list of line numbers (a long list for files greater than a gig), which I then have to subtract from one another to come up with 3 in the example. Not that I'm opposed to doing the math, but I would think awk or grep should be able to do this for me.
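
For reference, a minimal sketch that lets awk do the subtraction and stop after the first two matches, since the block size is constant (test_data.txt and the 8.500000 marker are from the example above):

Code:
# print the line-number gap between the first two rows whose 2nd column is 8.500000
awk '$2 == "8.500000" { if (prev) { print FNR - prev; exit } prev = FNR }' test_data.txt

The exit matters for the multi-gigabyte files, since awk stops reading as soon as it has the answer.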

Ideas?

tabby

Error In UMTS Vertical Handover With WLAN

Hey,
please find attached the code.
I get this error:
can't read "mac_(0)": no such variable
while executing
"subst $[subst $var]"
(procedure "_o112" line 5)
(SplitObject set line 5)
invoked from within
"$iface2 set mac_(0)"
invoked from within
"set tmp2 [$iface2 set mac_(0)] "
(file "umts.tcl" line 319)




Can anyone help to solve the error?

Rsync, Reliable "copy And Paste" Type Of Backup In Case Things Break?

What I did in Windows was create images of my drive and restore them.

In Linux I am running

Code:
rsync -aAXv --exclude={"/home/*","/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /* /path/to/backup/folder

and this creates a folder for me with all my files, and apparently saves metadata like permissions and paths...

Since I'm using Arch and things break sometimes, I sometimes end up booted into a CLI with errors and cannot figure my way out since I'm a noob... Would I be able to just delete my entire root and replace it with the rsync backup without a problem?
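
For reference, a minimal sketch of the restore direction, assuming you boot from a live USB and that the device name and backup path below are placeholders for your own:

Code:
# from a live environment: mount the broken root, then copy the backup back over it
mount /dev/sdXn /mnt                          # your root partition (device name assumed)
rsync -aAXv /path/to/backup/folder/ /mnt/     # trailing slashes: copy the contents into /mnt

The directories excluded from the backup (/dev, /proc, /sys, and so on) still need to exist as empty directories on the restored root, and you may have to reinstall or reconfigure the bootloader before the system boots again.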