Using GNU tools to quickly check your input files - duplicate lines


I’m currently doing data migration in a specific problem domain, but I think what I’m sharing here can be applied very generically.

 

The checks you usually do on an input file can be supported (and automated) by small GNU tools.

 

As an example, it was agreed that the input file must not contain duplicate lines - let's check whether it does.

Also, it's always a good idea to know how many lines you are dealing with, so that is where we start:

 

1. Get the number of lines:

wc -l [filename]

 

2. Get the number of unique lines:

uniq [filename] | wc -l

 

(If the numbers are the same, there are no adjacent duplicates)

 

3. Maybe the duplicates are spread across the file (i.e. they are non-adjacent)? Let's check:

sort [filename] | uniq | wc -l
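
A quick illustration with a made-up file (the name and contents are just an assumption for this sketch): suppose data.txt contains the three lines A, B, A. Then:

wc -l data.txt                 # 3 lines in total
uniq data.txt | wc -l          # 3 - the two A's are not adjacent, so uniq keeps both
sort data.txt | uniq | wc -l   # 2 - after sorting, the A's are adjacent and collapse into one

Only the sorted count reveals the non-adjacent duplicate.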

 

4. If we find duplicates, we want to give qualified feedback, e.g. what the duplicate lines were (-d) and how often they appear in the file (-c):

 

sort [filename] | uniq -dc > duplicate_lines_please_check.txt
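
For the made-up data.txt from above (again just an assumed example), the report would look roughly like this, since uniq -c prints the count in front of each line:

sort data.txt | uniq -dc
      2 A

The line "A" appears twice in the file; "B" is not listed because it is not duplicated.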

 

Explanation:

uniq writes the lines of the input file to standard output, collapsing adjacent duplicate lines into one.

-d will print (only) the duplicated lines - one output line per group of duplicates

-D will print ALL duplicate lines, i.e. every occurrence

-c also adds a count of how often each line appears.
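
To tie the pieces together, here is a minimal sketch of how the whole check could be automated in a small shell script. The script name, the report file name and the exit code are my own assumptions, not part of the original agreement:

#!/bin/sh
# check_duplicates.sh - report duplicate lines in the given input file
FILE="$1"                               # e.g. ./check_duplicates.sh input.txt
TOTAL=$(wc -l < "$FILE")                # total number of lines
UNIQUE=$(sort "$FILE" | uniq | wc -l)   # number of distinct lines
echo "$FILE: $TOTAL lines, $UNIQUE unique"
if [ "$TOTAL" -ne "$UNIQUE" ]; then
    # write the qualified feedback: each duplicate line with its count
    sort "$FILE" | uniq -dc > duplicate_lines_please_check.txt
    echo "duplicates found - see duplicate_lines_please_check.txt"
    exit 1
fi
echo "no duplicates found"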

