Tuesday, July 24, 2012

Remove duplicate entries from a Perl scalar array

How do you remove duplicate entries from a Perl scalar array?

Ans ==>
my @duplicate_entries = (1,2,3,4,3,22,45,1,22,76,456,12,45,22,876,456,847,14,6,365,7,4,33,5);
my (%hash, @result);
foreach my $x (@duplicate_entries) {
    unless (exists $hash{$x}) {   # 'exists' also copes with false values like 0 or ''
        push(@result, $x);        # keep the first occurrence only
        $hash{$x} = 1;
    }
}

>> @result will hold the unique elements, in their original order.
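The same first-seen-wins hashing trick works as a shell one-liner as well; this awk version (my sketch, not part of the original answer) dedupes any list of lines while preserving order:

```shell
# seen[] plays the role of %hash above: print a line only the first time it appears
printf '%s\n' 1 2 3 4 3 22 45 1 22 | awk '!seen[$0]++'
```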

Write a program which identifies the outgoing IP address of the current host. (Note: behind NAT this is the local interface address, not necessarily the public one.)

use IO::Socket::INET;

sub get_tm_ip {
    my ($host_to_contact) = shift;   # any reachable hostname (e.g. google.com)
    my ($port) = 22;                 # any open remote port works; 80 is often reachable too
    my $sock = IO::Socket::INET->new(
            PeerAddr => $host_to_contact,
            PeerPort => $port,
            Proto    => "tcp") or die "Cannot connect to $host_to_contact: $!";
    my $localip = $sock->sockhost;   # local address chosen for this connection
    return ($localip);
}



Get unique id for a Linux host.

Sometimes hostid returns the same value for two different hosts. In that case, the following command should give a unique ID for a Linux host:

[root@host ~]# dmidecode|grep UUID|cut -d: -f2|tr -d ' '
420DE5AC-31FE-5A1A-DDA3-C971902A228D
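To see what the cut/tr steps are doing, here is the same pipeline run over a canned sample line (the tab-indented "UUID: ..." format is an assumption about typical dmidecode output; the UUID is the one shown above):

```shell
# Simulated dmidecode output line (dmidecode indents with a tab);
# cut takes everything after the ':' and tr strips the leading space
printf '\tUUID: 420DE5AC-31FE-5A1A-DDA3-C971902A228D\n' \
    | grep UUID | cut -d: -f2 | tr -d ' '
```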

Inserting a column [with value] at a given position in a file using AWK [one-liner]

very useful:
awk -v FS='|' -v OFS='|' '{$3=$3"|"4} 1' 1.txt 

Input:
1|2|3|5 
1|2|3|5 
1|2|3|5 
1|2|3|5 

Output:
1|2|3|4|5 
1|2|3|4|5 
1|2|3|4|5 
1|2|3|4|5 
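A quick way to verify the one-liner without creating 1.txt is to feed it a couple of rows on stdin; writing the separator as OFS instead of a hard-coded "|" also keeps the delimiter defined in one place:

```shell
# Insert the value 4 after column 3; assigning to $3 makes awk
# rebuild the record using OFS as the separator
printf '1|2|3|5\n1|2|3|5\n' \
    | awk -v FS='|' -v OFS='|' '{$3 = $3 OFS 4} 1'
```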

Wednesday, July 18, 2012

variables inside awk print and sed

Simple & useful: [many times I have forgotten this trick]


[mandy ~]$ echo $v1
75

[mandy ~]$ awk -F "|" ' { print $'"$v1"' } ' file_name
mandar
mpande
mp

[mandy ~]$ sed 's/'"$old_val"'/'"$new_val"'/g' file_name
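A self-contained run of both tricks, using a made-up one-line input (the field values are borrowed from the output above); awk's -v flag is a more robust alternative to splicing the shell variable into the quoted program:

```shell
v1=2
# quote-splicing: close the single quotes, let the shell expand $v1, reopen them
printf 'mandar|mpande|mp\n' | awk -F '|' '{ print $'"$v1"' }'
# safer equivalent: pass the shell variable in with -v
printf 'mandar|mpande|mp\n' | awk -F '|' -v col="$v1" '{ print $col }'
# the same splicing idea with sed
old_val=mpande; new_val=newname
printf 'mandar|mpande|mp\n' | sed 's/'"$old_val"'/'"$new_val"'/g'
```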

Monday, July 9, 2012

XML Pull Parser

Found a very good article/link on XML parsing:

Need to check whether this XML pull parser is available for Perl/Python.
This kind of parsing seems to be faster and more efficient than SAX [and of course faster/better than DOM :)].

XML Pull Parser:
http://www.bearcave.com/software/java/xml/xmlpull.html