Linux command line tools for parsing and analyzing logs
Linux logs are a valuable resource for system administrators, network administrators, developers, and security professionals.
They record a timeline of events on a Linux system, such as operating system events, application activity, and user actions (for example, login attempts).
In this thread, I'll go over a few tools that can help you parse and analyze logs more effectively from the command line.
[+] Grep
Everyone's favourite.
"Grep" is an abbreviation for "Global Regular Expression Print." Grep is a Linux/Unix command-line tool that searches a file for a string of characters. The pattern that is searched in the file is referred to as the regular expression.
When it finds a match, it prints the result on the terminal screen. When you want to search through large log files, the grep command comes in handy. Here is the basic syntax for grep:
$ grep <options> <pattern> filename
If you want to search through compressed files, zgrep is your best friend.
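For example, here's how I'd hunt for failed SSH logins (the auth.log path is Debian/Ubuntu; on RHEL-family systems it's /var/log/secure):
$ grep -i "failed password" /var/log/auth.log
$ zgrep -i "failed password" /var/log/auth.log.2.gz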
[+] Ngrep
Ngrep (network grep) is a straightforward yet effective network packet analyzer. It is a network layer grep-like tool that matches traffic passing through a network interface.
The difference is that ngrep matches regular or hexadecimal expressions against the data payloads of packets (the actual information or message in the sent data, not the auto-generated metadata). Here is the basic syntax:
$ ngrep -I file.cap
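Ngrep can also watch live traffic. A quick sketch that matches HTTP GET requests on eth0 (swap in your own interface):
$ ngrep -d eth0 -q "GET" port 80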
[+] Cut
Cut is a Linux/Unix command-line utility that cuts out sections of each line from files or piped data and prints the selected parts to standard output (stdout), i.e. your terminal screen.
The basic usage of the cut command is very simple:
$ cut -d " " -f 3 file.log
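For instance, assuming a space-delimited access.log where the client IP is the first field (check your own log format first), this pulls every client IP:
$ cut -d " " -f 1 access.log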
[+] Sed
SED is a text stream editor that is used on Unix systems to swiftly and efficiently edit files. The program searches through a text file, and replaces, inserts, and deletes lines without opening the file in a full-fledged text editor.
Here is the basic usage of the sed command:
$ sed 's/pattern/replace/g' syslog.txt
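A couple of other everyday sed moves, using a hypothetical app.log:
$ sed '/DEBUG/d' app.log
$ sed -n '100,120p' app.log
The first drops every DEBUG line; the second prints only lines 100 through 120.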
[+] Sort
The sort command is used to sort file contents and output the results to standard output. Rearranging the contents of a file numerically or alphabetically, as well as putting data in ascending or descending order, increases readability.
Here is how you can sort a file:
$ sort log.txt
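A few flags worth knowing (the file names here are just placeholders):
$ sort -n sizes.txt
$ sort -r log.txt
$ sort -t ":" -k 3 -n /etc/passwd
-n sorts numerically, -r reverses the order, and -t/-k let you sort on a specific field, here the UID column of /etc/passwd.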
[+] Uniq
The uniq (unique) command is another useful tool for parsing logs. It's used to omit/remove duplicates from a file, hence the name uniq. Uniq only removes adjacent duplicates, so sort the file before piping it to uniq; otherwise, non-adjacent duplicates will slip through.
Here is an example of using uniq to remove duplicates:
$ sort log.txt | uniq
The preceding example is simply equivalent to the 'sort -u' command; it is up to you to choose which one you prefer. As for me, I tend to use both, depending on which comes to mind first.
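Where uniq really shines is the -c flag, which prefixes each line with its count. A classic top-talkers one-liner, assuming a file of IP addresses called ips.txt:
$ sort ips.txt | uniq -c | sort -rn | head -5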
[+] Diff
The diff command compares two text files line by line and outputs their differences.
$ diff log1.txt log2.txt
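If you prefer the patch-style output most tooling expects, the unified format is one flag away:
$ diff -u log1.txt log2.txt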
The vimdiff command is also another great command for comparing files. vimdiff launches Vim on two to eight files. Each file is given its own window.
The differences between the files are highlighted, which makes it a convenient way to review changes and move modifications from one version of a file to another.
$ vimdiff log1.txt log2.txt
[+] awk/Gawk
Awk is a command-line utility and scripting language for manipulating data and formatting output reports. To achieve the desired result, this command supports a number of variables, functions, and logical operators.
Awk requires no compilation and offers advanced text-processing capabilities. You can use it to write small programs that scan files for matching patterns and process the data they contain, and a single awk invocation can search multiple files at once.
Here is an example of using awk to print all the user names in the /etc/passwd file:
$ awk -F : '{print $1}' /etc/passwd
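Awk really pays off on structured logs. Here's a sketch that pulls the most-requested paths returning 404, assuming an Apache-style access.log in the common/combined format (field 9 is the status code, field 7 the path):
$ awk '$9 == 404 {print $7}' access.log | sort | uniq -c | sort -rn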
That's all!
Thank you for getting this far. I hope you find this thread useful. If you found this thread valuable:
1. Toss us a follow for more daily threads on Linux, sysadmin, and security: @xtremepentest
2. Like and RT the first tweet so other Linux folks can find it too.