If I wanted to count the number of times each unique instance showed up, what would I do for that? Would I do uniq and then get the count for each instance by using grep for that specific phrase?
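Something like one of these, I mean (with a hypothetical access.log):

```bash
# one pass: sort groups identical lines together, uniq -c prefixes each with
# its count, and the final sort -rn puts the most frequent lines first
sort access.log | uniq -c | sort -rn

# or per phrase: grep -c prints the number of matching lines
grep -c 'specific phrase' access.log
```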
This is one of the most helpful tutorials out there for showing how powerful grep and pipes are. Thanks for sharing, and I hope you make more cool stuff.
To filter .log files using the cat, grep, cut, sort, and uniq commands, follow these steps:

1. Open your terminal.

2. Navigate to the directory containing the .log files you want to filter, using the `cd` command followed by the directory path. For example:

```bash
cd /path/to/your/log/files
```

3. Use the `cat` command to display the contents of a .log file. For instance:

```bash
cat your_log_file.log
```

4. To search for specific lines in the .log file, use the `grep` command. For example, to find all lines containing the word 'error':

```bash
grep 'error' your_log_file.log
```

5. To extract specific columns from the output, use the `cut` command, in the form `cut -d delimiter -f fields`. For example, if the columns are separated by a single space and you want the first column (note that cut reads from standard input here, so it expects piped input from another command):

```bash
cut -d ' ' -f1
```

6. To sort the lines alphabetically (or numerically with the -n flag), use the `sort` command. For example:

```bash
sort your_log_file.log
```

7. Finally, to remove duplicate lines, use the `uniq` command. Note that uniq only collapses adjacent duplicates, so its input should already be sorted:

```bash
sort your_log_file.log | uniq
```

By combining these commands, you can create a pipeline that filters .log files effectively. For instance:

```bash
cat your_log_file.log | grep 'error' | cut -d ' ' -f1 | sort | uniq
```

This displays the unique first columns of the lines containing the word 'error' in your_log_file.log.
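As a side note, the initial cat isn't strictly needed, since grep accepts a filename directly; the same pipeline (same hypothetical file name) can be written as:

```bash
# grep reads the file itself, so the pipeline starts one process earlier
grep 'error' your_log_file.log | cut -d ' ' -f1 | sort | uniq
```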
@Hackpens, hope you are doing well. Amazing videos, full of information you can't find in hours of training videos. Please create more; the community here is waiting!!!
Great video showing the power of the built-in command line tools. Remember, the command line (and your chosen shell, e.g. bash) interacts directly with the kernel. Control your hardware directly from your keyboard rather than depending on GUI interpreters that stand between you and the kernel, like on other operating systems.
Now dump all the unique IPs into a text file and run nslookup on each one. $50 says they're all located in China or Russia, or at least 98-99% of them. At least, that's what I always end up finding.
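Something like this, assuming the unique IPs were saved one per line in a hypothetical ips.txt:

```bash
# reverse-resolve each address; nslookup prints the PTR record if one exists
while read -r ip; do
    nslookup "$ip"
done < ips.txt
```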
Great introduction to the topic. A few things that I think are worth mentioning once people have learned the commands that were being demonstrated:

If the logs you're using have a variable number of spaces between columns (to make things look nice), that can mess up cut. To get around it you can use `sed 's/  */ /g'` to replace any run of spaces with a single space. You can also use awk to replace the sed/cut combo, but that's a whole different topic.

uniq also has the extremely useful -c flag, which adds a count of how many instances of each item there were.

As an aside, if people want to cut down on the number of commands used, you can do things like `grep expression filepath` or `sort -u` (on a new enough system), but in the context of this video it is probably better that people learn about the existence of the stand-alone utilities, which can be more versatile.

Once you're confident in using the tools mentioned in the video, but you still find that you need more granularity than the grep/grep -v combo, you can use grep's pattern characters, which represent concepts like "the start of a line" (^) or "anything" (.*). For example, `grep "^Hello.*World"` matches any line that starts with Hello and at some point also contains World, with anything or nothing in between/after. Those characters are the beginnings of regular expressions; grep supports far more of that syntax, but full regular expressions can be hard to wrap your mind around if you've never used them before. (If you don't really understand regular expressions just from reading this, that's fine. I'm just trying to give you the right term to Google, because once you know something's name it becomes infinitely easier to find resources on it.)
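Pulling a few of those suggestions together (just a sketch, reusing the hypothetical your_log_file.log from the other comments):

```bash
# squeeze runs of spaces down to one (tr -s ' ' also works), take the first
# column, then count each unique value, most frequent first
sed 's/  */ /g' your_log_file.log | cut -d ' ' -f1 | sort | uniq -c | sort -rn
```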
Thanks! That was informative. The only thing I would have done differently is flip the order of uniq -d and sort: fewer items to sort after uniq filters them out.
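One thing to double-check with that ordering: uniq only compares adjacent lines, so it can miss duplicates in unsorted input. A quick illustration:

```bash
printf 'a\nb\na\n' | uniq -d          # prints nothing: the two a's aren't adjacent
printf 'a\nb\na\n' | sort | uniq -d   # prints: a
```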
Sir, wonderful explanation. Kindly do more real-time videos on Linux Bash scripting; it will be very helpful for me and for people like me who are trying to get into Bash scripting purely from pressure from higher management to finish an automation task :):)
From the description: > _"I show you how to filter information from a .log file, and you find out just how important strong passwords really are."_ I always wondered whether pattern matching has something to do with password security, but then I thought: you have to already have the passwords to apply pattern matching to them, right? Because the password input field of a site doesn't accept regex, and generating exhaustive strings from a regex doesn't help either... So what scenario are we imagining when we talk about regex in the context of secure passwords?
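If it helps, the scenario in videos like this is usually not matching patterns against the passwords themselves, but parsing an auth log full of brute-force login attempts; a sketch, assuming a Debian-style /var/log/auth.log and a grep that supports -o and -E:

```bash
# count the source IPs behind failed SSH logins; strong passwords are the
# reason this volume of guessing doesn't succeed
grep 'Failed password' /var/log/auth.log \
  | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' \
  | sort | uniq -c | sort -rn
```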