##### Extract all lowercase strings from each line and output to wordlist.
```bash
sed 's/[^a-z]*//g' wordlist.txt > outfile.txt
```
##### Extract all uppercase strings from each line and output to wordlist.
```bash
sed 's/[^A-Z]*//g' wordlist.txt > outfile.txt
```
##### Extract all lowercase/uppercase strings from each line and output to wordlist.
```bash
sed 's/[^a-zA-Z]*//g' wordlist.txt > outfile.txt
```
##### Extract all digits from each line in file and output to wordlist.
```bash
sed 's/[^0-9]*//g' wordlist.txt > outfile.txt
```
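A quick illustration of the four character-class filters above, using an invented sample line (the substitutions delete everything outside the class, so the matching runs on each line end up concatenated):
```bash
# Sample input is an assumption for illustration only.
echo 'P@ssw0rd-2024!' | sed 's/[^a-z]*//g'      # -> sswrd
echo 'P@ssw0rd-2024!' | sed 's/[^A-Z]*//g'      # -> P
echo 'P@ssw0rd-2024!' | sed 's/[^a-zA-Z]*//g'   # -> Psswrd
echo 'P@ssw0rd-2024!' | sed 's/[^0-9]*//g'      # -> 02024
```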
##### Watch [[hashcat]] potfile or designated output file live.
```bash
watch -n .5 tail -50 <hashcat.potfile or outfile.txt>
```
##### Pull 100 random samples from wordlist/passwords for visual analysis.
```bash
shuf -n 100 file.txt
```
##### Print statistics on length of each string and total counts per length.
```bash
awk '{print length}' file.txt | sort -n | uniq -c
```
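Output is a count followed by a length; a tiny invented example:
```bash
# Hypothetical three-word list: two strings of length 3, one of length 5.
printf 'cat\ndog\nhorse\n' | awk '{print length}' | sort -n | uniq -c
#   2 3
#   1 5
```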
##### Remove all duplicate strings and count how many times they are present; then sort by their count in descending order.
```bash
sort file.txt | uniq -c | sort -nr
```
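Invented example showing the count-prefixed output:
```bash
# "alpha" appears three times, "beta" once.
printf 'alpha\nbeta\nalpha\nalpha\n' | sort | uniq -c | sort -nr
#   3 alpha
#   1 beta
```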
##### Create a quick & dirty custom wordlist of 1-15 character words from a designated website, sorted and counted.
```bash
curl -s http://www.netmux.com | sed -e 's/<[^>]*>//g' | tr " " "\n" | tr -dc '[:alnum:]\n\r' | tr '[:upper:]' '[:lower:]' | cut -c 1-15 | sort | uniq -c | sort -nr
```
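The same pipeline spread over several lines with a note on each stage (annotation only, using the example URL above):
```bash
curl -s http://www.netmux.com |      # fetch the page
  sed -e 's/<[^>]*>//g' |            # strip HTML tags
  tr " " "\n" |                      # one candidate word per line
  tr -dc '[:alnum:]\n\r' |           # keep only letters, digits and line breaks
  tr '[:upper:]' '[:lower:]' |       # normalize to lowercase
  cut -c 1-15 |                      # truncate candidates to 15 characters
  sort | uniq -c | sort -nr          # count duplicates and rank by frequency
```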
##### MD5 each line in a file (macOS)
```bash
while read line; do echo -n "$line" | md5; done < infile.txt > outfile.txt
```
##### MD5 each line in a file (Unix)
```bash
while read line; do echo -n "$line" | md5sum; done < infile.txt | awk -F " " '{print $1}' > outfile.txt
```
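Quick sanity check against a known digest (MD5 of the string "test"):
```bash
echo -n test | md5sum | awk '{print $1}'
# 098f6bcd4621d373cade4e832627b4f6
```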
##### Remove lines in file2.txt that match lines in file1.txt and print only the remaining lines.
```bash
grep -vwF -f file1.txt file2.txt
```
OR
```bash
awk 'FNR==NR {a[$0]++; next} !a[$0]' file1.txt file2.txt
```
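Invented two-file example; both commands print only the line unique to file2.txt:
```bash
printf 'apple\n' > file1.txt
printf 'apple\nbanana\n' > file2.txt
grep -vwF -f file1.txt file2.txt   # -> banana
```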
##### Take two ordered files, merge and remove duplicate lines and maintain ordering.
```bash
nl -ba -s ': ' file1.txt >> outfile.txt
nl -ba -s ': ' file2.txt >> outfile.txt
sort -n outfile.txt | awk -F ': ' '{print $2}' | awk '!seen[$0]++' > final.txt
```
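Invented example of the line-numbering trick: lines are tagged with their position, interleaved by position, then de-duplicated keeping the first occurrence:
```bash
# file1.txt = a,b,c and file2.txt = a,x,c  ->  a, b, x, c
printf 'a\nb\nc\n' > file1.txt
printf 'a\nx\nc\n' > file2.txt
nl -ba -s ': ' file1.txt  > outfile.txt
nl -ba -s ': ' file2.txt >> outfile.txt
sort -n outfile.txt | awk -F ': ' '{print $2}' | awk '!seen[$0]++' > final.txt
cat final.txt
```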
##### Extract strings of a specific length into a new file/wordlist.
```bash
awk 'length == 8' file.txt > 8len-out.txt
```
##### Convert alpha characters on each line in file to lowercase characters.
```bash
tr [A-Z] [a-z] < infile.txt > outfile.txt
```
##### Convert alpha characters on each line in file to uppercase characters.
```bash
tr [a-z] [A-Z] < infile.txt > outfile.txt
```
##### Split a file into separate files by X number of lines per outfile.
```bash
split -d -l 3000 infile.txt outfile.txt
```
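split -d appends numeric suffixes to the output prefix; an invented example:
```bash
# Seven lines split three per file -> outfile.txt00, outfile.txt01, outfile.txt02
seq 7 > infile.txt
split -d -l 3 infile.txt outfile.txt
wc -l outfile.txt*
```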
##### Reverse the order of each character of each line in the file.
```bash
rev infile.txt > outfile.txt
```
##### Sort each line in the file from shortest to longest.
```bash
awk '{print length, $0}' infile.txt | sort -n | cut -d ' ' -f2-
```
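The length-prefix/sort/strip pattern on an invented list:
```bash
printf 'horse\ncat\nzebra\n' | awk '{print length, $0}' | sort -n | cut -d ' ' -f2-
# cat
# horse
# zebra
```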
##### Sort each line in the file from longest to shortest.
```bash
awk '{print length, $0}' infile.txt | sort -rn | cut -d ' ' -f2-
```
##### Substring matching by converting to HEX and then back to ASCII.
(Example: search for 5-character strings from file1.txt appearing as substrings of 20-character strings in file2.txt.)
```bash
strings file1.txt | xxd -u -ps -c 5 | sort -u > out1.txt
strings file2.txt | xxd -u -ps -c 20 | sort -u > out2.txt
grep -Ff out1.txt out2.txt | xxd -r -p > results.txt
```
##### Clean dictionary/wordlist of newlines and tabs.
```bash
cat dict.txt | tr -cd "[:print:]\n" > outfile.txt
```
##### Clean dictionary/wordlist of binary data junk/characters left in file.
```bash
tr -cd '\11\12\15\40-\176' < dict.txt > outfile.txt
```
[[Home]]
#reference