_Hashcat-utils_ is a set of small utilities that are useful in advanced password cracking. Each utility is packed as a stand-alone binary.
Each of these utils is designed to perform only one specific function. Since they all read from STDIN and write to STDOUT, you can chain them together.
The programs are available for Linux and Windows on both 32-bit and 64-bit architectures, and they are open source.
List of Utilities
- combinator: Each word from _file2_ is appended to each word from _file1_ and then printed to STDOUT.
Since the program has to rewind the files multiple times, it cannot work with STDIN and requires real files.
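As a minimal sketch of what combinator produces (the real tool is a C binary reading two files; the word lists here just stand in for _file1_ and _file2_):

```python
# Illustrative Python sketch of combinator's output: every word of
# file2 appended to every word of file1.
def combinator(file1_words, file2_words):
    for left in file1_words:
        for right in file2_words:
            yield left + right

# Tiny stand-ins for the two input files
candidates = list(combinator(["pass", "admin"], ["123", "!"]))
# candidates: pass123, pass!, admin123, admin!
```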
- cutb: This program (new in hashcat-utils-0.6) is designed to cut up a wordlist (read from STDIN) for use in a combinator attack. If you notice that passwords in a particular dump tend to carry a common padding length at the beginning or end of the plaintext, this program cuts that specific prefix or suffix length off the existing words in a list and passes the result to STDOUT.
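A rough sketch of the idea, assuming a single signed offset parameter (the real tool's exact parameters may differ): a positive offset cuts that many characters off the front, while a negative offset keeps only the last |offset| characters.

```python
# Sketch of cutb's prefix/suffix cutting; Python slicing covers both
# directions with one signed offset.
def cutb(words, offset):
    for w in words:
        yield w[offset:]

# Keep the trailing 4 chars (e.g. a year suffix)
suffixes = list(cutb(["password2019"], -4))
# Cut a 3-char prefix off each word
stems = list(cutb(["xx1secret"], 3))
```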
- expander: This program has no parameters to configure. Each word going into STDIN is parsed, split into all of its single chars, mutated, reconstructed, and then sent to STDOUT.
There are a couple of reconstructions that generate all possible patterns of the input word by applying the following iterations:
- All possible lengths of the patterns, up to a maximum of 7 (defined in the LEN_MAX macro, which you can increase in the source).
- All possible offsets of the word.
- Shifting the word to the right until a full cycle.
- Shifting the word to the left until a full cycle.
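The iterations above can be approximated as "all substrings of all rotations, capped at LEN_MAX". This is a sketch of the resulting pattern set, not of the real tool's output order (expander emits duplicates in a fixed order; the set is deduplicated here):

```python
LEN_MAX = 7  # same cap as the LEN_MAX macro in the expander source

def rotations(word):
    # shifting left until a full cycle; right-shifts yield the same set
    return {word[i:] + word[:i] for i in range(len(word))}

def expander_patterns(word):
    out = set()
    for rot in rotations(word):
        for length in range(1, min(len(rot), LEN_MAX) + 1):
            for off in range(len(rot) - length + 1):
                out.add(rot[off:off + length])
    return out
```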
- gate: Each wordlist going into STDIN is parsed, split into equal sections, and passed to STDOUT based on the values you specify. The reason for splitting is to distribute the workload that gets generated. The two important parameters are “mod” and “offset”.
The mod value is the number of times you want to split your dictionary.
The offset value is which section of the split is getting that feed.
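The parameter names suggest a modulo split over the word number, which can be sketched like this (an assumption about the exact split scheme, but it matches the mod/offset naming):

```python
# Sketch of gate: emit every mod-th word, starting at word number `offset`.
def gate(words, mod, offset):
    for i, w in enumerate(words):
        if i % mod == offset:
            yield w

part = list(gate(["a", "b", "c", "d", "e", "f"], mod=3, offset=1))
```

Running the same dictionary through offsets 0, 1, and 2 with mod=3 on three machines covers every word exactly once.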
- hcstatgen: Tool used to generate .hcstat files for use with the statsprocessor.
- len: Each word going into STDIN is parsed for its length and passed to STDOUT if it matches a specified word-length range.
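The filter itself is trivial; a sketch with hypothetical min/max parameters:

```python
# Sketch of len: pass a word through only if its length falls in range.
def len_filter(words, min_len, max_len):
    for w in words:
        if min_len <= len(w) <= max_len:
            yield w

kept = list(len_filter(["a", "abc", "abcd", "abcdef"], 3, 4))
```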
- morph: morph generates insertion rules for the most frequent chains of characters from the dictionary that you provide, and it does so per position.
_Dictionary_ = Wordlist used for frequency analysis.
_Depth_ = Determines how many “top” chains you want. For example, 10 would give you the top 10 (in fact, it seems to start at value 0, so 10 would give the top 11).
_Width_ = Maximum length of the chain. With 3, for example, you will get up to 3 rules per line for the most frequent 3-letter chains.
_pos_min_ = Minimum position where the insertion rule will be generated. For example, 5 means rules will insert the string only from position 5 and up.
_pos_max_ = Maximum position where the insertion rule will be generated. For example, 10 means rules will insert the string so that its end finishes at position 10 at most.
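A loose sketch of the idea, under the assumption that morph counts per-position character chains and emits one hashcat insertion rule (iNX = insert char X at position N) per chain character; the real tool's counting and rule ordering differ in detail, and positions here are kept below 10 to sidestep hashcat's base-36 position encoding:

```python
from collections import Counter

def morph_sketch(dictionary, depth, width, pos_min, pos_max):
    # Count every chain of `width` chars at every position.
    counts = Counter()
    for word in dictionary:
        for pos in range(len(word) - width + 1):
            counts[(pos, word[pos:pos + width])] += 1
    rules = []
    # depth appears to be 0-based, hence depth + 1 chains.
    for (pos, chain), _ in counts.most_common(depth + 1):
        if pos_min <= pos and pos + width <= pos_max:
            rules.append(" ".join(f"i{pos + i}{c}" for i, c in enumerate(chain)))
    return rules

rules = morph_sketch(["abc", "abd", "abe"], depth=0, width=2,
                     pos_min=0, pos_max=5)
# the most frequent 2-char chain is "ab" at position 0
```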
- permute: This program is a stand-alone implementation of the Permutation Attack. It has no parameters to configure. Each word going into STDIN is parsed and run through “_The Countdown QuickPerm Algorithm_” by Phillip Paul Fuchs.
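The Countdown QuickPerm algorithm generates every permutation of the input through single swaps, driven by a countdown control array; a Python sketch:

```python
def quickperm(word):
    """Countdown QuickPerm (Phillip Paul Fuchs): yield all permutations
    of the input, one swap per step."""
    a = list(word)
    n = len(a)
    p = list(range(n + 1))        # countdown control array
    out = ["".join(a)]
    i = 1
    while i < n:
        p[i] -= 1
        j = p[i] if i % 2 else 0  # odd index: swap with p[i]; even: with 0
        a[i], a[j] = a[j], a[i]
        out.append("".join(a))
        i = 1
        while p[i] == 0:          # roll the countdown array forward
            p[i] = i
            i += 1
    return out

perms = quickperm("abc")
# 3! = 6 distinct candidates
```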
- prepare: This program is made as a dictionary optimizer for the Permutation Attack. Due to the nature of the permutation algorithm itself, the input words “BCA” and “CAB” would produce exactly the same password candidates, so only one of them needs to be kept.
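One way to achieve that optimization (a sketch; the real tool's canonicalization may differ) is to keep a single representative per multiset of characters, since anagrams permute to identical candidate sets:

```python
# Sketch of prepare: drop any word that is an anagram of an earlier word.
def prepare(words):
    seen = set()
    for w in words:
        key = "".join(sorted(w))  # anagrams share this key
        if key not in seen:
            seen.add(key)
            yield w

optimized = list(prepare(["BCA", "CAB", "AB"]))
# "CAB" is dropped: it is an anagram of "BCA"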
- req: Each word going into STDIN is parsed and passed to STDOUT if it matches a specified password-group criterion. Sometimes you know that a password must include a lower-case char, an upper-case char and a digit to satisfy a specific password policy. Candidates that do not match this policy can never result in a cracked password, so they should be skipped. This program is not very complex and cannot fully match all common password-policy criteria, but it does provide some help.
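The real tool encodes its criteria as a numeric group mask; boolean flags are an illustrative simplification of the same filtering idea:

```python
# Sketch of req: drop candidates that cannot satisfy the password policy.
def req(words, need_lower=False, need_upper=False, need_digit=False):
    for w in words:
        if need_lower and not any(c.islower() for c in w):
            continue
        if need_upper and not any(c.isupper() for c in w):
            continue
        if need_digit and not any(c.isdigit() for c in w):
            continue
        yield w

kept = list(req(["password", "Pass1", "PASS1"],
                need_lower=True, need_upper=True, need_digit=True))
# only "Pass1" satisfies lower + upper + digit
```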
- rli: Compares a single file against one or more other files and removes all duplicates. rli can be very useful to clean your dictionaries and maintain one unique set of them.
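Conceptually it is a set difference, which also shows why rli is memory-bound, since the remove set is held in memory (a sketch, with word lists standing in for files):

```python
# Sketch of rli: remove from infile every word found in any removefile.
def rli(infile_words, *removefile_word_lists):
    remove = set()
    for words in removefile_word_lists:
        remove.update(words)   # whole remove set lives in memory
    return [w for w in infile_words if w not in remove]

cleaned = rli(["a", "b", "c"], ["b"], ["c", "x"])
```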
- rli2: Unlike rli, rli2 is not limited by available memory. But it requires infile and removefile to be sorted and uniqued beforehand; otherwise it will not work as it should.
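The sorted-input requirement allows a merge-style pass over both streams in constant memory, which a sketch makes clear (and shows why unsorted input breaks it):

```python
# Sketch of rli2: stream two sorted, uniqued lists and keep every word
# of infile that does not appear in removefile.
def rli2(infile, removefile):
    out = []
    rem_iter = iter(removefile)
    rem = next(rem_iter, None)
    for w in infile:
        while rem is not None and rem < w:   # advance removefile cursor
            rem = next(rem_iter, None)
        if rem != w:
            out.append(w)
    return out

kept = rli2(["a", "b", "c", "d"], ["b", "d"])
```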
- splitlen: This program is designed to be a dictionary optimizer for oclHashcat. oclHashcat has a very specific way of loading dictionaries, unlike hashcat. The best way to organize your dictionaries for use with oclHashcat is to sort the words in your dictionary by length into specific files in a specific directory, and then to run oclHashcat in directory mode.
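The directory layout can be sketched like this, assuming one output file per word length named after that length (the file-naming convention is an assumption):

```python
import os

# Sketch of splitlen: append each word to a file named after its length,
# e.g. outdir/8 for 8-character words, as directory mode expects.
def splitlen(words, outdir):
    handles = {}
    try:
        for w in words:
            n = len(w)
            if n not in handles:
                handles[n] = open(os.path.join(outdir, str(n)), "a")
            handles[n].write(w + "\n")
    finally:
        for h in handles.values():
            h.close()
```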