skipfish [options] -o output-directory [ start-url | @url-file [ start-url2 ... ]]
skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.
Authentication and access options:

  -A user:pass   - use specified HTTP authentication credentials
  -F host=IP     - pretend that 'host' resolves to 'IP'
  -C name=val    - append a custom cookie to all requests
  -H name=val    - append a custom HTTP header to all requests
  -b (i|f|p)     - use headers consistent with MSIE / Firefox / iPhone
  -N             - do not accept any new cookies

Crawl scope options:

  -d max_depth   - maximum crawl tree depth (16)
  -c max_child   - maximum children to index per node (512)
  -x max_desc    - maximum descendants to index per branch (8192)
  -r r_limit     - max total number of requests to send (100000000)
  -p crawl%      - node and link crawl probability (100%)
  -q hex         - repeat probabilistic scan with given seed
  -I string      - only follow URLs matching 'string'
  -X string      - exclude URLs matching 'string'
  -K string      - do not fuzz parameters named 'string'
  -D domain      - crawl cross-site links to another domain
  -B domain      - trust, but do not crawl, another domain
  -Z             - do not descend into 5xx locations
  -O             - do not submit any forms
  -P             - do not parse HTML, etc, to find new links

Reporting options:

  -o dir         - write output to specified directory (required)
  -M             - log warnings about mixed content / non-SSL passwords
  -E             - log all caching intent mismatches
  -U             - log all external URLs and e-mails seen
  -Q             - completely suppress duplicate nodes in reports
  -u             - be quiet, disable realtime progress stats

Dictionary management options:

  -W wordlist    - use a specified read-write wordlist (required)
  -S wordlist    - load a supplemental read-only wordlist
  -L             - do not auto-learn new keywords for the site
  -Y             - do not fuzz extensions in directory brute-force
  -R age         - purge words hit more than 'age' scans ago
  -T name=val    - add new form auto-fill rule
  -G max_guess   - maximum number of keyword guesses to keep (256)

Performance settings:

  -l max_req     - max requests per second (0.000000)
  -g max_conn    - max simultaneous TCP connections, global (40)
  -m host_conn   - max simultaneous connections, per target IP (10)
  -f max_fail    - max number of consecutive HTTP errors (100)
  -t req_tmout   - total request response timeout (20 s)
  -w rw_tmout    - individual network I/O timeout (10 s)
  -i idle_tmout  - timeout on idle HTTP connections (10 s)
  -s s_limit     - response size limit (200000 B)
  -e             - do not keep binary responses for reporting

Other settings:

  -k duration    - stop scanning after the given duration h:m:s
  --config file  - load specified configuration file
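For instance, a time-boxed scan that also enables the extra reporting checks can be assembled from the flags above (the two-hour limit, target URL, and output directory below are placeholder values):

  skipfish -o output/dir/ -k 2:00:00 -M -E -U http://example.com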
Some sites require authentication, and skipfish supports this in several ways. First, there is basic HTTP authentication, for which you can use the -A flag. Second, and more common, are sites that require authentication at the web application level. For these sites, the best approach is to capture authenticated session cookies and provide them to skipfish with the -C flag (repeated as needed). Finally, you will need to put some effort into protecting the session from being destroyed, by excluding logout links with -X and/or by rejecting new cookies with -N.
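As a minimal sketch, assuming the application issues a session cookie named 'session' and signs users out via a /logout link (both site-specific assumptions), an authenticated scan could look like this:

  skipfish -o output/dir/ -C session=myauthcookiehere -N -X /logout http://example.com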
Some sites may be too big to scan in a reasonable timeframe. If the site features well-defined tarpits - for example, 100,000 nearly identical user profiles as a part of a social network - these specific locations can be excluded with -X. In other cases, you may need to resort to other settings: -d limits crawl depth to a specified number of subdirectories; -c limits the number of children per directory; -x limits the total number of descendants per crawl tree branch; and -r limits the total number of requests to send in a scan.
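For example, a scan constrained to a shallower and smaller crawl tree, with a hypothetical /users/ tarpit excluded, might combine these flags as follows (all values are illustrative):

  skipfish -o output/dir/ -d 5 -c 128 -x 2048 -r 200000 -X /users/ http://example.com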
Make sure you've read the instructions provided in doc/dictionaries.txt to select the right dictionary file and configure it correctly. This step has a profound impact on the quality of scan results later on.
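As a sketch, assuming the stock wordlists shipped in the dictionaries/ directory, one common setup is to copy a small dictionary to a site-specific read-write list and load a larger read-only list alongside it:

  cp dictionaries/minimal.wl site.wl
  skipfish -o output/dir/ -W site.wl -S dictionaries/complete.wl http://example.com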
The default performance settings should be fine for most servers, but when the report indicates there were connection problems, you may want to tweak some of the values here. For unstable servers, scan coverage is likely to improve if you use low values for the rate and connection flags.
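As an illustration, a conservative profile for an unstable server might cap the request rate and connection counts while extending the timeouts; the values below are illustrative, not tuned recommendations (see also the 'flaky server' example further down):

  skipfish -o output/dir/ -l 10 -g 5 -m 2 -t 30 -w 20 -i 15 http://example.com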
Scan type: config
  skipfish --config config/example.conf http://example.com

Scan type: quick
  skipfish -o output/dir/ http://example.com

Scan type: extensive bruteforce
  skipfish [...other options...] -S dictionaries/complete.wl http://example.com

Scan type: without bruteforcing
  skipfish [...other options...] -LY http://example.com

Scan type: authenticated (basic)
  skipfish [...other options...] -A username:password http://example.com

Scan type: authenticated (cookie)
  skipfish [...other options...] -C jsession=myauthcookiehere -X /logout http://example.com

Scan type: flaky server
  skipfish [...other options...] -l 5 -g 2 -t 30 -i 15 http://example.com
The default values for all flags can be viewed by running './skipfish -h'.
skipfish was written by Michal Zalewski <email@example.com>, with contributions from Niels Heinen <firstname.lastname@example.org>, Sebastian Roschke <email@example.com>, and other parties.
This manual page was written with the help of Thorsten Schifferdecker <firstname.lastname@example.org>.