skipfish is an active web application security reconnaissance tool. It
prepares an interactive sitemap for the targeted site by carrying out a
recursive crawl and dictionary-based probes. The resulting map is then
annotated with the output from a number of active (but hopefully non-
disruptive) security checks. The final report generated by the tool is
meant to serve as a foundation for professional web application security assessments.
Authentication and access options:
-A user:pass - use specified HTTP authentication credentials
-F host=IP - pretend that 'host' resolves to 'IP'
-C name=val - append a custom cookie to all requests
-H name=val - append a custom HTTP header to all requests
-b (i|f|p) - use headers consistent with MSIE / Firefox / iPhone
-N - do not accept any new cookies
Crawl scope options:
-d max_depth - maximum crawl tree depth (16)
-c max_child - maximum children to index per node (512)
-x max_desc - maximum descendants to index per branch (8192)
-r r_limit - max total number of requests to send (100000000)
-p crawl% - node and link crawl probability (100%)
-q hex - repeat probabilistic scan with given seed
-I string - only follow URLs matching 'string'
-X string - exclude URLs matching 'string'
-K string - do not fuzz parameters named 'string'
-D domain - crawl cross-site links to another domain
-B domain - trust, but do not crawl, another domain
-Z - do not descend into 5xx locations
-O - do not submit any forms
-P - do not parse HTML, etc, to find new links
Reporting options:
-o dir - write output to specified directory (required)
-M - log warnings about mixed content / non-SSL passwords
-E - log all caching intent mismatches
-U - log all external URLs and e-mails seen
-Q - completely suppress duplicate nodes in reports
-u - be quiet, disable realtime progress stats
Dictionary management options:
-W wordlist - use a specified read-write wordlist (required)
-S wordlist - load a supplemental read-only wordlist
-L - do not auto-learn new keywords for the site
-Y - do not fuzz extensions in directory brute-force
-R age - purge words hit more than 'age' scans ago
-T name=val - add new form auto-fill rule
-G max_guess - maximum number of keyword guesses to keep (256)
Performance settings:
-l max_req - max requests per second (0.000000)
-g max_conn - max simultaneous TCP connections, global (40)
-m host_conn - max simultaneous connections, per target IP (10)
-f max_fail - max number of consecutive HTTP errors (100)
-t req_tmout - total request response timeout (20 s)
-w rw_tmout - individual network I/O timeout (10 s)
-i idle_tmout - timeout on idle HTTP connections (10 s)
-s s_limit - response size limit (200000 B)
-e - do not keep binary responses for reporting
-k duration - stop scanning after the given duration h:m:s
--config file - load specified configuration file
Some sites require authentication, and skipfish supports this in dif‐
ferent ways. First there is basic HTTP authentication, for which you
can use the -A flag. Second, and more common, are sites that require
authentication on a web application level. For these sites, the best
approach is to capture authenticated session cookies and provide them
to skipfish using the -C flag (multiple if needed). Last, you'll need
to put some effort in protecting the session from being destroyed by
excluding logout links with -X and/or by rejecting new cookies with -N.
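As a concrete sketch, such a cookie-based authenticated scan could be launched as follows (the cookie value, logout path, output directory, and target URL are hypothetical placeholders; the command is echoed as a dry run):

```shell
# Hypothetical cookie-authenticated scan; all values are placeholders.
# Echoed as a dry run -- remove 'echo' to actually launch skipfish.
cmd="skipfish -o out-auth -C SESSIONID=0123abcd -N -X /logout http://example.test/"
echo "$cmd"
```

Here -N keeps the site from replacing or deleting the captured cookie, while -X shields the logout link from being crawled.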
-F host=IP
Using this flag, you can set the 'Host:' header value to define a custom mapping between a host and an IP (bypassing the resolver). This feature is particularly useful for not-yet-launched or legacy services that don't have the necessary DNS entries set up.
-H name=val
When it comes to customizing your HTTP requests, you can also use the -H option to insert any additional, non-standard headers. This flag also allows the default headers to be overwritten.
-C name=val
This flag can be used to add a cookie to the skipfish HTTP requests; this is particularly useful for performing authenticated scans by providing session cookies. When doing so, keep in mind that certain URLs (e.g. /logout) may destroy your session; you can combat this in two ways: by using the -N option, which causes the scanner to reject attempts to set or delete cookies, or by using the -X option to exclude logout URLs.
-b (i|f|p)
This flag allows the user-agent to be specified, where 'i' stands for Internet Explorer, 'f' for Firefox, and 'p' for iPhone. Using this flag is recommended in case the target site shows different behavior based on the user-agent (e.g. some sites use different templates for mobile and desktop clients).
-N
This flag causes skipfish to ignore cookies that are being set by the site. This helps to enforce stateless tests and also prevents cookies set with -C from being overwritten.
-A user:pass
For sites requiring basic HTTP authentication, you can use this flag to specify your credentials.
--auth-form URL
The login form to use with form authentication. By default skipfish will use the form's action URL to submit the credentials. If this is missing, the login data is sent to the form URL. In case that is wrong, you can set the form handler URL with --auth-form-target.
--auth-user username
The username to be used during form authentication. Skipfish will try to detect the correct form field to use, but if it fails to do so (and gives an error), then you can specify the form field name with --auth-user-field.
--auth-pass password
The password to be used during form authentication. Similar to auth-user, the form field name can (optionally) be set with --auth-pass-field.
--auth-verify-url URL
This URL allows skipfish to verify whether authentication was successful. This requires a URL where anonymous and authenticated requests are answered with a different response.
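Putting the form-authentication flags together, a sketch of such an invocation might look like this (URLs, credentials, and the output directory are hypothetical; the command is echoed as a dry run):

```shell
# Hypothetical form-authenticated scan; every value below is a placeholder.
# Echoed as a dry run -- remove 'echo' to actually launch skipfish.
cmd="skipfish -o out-form --auth-form http://example.test/login \
--auth-user bob --auth-pass secret \
--auth-verify-url http://example.test/profile http://example.test/"
echo "$cmd"
```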
Some sites may be too big to scan in a reasonable timeframe. If the
site features well-defined tarpits - for example, 100,000 nearly iden‐
tical user profiles as a part of a social network - these specific lo‐
cations can be excluded with -X or -S. In other cases, you may need to
resort to other settings: -d limits crawl depth to a specified number
of subdirectories; -c limits the number of children per directory; -x
limits the total number of descendants per crawl tree branch; and -r
limits the total number of requests to send in a scan.
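For example, a bounded scan of a very large site might cap all four limits at once (the numbers, output directory, and URL are illustrative only; the command is echoed as a dry run):

```shell
# Hypothetical bounded scan: depth 5, 100 children per node, 2000
# descendants per branch, 200000 requests total. Echoed as a dry run.
cmd="skipfish -o out-bounded -d 5 -c 100 -x 2000 -r 200000 http://example.test/"
echo "$cmd"
```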
-d max_depth
Limit the depth of subdirectories being crawled (see above).
-c max_child
Limit the number of subdirectories per directory we crawl into (see above).
-x max_desc
Limit the total number of descendants per crawl tree branch (see above).
-r r_limit
The maximum number of requests to send can be limited with this flag.
-p crawl%
By specifying a percentage between 1 and 100%, it is possible to
tell the crawler to follow fewer than 100% of all links, and try
fewer than 100% of all dictionary entries. This - naturally -
limits the completeness of a scan, but unlike most other set‐
tings, it does so in a balanced, non-deterministic manner. It is
extremely useful when you are setting up time-bound, but peri‐
odic assessments of your infrastructure.
-q hex
This flag sets the initial random seed for the crawler to a
specified value. This can be used to exactly reproduce a previ‐
ous scan to compare results. Randomness is relied upon most
heavily in the -p mode, but also influences a couple of other
scan management decisions.
-I string
With this flag, you can tell skipfish to only crawl and test
URLs that match a certain string. This can help to narrow down
the scope of a scan by only whitelisting certain sections of a
web site (e.g. -I /shop).
-X string
The -X option can be used to exclude files / directories from
the scan. This is useful to avoid session termination (i.e. by
excluding /logout) or just for speeding up your scans by exclud‐
ing static content directories like /icons/, /doc/, /manuals/,
and other standard, mundane locations along these lines.
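Combining -I and -X, a scan can be narrowed to one section of a site while skipping its static content (paths, output directory, and URL are hypothetical; the command is echoed as a dry run):

```shell
# Hypothetical scoped scan: crawl only /shop, but skip its static assets.
# Echoed as a dry run -- remove 'echo' to actually launch skipfish.
cmd="skipfish -o out-shop -I /shop -X /shop/icons/ http://example.test/shop/"
echo "$cmd"
```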
-K string
This flag allows you to specify parameter names not to fuzz (useful for applications that put session IDs in the URL, where fuzzing the value would invalidate the session).
-D domain
Allows you to specify additional hosts or domains to be in-scope for the test. By default, all hosts appearing in the command-line URLs are added to the list, but you can use -D to broaden these rules. As a result, the crawler will follow and test links that point to these additional hosts or domains.
-B domain
In some cases, you do not want to actually crawl a third-party domain, but you trust the owner of that domain enough not to worry about cross-domain content inclusion from that location. To suppress warnings, you can use the -B option.
-Z
Do not crawl into pages / directories that return a 5XX error.
-O
Using this flag will cause forms to be ignored during the scan.
-P
This flag disables link extraction and effectively disables
crawling. Using -P is useful when you want to test one specific
URL or when you want to feed skipfish a list of URLs that were
collected with an external crawler.
--checks
EXPERIMENTAL: Displays the list of crawler injection tests. The output shows the index number (useful for --checks-toggle), the check name, and whether the check is enabled.
--checks-toggle X,Y,...
EXPERIMENTAL: Every injection test can be enabled or disabled using this flag. As the value, you need to provide the check numbers, which can be obtained with the --checks flag. Multiple checks can be toggled via a comma-separated value (e.g. --checks-toggle 1,2).
--no-checks
EXPERIMENTAL: Disables all injection tests for this scan and
limits the scan to crawling and, optionally, bruteforcing. As
with all scans, the output directory will contain a pivots.txt
file. This file can be used to feed future scans.
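A crawl-only mapping run along these lines, sketched with a hypothetical target and output directory, would look like:

```shell
# Hypothetical crawl-only run: no injection tests, just site mapping.
# out-crawl/pivots.txt can then seed later, full scans. Dry run via echo.
cmd="skipfish -o out-crawl --no-checks http://example.test/"
echo "$cmd"
```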
-o dir
The report will be written to this location. The directory is one of the two mandatory options and must not exist before the scan is started.
-M
Enable the logging of mixed content. This is highly recommended
when scanning SSL-only sites to detect insecure content inclu‐
sion via non-SSL protected links.
-E
This will cause additional content caching errors to be reported.
-U
Log all external URLs and email addresses that were seen during the scan.
-Q
Enable this to completely suppress duplicate nodes in reports.
-u
This will cause skipfish to suppress all console output during the scan.
-v
EXPERIMENTAL: Use this flag to enable runtime reporting of, for
example, problems that are detected. Can be used multiple times
to increase verbosity and should be used in combination with -u
unless you run skipfish with stderr redirected to a file.
Make sure you've read the instructions provided in doc/dictionaries.txt
to select the right dictionary file and configure it correctly. This
step has a profound impact on the quality of scan results later on.
-S wordlist
Load the specified (read-only) wordlist for use during the scan.
This flag is optional but use of a dictionary is highly recom‐
mended when performing a blackbox scan as it will highlight hid‐
den files and directories.
-W wordlist
Specify an initially empty file for any newly learned site-specific keywords (which will come in handy in future assessments). You can use -W- or -W /dev/null if you don't want to store auto-learned keywords anywhere. Typically you will want to use one of the packaged dictionaries (e.g. complete.wl) and possibly add a custom wordlist as well.
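A typical dictionary setup along these lines, with hypothetical file names and target, might be:

```shell
# Hypothetical dictionary setup: a packaged read-only list via -S plus an
# initially empty read-write list via -W for learned keywords. Dry run.
cmd="skipfish -o out-dict -S complete.wl -W site-specific.wl http://example.test/"
echo "$cmd"
```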
-L
During the scan, skipfish will try to learn and use new keywords. This flag disables that behavior and should be used when
any form of brute-forcing is not desired.
-Y
This flag will disable extension guessing during directory brute-force.
-R age
Use of this flag allows old words to be purged from wordlists. It is intended to help keep dictionaries clean when used in recurring scans.
-T name=val
Skipfish also features a form auto-completion mechanism in order to maximize scan coverage. The values should be non-malicious, as they are not meant to implement security checks, but rather to get past input validation logic. You can define additional rules, or override existing ones, with the -T option (-T form_field_name=field_value, e.g. -T login=test123 -T password=test321), although note that -C and -A are a much better method of logging in.
-G max_guess
During the scan, a temporary buffer of newly detected keywords
is maintained. The size of this buffer can be changed with this
flag and doing so influences bruteforcing.
The default performance setting should be fine for most servers but
when the report indicates there were connection problems, you might
want to tweak some of the values here. For unstable servers, the scan coverage is likely to improve when using low values for the rate and connection limits.
-l max_req
This flag can be used to limit the number of requests per second. This is very useful when the target server can't keep up with the high volume of requests generated by skipfish. Keeping the number of requests per second low can also help prevent some rate-based DoS protection mechanisms from kicking in and ruining the scan.
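For a fragile or rate-limited server, the throttling flags can be combined, for example (all numbers, the output directory, and the URL are illustrative; the command is echoed as a dry run):

```shell
# Hypothetical gentle scan: 10 requests/second, 5 global connections,
# 2 per target IP, abort after 20 consecutive errors. Dry run via echo.
cmd="skipfish -o out-gentle -l 10 -g 5 -m 2 -f 20 http://example.test/"
echo "$cmd"
```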
-g max_conn
The maximum number of simultaneous TCP connections (global) can be set with this flag.
-m host_conn
The maximum number of simultaneous TCP connections, per target IP, can be set with this flag.
-f max_fail
Controls the maximum number of consecutive HTTP errors you are
willing to see before aborting the scan. For large scans, you
probably want to set a higher value here.
-t req_tmout
Set the total request timeout, to account for really slow or really fast sites.
-w rw_tmout
Set the network I/O timeout.
-i idle_tmout
Specify the timeout for idle HTTP connections.
-s s_limit
Sets the maximum length of a response to fetch and parse (longer
responses will be truncated).
-e
This prevents binary documents from being kept in memory for reporting purposes, and frees up a lot of RAM.
--flush-to-disk
This causes request / response data to be flushed to disk instead of being kept in memory. As a result, the memory usage for large scans will be significantly lower.
skipfish was written by Michal Zalewski, with contributions from Niels Heinen, Sebastian Roschke, and other parties.
This manual page was written with the help of Thorsten Schifferdecker.
May 6, 2012 SKIPFISH(1)