skipfish

TLDR

Scan a website with default settings
$ skipfish -o [output_dir] [https://example.com]

Scan with HTTP authentication
$ skipfish -o [output_dir] -A [user]:[password] [https://example.com]

Scan with a custom wordlist
$ skipfish -o [output_dir] -W [wordlist.txt] [https://example.com]

Limit the crawl depth
$ skipfish -o [output_dir] -d [5] [https://example.com]

Exclude URLs matching a pattern
$ skipfish -o [output_dir] -X [/logout] [https://example.com]

Limit requests per second
$ skipfish -o [output_dir] -l [10] [https://example.com]

Scan with a session cookie
$ skipfish -o [output_dir] -C "[session=abc123]" [https://example.com]

SYNOPSIS

skipfish [options] -W wordlist -o output_dir url [url2 ...]

DESCRIPTION

skipfish is a high-performance web application security scanner that creates an interactive sitemap through recursive crawling and dictionary-based probing. It performs active security checks and generates an HTML report highlighting potential vulnerabilities.
The scanner detects issues including XSS, SQL injection, shell injection, directory traversal, and various server misconfigurations. It uses adaptive techniques to minimize false positives and handles modern web applications with AJAX and complex state management.
Output is an interactive HTML report with a sitemap showing discovered paths, parameters, and identified security issues. Each finding includes severity rating, description, and evidence. The report serves as a foundation for manual security assessment.
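A typical full scan pairs a read-only dictionary (-S) with a writable wordlist (-W) that accumulates keywords learned during the crawl. A minimal sketch, assuming the Debian/Kali dictionary path, which may differ on your system:
$ touch learned.wl
$ skipfish -o [output_dir] -S /usr/share/skipfish/dictionaries/complete.wl -W learned.wl [https://example.com]
The report is written to [output_dir]/index.html and can be opened in any browser.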

PARAMETERS

-o DIR
Output directory for the report (required, must not exist)
-W FILE
Read-write wordlist file for dictionary-based probing
-S FILE
Load a supplemental read-only wordlist
-A USER:PASS
HTTP authentication credentials
-C NAME=VAL
Add custom cookie to all requests
-H NAME=VAL
Add custom HTTP header
-b i|f|p
Browser headers (MSIE, Firefox, iPhone)
-d DEPTH
Maximum crawl depth (default: 16)
-c NUM
Maximum children per node (default: 512)
-r NUM
Maximum total requests
-l NUM
Maximum requests per second
-I STRING
Only crawl URLs containing string
-X STRING
Exclude URLs containing string
-D DOMAIN
Add domain to scan scope
-K PARAM
Skip fuzzing specified parameter
-N
Do not accept new cookies
-M
Log mixed content (HTTP in HTTPS)
-E
Log cache mismatches
-U
Log external URLs found
-Q
Suppress duplicate nodes in report
-u
Quiet mode; suppress console output
-v
Verbose mode
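
Scope and pacing options combine freely. A sketch that restricts a crawl to one application path while throttling the request rate and sending Firefox-style headers (all bracketed values are illustrative):
$ skipfish -o [output_dir] -W [wordlist.txt] -I [/app/] -X [/logout] -d [8] -l [50] -b f [https://example.com/app/]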

CAVEATS

Skipfish is resource-intensive for both the scanning host and the target server. Always obtain explicit authorization before scanning. Its aggressive crawling can trigger DoS protections or generate large volumes of log data, and heavily dynamic applications may not be fully covered. Exclusion patterns (-X) should include logout URLs to prevent the scanner from terminating its own session.
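
One way to keep an authenticated session alive for the duration of a scan (cookie name and value illustrative) is to supply the session cookie with -C, refuse server-issued replacements with -N, and exclude the logout path:
$ skipfish -o [output_dir] -C "[session=abc123]" -N -X [/logout] -l [10] [https://example.com]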

HISTORY

Skipfish was developed by Michal Zalewski (lcamtuf) at Google and released in 2010. Zalewski is renowned for his security research, including the AFL fuzzer. Skipfish was designed for speed and accuracy, using optimized HTTP handling and intelligent crawling heuristics. Though no longer actively maintained, it remains a useful tool for web application reconnaissance and automated security testing.

SEE ALSO

nikto(1), wpscan(1), sqlmap(1), burpsuite(1)
