LinuxCommandLibrary

paramspider

TLDR

Find URL parameters for a domain

$ paramspider -d [example.com]
Write results to a file
$ paramspider -d [example.com] -o [output.txt]
Exclude URLs with specific extensions
$ paramspider -d [example.com] --exclude [woff,css,js]
Use a custom placeholder for parameter values
$ paramspider -d [example.com] -p "[FUZZ]"
Set output directory
$ paramspider -d [example.com] --output [results/]

SYNOPSIS

paramspider -d domain [options]

DESCRIPTION

paramspider discovers URL parameters by mining web archives: it queries archive.org's Wayback Machine for historical URLs of a target domain that contain query parameters.
It is useful for finding hidden parameters, endpoints, and potential injection points during security testing.
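The underlying technique can be sketched with standard Unix tools: pull archived URLs for the domain, keep only those with query strings, and replace each parameter value with a placeholder. This is a minimal sketch of the idea, not paramspider's actual implementation; the sample URLs below are stand-ins for real archive output, and the commented curl line shows the kind of Wayback CDX query such tools issue (run it only against domains you are authorized to test).

```shell
# Step 1 (network, sketch): fetch archived URLs for the domain from the Wayback CDX API.
#   curl -s 'https://web.archive.org/cdx/search/cdx?url=*.example.com/*&fl=original&collapse=urlkey' > urls.txt
# Here we use sample URLs instead, standing in for the fetched archive output:
printf '%s\n' \
  'https://example.com/page?id=42' \
  'https://example.com/static/logo.png' \
  'https://example.com/search?q=test&page=2' > urls.txt

# Step 2 (offline): keep URLs with query strings, replace each value with FUZZ, deduplicate.
grep '?' urls.txt | sed -E 's/=[^&]*/=FUZZ/g' | sort -u
```

The `sed` substitution rewrites every `=value` up to the next `&` as `=FUZZ`, which is how placeholder-style output like that shown under OUTPUT FORMAT below is produced.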

PARAMETERS

-d, --domain domain
Target domain to mine for parameters.
-o, --output file
Write discovered URLs to the given file.
--exclude exts
Comma-separated list of extensions to exclude (e.g. woff,css,js).
-p, --placeholder str
String used to replace parameter values (default: FUZZ).
-q, --quiet
Suppress banner and non-essential output.
-s, --subs
Include subdomains of the target.

OUTPUT FORMAT

https://example.com/page?id=FUZZ
https://example.com/search?q=FUZZ&page=FUZZ
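Output in this format is easy to post-process. For example, the distinct parameter names a target exposes can be extracted with standard tools; this is a sketch, with urls.txt standing in for paramspider's output file:

```shell
# Sample paramspider-style output (stand-in for the real output file):
printf '%s\n' \
  'https://example.com/page?id=FUZZ' \
  'https://example.com/search?q=FUZZ&page=FUZZ' > urls.txt

# Split each URL on '?' and '&', keep key=value fragments,
# then take the key before '=' and deduplicate.
tr '?&' '\n\n' < urls.txt | grep '=' | cut -d= -f1 | sort -u
```

The resulting parameter list can serve as a wordlist for further testing with fuzzing tools.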

CAVEATS

Requires internet access. Results come from archived URLs, so some parameters may be outdated or no longer present on the live site. Use responsibly and only against targets you are authorized to test.

HISTORY

ParamSpider was created by Devansh Batham as a tool for bug bounty hunters and penetration testers to discover URL parameters from web archives.

SEE ALSO

waybackurls(1), gau(1), arjun(1)
