LinuxCommandLibrary

paramspider

discovers URL parameters by mining web archives

TLDR

Find parameters for domain
$ paramspider -d [example.com]
Exclude specific file extensions
$ paramspider -d [example.com] --exclude [woff,css,js,png,jpg]
Use custom placeholder for parameter values
$ paramspider -d [example.com] -p "[FUZZ]"
Set output directory
$ paramspider -d [example.com] -o [results/]
Spider a list of domains from a file
$ paramspider -l [domains.txt]

SYNOPSIS

paramspider -d domain [options]

DESCRIPTION

paramspider discovers URL parameters by mining web archives. It queries archive.org's Wayback Machine to find historical URLs with parameters for a target domain.
It is useful for finding hidden parameters, endpoints, and potential injection points during security testing.
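The query pattern below is an illustrative sketch of the kind of Wayback Machine CDX API request that archive-mining tools of this type perform; it is not paramspider's exact internal call, and the field choices are assumptions.

```shell
# Build a Wayback Machine CDX API query for a target domain
# (illustrative only -- not paramspider's exact request).
domain="example.com"
cdx_url="https://web.archive.org/cdx/search/cdx?url=*.${domain}/*&output=text&fl=original&collapse=urlkey"
echo "$cdx_url"
# Fetch the archived URL list with: curl -s "$cdx_url"
```

Running the fetch requires internet access; the result is a plain list of archived URLs that can then be filtered for query parameters.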

PARAMETERS

-d, --domain domain
Target domain.
-l, --list file
File containing a list of domains.
-o, --output dir
Output directory (default: ./results).
--exclude exts
Exclude URLs with specific extensions (comma-separated).
-p, --placeholder str
Placeholder for parameter values (default: FUZZ).
--level level
Search level for nested parameters (e.g., high).
-q, --quiet
Quiet mode, suppress URL output to screen.
-s, --subs
Include subdomains.
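The effect of --exclude can be demonstrated without running the tool: it drops URLs whose path ends in one of the listed static-asset extensions. The sample URLs below are hypothetical.

```shell
# Simulate --exclude woff,css,js,png,jpg on a few sample archived URLs:
# only URLs that do not end in an excluded extension survive.
printf '%s\n' \
  "https://example.com/page?id=1" \
  "https://example.com/app.js" \
  "https://example.com/logo.png" |
  grep -vE '\.(woff|css|js|png|jpg)(\?|$)'
# -> https://example.com/page?id=1
```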

OUTPUT FORMAT

https://example.com/page?id=FUZZ
https://example.com/search?q=FUZZ&page=FUZZ
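The placeholder substitution shown above can be reproduced with a one-line sed filter; this is a minimal sketch of the transformation, not paramspider's implementation.

```shell
# Replace every query-string value with the default FUZZ placeholder,
# mirroring the output format shown above.
echo 'https://example.com/search?q=linux&page=2' |
  sed -E 's/=[^&]*/=FUZZ/g'
# -> https://example.com/search?q=FUZZ&page=FUZZ
```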

CAVEATS

Requires internet access. Results come from archived URLs, so discovered parameters may be outdated or absent from the live site. Use responsibly.

HISTORY

ParamSpider was created by Devansh Batham as a tool for bug bounty hunters and penetration testers to discover parameters.

SEE ALSO

waybackurls(1), gau(1), arjun(1)
