LinuxCommandLibrary

gau

URL fetcher from multiple archive sources

TLDR

Fetch known URLs for a domain

$ gau [example.com]
Write results to a file
$ gau [example.com] --o [urls.txt]
Fetch URLs from specific providers only
$ gau --providers [wayback,otx] [example.com]
Include subdomains
$ gau --subs [example.com]
Fetch URLs from a specific date range
$ gau --from [202201] --to [202212] [example.com]

SYNOPSIS

gau [options] [domains...]

DESCRIPTION

gau (Get All URLs) fetches known URLs for one or more domains from multiple sources, including the Wayback Machine, Common Crawl, and AlienVault OTX. It is used for reconnaissance and security research. Domains can be passed as arguments or read from standard input.
The tool aggregates historical URLs that may reveal hidden endpoints, parameters, or old vulnerabilities. Results include archived pages, API endpoints, and file paths.
gau helps map a target's attack surface by surfacing URLs that were once publicly accessible.
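A common follow-up is to pipe gau into standard Unix tools to narrow the results, for example keeping only unique URLs that carry query parameters (example.com and params.txt are placeholder values):
$ gau --subs [example.com] | grep "=" | sort -u > [params.txt]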

PARAMETERS

DOMAINS

Target domains to fetch URLs for.
--o FILE
Write results to FILE instead of standard output.
--providers LIST
Comma-separated list of URL sources: wayback, commoncrawl, otx, urlscan.
--subs
Include subdomains.
--from DATE
Start date (YYYYMM).
--to DATE
End date (YYYYMM).
--help
Display help information.
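The options above can be combined. For example, the following command (with placeholder values) restricts results to the Wayback Machine and Common Crawl for 2022, includes subdomains, and writes them to a file:
$ gau --subs --providers [wayback,commoncrawl] --from [202201] --to [202212] --o [urls.txt] [example.com]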

CAVEATS

Results include historical URLs that may no longer resolve. Output can be very large, and queries are subject to rate limits imposed by the upstream sources.
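To gauge how much data a query returns before further processing, the output can be deduplicated and counted with standard tools (example.com is a placeholder):
$ gau [example.com] | sort -u | wc -l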

HISTORY

gau was created for security research and bug bounty hunting, providing easy access to archived URL databases for reconnaissance purposes.
