
cariddi

Crawl URLs and hunt for secrets, endpoints, and other useful information

TLDR

Hunt for secrets using custom regexes and output results in JSON

$ cat [path/to/urls.txt] | cariddi -s -sf [path/to/custom_secrets.txt] -json

Hunt for juicy endpoints with high concurrency and a custom timeout, printing plain output results
$ cat [path/to/urls.txt] | cariddi -e -c [250] -t [15] -plain

Crawl in debug mode, storing HTTP responses and writing results to a txt file
$ cat [path/to/urls.txt] | cariddi -debug -sr -ot [path/to/debug_output.txt]

Perform an intensive crawl through a proxy with a random user agent, writing results to an HTML file
$ cat [path/to/urls.txt] | cariddi -intensive -proxy [http://127.0.0.1:8080] -rua -oh [path/to/intensive_crawl.html]

Hunt for errors and useful information with a custom delay, using the .cariddi_cache folder as cache
$ cat [path/to/urls.txt] | cariddi -err -info -d [3] -cache

Show example uses
$ cariddi -examples

SYNOPSIS

cat [path/to/urls.txt] | cariddi [options]

PARAMETERS

-s
    Hunt for secrets.

-sf string
    External file (one regex per line) with custom regexes for secrets hunting.

-e
    Hunt for juicy endpoints.

-ef string
    External file (one entry per line) with custom parameters for endpoints hunting.

-err
    Hunt for errors in websites.

-info
    Hunt for useful information in websites.

-c int
    Concurrency level (default: 20).

-t int
    HTTP request timeout in seconds (default: 10).

-d int
    Delay in seconds between one crawled page and the next (default: 0).

-plain
    Print only the results, in plain text.

-json
    Print the output as JSON on stdout.

-ot string
    Write the output to a TXT file.

-oh string
    Write the output to an HTML file.

-sr
    Store HTTP responses.

-debug
    Print debug information while crawling.

-intensive
    Crawl intensively, matching resources on the whole second-level domain rather than only the exact host.

-rua
    Use a random browser User-Agent on every request.

-ua string
    Use a custom User-Agent string.

-headers string
    Use custom headers for each request (e.g., "Cookie: auth=123; Connection: close").

-proxy string
    Proxy to route requests through, HTTP or SOCKS5 (e.g., http://127.0.0.1:8080).

-insecure
    Ignore invalid HTTPS certificates.

-cache
    Use the .cariddi_cache folder as cache.

-examples
    Print usage examples.

-version
    Print the version.
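
The -sf file is expected to hold one regular expression per line. A minimal sketch of that workflow, where the file name and the two patterns (an AWS access key ID shape and a GitHub token shape) are illustrative placeholders rather than cariddi defaults:

$ printf 'AKIA[0-9A-Z]{16}\nghp_[0-9a-zA-Z]{36}\n' > custom_secrets.txt
$ cat [path/to/urls.txt] | cariddi -s -sf custom_secrets.txt -json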

DESCRIPTION

Cariddi is a security reconnaissance tool that takes a list of URLs, crawls them, and scans every fetched page and linked resource (including JavaScript files) for secrets, juicy endpoints, API keys, tokens, errors, and other useful information that traditional scanners often miss. Aimed at bug bounty hunters and pentesters, it supports concurrent crawling for speed and can emit results as plain text, JSON, or TXT/HTML report files.

By default, it uses 20 concurrent workers with a 10-second timeout per request. Delays, proxies, custom headers, and user agents can be tuned to avoid tripping rate limits or detection. Intensive mode widens the crawl to resources matching the target's second-level domain, surfacing assets a strict same-host crawl would skip. It is lightweight, written in Go, and keeps a small footprint for an active crawler.
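
As a rough sketch of that kind of tuning, a crawl that sends a session cookie, pins the User-Agent, and throttles requests might look like this (the cookie value and UA string are placeholders):

$ cat [path/to/urls.txt] | cariddi -headers "Cookie: session=abc123" -ua "Mozilla/5.0 (X11; Linux x86_64)" -d 2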

Common workflow: pipe output to httpx or grep for filtering and probing. Always respect robots.txt and legal boundaries during usage.
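
One plausible version of that pipeline, assuming httpx from ProjectDiscovery is installed and using "api" purely as an example filter keyword:

$ cat [path/to/urls.txt] | cariddi -e -plain | grep "api" | httpx -silent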

CAVEATS

High concurrency may trigger rate limits or WAFs; use delays/proxies. Not for unauthorized targets. Requires network access.
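
A gentler scan along those lines, with arbitrary example values for concurrency, delay, and proxy:

$ cat [path/to/urls.txt] | cariddi -c 5 -d 3 -rua -proxy http://127.0.0.1:8080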

INSTALLATION

go install github.com/edoardottt/cariddi/v2/cmd/cariddi@latest
Or download binaries from GitHub releases.
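
A quick sanity check after installing, assuming the Go bin directory (or the binary's download location) is on your PATH:

$ cariddi -version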

EXAMPLE

echo https://example.com | cariddi -e -plain | httpx -silent
Crawls the target, prints discovered endpoints in plain text, and probes them for liveness with httpx.
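
A variant that also hunts for secrets, stores the raw HTTP responses, and writes results to a file for offline review (the output file name is arbitrary):

echo https://example.com | cariddi -s -e -sr -ot results.txt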

HISTORY

Developed in Go by Edoardo Ottavianelli (@edoardottt) and released on GitHub in 2021. It gained popularity in bug bounty communities for fast crawling and endpoint/secret discovery, and remains actively maintained with ongoing performance improvements and features such as intensive crawling and custom secret regexes.

SEE ALSO

curl(1), httpx(1), katana(1), gau(1)
