LinuxCommandLibrary

cariddi

Crawl a list of URLs and hunt for endpoints, secrets, API keys, file extensions and tokens

TLDR

Hunt for secrets using custom regexes and output results in JSON
$ cat [path/to/urls.txt] | cariddi -s -sf [path/to/custom_secrets.txt] -json

Hunt for juicy endpoints with high concurrency and a custom timeout, printing plain output
$ cat [path/to/urls.txt] | cariddi -e -c [250] -t [15] -plain

Crawl in debug mode, storing HTTP responses and writing results to a txt file
$ cat [path/to/urls.txt] | cariddi -debug -sr -ot [path/to/debug_output.txt]

Perform an intensive crawl through a proxy with a random user agent, writing results to an HTML file
$ cat [path/to/urls.txt] | cariddi -intensive -proxy [http://127.0.0.1:8080] -rua -oh [path/to/intensive_crawl.html]

Hunt for errors and useful information with a custom delay, using the .cariddi_cache folder as cache
$ cat [path/to/urls.txt] | cariddi -err -info -d [3] -cache

Show example uses
$ cariddi -examples

SYNOPSIS

cat <urls_file> | cariddi [options]
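
cariddi reads its target URLs from standard input, one per line. As a minimal sketch of that pipeline, a single target can be fed through a pipe (the URL is a placeholder; only crawl hosts you are authorized to test):

$ echo "https://example.com" | cariddi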

PARAMETERS

-s
    Hunt for secrets (API keys, tokens and other sensitive patterns) in crawled pages.

-sf <file>
    Use an external file of custom regexes (one per line) for secrets hunting.

-e
    Hunt for juicy endpoints (interesting parameters and paths).

-err
    Hunt for error messages in crawled pages.

-info
    Hunt for other useful information exposed in crawled pages.

-c <int>
    Set the concurrency level. Higher values increase speed but can strain the target or network.

-t <int>
    Set the HTTP request timeout, in seconds.

-d <int>
    Set the delay, in seconds, between one crawled page and the next.

-intensive
    Crawl more aggressively, also following resources that match the target's second-level domain.

-cache
    Use the .cariddi_cache folder as a cache between runs.

-proxy <proxy_url>
    Route all requests through the given proxy (HTTP and SOCKS5 are supported), e.g. 'http://127.0.0.1:8080'.

-headers <headers>
    Send custom HTTP headers with every request (e.g. 'Authorization: Bearer token').

-ua <user_agent>
    Use a custom User-Agent string.

-rua
    Use a random browser User-Agent on every request.

-insecure
    Ignore invalid HTTPS certificates, useful for self-signed certificates.

-sr
    Store the raw HTTP responses of crawled pages.

-plain
    Print only the results, without banner or extra decoration.

-json
    Print the output as JSON on standard output.

-ot <file>
    Write the output to a TXT file.

-oh <file>
    Write the output to an HTML file.

-debug
    Print debug information while crawling.

-examples
    Print usage examples and exit.

-version
    Print the cariddi version information and exit.

-h
    Print the help message and exit.
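
As a sketch combining several of the options above (the paths and numeric values are placeholders to adjust per target):

$ cat [path/to/urls.txt] | cariddi -s -e -rua -c [50] -t [10] -ot [path/to/results.txt]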

DESCRIPTION

cariddi is a fast, concurrent command-line crawler written in Go. It takes a list of URLs on standard input, crawls each target, and hunts for endpoints, secrets, API keys, tokens, error messages and other useful information exposed in the responses.

Primarily used by penetration testers and bug bounty hunters, cariddi helps map the attack surface of web applications. It supports custom HTTP headers, proxying, random User-Agent strings, and configurable concurrency, timeouts and delays, and it can cache crawled pages between runs. Results can be printed as plain text or JSON, or written to TXT and HTML files, making the tool easy to integrate into automated workflows.
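
Where results feed other tooling, the JSON output can be piped straight into a processor such as jq (a sketch; jq is assumed to be installed and the paths are placeholders):

$ cat [path/to/urls.txt] | cariddi -e -s -json | jq . > [path/to/results.json]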

Because crawling is concurrent, large URL lists are processed quickly, which makes cariddi an efficient choice for the initial reconnaissance phase of a security assessment.

CAVEATS

cariddi is not a standard Linux command and must be installed separately, typically by compiling from source with Go or by downloading a pre-compiled binary. Aggressive crawling can be detected by intrusion detection systems (IDS) or lead to IP blocking. Users must ensure they have proper authorization before crawling any target system, as unauthorized scanning can be illegal and unethical.
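
Where stealth or target load is a concern, concurrency can be lowered and a per-page delay added (a sketch using the flags described above; the values are illustrative):

$ cat [path/to/urls.txt] | cariddi -c [5] -d [2] -plain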

INSTALLATION

As cariddi is not pre-installed on most Linux distributions, it is typically installed with the Go toolchain (go install -v github.com/edoardottt/cariddi/cmd/cariddi@latest) or downloaded as a pre-compiled binary from its GitHub releases page. Ensure the resulting executable is in your system's PATH for easy access.
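
A minimal install sketch, assuming the Go toolchain is available and that Go's binary directory is not yet on the PATH:

$ go install -v github.com/edoardottt/cariddi/cmd/cariddi@latest
$ export PATH="$PATH:$(go env GOPATH)/bin"
$ cariddi -version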

ETHICAL USE

It is imperative to use cariddi responsibly and ethically. Only scan systems for which you have explicit permission. Unauthorized scanning can lead to legal repercussions and may be considered a cyberattack. Always adhere to local laws and regulations concerning computer security and network scanning.

HISTORY

cariddi is a relatively modern tool, developed by edoardottt and hosted on GitHub. It was created to fill a specific niche in web reconnaissance: fast, concurrent crawling combined with hunting for secrets, endpoints and other useful information. It is written in Go, whose concurrency features contribute to its speed, and it remains under active development, gaining additional hunting, filtering and output options in response to community needs.

SEE ALSO

dirb(1), gobuster(1), ffuf(1), nikto(1), curl(1)
