LinuxCommandLibrary

gallery-dl

Download images/videos from various image hosts

TLDR

Download images from the specified URL

$ gallery-dl "[url]"

Save images to a specific directory
$ gallery-dl --destination [path/to/directory] "[url]"

Retrieve pre-existing cookies from your web browser (useful for sites that require login)
$ gallery-dl --cookies-from-browser [browser] "[url]"

Get the direct URL of an image from a site supporting authentication with username and password
$ gallery-dl --get-urls --username [username] --password [password] "[url]"

Filter manga chapters by chapter number and language
$ gallery-dl --chapter-filter "[10 <= chapter < 20]" --option "lang=[language_code]" "[url]"

SYNOPSIS

gallery-dl [OPTIONS]... URL [URL...]

PARAMETERS

-h, --help
    Show help message and exit

-V, --version
    Print program version and exit

-v, --verbose
    Enable verbose output (repeat for more verbosity)

-q, --quiet
    Suppress non-essential output

-d PATH, --destination PATH
    Set base destination directory for downloads

-D PATH, --directory PATH
    Path template for output directory

-f FORMAT, --filename FORMAT
    Filename format template

--chapter-filter EXPR
    Python expression for chapter selection (e.g., "10 <= chapter < 20")

--range RANGE
    Index range(s) specifying which files to download (e.g., '1-10')

--filter EXPR
    Python expression to filter files

-o KEY=VALUE, --option KEY=VALUE
    Set extractor option

--config FILE
    Load configuration from FILE

--cookies-from-browser BROWSER
    Load cookies from BROWSER (e.g., chrome)

-i, --ignore-errors
    Continue on extractor errors

-A N, --abort N
    Stop the current extractor run after N consecutive downloads were skipped

--limit-rate RATE
    Maximum download rate (e.g., '500k' or '2.5M')

--external-downloader CMD
    Use external downloader command

--write-info-json
    Write metadata to .json file

--no-download
    Do not download files (dry-run)
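These options are commonly combined. A sketch of a typical archiving invocation (the URL and paths are placeholders, and the run is guarded so the snippet does nothing where gallery-dl is not installed):

```shell
# Sketch: combine destination, filename template, and metadata options
# from the list above. The URL is a placeholder.
set -- --destination "$HOME/archive" \
       --filename "{id}_{num}.{extension}" \
       --write-info-json \
       "https://example.com/gallery"

if command -v gallery-dl >/dev/null 2>&1; then
    gallery-dl "$@" || echo "download failed (placeholder URL)"
else
    echo "would run: gallery-dl $*"
fi
```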

DESCRIPTION

gallery-dl is a cross-platform command-line program designed to download image galleries, manga, and collections from hundreds of websites, including Pixiv, DeviantArt, Tumblr, Twitter, Instagram, and more.

It supports batch downloading of albums, user profiles, tags, and feeds using flexible URL patterns and keyword extraction. Output is highly customizable via directory and filename templates, filters, ranges, and post-processing hooks. Metadata such as titles, descriptions, tags, and thumbnails is extracted and can be embedded in downloaded files.

Configuration is done through CLI options, JSON config files (~/.config/gallery-dl/config.json), or environment variables. It respects robots.txt by default, handles rate limiting, retries failed downloads, and supports external downloaders like aria2c. Ideal for archiving artwork, comics, and social media content efficiently.

CAVEATS

Many sites enforce rate limits or require login/cookies; use --cookies-from-browser. Respects robots.txt by default (disable with extractor options). Large archives may consume significant disk space and bandwidth.

INSTALLATION

Install via 'pip install --user gallery-dl' or from package managers (e.g., 'apt install gallery-dl' on Debian). Requires Python 3.7 or later.

CONFIGURATION

Edit ~/.config/gallery-dl/config.json for site-specific options, templates, and defaults. See gallery-dl.conf(5) and the project's configuration documentation.
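A minimal configuration sketch (the "extractor" section with "base-directory" and "filename" keys follows the documented config layout; the values themselves are examples). It is written to the current directory here and validated as JSON; for real use, save it as ~/.config/gallery-dl/config.json:

```shell
# Write a minimal example config and check that it parses as JSON.
# For real use, place it at ~/.config/gallery-dl/config.json instead.
cat > config.json <<'EOF'
{
    "extractor": {
        "base-directory": "~/Downloads/gallery-dl/",
        "filename": "{id}_{num}.{extension}"
    }
}
EOF
python3 -m json.tool config.json >/dev/null && echo "config OK"
```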

KEYWORDS

Use {title}, {id}, {num}, {extension} in templates. List the keywords available for a given URL with 'gallery-dl -K [url]' (short for --list-keywords).
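As an illustration of how a template expands, here is a plain-shell mock-up of the substitution gallery-dl performs for the template "{id}_{num}.{extension}" (the values are invented; gallery-dl fills them in from extracted metadata):

```shell
# Mock-up only: gallery-dl performs this substitution itself from
# extracted metadata; the values below are invented for illustration.
id=12345
num=1
extension=jpg
printf '%s_%s.%s\n' "$id" "$num" "$extension"   # → 12345_1.jpg
```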

HISTORY

Developed and actively maintained by Mike Fährmann (mikf) on GitHub since the mid-2010s, with regular releases adding support for new sites, bug fixes, and current Python versions. Widely used for bulk art/manga archiving.

SEE ALSO

wget(1), curl(1), yt-dlp(1), aria2c(1)
