aws-s3
Manage objects in Amazon S3 buckets
TLDR
Show files in a bucket
Sync files and directories from local to bucket
Sync files and directories from bucket to local
Sync files and directories with exclusions
Remove file from bucket
Preview changes only
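The entries above correspond to invocations like the following (bucket names and paths are placeholders):

```shell
# Show files in a bucket
aws s3 ls s3://bucket_name

# Sync files and directories from local to bucket
aws s3 sync path/to/directory s3://bucket_name/prefix

# Sync files and directories from bucket to local
aws s3 sync s3://bucket_name/prefix path/to/directory

# Sync with exclusions (the pattern is illustrative)
aws s3 sync path/to/directory s3://bucket_name/prefix --exclude "*.log"

# Remove file from bucket
aws s3 rm s3://bucket_name/path/to/file

# Preview changes only; no data is transferred
aws s3 sync path/to/directory s3://bucket_name/prefix --dryrun
```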
SYNOPSIS
aws s3 <subcommand> [options] [arguments]
PARAMETERS
ls
Lists S3 objects and common prefixes in a bucket or under a specified prefix. Similar to 'ls' for local file systems.
cp
Copies local files to S3, S3 objects to local files, or S3 objects between buckets. Supports recursive copies for directories (--recursive).
mv
Moves local files to S3, S3 objects to local files, or S3 objects between buckets. This operation performs a copy followed by a delete.
rm
Removes S3 objects. Supports recursive removal for prefixes and objects within a bucket (--recursive).
sync
Recursively copies new and updated files between a source and a destination, in either direction, so that the destination mirrors the source. By default it only adds and updates files; pass --delete to also remove destination files that no longer exist at the source. Often used for backups or keeping local copies up-to-date with S3.
mb
Makes (creates) a new S3 bucket.
rb
Removes (deletes) an S3 bucket. A bucket must be empty before it can be removed, unless the --force option is used.
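As a sketch, the remaining subcommands follow the same s3:// URI convention (all names below are placeholders):

```shell
# Copy a local file into a bucket
aws s3 cp path/to/local_file s3://bucket_name/prefix/

# Move an object between buckets (a copy followed by a delete)
aws s3 mv s3://source_bucket/key s3://destination_bucket/key

# Create a new bucket
aws s3 mb s3://bucket_name

# Delete a bucket; it must be empty unless --force is given
aws s3 rb s3://bucket_name
```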
DESCRIPTION
The aws s3 command is a high-level interface within the AWS Command Line Interface (CLI), designed for simplified management of Amazon Simple Storage Service (S3) buckets and objects. It abstracts away many complexities of the underlying S3 API, providing familiar file system-like commands such as cp, ls, and sync.
This command suite covers common S3 operations such as copying files, syncing local directories with S3 buckets, listing bucket contents, and creating or deleting buckets. It automatically handles multipart uploads for large files, retries on transient errors, and parallel data transfers, making it a convenient tool for interacting with cloud storage directly from the terminal and well suited to scripting and automating S3-related tasks.
CAVEATS
- Requires the AWS CLI to be installed and configured with appropriate AWS credentials and region settings.
- Operations are subject to IAM permissions; incorrect permissions can lead to 'Access Denied' errors or unintended public exposure of data.
- S3 storage and data transfer incur costs based on AWS pricing models.
- High-volume operations might be subject to S3 API rate limits.
- Common errors involve mistyped file paths or bucket names, and region mismatches between the client configuration and the bucket.
CONFIGURATION AND CREDENTIALS
Before using aws s3, you must configure your AWS credentials and default region using aws configure. This typically involves setting up your access key ID, secret access key, and default region in ~/.aws/credentials and ~/.aws/config.
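The interactive setup and the resulting files look roughly like this (all values are placeholders):

```shell
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json

# This writes entries equivalent to:
#
# ~/.aws/credentials
#   [default]
#   aws_access_key_id = <your-access-key-id>
#   aws_secret_access_key = <your-secret-access-key>
#
# ~/.aws/config
#   [default]
#   region = us-east-1
#   output = json
```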
S3 OBJECT VERSIONING
If S3 bucket versioning is enabled, operations like rm will create new 'delete markers' rather than permanently deleting objects, preserving previous versions. To permanently delete all versions, additional steps or s3api commands might be required.
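For instance, permanently deleting a specific version requires the lower-level s3api interface (bucket, key, and version ID below are placeholders):

```shell
# List all versions and delete markers for objects under a prefix
aws s3api list-object-versions --bucket bucket_name --prefix path/to/file

# Permanently delete one specific version of an object
aws s3api delete-object --bucket bucket_name --key path/to/file --version-id VERSION_ID
```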
MULTIPART UPLOADS
For large files (those above the CLI's multipart threshold, 8 MB by default), aws s3 cp and aws s3 sync automatically use S3's multipart upload capability, splitting the file into smaller parts for more efficient and robust transfers. Failed parts can be retried individually rather than restarting the whole upload.
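The transfer behavior is tunable through the CLI's S3 configuration settings; for example (the values are illustrative, not recommendations):

```shell
# Raise the file size at which multipart uploads kick in
aws configure set default.s3.multipart_threshold 64MB

# Control the size of each uploaded part
aws configure set default.s3.multipart_chunksize 16MB

# Limit how many requests run concurrently
aws configure set default.s3.max_concurrent_requests 5
```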
HISTORY
The AWS Command Line Interface (CLI) was first released in 2013, aiming to provide a unified tool for interacting with all AWS services. The aws s3 high-level commands were a significant addition, designed to simplify common S3 operations by offering a more intuitive, file system-like experience compared to the granular aws s3api commands. Its development focused on ease of use, robust error handling, and efficiency, making it a cornerstone for cloud storage management from the command line.