aws-s3-mv
Move objects between S3 locations
TLDR
Move a file from local to a specified bucket: aws s3 mv path/to/local_file s3://bucket_name/path/to/file
Move a specific S3 object into another bucket: aws s3 mv s3://bucket_name/object_key s3://another_bucket/new_object_key
Move a specific S3 object into another bucket, keeping the original name: aws s3 mv s3://bucket_name/object_key s3://another_bucket/
Display help: aws s3 mv help
SYNOPSIS
aws s3 mv <source> <destination> [--options]
PARAMETERS
source
The location of the file or directory to move. This can be a local path or an S3 URI (s3://bucket/key).
destination
The destination to move the file or directory to. This can be a local path or an S3 URI (s3://bucket/key).
--recursive
Recursively move all files under the source directory.
--exclude
Exclude files or objects that match this pattern.
--include
Include files or objects that match this pattern.
--acl
Sets the Access Control List (ACL) for the object. For example, --acl public-read.
--storage-class
The storage class to use for the object. For example, --storage-class STANDARD_IA.
--dryrun
Performs a dry run: displays the operations that would be carried out without actually moving or copying anything.
--sse
Specifies server-side encryption to use for the object.
--sse-kms-key-id
Specifies the KMS key ID to use for encryption.
--no-guess-mime-type
By default, the AWS CLI uses the file extension to determine the MIME type. This option disables that behavior.
--content-type
Overrides the guessed MIME type with the specified content type.
--quiet
Suppresses verbose output.
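Both source and destination arguments are distinguished by their form: anything starting with s3:// is treated as an S3 location, everything else as a local path. A minimal Python sketch of that distinction (parse_s3_uri is a hypothetical helper for illustration, not part of the AWS CLI or boto3):

```python
def parse_s3_uri(uri):
    """Split an S3 URI of the form s3://bucket/key into (bucket, key).

    Returns None for anything that is not an S3 URI, i.e. a local path.
    This mirrors how the CLI decides which side of a transfer is S3,
    but it is only an illustrative model.
    """
    prefix = "s3://"
    if not uri.startswith(prefix):
        return None  # treated as a local filesystem path
    # Everything up to the first "/" after the scheme is the bucket;
    # the remainder (possibly empty) is the object key or key prefix.
    bucket, _, key = uri[len(prefix):].partition("/")
    return bucket, key
```

For example, parse_s3_uri("s3://mybucket/myfile.txt") yields ("mybucket", "myfile.txt"), while parse_s3_uri("myfile.txt") yields None.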
DESCRIPTION
The `aws s3 mv` command in the AWS Command Line Interface (CLI) moves files or directories to, from, or within Amazon Simple Storage Service (S3). It effectively combines a copy and a delete operation: it copies the source object(s) to the specified destination, then deletes the source only after the copy succeeds. Source and destination can each be an S3 URI (`s3://bucket/key`) or a local file path. The command can move individual files or entire directories recursively, and can filter objects using wildcard patterns via `--exclude` and `--include`. Moves between S3 locations do not require downloading and re-uploading data through the local filesystem, which makes reorganizing objects within S3 typically faster and more cost-effective than round-tripping them through your local system.
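The copy-then-delete semantics can be modeled with a toy in-memory "bucket" (a plain dict standing in for S3; this is an illustrative sketch, not how the CLI is implemented). The ordering matters: because the source is deleted only after the copy completes, an interrupted move leaves the source object intact rather than losing data.

```python
def s3_mv(store, source_key, dest_key):
    """Toy model of `aws s3 mv` within S3: copy first, delete after.

    `store` is a dict mapping object keys to object bodies. If the
    copy step raised (e.g. missing source key), the delete step would
    never run, so the source object is never lost mid-move.
    """
    store[dest_key] = store[source_key]  # copy step
    del store[source_key]                # delete step, only after the copy
```

For example, moving "myfile.txt" to "newlocation/myfile.txt" in this model leaves exactly one copy of the data, under the new key.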
CAVEATS
If the destination is a local path, `aws s3 mv` downloads the object(s) and then deletes them from S3 after a successful copy, just like a standard move on a local filesystem. Moving large objects, or large numbers of objects, can take significant time. Ensure appropriate AWS credentials are configured before using the command; moving objects between different AWS accounts additionally requires the appropriate cross-account permissions.
EXAMPLES
Move a single file from local to S3: aws s3 mv myfile.txt s3://mybucket/
Move a file within S3: aws s3 mv s3://mybucket/myfile.txt s3://mybucket/newlocation/
Move a directory recursively to S3: aws s3 mv mydirectory s3://mybucket/ --recursive
Move files with a specific extension using include/exclude: aws s3 mv s3://mybucket/ s3://mybucket/backup/ --recursive --exclude "*" --include "*.jpg"
ERROR HANDLING
The command returns a non-zero exit code if the move operation fails. Common reasons for failure include insufficient permissions, incorrect S3 URI formats, and network connectivity issues. The `--dryrun` option is useful for testing the operation and your permissions without making any changes to the S3 bucket. Ensure that the AWS CLI is configured with appropriate permissions to access and modify the bucket, and check the command's stderr output for detailed error messages.
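When scripting around the CLI, the exit code is the reliable success signal. A hedged Python sketch of that pattern (try_move is a hypothetical wrapper for illustration; the `aws` invocation in the comment assumes an installed, configured AWS CLI):

```python
import subprocess

def try_move(cmd):
    """Run a command given as an argv list and report success.

    Returns True when the exit code is 0; on failure, prints the
    command's stderr, which is where the AWS CLI writes diagnostics
    such as permission or connectivity errors.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr.strip())
    return result.returncode == 0

# Example usage (not run here; requires the AWS CLI and credentials):
# ok = try_move(["aws", "s3", "mv", "myfile.txt", "s3://mybucket/", "--dryrun"])
```

Using an argv list rather than a shell string avoids quoting issues with patterns like `--include "*.jpg"`.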
HISTORY
The `aws s3 mv` command was introduced as part of the AWS CLI to provide a convenient way to move objects within Amazon S3. Before its introduction, users had to manually copy and then delete files, making the process more cumbersome and error-prone. The `mv` command simplifies this process, aligning with familiar command-line semantics for moving files. Its development aimed to improve efficiency in S3 object management, reducing both time and costs associated with data transfer. It has evolved over time with the addition of various options to handle different storage classes, encryption methods, and access control, reflecting the increasing complexity and demands of S3 usage.