Set up the AWS CLI

The AWS CLI is a command-line tool for managing files in S3-compatible object storage. This how-to walks you through installing the AWS CLI and configuring it to connect to STACKIT Object Storage.

Download and install AWS CLI version 2 on Linux (x86_64):

Terminal window
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Run the interactive configuration command and enter your STACKIT Object Storage credentials when prompted:

Terminal window
aws configure

The command prompts you for four values: the access key ID and secret access key of your STACKIT Object Storage credentials, a default region, and a default output format.

The CLI writes these values to ~/.aws/credentials and ~/.aws/config.
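After configuration, the two files look roughly like this. The key values and the region name are illustrative placeholders, not real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = EXAMPLE_ACCESS_KEY_ID
aws_secret_access_key = EXAMPLE_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = eu01
output = json
```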

The AWS CLI targets Amazon S3 endpoints by default. To use STACKIT Object Storage, you must point it at the STACKIT endpoint for your region.

The most direct approach is to pass the --endpoint-url flag with every command you run:

Terminal window
aws s3 ls --endpoint-url https://object.storage.eu01.onstackit.cloud

The examples in the following sections assume you have configured the endpoint in ~/.aws/config. If you have not, append --endpoint-url https://object.storage.eu01.onstackit.cloud to each command.
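With AWS CLI version 2.13 or later, the endpoint can be set once per profile instead of being repeated on every command. A sketch of the relevant ~/.aws/config entry, assuming the eu01 endpoint shown above:

```ini
# ~/.aws/config
[default]
region = eu01
endpoint_url = https://object.storage.eu01.onstackit.cloud
```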

Lists all buckets in your STACKIT Object Storage account for the configured region.

Terminal window
aws s3 ls

Lists all objects stored in the given bucket.

Terminal window
aws s3 ls s3://my-bucket

Retrieves the metadata for a specific object without downloading its contents.

Terminal window
aws s3api head-object --bucket my-bucket --key object.gz

Downloads a single object from the specified bucket to the current working directory.

Terminal window
aws s3 cp s3://my-bucket/object.gz .

Use the --recursive flag to download all objects that share a common prefix. The AWS CLI does not support wildcard patterns for this purpose.

Terminal window
aws s3 cp s3://my-bucket/logs/2020/03/ logs/ --recursive
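While wildcards are not supported in the S3 path itself, recursive commands do accept --exclude and --include filters, which can approximate a wildcard download. Bucket and prefix here are the illustrative names used throughout this guide:

```shell
# Exclude everything, then re-include only the .gz objects under the prefix.
aws s3 cp s3://my-bucket/logs/2020/03/ logs/ --recursive --exclude "*" --include "*.gz"
```

Filters are evaluated in order, so the broad --exclude must come before the narrowing --include.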

Uploads a local file to the specified bucket.

Terminal window
aws s3 cp object.gz s3://my-bucket/

Use the --recursive flag to upload all files in a local directory to the specified bucket.

Terminal window
aws s3 cp directory/ s3://my-bucket/ --recursive

Permanently removes a single object from the specified bucket.

Terminal window
aws s3 rm s3://my-bucket/logs/2020/03/18/file1.gz

Use the --recursive flag to remove all objects that share a common prefix.

Terminal window
aws s3 rm s3://my-bucket/logs/2020/03/19/ --recursive
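Because recursive deletion is irreversible, it can be worth previewing the affected objects first with the --dryrun flag, which prints the operations without executing them:

```shell
# Lists each delete the command would perform, but removes nothing.
aws s3 rm s3://my-bucket/logs/2020/03/19/ --recursive --dryrun
```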

Copies all objects that share a common prefix from one location to another. The source and destination can be in the same bucket or in different buckets.

Terminal window
aws s3 cp s3://my-bucket/logs/2020/ s3://my-bucket/logs/backup/ --recursive

The AWS CLI exposes several configuration options that control how it handles concurrent requests and large file transfers. Set these with aws configure set or by editing ~/.aws/config directly. All settings apply to subsequent S3 commands for the configured profile.

The max_concurrent_requests setting controls the number of simultaneous transfer operations the CLI performs. Increasing this value reduces total execution time when transferring many small files.

Terminal window
aws configure set default.s3.max_concurrent_requests 20

For large files, the CLI splits the transfer into smaller parts and moves them in parallel. Two settings control this behavior.

multipart_threshold defines the file size above which the CLI switches to multipart transfers:

Terminal window
aws configure set default.s3.multipart_threshold 64MB

multipart_chunksize defines the size of each individual part:

Terminal window
aws configure set default.s3.multipart_chunksize 16MB

When uploading or downloading a small number of very large files, increasing multipart_chunksize typically has a greater impact than increasing max_concurrent_requests. When transferring thousands of small files, increasing max_concurrent_requests reduces total execution time most significantly.
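As a sketch, the three settings above combine into a single s3 block in ~/.aws/config. The values are the examples from this section, not recommendations for every workload:

```ini
[default]
s3 =
  max_concurrent_requests = 20
  multipart_threshold = 64MB
  multipart_chunksize = 16MB
```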