Set up the AWS CLI
The AWS CLI is a command-line tool for managing files in S3-compatible object storage. This how-to walks you through installing the AWS CLI and configuring it to connect to STACKIT Object Storage.
Prerequisites
- A STACKIT project with Object Storage enabled
- An Object Storage bucket — to create one, read Create and manage Object Storage buckets
- Object Storage credentials (access key ID and secret access key) — to generate them, read Create and delete Object Storage credentials
Install the AWS CLI
Choose the instructions for your operating system.
Linux
Download the installer archive, unpack it, and run the bundled install script:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
macOS
The AWS CLI supports macOS 11 and later.
- Download the macOS installer package from https://awscli.amazonaws.com/AWSCLIV2.pkg.
- Run the downloaded .pkg file and follow the on-screen instructions.
  - For all users: The installer places the CLI in /usr/local/aws-cli and creates a symlink at /usr/local/bin/aws automatically.
  - For current user only: After installation, create the symlinks manually:
    ln -s /folder/installed/aws-cli/aws /usr/local/bin/aws
    ln -s /folder/installed/aws-cli/aws_completer /usr/local/bin/aws_completer
To install from the command line instead, run:
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
Windows
The AWS CLI supports 64-bit versions of Windows that Microsoft actively supports.
- Download the MSI installer from https://awscli.amazonaws.com/AWSCLIV2.msi.
- Run the downloaded .msi file and follow the setup wizard instructions.
Alternatively, open Command Prompt and run:
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
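On any platform, you can confirm that the installation succeeded by printing the CLI version; for a v2 install the output starts with aws-cli/2:
aws --version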
Configure credentials
Run the interactive configuration command and enter your STACKIT Object Storage credentials when prompted:
aws configure
The command prompts you for four values:
| Prompt | Value |
|---|---|
| AWS Access Key ID | Your Object Storage access key ID |
| AWS Secret Access Key | Your Object Storage secret access key |
| Default region name | eu01 |
| Default output format | json |
The CLI writes these values to ~/.aws/credentials and ~/.aws/config.
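For reference, the two files then contain entries along the following lines; the access key values shown here are placeholders for your own credentials:
~/.aws/credentials:
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
~/.aws/config:
[default]
region = eu01
output = json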
Configure the STACKIT endpoint
The AWS CLI targets Amazon S3 endpoints by default. To use STACKIT Object Storage, you must point it at the STACKIT endpoint for your region.
Add the --endpoint-url flag to every command you run:
aws s3 ls --endpoint-url https://object.storage.eu01.onstackit.cloud
Alternatively, add the endpoint_url property to your default profile in ~/.aws/config. This removes the need to pass --endpoint-url with every subsequent command:
[default]
output = json
endpoint_url = https://object.storage.eu01.onstackit.cloud
The examples in the following sections assume you have configured the endpoint in ~/.aws/config. If you pass --endpoint-url instead, append --endpoint-url https://object.storage.eu01.onstackit.cloud to each command.
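If you prefer not to edit the file by hand, you can write the same entry with aws configure set; note that honoring endpoint_url from the config file requires a reasonably recent AWS CLI v2 release, so treat this as an optional shortcut:
aws configure set endpoint_url https://object.storage.eu01.onstackit.cloud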
Common operations
List all buckets
Lists all buckets in your STACKIT Object Storage account for the configured region.
aws s3 ls
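The output lists each bucket's creation timestamp and name; the bucket names and dates below are placeholders:
2024-03-18 10:21:33 my-bucket
2024-03-19 14:02:07 my-other-bucket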
List objects in a bucket
Lists all objects stored in the given bucket.
aws s3 ls s3://my-bucket
Read an object’s metadata
Retrieves the metadata for a specific object without downloading its contents.
aws s3api head-object --bucket my-bucket --key object.gz
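The metadata is returned as JSON. The exact fields depend on the object, but the response resembles the following (all values are placeholders):
{
    "AcceptRanges": "bytes",
    "LastModified": "2024-03-18T10:21:33+00:00",
    "ContentLength": 1048576,
    "ETag": "\"5d41402abc4b2a76b9719d911017c592\"",
    "ContentType": "application/gzip",
    "Metadata": {}
}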
Download a single object
Downloads a single object from the specified bucket to the current working directory.
aws s3 cp s3://my-bucket/object.gz .
Download multiple objects
Use the --recursive flag to download all objects that share a common prefix. The AWS CLI does not support wildcard patterns in the S3 path for this purpose.
aws s3 cp s3://my-bucket/logs/2020/03/ logs/ --recursive
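Although the S3 path itself cannot contain wildcards, the s3 commands accept --exclude and --include filters that do, so you can narrow a recursive transfer. For example, to download only the .gz objects under the prefix (the filter patterns here are illustrative):
aws s3 cp s3://my-bucket/logs/2020/03/ logs/ --recursive --exclude "*" --include "*.gz"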
Upload a single file
Uploads a local file to the specified bucket.
aws s3 cp object.gz s3://my-bucket/
Upload multiple files
Use the --recursive flag to upload all files in a local directory to the specified bucket.
aws s3 cp directory/ s3://my-bucket/ --recursive
Delete a single object
Permanently removes a single object from the specified bucket.
aws s3 rm s3://my-bucket/logs/2020/03/18/file1.gz
Delete multiple objects
Use the --recursive flag to remove all objects that share a common prefix.
aws s3 rm s3://my-bucket/logs/2020/03/19/ --recursive
Copy objects between buckets
Copies all objects that share a common prefix from one location to another. The source and destination can be in different buckets.
aws s3 cp s3://my-bucket/logs/2020/ s3://my-bucket/logs/backup/ --recursive
Optimize transfer performance
The AWS CLI exposes several configuration options that control how it handles concurrent requests and large file transfers. Set these with aws configure set or by editing ~/.aws/config directly. All settings apply to subsequent S3 commands for the configured profile.
Increase concurrent requests
The max_concurrent_requests setting controls the number of simultaneous transfer operations the CLI performs. Increasing this value reduces total execution time when transferring many small files.
aws configure set default.s3.max_concurrent_requests 20
Configure multipart uploads
For large files, the CLI splits the upload into smaller chunks and transfers them in parallel. Two settings control this behavior.
multipart_threshold defines the file size at which the CLI starts chunking the upload:
aws configure set default.s3.multipart_threshold 64MB
multipart_chunksize defines the size of each individual part:
aws configure set default.s3.multipart_chunksize 16MB
When uploading or downloading a small number of very large files, increasing multipart_chunksize typically has a greater impact than increasing max_concurrent_requests. When transferring thousands of small files, increasing max_concurrent_requests reduces total execution time most significantly.
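After running these aws configure set commands, the transfer settings end up in a nested s3 block of ~/.aws/config. Combined with the endpoint and output settings from earlier, the default profile then looks roughly like this:
[default]
region = eu01
output = json
endpoint_url = https://object.storage.eu01.onstackit.cloud
s3 =
  max_concurrent_requests = 20
  multipart_threshold = 64MB
  multipart_chunksize = 16MB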