title: Amazon S3
description: Rclone docs for Amazon S3
date: 2016-07-11

Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers, including AWS S3, Ceph, DigitalOcean Spaces, Dreamhost DreamObjects, IBM COS, Minio and Wasabi.

Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, eg remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific section above) you can use it like this:

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync /home/local/directory remote:bucket

AWS S3

Here is an example of making an s3 configuration. First run

rclone config

This will guide you through an interactive setup process.

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Alias for a existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint> 
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> 

--fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
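
For example, to sync a local directory while listing the bucket with fewer transactions (the local path and bucket name here are placeholders):

rclone sync --fast-list /home/local/directory remote:bucket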

--update and --use-server-modtime

As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
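
For example, this sync (paths are placeholders) only uploads files whose local modification time is newer than the time the object was last uploaded, and avoids the extra metadata calls:

rclone sync --update --use-server-modtime /home/local/directory remote:bucket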

Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
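
You can see the modification times rclone reads back from this metadata in a long listing, eg (the bucket name is a placeholder):

rclone lsl remote:bucket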

Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

Buckets and Regions

With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

  • Directly in the rclone configuration file (env_auth = false in the config file):
    • access_key_id and secret_access_key are required.
    • session_token can be optionally set when using AWS STS.
  • Runtime configuration (env_auth = true in the config file):
    • Export the following environment variables before running rclone:
      • Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
      • Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
      • Session Token: AWS_SESSION_TOKEN (optional)
    • Or, use a named profile:
      • Profile files are standard files used by AWS CLI tools
      • By default it will use the credentials file in your home directory (eg ~/.aws/credentials on unix based systems) and the "default" profile. To change this, set these environment variables:
        • AWS_SHARED_CREDENTIALS_FILE to control which file.
        • AWS_PROFILE to control which profile to use.
    • Or, run rclone in an ECS task with an IAM role (AWS only).
    • Or, run rclone on an EC2 instance with an IAM role (AWS only).

If none of these options ends up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).
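
As an illustration of the runtime configuration route, with env_auth = true in the config you could export the credentials before running rclone (the key values here are placeholders):

export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote: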

S3 Permissions

When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

  • ListBucket
  • DeleteObject
  • GetObject
  • PutObject
  • PutObjectACL

Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
              "arn:aws:s3:::BUCKET_NAME/*",
              "arn:aws:s3:::BUCKET_NAME"
            ]
        }
    ]
}

Notes on above:

  1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
  2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.

For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

Key Management System (KMS)

If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the --ignore-checksum flag.

A proper fix is being worked on in issue #1824.
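
For example, to copy files into a KMS-encrypted bucket with the checksum comparison disabled (paths are placeholders):

rclone copy --ignore-checksum /path/to/files remote:bucket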

Glacier

You can transition objects to glacier storage using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access the data you will see an error like below.

2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to restore the object(s) in question before using rclone.
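
Restoring is done outside of rclone; as an illustration, with the AWS CLI you could request a temporary restore like this (the bucket, key and number of days are placeholders):

aws s3api restore-object --bucket BUCKET_NAME --key path/to/file --restore-request '{"Days":7}'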

Specific options

Here are the command line options specific to this cloud storage system.

--s3-acl=STRING

Canned ACL used when creating buckets and/or storing objects in S3.

For more info visit the canned ACL docs.
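
For example, to upload objects that anyone can read (paths are placeholders):

rclone copy --s3-acl=public-read /path/to/files remote:bucket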

--s3-storage-class=STRING

Storage class to upload new objects with.

Available options include:

  • STANDARD - default storage class
  • STANDARD_IA - for less frequently accessed data (e.g. backups)
  • ONEZONE_IA - for storing data in only one Availability Zone
  • REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
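
For example, to upload infrequently accessed backups with the STANDARD_IA class (the path is a placeholder):

rclone copy --s3-storage-class=STANDARD_IA /path/to/backups remote:bucket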

--s3-chunk-size=SIZE

Any files larger than this will be uploaded in chunks of this size. The default is 5MB. The minimum is 5MB.

Note that 2 chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
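
For example, to use 64 MB chunks when uploading large files over a fast link (the path and size are only illustrative; remember that two chunks of this size are buffered in memory per transfer):

rclone copy --s3-chunk-size=64M /path/to/large/files remote:bucket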

--s3-upload-concurrency

Number of chunks of the same file that are uploaded concurrently. Default is 2.

If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
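
For example, combining a larger chunk size with four concurrent chunk uploads (the values here are only illustrative):

rclone copy --s3-upload-concurrency=4 --s3-chunk-size=64M /path/to/bigfile remote:bucket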

Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id = 
secret_access_key = 
region = us-east-1
endpoint = 
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

Then use it as normal with the name of the public bucket, eg

rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

Ceph

Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =

Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys removed).

{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}

Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.
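
If you would rather not unescape it by hand, any JSON-aware tool will decode it for you; for example, assuming the dump came from radosgw-admin and jq is installed (the user id is a placeholder):

radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'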

Dreamhost

Dreamhost DreamObjects is an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =

DigitalOcean Spaces

Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key.

When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings.

Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:

Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>

The resulting configuration file should look like:

[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =

Once configured, you can create a new Space and begin copying files. For example:

rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space

IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit http://www.ibm.com/cloud/object-storage.

To configure access to IBM COS S3, follow the steps below:

  1. Run rclone config and select n for a new remote.
	2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
	No remotes found - make a new one
	n) New remote
	s) Set configuration password
	q) Quit config
	n/s/q> n
  2. Enter the name for the configuration
	name> <YOUR NAME>
  1. Select "s3" storage.
Choose a number from below, or type in your own value
 	1 / Alias for a existing remote
   	\ "alias"
 	2 / Amazon Drive
   	\ "amazon cloud drive"
 	3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
   	\ "s3"
 	4 / Backblaze B2
   	\ "b2"
[snip]
	23 / http Connection
    \ "http"
Storage> 3
  4. Select IBM COS as the S3 Storage Provider.
Choose the S3 provider.
Choose a number from below, or type in your own value
	 1 / Choose this option to configure Storage to AWS S3
	   \ "AWS"
 	 2 / Choose this option to configure Storage to Ceph Systems
  	 \ "Ceph"
	 3 /  Choose this option to configure Storage to Dreamhost
     \ "Dreamhost"
   4 / Choose this option to the configure Storage to IBM COS S3
   	 \ "IBMCOS"
 	 5 / Choose this option to the configure Storage to Minio
     \ "Minio"
	 Provider>4
  5. Enter the Access Key and Secret.
	AWS Access Key ID - leave blank for anonymous access or runtime credentials.
	access_key_id> <>
	AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
	secret_access_key> <>
  6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
	Endpoint for IBM COS S3 API.
	Specify if using an IBM COS On Premise.
	Choose a number from below, or type in your own value
	 1 / US Cross Region Endpoint
   	   \ "s3-api.us-geo.objectstorage.softlayer.net"
	 2 / US Cross Region Dallas Endpoint
   	   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 	 3 / US Cross Region Washington DC Endpoint
   	   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
	 4 / US Cross Region San Jose Endpoint
	   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
	 5 / US Cross Region Private Endpoint
	   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
	 6 / US Cross Region Dallas Private Endpoint
	   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
	 7 / US Cross Region Washington DC Private Endpoint
	   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
	 8 / US Cross Region San Jose Private Endpoint
	   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
	 9 / US Region East Endpoint
	   \ "s3.us-east.objectstorage.softlayer.net"
	10 / US Region East Private Endpoint
	   \ "s3.us-east.objectstorage.service.networklayer.com"
	11 / US Region South Endpoint
[snip]
	34 / Toronto Single Site Private Endpoint
	   \ "s3.tor01.objectstorage.service.networklayer.com"
	endpoint>1
  7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; hit enter.
	 1 / US Cross Region Standard
	   \ "us-standard"
	 2 / US Cross Region Vault
	   \ "us-vault"
	 3 / US Cross Region Cold
	   \ "us-cold"
	 4 / US Cross Region Flex
	   \ "us-flex"
	 5 / US East Region Standard
	   \ "us-east-standard"
	 6 / US East Region Vault
	   \ "us-east-vault"
	 7 / US East Region Cold
	   \ "us-east-cold"
	 8 / US East Region Flex
	   \ "us-east-flex"
	 9 / US South Region Standard
	   \ "us-south-standard"
	10 / US South Region Vault
	   \ "us-south-vault"
[snip]
	32 / Toronto Flex
	   \ "tor01-flex"
location_constraint>1
  8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
      1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
      \ "private"
      2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
      \ "public-read"
      3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
      \ "public-read-write"
      4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
      \ "authenticated-read"
acl> 1
  9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
	[xxx]
	type = s3
	Provider = IBMCOS
	access_key_id = xxx
	secret_access_key = yyy
	endpoint = s3-api.us-geo.objectstorage.softlayer.net
	location_constraint = us-standard
	acl = private
  10. Execute rclone commands
	1)	Create a bucket.
		rclone mkdir IBM-COS-XREGION:newbucket
	2)	List available buckets.
		rclone lsd IBM-COS-XREGION:
		-1 2017-11-08 21:16:22        -1 test
		-1 2018-02-14 20:16:39        -1 newbucket
	3)	List contents of a bucket.
		rclone ls IBM-COS-XREGION:newbucket
		18685952 test.exe
	4)	Copy a file from local to remote.
		rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
	5)	Copy a file from remote to local.
		rclone copy IBM-COS-XREGION:newbucket/file.txt .
	6)	Delete a file on remote.
		rclone delete IBM-COS-XREGION:newbucket/file.txt

Minio

Minio is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.

To use it, install Minio following the instructions here.

When it configures itself Minio will print something like this

Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total

These details need to go into rclone config like this. Note that it is important to put the region in as stated above.

env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>

Which makes the config file look like this

[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =

So once set up, for example to copy files into a bucket

rclone copy /path/to/files minio:bucket

Wasabi

Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use with rclone like this.

No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This will leave the config file looking like this.

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =