
Documentation for IBM COS (S3) configuration.

Giri Badanahatti 2018-03-15 09:11:32 -05:00 committed by Nick Craig-Wood
parent 7f744033d8
commit aba43cd3a4
3 changed files with 205 additions and 1 deletion


@@ -51,7 +51,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 (also Dreamhost, Ceph, Minio)",
Description: "Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS)",
NewFs: NewFs,
// AWS endpoints: http://docs.amazonwebservices.com/general/latest/gr/rande.html#s3_region
Options: []fs.Option{{


@@ -26,6 +26,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
* {{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
* {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
* {{< provider name="IBM COS S3" home="http://https://www.ibm.com/cloud/object-storage" config="/s3/" >}}
* {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
* {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}}
* {{< provider name="Microsoft OneDrive" home="https://onedrive.live.com/" config="/onedrive/" >}}


@@ -494,6 +494,209 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
### IBM COS (S3) ###
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and it is accessed through an implementation of the S3 API. The service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information, visit https://www.ibm.com/cloud/object-storage.
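Rclone talks to IBM COS through its standard S3 backend, so the end result of the walkthrough below is simply an entry of type `s3` in rclone.conf. As a rough sketch of what that entry looks like (the remote name, the us-geo endpoint and the us-standard location constraint are just the examples used in this guide, and the credentials are placeholders):
```
# Sketch of the rclone.conf entry produced by the steps below
# (example values only; substitute your own keys and endpoint).
[IBM-COS-XREGION]
type = s3
env_auth = false
access_key_id = <your IBM COS access key>
secret_access_key = <your IBM COS secret key>
region = other-v4-signature
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```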
To configure access to IBM COS S3, follow the steps below:
1. Run rclone config and select n for a new remote.
```
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Enter the name for the configuration.
```
name> IBM-COS-XREGION
```
3. Select "s3" storage.
```
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
\ "s3"
3 / Backblaze B2
Storage> 2
```
4. Select "Enter AWS credentials in the next step" (option 1). To have rclone read the credentials from the environment instead, choose option 2; see the note after these steps.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
```
5. Enter the Access Key and Secret.
```
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>
```
6. Select the "other-v4-signature" region.
```
Region to connect to.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US East (Ohio) Region
2 | Needs location constraint us-east-2.
\ "us-east-2"
/ US West (Oregon) Region
<omitted>
15 | eg Ceph/Dreamhost
| set this and make sure you set the endpoint.
\ "other-v2-signature"
/ If using an S3 clone that understands v4 signatures set this
16 | and make sure you set the endpoint.
\ "other-v4-signature
region> 16
```
7. Enter the endpoint FQDN.
```
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3-api.us-geo.objectstorage.softlayer.net
```
8. Specify an IBM COS Location Constraint.
Currently, the only valid IBM COS values for LocationConstraint are:
us-standard / us-vault / us-cold / us-flex
us-east-standard / us-east-vault / us-east-cold / us-east-flex
us-south-standard / us-south-vault / us-south-cold / us-south-flex
eu-standard / eu-vault / eu-cold / eu-flex
```
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
2 / US East (Ohio) Region.
\ "us-east-2"
<omitted>
location_constraint> us-standard
```
9. Specify a canned ACL.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> 1
```
10. Set the server-side encryption (SSE) option to "None".
```
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption> 1
```
11. Leave the storage class blank (Default); IBM COS uses the LocationConstraint set at the bucket level instead.
```
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
```
12. Review the displayed configuration, accept it to save the remote, and then quit the config tool.
```
Remote config
--------------------
[IBM-COS-XREGION]
env_auth = false
access_key_id = <>
secret_access_key = <>
region = other-v4-signature
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Remote config
Current remotes:
Name Type
==== ====
IBM-COS-XREGION s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```
13. Execute rclone commands.
```
1) Create a bucket.
rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
rclone lsd IBM-COS-XREGION:
-1 2017-11-08 21:16:22 -1 test
-1 2018-02-14 20:16:39 -1 newbucket
3) List contents of a bucket.
rclone ls IBM-COS-XREGION:newbucket
18685952 test.exe
4) Copy a file from local to remote.
rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
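The remote works with any other rclone command as well. For example, here is a minimal sketch of mirroring a local directory into the example bucket (the local path, IBM-COS-XREGION and newbucket are just the illustrative names from this guide):
```
# Make newbucket/backup an exact mirror of the local directory.
# Note: sync deletes files on the destination that are not present
# in the source; use copy instead if that is not what you want.
# --checksum compares checksums and sizes rather than modification times.
rclone sync --checksum /path/to/local/backup IBM-COS-XREGION:newbucket/backup
```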
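As mentioned in step 4, the keys do not have to live in rclone.conf. If env_auth is set to true, rclone's S3 backend picks the credentials up at runtime, for instance from the standard AWS environment variables; a rough sketch with placeholder values:
```
# Requires env_auth = true (option 2 in step 4); access_key_id and
# secret_access_key are then left blank in the rclone config.
export AWS_ACCESS_KEY_ID=<your IBM COS access key>
export AWS_SECRET_ACCESS_KEY=<your IBM COS secret key>
rclone lsd IBM-COS-XREGION:
```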
### Minio ###
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.