mirror of https://github.com/rclone/rclone synced 2024-11-23 00:06:55 +01:00
Commit Graph

327 Commits

Author SHA1 Message Date
Nick Craig-Wood
e6fde67491 s3: fix retry logic, logging and error reporting for chunk upload
- move retries to the correct place, into the lowest level functions
- fix logging and error reporting
2023-08-24 12:39:27 +01:00
Vitor Gomes
6dd736fbdc s3: refactor MultipartUpload to use OpenChunkWriter and ChunkWriter #7056 2023-08-12 17:55:01 +01:00
kapitainsky
e66675d346 docs: rclone backend restore 2023-07-29 11:31:16 +09:00
Benjamin
119ccb2b95
s3: add Leviia S3 Object Storage as provider 2023-07-16 18:08:47 +01:00
Nick Craig-Wood
d0d41fe847 rclone config redacted: implement support mechanism for showing redacted config
This introduces a new fs.Option flag, Sensitive, and uses this along
with IsPassword to redact the info in the config file for support
purposes.

It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.

Fixes #5209
2023-07-07 16:25:14 +01:00
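The redaction described above boils down to walking the backend
options and masking any value whose option is flagged Sensitive or
IsPassword. A minimal Go sketch of the idea, using a simplified
Option type and a hypothetical redact helper rather than rclone's
actual fs.Option:

    package main

    import "fmt"

    // Option is a cut-down stand-in for a backend option that may
    // carry secrets; Sensitive and IsPassword are the flags above.
    type Option struct {
        Name       string
        Value      string
        Sensitive  bool
        IsPassword bool
    }

    // redact returns the config with secret values masked so the
    // output is safe to paste into a support request.
    func redact(opts []Option) map[string]string {
        out := make(map[string]string, len(opts))
        for _, o := range opts {
            if o.Sensitive || o.IsPassword {
                out[o.Name] = "XXX"
                continue
            }
            out[o.Name] = o.Value
        }
        return out
    }

    func main() {
        opts := []Option{
            {Name: "access_key_id", Value: "AKIA...", Sensitive: true},
            {Name: "region", Value: "us-east-1"},
        }
        fmt.Println(redact(opts)) // map[access_key_id:XXX region:us-east-1]
    }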
BakaWang
f1a8420814
s3: add synology to s3 provider list 2023-07-06 10:54:07 +01:00
zzq
e9a753f678 s3: add Qiniu KODO quirk: virtualHostStyle is false 2023-06-26 17:47:27 +01:00
Ehsan Tadayon
2dd2072cdb s3: fix ArvanCloud domain and region changes and alphabetise the provider 2023-06-25 11:01:41 +01:00
Nick Craig-Wood
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

See #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
2c50f26c36 mount: fix mount failure on macOS with an on-the-fly remote
This commit

3567a47258 fs: make ConfigString properly reverse suffixed file systems

made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names containing illegal characters which couldn't be mounted by macOS.

This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.

Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
2023-06-23 14:12:03 +01:00
Nick Craig-Wood
000ddc4951 s3: fix versions tests when running on minio 2023-06-14 17:30:36 +01:00
cc
37a3309438
s3: v3sign: add missing subresource delete
The delete query string parameter must be included when you create the
CanonicalizedResource for a multi-object Delete request.
2023-05-14 11:25:52 +01:00
Nick Craig-Wood
baf16a65f0 s3: fix directory marker code #3453
Use Update to upload the directory markers
2023-05-07 12:47:09 +01:00
Andrei Smirnov
f226f2dfb1
s3: add petabox.io to s3 providers 2023-05-05 09:44:25 +01:00
Nick Craig-Wood
f5bab284c3 s3: fix missing "tier" metadata
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.

This made it impossible to filter on tier using the metadata filters.

This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.

See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
2023-04-28 14:33:01 +01:00
Nick Craig-Wood
74652bf318 s3: empty directory markers further work #3453
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
2023-04-28 14:31:05 +01:00
Jānis Bebrītis
b6a95c70e9 s3: empty directory markers - #3453 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
aca7d0fd22 s3: fix potential crash in integration tests 2023-04-28 14:31:05 +01:00
Brian Starkey
589b7b4873 s3: update Scaleway storage classes
There are now 3 classes:
 * "STANDARD" - Multi-AZ, all regions
 * "ONEZONE_IA" - Single-AZ, FR-PAR only
 * "GLACIER" - Archive, FR-PAR and NL-AMS only
2023-04-19 17:20:30 +01:00
Dimitri Papadopoulos
bfe272bf67 backend: fix typos found by codespell 2023-03-24 11:34:14 +00:00
Nick Craig-Wood
ddb3b17e96 s3: fix hang on aborting multipart upload with iDrive e2
Apparently the abort multipart upload call doesn't return while
multipart uploads are in progress on iDrive e2.

This means that if we CTRL-C a multipart upload, rclone hangs until
all the parts being uploaded have completed. However, since rclone is
uploading multiple parts at once, this doesn't happen until after the
entire file is uploaded.

This was fixed by cancelling the upload context which causes all the
uploads to stop instantly.
2023-03-22 12:50:58 +00:00
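Cancelling the shared upload context is what lets every in-flight
part upload stop at once instead of waiting for each part to finish.
A rough sketch of the pattern (not rclone's actual uploader code):

    package main

    import (
        "context"
        "fmt"
        "sync"
        "time"
    )

    // uploadPart stands in for a single part upload; it returns as
    // soon as the shared context is cancelled.
    func uploadPart(ctx context.Context, n int) error {
        select {
        case <-time.After(time.Duration(n) * 100 * time.Millisecond):
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        var wg sync.WaitGroup
        for i := 1; i <= 4; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                fmt.Println("part", n, uploadPart(ctx, n))
            }(i)
        }
        cancel() // e.g. triggered by CTRL-C: every part stops immediately
        wg.Wait()
    }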
Nick Craig-Wood
542677d807 s3: fix --s3-versions on individual objects
Before this fix attempting to access an s3 versioned object by name in
a subdirectory of root would not find the object.

This fixes the problem and introduces an integration test.

See: https://forum.rclone.org/t/s3-versions-cant-retrieve-old-version/36900
2023-03-21 12:44:45 +00:00
Nick Craig-Wood
d481aa8613 Revert "s3: fix InvalidRequest copying to a locked bucket from a source with no MD5SUM"
This reverts commit e5a1bcb1ce.

This causes a lot of integration test failures so may need to be optional.
2023-03-21 11:43:43 +00:00
Nick Craig-Wood
e5a1bcb1ce s3: fix InvalidRequest copying to a locked bucket from a source with no MD5SUM
Before this change, we would upload files as single part uploads even
if the source MD5SUM was not available.

AWS won't let you upload a file to a locked bucket without some sort
of hash protection of the upload, which we don't have with no MD5SUM.

So we switch to multipart upload when the source does not have an
MD5SUM.

This means that if --s3-disable-checksum is set or we are copying from
a source with no MD5SUMs we will copy with multipart uploads.

This patch changes all uploads, not just those to locked buckets
because having no MD5SUM protection on uploads is undesirable.

Fixes #6846
2023-03-17 11:34:20 +00:00
Anthony Pessy
54a9488e59 s3: add GCS to provider list 2023-03-16 14:24:21 +00:00
Nick Craig-Wood
19e8c8d42a s3: make purge remove directory markers too
See: https://forum.rclone.org/t/cannot-purge-aws-s3/36169/
2023-03-03 15:51:00 +00:00
Nick Craig-Wood
de9c4a3611 s3: use bucket.Join instead of path.Join to preserve paths
Before this change, path.Join would remove the trailing / from objects
which had them. The simplified bucket.Join does not.
2023-03-03 15:51:00 +00:00
Nick Craig-Wood
59e7982040 s3: add --s3-sts-endpoint to specify STS endpoint
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
2023-03-02 09:56:09 +00:00
Nick Craig-Wood
c6b0587dc0 s3: fix AWS STS failing if --s3-endpoint is set
Before this change if an --s3-profile was set which used AWS STS (eg
to assume a role) and --s3-endpoint was set then rclone would use the
value from --s3-endpoint to contact the STS server which did not work.

This fix implements an endpoint resolver which only overrides the "s3"
service if --s3-endpoint is set. It sends the "sts" service (and any
other service) to the default resolver.

Fixes #6443
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
2023-03-01 16:24:40 +00:00
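The fix amounts to an endpoint resolver that intercepts only the
"s3" service and hands everything else, including "sts", to the
default resolution. A generic sketch of that dispatch with made-up
types, not the aws-sdk-go resolver API:

    package main

    import "fmt"

    // resolver maps a service name and region to an endpoint URL.
    type resolver func(service, region string) string

    // withS3Override uses customS3 for the "s3" service only and
    // delegates every other service (e.g. "sts") to next.
    func withS3Override(customS3 string, next resolver) resolver {
        return func(service, region string) string {
            if service == "s3" && customS3 != "" {
                return customS3
            }
            return next(service, region)
        }
    }

    func main() {
        defaultResolver := func(service, region string) string {
            return fmt.Sprintf("https://%s.%s.amazonaws.com", service, region)
        }
        r := withS3Override("https://minio.example.com", defaultResolver)
        fmt.Println(r("s3", "us-east-1"))  // https://minio.example.com
        fmt.Println(r("sts", "us-east-1")) // https://sts.us-east-1.amazonaws.com
    }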
Nick Craig-Wood
019a486d5b accounting: Make checkers show what they are doing
Before this change, all types of checkers showed "checking" after the
file name despite the fact that not all of them were checking.

After this change, they can show

- checking
- deleting
- hashing
- importing
- listing
- merging
- moving
- renaming

See: https://forum.rclone.org/t/what-is-rclone-checking-during-a-purge/35931/
2023-03-01 11:10:38 +00:00
Nick Craig-Wood
b3e0672535 s3: Check multipart upload ETag when --s3-no-head is in use
Before this change if --s3-no-head was in use rclone didn't check the
multipart upload ETag at all. However the ETag is returned in the
final POST request when completing the object.

This change uses that ETag from the final POST if --s3-no-head is in
use, otherwise it uses the ETag from a fresh HEAD request.

See: https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095/
2023-02-14 12:04:28 +00:00
Ole Frost
14e852ee9d
s3: fix incorrect tier support for StorJ and IDrive when pointing at a file
Fixes #6734
2023-02-02 18:12:00 +00:00
albertony
b9d9f9edb0 docs: use --interactive instead of -i in examples to avoid confusion 2023-01-24 20:43:51 +01:00
Kaloyan Raev
d049cbb59e s3/storj: update endpoints
Storj switched to a single global s3 endpoint backed by BGP routing.
We want to stop advertising the former regional endpoints and have the
global one as the only option.
2022-12-22 15:46:49 +00:00
Nick Craig-Wood
01877e5a0f s3: ignore versionIDs from uploads unless using --s3-versions or --s3-version-at
Before this change, when a new object was created, s3 returned its
versionID (on a versioned bucket) and rclone recorded it in the
object.

This means that when rclone came to delete the object it would delete
it with the versionID.

However it is common to forbid actions with versionIDs on buckets so
as to preserve the historical record and these operations would fail
whereas they succeeded in pre-v1.60.0 versions.

This patch fixes the problem by not recording versions of objects
supplied by the S3 API on upload unless `--s3-versions` or
`--s3-version-at` is used. This makes rclone behave as it did before
v1.60.0 when version support was introduced.

See: https://forum.rclone.org/t/s3-and-intermittent-403-errors-with-file-renames-and-drag-and-drop-operations-in-windows-explorer/34773
2022-12-17 10:24:56 +00:00
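Conceptually the fix just makes recording of the returned version ID
conditional on the version flags. A minimal illustration with
hypothetical field and option names:

    package main

    import "fmt"

    type options struct {
        Versions  bool   // --s3-versions
        VersionAt string // --s3-version-at
    }

    type object struct {
        Key       string
        VersionID string
    }

    // setVersionID records the version ID returned by an upload only
    // when version-aware access is in use, so later deletes and moves
    // do not reference an explicit version.
    func setVersionID(o *object, opt options, returned string) {
        if opt.Versions || opt.VersionAt != "" {
            o.VersionID = returned
        }
    }

    func main() {
        o := object{Key: "file.txt"}
        setVersionID(&o, options{}, "3sL4kqtJlcpXroDTDmJ")
        fmt.Printf("%+v\n", o) // VersionID stays empty without the flags
    }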
Jack
9a81885b51 s3: add DigitalOcean Spaces regions sfo3, fra1, syd1 2022-12-14 16:35:05 +00:00
Nick Craig-Wood
43bf177ff7 s3: fix excess memory usage when using versions
Before this change, we were taking the version ID straight from the
XML blob returned by the SDK and thus pinning the XML into memory
which bulked up the average memory per object from about 400 bytes to
4k.

Copying the string fixes the excess memory usage.
2022-12-14 14:24:26 +00:00
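The underlying issue is that a substring taken from a large decoded
XML document shares the document's backing array, so storing it keeps
the whole blob alive; forcing a copy releases it. A small sketch
(strings.Clone needs Go 1.18+):

    package main

    import (
        "fmt"
        "strings"
    )

    type object struct {
        versionID string
    }

    // newObject copies the version ID out of the (potentially huge)
    // listing response so the response can be garbage collected.
    func newObject(fromListing string) object {
        return object{versionID: strings.Clone(fromListing)}
        // pre-1.18 equivalent: string([]byte(fromListing))
    }

    func main() {
        hugeXML := strings.Repeat("x", 1<<20) + "VERSIONID"
        id := hugeXML[len(hugeXML)-9:] // still shares hugeXML's backing array
        o := newObject(id)             // copy: ~9 bytes retained, not ~1 MiB
        fmt.Println(len(o.versionID))
    }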
Nick Craig-Wood
c446651be8 Revert "s3: turn off list v2 support for Alibaba OSS since it does not work"
This reverts commit 4f386a1ccd.

It turns out that Alibaba OSS does support list v2 and the detection
code was wrong.

This means that users of the gov version of Alibaba will have to add
`list_version 1` to their config files.

See #6600
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
6c407dbe15 s3: fix detection of listing routines which don't support v2 properly
In this commit

ab849b3613 s3: fix listing loop when using v2 listing on v1 server

The ContinuationToken was tested for existence, but it is the
NextContinuationToken that we are interested in.

See: #6600
2022-12-14 14:24:26 +00:00
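The distinction matters because ContinuationToken merely echoes what
was sent, while NextContinuationToken is what signals a further page.
A paging sketch with a cut-down response type (not the SDK's) showing
both the loop and the v1-server detection:

    package main

    import (
        "errors"
        "fmt"
    )

    // page is a cut-down stand-in for a ListObjectsV2 response.
    type page struct {
        Keys                  []string
        IsTruncated           bool
        NextContinuationToken string // empty on servers that only speak v1
    }

    // listAll pages through results; it is NextContinuationToken (not
    // the echoed ContinuationToken) that says another page exists.
    func listAll(fetch func(token string) page) ([]string, error) {
        var all []string
        token := ""
        for {
            p := fetch(token)
            all = append(all, p.Keys...)
            if !p.IsTruncated {
                return all, nil
            }
            if p.NextContinuationToken == "" {
                return nil, errors.New("v2 listing unsupported: try list_version 1")
            }
            token = p.NextContinuationToken
        }
    }

    func main() {
        // A fake v1-only server: truncated but never returns a next token.
        out, err := listAll(func(string) page {
            return page{Keys: []string{"a"}, IsTruncated: true}
        })
        fmt.Println(out, err)
    }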
Nick Craig-Wood
450c366403 s3: fix nil pointer exception when using Versions
This was caused by

a9bd0c8de6 s3: reduce memory consumption for s3 objects

Which assumed that the StorageClass would always be set, but it isn't
set for Versions.
2022-12-09 12:23:51 +00:00
MohammadReza
0a8b1fe5de
s3: add Liara LOS to provider list 2022-12-06 12:25:23 +00:00
Nick Craig-Wood
4f386a1ccd s3: turn off list v2 support for Alibaba OSS since it does not work
See: #6600
2022-12-06 12:11:21 +00:00
Nick Craig-Wood
ab849b3613 s3: fix listing loop when using v2 listing on v1 server
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.

This change detects the problem and gives the user a helpful message.

Fixes #6600
2022-12-06 12:11:21 +00:00
Erik Agterdenbos
a9bd0c8de6
s3: reduce memory consumption for s3 objects
Copying the storageClass string instead of using a pointer to the original string.
This prevents the Go garbage collector from keeping large amounts of
XMLNode structs and references in memory, created by xmlutil.XMLToStruct()
from the aws-sdk-go.
2022-12-05 23:07:08 +00:00
techknowlogick
0e427216db s3: Add additional Wasabi locations 2022-11-11 14:39:12 +00:00
Aaron Gokaslan
b0248e8070
s3: fix for unchecked err value in s3 listv2 2022-11-10 11:52:59 +00:00
Nick Craig-Wood
691159fe94 s3: allow Storj to server side copy since it seems to work now - fixes #6550 2022-11-08 16:05:24 +00:00
Arnie97
6a5b7664f7 backend/http: support content-range response header 2022-11-08 13:04:17 +00:00
albertony
5d6b8141ec Replace deprecated ioutil
As of Go 1.16, the same functionality is now provided by package io or
package os, and those implementations should be preferred in new code.
2022-11-07 11:41:47 +00:00
Nick Craig-Wood
5b5fdc6bc5 s3: add provider quirk --s3-might-gzip to fix corrupted on transfer: sizes differ
Before this change, some files were giving this error when downloaded
from Cloudflare and other providers.

    ERROR corrupted on transfer: sizes differ NNN vs MMM

This is because these providers automatically gzip the object when
rclone isn't expecting them to. (AWS does not gzip objects unless they
were uploaded gzipped.)

This patch adds a quirk to fix the problem and a flag to control
it. The quirk `might_gzip` is set to `true` for all providers except
AWS.

See: https://forum.rclone.org/t/s3-error-corrupted-on-transfer-sizes-differ-nnn-vs-mmm/33694/
Fixes: #6533
2022-11-04 16:53:32 +00:00
Nick Craig-Wood
028832ce73 s3: if bucket or object ACL is empty string then don't add X-Amz-Acl: header - fixes #5730
Before this fix it was impossible to stop rclone generating an
X-Amz-Acl: header which is incompatible with GCS with uniform access
control and is generally deprecated at AWS.
2022-11-03 17:06:24 +00:00
Philip Harvey
c7c9356af5 s3: stop setting object and bucket ACL to "private" if it is an empty string #5730 2022-11-03 17:06:24 +00:00
Anthony Pessy
10c884552c s3: use different strategy to resolve s3 region
The API endpoint GetBucketLocation requires top-level permission.

If we do an authenticated HEAD request to a bucket, the bucket
location will be returned in the HTTP headers.

Fixes #5066
2022-11-02 11:48:08 +00:00
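A HEAD request against the bucket endpoint returns the region in the
x-amz-bucket-region response header, which is what makes this work
without GetBucketLocation permission. A bare-bones net/http sketch of
the idea (unauthenticated; a real client must sign the request for
private buckets):

    package main

    import (
        "fmt"
        "net/http"
    )

    // bucketRegion issues a HEAD request to the bucket and reads the
    // region from the x-amz-bucket-region header, which is present
    // even on 301/403 responses.
    func bucketRegion(bucket string) (string, error) {
        resp, err := http.Head("https://" + bucket + ".s3.amazonaws.com")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        region := resp.Header.Get("x-amz-bucket-region")
        if region == "" {
            return "", fmt.Errorf("no region header (status %s)", resp.Status)
        }
        return region, nil
    }

    func main() {
        fmt.Println(bucketRegion("example-bucket"))
    }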
Nick Craig-Wood
fce22c0065 s3: add --s3-no-system-metadata to suppress read and write of system metadata
See: https://forum.rclone.org/t/problems-with-content-disposition-and-backblaze-b2-using-s3/33292/
2022-10-14 11:12:04 +01:00
Bachue Zhou
66ed0ca726
s3: add Qiniu KODO to s3 provider list - fixes #6195 2022-10-13 15:49:22 +01:00
Nick Craig-Wood
90d23139f6 s3: drop binary metadata with an ERROR message
Before this change, rclone would attempt to upload metadata with
binary contents, which fails to be uploaded by net/http.

This change checks the keys and values for validity as HTTP header
values before uploading.

See: https://forum.rclone.org/t/invalid-metadata-key-names-result-in-a-failure-to-transfer-xattr-results-in-failure-to-upload-net-http-invalid-header-field-value-for-x-amz-meta-samba-pai/33406/
2022-10-13 12:00:45 +01:00
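The check boils down to validating every metadata key and value as a
legal HTTP header field before the upload and dropping (with an ERROR
log) anything that would be rejected. A self-contained sketch of that
filtering, not rclone's actual validation code:

    package main

    import (
        "fmt"
        "log"
    )

    // validHeaderValue reports whether v can be sent as an HTTP header
    // value: visible ASCII plus space and tab, no control bytes.
    func validHeaderValue(v string) bool {
        for i := 0; i < len(v); i++ {
            b := v[i]
            if b != '\t' && (b < 0x20 || b >= 0x7f) {
                return false
            }
        }
        return true
    }

    // filterMetadata drops entries net/http would refuse to send,
    // logging an ERROR for each, instead of failing the transfer.
    func filterMetadata(meta map[string]string) map[string]string {
        out := make(map[string]string, len(meta))
        for k, v := range meta {
            if !validHeaderValue(v) {
                log.Printf("ERROR: dropping metadata %q: invalid header value", k)
                continue
            }
            out[k] = v
        }
        return out
    }

    func main() {
        meta := map[string]string{
            "samba-pai": "\x02binary\x00data", // e.g. an xattr with binary contents
            "owner":     "alice",
        }
        fmt.Println(filterMetadata(meta))
    }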
Nick Craig-Wood
cf0bf159ab s3: try to keep the maximum precision in ModTime with --use-server-modtime
Before this change, if --use-server-modtime was in use, the ModTime
could change for an object as we receive it accurate to the nearest ms
in listings, but only accurate to the nearest second in HEAD and GET
requests.

Normally AWS returns the milliseconds as .000 in listings, but if
versions are in use it may not. Storj S3 also seems to return
milliseconds.

This patch tries to keep the maximum precision in the last modified
time, so it doesn't update a last modified time with a truncated
version if the times were the same to the nearest second.

See: https://forum.rclone.org/t/cache-fingerprint-miss-behavior-leading-to-false-positive-stalen-cache/33404/
2022-10-12 09:18:10 +01:00
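The precision-keeping rule can be read as: only accept a new server
time if it genuinely differs once both times are truncated to the
coarser precision; otherwise keep the already-stored, more precise
value. A small sketch of that comparison (hypothetical helper name):

    package main

    import (
        "fmt"
        "time"
    )

    // betterModTime keeps the stored modtime when the candidate is
    // just a second-truncated version of the same instant, so a
    // precise time from a listing is not overwritten by a coarse
    // time from a HEAD or GET request.
    func betterModTime(stored, candidate time.Time) time.Time {
        if stored.Truncate(time.Second).Equal(candidate.Truncate(time.Second)) &&
            stored.Nanosecond() != 0 && candidate.Nanosecond() == 0 {
            return stored
        }
        return candidate
    }

    func main() {
        precise := time.Date(2022, 10, 12, 9, 18, 10, 123e6, time.UTC) // listing
        coarse := precise.Truncate(time.Second)                        // HEAD
        fmt.Println(betterModTime(precise, coarse)) // keeps millisecond precision
    }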
Richard Bateman
4f374bc264
s3: add --s3-sse-customer-key-base64 to supply keys with binary data
Fixes #6400
2022-09-17 17:28:44 +01:00
Dmitry Deniskin
c080b39e47
s3: add support for IONOS Cloud Storage 2022-09-15 16:04:34 +01:00
Josh Soref
ce3b65e6dc all: fix spelling across the project
* abcdefghijklmnopqrstuvwxyz
* accounting
* additional
* allowed
* almost
* already
* appropriately
* arise
* bandwidth
* behave
* bidirectional
* brackets
* cached
* characters
* cloud
* committing
* concatenating
* configured
* constructs
* current
* cutoff
* deferred
* different
* directory
* disposition
* dropbox
* either way
* error
* excess
* experiments
* explicitly
* externally
* files
* github
* gzipped
* hierarchies
* huffman
* hyphen
* implicitly
* independent
* insensitive
* integrity
* libraries
* literally
* metadata
* mimics
* missing
* modification
* multipart
* multiple
* nightmare
* nonexistent
* number
* obscure
* ourselves
* overridden
* potatoes
* preexisting
* priority
* received
* remote
* replacement
* represents
* reproducibility
* response
* satisfies
* sensitive
* separately
* separator
* specifying
* string
* successful
* synchronization
* syncing
* šenfeld
* take
* temporarily
* testcontents
* that
* the
* themselves
* throttling
* timeout
* transaction
* transferred
* unnecessary
* using
* webbrowser
* which
* with
* workspace

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-08-30 11:16:26 +02:00
Nick Craig-Wood
0501773db1 azureblob,b2,s3: fix chunksize calculations producing too many parts
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size of the object, to calculate the chunk
sizes.

This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.

This fix makes the calculator take the size explicitly.
2022-08-09 12:57:38 +01:00
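With the size passed in explicitly, the calculator can simply grow
the chunk size until the object fits within the part limit (10,000
parts for S3). A simplified sketch of such a calculator, not the
actual chunksize lib:

    package main

    import "fmt"

    const (
        maxParts         = 10000
        defaultChunkSize = int64(5 * 1024 * 1024) // 5 MiB minimum part size
    )

    // calculateChunkSize doubles the chunk size until size fits within
    // maxParts parts. It depends only on the size being uploaded now,
    // not on any previous size of the object.
    func calculateChunkSize(size int64) int64 {
        chunk := defaultChunkSize
        for size > chunk*maxParts {
            chunk *= 2
        }
        return chunk
    }

    func main() {
        fmt.Println(calculateChunkSize(1 << 30))   // 1 GiB   -> 5 MiB chunks
        fmt.Println(calculateChunkSize(100 << 30)) // 100 GiB -> 20 MiB chunks
    }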
Nick Craig-Wood
ebe86c6cec s3: add --s3-decompress flag to download gzip-encoded files
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
Go runtime automatically decompressed the object on download.

If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.

If --s3-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.

See #2658
2022-08-05 16:45:23 +01:00
Nick Craig-Wood
4b981100db s3: refactor to use generated code instead of reflection to copy structs 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
4344a3e2ea s3: implement --s3-version-at flag - Fixes #1776 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
1542a979f9 s3: refactor f.list() to take an options struct as it had too many parameters 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
81d242473a s3: implement Purge to purge versions and backend cleanup-hidden 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
0ae171416f s3: implement --s3-versions flag - See #1776 2022-08-05 16:45:23 +01:00
Nick Craig-Wood
a59fa2977d s3: factor different listing versions into separate objects 2022-08-05 16:42:30 +01:00
Nick Craig-Wood
7243918069 s3: implement backend versioning command to get/set bucket versioning 2022-08-05 16:42:30 +01:00
Nick Craig-Wood
6fd9e3d717 build: reformat comments to pass go1.19 vet
See: https://go.dev/doc/go1.19#go-doc
2022-08-05 16:35:41 +01:00
Nick Craig-Wood
440d0cd179 s3: fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
In

22abd785eb s3: implement reading and writing of metadata #111

The reading of object information was refactored to use the
s3.HeadObjectOutput structure.

Unfortunately the code branch with `--s3-no-head` was not tested,
otherwise this panic would have been discovered.

This shows that this path is not integration tested, so this adds a
new integration test.

Fixes #6322
2022-07-18 23:38:50 +01:00
albertony
986bb17656 staticcheck: awserr.BatchError is deprecated: Replaced with BatchedErrors 2022-07-04 11:24:59 +02:00
Nick Craig-Wood
06182a3443 s3: actually compress the payload for content-type gzip test 2022-07-04 09:42:49 +01:00
Nick Craig-Wood
22abd785eb s3: implement reading and writing of metadata #111 2022-06-29 14:29:36 +01:00
Nick Craig-Wood
a692bd2cd4 s3: change metadata storage to normal map with lowercase keys 2022-06-29 14:29:36 +01:00
vyloy
326c43ab3f s3: add IDrive e2 to provider list 2022-06-28 09:12:36 +01:00
Nick Craig-Wood
f7c36ce0f9 s3: unwrap SDK errors to reveal underlying errors on upload
The SDK doesn't wrap errors in a Go standard way so they can't be
unwrapped and tested for - eg fatal error.

The code looks for a Serialization or RequestError and returns the
unwrapped underlying error if possible.

This fixes the fs/operations integration tests checking for fatal
errors being returned.
2022-06-17 16:52:30 +01:00
Nick Craig-Wood
c85fbebce6 s3: simplify PutObject code to use the Request.SetStreamingBody method
In this commit

e5974ac4b0 s3: use PutObject from the aws SDK to upload single part objects

rclone was made to upload objects to s3 using PUT requests rather than
using signed uploads.

However this change missed the fact that there is a supported way to
do this in the SDK using the SetStreamingBody method on the Request.

This therefore reverts a lot of the previous commit to do with making
an unsigned connection and other complications, and uses the SDK
facility.
2022-06-16 23:26:19 +01:00
Nick Craig-Wood
fa48b880c2 s3: retry RequestTimeout errors
See: https://forum.rclone.org/t/s3-failed-upload-large-files-bad-request-400/27695
2022-06-16 22:13:50 +01:00
Maciej Radzikowski
2e91287b2e
docs/s3: add note about chunk size decreasing progress accuracy 2022-06-16 22:29:36 +02:00
m00594701
02b4638a22 backend: add Huawei OBS to s3 provider list 2022-06-14 09:21:01 +01:00
albertony
ec117593f1 Fix lint issues reported by staticcheck
Used staticcheck 2022.1.2 (v0.3.2)

See: staticcheck.io
2022-06-13 21:13:50 +02:00
Alex JOST
a34276e9b3 s3: Add Warsaw location for Scaleway
Add new location in Warsaw (Poland) to endpoints for Scaleway.

More Information:
https://blog.scaleway.com/scaleway-is-now-in-warsaw/
https://www.scaleway.com/en/docs/storage/object/how-to/create-a-bucket/
2022-05-19 14:06:16 +01:00
Nick Craig-Wood
813a5e0931 s3: Remove bucket ACL configuration for Cloudflare R2
Bucket ACLs are not supported by Cloudflare R2. All buckets are
private and must be shared using a Cloudflare Worker.
2022-05-17 15:57:09 +01:00
Eng Zer Jun
4f0ddb60e7 refactor: replace strings.Replace with strings.ReplaceAll
strings.ReplaceAll(s, old, new) is a wrapper function for
strings.Replace(s, old, new, -1). But strings.ReplaceAll is more
readable and removes the hardcoded -1.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-05-17 11:08:37 +01:00
Derek Battams
fb4f7555c7 s3: use chunksize lib to determine chunksize dynamically 2022-05-13 09:25:48 +01:00
Vincent Murphy
319ac225e4
s3: backend restore command to skip non-GLACIER objects 2022-05-12 20:42:37 +01:00
Nick Craig-Wood
6f91198b57 s3: Support Cloudflare R2 - fixes #5642 2022-05-12 08:49:20 +01:00
Nick Craig-Wood
e5974ac4b0 s3: use PutObject from the aws SDK to upload single part objects
Before this change rclone used presigned requests to upload single
part objects. This was because of a limitation in the SDK which didn't
allow non-seekable io.Readers to be passed in.

This is incompatible with some S3 backends, and rclone wasn't adding
the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` header which was
incompatible with other S3 backends.

The SDK now allows for this so rclone can use PutObject directly.

This sets the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` flag on the PUT
request. However rclone will add a `Content-Md5` header if at all
possible so the body data is still protected.

Note that the old behaviour can still be configured if required with
the `use_presigned_request` config parameter.

Fixes #5422
2022-05-12 08:49:20 +01:00
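The Content-Md5 protection mentioned above is the standard
Content-MD5 header: the base64-encoded (not hex) MD5 digest of the
body, which the server verifies even when the payload itself is
unsigned. A standalone sketch of building that header value:

    package main

    import (
        "crypto/md5"
        "encoding/base64"
        "fmt"
    )

    // contentMD5 returns the value for a Content-MD5 header: the
    // base64-encoded MD5 digest of the request body.
    func contentMD5(body []byte) string {
        sum := md5.Sum(body)
        return base64.StdEncoding.EncodeToString(sum[:])
    }

    func main() {
        body := []byte("hello world")
        // S3 rejects a PUT carrying this header if the body is corrupted
        // in transit, even with X-Amz-Content-Sha256: UNSIGNED-PAYLOAD.
        fmt.Println("Content-Md5:", contentMD5(body))
    }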
ehsantdy
a446106041
s3: update Arvancloud default values and correct docs 2022-05-02 16:04:01 +01:00
ehsantdy
e34c543660
s3: Add ArvanCloud AOS to provider list 2022-04-28 10:42:30 +01:00
Nick Craig-Wood
603e51c43f s3: sync providers in config description with providers 2022-03-31 17:55:54 +01:00
GuoXingbin
c2bfda22ab
s3: Add ChinaMobile EOS to provider list
China Mobile Ecloud Elastic Object Storage (EOS) is a cloud object storage service, and is fully compatible with S3.

Fixes #6054
2022-03-24 11:57:00 +00:00
Nick Craig-Wood
189cba0fbe s3: add other regions for Lyve and correct Provider name 2022-03-14 15:43:35 +00:00
Nick Craig-Wood
6a6d254a9f s3: add support for Seagate Lyve Cloud storage 2022-03-09 11:30:55 +00:00
Nick Craig-Wood
537b62917f s3: add --s3-use-multipart-etag provider quirk #5993
Before this change the new multipart upload ETag checking code was
failing in the integration tests with Alibaba OSS.

Apparently Alibaba calculate the ETag in a different way to AWS.

This introduces a new provider quirk with a flag to disable the
checking of the ETag for multipart uploads.

Multipart ETag checking has been enabled for all providers that we can
test and that work, and left disabled for the others.
2022-03-01 16:36:39 +00:00
Nick Craig-Wood
8f164e4df5 s3: Use the ETag on multipart transfers to verify the transfer was OK
Before this rclone ignored the ETag on multipart uploads which missed
an opportunity for a whole file integrity check.

This adds that check which means that we now check even harder that
multipart uploads have arrived properly.

See #5993
2022-02-25 16:19:03 +00:00
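For AWS-style multipart uploads the ETag is the hex MD5 of the
concatenated binary MD5 digests of the parts, followed by "-" and the
part count, so the whole-file check can be computed client side while
uploading. A sketch of the expected value under that convention
(providers such as Alibaba OSS calculate it differently, hence the
--s3-use-multipart-etag quirk in the commit above):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
    )

    // multipartETag computes the expected ETag for an AWS multipart
    // upload: MD5 over the concatenated raw MD5 digests of each part,
    // suffixed with "-<number of parts>".
    func multipartETag(parts [][]byte) string {
        h := md5.New()
        for _, part := range parts {
            sum := md5.Sum(part)
            h.Write(sum[:])
        }
        return fmt.Sprintf("%s-%d", hex.EncodeToString(h.Sum(nil)), len(parts))
    }

    func main() {
        parts := [][]byte{[]byte("part one"), []byte("part two")}
        fmt.Println(multipartETag(parts)) // compare against the ETag S3 returns
    }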
Márton Elek
25ea04f1db s3: add specific provider for Storj Shared gateways
- unsupported features (Copy) are turned off for Storj
- enable urlEncodedListing for Storj provider
- set chunksize to 64Mb
2022-02-08 11:40:29 +00:00
Nick Craig-Wood
051685baa1 s3: fix multipart upload with --no-head flag - Fixes #5956
Before this change a multipart upload with the --no-head flag returned
the MD5SUM as a base64 string rather than a hex string, as the rest of
rclone was expecting.
2022-01-29 12:48:51 +00:00
Yunhai Luo
408d9f3e7a s3: Add GLACIER_IR storage class 2021-12-03 14:46:45 +00:00