Before this change, attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error:

    corrupted on transfer: sizes differ src 0 vs dst 999
This was caused by the AWS SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.

This fixes the problem by disabling the SDK middleware that does the
overriding.
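A minimal sketch of disabling an SDK middleware, assuming the AWS SDK for Go v2 and smithy-go; the middleware ID and step below are placeholders for illustration, not necessarily the exact ones rclone removes:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/smithy-go/middleware"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
			// Remove the middleware that rewrites our Accept-Encoding
			// header. "AcceptEncodingGzip" is an assumed ID used here
			// purely for illustration.
			_, err := stack.Serialize.Remove("AcceptEncodingGzip")
			return err
		})
	})
	_ = client
}
```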
Before this change, when writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link, not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and which was incorrectly setting the
target of the symlink for btime.
See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
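A minimal sketch of the link-safe pattern, assuming golang.org/x/sys/unix; `setLinkMetadata` is an illustrative helper, not rclone's actual local backend code:

```go
package main

import (
	"errors"
	"log"

	"golang.org/x/sys/unix"
)

// setLinkMetadata applies ownership and permissions to the link itself,
// never to its target.
func setLinkMetadata(path string, uid, gid int, mode uint32) error {
	// Lchown changes the owner of the symlink, not the file it points to.
	if err := unix.Lchown(path, uid, gid); err != nil {
		return err
	}
	// Fchmodat with AT_SYMLINK_NOFOLLOW refuses to follow the link. On
	// Linux this returns EOPNOTSUPP because symlink permissions cannot
	// be set, which is where the debug message mentioned above comes in.
	err := unix.Fchmodat(unix.AT_FDCWD, path, mode, unix.AT_SYMLINK_NOFOLLOW)
	if errors.Is(err, unix.EOPNOTSUPP) {
		log.Printf("DEBUG: skipping mode on symlink %q: %v", path, err)
		return nil
	}
	return err
}

func main() {
	if err := setLinkMetadata("/tmp/example-link", 1000, 1000, 0o644); err != nil {
		log.Fatal(err)
	}
}
```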
We changed the precision of the onedrive personal backend in
c053429b9c from 1 ms to 1 s.

However, the tests were not updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision`, which compares times using
the backend's precision, so they hopefully won't break again.
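A minimal sketch of the idea behind precision-aware comparison; this helper is illustrative, not the actual `fstest` implementation:

```go
package main

import (
	"fmt"
	"time"
)

// timeEqualWithPrecision reports whether want and got differ by no more
// than the backend's advertised precision.
func timeEqualWithPrecision(want, got time.Time, precision time.Duration) bool {
	dt := want.Sub(got)
	if dt < 0 {
		dt = -dt
	}
	return dt <= precision
}

func main() {
	want := time.Date(2011, 12, 25, 12, 59, 59, 123456789, time.UTC)
	got := want.Truncate(time.Second) // what a 1 s precision backend returns
	fmt.Println(timeEqualWithPrecision(want, got, time.Second))      // true
	fmt.Println(timeEqualWithPrecision(want, got, time.Millisecond)) // false
}
```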
Before this change, if rclone was used as a library and logrus was used
after a call to the rc `sync/bisync`, logging no longer worked and
instead wrote to a closed pipe.

This change restores the output correctly.

Fixes #8158
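A minimal sketch of the save-and-restore pattern that avoids leaving logrus pointed at a closed pipe; the capture helper is illustrative, not bisync's actual code:

```go
package main

import (
	"io"
	"os"

	"github.com/sirupsen/logrus"
)

// captureLogs redirects logrus output to w for the duration of fn, then
// restores the previous writer so later logging still works.
func captureLogs(w io.Writer, fn func()) {
	logger := logrus.StandardLogger()
	old := logger.Out
	logger.SetOutput(w)
	defer logger.SetOutput(old) // restore instead of leaving a dead writer
	fn()
}

func main() {
	pr, pw := io.Pipe()
	go func() { _, _ = io.Copy(os.Stderr, pr) }()
	captureLogs(pw, func() { logrus.Info("captured") })
	_ = pw.Close()
	logrus.Info("still works after capture") // would hit a closed pipe without the restore
}
```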
Before this change, upgrading the pkg/sftp library to v1.13.7 caused a
deadlock in the tests.

This was caused by additional locking in the sftp package exposing a
poor locking choice in the rclone code.
See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
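For illustration only, this is the general shape of such a deadlock: a method takes a non-reentrant lock and then calls another method that takes the same lock. This sketch deliberately deadlocks when run; it is not the actual rclone or pkg/sftp code:

```go
package main

import "sync"

type conn struct {
	mu sync.Mutex
}

func (c *conn) close() {
	c.mu.Lock()
	defer c.mu.Unlock()
	// ... tear down the connection ...
}

func (c *conn) shutdown() {
	c.mu.Lock()
	defer c.mu.Unlock()
	// Calling a method that takes the same lock while we already hold it
	// blocks forever as soon as the callee actually starts locking.
	c.close()
}

func main() {
	c := &conn{}
	c.shutdown() // deadlocks: Go's sync.Mutex is not reentrant
}
```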
The Mailru backend integration tests have been failing due to new rate
limits on the backend.

This patch:

- Removes Mailru from the chunker tests
- Adds the flag so that only one Mailru test runs at a time
Currently rclone allows us to specify the path to a public SSH
certificate file.

That works well for cases where we can specify a key path, such as
local environments.

If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone),
there is currently a limitation in that users can specify only the
rclone config file.

With this change, users can pass the public certificate in the same
fashion as they can with `key_file`.
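As a hedged sketch, a remote could then be described by the config file alone; the inline `pubkey` option name is an assumption based on this description, and `key_pem` is the existing inline private key option:

```ini
[volsync-sftp]
type = sftp
host = sftp.example.com
user = backup
# Existing inline option for the private key (newlines escaped as \n).
key_pem = -----BEGIN OPENSSH PRIVATE KEY-----\n...\n-----END OPENSSH PRIVATE KEY-----
# Assumed new option: the signed public certificate passed inline
# instead of via a file path.
pubkey = ssh-ed25519-cert-v01@openssh.com AAAA... backup@example.com
```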
Disabling the authentication for unix sockets made it impossible to
use `rclone serve` behind a proxy that communicates with rclone via a
unix socket.

Re-enabling the authentication should not have any effect on most
users of unix sockets, as they do not set up authentication with a
unix socket anyway.
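For example, with authentication honoured again, a proxy deployment can keep using credentials over the socket (the remote name, socket path, and credentials here are illustrative):

```sh
# Serve over a unix socket; the proxy talks HTTP over this socket and
# authenticates with the supplied credentials.
rclone serve webdav remote: --addr unix:///run/rclone/rclone.sock \
    --user alice --pass secret
```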
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use multi-part
copies for files in Storj, including files larger than 5GB which
(according to AWS documentation) must be copied with multi-part copy.
This works fine for Storj.
See https://github.com/storj/roadmap/issues/40
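A minimal sketch of the provider-quirk approach, using an illustrative Options struct and constant rather than the actual rclone backend code:

```go
package main

import "fmt"

// Illustrative limit: AWS documents 5 GiB as the maximum size for a
// single-request copy.
const maxSinglePartCopySize = 5 * 1024 * 1024 * 1024

type Options struct {
	Provider   string
	CopyCutoff int64 // objects below this size are copied in one request
}

// applyQuirks raises the copy cutoff to the maximum for providers that
// do not implement UploadPartCopy, so multi-part server side copy is
// never attempted.
func applyQuirks(opt *Options) {
	if opt.Provider == "Storj" {
		opt.CopyCutoff = maxSinglePartCopySize
	}
}

func main() {
	opt := Options{Provider: "Storj", CopyCutoff: 4 * 1024 * 1024}
	applyQuirks(&opt)
	fmt.Println(opt.CopyCutoff) // 5368709120
}
```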
Before this change, testing any backend which implemented
OpenChunkWriter gave this error:

    ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload

This was caused by the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
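A minimal sketch of the shape of the fix: consume the chunk size option locally and only forward genuine headers. The types mirror rclone's `fs.OpenOption`/`fs.ChunkOption` but are simplified for illustration:

```go
package main

import "fmt"

// OpenOption mirrors the relevant part of rclone's fs.OpenOption interface.
type OpenOption interface {
	Header() (key string, value string)
}

// ChunkOption carries an upload chunk size; it is advice for the backend,
// not an HTTP header.
type ChunkOption struct {
	ChunkSize int64
}

func (o ChunkOption) Header() (string, string) {
	return "chunkSize", fmt.Sprint(o.ChunkSize)
}

// HTTPOption is a plain header such as Range or Cache-Control.
type HTTPOption struct{ Key, Value string }

func (o HTTPOption) Header() (string, string) { return o.Key, o.Value }

// splitOptions consumes ChunkOption locally and only forwards genuine
// headers, instead of rendering "chunkSize" into the HTTP request.
func splitOptions(opts []OpenOption) (chunkSize int64, headers map[string]string) {
	headers = make(map[string]string)
	for _, opt := range opts {
		if co, ok := opt.(ChunkOption); ok {
			chunkSize = co.ChunkSize
			continue
		}
		if k, v := opt.Header(); k != "" {
			headers[k] = v
		}
	}
	return chunkSize, headers
}

func main() {
	chunkSize, headers := splitOptions([]OpenOption{
		ChunkOption{ChunkSize: 8 << 20},
		HTTPOption{Key: "Cache-Control", Value: "no-cache"},
	})
	fmt.Println(chunkSize, headers) // 8388608 map[Cache-Control:no-cache]
}
```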