Before this fix, the chunksize calculator was using the previous size
of the object, not the new size, to calculate the chunk sizes.
This meant that uploading a replacement object which needed a new
chunk size would fail because it used too many parts.
This fix changes the calculator to take the size explicitly.
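A minimal sketch of the idea in Go, with illustrative names
(chunksizeCalculator, defaultChunkSize and maxParts are not rclone's
real identifiers): the calculator takes the new size as an explicit
parameter instead of reading a possibly stale size from the object.

    package main

    import "fmt"

    const (
        defaultChunkSize = int64(5 * 1024 * 1024) // 5 MiB starting point
        maxParts         = 10000                  // e.g. S3's part number limit
    )

    // chunksizeCalculator returns a chunk size big enough that an object
    // of the given size fits in at most maxParts parts.
    func chunksizeCalculator(size int64) int64 {
        chunkSize := defaultChunkSize
        // Double the chunk size until the part count is under the limit.
        for size/chunkSize >= maxParts {
            chunkSize *= 2
        }
        return chunkSize
    }

    func main() {
        // Passing the new size explicitly means a replacement object is
        // chunked for its own size, not for the previous object's size.
        fmt.Println(chunksizeCalculator(100 * 1024 * 1024 * 1024)) // 100 GiB
    }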
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.
This patch makes rclone detect the "not supported" errors and disable
xattrs from then on. It prints one ERROR level message about this.
See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
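A minimal sketch of the pattern, assuming the github.com/pkg/xattr
library; handleXattrErr and xattrSupported are illustrative names, not
the local backend's real code.

    package main

    import (
        "errors"
        "log"
        "sync"
        "syscall"

        "github.com/pkg/xattr"
    )

    var (
        xattrSupported = true // flipped off on the first "not supported" error
        warnOnce       sync.Once
    )

    // handleXattrErr converts "not supported" errors into a one-time
    // ERROR level log plus a disabled flag, passing other errors through.
    func handleXattrErr(err error) error {
        var xerr *xattr.Error
        if errors.As(err, &xerr) && errors.Is(xerr.Err, syscall.ENOTSUP) {
            warnOnce.Do(func() {
                log.Printf("ERROR: xattrs not supported - disabling: %v", err)
            })
            xattrSupported = false
            return nil
        }
        return err
    }

    func main() {
        _, err := xattr.LGet("/", "user.rclone-test")
        if err = handleXattrErr(err); err != nil {
            log.Printf("other xattr error: %v", err)
        }
        log.Printf("xattrSupported=%v", xattrSupported)
    }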
An update to github.com/anacrolix/dms changed upnp.ServiceURN to include a
namespace identifier. This identifier was previously hardcoded, but is
now parsed out of the URN. The old SOAP action header parsing logic was
duplicated in rclone and did not handle this field, so the resulting
responses included a URN with an empty namespace identifier, breaking clients.
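A minimal sketch of parsing such a header while keeping the namespace
identifier instead of hardcoding it; parseSOAPAction and the soapAction
struct are illustrative, not rclone's or dms's actual code.

    package main

    import (
        "fmt"
        "strings"
    )

    type soapAction struct {
        Namespace, Type, Version, Action string
    }

    // parseSOAPAction parses a SOAPACTION header of the form
    // "urn:<namespace>:service:<type>:<version>#<action>".
    func parseSOAPAction(header string) (soapAction, error) {
        var a soapAction
        urn, action, ok := strings.Cut(strings.Trim(header, `"`), "#")
        if !ok {
            return a, fmt.Errorf("no action in %q", header)
        }
        parts := strings.Split(urn, ":")
        if len(parts) != 5 || parts[0] != "urn" || parts[2] != "service" {
            return a, fmt.Errorf("malformed service URN %q", urn)
        }
        a.Namespace, a.Type, a.Version, a.Action = parts[1], parts[3], parts[4], action
        return a, nil
    }

    func main() {
        a, err := parseSOAPAction(`"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`)
        fmt.Printf("%+v %v\n", a, err)
    }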
Before this change, rclone serve sftp failed to signal to the remote
that md5sum/sha1sum weren't supported, as intended by
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
This showed up with a new rclone client, where the md5sum/sha1sum
detection was reworked to just run a plain `md5sum`/`sha1sum` command in
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254
We unconditionally returned good hashes even if the remote being served
didn't support the hash type in question.
This fix checks that the hash type is supported and, if not, returns an error
MD5 hash not supported
When the backend is first contacted this will cause the sftp backend
to detect that the hash type isn't available.
Unfortunately the wrong state may have been cached, so editing or
remaking the config may be necessary to fix it.
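A minimal sketch of the check using rclone's fs/hash package; the serve
sftp plumbing around it is omitted and the exact error text may differ.

    package main

    import (
        "fmt"

        "github.com/rclone/rclone/fs/hash"
    )

    // checkHashSupported returns an error like "md5 hash not supported"
    // if the remote being served can't produce the requested hash type.
    func checkHashSupported(supported hash.Set, ht hash.Type) error {
        if !supported.Contains(ht) {
            return fmt.Errorf("%v hash not supported", ht)
        }
        return nil
    }

    func main() {
        // Suppose the remote being served only supports SHA1.
        supported := hash.NewHashSet(hash.SHA1)
        fmt.Println(checkHashSupported(supported, hash.MD5))
    }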
Before this fix, the parsing code gave an error like this:
parsing "2022-08-02 07:00:00" as fs.Time failed: expected newline
This was due to the Scan call failing to read all the data: it stopped
at the space inside the timestamp.
This patch fixes that and redoes the tests.
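The pitfall can be reproduced with plain fmt scanning: a Scanln-style
read stops at the space inside the timestamp and then complains that no
newline follows. Parsing the whole string (the layout shown is
illustrative) reads all the data.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var s string
        // Sscanln reads only "2022-08-02" and then fails because the
        // rest of the input (" 07:00:00") is not a newline.
        _, err := fmt.Sscanln("2022-08-02 07:00:00", &s)
        fmt.Printf("%q %v\n", s, err) // "2022-08-02" expected newline

        // Parsing the whole string consumes all the data.
        t, err := time.Parse("2006-01-02 15:04:05", "2022-08-02 07:00:00")
        fmt.Println(t, err)
    }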
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur, since the
Go runtime automatically decompresses the object on download.
If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --s3-decompress is not set, compressed files will be downloaded
as-is, providing compressed objects with intact size and hash
information.
See #2658
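A minimal sketch of the idea with an illustrative object type (not the
s3 backend's real one): when decompression will happen, the stored
length and hash are erased, since they describe the compressed bytes
rather than what will be read.

    package main

    import "fmt"

    type object struct {
        contentEncoding string
        size            int64  // -1 means unknown
        md5             string // "" means unknown
    }

    // clearInvalidMetadata drops the size and hash when the object will
    // be decompressed on download, as they no longer match the data read.
    func clearInvalidMetadata(o *object, decompress bool) {
        if decompress && o.contentEncoding == "gzip" {
            o.size = -1
            o.md5 = ""
        }
    }

    func main() {
        o := object{contentEncoding: "gzip", size: 1234, md5: "0123abcd"}
        clearInvalidMetadata(&o, true) // as if --s3-decompress was set
        fmt.Printf("%+v\n", o)
    }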
Before this fix, the dropbox backend wasn't decoding the file names
received in changenotify events into rclone's standard format.
This meant that changenotify events for filenames containing encoded
characters failed to be decrypted properly when wrapped in crypt.
See: https://forum.rclone.org/t/rclone-vfs-cache-says-file-name-too-long/31535
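A minimal sketch using rclone's lib/encoder; the encoding flags chosen
here are illustrative, not the dropbox backend's exact settings.

    package main

    import (
        "fmt"

        "github.com/rclone/rclone/lib/encoder"
    )

    func main() {
        enc := encoder.EncodeCtl | encoder.EncodeDel
        // A name in rclone's standard format, encoded to the form the
        // backend stores it in...
        native := enc.FromStandardName("bad\x01name")
        // ...changenotify events must apply the reverse mapping so the
        // path matches what the rest of rclone (e.g. crypt) expects.
        fmt.Printf("native=%q standard=%q\n", native, enc.ToStandardName(native))
    }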
Before this change, the VFS cache cleaner would loop indefinitely while
the cache was above quota, using up all the CPU.
This fix prevents the cache cleaner from looping. It will be kicked on
ENOSPC and otherwise run at its scheduled time, so this should be
sufficient.
See: https://forum.rclone.org/t/vfs-keeps-checking-same-files/32120
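A minimal sketch of the pattern in plain Go: the cleaner runs on its
schedule and can be kicked once over a channel, instead of spinning
while the cache is over quota. The names are illustrative, not the vfs
cache's real API.

    package main

    import (
        "fmt"
        "time"
    )

    // cleaner cleans on a timer, or when kicked (e.g. on ENOSPC), but
    // never loops back-to-back just because the cache is over quota.
    func cleaner(interval time.Duration, kick, quit <-chan struct{}) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                fmt.Println("scheduled clean")
            case <-kick:
                fmt.Println("clean kicked on ENOSPC")
            case <-quit:
                return
            }
        }
    }

    func main() {
        kick := make(chan struct{}, 1)
        quit := make(chan struct{})
        go cleaner(100*time.Millisecond, kick, quit)
        kick <- struct{}{} // e.g. a cache write just failed with ENOSPC
        time.Sleep(250 * time.Millisecond)
        close(quit)
    }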
Before this patch, backends could be shut down when they fell out of the
cache while they were still in use by combine. This was particularly
noticeable with the dropbox backend, which gave this error when
uploading files after the backend was Shutdown:
Failed to copy: upload failed: batcher is shutting down
This patch gets the combine remote to pin its upstream backends until it
is finished with them.
See: https://forum.rclone.org/t/rclone-combine-upload-failed-batcher-is-shutting-down/32168
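A minimal sketch using rclone's fs/cache Pin/Unpin helpers; the
getPinned wrapper and the use of the memory backend are illustrative,
not combine's real code.

    package main

    import (
        "context"
        "fmt"

        _ "github.com/rclone/rclone/backend/memory"
        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/cache"
    )

    // getPinned fetches a remote from the Fs cache and pins it so the
    // cache won't expire it (and Shutdown the backend) while in use.
    func getPinned(ctx context.Context, remote string) (fs.Fs, func(), error) {
        f, err := cache.Get(ctx, remote)
        if err != nil {
            return nil, nil, err
        }
        cache.Pin(f) // keep the backend alive until Unpin is called
        return f, func() { cache.Unpin(f) }, nil
    }

    func main() {
        f, unpin, err := getPinned(context.Background(), ":memory:")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer unpin() // release only when finished with the backend
        fmt.Println(f.Name())
    }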
Before this change --compare-dest and --copy-dest would check to see
if the compare/copy object existed first, before seeing if the
destination object was present.
This is inefficient because in most --copy-dest syncs the destination
will be present, so the compare/copy object never needs to be tested.
--compare-dest syncs may also be sped up if they are done to the
same directory repeatedly.
This fixes the problem by re-arranging the logic so that if the transfer
is not needed the compare/copy object is never tested.
See: https://forum.rclone.org/t/union-with-copy-dest-enabled-is-slower-than-expected/32172
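A minimal sketch of the reordering with stubbed-out checks (needTransfer
and compareDestHas are illustrative): the destination is examined first,
and the compare/copy object only if a transfer would otherwise happen.

    package main

    import "fmt"

    type object struct{ name string }

    // needTransfer reports whether src differs from dst (simplified).
    func needTransfer(src, dst *object) bool { return dst == nil }

    // compareDestHas reports whether the compare/copy directory has a
    // match (a stub standing in for a potentially slow remote lookup).
    func compareDestHas(src *object) bool { return false }

    func shouldCopy(src, dst *object) bool {
        // Check the destination first: in most syncs it is present and
        // the compare/copy object never needs to be tested.
        if !needTransfer(src, dst) {
            return false
        }
        // Only now test the compare/copy object.
        return !compareDestHas(src)
    }

    func main() {
        fmt.Println(shouldCopy(&object{"a"}, &object{"a"})) // false, no lookup
        fmt.Println(shouldCopy(&object{"a"}, nil))          // true, after lookup
    }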
Before this change, the android build started failing with:
gomobile: ANDROID_NDK_HOME specifies /usr/local/lib/android/sdk/ndk/25.0.8775105
which is unusable: unsupported API version 16 (not in 19..33)
This was caused by a change to GitHub Actions, but is ultimately due
to an issue in gomobile with the newest version of the SDK.
This change fixes the problem by declaring a minimum API version of 21,
using the version 21 compilers to build everything, and using the
default NDK in GitHub Actions.
See: https://github.com/actions/virtual-environments/issues/5930
See: https://github.com/lightningnetwork/lnd/issues/6651
Previously, with standard auth, the username would be stored in config, but only after
entering the non-standard device/mountpoint sequence during config (a feature introduced
with #5926). Regardless of that, rclone always requests the username from the API at
startup (NewFs).
In #6270 (commit 9dbed02329) this was changed to always
store username in config (consistency), and then also use it to avoid the repeated
customer info request in NewFs (performance). But, as reported in #6309, it did not work
with legacy auth, where user enters username manually, if user entered an email address
instead of the internal username required for api requests. This change was therefore
recently reverted.
The current commit takes another step back and does not store the username in config
during the non-standard device/mountpoint config sequence (consistency). The username
will now only be stored in config when using legacy auth, where it is an input parameter.
Extend the shouldRetry function by also checking for the quotaExceeded
reason, and since this function appeared to be untested, add a test case
for the existing errors and this new one.
Fixes #615
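A minimal sketch of the extension, assuming the Google API client's
error shape; this shouldRetry is illustrative, not the backend's full
retry logic.

    package main

    import (
        "fmt"

        "google.golang.org/api/googleapi"
    )

    // shouldRetry reports whether the error reason makes a retry
    // worthwhile, now including quotaExceeded.
    func shouldRetry(err error) bool {
        if gerr, ok := err.(*googleapi.Error); ok && len(gerr.Errors) > 0 {
            switch gerr.Errors[0].Reason {
            case "rateLimitExceeded", "userRateLimitExceeded", "quotaExceeded":
                return true
            }
        }
        return false
    }

    func main() {
        err := &googleapi.Error{
            Code:   403,
            Errors: []googleapi.ErrorItem{{Reason: "quotaExceeded"}},
        }
        fmt.Println(shouldRetry(err)) // true
    }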