Before this change, the logic which makes sure we create all
directories could get confused by directories which started with
slashes and get into an infinite loop, consuming 100% of the CPU.
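A minimal sketch of the failure mode, assuming a walk-up-the-parents
loop (the code below is illustrative, not rclone's actual
implementation):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// mkdirParents (illustrative) walks up the key making each parent
// directory. With a leading slash the loop never reaches "." because
// path.Dir("/a") == "/" and path.Dir("/") == "/", so it spins
// forever at 100% CPU.
func mkdirParents(key string) {
	for dir := path.Dir(key); dir != "."; dir = path.Dir(dir) {
		fmt.Println("mkdir", dir)
	}
}

func main() {
	// Stripping leading slashes first restores termination.
	mkdirParents(strings.TrimLeft("/a/b/c", "/"))
}
```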
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This meant we couldn't access objects with // in
them, which is valid for object keys (but not for file system paths).
This could have consequences for users who were relying on rclone to
fix improper paths for them.
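A short illustration of the difference (Go's path.Join cleans its
result, which is what the old behaviour amounted to):

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join runs path.Clean on its result, so a double slash in
	// the object key is collapsed and a different object is addressed:
	fmt.Println(path.Join("bucket", "dir//file")) // bucket/dir/file

	// Plain concatenation keeps the key exactly as given, which is
	// what object storage needs ("//" is legal in a key):
	fmt.Println("bucket" + "/" + "dir//file") // bucket/dir//file
}
```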
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different length of ID.
This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
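A hedged sketch of that flow; uploadMultipart, hasErrorCode and
clearUncommittedBlocks are hypothetical helpers standing in for the
real backend calls:

```go
// Sketch only: on InvalidBlobOrBlock, discard the uncommitted blocks
// left by the earlier attempt, then retry the upload once.
func uploadWithRetry(ctx context.Context, blob string, src io.Reader) error {
	err := uploadMultipart(ctx, blob, src)
	if err != nil && hasErrorCode(err, "InvalidBlobOrBlock") {
		// Recommitting the blob's existing block list (or an empty
		// one if the blob doesn't exist) makes Azure drop all
		// uncommitted blocks.
		if cerr := clearUncommittedBlocks(ctx, blob); cerr == nil {
			err = uploadMultipart(ctx, blob, src)
		}
	}
	return err
}
```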
This implements multipart server side copy to improve copying from one
Azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with `--azureblob-upload-concurrency 500`).
- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using the put block list API (sketched after this list)
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`
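A hedged sketch of how such a copy fits together; the helpers are
hypothetical and the loop is sequential for clarity (the real code
runs the block copies concurrently, which is what
`--azureblob-copy-concurrency` controls):

```go
// Sketch only: copy srcURL to the destination blob server side by
// staging each chunk of the source as a block ("put block from URL"),
// then committing the list ("put block list"). Azure moves the bytes;
// rclone only issues the API calls.
func multipartCopy(ctx context.Context, srcURL string, size, chunkSize int64) error {
	var blockIDs []string
	for offset := int64(0); offset < size; offset += chunkSize {
		n := min(chunkSize, size-offset)
		id := makeBlockID(len(blockIDs)) // hypothetical: fixed-length base64 ID
		if err := stageBlockFromURL(ctx, id, srcURL, offset, n); err != nil {
			return err
		}
		blockIDs = append(blockIDs, id)
	}
	return commitBlockList(ctx, blockIDs)
}
```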
Fixes #8249
This speeds up server side copies for small files, which need to check
the copy status, by using an exponential ramp up of the time between
checks of the copy status endpoint.
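A minimal sketch of the ramp up (the intervals are illustrative, not
the values rclone uses, and copyInProgress is a hypothetical helper):

```go
// Sketch only: poll quickly at first so small copies finish fast,
// then double the wait up to a cap so long-running copies don't
// hammer the status endpoint.
delay := 100 * time.Millisecond
const maxDelay = 10 * time.Second
for copyInProgress(ctx, blob) {
	time.Sleep(delay)
	delay *= 2
	if delay > maxDelay {
		delay = maxDelay
	}
}
```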
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:
BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.
This change removes the uncommitted blocks if a multipart upload is
aborted or fails.
If there was an existing destination file, it takes care not to
overwrite it by recommitting the already committed blocks.
This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.
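One way to satisfy that, sketched below (not necessarily rclone's
exact scheme): Azure requires block IDs to be base64 encoded and the
same length for every block in a blob, so a fixed-width raw ID
combining a random per-upload value with the block number keeps IDs
unique per block and per upload at a constant length.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// blockID builds a fixed-length base64 block ID from a random
// per-upload prefix and the block number, so IDs never collide
// between blocks of one upload or between different uploads.
func blockID(uploadID [8]byte, block uint64) string {
	var raw [16]byte
	copy(raw[:8], uploadID[:])
	binary.BigEndian.PutUint64(raw[8:], block)
	return base64.StdEncoding.EncodeToString(raw[:])
}

func main() {
	var uploadID [8]byte
	if _, err := rand.Read(uploadID[:]); err != nil {
		panic(err)
	}
	fmt.Println(blockID(uploadID, 0), blockID(uploadID, 1))
}
```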
Fixes #5583
Added the ability to include an item's metadata on uploads via the
Internet Archive backend using the `--internetarchive-metadata="key=value"`
flag. This is hidden from the configurator as it should only
really be used on the command line.
Before this change, metadata had to be manually added after uploads.
With this new feature, users can specify metadata directly during the
upload process.
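For example (the remote and metadata key are illustrative):
`rclone copy file.txt ia:item --internetarchive-metadata="title=My Title"`
would set the `title` metadata at upload time.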
This race would only happen when --dir-cache-time was very small.
It was noticed in the VFS tests when --dir-cache-time was 100ms, so it
is unlikely to affect normal users.
Before this change rclone would always use encoding-type url even if
the client hadn't asked for it. With that encoding the object keys in
the listing response are URL-encoded and the client is expected to
decode them, so clients which hadn't requested it were confused.
This fixes the problem by leaving the URL encoding to the gofakes3
library, which has also been fixed.
Fixes #7836