Each part of a multipart upload takes 96M of memory, so we make sure
that we don't use more than `--transfers` * 96M of memory buffering
the multipart uploads.
This has the consequence that some uploads may appear to be at 0% for
a while; however, they will get going eventually, so this won't
re-introduce #731.
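A minimal sketch of one way such a bound can be enforced, using a buffered channel as a counting semaphore (the helper names are hypothetical; only the 96M figure and `--transfers` come from the text above). A part that is still blocked acquiring a token is also exactly the kind of upload that sits at 0% for a while:

```go
package main

import "fmt"

// partSize is the 96M per-part buffer mentioned above.
const partSize = 96 << 20

func uploadParts(parts, transfers int) {
	sem := make(chan struct{}, transfers) // counting semaphore: `transfers` tokens
	done := make(chan int)
	for p := 0; p < parts; p++ {
		go func(p int) {
			sem <- struct{}{}             // acquire: blocks while `transfers` buffers are live
			buf := make([]byte, partSize) // buffer allocated only after acquiring a token
			_ = buf                       // ... upload the part from buf here ...
			<-sem                         // release: the buffer becomes collectable
			done <- p
		}(p)
	}
	for i := 0; i < parts; i++ {
		fmt.Printf("part %d done\n", <-done)
	}
}

func main() {
	uploadParts(8, 4) // never more than 4 * 96M buffered at once
}
```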
Optional interfaces are becoming more important in rclone;
--track-renames and --backup-dir both rely on them.
Up to this point rclone has used interface upgrades to define optional
behaviour on Fs objects. However, when one Fs object wraps another, it
is very difficult for this scheme to work accurately. rclone has
relied on specific error messages being returned when the interface
isn't supported - this is unsatisfactory because it means you have to
call the interface just to find out whether it is supported.
This change enables accurate detection of optional interfaces by use
of a Features struct as returned by an obligatory Fs.Features()
method. The Features struct contains flags and function pointers
which can be tested against nil to see whether they can be used.
As a result crypt and hubic can accurately reflect the capabilities of
the underlying Fs they are wrapping.
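A minimal sketch of the shape of this scheme (the field and type names here are illustrative, not rclone's actual definitions):

```go
package main

import "fmt"

// Features holds optional capabilities as flags and function pointers;
// a nil function pointer means the operation is unsupported.
type Features struct {
	CaseInsensitive bool
	Purge           func(dir string) error      // optional: delete a directory and all contents
	Copy            func(src, dst string) error // optional: server-side copy
}

// Fs must always implement Features() - it is the obligatory method.
type Fs interface {
	Features() *Features
}

// baseFs supports Purge but not Copy.
type baseFs struct{}

func (f *baseFs) Features() *Features {
	return &Features{
		Purge: func(dir string) error { fmt.Println("purging", dir); return nil },
	}
}

// wrapFs (think crypt or hubic) passes through only what its underlying
// Fs really supports, so capabilities stay accurate through wrapping.
type wrapFs struct{ wrapped Fs }

func (f *wrapFs) Features() *Features {
	u := f.wrapped.Features()
	return &Features{
		CaseInsensitive: u.CaseInsensitive,
		Purge:           u.Purge, // nil stays nil: unsupported below means unsupported here
		// Copy left nil: this wrapper cannot offer server-side copy
	}
}

func main() {
	var f Fs = &wrapFs{wrapped: &baseFs{}}
	// Test against nil instead of calling and parsing error messages.
	if purge := f.Features().Purge; purge != nil {
		_ = purge("some/dir")
	}
	if f.Features().Copy == nil {
		fmt.Println("server-side copy not supported")
	}
}
```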
These are set in the form RCLONE_CONFIG_remote_option where remote is
the uppercased remote name and option is the uppercased config file
option name. Note that RCLONE_CONFIG_remote_TYPE must be set if
defining a new remote.
Fixes #616
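A sketch of how such a name could be constructed in Go (the helper is hypothetical; only the RCLONE_CONFIG_remote_option naming comes from the text above):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// configEnvVar builds the environment variable name for a remote's
// config option, e.g. ("mys3", "type") -> "RCLONE_CONFIG_MYS3_TYPE".
func configEnvVar(remote, option string) string {
	return "RCLONE_CONFIG_" + strings.ToUpper(remote) + "_" + strings.ToUpper(option)
}

func main() {
	// Equivalent to: export RCLONE_CONFIG_MYS3_TYPE=s3
	os.Setenv(configEnvVar("mys3", "type"), "s3")
	fmt.Println(os.Getenv("RCLONE_CONFIG_MYS3_TYPE")) // "s3"
}
```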
This makes --max-depth 1 directory listings much more efficient (it no
longer lists all the files) and simplifies the code, bringing it into
line with s3/swift/gcs.
Fixes #944
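A minimal sketch of why depth-1 listings over a flat key space (as on s3/swift/gcs) are cheap: only the distinct top-level names are needed, not every file. The helper and sample keys are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// topLevel returns the distinct first path components of a flat list of
// keys, mimicking a delimiter ("/") listing: top-level files are
// returned as-is, deeper keys collapse into their top-level directory.
func topLevel(keys []string) []string {
	seen := map[string]bool{}
	var out []string
	for _, k := range keys {
		name, _, found := strings.Cut(k, "/")
		if found {
			name += "/" // a common prefix, i.e. a directory
		}
		if !seen[name] {
			seen[name] = true
			out = append(out, name)
		}
	}
	return out
}

func main() {
	keys := []string{"a/1.txt", "a/2.txt", "b/c/3.txt", "top.txt"}
	fmt.Println(topLevel(keys)) // [a/ b/ top.txt]
}
```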
Originally it was thought that the upload URL expiring would produce
401 errors, so uploads were excluded from reauth; but on re-reading
the docs and looking at this issue, it seems that 401 errors are only
caused by the account token expiring, not the upload token expiring.
We will refresh both the upload token and the account token on a 401
error while uploading, and just the account token when we get a 401 at
any other time.
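A sketch of that retry policy, with hypothetical helpers standing in for the real b2 token refresh calls:

```go
package main

import (
	"fmt"
	"net/http"
)

// Hypothetical stand-ins for the real token refresh calls.
func refreshAccountToken() { fmt.Println("refreshing account token") }
func refreshUploadToken()  { fmt.Println("refreshing upload token") }

// handle401 applies the policy described above: on a 401 while
// uploading, refresh both the upload token and the account token;
// on a 401 anywhere else, refresh just the account token.
func handle401(status int, uploading bool) {
	if status != http.StatusUnauthorized {
		return
	}
	refreshAccountToken()
	if uploading {
		refreshUploadToken()
	}
}

func main() {
	handle401(401, true)  // upload path: refresh both tokens
	handle401(401, false) // any other call: account token only
}
```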
Large files were failing to download with a SHA1 mismatch error.
Correct this by making sure we use the SHA1 read from the file info
rather than from the header.
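A sketch of the shape of the fix, with illustrative names rather than b2's actual fields: prefer the SHA1 from the file info, and fall back to the header only if the info has none:

```go
package main

import "fmt"

// fileSHA1 picks the checksum to verify a download against: the SHA1
// from the file info wins; the response header is only a fallback.
func fileSHA1(infoSHA1, headerSHA1 string) string {
	if infoSHA1 != "" {
		return infoSHA1
	}
	return headerSHA1
}

func main() {
	fmt.Println(fileSHA1("abc123", "stale-header-value")) // "abc123"
	fmt.Println(fileSHA1("", "def456"))                   // fallback: "def456"
}
```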
If remote:path points to a file, make NewFs return a sentinel error
fs.ErrorIsFile and an Fs which points to the parent directory.
Use this to remove the LimitedFs and just add this file to the
--files-from list.
This means that server side operations can be used as well.
Fixes #518 Fixes #545
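A sketch of how a caller can use such a sentinel (the types and the file-detection rule here are faked for illustration; rclone's real fs.ErrorIsFile is used analogously):

```go
package main

import (
	"errors"
	"fmt"
	"path"
)

// ErrorIsFile is a sentinel returned when remote:path points at a file.
var ErrorIsFile = errors.New("is a file not a directory")

// newFs returns a root pointing at the parent when given a file path.
// Here "has an extension" fakes the file check, purely for illustration.
func newFs(remotePath string) (root string, err error) {
	if path.Ext(remotePath) != "" {
		return path.Dir(remotePath), ErrorIsFile
	}
	return remotePath, nil
}

func main() {
	root, err := newFs("bucket/dir/file.txt")
	if errors.Is(err, ErrorIsFile) {
		// The Fs points at the parent; add the leaf to a --files-from
		// style filter so server side operations still work on the parent.
		fmt.Println("root:", root, "file:", path.Base("bucket/dir/file.txt"))
		return
	}
	fmt.Println("root:", root)
}
```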
This should fix duplicate files on drive and 409 errors on
amazonclouddrive; however, it will slow down uploads slightly, as
another roundtrip will be needed.
None of the other Fses needed adjusting.
Fixes #483
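A sketch of the extra roundtrip, assuming it is a look-before-upload existence check (the remote type and helpers are hypothetical):

```go
package main

import "fmt"

// remote is a hypothetical backend with a lookup call and separate
// create/update calls.
type remote struct{ objects map[string]string }

func (r *remote) lookup(name string) bool {
	_, ok := r.objects[name]
	return ok
}

func (r *remote) create(name, data string) { r.objects[name] = data }
func (r *remote) update(name, data string) { r.objects[name] = data }

// put spends one extra roundtrip (lookup) before uploading, so an
// existing object is updated in place rather than created again - the
// kind of blind create that produces duplicates or 409 conflicts.
func put(r *remote, name, data string) {
	if r.lookup(name) { // the extra roundtrip
		r.update(name, data)
		return
	}
	r.create(name, data)
}

func main() {
	r := &remote{objects: map[string]string{}}
	put(r, "a.txt", "v1") // not found: create
	put(r, "a.txt", "v2") // found: update, no duplicate
	fmt.Println(r.objects)
}
```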
Gives more accurate error propagation, control of the depth of
recursion, and short-circuits recursion where possible.
Most of the heavy lifting is done in the "fs" package, making file
system implementations a bit simpler.
This commit contains some code originally by Klaus Post.
Fixes #316
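A minimal sketch of the listing shape described above, over a hypothetical in-memory tree: recursion carries a depth budget, propagates errors instead of swallowing them, and short circuits as soon as the budget is spent:

```go
package main

import (
	"errors"
	"fmt"
)

// dir is a hypothetical in-memory tree standing in for a remote.
type dir struct {
	files []string
	subs  map[string]*dir
}

// list walks at most maxDepth levels, returning errors to the caller
// rather than logging and continuing, and never descends further than
// the depth budget allows.
func list(d *dir, prefix string, maxDepth int, out *[]string) error {
	if d == nil {
		return errors.New("directory not found: " + prefix)
	}
	for _, f := range d.files {
		*out = append(*out, prefix+f)
	}
	if maxDepth <= 1 {
		return nil // short circuit: don't even look at subdirectories
	}
	for name, sub := range d.subs {
		if err := list(sub, prefix+name+"/", maxDepth-1, out); err != nil {
			return err // propagate accurately
		}
	}
	return nil
}

func main() {
	root := &dir{
		files: []string{"top.txt"},
		subs:  map[string]*dir{"a": {files: []string{"1.txt"}}},
	}
	var out []string
	if err := list(root, "", 1, &out); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out) // depth 1: [top.txt] only
}
```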