Version v1.57.0

This commit is contained in:
Nick Craig-Wood 2021-11-01 15:42:05 +00:00
parent e781bcbba1
commit 169990e270
96 changed files with 19756 additions and 11228 deletions

MANUAL.html (generated): 5883 changed lines (diff suppressed because it is too large)

MANUAL.md (generated): 6631 changed lines (diff suppressed because it is too large)

MANUAL.txt (generated): 6620 changed lines (diff suppressed because it is too large)


@ -43,6 +43,7 @@ docs = [
"googlecloudstorage.md",
"drive.md",
"googlephotos.md",
"hasher.md",
"hdfs.md",
"http.md",
"hubic.md",
@ -55,6 +56,7 @@ docs = [
"onedrive.md",
"opendrive.md",
"qingstor.md",
"sia.md",
"swift.md",
"pcloud.md",
"premiumizeme.md",


@ -89,13 +89,14 @@ Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to alias (Alias for an existing remote).
#### --alias-remote
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
- Config: remote
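For reference, the `remote` option above is all an alias needs; a minimal illustrative `rclone.conf` fragment (the remote names here are hypothetical) might look like:

```ini
[mydocs]
type = alias
remote = myremote:path/to/dir
```

After which `rclone ls mydocs:` would list the contents of `myremote:path/to/dir`.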


@ -158,13 +158,14 @@ rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/amazonclouddrive/amazonclouddrive.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to amazon cloud drive (Amazon Drive).
#### --acd-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -174,7 +175,8 @@ Leave blank normally.
#### --acd-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -182,7 +184,7 @@ Leave blank normally.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
@ -198,6 +200,7 @@ OAuth Access Token as a JSON blob.
#### --acd-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -208,6 +211,7 @@ Leave blank to use the provider defaults.
#### --acd-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -275,7 +279,7 @@ underlying S3 storage.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING


@ -148,13 +148,15 @@ parties access to a single container or putting credentials into an
untrusted environment such as a CI build server.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-account
Storage Account Name (leave blank to use SAS URL or Emulator)
Storage Account Name.
Leave blank to use SAS URL or Emulator.
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
@ -182,7 +184,9 @@ See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/a
#### --azureblob-key
Storage Account Key (leave blank to use SAS URL or Emulator)
Storage Account Key.
Leave blank to use SAS URL or Emulator.
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
@ -191,8 +195,9 @@ Storage Account Key (leave blank to use SAS URL or Emulator)
#### --azureblob-sas-url
SAS URL for container level access only
(leave blank if using account/key or Emulator)
SAS URL for container level access only.
Leave blank if using account/key or Emulator.
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
@ -201,7 +206,7 @@ SAS URL for container level access only
#### --azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure)
Use a managed service identity to authenticate (only works in Azure).
When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
to authenticate to Azure Storage instead of a SAS token or account key.
@ -219,20 +224,24 @@ msi_client_id, or msi_mi_res_id parameters.
#### --azureblob-use-emulator
Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
Uses local storage emulator if provided as 'true'.
Leave blank if using real azure storage endpoint.
- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
#### --azureblob-msi-object-id
Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_mi_res_id specified.
Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_mi_res_id specified.
- Config: msi_object_id
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
@ -241,7 +250,9 @@ Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id
#### --azureblob-msi-client-id
Object ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id or msi_mi_res_id specified.
Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_object_id or msi_mi_res_id specified.
- Config: msi_client_id
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
@ -250,7 +261,9 @@ Object ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id
#### --azureblob-msi-mi-res-id
Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_object_id specified.
Azure resource ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_object_id specified.
- Config: msi_mi_res_id
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
@ -259,7 +272,8 @@ Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_cl
#### --azureblob-endpoint
Endpoint for the service
Endpoint for the service.
Leave blank normally.
- Config: endpoint
@ -269,7 +283,7 @@ Leave blank normally.
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
@ -364,6 +378,7 @@ to start uploading.
#### --azureblob-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
@ -385,7 +400,7 @@ Whether to use mmap buffers in internal memory pool.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
@ -394,7 +409,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
#### --azureblob-public-access
Public access level of a container: blob, container.
Public access level of a container: blob or container.
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
@ -402,12 +417,22 @@ Public access level of a container: blob, container.
- Default: ""
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request. It's a default value
- The container and its blobs can be accessed only with an authorized request.
- It's a default value.
- "blob"
- Blob data within this container can be read via anonymous request.
- "container"
- Allow full public read access for container and blob data.
#### --azureblob-no-head-object
If set, do not do HEAD before GET when getting objects.
- Config: no_head_object
- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
- Type: bool
- Default: false
{{< rem autogenerated options stop >}}
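Every option listed above pairs a config key with an environment variable following the same `RCLONE_<BACKEND>_<OPTION>` pattern (e.g. `use_emulator` becomes `RCLONE_AZUREBLOB_USE_EMULATOR`). A minimal sketch of that mapping:

```python
def rclone_env_var(backend: str, option: str) -> str:
    """Derive the environment variable name for a backend option,
    following the RCLONE_<BACKEND>_<OPTION> pattern shown in the docs."""
    return "RCLONE_{}_{}".format(backend.upper(), option.replace("-", "_").upper())

print(rclone_env_var("azureblob", "use_emulator"))    # RCLONE_AZUREBLOB_USE_EMULATOR
print(rclone_env_var("azureblob", "no_head_object"))  # RCLONE_AZUREBLOB_NO_HEAD_OBJECT
```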
## Limitations


@ -321,13 +321,13 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to b2 (Backblaze B2).
#### --b2-account
Account ID or Application Key ID
Account ID or Application Key ID.
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
@ -336,7 +336,7 @@ Account ID or Application Key ID
#### --b2-key
Application Key
Application Key.
- Config: key
- Env Var: RCLONE_B2_KEY
@ -352,13 +352,14 @@ Permanently delete files on remote removal, otherwise hide files.
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to b2 (Backblaze B2).
#### --b2-endpoint
Endpoint for the service.
Leave blank normally.
- Config: endpoint
@ -388,6 +389,7 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
#### --b2-versions
Include old versions in directory listings.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
@ -411,7 +413,7 @@ This value should be set no larger than 4.657 GiB (== 5 GB).
#### --b2-copy-cutoff
Cutoff for switching to multipart copy
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
@ -425,12 +427,14 @@ The minimum is 0 and the maximum is 4.6 GiB.
#### --b2-chunk-size
Upload chunk size. Must fit in memory.
Upload chunk size.
When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimum size.
When uploading large files, chunk the file into this size.
Must fit in memory. These chunks are buffered in memory and there
might be a maximum of "--transfers" chunks in progress at once.
5,000,000 Bytes is the minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
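Since chunks are buffered in memory with up to `--transfers` of them in flight, peak buffer usage is roughly the product of the two. A small sketch (the 96 MiB chunk size and the default of 4 transfers are assumptions; check your actual settings):

```python
B2_MIN_CHUNK = 5_000_000  # minimum chunk size in bytes, per the docs above

def peak_chunk_memory(chunk_size: int, transfers: int = 4) -> int:
    """Rough upper bound on memory used for upload chunk buffers."""
    if chunk_size < B2_MIN_CHUNK:
        raise ValueError("chunk size is below the B2 minimum of 5,000,000 bytes")
    return chunk_size * transfers

# e.g. 96 MiB chunks with 4 transfers buffers up to ~384 MiB
print(peak_chunk_memory(96 * 1024 * 1024) // (1024 * 1024), "MiB")
```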
@ -439,7 +443,7 @@ minimum size.
#### --b2-disable-checksum
Disable checksums for large (> upload cutoff) files
Disable checksums for large (> upload cutoff) files.
Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
@ -503,7 +507,7 @@ Whether to use mmap buffers in internal memory pool.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_B2_ENCODING


@ -265,13 +265,14 @@ in the browser, then you use `11xxxxxxxxx8` as
the `root_folder_id` in the config.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to box (Box).
#### --box-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -281,7 +282,8 @@ Leave blank normally.
#### --box-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -292,11 +294,11 @@ Leave blank normally.
#### --box-box-config-file
Box App config.json location
Leave blank normally.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
@ -305,6 +307,7 @@ Leading `~` will be expanded in the file name as will environment variables such
#### --box-access-token
Box App Primary Access Token
Leave blank normally.
- Config: access_token
@ -322,11 +325,11 @@ Leave blank normally.
- Default: "user"
- Examples:
- "user"
- Rclone should act on behalf of a user
- Rclone should act on behalf of a user.
- "enterprise"
- Rclone should act on behalf of a service account
- Rclone should act on behalf of a service account.
### Advanced Options
### Advanced options
Here are the advanced options specific to box (Box).
@ -342,6 +345,7 @@ OAuth Access Token as a JSON blob.
#### --box-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -352,6 +356,7 @@ Leave blank to use the provider defaults.
#### --box-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -386,11 +391,29 @@ Max number of times to try committing a multipart file.
- Type: int
- Default: 100
#### --box-list-chunk
Size of listing chunk 1-1000.
- Config: list_chunk
- Env Var: RCLONE_BOX_LIST_CHUNK
- Type: int
- Default: 1000
#### --box-owned-by
Only show items owned by the login (email address) passed in.
- Config: owned_by
- Env Var: RCLONE_BOX_OWNED_BY
- Type: string
- Default: ""
#### --box-encoding
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING


@ -305,13 +305,14 @@ Params:
- **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to cache (Cache a remote).
#### --cache-remote
Remote to cache.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
@ -322,7 +323,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
#### --cache-plex-url
The URL of the Plex server
The URL of the Plex server.
- Config: plex_url
- Env Var: RCLONE_CACHE_PLEX_URL
@ -331,7 +332,7 @@ The URL of the Plex server
#### --cache-plex-username
The username of the Plex user
The username of the Plex user.
- Config: plex_username
- Env Var: RCLONE_CACHE_PLEX_USERNAME
@ -340,7 +341,7 @@ The username of the Plex user
#### --cache-plex-password
The password of the Plex user
The password of the Plex user.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -406,13 +407,13 @@ oldest chunks until it goes under this value.
- "10G"
- 10 GiB
### Advanced Options
### Advanced options
Here are the advanced options specific to cache (Cache a remote).
#### --cache-plex-token
The plex token for authentication - auto set normally
The plex token for authentication - auto set normally.
- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
@ -421,7 +422,7 @@ The plex token for authentication - auto set normally
#### --cache-plex-insecure
Skip all certificate verification when connecting to the Plex server
Skip all certificate verification when connecting to the Plex server.
- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
@ -431,6 +432,7 @@ Skip all certificate verification when connecting to the Plex server
#### --cache-db-path
Directory to store file structure metadata DB.
The remote name is used as the DB file name.
- Config: db_path
@ -466,6 +468,7 @@ Clear all the cached data for this remote on start.
#### --cache-chunk-clean-interval
How often should the cache perform cleanups of the chunk storage.
The default value should be ok for most people. If you find that the
cache goes over "cache-chunk-total-size" too often then try to lower
this value to force it to perform cleanups more often.
@ -535,7 +538,7 @@ available on the local machine.
#### --cache-rps
Limits the number of requests per second to the source FS (-1 to disable)
Limits the number of requests per second to the source FS (-1 to disable).
This setting places a hard limit on the number of requests per second
that cache will be doing to the cloud provider remote and try to
@ -560,7 +563,7 @@ still pass.
#### --cache-writes
Cache file data on writes through the FS
Cache file data on writes through the FS.
If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the
@ -589,7 +592,7 @@ provider
#### --cache-tmp-wait-time
How long should files be stored in local cache before being uploaded
How long should files be stored in local cache before being uploaded.
This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload.
@ -604,7 +607,7 @@ to start the upload if a queue formed for this purpose.
#### --cache-db-wait-time
How long to wait for the DB to be available - 0 is unlimited
How long to wait for the DB to be available - 0 is unlimited.
Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an
@ -617,7 +620,7 @@ If you set it to 0 then it will wait forever.
- Type: Duration
- Default: 1s
### Backend commands
## Backend commands
Here are the commands specific to the cache backend.
@ -633,7 +636,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### stats
### stats
Print stats on the cache backend in JSON format.


@ -5,6 +5,179 @@ description: "Rclone Changelog"
# Changelog
## v1.57.0 - 2021-11-01
[See commits](https://github.com/rclone/rclone/compare/v1.56.0...v1.57.0)
* New backends
* Sia: for Sia decentralized cloud (Ian Levesque, Matthew Sevey, Ivan Andreev)
* Hasher: caches hashes and enables hashes for backends that don't support them (Ivan Andreev)
* New commands
* lsjson --stat: to get info about a single file/dir and `operations/stat` api (Nick Craig-Wood)
* config paths: show configured paths (albertony)
* New Features
* about: Make human-readable output more consistent with other commands (albertony)
* build
* Use go1.17 for building and make go1.14 the minimum supported (Nick Craig-Wood)
* Update Go to 1.16 and NDK to 22b for Android builds (x0b)
* config
* Support hyphen in remote name from environment variable (albertony)
* Make temporary directory user-configurable (albertony)
* Convert `--cache-dir` value to an absolute path (albertony)
* Do not override MIME types from OS defaults (albertony)
* docs
* Toc styling and header levels cleanup (albertony)
* Extend documentation on valid remote names (albertony)
* Mention make for building and cmount tag for macos (Alex Chen)
* ...and many more contributions too numerous to mention!
* fs: Move with `--ignore-existing` will not delete skipped files (Nathan Collins)
* hashsum
* Treat hash values in sum file as case insensitive (Ivan Andreev)
* Don't put `ERROR` or `UNSUPPORTED` in output (Ivan Andreev)
* lib/encoder: Add encoding of square brackets (Ivan Andreev)
* lib/file: Improve error message when attempting to create dir on nonexistent drive on windows (albertony)
* lib/http: Factor password hash salt into options with default (Nolan Woods)
* lib/kv: Add key-value database api (Ivan Andreev)
* librclone
* Add `RcloneFreeString` function (albertony)
* Free strings in python example (albertony)
* log: Optionally print pid in logs (Ivan Andreev)
* ls: Introduce `--human-readable` global option to print human-readable sizes (albertony)
* ncdu: Introduce key `u` to toggle human-readable (albertony)
* operations: Add `rmdirs -v` output (Justin Winokur)
* serve sftp
* Generate an ECDSA server key as well as RSA (Nick Craig-Wood)
* Generate an Ed25519 server key as well as ECDSA and RSA (albertony)
* serve docker
* Allow to customize proxy settings of docker plugin (Ivan Andreev)
* Build docker plugin for multiple platforms (Thomas Stachl)
* size: Include human-readable count (albertony)
* touch: Add support for touching files in directory, with recursive option, filtering and `--dry-run`/`-i` (albertony)
* tree: Option to print human-readable sizes removed in favor of global option (albertony)
* Bug Fixes
* lib/http
* Fix bad username check in single auth secret provider (Nolan Woods)
* Fix handling of SSL credentials (Nolan Woods)
* serve ftp: Ensure modtime is passed as UTC always to fix timezone oddities (Nick Craig-Wood)
* serve sftp: Fix generation of server keys on windows (albertony)
* serve docker: Fix octal umask (Ivan Andreev)
* Mount
* Enable rclone to be run as mount helper direct from the fstab (Ivan Andreev)
* Use procfs to validate mount on linux (Ivan Andreev)
* Correctly daemonize for compatibility with automount (Ivan Andreev)
* VFS
* Ensure names used in cache path are legal on current OS (albertony)
* Ignore `ECLOSED` when truncating file handles to fix intermittent bad file descriptor error (Nick Craig-Wood)
* Local
* Refactor default OS encoding out from local backend into shared encoder lib (albertony)
* Crypt
* Return wrapped object even with `--crypt-no-data-encryption` (Ivan Andreev)
* Fix uploads with `--crypt-no-data-encryption` (Nick Craig-Wood)
* Azure Blob
* Add `--azureblob-no-head-object` (Tatsuya Noyori)
* Box
* Make listings of heavily used directories more reliable (Nick Craig-Wood)
* When doing cleanup delete as much as possible (Nick Craig-Wood)
* Add `--box-list-chunk` to control listing chunk size (Nick Craig-Wood)
* Delete items in parallel in cleanup using `--checkers` threads (Nick Craig-Wood)
* Add `--box-owned-by` to only show items owned by the login passed (Nick Craig-Wood)
* Retry `operation_blocked_temporary` errors (Nick Craig-Wood)
* Chunker
* Md5all must create metadata if base hash is slow (Ivan Andreev)
* Drive
* Speed up directory listings by constraining the API listing using the current filters (fotile96, Ivan Andreev)
* Fix buffering for single request upload for files smaller than `--drive-upload-cutoff` (YenForYang)
* Add `-o config` option to `backend drives` to make config for all shared drives (Nick Craig-Wood)
* Dropbox
* Add `--dropbox-batch-commit-timeout` to control batch timeout (Nick Craig-Wood)
* Filefabric
* Make backoff exponential for error_background to fix errors (Nick Craig-Wood)
* Fix directory move after API change (Nick Craig-Wood)
* FTP
* Enable tls session cache by default (Ivan Andreev)
* Add option to disable tls13 (Ivan Andreev)
* Fix timeout after long uploads (Ivan Andreev)
* Add support for precise time (Ivan Andreev)
* Enable CI for ProFtpd, PureFtpd, VsFtpd (Ivan Andreev)
* Googlephotos
* Use encoder for album names to fix albums with control characters (Parth Shukla)
* Jottacloud
* Implement `SetModTime` to support modtime-only changes (albertony)
* Improved error handling with `SetModTime` and corrupt files in general (albertony)
* Add support for `UserInfo` (`rclone config userinfo`) feature (albertony)
* Return direct download link from `rclone link` command (albertony)
* Koofr
* Create direct share link (Dmitry Bogatov)
* Pcloud
* Add sha256 support (Ken Enrique Morel)
* Premiumizeme
* Fix directory listing after API changes (Nick Craig-Wood)
* Fix server side move after API change (Nick Craig-Wood)
* Fix server side directory move after API changes (Nick Craig-Wood)
* S3
* Add support to use CDN URL to download the file (Logeshwaran)
* Add AWS Snowball Edge to providers examples (r0kk3rz)
* Use a combination of SDK retries and rclone retries (Nick Craig-Wood)
* Fix IAM Role for Service Account not working and other auth problems (Nick Craig-Wood)
* Fix `shared_credentials_file` auth after reverting incorrect fix (Nick Craig-Wood)
* Fix corrupted on transfer: sizes differ 0 vs xxxx with Ceph (Nick Craig-Wood)
* Seafile
* Fix error when not configured for 2fa (Fred)
* SFTP
* Fix timeout when doing MD5SUM of large file (Nick Craig-Wood)
* Swift
* Update OCI URL (David Liu)
* Document OVH Cloud Archive (HNGamingUK)
* Union
* Fix rename not working with union of local disk and bucket based remote (Nick Craig-Wood)
## v1.56.2 - 2021-10-01
[See commits](https://github.com/rclone/rclone/compare/v1.56.1...v1.56.2)
* Bug Fixes
* serve http: Re-add missing auth to http service (Nolan Woods)
* build: Update golang.org/x/sys to fix crash on macOS when compiled with go1.17 (Herby Gillot)
* FTP
* Fix deadlock after failed update when concurrency=1 (Ivan Andreev)
## v1.56.1 - 2021-09-19
[See commits](https://github.com/rclone/rclone/compare/v1.56.0...v1.56.1)
* Bug Fixes
* accounting: Fix maximum bwlimit by scaling max token bucket size (Nick Craig-Wood)
* rc: Fix speed does not update in core/stats (negative0)
* selfupdate: Fix `--quiet` option, not quite quiet (yedamo)
* serve http: Fix `serve http` exiting directly after starting (Cnly)
* build
* Apply gofmt from golang 1.17 (Ivan Andreev)
* Update Go to 1.16 and NDK to 22b for android/any (x0b)
* Mount
* Fix `--daemon` mode (Ivan Andreev)
* VFS
* Fix duplicates on rename (Nick Craig-Wood)
* Fix crash when truncating a just uploaded object (Nick Craig-Wood)
* Fix issue where empty dirs would build up in cache meta dir (albertony)
* Drive
* Fix instructions for auto config (Greg Sadetsky)
* Fix lsf example without drive-impersonate (Greg Sadetsky)
* Onedrive
* Handle HTTP 400 better in PublicLink (Alex Chen)
* Clarification of the process for creating custom client_id (Mariano Absatz)
* Pcloud
* Return an early error when Put is called with an unknown size (Nick Craig-Wood)
* Try harder to delete a failed upload (Nick Craig-Wood)
* S3
* Add Wasabi's AP-Northeast endpoint info (hota)
* Fix typo in s3 documentation (Greg Sadetsky)
* Seafile
* Fix 2fa config state machine (Fred)
* SFTP
* Remove spurious error message on `--sftp-disable-concurrent-reads` (Nick Craig-Wood)
* Sugarsync
* Fix initial connection after config re-arrangement (Nick Craig-Wood)
## v1.56.0 - 2021-07-20
[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)


@ -311,13 +311,14 @@ to keep rclone up-to-date to avoid data corruption.
Changing `transactions` is dangerous and requires explicit migration.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to chunker (Transparently chunk/split large files).
#### --chunker-remote
Remote to chunk/unchunk.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
@ -337,7 +338,9 @@ Files larger than chunk size will be split in chunks.
#### --chunker-hash-type
Choose how chunker handles hash sums. All modes but "none" require metadata.
Choose how chunker handles hash sums.
All modes but "none" require metadata.
- Config: hash_type
- Env Var: RCLONE_CHUNKER_HASH_TYPE
@ -345,27 +348,30 @@ Choose how chunker handles hash sums. All modes but "none" require metadata.
- Default: "md5"
- Examples:
- "none"
- Pass any hash supported by wrapped remote for non-chunked files, return nothing otherwise
- Pass any hash supported by wrapped remote for non-chunked files.
- Return nothing otherwise.
- "md5"
- MD5 for composite files
- MD5 for composite files.
- "sha1"
- SHA1 for composite files
- SHA1 for composite files.
- "md5all"
- MD5 for all files
- MD5 for all files.
- "sha1all"
- SHA1 for all files
- SHA1 for all files.
- "md5quick"
- Copying a file to chunker will request MD5 from the source falling back to SHA1 if unsupported
- Copying a file to chunker will request MD5 from the source.
- Falling back to SHA1 if unsupported.
- "sha1quick"
- Similar to "md5quick" but prefers SHA1 over MD5
- Similar to "md5quick" but prefers SHA1 over MD5.
### Advanced Options
### Advanced options
Here are the advanced options specific to chunker (Transparently chunk/split large files).
#### --chunker-name-format
String format of chunk file names.
The two placeholders are: base file name (*) and chunk number (#...).
There must be one and only one asterisk and one or more consecutive hash characters.
If chunk number has fewer digits than the number of hashes, it is left-padded by zeros.
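The naming rule above can be sketched as follows (the `*.rclone_chunk.###` format string is an assumed example, not necessarily your configured value):

```python
import re

def chunk_file_name(name_format: str, base: str, number: int) -> str:
    """Expand a chunker name format: '*' is the base file name and a run
    of '#' is the chunk number, zero-padded to the number of hashes."""
    hashes = re.search(r"#+", name_format).group(0)
    return name_format.replace("*", base).replace(hashes, str(number).zfill(len(hashes)))

print(chunk_file_name("*.rclone_chunk.###", "data.bin", 5))  # data.bin.rclone_chunk.005
```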
@ -380,6 +386,7 @@ Possible chunk files are ignored if their name does not match given format.
#### --chunker-start-from
Minimum valid chunk number. Usually 0 or 1.
By default chunk numbers start from 1.
- Config: start_from
@ -389,7 +396,9 @@ By default chunk numbers start from 1.
#### --chunker-meta-format
Format of the metadata object or "none". By default "simplejson".
Format of the metadata object or "none".
By default "simplejson".
Metadata is a small JSON file named after the composite file.
- Config: meta_format
@ -398,9 +407,11 @@ Metadata is a small JSON file named after the composite file.
- Default: "simplejson"
- Examples:
- "none"
- Do not use metadata files at all. Requires hash type "none".
- Do not use metadata files at all.
- Requires hash type "none".
- "simplejson"
- Simple JSON supports hash sums and chunk validation.
-
- It has the following fields: ver, size, nchunks, md5, sha1.
#### --chunker-fail-hard


@ -41,9 +41,10 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest.
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.


@ -17,23 +17,22 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from `rclone about remote:` is:
Total: 17G
Used: 7.444G
Free: 1.315G
Trashed: 100.000M
Other: 8.241G
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
Where the fields are:
* Total: total size available.
* Used: total size used
* Free: total space available to this user.
* Trashed: total space used by trash
* Other: total amount in other storage (e.g. Gmail, Google Photos)
* Objects: total number of objects in the storage
* Total: Total size available.
* Used: Total size used.
* Free: Total space available to this user.
* Trashed: Total space used by trash.
* Other: Total amount in other storage (e.g. Gmail, Google Photos).
* Objects: Total number of objects in the storage.
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
All sizes are in number of bytes.
Applying a `--full` flag to the command prints the bytes in full, e.g.
@ -53,9 +52,11 @@ A `--json` flag generates conveniently computer readable output, e.g.
"free": 1411001220
}
Not all backends support the `rclone about` command.
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
See [List of backends that do not support about](https://rclone.org/overview/#optional-features)
Some backends do not support the `rclone about` command at all,
see the complete list in the [documentation](https://rclone.org/overview/#optional-features).
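The JSON output is convenient to consume from scripts. A minimal sketch (field names as in the example above; the byte values here are hypothetical):

```python
import json

# Hypothetical output of `rclone about remote: --json`
raw = '{"total": 18253611008, "used": 7993282560, "free": 1411001220}'

about = json.loads(raw)
free_gib = about["free"] / (1 << 30)   # sizes are in bytes; convert to GiB
print(f"{free_gib:.3f} GiB free")
```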
```
@ -65,7 +66,7 @@ rclone about remote: [flags]
## Options
```
--full Full numbers instead of SI units
--full Full numbers instead of human-readable
-h, --help help for about
--json Format output as JSON
```

View File

@ -47,8 +47,8 @@ rclone backend <command> remote:path [opts] <args> [flags]
```
-h, --help help for backend
--json Always output in JSON format.
-o, --option stringArray Option in the form name=value or name.
--json Always output in JSON format
-o, --option stringArray Option in the form name=value or name
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -39,12 +39,12 @@ rclone cat remote:path [flags]
## Options
```
--count int Only print N characters. (default -1)
--discard Discard the output instead of printing.
--head int Only print the first N characters.
--count int Only print N characters (default -1)
--discard Discard the output instead of printing
--head int Only print the first N characters
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve).
--tail int Only print the last N characters.
--offset int Start printing at offset N (or from end if -ve)
--tail int Only print the last N characters
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -59,7 +59,7 @@ rclone check source:path dest:path [flags]
-C, --checkfile string Treat source:path as a SUM file with hashes of given type
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
--download Check by downloading rather than with hash
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for check
--match string Report all matching files to this file

View File

@ -53,7 +53,7 @@ rclone checksum <hash> sumfile src:path [flags]
```
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by hashing the contents.
--download Check by hashing the contents
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for checksum
--match string Report all matching files to this file

View File

@ -0,0 +1,34 @@
---
title: "rclone completion"
description: "generate the autocompletion script for the specified shell"
slug: rclone_completion
url: /commands/rclone_completion/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs"
---
# rclone completion
generate the autocompletion script for the specified shell
## Synopsis
Generate the autocompletion script for rclone for the specified shell.
See each sub-command's help for details on how to use the generated script.
## Options
```
-h, --help help for completion
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone completion bash](/commands/rclone_completion_bash/) - generate the autocompletion script for bash
* [rclone completion fish](/commands/rclone_completion_fish/) - generate the autocompletion script for fish
* [rclone completion powershell](/commands/rclone_completion_powershell/) - generate the autocompletion script for powershell
* [rclone completion zsh](/commands/rclone_completion_zsh/) - generate the autocompletion script for zsh

View File

@ -0,0 +1,48 @@
---
title: "rclone completion bash"
description: "generate the autocompletion script for bash"
slug: rclone_completion_bash
url: /commands/rclone_completion_bash/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs"
---
# rclone completion bash
generate the autocompletion script for bash
## Synopsis
Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.
To load completions in your current shell session:
$ source <(rclone completion bash)
To load completions for every new session, execute once:
Linux:
$ rclone completion bash > /etc/bash_completion.d/rclone
MacOS:
$ rclone completion bash > /usr/local/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
```
rclone completion bash
```
## Options
```
-h, --help help for bash
--no-descriptions disable completion descriptions
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell

View File

@ -0,0 +1,42 @@
---
title: "rclone completion fish"
description: "generate the autocompletion script for fish"
slug: rclone_completion_fish
url: /commands/rclone_completion_fish/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs"
---
# rclone completion fish
generate the autocompletion script for fish
## Synopsis
Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
$ rclone completion fish | source
To load completions for every new session, execute once:
$ rclone completion fish > ~/.config/fish/completions/rclone.fish
You will need to start a new shell for this setup to take effect.
```
rclone completion fish [flags]
```
## Options
```
-h, --help help for fish
--no-descriptions disable completion descriptions
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell

View File

@ -0,0 +1,40 @@
---
title: "rclone completion powershell"
description: "generate the autocompletion script for powershell"
slug: rclone_completion_powershell
url: /commands/rclone_completion_powershell/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/powershell/ and as part of making a release run "make commanddocs"
---
# rclone completion powershell
generate the autocompletion script for powershell
## Synopsis
Generate the autocompletion script for powershell.
To load completions in your current shell session:
PS C:\> rclone completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
```
rclone completion powershell [flags]
```
## Options
```
-h, --help help for powershell
--no-descriptions disable completion descriptions
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell

View File

@ -0,0 +1,47 @@
---
title: "rclone completion zsh"
description: "generate the autocompletion script for zsh"
slug: rclone_completion_zsh
url: /commands/rclone_completion_zsh/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs"
---
# rclone completion zsh
generate the autocompletion script for zsh
## Synopsis
Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for every new session, execute once:
# Linux:
$ rclone completion zsh > "${fpath[1]}/_rclone"
# macOS:
$ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
```
rclone completion zsh [flags]
```
## Options
```
-h, --help help for zsh
--no-descriptions disable completion descriptions
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - generate the autocompletion script for the specified shell

View File

@ -32,11 +32,12 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote `name`.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote.
* [rclone config disconnect](/commands/rclone_config_disconnect/) - Disconnects user from remote
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
* [rclone config file](/commands/rclone_config_file/) - Show path of configuration file in use.
* [rclone config password](/commands/rclone_config_password/) - Update password in an existing remote.
* [rclone config paths](/commands/rclone_config_paths/) - Show paths used for configuration, cache, temp etc.
* [rclone config providers](/commands/rclone_config_providers/) - List in JSON format all the providers and options.
* [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote.
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.

View File

@ -115,20 +115,20 @@ as a readable demonstration.
```
rclone config create `name` `type` [`key` `value`]* [flags]
rclone config create name type [key value]* [flags]
```
## Options
```
--all Ask the full set of config questions.
--continue Continue the configuration process with an answer.
--all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured.
--non-interactive Don't interact with user and return questions.
--obscure Force any passwords to be obscured.
--result string Result - use with --continue.
--state string State - use with --continue.
--no-obscure Force any passwords not to be obscured
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
--state string State - use with --continue
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -1,16 +1,16 @@
---
title: "rclone config delete"
description: "Delete an existing remote `name`."
description: "Delete an existing remote."
slug: rclone_config_delete
url: /commands/rclone_config_delete/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/delete/ and as part of making a release run "make commanddocs"
---
# rclone config delete
Delete an existing remote `name`.
Delete an existing remote.
```
rclone config delete `name` [flags]
rclone config delete name [flags]
```
## Options

View File

@ -26,7 +26,7 @@ both support obscuring passwords directly.
```
rclone config password `name` [`key` `value`]+ [flags]
rclone config password name [key value]+ [flags]
```
## Options

View File

@ -0,0 +1,27 @@
---
title: "rclone config paths"
description: "Show paths used for configuration, cache, temp etc."
slug: rclone_config_paths
url: /commands/rclone_config_paths/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/paths/ and as part of making a release run "make commanddocs"
---
# rclone config paths
Show paths used for configuration, cache, temp etc.
```
rclone config paths [flags]
```
## Options
```
-h, --help help for paths
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.

View File

@ -115,20 +115,20 @@ as a readable demonstration.
```
rclone config update `name` [`key` `value`]+ [flags]
rclone config update name [key value]+ [flags]
```
## Options
```
--all Ask the full set of config questions.
--continue Continue the configuration process with an answer.
--all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured.
--non-interactive Don't interact with user and return questions.
--obscure Force any passwords to be obscured.
--result string Result - use with --continue.
--state string State - use with --continue.
--no-obscure Force any passwords not to be obscured
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
--state string State - use with --continue
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -1,62 +1,48 @@
---
title: "rclone copy"
description: "Copy files from source to dest, skipping already copied."
description: "Copy files from source to dest, skipping identical files."
slug: rclone_copy
url: /commands/rclone_copy/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs"
---
# rclone copy
Copy files from source to dest, skipping already copied.
Copy files from source to dest, skipping identical files.
## Synopsis
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
Copy the source to the destination. Does not transfer files that are
identical on source and destination, testing by size and modification
time or MD5SUM. Doesn't delete files from the destination.
Note that when the source is a directory, it is always the contents
of the directory that is copied, not the directory itself.
Note that it is always the contents of the directory that is synced,
not the directory so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
For example, given the following command:
If dest:path doesn't exist, it is created and the source:path contents
go there.
For example
rclone copy source:sourcepath dest:destpath
Let's say there are two files in source:
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
The command will copy them to:
This copies them to
destpath/one.txt
destpath/two.txt
Not to:
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
Also note that the destination is always a directory. If the path
does not exist, it will be created as a directory and the contents of
the source will be copied into it. This is the case even if the source
path points to a file. If you want to copy a single file to a different
name you must use [copyto](commands/rclone_copyto/) instead.
For example, given the command:
rclone copy source:sourcepath/one.txt dest:destpath/one.txt
Rclone will create a directory `dest:destpath/one.txt` and put the source file in there:
dest:destpath/one.txt/one.txt
Not copy the single source file as a file with the given destination path,
which would be the result if copyto had been used instead:
dest:destpath/one.txt
If you are familiar with `rsync`, rclone always works as if you had
written a trailing `/` - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the

View File

@ -1,13 +1,13 @@
---
title: "rclone copyto"
description: "Copy files from source to dest, skipping already copied."
description: "Copy files from source to dest, skipping identical files."
slug: rclone_copyto
url: /commands/rclone_copyto/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs"
---
# rclone copyto
Copy files from source to dest, skipping already copied.
Copy files from source to dest, skipping identical files.
## Synopsis
@ -34,9 +34,9 @@ This will:
copy it to dst, overwriting existing files if they exist
see copy command for full details
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. It doesn't delete files from the
destination.
This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics

View File

@ -127,7 +127,7 @@ rclone dedupe [mode] remote:path [flags]
```
--by-hash Find identical hashes rather than names
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive")
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
-h, --help help for dedupe
```

View File

@ -25,7 +25,7 @@ rclone listremotes [flags]
```
-h, --help help for listremotes
--long Show the type as well as names.
--long Show the type as well as names
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -65,7 +65,7 @@ rclone lsd remote:path [flags]
```
-h, --help help for lsd
-R, --recursive Recurse into the listing.
-R, --recursive Recurse into the listing
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -137,16 +137,16 @@ rclone lsf remote:path [flags]
## Options
```
--absolute Put a leading / in front of path names.
--csv Output in CSV format.
-d, --dir-slash Append a slash to directory names. (default true)
--dirs-only Only list directories.
--files-only Only list files.
--absolute Put a leading / in front of path names
--csv Output in CSV format
-d, --dir-slash Append a slash to directory names (default true)
--dirs-only Only list directories
--files-only Only list files
-F, --format string Output format - see help for details (default "p")
--hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
-R, --recursive Recurse into the listing
-s, --separator string Separator for the items in the format (default ";")
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -55,6 +55,12 @@ returned
If --files-only is not specified directories in addition to the files
will be returned.
If --stat is set then a single JSON blob will be returned about the
item pointed to. This will return an error if the item isn't found.
However, on bucket-based backends (like s3, gcs, b2, azureblob etc.)
if the item isn't found it will return an empty directory, as it isn't
possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
@ -105,16 +111,17 @@ rclone lsjson remote:path [flags]
## Options
```
--dirs-only Show only directories in the listing.
-M, --encrypted Show the encrypted names.
--files-only Show only files in the listing.
--hash Include hashes in the output (may take longer).
--hash-type stringArray Show only this hash type (may be repeated).
--dirs-only Show only directories in the listing
-M, --encrypted Show the encrypted names
--files-only Show only files in the listing
--hash Include hashes in the output (may take longer)
--hash-type stringArray Show only this hash type (may be repeated)
-h, --help help for lsjson
--no-mimetype Don't read the mime type (can speed things up).
--no-modtime Don't read the modification time (can speed things up).
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
--no-mimetype Don't read the mime type (can speed things up)
--no-modtime Don't read the modification time (can speed things up)
--original Show the ID of the underlying Object
-R, --recursive Recurse into the listing
--stat Just return the info for the pointed to file
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -57,7 +57,7 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# macOS
# OS X
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -70,9 +70,6 @@ then an additional 1 PiB of free space is assumed. If the remote does not
[support](https://rclone.org/overview/#optional-features) the about feature
at all, then 1 PiB is set as both the total and the free size.
**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.
## Installing on Windows
To run rclone mount on Windows, you will need to
@ -171,11 +168,16 @@ By default, the owner and group will be taken from the current user, and the bui
group "Everyone" will be used to represent others. The user/group can be customized
with FUSE options "UserName" and "GroupName",
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to [options](#options)
`--dir-perms` and `--file-perms`, which takes a value in traditional
[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
The permissions on each entry will be set according to
[options](#options) `--dir-perms` and `--file-perms`,
which take a value in traditional [numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation),
where the default corresponds to `--file-perms 0666 --dir-perms 0777`.
The default permissions correspond to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions for everyone. This means you will not be able
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
to everyone. If the program needs to write files, chances are you will have
to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
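The numeric notation referenced above encodes read/write/execute bits for owner, group and others as octal digits. A quick illustrative check in Python (not rclone code) of why `0666` cannot start programs while `0777` can:

```python
import stat

EXEC_BITS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH  # 0o111, the three execute bits

# --file-perms 0666: read+write for everyone, no execute bits set
assert 0o666 & EXEC_BITS == 0

# --file-perms 0777: execute added for owner, group and others
assert 0o777 & EXEC_BITS == EXEC_BITS

# Rendered the familiar way for a directory with --dir-perms 0777
print(stat.filemode(stat.S_IFDIR | 0o777))  # drwxrwxrwx
```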
Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
@ -255,7 +257,7 @@ ProcFS so the flag in fact sets **maximum** time to wait, while the real wait
can be less. On macOS / BSD the time to wait is constant and the check is
performed only at the end. We advise you to set wait time on macOS reasonably.
Only supported on Linux, FreeBSD, macOS and Windows at the moment.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
## rclone mount vs rclone sync/copy
@ -390,22 +392,6 @@ Mount option syntax includes a few extra options treated specially:
- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and alike
are intended only for Automountd and ignored by rclone.
## chunked reading
`--vfs-read-chunk-size` will enable reading the source objects in parts.
This can reduce the used download quota for some remotes by requesting only chunks
from the remote that are actually read at the cost of an increased number of requests.
When `--vfs-read-chunk-size-limit` is also specified and greater than
`--vfs-read-chunk-size`, the chunk size for each open file will get doubled
for each chunk read, until the specified value is reached. A value of `-1` will disable
the limit and the chunk size will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@ -427,8 +413,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -482,10 +468,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -497,8 +483,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -567,27 +553,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
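The doubling described above can be sketched as follows (a Python illustration of the documented behaviour, not rclone's actual Go implementation; `chunk_boundaries` is a hypothetical helper):

```python
M = 1024 * 1024  # one MiB

def chunk_boundaries(chunk_size, limit, n):
    """Return the first n (start, end) byte ranges requested for one open file.

    limit=None models "off" (the chunk size doubles indefinitely);
    otherwise growth is capped at limit, but never below chunk_size,
    so a limit of 0 disables doubling entirely.
    """
    out = []
    offset, size = 0, chunk_size
    for _ in range(n):
        out.append((offset, offset + size))
        offset += size
        size *= 2  # chunk size doubles after each read
        if limit is not None and size > limit:
            size = max(limit, chunk_size)
    return out

# --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 0 -> fixed 100M parts
print(chunk_boundaries(100 * M, 0, 3))
# --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M -> doubling capped at 500M
print(chunk_boundaries(100 * M, 500 * M, 5))
```

The second call reproduces the 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M sequence from the text.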
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -595,32 +609,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -633,7 +634,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -678,48 +679,48 @@ rclone mount remote:path /path/to/mountpoint [flags]
## Options
```
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
--allow-other Allow access to other users. Not supported on Windows.
--allow-root Allow access to root user. Not supported on Windows.
--async-read Use asynchronous reads. Not supported on Windows. (default true)
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount in background and exit parent process. Not supported on Windows. As background output is suppressed, use --log-file with --log-format=pid,... to monitor.
--daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD). Ignored on Windows. (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
--async-read Use asynchronous reads (not supported on Windows) (default true)
--attr-timeout duration Time for which file/directory attributes are cached (default 1s)
--daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on macOS only. (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes. Supported on macOS only.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--volname string Set the volume name. Supported on Windows and macOS only.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
```
See the [global flags page](/flags/) for global options not listed here.


@ -26,25 +26,6 @@ move will be used, otherwise it will copy it (server-side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
Note that the destination is always a directory. If the path
does not exist, it will be created as a directory and the contents of
the source will be moved into it. This is the case even if the source
path points to a file. If you want to move a single file to a different
name you must use [moveto](commands/rclone_moveto/) instead.
For example, given the command:
rclone move source:sourcepath/one.txt dest:destpath/one.txt
Rclone will create a directory `dest:destpath/one.txt` and put the source file in there:
dest:destpath/one.txt/one.txt
Not move the single source file into the given destination path,
which would be the result if moveto had been used instead:
dest:destpath/one.txt
If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
See the [--no-traverse](/docs/#no-traverse) option for controlling


@ -34,9 +34,9 @@ This will:
move it to dst, overwriting existing files if they exist
see move command for full details
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. src will be deleted on successful
transfer.
This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. src will be deleted on
successful transfer.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.


@ -31,6 +31,7 @@ Here are the keys - press '?' to toggle the help on and off
c toggle counts
g toggle graph
a toggle average size in directory
u toggle human-readable format
n,s,C,A sort by name,size,count,average size
d delete file/directory
y copy current path to clipboard


@ -69,15 +69,15 @@ rclone rc commands parameter [flags]
## Options
```
-a, --arg stringArray Argument placed in the "arg" array.
-a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
--json string Input JSON - use instead of key=value args.
--loopback If set connect to this rclone instance not via HTTP.
--no-output If set, don't output the JSON result.
-o, --opt stringArray Option in the form name=value or name placed in the "opt" array.
--pass string Password to use to connect to rclone remote control.
--url string URL to connect to rclone remote control. (default "http://localhost:5572/")
--user string Username to use to rclone remote control.
--json string Input JSON - use instead of key=value args
--loopback If set connect to this rclone instance not via HTTP
--no-output If set, don't output the JSON result
-o, --opt stringArray Option in the form name=value or name placed in the "opt" array
--pass string Password to use to connect to rclone remote control
--url string URL to connect to rclone remote control (default "http://localhost:5572/")
--user string Username to use to rclone remote control
```
See the [global flags page](/flags/) for global options not listed here.


@ -67,8 +67,8 @@ rclone selfupdate [flags]
## Options
```
--beta Install beta release.
--check Check for latest release, do not download.
--beta Install beta release
--check Check for latest release, do not download
-h, --help help for selfupdate
--output string Save the downloaded binary at a given path (default: replace running binary)
--package string Package format: zip|deb|rpm (default: zip)


@ -54,8 +54,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -109,10 +109,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -124,8 +124,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -194,27 +194,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
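The schedule described above can be reproduced with a short helper. This is a sketch of the documented doubling behaviour only, not rclone's code; `chunkRanges` is a hypothetical name, sizes are in MiB for readability, and "off" is represented here as a negative limit:

```go
package main

import "fmt"

// chunkRanges returns the ranges (in MiB units) that chunked reading
// would request for a file of the given total size.
//   limit > 0  : double the chunk size after each read, capped at limit
//   limit == 0 : keep the chunk size fixed (no doubling)
//   limit < 0  : "off" - keep doubling without bound
func chunkRanges(chunkSize, limit, total int) [][2]int {
	var ranges [][2]int
	offset, size := 0, chunkSize
	for offset < total {
		end := offset + size
		if end > total {
			end = total
		}
		ranges = append(ranges, [2]int{offset, end})
		offset = end
		switch {
		case limit == 0: // fixed chunk size
		case limit > 0 && size*2 > limit:
			size = limit // doubling reached the limit
		default:
			size *= 2
		}
	}
	return ranges
}

func main() {
	// --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M
	fmt.Println(chunkRanges(100, 500, 1700)) // [[0 100] [100 300] [300 700] [700 1200] [1200 1700]]
	// --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 0
	fmt.Println(chunkRanges(100, 0, 400)) // [[0 100] [100 200] [200 300] [300 400]]
}
```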
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -222,32 +250,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -260,7 +275,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -305,33 +320,33 @@ rclone serve dlna remote:path [flags]
## Options
```
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--log-trace enable trace logging of SOAP traffic
--name string name of DLNA server
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
See the [global flags page](/flags/) for global options not listed here.


@ -72,8 +72,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -127,10 +127,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -142,8 +142,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -212,27 +212,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -240,32 +268,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -278,7 +293,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -323,52 +338,53 @@ rclone serve docker [flags]
## Options
```
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
--allow-other Allow access to other users. Not supported on Windows.
--allow-root Allow access to root user. Not supported on Windows.
--async-read Use asynchronous reads. Not supported on Windows. (default true)
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
--daemon Run mount as a daemon (background mode). Not supported on Windows.
--daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
--async-read Use asynchronous reads (not supported on Windows) (default true)
--attr-timeout duration Time for which file/directory attributes are cached (default 1s)
--base-dir string Base directory for volumes (default "/var/lib/docker-volumes/rclone")
--daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--forget-state skip restoring previous state
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--forget-state Skip restoring previous state
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--no-spec do not write spec file
--noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--no-spec Do not write spec file
--noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--volname string Set the volume name. Supported on Windows and OSX only.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
```
See the [global flags page](/flags/) for global options not listed here.
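Several of the flags above take `SizeSuffix` values such as `128Mi`, `500M`, or `off`. As an illustration of how such values map to byte counts, here is a minimal parser sketch. It assumes (as rclone's docs describe elsewhere, but not on this page) that both `M` and `Mi` are binary, 1024-based multiples, and that `off` disables the limit; it is not rclone's actual implementation.

```python
# Hypothetical helper, not rclone code: map rclone-style SizeSuffix
# strings ("128Mi", "500M", "off", plain byte counts) to byte counts.
# Assumption: bare suffixes like "M" are binary (1024-based), matching "Mi".
UNITS = {"": 1, "B": 1,
         "K": 1024, "Ki": 1024,
         "M": 1024**2, "Mi": 1024**2,
         "G": 1024**3, "Gi": 1024**3,
         "T": 1024**4, "Ti": 1024**4}

def parse_size_suffix(text):
    """Return the size in bytes, or None when the value is "off"."""
    text = text.strip()
    if text.lower() == "off":
        return None
    # Split the trailing unit letters from the leading number.
    i = len(text)
    while i > 0 and not text[i - 1].isdigit():
        i -= 1
    number, suffix = text[:i], text[i:]
    return int(number) * UNITS[suffix]
```

For example, `parse_size_suffix("128Mi")` yields 134217728 bytes, the default for `--vfs-read-chunk-size` above.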


@ -53,8 +53,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -108,10 +108,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -123,8 +123,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
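The write-back rule above can be expressed as a small predicate. This is an illustrative sketch only; the function name and signature are hypothetical, not rclone internals.

```python
def due_for_upload(is_closed, seconds_since_last_access, write_back=5.0):
    """Illustrative predicate (not rclone code) for the rule above:
    a cached file is written back to the remote only once it has been
    closed and has not been accessed for --vfs-write-back seconds
    (default 5s)."""
    return is_closed and seconds_since_last_access >= write_back
```

So a file closed 2 seconds ago is not yet uploaded, while one closed and idle for 6 seconds is.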
@ -193,27 +193,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
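The doubling schedule described above can be reproduced with a short sketch (a hypothetical helper, not rclone's implementation; sizes are in MiB for readability):

```python
def chunk_ranges(chunk_size, limit, count):
    """Byte ranges a reader would request under the doubling rule
    described above. limit=None models "off" (unlimited doubling);
    a limit not greater than chunk_size leaves the chunk size fixed."""
    ranges, offset, size = [], 0, chunk_size
    for _ in range(count):
        ranges.append((offset, offset + size))
        offset += size
        if limit is None:
            size *= 2                      # "off": double indefinitely
        elif limit > chunk_size:
            size = min(size * 2, limit)    # double, capped at the limit
        # otherwise (limit <= chunk_size): no doubling
    return ranges
```

With a 100M chunk and limit 0 this yields the consecutive 100M parts shown above; with limit 500M it yields 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M.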
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -221,32 +249,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -259,7 +274,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -385,38 +400,38 @@ rclone serve ftp remote:path [flags]
## Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth.
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--pass string Password for authentication. (empty value allow every password)
--passive-port string Passive port range to use. (default "30000-32000")
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--public-ip string Public IP address to advertise for passive connections.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication (empty value allow every password)
--passive-port string Passive port range to use (default "30000-32000")
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--public-ip string Public IP address to advertise for passive connections
--read-only Mount read-only
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
See the [global flags page](/flags/) for global options not listed here.


@ -103,6 +103,8 @@ The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
Use --salt to change the password hashing salt from the default.
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@ -124,8 +126,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -179,10 +181,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -194,8 +196,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -264,27 +266,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -292,32 +322,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -330,7 +347,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -375,43 +392,44 @@ rclone serve http remote:path [flags]
## Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root.
--addr string IPaddress:Port or :Port to bind server to (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string htpasswd file - if not provided no authentication is done
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--realm string realm for authentication
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--realm string Realm for authentication
--salt string Password hashing salt (default "dlPL2MqE")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--template string User-specified template
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name is not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -182,24 +182,24 @@ rclone serve restic remote:path [flags]
## Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--append-only disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root.
--cache-objects cache listed objects (default true)
--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--pass string Password for authentication.
--private-repos users can only access their private repo
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--stdio run an HTTP2 server on stdin/stdout
--template string User Specified Template.
--user string User name for authentication.
--stdio Run an HTTP2 server on stdin/stdout
--template string User-specified template
--user string User name for authentication
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -33,8 +33,10 @@ that it can provide md5sum/sha1sum/df information for the rclone sftp
backend. This means that it can support SHA1SUMs, MD5SUMs and the
about command when paired with the rclone sftp backend.
If you don't supply a --key then rclone will generate one and cache it
for later use.
If you don't supply a host --key then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see "rclone help flags cache-dir") in the "serve-sftp"
directory.
By default the server binds to localhost:2022 - if you want it to be
reachable externally then supply "--addr :2022" for example.
@ -69,8 +71,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -124,10 +126,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -139,8 +141,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -209,27 +211,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -237,32 +267,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -275,7 +292,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -401,38 +418,38 @@ rclone serve sftp remote:path [flags]
## Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
--auth-proxy string A program to use to create the backend from the auth.
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
--auth-proxy string A program to use to create the backend from the auth
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
--no-auth Allow connections with no authentication if set.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name is not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -133,8 +133,8 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@ -188,10 +188,10 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with `-vv` rclone will print the location of the file cache. The
files are stored in the user cache file area which is OS dependent but
@ -203,8 +203,8 @@ The higher the cache mode the more compatible rclone becomes at the
cost of using disk space.
Note that files are written back to the remote only when they are
closed and if they haven't been accessed for --vfs-write-back
second. If rclone is quit or dies with files that haven't been
closed and if they haven't been accessed for `--vfs-write-back`
seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
@ -273,27 +273,55 @@ their full size in the cache, but they will be sparse files with only
the data that has been downloaded present in them.
This mode should support all normal file system operations and is
otherwise identical to --vfs-cache-mode writes.
otherwise identical to `--vfs-cache-mode` writes.
When reading a file rclone will read --buffer-size plus
--vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory
whereas the --vfs-read-ahead is buffered on disk.
When reading a file rclone will read `--buffer-size` plus
`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
whereas the `--vfs-read-ahead` is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.
When using this mode it is recommended that `--buffer-size` is not set
too large and `--vfs-read-ahead` is set large if required.
**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.
## VFS Chunked Reading
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This can reduce the used download quota for some
remotes by requesting only chunks from the remote that are actually
read, at the cost of an increased number of requests.
These flags control the chunking:
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
and then double the size for each read. When `--vfs-read-chunk-size-limit` is
specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
open file will get doubled only until the specified value is reached. If the
value is "off", which is the default, the limit is disabled and the chunk size
will grow indefinitely.
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
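For example, chunked reading could be tuned when serving a remote over WebDAV like this. The invocation is illustrative: `remote:path` is a placeholder, and the 64M/1G values are arbitrary choices, not recommendations:

```shell
# Start reading in 64 MiB chunks and stop doubling at 1 GiB (illustrative values).
rclone serve webdav remote:path \
    --vfs-read-chunk-size 64M \
    --vfs-read-chunk-size-limit 1G
```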
## VFS Performance
These flags may be used to enable/disable features of the VFS for
performance or other reasons.
performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
feature.
In particular S3 and Swift benefit hugely from the --no-modtime flag
(or use --use-server-modtime for a slightly different effect) as each
In particular S3 and Swift benefit hugely from the `--no-modtime` flag
(or use `--use-server-modtime` for a slightly different effect) as each
read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
@ -301,32 +329,19 @@ read of the modification time takes a transaction.
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
When rclone reads files from a remote it reads them in chunks. This
means that rather than requesting the whole file rclone reads the
chunk specified. This is advantageous because some cloud providers
account for reads being all the data requested, not all the data
delivered.
Rclone will keep doubling the chunk size requested starting at
--vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit
unless it is set to "off" in which case there will be no limit.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
write to come in. These flags only come into effect when not using an
on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full),
the global flag --transfers can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag --checkers have no effect on mount).
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` has no effect on mount).
--transfers int Number of file transfers to run in parallel. (default 4)
--transfers int Number of file transfers to run in parallel (default 4)
## VFS Case Sensitivity
@ -339,7 +354,7 @@ to create the file is preserved and available for programs to query.
It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
@ -465,46 +480,46 @@ rclone serve webdav remote:path [flags]
## Options
```
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--auth-proxy string A program to use to create the backend from the auth.
--baseurl string Prefix for URLs - leave blank for root.
--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--disable-dir-list Disable HTML directory list on GET request for a directory
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--realm string realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--template string User-specified template
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name is not found, find a case insensitive match
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -17,7 +17,7 @@ rclone size remote:path [flags]
```
-h, --help help for size
--json format output as JSON
--json Format output as JSON
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -13,10 +13,10 @@ Make source and dest identical, modifying destination only.
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary (except duplicate
objects, see below).
only. Doesn't transfer files that are identical on source and
destination, testing by size and modification time or MD5SUM.
Destination is updated to match source, including deleting files
if necessary (except duplicate objects, see below).
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
View File
@ -17,7 +17,7 @@ rclone test changenotify remote: [flags]
```
-h, --help help for changenotify
--poll-interval duration Time to wait between polling for changes. (default 10s)
--poll-interval duration Time to wait between polling for changes (default 10s)
```
See the [global flags page](/flags/) for global options not listed here.
View File
@ -26,14 +26,14 @@ rclone test info [remote:path]+ [flags]
## Options
```
--all Run all tests.
--check-control Check control characters.
--check-length Check max filename length.
--check-normalization Check UTF-8 Normalization.
--check-streaming Check uploads with indeterminate file size.
--all Run all tests
--check-control Check control characters
--check-length Check max filename length
--check-normalization Check UTF-8 Normalization
--check-streaming Check uploads with indeterminate file size
-h, --help help for info
--upload-wait duration Wait after writing a file.
--write-json string Write results to file.
--upload-wait duration Wait after writing a file
--write-json string Write results to file
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -12,21 +12,25 @@ Create new file or change file modification time.
## Synopsis
Set the modification time on object(s) as specified by remote:path to
Set the modification time on file(s) as specified by remote:path to
have the current time.
If remote:path does not exist then a zero sized object will be created
unless the --no-create flag is provided.
If remote:path does not exist then a zero sized file will be created,
unless `--no-create` or `--recursive` is provided.
If --timestamp is used then it will set the modification time to that
If `--recursive` is used then recursively sets the modification
time on all existing files found under the path. Filters are supported,
and you can test with the `--dry-run` or the `--interactive` flag.
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:
- 'YYMMDD' - e.g. 17.10.30
- 'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05
- 'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789
Note that --timestamp is in UTC if you want local time then add the
--localtime flag.
Note that value of `--timestamp` is in UTC. If you want local time
then add the `--localtime` flag.
```
@ -37,9 +41,10 @@ rclone touch remote:path [flags]
```
-h, --help help for touch
--localtime Use localtime for timestamp, not UTC.
-C, --no-create Do not create the file if it does not exist.
-t, --timestamp string Use specified time instead of the current time of day.
--localtime Use localtime for timestamp, not UTC
-C, --no-create Do not create the file if it does not exist (implied with --recursive)
-R, --recursive Recursively touch all files
-t, --timestamp string Use specified time instead of the current time of day
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -43,27 +43,27 @@ rclone tree remote:path [flags]
## Options
```
-a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
--dirsfirst List directories before files (-U disables).
--full-path Print the full path prefix for each file.
-a, --all All files are listed (list . files too)
-C, --color Turn colorization on always
-d, --dirs-only List directories only
--dirsfirst List directories before files (-U disables)
--full-path Print the full path prefix for each file
-h, --help help for tree
--human Print the size in a more human readable way.
--level int Descend only level directories deep.
--level int Descend only level directories deep
-D, --modtime Print the date of last modification.
--noindent Don't print indentation lines.
--noreport Turn off file/directory count at end of tree listing.
-o, --output string Output to file instead of stdout.
--noindent Don't print indentation lines
--noreport Turn off file/directory count at end of tree listing
-o, --output string Output to file instead of stdout
-p, --protections Print the protections for each file.
-Q, --quote Quote filenames with double quotes.
-s, --size Print the size in bytes of each file.
--sort string Select sort: name,version,size,mtime,ctime.
--sort-ctime Sort files by last status change time.
-t, --sort-modtime Sort files by last modification time.
-r, --sort-reverse Reverse the order of the sort.
-U, --unsorted Leave files unsorted.
--version Sort files alphanumerically by version.
--sort string Select sort: name,version,size,mtime,ctime
--sort-ctime Sort files by last status change time
-t, --sort-modtime Sort files by last modification time
-r, --sort-reverse Reverse the order of the sort
-U, --unsorted Leave files unsorted
--version Sort files alphanumerically by version
```
See the [global flags page](/flags/) for global options not listed here.

View File

@ -57,7 +57,7 @@ rclone version [flags]
## Options
```
--check Check for new version.
--check Check for new version
-h, --help help for version
```

View File

@ -88,7 +88,7 @@ The compressed files will be named `*.###########.gz` where `*` is the base file
size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/compress/compress.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to compress (Compress a remote).
@ -113,7 +113,7 @@ Compression mode.
- "gzip"
- Standard gzip compression with fastest parameters.
### Advanced Options
### Advanced options
Here are the advanced options specific to compress (Compress a remote).
@ -121,13 +121,13 @@ Here are the advanced options specific to compress (Compress a remote).
GZIP compression level (-2 to 9).
Generally -1 (default, equivalent to 5) is recommended.
Levels 1 to 9 increase compressiong at the cost of speed.. Going past 6
generally offers very little return.
Level -2 uses Huffmann encoding only. Only use if you now what you
are doing
Level 0 turns off compression.
Generally -1 (default, equivalent to 5) is recommended.
Levels 1 to 9 increase compression at the cost of speed. Going past 6
generally offers very little return.
Level -2 uses Huffman encoding only. Only use if you know what you
are doing.
Level 0 turns off compression.
- Config: level
- Env Var: RCLONE_COMPRESS_LEVEL
@ -137,11 +137,11 @@ GZIP compression level (-2 to 9).
#### --compress-ram-cache-limit
Some remotes don't allow the upload of files with unknown size.
In this case the compressed file will need to be cached to determine
it's size.
Files smaller than this limit will be cached in RAM, file larger than
this limit will be cached on disk
In this case the compressed file will need to be cached to determine
its size.
Files smaller than this limit will be cached in RAM, files larger than
this limit will be cached on disk.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT

View File

@ -409,13 +409,14 @@ integrity of a crypted remote instead of `rclone check` which can't
check the checksums properly.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/crypt/crypt.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-remote
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
@ -434,11 +435,13 @@ How to encrypt the filenames.
- Default: "standard"
- Examples:
- "standard"
- Encrypt the filenames see the docs for the details.
- Encrypt the filenames.
- See the docs for the details.
- "obfuscate"
- Very simple filename obfuscation.
- "off"
- Don't encrypt the file names. Adds a ".bin" extension only.
- Don't encrypt the file names.
- Adds a ".bin" extension only.
#### --crypt-directory-name-encryption
@ -469,7 +472,9 @@ Password or pass phrase for encryption.
#### --crypt-password2
Password or pass phrase for salt. Optional but recommended.
Password or pass phrase for salt.
Optional but recommended.
Should be different to the previous password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -479,7 +484,7 @@ Should be different to the previous password.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
@ -532,7 +537,7 @@ Option to either encrypt file data or leave it unencrypted.
- "false"
- Encrypt file data.
### Backend commands
## Backend commands
Here are the commands specific to the crypt backend.
@ -548,7 +553,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### encode
### encode
Encode the given filename(s)
@ -563,7 +568,7 @@ Usage Example:
rclone rc backend/command command=encode fs=crypt: file1 [file2...]
#### decode
### decode
Decode the given filename(s)

View File

@ -543,7 +543,7 @@ Google Documents.
| webloc | macOS specific XML format | macOS |
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/drive/drive.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to drive (Google Drive).
@ -561,7 +561,8 @@ If you leave this blank, it will use an internal key which is low performance.
#### --drive-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -595,7 +596,7 @@ Scope that rclone should use when requesting access from drive.
#### --drive-root-folder-id
ID of the root folder
ID of the root folder.
Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
@ -609,13 +610,13 @@ a non root folder as its starting point.
#### --drive-service-account-file
Service Account Credentials JSON file path
Service Account Credentials JSON file path.
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
@ -623,14 +624,14 @@ Leading `~` will be expanded in the file name as will environment variables such
#### --drive-alternate-export
Deprecated: no longer needed
Deprecated: No longer needed.
- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to drive (Google Drive).
@ -646,6 +647,7 @@ OAuth Access Token as a JSON blob.
#### --drive-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -656,6 +658,7 @@ Leave blank to use the provider defaults.
#### --drive-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -665,7 +668,8 @@ Leave blank to use the provider defaults.
#### --drive-service-account-credentials
Service Account Credentials JSON blob
Service Account Credentials JSON blob.
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
@ -676,7 +680,7 @@ Needed only if you want use SA instead of interactive login.
#### --drive-team-drive
ID of the Shared Drive (Team Drive)
ID of the Shared Drive (Team Drive).
- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
@ -695,6 +699,7 @@ Only consider files owned by the authenticated user.
#### --drive-use-trash
Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash.
Use `--drive-use-trash=false` to delete files permanently instead.
@ -706,6 +711,7 @@ Use `--drive-use-trash=false` to delete files permanently instead.
#### --drive-skip-gdocs
Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.
- Config: skip_gdocs
@ -752,6 +758,7 @@ commands (copy, sync, etc.), and with all other commands too.
#### --drive-trashed-only
Only show files that are in the trash.
This will show trashed files in their original directory structure.
- Config: trashed_only
@ -770,7 +777,7 @@ Only show files that are starred.
#### --drive-formats
Deprecated: see export_formats
Deprecated: See export_formats.
- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
@ -797,7 +804,9 @@ Comma separated list of preferred formats for uploading Google docs.
#### --drive-allow-import-name-change
Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
Allow the filetype to change when uploading Google docs.
E.g. file.doc to file.docx. This will confuse sync and reupload every time.
- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
@ -806,7 +815,7 @@ Allow the filetype to change when uploading Google docs (e.g. file.doc to file.d
#### --drive-use-created-date
Use file created date instead of modified date.,
Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in
place of the last modified date.
@ -846,7 +855,7 @@ date is used.
#### --drive-list-chunk
Size of listing chunk 100-1000. 0 to disable.
Size of listing chunk 100-1000, 0 to disable.
- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
@ -864,7 +873,7 @@ Impersonate this user when using a service account.
#### --drive-upload-cutoff
Cutoff for switching to chunked upload
Cutoff for switching to chunked upload.
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
@ -873,7 +882,9 @@ Cutoff for switching to chunked upload
#### --drive-chunk-size
Upload chunk size. Must a power of 2 >= 256k.
Upload chunk size.
Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk
is buffered in memory, one per transfer.
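Since the value must be a power of two of at least 256k, a candidate size can be checked with plain shell arithmetic. This is a generic sketch, not an rclone feature; the 8Mi value is an arbitrary example.

```shell
n=$((8 * 1024 * 1024))   # candidate chunk size: 8Mi
# a positive n is a power of two exactly when n & (n - 1) is zero
if [ $((n & (n - 1))) -eq 0 ] && [ "$n" -ge $((256 * 1024)) ]; then
  echo "ok: power of 2 and >= 256k"
fi
```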
@ -974,7 +985,7 @@ configurations.
#### --drive-disable-http2
Disable drive using http2
Disable drive using http2.
There is currently an unsolved issue with the google drive backend and
HTTP/2. HTTP/2 is therefore disabled by default for the drive backend
@ -992,7 +1003,7 @@ See: https://github.com/rclone/rclone/issues/3631
#### --drive-stop-on-upload-limit
Make upload limit errors be fatal
Make upload limit errors be fatal.
At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit). When this limit is
@ -1013,7 +1024,7 @@ See: https://github.com/rclone/rclone/issues/3857
#### --drive-stop-on-download-limit
Make download limit errors be fatal
Make download limit errors be fatal.
At the time of writing it is only possible to download 10 TiB of data from
Google Drive a day (this is an undocumented limit). When this limit is
@ -1032,7 +1043,7 @@ Google don't document so it may break in the future.
#### --drive-skip-shortcuts
If set skip shortcut files
If set skip shortcut files.
Normally rclone dereferences shortcut files making them appear as if
they are the original file (see [the shortcuts section](#shortcuts)).
@ -1048,14 +1059,14 @@ If this flag is set then rclone will ignore shortcut files completely.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- Type: MultiEncoder
- Default: InvalidUtf8
### Backend commands
## Backend commands
Here are the commands specific to the drive backend.
@ -1071,7 +1082,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### get
### get
Get command for fetching the drive config parameters
@ -1090,7 +1101,7 @@ Options:
- "chunk_size": show the current upload chunk size
- "service_account_file": show the current service account file
#### set
### set
Set command for updating the drive config parameters
@ -1109,7 +1120,7 @@ Options:
- "chunk_size": update the current upload chunk size
- "service_account_file": update the current service account file
#### shortcut
### shortcut
Create shortcuts from files or directories
@ -1137,7 +1148,7 @@ Options:
- "target": optional target remote for the shortcut destination
#### drives
### drives
List the Shared Drives available to this account
@ -1148,7 +1159,7 @@ account.
Usage:
rclone backend drives drive:
rclone backend [-o config] drives drive:
This will return a JSON list of objects like this
@ -1165,9 +1176,25 @@ This will return a JSON list of objects like this
}
]
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
drives found.
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. This may require manual editing
of the names.
#### untrash
### untrash
Untrash files and directories
@ -1194,7 +1221,7 @@ Result:
}
#### copyid
### copyid
Copy files by ID

View File

@ -180,13 +180,14 @@ finishes up the last batch using this mode.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/dropbox/dropbox.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to dropbox (Dropbox).
#### --dropbox-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -196,7 +197,8 @@ Leave blank normally.
#### --dropbox-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -204,7 +206,7 @@ Leave blank normally.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to dropbox (Dropbox).
@ -220,6 +222,7 @@ OAuth Access Token as a JSON blob.
#### --dropbox-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -230,6 +233,7 @@ Leave blank to use the provider defaults.
#### --dropbox-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -239,7 +243,7 @@ Leave blank to use the provider defaults.
#### --dropbox-chunk-size
Upload chunk size. (< 150Mi).
Upload chunk size (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
@ -361,7 +365,7 @@ maximise throughput.
#### --dropbox-batch-timeout
Max time to allow an idle upload batch before uploading
Max time to allow an idle upload batch before uploading.
If an upload batch is idle for more than this long then it will be
uploaded.
@ -379,11 +383,20 @@ default based on the batch_mode in use.
- Type: Duration
- Default: 0s
#### --dropbox-batch-commit-timeout
Max time to wait for a batch to finish committing.
- Config: batch_commit_timeout
- Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
- Type: Duration
- Default: 10m0s
#### --dropbox-encoding
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_DROPBOX_ENCODING

View File

@ -114,26 +114,26 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/fichier/fichier.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to fichier (1Fichier).
#### --fichier-api-key
Your API Key, get it from https://1fichier.com/console/params.pl
Your API Key, get it from https://1fichier.com/console/params.pl.
- Config: api_key
- Env Var: RCLONE_FICHIER_API_KEY
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to fichier (1Fichier).
#### --fichier-shared-folder
If you want to download a shared folder, add this parameter
If you want to download a shared folder, add this parameter.
- Config: shared_folder
- Env Var: RCLONE_FICHIER_SHARED_FOLDER
@ -142,7 +142,7 @@ If you want to download a shared folder, add this parameter
#### --fichier-file-password
If you want to download a shared file that is password protected, add this parameter
If you want to download a shared file that is password protected, add this parameter.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -153,7 +153,7 @@ If you want to download a shared file that is password protected, add this param
#### --fichier-folder-password
If you want to list the files in a shared folder that is password protected, add this parameter
If you want to list the files in a shared folder that is password protected, add this parameter.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -166,7 +166,7 @@ If you want to list the files in a shared folder that is password protected, add
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING

View File

@ -152,13 +152,13 @@ $ rclone lsf --dirs-only -Fip --csv filefabric:
The ID for "S3 Storage" would be `120673761`.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/filefabric/filefabric.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to filefabric (Enterprise File Fabric).
#### --filefabric-url
URL of the Enterprise File Fabric to connect to
URL of the Enterprise File Fabric to connect to.
- Config: url
- Env Var: RCLONE_FILEFABRIC_URL
@ -174,7 +174,8 @@ URL of the Enterprise File Fabric to connect to
#### --filefabric-root-folder-id
ID of the root folder
ID of the root folder.
Leave blank normally.
Fill in to make rclone start with directory of a given ID.
@ -187,7 +188,7 @@ Fill in to make rclone start with directory of a given ID.
#### --filefabric-permanent-token
Permanent Authentication Token
Permanent Authentication Token.
A Permanent Authentication Token can be created in the Enterprise File
Fabric, on the users Dashboard under Security, there is an entry
@ -204,13 +205,13 @@ For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to filefabric (Enterprise File Fabric).
#### --filefabric-token
Session Token
Session Token.
This is a session token which rclone caches in the config file. It is
usually valid for 1 hour.
@ -225,7 +226,7 @@ Don't set this value - rclone will set it automatically.
#### --filefabric-token-expiry
Token expiry time
Token expiry time.
Don't set this value - rclone will set it automatically.
@ -237,7 +238,7 @@ Don't set this value - rclone will set it automatically.
#### --filefabric-version
Version read from the file fabric
Version read from the file fabric.
Don't set this value - rclone will set it automatically.
@ -251,7 +252,7 @@ Don't set this value - rclone will set it automatically.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING

View File

@ -21,7 +21,6 @@ These flags are available for every command.
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert string CA certificate used to verify servers
--temp-dir string Directory rclone will use for temporary files (default "$TMPDIR")
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
@ -50,20 +49,21 @@ These flags are available for every command.
--exclude-from stringArray Read exclude patterns from file (use - to read from stdin)
--exclude-if-present string Exclude directories if filename is present
--expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s)
--fast-list Use recursive list if available; Uses more memory but fewer transactions
--fast-list Use recursive list if available; uses more memory but fewer transactions
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file (use - to read from stdin)
--fs-cache-expire-duration duration cache remotes for this long (0 to disable caching) (default 5m0s)
--fs-cache-expire-interval duration interval to check for expired remotes (default 1m0s)
--fs-cache-expire-duration duration Cache remotes for this long (0 to disable caching) (default 5m0s)
--fs-cache-expire-interval duration Interval to check for expired remotes (default 1m0s)
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
--human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-errors delete even if there are I/O errors
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
@ -122,7 +122,7 @@ These flags are available for every command.
--rc-serve Enable the serving of remote objects
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-template string User Specified Template
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
@ -131,10 +131,10 @@ These flags are available for every command.
--rc-web-gui-update Check and update to latest version of web gui
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5. (0 to disable)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats, 0 for no limit (default 45)
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
@ -145,6 +145,7 @@ These flags are available for every command.
--suffix-keep-extension Preserve the extension when using --suffix
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--temp-dir string Directory rclone will use for temporary files (default "/tmp")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
@ -156,7 +157,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.56.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.57.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@ -166,437 +167,451 @@ These flags are available for every command. They control the backends
and may be set in the config file.
```
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
--azureblob-access-tier string Access tier of blob: hot, cool or archive
--azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB) (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
--azureblob-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator)
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azureblob-public-access string Public access level of a container: blob, container
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
--b2-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-versions Include old versions in directory listings
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
      --box-box-sub-type string                            (default "user")
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
--box-token string OAuth Access Token as a JSON blob
--box-token-url string Token server url
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc) (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
--cache-plex-password string The password of the Plex user (obscured)
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Cache file data on writes through the FS
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
-L, --copy-links Follow symlinks and copy the pointed to item
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
--crypt-password string Password or pass phrase for encryption (obscured)
--crypt-password2 string Password or pass phrase for salt (obscured)
--crypt-remote string Remote to encrypt/decrypt
--crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs
--crypt-show-mapping For all files listed show how the names encrypt
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx)
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder This sets the encoding for the backend (default InvalidUtf8)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive
--drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-size-as-quota Show sizes as storage quota usage, not actual size
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob
--drive-token-url string Token server url
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date
--drive-use-shared-date Use date file was shared instead of modified date
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port string FTP port, leave blank to use default (21)
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-location string Location for the newly created buckets
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage
--gcs-token string OAuth Access Token as a JSON blob
--gcs-token-url string Token server url
--gphotos-auth-url string Auth server URL
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob.
--gphotos-token-url string Token server url
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding MultiEncoder This sets the encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
      --hdfs-namenode string                               Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
      --hdfs-username string                               Hadoop user name
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests to find file sizes in dir listing
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
--jottacloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
--jottacloud-trashed-only Only show files that are in the trash
      --jottacloud-upload-resume-limit SizeSuffix          Files bigger than this can be resumed if the upload fails (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-encoding MultiEncoder This sets the encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc string Disable UNC (long path names) conversion on Windows
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
--mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
--mega-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-auth-url string Auth server URL
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--onedrive-list-chunk int Size of listing chunk (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8)
      --qingstor-endpoint string                           Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
      --s3-force-path-style                                If true use path style access; if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, don't HEAD objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-v2-auth If true use v2 authentication
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
      --sftp-key-pem string                                Raw PEM-encoded private key; if specified, will override key_file parameter
--sftp-key-use-agent When set forces the usage of the ssh-agent
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH connection
--sftp-port string SSH port, leave blank to use default (22)
--sftp-pubkey-file string Optional path to public key file
      --sftp-server-command string                         Specifies the path or command to run an sftp server on the remote host
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-skip-links Set to skip any symlinks and any other non regular files
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
--sftp-user string SSH username, leave blank for current username, $USER
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--skip-links Don't warn about skipped symlinks
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--tardigrade-access-grant string Access grant
--tardigrade-api-key string API key
--tardigrade-passphrase string Encryption passphrase
--tardigrade-provider string Choose an authentication method (default "existing")
--tardigrade-satellite-address <nodeid>@<address>:<port> Satellite address (default "us-central-1.tardigrade.io")
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access Token, get it from https://uptobox.com/my_account
--uptobox-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
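Every backend flag listed above can also be supplied as an environment variable: drop the leading `--`, uppercase the name, replace `-` with `_`, and prefix `RCLONE_`. A minimal shell sketch of the naming rule (the rclone invocations in the comments are illustrative and assume a configured remote named `s3remote`):

```shell
# Derive the environment variable name for a backend flag.
# Rule: strip the leading "--", uppercase, '-' -> '_', prefix "RCLONE_".
flag="--s3-upload-concurrency"
env_var="RCLONE_$(printf '%s' "${flag#--}" | tr 'a-z-' 'A-Z_')"
echo "$env_var"

# These two invocations are then equivalent:
#   rclone copy /local/dir s3remote:bucket --s3-upload-concurrency 8
#   RCLONE_S3_UPLOAD_CONCURRENCY=8 rclone copy /local/dir s3remote:bucket
```

Command-line flags take precedence over environment variables, which in turn take precedence over values stored in the config file.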
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
--azureblob-access-tier string Access tier of blob: hot, cool or archive
--azureblob-account string Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB) (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
--azureblob-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
      --azureblob-msi-client-id string                     Client ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azureblob-no-head-object If set, do not do HEAD before GET when getting objects
--azureblob-public-access string Public access level of a container: blob or container
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
--b2-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-versions Include old versions in directory listings
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
--box-token string OAuth Access Token as a JSON blob
--box-token-url string Token server url
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
--cache-plex-password string The password of the Plex user (obscured)
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Cache file data on writes through the FS
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
-L, --copy-links Follow symlinks and copy the pointed to item
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
--crypt-password string Password or pass phrase for encryption (obscured)
--crypt-password2 string Password or pass phrase for salt (obscured)
--crypt-remote string Remote to encrypt/decrypt
--crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs
--crypt-show-mapping For all files listed show how the names encrypt
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder This sets the encoding for the backend (default InvalidUtf8)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive
--drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-size-as-quota Show sizes as storage quota usage, not actual size
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob
--drive-token-url string Token server url
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date
--drive-use-shared-date Use date file was shared instead of modified date
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
--dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port string FTP port, leave blank to use default (21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username, leave blank for current username, $USER
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-location string Location for the newly created buckets
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage
--gcs-token string OAuth Access Token as a JSON blob
--gcs-token-url string Token server url
--gphotos-auth-url string Auth server URL
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding MultiEncoder This sets the encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode string Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests to find file sizes in dir listing
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
--jottacloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-encoding MultiEncoder This sets the encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc string Disable UNC (long path names) conversion on Windows
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
--mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
--mega-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-user string User name
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-auth-url string Auth server URL
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--onedrive-list-chunk int Size of listing chunk (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
--s3-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-v2-auth If true use v2 authentication
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-file string Path to PEM-encoded private key file
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
--sftp-key-pem string Raw PEM-encoded private key
--sftp-key-use-agent When set forces the usage of the ssh-agent
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH connection
--sftp-port string SSH port, leave blank to use default (22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-skip-links Set to skip any symlinks and any other non regular files
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
--sftp-user string SSH username, leave blank for current username, $USER
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
--sia-encoding MultiEncoder This sets the encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--tardigrade-access-grant string Access grant
--tardigrade-api-key string API key
--tardigrade-passphrase string Encryption passphrase
--tardigrade-provider string Choose an authentication method (default "existing")
--tardigrade-satellite-address string Satellite address (default "us-central-1.tardigrade.io")
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
--uptobox-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
```

View File

@ -136,25 +136,24 @@ sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
Just hit a selection number when prompted.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to ftp (FTP Connection).
#### --ftp-host
FTP host to connect to
FTP host to connect to.
E.g. "ftp.example.com".
- Config: host
- Env Var: RCLONE_FTP_HOST
- Type: string
- Default: ""
- Examples:
- "ftp.example.com"
- Connect to ftp.example.com
#### --ftp-user
FTP username, leave blank for current username, $USER
FTP username, leave blank for current username, $USER.
- Config: user
- Env Var: RCLONE_FTP_USER
@ -163,7 +162,7 @@ FTP username, leave blank for current username, $USER
#### --ftp-port
FTP port, leave blank to use default (21)
FTP port, leave blank to use default (21).
- Config: port
- Env Var: RCLONE_FTP_PORT
@ -172,7 +171,7 @@ FTP port, leave blank to use default (21)
#### --ftp-pass
FTP password
FTP password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -183,7 +182,8 @@ FTP password
#### --ftp-tls
Use Implicit FTPS (FTP over TLS)
Use Implicit FTPS (FTP over TLS).
When using implicit FTP over TLS the client connects using TLS
right from the start which breaks compatibility with
non-TLS-aware servers. This is usually served over port 990 rather
@ -196,7 +196,8 @@ than port 21. Cannot be used in combination with explicit FTP.
#### --ftp-explicit-tls
Use Explicit FTPS (FTP over TLS)
Use Explicit FTPS (FTP over TLS).
When using explicit FTP over TLS the client explicitly requests
security from the server in order to upgrade a plain text connection
to an encrypted one. Cannot be used in combination with implicit FTP.
@ -206,13 +207,13 @@ to an encrypted one. Cannot be used in combination with implicit FTP.
- Type: bool
- Default: false
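As a sketch, a remote using explicit FTPS might look like this in `rclone.conf` (the remote name, host and user are placeholders, not taken from this documentation):

```ini
[myftps]
type = ftp
host = ftp.example.com
user = myuser
# Upgrade the plain text connection to TLS on the standard port 21
explicit_tls = true
```

For implicit FTPS set `tls = true` instead; as noted above, the two options cannot be combined.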
### Advanced Options
### Advanced options
Here are the advanced options specific to ftp (FTP Connection).
#### --ftp-concurrency
Maximum number of FTP simultaneous connections, 0 for unlimited
Maximum number of FTP simultaneous connections, 0 for unlimited.
- Config: concurrency
- Env Var: RCLONE_FTP_CONCURRENCY
@ -221,7 +222,7 @@ Maximum number of FTP simultaneous connections, 0 for unlimited
#### --ftp-no-check-certificate
Do not verify the TLS certificate of the server
Do not verify the TLS certificate of the server.
- Config: no_check_certificate
- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
@ -230,7 +231,7 @@ Do not verify the TLS certificate of the server
#### --ftp-disable-epsv
Disable using EPSV even if server advertises support
Disable using EPSV even if server advertises support.
- Config: disable_epsv
- Env Var: RCLONE_FTP_DISABLE_EPSV
@ -239,7 +240,7 @@ Disable using EPSV even if server advertises support
#### --ftp-disable-mlsd
Disable using MLSD even if server advertises support
Disable using MLSD even if server advertises support.
- Config: disable_mlsd
- Env Var: RCLONE_FTP_DISABLE_MLSD
@ -257,7 +258,7 @@ Use MDTM to set modification time (VsFtpd quirk)
#### --ftp-idle-timeout
Max time before closing idle connections
Max time before closing idle connections.
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
@ -279,11 +280,42 @@ Maximum time to wait for a response to close.
- Type: Duration
- Default: 1m0s
#### --ftp-tls-cache-size
Size of TLS session cache for all control and data connections.
The TLS cache allows TLS sessions to be resumed and PSKs to be reused between connections.
Increase if the default size is not enough, which shows up as TLS resumption errors.
Enabled by default. Use 0 to disable.
- Config: tls_cache_size
- Env Var: RCLONE_FTP_TLS_CACHE_SIZE
- Type: int
- Default: 32
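Assuming an existing FTP remote named `myftps` (a placeholder), the cache size could be raised in `rclone.conf` like so:

```ini
[myftps]
type = ftp
host = ftp.example.com
explicit_tls = true
# Default is 32; raise it if TLS resumption errors appear
tls_cache_size = 64
```

The same setting is also available as the `--ftp-tls-cache-size` flag or the `RCLONE_FTP_TLS_CACHE_SIZE` environment variable.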
#### --ftp-disable-tls13
Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
- Config: disable_tls13
- Env Var: RCLONE_FTP_DISABLE_TLS13
- Type: bool
- Default: false
#### --ftp-shut-timeout
Maximum time to wait for data connection closing status.
- Config: shut_timeout
- Env Var: RCLONE_FTP_SHUT_TIMEOUT
- Type: Duration
- Default: 1m0s
#### --ftp-encoding
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING

View File

@ -271,13 +271,14 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
#### --gcs-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -287,7 +288,8 @@ Leave blank normally.
#### --gcs-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -298,6 +300,7 @@ Leave blank normally.
#### --gcs-project-number
Project number.
Optional - needed only for list/create/delete buckets - see your developer console.
- Config: project_number
@ -307,13 +310,13 @@ Optional - needed only for list/create/delete buckets - see your developer conso
#### --gcs-service-account-file
Service Account Credentials JSON file path
Service Account Credentials JSON file path.
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
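A minimal sketch of a remote using a service account file (the project number and key path here are made up for illustration):

```ini
[gcs]
type = google cloud storage
project_number = 123456789012
# Leading ~ and environment variables such as ${RCLONE_CONFIG_DIR} are expanded
service_account_file = ~/keys/rclone-sa.json
```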
@ -321,7 +324,8 @@ Leading `~` will be expanded in the file name as will environment variables such
#### --gcs-service-account-credentials
Service Account Credentials JSON blob
Service Account Credentials JSON blob.
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
@ -332,7 +336,8 @@ Needed only if you want use SA instead of interactive login.
#### --gcs-anonymous
Access public buckets and objects without credentials
Access public buckets and objects without credentials.
Set to 'true' if you just want to download files and don't configure credentials.
- Config: anonymous
@ -350,17 +355,23 @@ Access Control List for new objects.
- Default: ""
- Examples:
- "authenticatedRead"
- Object owner gets OWNER access, and all Authenticated Users get READER access.
- Object owner gets OWNER access.
- All Authenticated Users get READER access.
- "bucketOwnerFullControl"
- Object owner gets OWNER access, and project team owners get OWNER access.
- Object owner gets OWNER access.
- Project team owners get OWNER access.
- "bucketOwnerRead"
- Object owner gets OWNER access, and project team owners get READER access.
- Object owner gets OWNER access.
- Project team owners get READER access.
- "private"
- Object owner gets OWNER access [default if left blank].
- Object owner gets OWNER access.
- Default if left blank.
- "projectPrivate"
- Object owner gets OWNER access, and project team members get access according to their roles.
- Object owner gets OWNER access.
- Project team members get access according to their roles.
- "publicRead"
- Object owner gets OWNER access, and all Users get READER access.
- Object owner gets OWNER access.
- All Users get READER access.
#### --gcs-bucket-acl
@ -372,15 +383,19 @@ Access Control List for new buckets.
- Default: ""
- Examples:
- "authenticatedRead"
- Project team owners get OWNER access, and all Authenticated Users get READER access.
- Project team owners get OWNER access.
- All Authenticated Users get READER access.
- "private"
- Project team owners get OWNER access [default if left blank].
- Project team owners get OWNER access.
- Default if left blank.
- "projectPrivate"
- Project team members get access according to their roles.
- "publicRead"
- Project team owners get OWNER access, and all Users get READER access.
- Project team owners get OWNER access.
- All Users get READER access.
- "publicReadWrite"
- Project team owners get OWNER access, and all Users get WRITER access.
- Project team owners get OWNER access.
- All Users get WRITER access.
#### --gcs-bucket-policy-only
@ -413,45 +428,45 @@ Location for the newly created buckets.
- Default: ""
- Examples:
- ""
- Empty for default location (US).
- Empty for default location (US)
- "asia"
- Multi-regional location for Asia.
- Multi-regional location for Asia
- "eu"
- Multi-regional location for Europe.
- Multi-regional location for Europe
- "us"
- Multi-regional location for United States.
- Multi-regional location for United States
- "asia-east1"
- Taiwan.
- Taiwan
- "asia-east2"
- Hong Kong.
- Hong Kong
- "asia-northeast1"
- Tokyo.
- Tokyo
- "asia-south1"
- Mumbai.
- Mumbai
- "asia-southeast1"
- Singapore.
- Singapore
- "australia-southeast1"
- Sydney.
- Sydney
- "europe-north1"
- Finland.
- Finland
- "europe-west1"
- Belgium.
- Belgium
- "europe-west2"
- London.
- London
- "europe-west3"
- Frankfurt.
- Frankfurt
- "europe-west4"
- Netherlands.
- Netherlands
- "us-central1"
- Iowa.
- Iowa
- "us-east1"
- South Carolina.
- South Carolina
- "us-east4"
- Northern Virginia.
- Northern Virginia
- "us-west1"
- Oregon.
- Oregon
- "us-west2"
- California.
- California
#### --gcs-storage-class
@ -477,7 +492,7 @@ The storage class to use when storing objects in Google Cloud Storage.
- "DURABLE_REDUCED_AVAILABILITY"
- Durable reduced availability storage class
### Advanced Options
### Advanced options
Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
@ -493,6 +508,7 @@ OAuth Access Token as a JSON blob.
#### --gcs-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -503,6 +519,7 @@ Leave blank to use the provider defaults.
#### --gcs-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -514,7 +531,7 @@ Leave blank to use the provider defaults.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING

View File

@ -222,13 +222,14 @@ The `shared-album` directory shows albums shared with you or by you.
This is similar to the Sharing tab in the Google Photos web interface.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlephotos/googlephotos.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to google photos (Google Photos).
#### --gphotos-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -238,7 +239,8 @@ Leave blank normally.
#### --gphotos-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -258,7 +260,7 @@ to your photos, otherwise rclone will request full access.
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to google photos (Google Photos).
@ -274,6 +276,7 @@ OAuth Access Token as a JSON blob.
#### --gphotos-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -284,6 +287,7 @@ Leave blank to use the provider defaults.
#### --gphotos-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -308,7 +312,7 @@ you want to read the media.
#### --gphotos-start-year
Year limits the photos to be downloaded to those which are uploaded after the given year
Year limits the photos to be downloaded to those which are uploaded after the given year.
- Config: start_year
- Env Var: RCLONE_GPHOTOS_START_YEAR
@ -336,6 +340,17 @@ listings and won't be transferred.
- Type: bool
- Default: false
#### --gphotos-encoding
This sets the encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
- Type: MultiEncoder
- Default: Slash,CrLf,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations

View File

@ -170,7 +170,7 @@ or by full re-read/re-write of the files.
## Configuration reference
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hasher/hasher.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to hasher (Better checksums for other remotes).
@ -201,7 +201,7 @@ Maximum time to keep checksums in cache (0 = no cache, off = cache forever).
- Type: Duration
- Default: off
### Advanced Options
### Advanced options
Here are the advanced options specific to hasher (Better checksums for other remotes).
@ -214,7 +214,7 @@ Auto-update checksum for files smaller than this size (disabled by default).
- Type: SizeSuffix
- Default: 0
### Backend commands
## Backend commands
Here are the commands specific to the hasher backend.
@ -230,7 +230,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### drop
### drop
Drop cache
@ -241,7 +241,7 @@ Usage Example:
rclone backend drop hasher:
#### dump
### dump
Dump the database
@ -249,7 +249,7 @@ Dump the database
Dump cache records covered by the current remote
#### fulldump
### fulldump
Full dump of the database
@ -257,7 +257,7 @@ Full dump of the database
Dump all cache records in the database
#### import
### import
Import a SUM file
@ -268,7 +268,7 @@ Usage Example:
rclone backend import hasher:subdir md5 /path/to/sum.md5
#### stickyimport
### stickyimport
Perform fast import of a SUM file

View File

@ -149,25 +149,24 @@ the following characters are also replaced:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8).
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hdfs/hdfs.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
hadoop name node and port
Hadoop name node and port.
E.g. "namenode:8020" to connect to host namenode at port 8020.
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
- Type: string
- Default: ""
- Examples:
- "namenode:8020"
- Connect to host namenode at port 8020
#### --hdfs-username
hadoop user name
Hadoop user name.
- Config: username
- Env Var: RCLONE_HDFS_USERNAME
@ -175,30 +174,28 @@ hadoop user name
- Default: ""
- Examples:
- "root"
- Connect to hdfs as root
- Connect to hdfs as root.
### Advanced Options
### Advanced options
Here are the advanced options specific to hdfs (Hadoop distributed file system).
#### --hdfs-service-principal-name
Kerberos service principal name for the namenode
Kerberos service principal name for the namenode.
Enables KERBEROS authentication. Specifies the Service Principal Name
(SERVICE/FQDN) for the namenode.
(SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker"
for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
- Type: string
- Default: ""
- Examples:
- "hdfs/namenode.hadoop.docker"
- Namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
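Putting the pieces together, a Kerberos-enabled HDFS remote could be sketched as follows (the namenode FQDN is taken from the example above; the remote name and username are placeholders):

```ini
[hdfs-krb]
type = hdfs
namenode = namenode.hadoop.docker:8020
username = rclone
# SERVICE/FQDN of the namenode; enables KERBEROS authentication
service_principal_name = hdfs/namenode.hadoop.docker
```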
#### --hdfs-data-transfer-protection
Kerberos data transfer protection: authentication|integrity|privacy
Kerberos data transfer protection: authentication|integrity|privacy.
Specifies whether or not authentication, data signature integrity
checks, and wire encryption are required when communicating with the
@ -217,7 +214,7 @@ datanodes. Possible values are 'authentication', 'integrity' and
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING

View File

@ -101,33 +101,30 @@ without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to http (http Connection).
#### --http-url
URL of http host to connect to
URL of http host to connect to.
E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
- Config: url
- Env Var: RCLONE_HTTP_URL
- Type: string
- Default: ""
- Examples:
- "https://example.com"
- Connect to example.com
- "https://user:pass@example.com"
- Connect to example.com using a username and password
### Advanced Options
### Advanced options
Here are the advanced options specific to http (http Connection).
#### --http-headers
Set HTTP headers for all transactions
Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions
Use this to set additional HTTP headers for all transactions.
The input format is comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.
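As an illustration of that comma separated format, a config file entry might look like this (the header values are placeholders):

```ini
[website]
type = http
url = https://example.com
# Comma separated key,value pairs; standard CSV quoting may be used
headers = Cookie,name=value,Authorization,xxx
```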
@ -144,7 +141,7 @@ You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'
#### --http-no-slash
Set this if the site doesn't end directories with /
Set this if the site doesn't end directories with /.
Use this if your target website does not use / on the end of
directories.
@ -164,7 +161,7 @@ directories.
#### --http-no-head
Don't use HEAD requests to find file sizes in dir listing
Don't use HEAD requests to find file sizes in dir listing.
If your site is being very slow to load then you can try this option.
Normally rclone does a HEAD request for each potential file in a

View File

@ -107,13 +107,14 @@ Note that Hubic wraps the Swift backend, so most of the properties of
are the same.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hubic/hubic.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to hubic (Hubic).
#### --hubic-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -123,7 +124,8 @@ Leave blank normally.
#### --hubic-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -131,7 +133,7 @@ Leave blank normally.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to hubic (Hubic).
@ -147,6 +149,7 @@ OAuth Access Token as a JSON blob.
#### --hubic-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -157,6 +160,7 @@ Leave blank to use the provider defaults.
#### --hubic-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -198,7 +202,7 @@ copy operations.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_HUBIC_ENCODING

View File

@ -220,7 +220,7 @@ command which will display your usage limit (unless it is unlimited)
and the current usage.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/jottacloud/jottacloud.go then run make backenddocs" >}}
### Advanced Options
### Advanced options
Here are the advanced options specific to jottacloud (Jottacloud).
@ -236,6 +236,7 @@ Files bigger than this will be cached on disk to calculate the MD5 if required.
#### --jottacloud-trashed-only
Only show files that are in the trash.
This will show trashed files in their original directory structure.
- Config: trashed_only
@ -274,7 +275,7 @@ Avoid server side versioning by deleting files and recreating files instead of o
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING

View File

@ -99,13 +99,13 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/koofr/koofr.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to koofr (Koofr).
#### --koofr-user
Your Koofr user name
Your Koofr user name.
- Config: user
- Env Var: RCLONE_KOOFR_USER
@ -114,7 +114,7 @@ Your Koofr user name
#### --koofr-password
Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -123,13 +123,13 @@ Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to koofr (Koofr).
#### --koofr-endpoint
The Koofr API endpoint to use
The Koofr API endpoint to use.
- Config: endpoint
- Env Var: RCLONE_KOOFR_ENDPOINT
@ -138,7 +138,9 @@ The Koofr API endpoint to use
#### --koofr-mountid
Mount ID of the mount to use. If omitted, the primary mount is used.
Mount ID of the mount to use.
If omitted, the primary mount is used.
- Config: mountid
- Env Var: RCLONE_KOOFR_MOUNTID
@ -147,7 +149,9 @@ Mount ID of the mount to use. If omitted, the primary mount is used.
#### --koofr-setmtime
Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
Does the backend support setting modification time.
Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
- Config: setmtime
- Env Var: RCLONE_KOOFR_SETMTIME
@ -158,7 +162,7 @@ Does the backend support setting modification time. Set this to false if you use
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING

View File

@ -325,13 +325,13 @@ filesystem.
where it isn't supported (e.g. Windows) it will be ignored.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}}
### Advanced Options
### Advanced options
Here are the advanced options specific to local (Local Disk).
#### --local-nounc
Disable UNC (long path names) conversion on Windows
Disable UNC (long path names) conversion on Windows.
- Config: nounc
- Env Var: RCLONE_LOCAL_NOUNC
@ -339,7 +339,7 @@ Disable UNC (long path names) conversion on Windows
- Default: ""
- Examples:
- "true"
- Disables long file names
- Disables long file names.
#### --copy-links / -L
@ -352,7 +352,7 @@ Follow symlinks and copy the pointed to item.
#### --links / -l
Translate symlinks to/from regular files with a '.rclonelink' extension
Translate symlinks to/from regular files with a '.rclonelink' extension.
- Config: links
- Env Var: RCLONE_LOCAL_LINKS
@ -362,6 +362,7 @@ Translate symlinks to/from regular files with a '.rclonelink' extension
#### --skip-links
Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
@ -372,15 +373,15 @@ points, as you explicitly acknowledge that they should be skipped.
#### --local-zero-size-links
Assume the Stat size of links is zero (and read them instead) (Deprecated)
Assume the Stat size of links is zero (and read them instead) (deprecated).
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
- Windows
- On some virtual filesystems (such as LucidLink)
- Android
So rclone now always reads the link
So rclone now always reads the link.
- Config: zero_size_links
@ -390,7 +391,7 @@ So rclone now always reads the link
#### --local-unicode-normalization
Apply unicode NFC normalization to paths and filenames
Apply unicode NFC normalization to paths and filenames.
This flag can be used to normalize file names into unicode NFC form
that are read from the local filesystem.
@ -412,7 +413,7 @@ routine so this flag shouldn't normally be used.
#### --local-no-check-updated
Don't check to see if the files change during upload
Don't check to see if the files change during upload.
Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy
@ -468,7 +469,7 @@ to override the default choice.
#### --local-case-insensitive
Force the filesystem to report itself as case insensitive
Force the filesystem to report itself as case insensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
@ -481,7 +482,7 @@ to override the default choice.
#### --local-no-preallocate
Disable preallocation of disk space for transferred files
Disable preallocation of disk space for transferred files.
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
@ -496,7 +497,7 @@ Use this flag to disable preallocation.
#### --local-no-sparse
Disable sparse files for multi-thread downloads
Disable sparse files for multi-thread downloads.
On Windows platforms rclone will make sparse files when doing
multi-thread downloads. This avoids long pauses on large files where
@ -510,7 +511,7 @@ cause disk fragmentation and can be slow to work with.
#### --local-no-set-modtime
Disable setting modtime
Disable setting modtime.
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
@ -527,14 +528,14 @@ enabled, rclone will no longer update the modtime after copying a file.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
- Type: MultiEncoder
- Default: Slash,Dot
### Backend commands
## Backend commands
Here are the commands specific to the local backend.
@ -550,7 +551,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### noop
### noop
A null operation for testing backend commands

View File

@ -154,13 +154,13 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mailru/mailru.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to mailru (Mail.ru Cloud).
#### --mailru-user
User name (usually email)
User name (usually email).
- Config: user
- Env Var: RCLONE_MAILRU_USER
@ -169,7 +169,7 @@ User name (usually email)
#### --mailru-pass
Password
Password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -181,6 +181,7 @@ Password
#### --mailru-speedup-enable
Skip full upload if there is another file with same data hash.
This feature is called "speedup" or "put by hash". It is especially efficient
for generally available files like popular books, video or audio clips,
because files are searched by hash in all accounts of all mailru users.
@ -200,13 +201,14 @@ streaming or partial uploads), it will not even try this optimization.
- "false"
- Disable
### Advanced Options
### Advanced options
Here are the advanced options specific to mailru (Mail.ru Cloud).
#### --mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
Patterns are case insensitive and can contain '*' or '?' meta characters.
- Config: speedup_file_patterns
@ -225,8 +227,9 @@ Patterns are case insensitive and can contain '*' or '?' meta characters.
#### --mailru-speedup-max-disk
This option allows you to disable speedup (put by hash) for large files
(because preliminary hashing can exhaust you RAM or disk space)
This option allows you to disable speedup (put by hash) for large files.
The reason is that preliminary hashing can exhaust your RAM or disk space.
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
@ -258,7 +261,7 @@ Files larger than the size given below will always be hashed on disk.
#### --mailru-check-hash
What should copy do if file checksum is mismatched or invalid
What should copy do if file checksum is mismatched or invalid.
- Config: check_hash
- Env Var: RCLONE_MAILRU_CHECK_HASH
@ -273,6 +276,7 @@ What should copy do if file checksum is mismatched or invalid
#### --mailru-user-agent
HTTP user agent used internally by client.
Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
- Config: user_agent
@ -283,6 +287,7 @@ Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
#### --mailru-quirks
Comma separated list of internal maintenance flags.
This option must not be used by an ordinary user. It is intended only to
facilitate remote troubleshooting of backend issues. Strict meaning of
flags is not documented and not guaranteed to persist between releases.
@ -298,7 +303,7 @@ Supported quirks: atomicmkdir binlist unknowndirs
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING

View File

@ -152,13 +152,13 @@ and you are sure the user and the password are correct, likely you
have got the remote blocked for a while.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mega/mega.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to mega (Mega).
#### --mega-user
User name
User name.
- Config: user
- Env Var: RCLONE_MEGA_USER
@ -176,7 +176,7 @@ Password.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to mega (Mega).
@ -209,7 +209,7 @@ permanently delete objects instead.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING

View File

@ -192,13 +192,14 @@ trash, so you will have to do that with one of Microsoft's apps or via
the OneDrive website.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/onedrive/onedrive.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to onedrive (Microsoft OneDrive).
#### --onedrive-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -208,7 +209,8 @@ Leave blank normally.
#### --onedrive-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -234,7 +236,7 @@ Choose national cloud region for OneDrive.
- "cn"
- Azure and Office 365 operated by 21Vianet in China
### Advanced Options
### Advanced options
Here are the advanced options specific to onedrive (Microsoft OneDrive).
@ -250,6 +252,7 @@ OAuth Access Token as a JSON blob.
#### --onedrive-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -260,6 +263,7 @@ Leave blank to use the provider defaults.
#### --onedrive-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -282,7 +286,7 @@ Note that the chunks will be buffered into memory.
#### --onedrive-drive-id
The ID of the drive to use
The ID of the drive to use.
- Config: drive_id
- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
@ -291,7 +295,7 @@ The ID of the drive to use
#### --onedrive-drive-type
The type of the drive ( personal | business | documentLibrary )
The type of the drive (personal | business | documentLibrary).
- Config: drive_type
- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
@ -337,7 +341,7 @@ Size of listing chunk.
#### --onedrive-no-versions
Remove all versions on modifying operations
Remove all versions on modifying operations.
Onedrive for business creates versions when rclone uploads new files
overwriting an existing one and when it sets the modification time.
@ -366,9 +370,12 @@ Set the scope of the links created by the link command.
- Default: "anonymous"
- Examples:
- "anonymous"
- Anyone with the link has access, without needing to sign in. This may include people outside of your organization. Anonymous link support may be disabled by an administrator.
- Anyone with the link has access, without needing to sign in.
- This may include people outside of your organization.
- Anonymous link support may be disabled by an administrator.
- "organization"
- Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
- Anyone signed into your organization (tenant) can use the link to get access.
- Only available in OneDrive for Business and SharePoint.
#### --onedrive-link-type
@ -402,7 +409,7 @@ At the time of writing this only works with OneDrive personal paid accounts.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING

View File

@ -100,13 +100,13 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/opendrive/opendrive.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to opendrive (OpenDrive).
#### --opendrive-username
Username
Username.
- Config: username
- Env Var: RCLONE_OPENDRIVE_USERNAME
@ -124,7 +124,7 @@ Password.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to opendrive (OpenDrive).
@ -132,7 +132,7 @@ Here are the advanced options specific to opendrive (OpenDrive).
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING

View File

@ -135,13 +135,14 @@ in the browser, then you use `5xxxxxxxx8` as
the `root_folder_id` in the config.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pcloud/pcloud.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to pcloud (Pcloud).
#### --pcloud-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@ -151,7 +152,8 @@ Leave blank normally.
#### --pcloud-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@ -159,7 +161,7 @@ Leave blank normally.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to pcloud (Pcloud).
@ -175,6 +177,7 @@ OAuth Access Token as a JSON blob.
#### --pcloud-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@ -185,6 +188,7 @@ Leave blank to use the provider defaults.
#### --pcloud-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@ -196,7 +200,7 @@ Leave blank to use the provider defaults.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING

View File

@ -102,7 +102,7 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/premiumizeme/premiumizeme.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to premiumizeme (premiumize.me).
@ -118,7 +118,7 @@ This is not normally used - use oauth instead.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to premiumizeme (premiumize.me).
@ -126,7 +126,7 @@ Here are the advanced options specific to premiumizeme (premiumize.me).
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING

View File

@ -109,7 +109,7 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/putio/putio.go then run make backenddocs" >}}
### Advanced Options
### Advanced options
Here are the advanced options specific to putio (Put.io).
@ -117,7 +117,7 @@ Here are the advanced options specific to putio (Put.io).
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING

View File

@ -142,13 +142,15 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/qingstor/qingstor.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to qingstor (QingCloud Object Storage).
#### --qingstor-env-auth
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Get QingStor credentials from runtime.
Only applies if access_key_id and secret_access_key are blank.
- Config: env_auth
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
@ -156,13 +158,14 @@ Get QingStor credentials from runtime. Only applies if access_key_id and secret_
- Default: false
- Examples:
- "false"
- Enter QingStor credentials in the next step
- Enter QingStor credentials in the next step.
- "true"
- Get QingStor credentials from the environment (env vars or IAM)
- Get QingStor credentials from the environment (env vars or IAM).
#### --qingstor-access-key-id
QingStor Access Key ID
QingStor Access Key ID.
Leave blank for anonymous access or runtime credentials.
- Config: access_key_id
@ -172,7 +175,8 @@ Leave blank for anonymous access or runtime credentials.
#### --qingstor-secret-access-key
QingStor Secret Access Key (password)
QingStor Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
- Config: secret_access_key
@ -183,7 +187,8 @@ Leave blank for anonymous access or runtime credentials.
#### --qingstor-endpoint
Enter an endpoint URL to connect to the QingStor API.
Leave blank will use the default value "https://qingstor.com:443"
Leave blank to use the default value "https://qingstor.com:443".
- Config: endpoint
- Env Var: RCLONE_QINGSTOR_ENDPOINT
@ -193,6 +198,7 @@ Leave blank will use the default value "https://qingstor.com:443"
#### --qingstor-zone
Zone to connect to.
Default is "pek3a".
- Config: zone
@ -201,16 +207,16 @@ Default is "pek3a".
- Default: ""
- Examples:
- "pek3a"
- The Beijing (China) Three Zone
- The Beijing (China) Three Zone.
- Needs location constraint pek3a.
- "sh1a"
- The Shanghai (China) First Zone
- The Shanghai (China) First Zone.
- Needs location constraint sh1a.
- "gd2a"
- The Guangdong (China) Second Zone
- The Guangdong (China) Second Zone.
- Needs location constraint gd2a.
### Advanced Options
### Advanced options
Here are the advanced options specific to qingstor (QingCloud Object Storage).
@ -225,7 +231,7 @@ Number of connection retries.
#### --qingstor-upload-cutoff
Cutoff for switching to chunked upload
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.
@ -276,7 +282,7 @@ this may help to speed up the transfers.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING

View File

@ -431,18 +431,18 @@ And this is equivalent to `/tmp/dir`
{{< rem autogenerated start "- run make rcdocs - don't edit here" >}}
### backend/command: Runs a backend command. {#backend-command}
This takes the following parameters
This takes the following parameters:
- command - a string with the command name
- fs - a remote name string e.g. "drive:"
- arg - a list of arguments for the backend command
- opt - a map of string to string of options
Returns
Returns:
- result - result from the backend command
For example
Example:
rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2
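The backend/command example above can also be driven over the rc HTTP API. A minimal sketch in Python, assuming rclone is already serving rc on the default localhost:5572 (the `rc_payload`/`rc_call` helper names, and the mapping of the `-o`/`-a` flags onto the `opt`/`arg` parameters, are illustrative, not part of rclone itself):

```python
import json
from urllib import request

def rc_payload(**params):
    # Every rc endpoint takes one flat JSON object of parameters,
    # so the request body is just the params dict serialised.
    return json.dumps(params).encode("utf-8")

def rc_call(endpoint, base="http://localhost:5572", **params):
    # POST the params to an rc endpoint and decode the JSON reply.
    # Assumes rclone is already serving rc, e.g. started with `rclone rcd`.
    req = request.Request(
        f"{base}/{endpoint}",
        data=rc_payload(**params),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# The body corresponding to the command line example above:
body = rc_payload(command="noop", fs=".",
                  opt={"echo": "yes", "blue": ""},
                  arg=["path1", "path2"])
```

`rc_call("backend/command", ...)` would then return the same `result` object the command line prints.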
@ -520,7 +520,7 @@ Show statistics for the cache remote.
### config/create: create the config for a remote. {#config-create}
This takes the following parameters
This takes the following parameters:
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
@ -581,7 +581,7 @@ See the [listremotes command](/commands/rclone_listremotes/) command for more in
### config/password: password the config for a remote. {#config-password}
This takes the following parameters
This takes the following parameters:
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
@ -602,7 +602,7 @@ See the [config providers command](/commands/rclone_config_providers/) command f
### config/update: update the config for a remote. {#config-update}
This takes the following parameters
This takes the following parameters:
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
@ -668,29 +668,29 @@ In either case "rate" is returned as a human readable string, and
### core/command: Run a rclone terminal command over rc. {#core-command}
This takes the following parameters
This takes the following parameters:
- command - a string with the command name
- arg - a list of arguments for the backend command
- opt - a map of string to string of options
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")
- defaults to "COMBINED_OUTPUT" if not set
- the STREAM returnTypes will write the output to the body of the HTTP message
- the COMBINED_OUTPUT will write the output to the "result" parameter
- command - a string with the command name.
- arg - a list of arguments for the backend command.
- opt - a map of string to string of options.
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR").
- Defaults to "COMBINED_OUTPUT" if not set.
- The STREAM returnTypes will write the output to the body of the HTTP message.
- The COMBINED_OUTPUT will write the output to the "result" parameter.
Returns
Returns:
- result - result from the backend command
- only set when using returnType "COMBINED_OUTPUT"
- error - set if rclone exits with an error code
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR")
- result - result from the backend command.
- Only set when using returnType "COMBINED_OUTPUT".
- error - set if rclone exits with an error code.
- returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR").
For example
Example:
rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1
Returns
Returns:
```
{
@ -737,17 +737,17 @@ are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
The most interesting values for most people are:
* HeapAlloc: This is the amount of memory rclone is actually using
* HeapSys: This is the amount of memory rclone has obtained from the OS
* Sys: this is the total amount of memory requested from the OS
* It is virtual memory so may include unused memory
- HeapAlloc - this is the amount of memory rclone is actually using
- HeapSys - this is the amount of memory rclone has obtained from the OS
- Sys - this is the total amount of memory requested from the OS
- It is virtual memory so may include unused memory
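As a rough sketch, the fields above can be read straight out of a core/memstats reply (the values below are samples, not real output):

```python
# A core/memstats reply mirrors Go's runtime.MemStats as one JSON
# object; these are the fields called out above, in bytes.
stats = {"HeapAlloc": 50 * 2**20, "HeapSys": 64 * 2**20, "Sys": 80 * 2**20}

heap_in_use_mib = stats["HeapAlloc"] / 2**20   # memory rclone is actually using
heap_from_os_mib = stats["HeapSys"] / 2**20    # memory obtained from the OS
total_requested_mib = stats["Sys"] / 2**20     # total requested (virtual, may be unused)
```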
### core/obscure: Obscures a string passed in. {#core-obscure}
Pass a clear string and rclone will obscure it for the config file:
- clear - string
Returns
Returns:
- obscured - string
### core/pid: Return PID of current process {#core-pid}
@ -757,7 +757,7 @@ Useful for stopping rclone process.
### core/quit: Terminates the app. {#core-quit}
(optional) Pass an exit code to be used for terminating the app:
(Optional) Pass an exit code to be used for terminating the app:
- exitCode - int
### core/stats: Returns stats about current transfers. {#core-stats}
@ -814,7 +814,7 @@ The value for "eta" is null if an eta cannot be determined.
### core/stats-delete: Delete stats group. {#core-stats-delete}
This deletes entire stats group
This deletes the entire stats group.
Parameters
@ -864,7 +864,7 @@ Returns the following values:
### core/version: Shows the current version of rclone and the go runtime. {#core-version}
This shows the current version of go and the go runtime
This shows the current version of go and the go runtime:
- version - rclone version, e.g. "v1.53.0"
- decomposed - version number as [major, minor, patch]
@ -889,7 +889,7 @@ After calling this you can use this to see the blocking profile:
go tool pprof http://localhost:5572/debug/pprof/block
Parameters
Parameters:
- rate - int
@ -906,11 +906,11 @@ Once this is set you can look use this to profile the mutex contention:
go tool pprof http://localhost:5572/debug/pprof/mutex
Parameters
Parameters:
- rate - int
Results
Results:
- previousRate - int
@ -936,19 +936,19 @@ Returns
### job/list: Lists the IDs of the running jobs {#job-list}
Parameters - None
Parameters: None.
Results
Results:
- jobids - array of integer job ids
- jobids - array of integer job ids.
### job/status: Reads the status of the job ID {#job-status}
Parameters
Parameters:
- jobid - id of the job (integer)
- jobid - id of the job (integer).
Results
Results:
- finished - boolean
- duration - time in seconds that the job ran for
@ -963,13 +963,13 @@ Results
### job/stop: Stop the running job {#job-stop}
Parameters
Parameters:
- jobid - id of the job (integer)
- jobid - id of the job (integer).
### mount/listmounts: Show current mount points {#mount-listmounts}
This shows currently mounted points, which can be used for performing an unmount
This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns
@ -988,22 +988,22 @@ Rclone's cloud storage systems as a file system with FUSE.
If no mountType is provided, the priority is given as follows: 1. mount, 2. cmount, 3. mount2.
This takes the following parameters
This takes the following parameters:
- fs - a remote path to be mounted (required)
- mountPoint: valid path on the local machine (required)
- mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountType: one of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.
Eg
Example:
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running and the mountOpt can be seen in the "mount" section.
"vfs" section when running and the mountOpt can be seen in the "mount" section:
rclone rc options/get
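The vfsOpt and mountOpt JSON strings in the examples above are easy to get wrong when quoted by hand; a small sketch that builds them programmatically (the CacheMode value 2 is assumed here to correspond to `--vfs-cache-mode writes`):

```python
import json

# Build the option JSON strings rather than hand-quoting them in the shell.
vfs_opt = json.dumps({"CacheMode": 2})        # assumption: 2 == "writes"
mount_opt = json.dumps({"AllowOther": True})

# The equivalent of the third example command above:
cmd = ["rclone", "rc", "mount/mount",
       "fs=TestDrive:", "mountPoint=/mnt/tmp",
       f"vfsOpt={vfs_opt}", f"mountOpt={mount_opt}"]
```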
@ -1032,11 +1032,11 @@ rclone allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
This takes the following parameters
This takes the following parameters:
- mountPoint: valid path on the local machine where the mount was created (required)
Eg
Example:
rclone rc mount/unmount mountPoint=/home/<user>/mountPoint
@ -1044,7 +1044,7 @@ Eg
### mount/unmountall: Show current mount points {#mount-unmountall}
This shows currently mounted points, which can be used for performing an unmount
This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns an error if the unmount does not succeed.
@ -1056,7 +1056,7 @@ Eg
### operations/about: Return the space used on the remote {#operations-about}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
See the [about command](/commands/rclone_about/) for more information on the above.
### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
@ -1078,7 +1078,7 @@ See the [cleanup command](/commands/rclone_cleanup/) command for more informatio
### operations/copyfile: Copy a file from source remote to destination remote {#operations-copyfile}
This takes the following parameters
This takes the following parameters:
- srcFs - a remote name string e.g. "drive:" for the source
- srcRemote - a path within that remote e.g. "file.txt" for the source
@ -1089,7 +1089,7 @@ This takes the following parameters
### operations/copyurl: Copy the URL to the object {#operations-copyurl}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1101,7 +1101,7 @@ See the [copyurl command](/commands/rclone_copyurl/) command for more informatio
### operations/delete: Remove files in the path {#operations-delete}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
@ -1111,7 +1111,7 @@ See the [delete command](/commands/rclone_delete/) command for more information
### operations/deletefile: Remove the single file pointed to {#operations-deletefile}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1122,7 +1122,7 @@ See the [deletefile command](/commands/rclone_deletefile/) command for more info
### operations/fsinfo: Return information about the remote {#operations-fsinfo}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
@ -1179,7 +1179,7 @@ This command does not have a command line equivalent so use this instead:
### operations/list: List the given remote and path in JSON format {#operations-list}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1189,8 +1189,12 @@ This takes the following parameters
- showEncrypted - If set show decrypted names
- showOrigIDs - If set show the IDs for each item if known
- showHash - If set return a dictionary of hashes
- noMimeType - If set don't show mime types
- dirsOnly - If set only show directories
- filesOnly - If set only show files
- hashTypes - array of strings of hash types to show if showHash set
The result is
Returns:
- list
- This is an array of objects as described in the lsjson command
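The listing flags documented above all travel inside the single `opt` object. A sketch of the request body using the options this change adds (noMimeType, filesOnly, hashTypes); the lowercase hash names are an assumption:

```python
import json

# Parameters for operations/list; "opt" is one JSON object holding
# all the listing flags described above.
params = {
    "fs": "drive:",
    "remote": "dir",
    "opt": {
        "filesOnly": True,             # drop directories from the result
        "noMimeType": True,            # skip mime type lookup for speed
        "showHash": True,
        "hashTypes": ["md5", "sha1"],  # assumption: lowercase hash names
    },
}
body = json.dumps(params)
```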
@ -1201,7 +1205,7 @@ See the [lsjson command](/commands/rclone_lsjson/) for more information on the a
### operations/mkdir: Make a destination directory or container {#operations-mkdir}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1212,7 +1216,7 @@ See the [mkdir command](/commands/rclone_mkdir/) command for more information on
### operations/movefile: Move a file from source remote to destination remote {#operations-movefile}
This takes the following parameters
This takes the following parameters:
- srcFs - a remote name string e.g. "drive:" for the source
- srcRemote - a path within that remote e.g. "file.txt" for the source
@ -1223,14 +1227,14 @@ This takes the following parameters
### operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations-publiclink}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- unlink - boolean - if set removes the link rather than adding it (optional)
- expire - string - the expiry time of the link e.g. "1d" (optional)
Returns
Returns:
- url - URL of the resource
@ -1240,7 +1244,7 @@ See the [link command](/commands/rclone_link/) command for more information on t
### operations/purge: Remove a directory or container and all of its contents {#operations-purge}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1251,7 +1255,7 @@ See the [purge command](/commands/rclone_purge/) command for more information on
### operations/rmdir: Remove an empty directory or container {#operations-rmdir}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
@ -1262,23 +1266,22 @@ See the [rmdir command](/commands/rclone_rmdir/) command for more information on
### operations/rmdirs: Remove all the empty directories in the path {#operations-rmdirs}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- leaveRoot - boolean, set to true not to delete the root
See the [rmdirs command](/commands/rclone_rmdirs/) for more information on the above.
**Authentication is required for this call.**
### operations/size: Count the number of bytes and files in remote {#operations-size}
This takes the following parameters
This takes the following parameters:
- fs - a remote name string e.g. "drive:path/to/dir"
Returns
Returns:
- count - number of files
- bytes - number of bytes in those files
@ -1287,10 +1290,30 @@ See the [size command](/commands/rclone_size/) command for more information on t
**Authentication is required for this call.**
### operations/uploadfile: Upload file using multipart/form-data {#operations-uploadfile}
### operations/stat: Give information about the supplied file or directory {#operations-stat}
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- opt - a dictionary of options to control the listing (optional)
- see operations/list for the options
Returns:
- item - an object as described in the lsjson command. Will be null if not found.
Note that if you are only interested in files then it is much more
efficient to set the filesOnly flag in the options.
See the [lsjson command](/commands/rclone_lsjson/) for more information on the above and examples.
**Authentication is required for this call.**
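A short sketch of interpreting an operations/stat reply, given the null-item behaviour described above (the sample replies are illustrative):

```python
import json

# "item" is null when nothing exists at the path, otherwise an
# lsjson-style object with fields such as Path and Size.
found = json.loads('{"item": {"Path": "dir/file.txt", "Size": 6}}')
missing = json.loads('{"item": null}')

exists = found["item"] is not None
```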
### operations/uploadfile: Upload file using multipart/form-data {#operations-uploadfile}
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded
@ -1300,7 +1323,7 @@ See the [uploadfile command](/commands/rclone_uploadfile/) command for more info
### options/blocks: List all the option blocks {#options-blocks}
Returns
Returns:
- options - a list of the options block names
### options/get: Get all the global options {#options-get}
@ -1333,7 +1356,7 @@ map to the external options very easily with a few exceptions.
### options/set: Set an option {#options-set}
Parameters
Parameters:
- option block name containing an object with
- key: value
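A sketch of the payload shape, assuming the "main" option block and "LogLevel" key reported by options/get (block and key names should be checked against your own options/get output):

```python
import json

# One object per option block; this mirrors the DEBUG-logs example
# in the rc docs: {"main": {"LogLevel": "DEBUG"}}.
payload = {"main": {"LogLevel": "DEBUG"}}
body = json.dumps(payload)
```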
@ -1361,13 +1384,13 @@ And this sets NOTICE level logs (normal without -v)
### pluginsctl/addPlugin: Add a plugin using url {#pluginsctl-addPlugin}
used for adding a plugin to the webgui
Used for adding a plugin to the webgui.
This takes the following parameters
This takes the following parameters:
- url: http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react)
- url - HTTP URL of the GitHub repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react).
Eg
Example:
rclone rc pluginsctl/addPlugin
@ -1375,19 +1398,19 @@ Eg
### pluginsctl/getPluginsForType: Get plugins with type criteria {#pluginsctl-getPluginsForType}
This shows all possible plugins by a mime type
This shows all possible plugins by a mime type.
This takes the following parameters
This takes the following parameters:
- type: supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL)
- type - supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3).
- pluginType - filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL).
and returns
Returns:
- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.
- loadedPlugins - list of current production plugins.
- testPlugins - list of temporarily loaded development plugins, usually running on a different server.
Eg
Example:
rclone rc pluginsctl/getPluginsForType type=video/mp4
@ -1397,12 +1420,12 @@ Eg
This allows you to get the currently enabled plugins and their details.
This takes no parameters and returns
This takes no parameters and returns:
- loadedPlugins: list of current production plugins
- testPlugins: list of temporarily loaded development plugins, usually running on a different server.
- loadedPlugins - list of current production plugins.
- testPlugins - list of temporarily loaded development plugins, usually running on a different server.
Eg
E.g.
rclone rc pluginsctl/listPlugins
@ -1410,13 +1433,13 @@ Eg
### pluginsctl/listTestPlugins: Show currently loaded test plugins {#pluginsctl-listTestPlugins}
allows listing of test plugins with the rclone.test set to true in package.json of the plugin
Allows listing of test plugins with the rclone.test set to true in package.json of the plugin.
This takes no parameters and returns
This takes no parameters and returns:
- loadedTestPlugins: list of currently available test plugins
- loadedTestPlugins - list of currently available test plugins.
Eg
E.g.
rclone rc pluginsctl/listTestPlugins
@ -1424,13 +1447,13 @@ Eg
### pluginsctl/removePlugin: Remove a loaded plugin {#pluginsctl-removePlugin}
This allows you to remove a plugin using it's name
This allows you to remove a plugin using its name.
This takes parameters
This takes parameters:
- name: name of the plugin in the format `author`/`plugin_name`
- name - name of the plugin in the format `author`/`plugin_name`.
Eg
E.g.
rclone rc pluginsctl/removePlugin name=rclone/video-plugin
@ -1438,13 +1461,13 @@ Eg
### pluginsctl/removeTestPlugin: Remove a test plugin {#pluginsctl-removeTestPlugin}
This allows you to remove a plugin using it's name
This allows you to remove a plugin using its name.
This takes the following parameters
This takes the following parameters:
- name: name of the plugin in the format `author`/`plugin_name`
- name - name of the plugin in the format `author`/`plugin_name`.
Eg
Example:
rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react
@ -1476,7 +1499,7 @@ check that parameter passing is working properly.
### sync/copy: copy a directory from source remote to destination remote {#sync-copy}
This takes the following parameters
This takes the following parameters:
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
@ -1489,7 +1512,7 @@ See the [copy command](/commands/rclone_copy/) command for more information on t
### sync/move: move a directory from source remote to destination remote {#sync-move}
This takes the following parameters
This takes the following parameters:
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
@ -1503,7 +1526,7 @@ See the [move command](/commands/rclone_move/) command for more information on t
### sync/sync: sync a directory from source remote to destination remote {#sync-sync}
This takes the following parameters
This takes the following parameters:
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination

View File

@ -550,7 +550,7 @@ Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
@ -595,6 +595,7 @@ Choose your S3 provider.
#### --s3-env-auth
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key are blank.
- Config: env_auth
@ -603,13 +604,14 @@ Only applies if access_key_id and secret_access_key is blank.
- Default: false
- Examples:
- "false"
- Enter AWS credentials in the next step
- Enter AWS credentials in the next step.
- "true"
- Get AWS credentials from the environment (env vars or IAM)
- Get AWS credentials from the environment (env vars or IAM).
#### --s3-access-key-id
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
- Config: access_key_id
@ -619,7 +621,8 @@ Leave blank for anonymous access or runtime credentials.
#### --s3-secret-access-key
AWS Secret Access Key (password)
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
- Config: secret_access_key
@ -641,76 +644,76 @@ Region to connect to.
- US Region, Northern Virginia, or Pacific Northwest.
- Leave location constraint empty.
- "us-east-2"
- US East (Ohio) Region
- US East (Ohio) Region.
- Needs location constraint us-east-2.
- "us-west-1"
- US West (Northern California) Region
- US West (Northern California) Region.
- Needs location constraint us-west-1.
- "us-west-2"
- US West (Oregon) Region
- US West (Oregon) Region.
- Needs location constraint us-west-2.
- "ca-central-1"
- Canada (Central) Region
- Canada (Central) Region.
- Needs location constraint ca-central-1.
- "eu-west-1"
- EU (Ireland) Region
- EU (Ireland) Region.
- Needs location constraint EU or eu-west-1.
- "eu-west-2"
- EU (London) Region
- EU (London) Region.
- Needs location constraint eu-west-2.
- "eu-west-3"
- EU (Paris) Region
- EU (Paris) Region.
- Needs location constraint eu-west-3.
- "eu-north-1"
- EU (Stockholm) Region
- EU (Stockholm) Region.
- Needs location constraint eu-north-1.
- "eu-south-1"
- EU (Milan) Region
- EU (Milan) Region.
- Needs location constraint eu-south-1.
- "eu-central-1"
- EU (Frankfurt) Region
- EU (Frankfurt) Region.
- Needs location constraint eu-central-1.
- "ap-southeast-1"
- Asia Pacific (Singapore) Region
- Asia Pacific (Singapore) Region.
- Needs location constraint ap-southeast-1.
- "ap-southeast-2"
- Asia Pacific (Sydney) Region
- Asia Pacific (Sydney) Region.
- Needs location constraint ap-southeast-2.
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region
- Asia Pacific (Tokyo) Region.
- Needs location constraint ap-northeast-1.
- "ap-northeast-2"
- Asia Pacific (Seoul)
- Asia Pacific (Seoul).
- Needs location constraint ap-northeast-2.
- "ap-northeast-3"
- Asia Pacific (Osaka-Local)
- Asia Pacific (Osaka-Local).
- Needs location constraint ap-northeast-3.
- "ap-south-1"
- Asia Pacific (Mumbai)
- Asia Pacific (Mumbai).
- Needs location constraint ap-south-1.
- "ap-east-1"
- Asia Pacific (Hong Kong) Region
- Asia Pacific (Hong Kong) Region.
- Needs location constraint ap-east-1.
- "sa-east-1"
- South America (Sao Paulo) Region
- South America (Sao Paulo) Region.
- Needs location constraint sa-east-1.
- "me-south-1"
- Middle East (Bahrain) Region
- Middle East (Bahrain) Region.
- Needs location constraint me-south-1.
- "af-south-1"
- Africa (Cape Town) Region
- Africa (Cape Town) Region.
- Needs location constraint af-south-1.
- "cn-north-1"
- China (Beijing) Region
- China (Beijing) Region.
- Needs location constraint cn-north-1.
- "cn-northwest-1"
- China (Ningxia) Region
- China (Ningxia) Region.
- Needs location constraint cn-northwest-1.
- "us-gov-east-1"
- AWS GovCloud (US-East) Region
- AWS GovCloud (US-East) Region.
- Needs location constraint us-gov-east-1.
- "us-gov-west-1"
- AWS GovCloud (US) Region
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
#### --s3-region
@ -730,6 +733,7 @@ Region to connect to.
#### --s3-region
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
- Config: region
@ -738,13 +742,16 @@ Leave blank if you are using an S3 clone and you don't have a region.
- Default: ""
- Examples:
- ""
- Use this if unsure. Will use v4 signatures and an empty region.
- Use this if unsure.
- Will use v4 signatures and an empty region.
- "other-v2-signature"
- Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
- Use this only if v4 signatures don't work.
- E.g. pre Jewel/v10 CEPH.
#### --s3-endpoint
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
- Config: endpoint
@ -755,6 +762,7 @@ Leave blank if using AWS to use the default endpoint for the region.
#### --s3-endpoint
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
- Config: endpoint
@ -987,47 +995,48 @@ Endpoint for Tencent COS API.
- Default: ""
- Examples:
- "cos.ap-beijing.myqcloud.com"
- Beijing Region.
- Beijing Region
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- Nanjing Region
- "cos.ap-shanghai.myqcloud.com"
- Shanghai Region.
- Shanghai Region
- "cos.ap-guangzhou.myqcloud.com"
- Guangzhou Region.
- Guangzhou Region
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- Nanjing Region
- "cos.ap-chengdu.myqcloud.com"
- Chengdu Region.
- Chengdu Region
- "cos.ap-chongqing.myqcloud.com"
- Chongqing Region.
- Chongqing Region
- "cos.ap-hongkong.myqcloud.com"
- Hong Kong (China) Region.
- Hong Kong (China) Region
- "cos.ap-singapore.myqcloud.com"
- Singapore Region.
- Singapore Region
- "cos.ap-mumbai.myqcloud.com"
- Mumbai Region.
- Mumbai Region
- "cos.ap-seoul.myqcloud.com"
- Seoul Region.
- Seoul Region
- "cos.ap-bangkok.myqcloud.com"
- Bangkok Region.
- Bangkok Region
- "cos.ap-tokyo.myqcloud.com"
- Tokyo Region.
- Tokyo Region
- "cos.na-siliconvalley.myqcloud.com"
- Silicon Valley Region.
- Silicon Valley Region
- "cos.na-ashburn.myqcloud.com"
- Virginia Region.
- Virginia Region
- "cos.na-toronto.myqcloud.com"
- Toronto Region.
- Toronto Region
- "cos.eu-frankfurt.myqcloud.com"
- Frankfurt Region.
- Frankfurt Region
- "cos.eu-moscow.myqcloud.com"
- Moscow Region.
- Moscow Region
- "cos.accelerate.myqcloud.com"
- Use Tencent COS Accelerate Endpoint.
- Use Tencent COS Accelerate Endpoint
#### --s3-endpoint
Endpoint for S3 API.
Required when using an S3 clone.
- Config: endpoint
@ -1057,6 +1066,7 @@ Required when using an S3 clone.
#### --s3-location-constraint
Location constraint - must be set to match the Region.
Used when creating buckets only.
- Config: location_constraint
@ -1065,60 +1075,61 @@ Used when creating buckets only.
- Default: ""
- Examples:
- ""
- Empty for US Region, Northern Virginia, or Pacific Northwest.
- Empty for US Region, Northern Virginia, or Pacific Northwest
- "us-east-2"
- US East (Ohio) Region.
- US East (Ohio) Region
- "us-west-1"
- US West (Northern California) Region.
- US West (Northern California) Region
- "us-west-2"
- US West (Oregon) Region.
- US West (Oregon) Region
- "ca-central-1"
- Canada (Central) Region.
- Canada (Central) Region
- "eu-west-1"
- EU (Ireland) Region.
- EU (Ireland) Region
- "eu-west-2"
- EU (London) Region.
- EU (London) Region
- "eu-west-3"
- EU (Paris) Region.
- EU (Paris) Region
- "eu-north-1"
- EU (Stockholm) Region.
- EU (Stockholm) Region
- "eu-south-1"
- EU (Milan) Region.
- EU (Milan) Region
- "EU"
- EU Region.
- EU Region
- "ap-southeast-1"
- Asia Pacific (Singapore) Region.
- Asia Pacific (Singapore) Region
- "ap-southeast-2"
- Asia Pacific (Sydney) Region.
- Asia Pacific (Sydney) Region
- "ap-northeast-1"
- Asia Pacific (Tokyo) Region.
- Asia Pacific (Tokyo) Region
- "ap-northeast-2"
- Asia Pacific (Seoul) Region.
- Asia Pacific (Seoul) Region
- "ap-northeast-3"
- Asia Pacific (Osaka-Local) Region.
- Asia Pacific (Osaka-Local) Region
- "ap-south-1"
- Asia Pacific (Mumbai) Region.
- Asia Pacific (Mumbai) Region
- "ap-east-1"
- Asia Pacific (Hong Kong) Region.
- Asia Pacific (Hong Kong) Region
- "sa-east-1"
- South America (Sao Paulo) Region.
- South America (Sao Paulo) Region
- "me-south-1"
- Middle East (Bahrain) Region.
- Middle East (Bahrain) Region
- "af-south-1"
- Africa (Cape Town) Region.
- Africa (Cape Town) Region
- "cn-north-1"
- China (Beijing) Region
- "cn-northwest-1"
- China (Ningxia) Region.
- China (Ningxia) Region
- "us-gov-east-1"
- AWS GovCloud (US-East) Region.
- AWS GovCloud (US-East) Region
- "us-gov-west-1"
- AWS GovCloud (US) Region.
- AWS GovCloud (US) Region
#### --s3-location-constraint
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter
For on-prem COS, do not make a selection from this list, hit enter.
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
@ -1193,6 +1204,7 @@ For on-prem COS, do not make a selection from this list, hit enter
#### --s3-location-constraint
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
- Config: location_constraint
@ -1217,30 +1229,45 @@ doesn't copy the ACL from the source but rather writes a fresh one.
- Default: ""
- Examples:
- "default"
- Owner gets Full_CONTROL. No one else has access rights (default).
- Owner gets Full_CONTROL.
- No one else has access rights (default).
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- "public-read"
- Owner gets FULL_CONTROL. The AllUsers group gets READ access.
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- "public-read-write"
- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- "bucket-owner-read"
- Object owner gets FULL_CONTROL. Bucket owner gets READ access.
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.
- "public-read"
- Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.
- "public-read-write"
- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- This acl is available on IBM Cloud (Infra), On-Premise IBM COS.
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- Not supported on Buckets.
- This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.
#### --s3-server-side-encryption
@ -1312,9 +1339,9 @@ The storage class to use when storing new objects in OSS.
- "STANDARD"
- Standard storage class
- "GLACIER"
- Archive storage mode.
- Archive storage mode
- "STANDARD_IA"
- Infrequent access storage mode.
- Infrequent access storage mode
#### --s3-storage-class
@ -1330,9 +1357,9 @@ The storage class to use when storing new objects in Tencent COS.
- "STANDARD"
- Standard storage class
- "ARCHIVE"
- Archive storage mode.
- Archive storage mode
- "STANDARD_IA"
- Infrequent access storage mode.
- Infrequent access storage mode
#### --s3-storage-class
@ -1344,13 +1371,15 @@ The storage class to use when storing new objects in S3.
- Default: ""
- Examples:
- ""
- Default
- Default.
- "STANDARD"
- The Standard class for any upload; suitable for on-demand content like streaming or CDN.
- The Standard class for any upload.
- Suitable for on-demand content like streaming or CDN.
- "GLACIER"
- Archived storage; prices are lower, but it needs to be restored first to be accessed.
- Archived storage.
- Prices are lower, but it needs to be restored first to be accessed.
### Advanced Options
### Advanced options
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
@ -1369,14 +1398,18 @@ isn't set then "acl" is used instead.
- Default: ""
- Examples:
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- "public-read"
- Owner gets FULL_CONTROL. The AllUsers group gets READ access.
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- "public-read-write"
- Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- "authenticated-read"
- Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
#### --s3-requester-pays
@ -1430,7 +1463,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
#### --s3-upload-cutoff
Cutoff for switching to chunked upload
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.
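As an illustration, the cutoff can be raised for a single transfer on the command line (the paths and remote name are placeholders):

```sh
rclone copy --s3-upload-cutoff 200M /local/path s3remote:bucket
```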
@ -1490,7 +1523,7 @@ large file of a known size to stay below this number of chunks limit.
#### --s3-copy-cutoff
Cutoff for switching to multipart copy
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
@ -1504,7 +1537,7 @@ The minimum is 0 and the maximum is 5 GiB.
#### --s3-disable-checksum
Don't store MD5 checksum with object metadata
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
@ -1518,7 +1551,7 @@ to start uploading.
#### --s3-shared-credentials-file
Path to the shared credentials file
Path to the shared credentials file.
If env_auth = true then rclone can use a shared credentials file.
@ -1537,7 +1570,7 @@ it will default to the current user's home directory.
#### --s3-profile
Profile to use in the shared credentials file
Profile to use in the shared credentials file.
If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.
@ -1553,7 +1586,7 @@ If empty it will default to the environment variable "AWS_PROFILE" or
#### --s3-session-token
An AWS session token
An AWS session token.
- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
@ -1650,7 +1683,7 @@ In Ceph, this can be increased with the "rgw list buckets max chunk" option.
#### --s3-no-check-bucket
If set, don't attempt to check the bucket exists or create it
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
@ -1667,7 +1700,7 @@ due to a bug.
#### --s3-no-head
If set, don't HEAD uploaded objects to check integrity
If set, don't HEAD uploaded objects to check integrity.
This can be useful when trying to minimise the number of transactions
rclone does.
@ -1704,7 +1737,7 @@ very small even with this flag.
#### --s3-no-head-object
If set, don't HEAD objects
If set, do not do HEAD before GET when getting objects.
- Config: no_head_object
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
@ -1715,7 +1748,7 @@ If set, don't HEAD objects
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
@ -1725,6 +1758,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
#### --s3-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
@ -1744,7 +1778,7 @@ Whether to use mmap buffers in internal memory pool.
#### --s3-disable-http2
Disable usage of http2 for S3 backends
Disable usage of http2 for S3 backends.
There is currently an unsolved issue with the s3 (specifically minio) backend
and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be
@ -1759,7 +1793,18 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
- Type: bool
- Default: false
### Backend commands
#### --s3-download-url
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.
- Config: download_url
- Env Var: RCLONE_S3_DOWNLOAD_URL
- Type: string
- Default: ""
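A hypothetical `rclone.conf` entry pointing downloads at a CloudFront distribution (the remote name and distribution URL are placeholders):

```ini
[s3cdn]
type = s3
provider = AWS
region = us-east-1
download_url = https://d111111abcdef8.cloudfront.net
```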
## Backend commands
Here are the commands specific to the s3 backend.
@ -1775,7 +1820,7 @@ info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
#### restore
### restore
Restore objects from GLACIER to normal storage
@ -1821,7 +1866,7 @@ Options:
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk
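For instance, a Standard-priority restore for one day might be requested like this (the bucket and path are placeholders):

```sh
rclone backend restore s3:bucket/path/to/object -o priority=Standard -o lifetime=1
```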
#### list-multipart-uploads
### list-multipart-uploads
List the unfinished multipart uploads
@ -1860,7 +1905,7 @@ a bucket or with a bucket and path.
#### cleanup
### cleanup
Remove unfinished multipart uploads.
@ -264,13 +264,13 @@ Versions below 6.0 are not supported.
Versions between 6.0 and 6.3 haven't been tested and might not work properly.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/seafile/seafile.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to seafile (seafile).
#### --seafile-url
URL of seafile host to connect to
URL of seafile host to connect to.
- Config: url
- Env Var: RCLONE_SEAFILE_URL
@ -278,11 +278,11 @@ URL of seafile host to connect to
- Default: ""
- Examples:
- "https://cloud.seafile.com/"
- Connect to cloud.seafile.com
- Connect to cloud.seafile.com.
#### --seafile-user
User name (usually email address)
User name (usually email address).
- Config: user
- Env Var: RCLONE_SEAFILE_USER
@ -291,7 +291,7 @@ User name (usually email address)
#### --seafile-pass
Password
Password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -302,7 +302,7 @@ Password
#### --seafile-2fa
Two-factor authentication ('true' if the account has 2FA enabled)
Two-factor authentication ('true' if the account has 2FA enabled).
- Config: 2fa
- Env Var: RCLONE_SEAFILE_2FA
@ -311,7 +311,9 @@ Two-factor authentication ('true' if the account has 2FA enabled)
#### --seafile-library
Name of the library. Leave blank to access all non-encrypted libraries.
Name of the library.
Leave blank to access all non-encrypted libraries.
- Config: library
- Env Var: RCLONE_SEAFILE_LIBRARY
@ -320,7 +322,9 @@ Name of the library. Leave blank to access all non-encrypted libraries.
#### --seafile-library-key
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
Library password (for encrypted libraries only).
Leave blank if you pass it through the command line.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -331,20 +335,20 @@ Library password (for encrypted libraries only). Leave blank if you pass it thro
#### --seafile-auth-token
Authentication token
Authentication token.
- Config: auth_token
- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to seafile (seafile).
#### --seafile-create-library
Should rclone create a library if it doesn't exist
Should rclone create a library if it doesn't exist.
- Config: create_library
- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
@ -355,7 +359,7 @@ Should rclone create a library if it doesn't exist
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
@ -252,25 +252,24 @@ are using one of these servers, you can set the option `set_modtime = false` in
your RClone backend configuration to disable this behaviour.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sftp/sftp.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to sftp (SSH/SFTP Connection).
#### --sftp-host
SSH host to connect to
SSH host to connect to.
E.g. "example.com".
- Config: host
- Env Var: RCLONE_SFTP_HOST
- Type: string
- Default: ""
- Examples:
- "example.com"
- Connect to example.com
#### --sftp-user
SSH username, leave blank for current username, $USER
SSH username, leave blank for current username, $USER.
- Config: user
- Env Var: RCLONE_SFTP_USER
@ -279,7 +278,7 @@ SSH username, leave blank for current username, $USER
#### --sftp-port
SSH port, leave blank to use default (22)
SSH port, leave blank to use default (22).
- Config: port
- Env Var: RCLONE_SFTP_PORT
@ -299,7 +298,9 @@ SSH password, leave blank to use ssh-agent.
#### --sftp-key-pem
Raw PEM-encoded private key, If specified, will override key_file parameter.
Raw PEM-encoded private key.
If specified, will override key_file parameter.
- Config: key_pem
- Env Var: RCLONE_SFTP_KEY_PEM
@ -308,11 +309,12 @@ Raw PEM-encoded private key, If specified, will override key_file parameter.
#### --sftp-key-file
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Path to PEM-encoded private key file.
Leave blank or set key-use-agent to use ssh-agent.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: key_file
- Env Var: RCLONE_SFTP_KEY_FILE
- Type: string
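For illustration, a minimal `rclone.conf` entry using a key file (the remote name, host, user, and key path are placeholders):

```ini
[mysftp]
type = sftp
host = example.com
user = alice
key_file = ~/.ssh/id_rsa
```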
@ -340,7 +342,6 @@ Set this if you have a signed certificate you want to use for authentication.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: pubkey_file
- Env Var: RCLONE_SFTP_PUBKEY_FILE
- Type: string
@ -387,6 +388,7 @@ Those algorithms are insecure and may allow plaintext data to be recovered by an
#### --sftp-disable-hashcheck
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
- Config: disable_hashcheck
@ -394,7 +396,7 @@ Leave blank or set to false to enable hashing (recommended), set to true to disa
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to sftp (SSH/SFTP Connection).
@ -406,14 +408,13 @@ Set this value to enable server host key validation.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
- Config: known_hosts_file
- Env Var: RCLONE_SFTP_KNOWN_HOSTS_FILE
- Type: string
- Default: ""
- Examples:
- "~/.ssh/known_hosts"
- Use OpenSSH's known_hosts file
- Use OpenSSH's known_hosts file.
#### --sftp-ask-password
@ -460,7 +461,9 @@ Set the modified time on the remote if set.
#### --sftp-md5sum-command
The command used to read md5 hashes. Leave blank for autodetect.
The command used to read md5 hashes.
Leave blank for autodetect.
- Config: md5sum_command
- Env Var: RCLONE_SFTP_MD5SUM_COMMAND
@ -469,7 +472,9 @@ The command used to read md5 hashes. Leave blank for autodetect.
#### --sftp-sha1sum-command
The command used to read sha1 hashes. Leave blank for autodetect.
The command used to read sha1 hashes.
Leave blank for autodetect.
- Config: sha1sum_command
- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND
@ -507,7 +512,7 @@ The subsystem option is ignored when server_command is defined.
#### --sftp-use-fstat
If set use fstat instead of stat
If set use fstat instead of stat.
Some servers limit the number of open files and calling Stat after opening
the file will throw an error from the server. Setting this flag will call
@ -525,7 +530,7 @@ any given time.
#### --sftp-disable-concurrent-reads
If set don't use concurrent reads
If set don't use concurrent reads.
Normally concurrent reads are safe to use and not using them will
degrade performance, so this option is disabled by default.
@ -548,7 +553,7 @@ If concurrent reads are disabled, the use_fstat option is ignored.
#### --sftp-disable-concurrent-writes
If set don't use concurrent writes
If set don't use concurrent writes.
Normally rclone uses concurrent writes to upload files. This improves
the performance greatly, especially for distant servers.
@ -563,7 +568,7 @@ This option disables concurrent writes should that be necessary.
#### --sftp-idle-timeout
Max time before closing idle connections
Max time before closing idle connections.
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
@ -148,13 +148,13 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sharefile/sharefile.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to sharefile (Citrix Sharefile).
#### --sharefile-root-folder-id
ID of the root folder
ID of the root folder.
Leave blank to access "Personal Folders". You can use one of the
standard values here or any folder ID (long hex number ID).
@ -165,7 +165,7 @@ standard values here or any folder ID (long hex number ID).
- Default: ""
- Examples:
- ""
- Access the Personal Folders. (Default)
- Access the Personal Folders (default).
- "favorites"
- Access the Favorites folder.
- "allshared"
@ -175,7 +175,7 @@ standard values here or any folder ID (long hex number ID).
- "top"
- Access the home, favorites, and shared folders as well as the connectors.
### Advanced Options
### Advanced options
Here are the advanced options specific to sharefile (Citrix Sharefile).
@ -190,7 +190,9 @@ Cutoff for switching to multipart upload.
#### --sharefile-chunk-size
Upload chunk size. Must a power of 2 >= 256k.
Upload chunk size.
Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk
is buffered in memory one per transfer.
@ -219,7 +221,7 @@ be set manually to something like: https://XXX.sharefile.com
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
@ -137,6 +137,7 @@ Here are the standard options specific to sia (Sia Decentralized Cloud).
#### --sia-api-url
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.
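A hypothetical `rclone.conf` entry for a Sia daemon running on localhost (note the password must be obscured with `rclone obscure`; all values are placeholders):

```ini
[siaremote]
type = sia
api_url = http://127.0.0.1:9980
api_password = <obscured password>
```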
@ -148,6 +149,7 @@ Keep default if Sia daemon runs on localhost.
#### --sia-api-password
Sia Daemon API Password.
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@ -164,6 +166,7 @@ Here are the advanced options specific to sia (Sia Decentralized Cloud).
#### --sia-user-agent
Siad User Agent
Sia daemon requires the 'Sia-Agent' user agent by default for security.
- Config: user_agent
@ -175,7 +178,7 @@ Sia daemon requires the 'Sia-Agent' user agent by default for security
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
@ -121,7 +121,7 @@ deleted straight away.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sugarsync/sugarsync.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to sugarsync (Sugarsync).
@ -149,7 +149,7 @@ Leave blank to use rclone's.
#### --sugarsync-private-access-key
Sugarsync Private Access Key
Sugarsync Private Access Key.
Leave blank to use rclone's.
@ -168,13 +168,13 @@ otherwise put them in the deleted files.
- Type: bool
- Default: false
### Advanced Options
### Advanced options
Here are the advanced options specific to sugarsync (Sugarsync).
#### --sugarsync-refresh-token
Sugarsync refresh token
Sugarsync refresh token.
Leave blank normally, will be auto configured by rclone.
@ -185,7 +185,7 @@ Leave blank normally, will be auto configured by rclone.
#### --sugarsync-authorization
Sugarsync authorization
Sugarsync authorization.
Leave blank normally, will be auto configured by rclone.
@ -196,7 +196,7 @@ Leave blank normally, will be auto configured by rclone.
#### --sugarsync-authorization-expiry
Sugarsync authorization expiry
Sugarsync authorization expiry.
Leave blank normally, will be auto configured by rclone.
@ -207,7 +207,7 @@ Leave blank normally, will be auto configured by rclone.
#### --sugarsync-user
Sugarsync user
Sugarsync user.
Leave blank normally, will be auto configured by rclone.
@ -218,7 +218,7 @@ Leave blank normally, will be auto configured by rclone.
#### --sugarsync-root-id
Sugarsync root id
Sugarsync root id.
Leave blank normally, will be auto configured by rclone.
@ -229,7 +229,7 @@ Leave blank normally, will be auto configured by rclone.
#### --sugarsync-deleted-id
Sugarsync deleted folder id
Sugarsync deleted folder id.
Leave blank normally, will be auto configured by rclone.
@ -242,7 +242,7 @@ Leave blank normally, will be auto configured by rclone.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
@ -243,7 +243,7 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
@ -257,9 +257,10 @@ Get swift credentials from environment variables in standard OpenStack form.
- Default: false
- Examples:
- "false"
- Enter swift credentials in the next step
- Enter swift credentials in the next step.
- "true"
- Get swift credentials from environment vars. Leave other fields blank if using this.
- Get swift credentials from environment vars.
- Leave other fields blank if using this.
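With env_auth enabled, credentials might be supplied in the standard OpenStack form, for example (all values here are placeholders):

```shell
# Standard OpenStack v3 authentication environment variables
export OS_AUTH_URL=https://auth.example.com/v3
export OS_USERNAME=alice
export OS_PASSWORD=secret
export OS_TENANT_NAME=myproject
export OS_REGION_NAME=RegionOne
```

rclone reads these variables when the remote is configured with `env_auth = true`.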
#### --swift-user
@ -321,7 +322,7 @@ User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
#### --swift-tenant
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
- Config: tenant
- Env Var: RCLONE_SWIFT_TENANT
@ -330,7 +331,7 @@ Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TEN
#### --swift-tenant-id
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
- Config: tenant_id
- Env Var: RCLONE_SWIFT_TENANT_ID
@ -339,7 +340,7 @@ Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_I
#### --swift-tenant-domain
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
- Config: tenant_domain
- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
@ -348,7 +349,7 @@ Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
#### --swift-region
Region name - optional (OS_REGION_NAME)
Region name - optional (OS_REGION_NAME).
- Config: region
- Env Var: RCLONE_SWIFT_REGION
@@ -357,7 +358,7 @@ Region name - optional (OS_REGION_NAME)
#### --swift-storage-url
Storage URL - optional (OS_STORAGE_URL)
Storage URL - optional (OS_STORAGE_URL).
- Config: storage_url
- Env Var: RCLONE_SWIFT_STORAGE_URL
@@ -366,7 +367,7 @@ Storage URL - optional (OS_STORAGE_URL)
#### --swift-auth-token
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
- Config: auth_token
- Env Var: RCLONE_SWIFT_AUTH_TOKEN
@@ -375,7 +376,7 @@ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
#### --swift-application-credential-id
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
- Config: application_credential_id
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
@@ -384,7 +385,7 @@ Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
#### --swift-application-credential-name
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
- Config: application_credential_name
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
@@ -393,7 +394,7 @@ Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
#### --swift-application-credential-secret
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
- Config: application_credential_secret
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
@@ -402,7 +403,7 @@ Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
#### --swift-auth-version
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
- Config: auth_version
- Env Var: RCLONE_SWIFT_AUTH_VERSION
@@ -411,7 +412,7 @@ AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH
#### --swift-endpoint-type
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
- Config: endpoint_type
- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
@@ -427,7 +428,7 @@ Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
#### --swift-storage-policy
The storage policy to use when creating a new container
The storage policy to use when creating a new container.
This applies the specified storage policy when creating a new
container. The policy cannot be changed afterwards. The allowed
@@ -446,13 +447,15 @@ provider.
- "pca"
- OVH Public Cloud Archive
### Advanced Options
### Advanced options
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
#### --swift-leave-parts-on-error
If true avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.
If true avoid calling abort upload on a failure.
It should be set to true for resuming uploads across different sessions.
- Config: leave_parts_on_error
- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
@@ -493,7 +496,7 @@ copy operations.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
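
For orientation, the swift options touched by this diff could be combined in an rclone.conf entry along these lines (a sketch only; the remote name and values are assumptions, not taken from the diff):

```ini
# Hypothetical swift remote: credentials come from the standard
# OpenStack environment variables (OS_USERNAME etc.), so the other
# auth fields are left blank, and new containers use the OVH
# Public Cloud Archive storage policy.
[myswift]
type = swift
env_auth = true
storage_policy = pca
```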
View File
@@ -123,7 +123,7 @@ y/e/d> y
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/tardigrade/tardigrade.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).
@@ -143,7 +143,7 @@ Choose an authentication method.
#### --tardigrade-access-grant
Access Grant.
Access grant.
- Config: access_grant
- Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT
@@ -152,7 +152,9 @@ Access Grant.
#### --tardigrade-satellite-address
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
Satellite address.
Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
- Config: satellite_address
- Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS
@@ -168,7 +170,7 @@ Satellite Address. Custom satellite address should match the format: `<nodeid>@<
#### --tardigrade-api-key
API Key.
API key.
- Config: api_key
- Env Var: RCLONE_TARDIGRADE_API_KEY
@@ -177,7 +179,7 @@ API Key.
#### --tardigrade-passphrase
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
Encryption passphrase.
To access existing objects enter passphrase used for uploading.
- Config: passphrase
- Env Var: RCLONE_TARDIGRADE_PASSPHRASE
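
The tardigrade options above might be sketched in rclone.conf as follows (values are placeholders, not real credentials):

```ini
# Hypothetical tardigrade remote using an existing access grant.
# A custom satellite_address must match <nodeid>@<address>:<port>;
# it is omitted here so the default satellite is used.
[mytardigrade]
type = tardigrade
access_grant = <your-access-grant>
```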
View File
@@ -172,15 +172,15 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t
| rand (random) | Calls **all** and then randomizes. Returns only one upstream. |
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/union/union.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to union (Union merges the contents of several upstream fs).
#### --union-upstreams
List of space separated upstreams.
Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
- Config: upstreams
- Env Var: RCLONE_UNION_UPSTREAMS
@@ -216,7 +216,9 @@ Policy to choose upstream on SEARCH category.
#### --union-cache-time
Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
Cache time of usage and free space (in seconds).
This option is only useful when a path preserving policy is used.
- Config: cache_time
- Env Var: RCLONE_UNION_CACHE_TIME
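
To illustrate the upstreams syntax documented above, a union remote might be sketched like this (the remote names are invented for the example):

```ini
# Hypothetical union remote: three upstreams, one marked read-only
# with the :ro suffix, one quoted because its path contains a space.
# cache_time only matters when a path preserving policy is in use.
[myunion]
type = union
upstreams = remotea:test/dir "remoteb:test/space:ro dir" remotec:
cache_time = 120
```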
View File
@@ -99,20 +99,22 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to uptobox (Uptobox).
#### --uptobox-access-token
Your access Token, get it from https://uptobox.com/my_account
Your access token.
Get it from https://uptobox.com/my_account.
- Config: access_token
- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to uptobox (Uptobox).
@@ -120,7 +122,7 @@ Here are the advanced options specific to uptobox (Uptobox).
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
View File
@@ -108,25 +108,24 @@ appear on all objects, or only on objects which had a hash uploaded
with them.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/webdav/webdav.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to webdav (Webdav).
#### --webdav-url
URL of http host to connect to
URL of http host to connect to.
E.g. https://example.com.
- Config: url
- Env Var: RCLONE_WEBDAV_URL
- Type: string
- Default: ""
- Examples:
- "https://example.com"
- Connect to example.com
#### --webdav-vendor
Name of the Webdav site/service/software you are using
Name of the Webdav site/service/software you are using.
- Config: vendor
- Env Var: RCLONE_WEBDAV_VENDOR
@@ -138,15 +137,17 @@ Name of the Webdav site/service/software you are using
- "owncloud"
- Owncloud
- "sharepoint"
- Sharepoint Online, authenticated by Microsoft account.
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication. Usually self-hosted or on-premises.
- Sharepoint with NTLM authentication, usually self-hosted or on-premises
- "other"
- Other site/service or software
#### --webdav-user
User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
User name.
In case NTLM authentication is used, the username should be in the format 'Domain\User'.
- Config: user
- Env Var: RCLONE_WEBDAV_USER
@@ -166,20 +167,20 @@ Password.
#### --webdav-bearer-token
Bearer token instead of user/pass (e.g. a Macaroon)
Bearer token instead of user/pass (e.g. a Macaroon).
- Config: bearer_token
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to webdav (Webdav).
#### --webdav-bearer-token-command
Command to run to get a bearer token
Command to run to get a bearer token.
- Config: bearer_token_command
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND
@@ -190,7 +191,7 @@ Command to run to get a bearer token
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.
@@ -201,7 +202,7 @@ Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Per
#### --webdav-headers
Set HTTP headers for all transactions
Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions
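
As a sketch of how the webdav options above combine for an NTLM-authenticated Sharepoint (the remote name, URL, and credentials are illustrative assumptions):

```ini
# Hypothetical webdav remote for self-hosted Sharepoint with NTLM.
# Note the 'Domain\User' username format required by sharepoint-ntlm.
[mywebdav]
type = webdav
url = https://example.com
vendor = sharepoint-ntlm
user = MYDOMAIN\alice
```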
View File
@@ -114,13 +114,14 @@ Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to yandex (Yandex Disk).
#### --yandex-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@@ -130,7 +131,8 @@ Leave blank normally.
#### --yandex-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@@ -138,7 +140,7 @@ Leave blank normally.
- Type: string
- Default: ""
### Advanced Options
### Advanced options
Here are the advanced options specific to yandex (Yandex Disk).
@@ -154,6 +156,7 @@ OAuth Access Token as a JSON blob.
#### --yandex-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@@ -164,6 +167,7 @@ Leave blank to use the provider defaults.
#### --yandex-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@@ -175,7 +179,7 @@ Leave blank to use the provider defaults.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
View File
@@ -125,13 +125,14 @@ Unicode full-width characters are not supported at all and will be removed
from filenames during upload.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/zoho/zoho.go then run make backenddocs" >}}
### Standard Options
### Standard options
Here are the standard options specific to zoho (Zoho).
#### --zoho-client-id
OAuth Client Id
OAuth Client Id.
Leave blank normally.
- Config: client_id
@@ -141,7 +142,8 @@ Leave blank normally.
#### --zoho-client-secret
OAuth Client Secret
OAuth Client Secret.
Leave blank normally.
- Config: client_secret
@@ -171,7 +173,7 @@ browser.
- "com.au"
- Australia
### Advanced Options
### Advanced options
Here are the advanced options specific to zoho (Zoho).
@@ -187,6 +189,7 @@ OAuth Access Token as a JSON blob.
#### --zoho-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
@@ -197,6 +200,7 @@ Leave blank to use the provider defaults.
#### --zoho-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
@@ -208,7 +212,7 @@ Leave blank to use the provider defaults.
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
See the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_ZOHO_ENCODING

7539
rclone.1 generated
File diff suppressed because it is too large