From a0d4fdb2fa34c7f6ac5a86021d16403713499aaf Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Sat, 13 Apr 2019 11:01:58 +0100
Subject: [PATCH] Version v1.47.0

---
 MANUAL.html | 4146 ++++++++-------
 MANUAL.md | 694 ++-
 MANUAL.txt | 3775 ++++++++------
 RELEASE.md | 2 +-
 docs/content/b2.md | 13 +
 docs/content/changelog.md | 88 +-
 docs/content/commands/rclone.md | 19 +-
 docs/content/commands/rclone_about.md | 19 +-
 docs/content/commands/rclone_authorize.md | 19 +-
 docs/content/commands/rclone_cachestats.md | 19 +-
 docs/content/commands/rclone_cat.md | 23 +-
 docs/content/commands/rclone_check.md | 19 +-
 docs/content/commands/rclone_cleanup.md | 19 +-
 docs/content/commands/rclone_config.md | 19 +-
 docs/content/commands/rclone_config_create.md | 19 +-
 docs/content/commands/rclone_config_delete.md | 19 +-
 docs/content/commands/rclone_config_dump.md | 19 +-
 docs/content/commands/rclone_config_edit.md | 19 +-
 docs/content/commands/rclone_config_file.md | 19 +-
 .../commands/rclone_config_password.md | 19 +-
 .../commands/rclone_config_providers.md | 19 +-
 docs/content/commands/rclone_config_show.md | 19 +-
 docs/content/commands/rclone_config_update.md | 21 +-
 docs/content/commands/rclone_copy.md | 22 +-
 docs/content/commands/rclone_copyto.md | 19 +-
 docs/content/commands/rclone_copyurl.md | 23 +-
 docs/content/commands/rclone_cryptcheck.md | 19 +-
 docs/content/commands/rclone_cryptdecode.md | 19 +-
 docs/content/commands/rclone_dbhashsum.md | 19 +-
 docs/content/commands/rclone_dedupe.md | 19 +-
 docs/content/commands/rclone_delete.md | 19 +-
 docs/content/commands/rclone_deletefile.md | 19 +-
 .../commands/rclone_genautocomplete.md | 19 +-
 .../commands/rclone_genautocomplete_bash.md | 19 +-
 .../commands/rclone_genautocomplete_zsh.md | 19 +-
 docs/content/commands/rclone_gendocs.md | 19 +-
 docs/content/commands/rclone_hashsum.md | 19 +-
 docs/content/commands/rclone_link.md | 19 +-
 docs/content/commands/rclone_listremotes.md | 19 +-
 docs/content/commands/rclone_ls.md | 19 +-
 docs/content/commands/rclone_lsd.md | 19 +-
 docs/content/commands/rclone_lsf.md | 23 +-
 docs/content/commands/rclone_lsjson.md | 25 +-
 docs/content/commands/rclone_lsl.md | 19 +-
 docs/content/commands/rclone_md5sum.md | 19 +-
 docs/content/commands/rclone_mkdir.md | 19 +-
 docs/content/commands/rclone_mount.md | 19 +-
 docs/content/commands/rclone_move.md | 20 +-
 docs/content/commands/rclone_moveto.md | 19 +-
 docs/content/commands/rclone_ncdu.md | 19 +-
 docs/content/commands/rclone_obscure.md | 19 +-
 docs/content/commands/rclone_purge.md | 19 +-
 docs/content/commands/rclone_rc.md | 19 +-
 docs/content/commands/rclone_rcat.md | 19 +-
 docs/content/commands/rclone_rcd.md | 19 +-
 docs/content/commands/rclone_rmdir.md | 19 +-
 docs/content/commands/rclone_rmdirs.md | 19 +-
 docs/content/commands/rclone_serve.md | 19 +-
 docs/content/commands/rclone_serve_dlna.md | 19 +-
 docs/content/commands/rclone_serve_ftp.md | 19 +-
 docs/content/commands/rclone_serve_http.md | 19 +-
 docs/content/commands/rclone_serve_restic.md | 19 +-
 docs/content/commands/rclone_serve_webdav.md | 19 +-
 docs/content/commands/rclone_settier.md | 19 +-
 docs/content/commands/rclone_sha1sum.md | 19 +-
 docs/content/commands/rclone_size.md | 19 +-
 docs/content/commands/rclone_sync.md | 22 +-
 docs/content/commands/rclone_touch.md | 19 +-
 docs/content/commands/rclone_tree.md | 19 +-
 docs/content/commands/rclone_version.md | 19 +-
 docs/content/drive.md | 26 +-
 docs/content/ftp.md | 13 +
 docs/content/http.md | 24 +
 docs/content/koofr.md | 8 +-
 docs/content/s3.md | 4 +-
 docs/layouts/partials/version.html | 2 +-
 fs/version.go | 2 +-
 rclone.1 | 4559 ++++++++++-------
 78 files changed, 8967 insertions(+), 5632 deletions(-)

diff --git a/MANUAL.html b/MANUAL.html
index f95ca6dd8..b74934154 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -1,19 +1,24 @@
rclone(1) User Manual

Nick Craig-Wood

Apr 13, 2019

Rclone

Logo
Rclone is a command line program to sync files and directories to and from:

@@ -34,6 +39,7 @@
  • Hubic
  • Jottacloud
  • IBM COS S3
  • +
  • Koofr
  • Memset Memstore
  • Mega
  • Microsoft Azure Blob Storage
  • @@ -75,7 +81,6 @@
  • Home page
  • GitHub project page for source and bug tracker
  • Rclone Forum
  • -
  • Google+ page
  • Downloads
  • Install

    @@ -93,7 +98,7 @@
    curl https://rclone.org/install.sh | sudo bash

    For beta installation, run:

    curl https://rclone.org/install.sh | sudo bash -s beta
    -

    Note that this script checks the version of rclone installed first and won't re-download if not needed.

    +

    Note that this script checks the version of rclone installed first and won’t re-download if not needed.

    Linux installation from precompiled binary

    Fetch and unpack

    curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
    @@ -132,9 +137,9 @@ go build
     
    go get -u -v github.com/ncw/rclone

    and this will build the binary in $GOPATH/bin (~/go/bin/rclone by default) after downloading the source to $GOPATH/src/github.com/ncw/rclone (~/go/src/github.com/ncw/rclone by default).
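
If $GOPATH/bin is not already on your PATH, a quick way to check the freshly built binary is the sketch below; the paths are just the Go defaults, adjust to taste.

    export PATH="$PATH:$(go env GOPATH)/bin"
    rclone version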

    Installation with Ansible

    -

    This can be done with Stefan Weichinger's ansible role.

    +

    This can be done with Stefan Weichinger’s ansible role.

    Instructions

    -
      +
      1. git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory
      2. add the role to the hosts you want rclone installed to:
@@ -142,7 +147,7 @@ go build
      roles:
          - rclone

    Configure

    -

    First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)

    +

    First, you’ll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)

    The easiest way to make the config is to run rclone with the config option:

    rclone config

    See the following for detailed instructions for

    @@ -162,6 +167,7 @@ go build
  • HTTP
  • Hubic
  • Jottacloud
  • +
  • Koofr
  • Mega
  • Microsoft Azure Blob Storage
  • Microsoft OneDrive
  • @@ -179,11 +185,11 @@ go build

    Rclone syncs a directory tree from one storage system to another.

    Its syntax is like this

    Syntax: [options] subcommand <parameters> <parameters...>
    -

    Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.

    +

    Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg “drive:myfolder” to look at “myfolder” in Google drive.

    You can define as many storage paths as you like in the config file.

    Subcommands

    rclone uses a system of subcommands. For example

    -
    rclone ls remote:path # lists a re
    +
    rclone ls remote:path # lists a remote
     rclone copy /local/path remote:path # copies /local/path to the remote
     rclone sync /local/path remote:path # syncs /local/path to the remote

    rclone config

    @@ -196,12 +202,12 @@ rclone sync /local/path remote:path # syncs /local/path to the remote
    rclone copy

    Copy files from source to dest, skipping already copied

    Synopsis

    -

    Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

    -

    Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

    -

    If dest:path doesn't exist, it is created and the source:path contents go there.

    +

    Copy the source to the destination. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Doesn’t delete files from the destination.

    +

    Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents.

    +

    If dest:path doesn’t exist, it is created and the source:path contents go there.

    For example

    rclone copy source:sourcepath dest:destpath
    -

    Let's say there are two files in sourcepath

    +

    Let’s say there are two files in sourcepath

    sourcepath/one.txt
     sourcepath/two.txt

    This copies them to

    @@ -210,39 +216,42 @@ destpath/two.txt

    Not to

    destpath/sourcepath/one.txt
     destpath/sourcepath/two.txt
    -

    If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

    -

    See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

    +

    If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning “copy the contents of this directory”. This applies to all commands and whether you are talking about the source or destination.

    +

    See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:

    rclone copy --max-age 24h --no-traverse /path/to/src remote:

    Note: Use the -P/--progress flag to view real-time transfer statistics

    rclone copy source:path dest:path [flags]

    Options

    -
      -h, --help   help for copy
    +
          --create-empty-src-dirs   Create empty source dirs on destination after copy
    +  -h, --help                    help for copy
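
For example, to copy a local tree and also create its empty directories on the destination (the remote name and paths are illustrative):

    rclone copy --create-empty-src-dirs /path/to/src remote:backup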

    rclone sync

    Make source and dest identical, modifying destination only.

    Synopsis

    -

    Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

    +

    Sync the source to the destination, changing the destination only. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

    Important: Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.

    -

    Note that files in the destination won't be deleted if there were any errors at any point.

    -

    It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.

    -

    If dest:path doesn't exist, it is created and the source:path contents go there.

    +

    Note that files in the destination won’t be deleted if there were any errors at any point.

    +

    It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.

    +

    If dest:path doesn’t exist, it is created and the source:path contents go there.

    Note: Use the -P/--progress flag to view real-time transfer statistics

    rclone sync source:path dest:path [flags]

    Options

    -
      -h, --help   help for sync
    +
          --create-empty-src-dirs   Create empty source dirs on destination after sync
    +  -h, --help                    help for sync
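
A sketch of the workflow suggested above: preview the changes with --dry-run first, then let sync modify the destination (the remote name and paths are illustrative):

    rclone sync --dry-run /path/to/src remote:backup
    rclone sync /path/to/src remote:backup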

    rclone move

    Move files from source to dest.

    Synopsis

    Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.

If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist.

    Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

    -

    If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

    -

    See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

    -

    Important: Since this can cause data loss, test first with the --dry-run flag.

    +

    If you want to delete empty source directories after move, use the –delete-empty-src-dirs flag.

    +

    See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

    +

    Important: Since this can cause data loss, test first with the –dry-run flag.

    Note: Use the -P/--progress flag to view real-time transfer statistics.

    rclone move source:path dest:path [flags]

    Options

    -
          --delete-empty-src-dirs   Delete empty source dirs after move
    +
          --create-empty-src-dirs   Create empty source dirs on destination after move
    +      --delete-empty-src-dirs   Delete empty source dirs after move
       -h, --help                    help for move
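
For example, to move a tree and tidy up the directories it empties out on the source (the remote name and path are illustrative):

    rclone move --delete-empty-src-dirs /path/to/src remote:archive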

    rclone delete

    Remove the contents of path.

    @@ -255,7 +264,7 @@ destpath/sourcepath/two.txt
    rclone --dry-run --min-size 100M delete remote:path

    Then delete

    rclone --min-size 100M delete remote:path
    -

    That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.

    +

    That reads “delete everything with a minimum size of 100 MB”, hence delete all files bigger than 100MBytes.

    rclone delete remote:path [flags]

    Options

      -h, --help   help for delete
    @@ -267,26 +276,26 @@ rclone --dry-run --min-size 100M delete remote:path

    Options

      -h, --help   help for purge

    rclone mkdir

    -

    Make the path if it doesn't already exist.

    +

    Make the path if it doesn’t already exist.

    Synopsis

    -

    Make the path if it doesn't already exist.

    +

    Make the path if it doesn’t already exist.

    rclone mkdir remote:path [flags]

    Options

      -h, --help   help for mkdir

    rclone rmdir

    Remove the path if empty.

    Synopsis

    -

    Remove the path. Note that you can't remove a path with objects in it, use purge for that.

    +

    Remove the path. Note that you can’t remove a path with objects in it, use purge for that.

    rclone rmdir remote:path [flags]

    Options

      -h, --help   help for rmdir

    rclone check

    Checks the files in the source and destination match.

    Synopsis

    -

    Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.

    -

    If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

    -

    If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

    -

    If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.

    +

    Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don’t match. It doesn’t alter the source or destination.

    +

    If you supply the –size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

    +

    If you supply the –download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don’t support hashes or if you really want to check all the data.

    +

    If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
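
As a sketch, a quick one-way comparison that only looks at sizes might look like this (the paths are illustrative):

    rclone check --one-way --size-only /path/to/src remote:backup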

    rclone check source:path dest:path [flags]

    Options

          --download   Check by downloading rather than with hash.
    @@ -312,9 +321,9 @@ rclone --dry-run --min-size 100M delete remote:path
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    -

    Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    +

    Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.

    +

    Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    rclone ls remote:path [flags]

    Options

      -h, --help   help for ls
@@ -331,7 +340,7 @@ rclone --dry-run --min-size 100M delete remote:path
-1 2016-10-17 17:41:53        -1 1000files
-1 2017-01-03 14:40:54        -1 2500files
-1 2017-07-08 14:39:28        -1 4000files
-

    If you just want the directory names use "rclone lsf --dirs-only".

    +

    If you just want the directory names use “rclone lsf –dirs-only”.

Any of the filtering options can be applied to this command.

    There are several related list commands

    ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    -

    Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    +

    Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.

    +

    Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    rclone lsd remote:path [flags]

    Options

      -h, --help        help for lsd
    @@ -369,9 +378,9 @@ rclone --dry-run --min-size 100M delete remote:path
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    -

    Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    +

    Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.

    +

    Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    rclone lsl remote:path [flags]

    Options

      -h, --help   help for lsl
@@ -406,7 +415,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
-

    If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.

    +

    If you supply the –check flag, then it will do an online check to compare your version with the latest release and the latest beta.

    $ rclone version --check
     yours:  1.42.0.6
     latest: 1.42          (released 2018-06-16)
    @@ -515,13 +524,13 @@ Other:   8.241G
  • Objects: total number of objects in the storage
  • Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.

    -

    Use the --full flag to see the numbers written out in full, eg

    +

    Use the –full flag to see the numbers written out in full, eg

    Total:   18253611008
     Used:    7993453766
     Free:    1411001220
     Trashed: 104857602
     Other:   8849156022
    -

    Use the --json flag for a computer readable output, eg

    +

    Use the –json flag for a computer readable output, eg

    {
         "total": 18253611008,
         "used": 7993453766,
    @@ -558,7 +567,7 @@ Other:   8849156022
    rclone cat remote:path/to/dir

    Or like this to output any .txt files in dir or subdirectories.

    rclone --include "*.txt" cat remote:path/to/dir
    -

    Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.

    +

    Use the –head flag to print characters only at the start, –tail for the end and –offset and –count to print a section in the middle. Note that if offset is negative it will count from the end, so –offset -1 –count 1 is equivalent to –tail 1.

    rclone cat remote:path [flags]

    Options

          --count int    Only print N characters. (default -1)
    @@ -610,7 +619,7 @@ Other:   8849156022

    rclone config password

    Update password in an existing remote.

    Synopsis

    -

    Update an existing remote's password. The password should be passed in in pairs of .

    +

Update an existing remote’s password. The password should be passed in pairs of <key> <value>.

    For example to set password of a remote of name myremote you would do:

    rclone config password myremote fieldname mypassword
    rclone config password <name> [<key> <value>]+ [flags]
    @@ -633,10 +642,10 @@ Other: 8849156022

    rclone config update

    Update options in an existing remote.

    Synopsis

    -

    Update an existing remote's options. The options should be passed in in pairs of .

    +

Update an existing remote’s options. The options should be passed in pairs of <key> <value>.

    For example to update the env_auth field of a remote of name myremote you would do:

    rclone config update myremote swift env_auth true
    -

    If the remote uses oauth the token will be updated, if you don't require this add an extra parameter thus:

    +

    If the remote uses oauth the token will be updated, if you don’t require this add an extra parameter thus:

    rclone config update myremote swift env_auth true config_refresh_token false
    rclone config update <name> [<key> <value>]+ [flags]

    Options

@@ -655,7 +664,7 @@ Other: 8849156022
if src is directory
    copy it to dst, overwriting existing files if they exist
    see copy command for full details
-

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

    +

    This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. It doesn’t delete files from the destination.

    Note: Use the -P/--progress flag to view real-time transfer statistics

    rclone copyto source:path dest:path [flags]

    Options

    @@ -678,7 +687,7 @@ if src is directory

    You can use it like this also, but that will involve downloading all the files in remote:path.

    rclone cryptcheck remote:path encryptedremote:path

    After it has run it will log the status of the encryptedremote:.

    -

    If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.

    +

    If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.

    rclone cryptcheck remote:path cryptedremote:path [flags]

    Options

      -h, --help      help for cryptcheck
    @@ -687,7 +696,7 @@ if src is directory
     

    Cryptdecode returns unencrypted file names.

    Synopsis

    rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    -

    If you supply the --reverse flag, it will return encrypted file names.

    +

    If you supply the –reverse flag, it will return encrypted file names.

    use it like this

    rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
     
    @@ -706,14 +715,14 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2

    rclone deletefile

    Remove a single file from remote.

    Synopsis

    -

    Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

    +

    Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn’t obey include/exclude filters - if the specified file exists, it will always be removed.

    rclone deletefile remote:path [flags]

    Options

      -h, --help   help for deletefile

    rclone genautocomplete

    Output completion script for a given shell.

    Synopsis

    -

    Generates a shell completion script for rclone. Run with --help to list the supported shells.

    +

    Generates a shell completion script for rclone. Run with –help to list the supported shells.

    Options

      -h, --help   help for genautocomplete

    rclone genautocomplete bash

    @@ -769,7 +778,7 @@ Supported hashes are:

    rclone link will create or retrieve a public link to the given file or folder.

    rclone link remote:path/to/file
     rclone link remote:path/to/folder/
    -

    If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.

    +

    If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.

    rclone link remote:path [flags]

    Options

      -h, --help   help for link
@@ -793,14 +802,16 @@ canole
diwogej7
ferejej3gux/
fubuwic
    -

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    +

    Use the –format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    p - path
     s - size
     t - modification time
     h - hash
    -i - ID of object if known
    -m - MimeType of object if known
    -

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

+i - ID of object
+o - Original ID of underlying object
+m - MimeType of object if known
+e - encrypted name
+

    So if you wanted the path, size and modification time, you would use –format “pst”, or maybe –format “tsp” to put the path last.

    Eg

    $ rclone lsf  --format "tsp" swift:bucket
     2016-06-25 18:55:41;60295;bevajer5jef
@@ -808,7 +819,7 @@ m - MimeType of object if known
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic
-
    2016-06-25 18:55:43;94467;diwogej7 2018-04-26 08:50:45;0;ferejej3gux/ 2016-06-25 18:55:40;37600;fubuwic -

    If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

    +

    If you specify “h” in the format you will get the MD5 hash by default, use the “–hash” flag to change which hash you want. Note that this can be returned as an empty string if it isn’t available on the object (and for directories), “ERROR” if there was an error reading it from the object and “UNSUPPORTED” if that object does not support that hash type.

    For example to emulate the md5sum command you can use

    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

    Eg

@@ -818,8 +829,8 @@ cd65ac234e6fea5925974a51cdd865cc  canole
03b5341b4f234b9d984d03ad076bae91  diwogej7
8fd37c3810dd660778137ac3a66cc06d  fubuwic
99713e14a4c4ff553acaf1930fad985b  gixacuh7ku
-

    (Though "rclone md5sum ." is an easier way of typing this.)

    -

    By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

    +

    (Though “rclone md5sum .” is an easier way of typing this.)

    +

    By default the separator is “;” this can be changed with the –separator flag. Note that separators aren’t escaped in the path so putting it last is a good strategy.

    Eg

    $ rclone lsf  --separator "," --format "tshp" swift:bucket
     2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
    @@ -833,7 +844,7 @@ cd65ac234e6fea5925974a51cdd865cc  canole
     test.log,22355
     test.sh,449
     "this file contains a comma, in the file name.txt",6
    -

    Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from flag.

    +

    Note that the –absolute parameter is useful for making lists of files to pass to an rclone copy with the –files-from flag.

    For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):

    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
     rclone copy --files-from new_files /path/to/local remote:path
    @@ -847,9 +858,9 @@ rclone copy --files-from new_files /path/to/local remote:path
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    -

    Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    +

    Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.

    +

    Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    rclone lsf remote:path [flags]

    Options

          --absolute           Put a leading / in front of path names.
    @@ -867,12 +878,14 @@ rclone copy --files-from new_files /path/to/local remote:path

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this

    -

    { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }

    -

    If --hash is not specified the Hashes property won't be emitted.

    -

    If --no-modtime is specified then ModTime will be blank.

    -

    If --encrypted is not specified the Encrypted won't be emitted.

    -

    The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.

    -

    The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00").

    +

{
  "Hashes" : {
     "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
     "MD5" : "b1946ac92492d2347c6235b4d2611184",
     "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsDir" : false,
  "MimeType" : "application/octet-stream",
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6
}

    +

    If –hash is not specified the Hashes property won’t be emitted.

    +

    If –no-modtime is specified then ModTime will be blank.

    +

    If –encrypted is not specified the Encrypted won’t be emitted.

    +

    If –dirs-only is not specified files in addition to directories are returned

    +

    If –files-only is not specified directories in addition to the files will be returned.

    +

    The Path field will only show folders below the remote path being listed. If “remote:path” contains the file “subfolder/file.txt”, the Path for “file.txt” will be “subfolder/file.txt”, not “remote:path/subfolder/file.txt”. When used without –recursive the Path will always be the same as Name.

    +

    The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown (“2017-05-31T16:15:57+01:00”).

    The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
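
As a sketch, assuming the jq tool is available, the JSON output can be filtered down to just the file paths like this (the remote path is illustrative):

    rclone lsjson -R remote:path | jq -r '.[] | select(.IsDir == false) | .Path'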

Any of the filtering options can be applied to this command.

    There are several related list commands

    @@ -884,12 +897,14 @@ rclone copy --files-from new_files /path/to/local remote:path
  • lsjson to list objects and directories in JSON format
  • ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

    -

    Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

    -

    The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

    -

    Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    +

    Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.

    +

    The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.

    +

    Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

    rclone lsjson remote:path [flags]

    Options

    -
      -M, --encrypted    Show the encrypted names.
    +
          --dirs-only    Show only directories in the listing.
    +  -M, --encrypted    Show the encrypted names.
    +      --files-only   Show only files in the listing.
           --hash         Include hashes in the output (may take longer).
       -h, --help         help for lsjson
           --no-modtime   Don't read the modification time (can speed things up).
    @@ -898,14 +913,14 @@ rclone copy --files-from new_files /path/to/local remote:path

    rclone mount

    Mount the remote as file system on a mountpoint.

    Synopsis

    -

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    +

    rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file system with FUSE.

    First set up your remote using rclone config. Check it works with rclone ls etc.

    Start the mount like this

    rclone mount remote:path/to/files /path/to/local/mount

    Or on Windows like this where X: is an unused drive letter

    rclone mount remote:path/to/files X:

    When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.

    -

    The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually with

    +

    The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user’s responsibility to stop the mount manually with

    # Linux
     fusermount -u /path/to/local/mount
     # OS X
    @@ -917,28 +932,28 @@ umount /path/to/local/mount

    Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.

    The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager.

    Limitations

    -

    Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.

    -

    The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

    +

    Without the use of “–vfs-cache-mode” this can only write files sequentially, it can only seek when reading. This means that many applications won’t work with their files on an rclone mount without “–vfs-cache-mode writes” or “–vfs-cache-mode full”. See the File Caching section for more info.

    +

    The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won’t work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won’t work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

    Only supported on Linux, FreeBSD, OS X and Windows at the moment.

    rclone mount vs rclone sync/copy

    -

    File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.

    +

    File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can’t use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.

    Attribute caching

    -

    You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.

    -

    The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.

    +

    You can use the flag –attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.

    +

    The default is “1s” which caches files just long enough to avoid too many callbacks to rclone from the kernel.

    In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.

    -

    The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.

    -

    If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

    -

    If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

    +

    The kernel can cache the info about a file for the time given by “–attr-timeout”. You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With “–attr-timeout 1s” this is very unlikely but not impossible. The higher you set “–attr-timeout” the more likely it is. The default setting of “1s” is the lowest setting which mitigates the problems above.

    +

    If you set it higher (‘10s’ or ‘1m’ say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

    +

    If files don’t change on the remote outside of the control of rclone then there is no chance of corruption.

    This is the same as setting the attr_timeout option in mount.fuse.
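
For example, on a remote that only rclone writes to, a longer timeout reduces kernel callbacks; the mountpoint and remote here are illustrative:

    rclone mount --attr-timeout 10s remote:path /path/to/local/mount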

    Filters

    Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

    systemd

    When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
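
A minimal unit along these lines might look like the sketch below; the unit name, binary path, remote and mountpoint are all illustrative and will need adapting:

    sudo tee /etc/systemd/system/rclone-mount.service >/dev/null <<'EOF'
    [Unit]
    Description=rclone mount of remote:path
    After=network-online.target

    [Service]
    Type=notify
    ExecStart=/usr/bin/rclone mount remote:path /path/to/local/mount
    ExecStop=/bin/fusermount -u /path/to/local/mount

    [Install]
    WantedBy=default.target
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start rclone-mount.service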

    chunked reading

    -

    --vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.

    -

    When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.

    -

    With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

    -

    Chunked reading will only work with --vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with --vfs-cache-mode full.

    +

    –vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.

    +

    When –vfs-read-chunk-size-limit is also specified and greater than –vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.

    +

    With –vfs-read-chunk-size 100M and –vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When –vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

    +

    Chunked reading will only work with –vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with –vfs-cache-mode full.
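
For example, to start with 100M chunks and let them grow up to 500M (values taken from above, the remote and mountpoint are illustrative):

    rclone mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M remote:path /path/to/local/mount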

    Directory Cache

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
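
A minimal sketch of that, assuming a single rclone process and that the pidof tool is available:

    kill -SIGHUP $(pidof rclone)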

    @@ -949,11 +964,11 @@ umount /path/to/local/mount
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    File Buffering

    The --buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

    -

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

    +

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

    File Caching

    These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.

    -

    You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    +

    You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
     --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    @@ -962,39 +977,39 @@ umount /path/to/local/mount
    --vfs-cache-max-size int Max total size of objects in the cache. (default off)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    -

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    -

    --vfs-cache-mode off

    +

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.

    +

    If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    +

    –vfs-cache-mode off

    In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.

    This will mean some operations are not possible

    -

    --vfs-cache-mode minimal

    -

    This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    +

    –vfs-cache-mode minimal

    +

    This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    These operations are not possible

    -

    --vfs-cache-mode writes

    +

    –vfs-cache-mode writes

    In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

    This mode should support all normal file system operations.

    -

    If an upload fails it will be retried up to --low-level-retries times.

    -

    --vfs-cache-mode full

    +

    If an upload fails it will be retried up to –low-level-retries times.

    +

    –vfs-cache-mode full

    In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.

    This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

    In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.

    This mode should support all normal file system operations.

    -

    If an upload or download fails it will be retried up to --low-level-retries times.

    +

    If an upload or download fails it will be retried up to –low-level-retries times.
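
Putting this together, a more compatible mount that buffers writes through the VFS cache and caps the cache size might look like this (the values, remote and paths are illustrative):

    rclone mount --vfs-cache-mode writes --vfs-cache-max-size 10G remote:path /path/to/local/mount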

    rclone mount remote:path /path/to/mountpoint [flags]

    Options

          --allow-non-empty                        Allow mounting over a non-empty directory.
    @@ -1042,8 +1057,8 @@ umount /path/to/local/mount
if src is directory
    move it to dst, overwriting existing files if they exist
    see move command for full details
-

    This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

    -

    Important: Since this can cause data loss, test first with the --dry-run flag.

    +

    This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

    +

    Important: Since this can cause data loss, test first with the –dry-run flag.

    Note: Use the -P/--progress flag to view real-time transfer statistics.

    rclone moveto source:path dest:path [flags]

    Options

    @@ -1051,10 +1066,10 @@ if src is directory

    rclone ncdu

    Explore a remote with a text based user interface.

    Synopsis

    -

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    +

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - “What is using all my disk space?”.

    To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

    -

    Here are the keys - press '?' to toggle the help on and off

    +

    Here are the keys - press ‘?’ to toggle the help on and off

     ↑,↓ or k,j to Move
      →,l to enter
      ←,h to return
    @@ -1066,7 +1081,7 @@ if src is directory
      ? to toggle help on and off
      q/ESC/c-C to quit

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.

    -

    Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.

    +

    Note that it might take some time to delete big files/folders. The UI won’t respond in the meantime since the deletion is done synchronously.

    rclone ncdu remote:path [flags]

    Options

      -h, --help   help for ncdu
    @@ -1080,13 +1095,13 @@ if src is directory

    rclone rc

    Run a command against a running rclone.

    Synopsis

    -

    This runs a command against a running rclone. Use the --url flag to specify an non default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"

    -

    A username and password can be passed in with --user and --pass.

    -

    Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.

    +

This runs a command against a running rclone. Use the –url flag to specify a non-default URL to connect on. This can be either a “:port” which is taken to mean “http://localhost:port” or a “host:port” which is taken to mean “http://host:port”.

    +

    A username and password can be passed in with –user and –pass.

    +

    Note that –rc-addr, –rc-user, –rc-pass will be read also for –url, –user, –pass.

    Arguments should be passed in as parameter=value.

    The result will be returned as a JSON object by default.

    -

    The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

    -

    Use "rclone rc" to see a list of all possible commands.

    +

    The –json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

    +

    Use “rclone rc” to see a list of all possible commands.
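
For example, assuming the remote control server is running with default settings, the bandwidth limit could be changed either with key=value arguments or with a JSON blob (core/bwlimit and its rate parameter are the same command shown later in this manual; the values are illustrative):

    rclone rc core/bwlimit rate=1M
    rclone rc --json '{"rate": "off"}' core/bwlimit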

    rclone rc commands parameter [flags]

    Options

      -h, --help          help for rc
    @@ -1103,7 +1118,7 @@ if src is directory
     ffmpeg - | rclone rcat remote:path/to/file

    If the remote file already exists, it will be overwritten.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote - please see the documentation for your remote. Generally speaking, setting this cutoff too high will decrease your performance.

    -

    Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move it to the destination.

    +

Note that the upload cannot be retried either, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you’re better off caching it locally and then using rclone move to send it to the destination.
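
A minimal sketch of caching locally and then moving, using a placeholder command and paths, might look like this:

    some-command > /tmp/output.dat
    rclone move /tmp/output.dat remote:path/to/dir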

    rclone rcat remote:path [flags]

    Options

      -h, --help   help for rcat
    @@ -1121,7 +1136,7 @@ ffmpeg - | rclone rcat remote:path/to/file

    Remove empty directories under the path.

    Synopsis

This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path itself if it has nothing in it.

    -

    If you supply the --leave-root flag, it will not remove the root directory.

    +

    If you supply the –leave-root flag, it will not remove the root directory.

    This is useful for tidying up remotes that rclone has left a lot of empty directories in.
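
For example, a hypothetical clean-up that removes empty directories but keeps the top level directory itself might look like:

    rclone rmdirs remote:path/to/dir --leave-root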

    rclone rmdirs remote:path [flags]

    Options

    @@ -1142,7 +1157,7 @@ ffmpeg - | rclone rcat remote:path/to/file

    rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

    Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.

    +

    Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs.
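
For example, to serve a remote (remote:media is purely illustrative here) on the default DLNA port on all interfaces:

    rclone serve dlna remote:media --addr :7879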

    Directory Cache

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    @@ -1153,11 +1168,11 @@ ffmpeg - | rclone rcat remote:path/to/file
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    -

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

    +

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

    File Caching

    These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.

    -

    You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    +

    You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
     --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    @@ -1166,39 +1181,39 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int             Max total size of objects in the cache. (default off)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    -

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    -

    --vfs-cache-mode off

    +

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.

    +

    If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    +

    –vfs-cache-mode off

    In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.

    This will mean some operations are not possible

    -

    --vfs-cache-mode minimal

    -

    This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    +

    –vfs-cache-mode minimal

    +

This is very similar to “off” except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

    These operations are not possible

    -

    --vfs-cache-mode writes

    +

    –vfs-cache-mode writes

    In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

    This mode should support all normal file system operations.

    -

    If an upload fails it will be retried up to --low-level-retries times.

    -

    --vfs-cache-mode full

    +

    If an upload fails it will be retried up to –low-level-retries times.

    +

    –vfs-cache-mode full

    In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.

    This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

    In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.

    This mode should support all normal file system operations.

    -

    If an upload or download fails it will be retried up to --low-level-retries times.

    +

    If an upload or download fails it will be retried up to –low-level-retries times.
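
As a sketch, serving with writes cached to disk might look like the following (the remote name is illustrative; the same flag applies to the other commands that use the VFS layer):

    rclone serve dlna remote:media --vfs-cache-mode writes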

    rclone serve dlna remote:path [flags]

    Options

          --addr string                            ip:port or :port to bind the DLNA http server to. (default ":7879")
    @@ -1225,11 +1240,11 @@ ffmpeg - | rclone rcat remote:path/to/file

    Synopsis

rclone serve ftp implements a basic FTP server to serve the remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type ftp to read and write it.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    Authentication

    By default this will serve files without needing a login.

    -

    You can set a single username and password with the --user and --pass flags.

    +

    You can set a single username and password with the –user and –pass flags.
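
For example, a server restricted to a single user might be started like this (the username and password are placeholders):

    rclone serve ftp remote:path --user myuser --pass mypassword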

    Directory Cache

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    @@ -1240,11 +1255,11 @@ ffmpeg - | rclone rcat remote:path/to/file
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    -

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

    +

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

    File Caching

    These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.

    -

    You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    +

    You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
     --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    @@ -1253,39 +1268,39 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int             Max total size of objects in the cache. (default off)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    -

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    -

    --vfs-cache-mode off

    +

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.

    +

    If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    +

    –vfs-cache-mode off

    In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.

    This will mean some operations are not possible

    -

    --vfs-cache-mode minimal

    -

    This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    +

    –vfs-cache-mode minimal

    +

This is very similar to “off” except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

    These operations are not possible

    -

    --vfs-cache-mode writes

    +

    –vfs-cache-mode writes

    In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

    This mode should support all normal file system operations.

    -

    If an upload fails it will be retried up to --low-level-retries times.

    -

    --vfs-cache-mode full

    +

    If an upload fails it will be retried up to –low-level-retries times.

    +

    –vfs-cache-mode full

    In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.

    This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

    In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.

    This mode should support all normal file system operations.

    -

    If an upload or download fails it will be retried up to --low-level-retries times.

    +

    If an upload or download fails it will be retried up to –low-level-retries times.

    rclone serve ftp remote:path [flags]

    Options

          --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:2121")
    @@ -1314,27 +1329,27 @@ ffmpeg - | rclone rcat remote:path/to/file

    Serve the remote over HTTP.

    Synopsis

    rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    -

    You can use the filter flags (eg --include, --exclude) to control what is served.

    +

    You can use the filter flags (eg –include, –exclude) to control what is served.

    The server will log errors. Use -v to see access logs.

    -

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    +

    –bwlimit will be respected for file transfers. Use –stats to control the stats printing.

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.

    +

    Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    +

    Use –realm to set the authentication realm.

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.

    +

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
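
As a sketch, serving over https with a certificate and key file (the file names are placeholders) might look like:

    rclone serve http remote:path --cert server.crt --key server.key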

    Directory Cache

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    @@ -1345,11 +1360,11 @@ htpasswd -B htpasswd anotherUser
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    -

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

    +

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

    File Caching

    These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.

    -

    You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    +

    You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
     --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    @@ -1358,39 +1373,39 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int             Max total size of objects in the cache. (default off)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    -

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    -

    --vfs-cache-mode off

    +

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.

    +

    If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    +

    –vfs-cache-mode off

    In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.

    This will mean some operations are not possible

    -

    --vfs-cache-mode minimal

    -

    This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    +

    –vfs-cache-mode minimal

    +

This is very similar to “off” except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

    These operations are not possible

    -

    --vfs-cache-mode writes

    +

    –vfs-cache-mode writes

    In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

    This mode should support all normal file system operations.

    -

    If an upload fails it will be retried up to --low-level-retries times.

    -

    --vfs-cache-mode full

    +

    If an upload fails it will be retried up to –low-level-retries times.

    +

    –vfs-cache-mode full

    In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.

    This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

    In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.

    This mode should support all normal file system operations.

    -

    If an upload or download fails it will be retried up to --low-level-retries times.

    +

    If an upload or download fails it will be retried up to –low-level-retries times.

    rclone serve http remote:path [flags]

    Options

          --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    @@ -1423,24 +1438,24 @@ htpasswd -B htpasswd anotherUser
--vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)

    rclone serve restic

    -

    Serve the remote for restic's REST API.

    +

    Serve the remote for restic’s REST API.

    Synopsis

    -

    rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    +

    rclone serve restic implements restic’s REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    Restic is a command line program for doing backups.

    The server will log errors. Use -v to see access logs.

    -

    --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

    +

    –bwlimit will be respected for file transfers. Use –stats to control the stats printing.

    Setting up rclone for use by restic

    First set up a remote for your chosen cloud provider.

    -

    Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

    +

    Once you have set up the remote, check it is working with, for example “rclone lsd remote:”. You may have called the remote something other than “remote:” - just substitute whatever you called it in the following instructions.

    Now start the rclone restic server

    rclone serve restic -v remote:backup
    -

    Where you can replace "backup" in the above by whatever path in the remote you wish to use.

    -

    By default this will serve on "localhost:8080" you can change this with use of the "--addr" flag.

    +

    Where you can replace “backup” in the above by whatever path in the remote you wish to use.

    +

    By default this will serve on “localhost:8080” you can change this with use of the “–addr” flag.

    You might wish to start this server on boot.

    Setting up restic to use rclone

    Now you can follow the restic instructions on setting up restic.

    Note that you will need restic 0.8.2 or later to interoperate with rclone.

    -

    For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

    +

    For the example above you will want to use “http://localhost:8080/” as the URL for the REST server.

    For example:

    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
     $ export RESTIC_PASSWORD=yourpassword
    @@ -1463,23 +1478,23 @@ snapshot 45c8fdd8 saved
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/ # backup user2 stuff

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.

    +

    Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    +

    Use –realm to set the authentication realm.

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.

    +

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    rclone serve restic remote:path [flags]

    Options

          --addr string                     IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    @@ -1501,28 +1516,28 @@ htpasswd -B htpasswd anotherUser

    Synopsis

    rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.

    Webdav options

    -

    --etag-hash

    +

    –etag-hash

    This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

    -

    If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".

    -

    Use "rclone hashsum" to see the full list.

    +

    If this flag is set to “auto” then rclone will choose the first supported hash on the backend or you can use a named hash such as “MD5” or “SHA-1”.

    +

    Use “rclone hashsum” to see the full list.
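
For example, to base the ETag on MD5 hashes (assuming the backend supports MD5):

    rclone serve webdav remote:path --etag-hash MD5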

    Server options

    -

    Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    -

    If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    -

    --server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    -

    --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    +

    Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

    +

    If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

    +

    –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

    +

    –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

    Authentication

    By default this will serve files without needing a login.

    -

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

    -

    Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    +

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.

    +

    Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    -

    Use --realm to set the authentication realm.

    +

    Use –realm to set the authentication realm.

    SSL/TLS

    -

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

    -

    --cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    +

    By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.

    +

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

    Directory Cache

    Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.

    Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    @@ -1533,11 +1548,11 @@ htpasswd -B htpasswd anotherUser
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    -

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

    +

    Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

    File Caching

    These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.

    -

    You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    +

    You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

    Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

    --cache-dir string                   Directory rclone will use for caching.
     --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    @@ -1546,39 +1561,39 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int             Max total size of objects in the cache. (default off)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

    The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

    -

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.

    -

    If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    -

    --vfs-cache-mode off

    +

    Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.

    +

    If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

    +

    –vfs-cache-mode off

    In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.

    This will mean some operations are not possible

    -

    --vfs-cache-mode minimal

    -

    This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

    +

    –vfs-cache-mode minimal

    +

This is very similar to “off” except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

    These operations are not possible

    -

    --vfs-cache-mode writes

    +

    –vfs-cache-mode writes

    In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

    This mode should support all normal file system operations.

    -

    If an upload fails it will be retried up to --low-level-retries times.

    -

    --vfs-cache-mode full

    +

    If an upload fails it will be retried up to –low-level-retries times.

    +

    –vfs-cache-mode full

    In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.

    This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

    In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.

    This mode should support all normal file system operations.

    -

    If an upload or download fails it will be retried up to --low-level-retries times.

    +

    If an upload or download fails it will be retried up to –low-level-retries times.

    rclone serve webdav remote:path [flags]

    Options

          --addr string                            IPaddress:Port or :Port to bind server to. (default "localhost:8080")
    @@ -1649,8 +1664,8 @@ htpasswd -B htpasswd anotherUser
└── file5

1 directories, 5 files

-

    You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.

    -

    The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.

    +

    You can use any of the filtering options with the tree command (eg –include and –exclude). You can also use –fast-list.

    +

The tree command has many options for controlling the listing, which are compatible with those of the Unix tree command. Note that not all of them have short options as they conflict with rclone’s short options.
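
For example, combining tree with a filter and --fast-list (the pattern is illustrative):

    rclone tree remote:path --include "*.jpg" --fast-list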

    rclone tree remote:path [flags]

    Options

      -a, --all             All files are listed (list . files too).
    @@ -1675,7 +1690,7 @@ htpasswd -B htpasswd anotherUser
-U, --unsorted        Leave files unsorted.
      --version         Sort files alphanumerically by version.

    Copying single files

    -

    rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

    +

    rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn’t.

    For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

    rclone copy remote:test.jpg /tmp/download

    The file test.jpg will be placed inside /tmp/download.

    @@ -1689,16 +1704,22 @@ htpasswd -B htpasswd anotherUser

    /path/to/dir

    This refers to the local file system.

On Windows \ may be used instead of / in local paths only; non-local paths must use /.

    -

    These paths needn't start with a leading / - if they don't then they will be relative to the current directory.

    +

    These paths needn’t start with a leading / - if they don’t then they will be relative to the current directory.

    remote:path/to/dir

    This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).

    remote:/path/to/dir

    -

    On most backends this is refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.

    +

On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your “home” directory and paths with a leading / will refer to the root.

    :backend:path/to/dir

    This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

    -

    Eg

    +

    Here are some examples:

    rclone lsd --http-url https://pub.rclone.org :http:
    -

    Which lists all the directories in pub.rclone.org.

    +

    To list all the directories in the root of https://pub.rclone.org/.

    +
    rclone lsf --http-url https://example.com :http:path/to/dir
    +

    To list files and directories in https://example.com/path/to/dir/

    +
    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir
    +

    To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

    +
    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir
    +

    To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.

    Quoting and the shell

    When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

    Here are some gotchas which may help users unfamiliar with the shell rules

    @@ -1707,11 +1728,11 @@ htpasswd -B htpasswd anotherUser
    rclone copy 'Important files?' remote:backup

    If you want to send a ' you will need to use ", eg

    rclone copy "O'Reilly Reviews" remote:backup
    -

    The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

    +

    The rules for quoting metacharacters are complicated and if you want the full details you’ll have to consult the manual page for your shell.

    Windows

    If your names have spaces in you need to put them in ", eg

    rclone copy "E:\folder name\folder name\folder name" remote:backup
    -

    If you are using the root directory on its own then don't quote it (see #464 for why), eg

    +

    If you are using the root directory on its own then don’t quote it (see #464 for why), eg

    rclone copy E:\ remote:backup

    Copying files or directories with : in the names

    rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

    @@ -1721,12 +1742,12 @@ htpasswd -B htpasswd anotherUser
    rclone sync /full/path/to/sync:me remote:path

    Server Side Copy

    Most remotes (but not all - see the overview) support server side copy.

    -

    This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

    +

    This means if you want to copy one folder to another then rclone won’t download all the files and re-upload them; it will instruct the server to copy them in place.

    Eg

    rclone copy s3:oldbucket s3:newbucket

    Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

    -

    Remotes which don't support server side copy will download and re-upload in this case.

    -

    Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

    +

    Remotes which don’t support server side copy will download and re-upload in this case.

    +

    Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if remote doesn’t support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

    Server side copies will only be attempted if the remote names are the same.

    This can be used when scripting to make aged backups efficiently, eg

    rclone sync remote:current-backup remote:previous-backup
    @@ -1734,64 +1755,64 @@ rclone sync /path/to/files remote:current-backup

    Options

    Rclone has a number of options to control its behaviour.

    Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.

    -

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

    +

    Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.

    Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.

    -

    --backup-dir=DIR

    +

    –backup-dir=DIR

    When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

    If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

    The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.

    For example

    rclone sync /path/to/local remote:current --backup-dir remote:old

    will sync /path/to/local to remote:current, but for any files which would have been updated or deleted will be stored in remote:old.

    -

    If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.

    -

    --bind string

    -

    Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

    -

    --bwlimit=BANDWIDTH_SPEC

    +

    If running rclone from a script you might want to use today’s date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today’s date.
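
One way to do that from a shell script (the date format and remote names are illustrative) is to substitute the date into the backup directory name:

    rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)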

    +

    –bind string

    +

    Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn’t resolve or resolves to more than one IP address it will give an error.
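
For example, to force outgoing connections to come from a particular local address (using the example address from above, with illustrative paths):

    rclone copy /path/to/local remote:dest --bind 1.2.3.4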

    +

    –bwlimit=BANDWIDTH_SPEC

    This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.

    Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means to not limit bandwidth.

    For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

    -

    It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as "WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH..." where: WEEKDAY is optional element. It could be writen as whole world or only using 3 first characters. HH:MM is an hour from 00:00 to 23:59.

    +

It is also possible to specify a “timetable” of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as “WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH…” where WEEKDAY is an optional element. It can be written as the whole word or using only the first 3 characters. HH:MM is a time from 00:00 to 23:59.

    An example of a typical timetable to avoid link saturation during daytime working hours could be:

    --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"

In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am every day. At noon, it will rise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

    An example of timetable with WEEKDAY could be:

    --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"

    -

    It mean that, the transfer bandwidh will be set to 512kBytes/sec on Monday. It will raise to 10Mbytes/s before the end of Friday. At 10:00 on Sunday it will be set to 1Mbyte/s. From 20:00 at Sunday will be unlimited.

    +

    It means that the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10MBytes/s before the end of Friday. At 10:00 on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be unlimited.

    Timeslots without weekday are extended to whole week. So this one example:

    --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

    Is equal to this:

    --bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

    -

    Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.

    -

    Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.

    +

    Bandwidth limits only apply to the data transfer. They don’t apply to the bandwidth of the directory listings etc.

    +

    Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let’s say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.

    On Unix systems (Linux, MacOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit from a long running rclone transfer and to restore it to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

    kill -SIGUSR2 $(pidof rclone)

    If you configure rclone with a remote control then you can change the bwlimit dynamically:

    rclone rc core/bwlimit rate=1M
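
    The limit can be removed again (assuming the remote control is enabled, as above) with:

    rclone rc core/bwlimit rate=off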
    -

    --buffer-size=SIZE

    +

    –buffer-size=SIZE

    Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

    When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount documentation for more details.

    Set to 0 to disable the buffering for the minimum memory usage.

    -

    Note that the memory allocation of the buffers is influenced by the --use-mmap flag.

    -

    --checkers=N

    +

    Note that the memory allocation of the buffers is influenced by the –use-mmap flag.

    +

    –checkers=N

    The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

    The default is to run 8 checkers in parallel.
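
    As an illustration only, a sync against a remote which is slow to check might raise the checker count while leaving the number of transfers at its default:

    rclone sync /path/to/local remote:current --checkers 16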

    -

    -c, --checksum

    +

    -c, –checksum

    Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

    -

    This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

    +

    This is useful when the remote doesn’t support setting modified time and a more accurate sync is desired than just checking the file size.

    This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

    Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

    -

    When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

    -

    --config=CONFIG_FILE

    +

    When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally.

    +

    –config=CONFIG_FILE

    Specify the location of the rclone config file.

    Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf

    If you run rclone config file you will see where the default location is for you.

    Use this flag to override the config location, eg rclone --config=".myconfig" .config.

    -

    --contimeout=TIME

    +

    –contimeout=TIME

    Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.

    The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

    -

    --dedupe-mode MODE

    +

    –dedupe-mode MODE

    Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
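
    For example (the remote path is illustrative):

    rclone dedupe --dedupe-mode newest remote:path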

    -

    --disable FEATURE,FEATURE,...

    +

    –disable FEATURE,FEATURE,…

    This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

    --disable move,copy

    The features can be put in any case.

    @@ -1799,140 +1820,148 @@ rclone sync /path/to/files remote:current-backup
    --disable help

    See the overview features and optional features to get an idea of which feature does what.

    This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

    -

    -n, --dry-run

    +

    -n, –dry-run

    Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

    -

    --ignore-checksum

    -

    Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.

    -

    You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

    -

    --ignore-existing

    +

    –ignore-checksum

    +

    Normally rclone will check that the checksums of transferred files match, and give an error “corrupted on transfer” if they don’t.

    +

    You can use this option to skip that check. You should only use it if you have had the “corrupted on transfer” error message and you are sure you might want to transfer potentially corrupted data.

    +

    –ignore-existing

    Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

    -

    While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

    -

    --ignore-size

    +

    While this isn’t a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

    +

    –ignore-size

    Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

    It will also cause rclone to skip verifying the sizes are the same after transfer.

    This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).

    -

    -I, --ignore-times

    +

    -I, –ignore-times

    Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

    Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

    -

    --immutable

    +

    –immutable

    Treat source and destination files as immutable and disallow modification.

    With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

    Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.

    This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
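
    An illustrative backup of an append-only archive (the paths are examples only):

    rclone copy --immutable /path/to/backups remote:archive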

    -

    --leave-root

    -

    During rmdirs it will not remove root directory, even if it's empty.

    -

    --log-file=FILE

    -

    Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

    -

    Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.

    -

    --log-format LIST

    -

    Comma separated list of log format options. date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".

    -

    --log-level LEVEL

    +

    –leave-root

    +

    During rmdirs it will not remove the root directory, even if it’s empty.

    +

    –log-file=FILE

    +

    Log all of rclone’s output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

    +

    Note that if you are using the logrotate program to manage rclone’s logs, then you should use the copytruncate option as rclone doesn’t have a signal to rotate logs.
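
    For example (the log path is illustrative):

    rclone sync /path/to/local remote:current -v --log-file=/var/log/rclone.log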

    +

    –log-format LIST

    +

    Comma separated list of log format options. date, time, microseconds, longfile, shortfile, UTC. The default is “date,time”.

    +

    –log-level LEVEL

    This sets the log level for rclone. The default log level is NOTICE.

    DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

    INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

    NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

    ERROR is equivalent to -q. It only outputs error messages.

    -

    --low-level-retries NUMBER

    +

    –low-level-retries NUMBER

    This controls the number of low level retries rclone does.

    A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

    -

    This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

    +

    This shouldn’t need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

    Disable low level retries with --low-level-retries 1.

    -

    --max-backlog=N

    +

    –max-backlog=N

    This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.

    This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.

    Setting this large allows rclone to calculate how many files are pending more accurately and give a more accurate estimated finish time.

    Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

    -

    --max-delete=N

    +

    –max-delete=N

    This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.

    -

    --max-depth=N

    +

    –max-depth=N

    This modifies the recursion depth for all the commands except purge.

    So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in first two directory levels and so on.

    For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

    You can use this command to disable recursion (with --max-depth 1).

    Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

    -

    --max-transfer=SIZE

    +

    –max-transfer=SIZE

    Rclone will stop transferring when it has reached the size specified. Defaults to off.

    When the limit is reached all transfers will stop immediately.

    Rclone will exit with exit code 8 if the transfer limit is reached.
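
    A sketch of checking for this in a script (the 10G limit is illustrative):

    rclone copy /path/to/local remote:current --max-transfer 10G
    echo $?   # prints 8 if the transfer limit was reached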

    -

    --modify-window=TIME

    +

    –modify-window=TIME

    When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

    The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

    This command line flag allows you to override that computed default.

    -

    --no-gzip-encoding

    -

    Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

    +

    –no-gzip-encoding

    +

    Don’t set Accept-Encoding: gzip. This means that rclone won’t ask the server for compressed files automatically. Useful if you’ve set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

    There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

    -

    --no-update-modtime

    -

    When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

    +

    –no-traverse

    +

    The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

    +

    If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

    +

    However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven’t changed and won’t need copying then you shouldn’t use --no-traverse.

    +

    See rclone copy for an example of how to use it.

    +

    –no-update-modtime

    +

    When using this flag, rclone won’t update modification times of remote files if they are incorrect as it would normally.

    This can be used if the remote is being synced with another tool also (eg the Google Drive client).

    -

    -P, --progress

    +

    -P, –progress

    This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.

    Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.

    Normally this is updated every 500mS but this period can be overridden with the --stats flag.

    This can be used with the --stats-one-line flag for a simpler display.

    Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with . when --progress is in use.
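
    For example, combining the two flags mentioned above (paths illustrative):

    rclone copy /path/to/local remote:current -P --stats-one-line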

    -

    -q, --quiet

    +

    -q, –quiet

    Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.

    -

    --retries int

    +

    –retries int

    Retry the entire sync if it fails this many times (default 3).

    -

    Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

    +

    Some remotes can be unreliable and a few retries help pick up the files which didn’t get transferred because of errors.

    Disable retries with --retries 1.

    -

    --retries-sleep=TIME

    +

    –retries-sleep=TIME

    This sets the interval between each retry specified by --retries

    The default is 0. Use 0 to disable.

    -

    --size-only

    +

    –size-only

    Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

    -

    This can be useful transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums of modification times in the same way as rclone.

    -

    --stats=TIME

    +

    This can be useful transferring files from Dropbox which have been modified by the desktop sync client which doesn’t set checksums or modification times in the same way as rclone.

    +

    –stats=TIME

    Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular intervals to show their progress.

    This sets the interval.

    The default is 1m. Use 0 to disable.

    If you set the stats interval then all commands can show stats. This can be useful when running other commands, check or mount for example.

    -

    Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

    +

    Stats are logged at INFO level by default which means they won’t show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

    Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.

    -

    --stats-file-name-length integer

    +

    –stats-file-name-length integer

    By default, the --stats output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40. Use --stats-file-name-length 0 to disable any truncation of file names printed by stats.

    -

    --stats-log-level string

    -

    Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.

    -

    --stats-one-line

    +

    –stats-log-level string

    +

    Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won’t show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.

    +

    –stats-one-line

    When this is specified, rclone condenses the stats into a single line showing the most important stats only.

    -

    --stats-unit=bits|bytes

    +

    –stats-unit=bits|bytes

    By default, data transfer rates will be printed in bytes/second.

    This option allows the data rate to be printed in bits/second.

    Data transfer volume will still be reported in bytes.

    The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.

    The default is bytes.

    -

    --suffix=SUFFIX

    -

    This is for use with --backup-dir only. If this isn't set then --backup-dir will move files with their original name. If it is set then the files will have SUFFIX added on to them.

    +

    –suffix=SUFFIX

    +

    This is for use with --backup-dir only. If this isn’t set then --backup-dir will move files with their original name. If it is set then the files will have SUFFIX added on to them.

    See --backup-dir for more info.

    -

    --syslog

    +

    –suffix-keep-extension

    +

    When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

    +

    So let’s say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.
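
    An illustrative invocation combining these flags (the date suffix is an example only):

    rclone sync /path/to/local remote:current --backup-dir remote:old --suffix=-2019-01-01 --suffix-keep-extension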

    +

    –syslog

    On capable OSes (not Windows or Plan9) send all log output to syslog.

    This can be useful for running rclone in a script or rclone mount.

    -

    --syslog-facility string

    +

    –syslog-facility string

    If using --syslog this sets the syslog facility (eg KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.

    -

    --tpslimit float

    +

    –tpslimit float

    Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.

    For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.

    Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).

    This can be very useful for rclone mount to control the behaviour of applications using it.

    See also --tpslimit-burst.

    -

    --tpslimit-burst int

    +

    –tpslimit-burst int

    Max burst of transactions for --tpslimit. (default 1)

    Normally --tpslimit will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst then rclone can save up some transactions from when it was idle, giving a burst of up to the parameter supplied.

    For example if you provide --tpslimit-burst 10 then if rclone has been idle for more than 10*--tpslimit then it can do 10 transactions very quickly before they are limited again.

    This may be used to increase performance of --tpslimit without changing the long term average number of transactions per second.
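
    For example (the values are illustrative):

    rclone copy /path/to/local remote:current --tpslimit 10 --tpslimit-burst 10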

    -

    --track-renames

    -

    By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

    +

    –track-renames

    +

    By default, rclone doesn’t keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

    If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync operations and perform renaming server-side.

    Files will be matched by size and hash - if both match then a rename will be considered.

    If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by --track-renames.

    Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.

    Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.
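
    An example invocation (paths illustrative):

    rclone sync /path/to/local remote:current --track-renames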

    -

    --delete-(before,during,after)

    +

    –delete-(before,during,after)

    This option allows you to specify when files on your destination are deleted when you sync folders.

    Specifying the value --delete-before will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

    Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

    Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.

    -

    --fast-list

    +

    –fast-list

    When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

    However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).

    If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

    @@ -1940,37 +1969,52 @@ rclone sync /path/to/files remote:current-backup
  • It will use fewer transactions (important if you pay for them)
  • It will use more memory. Rclone has to load the whole listing into memory.
  • It may be faster because it uses fewer transactions
    -
  • It may be slower because it can't be parallelized
    +
  • It may be slower because it can’t be parallelized
  • rclone should always give identical results with and without --fast-list.

    -

    If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

    -

    If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.

    -

    --timeout=TIME

    +

    If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don’t use --fast-list otherwise you will run out of memory.

    +

    If you use --fast-list on a remote which doesn’t support it, then rclone will just ignore it.
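
    For example, between two bucket based remotes (the remote names are illustrative):

    rclone sync --fast-list --checksum s3:bucket swift:bucket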

    +

    –timeout=TIME

    This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

    The default is 5m. Set to 0 to disable.

    -

    --transfers=N

    +

    –transfers=N

    The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

    The default is to run 4 file transfers in parallel.

    -

    -u, --update

    +

    -u, –update

    This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

    -

    If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.

    -

    On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    -

    This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.

    -

    --use-mmap

    +

    If an existing destination file has a modification time equal (within the computed modify window precision) to the source file’s, it will be updated if the sizes are different.

    +

    On remotes which don’t support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

    +

    This can be useful when transferring to a remote which doesn’t support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.

    +

    –use-mmap

    If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

    If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

    It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

    -

    --use-server-modtime

    +

    –use-server-modtime

    Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

    -

    Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

    -

    -v, -vv, --verbose

    +

    Use this flag to disable the extra API call and rely instead on the server’s modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

    +

    -v, -vv, –verbose

    With -v rclone will tell you about each file that is transferred and a small number of significant events.

    With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

    -

    -V, --version

    +

    -V, –version

    Prints the version number

    +

    SSL/TLS options

    +

    The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.

    +

    –ca-cert string

    +

    This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

    +

    If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

    +

    –client-cert string

    +

    This loads the PEM encoded client side certificate.

    +

    This is used for mutual TLS authentication.

    +

    The --client-key flag is required too when using this.

    +

    –client-key string

    +

    This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert.
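
    An illustrative invocation using all three flags (the certificate paths are examples only):

    rclone ls remote: --ca-cert /path/to/ca.pem --client-cert /path/to/client.pem --client-key /path/to/client.key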

    +

    –no-check-certificate=true/false

    +

    --no-check-certificate controls whether a client verifies the server’s certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

    +

    This option defaults to false.

    +

    This should be used only for testing.

    Configuration Encryption

    Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf file in a secure location.

    -

    If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.

    +

    If you are in an environment where that isn’t possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.

    To add a password to your rclone configuration, execute rclone config.

    >rclone config
     Current remotes:
    @@ -2009,42 +2053,33 @@ c/u/q>
    read -s RCLONE_CONFIG_PASS
    export RCLONE_CONFIG_PASS

    Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.

    -

    If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.

    +

    If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn’t contain a valid password.
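
    A minimal sketch of such a scripted call, assuming RCLONE_CONFIG_PASS has already been set in the environment (paths illustrative):

    rclone sync /path/to/local remote:current --ask-password=false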

    Developer options

    -

    These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.

    -

    --cpuprofile=FILE

    +

    These options are useful when developing or debugging rclone. There are also some more remote specific options which aren’t documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.

    +

    –cpuprofile=FILE

    Write CPU profile to file. This can be analysed with go tool pprof.

    -

    --dump flag,flag,flag

    +

    –dump flag,flag,flag

    The --dump flag takes a comma separated list of flags to dump info about. These are:

    -

    --dump headers

    +

    –dump headers

    Dump HTTP headers with Authorization: lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

    Use --dump auth if you do want the Authorization: headers.

    -

    --dump bodies

    +

    –dump bodies

    Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

    -

    Note that the bodies are buffered in memory so don't use this for enormous files.

    -

    --dump requests

    +

    Note that the bodies are buffered in memory so don’t use this for enormous files.

    +

    –dump requests

    Like --dump bodies but dumps the request bodies and the response headers. Useful for debugging download problems.

    -

    --dump responses

    +

    –dump responses

    Like --dump bodies but dumps the response bodies and the request headers. Useful for debugging upload problems.

    -

    --dump auth

    +

    –dump auth

    Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

    -

    --dump filters

    +

    –dump filters

    Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

    -

    --dump goroutines

    +

    –dump goroutines

    This dumps a list of the running go-routines at the end of the command to standard output.

    -

    --dump openfiles

    -

    This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

    -

    --memprofile=FILE

    +

    –dump openfiles

    +

    This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you’ll need that installed to use it.

    +

    –memprofile=FILE

    Write memory profile to file. This can be analysed with go tool pprof.

    -

    --no-check-certificate=true/false

    -

    --no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

    -

    This option defaults to false.

    -

    This should be used only for testing.

    -

    --no-traverse

    -

    The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

    -

    If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

    -

    However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

    -

    See rclone copy for an example of how to use it.

    Filtering

    For the filtering options

    Environment Variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    @@ -2123,7 +2158,7 @@ mys3:
  • HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
    Configuring rclone on a remote / headless machine

    @@ -2180,8 +2215,8 @@ Configuration file is stored at:

    The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

    Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v.

    Patterns

    -

    The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

    -

    If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:

    +

    The patterns used to match files for inclusion or exclusion are based on “file globs” as used by the unix shell.

    +

    If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn’t start with / then it is matched starting at the end of the path, but it will only match a complete path element:

    file.jpg  - matches "file.jpg"
               - matches "directory/file.jpg"
               - doesn't match "afile.jpg"
    @@ -2203,7 +2238,7 @@ Configuration file is stored at:
     
    l?ss  - matches "less"
           - matches "lass"
           - doesn't match "floss"
    -

    A [ and ] together make a a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.

    +

    A [ and ] together make a character class, such as [a-z] or [aeiou] or [[:alpha:]]. See the go regexp docs for more info on these.

    h[ae]llo - matches "hello"
              - matches "hallo"
              - doesn't match "hullo"
    @@ -2223,7 +2258,7 @@ Configuration file is stored at:

    With --ignore-case

    potato - matches "potato"
            - matches "POTATO"
    -

    Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

    +

    Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won’t work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

    Directories

    Rclone keeps track of directories that could match any file patterns.

    Eg if you add the include rule

    @@ -2231,9 +2266,9 @@ Configuration file is stored at:

    Rclone will synthesize the directory include rule

    /a/

    If you put any rules which end in / then it will only match directories.

    -

    Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.

    +

    Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won’t optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don’t have a concept of directory.

    Differences between rsync and rclone patterns

    -

    Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

    +

    Rclone implements bash style {a,b,c} glob matching which rsync doesn’t.

    Rclone always does a wildcard match so \ must always escape a \.

    How the rules are used

    Rclone maintains a combined list of include rules and exclude rules.

    @@ -2290,7 +2325,7 @@ file2.jpg

    Add a single include rule with --include.

    This flag can be repeated. See above for the order the flags are processed in.

    Eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.

    -

    This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

    +

    This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn’t provide enough flexibility then you must use --filter-from.

    --include-from - Read include patterns from file

    Add include rules from a file.

    This flag can be repeated. See above for the order the flags are processed in.

    @@ -2301,7 +2336,7 @@ file2.jpg file2.avi

    Then use as --include-from include-file.txt. This will sync all jpg, png files and file2.avi.

    This is useful if you have a lot of rules.

    -

    This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

    +

    This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn’t provide enough flexibility then you must use --filter-from.

    --filter - Add a file-filtering rule

    This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules.

    This flag can be repeated. See above for the order the flags are processed in.

    @@ -2323,7 +2358,8 @@ file2.avi

    This example will include all jpg and png files, exclude any files matching secret*.jpg and include file2.avi. It will also include everything in the directory dir at the root of the sync, except dir/Trash which it will exclude. Everything else will be excluded from the sync.

    --files-from - Read list of source-file names

    This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.

    -

    Rclone will not scan any directories if you use --files-from it will just look at the files specified. Rclone will not error if any of the files are missing from the source.

    +

    Rclone will traverse the file system if you use --files-from, effectively using the files in --files-from as a set of filters. Rclone will not error if any of the files are missing.

    +

    If you use --no-traverse as well as --files-from then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.
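
    As an illustration, combining the two with the example paths used below:

    rclone copy --files-from files-from.txt --no-traverse /home/me/pics remote:pics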

    This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

    Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored.

    For example, suppose you had files-from.txt with this content:

    @@ -2335,11 +2371,11 @@ subdir/file2.jpg

    This will transfer these files only (if they exist)

    /home/me/pics/file1.jpg        → remote:pics/file1.jpg
     /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
    -

    To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

    +

    To take a more complicated example, let’s say you had a few files you want to back up regularly with these absolute paths:

    /home/user1/important
     /home/user1/dir/file
     /home/user2/stuff
    -

    To copy these you'd find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg

    +

    To copy these you’d find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg

    user1/important
     user1/dir/file
     user2/stuff
    @@ -2359,13 +2395,13 @@ user2/stuff
    /home/user1/important → remote:home/backup/user1/important
     /home/user1/dir/file  → remote:home/backup/user1/dir/file
     /home/user2/stuff     → remote:home/backup/stuff
    -

    --min-size - Don't transfer any file smaller than this

    +

    --min-size - Don’t transfer any file smaller than this

    This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

    For example --min-size 50k means no files smaller than 50kByte will be transferred.

    -

    --max-size - Don't transfer any file larger than this

    +

    --max-size - Don’t transfer any file larger than this

    This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

    For example --max-size 1G means no files larger than 1GByte will be transferred.

    -

    --max-age - Don't transfer any file older than this

    +

    --max-age - Don’t transfer any file older than this

    This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

    For example --max-age 2d means no files older than 2 days will be transferred.

    -

    --min-age - Don't transfer any file younger than this

    +

    --min-age - Don’t transfer any file younger than this

    This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for list of suffixes)

    For example --min-age 2d means no files younger than 2 days will be transferred.
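
    Putting a couple of these bounds together, an illustrative listing might be:

    rclone ls remote:path --min-size 50k --max-size 1G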

    --delete-excluded - Delete files on dest excluded from sync

    @@ -2423,39 +2459,39 @@ dir1/dir2/dir3/.ignore

    If you just want to run a remote control then see the rcd command.

    NB this is experimental and everything here is subject to change!

    Supported parameters

    -

    --rc

    +

    –rc

    Flag to start the http server to listen for remote requests

    -

    --rc-addr=IP

    -

    IPaddress:Port or :Port to bind server to. (default "localhost:5572")

    -

    --rc-cert=KEY

    +

    –rc-addr=IP

    +

    IPaddress:Port or :Port to bind server to. (default “localhost:5572”)

    +

    –rc-cert=KEY

    SSL PEM key (concatenation of certificate and CA certificate)

    -

    --rc-client-ca=PATH

    +

    –rc-client-ca=PATH

    Client certificate authority to verify clients with

    -

    --rc-htpasswd=PATH

    +

    –rc-htpasswd=PATH

    htpasswd file - if not provided no authentication is done

    -

    --rc-key=PATH

    +

    –rc-key=PATH

    SSL PEM Private key

    -

    --rc-max-header-bytes=VALUE

    +

    –rc-max-header-bytes=VALUE

    Maximum size of request header (default 4096)

    -

    --rc-user=VALUE

    +

    –rc-user=VALUE

    User name for authentication.

    -

    --rc-pass=VALUE

    +

    –rc-pass=VALUE

    Password for authentication.

    -

    --rc-realm=VALUE

    -

    Realm for authentication (default "rclone")

    -

    --rc-server-read-timeout=DURATION

    +

    –rc-realm=VALUE

    +

    Realm for authentication (default “rclone”)

    +

    –rc-server-read-timeout=DURATION

    Timeout for server reading data (default 1h0m0s)

    -

    --rc-server-write-timeout=DURATION

    +

    –rc-server-write-timeout=DURATION

    Timeout for server writing data (default 1h0m0s)

    -

    --rc-serve

    +

    –rc-serve

    Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object

    Default Off.

    -

    --rc-files /path/to/directory

    +

    –rc-files /path/to/directory

    Path to local files to serve on the HTTP server.

    If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

    If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.

    Default Off.

    -

    --rc-no-auth

    +

    –rc-no-auth

    By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote, as is sync/copy.

    If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request.

    Default Off.
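
    An illustrative way to start a standalone remote control server with authentication (the user name and password are examples only):

    rclone rcd --rc-user=me --rc-pass=secret --rc-addr=localhost:5572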

    @@ -2533,9 +2569,9 @@ rclone rc cache/expire remote=/ withData=true

    cache/fetch: Fetch file chunks

    Ensure the specified file chunks are cached on disk.

    The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]

    -

    start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.

    -

    Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks

    -

    Any parameter with a key that starts with "file" can be used to specify files to fetch, eg

    +

    start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is the 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value “-5:” represents the last 5 chunks of a file.

    +

    Some valid examples are: “:5,-5:” -> the first and last five chunks “0,-2” -> the first and the second last chunk “0:10” -> the first ten chunks

    +

    Any parameter with a key that starts with “file” can be used to specify files to fetch, eg

    rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

    File names will automatically be encrypted when a crypt remote is used on top of the cache.

    cache/stats: Get cache stats

    @@ -2591,17 +2627,19 @@ rclone rc cache/expire remote=/ withData=true

    Eg

    rclone rc core/bwlimit rate=1M
     rclone rc core/bwlimit rate=off
    -

    The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

    +

    The format of the parameter is exactly the same as passed to –bwlimit except only one bandwidth may be specified.

    core/gc: Runs a garbage collection.

    -

    This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

    +

    This tells the go runtime to do a garbage collection run. It isn’t necessary to call this normally, but it can be useful for debugging memory problems.

    core/memstats: Returns the memory statistics

    This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats

    The most interesting values for most people are:

    core/obscure: Obscures a string passed in.

    Pass a clear string and rclone will obscure it for the config file: - clear - string

    @@ -2638,44 +2676,44 @@ rclone rc core/bwlimit rate=off

        "checking": an array of names of currently active file checks
            []
    }

    -

    Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

    +

    Values for “transferring”, “checking” and “lastError” are only assigned if data is available. The value for “eta” is null if an eta cannot be determined.

    core/version: Shows the current version of rclone and the go runtime.

    -

    This shows the current version of go and the go runtime - version - rclone version, eg "v1.44" - decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version - isGit - boolean - true if this was compiled from the git version - os - OS in use as according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use

    +

    This shows the current version of go and the go runtime - version - rclone version, eg “v1.44” - decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version - isGit - boolean - true if this was compiled from the git version - os - OS in use according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use

    job/list: Lists the IDs of the running jobs

    Parameters - None

    Results - jobids - array of integer job ids

    job/status: Reads the status of the job ID

    Parameters - jobid - id of the job (integer)

    -

    Results
  • duration - time in seconds that the job ran for
  • endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
  • error - error from the job or empty string for no error
  • finished - boolean whether the job has finished or not
  • id - as passed in above
  • startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00")
  • success - boolean - true for success false otherwise
  • output - output of the job as would have been returned if called synchronously

    +

    Results
  • duration - time in seconds that the job ran for
  • endTime - time the job finished (eg “2018-10-26T18:50:20.528746884+01:00”)
  • error - error from the job or empty string for no error
  • finished - boolean whether the job has finished or not
  • id - as passed in above
  • startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”)
  • success - boolean - true for success false otherwise
  • output - output of the job as would have been returned if called synchronously
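
    For example, to list the running jobs and then query one of them (the job id shown is illustrative):

    rclone rc job/list
    rclone rc job/status jobid=123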

    operations/about: Return the space used on the remote

    This takes the following parameters

    -

    The result is as returned from rclone about --json

    +

    The result is as returned from rclone about –json

    Authentication is required for this call.
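
    As a sketch, assuming the remote is passed in the usual fs parameter (the parameter list is not reproduced here), a call might look like:

    rclone rc operations/about fs=remote: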

    operations/cleanup: Remove trashed files in the remote or path

    This takes the following parameters

    See the cleanup command for more information on the above.

    Authentication is required for this call.

    operations/copyfile: Copy a file from source remote to destination remote

    This takes the following parameters

    Authentication is required for this call.

    operations/copyurl: Copy the URL to the object

    This takes the following parameters

    See the copyurl command for more information on the above.

    @@ -2683,23 +2721,23 @@ rclone rc core/bwlimit rate=off

    operations/delete: Remove files in the path

    This takes the following parameters

    See the delete command for more information on the above.

    Authentication is required for this call.

    operations/deletefile: Remove the single file pointed to

    This takes the following parameters

    See the deletefile command for more information on the above.

    Authentication is required for this call.

    operations/list: List the given remote and path in JSON format

    This takes the following parameters

    -

    --s3-storage-class

    +

    –s3-storage-class

    The storage class to use when storing new objects in S3.

    -

    --s3-storage-class

    +

    –s3-storage-class

    The storage class to use when storing new objects in OSS.

    Advanced Options

    Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

    -

    --s3-bucket-acl

    +

    –s3-bucket-acl

    Canned ACL used when creating buckets.

    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

    -

    Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.

    +

    Note that this ACL is applied only when creating buckets. If it isn’t set then “acl” is used instead.

    -

    --s3-upload-cutoff

    +

    –s3-upload-cutoff

    Cutoff for switching to chunked upload

    Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

    -

    --s3-chunk-size

    +

    –s3-chunk-size

    Chunk size to use for uploading.

    When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.

    -

    Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.

    +

    Note that “–s3-upload-concurrency” chunks of this size are buffered in memory per transfer.

    If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.

    -

    --s3-disable-checksum

    -

    Don't store MD5 checksum with object metadata

    +

    –s3-disable-checksum

    +

    Don’t store MD5 checksum with object metadata

    -

    --s3-session-token

    +

    –s3-session-token

    An AWS session token

    -

    --s3-upload-concurrency

    +

    –s3-upload-concurrency

    Concurrency for multipart uploads.

    This is the number of chunks of the same file that are uploaded concurrently.

    If you are uploading small numbers of large files over a high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
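
    For example, a copy of a few large files over a fast link might use bigger chunks and more parallel uploads (the values and paths are illustrative, not recommendations):

    rclone copy --s3-chunk-size 64M --s3-upload-concurrency 8 /path/to/big/files remote:bucket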

    @@ -5006,7 +5076,7 @@ y/e/d>
  • Type: int
  • Default: 4
  • -

    --s3-force-path-style

    +

    –s3-force-path-style

    If true use path style access if false use virtual hosted style.

    If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.

    Some providers (eg Aliyun OSS or Netease COS) require this set to false.

    @@ -5016,10 +5086,10 @@ y/e/d>
  • Type: bool
  • Default: true
  • -

    --s3-v2-auth

    +

    –s3-v2-auth

    If true use v2 authentication.

    If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.

    -

    Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.

    +

    Use this only if v4 signatures don’t work, eg pre Jewel/v10 CEPH.

    -

    --cache-db-wait-time

    +

    –cache-db-wait-time

    How long to wait for the DB to be available - 0 is unlimited

    Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.

    If you set it to 0 then it will wait forever.
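
    For example, to wait up to a minute for the DB to become available (the remote name and mount point are illustrative):

    rclone mount --cache-db-wait-time 1m mycache: /mnt/cache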

    @@ -6359,7 +6457,7 @@ chunk_total_size = 10G

    Crypt

    The crypt remote encrypts and decrypts another remote.

    To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.

    -

    First check your chosen remote is working - we'll call it remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.

    +

    First check your chosen remote is working - we’ll call it remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.

    Now configure crypt using rclone config. We will call this one secret to differentiate it from the remote.

    No remotes found - make a new one
     n) New remote
    @@ -6452,7 +6550,7 @@ y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Important The password stored in the config file is lightly obscured, so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.

    +

    Important The password stored in the config file is lightly obscured, so it isn’t immediately obvious what it is. It is in no way secure unless you use config file encryption.

    A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.

    Note that rclone does not encrypt

    -

    64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.

    +

    64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can’t be too big.

    This uses a 32 byte (256 bit key) key derived from the user password.

    Examples

    1 byte file will encrypt to

    @@ -6669,12 +6767,12 @@ $ rclone -q ls secret:

    Name encryption

    File names are encrypted segment by segment - the path is broken up into / separated strings and these are encrypted individually.

    File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.

    -

    They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.

    -

    This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.

    +

    They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.

    +

    This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can’t find it on the cloud storage system.

    This means that

    This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.

    After encryption they are written out using a modified version of standard base32 encoding as described in RFC4648. The standard encoding is modified in two ways:

    @@ -6684,7 +6782,7 @@ $ rclone -q ls secret:

    base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).

    Key derivation

    -

    Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.

    +

    Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn’t supply a salt then rclone uses an internal one.

    scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.

    Dropbox

    Paths are specified as remote:path

    @@ -6760,12 +6858,12 @@ y/e/d> y

    A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.

    Modified time and Hashes

    Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.

    -

    This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only or --checksum flag to stop it.

    +

    This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use --size-only or --checksum flag to stop it.
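
    For example, a sync that skips the modification time comparison might look like this (the paths are illustrative):

    rclone sync --size-only /path/to/local dropbox:backup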

    Dropbox supports its own hash type which is checked for all transfers.

    Standard Options

    Here are the standard options specific to dropbox (Dropbox).

    -

    --dropbox-client-id

    +

    –dropbox-client-id

    Dropbox App Client Id Leave blank normally.

    -

    --dropbox-client-secret

    +

    –dropbox-client-secret

    Dropbox App Client Secret Leave blank normally.

    Advanced Options

    Here are the advanced options specific to dropbox (Dropbox).

    -

    --dropbox-chunk-size

    +

    –dropbox-chunk-size

    Upload chunk size. (< 150M).

    Any files larger than this will be uploaded in chunks of this size.

    Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.

    @@ -6793,7 +6891,7 @@ y/e/d> y
  • Type: SizeSuffix
  • Default: 48M
  • -

    --dropbox-impersonate

    +

    –dropbox-impersonate

    Impersonate this user when using a business account.

    Limitations

    -

    Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    There are some file names such as thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.

    +

    Note that Dropbox is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.

    +

    There are some file names such as thumbs.db which Dropbox can’t store. There is a full list of them in the “Ignored Files” section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won’t fail.

    If you have more than 10,000 files in a directory then rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.
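
    Spelled out as commands, the work-around is:

    rclone delete dropbox:dir
    rclone rmdir dropbox:dir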

    FTP

    FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.

    @@ -6895,7 +6993,7 @@ y/e/d> y

    Standard Options

    Here are the standard options specific to ftp (FTP Connection).

    -

    --ftp-host

    +

    –ftp-host

    FTP host to connect to

    -

    --ftp-user

    +

    –ftp-user

    FTP username, leave blank for current username, ncw

    -

    --ftp-port

    +

    –ftp-port

    FTP port, leave blank to use default (21)

    -

    --ftp-pass

    +

    –ftp-pass

    FTP password

    +

    Advanced Options

    +

    Here are the advanced options specific to ftp (FTP Connection).

    +

    –ftp-concurrency

    +

    Maximum number of FTP simultaneous connections, 0 for unlimited

    +

    Limitations

    -

    Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth

    -

    Note that --timeout isn't supported (but --contimeout is).

    -

    Note that --bind isn't supported.

    -

    FTP could support server side move but doesn't yet.

    +

    Note that since FTP isn’t HTTP based the following flags don’t work with it: --dump-headers, --dump-bodies, --dump-auth

    +

    Note that --timeout isn’t supported (but --contimeout is).

    +

    Note that --bind isn’t supported.

    +

    FTP could support server side move but doesn’t yet.

    Note that the ftp backend does not support the ftp_proxy environment variable yet.

    Google Cloud Storage

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.

    @@ -7099,17 +7207,20 @@ y/e/d> y

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync /home/local/directory remote:bucket

    Service Account support

    -

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.

    -

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    -

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.

    -

    --fast-list

    +

    You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don’t have actively logged-in users, for example build machines.

    +

    To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account’s credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.

    +

    To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.
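
    As a rough sketch, the resulting config section might look something like this (the remote name and file path are illustrative):

    [gcs]
    type = google cloud storage
    service_account_file = /path/to/credentials.json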

    +

    Application Default Credentials

    +

    If no other source of credentials is provided, rclone will fall back to Application Default Credentials. This is useful both when you have already configured authentication for your developer account, and in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page.

    +

    Note that when application default credentials are used, there is no need to explicitly configure a project number.

    +

    –fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
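
    For example (the bucket name is illustrative):

    rclone size --fast-list remote:bucket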

    Modified time

    -

    Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.

    +

    Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns.

    Standard Options

    Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

    -

    --gcs-client-id

    +

    –gcs-client-id

    Google Application Client Id Leave blank normally.

    -

    --gcs-client-secret

    +

    –gcs-client-secret

    Google Application Client Secret Leave blank normally.

    -

    --gcs-project-number

    +

    –gcs-project-number

    Project number. Optional - needed only for list/create/delete buckets - see your developer console.

    -

    --gcs-service-account-file

    +

    –gcs-service-account-file

    Service Account Credentials JSON file path. Leave blank normally. Needed only if you want to use SA instead of interactive login.

    -

    --gcs-service-account-credentials

    +

    –gcs-service-account-credentials

    Service Account Credentials JSON blob. Leave blank normally. Needed only if you want to use SA instead of interactive login.

    -

    --gcs-object-acl

    +

    –gcs-object-acl

    Access Control List for new objects.

    -

    --gcs-bucket-acl

    +

    –gcs-bucket-acl

    Access Control List for new buckets.

    -

    --gcs-location

    +

    –gcs-bucket-policy-only

    +

    Access checks should use bucket-level IAM policies.

    +

    If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.

    +

    When it is set, rclone:

    + +

    Docs: https://cloud.google.com/storage/docs/bucket-policy-only
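
    As an illustrative example (the paths and bucket are made up), the flag can be supplied on the command line:

    rclone copy --gcs-bucket-policy-only /path/to/files remote:bucket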

    + +

    –gcs-location

    Location for the newly created buckets.

    -

    --gcs-storage-class

    +

    –gcs-storage-class

    The storage class to use when storing objects in Google Cloud Storage.