mirror of https://github.com/rclone/rclone synced 2025-02-16 11:34:29 +01:00

Version v1.30

This commit is contained in:
Nick Craig-Wood 2016-06-18 16:29:53 +01:00
parent f438f1e9ef
commit bd0227450e
7 changed files with 1194 additions and 151 deletions


@@ -12,7 +12,7 @@
<div id="header">
<h1 class="title">rclone(1) User Manual</h1>
<h2 class="author">Nick Craig-Wood</h2>
<h3 class="date">Jun 18, 2016</h3>
</div>
<h1 id="rclone">Rclone</h1>
<p><a href="http://rclone.org/"><img src="http://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@@ -116,11 +116,11 @@ destpath/sourcepath/two.txt</code></pre>
<p>If dest:path doesn't exist, it is created and the source:path contents go there.</p>
<h3 id="move-sourcepath-destpath">move source:path dest:path</h3>
<p>Moves the source to the destination.</p>
<p>If there are no filters in use this is equivalent to a copy followed by a purge, but may use server side operations to speed it up if possible.</p>
<p>If filters are in use then it is equivalent to a copy followed by delete, followed by an rmdir (which only removes the directory if empty). The individual files will be moved with server side operations if possible.</p>
<p><strong>Important</strong>: Since this can cause data loss, test first with the --dry-run flag.</p>
<h3 id="rclone-ls-remotepath">rclone ls remote:path</h3>
<p>List all the objects in the path with size and path.</p>
<h3 id="rclone-lsd-remotepath">rclone lsd remote:path</h3>
<p>List all directories/containers/buckets in the path.</p>
<h3 id="rclone-lsl-remotepath">rclone lsl remote:path</h3>
@@ -209,6 +209,20 @@ two-3.txt: renamed from: two.txt</code></pre>
<p>Enter an interactive configuration session.</p>
<h3 id="rclone-help">rclone help</h3>
<p>Prints help on rclone commands and options.</p>
<h2 id="quoting-and-the-shell">Quoting and the shell</h2>
<p>When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.</p>
<p>Here are some gotchas which may help users unfamiliar with the shell rules:</p>
<h3 id="linux-osx">Linux / OSX</h3>
<p>If your names have spaces or shell metacharacters (eg <code>*</code>, <code>?</code>, <code>$</code>, <code>'</code>, <code>&quot;</code> etc) then you must quote them. Use single quotes <code>'</code> by default.</p>
<pre><code>rclone copy &#39;Important files?&#39; remote:backup</code></pre>
<p>If you want to send a <code>'</code> you will need to use <code>&quot;</code>, eg</p>
<pre><code>rclone copy &quot;O&#39;Reilly Reviews&quot; remote:backup</code></pre>
<p>The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.</p>
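<p>The effect of quoting can be seen without rclone at all. The following sketch (POSIX shell assumed, filenames hypothetical) counts how many arguments a program would actually receive:</p>

```shell
# Demo only - no rclone involved. Work in an empty directory so the
# unquoted glob cannot accidentally match a real file.
cd "$(mktemp -d)"
set -- Important files?        # unquoted: the shell splits this into two words
unquoted=$#
set -- 'Important files?'      # single-quoted: delivered as one argument
quoted=$#
echo "$unquoted $quoted"       # -> 2 1
```

<p>With the unquoted form rclone would be handed two separate paths, which is rarely what was meant.</p>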
<h3 id="windows">Windows</h3>
<p>If your names have spaces in them you need to put them in <code>&quot;</code>, eg</p>
<pre><code>rclone copy &quot;E:\folder name\folder name\folder name&quot; remote:backup</code></pre>
<p>If you are using the root directory on its own then don't quote it (see <a href="https://github.com/ncw/rclone/issues/464">#464</a> for why), eg</p>
<pre><code>rclone copy E:\ remote:backup</code></pre>
<h2 id="server-side-copy">Server Side Copy</h2>
<p>Drive, S3, Dropbox, Swift and Google Cloud Storage support server side copy.</p>
<p>This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.</p>
@@ -224,9 +238,9 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<h2 id="options">Options</h2>
<p>Rclone has a number of options to control its behaviour.</p>
<p>Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as &quot;300ms&quot;, &quot;-1.5h&quot; or &quot;2h45m&quot;. Valid time units are &quot;ns&quot;, &quot;us&quot; (or &quot;µs&quot;), &quot;ms&quot;, &quot;s&quot;, &quot;m&quot;, &quot;h&quot;.</p>
<p>Options which use SIZE use kByte by default. However a suffix of <code>b</code> for bytes, <code>k</code> for kBytes, <code>M</code> for MBytes and <code>G</code> for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.</p>
<h3 id="bwlimitsize">--bwlimit=SIZE</h3>
<p>Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is <code>0</code> which means to not limit bandwidth.</p>
<p>For example to limit bandwidth usage to 10 MBytes/s use <code>--bwlimit 10M</code></p>
<p>This only limits the bandwidth of the data transfer, it doesn't limit the bandwidth of the directory listings etc.</p>
<h3 id="checkersn">--checkers=N</h3>
@@ -250,16 +264,26 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<h3 id="ignore-existing">--ignore-existing</h3>
<p>Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.</p>
<p>While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.</p>
<h3 id="ignore-size">--ignore-size</h3>
<p>Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If <code>--checksum</code> is set then it only checks the checksum.</p>
<p>It will also cause rclone to skip verifying the sizes are the same after transfer.</p>
<p>This can be useful for transferring files to and from onedrive which occasionally misreports the size of image files (see <a href="https://github.com/ncw/rclone/issues/399">#399</a> for more info).</p>
<h3 id="i---ignore-times">-I, --ignore-times</h3>
<p>Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.</p>
<p>Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using <code>--checksum</code>).</p>
<h3 id="log-filefile">--log-file=FILE</h3>
<p>Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the <code>-v</code> flag. See the Logging section for more info.</p>
<h3 id="low-level-retries-number">--low-level-retries NUMBER</h3>
<p>This controls the number of low level retries rclone does.</p>
<p>A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the <code>-v</code> flag.</p>
<p>This shouldn't need to be changed from the default in normal operations, however if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the <code>--retries</code> flag) quicker.</p>
<p>Disable low level retries with <code>--low-level-retries 1</code>.</p>
<h3 id="max-depthn">--max-depth=N</h3>
<p>This modifies the recursion depth for all the commands except purge.</p>
<p>So if you do <code>rclone --max-depth 1 ls remote:path</code> you will see only the files in the top level directory. Using <code>--max-depth 2</code> means you will see all the files in first two directory levels and so on.</p>
<p>For historical reasons the <code>lsd</code> command defaults to using a <code>--max-depth</code> of 1 - you can override this with the command line flag.</p>
<p>You can use this flag to disable recursion (with <code>--max-depth 1</code>).</p>
<p>Note that if you use this with <code>sync</code> and <code>--delete-excluded</code> the files not recursed through are considered excluded and will be deleted on the destination. Test first with <code>--dry-run</code> if you are not sure what will happen.</p>
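<p>rclone aside, the same idea can be previewed locally with <code>find</code>'s <code>-maxdepth</code>, which behaves much like <code>--max-depth</code> (sketch only; the directory tree below is hypothetical):</p>

```shell
# Build a small throwaway tree, then list files at two recursion depths.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/top.txt" "$tmp/a/mid.txt" "$tmp/a/b/deep.txt"
find "$tmp" -maxdepth 1 -type f   # only top.txt
find "$tmp" -maxdepth 2 -type f   # top.txt and a/mid.txt
```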
<h3 id="modify-windowtime">--modify-window=TIME</h3>
<p>When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.</p>
<p>The default is <code>1ns</code> unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be <code>1s</code> by default.</p>
@@ -276,7 +300,6 @@ rclone sync /path/to/files remote:current-backup</code></pre>
<h3 id="size-only">--size-only</h3>
<p>Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.</p>
<p>This can be useful when transferring files from dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.</p>
<h3 id="statstime">--stats=TIME</h3>
<p>Rclone will print stats at regular intervals to show its progress.</p>
<p>This sets the interval.</p>
@@ -323,9 +346,9 @@ a) Add Password
q) Quit to main menu
a/q&gt; a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
@@ -334,10 +357,10 @@ q) Quit to main menu
c/u/q&gt;</code></pre>
<p>Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu you can change the password or completely remove encryption from your configuration.</p>
<p>There is no way to recover the configuration if you lose your password.</p>
<p>rclone uses <a href="https://godoc.org/golang.org/x/crypto/nacl/secretbox">nacl secretbox</a> which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.</p>
<p>While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, unless you use a very strong password.</p>
<p>If it is safe in your environment, you can set the <code>RCLONE_CONFIG_PASS</code> environment variable to contain your password, in which case it will be used for decrypting the configuration.</p>
<p>If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter <code>--ask-password=false</code> to rclone. This will make rclone fail instead of asking for a password if <code>RCLONE_CONFIG_PASS</code> doesn't contain a valid password.</p>
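<p>For example, an unattended backup script might look like this sketch (password and remote name are placeholders; assumes the configuration was encrypted as shown above):</p>

```shell
# Placeholder password - in practice inject this from a secrets store,
# not a file checked into version control.
export RCLONE_CONFIG_PASS='correct horse battery staple'
# With the variable set rclone decrypts the config without prompting;
# with --ask-password=false it fails instead of hanging on a prompt:
# rclone --ask-password=false sync /path/to/files remote:backup
```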
<h2 id="developer-options">Developer options</h2>
<p>These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg <code>--drive-test-option</code> - see the docs for the remote in question.</p>
<h3 id="cpuprofilefile">--cpuprofile=FILE</h3>
@@ -372,6 +395,13 @@ c/u/q&gt;</code></pre>
<li><code>--dump-filters</code></li>
</ul>
<p>See the <a href="http://rclone.org/filtering/">filtering section</a>.</p>
<h2 id="logging">Logging</h2>
<p>rclone has 3 levels of logging, <code>Error</code>, <code>Info</code> and <code>Debug</code>.</p>
<p>By default rclone logs <code>Error</code> and <code>Info</code> to standard error and <code>Debug</code> to standard output. This means you can redirect standard output and standard error to different places.</p>
<p>By default rclone will produce <code>Error</code> and <code>Info</code> level messages.</p>
<p>If you use the <code>-q</code> flag, rclone will only produce <code>Error</code> messages.</p>
<p>If you use the <code>-v</code> flag, rclone will produce <code>Error</code>, <code>Info</code> and <code>Debug</code> messages.</p>
<p>If you use the <code>--log-file=FILE</code> option, rclone will redirect <code>Error</code>, <code>Info</code> and <code>Debug</code> messages along with standard error to FILE.</p>
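<p>Per the description above, the two streams can be captured separately. This stand-in sketch shows the redirection mechanics without needing rclone (with rclone itself it would be e.g. <code>rclone sync -v src dst &gt;debug.log 2&gt;errors.log</code>, where <code>src</code> and <code>dst</code> are placeholders):</p>

```shell
# Stand-in for rclone: one line to stdout (like Debug), one to stderr
# (like Error/Info), redirected to different files.
cd "$(mktemp -d)"
{ echo "Debug: listing directory"
  echo "Error: upload failed" >&2
} >debug.log 2>errors.log
cat debug.log    # -> Debug: listing directory
cat errors.log   # -> Error: upload failed
```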
<h2 id="exit-code">Exit Code</h2>
<p>If any errors occurred during the command, rclone will set a non zero exit code. This allows scripts to detect when rclone operations have failed.</p>
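<p>A script can branch on that exit code. In this sketch a stand-in command simulates a failed rclone run (the exit code value is arbitrary):</p>

```shell
# `sh -c 'exit 3'` stands in for a failing rclone invocation.
sh -c 'exit 3'
status=$?
if [ "$status" -ne 0 ]; then
    echo "transfer failed with exit code $status"
fi
```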
<h1 id="configuring-rclone-on-a-remote-headless-machine">Configuring rclone on a remote / headless machine</h1>
@@ -428,6 +458,7 @@ y/e/d&gt;</code></pre>
<p>Rclone has a sophisticated set of include and exclude rules. Some of these are based on patterns and some on other things like file size.</p>
<p>The filters are applied for the <code>copy</code>, <code>sync</code>, <code>move</code>, <code>ls</code>, <code>lsl</code>, <code>md5sum</code>, <code>sha1sum</code>, <code>size</code>, <code>delete</code> and <code>check</code> operations. Note that <code>purge</code> does not obey the filters.</p>
<p>Each path as it passes through rclone is matched against the include and exclude rules like <code>--include</code>, <code>--exclude</code>, <code>--include-from</code>, <code>--exclude-from</code>, <code>--filter</code>, or <code>--filter-from</code>. The simplest way to try them out is using the <code>ls</code> command, or <code>--dry-run</code> together with <code>-v</code>.</p>
<p><strong>Important</strong> Due to limitations of the command line parser you can only use each of these options once - if you duplicate them then rclone will use the last one only.</p>
<h2 id="patterns">Patterns</h2>
<p>The patterns used to match files for inclusion or exclusion are based on &quot;file globs&quot; as used by the unix shell.</p>
<p>If the pattern starts with a <code>/</code> then it only matches at the top level of the directory tree, relative to the root of the remote. If it doesn't start with <code>/</code> then it is matched starting at the <strong>end of the path</strong>, but it will only match a complete path element:</p>
@@ -465,9 +496,17 @@ y/e/d&gt;</code></pre>
<pre><code>\*.jpg - matches &quot;*.jpg&quot;
\\.jpg - matches &quot;\.jpg&quot;
\[one\].jpg - matches &quot;[one].jpg&quot;</code></pre>
<p>Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so <code>rclone copy &quot;remote:dir*.jpg&quot; /path/to/dir</code> won't work - what is required is <code>rclone --include &quot;*.jpg&quot; copy remote:dir /path/to/dir</code></p>
<h3 id="directories">Directories</h3>
<p>Rclone keeps track of directories that could match any file patterns.</p>
<p>Eg if you add the include rule</p>
<pre><code>/a/*.jpg</code></pre>
<p>Rclone will synthesize the directory include rule</p>
<pre><code>/a/</code></pre>
<p>If you put any rules which end in <code>/</code> then it will only match directories.</p>
<p>Directory matches are <strong>only</strong> used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google cloud storage, b2) which don't have a concept of directory.</p>
<h3 id="differences-between-rsync-and-rclone-patterns">Differences between rsync and rclone patterns</h3>
<p>Rclone implements bash style <code>{a,b,c}</code> glob matching which rsync doesn't.</p>
<p>Rclone ignores <code>/</code> at the end of a pattern.</p>
<p>Rclone always does a wildcard match so <code>\</code> must always escape a <code>\</code>.</p>
<h2 id="how-the-rules-are-used">How the rules are used</h2>
<p>Rclone maintains a list of include rules and exclude rules.</p>
@@ -490,6 +529,7 @@ y/e/d&gt;</code></pre>
<li><code>secret17.jpg</code></li>
<li>non <code>*.jpg</code> and <code>*.png</code></li>
</ul>
<p>A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, drive, onedrive, amazon cloud drive) and not on bucket based remotes (eg s3, swift, google cloud storage, b2).</p>
<h2 id="adding-filtering-rules">Adding filtering rules</h2>
<p>Filtering rules are added with the following command line flags.</p>
<h3 id="exclude---exclude-files-matching-pattern"><code>--exclude</code> - Exclude files matching pattern</h3>
@@ -680,7 +720,8 @@ file2.jpg</code></pre>
</tbody>
</table>
<h3 id="hash">Hash</h3>
<p>The cloud storage system supports various hash types of the objects.<br />
The hashes are used when transferring data as an integrity check and can be specifically used with the <code>--checksum</code> flag in syncs and in the <code>check</code> command.</p>
<p>To use the checksum checks between filesystems they must support a common hash type.</p>
<h3 id="modtime">ModTime</h3>
<p>The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the <code>--checksum</code> flag.</p>
@@ -800,7 +841,12 @@ y/e/d&gt; y</code></pre>
<p>If you prefer an archive copy then you might use <code>--drive-formats pdf</code>, or if you prefer openoffice/libreoffice formats you might use <code>--drive-formats ods,odt</code>.</p>
<p>Note that rclone adds the extension to the google doc, so if it is called <code>My Spreadsheet</code> on google docs, it will be exported as <code>My Spreadsheet.xlsx</code> or <code>My Spreadsheet.pdf</code> etc.</p>
<p>Here are the possible extensions with their corresponding mime types.</p>
<table style="width:49%;">
<colgroup>
<col width="13%" />
<col width="16%" />
<col width="18%" />
</colgroup>
<thead>
<tr class="header">
<th align="left">Extension</th>
@@ -1007,6 +1053,13 @@ Choose a number from below, or type in your own value
 9 / South America (Sao Paulo) Region.
   \ &quot;sa-east-1&quot;
location_constraint&gt; 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ &quot;&quot;
2 / AES256
\ &quot;AES256&quot;
server_side_encryption&gt;
Remote config
--------------------
[remote]
@@ -1167,6 +1220,8 @@ Choose a number from below, or type in your own value
 6 / OVH
   \ &quot;https://auth.cloud.ovh.net/v2.0&quot;
auth&gt; 1
User domain - optional (v3 auth)
domain&gt; Default
Tenant name - optional
tenant&gt;
Region name - optional
@@ -1174,6 +1229,8 @@ region&gt;
Storage URL - optional
storage_url&gt;
Remote config
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
auth_version&gt;
--------------------
[remote]
user = user_name
@@ -1205,6 +1262,12 @@ y/e/d&gt; y</code></pre>
<p>This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
<h3 id="limitations-1">Limitations</h3>
<p>The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.</p>
<h3 id="troubleshooting">Troubleshooting</h3>
<h4 id="rclone-gives-failed-to-create-file-system-for-remote-bad-request">Rclone gives Failed to create file system for &quot;remote:&quot;: Bad Request</h4>
<p>Due to an oddity of the underlying swift library, it gives a &quot;Bad Request&quot; error rather than a more sensible error when the authentication fails for Swift.</p>
<p>So this most likely means your username / password is wrong. You can investigate further with the <code>--dump-bodies</code> flag.</p>
<h4 id="rclone-gives-failed-to-create-file-system-response-didnt-have-storage-storage-url-and-auth-token">Rclone gives Failed to create file system: Response didn't have storage storage url and auth token</h4>
<p>This is most likely caused by forgetting to specify your tenant when setting up a swift remote.</p>
<h2 id="dropbox">Dropbox</h2>
<p>Paths are specified as <code>remote:path</code></p>
<p>Dropbox paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
@@ -1326,6 +1389,8 @@ Google Application Client Secret - leave blank normally.
client_secret&gt;
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number&gt; 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file&gt;
Access Control List for new objects. Access Control List for new objects.
Choose a number from below, or type in your own value Choose a number from below, or type in your own value
* Object owner gets OWNER access, and all Authenticated Users get READER access. * Object owner gets OWNER access, and all Authenticated Users get READER access.
@ -1390,6 +1455,10 @@ y/e/d&gt; y</code></pre>
<pre><code>rclone ls remote:bucket</code></pre> <pre><code>rclone ls remote:bucket</code></pre>
<p>Sync <code>/home/local/directory</code> to the remote bucket, deleting any excess files in the bucket.</p> <p>Sync <code>/home/local/directory</code> to the remote bucket, deleting any excess files in the bucket.</p>
<pre><code>rclone sync /home/local/directory remote:bucket</code></pre> <pre><code>rclone sync /home/local/directory remote:bucket</code></pre>
<h3 id="service-account-support">Service Account support</h3>
<p>You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.</p>
<p>To get credentials for Google Cloud Platform <a href="https://cloud.google.com/iam/docs/service-accounts">IAM Service Accounts</a>, please head to the <a href="https://console.cloud.google.com/permissions/serviceaccounts">Service Account</a> section of the Google Developer Console. Service Accounts behave just like normal <code>User</code> permissions in <a href="https://cloud.google.com/storage/docs/access-control">Google Cloud Storage ACLs</a>, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.</p>
<p>To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the <code>service_account_file</code> prompt and rclone won't use the browser based authentication flow.</p>
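<p>As an illustration, the resulting remote section in your rclone config file might look something like this (the project number, file path and ACL values here are hypothetical placeholders):</p>

```
[gcs]
type = google cloud storage
project_number = 12345678
service_account_file = /home/user/sa-credentials.json
object_acl = private
bucket_acl = private
```

<p>With <code>service_account_file</code> set, rclone authenticates with the JSON key directly and never opens a browser, which is what makes unattended use possible.</p>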
<h3 id="modified-time-3">Modified time</h3> <h3 id="modified-time-3">Modified time</h3>
<p>Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the &quot;mtime&quot; key in RFC3339 format accurate to 1ns.</p> <p>Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the &quot;mtime&quot; key in RFC3339 format accurate to 1ns.</p>
<h2 id="amazon-cloud-drive">Amazon Cloud Drive</h2> <h2 id="amazon-cloud-drive">Amazon Cloud Drive</h2>
@ -1629,6 +1698,8 @@ y/e/d&gt; y</code></pre>
<pre><code>rclone ls remote:</code></pre> <pre><code>rclone ls remote:</code></pre>
<p>To copy a local directory to an Hubic directory called backup</p> <p>To copy a local directory to an Hubic directory called backup</p>
<pre><code>rclone copy /home/source remote:backup</code></pre> <pre><code>rclone copy /home/source remote:backup</code></pre>
<p>If you want the directory to be visible in the official <em>Hubic browser</em>, you need to copy your files to the <code>default</code> directory</p>
<pre><code>rclone copy /home/source remote:default/backup</code></pre>
<h3 id="modified-time-4">Modified time</h3> <h3 id="modified-time-4">Modified time</h3>
<p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p> <p>The modified time is stored as metadata on the object as <code>X-Object-Meta-Mtime</code> as floating point since the epoch accurate to 1 ns.</p>
<p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p> <p>This is a defacto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.</p>
@ -1703,14 +1774,21 @@ y/e/d&gt; y</code></pre>
<p>Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.</p> <p>Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.</p>
<h3 id="sha1-checksums">SHA1 checksums</h3> <h3 id="sha1-checksums">SHA1 checksums</h3>
<p>The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. You can use the <code>--checksum</code> flag.</p> <p>The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. You can use the <code>--checksum</code> flag.</p>
<p>Large files which are uploaded in chunks will store their SHA1 on the object as <code>X-Bz-Info-large_file_sha1</code> as recommended by Backblaze.</p>
<h3 id="versions">Versions</h3> <h3 id="versions">Versions</h3>
<p>When rclone uploads a new version of a file it creates a <a href="https://www.backblaze.com/b2/docs/file_versions.html">new version of it</a>. Likewise when you delete a file, the old version will still be available.</p> <p>When rclone uploads a new version of a file it creates a <a href="https://www.backblaze.com/b2/docs/file_versions.html">new version of it</a>. Likewise when you delete a file, the old version will still be available.</p>
<p>The old versions of files are visible in the B2 web interface, but not via rclone yet.</p> <p>The old versions of files are visible in the B2 web interface, but not via rclone yet.</p>
<p>Rclone doesn't provide any way of managing old versions (downloading them or deleting them) at the moment. When you <code>purge</code> a bucket, all the old versions will be deleted.</p> <p>Rclone doesn't provide any way of managing old versions (downloading them or deleting them) at the moment. When you <code>purge</code> a bucket, all the old versions will be deleted.</p>
<h3 id="transfers">Transfers</h3> <h3 id="transfers">Transfers</h3>
<p>Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about <code>--transfers 32</code> though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of <code>--transfers 4</code> is definitely too low for Backblaze B2 though.</p> <p>Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about <code>--transfers 32</code> though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of <code>--transfers 4</code> is definitely too low for Backblaze B2 though.</p>
<h3 id="specific-options-5">Specific options</h3>
<p>Here are the command line options specific to this cloud storage system.</p>
<h4 id="b2-chunk-size-valueesize">--b2-chunk-size=SIZE</h4>
<p>When uploading large files chunk the file into this size. Note that these chunks are buffered in memory. 100,000,000 Bytes is the minimum size (default 96M).</p>
<h4 id="b2-upload-cutoffsize">--b2-upload-cutoff=SIZE</h4>
<p>Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files above this size will be uploaded in chunks of <code>--b2-chunk-size</code>. The default value is the largest file which can be uploaded without chunks.</p>
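<p>The default cutoff is simply B2's 5GB single-file upload limit expressed in the binary units rclone uses for SIZE values; the conversion can be sketched as:</p>

```shell
# 5 GB (decimal, 5 * 10^9 bytes) expressed in binary GiB (2^30 bytes)
awk 'BEGIN { printf "%.3f\n", 5 * 10^9 / 2^30 }'
```

<p>which prints 4.657, matching the default cutoff quoted above.</p>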
<h3 id="api">API</h3> <h3 id="api">API</h3>
<p>Here are <a href="https://gist.github.com/ncw/166dabf352b399f1cc1c">some notes I made on the backblaze API</a> while integrating it with rclone which detail the changes I'd like to see.</p> <p>Here are <a href="https://gist.github.com/ncw/166dabf352b399f1cc1c">some notes I made on the backblaze API</a> while integrating it with rclone.</p>
<h2 id="yandex-disk">Yandex Disk</h2> <h2 id="yandex-disk">Yandex Disk</h2>
<p><a href="https://disk.yandex.com">Yandex Disk</a> is a cloud storage solution created by <a href="http://yandex.com">Yandex</a>.</p> <p><a href="https://disk.yandex.com">Yandex Disk</a> is a cloud storage solution created by <a href="http://yandex.com">Yandex</a>.</p>
<p>Yandex paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p> <p>Yandex paths may be as deep as required, eg <code>remote:directory/subdirectory</code>.</p>
@ -1814,6 +1892,46 @@ nounc = true</code></pre>
<p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p> <p>This will use UNC paths on <code>c:\src</code> but not on <code>z:\dst</code>. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.</p>
<h2 id="changelog">Changelog</h2> <h2 id="changelog">Changelog</h2>
<ul> <ul>
<li>v1.30 - 2016-06-18
<ul>
<li>New Features</li>
<li>Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
<ul>
<li>Directory include filtering for efficiency</li>
<li>--max-depth parameter</li>
<li>Better error reporting</li>
<li>More to come</li>
</ul></li>
<li>Retry more errors</li>
<li>Add --ignore-size flag - for uploading images to onedrive</li>
<li>Log -v output to stdout by default</li>
<li>Display the transfer stats in more human readable form</li>
<li>Make 0 size files specifiable with <code>--max-size 0b</code></li>
<li>Add <code>b</code> suffix so we can specify bytes in --bwlimit, --min-size etc</li>
<li>Use &quot;password:&quot; instead of &quot;password&gt;&quot; prompt - thanks Klaus Post and Leigh Klotz</li>
<li>Bug Fixes</li>
<li>Fix retry doing one too many retries</li>
<li>Local</li>
<li>Fix problems with OS X and UTF-8 characters</li>
<li>Amazon Cloud Drive</li>
<li>Check a file exists before uploading to help with 408 Conflict errors</li>
<li>Reauth on 401 errors - this has been causing a lot of problems</li>
<li>Work around spurious 403 errors</li>
<li>Restart directory listings on error</li>
<li>Google Drive</li>
<li>Check a file exists before uploading to help with duplicates</li>
<li>Fix retry of multipart uploads</li>
<li>Backblaze B2</li>
<li>Implement large file uploading</li>
<li>S3</li>
<li>Add AES256 server-side encryption - thanks Justin R. Wilson</li>
<li>Google Cloud Storage</li>
<li>Make sure we don't use conflicting content types on upload</li>
<li>Add service account support - thanks Michal Witkowski</li>
<li>Swift</li>
<li>Add auth version parameter</li>
<li>Add domain option for openstack (v3 auth) - thanks Fabian Ruff</li>
</ul></li>
<li>v1.29 - 2016-04-18 <li>v1.29 - 2016-04-18
<ul> <ul>
<li>New Features</li> <li>New Features</li>
@ -2269,7 +2387,7 @@ Server B&gt; rclone copy /tmp/whatever remote:Backup</code></pre>
<p>The environment values may be either a complete URL or a &quot;host[:port]&quot;, in which case the &quot;http&quot; scheme is assumed.</p> <p>The environment values may be either a complete URL or a &quot;host[:port]&quot;, in which case the &quot;http&quot; scheme is assumed.</p>
<p>The <code>NO_PROXY</code> allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance &quot;foo.com&quot; also matches &quot;bar.foo.com&quot;.</p> <p>The <code>NO_PROXY</code> allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance &quot;foo.com&quot; also matches &quot;bar.foo.com&quot;.</p>
<h3 id="rclone-gives-x509-failed-to-load-system-roots-and-no-roots-provided-error">Rclone gives x509: failed to load system roots and no roots provided error</h3> <h3 id="rclone-gives-x509-failed-to-load-system-roots-and-no-roots-provided-error">Rclone gives x509: failed to load system roots and no roots provided error</h3>
<p>This means that <code>rclone</code> can't find the SSL root certificates. Likely you are running <code>rclone</code> on a NAS with a cut-down Linux OS.</p> <p>This means that <code>rclone</code> can't find the SSL root certificates. Likely you are running <code>rclone</code> on a NAS with a cut-down Linux OS, or possibly on Solaris.</p>
<p>Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.</p> <p>Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.</p>
<pre><code>&quot;/etc/ssl/certs/ca-certificates.crt&quot;, // Debian/Ubuntu/Gentoo etc. <pre><code>&quot;/etc/ssl/certs/ca-certificates.crt&quot;, // Debian/Ubuntu/Gentoo etc.
&quot;/etc/pki/tls/certs/ca-bundle.crt&quot;, // Fedora/RHEL &quot;/etc/pki/tls/certs/ca-bundle.crt&quot;, // Fedora/RHEL
@ -2408,6 +2526,42 @@ h='&#x73;&#x6f;&#102;&#x69;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#106;&#x67;&#x
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>'); document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// --> // -->
</script><noscript>&#106;&#x67;&#x65;&#100;&#x65;&#x6f;&#110;&#32;&#x61;&#116;&#32;&#x73;&#x6f;&#102;&#x69;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li> </script><noscript>&#106;&#x67;&#x65;&#100;&#x65;&#x6f;&#110;&#32;&#x61;&#116;&#32;&#x73;&#x6f;&#102;&#x69;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Jim Tittsler <script type="text/javascript">
<!--
h='&#x6f;&#110;&#106;&#x61;&#112;&#x61;&#110;&#46;&#110;&#x65;&#116;';a='&#64;';n='&#106;&#x77;&#116;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#106;&#x77;&#116;&#32;&#x61;&#116;&#32;&#x6f;&#110;&#106;&#x61;&#112;&#x61;&#110;&#32;&#100;&#x6f;&#116;&#32;&#110;&#x65;&#116;</noscript></li>
<li>Michal Witkowski <script type="text/javascript">
<!--
h='&#x69;&#x6d;&#112;&#114;&#x6f;&#98;&#x61;&#98;&#108;&#x65;&#46;&#x69;&#x6f;';a='&#64;';n='&#x6d;&#x69;&#x63;&#104;&#x61;&#108;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x6d;&#x69;&#x63;&#104;&#x61;&#108;&#32;&#x61;&#116;&#32;&#x69;&#x6d;&#112;&#114;&#x6f;&#98;&#x61;&#98;&#108;&#x65;&#32;&#100;&#x6f;&#116;&#32;&#x69;&#x6f;</noscript></li>
<li>Fabian Ruff <script type="text/javascript">
<!--
h='&#x73;&#x61;&#112;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#102;&#x61;&#98;&#x69;&#x61;&#110;&#46;&#114;&#x75;&#102;&#102;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#102;&#x61;&#98;&#x69;&#x61;&#110;&#46;&#114;&#x75;&#102;&#102;&#32;&#x61;&#116;&#32;&#x73;&#x61;&#112;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Leigh Klotz <script type="text/javascript">
<!--
h='&#x71;&#x75;&#x69;&#120;&#x65;&#x79;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#x6b;&#108;&#x6f;&#116;&#122;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#x6b;&#108;&#x6f;&#116;&#122;&#32;&#x61;&#116;&#32;&#x71;&#x75;&#x69;&#120;&#x65;&#x79;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Romain Lapray <script type="text/javascript">
<!--
h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#108;&#x61;&#112;&#114;&#x61;&#x79;&#46;&#114;&#x6f;&#x6d;&#x61;&#x69;&#110;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#108;&#x61;&#112;&#114;&#x61;&#x79;&#46;&#114;&#x6f;&#x6d;&#x61;&#x69;&#110;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
<li>Justin R. Wilson <script type="text/javascript">
<!--
h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#106;&#114;&#x77;&#x39;&#x37;&#50;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\/'+'a'+'>');
// -->
</script><noscript>&#106;&#114;&#x77;&#x39;&#x37;&#50;&#32;&#x61;&#116;&#32;&#x67;&#x6d;&#x61;&#x69;&#108;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;</noscript></li>
</ul> </ul>
<h2 id="contact-the-rclone-project">Contact the rclone project</h2> <h2 id="contact-the-rclone-project">Contact the rclone project</h2>
<p>The project website is at:</p> <p>The project website is at:</p>
@ -2423,7 +2577,7 @@ document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+e+'<\
<p>Or email <script type="text/javascript"> <p>Or email <script type="text/javascript">
<!-- <!--
h='&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#110;&#x69;&#x63;&#x6b;';e=n+a+h; h='&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#110;&#x69;&#x63;&#x6b;';e=n+a+h;
document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+'Nick Craig-Wood'+'<\/'+'a'+'>'); document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'" clas'+'s="em' + 'ail">'+'&#78;&#x69;&#x63;&#x6b;&#32;&#x43;&#114;&#x61;&#x69;&#x67;&#x2d;&#x57;&#x6f;&#x6f;&#100;'+'<\/'+'a'+'>');
// --> // -->
</script><noscript>&#78;&#x69;&#x63;&#x6b;&#32;&#x43;&#114;&#x61;&#x69;&#x67;&#x2d;&#x57;&#x6f;&#x6f;&#100;&#32;&#40;&#110;&#x69;&#x63;&#x6b;&#32;&#x61;&#116;&#32;&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;&#x29;</noscript></p> </script><noscript>&#78;&#x69;&#x63;&#x6b;&#32;&#x43;&#114;&#x61;&#x69;&#x67;&#x2d;&#x57;&#x6f;&#x6f;&#100;&#32;&#40;&#110;&#x69;&#x63;&#x6b;&#32;&#x61;&#116;&#32;&#x63;&#114;&#x61;&#x69;&#x67;&#x2d;&#x77;&#x6f;&#x6f;&#100;&#32;&#100;&#x6f;&#116;&#32;&#x63;&#x6f;&#x6d;&#x29;</noscript></p>
</body> </body>

MANUAL.md

@ -1,6 +1,6 @@
% rclone(1) User Manual % rclone(1) User Manual
% Nick Craig-Wood % Nick Craig-Wood
% Apr 18, 2016 % Jun 18, 2016
Rclone Rclone
====== ======
@ -181,12 +181,12 @@ go there.
Moves the source to the destination. Moves the source to the destination.
If there are no filters in use this is equivalent to a copy followed If there are no filters in use this is equivalent to a copy followed
by a purge, but may using server side operations to speed it up if by a purge, but may use server side operations to speed it up if
possible. possible.
If filters are in use then it is equivalent to a copy followed by If filters are in use then it is equivalent to a copy followed by
delete, followed by an rmdir (which only removes the directory if delete, followed by an rmdir (which only removes the directory if
empty). The individual file moves will be moved with srver side empty). The individual file moves will be moved with server side
operations if possible. operations if possible.
**Important**: Since this can cause data loss, test first with the **Important**: Since this can cause data loss, test first with the
@ -194,7 +194,7 @@ operations if possible.
### rclone ls remote:path ### ### rclone ls remote:path ###
List all the objects in the the path with size and path. List all the objects in the path with size and path.
### rclone lsd remote:path ### ### rclone lsd remote:path ###
@ -349,6 +349,41 @@ Enter an interactive configuration session.
Prints help on rclone commands and options. Prints help on rclone commands and options.
Quoting and the shell
---------------------
When you are typing commands to your computer you are using something
called the command line shell. This interprets various characters in
an OS specific way.
Here are some gotchas which may help users unfamiliar with the shell rules
### Linux / OSX ###
If your names have spaces or shell metacharacters (eg `*`, `?`, `$`,
`'`, `"` etc) then you must quote them. Use single quotes `'` by default.
rclone copy 'Important files?' remote:backup
If you want to send a `'` you will need to use `"`, eg
rclone copy "O'Reilly Reviews" remote:backup
The rules for quoting metacharacters are complicated and if you want
the full details you'll have to consult the manual page for your
shell.
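To see exactly what the shell hands to a program under each quoting style, you can experiment with `printf` as a stand-in for rclone:

```shell
# Single quotes: everything is literal, including ? and $
printf '%s\n' 'Important files?'
# Double quotes: needed when the name itself contains a single quote
printf '%s\n' "O'Reilly Reviews"
```

The arguments arrive unmangled, which is what rclone needs to match the remote paths.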
### Windows ###
If your names have spaces in you need to put them in `"`, eg
rclone copy "E:\folder name\folder name\folder name" remote:backup
If you are using the root directory on its own then don't quote it
(see [#464](https://github.com/ncw/rclone/issues/464) for why), eg
rclone copy E:\ remote:backup
Server Side Copy Server Side Copy
---------------- ----------------
@ -390,13 +425,14 @@ possibly signed sequence of decimal numbers, each with optional
fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However a suffix of `k` Options which use SIZE use kByte by default. However a suffix of `b`
for kBytes, `M` for MBytes and `G` for GBytes may be used. These are for bytes, `k` for kBytes, `M` for MBytes and `G` for GBytes may be
the binary units, eg 2\*\*10, 2\*\*20, 2\*\*30 respectively. used. These are the binary units, eg 1, 2\*\*10, 2\*\*20, 2\*\*30
respectively.
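The suffixes map to powers of two, which shell arithmetic can confirm:

```shell
# k, M and G are binary multiples of a byte: 2^10, 2^20 and 2^30
echo $((1 << 10)) $((1 << 20)) $((1 << 30))
```

So `--min-size 1M` means 1048576 bytes, not 1000000.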
### --bwlimit=SIZE ### ### --bwlimit=SIZE ###
Bandwidth limit in kBytes/s, or use suffix k|M|G. The default is `0` Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is `0`
which means to not limit bandwidth. which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M` For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
@ -469,6 +505,20 @@ While this isn't a generally recommended option, it can be useful
in cases where your files change due to encryption. However, it cannot in cases where your files change due to encryption. However, it cannot
correct partial transfers in case a transfer was interrupted. correct partial transfers in case a transfer was interrupted.
### --ignore-size ###
Normally rclone will look at modification time and size of files to
see if they are equal. If you set this flag then rclone will check
only the modification time. If `--checksum` is set then it only
checks the checksum.
It will also cause rclone to skip verifying the sizes are the same
after transfer.
This can be useful for transferring files to and from onedrive which
occasionally misreports the size of image files (see
[#399](https://github.com/ncw/rclone/issues/399) for more info).
### -I, --ignore-times ### ### -I, --ignore-times ###
Using this option will cause rclone to unconditionally upload all Using this option will cause rclone to unconditionally upload all
@ -482,7 +532,8 @@ using `--checksum`).
Log all of rclone's output to FILE. This is not active by default. Log all of rclone's output to FILE. This is not active by default.
This can be useful for tracking down problems with syncs in This can be useful for tracking down problems with syncs in
combination with the `-v` flag. combination with the `-v` flag. See the Logging section for more
info.
### --low-level-retries NUMBER ### ### --low-level-retries NUMBER ###
@ -500,6 +551,24 @@ to reduce the value so rclone moves on to a high level retry (see the
Disable low level retries with `--low-level-retries 1`. Disable low level retries with `--low-level-retries 1`.
### --max-depth=N ###
This modifies the recursion depth for all the commands except purge.
So if you do `rclone --max-depth 1 ls remote:path` you will see only
the files in the top level directory. Using `--max-depth 2` means you
will see all the files in first two directory levels and so on.
For historical reasons the `lsd` command defaults to using a
`--max-depth` of 1 - you can override this with the command line flag.
You can use this command to disable recursion (with `--max-depth 1`).
Note that if you use this with `sync` and `--delete-excluded` the
files not recursed through are considered excluded and will be deleted
on the destination. Test first with `--dry-run` if you are not sure
what will happen.
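The behaviour is analogous to `find -maxdepth` on a local tree; a hypothetical local example (not rclone itself, and the `/tmp` path is made up):

```shell
# Build a two-level tree, then list files at depth 1 only
mkdir -p /tmp/maxdepth-demo/sub
touch /tmp/maxdepth-demo/top.txt /tmp/maxdepth-demo/sub/deep.txt
find /tmp/maxdepth-demo -maxdepth 1 -type f   # like --max-depth 1: top.txt only
```

Files in `sub/` are never visited, just as rclone skips directories beyond the given depth.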
### --modify-window=TIME ### ### --modify-window=TIME ###
When checking whether a file has been modified, this is the maximum When checking whether a file has been modified, this is the maximum
@ -547,9 +616,6 @@ This can be useful transferring files from dropbox which have been
modified by the desktop sync client which doesn't set checksums of modified by the desktop sync client which doesn't set checksums of
modification times in the same way as rclone. modification times in the same way as rclone.
When using this flag, rclone won't update mtimes of remote files if
they are incorrect as it would normally.
### --stats=TIME ### ### --stats=TIME ###
Rclone will print stats at regular intervals to show its progress. Rclone will print stats at regular intervals to show its progress.
@ -651,9 +717,9 @@ a) Add Password
q) Quit to main menu q) Quit to main menu
a/q> a a/q> a
Enter NEW configuration password: Enter NEW configuration password:
password> password:
Confirm NEW password: Confirm NEW password:
password> password:
Password set Password set
Your configuration is encrypted. Your configuration is encrypted.
c) Change Password c) Change Password
@ -670,13 +736,13 @@ configuration.
There is no way to recover the configuration if you lose your password. There is no way to recover the configuration if you lose your password.
rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox)
which in term uses XSalsa20 and Poly1305 to encrypt and authenticate which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate
your configuration with secret-key cryptography. your configuration with secret-key cryptography.
The password is SHA-256 hashed, which produces the key for secretbox. The password is SHA-256 hashed, which produces the key for secretbox.
The hashed password is not stored. The hashed password is not stored.
While this provides very good security, we do not recommend storing While this provides very good security, we do not recommend storing
your encrypted rclone configuration in public, if it contains sensitive your encrypted rclone configuration in public if it contains sensitive
information, maybe except if you use a very strong password. information, maybe except if you use a very strong password.
If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS` If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS`
@ -686,7 +752,7 @@ used for decrypting the configuration.
If you are running rclone inside a script, you might want to disable If you are running rclone inside a script, you might want to disable
password prompts. To do that, pass the parameter password prompts. To do that, pass the parameter
`--ask-password=false` to rclone. This will make rclone fail instead `--ask-password=false` to rclone. This will make rclone fail instead
of asking for a password, if if `RCLONE_CONFIG_PASS` doesn't contain of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain
a valid password. a valid password.
@ -754,6 +820,25 @@ For the filtering options
See the [filtering section](http://rclone.org/filtering/). See the [filtering section](http://rclone.org/filtering/).
Logging
-------
rclone has 3 levels of logging, `Error`, `Info` and `Debug`.
By default rclone logs `Error` and `Info` to standard error and `Debug`
to standard output. This means you can redirect standard output and
standard error to different places.
By default rclone will produce `Error` and `Info` level messages.
If you use the `-q` flag, rclone will only produce `Error` messages.
If you use the `-v` flag, rclone will produce `Error`, `Info` and
`Debug` messages.
If you use the `--log-file=FILE` option, rclone will redirect `Error`,
`Info` and `Debug` messages along with standard error to FILE.
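Because `Debug` goes to standard output while `Error` and `Info` go to standard error, the two streams can be split with ordinary shell redirection. A small stand-in demonstrates the idea (the actual rclone invocation is elided):

```shell
# Simulate the split: Debug on stdout, Info/Error on stderr
{ echo "Debug: considering file"; echo "Info: transferred 1 file" >&2; } \
  > /tmp/debug.log 2> /tmp/info-error.log
cat /tmp/debug.log        # only the Debug line
cat /tmp/info-error.log   # only the Info line
```

The same redirections applied to `rclone -v sync ...` would give you a quiet progress log and a separate debug trace.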
Exit Code Exit Code
--------- ---------
@ -859,6 +944,10 @@ and exclude rules like `--include`, `--exclude`, `--include-from`,
try them out is using the `ls` command, or `--dry-run` together with try them out is using the `ls` command, or `--dry-run` together with
`-v`. `-v`.
**Important** Due to limitations of the command line parser you can
only use any of these options once - if you duplicate them then rclone
will use the last one only.
## Patterns ## ## Patterns ##
The patterns used to match files for inclusion or exclusion are based The patterns used to match files for inclusion or exclusion are based
@ -922,12 +1011,36 @@ Special characters can be escaped with a `\` before them.
\\.jpg - matches "\.jpg" \\.jpg - matches "\.jpg"
\[one\].jpg - matches "[one].jpg" \[one\].jpg - matches "[one].jpg"
Note also that rclone filter globs can only be used in one of the
filter command line flags, not in the specification of the remote, so
`rclone copy "remote:dir*.jpg" /path/to/dir` won't work - what is
required is `rclone --include "*.jpg" copy remote:dir /path/to/dir`
### Directories ###
Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
\a\*.jpg
Rclone will synthesize the directory include rule
\a\
If you put any rules which end in `\` then it will only match
directories.
Directory matches are **only** used to optimise directory access
patterns - you must still match the files that you want to match.
Directory matches won't optimise anything on bucket based remotes (eg
s3, swift, google compute storage, b2) which don't have a concept of
directory.
### Differences between rsync and rclone patterns ### ### Differences between rsync and rclone patterns ###
Rclone implements bash style `{a,b,c}` glob matching which rsync doesn't. Rclone implements bash style `{a,b,c}` glob matching which rsync doesn't.
Rclone ignores `/` at the end of a pattern.
Rclone always does a wildcard match so `\` must always escape a `\`. Rclone always does a wildcard match so `\` must always escape a `\`.
## How the rules are used ## ## How the rules are used ##
@ -960,6 +1073,11 @@ This would exclude
* `secret17.jpg` * `secret17.jpg`
* non `*.jpg` and `*.png` * non `*.jpg` and `*.png`
A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory
(Eg local, drive, onedrive, amazon cloud drive) and not on bucket
based remotes (eg s3, swift, google compute storage, b2).
## Adding filtering rules ## ## Adding filtering rules ##
Filtering rules are added with the following command line flags. Filtering rules are added with the following command line flags.
@ -1540,6 +1658,13 @@ Choose a number from below, or type in your own value
9 / South America (Sao Paulo) Region. 9 / South America (Sao Paulo) Region.
\ "sa-east-1" \ "sa-east-1"
location_constraint> 1 location_constraint> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
Remote config
--------------------
[remote]
@ -1762,6 +1887,8 @@ Choose a number from below, or type in your own value
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"
auth> 1
User domain - optional (v3 auth)
domain> Default
Tenant name - optional
tenant>
Region name - optional
@ -1769,6 +1896,8 @@ region>
Storage URL - optional
storage_url>
Remote config
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
auth_version>
--------------------
[remote]
user = user_name
@ -1828,6 +1957,22 @@ The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
### Troubleshooting ###
#### Rclone gives Failed to create file system for "remote:": Bad Request ####
Due to an oddity of the underlying swift library, it gives a "Bad
Request" error rather than a more sensible error when the
authentication fails for Swift.
So this most likely means your username / password is wrong. You can
investigate further with the `--dump-bodies` flag.
#### Rclone gives Failed to create file system: Response didn't have storage storage url and auth token ####
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
Dropbox
---------------------------------
@ -2005,6 +2150,8 @@ Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
* Object owner gets OWNER access, and all Authenticated Users get READER access.
@ -2087,6 +2234,30 @@ files in the bucket.
rclone sync /home/local/directory remote:bucket
### Service Account support ###
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.
To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console. Service Accounts behave just
like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machines. These credentials
are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser based authentication
flow.
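After answering the prompts, the resulting remote section in your rclone config file looks something like the sketch below (the remote name, project number and credentials path are made-up examples):

```
[remote]
type = google cloud storage
project_number = 12345678
service_account_file = /home/user/sa-credentials.json
```

With `service_account_file` set, rclone authenticates with the Service Account credentials and no `token` entry is needed.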
### Modified time ###
Google Cloud Storage stores md5sums natively and rclone stores
@ -2479,6 +2650,11 @@ To copy a local directory to an Hubic directory called backup
rclone copy /home/source remote:backup
If you want the directory to be visible in the official *Hubic
browser*, you need to copy your files to the `default` directory
rclone copy /home/source remote:default/backup
### Modified time ###
The modified time is stored as metadata on the object as
@ -2603,6 +2779,9 @@ method to set the modification time independent of doing an upload.
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process. You can use the `--checksum` flag.
Large files which are uploaded in chunks will store their SHA1 on the
object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
### Versions ###
When rclone uploads a new version of a file it creates a [new version
@ -2627,11 +2806,29 @@ depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --b2-chunk-size=SIZE ####
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory. 100,000,000 Bytes is the minimum
size (default 96M).
#### --b2-upload-cutoff=SIZE ####
Cutoff for switching to chunked upload (default 4.657GiB ==
5GB). Files above this size will be uploaded in chunks of
`--b2-chunk-size`. The default value is the largest file which can be
uploaded without chunks.
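For example, to raise both limits for a transfer of very large files (the source path and bucket name below are illustrative only):

```
rclone --b2-upload-cutoff 200M --b2-chunk-size 100M copy /path/to/big-files remote:bucket
```

Since chunks are buffered in memory, larger values of `--b2-chunk-size` increase memory usage.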
### API ###
Here are [some notes I made on the backblaze
API](https://gist.github.com/ncw/166dabf352b399f1cc1c) while
integrating it with rclone.
Yandex Disk
----------------------------------------
@ -2815,6 +3012,42 @@ file exceeds 258 characters on z, so only use this option if you have to.
Changelog
---------
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
* Directory include filtering for efficiency
* --max-depth parameter
* Better error reporting
* More to come
* Retry more errors
* Add --ignore-size flag - for uploading images to onedrive
* Log -v output to stdout by default
* Display the transfer stats in more human readable form
* Make 0 size files specifiable with `--max-size 0b`
* Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc
* Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
* Bug Fixes
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Cloud Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
* Restart directory listings on error
* Google Drive
* Check a file exists before uploading to help with duplicates
* Fix retry of multipart uploads
* Backblaze B2
* Implement large file uploading
* S3
* Add AES256 server-side encryption - thanks Justin R. Wilson
* Google Cloud Storage
* Make sure we don't use conflicting content types on upload
* Add service account support - thanks Michal Witkowski
* Swift
* Add auth version parameter
* Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18
* New Features
* Implement `-I, --ignore-times` for unconditional upload
@ -3261,7 +3494,8 @@ For instance "foo.com" also matches "bar.foo.com".
### Rclone gives x509: failed to load system roots and no roots provided error ###
This means that `rclone` can't find the SSL root certificates. Likely
you are running `rclone` on a NAS with a cut-down Linux OS, or
possibly on Solaris.
Rclone (via the Go runtime) tries to load the root certificates from
these places on Linux.
@ -3353,6 +3587,12 @@ Contributors
* Werner Beroux <werner@beroux.com>
* Brian Stengaard <brian@stengaard.eu>
* Jakub Gedeon <jgedeon@sofi.com>
* Jim Tittsler <jwt@onjapan.net>
* Michal Witkowski <michal@improbable.io>
* Fabian Ruff <fabian.ruff@sap.com>
* Leigh Klotz <klotz@quixey.com>
* Romain Lapray <lapray.romain@gmail.com>
* Justin R. Wilson <jrw972@gmail.com>
Contact the rclone project
--------------------------
@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jun 18, 2016
@ -182,12 +182,11 @@ move source:path dest:path
Moves the source to the destination.
If there are no filters in use this is equivalent to a copy followed by
a purge, but may use server side operations to speed it up if possible.
If filters are in use then it is equivalent to a copy followed by
delete, followed by an rmdir (which only removes the directory if
empty). The individual file moves will be moved with server side
operations if possible.
IMPORTANT: Since this can cause data loss, test first with the --dry-run
@ -195,7 +194,7 @@ flag.
rclone ls remote:path
List all the objects in the path with size and path.
rclone lsd remote:path
@ -326,14 +325,14 @@ The result being
Dedupe can be run non interactively using the --dedupe-mode flag.
- --dedupe-mode interactive - interactive as above.
- --dedupe-mode skip - removes identical files then skips anything left.
- --dedupe-mode first - removes identical files then keeps the first one.
- --dedupe-mode newest - removes identical files then keeps the newest one.
- --dedupe-mode oldest - removes identical files then keeps the oldest one.
- --dedupe-mode rename - removes identical files then renames the rest to be different.
@ -351,6 +350,41 @@ rclone help
Prints help on rclone commands and options.
Quoting and the shell
When you are typing commands to your computer you are using something
called the command line shell. This interprets various characters in an
OS specific way.
Here are some gotchas which may help users unfamiliar with the shell
rules
Linux / OSX
If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc)
then you must quote them. Use single quotes ' by default.
rclone copy 'Important files?' remote:backup
If you want to send a ' you will need to use ", eg
rclone copy "O'Reilly Reviews" remote:backup
The rules for quoting metacharacters are complicated and if you want the
full details you'll have to consult the manual page for your shell.
Windows
If your names have spaces in you need to put them in ", eg
rclone copy "E:\folder name\folder name\folder name" remote:backup
If you are using the root directory on its own then don't quote it (see
#464 for why), eg
rclone copy E:\ remote:backup
Server Side Copy
Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
@ -391,14 +425,14 @@ possibly signed sequence of decimal numbers, each with optional fraction
and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units
are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However a suffix of b for
bytes, k for kBytes, M for MBytes and G for GBytes may be used. These
are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
--bwlimit=SIZE
Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
@ -471,6 +505,20 @@ While this isn't a generally recommended option, it can be useful in
cases where your files change due to encryption. However, it cannot
correct partial transfers in case a transfer was interrupted.
--ignore-size
Normally rclone will look at modification time and size of files to see
if they are equal. If you set this flag then rclone will check only the
modification time. If --checksum is set then it only checks the
checksum.
It will also cause rclone to skip verifying the sizes are the same after
transfer.
This can be useful for transferring files to and from onedrive which
occasionally misreports the size of image files (see #399 for more
info).
-I, --ignore-times
Using this option will cause rclone to unconditionally upload all files
@ -484,7 +532,7 @@ time and are the same size (or have the same checksum if using
Log all of rclone's output to FILE. This is not active by default. This
can be useful for tracking down problems with syncs in combination with
the -v flag. See the Logging section for more info.
--low-level-retries NUMBER
@ -501,6 +549,24 @@ quicker.
Disable low level retries with --low-level-retries 1.
--max-depth=N
This modifies the recursion depth for all the commands except purge.
So if you do rclone --max-depth 1 ls remote:path you will see only the
files in the top level directory. Using --max-depth 2 means you will see
all the files in first two directory levels and so on.
For historical reasons the lsd command defaults to using a --max-depth
of 1 - you can override this with the command line flag.
You can use this command to disable recursion (with --max-depth 1).
Note that if you use this with sync and --delete-excluded the files not
recursed through are considered excluded and will be deleted on the
destination. Test first with --dry-run if you are not sure what will
happen.
--modify-window=TIME
When checking whether a file has been modified, this is the maximum
@ -547,9 +613,6 @@ This can be useful transferring files from dropbox which have been
modified by the desktop sync client which doesn't set checksums or
modification times in the same way as rclone.
--stats=TIME
Rclone will print stats at regular intervals to show its progress.
@ -650,9 +713,9 @@ Go into s, Set configuration password:
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
@ -666,13 +729,13 @@ password or completely remove encryption from your configuration.
There is no way to recover the configuration if you lose your password.
rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to
encrypt and authenticate your configuration with secret-key
cryptography. The password is SHA-256 hashed, which produces the key for
secretbox. The hashed password is not stored.
While this provides very good security, we do not recommend storing your
encrypted rclone configuration in public if it contains sensitive
information, except perhaps if you use a very strong password.
If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
@ -681,8 +744,8 @@ used for decrypting the configuration.
If you are running rclone inside a script, you might want to disable
password prompts. To do that, pass the parameter --ask-password=false to
rclone. This will make rclone fail instead of asking for a password if
RCLONE_CONFIG_PASS doesn't contain a valid password.
Developer options
@ -749,6 +812,25 @@ For the filtering options
See the filtering section.
Logging
rclone has 3 levels of logging, Error, Info and Debug.
By default rclone logs Error and Info to standard error and Debug to
standard output. This means you can redirect standard output and
standard error to different places.
By default rclone will produce Error and Info level messages.
If you use the -q flag, rclone will only produce Error messages.
If you use the -v flag, rclone will produce Error, Info and Debug
messages.
If you use the --log-file=FILE option, rclone will redirect Error, Info
and Debug messages along with standard error to FILE.
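As a sketch of this, assuming a configured remote named `remote:`, the two streams can be captured separately with ordinary shell redirection:

```
rclone -v ls remote:path >debug.log 2>errors-and-info.log
```

Debug output lands in debug.log while Error and Info messages go to errors-and-info.log.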
Exit Code
If any errors occurred during the command, rclone will set a non zero
@ -853,6 +935,10 @@ exclude rules like --include, --exclude, --include-from, --exclude-from,
--filter, or --filter-from. The simplest way to try them out is using
the ls command, or --dry-run together with -v.
IMPORTANT Due to limitations of the command line parser you can only use
any of these options once - if you duplicate them then rclone will use
the last one only.
Patterns
@ -916,12 +1002,34 @@ Special characters can be escaped with a \ before them.
\\.jpg - matches "\.jpg"
\[one\].jpg - matches "[one].jpg"
Note also that rclone filter globs can only be used in one of the filter
command line flags, not in the specification of the remote, so
rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required
is rclone --include "*.jpg" copy remote:dir /path/to/dir
Directories
Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
/a/*.jpg
Rclone will synthesize the directory include rule
/a/
If you put any rules which end in / then it will only match directories.
Directory matches are ONLY used to optimise directory access patterns -
you must still match the files that you want to match. Directory matches
won't optimise anything on bucket based remotes (eg s3, swift, google
compute storage, b2) which don't have a concept of directory.
Differences between rsync and rclone patterns
Rclone implements bash style {a,b,c} glob matching which rsync doesn't.
Rclone ignores / at the end of a pattern.
Rclone always does a wildcard match so \ must always escape a \.
@ -954,6 +1062,11 @@ This would exclude
- secret17.jpg
- non *.jpg and *.png
A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory (Eg
local, drive, onedrive, amazon cloud drive) and not on bucket based
remotes (eg s3, swift, google compute storage, b2).
Adding filtering rules
@ -1327,8 +1440,8 @@ of that file.
Revisions follow the standard google policy which at time of writing was
- They are deleted after 30 days or 100 revisions (whatever comes first).
- They do not count towards a user storage quota.
Deleting files
@ -1365,8 +1478,8 @@ off, namely deleting files permanently.
--drive-auth-owner-only
Only consider files owned by the authenticated user. Requires that
--drive-full-list=true (default).
--drive-formats
@ -1392,25 +1505,84 @@ My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
Here are the possible extensions with their corresponding mime types.
Extension  Mime Type                                                                   Description
---------  --------------------------------------------------------------------------  ------------------------------------
csv        text/csv                                                                    Standard CSV format for Spreadsheets
doc        application/msword                                                          Microsoft Office Document
docx       application/vnd.openxmlformats-officedocument.wordprocessingml.document    Microsoft Office Document
html       text/html                                                                   An HTML Document
jpg        image/jpeg                                                                  A JPEG Image File
ods        application/vnd.oasis.opendocument.spreadsheet                              Openoffice Spreadsheet
ods        application/x-vnd.oasis.opendocument.spreadsheet                            Openoffice Spreadsheet
odt        application/vnd.oasis.opendocument.text                                     Openoffice Document
pdf        application/pdf                                                             Adobe PDF Format
png        image/png                                                                   PNG Image Format
pptx       application/vnd.openxmlformats-officedocument.presentationml.presentation  Microsoft Office Powerpoint
rtf        application/rtf                                                             Rich Text Format
svg        image/svg+xml                                                               Scalable Vector Graphics Format
txt        text/plain                                                                  Plain Text
xls        application/vnd.ms-excel                                                    Microsoft Office Spreadsheet
xlsx       application/vnd.openxmlformats-officedocument.spreadsheetml.sheet           Microsoft Office Spreadsheet
zip        application/zip                                                             A ZIP file of HTML, Images CSS
Limitations
@ -1535,6 +1707,13 @@ This will guide you through an interactive setup process.
9 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
Remote config
--------------------
[remote]
@ -1751,6 +1930,8 @@ This will guide you through an interactive setup process.
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"
auth> 1
User domain - optional (v3 auth)
domain> Default
Tenant name - optional
tenant>
Region name - optional
@ -1758,6 +1939,8 @@ This will guide you through an interactive setup process.
Storage URL - optional
storage_url>
Remote config
AuthVersion - optional - set to (1,2,3) if your auth URL has no version
auth_version>
--------------------
[remote]
user = user_name
@ -1814,6 +1997,22 @@ The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
Troubleshooting
Rclone gives Failed to create file system for "remote:": Bad Request
Due to an oddity of the underlying swift library, it gives a "Bad
Request" error rather than a more sensible error when the authentication
fails for Swift.
So this most likely means your username / password is wrong. You can
investigate further with the --dump-bodies flag.
Rclone gives Failed to create file system: Response didn't have storage storage url and auth token
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
Dropbox
@ -1988,6 +2187,8 @@ This will guide you through an interactive setup process:
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
* Object owner gets OWNER access, and all Authenticated Users get READER access.
@ -2069,6 +2270,25 @@ files in the bucket.
rclone sync /home/local/directory remote:bucket
Service Account support
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful when
you want to synchronise files onto machines that don't have actively
logged-in users, for example build machines.
To get credentials for Google Cloud Platform IAM Service Accounts,
please head to the Service Account section of the Google Developer
Console. Service Accounts behave just like normal User permissions in
Google Cloud Storage ACLs, so you can limit their access (e.g. make them
read only). After creating an account, a JSON file containing the
Service Account's credentials will be downloaded onto your machines.
These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to
your Service Account credentials at the service_account_file prompt and
rclone won't use the browser based authentication flow.
Modified time
Google Cloud Storage stores md5sums natively and rclone stores
@ -2454,6 +2674,11 @@ To copy a local directory to an Hubic directory called backup
rclone copy /home/source remote:backup
If you want the directory to be visible in the official _Hubic browser_,
you need to copy your files to the default directory
rclone copy /home/source remote:default/backup
Modified time
The modified time is stored as metadata on the object as
@ -2574,6 +2799,9 @@ SHA1 checksums
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process. You can use the --checksum flag.
Large files which are uploaded in chunks will store their SHA1 on the
object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.
Versions
When rclone uploads a new version of a file it creates a new version of
@ -2597,10 +2825,26 @@ hardware, how big the files are, how much you want to load your
computer, etc. The default of --transfers 4 is definitely too low for
Backblaze B2 though.
Specific options
Here are the command line options specific to this cloud storage system.
--b2-chunk-size=SIZE
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory. 100,000,000 Bytes is the minimum
size (default 96M).
--b2-upload-cutoff=SIZE
Cutoff for switching to chunked upload (default 4.657GiB == 5GB). Files
above this size will be uploaded in chunks of --b2-chunk-size. The
default value is the largest file which can be uploaded without chunks.
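For example, to raise both the chunk size and the cutoff when copying
very large files to B2 (a sketch, assuming a configured remote named
remote:):
rclone --b2-chunk-size 200M --b2-upload-cutoff 1G copy /path/to/big-files remote:bucket
Note that larger chunks increase memory use, since each chunk is
buffered in memory before upload.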
API API
Here are some notes I made on the backblaze API while integrating it Here are some notes I made on the backblaze API while integrating it
with rclone which detail the changes I'd like to see. with rclone.
Yandex Disk Yandex Disk
@ -2779,6 +3023,46 @@ characters on z, so only use this option if you have to.
Changelog Changelog
- v1.30 - 2016-06-18
- New Features
- Directory listing code reworked for more features and better
error reporting (thanks to Klaus Post for help). This enables
- Directory include filtering for efficiency
- --max-depth parameter
- Better error reporting
- More to come
- Retry more errors
- Add --ignore-size flag - for uploading images to onedrive
- Log -v output to stdout by default
- Display the transfer stats in more human readable form
- Make 0 size files specifiable with --max-size 0b
- Add b suffix so we can specify bytes in --bwlimit, --min-size
etc
- Use "password:" instead of "password>" prompt - thanks Klaus
Post and Leigh Klotz
- Bug Fixes
- Fix retry doing one too many retries
- Local
- Fix problems with OS X and UTF-8 characters
- Amazon Cloud Drive
- Check a file exists before uploading to help with 408 Conflict
errors
- Reauth on 401 errors - this has been causing a lot of problems
- Work around spurious 403 errors
- Restart directory listings on error
- Google Drive
- Check a file exists before uploading to help with duplicates
- Fix retry of multipart uploads
- Backblaze B2
- Implement large file uploading
- S3
- Add AES256 server-side encryption - thanks Justin R. Wilson
- Google Cloud Storage
- Make sure we don't use conflicting content types on upload
- Add service account support - thanks Michal Witkowski
- Swift
- Add auth version parameter
- Add domain option for openstack (v3 auth) - thanks Fabian Ruff
- v1.29 - 2016-04-18 - v1.29 - 2016-04-18
- New Features - New Features
- Implement -I, --ignore-times for unconditional upload - Implement -I, --ignore-times for unconditional upload
@ -2799,8 +3083,8 @@ Changelog
the rest to be different. the rest to be different.
- Bug fixes - Bug fixes
- Make rclone check obey the --size-only flag. - Make rclone check obey the --size-only flag.
- Use "application/octet-stream" if discovered mime type is - Use "application/octet-stream" if discovered mime type
invalid. is invalid.
- Fix missing "quit" option when there are no remotes. - Fix missing "quit" option when there are no remotes.
- Google Drive - Google Drive
- Increase default chunk size to 8 MB - increases upload speed of - Increase default chunk size to 8 MB - increases upload speed of
@ -2834,8 +3118,8 @@ Changelog
- Don't make directories if --dry-run set - Don't make directories if --dry-run set
- Fix and document the move command - Fix and document the move command
- Fix redirecting stderr on unix-like OSes when using --log-file - Fix redirecting stderr on unix-like OSes when using --log-file
- Fix delete command to wait until all finished - fixes missing - Fix delete command to wait until all finished - fixes
deletes. missing deletes.
- Backblaze B2 - Backblaze B2
- Use one upload URL per go routine fixes - Use one upload URL per go routine fixes
more than one upload using auth token more than one upload using auth token
@ -2863,10 +3147,10 @@ Changelog
- Add support for multiple hash types - we now check SHA1 as well - Add support for multiple hash types - we now check SHA1 as well
as MD5 hashes. as MD5 hashes.
- delete command which does obey the filters (unlike purge) - delete command which does obey the filters (unlike purge)
- dedupe command to deduplicate a remote. Useful with Google - dedupe command to deduplicate a remote. Useful with
Drive. Google Drive.
- Add --ignore-existing flag to skip all files that exist on - Add --ignore-existing flag to skip all files that exist
destination. on destination.
- Add --delete-before, --delete-during, --delete-after flags. - Add --delete-before, --delete-during, --delete-after flags.
- Add --memprofile flag to debug memory use. - Add --memprofile flag to debug memory use.
- Warn the user about files with same name but different case - Warn the user about files with same name but different case
@ -2909,8 +3193,8 @@ Changelog
- Re-enable server side copy - Re-enable server side copy
- Don't mask HTTP error codes with JSON decode error - Don't mask HTTP error codes with JSON decode error
- S3 - S3
- Fix corrupting Content-Type on mod time update (thanks Joseph - Fix corrupting Content-Type on mod time update (thanks
Spurrier) Joseph Spurrier)
- v1.25 - 2015-11-14 - v1.25 - 2015-11-14
- New features - New features
- Implement Hubic storage system - Implement Hubic storage system
@ -3261,7 +3545,8 @@ must be comma separated, and can contain domains or parts. For instance
Rclone gives x509: failed to load system roots and no roots provided error Rclone gives x509: failed to load system roots and no roots provided error
This means that rclone can't find the SSL root certificates. Likely you This means that rclone can't find the SSL root certificates. Likely you
are running rclone on a NAS with a cut-down Linux OS. are running rclone on a NAS with a cut-down Linux OS, or possibly on
Solaris.
Rclone (via the Go runtime) tries to load the root certificates from Rclone (via the Go runtime) tries to load the root certificates from
these places on Linux. these places on Linux.
@ -3348,6 +3633,12 @@ Contributors
- Werner Beroux werner@beroux.com - Werner Beroux werner@beroux.com
- Brian Stengaard brian@stengaard.eu - Brian Stengaard brian@stengaard.eu
- Jakub Gedeon jgedeon@sofi.com - Jakub Gedeon jgedeon@sofi.com
- Jim Tittsler jwt@onjapan.net
- Michal Witkowski michal@improbable.io
- Fabian Ruff fabian.ruff@sap.com
- Leigh Klotz klotz@quixey.com
- Romain Lapray lapray.romain@gmail.com
- Justin R. Wilson jrw972@gmail.com
Contact the rclone project Contact the rclone project


@ -1,12 +1,48 @@
--- ---
title: "Documentation" title: "Documentation"
description: "Rclone Changelog" description: "Rclone Changelog"
date: "2016-04-18" date: "2016-06-18"
--- ---
Changelog Changelog
--------- ---------
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
* Directory include filtering for efficiency
* --max-depth parameter
* Better error reporting
* More to come
* Retry more errors
* Add --ignore-size flag - for uploading images to onedrive
* Log -v output to stdout by default
* Display the transfer stats in more human readable form
* Make 0 size files specifiable with `--max-size 0b`
* Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc
* Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
* Bug Fixes
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Cloud Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
* Restart directory listings on error
* Google Drive
* Check a file exists before uploading to help with duplicates
* Fix retry of multipart uploads
* Backblaze B2
* Implement large file uploading
* S3
* Add AES256 server-side encryption - thanks Justin R. Wilson
* Google Cloud Storage
* Make sure we don't use conflicting content types on upload
* Add service account support - thanks Michal Witkowski
* Swift
* Add auth version parameter
* Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18 * v1.29 - 2016-04-18
* New Features * New Features
* Implement `-I, --ignore-times` for unconditional upload * Implement `-I, --ignore-times` for unconditional upload


@ -2,40 +2,40 @@
title: "Rclone downloads" title: "Rclone downloads"
description: "Download rclone binaries for your OS." description: "Download rclone binaries for your OS."
type: page type: page
date: "2016-04-18" date: "2016-06-18"
--- ---
Rclone Download v1.29 Rclone Download v1.30
===================== =====================
* Windows * Windows
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-windows-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-windows-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-windows-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-windows-amd64.zip)
* OSX * OSX
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-osx-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-osx-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-osx-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-osx-amd64.zip)
* Linux * Linux
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-linux-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-linux-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-linux-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.30-linux-arm.zip)
* FreeBSD * FreeBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-freebsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-freebsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.30-freebsd-arm.zip)
* NetBSD * NetBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-netbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-netbsd-amd64.zip)
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-arm.zip) * [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.30-netbsd-arm.zip)
* OpenBSD * OpenBSD
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-openbsd-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-openbsd-amd64.zip)
* Plan 9 * Plan 9
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-386.zip) * [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.30-plan9-386.zip)
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-plan9-amd64.zip)
* Solaris * Solaris
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-solaris-amd64.zip) * [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.30-solaris-amd64.zip)
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.29). You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.30).
Downloads for scripting Downloads for scripting
======================= =======================


@ -1,4 +1,4 @@
package fs package fs
// Version of rclone // Version of rclone
var Version = "v1.29" var Version = "v1.30"

rclone.1

@ -1,5 +1,8 @@
.\"t .\"t
.TH "rclone" "1" "Apr 18, 2016" "User Manual" "" .\" Automatically generated by Pandoc 1.16.0.2
.\"
.TH "rclone" "1" "Jun 18, 2016" "User Manual" ""
.hy
.SH Rclone .SH Rclone
.PP .PP
[IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/) [IMAGE: Logo (http://rclone.org/img/rclone-120x120.png)] (http://rclone.org/)
@ -240,20 +243,19 @@ contents go there.
Moves the source to the destination. Moves the source to the destination.
.PP .PP
If there are no filters in use this is equivalent to a copy followed by If there are no filters in use this is equivalent to a copy followed by
a purge, but may using server side operations to speed it up if a purge, but may use server side operations to speed it up if possible.
possible.
.PP .PP
If filters are in use then it is equivalent to a copy followed by If filters are in use then it is equivalent to a copy followed by
delete, followed by an rmdir (which only removes the directory if delete, followed by an rmdir (which only removes the directory if
empty). empty).
The individual file moves will be moved with srver side operations if The individual file moves will be moved with server side operations if
possible. possible.
.PP .PP
\f[B]Important\f[]: Since this can cause data loss, test first with the \f[B]Important\f[]: Since this can cause data loss, test first with the
\-\-dry\-run flag. \-\-dry\-run flag.
.SS rclone ls remote:path .SS rclone ls remote:path
.PP .PP
List all the objects in the the path with size and path. List all the objects in the path with size and path.
.SS rclone lsd remote:path .SS rclone lsd remote:path
.PP .PP
List all directories/containers/buckets in the path. List all directories/containers/buckets in the path.
@ -431,6 +433,55 @@ Enter an interactive configuration session.
.SS rclone help .SS rclone help
.PP .PP
Prints help on rclone commands and options. Prints help on rclone commands and options.
.SS Quoting and the shell
.PP
When you are typing commands to your computer you are using something
called the command line shell.
This interprets various characters in an OS specific way.
.PP
Here are some gotchas which may help users unfamiliar with the shell
rules
.SS Linux / OSX
.PP
If your names have spaces or shell metacharacters (eg \f[C]*\f[],
\f[C]?\f[], \f[C]$\f[], \f[C]\[aq]\f[], \f[C]"\f[] etc) then you must
quote them.
Use single quotes \f[C]\[aq]\f[] by default.
.IP
.nf
\f[C]
rclone\ copy\ \[aq]Important\ files?\[aq]\ remote:backup
\f[]
.fi
.PP
If you want to send a \f[C]\[aq]\f[] you will need to use \f[C]"\f[], eg
.IP
.nf
\f[C]
rclone\ copy\ "O\[aq]Reilly\ Reviews"\ remote:backup
\f[]
.fi
.PP
The rules for quoting metacharacters are complicated and if you want the
full details you\[aq]ll have to consult the manual page for your shell.
.SS Windows
.PP
If your names have spaces in you need to put them in \f[C]"\f[], eg
.IP
.nf
\f[C]
rclone\ copy\ "E:\\folder\ name\\folder\ name\\folder\ name"\ remote:backup
\f[]
.fi
.PP
If you are using the root directory on its own then don\[aq]t quote it
(see #464 (https://github.com/ncw/rclone/issues/464) for why), eg
.IP
.nf
\f[C]
rclone\ copy\ E:\\\ remote:backup
\f[]
.fi
.SS Server Side Copy .SS Server Side Copy
.PP .PP
Drive, S3, Dropbox, Swift and Google Cloud Storage support server side Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
@ -479,12 +530,12 @@ with optional fraction and a unit suffix, such as "300ms", "\-1.5h" or
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
.PP .PP
Options which use SIZE use kByte by default. Options which use SIZE use kByte by default.
However a suffix of \f[C]k\f[] for kBytes, \f[C]M\f[] for MBytes and However a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes,
\f[C]G\f[] for GBytes may be used. \f[C]M\f[] for MBytes and \f[C]G\f[] for GBytes may be used.
These are the binary units, eg 2**10, 2**20, 2**30 respectively. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
.SS \-\-bwlimit=SIZE .SS \-\-bwlimit=SIZE
.PP .PP
Bandwidth limit in kBytes/s, or use suffix k|M|G. Bandwidth limit in kBytes/s, or use suffix b|k|M|G.
The default is \f[C]0\f[] which means to not limit bandwidth. The default is \f[C]0\f[] which means to not limit bandwidth.
.PP .PP
For example to limit bandwidth usage to 10 MBytes/s use For example to limit bandwidth usage to 10 MBytes/s use
@ -562,6 +613,19 @@ While this isn\[aq]t a generally recommended option, it can be useful in
cases where your files change due to encryption. cases where your files change due to encryption.
However, it cannot correct partial transfers in case a transfer was However, it cannot correct partial transfers in case a transfer was
interrupted. interrupted.
.SS \-\-ignore\-size
.PP
Normally rclone will look at modification time and size of files to see
if they are equal.
If you set this flag then rclone will check only the modification time.
If \f[C]\-\-checksum\f[] is set then it only checks the checksum.
.PP
It will also cause rclone to skip verifying the sizes are the same after
transfer.
.PP
This can be useful for transferring files to and from onedrive which
occasionally misreports the size of image files (see
#399 (https://github.com/ncw/rclone/issues/399) for more info).
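.PP
For example, to copy a directory of images to a OneDrive remote while
ignoring its misreported sizes (a sketch, assuming a remote named
\f[C]onedrive:\f[]):
.IP
.nf
\f[C]
rclone\ \-\-ignore\-size\ copy\ /home/photos\ onedrive:photos
\f[]
.fi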
.SS \-I, \-\-ignore\-times .SS \-I, \-\-ignore\-times
.PP .PP
Using this option will cause rclone to unconditionally upload all files Using this option will cause rclone to unconditionally upload all files
@ -576,6 +640,7 @@ Log all of rclone\[aq]s output to FILE.
This is not active by default. This is not active by default.
This can be useful for tracking down problems with syncs in combination This can be useful for tracking down problems with syncs in combination
with the \f[C]\-v\f[] flag. with the \f[C]\-v\f[] flag.
See the Logging section for more info.
.SS \-\-low\-level\-retries NUMBER .SS \-\-low\-level\-retries NUMBER
.PP .PP
This controls the number of low level retries rclone does. This controls the number of low level retries rclone does.
@ -591,6 +656,27 @@ to reduce the value so rclone moves on to a high level retry (see the
\f[C]\-\-retries\f[] flag) quicker. \f[C]\-\-retries\f[] flag) quicker.
.PP .PP
Disable low level retries with \f[C]\-\-low\-level\-retries\ 1\f[]. Disable low level retries with \f[C]\-\-low\-level\-retries\ 1\f[].
.SS \-\-max\-depth=N
.PP
This modifies the recursion depth for all the commands except purge.
.PP
So if you do \f[C]rclone\ \-\-max\-depth\ 1\ ls\ remote:path\f[] you
will see only the files in the top level directory.
Using \f[C]\-\-max\-depth\ 2\f[] means you will see all the files in
first two directory levels and so on.
.PP
For historical reasons the \f[C]lsd\f[] command defaults to using a
\f[C]\-\-max\-depth\f[] of 1 \- you can override this with the command
line flag.
.PP
You can use this command to disable recursion (with
\f[C]\-\-max\-depth\ 1\f[]).
.PP
Note that if you use this with \f[C]sync\f[] and
\f[C]\-\-delete\-excluded\f[] the files not recursed through are
considered excluded and will be deleted on the destination.
Test first with \f[C]\-\-dry\-run\f[] if you are not sure what will
happen.
.SS \-\-modify\-window=TIME .SS \-\-modify\-window=TIME
.PP .PP
When checking whether a file has been modified, this is the maximum When checking whether a file has been modified, this is the maximum
@ -634,9 +720,6 @@ If you set this flag then rclone will check only the size.
This can be useful when transferring files from dropbox which have been This can be useful when transferring files from dropbox which have been
modified by the desktop sync client which doesn\[aq]t set checksums or modified by the desktop sync client which doesn\[aq]t set checksums or
modification times in the same way as rclone. modification times in the same way as rclone.
.PP
When using this flag, rclone won\[aq]t update mtimes of remote files if
they are incorrect as it would normally.
.SS \-\-stats=TIME .SS \-\-stats=TIME
.PP .PP
Rclone will print stats at regular intervals to show its progress. Rclone will print stats at regular intervals to show its progress.
@ -744,9 +827,9 @@ a)\ Add\ Password
q)\ Quit\ to\ main\ menu q)\ Quit\ to\ main\ menu
a/q>\ a a/q>\ a
Enter\ NEW\ configuration\ password: Enter\ NEW\ configuration\ password:
password> password:
Confirm\ NEW\ password: Confirm\ NEW\ password:
password> password:
Password\ set Password\ set
Your\ configuration\ is\ encrypted. Your\ configuration\ is\ encrypted.
c)\ Change\ Password c)\ Change\ Password
@ -765,13 +848,13 @@ There is no way to recover the configuration if you lose your password.
.PP .PP
rclone uses nacl rclone uses nacl
secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) which secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) which
in term uses XSalsa20 and Poly1305 to encrypt and authenticate your in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your
configuration with secret\-key cryptography. configuration with secret\-key cryptography.
The password is SHA\-256 hashed, which produces the key for secretbox. The password is SHA\-256 hashed, which produces the key for secretbox.
The hashed password is not stored. The hashed password is not stored.
.PP .PP
While this provides very good security, we do not recommend storing your While this provides very good security, we do not recommend storing your
encrypted rclone configuration in public, if it contains sensitive encrypted rclone configuration in public if it contains sensitive
information, except perhaps if you use a very strong password. information, except perhaps if you use a very strong password.
.PP .PP
If it is safe in your environment, you can set the If it is safe in your environment, you can set the
@ -783,7 +866,7 @@ If you are running rclone inside a script, you might want to disable
password prompts. password prompts.
To do that, pass the parameter \f[C]\-\-ask\-password=false\f[] to To do that, pass the parameter \f[C]\-\-ask\-password=false\f[] to
rclone. rclone.
This will make rclone fail instead of asking for a password, if if This will make rclone fail instead of asking for a password if
\f[C]RCLONE_CONFIG_PASS\f[] doesn\[aq]t contain a valid password. \f[C]RCLONE_CONFIG_PASS\f[] doesn\[aq]t contain a valid password.
.SS Developer options .SS Developer options
.PP .PP
@ -857,6 +940,28 @@ For the filtering options
\f[C]\-\-dump\-filters\f[] \f[C]\-\-dump\-filters\f[]
.PP .PP
See the filtering section (http://rclone.org/filtering/). See the filtering section (http://rclone.org/filtering/).
.SS Logging
.PP
rclone has 3 levels of logging, \f[C]Error\f[], \f[C]Info\f[] and
\f[C]Debug\f[].
.PP
By default rclone logs \f[C]Error\f[] and \f[C]Info\f[] to standard
error and \f[C]Debug\f[] to standard output.
This means you can redirect standard output and standard error to
different places.
.PP
By default rclone will produce \f[C]Error\f[] and \f[C]Info\f[] level
messages.
.PP
If you use the \f[C]\-q\f[] flag, rclone will only produce
\f[C]Error\f[] messages.
.PP
If you use the \f[C]\-v\f[] flag, rclone will produce \f[C]Error\f[],
\f[C]Info\f[] and \f[C]Debug\f[] messages.
.PP
If you use the \f[C]\-\-log\-file=FILE\f[] option, rclone will redirect
\f[C]Error\f[], \f[C]Info\f[] and \f[C]Debug\f[] messages along with
standard error to FILE.
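.PP
For example, because \f[C]Debug\f[] goes to standard output while
\f[C]Error\f[] and \f[C]Info\f[] go to standard error, ordinary shell
redirection can split the two streams (a sketch, assuming a remote named
\f[C]remote:\f[]):
.IP
.nf
\f[C]
rclone\ \-v\ sync\ /home/source\ remote:backup\ >debug.log\ 2>info\-and\-errors.log
\f[]
.fi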
.SS Exit Code .SS Exit Code
.PP .PP
If any errors occurred during the command, rclone will set a non zero If any errors occurred during the command, rclone will set a non zero
@ -973,6 +1078,10 @@ exclude rules like \f[C]\-\-include\f[], \f[C]\-\-exclude\f[],
\f[C]\-\-filter\f[], or \f[C]\-\-filter\-from\f[]. \f[C]\-\-filter\f[], or \f[C]\-\-filter\-from\f[].
The simplest way to try them out is using the \f[C]ls\f[] command, or The simplest way to try them out is using the \f[C]ls\f[] command, or
\f[C]\-\-dry\-run\f[] together with \f[C]\-v\f[]. \f[C]\-\-dry\-run\f[] together with \f[C]\-v\f[].
.PP
\f[B]Important\f[] Due to limitations of the command line parser you can
only use any of these options once \- if you duplicate them then rclone
will use the last one only.
.SS Patterns .SS Patterns
.PP .PP
The patterns used to match files for inclusion or exclusion are based on The patterns used to match files for inclusion or exclusion are based on
@ -1066,13 +1175,45 @@ Special characters can be escaped with a \f[C]\\\f[] before them.
\\[one\\].jpg\ \ \-\ matches\ "[one].jpg" \\[one\\].jpg\ \ \-\ matches\ "[one].jpg"
\f[] \f[]
.fi .fi
.PP
Note also that rclone filter globs can only be used in one of the filter
command line flags, not in the specification of the remote, so
\f[C]rclone\ copy\ "remote:dir*.jpg"\ /path/to/dir\f[] won\[aq]t work \-
what is required is
\f[C]rclone\ \-\-include\ "*.jpg"\ copy\ remote:dir\ /path/to/dir\f[]
.SS Directories
.PP
Rclone keeps track of directories that could match any file patterns.
.PP
Eg if you add the include rule
.IP
.nf
\f[C]
\\a\\*.jpg
\f[]
.fi
.PP
Rclone will synthesize the directory include rule
.IP
.nf
\f[C]
\\a\\
\f[]
.fi
.PP
If you put any rules which end in \f[C]\\\f[] then it will only match
directories.
.PP
Directory matches are \f[B]only\f[] used to optimise directory access
patterns \- you must still match the files that you want to match.
Directory matches won\[aq]t optimise anything on bucket based remotes
(eg s3, swift, google cloud storage, b2) which don\[aq]t have a
concept of directory.
.SS Differences between rsync and rclone patterns .SS Differences between rsync and rclone patterns
.PP .PP
Rclone implements bash style \f[C]{a,b,c}\f[] glob matching which rsync Rclone implements bash style \f[C]{a,b,c}\f[] glob matching which rsync
doesn\[aq]t. doesn\[aq]t.
.PP .PP
Rclone ignores \f[C]/\f[] at the end of a pattern.
.PP
Rclone always does a wildcard match so \f[C]\\\f[] must always escape a Rclone always does a wildcard match so \f[C]\\\f[] must always escape a
\f[C]\\\f[]. \f[C]\\\f[].
.SS How the rules are used .SS How the rules are used
@ -1111,6 +1252,12 @@ This would exclude
\f[C]secret17.jpg\f[] \f[C]secret17.jpg\f[]
.IP \[bu] 2 .IP \[bu] 2
non \f[C]*.jpg\f[] and \f[C]*.png\f[] non \f[C]*.jpg\f[] and \f[C]*.png\f[]
.PP
A similar process is done on directory entries before recursing into
them.
This only works on remotes which have a concept of directory (Eg local,
drive, onedrive, amazon cloud drive) and not on bucket based remotes (eg
s3, swift, google cloud storage, b2).
.SS Adding filtering rules .SS Adding filtering rules
.PP .PP
Filtering rules are added with the following command line flags. Filtering rules are added with the following command line flags.
@ -1729,7 +1876,7 @@ Here are the possible extensions with their corresponding mime types.
.PP .PP
.TS .TS
tab(@); tab(@);
l l l. lw(9.7n) lw(11.7n) lw(12.6n).
T{ T{
Extension Extension
T}@T{ T}@T{
@ -1988,6 +2135,13 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 9\ /\ South\ America\ (Sao\ Paulo)\ Region. \ 9\ /\ South\ America\ (Sao\ Paulo)\ Region.
\ \ \ \\\ "sa\-east\-1" \ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1 location_constraint>\ 1
The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ None
\ \ \ \\\ ""
\ 2\ /\ AES256
\ \ \ \\\ "AES256"
server_side_encryption>
Remote\ config Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote] [remote]
@ -2258,6 +2412,8 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 6\ /\ OVH \ 6\ /\ OVH
\ \ \ \\\ "https://auth.cloud.ovh.net/v2.0" \ \ \ \\\ "https://auth.cloud.ovh.net/v2.0"
auth>\ 1 auth>\ 1
User\ domain\ \-\ optional\ (v3\ auth)
domain>\ Default
Tenant\ name\ \-\ optional Tenant\ name\ \-\ optional
tenant>\ tenant>\
Region\ name\ \-\ optional Region\ name\ \-\ optional
@ -2265,6 +2421,8 @@ region>\
Storage\ URL\ \-\ optional Storage\ URL\ \-\ optional
storage_url>\ storage_url>\
Remote\ config Remote\ config
AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version
auth_version>\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote] [remote]
user\ =\ user_name user\ =\ user_name
@ -2335,6 +2493,20 @@ amongst others) for storing the modification time for an object.
The Swift API doesn\[aq]t return a correct MD5SUM for segmented files The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
MD5SUM for these. MD5SUM for these.
.SS Troubleshooting
.SS Rclone gives Failed to create file system for "remote:": Bad Request
.PP
Due to an oddity of the underlying swift library, it gives a "Bad
Request" error rather than a more sensible error when the authentication
fails for Swift.
.PP
So this most likely means your username / password is wrong.
You can investigate further with the \f[C]\-\-dump\-bodies\f[] flag.
.SS Rclone gives Failed to create file system: Response didn\[aq]t have
storage url and auth token
.PP
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
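.PP
In either case you can inspect the raw HTTP requests and responses with
the \f[C]\-\-dump\-bodies\f[] flag and a cheap command such as a listing
(a sketch, assuming a remote named \f[C]remote:\f[]):
.IP
.nf
\f[C]
rclone\ \-\-dump\-bodies\ lsd\ remote:
\f[]
.fi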
.SS Dropbox .SS Dropbox
.PP .PP
Paths are specified as \f[C]remote:path\f[] Paths are specified as \f[C]remote:path\f[]
@ -2537,6 +2709,8 @@ Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>\ client_secret>\
Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console. Project\ number\ optional\ \-\ needed\ only\ for\ list/create/delete\ buckets\ \-\ see\ your\ developer\ console.
project_number>\ 12345678 project_number>\ 12345678
Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login.
service_account_file>\
Access\ Control\ List\ for\ new\ objects. Access\ Control\ List\ for\ new\ objects.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access. \ *\ Object\ owner\ gets\ OWNER\ access,\ and\ all\ Authenticated\ Users\ get\ READER\ access.
@ -2636,6 +2810,31 @@ excess files in the bucket.
rclone\ sync\ /home/local/directory\ remote:bucket rclone\ sync\ /home/local/directory\ remote:bucket
\f[] \f[]
.fi .fi
.SS Service Account support
.PP
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e.
not tied to a specific end\-user Google account.
This is useful when you want to synchronise files onto machines that
don\[aq]t have actively logged\-in users, for example build machines.
.PP
To get credentials for Google Cloud Platform IAM Service
Accounts (https://cloud.google.com/iam/docs/service-accounts), please
head to the Service
Account (https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console.
Service Accounts behave just like normal \f[C]User\f[] permissions in
Google Cloud Storage
ACLs (https://cloud.google.com/storage/docs/access-control), so you can
limit their access (e.g.
make them read only).
After creating an account, a JSON file containing the Service
Account\[aq]s credentials will be downloaded onto your machines.
These credentials are what rclone will use for authentication.
.PP
To use a Service Account instead of OAuth2 token flow, enter the path to
your Service Account credentials at the \f[C]service_account_file\f[]
prompt and rclone won\[aq]t use the browser based authentication flow.
.SS Modified time .SS Modified time
.PP .PP
Google Cloud Storage stores md5sums natively and rclone stores Google Cloud Storage stores md5sums natively and rclone stores
@ -3080,6 +3279,16 @@ To copy a local directory to an Hubic directory called backup
rclone\ copy\ /home/source\ remote:backup rclone\ copy\ /home/source\ remote:backup
\f[] \f[]
.fi .fi
.PP
If you want the directory to be visible in the official \f[I]Hubic
browser\f[], you need to copy your files to the \f[C]default\f[]
directory
.IP
.nf
\f[C]
rclone\ copy\ /home/source\ remote:default/backup
\f[]
.fi
.SS Modified time .SS Modified time
.PP .PP
The modified time is stored as metadata on the object as The modified time is stored as metadata on the object as
@ -3223,6 +3432,10 @@ API method to set the modification time independent of doing an upload.
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
You can use the \f[C]\-\-checksum\f[] flag.
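.PP
For example, to sync using SHA1 checksums rather than sizes and
modification times (the remote and bucket names here are
illustrative):
.IP
.nf
\f[C]
rclone\ sync\ \-\-checksum\ /home/source\ remote:bucket
\f[]
.fi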
.PP
Large files which are uploaded in chunks will store their SHA1 on the
object as \f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by
Backblaze.
.SS Versions
.PP
When rclone uploads a new version of a file it creates a new version of
the files are, how much you want to load your computer, etc.
The default of \f[C]\-\-transfers\ 4\f[] is definitely too low for
Backblaze B2 though.
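.PP
For example, to try 32 parallel transfers (the best value for your
hardware and connection may well differ):
.IP
.nf
\f[C]
rclone\ copy\ \-\-transfers\ 32\ /home/source\ remote:bucket
\f[]
.fi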
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
.SS \-\-b2\-chunk\-size=SIZE
.PP
When uploading large files chunk the file into this size.
Note that these chunks are buffered in memory.
100,000,000 Bytes is the minimum size (default 96M).
.SS \-\-b2\-upload\-cutoff=SIZE
.PP
Cutoff for switching to chunked upload (default 4.657GiB == 5GB).
Files above this size will be uploaded in chunks of
\f[C]\-\-b2\-chunk\-size\f[].
The default value is the largest file which can be uploaded without
chunks.
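.PP
For example, to upload files over 1\ GiB in 200\ MiB chunks (values
chosen for illustration; remember that each chunk is buffered in
memory):
.IP
.nf
\f[C]
rclone\ copy\ \-\-b2\-upload\-cutoff\ 1G\ \-\-b2\-chunk\-size\ 200M\ /home/source\ remote:bucket
\f[]
.fi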
.SS API
.PP
Here are some notes I made on the backblaze
API (https://gist.github.com/ncw/166dabf352b399f1cc1c) while integrating
it with rclone.
.SS Yandex Disk
.PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution
exceeds 258 characters on z, so only use this option if you have to.
.SS Changelog
.IP \[bu] 2
v1.30 \- 2016\-06\-18
.RS 2
.IP \[bu] 2
New Features
.IP \[bu] 2
Directory listing code reworked for more features and better error
reporting (thanks to Klaus Post for help).
This enables
.RS 2
.IP \[bu] 2
Directory include filtering for efficiency
.IP \[bu] 2
\-\-max\-depth parameter
.IP \[bu] 2
Better error reporting
.IP \[bu] 2
More to come
.RE
.IP \[bu] 2
Retry more errors
.IP \[bu] 2
Add \-\-ignore\-size flag \- for uploading images to onedrive
.IP \[bu] 2
Log \-v output to stdout by default
.IP \[bu] 2
Display the transfer stats in more human readable form
.IP \[bu] 2
Make 0 size files specifiable with \f[C]\-\-max\-size\ 0b\f[]
.IP \[bu] 2
Add \f[C]b\f[] suffix so we can specify bytes in \-\-bwlimit,
\-\-min\-size etc
.IP \[bu] 2
Use "password:" instead of "password>" prompt \- thanks Klaus Post and
Leigh Klotz
.IP \[bu] 2
Bug Fixes
.IP \[bu] 2
Fix retry doing one too many retries
.IP \[bu] 2
Local
.IP \[bu] 2
Fix problems with OS X and UTF\-8 characters
.IP \[bu] 2
Amazon Cloud Drive
.IP \[bu] 2
Check a file exists before uploading to help with 408 Conflict errors
.IP \[bu] 2
Reauth on 401 errors \- this has been causing a lot of problems
.IP \[bu] 2
Work around spurious 403 errors
.IP \[bu] 2
Restart directory listings on error
.IP \[bu] 2
Google Drive
.IP \[bu] 2
Check a file exists before uploading to help with duplicates
.IP \[bu] 2
Fix retry of multipart uploads
.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Implement large file uploading
.IP \[bu] 2
S3
.IP \[bu] 2
Add AES256 server\-side encryption \- thanks Justin R.
Wilson
.IP \[bu] 2
Google Cloud Storage
.IP \[bu] 2
Make sure we don\[aq]t use conflicting content types on upload
.IP \[bu] 2
Add service account support \- thanks Michal Witkowski
.IP \[bu] 2
Swift
.IP \[bu] 2
Add auth version parameter
.IP \[bu] 2
Add domain option for openstack (v3 auth) \- thanks Fabian Ruff
.RE
.IP \[bu] 2
v1.29 \- 2016\-04\-18
.RS 2
.IP \[bu] 2
This means that \f[C]rclone\f[] can\[aq]t find the SSL root
certificates.
Likely you are running \f[C]rclone\f[] on a NAS with a cut\-down Linux
OS, or possibly on Solaris.
.PP
Rclone (via the Go runtime) tries to load the root certificates from
these places on Linux.
Brian Stengaard <brian@stengaard.eu>
.IP \[bu] 2
Jakub Gedeon <jgedeon@sofi.com>
.IP \[bu] 2
Jim Tittsler <jwt@onjapan.net>
.IP \[bu] 2
Michal Witkowski <michal@improbable.io>
.IP \[bu] 2
Fabian Ruff <fabian.ruff@sap.com>
.IP \[bu] 2
Leigh Klotz <klotz@quixey.com>
.IP \[bu] 2
Romain Lapray <lapray.romain@gmail.com>
.IP \[bu] 2
Justin R.
Wilson <jrw972@gmail.com>
.SS Contact the rclone project
.PP
The project website is at: