mirror of https://github.com/rclone/rclone synced 2024-11-20 21:27:33 +01:00

docs: spelling: e.g.

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Authored by Josh Soref on 2020-10-13 17:49:58 -04:00; committed by Nick Craig-Wood
parent d4f38d45a5
commit e4a87f772f
113 changed files with 325 additions and 325 deletions
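The commit message doesn't record how the replacements were produced; a sweep of this kind could be sketched with standard tools. This is a hypothetical example, not the actual tooling used: GNU sed is assumed (for `-i` and `\b` word boundaries), and the demo file path is made up.

```shell
# Hypothetical sketch of the "eg" -> "e.g." sweep (not the actual tooling used).
# GNU sed assumed; \b keeps words like "egg" or "begin" untouched, but each
# hit would still need manual review, e.g. where "eg" is a code identifier.
printf 'Which OS you are using (eg Windows 7, 64 bit)\n' > /tmp/eg_demo.txt
sed -i 's/\beg\b/e.g./g' /tmp/eg_demo.txt
cat /tmp/eg_demo.txt
```

Across a whole tree one would run the same expression under `find` or `git grep -l`, then review the diff by hand before committing.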


@@ -33,18 +33,18 @@ The Rclone Developers
-#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
+#### Which OS you are using and how many bits (e.g. Windows 7, 64 bit)
-#### Which cloud storage system are you using? (eg Google Drive)
+#### Which cloud storage system are you using? (e.g. Google Drive)
-#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
+#### The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
-#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
+#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)


@@ -12,10 +12,10 @@ When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
-* Rclone version (eg output from `rclone -V`)
-* Which OS you are using and how many bits (eg Windows 7, 64 bit)
-* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
-* A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
+* Rclone version (e.g. output from `rclone -V`)
+* Which OS you are using and how many bits (e.g. Windows 7, 64 bit)
+* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
+* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a pull request ##
@@ -48,7 +48,7 @@ When ready - run the unit tests for the code you changed
go test -v
-Note that you may need to make a test remote, eg `TestSwift` for some
+Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
Note the top level Makefile targets
@@ -170,7 +170,7 @@ with modules beneath.
* log - logging facilities
* march - iterates directories in lock step
* object - in memory Fs objects
-* operations - primitives for sync, eg Copy, Move
+* operations - primitives for sync, e.g. Copy, Move
* sync - sync directories
* walk - walk a directory
* fstest - provides integration test framework
@@ -207,7 +207,7 @@ from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
-Documentation for rclone sub commands is with their code, eg
+Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`.
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
@@ -364,7 +364,7 @@ See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
-alphabetical order of full name of remote (eg `drive` is ordered as
+alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page


@@ -45,7 +45,7 @@ Rclone uses the labels like this:
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
-When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (eg the next go release).
+When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
The milestones have these meanings:


@@ -48,8 +48,8 @@ If rclone needs a point release due to some horrendous bug:
Set vars
-* BASE_TAG=v1.XX # eg v1.52
-* NEW_TAG=${BASE_TAG}.Y # eg v1.52.1
+* BASE_TAG=v1.XX # e.g. v1.52
+* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then


@@ -274,7 +274,7 @@ func validateAccessTier(tier string) bool {
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
-401, // Unauthorized (eg "Token has expired")
+401, // Unauthorized (e.g. "Token has expired")
408, // Request Timeout
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error


@@ -290,7 +290,7 @@ func (o *Object) split() (bucket, bucketPath string) {
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
-401, // Unauthorized (eg "Token has expired")
+401, // Unauthorized (e.g. "Token has expired")
408, // Request Timeout
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
@@ -1440,7 +1440,7 @@ func (o *Object) Size() int64 {
// Make sure it is lower case
//
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
-// Some tools (eg Cyberduck) use this
+// Some tools (e.g. Cyberduck) use this
func cleanSHA1(sha1 string) (out string) {
out = strings.ToLower(sha1)
const unverified = "unverified:"


@@ -68,7 +68,7 @@ func init() {
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "remote",
-Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
+Help: "Remote to cache.\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "plex_url",
@@ -581,7 +581,7 @@ Some valid examples are:
"0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to
-specify files to fetch, eg
+specify files to fetch, e.g.
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye


@@ -42,7 +42,7 @@ import (
// used mostly for consistency checks (lazily for performance reasons).
// Other formats can be developed that use an external meta store
// free of these limitations, but this needs some support from
-// rclone core (eg. metadata store interfaces).
+// rclone core (e.g. metadata store interfaces).
//
// The following types of chunks are supported:
// data and control, active and temporary.
@@ -140,7 +140,7 @@ func init() {
Name: "remote",
Required: true,
Help: `Remote to chunk/unchunk.
-Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).`,
}, {
Name: "chunk_size",
@@ -464,7 +464,7 @@ func (f *Fs) setChunkNameFormat(pattern string) error {
// filePath can be name, relative or absolute path of main file.
//
// chunkNo must be a zero based index of data chunk.
-// Negative chunkNo eg. -1 indicates a control chunk.
+// Negative chunkNo e.g. -1 indicates a control chunk.
// ctrlType is type of control chunk (must be valid).
// ctrlType must be "" for data chunks.
//
@@ -994,7 +994,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote st
}
// Wrapped remote may or may not have seen EOF from chunking reader,
-// eg. the box multi-uploader reads exactly the chunk size specified
+// e.g. the box multi-uploader reads exactly the chunk size specified
// and skips the "EOF" read. Hence, switch to next limit here.
if !(c.chunkLimit == 0 || c.chunkLimit == c.chunkSize || c.sizeTotal == -1 || c.done) {
silentlyRemove(ctx, chunk)
@@ -1183,7 +1183,7 @@ func (c *chunkingReader) Read(buf []byte) (bytesRead int, err error) {
if c.chunkLimit <= 0 {
// Chunk complete - switch to next one.
// Note #1:
-// We might not get here because some remotes (eg. box multi-uploader)
+// We might not get here because some remotes (e.g. box multi-uploader)
// read the specified size exactly and skip the concluding EOF Read.
// Then a check in the put loop will kick in.
// Note #2:
@@ -1387,7 +1387,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// However, if rclone dies unexpectedly, it can leave hidden temporary
// chunks, which cannot be discovered using the `list` command.
// Remove does not try to search for such chunks or to delete them.
-// Sometimes this can lead to strange results eg. when `list` shows that
+// Sometimes this can lead to strange results e.g. when `list` shows that
// directory is empty but `rmdir` refuses to remove it because on the
// level of wrapped remote it's actually *not* empty.
// As a workaround users can use `purge` to forcibly remove it.


@@ -15,10 +15,10 @@ import (
// Command line flags
var (
-// Invalid characters are not supported by some remotes, eg. Mailru.
+// Invalid characters are not supported by some remotes, e.g. Mailru.
// We enable testing with invalid characters when -remote is not set, so
// chunker overlays a local directory, but invalid characters are disabled
-// by default when -remote is set, eg. when test_all runs backend tests.
+// by default when -remote is set, e.g. when test_all runs backend tests.
// You can still test with invalid characters using the below flag.
UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set")
)


@@ -30,7 +30,7 @@ func init() {
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "remote",
-Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
+Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "filename_encryption",
@@ -76,7 +76,7 @@ NB If filename_encryption is "off" then this option will do nothing.`,
}, {
Name: "server_side_across_configs",
Default: false,
-Help: `Allow server-side operations (eg copy) to work across different crypt configs.
+Help: `Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.


@@ -435,7 +435,7 @@ need to use --ignore size also.`,
}, {
Name: "server_side_across_configs",
Default: false,
-Help: `Allow server-side operations (eg copy) to work across different drive configs.
+Help: `Allow server-side operations (e.g. copy) to work across different drive configs.
This can be useful if you wish to do a server-side copy between two
different Google drives. Note that this isn't enabled by default
@@ -1690,7 +1690,7 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE
if len(paths) == 1 {
// don't check parents at root because
// - shared with me items have no parents at the root
-// - if using a root alias, eg "root" or "appDataFolder" the ID won't match
+// - if using a root alias, e.g. "root" or "appDataFolder" the ID won't match
i = 0
// items at root can have more than one parent so we need to put
// the item in just once.
@@ -2440,7 +2440,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
usage := &fs.Usage{
Used: fs.NewUsageValue(q.UsageInDrive), // bytes in use
Trashed: fs.NewUsageValue(q.UsageInDriveTrash), // bytes in trash
-Other: fs.NewUsageValue(q.Usage - q.UsageInDrive), // other usage eg gmail in drive
+Other: fs.NewUsageValue(q.Usage - q.UsageInDrive), // other usage e.g. gmail in drive
}
if q.Limit > 0 {
usage.Total = fs.NewUsageValue(q.Limit) // quota of bytes that can be used


@@ -58,7 +58,7 @@ The input format is comma separated list of key,value pairs. Standard
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
-You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
+You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
`,
Default: fs.CommaSepList{},
Advanced: true,


@@ -71,7 +71,7 @@ func init() {
type credentials struct {
Token string `json:"token"` // OpenStack token
Endpoint string `json:"endpoint"` // OpenStack endpoint
-Expires string `json:"expires"` // Expires date - eg "2015-11-09T14:24:56+01:00"
+Expires string `json:"expires"` // Expires date - e.g. "2015-11-09T14:24:56+01:00"
}
// Fs represents a remote hubic


@@ -87,13 +87,13 @@ Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy
- source file is being updated" if the file changes during upload.
-However on some file systems this modification time check may fail (eg
+However on some file systems this modification time check may fail (e.g.
[Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this
check can be disabled with this flag.
If this flag is set, rclone will use its best efforts to transfer a
file which is being updated. If the file is only having things
-appended to it (eg a log) then rclone will transfer the log file with
+appended to it (e.g. a log) then rclone will transfer the log file with
the size it had the first time rclone saw it.
If the file is being modified throughout (not just appended to) then


@@ -274,7 +274,7 @@ listing, set this option.`,
}, {
Name: "server_side_across_configs",
Default: false,
-Help: `Allow server-side operations (eg copy) to work across different onedrive configs.
+Help: `Allow server-side operations (e.g. copy) to work across different onedrive configs.
This can be useful if you wish to do a server-side copy between two
different Onedrives. Note that this isn't enabled by default


@@ -207,7 +207,7 @@ func (o *Object) split() (bucket, bucketPath string) {
func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {
/*
Pattern to match an endpoint,
-eg: "http(s)://qingstor.com:443" --> "http(s)", "qingstor.com", 443
+e.g.: "http(s)://qingstor.com:443" --> "http(s)", "qingstor.com", 443
"http(s)//qingstor.com" --> "http(s)", "qingstor.com", ""
"qingstor.com" --> "", "qingstor.com", ""
*/


@@ -225,7 +225,7 @@ func init() {
Help: "Use this if unsure. Will use v4 signatures and an empty region.",
}, {
Value: "other-v2-signature",
-Help: "Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.",
+Help: "Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.",
}},
}, {
Name: "endpoint",
@@ -1016,7 +1016,7 @@ The minimum is 0 and the maximum is 5GB.`,
Help: `Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
-size (eg from "rclone rcat" or uploaded with "rclone mount" or google
+size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
@@ -1121,7 +1121,7 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
-Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
+Some providers (e.g. AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.`,
Default: true,
@@ -1133,7 +1133,7 @@ setting.`,
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
-Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
+Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.`,
Default: false,
Advanced: true,
}, {
@@ -1223,7 +1223,7 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
// Constants
const (
-metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
+metaMtime = "Mtime" // the meta key to store mtime in - e.g. X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
// The maximum size of object we can COPY - this should be 5GiB but is < 5GB for b2 compatibility
// See https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
@@ -1306,7 +1306,7 @@ type Object struct {
lastModified time.Time // Last modified
meta map[string]*string // The object metadata if known - may be nil
mimeType string // MimeType of object - may be ""
-storageClass string // eg GLACIER
+storageClass string // e.g. GLACIER
}
// ------------------------------------------------------------


@@ -576,7 +576,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
newID = resp.Header.Get("Location")
if newID == "" {
-// look up ID if not returned (eg for syncFolder)
+// look up ID if not returned (e.g. for syncFolder)
var found bool
newID, found, err = f.FindLeaf(ctx, pathID, leaf)
if err != nil {


@@ -51,7 +51,7 @@ default for this is 5GB which is its maximum value.`,
Name: "no_chunk",
Help: `Don't chunk files during streaming upload.
-When doing streaming uploads (eg using rcat or mount) setting this
+When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
@@ -272,7 +272,7 @@ func (f *Fs) Features() *fs.Features {
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
-401, // Unauthorized (eg "Token has expired")
+401, // Unauthorized (e.g. "Token has expired")
408, // Request Timeout
409, // Conflict - various states that could be resolved on a retry
429, // Rate exceeded.


@@ -81,7 +81,7 @@ func init() {
IsPassword: true,
}, {
Name: "bearer_token",
-Help: "Bearer token instead of user/pass (eg a Macaroon)",
+Help: "Bearer token instead of user/pass (e.g. a Macaroon)",
}, {
Name: "bearer_token_command",
Help: "Command to run to get a bearer token",


@@ -14,7 +14,7 @@ don't fail very often.
Syntax: $0 [flags]
-Note that flags for 'go test' need to be expanded, eg '-test.v' instead
+Note that flags for 'go test' need to be expanded, e.g. '-test.v' instead
of just '-v'. '-race' does not need to be expanded.
Flags this script understands


@@ -3,7 +3,7 @@
version="$1"
if [ "$version" = "" ]; then
-echo "Syntax: $0 <version, eg v1.42> [delete]"
+echo "Syntax: $0 <version, e.g. v1.42> [delete]"
exit 1
fi
dry_run="--dry-run"


@@ -61,14 +61,14 @@ Where the fields are:
* Used: total size used
* Free: total amount this user could upload.
* Trashed: total amount in the trash
-* Other: total amount in other storage (eg Gmail, Google Photos)
+* Other: total amount in other storage (e.g. Gmail, Google Photos)
* Objects: total number of objects in the storage
Note that not all the backends provide all the fields - they will be
missing if they are not known for that backend. Where it is known
that the value is unlimited the value will also be omitted.
-Use the --full flag to see the numbers written out in full, eg
+Use the --full flag to see the numbers written out in full, e.g.
Total: 18253611008
Used: 7993453766
@@ -76,7 +76,7 @@ Use the --full flag to see the numbers written out in full, eg
Trashed: 104857602
Other: 8849156022
-Use the --json flag for a computer readable output, eg
+Use the --json flag for a computer readable output, e.g.
{
"total": 18253611008,


@@ -47,7 +47,7 @@ for more info).
rclone backend features remote:
-Pass options to the backend command with -o. This should be key=value or key, eg:
+Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long


@@ -495,7 +495,7 @@ func AddBackendFlags() {
done := map[string]struct{}{}
for i := range fsInfo.Options {
opt := &fsInfo.Options[i]
-// Skip if done already (eg with Provider options)
+// Skip if done already (e.g. with Provider options)
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}


@@ -30,7 +30,7 @@ names and offers to delete all but one or rename them to be
different.
This is only useful with backends like Google Drive which can have
-duplicate file names. It can be run on wrapping backends (eg crypt) if
+duplicate file names. It can be run on wrapping backends (e.g. crypt) if
they wrap a backend which supports duplicate file names.
In the first pass it will merge directories with the same name. It
@@ -43,7 +43,7 @@ This means that for most duplicated files the ` + "`dedupe`" + `
command will not be interactive.
` + "`dedupe`" + ` considers files to be identical if they have the
-same file path and the same hash. If the backend does not support hashes (eg crypt wrapping
+same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping
Google Drive) then they will never be found to be identical. If you
use the ` + "`--size-only`" + ` flag then files will be considered
identical if they have the same size (any hash will be ignored). This

View File

@@ -19,7 +19,7 @@ var bashCommandDefinition = &cobra.Command{
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
-probably need to be run with sudo or as root, eg
+probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash


@@ -19,7 +19,7 @@ var fishCommandDefinition = &cobra.Command{
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will
-probably need to be run with sudo or as root, eg
+probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish


@@ -19,7 +19,7 @@ var zshCommandDefinition = &cobra.Command{
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
-probably need to be run with sudo or as root, eg
+probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh


@@ -31,7 +31,7 @@ Produces a hash file for all the objects in the path using the hash
named. The output is in the same format as the standard
md5sum/sha1sum tool.
-Run without a hash to see the list of supported hashes, eg
+Run without a hash to see the list of supported hashes, e.g.
$ rclone hashsum
Supported hashes are:


@@ -297,7 +297,7 @@ func showBackend(name string) {
var standardOptions, advancedOptions fs.Options
done := map[string]struct{}{}
for _, opt := range backend.Options {
-// Skip if done already (eg with Provider options)
+// Skip if done already (e.g. with Provider options)
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}


@@ -21,6 +21,6 @@ Note that ` + "`ls` and `lsl`" + ` recurse by default - use "--max-depth 1" to s
The other list commands ` + "`lsd`,`lsf`,`lsjson`" + ` do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for
-remotes which can't have empty directories (eg s3, swift, gcs, etc -
+remotes which can't have empty directories (e.g. s3, swift, gcs, etc -
the bucket based remotes).
`


@@ -72,7 +72,7 @@ output:
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
-T - tier of storage if known, eg "Hot" or "Cool"
+T - tier of storage if known, e.g. "Hot" or "Cool"
So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last.


@@ -65,11 +65,11 @@ may be repeated). If --hash-type is set then it implies --hash.
If --no-modtime is specified then ModTime will be blank. This can
speed things up on remotes where reading the ModTime takes an extra
-request (eg s3, swift).
+request (e.g. s3, swift).
If --no-mimetype is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
-request (eg s3, swift).
+request (e.g. s3, swift).
If --encrypted is not specified the Encrypted won't be emitted.
@@ -91,7 +91,7 @@ If the directory is a bucket in a bucket based backend, then
The time is in RFC3339 format with up to nanosecond precision. The
number of decimal digits in the seconds will depend on the precision
that the remote can hold the times, so if times are accurate to the
-nearest millisecond (eg Google Drive) then 3 digits will always be
+nearest millisecond (e.g. Google Drive) then 3 digits will always be
shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
will be shown ("2017-05-31T16:15:57+01:00").


@@ -21,7 +21,7 @@ var (
maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
simultaneous = flag.Int("transfers", 16, "Number of simultaneous files to open")
seeksPerFile = flag.Int("seeks", 8, "Seeks per file")
-mask = flag.Int64("mask", 0, "mask for seek, eg 0x7fff")
+mask = flag.Int64("mask", 0, "mask for seek, e.g. 0x7fff")
)
func init() {


@@ -235,7 +235,7 @@ func (ds *dirStream) Next() (de fuse.DirEntry, errno syscall.Errno) {
// defer log.Trace(nil, "")("de=%+v, errno=%v", &de, &errno)
fi := ds.nodes[ds.i]
de = fuse.DirEntry{
-// Mode is the file's mode. Only the high bits (eg. S_IFDIR)
+// Mode is the file's mode. Only the high bits (e.g. S_IFDIR)
// are considered.
Mode: getMode(fi),


@@ -260,7 +260,7 @@ applications won't work with their files on an rclone mount without
"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File
Caching](#file-caching) section for more info.
-The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
+The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.


@@ -92,7 +92,7 @@ Will place this in the "arg" value
Use --loopback to connect to the rclone instance running "rclone rc".
This is very useful for testing commands without having to run an
-rclone rc server, eg:
+rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/


@@ -23,7 +23,7 @@ import (
)
// Add a minimal number of mime types to augment go's built in types
-// for environments which don't have access to a mime.types file (eg
+// for environments which don't have access to a mime.types file (e.g.
// Termux on android)
func init() {
for _, t := range []struct {


@@ -11,7 +11,7 @@ var Help = `
### Server options
Use ` + "`--addr`" + ` to specify which IP address and port the server should
-listen on, eg ` + "`--addr 1.2.3.4:8000` or `--addr :8080`" + ` to listen to all
+listen on, e.g. ` + "`--addr 1.2.3.4:8000` or `--addr :8080`" + ` to listen to all
IPs.
Use ` + "`--name`" + ` to choose the friendly server name, which is by


@@ -79,7 +79,7 @@ or you can make a remote of type ftp to read and write it.
### Server options
Use --addr to specify which IP address and port the server should
-listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.


@@ -32,7 +32,7 @@ var Command = &cobra.Command{
over HTTP. This can be viewed in a web browser or you can make a
remote of type http read from it.
-You can use the filter flags (eg --include, --exclude) to control what
+You can use the filter flags (e.g. --include, --exclude) to control what
is served.
The server will log errors. Use -v to see access logs.


@@ -30,7 +30,7 @@ var Help = `
### Server options
Use --addr to specify which IP address and port the server should
-listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.


@@ -38,7 +38,7 @@ var Command = &cobra.Command{
Use: "serve <protocol> [opts] <remote>",
Short: `Serve a remote over a protocol.`,
Long: `rclone serve is used to serve a remote over a given protocol. This
-command requires the use of a subcommand to specify the protocol, eg
+command requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
@@ -46,7 +46,7 @@ Each subcommand has its own options which you can see in their help.
`,
RunE: func(command *cobra.Command, args []string) error {
if len(args) == 0 {
-return errors.New("serve requires a protocol, eg 'rclone serve http remote:'")
+return errors.New("serve requires a protocol, e.g. 'rclone serve http remote:'")
}
return errors.New("unknown protocol")
},


@@ -61,7 +61,7 @@ var Command = &cobra.Command{
over SFTP. This can be used with an SFTP client or you can make a
remote of type sftp to use with it.
-You can use the filter flags (eg --include, --exclude) to control what
+You can use the filter flags (e.g. --include, --exclude) to control what
is served.
The server will log errors. Use -v to see access logs.


@@ -46,9 +46,9 @@ unless the --no-create flag is provided.
If --timestamp is used then it will set the modification time to that
time instead of the current time. Times may be specified as one of:
-- 'YYMMDD' - eg. 17.10.30
-- 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05
-- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789
+- 'YYMMDD' - e.g. 17.10.30
+- 'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05
+- 'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789
Note that --timestamp is in UTC if you want local time then add the
--localtime flag.


@@ -85,7 +85,7 @@ For example
1 directories, 5 files
-You can use any of the filtering options with the tree command (eg
+You can use any of the filtering options with the tree command (e.g.
--include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which


@@ -37,7 +37,7 @@ so it is easy to tweak stuff.
│   │   ├── footer.copyright.html - copyright footer
│   │   ├── footer.html - footer including scripts
│   │   ├── header.html - the whole html header
-│   │   ├── header.includes.html - header includes eg css files
+│   │   ├── header.includes.html - header includes e.g. css files
│   │   ├── menu.html - left hand side menu
│   │   ├── meta.html - meta tags for the header
│   │   └── navbar.html - top navigation bar


@@ -86,7 +86,7 @@ Rclone helps you:
- MD5, SHA1 hashes are checked at all times for file integrity
- Timestamps are preserved on files
- Operations can be restarted at any time
-- Can be to and from network, eg two different cloud providers
+- Can be to and from network, e.g. two different cloud providers
- Can use multi-threaded downloads to local disk
- [Copy](/commands/rclone_copy/) new or changed files to cloud storage
- [Sync](/commands/rclone_sync/) (one way) to make a directory identical


@@ -9,7 +9,7 @@ description: "Remote Aliases"
The `alias` remote provides a new name for another remote.
Paths may be as deep as required or a local path,
-eg `remote:directory/subdirectory` or `/directory/subdirectory`.
+e.g. `remote:directory/subdirectory` or `/directory/subdirectory`.
During the initial setup with `rclone config` you will specify the target
remote. The target remote can either be a local path or another remote.


@@ -7,7 +7,7 @@ description: "Rclone docs for Microsoft Azure Blob Storage"
-----------------------------------------
Paths are specified as `remote:container` (or `remote:` for the `lsd`
-command.) You may put subdirectories in too, eg
+command.) You may put subdirectories in too, e.g.
`remote:container/path/to/dir`.
Here is an example of making a Microsoft Azure Blob Storage
@ -104,7 +104,7 @@ as they can't be used in JSON strings.
MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, eg the local disk.
hashes, e.g. the local disk.
### Authenticating with Azure Blob Storage
@ -127,7 +127,7 @@ container level SAS URL right click on a container in the Azure Blob
explorer in the Azure portal.
If you use a container level SAS URL, rclone operations are permitted
only on a particular container, eg
only on a particular container, e.g.
rclone ls azureblob:container


@ -9,7 +9,7 @@ description: "Backblaze B2"
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run
@ -181,7 +181,7 @@ If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.
e.g. `rclone cleanup remote:bucket/path/to/stuff`.
Note that `cleanup` will remove partially uploaded files from the bucket
if they are more than a day old.


@ -8,7 +8,7 @@ description: "Rclone docs for Box"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for Box involves getting a token from Box which you
can do either in your browser, or with a config.json downloaded from Box


@ -51,7 +51,7 @@ XX / Cache a remote
[snip]
Storage> cache
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> local:/test
Optional: The URL of the Plex server
@ -313,7 +313,7 @@ Here are the standard options specific to cache (Cache a remote).
#### --cache-remote
Remote to cache.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote


@ -268,7 +268,7 @@ description: "Rclone Changelog"
* Bug Fixes
* docs
* Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood)
* Disable smart typography (e.g. en-dash) in MANUAL.* and man page (Nick Craig-Wood)
* Update install.md to reflect minimum Go version (Evan Harris)
* Update install from source instructions (Nick Craig-Wood)
* make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud)
@ -373,7 +373,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Add `--check-first` to do all checking before starting transfers (Nick Craig-Wood)
* Add `--track-renames-strategy` for configurable matching criteria for `--track-renames` (Bernd Schoolmann)
* Add `--cutoff-mode` hard,soft,cautious (Shing Kit Chan & Franklyn Tackitt)
* Filter flags (eg `--files-from -`) can read from stdin (fishbullet)
* Filter flags (e.g. `--files-from -`) can read from stdin (fishbullet)
* Add `--error-on-no-transfer` option (Jon Fautley)
* Implement `--order-by xxx,mixed` for copying some small and some big files (Nick Craig-Wood)
* Allow `--max-backlog` to be negative meaning as large as possible (Nick Craig-Wood)
@ -817,7 +817,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Check config names more carefully and report errors (Nick Craig-Wood)
* Remove error: can't use `--size-only` and `--ignore-size` together. (Nick Craig-Wood)
* filter: Prevent mixing options when `--files-from` is in use (Michele Caci)
* serve sftp: Fix crash on unsupported operations (eg Readlink) (Nick Craig-Wood)
* serve sftp: Fix crash on unsupported operations (e.g. Readlink) (Nick Craig-Wood)
* Mount
* Allow files of unknown size to be read properly (Nick Craig-Wood)
* Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood)
@ -833,7 +833,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Azure Blob
* Disable logging to the Windows event log (Nick Craig-Wood)
* B2
* Remove `unverified:` prefix on sha1 to improve interop (eg with CyberDuck) (Nick Craig-Wood)
* Remove `unverified:` prefix on sha1 to improve interop (e.g. with CyberDuck) (Nick Craig-Wood)
* Box
* Add options to get access token via JWT auth (David)
* Drive
@ -1048,7 +1048,7 @@ all the docs and Edward Barker for helping re-write the front page.
* controlled with `--multi-thread-cutoff` and `--multi-thread-streams`
* Use rclone.conf from rclone executable directory to enable portable use (albertony)
* Allow sync of a file and a directory with the same name (forgems)
* this is common on bucket based remotes, eg s3, gcs
* this is common on bucket based remotes, e.g. s3, gcs
* Add `--ignore-case-sync` for forced case insensitivity (garry415)
* Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec)
* Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood)
@ -1319,7 +1319,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Add support for PEM encrypted private keys (Fabian Möller)
* Add option to force the usage of an ssh-agent (Fabian Möller)
* Perform environment variable expansion on key-file (Fabian Möller)
* Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
* Fix rmdir on Windows based servers (e.g. CrushFTP) (Nick Craig-Wood)
* Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)
* Fix error on dangling symlinks (Nick Craig-Wood)
* Swift
@ -1350,7 +1350,7 @@ all the docs and Edward Barker for helping re-write the front page.
* sensitive operations require authorization or the `--rc-no-auth` flag
* config/* operations to configure rclone
* options/* for reading/setting command line flags
* operations/* for all low level operations, eg copy file, list directory
* operations/* for all low level operations, e.g. copy file, list directory
* sync/* for sync, copy and move
* `--rc-files` flag to serve files on the rc http server
* this is for building web native GUIs for rclone
@ -1745,7 +1745,7 @@ Point release to fix hubic and azureblob backends.
* rc: fix setting bwlimit to unlimited
* rc: take note of the --rc-addr flag too as per the docs
* Mount
* Use About to return the correct disk total/used/free (eg in `df`)
* Use About to return the correct disk total/used/free (e.g. in `df`)
* Set `--attr-timeout default` to `1s` - fixes:
* rclone using too much memory
* rclone not serving files to samba
@ -1984,7 +1984,7 @@ Point release to fix hubic and azureblob backends.
* Retry lots more different types of errors to make multipart transfers more reliable
* Save the config before asking for a token, fixes disappearing oauth config
* Warn the user if --include and --exclude are used together (Ernest Borowski)
* Fix duplicate files (eg on Google drive) causing spurious copies
* Fix duplicate files (e.g. on Google drive) causing spurious copies
* Allow trailing and leading whitespace for passwords (Jason Rose)
* ncdu: fix crashes on empty directories
* rcat: fix goroutine leak
@ -2412,7 +2412,7 @@ Point release to fix hubic and azureblob backends.
* New B2 API endpoint (thanks Per Cederberg)
* Set maximum backoff to 5 Minutes
* onedrive
* Fix URL escaping in file names - eg uploading files with `+` in them.
* Fix URL escaping in file names - e.g. uploading files with `+` in them.
* amazon cloud drive
* Fix token expiry during large uploads
* Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
@ -2453,7 +2453,7 @@ Point release to fix hubic and azureblob backends.
* Skip setting the modified time for objects > 5GB as it isn't possible.
* Backblaze B2
* Add --b2-versions flag so old versions can be listed and retrieved.
* Treat 403 errors (eg cap exceeded) as fatal.
* Treat 403 errors (e.g. cap exceeded) as fatal.
* Implement cleanup command for deleting old file versions.
* Make error handling compliant with B2 integrations notes.
* Fix handling of token expiry.
@ -2625,7 +2625,7 @@ Point release to fix hubic and azureblob backends.
* This could have deleted files unexpectedly on sync
* Always check first with `--dry-run`!
* Swift
* Stop SetModTime losing metadata (eg X-Object-Manifest)
* Stop SetModTime losing metadata (e.g. X-Object-Manifest)
* This could have caused data loss for files > 5GB in size
* Use ContentType from Object to avoid lookups in listings
* OneDrive
@ -2788,7 +2788,7 @@ Point release to fix hubic and azureblob backends.
## v1.09 - 2015-02-07
* windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
* windows: Stop drive letters (e.g. C:) getting mixed up with remotes (e.g. drive:)
* local: Fix directory separators on Windows
* drive: fix rate limit exceeded errors


@ -17,7 +17,7 @@ a remote.
First check your chosen remote is working - we'll call it `remote:path` here.
Note that anything inside `remote:path` will be chunked and anything outside
won't. This means that if you are using a bucket based remote (eg S3, B2, swift)
won't. This means that if you are using a bucket based remote (e.g. S3, B2, swift)
then you should probably put the bucket in the remote `s3:bucket`.
Now configure `chunker` using `rclone config`. We will call this one `overlay`
@ -38,7 +38,7 @@ XX / Transparently chunk/split large files
[snip]
Storage> chunker
Remote to chunk/unchunk.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Enter a string value. Press Enter for the default ("").
remote> remote:path
@ -118,7 +118,7 @@ the potential chunk files are accounted for, grouped and assembled into
composite directory entries. Any temporary chunks are hidden.
List and other commands can sometimes come across composite files with
missing or invalid chunks, eg. shadowed by like-named directory or
missing or invalid chunks, e.g. shadowed by like-named directory or
another file. This usually means that wrapped file system has been directly
tampered with or damaged. If chunker detects a missing chunk it will
by default print warning, skip the whole incomplete group of chunks but
@ -140,7 +140,7 @@ characters defines the minimum length of a string representing a chunk number.
If decimal chunk number has less digits than the number of hashes, it is
left-padded by zeros. If the decimal string is longer, it is left intact.
By default numbering starts from 1 but there is another option that allows
user to start from 0, eg. for compatibility with legacy software.
user to start from 0, e.g. for compatibility with legacy software.
For example, if name format is `big_*-##.part` and original file name is
`data.txt` and numbering starts from 0, then the first chunk will be named
@ -211,7 +211,7 @@ guarantee given hash for all files. If wrapped remote doesn't support it,
chunker will then add metadata to all files, even small. However, this can
double the amount of small files in storage and incur additional service charges.
You can even use chunker to force md5/sha1 support in any other remote
at expense of sidecar meta objects by setting eg. `chunk_type=sha1all`
at expense of sidecar meta objects by setting e.g. `chunk_type=sha1all`
to force hashsums and `chunk_size=1P` to effectively disable chunking.
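As a hypothetical `rclone.conf` fragment along those lines (option names as given above; `remote:path` and the remote name `forcehash` are placeholders):

```ini
[forcehash]
type = chunker
remote = remote:path
chunk_type = sha1all
chunk_size = 1P
```

With `chunk_size = 1P` no real file is ever split, so the wrapped remote gains SHA-1 support at the cost of one small sidecar metadata object per file.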
Normally, when a file is copied to chunker controlled remote, chunker
@ -282,7 +282,7 @@ suffix during operations. Many file systems limit base file name without path
by 255 characters. Using rclone's crypt remote as a base file system limits
file name by 143 characters. Thus, maximum name length is 231 for most files
and 119 for chunker-over-crypt. A user in need can change name format to
eg. `*.rcc##` and save 10 characters (provided at most 99 chunks per file).
e.g. `*.rcc##` and save 10 characters (provided at most 99 chunks per file).
Note that a move implemented using the copy-and-delete method may incur
double charging with some cloud storage providers.
@ -308,7 +308,7 @@ Here are the standard options specific to chunker (Transparently chunk/split lar
#### --chunker-remote
Remote to chunk/unchunk.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote


@ -18,7 +18,7 @@ removable drives.
Before configuring the crypt remote, check the underlying remote is
working. In this example the underlying remote is called `remote:path`.
Anything inside `remote:path` will be encrypted and anything outside
will not. In the case of an S3 based underlying remote (eg Amazon S3,
will not. In the case of an S3 based underlying remote (e.g. Amazon S3,
B2, Swift) it is generally advisable to define a crypt remote in the
underlying remote `s3:bucket`. If `s3:` alone is specified alongside
file name encryption, rclone will encrypt the bucket name.
@ -42,7 +42,7 @@ XX / Encrypt/Decrypt a remote
[snip]
Storage> crypt
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
remote> remote:path
How to encrypt the filenames.
@ -281,7 +281,7 @@ Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-remote
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, eg "myremote:path/to/dir",
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote
@ -350,7 +350,7 @@ Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
#### --crypt-server-side-across-configs
Allow server-side operations (eg copy) to work across different crypt configs.
Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.
@ -545,7 +545,7 @@ encoding is modified in two ways:
* we strip the padding character `=`
`base32` is used rather than the more efficient `base64` so rclone can be
used on case insensitive remotes (eg Windows, Amazon Drive).
used on case insensitive remotes (e.g. Windows, Amazon Drive).
### Key derivation ###


@ -68,7 +68,7 @@ Its syntax is like this
Syntax: [options] subcommand <parameters> <parameters...>
Source and destination paths are specified by the name you gave the
storage system in the config file then the sub path, eg
storage system in the config file then the sub path, e.g.
"drive:myfolder" to look at "myfolder" in Google drive.
You can define as many storage paths as you like in the config file.
@ -219,12 +219,12 @@ Here are some gotchas which may help users unfamiliar with the shell rules
### Linux / OSX ###
If your names have spaces or shell metacharacters (eg `*`, `?`, `$`,
If your names have spaces or shell metacharacters (e.g. `*`, `?`, `$`,
`'`, `"` etc) then you must quote them. Use single quotes `'` by default.
rclone copy 'Important files?' remote:backup
If you want to send a `'` you will need to use `"`, eg
If you want to send a `'` you will need to use `"`, e.g.
rclone copy "O'Reilly Reviews" remote:backup
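One way to check what the shell will actually hand to rclone is to preview the argument with `echo` first (illustrative only — `echo` stands in for the rclone command):

```shell
echo 'Important files?'    # single quotes: the '?' reaches the command literally, no globbing
echo "O'Reilly Reviews"    # double quotes: the apostrophe passes through unchanged
```

If either line prints something other than the name you typed, rclone would have received the mangled version too.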
@ -234,12 +234,12 @@ shell.
### Windows ###
If your names have spaces in you need to put them in `"`, eg
If your names have spaces in you need to put them in `"`, e.g.
rclone copy "E:\folder name\folder name\folder name" remote:backup
If you are using the root directory on its own then don't quote it
(see [#464](https://github.com/rclone/rclone/issues/464) for why), eg
(see [#464](https://github.com/rclone/rclone/issues/464) for why), e.g.
rclone copy E:\ remote:backup
@ -289,7 +289,7 @@ quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the
same.
This can be used when scripting to make aged backups efficiently, eg
This can be used when scripting to make aged backups efficiently, e.g.
rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup
@ -315,7 +315,7 @@ time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However, a suffix of `b`
for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for
TBytes and `P` for PBytes may be used. These are the binary units, eg
TBytes and `P` for PBytes may be used. These are the binary units, e.g.
1, 2\*\*10, 2\*\*20, 2\*\*30 respectively.
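The multipliers above can be checked with plain shell arithmetic (a sketch; the suffix letters are rclone's, the variable names are illustrative):

```shell
# Power-of-two multipliers behind rclone's SIZE suffixes
k=$((2**10))   # k -> 1024 bytes
m=$((2**20))   # M -> 1048576 bytes
g=$((2**30))   # G -> 1073741824 bytes
echo "$k $m $g"
```

So a size like `10M` means 10 × 1048576 bytes, not 10,000,000.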
### --backup-dir=DIR ###
@ -467,7 +467,7 @@ objects to transfer is held in memory before the transfers start.
### --checkers=N ###
The number of checkers to run in parallel. Checkers do the equality
checking of files during a sync. For some storage systems (eg S3,
checking of files during a sync. For some storage systems (e.g. S3,
Swift, Dropbox) this can take a significant amount of time so they are
run in parallel.
@ -483,7 +483,7 @@ This is useful when the remote doesn't support setting modified time
and a more accurate sync is desired than just checking the file size.
This is very useful when transferring between remotes which store the
same hash type on the object, eg Drive and Swift. For details of which
same hash type on the object, e.g. Drive and Swift. For details of which
remotes support which hash type see the table in the [overview
section](/overview/).
@ -521,7 +521,7 @@ for Rclone to use it, it will never be created automatically.
If you run `rclone config file` you will see where the default
location is for you.
Use this flag to override the config location, eg `rclone
Use this flag to override the config location, e.g. `rclone
--config=".myconfig" .config`.
### --contimeout=TIME ###
@ -568,7 +568,7 @@ See the overview [features](/overview/#features) and
which feature does what.
This flag can be useful for debugging and in exceptional circumstances
(eg Google Drive limiting the total volume of Server Side Copies to
(e.g. Google Drive limiting the total volume of Server Side Copies to
100GB/day).
### -n, --dry-run ###
@ -956,7 +956,7 @@ This means that:
- the destination is not listed minimising the API calls
- files are always transferred
- this can cause duplicates on remotes which allow it (eg Google Drive)
- this can cause duplicates on remotes which allow it (e.g. Google Drive)
- `--retries 1` is recommended otherwise you'll transfer everything again on a retry
This flag is useful to minimise the transactions if you know that none
@ -1012,7 +1012,7 @@ When using this flag, rclone won't update modification times of remote
files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(eg the Google Drive client).
(e.g. the Google Drive client).
### --order-by string ###
@ -1033,7 +1033,7 @@ This can have a modifier appended with a comma:
- `mixed` - order so that the smallest is processed first for some threads and the largest for others
If the modifier is `mixed` then it can have an optional percentage
(which defaults to `50`), eg `size,mixed,25` which means that 25% of
(which defaults to `50`), e.g. `size,mixed,25` which means that 25% of
the threads should be taking the smallest items and 75% the
largest. The threads which take the smallest first will always take
the smallest first and likewise the largest first threads. The `mixed`
@ -1127,7 +1127,7 @@ This is useful if you uploaded files with the incorrect timestamps and
you now wish to correct them.
This flag is **only** useful for destinations which don't support
hashes (eg `crypt`).
hashes (e.g. `crypt`).
This can be used any of the sync commands `sync`, `copy` or `move`.
@ -1140,7 +1140,7 @@ to see if there is an existing file on the destination. If this file
matches the source with size (and checksum if available) but has a
differing timestamp then instead of re-uploading it, rclone will
update the timestamp on the destination file. If the checksum does not
match rclone will upload the new file. If the checksum is absent (eg
match rclone will upload the new file. If the checksum is absent (e.g.
on a `crypt` backend) then rclone will update the timestamp.
Note that some remotes can't set the modification time without
@ -1287,7 +1287,7 @@ This can be useful for running rclone in a script or `rclone mount`.
### --syslog-facility string ###
If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`).
If using `--syslog` this sets the syslog facility (e.g. `KERN`, `USER`).
See `man syslog` for a list of possible facilities. The default
facility is `DAEMON`.
@ -1301,7 +1301,7 @@ For example to limit rclone to 10 HTTP transactions per second use
0.5`.
Use this when the number of transactions per second from rclone is
causing a problem with the cloud storage provider (eg getting you
causing a problem with the cloud storage provider (e.g. getting you
banned or rate limited).
This can be very useful for `rclone mount` to control the behaviour of
@ -1400,7 +1400,7 @@ there were IO errors`.
### --fast-list ###
When doing anything which involves a directory listing (eg `sync`,
When doing anything which involves a directory listing (e.g. `sync`,
`copy`, `ls` - in fact nearly every command), rclone normally lists a
directory and processes it before using more directory lists to
process any subdirectories. This can be parallelised and works very
@ -1408,7 +1408,7 @@ quickly using the least amount of memory.
However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to
be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
be the bucket based remotes (e.g. S3, B2, GCS, Swift, Hubic).
If you use the `--fast-list` flag then rclone will use this method for
listing directories. This will have the following consequences for
@ -1671,7 +1671,7 @@ Developer options
These options are useful when developing or debugging rclone. There
are also some more remote specific options which aren't documented
here which are used for testing. These start with remote name eg
here which are used for testing. These start with remote name e.g.
`--drive-test-option` - see the docs for the remote in question.
### --cpuprofile=FILE ###
@ -1781,7 +1781,7 @@ Logging
rclone has 4 levels of logging, `ERROR`, `NOTICE`, `INFO` and `DEBUG`.
By default, rclone logs to standard error. This means you can redirect
standard error and still see the normal output of rclone commands (eg
standard error and still see the normal output of rclone commands (e.g.
`rclone ls`).
By default, rclone will produce `Error` and `Notice` level messages.
@ -1802,7 +1802,7 @@ If you use the `--log-file=FILE` option, rclone will redirect `Error`,
If you use the `--syslog` flag then rclone will log to syslog and the
`--syslog-facility` control which facility it uses.
Rclone prefixes all log messages with their level in capitals, eg INFO
Rclone prefixes all log messages with their level in capitals, e.g. INFO
which makes it easy to grep the log file for different kinds of
information.
@ -1897,11 +1897,11 @@ you must create the `..._TYPE` variable as above.
The various different methods of backend configuration are read in
this order and the first one with a value is used.
- Flag values as supplied on the command line, eg `--drive-use-trash`.
- Remote specific environment vars, eg `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
- Backend specific environment vars, eg `RCLONE_DRIVE_USE_TRASH`.
- Config file, eg `use_trash = false`.
- Default values, eg `true` - these can't be changed.
- Flag values as supplied on the command line, e.g. `--drive-use-trash`.
- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
- Backend specific environment vars, e.g. `RCLONE_DRIVE_USE_TRASH`.
- Config file, e.g. `use_trash = false`.
- Default values, e.g. `true` - these can't be changed.
So if both `--drive-use-trash` is supplied on the config line and an
environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line
@ -1909,9 +1909,9 @@ flag will take preference.
For non backend configuration the order is as follows:
- Flag values as supplied on the command line, eg `--stats 5s`.
- Environment vars, eg `RCLONE_STATS=5s`.
- Default values, eg `1m` - these can't be changed.
- Flag values as supplied on the command line, e.g. `--stats 5s`.
- Environment vars, e.g. `RCLONE_STATS=5s`.
- Default values, e.g. `1m` - these can't be changed.
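The non-backend ordering can be sketched in plain shell (the variable names here are illustrative, not rclone internals):

```shell
# flag > environment variable > built-in default
default_stats="1m"
stats="${RCLONE_STATS:-$default_stats}"   # the env var, if set, beats the default
# a --stats value parsed from the command line would override both
echo "$stats"
```

Run with `RCLONE_STATS=5s` in the environment this prints `5s`; with the variable unset it falls back to the `1m` default.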
### Other environment variables ###


@ -40,7 +40,7 @@ to master. Note these are named like
{Version Tag}.beta.{Commit Number}.{Git Commit Hash}
eg
e.g.
v1.53.0-beta.4677.b657a2204
@ -54,7 +54,7 @@ Some beta releases may have a branch name also:
{Version Tag}-beta.{Commit Number}.{Git Commit Hash}.{Branch Name}
eg
e.g.
v1.53.0-beta.4677.b657a2204.semver


@ -8,7 +8,7 @@ description: "Rclone docs for Google drive"
Paths are specified as `drive:path`
Drive paths may be as deep as required, eg `drive:directory/subdirectory`.
Drive paths may be as deep as required, e.g. `drive:directory/subdirectory`.
The initial setup for drive involves getting a token from Google drive
which you need to do in your browser. `rclone config` walks you
@ -397,7 +397,7 @@ be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/s
Shortcuts are files that link to other files on Google Drive somewhat
like a symlink in unix, except they point to the underlying file data
(eg the inode in unix terms) so they don't break if the source is
(e.g. the inode in unix terms) so they don't break if the source is
renamed or moved about.
Be default rclone treats these as follows.
@ -490,7 +490,7 @@ Here are some examples for allowed and prohibited conversions.
This limitation can be disabled by specifying `--drive-allow-import-name-change`.
When using this flag, rclone can convert multiple files types resulting
in the same document type at once, eg with `--drive-import-formats docx,odt,txt`,
in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`,
all files having these extension would result in a document represented as a docx file.
This brings the additional risk of overwriting a document, if multiple files
have the same stem. Many rclone operations will not handle this name change
@ -956,7 +956,7 @@ Number of API calls to allow without sleeping.
#### --drive-server-side-across-configs
Allow server-side operations (eg copy) to work across different drive configs.
Allow server-side operations (e.g. copy) to work across different drive configs.
This can be useful if you wish to do a server-side copy between two
different Google drives. Note that this isn't enabled by default
@ -1188,7 +1188,7 @@ and upload the files if you prefer.
#### Limitations of Google Docs ####
Google docs will appear as size -1 in `rclone ls` and as size 0 in
anything which uses the VFS layer, eg `rclone mount`, `rclone serve`.
anything which uses the VFS layer, e.g. `rclone mount`, `rclone serve`.
This is because rclone can't find out the size of the Google docs
without downloading them.


@ -8,7 +8,7 @@ description: "Rclone docs for Dropbox"
Paths are specified as `remote:path`
Dropbox paths may be as deep as required, eg
Dropbox paths may be as deep as required, e.g.
`remote:directory/subdirectory`.
The initial setup for dropbox involves getting a token from Dropbox


@ -8,7 +8,7 @@ Frequently Asked Questions
### Do all cloud storage systems support all rclone commands ###
Yes they do. All the rclone commands (eg `sync`, `copy` etc) will
Yes they do. All the rclone commands (e.g. `sync`, `copy` etc) will
work on all the remote storage systems.
### Can I copy the config from one machine to another ###
@ -40,7 +40,7 @@ Eg
### Using rclone from multiple locations at the same time ###
You can use rclone from multiple places at the same time if you choose
different subdirectory for the output, eg
different subdirectory for the output, e.g.
```
Server A> rclone sync -i /tmp/whatever remote:ServerA
@ -48,7 +48,7 @@ Server B> rclone sync -i /tmp/whatever remote:ServerB
```
If you sync to the same directory then you should use rclone copy
otherwise the two instances of rclone may delete each other's files, eg
otherwise the two instances of rclone may delete each other's files, e.g.
```
Server A> rclone copy /tmp/whatever remote:Backup
@ -56,14 +56,14 @@ Server B> rclone copy /tmp/whatever remote:Backup
```
The file names you upload from Server A and Server B should be
different in this case, otherwise some file systems (eg Drive) may
different in this case, otherwise some file systems (e.g. Drive) may
make duplicates.
### Why doesn't rclone support partial transfers / binary diffs like rsync? ###
Rclone stores each file you transfer as a native object on the remote
cloud storage system. This means that you can see the files you
upload as expected using alternative access methods (eg using the
upload as expected using alternative access methods (e.g. using the
Google Drive web interface). There is a 1:1 mapping between files on
your hard disk and objects created in the cloud storage system.


@ -12,7 +12,7 @@ the API.
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for 1Fichier involves getting the API key from the website which you
need to do in your browser.


@ -118,7 +118,7 @@ directories.
Directory matches are **only** used to optimise directory access
patterns - you must still match the files that you want to match.
Directory matches won't optimise anything on bucket based remotes (eg
Directory matches won't optimise anything on bucket based remotes (e.g.
s3, swift, google compute storage, b2) which don't have a concept of
directory.
@ -162,7 +162,7 @@ This would exclude
A similar process is done on directory entries before recursing into
them. This only works on remotes which have a concept of directory
(Eg local, google drive, onedrive, amazon drive) and not on bucket
based remotes (eg s3, swift, google compute storage, b2).
based remotes (e.g. s3, swift, google compute storage, b2).
## Adding filtering rules ##
@ -233,7 +233,7 @@ backup and no others.
This adds an implicit `--exclude *` at the very end of the filter
list. This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
other filters (e.g. `--exclude`) but you must include all the files you
want in the include statement. If this doesn't provide enough
flexibility then you must use `--filter-from`.
@ -258,7 +258,7 @@ This is useful if you have a lot of rules.
This adds an implicit `--exclude *` at the very end of the filter
list. This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
other filters (e.g. `--exclude`) but you must include all the files you
want in the include statement. If this doesn't provide enough
flexibility then you must use `--filter-from`.
@ -352,7 +352,7 @@ want to back up regularly with these absolute paths:
To copy these you'd find a common subdirectory - in this case `/home`
and put the remaining files in `files-from.txt` with or without
leading `/`, eg
leading `/`, e.g.
user1/important
user1/dir/file
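The `files-from.txt` layout above can be sketched end to end. All paths here are hypothetical demo paths, and the final `rclone` invocation is wrapped in `echo` so the sketch runs even without rclone installed:

```shell
# Build a demo tree matching the layout described above (hypothetical paths).
root=$(mktemp -d)
mkdir -p "$root/home/user1/dir"
touch "$root/home/user1/important" "$root/home/user1/dir/file"

# files-from.txt lists paths relative to the common subdirectory /home.
cat > "$root/files-from.txt" <<'EOF'
user1/important
user1/dir/file
EOF

# Illustrative only -- echo keeps this runnable without rclone:
echo rclone copy --files-from "$root/files-from.txt" "$root/home" remote:backup
```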
@ -430,7 +430,7 @@ transferred.
This can also be an absolute time in one of these formats
- RFC3339 - eg "2006-01-02T15:04:05Z07:00"
- RFC3339 - e.g. "2006-01-02T15:04:05Z07:00"
- ISO8601 Date and time, local timezone - "2006-01-02T15:04:05"
- ISO8601 Date and time, local timezone - "2006-01-02 15:04:05"
- ISO8601 Date - "2006-01-02" (YYYY-MM-DD)
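As a sanity check of the absolute formats listed above, the shell `date` utility can emit timestamps in the same shapes (the format string below works on both GNU and BSD `date`; which rclone flag consumes the value is left out, as this only demonstrates the formats themselves):

```shell
# RFC3339-style UTC timestamp, same shape as "2006-01-02T15:04:05Z"
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "$ts"

# Plain ISO8601 date form (YYYY-MM-DD)
d=$(date -u +%Y-%m-%d)
echo "$d"
```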
@ -481,7 +481,7 @@ Normally a `--include "file.txt"` will not match a file called
## Quoting shell metacharacters ##
The examples above may not work verbatim in your shell as they have
shell metacharacters in them (eg `*`), and may require quoting.
shell metacharacters in them (e.g. `*`), and may require quoting.
Eg linux, OSX
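The quoting point can be demonstrated without rclone at all: an unquoted `*` is expanded by the shell before the program ever sees it, while a quoted pattern arrives intact. The `rclone` command is wrapped in `echo` here so the sketch is runnable on its own:

```shell
# Work in an empty temp dir with two matching files.
cd "$(mktemp -d)"
touch a.jpg b.jpg

# Unquoted: the shell expands *.jpg to the matching filenames.
echo rclone ls remote: --include *.jpg
# prints: rclone ls remote: --include a.jpg b.jpg

# Quoted: the literal pattern is passed through to the program.
echo rclone ls remote: --include '*.jpg'
# prints: rclone ls remote: --include *.jpg
```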

View File

@ -90,7 +90,7 @@ These flags are available for every command.
--no-traverse Don't traverse destination file system on copy.
--no-unicode-normalization Don't normalize unicode characters in filenames.
--no-update-modtime Don't update destination mod-time if files identical.
--order-by string Instructions on how to order the transfers, eg 'size,descending'
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--password-command SpaceSepList Command for supplying password for encrypted configuration.
-P, --progress Show progress during transfer.
-q, --quiet Print as little stuff as possible
@ -135,7 +135,7 @@ These flags are available for every command.
--suffix string Suffix to add to changed files.
--suffix-keep-extension Preserve the extension when using --suffix.
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
@ -239,7 +239,7 @@ and may be set in the config file.
--crypt-password string Password or pass phrase for encryption. (obscured)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured)
--crypt-remote string Remote to encrypt/decrypt.
--crypt-server-side-across-configs Allow server-side operations (eg copy) to work across different crypt configs.
--crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs.
--crypt-show-mapping For all files listed show how the names encrypt.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
@ -260,7 +260,7 @@ and may be set in the config file.
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-server-side-across-configs Allow server-side operations (eg copy) to work across different drive configs.
--drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
@ -377,7 +377,7 @@ and may be set in the config file.
--onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-server-side-across-configs Allow server-side operations (eg copy) to work across different onedrive configs.
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
--onedrive-token string OAuth Access Token as a JSON blob.
--onedrive-token-url string Token server url.
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10M)
@ -511,7 +511,7 @@ and may be set in the config file.
--union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
--union-upstreams string List of space separated upstreams.
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to

View File

@ -7,7 +7,7 @@ description: "Rclone docs for Google Cloud Storage"
-------------------------------------------------
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
The initial setup for google cloud storage involves getting a token from Google Cloud Storage
which you need to do in your browser. `rclone config` walks you

View File

@ -133,7 +133,7 @@ The input format is comma separated list of key,value pairs. Standard
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
- Config: headers

View File

@ -9,7 +9,7 @@ description: "Rclone docs for Hubic"
Paths are specified as `remote:path`
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:container/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
The initial setup for Hubic involves getting a token from Hubic which
you need to do in your browser. `rclone config` walks you through it.
@ -179,7 +179,7 @@ default for this is 5GB which is its maximum value.
Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked

View File

@ -108,7 +108,7 @@ on a minimal Alpine linux image.
The `:latest` tag will always point to the latest stable release. You
can use the `:beta` tag to get the latest build from master. You can
also use version tags, eg `:1.49.1`, `:1.49` or `:1`.
also use version tags, e.g. `:1.49.1`, `:1.49` or `:1`.
```
$ docker pull rclone/rclone:latest

View File

@ -13,7 +13,7 @@ also several whitelabel versions which should work with this backend.
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Setup

View File

@ -8,7 +8,7 @@ description: "Rclone docs for Koofr"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for Koofr involves creating an application password for
rclone. You can do that by opening the Koofr

View File

@ -6,7 +6,7 @@ description: "Rclone docs for the local filesystem"
{{< icon "fas fa-hdd" >}} Local Filesystem
-------------------------------------------
Local paths are specified as normal filesystem paths, eg `/path/to/wherever`, so
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
rclone sync -i /home/source /tmp/destination
@ -28,14 +28,14 @@ for Windows and OS X.
There is a bit more uncertainty in the Linux world, but new
distributions will have UTF-8 encoded file names. If you are using an
old Linux filesystem with non UTF-8 file names (eg latin1) then you
old Linux filesystem with non UTF-8 file names (e.g. latin1) then you
can use the `convmv` tool to convert the filesystem to UTF-8. This
tool is available in most distributions' package managers.
If an invalid (non-UTF8) filename is read, the invalid characters will
be replaced with a quoted representation of the invalid bytes. The name
`gro\xdf` will be transferred as `groDF`. `rclone` will emit a debug
message in this case (use `-v` to see), eg
message in this case (use `-v` to see), e.g.
```
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
@ -295,7 +295,7 @@ treats a bind mount to the same device as being on the same
filesystem.
**NB** This flag is only available on Unix based systems. On systems
where it isn't supported (eg Windows) it will be ignored.
where it isn't supported (e.g. Windows) it will be ignored.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}}
### Standard Options
@ -368,13 +368,13 @@ Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy
- source file is being updated" if the file changes during upload.
However on some file systems this modification time check may fail (eg
However on some file systems this modification time check may fail (e.g.
[Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this
check can be disabled with this flag.
If this flag is set, rclone will use its best efforts to transfer a
file which is being updated. If the file is only having things
appended to it (eg a log) then rclone will transfer the log file with
appended to it (e.g. a log) then rclone will transfer the log file with
the size it had the first time rclone saw it.
If the file is being modified throughout (not just appended to) then

View File

@ -12,7 +12,7 @@ Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclo
### Features highlights ###
- Paths may be as deep as required, eg `remote:directory/subdirectory`
- Paths may be as deep as required, e.g. `remote:directory/subdirectory`
- Files have a `last modified time` property, directories don't
- Deleted files are by default moved to the trash
- Files and directories can be shared via public links

View File

@ -17,7 +17,7 @@ features of Mega using the same client side encryption.
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:

View File

@ -9,7 +9,7 @@ description: "Rclone docs for Memory backend"
The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.
The memory backend behaves like a bucket based remote (eg like
The memory backend behaves like a bucket based remote (e.g. like
s3). Because it has no parameters you can just use it with the
`:memory:` remote name.
@ -46,7 +46,7 @@ y/e/d> y
```
Because the memory backend isn't persistent it is most useful for
testing or with an rclone server or rclone mount, eg
testing or with an rclone server or rclone mount, e.g.
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:

View File

@ -8,7 +8,7 @@ description: "Rclone docs for Microsoft OneDrive"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for OneDrive involves getting a token from
Microsoft which you need to do in your browser. `rclone config` walks
@ -298,7 +298,7 @@ listing, set this option.
#### --onedrive-server-side-across-configs
Allow server-side operations (eg copy) to work across different onedrive configs.
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This can be useful if you wish to do a server-side copy between two
different Onedrives. Note that this isn't enabled by default

View File

@ -8,7 +8,7 @@ description: "Rclone docs for OpenDrive"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:

View File

@ -90,7 +90,7 @@ these will be set when transferring from the cloud storage system.
### Case Insensitive ###
If a cloud storage system is case sensitive then it is possible to
have two files which differ only in case, eg `file.txt` and
have two files which differ only in case, e.g. `file.txt` and
`FILE.txt`. If a cloud storage system is case insensitive then that
isn't possible.
@ -103,7 +103,7 @@ depending on OS.
* Windows - usually case insensitive, though case is preserved
* OSX - usually case insensitive, though it is possible to format case sensitive
* Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)
* Linux - usually case sensitive, but there are case insensitive file systems (e.g. FAT formatted USB keys)
Most of the time this doesn't cause any problems as people tend to
avoid files whose name differs only by case even on case sensitive
@ -241,7 +241,7 @@ disable the encoding completely with `--backend-encoding None` or set
Encoding takes a comma separated list of encodings. You can see the
list of all available characters by passing an invalid value to this
flag, eg `--local-encoding "help"` and `rclone help flags encoding`
flag, e.g. `--local-encoding "help"` and `rclone help flags encoding`
will show you the defaults for the backends.
| Encoding | Characters |
@ -257,7 +257,7 @@ will show you the defaults for the backends.
| Dot | `.` |
| DoubleQuote | `"` |
| Hash | `#` |
| InvalidUtf8 | An invalid UTF-8 character (eg latin1) |
| InvalidUtf8 | An invalid UTF-8 character (e.g. latin1) |
| LeftCrLfHtVt | CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string |
| LeftPeriod | `.` on the left of a string |
| LeftSpace | SPACE on the left of a string |
@ -302,7 +302,7 @@ This can be specified using the `--local-encoding` flag or using an
### MIME Type ###
MIME types (also known as media types) classify types of documents
using a simple text classification, eg `text/html` or
using a simple text classification, e.g. `text/html` or
`application/pdf`.
Some cloud storage systems support reading (`R`) the MIME type of

View File

@ -8,7 +8,7 @@ description: "Rclone docs for pCloud"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for pCloud involves getting a token from pCloud which you
need to do in your browser. `rclone config` walks you through it.

View File

@ -8,7 +8,7 @@ description: "Rclone docs for premiumize.me"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
need to do in your browser. `rclone config` walks you through it.

View File

@ -8,7 +8,7 @@ description: "Rclone docs for put.io"
Paths are specified as `remote:path`
put.io paths may be as deep as required, eg
put.io paths may be as deep as required, e.g.
`remote:directory/subdirectory`.
The initial setup for put.io involves getting a token from put.io

View File

@ -7,7 +7,7 @@ description: "Rclone docs for QingStor Object Storage"
---------------------------------------
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making a QingStor configuration. First run

View File

@ -218,7 +218,7 @@ background. The `job/status` call can be used to get information about
the background job. The job can be queried for up to 1 minute after
it has finished.
It is recommended that potentially long running jobs, eg `sync/sync`,
It is recommended that potentially long running jobs, e.g. `sync/sync`,
`sync/copy`, `sync/move`, `operations/purge` are run with the `_async`
flag to avoid any potential problems with the HTTP request and
response timing out.
@ -298,7 +298,7 @@ $ rclone rc --json '{ "group": "job/1" }' core/stats
This takes the following parameters
- command - a string with the command name
- fs - a remote name string eg "drive:"
- fs - a remote name string e.g. "drive:"
- arg - a list of arguments for the backend command
- opt - a map of string to string of options
@ -371,7 +371,7 @@ Some valid examples are:
"0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to
specify files to fetch, eg
specify files to fetch, e.g.
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
@ -695,7 +695,7 @@ Returns the following values:
This shows the current version of go and the go runtime
- version - rclone version, eg "v1.53.0"
- version - rclone version, e.g. "v1.53.0"
- decomposed - version number as [major, minor, patch]
- isGit - boolean - true if this was compiled from the git version
- isBeta - boolean - true if this is a beta version
@ -759,11 +759,11 @@ Results
- finished - boolean
- duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
- endTime - time the job finished (e.g. "2018-10-26T18:50:20.528746884+01:00")
- error - error from the job or empty string for no error
- finished - boolean whether the job has finished or not
- id - as passed in above
- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00")
- startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00")
- success - boolean - true for success false otherwise
- output - output of the job as would have been returned if called synchronously
- progress - output of the progress related to the underlying job
@ -865,7 +865,7 @@ Eg
This takes the following parameters
- fs - a remote name string eg "drive:"
- fs - a remote name string e.g. "drive:"
The result is as returned from rclone about --json
@ -877,7 +877,7 @@ See the [about command](/commands/rclone_size/) command for more information on
This takes the following parameters
- fs - a remote name string eg "drive:"
- fs - a remote name string e.g. "drive:"
See the [cleanup command](/commands/rclone_cleanup/) command for more information on the above.
@ -887,10 +887,10 @@ See the [cleanup command](/commands/rclone_cleanup/) command for more informatio
This takes the following parameters
- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination
- srcFs - a remote name string e.g. "drive:" for the source
- srcRemote - a path within that remote e.g. "file.txt" for the source
- dstFs - a remote name string e.g. "drive2:" for the destination
- dstRemote - a path within that remote e.g. "file2.txt" for the destination
**Authentication is required for this call.**
@ -898,8 +898,8 @@ This takes the following parameters
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- url - string, URL to read from
- autoFilename - boolean, set to true to retrieve destination file name from url
See the [copyurl command](/commands/rclone_copyurl/) command for more information on the above.
@ -910,7 +910,7 @@ See the [copyurl command](/commands/rclone_copyurl/) command for more informatio
This takes the following parameters
- fs - a remote name string eg "drive:"
- fs - a remote name string e.g. "drive:"
See the [delete command](/commands/rclone_delete/) command for more information on the above.
@ -920,8 +920,8 @@ See the [delete command](/commands/rclone_delete/) command for more information
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
See the [deletefile command](/commands/rclone_deletefile/) command for more information on the above.
@ -931,7 +931,7 @@ See the [deletefile command](/commands/rclone_deletefile/) command for more info
This takes the following parameters
- fs - a remote name string eg "drive:"
- fs - a remote name string e.g. "drive:"
This returns info about the remote passed in;
@ -988,8 +988,8 @@ This command does not have a command line equivalent so use this instead:
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- opt - a dictionary of options to control the listing (optional)
- recurse - If set recurse directories
- noModTime - If set return modification time
@ -1010,8 +1010,8 @@ See the [lsjson command](/commands/rclone_lsjson/) for more information on the a
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
See the [mkdir command](/commands/rclone_mkdir/) command for more information on the above.
@ -1021,10 +1021,10 @@ See the [mkdir command](/commands/rclone_mkdir/) command for more information on
This takes the following parameters
- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination
- srcFs - a remote name string e.g. "drive:" for the source
- srcRemote - a path within that remote e.g. "file.txt" for the source
- dstFs - a remote name string e.g. "drive2:" for the destination
- dstRemote - a path within that remote e.g. "file2.txt" for the destination
**Authentication is required for this call.**
@ -1032,10 +1032,10 @@ This takes the following parameters
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- unlink - boolean - if set removes the link rather than adding it (optional)
- expire - string - the expiry time of the link eg "1d" (optional)
- expire - string - the expiry time of the link e.g. "1d" (optional)
Returns
@ -1049,8 +1049,8 @@ See the [link command](/commands/rclone_link/) command for more information on t
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
See the [purge command](/commands/rclone_purge/) command for more information on the above.
@ -1060,8 +1060,8 @@ See the [purge command](/commands/rclone_purge/) command for more information on
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
See the [rmdir command](/commands/rclone_rmdir/) command for more information on the above.
@ -1071,8 +1071,8 @@ See the [rmdir command](/commands/rclone_rmdir/) command for more information on
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- leaveRoot - boolean, set to true not to delete the root
See the [rmdirs command](/commands/rclone_rmdirs/) command for more information on the above.
@ -1083,7 +1083,7 @@ See the [rmdirs command](/commands/rclone_rmdirs/) command for more information
This takes the following parameters
- fs - a remote name string eg "drive:path/to/dir"
- fs - a remote name string e.g. "drive:path/to/dir"
Returns
@ -1098,8 +1098,8 @@ See the [size command](/commands/rclone_size/) command for more information on t
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded
See the [uploadfile command](/commands/rclone_uploadfile/) command for more information on the above.
@ -1165,8 +1165,8 @@ This shows all possible plugins by a mime type
This takes the following parameters
- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL)
- type: supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3)
- pluginType: filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL)
and returns
@ -1264,8 +1264,8 @@ check that parameter passing is working properly.
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
See the [copy command](/commands/rclone_copy/) command for more information on the above.
@ -1276,8 +1276,8 @@ See the [copy command](/commands/rclone_copy/) command for more information on t
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- deleteEmptySrcDirs - delete empty src directories if set
@ -1289,8 +1289,8 @@ See the [move command](/commands/rclone_move/) command for more information on t
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
See the [sync command](/commands/rclone_sync/) command for more information on the above.
@ -1309,7 +1309,7 @@ directory cache.
Otherwise pass files or dirs in as file=path or dir=path. Any
parameter key starting with file will forget that file and any
starting with dir will forget that dir, eg
starting with dir will forget that dir, e.g.
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
@ -1363,7 +1363,7 @@ If no paths are passed in then it will refresh the root directory.
rclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key
starting with dir will refresh that directory, eg
starting with dir will refresh that directory, e.g.
rclone rc vfs/refresh dir=home/junk dir2=data/misc
@ -1396,9 +1396,9 @@ formatted to be reasonably human readable.
### Error returns
If an error occurs then there will be an HTTP error status (eg 500)
If an error occurs then there will be an HTTP error status (e.g. 500)
and the body of the response will contain a JSON encoded error object,
eg
e.g.
```
{

View File

@ -9,7 +9,7 @@ Some of the configurations (those involving oauth2) require an
Internet connected web browser.
If you are trying to set rclone up on a remote or headless box with no
browser available on it (eg a NAS or a server in a datacenter) then
browser available on it (e.g. a NAS or a server in a datacenter) then
you will need to use an alternative means of configuration. There are
two ways of doing it, described below.

View File

@ -23,7 +23,7 @@ The S3 backend can be used with a number of different providers:
{{< /provider_list >}}
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Once you have made a remote (see the provider specific section above)
you can use it like this:
@ -366,7 +366,7 @@ The different authentication methods are tried in this order:
- Session Token: `AWS_SESSION_TOKEN` (optional)
- Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
- Profile files are standard files used by AWS CLI tools
- By default it will use the profile in your home directory (eg `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables:
- By default it will use the profile in your home directory (e.g. `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables:
- `AWS_SHARED_CREDENTIALS_FILE` to control which file.
- `AWS_PROFILE` to control which profile to use.
- Or, run `rclone` in an ECS task with an IAM role (AWS only).
@ -615,7 +615,7 @@ Leave blank if you are using an S3 clone and you don't have a region.
- ""
- Use this if unsure. Will use v4 signatures and an empty region.
- "other-v2-signature"
- Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
- Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
#### --s3-endpoint
@ -1206,7 +1206,7 @@ The minimum is 0 and the maximum is 5GB.
Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
size (eg from "rclone rcat" or uploaded with "rclone mount" or google
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
@ -1346,7 +1346,7 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
Some providers (e.g. AWS, Aliyun OSS or Netease COS) require this set to
false - rclone will do this automatically based on the provider
setting.
@ -1362,7 +1362,7 @@ If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
@ -1599,7 +1599,7 @@ server_side_encryption =
storage_class =
```
Then use it as normal with the name of the public bucket, eg
Then use it as normal with the name of the public bucket, e.g.
rclone lsd anons3:1000genomes
@ -1631,7 +1631,7 @@ server_side_encryption =
storage_class =
```
If you are using an older version of CEPH, eg 10.2.x Jewel, then you
If you are using an older version of CEPH, e.g. 10.2.x Jewel, then you
may need to supply the parameter `--s3-upload-cutoff 0` or put this in
the config file as `upload_cutoff 0` to work around a bug which causes
uploading of small files to fail.

View File

@@ -16,7 +16,7 @@ This is a backend for the [Seafile](https://www.seafile.com/) storage service:
There are two distinct modes you can setup your remote:
- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
Paths are specified as `remote:library`. You may put subdirectories in too, eg `remote:library/path/to/dir`.
Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
- you point your remote to a specific library during the configuration:
Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)

View File

@@ -203,7 +203,7 @@ advanced option.
Note that there seem to be various problems with using an ssh-agent on
macOS due to recent changes in the OS. The most effective work-around
seems to be to start an ssh-agent in each session, eg
seems to be to start an ssh-agent in each session, e.g.
eval `ssh-agent -s` && ssh-add -A
@@ -498,7 +498,7 @@ the disk of the root on the remote.
`about` will fail if it does not have shell
access or if `df` is not in the remote's PATH.
Note that some SFTP servers (eg Synology) the paths are different for
Note that some SFTP servers (e.g. Synology) the paths are different for
SSH and SFTP so the hashes can't be calculated properly. For them
using `disable_hashcheck` is a good idea.

View File

@@ -99,7 +99,7 @@ To copy a local directory to an ShareFile directory called backup
rclone copy /home/source remote:backup
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Modified time and hashes ###

View File

@@ -90,7 +90,7 @@ To copy a local directory to an SugarSync folder called backup
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
**NB** you can't create files in the top level folder you have to
create a folder, which rclone will create as a "Sync Folder" with

View File

@@ -16,7 +16,7 @@ Commercial implementations of that being:
* [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:container/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
Here is an example of making a swift configuration. First run
@@ -446,7 +446,7 @@ default for this is 5GB which is its maximum value.
Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
@@ -510,7 +510,7 @@ So this most likely means your username / password is wrong. You can
investigate further with the `--dump-bodies` flag.
This may also be caused by specifying the region when you shouldn't
have (eg OVH).
have (e.g. OVH).
#### Rclone gives Failed to create file system: Response didn't have storage url and auth token ####

View File

@@ -126,7 +126,7 @@ y/e/d> y
## Usage
Paths are specified as `remote:bucket` (or `remote:` for the `lsf`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Once configured you can then use `rclone` like this.

View File

@@ -9,13 +9,13 @@ description: "Remote Unification"
The `union` remote provides a unification similar to UnionFS using other remotes.
Paths may be as deep as required or a local path,
eg `remote:directory/subdirectory` or `/directory/subdirectory`.
e.g. `remote:directory/subdirectory` or `/directory/subdirectory`.
During the initial setup with `rclone config` you will specify the upstream
remotes as a space separated list. The upstream remotes can either be a local paths or other remotes.
Attribute `:ro` and `:nc` can be attach to the end of path to tag the remote as **read only** or **no create**,
eg `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.
e.g. `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.
Subfolders can be used in upstream remotes. Assume a union remote named `backup`
with the remotes `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop`

View File

@@ -8,7 +8,7 @@ description: "Rclone docs for WebDAV"
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
To configure the WebDAV remote you will need to have a URL for it, and
a username and password. If you know what kind of system you are
@@ -61,7 +61,7 @@ Enter the password:
password:
Confirm the password:
password:
Bearer token instead of user/pass (eg a Macaroon)
Bearer token instead of user/pass (e.g. a Macaroon)
bearer_token>
Remote config
--------------------
@@ -161,7 +161,7 @@ Password.
#### --webdav-bearer-token
Bearer token instead of user/pass (eg a Macaroon)
Bearer token instead of user/pass (e.g. a Macaroon)
- Config: bearer_token
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN

View File

@@ -82,7 +82,7 @@ excess files in the path.
rclone sync -i /home/local/directory remote:directory
Yandex paths may be as deep as required, eg `remote:directory/subdirectory`.
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Modified time ###

View File

@@ -162,14 +162,14 @@ func NewConfig() *ConfigInfo {
return c
}
// ConfigToEnv converts a config section and name, eg ("myremote",
// ConfigToEnv converts a config section and name, e.g. ("myremote",
// "ignore-size") into an environment name
// "RCLONE_CONFIG_MYREMOTE_IGNORE_SIZE"
func ConfigToEnv(section, name string) string {
return "RCLONE_CONFIG_" + strings.ToUpper(strings.Replace(section+"_"+name, "-", "_", -1))
}
// OptionToEnv converts an option name, eg "ignore-size" into an
// OptionToEnv converts an option name, e.g. "ignore-size" into an
// environment name "RCLONE_IGNORE_SIZE"
func OptionToEnv(name string) string {
return "RCLONE_" + strings.ToUpper(strings.Replace(name, "-", "_", -1))

View File

@@ -119,7 +119,7 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.FVarP(flagSet, &fs.Config.MultiThreadCutoff, "multi-thread-cutoff", "", "Use multi-thread downloads for files above this size.")
flags.IntVarP(flagSet, &fs.Config.MultiThreadStreams, "multi-thread-streams", "", fs.Config.MultiThreadStreams, "Max number of streams to use for multi-thread downloads.")
flags.BoolVarP(flagSet, &fs.Config.UseJSONLog, "use-json-log", "", fs.Config.UseJSONLog, "Use json log format.")
flags.StringVarP(flagSet, &fs.Config.OrderBy, "order-by", "", fs.Config.OrderBy, "Instructions on how to order the transfers, eg 'size,descending'")
flags.StringVarP(flagSet, &fs.Config.OrderBy, "order-by", "", fs.Config.OrderBy, "Instructions on how to order the transfers, e.g. 'size,descending'")
flags.StringArrayVarP(flagSet, &uploadHeaders, "header-upload", "", nil, "Set HTTP header for upload transactions")
flags.StringArrayVarP(flagSet, &downloadHeaders, "header-download", "", nil, "Set HTTP header for download transactions")
flags.StringArrayVarP(flagSet, &headers, "header", "", nil, "Set HTTP header for all transactions")

View File

@@ -490,7 +490,7 @@ type Usage struct {
Total *int64 `json:"total,omitempty"` // quota of bytes that can be used
Used *int64 `json:"used,omitempty"` // bytes in use
Trashed *int64 `json:"trashed,omitempty"` // bytes in trash
Other *int64 `json:"other,omitempty"` // other usage eg gmail in drive
Other *int64 `json:"other,omitempty"` // other usage e.g. gmail in drive
Free *int64 `json:"free,omitempty"` // bytes which can be uploaded before reaching the quota
Objects *int64 `json:"objects,omitempty"` // objects in the storage system
}
@@ -1079,7 +1079,7 @@ type Disconnecter interface {
//
// These are automatically inserted in the docs
type CommandHelp struct {
Name string // Name of the command, eg "link"
Name string // Name of the command, e.g. "link"
Short string // Single line description
Long string // Long multi-line description
Opts map[string]string // maps option name to a single line help

View File

@@ -18,7 +18,7 @@ type Options struct {
File string // Log everything to this file
Format string // Comma separated list of log format options
UseSyslog bool // Use Syslog for logging
SyslogFacility string // Facility for syslog, eg KERN,USER,...
SyslogFacility string // Facility for syslog, e.g. KERN,USER,...
}
// DefaultOpt is the default values used for Opt

View File

@@ -15,5 +15,5 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.StringVarP(flagSet, &log.Opt.File, "log-file", "", log.Opt.File, "Log everything to this file")
flags.StringVarP(flagSet, &log.Opt.Format, "log-format", "", log.Opt.Format, "Comma separated list of log format options")
flags.BoolVarP(flagSet, &log.Opt.UseSyslog, "syslog", "", log.Opt.UseSyslog, "Use Syslog for logging")
flags.StringVarP(flagSet, &log.Opt.SyslogFacility, "syslog-facility", "", log.Opt.SyslogFacility, "Facility for syslog, eg KERN,USER,...")
flags.StringVarP(flagSet, &log.Opt.SyslogFacility, "syslog-facility", "", log.Opt.SyslogFacility, "Facility for syslog, e.g. KERN,USER,...")
}

View File

@@ -78,7 +78,7 @@ type ListJSONOpt struct {
ShowHash bool `json:"showHash"`
DirsOnly bool `json:"dirsOnly"`
FilesOnly bool `json:"filesOnly"`
HashTypes []string `json:"hashTypes"` // hash types to show if ShowHash is set, eg "MD5", "SHA-1"
HashTypes []string `json:"hashTypes"` // hash types to show if ShowHash is set, e.g. "MD5", "SHA-1"
}
// ListJSON lists fsrc using the options in opt calling callback for each item

Some files were not shown because too many files have changed in this diff.