mirror of https://github.com/rclone/rclone.git
synced 2026-05-12 01:57:56 -04:00
Version v1.74.0

7025 MANUAL.html (generated): file diff suppressed because it is too large
3370 MANUAL.txt (generated): file diff suppressed because it is too large
@@ -54,7 +54,7 @@ none for no resync.)
- slowHashSyncOnly - (bool) Ignore slow checksums for listings and deltas, but
  still consider them during sync calls.
- workdir - (string) Use custom working dir - useful for testing. (default:
  ~/.cache/rclone/bisync)
  /home/ncw/.cache/rclone/bisync)

See [bisync command help](https://rclone.org/commands/rclone_bisync/)
and [full bisync description](https://rclone.org/bisync/)
@@ -845,6 +845,22 @@ Properties:
- Type: int
- Default: 512

#### --azureblob-copy-total-concurrency

Global concurrency limit for multipart copy chunks.

This limits the total number of multipart copy chunks running at once
across all files.

Set to 0 to disable this limiter.

Properties:

- Config: copy_total_concurrency
- Env Var: RCLONE_AZUREBLOB_COPY_TOTAL_CONCURRENCY
- Type: int
- Default: 0
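
A quick sketch of using this limiter (the remote name `azblob:` and the value 16 are illustrative, not taken from the manual):

```shell
# Cap multipart copy chunks at 16 in flight across all files
# ("azblob:" is a placeholder for a configured Azure Blob remote):
# rclone copy azblob:src azblob:dst --azureblob-copy-total-concurrency 16

# Equivalent environment variable from the Properties list above:
export RCLONE_AZUREBLOB_COPY_TOTAL_CONCURRENCY=16
echo "$RCLONE_AZUREBLOB_COPY_TOTAL_CONCURRENCY"   # prints 16
```
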

#### --azureblob-use-copy-blob

Whether to use the Copy Blob API when copying to the same storage account.

@@ -1063,6 +1079,25 @@ Properties:
- "only"
    - Specify 'only' to remove only the snapshots but keep the root blob.

#### --azureblob-decompress

If set this will decompress gzip encoded objects.

It is possible to upload objects to Azure Blob Storage with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.

If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.

Properties:

- Config: decompress
- Env Var: RCLONE_AZUREBLOB_DECOMPRESS
- Type: bool
- Default: false
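
The behaviour this flag enables can be modelled locally with gzip; the commented rclone invocation assumes a configured remote named `azblob:`, which is a placeholder:

```shell
# A blob uploaded with "Content-Encoding: gzip" stores compressed bytes;
# with --azureblob-decompress rclone decompresses them as they arrive.
printf 'hello logs\n' > object.txt
gzip -c object.txt > object.txt.gz     # stands in for the stored blob
gunzip -c object.txt.gz > received.txt # what rclone hands back decompressed
cmp object.txt received.txt && echo "decompressed OK"

# With rclone itself (placeholder remote "azblob:"):
# rclone copy azblob:container/logs /tmp/logs --azureblob-decompress
```
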

#### --azureblob-description

Description of the remote.

@@ -1055,11 +1055,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
    - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestSeafile` (`seafile`)
    - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
- `TestSeafileV6` (`seafile`)
    - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
- Updated: 2026-01-30-010015
- Updated: 2026-05-01-010013
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->

The following backends either have not been tested recently or have known issues
@@ -6,6 +6,135 @@ description: "Rclone Changelog"

# Changelog

## v1.74.0 - 2026-05-01

[See commits](https://github.com/rclone/rclone/compare/v1.73.0...v1.74.0)

- New backends
    - [Huawei Drive](/huaweidrive/) (王一赫)
    - [iCloud Photos](/iclouddrive/#icloud-photos) (read only) (Yakov Till)
- New S3 providers
    - [Fastly Object Storage](/s3/#fastly) (Leon Brocard)
    - [HCP](/s3/#hcp) (Chris)
    - [Impossible Cloud](/s3/#impossible-cloud) (Nick Craig-Wood)
    - [UCloud US3](/s3/#us3) (jinkeyuu)
    - [Zadara](/s3/#zadara) (Shlomi Avihou)
- New commands
    - [gui](/gui/): launch new embedded web based GUI for basic rclone operations (FTCHD, Nick Craig-Wood)
- New Features
    - build
        - Update `golang.org/x/image/webp` to v0.39.0 to fix CVE-2026-33813 (Nick Craig-Wood)
        - Bump `github.com/Azure/go-ntlmssp` to 0.1.1 to fix CVE-2026-32952 (dependabot[bot])
        - Update to go1.26 and make go1.25 the minimum required version (Nick Craig-Wood)
        - Update all dependencies (Nick Craig-Wood)
        - Modernize Go code with go fix for go1.25 (Nick Craig-Wood)
        - Fix `loong64` and `s390x` build (Suyun)
    - docs
        - Modernize rclone.org site design (Nick Craig-Wood)
        - fixes (albertony, Enduriel, Jason, Luke Cyca, mathieulongtin, Nick Craig-Wood, SyoBoN)
    - fshttp: Add `--dump curl` for dumping HTTP requests as curl commands (Nick Craig-Wood)
    - graphics: Optimise images losslessly with ImageOptim (Leon Brocard)
    - listremotes: Add `--exact` flag for filtering (Anton Bordwine)
    - rc
        - Flip auth default so all endpoints require auth unless opted out (Nick Craig-Wood)
        - Add `core/disks` to enumerate attached disks (Nick Craig-Wood)
        - Add `deletedDirs` stat to `core/stats` help output (Billy Hughes)
    - serve http
        - Add fallback embedded favicon (Leon Brocard)
        - Add gzip compression for text responses (Leon Brocard)
        - Dark mode for file browser (FTCHD)
        - Add HTTP/2 cleartext support for all http servers (TheBabu)
    - touch: Add metadata when using `--metadata-set` (Prakhar Chhalotre)
- Bug Fixes
    - accounting
        - Update String method output format for clarity in transfer rate representation (Prakhar Chhalotre)
        - Fix `rcat`/`copyurl` for `files.com` (Nick Craig-Wood)
    - bisync
        - Add missing rc params (nielash)
        - Add more structured info to rc output (nielash)
        - Auto-generate rc help docs (nielash)
        - Fix handling of unreadable lockfiles (lif)
        - Fix flaky TestBisyncConcurrent by increasing random name entropy (Nick Craig-Wood)
        - Fix integration tests after sftp log changes (Nick Craig-Wood)
    - copyurl: Fix ignored `--upload-headers` and `--download-headers` (Andriy Senyshyn)
    - librclone/ctest: Add Windows support and fix memory management (BizaNator)
    - log: Fix data race on OutputHandler.format field (Nick Craig-Wood)
    - operations
        - Multithread copy: grab memory before making go routines (Nick Craig-Wood)
    - serve dlna: Fix Samsung TV compatibility (Nick Craig-Wood)
    - serve nfs: Fix EOF flag in READ response not being set when read reaches end of file (Nick Craig-Wood)
- Mount
    - rc: fix mounts created with mountPoint "*" overwriting each other (Nick Craig-Wood)
- VFS
    - Fix slow `nfs serve` by adding `--vfs-handle-caching` (Nick Craig-Wood)
    - Add context parameter to New() for config propagation (Nick Craig-Wood)
    - Replace `context.TODO`/`Background` with stored VFS context (Nick Craig-Wood)
- Local
    - Remove fadvise calls that cause spinlock contention (Patrick Farrell)
- Azure Blob
    - Add `--azureblob-copy-total-concurrency` to limit total multipart copy concurrency (Duncan F)
    - Add server side copy real time accounting (Nick Craig-Wood)
    - Add `--azureblob-decompress` flag to download gzip-encoded files (Nick Craig-Wood)
- Azurefiles
    - Fix missing `x-ms-file-request-intent` header with OAuth (Nick Craig-Wood)
- B2
    - Add server side copy real time accounting (Nick Craig-Wood)
- Drime
    - Implement About (Cohinem)
    - Fix listings of large directories (John Volk)
- Drive
    - Add integration test for handling folder names with single quotes (Prakhar Chhalotre)
- Filelu
    - Add multipart init response type (kingston125)
    - Migrate API calls to `lib/rest` (kingston125)
- Filen
    - Make multi-threaded upload chunks individually retryable (Enduriel)
- Iclouddrive
    - Replace plaintext signin with SRP authentication (Mike GIllan)
    - Use dynamic origin for SRP auth headers (Xiangzhe)
    - Lowercase Apple ID for SRP authentication (Mike GIllan)
    - Add read only iCloud Photos support and SRP authentication (Yakov Till)
- Internxt
    - Implement multi-part uploads (José Zúniga)
- Jottacloud
    - Add encoding of percent character to default backend encoding (albertony)
- Linkbox
    - Fix downloading files by using web API (Nick Craig-Wood)
- Mega
    - Fix crash when logging in with previous auth keys fails (Andrew Gunnerson)
- Pcloud
    - Fix recursive listing from the root (Nick Craig-Wood)
- Pikpak
    - Support custom filenames for addurl backend command (wiserain)
- Protondrive
    - Align backend with newer Proton SDK stack (tdawe)
    - Update to latest go-proton-api to use new host (dlaumen)
    - Fix server-side moveto and DirMove against current API (Nick Craig-Wood)
- S3
    - Add Fastly Object Storage provider (Leon Brocard)
    - Add HCP provider and `list_versions_oldest_first` quirk (Chris)
    - Add Impossible Cloud as a new S3 provider (Nick Craig-Wood)
    - Add UCloud Object Storage provider (#9230) (jinkeyuu)
    - Add Zadara Object Storage provider (Shlomi Avihou)
    - Remove StackPath Object Storage provider (Leon Brocard)
    - Add server side copy real time accounting (Nick Craig-Wood)
    - Add OVHcloud storage classes (Bjoern Franke)
    - Add Object Lock support (Chris)
    - Add new Fastly Object Storage regions (Leon Brocard)
    - Scaleway: ONEZONE_IA is available in all zones, GLACIER only in FR-PAR (Bjoern Franke)
    - Ionos: updated regions & endpoints (hxnd)
    - IBM COS: provide ibm_iam_endpoint as a configurable param for IBM IAM-based auth (Bhagyashreek8)
    - Fix Content-MD5 for Object Lock uploads and add GCS quirk (Chris)
    - Fix regression where PutObject fails with non-seekable readers (Chris)
    - Fix `--s3-versions` flag ignored by cleanup-hidden when GetBucketVersioning fails (Chris)
    - Fix bucket creation failing on Ceph/radosgw (Jan Heylen)
- SFTP
    - Warn the user if no host key validation is configured (Nick Craig-Wood)
- WebDAV
    - Permit redirects on PROPFIND for metadata (Brian Bockelman)
    - Request only required properties in listAll to improve performance (ZRHan)
    - Optimize performance by using `Depth=0` for metadata requests (ZRHan)

## v1.73.5 - 2026-04-19

[See commits](https://github.com/rclone/rclone/compare/v1.73.4...v1.73.5)

@@ -40,6 +40,8 @@ rclone [flags]
--azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-copy-total-concurrency int Global concurrency limit for multipart copy chunks
--azureblob-decompress If set this will decompress gzip encoded objects
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
@@ -305,7 +307,7 @@ rclone [flags]
--dropbox-token-url string Token server url
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
--dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper, curl
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
@@ -329,9 +331,11 @@ rclone [flags]
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -465,12 +469,25 @@ rclone [flags]
--http-no-slash Set this if the site doesn't end directories with /
--http-proxy string HTTP proxy URL
--http-url string URL of HTTP host to connect to
--huaweidrive-auth-url string Auth server URL
--huaweidrive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--huaweidrive-client-credentials Use client credentials OAuth flow
--huaweidrive-client-id string OAuth Client Id
--huaweidrive-client-secret string OAuth Client Secret
--huaweidrive-description string Description of the remote
--huaweidrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--huaweidrive-list-chunk int Size of listing chunk 1-1000 (default 1000)
--huaweidrive-root-folder-id string ID of the root folder
--huaweidrive-token string OAuth Access Token as a JSON blob
--huaweidrive-token-url string Token server url
--huaweidrive-upload-cutoff SizeSuffix Cutoff for switching to resumable upload (default 20Mi)
--human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
--iclouddrive-apple-id string Apple ID
--iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
--iclouddrive-client-id string Client ID for iCloud API access (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
--iclouddrive-description string Description of the remote
--iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--iclouddrive-password string Password (obscured)
--iclouddrive-service string iCloud service to use (default "drive")
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
@@ -501,17 +518,20 @@ rclone [flags]
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--internxt-chunk-size SizeSuffix Chunk size for multipart uploads (default 30Mi)
--internxt-description string Description of the remote
--internxt-email string Email of your Internxt account
--internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
--internxt-pass string Password (obscured)
--internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--internxt-upload-concurrency int Concurrency for multipart uploads (default 4)
--internxt-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 100Mi)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Percent,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -529,6 +549,8 @@ rclone [flags]
--koofr-user string Your user name
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
--linkbox-description string Description of the remote
--linkbox-email string Email for login
--linkbox-password string Password for login (obscured)
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
@@ -744,7 +766,7 @@ rclone [flags]
-P, --progress Show progress during transfer
--progress-terminal-title Show progress on the terminal title (requires -P/--progress)
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-app-version string The app version string
--protondrive-description string Description of the remote
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
@@ -819,6 +841,8 @@ rclone [flags]
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-bucket-object-lock-enabled Enable Object Lock when creating new buckets
--s3-bypass-governance-retention Allow deleting or modifying objects locked with GOVERNANCE mode
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
@@ -833,11 +857,13 @@ rclone [flags]
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-ibm-api-key string IBM API Key to be used to obtain IAM token
--s3-ibm-iam-endpoint string IBM IAM Endpoint to use for authentication
--s3-ibm-resource-instance-id string IBM service instance id
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-list-versions-oldest-first Tristate Set if the backend returns object versions oldest first (default unset)
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-might-gzip Tristate Set this if the backend might gzip objects (default unset)
@@ -845,6 +871,11 @@ rclone [flags]
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-no-system-metadata Suppress setting and reading of system metadata
--s3-object-lock-legal-hold-status string Object Lock legal hold status to apply when uploading or copying objects
--s3-object-lock-mode string Object Lock mode to apply when uploading or copying objects
--s3-object-lock-retain-until-date string Object Lock retention until date to apply when uploading or copying objects
--s3-object-lock-set-after-upload Set Object Lock via separate API calls after upload
--s3-object-lock-supported Tristate Whether the provider supports S3 Object Lock (default unset)
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -1063,7 +1094,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.74.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -1130,6 +1161,7 @@ rclone [flags]
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone gitannex](/commands/rclone_gitannex/) - Speaks with git-annex over stdin/stdout.
* [rclone gui](/commands/rclone_gui/) - Open the web based GUI.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.

@@ -29,6 +29,7 @@ before using, or data loss can result. Questions can be asked in the

See [full bisync description](https://rclone.org/bisync/) for details.

```
rclone bisync remote1:path1 remote2:path2 [flags]
```
@@ -60,7 +61,7 @@ rclone bisync remote1:path1 remote2:path2 [flags]
-1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
--resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
--slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls.
--workdir string Use custom working dir - useful for testing. (default: {WORKDIR})
--workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
```

Options shared with other commands are described next.
@@ -107,6 +108,26 @@ Flags for anything which can copy a file
-u, --update Skip files that are newer on the destination
```

### Sync Options

Flags used for sync commands

```text
--backup-dir string Make backups into hierarchy based in DIR
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
--list-cutoff int To save memory, sort directory listings on disk above this threshold (default 1000000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--suffix string Suffix to add to changed files
--suffix-keep-extension Preserve the extension when using --suffix
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
```

### Important Options

Important flags useful for most commands

@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260501
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
// Output: stories/The Quick Brown Fox!-2026-05-01 0208PM
```

```console

@@ -39,6 +39,15 @@ Note that `--stdout` and `--print-filename` are incompatible with `--urls`.
This will do `--transfers` copies in parallel. Note that if `--auto-filename`
is desired for all URLs then a file with only URLs and no filename can be used.

Each FILENAME in the CSV file can start with a relative path which will be appended
to the destination path provided at the command line. For example, running the command
shown above with the following CSV file will write two files to the destination:
`remote:dir/local/path/bar.json` and `remote:dir/another/local/directory/qux.json`

```csv
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
```
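
A sketch of driving this from the shell; the rclone invocation is commented out and `remote:dir` is a placeholder destination:

```shell
# Build the CSV of URL,FILENAME pairs shown above.
cat > urls.csv <<'EOF'
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
EOF
wc -l < urls.csv   # prints 2

# Fetch every URL in parallel, appending each FILENAME to the
# destination path ("remote:dir" is a placeholder remote):
# rclone copyurl --urls urls.csv remote:dir
```
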

## Troubleshooting

If you can't get `rclone copyurl` to work then here are some things you can try:

118 docs/content/commands/rclone_gui.md (new file)
@@ -0,0 +1,118 @@
---
title: "rclone gui"
description: "Open the web based GUI."
versionIntroduced: v1.74
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/gui/ and as part of making a release run "make commanddocs"
---
# rclone gui

Open the web based GUI.

## Synopsis

This command starts an embedded web GUI for rclone and opens it in
your default browser.

This starts an RC API server and a GUI server on separate localhost
ports, generates login credentials automatically unless --no-auth
is specified, and opens the browser already authenticated.

    rclone gui

By default rclone gui serves the web GUI that was embedded into the
rclone binary at build time from https://github.com/rclone/rclone-web/
You can override this by passing a path to either an unpacked GUI
directory or a dist.zip archive (e.g. one downloaded from the
rclone-web releases page):

    rclone gui ./my-dist/
    rclone gui ./dist.zip

This is useful for iterating on the GUI locally without rebuilding
rclone, or for serving a different GUI release than the one embedded.

Use --no-open-browser to skip opening the browser automatically:

    rclone gui --no-open-browser

Use --addr to bind the GUI to a specific address:

    rclone gui --addr localhost:5580

Use --user and --pass to set specific credentials:

    rclone gui --user admin --pass secret

Use --no-auth to disable authentication entirely:

    rclone gui --no-auth

For more help see [the GUI docs](/gui/).

```
rclone gui [path] [flags]
```

## Options

```
--addr stringArray IPaddress:Port for the GUI server (default auto-chosen localhost port)
--api-addr stringArray IPaddress:Port for the RC API server (default auto-chosen localhost port)
--enable-metrics Enable OpenMetrics/Prometheus compatible endpoint at /metrics
-h, --help help for gui
--no-auth Don't require auth for the RC API
--no-open-browser Skip opening the browser automatically
--pass string Password for RC authentication
--user string User name for RC authentication
```

Options shared with other commands are described next.
See the [global flags page](/flags/) for global options not listed here.

### RC Options

Flags to control the Remote Control API

```text
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-enable-metrics Enable the Prometheus metrics path at the remote control server
--rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string A htpasswd file - if not provided no authentication is done
--rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s)
--rc-key string TLS PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-user-from-header string User name from a defined HTTP header
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
```

## See Also

<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

<!-- markdownlint-restore -->
@@ -23,6 +23,9 @@ Result can be filtered by a filter argument which applies to all attributes,
|
||||
and/or filter flags specific for each attribute. The values must be specified
|
||||
according to regular rclone filtering pattern syntax.
|
||||
|
||||
By default filtering uses non-anchored matching, so `--type box` also
|
||||
matches `dropbox`. Use `--exact` to match complete values only.
|
||||
|
||||
```
|
||||
rclone listremotes [<filter>] [flags]
|
||||
```
|
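The non-anchored matching described above can be sketched with Python's `fnmatch` module. This is an illustrative reimplementation of the documented behaviour, not rclone's actual matcher:

```python
from fnmatch import fnmatchcase

def matches(value: str, pattern: str, exact: bool = False) -> bool:
    # Non-anchored matching: the pattern may match anywhere inside the
    # value, so wrap it in wildcards unless --exact semantics are wanted.
    if not exact:
        pattern = "*" + pattern + "*"
    return fnmatchcase(value, pattern)

# "box" matches "dropbox" with the default non-anchored matching...
print(matches("dropbox", "box"))              # True
# ...but not when matching complete values only, as with --exact.
print(matches("dropbox", "box", exact=True))  # False
```

The same pattern syntax applies to the per-attribute filter flags such as `--description`.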
@@ -31,6 +34,7 @@ rclone listremotes [<filter>] [flags]

```
      --description string   Filter remotes by description
      --exact                Match filter strings exactly instead of using non-anchored glob matching
  -h, --help                 help for listremotes
      --json                 Format output as JSON
      --long                 Show type and description in addition to name

@@ -13,7 +13,8 @@ Mount the remote as file system on a mountpoint.
Rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
First set up your remote using `rclone config`. Check it works with `rclone ls`
etc.

On Linux and macOS, you can run mount in either foreground or background (aka
daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag
@@ -41,8 +42,8 @@ The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)),
and the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
and the last example will mount as network share `\\cloud\remote` and map it to
an automatically assigned drive:

```console
rclone mount remote:path/to/files *
@@ -148,8 +149,8 @@ rclone mount remote:path/to/files X: --volname \\server\share

You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the `--volname` option. This will also implicitly set
mountpoint, and instead use the UNC path specified as the volume name, as if it
were specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have the same result:

```console
@@ -280,7 +281,7 @@ does not suffer from the same limitations.

Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/),
[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or
[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing
[FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional FUSE driver utilizing
a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which
"mounts" via an NFSv4 local server.

@@ -323,29 +324,30 @@ current as of FUSE-T version 1.0.14.

As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats):

> File access and modification times cannot be set separately as it seems to be an
> issue with the NFS client which always modifies both. Can be reproduced with
> 'touch -m' and 'touch -a' commands
> File access and modification times cannot be set separately as it seems to be
> an issue with the NFS client which always modifies both. Can be reproduced
> with `touch -m` and `touch -a` commands

This means that viewing files with various tools, notably macOS Finder, will cause
rclone to update the modification time of the file. This may make rclone upload a
full new copy of the file.
This means that viewing files with various tools, notably macOS Finder, will
cause rclone to update the modification time of the file. This may make rclone
upload a full new copy of the file.

#### Read Only mounts

When mounting with `--read-only`, attempts to write to files will fail *silently*
as opposed to with a clear warning as in macFUSE.

# Mounting on Linux
## Mounting on Linux

On newer versions of Ubuntu, you may encounter the following error when running
`rclone mount`:

> NOTICE: mount helper error: fusermount3: mount failed: Permission denied
> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1

This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions,
which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to
`sudo apt install apparmor-utils` beforehand).
which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need
to `sudo apt install apparmor-utils` beforehand).

## Limitations

@@ -1044,6 +1046,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -14,7 +14,8 @@ Mount the remote as file system on a mountpoint.
Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
First set up your remote using `rclone config`. Check it works with `rclone ls`
etc.

On Linux and macOS, you can run mount in either foreground or background (aka
daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag
@@ -42,8 +43,8 @@ The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)),
and the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
and the last example will mount as network share `\\cloud\remote` and map it to
an automatically assigned drive:

```console
rclone nfsmount remote:path/to/files *
@@ -149,8 +150,8 @@ rclone nfsmount remote:path/to/files X: --volname \\server\share

You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the `--volname` option. This will also implicitly set
mountpoint, and instead use the UNC path specified as the volume name, as if it
were specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have the same result:

```console
@@ -281,7 +282,7 @@ does not suffer from the same limitations.

Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/),
[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or
[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing
[FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional FUSE driver utilizing
a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which
"mounts" via an NFSv4 local server.

@@ -324,29 +325,30 @@ current as of FUSE-T version 1.0.14.

As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats):

> File access and modification times cannot be set separately as it seems to be an
> issue with the NFS client which always modifies both. Can be reproduced with
> 'touch -m' and 'touch -a' commands
> File access and modification times cannot be set separately as it seems to be
> an issue with the NFS client which always modifies both. Can be reproduced
> with `touch -m` and `touch -a` commands

This means that viewing files with various tools, notably macOS Finder, will cause
rclone to update the modification time of the file. This may make rclone upload a
full new copy of the file.
This means that viewing files with various tools, notably macOS Finder, will
cause rclone to update the modification time of the file. This may make rclone
upload a full new copy of the file.

#### Read Only mounts

When mounting with `--read-only`, attempts to write to files will fail *silently*
as opposed to with a clear warning as in macFUSE.

# Mounting on Linux
## Mounting on Linux

On newer versions of Ubuntu, you may encounter the following error when running
`rclone mount`:

> NOTICE: mount helper error: fusermount3: mount failed: Permission denied
> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1

This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions,
which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to
`sudo apt install apparmor-utils` beforehand).
which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need
to `sudo apt install apparmor-utils` beforehand).

## Limitations

@@ -1050,6 +1052,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -53,6 +53,12 @@ identically.

`--rc-disable-zip` may be set to disable the zipping download option.

### Protocol

The server supports HTTP/1.1 and HTTP/2. HTTP/2 is used automatically
for TLS connections. For non-TLS connections, HTTP/2 cleartext (h2c)
is supported, allowing HTTP/2 without encryption.

### TLS (SSL)

By default this will serve over http. If you want you can serve over

@@ -560,6 +560,7 @@ rclone serve dlna remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -592,6 +592,7 @@ rclone serve docker [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -639,6 +639,7 @@ rclone serve ftp remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -55,6 +55,12 @@ identically.

`--disable-zip` may be set to disable the zipping download option.

### Protocol

The server supports HTTP/1.1 and HTTP/2. HTTP/2 is used automatically
for TLS connections. For non-TLS connections, HTTP/2 cleartext (h2c)
is supported, allowing HTTP/2 without encryption.

### TLS (SSL)

By default this will serve over http. If you want you can serve over
@@ -778,6 +784,7 @@ rclone serve http remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -611,6 +611,7 @@ rclone serve nfs remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -132,6 +132,12 @@ identically.

`--disable-zip` may be set to disable the zipping download option.

### Protocol

The server supports HTTP/1.1 and HTTP/2. HTTP/2 is used automatically
for TLS connections. For non-TLS connections, HTTP/2 cleartext (h2c)
is supported, allowing HTTP/2 without encryption.

### TLS (SSL)

By default this will serve over http. If you want you can serve over

@@ -234,6 +234,12 @@ identically.

`--disable-zip` may be set to disable the zipping download option.

### Protocol

The server supports HTTP/1.1 and HTTP/2. HTTP/2 is used automatically
for TLS connections. For non-TLS connections, HTTP/2 cleartext (h2c)
is supported, allowing HTTP/2 without encryption.

### TLS (SSL)

By default this will serve over http. If you want you can serve over
@@ -808,6 +814,7 @@ rclone serve s3 remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -686,6 +686,7 @@ rclone serve sftp remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -79,6 +79,19 @@ rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :we
Note that there is no authentication on http protocol - this is expected to be
done by the permissions on the socket.

## Symlinks / Junction points

The webdav protocol does not support symlinks or junction points and
by default rclone will skip them completely.

You can use `-L` to get rclone to follow symlinks or you can
use `--local-links` to make rclone show `.rclonelink`
files in place of the symlinks.

**NB** Do not use `--links`, as since v1.69 this applies to
the VFS layer too; use `--local-links`, which applies
only to the local backend.
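The `.rclonelink` translation described above can be sketched in a few lines. This is purely illustrative (the `/srv/data/docs` target is made up, and this is not rclone's code): a symlink is represented as a regular file named `<name>.rclonelink` whose content is the link target.

```python
import os
import tempfile

def export_symlink(path: str) -> tuple[str, str]:
    """Represent a symlink as a '<name>.rclonelink' entry whose
    content is the link target, mirroring what --local-links shows."""
    target = os.readlink(path)
    return os.path.basename(path) + ".rclonelink", target

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "docs")
    os.symlink("/srv/data/docs", link)      # hypothetical target
    name, target = export_symlink(link)
    print(name, "->", target)               # docs.rclonelink -> /srv/data/docs
```

Uploading such a `.rclonelink` file back through a `--local-links` local backend recreates the symlink.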

## Server options

Use `--addr` to specify which IP address and port the server should
@@ -112,6 +125,12 @@ identically.

`--disable-zip` may be set to disable the zipping download option.

### Protocol

The server supports HTTP/1.1 and HTTP/2. HTTP/2 is used automatically
for TLS connections. For non-TLS connections, HTTP/2 cleartext (h2c)
is supported, allowing HTTP/2 without encryption.

### TLS (SSL)

By default this will serve over http. If you want you can serve over
@@ -837,6 +856,7 @@ rclone serve webdav remote:path [flags]
      --vfs-case-insensitive                   If a file name not found, find a case insensitive match
      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
      --vfs-handle-caching Duration            Time to keep file handle and downloaders alive after last close (default 5s)
      --vfs-links                              Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      --vfs-metadata-extension string          Set the extension to read metadata from
      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full

@@ -31,6 +31,10 @@ time instead of the current time. Times may be specified as one of:
Note that value of `--timestamp` is in UTC. If you want local time
then add the `--localtime` flag.

Metadata can be added when creating a new file with `--metadata-set`.
For example:

    rclone touch remote:path -M --metadata-set key=value

```
rclone touch remote:path [flags]
```

@@ -219,6 +219,28 @@ Properties:

Here are the Advanced options specific to filelu (FileLu Cloud Storage).

#### --filelu-upload-cutoff

Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.

Properties:

- Config: upload_cutoff
- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 500Mi

#### --filelu-chunk-size

Chunk size to use for uploading. Used for multipart uploads.

Properties:

- Config: chunk_size
- Env Var: RCLONE_FILELU_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64Mi
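To illustrate how `upload_cutoff` and `chunk_size` interact, here is a rough sketch of the decision an uploader makes (illustrative only, using the documented defaults; this is not FileLu's or rclone's actual code):

```python
UPLOAD_CUTOFF = 500 * 1024**2   # 500Mi, the documented default
CHUNK_SIZE = 64 * 1024**2       # 64Mi, the documented default

def plan_upload(size: int) -> list[int]:
    """Return the part sizes a file of `size` bytes would be sent in:
    a single part at or below the cutoff, chunk_size parts above it."""
    if size <= UPLOAD_CUTOFF:
        return [size]
    full, rest = divmod(size, CHUNK_SIZE)
    return [CHUNK_SIZE] * full + ([rest] if rest else [])

print(len(plan_upload(10 * 1024**2)))   # 1  (below the cutoff)
print(len(plan_upload(1024**3)))        # 16 (1Gi sent as 64Mi chunks)
```

Raising `chunk_size` reduces the number of parts (and API calls) at the cost of more memory per transfer.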

#### --filelu-encoding

The encoding for the backend.

@@ -124,7 +124,7 @@ Properties:

#### --filen-api-key

API Key for your Filen account

Get this using the Filen CLI export-api-key command
You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli

@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
      --tpslimit float                       Limit HTTP transactions per second to this
      --tpslimit-burst int                   Max burst of transactions for --tpslimit (default 1)
      --use-cookies                          Enable session cookiejar
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.73.0")
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.74.0")
```
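The `--tpslimit`/`--tpslimit-burst` pair behaves like a token bucket: transactions refill at `rate` per second and up to `burst` may be spent at once. A minimal sketch of the idea (illustrative only, not rclone's implementation):

```python
import time

class TPSLimiter:
    """Minimal token bucket: `rate` transactions per second with a
    burst allowance, the same shape as --tpslimit/--tpslimit-burst."""

    def __init__(self, rate: float, burst: int = 1):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)          # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TPSLimiter(rate=10, burst=1)
print(limiter.allow())  # True: the first transaction uses the burst token
```

A real client would sleep until a token is available rather than reject the request.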

@@ -174,7 +174,7 @@ Flags for developers.

```
      --cpuprofile string                    Write cpu profile to file
      --dump DumpFlags                       List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
      --dump DumpFlags                       List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper, curl
      --dump-bodies                          Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
      --memprofile string                    Write memory profile to file
@@ -355,6 +355,8 @@ Backend-only flags (these can be set in the config file also).
      --azureblob-connection-string string             Storage Connection String
      --azureblob-copy-concurrency int                 Concurrency for multipart copy (default 512)
      --azureblob-copy-cutoff SizeSuffix               Cutoff for switching to multipart copy (default 8Mi)
      --azureblob-copy-total-concurrency int           Global concurrency limit for multipart copy chunks
      --azureblob-decompress                           If set this will decompress gzip encoded objects
      --azureblob-delete-snapshots string              Set to specify how to deal with snapshots on blob deletion
      --azureblob-description string                   Description of the remote
      --azureblob-directory-markers                    Upload an empty object with a trailing slash when a new directory is created
@@ -605,9 +607,11 @@ Backend-only flags (these can be set in the config file also).
      --filefabric-token-expiry string                 Token expiry time
      --filefabric-url string                          URL of the Enterprise File Fabric to connect to
      --filefabric-version string                      Version read from the file fabric
      --filelu-chunk-size SizeSuffix                   Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
      --filelu-description string                      Description of the remote
      --filelu-encoding Encoding                       The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
      --filelu-key string                              Your FileLu Rclone key from My Account
      --filelu-upload-cutoff SizeSuffix                Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
      --filen-api-key string                           API Key for your Filen account (obscured)
      --filen-auth-version string                      Authentication Version (internal use only)
      --filen-base-folder-uuid string                  UUID of Account Root Directory (internal use only)
@@ -728,11 +732,24 @@ Backend-only flags (these can be set in the config file also).
      --http-no-head                                   Don't use HEAD requests
      --http-no-slash                                  Set this if the site doesn't end directories with /
      --http-url string                                URL of HTTP host to connect to
      --huaweidrive-auth-url string                    Auth server URL
      --huaweidrive-chunk-size SizeSuffix              Upload chunk size (default 8Mi)
      --huaweidrive-client-credentials                 Use client credentials OAuth flow
      --huaweidrive-client-id string                   OAuth Client Id
      --huaweidrive-client-secret string               OAuth Client Secret
      --huaweidrive-description string                 Description of the remote
      --huaweidrive-encoding Encoding                  The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
      --huaweidrive-list-chunk int                     Size of listing chunk 1-1000 (default 1000)
      --huaweidrive-root-folder-id string              ID of the root folder
      --huaweidrive-token string                       OAuth Access Token as a JSON blob
      --huaweidrive-token-url string                   Token server url
      --huaweidrive-upload-cutoff SizeSuffix           Cutoff for switching to resumable upload (default 20Mi)
      --iclouddrive-apple-id string                    Apple ID
      --iclouddrive-client-id string                   Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
      --iclouddrive-client-id string                   Client ID for iCloud API access (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
      --iclouddrive-description string                 Description of the remote
      --iclouddrive-encoding Encoding                  The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --iclouddrive-password string                    Password (obscured)
      --iclouddrive-service string                     iCloud service to use (default "drive")
      --imagekit-description string                    Description of the remote
      --imagekit-encoding Encoding                     The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
      --imagekit-endpoint string                       You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -751,17 +768,20 @@ Backend-only flags (these can be set in the config file also).
      --internetarchive-item-metadata stringArray      Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
      --internetarchive-secret-access-key string       IAS3 Secret Key (password)
      --internetarchive-wait-archive Duration          Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
      --internxt-chunk-size SizeSuffix                 Chunk size for multipart uploads (default 30Mi)
      --internxt-description string                    Description of the remote
      --internxt-email string                          Email of your Internxt account
      --internxt-encoding Encoding                     The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
      --internxt-pass string                           Password (obscured)
      --internxt-skip-hash-validation                  Skip hash validation when downloading files (default true)
      --internxt-upload-concurrency int                Concurrency for multipart uploads (default 4)
      --internxt-upload-cutoff SizeSuffix              Cutoff for switching to multipart upload (default 100Mi)
      --jottacloud-auth-url string                     Auth server URL
      --jottacloud-client-credentials                  Use client credentials OAuth flow
      --jottacloud-client-id string                    OAuth Client Id
      --jottacloud-client-secret string                OAuth Client Secret
      --jottacloud-description string                  Description of the remote
      --jottacloud-encoding Encoding                   The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
      --jottacloud-encoding Encoding                   The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Percent,Del,Ctl,InvalidUtf8,Dot)
      --jottacloud-hard-delete                         Delete files permanently rather than putting them into the trash
      --jottacloud-md5-memory-limit SizeSuffix         Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
      --jottacloud-no-versions                         Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -778,6 +798,8 @@ Backend-only flags (these can be set in the config file also).
      --koofr-setmtime                                 Does the backend support setting modification time (default true)
      --koofr-user string                              Your user name
      --linkbox-description string                     Description of the remote
      --linkbox-email string                           Email for login
      --linkbox-password string                        Password for login (obscured)
      --linkbox-token string                           Token from https://www.linkbox.to/admin/account
      --local-case-insensitive                         Force the filesystem to report itself as case insensitive
      --local-case-sensitive                           Force the filesystem to report itself as case sensitive
@@ -923,7 +945,7 @@ Backend-only flags (these can be set in the config file also).
      --premiumizeme-token string                      OAuth Access Token as a JSON blob
      --premiumizeme-token-url string                  Token server url
      --protondrive-2fa string                         The 2FA code
      --protondrive-app-version string                 The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
      --protondrive-app-version string                 The app version string
      --protondrive-description string                 Description of the remote
      --protondrive-enable-caching                     Caches the files and folders metadata to reduce API calls (default true)
      --protondrive-encoding Encoding                  The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
@@ -964,6 +986,8 @@ Backend-only flags (these can be set in the config file also).
      --s3-access-key-id string                        AWS Access Key ID
      --s3-acl string                                  Canned ACL used when creating buckets and storing or copying objects
      --s3-bucket-acl string                           Canned ACL used when creating buckets
      --s3-bucket-object-lock-enabled                  Enable Object Lock when creating new buckets
      --s3-bypass-governance-retention                 Allow deleting or modifying objects locked with GOVERNANCE mode
      --s3-chunk-size SizeSuffix                       Chunk size to use for uploading (default 5Mi)
      --s3-copy-cutoff SizeSuffix                      Cutoff for switching to multipart copy (default 4.656Gi)
      --s3-decompress                                  If set this will decompress gzip encoded objects
@@ -978,11 +1002,13 @@ Backend-only flags (these can be set in the config file also).
      --s3-env-auth                                    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
      --s3-force-path-style                            If true use path style access if false use virtual hosted style (default true)
      --s3-ibm-api-key string                          IBM API Key to be used to obtain IAM token
      --s3-ibm-iam-endpoint string                     IBM IAM Endpoint to use for authentication
      --s3-ibm-resource-instance-id string             IBM service instance id
      --s3-leave-parts-on-error                        If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
      --s3-list-chunk int                              Size of listing chunk (response list for each ListObject S3 request) (default 1000)
      --s3-list-url-encode Tristate                    Whether to url encode listings: true/false/unset (default unset)
      --s3-list-version int                            Version of ListObjects to use: 1,2 or 0 for auto
      --s3-list-versions-oldest-first Tristate         Set if the backend returns object versions oldest first (default unset)
      --s3-location-constraint string                  Location constraint - must be set to match the Region
      --s3-max-upload-parts int                        Maximum number of parts in a multipart upload (default 10000)
      --s3-might-gzip Tristate                         Set this if the backend might gzip objects (default unset)
|
||||
@@ -990,6 +1016,11 @@ Backend-only flags (these can be set in the config file also).
|
||||
--s3-no-head If set, don't HEAD uploaded objects to check integrity
|
||||
--s3-no-head-object If set, do not do HEAD before GET when getting objects
|
||||
--s3-no-system-metadata Suppress setting and reading of system metadata
|
||||
--s3-object-lock-legal-hold-status string Object Lock legal hold status to apply when uploading or copying objects
|
||||
--s3-object-lock-mode string Object Lock mode to apply when uploading or copying objects
|
||||
--s3-object-lock-retain-until-date string Object Lock retention until date to apply when uploading or copying objects
|
||||
--s3-object-lock-set-after-upload Set Object Lock via separate API calls after upload
|
||||
--s3-object-lock-supported Tristate Whether the provider supports S3 Object Lock (default unset)
|
||||
--s3-profile string Profile to use in the shared credentials file
|
||||
--s3-provider string Choose your S3 provider
|
||||
--s3-region string Region to connect to
|
||||
|
||||
@@ -216,7 +216,6 @@ To view your current quota you can use the `rclone about remote:`
command which will display your usage and quota information.

<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/huaweidrive/huaweidrive.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->

### Standard options

Here are the Standard options specific to huaweidrive (Huawei Drive).
@@ -288,6 +287,21 @@ Properties:
- Type: string
- Required: false

#### --huaweidrive-client-credentials

Use client credentials OAuth flow.

This will use the OAUTH2 client Credentials Flow as described in RFC 6749.

Note that this option is NOT supported by all backends.

Properties:

- Config: client_credentials
- Env Var: RCLONE_HUAWEIDRIVE_CLIENT_CREDENTIALS
- Type: bool
- Default: false

#### --huaweidrive-root-folder-id

ID of the root folder.
@@ -295,8 +309,7 @@ ID of the root folder.
Normally this is auto-detected, but if it fails or you want to speed up startup,
you can set it manually.

Leave blank normally. Rclone will print the detected root folder ID in debug output
which you can then save to the config.
Leave blank normally.

Properties:

@@ -331,7 +344,7 @@ Properties:

#### --huaweidrive-upload-cutoff

Cutoff for switching to multipart upload.
Cutoff for switching to resumable upload.

Any files larger than this will be uploaded using resumable upload.
The minimum is 0 and the maximum is 20 MiB (Huawei Drive API limit for single request uploads).
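The cutoff behaviour above can be sketched in a few lines. This is an illustrative model only, not rclone's actual implementation; the function name and the clamping of the cutoff to the 20 MiB API limit are assumptions drawn from the description:

```python
# Illustrative sketch: decide between a single-request upload and a
# resumable (chunked) upload based on an upload cutoff, which the
# description says can never exceed the 20 MiB single-request API limit.
API_LIMIT = 20 * 1024 * 1024  # Huawei Drive single-request upload limit


def choose_upload_method(size: int, cutoff: int = API_LIMIT) -> str:
    """Return the upload method for a file of `size` bytes."""
    cutoff = min(cutoff, API_LIMIT)  # clamp cutoff to the API limit
    return "resumable" if size > cutoff else "single"


# Small files go up in one request, larger ones use resumable upload.
print(choose_upload_method(5 * 1024 * 1024))
print(choose_upload_method(100 * 1024 * 1024))
```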
@@ -354,7 +367,49 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HUAWEIDRIVE_ENCODING
- Type: Encoding
- Default: BackSlash,InvalidUtf8,RightSpace
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot

#### --huaweidrive-description

Description of the remote.

Properties:

- Config: description
- Env Var: RCLONE_HUAWEIDRIVE_DESCRIPTION
- Type: string
- Required: false

### Metadata

Huawei Drive supports reading and writing custom metadata.

User metadata can be set using the "properties" field in the API.
System metadata includes standard file information such as:
- content-type: MIME type of the object
- description: User-provided description
- favorite: Whether file is marked as favorite
- btime/mtime/utime: Various timestamps
- sha256: File hash
- recycled/has-thumbnail: File status flags

Custom metadata keys can be any string and will be stored in the file's properties.

Here are the possible system metadata items for the huaweidrive backend.

| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| btime | Time when the file was created (RFC 3339) | RFC 3339 | | **Y** |
| content-type | MIME type of the object | string | | **Y** |
| description | Description of the file | string | | N |
| favorite | Whether the file is marked as favorite (true/false) | string | | N |
| has-thumbnail | Whether the file has a thumbnail (true/false) | string | | **Y** |
| mtime | Time when the file content was last modified (RFC 3339) | RFC 3339 | | **Y** |
| recycled | Whether the file is in recycle bin (true/false) | string | | **Y** |
| sha256 | SHA256 hash of the file | string | | **Y** |
| utime | Time when the file was last edited by the current user (RFC 3339) | RFC 3339 | | **Y** |

See the [metadata](/docs/#metadata) docs for more info.

<!-- autogenerated options stop -->


@@ -232,7 +232,23 @@ If the remote still has stale auth state, clear the `cookies` and
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/iclouddrive/iclouddrive.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options

Here are the Standard options specific to iclouddrive (iCloud Drive).
Here are the Standard options specific to iclouddrive (iCloud Drive and Photos).

#### --iclouddrive-service

iCloud service to use.

Properties:

- Config: service
- Env Var: RCLONE_ICLOUDDRIVE_SERVICE
- Type: string
- Default: "drive"
- Examples:
    - "drive"
        - iCloud Drive
    - "photos"
        - iCloud Photos

#### --iclouddrive-apple-id

@@ -260,7 +276,7 @@ Properties:

#### --iclouddrive-trust-token

Trust token (internal use)
Trust token for session authentication.

Properties:

@@ -271,7 +287,7 @@ Properties:

#### --iclouddrive-cookies

cookies (internal use only)
Session cookies.

Properties:

@@ -282,11 +298,11 @@ Properties:

### Advanced options

Here are the Advanced options specific to iclouddrive (iCloud Drive).
Here are the Advanced options specific to iclouddrive (iCloud Drive and Photos).

#### --iclouddrive-client-id

Client id
Client ID for iCloud API access.

Properties:

@@ -319,4 +335,20 @@ Properties:
- Type: string
- Required: false

### Metadata

Metadata is read-only and available for the Photos service only.

Here are the possible system metadata items for the iclouddrive backend.

| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| added-time | Time the item was added to the iCloud library | RFC 3339 | 2006-01-02T15:04:05Z | **Y** |
| favorite | Whether the item is marked as favorite | bool | | **Y** |
| height | Image height in pixels | int | | **Y** |
| hidden | Whether the item is hidden | bool | | **Y** |
| width | Image width in pixels | int | | **Y** |

See the [metadata](/docs/#metadata) docs for more info.

<!-- autogenerated options stop -->

@@ -161,6 +161,50 @@ Properties:
- Type: bool
- Default: true

#### --internxt-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

Note that each chunk is buffered in memory.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_INTERNXT_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

#### --internxt-upload-cutoff

Cutoff for switching to multipart upload.

Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 100 MiB and the maximum is 5 GiB.

Properties:

- Config: upload_cutoff
- Env Var: RCLONE_INTERNXT_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 100Mi

#### --internxt-chunk-size

Chunk size for multipart uploads.

Files larger than upload_cutoff will be uploaded in chunks of this size.

Memory usage is approximately chunk_size * upload_concurrency.

Properties:

- Config: chunk_size
- Env Var: RCLONE_INTERNXT_CHUNK_SIZE
- Type: SizeSuffix
- Default: 30Mi

#### --internxt-encoding

The encoding for the backend.

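The memory note above (approximately chunk_size * upload_concurrency per transfer) can be worked through with the defaults shown; the helper function is illustrative only:

```python
# Rough estimate of buffer memory used by a multipart upload:
# each in-flight chunk is buffered in memory, so the total is
# approximately chunk_size * upload_concurrency bytes.
def upload_buffer_bytes(chunk_size: int, upload_concurrency: int) -> int:
    return chunk_size * upload_concurrency


MiB = 1024 * 1024

# Defaults shown above: 30Mi chunks, 4 concurrent chunks -> 120 MiB.
print(upload_buffer_bytes(30 * MiB, 4) // MiB, "MiB")
```

Raising either option raises the memory footprint proportionally, which is why large chunk sizes are usually paired with lower concurrency.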
@@ -555,7 +555,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Percent,Del,Ctl,InvalidUtf8,Dot

#### --jottacloud-description


@@ -93,6 +93,41 @@ Properties:
- Type: string
- Required: true

#### --linkbox-email

Email for login

Properties:

- Config: email
- Env Var: RCLONE_LINKBOX_EMAIL
- Type: string
- Required: true

#### --linkbox-password

Password for login

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

Properties:

- Config: password
- Env Var: RCLONE_LINKBOX_PASSWORD
- Type: string
- Required: true

#### --linkbox-web-token

Web API login token - set automatically.

Properties:

- Config: web_token
- Env Var: RCLONE_LINKBOX_WEB_TOKEN
- Type: string
- Required: false

### Advanced options

Here are the Advanced options specific to linkbox (Linkbox).

@@ -317,15 +317,20 @@ rclone backend addurl remote: [options] [<arguments>+]

This command adds an offline download task for the url.

Usage example:
Usage examples:

```console
rclone backend addurl pikpak:dirpath url
rclone backend addurl pikpak:dirpath url -o name=custom_filename.zip
```

Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
the download will fall back to the default 'My Pack' folder.

Options:

- "name": Custom filename for the downloaded file.

### decompress

Request decompress of a file/files in a folder.

@@ -296,16 +296,18 @@ Properties:

The app version string

The app version string indicates the client that is currently performing
the API request. This information is required and will be sent with every
API request.
The app version string identifies the client that is currently performing
the API request. Third-party Proton Drive integrations should use the form
external-drive-<project>@<version>. If this option is left empty, rclone
derives a compliant value from its own version. This value is sent with
every API request; the option itself is optional.

Properties:

- Config: app_version
- Env Var: RCLONE_PROTONDRIVE_APP_VERSION
- Type: string
- Default: "macos-drive@1.0.0-alpha.1+rclone"
- Required: false

#### --protondrive-replace-existing-draft


@@ -643,8 +643,6 @@ Note that arguments must be preceded by the "-a" flag

See the [backend](/commands/rclone_backend/) command for more information.

**Authentication is required for this call.**

### cache/expire: Purge a remote from cache {#cache-expire}

Purge a remote from the cache backend. Supports either a directory or a file.
@@ -688,6 +686,8 @@ is used on top of the cache.

Show statistics for the cache remote.

**Authentication is not required for this call.**

### config/create: create the config for a remote. {#config-create}

This takes the following parameters:
@@ -708,8 +708,6 @@ This takes the following parameters:

See the [config create](/commands/rclone_config_create/) command for more information on the above.

**Authentication is required for this call.**

### config/delete: Delete a remote in the config file. {#config-delete}

Parameters:
@@ -718,8 +716,6 @@ Parameters:

See the [config delete](/commands/rclone_config_delete/) command for more information on the above.

**Authentication is required for this call.**

### config/dump: Dumps the config file. {#config-dump}

Returns a JSON object:
@@ -729,8 +725,6 @@ Where keys are remote names and values are the config parameters.

See the [config dump](/commands/rclone_config_dump/) command for more information on the above.

**Authentication is required for this call.**

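The shape of the object returned by config/dump (remote names as keys, config parameters as values) can be illustrated with a small sketch; the remote names and parameters below are made up:

```python
import json

# Hypothetical config/dump response: keys are remote names,
# values are that remote's config parameters.
dump = json.loads("""
{
  "gdrive": {"type": "drive", "scope": "drive"},
  "s3backup": {"type": "s3", "provider": "AWS"}
}
""")

# Iterate over the configured remotes and their backend types.
for name, params in sorted(dump.items()):
    print(name, "->", params["type"])
```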
### config/get: Get a remote in the config file. {#config-get}

Parameters:
@@ -739,8 +733,6 @@ Parameters:

See the [config dump](/commands/rclone_config_dump/) command for more information on the above.

**Authentication is required for this call.**

### config/listremotes: Lists the remotes in the config file and defined in environment variables. {#config-listremotes}

Returns
@@ -748,8 +740,6 @@ Returns

See the [listremotes](/commands/rclone_listremotes/) command for more information on the above.

**Authentication is required for this call.**

### config/password: password the config for a remote. {#config-password}

This takes the following parameters:
@@ -760,8 +750,6 @@ This takes the following parameters:

See the [config password](/commands/rclone_config_password/) command for more information on the above.

**Authentication is required for this call.**

### config/paths: Reads the config file path and other important paths. {#config-paths}

Returns a JSON object with the following keys:
@@ -780,8 +768,6 @@ Eg

See the [config paths](/commands/rclone_config_paths/) command for more information on the above.

**Authentication is required for this call.**

### config/providers: Shows how providers are configured in the config file. {#config-providers}

Returns a JSON object:
@@ -794,16 +780,12 @@ Note that the Options blocks are in the same format as returned by
"options/info". They are described in the
[option blocks](#option-blocks) section.

**Authentication is required for this call.**

### config/setpath: Set the path of the config file {#config-setpath}

Parameters:

- path - path to the config file to use

**Authentication is required for this call.**

### config/unlock: Unlock the config file. {#config-unlock}

Unlocks the config file if it is locked.
@@ -814,8 +796,6 @@ Parameters:

A good idea is to disable AskPassword before making this call

**Authentication is required for this call.**

### config/update: update the config for a remote. {#config-update}

This takes the following parameters:
@@ -835,8 +815,6 @@ This takes the following parameters:

See the [config update](/commands/rclone_config_update/) command for more information on the above.

**Authentication is required for this call.**

### core/bwlimit: Set the bandwidth limit. {#core-bwlimit}

This sets the bandwidth limit to the string passed in. This should be
@@ -923,7 +901,20 @@ OR

```

**Authentication is required for this call.**
### core/disks: List the local disks {#core-disks}

This does not take any parameters

This call is for rclone GUI programs to enumerate local disks and
important directories for doing transfers to and from. The list
returned will include the root directory and the user's home directory
and any mounted disks. The returned items should be usable directly as
remotes.

Returns:

- disks
    - This is an array of strings of local disk names

### core/du: Returns disk usage of a locally attached disk. {#core-du}

@@ -947,6 +938,8 @@ Returns:
}
```

**Authentication is not required for this call.**

### core/gc: Runs a garbage collection. {#core-gc}

This tells the go runtime to do a garbage collection run. It isn't
@@ -969,6 +962,8 @@ Returns the following values:
}
```

**Authentication is not required for this call.**

### core/memstats: Returns the memory statistics {#core-memstats}

This returns the memory statistics of the running program. What the values mean
@@ -1019,6 +1014,7 @@ Returns the following values:
{
    "bytes": total transferred bytes since the start of the group,
    "checks": number of files checked,
    "deletedDirs": number of directories deleted,
    "deletes" : number of files deleted,
    "elapsedTime": time in floating point seconds since rclone was started,
    "errors": number of errors,
@@ -1057,6 +1053,8 @@ Returns the following values:
Values for "transferring", "checking" and "lastError" are only assigned if data is available.
The value for "eta" is null if an eta cannot be determined.

**Authentication is not required for this call.**

### core/stats-delete: Delete stats group. {#core-stats-delete}

This deletes entire stats group.
@@ -1108,6 +1106,8 @@ Returns the following values:
}
```

**Authentication is not required for this call.**

### core/version: Shows the current version of rclone, Go and the OS. {#core-version}

This shows the current versions of rclone, Go and the OS:
@@ -1125,6 +1125,8 @@ This shows the current versions of rclone, Go and the OS:
- linking - type of rclone executable (static or dynamic)
- goTags - space separated build tags or "none"

**Authentication is not required for this call.**

### debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling. {#debug-set-block-profile-rate}

SetBlockProfileRate controls the fraction of goroutine blocking events
@@ -1220,8 +1222,6 @@ If you change the parameters of a backend then you may want to call
this to clear an existing remote out of the cache before re-creating
it.

**Authentication is required for this call.**

### fscache/entries: Returns the number of entries in the fs cache. {#fscache-entries}

This returns the number of entries in the fs cache.
@@ -1229,8 +1229,6 @@ This returns the number of entries in the fs cache.
Returns
- entries - number of items in the cache

**Authentication is required for this call.**

### job/batch: Run a batch of rclone rc commands concurrently. {#job-batch}

This takes the following parameters:
@@ -1290,8 +1288,6 @@ Gives the result:
}
```

**Authentication is required for this call.**

### job/list: Lists the IDs of the running jobs {#job-list}

Parameters: None.
@@ -1303,6 +1299,8 @@ Results:
- runningIds - array of integer job ids that are running
- finishedIds - array of integer job ids that are finished

**Authentication is not required for this call.**

### job/status: Reads the status of the job ID {#job-status}

Parameters:
@@ -1323,6 +1321,8 @@ Results:
- output - output of the job as would have been returned if called synchronously
- progress - output of the progress related to the underlying job

**Authentication is not required for this call.**

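A minimal sketch of consuming a job/status result; the response dict below is a made-up example using the fields listed above, not output from a live rclone server:

```python
# Hypothetical job/status response for a finished asynchronous job.
status = {
    "id": 12,
    "finished": True,
    "success": True,
    "output": {"bytes": 1048576},
    "progress": "1 MiB transferred",
}

# Poll loops typically wait until "finished" is true, then read
# "output" on success or the error field on failure.
if status["finished"]:
    result = status["output"] if status["success"] else None
    print("job", status["id"], "done:", result)
```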
### job/stop: Stop the running job {#job-stop}

Parameters:
@@ -1347,8 +1347,6 @@ Eg

rclone rc mount/listmounts

**Authentication is required for this call.**

### mount/mount: Create a new mount point {#mount-mount}

rclone allows Linux, FreeBSD, macOS and Windows to mount any of
@@ -1364,12 +1362,24 @@ This takes the following parameters:
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.

On Windows mountPoint may be set to "*" to assign the next available
drive letter automatically, or a network share UNC path (e.g.
"\\server\share") to mount as a network drive. In these cases the
actual drive letter is chosen at mount time.

This returns the following values:

- mountPoint: the actual mount point that was used (this may differ
  from the input, e.g. on Windows when "*" is passed the allocated
  drive letter is returned)

Example:

```console
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
rclone rc mount/mount fs=mydrive: mountPoint=* mountType=cmount
```

The vfsOpt are as described in options/get and can be seen in the
@@ -1379,8 +1389,6 @@ The vfsOpt are as described in options/get and can be seen in the
rclone rc options/get
```

**Authentication is required for this call.**

### mount/types: Show all possible mount types {#mount-types}

This shows all possible mount types and returns them as a list.
@@ -1396,8 +1404,6 @@ Eg

rclone rc mount/types

**Authentication is required for this call.**

### mount/unmount: Unmount selected active mount {#mount-unmount}

rclone allows Linux, FreeBSD, macOS and Windows to
@@ -1412,8 +1418,6 @@ Example:

rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

**Authentication is required for this call.**

### mount/unmountall: Unmount all active mounts {#mount-unmountall}

rclone allows Linux, FreeBSD, macOS and Windows to
@@ -1426,8 +1430,6 @@ Eg

rclone rc mount/unmountall

**Authentication is required for this call.**

### operations/about: Return the space used on the remote {#operations-about}

This takes the following parameters:
@@ -1438,8 +1440,6 @@ The result is as returned from rclone about --json

See the [about](/commands/rclone_about/) command for more information on the above.

**Authentication is required for this call.**

### operations/check: check the source and destination are the same {#operations-check}

Checks the files in the source and destination match. It compares
@@ -1488,8 +1488,6 @@ Returns:
- differ - array of strings of all non-matching files
- error - array of strings of all files with errors (hashing or reading)

**Authentication is required for this call.**

### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}

This takes the following parameters:
@@ -1498,8 +1496,6 @@ This takes the following parameters:

See the [cleanup](/commands/rclone_cleanup/) command for more information on the above.

**Authentication is required for this call.**

### operations/copyfile: Copy a file from source remote to destination remote {#operations-copyfile}

This takes the following parameters:
@@ -1509,8 +1505,6 @@ This takes the following parameters:
- dstFs - a remote name string e.g. "drive2:" for the destination, "/" for local filesystem
- dstRemote - a path within that remote e.g. "file2.txt" for the destination

**Authentication is required for this call.**

### operations/copyurl: Copy the URL to the object {#operations-copyurl}

This takes the following parameters:
@@ -1522,8 +1516,6 @@ This takes the following parameters:

See the [copyurl](/commands/rclone_copyurl/) command for more information on the above.

**Authentication is required for this call.**

### operations/delete: Remove files in the path {#operations-delete}

This takes the following parameters:
@@ -1532,8 +1524,6 @@ This takes the following parameters:

See the [delete](/commands/rclone_delete/) command for more information on the above.

**Authentication is required for this call.**

### operations/deletefile: Remove the single file pointed to {#operations-deletefile}

This takes the following parameters:
@@ -1543,8 +1533,6 @@ This takes the following parameters:

See the [deletefile](/commands/rclone_deletefile/) command for more information on the above.

**Authentication is required for this call.**

### operations/fsinfo: Return information about the remote {#operations-fsinfo}

This takes the following parameters:
@@ -1701,8 +1689,6 @@ Example:

See the [hashsum](/commands/rclone_hashsum/) command for more information on the above.

**Authentication is required for this call.**

### operations/hashsumfile: Produces a hash for a single file. {#operations-hashsumfile}

Produces a hash for a single file using the hash named.
@@ -1735,8 +1721,6 @@ Example:

See the [hashsum](/commands/rclone_hashsum/) command for more information on the above.

**Authentication is required for this call.**

### operations/list: List the given remote and path in JSON format {#operations-list}
|
||||
|
||||
This takes the following parameters:
|
||||
@@ -1762,8 +1746,6 @@ Returns:
|
||||
|
||||
See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples.
|
||||
|
||||
**Authentication is required for this call.**
|
||||
|
||||
### operations/mkdir: Make a destination directory or container {#operations-mkdir}
|
||||
|
||||
This takes the following parameters:
|
||||
@@ -1773,8 +1755,6 @@ This takes the following parameters:
|
||||
|
||||
See the [mkdir](/commands/rclone_mkdir/) command for more information on the above.
|
||||
|
||||
**Authentication is required for this call.**
|
||||
|
||||
### operations/movefile: Move a file from source remote to destination remote {#operations-movefile}
|
||||
|
||||
This takes the following parameters:
|
||||
@@ -1784,8 +1764,6 @@ This takes the following parameters:
|
||||
- dstFs - a remote name string e.g. "drive2:" for the destination, "/" for local filesystem
|
||||
- dstRemote - a path within that remote e.g. "file2.txt" for the destination
|
||||
|
||||
**Authentication is required for this call.**
|
||||
|
||||
### operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations-publiclink}
|
||||
|
||||
This takes the following parameters:
|
||||
@@ -1801,8 +1779,6 @@ Returns:
|
||||
|
||||
See the [link](/commands/rclone_link/) command for more information on the above.
|
||||
|
||||
**Authentication is required for this call.**

### operations/purge: Remove a directory or container and all of its contents {#operations-purge}

This takes the following parameters:
@@ -1812,8 +1788,6 @@ This takes the following parameters:

See the [purge](/commands/rclone_purge/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdir: Remove an empty directory or container {#operations-rmdir}

This takes the following parameters:
@@ -1823,8 +1797,6 @@ This takes the following parameters:

See the [rmdir](/commands/rclone_rmdir/) command for more information on the above.

**Authentication is required for this call.**

### operations/rmdirs: Remove all the empty directories in the path {#operations-rmdirs}

This takes the following parameters:
@@ -1835,8 +1807,6 @@ This takes the following parameters:

See the [rmdirs](/commands/rclone_rmdirs/) command for more information on the above.

**Authentication is required for this call.**

### operations/settier: Changes storage tier or class on all files in the path {#operations-settier}

This takes the following parameters:
@@ -1845,8 +1815,6 @@ This takes the following parameters:

See the [settier](/commands/rclone_settier/) command for more information on the above.

**Authentication is required for this call.**

### operations/settierfile: Changes storage tier or class on the single file pointed to {#operations-settierfile}

This takes the following parameters:
@@ -1854,8 +1822,6 @@ This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"

**Authentication is required for this call.**

### operations/size: Count the number of bytes and files in remote {#operations-size}

This takes the following parameters:
@@ -1869,8 +1835,6 @@ Returns:

See the [size](/commands/rclone_size/) command for more information on the above.

**Authentication is required for this call.**
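
In current rclone versions the size call returns `count` and `bytes` keys in its result. As an illustrative sketch (the helper is not part of rclone), a few lines can turn that result into a human-readable summary:

```python
def summarize_size(result: dict) -> str:
    """Format an operations/size result ({"count": ..., "bytes": ...})."""
    count, size = result["count"], float(result["bytes"])
    # Walk up binary units until the value fits, capping at TiB.
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if size < 1024 or unit == "TiB":
            return f"{count} objects, {size:.1f} {unit}"
        size /= 1024

print(summarize_size({"count": 3, "bytes": 1536}))  # 3 objects, 1.5 KiB
```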

### operations/stat: Give information about the supplied file or directory {#operations-stat}

This takes the following parameters:
@@ -1889,8 +1853,6 @@ efficient to set the filesOnly flag in the options.

See the [lsjson](/commands/rclone_lsjson/) command for more information on the above and examples.

**Authentication is required for this call.**

### operations/uploadfile: Upload file using multipart/form-data {#operations-uploadfile}

This takes the following parameters:
@@ -1899,8 +1861,6 @@ This takes the following parameters:
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded

**Authentication is required for this call.**

### options/blocks: List all the option blocks {#options-blocks}

Returns:
@@ -1992,8 +1952,6 @@ Example:

    rclone rc pluginsctl/addPlugin

**Authentication is required for this call.**

### pluginsctl/getPluginsForType: Get plugins with type criteria {#pluginsctl-getPluginsForType}

This shows all possible plugins by a mime type.
@@ -2012,8 +1970,6 @@ Example:

    rclone rc pluginsctl/getPluginsForType type=video/mp4

**Authentication is required for this call.**

### pluginsctl/listPlugins: Get the list of currently loaded plugins {#pluginsctl-listPlugins}

This allows you to get the currently enabled plugins and their details.
@@ -2027,8 +1983,6 @@ E.g.

    rclone rc pluginsctl/listPlugins

**Authentication is required for this call.**

### pluginsctl/listTestPlugins: Show currently loaded test plugins {#pluginsctl-listTestPlugins}

Allows listing of test plugins with `rclone.test` set to true in the plugin's package.json.
@@ -2041,8 +1995,6 @@ E.g.

    rclone rc pluginsctl/listTestPlugins

**Authentication is required for this call.**

### pluginsctl/removePlugin: Remove a loaded plugin {#pluginsctl-removePlugin}

This allows you to remove a plugin using its name.
@@ -2055,8 +2007,6 @@ E.g.

    rclone rc pluginsctl/removePlugin name=rclone/video-plugin

**Authentication is required for this call.**

### pluginsctl/removeTestPlugin: Remove a test plugin {#pluginsctl-removeTestPlugin}

This allows you to remove a test plugin using its name.
@@ -2069,13 +2019,13 @@ Example:

    rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

**Authentication is required for this call.**

### rc/error: This returns an error {#rc-error}

This returns an error with the input as part of its error string.
Useful for testing error handling.

**Authentication is not required for this call.**

### rc/fatal: This returns a fatal error {#rc-fatal}

This returns an error with the input as part of its error string.
@@ -2086,20 +2036,22 @@ Useful for testing error handling.
This lists all the registered remote control commands as a JSON map in
the commands response.

**Authentication is not required for this call.**

### rc/noop: Echo the input to the output parameters {#rc-noop}

This echoes the input parameters to the output parameters for testing
purposes. It can be used to check that rclone is still alive and to
check that parameter passing is working properly.

**Authentication is not required for this call.**
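
Every rc method is exposed as an HTTP POST of a JSON object, so rc/noop is a convenient connectivity check: the decoded response should equal the parameters sent. A minimal sketch with Python's standard library, assuming the default rc listener on localhost:5572:

```python
import json
import urllib.request

def rc_request(method: str, params: dict) -> urllib.request.Request:
    """Build (but do not send) a POST for an rclone rc method.

    Pass the result to urllib.request.urlopen() against a running
    `rclone rcd` to perform the call.
    """
    return urllib.request.Request(
        f"http://localhost:5572/{method}",
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# rc/noop echoes its input, so the response JSON should equal these params.
req = rc_request("rc/noop", {"ping": "pong"})
```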

### rc/noopauth: Echo the input to the output parameters requiring auth {#rc-noopauth}

This echoes the input parameters to the output parameters for testing
purposes. It can be used to check that rclone is still alive and to
check that parameter passing is working properly.

**Authentication is required for this call.**

### rc/panic: This returns an error by panicking {#rc-panic}

This returns an error with the input as part of its error string.
@@ -2146,8 +2098,6 @@ Returns
}
```

**Authentication is required for this call.**

### serve/start: Create a new server {#serve-start}

Create a new server with the specified parameters.
@@ -2183,8 +2133,6 @@ Or an error if it failed to start.

Stop the server with `serve/stop` and list the running servers with `serve/list`.

**Authentication is required for this call.**

### serve/stop: Unserve selected active serve {#serve-stop}

Stops a running `serve` instance by ID.
@@ -2199,8 +2147,6 @@ Example:

    rclone rc serve/stop id=12345

**Authentication is required for this call.**
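
The same call can be made over plain HTTP by posting the id as a JSON field (a sketch; the string form of the id mirrors the command-line example above):

```python
import json

def serve_stop_body(server_id: str) -> bytes:
    """JSON body for a POST to the serve/stop rc endpoint.

    The id is the one returned by serve/start (or shown by serve/list).
    """
    return json.dumps({"id": server_id}).encode("utf-8")

body = serve_stop_body("12345")
```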

### serve/stopall: Stop all active servers {#serve-stopall}

Stop all active servers.
@@ -2209,8 +2155,6 @@ This will stop all active servers.

    rclone rc serve/stopall

**Authentication is required for this call.**

### serve/types: Show all possible serve types {#serve-types}

This shows all possible serve types and returns them as a list.
@@ -2238,40 +2182,70 @@ Returns
}
```

**Authentication is required for this call.**

### sync/bisync: Perform bidirectional synchronization between two paths. {#sync-bisync}

This takes the following parameters
<!--- Docs generated by help.go - use go generate to rebuild - DO NOT EDIT --->

- path1 - a remote directory string e.g. `drive:path1`
- path2 - a remote directory string e.g. `drive:path2`
- dryRun - dry-run mode
- resync - performs the resync run
- checkAccess - abort if RCLONE_TEST files are not found on both filesystems
- checkFilename - file name for checkAccess (default: RCLONE_TEST)
- maxDelete - abort sync if percentage of deleted files is above
  this threshold (default: 50)
- force - Bypass maxDelete safety check and run the sync
- checkSync - `true` by default, `false` disables comparison of final listings,
  `only` will skip sync, only compare listings from the last run
- createEmptySrcDirs - Sync creation and deletion of empty directories.
  (Not compatible with --remove-empty-dirs)
- removeEmptyDirs - remove empty directories at the final cleanup step
- filtersFile - read filtering patterns from a file
- ignoreListingChecksum - Do not use checksums for listings
- resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
- workdir - server directory for history files (default: `~/.cache/rclone/bisync`)
- backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote.
- backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote.
- noCleanup - retain working files

This takes the following parameters:

- path1 (required) - (string) a remote directory string e.g. `drive:path1`
- path2 (required) - (string) a remote directory string e.g. `drive:path2`
- dryRun - (bool) dry-run mode
- backupDir1 - (string) --backup-dir for Path1. Must be a non-overlapping path on
  the same remote.
- backupDir2 - (string) --backup-dir for Path2. Must be a non-overlapping path on
  the same remote.
- checkAccess - (bool) Ensure expected RCLONE_TEST files are found on both
  Path1 and Path2 filesystems, else abort.
- checkFilename - (string) Filename for --check-access (default: RCLONE_TEST)
- checkSync - (string) Controls comparison of final listings: true|false|only
  (default: true)
- compare - (string) Comma-separated list of bisync-specific compare options ex.
  'size,modtime,checksum' (default: 'size,modtime')
- conflictLoser - (ConflictLoserAction) Action to take on the loser of a sync
  conflict (when there is a winner) or on both files (when there is no
  winner): , num, pathname, delete (default: num)
- conflictResolve - (string) Automatically resolve conflicts by preferring the
  version that is: none, path1, path2, newer, older, larger, smaller (default:
  none)
- conflictSuffix - (string) Suffix to use when renaming a --conflict-loser. Can
  be either one string or two comma-separated strings to assign different
  suffixes to Path1/Path2. (default: 'conflict')
- createEmptySrcDirs - (bool) Sync creation and deletion of empty directories.
  (Not compatible with --remove-empty-dirs)
- downloadHash - (bool) Compute hash by downloading when otherwise
  unavailable. (warning: may be slow and use lots of data!)
- filtersFile - (string) Read filtering patterns from a file
- force - (bool) Bypass --max-delete safety check and run the sync. Consider
  using with --verbose
- ignoreListingChecksum - (bool) Do not use checksums for listings (add
  --ignore-checksum to additionally skip post-copy checksum checks)
- maxLock - (Duration) Consider lock files older than this to be expired
  (default: 0 (never expire)) (minimum: 2m)
- noCleanup - (bool) Retain working files (useful for troubleshooting and
  testing).
- noSlowHash - (bool) Ignore listing checksums only on backends where they are
  slow
- recover - (bool) Automatically recover from interruptions without requiring
  --resync.
- removeEmptyDirs - (bool) Remove ALL empty directories at the final cleanup
  step.
- resilient - (bool) Allow future runs to retry after certain less-serious
  errors, instead of requiring --resync.
- resync - (bool) Performs the resync run. Equivalent to --resync-mode path1.
  Consider using --verbose or --dry-run first.
- resyncMode - (string) During resync, prefer the version that is: path1,
  path2, newer, older, larger, smaller (default: path1 if --resync, otherwise
  none for no resync.)
- slowHashSyncOnly - (bool) Ignore slow checksums for listings and deltas, but
  still consider them during sync calls.
- workdir - (string) Use custom working dir - useful for testing. (default:
  ~/.cache/rclone/bisync)

See [bisync command help](https://rclone.org/commands/rclone_bisync/)
and [full bisync description](https://rclone.org/bisync/)
for more information.

**Authentication is required for this call.**
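
As a sketch of driving sync/bisync over the rc API (the keys are the documented camelCase parameter names from the list above; the remotes are illustrative), the parameters are simply posted as a JSON object:

```python
import json

def bisync_body(path1: str, path2: str, **opts) -> bytes:
    """JSON body for a POST to the sync/bisync rc endpoint.

    path1/path2 are required; remaining options are passed through
    using the camelCase names documented above.
    """
    return json.dumps({"path1": path1, "path2": path2, **opts}).encode("utf-8")

# Dry-run resync between two hypothetical remotes.
body = bisync_body("drive:path1", "drive:path2", dryRun=True, resync=True)
```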

### sync/copy: copy a directory from source remote to destination remote {#sync-copy}

This takes the following parameters:
@@ -2283,8 +2257,6 @@ This takes the following parameters:

See the [copy](/commands/rclone_copy/) command for more information on the above.

**Authentication is required for this call.**

### sync/move: move a directory from source remote to destination remote {#sync-move}

This takes the following parameters:
@@ -2297,8 +2269,6 @@ This takes the following parameters:

See the [move](/commands/rclone_move/) command for more information on the above.

**Authentication is required for this call.**

### sync/sync: sync a directory from source remote to destination remote {#sync-sync}

This takes the following parameters:
@@ -2310,8 +2280,6 @@ This takes the following parameters:

See the [sync](/commands/rclone_sync/) command for more information on the above.

**Authentication is required for this call.**

### vfs/forget: Forget files or directories in the directory cache. {#vfs-forget}

This forgets the paths in the directory cache causing them to be
@@ -2341,6 +2309,8 @@ It returns a list under the key "vfses" where the values are the VFS
names that could be passed to the other VFS commands in the "fs"
parameter.

**Authentication is not required for this call.**

### vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs-poll-interval}

Without any parameter given this returns the current status of the
@@ -2403,6 +2373,8 @@ supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.

**Authentication is not required for this call.**

### vfs/queue-set-expiry: Set the expiry time for an item queued for upload. {#vfs-queue-set-expiry}

Use this to adjust the `expiry` time for an item in the upload queue.
@@ -2496,6 +2468,8 @@ supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.

**Authentication is not required for this call.**

<!-- autogenerated stop -->

## Accessing the remote control via HTTP {#api-http}

@@ -965,7 +965,7 @@ mode from COMPLIANCE to GOVERNANCE while preserving the original retention date:
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, Fastly, FileLu, FlashBlade, GCS, HCP, Hetzner, HuaweiOBS, IBMCOS, IDrive, ImpossibleCloud, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, Storj, Synology, TencentCOS, US3, Wasabi, Zadara, Zata, Other).

#### --s3-provider

@@ -1000,12 +1000,16 @@ Properties:
        - Dreamhost DreamObjects
    - "Exaba"
        - Exaba Object Storage
    - "Fastly"
        - Fastly Object Storage
    - "FileLu"
        - FileLu S5 (S3-Compatible Object Storage)
    - "FlashBlade"
        - Pure Storage FlashBlade Object Storage
    - "GCS"
        - Google Cloud Storage
    - "HCP"
        - Hitachi Content Platform (HCP)
    - "Hetzner"
        - Hetzner Object Storage
    - "HuaweiOBS"
@@ -1014,6 +1018,8 @@ Properties:
        - IBM COS S3
    - "IDrive"
        - IDrive e2
    - "ImpossibleCloud"
        - Impossible Cloud Object Storage
    - "Intercolo"
        - Intercolo Object Storage
    - "IONOS"
@@ -1064,8 +1070,12 @@ Properties:
        - Synology C2 Object Storage
    - "TencentCOS"
        - Tencent Cloud Object Storage (COS)
    - "US3"
        - US3 Object Storage
    - "Wasabi"
        - Wasabi Object Storage
    - "Zadara"
        - Zadara Object Storage
    - "Zata"
        - Zata (S3 compatible Gateway)
    - "Other"
@@ -1125,7 +1135,7 @@ Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,Synology,Wasabi,Zata,Other
- Provider: AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,Fastly,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IBMCOS,ImpossibleCloud,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,Synology,Wasabi,Zadara,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -1243,17 +1253,41 @@ Properties:
    - ""
        - Use this if unsure.
        - Will use v4 signatures and an empty region.
        - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,Wasabi,Other
        - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,HCP,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,Wasabi,Other
    - "other-v2-signature"
        - Use this only if v4 signatures don't work.
        - E.g. pre Jewel/v10 CEPH.
        - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,Wasabi,Other
        - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,HCP,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,Wasabi,Other
    - "auto"
        - R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
        - Provider: Cloudflare
    - "eu-west-1"
        - Europe West
        - Provider: Cubbit
    - "au-east-1"
        - AU East 1
        - Provider: Fastly
    - "eu-central"
        - EU Central
        - Provider: Fastly
    - "eu-south-1"
        - EU South 1
        - Provider: Fastly
    - "jp-central-1"
        - JP Central 1
        - Provider: Fastly
    - "uk-east-1"
        - UK East 1
        - Provider: Fastly
    - "us-central-1"
        - US Central 1
        - Provider: Fastly
    - "us-east"
        - US East
        - Provider: Fastly
    - "us-west"
        - US West
        - Provider: Fastly
    - "global"
        - Global
        - Provider: FileLu
@@ -1323,6 +1357,27 @@ Properties:
    - "ru-northwest-2"
        - RU-Moscow2
        - Provider: HuaweiOBS
    - "eu-central-2"
        - Frankfurt, Germany
        - Provider: ImpossibleCloud
    - "eu-west-1"
        - Amsterdam, Netherlands
        - Provider: ImpossibleCloud
    - "eu-west-2"
        - London, UK
        - Provider: ImpossibleCloud
    - "eu-west-3"
        - Paris, France
        - Provider: ImpossibleCloud
    - "eu-east-1"
        - Poznań, Poland
        - Provider: ImpossibleCloud
    - "eu-north-1"
        - Copenhagen, Denmark
        - Provider: ImpossibleCloud
    - "us-east-1"
        - New York, USA
        - Provider: ImpossibleCloud
    - "de-fra"
        - Frankfurt, Germany
        - Provider: Intercolo
@@ -1332,9 +1387,18 @@ Properties:
    - "eu-central-2"
        - Berlin, Germany
        - Provider: IONOS
    - "eu-central-3"
        - Berlin, Germany
        - Provider: IONOS
    - "eu-central-4"
        - Frankfurt, Germany
        - Provider: IONOS
    - "eu-south-2"
        - Logrono, Spain
        - Provider: IONOS
    - "us-central-1"
        - Lenexa, USA
        - Provider: IONOS
    - "eu-west-2"
        - Paris, France
        - Provider: Outscale
@@ -1547,6 +1611,10 @@ Properties:
    - "tw-001"
        - Asia (Taiwan)
        - Provider: Synology
    - "us-east-1"
        - The default region.
        - Leave location constraint empty.
        - Provider: Zadara
    - "us-east-1"
        - Indore, Madhya Pradesh, India
        - Provider: Zata
@@ -1561,7 +1629,7 @@ Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,Storj,Synology,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,Fastly,FileLu,FlashBlade,GCS,HCP,Hetzner,HuaweiOBS,IBMCOS,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,Storj,Synology,TencentCOS,US3,Wasabi,Zadara,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -1747,6 +1815,9 @@ Properties:
    - "s3.cubbit.eu"
        - Cubbit DS3 Object Storage endpoint
        - Provider: Cubbit
    - "s3.{tenant_name}.cubbit.eu"
        - Multi-tenant endpoint - replace {tenant_name} with your actual tenant name. Do not select this directly
        - Provider: Cubbit
    - "syd1.digitaloceanspaces.com"
        - DigitalOcean Spaces Sydney 1
        - Provider: DigitalOcean
@@ -1780,6 +1851,30 @@ Properties:
    - "objects-us-east-1.dream.io"
        - Dream Objects endpoint
        - Provider: Dreamhost
    - "au-east-1.object.fastlystorage.app"
        - AU East 1
        - Provider: Fastly
    - "eu-central.object.fastlystorage.app"
        - EU Central
        - Provider: Fastly
    - "eu-south-1.object.fastlystorage.app"
        - EU South 1
        - Provider: Fastly
    - "jp-central-1.object.fastlystorage.app"
        - JP Central 1
        - Provider: Fastly
    - "uk-east-1.object.fastlystorage.app"
        - UK East 1
        - Provider: Fastly
    - "us-central-1.object.fastlystorage.app"
        - US Central 1
        - Provider: Fastly
    - "us-east.object.fastlystorage.app"
        - US East
        - Provider: Fastly
    - "us-west.object.fastlystorage.app"
        - US West
        - Provider: Fastly
    - "s5lu.com"
        - Global FileLu S5 endpoint
        - Provider: FileLu
@@ -2038,18 +2133,48 @@ Properties:
    - "s3.private.sng01.cloud-object-storage.appdomain.cloud"
        - Singapore Single Site Private Endpoint
        - Provider: IBMCOS
    - "eu-central-2.storage.impossibleapi.net"
        - Frankfurt, Germany
        - Provider: ImpossibleCloud
    - "eu-west-1.storage.impossibleapi.net"
        - Amsterdam, Netherlands
        - Provider: ImpossibleCloud
    - "eu-west-2.storage.impossibleapi.net"
        - London, UK
        - Provider: ImpossibleCloud
    - "eu-west-3.storage.impossibleapi.net"
        - Paris, France
        - Provider: ImpossibleCloud
    - "eu-east-1.storage.impossibleapi.net"
        - Poznań, Poland
        - Provider: ImpossibleCloud
    - "eu-north-1.storage.impossibleapi.net"
        - Copenhagen, Denmark
        - Provider: ImpossibleCloud
    - "us-east-1.storage.impossibleapi.net"
        - New York, USA
        - Provider: ImpossibleCloud
    - "de-fra.i3storage.com"
        - Frankfurt, Germany
        - Provider: Intercolo
    - "s3-eu-central-1.ionoscloud.com"
    - "s3.eu-central-1.ionoscloud.com"
        - Frankfurt, Germany
        - Provider: IONOS
    - "s3-eu-central-2.ionoscloud.com"
    - "s3.eu-central-2.ionoscloud.com"
        - Berlin, Germany
        - Provider: IONOS
    - "s3-eu-south-2.ionoscloud.com"
    - "s3.eu-central-3.ionoscloud.com"
        - Berlin, Germany
        - Provider: IONOS
    - "s3.eu-central-4.ionoscloud.com"
        - Frankfurt, Germany
        - Provider: IONOS
    - "s3.eu-south-2.ionoscloud.com"
        - Logrono, Spain
        - Provider: IONOS
    - "s3.us-central-1.ionoscloud.com"
        - Lenexa, USA
        - Provider: IONOS
    - "s3.leviia.com"
        - The default endpoint
        - Leviia
@@ -2421,6 +2546,75 @@ Properties:
    - "cos.accelerate.myqcloud.com"
        - Use Tencent COS Accelerate Endpoint
        - Provider: TencentCOS
    - "s3-cn-bj.ufileos.com"
        - North China (Beijing 2)
        - Provider: US3
    - "s3-cn-wlcb.ufileos.com"
        - North China (Ulanqab)
        - Provider: US3
    - "s3-cn-sh2.ufileos.com"
        - Shanghai
        - Provider: US3
    - "s3-cn-gd.ufileos.com"
        - South China (Guangzhou)
        - Provider: US3
    - "s3-hk.ufileos.com"
        - Hongkong
        - Provider: US3
    - "s3-us-ca.ufileos.com"
        - United States (Los Angeles)
        - Provider: US3
    - "s3-sg.ufileos.com"
        - Singapore
        - Provider: US3
    - "s3-idn-jakarta.ufileos.com"
        - Indonesia (Jakarta)
        - Provider: US3
    - "s3-tw-tp.ufileos.com"
        - Taiwan (Taipei)
        - Provider: US3
    - "s3-afr-nigeria.ufileos.com"
        - Nigeria (Lagos)
        - Provider: US3
    - "s3-bra-saopaulo.ufileos.com"
        - Brazil (São Paulo)
        - Provider: US3
    - "s3-uae-dubai.ufileos.com"
        - United Arab Emirates (Dubai)
        - Provider: US3
    - "s3-ge-fra.ufileos.com"
        - Germany (Frankfurt)
        - Provider: US3
    - "s3-vn-sng.ufileos.com"
        - Vietnam (Ho Chi Minh City)
        - Provider: US3
    - "s3-us-ws.ufileos.com"
        - United States (Washington)
        - Provider: US3
    - "s3-ind-mumbai.ufileos.com"
        - India (Mumbai)
        - Provider: US3
    - "s3-kr-seoul.ufileos.com"
        - Seoul (South Korea)
        - Provider: US3
    - "s3-jpn-tky.ufileos.com"
        - Japan (Tokyo)
        - Provider: US3
    - "s3-th-bkk.ufileos.com"
        - Bangkok (Thailand)
        - Provider: US3
    - "s3-uk-london.ufileos.com"
        - United Kingdom (London)
        - Provider: US3
    - "s3-rus-mosc.ufileos.com"
        - Russia (Moscow)
        - Provider: US3
    - "s3-cn-guiyang1.ufileos.com"
        - Guiyang 1
        - Provider: US3
    - "s3-pk-khi.ufileos.com"
        - Pakistan
        - Provider: US3
    - "s3.wasabisys.com"
        - Wasabi US East 1 (N. Virginia)
        - Provider: Wasabi
@@ -2477,7 +2671,7 @@ Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other
- Provider: AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,HCP,Hetzner,IBMCOS,ImpossibleCloud,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -2858,36 +3052,36 @@ Properties:
|
||||
|
||||
- Config: acl
|
||||
- Env Var: RCLONE_S3_ACL
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IBMCOS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,US3,Wasabi,Zata,Other
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "private"
|
||||
- Owner gets FULL_CONTROL.
|
||||
- No one else has access rights (default).
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Wasabi,Zata,Other
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,US3,Wasabi,Zata,Other
|
||||
- "public-read"
|
||||
- Owner gets FULL_CONTROL.
|
||||
- The AllUsers group gets READ access.
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
|
||||
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,US3,Wasabi,Zata,Other
|
||||
    - "public-read-write"
        - Owner gets FULL_CONTROL.
        - The AllUsers group gets READ and WRITE access.
        - Granting this on a bucket is generally not recommended.
-        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
+        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
    - "authenticated-read"
        - Owner gets FULL_CONTROL.
        - The AuthenticatedUsers group gets READ access.
-        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
+        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
    - "bucket-owner-read"
        - Object owner gets FULL_CONTROL.
        - Bucket owner gets READ access.
        - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
-        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
+        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
    - "bucket-owner-full-control"
        - Both the object owner and the bucket owner get FULL_CONTROL over the object.
        - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
-        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
+        - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,TencentCOS,Wasabi,Zata,Other
    - "private"
        - Owner gets FULL_CONTROL.
        - No one else has access rights (default).

@@ -2961,22 +3155,22 @@ Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS
+- Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,OVHcloud,Qiniu,Scaleway,TencentCOS,US3
- Type: string
- Required: false
- Examples:
    - ""
        - Default
-        - Provider: AWS,Alibaba,ChinaMobile,TencentCOS
+        - Provider: AWS,Alibaba,ChinaMobile,OVHcloud,TencentCOS,US3
    - "STANDARD"
        - Standard storage class
-        - Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS
+        - Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,OVHcloud,Qiniu,TencentCOS,US3
    - "REDUCED_REDUNDANCY"
        - Reduced redundancy storage class
        - Provider: AWS
    - "STANDARD_IA"
        - Standard Infrequent Access storage class
-        - Provider: AWS
+        - Provider: AWS,OVHcloud
    - "ONEZONE_IA"
        - One Zone Infrequent Access storage class
        - Provider: AWS

@@ -2997,7 +3191,25 @@ Properties:

        - Provider: Alibaba,ChinaMobile,Qiniu
    - "STANDARD_IA"
        - Infrequent access storage mode
-        - Provider: Alibaba,ChinaMobile,TencentCOS
+        - Provider: Alibaba,ChinaMobile,TencentCOS,US3
    - "EXPRESS_ONEZONE"
        - High Performance storage class
        - Provider: OVHcloud
    - "ONEZONE_IA"
        - Standard Infrequent Access storage class
        - Provider: OVHcloud
    - "GLACIER"
        - Active Archive storage class
        - Provider: OVHcloud
    - "GLACIER_IR"
        - Active Archive storage class
        - Provider: OVHcloud
    - "DEEP_ARCHIVE"
        - Cold Archive storage class
        - Provider: OVHcloud
    - "INTELLIGENT_TIERING"
        - Standard storage class
        - Provider: OVHcloud
    - "LINE"
        - Infrequent access storage mode
        - Provider: Qiniu

@@ -3015,16 +3227,16 @@ Properties:

    - "GLACIER"
        - Archived storage.
        - Prices are lower, but it needs to be restored first to be accessed.
-        - Available in FR-PAR and NL-AMS regions.
+        - Available in the FR-PAR region only.
        - Provider: Scaleway
    - "ONEZONE_IA"
        - One Zone - Infrequent Access.
        - A good choice for storing secondary backup copies or easily re-creatable data.
-        - Available in the FR-PAR region only.
+        - Available in all regions.
        - Provider: Scaleway
    - "ARCHIVE"
        - Archive storage mode
-        - Provider: TencentCOS
+        - Provider: TencentCOS,US3

#### --s3-ibm-api-key

@@ -3050,9 +3262,20 @@ Properties:

- Type: string
- Required: false

#### --s3-bucket-object-lock-enabled

Enable Object Lock when creating new buckets.

Properties:

- Config: bucket_object_lock_enabled
- Env Var: RCLONE_S3_BUCKET_OBJECT_LOCK_ENABLED
- Type: bool
- Default: false
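
The flag takes effect when rclone itself creates the bucket, for example with `mkdir`. A sketch of such an invocation (the remote name `s3:` and the bucket name are placeholders):

```console
rclone mkdir s3:my-locked-bucket --s3-bucket-object-lock-enabled
```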

### Advanced options

-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, Fastly, FileLu, FlashBlade, GCS, HCP, Hetzner, HuaweiOBS, IBMCOS, IDrive, ImpossibleCloud, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, Storj, Synology, TencentCOS, US3, Wasabi, Zadara, Zata, Other).

#### --s3-bucket-acl

@@ -3071,7 +3294,7 @@ Properties:

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
-- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,HCP,Hetzner,HuaweiOBS,IBMCOS,IDrive,ImpossibleCloud,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,TencentCOS,US3,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:

@@ -3963,6 +4186,27 @@ Properties:

- Type: Tristate
- Default: unset

#### --s3-list-versions-oldest-first

Set if the backend returns object versions oldest first.

The S3 standard returns object versions newest first. Some backends
(e.g. Hitachi HCP) return them oldest first instead.

Set this quirk if --s3-version-at or --s3-versions produce incorrect
results with your backend.

This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.

Properties:

- Config: list_versions_oldest_first
- Env Var: RCLONE_S3_LIST_VERSIONS_OLDEST_FIRST
- Type: Tristate
- Default: unset
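
If version-aware listings from an unrecognised backend come back in the wrong order, the quirk can be forced on the command line; a sketch with a placeholder remote and bucket name:

```console
rclone ls s3:my-bucket --s3-versions --s3-list-versions-oldest-first
```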

#### --s3-use-x-id

Set if rclone should add x-id URL parameters.

@@ -4065,6 +4309,158 @@ Properties:

- Type: Bits
- Default: Off

#### --s3-ibm-iam-endpoint

IBM IAM Endpoint to use for authentication.

Leave blank to use the default public endpoint.

Properties:

- Config: ibm_iam_endpoint
- Env Var: RCLONE_S3_IBM_IAM_ENDPOINT
- Provider: IBMCOS
- Type: string
- Required: false

#### --s3-object-lock-mode

Object Lock mode to apply when uploading or copying objects.

Set this to apply Object Lock retention mode to objects.
If not set, no Object Lock mode is applied (even with --metadata).

Note: To enable Object Lock retention, you must set BOTH object_lock_mode
AND object_lock_retain_until_date. Setting only one has no effect.

- GOVERNANCE: Set Object Lock mode to GOVERNANCE
- COMPLIANCE: Set Object Lock mode to COMPLIANCE
- copy: Copy the mode from the source object (requires --metadata)

See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

Properties:

- Config: object_lock_mode
- Env Var: RCLONE_S3_OBJECT_LOCK_MODE
- Type: string
- Required: false
- Examples:
    - "GOVERNANCE"
        - Set Object Lock mode to GOVERNANCE
    - "COMPLIANCE"
        - Set Object Lock mode to COMPLIANCE
    - "copy"
        - Copy from source object (requires --metadata)

#### --s3-object-lock-retain-until-date

Object Lock retention until date to apply when uploading or copying objects.

Set this to apply Object Lock retention date to objects.
If not set, no retention date is applied (even with --metadata).

Note: To enable Object Lock retention, you must set BOTH object_lock_mode
AND object_lock_retain_until_date. Setting only one has no effect.

Accepts:
- RFC 3339 format: 2030-01-02T15:04:05Z
- Duration from now: 365d, 1y, 6M (days, years, months)
- copy: Copy the date from the source object (requires --metadata)

See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

Properties:

- Config: object_lock_retain_until_date
- Env Var: RCLONE_S3_OBJECT_LOCK_RETAIN_UNTIL_DATE
- Type: string
- Required: false
- Examples:
    - "copy"
        - Copy from source object (requires --metadata)
    - "2030-01-01T00:00:00Z"
        - Set specific date (RFC 3339 format)
    - "365d"
        - Set retention for 365 days from now
    - "1y"
        - Set retention for 1 year from now
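
Since mode and retention date only take effect together, a typical invocation sets both flags at once; a sketch in which the remote, bucket and file names are placeholders:

```console
rclone copy report.pdf s3:locked-bucket \
  --s3-object-lock-mode COMPLIANCE \
  --s3-object-lock-retain-until-date 1y
```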

#### --s3-object-lock-legal-hold-status

Object Lock legal hold status to apply when uploading or copying objects.

Set this to apply Object Lock legal hold to objects.
If not set, no legal hold is applied (even with --metadata).

Note: Legal hold is independent of retention and can be set separately.

- ON: Enable legal hold
- OFF: Disable legal hold
- copy: Copy the legal hold status from the source object (requires --metadata)

See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

Properties:

- Config: object_lock_legal_hold_status
- Env Var: RCLONE_S3_OBJECT_LOCK_LEGAL_HOLD_STATUS
- Type: string
- Required: false
- Examples:
    - "ON"
        - Enable legal hold
    - "OFF"
        - Disable legal hold
    - "copy"
        - Copy from source object (requires --metadata)
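
Because legal hold is independent of retention, it can be applied on its own at upload time; a sketch with placeholder remote, bucket and file names:

```console
rclone copy evidence.txt s3:legal-bucket --s3-object-lock-legal-hold-status ON
```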

#### --s3-bypass-governance-retention

Allow deleting or modifying objects locked with GOVERNANCE mode.

Properties:

- Config: bypass_governance_retention
- Env Var: RCLONE_S3_BYPASS_GOVERNANCE_RETENTION
- Type: bool
- Default: false

#### --s3-object-lock-set-after-upload

Set Object Lock via separate API calls after upload.

Use this for S3-compatible providers that don't support setting Object Lock
headers during PUT operations. When enabled, Object Lock is set via separate
PutObjectRetention and PutObjectLegalHold API calls after the upload completes.

This adds extra API calls per object, so only enable it if your provider requires it.

Properties:

- Config: object_lock_set_after_upload
- Env Var: RCLONE_S3_OBJECT_LOCK_SET_AFTER_UPLOAD
- Type: bool
- Default: false

#### --s3-object-lock-supported

Whether the provider supports S3 Object Lock.

This should be true, false or left unset to use the default for the provider.

Set to false for providers that don't fully support the S3 Object Lock API
(e.g. GCS, which uses non-standard headers for bypass governance retention
and doesn't implement Legal Hold via the S3 API).

Properties:

- Config: object_lock_supported
- Env Var: RCLONE_S3_OBJECT_LOCK_SUPPORTED
- Type: Tristate
- Default: unset

#### --s3-description

Description of the remote.

@@ -4091,6 +4487,9 @@ Here are the possible system metadata items for the s3 backend.

| content-language | Content-Language header | string | en-US | N |
| content-type | Content-Type header | string | text/plain | N |
| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
| object-lock-legal-hold-status | Object Lock legal hold status: ON or OFF | string | OFF | N |
| object-lock-mode | Object Lock mode: GOVERNANCE or COMPLIANCE | string | GOVERNANCE | N |
| object-lock-retain-until-date | Object Lock retention until date | RFC 3339 | 2030-01-02T15:04:05Z | N |
| tier | Tier of the object | string | GLACIER | **Y** |

See the [metadata](/docs/#metadata) docs for more info.

@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20260130
+// Output: stories/The Quick Brown Fox!-20260501
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2026-01-30 0852PM
+// Output: stories/The Quick Brown Fox!-2026-05-01 0355PM
```

```console