When using rcat to upload a new version of a file that already exists,
the file upload itself succeeds. Deletion of the old file is then
attempted after the upload. Drime appears to handle the deletion of
the old file automatically and returns HTTP status code 422 with the
message "The selected entry ids is invalid."
Before this change, that deletion, and therefore the rcat, would fail.
This happens with file history enabled on my Drime account.
This change detects the error and ignores it since the file has
already been deleted.
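Roughly, the check looks like this (a sketch; api.Error, its fields and
deleteObject are illustrative names, not necessarily the real ones in the
backend):

```go
err := f.deleteObject(ctx, oldID)
var apiErr *api.Error
if errors.As(err, &apiErr) && apiErr.StatusCode == 422 &&
	strings.Contains(apiErr.Message, "The selected entry ids is invalid") {
	// Drime already removed the old version for us.
	fs.Debugf(f, "ignoring delete error for already-removed old version: %v", err)
	err = nil
}
```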
At some point Drime recommended 200M as the upload cutoff for
switching to multipart uploads. However, single part uploads have
stopped working for files in the roughly 100-200M range.
Their docs now recommend 5M as the cutoff for multipart uploads, so
this changes the default accordingly.
The /s3/multipart/create and /s3/entries endpoints interpret relativePath
as an absolute path from the drive root, not relative to parent_id. When
root_folder_id was set to a non-root folder, files larger than
upload_cutoff ended up at the user's drive root instead of the configured
folder.
Resolve the absolute path of the Fs root once via GET /folders/{hash}/path
(cached on first OpenChunkWriter call) and use that to build the correct
relativePath.
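A rough sketch of the caching (rootPathOnce, fetchFolderPath and the other
field names are illustrative):

```go
// absRootPath resolves the absolute path of the Fs root once and caches it.
func (f *Fs) absRootPath(ctx context.Context) (string, error) {
	f.rootPathOnce.Do(func() {
		// fetchFolderPath is a hypothetical helper wrapping
		// GET /folders/{hash}/path for the configured root_folder_id.
		f.rootPath, f.rootPathErr = f.fetchFolderPath(ctx, f.opt.RootFolderID)
	})
	return f.rootPath, f.rootPathErr
}
```

relativePath for /s3/multipart/create and /s3/entries is then built as
path.Join(rootPath, remote) so chunked uploads land under the configured
folder.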
Fixes #9392
When enabled, an out-of-space error during a local write returns a
fatal error that aborts the run, instead of being retried.
Without this option, ENOSPC errors are treated as retryable and
rclone may spin through the retry loop many times on a full disk
before giving up. That is fine for transient network errors but
unhelpful when the disk is genuinely full and the operator wants
the run to fail loudly. Default is off so existing behaviour is
unchanged.
Implementation follows the pattern suggested in the issue: a defer
at the top of Update wraps the error with fserrors.FatalError when
the option is on and the error is disk-full. Detection covers both
file.ErrDiskFull from the preallocate path and syscall.ENOSPC from
io.Copy or Close, via a small helper that uses fserrors.IsErrNoSpace.
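A minimal sketch of the detection helper and the deferred wrap in the local
backend (the option field name NoSpaceFatal is illustrative, not the final
name):

```go
// isDiskFull reports whether err indicates the disk is genuinely full.
func isDiskFull(err error) bool {
	if err == nil {
		return false
	}
	return errors.Is(err, file.ErrDiskFull) || fserrors.IsErrNoSpace(err)
}

// At the top of (*Object).Update:
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
	if o.fs.opt.NoSpaceFatal { // illustrative option name
		defer func() {
			if isDiskFull(err) {
				err = fserrors.FatalError(err) // abort the run instead of retrying
			}
		}()
	}
	// ... existing write path (preallocate, io.Copy, Close) unchanged ...
	return err
}
```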
Add three new regions and their endpoints for Fastly Object Storage:
- eu-west-1 (Paris)
- us-east-1 (Virginia)
- us-west-1 (Oregon)
These are distinct from the existing us-east, us-west and eu-central
endpoints, which are kept in place.
shouldRetry treated every non-nil error as retryable, so permanent
failures (auth, 4xx, not-found) burned through the LowLevelRetries
budget instead of returning fast.
This also fixes the pacer sleeps: pacer.MinSleep(1000) and
pacer.MaxSleep(10000) take time.Duration values, so they amounted to 1µs
and 10µs; they were almost certainly intended to be 10ms and 2s.
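A sketch of the corrected pattern, following the shouldRetry shape most
rclone backends use, with duration-typed pacer constants (the retry codes
shown are illustrative for this backend):

```go
import (
	"context"
	"net/http"
	"time"

	"github.com/rclone/rclone/fs/fserrors"
)

const (
	minSleep = 10 * time.Millisecond
	maxSleep = 2 * time.Second
)

var retryErrorCodes = []int{429, 500, 502, 503, 504}

// shouldRetry returns whether this err or response deserves to be retried.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if fserrors.ContextError(ctx, &err) {
		return false, err
	}
	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}

// Pacer construction with proper durations:
// f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep)))
```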
The test set the short idle timeout before creating the test Fs, which
made fs.NewFs fail to read the FTP welcome banner within 1s on slow CI
hosts. Restore the long timeout while NewFs dials the control
connection, then apply the short idle timeout before the upload so the
data connection still exercises the close race that shut_timeout fixes.
The previous wording "Already have a token - refresh?" was misleading
because answering yes triggers a full re-authorization flow, not an
OAuth2 refresh token grant. Updated to "Token already configured -
replace it?" to accurately describe what happens.
Also updated the SugarSync backend which has its own copy of the prompt,
and the docs for box, drive, and onedrive that reference it.
The stscreds.AssumeRoleProvider from AWS SDK Go v2 does not cache
credentials by itself. The SDK only auto-wraps providers with
aws.CredentialsCache when they are loaded via
config.LoadDefaultConfig; when assigned directly to
aws.Config.Credentials it must be wrapped manually, as documented on
stscreds.NewAssumeRoleProvider.
Without the cache, configurations using role_arn would call AssumeRole
once per S3 request, flooding STS and CloudTrail.
See: https://forum.rclone.org/t/aws-iam-roles-credentials-arent-cached/53732
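A minimal sketch of the manual wrap, as documented on
stscreds.NewAssumeRoleProvider (surrounding variable names are illustrative):

```go
// Wrap the provider so credentials are cached and refreshed only on expiry,
// instead of calling sts:AssumeRole once per S3 request.
stsClient := sts.NewFromConfig(awsCfg)
provider := stscreds.NewAssumeRoleProvider(stsClient, roleARN)
awsCfg.Credentials = aws.NewCredentialsCache(provider)
```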
When a Proton Drive file has no active revision attributes,
readMetaDataForLink returns a nil FileSystemAttrs and Object.originalSize
is left as nil. Object.Open then dereferenced this nil pointer when
calling fs.FixRangeOption, causing a SIGSEGV during copy.
Use Object.Size() instead, which already implements the correct fallback
to the link size when originalSize is unavailable.
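The change in Object.Open, roughly:

```go
// Before (sketch): panics when there are no active revision attributes,
// because originalSize is nil.
//   fs.FixRangeOption(options, *o.originalSize)

// After: Size() already falls back to the link size when originalSize
// is unavailable.
fs.FixRangeOption(options, o.Size())
```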
This updates the github.com/rclone/Proton-API-Bridge package to fix a
segfault when reading files with no metadata.
Fixes #9377
Fixes #9117
Previously all log output produced by Proton-API-Bridge (stdlib log)
and go-proton-api (logrus + resty's logger) bypassed rclone's
logging: it ignored -v / -vv levels and didn't reach --log-file.
Add a small adapter implementing the resty.Logger / bridge Logger
shape that calls fs.Errorf / fs.Logf / fs.Debugf, and pass it via
the new Config.Logger hook. The bridge in turn forwards the same
value to go-proton-api's WithLogger option, so HTTP-layer warnings
and the formerly-hardcoded logrus warnings inside go-proton-api
also surface through rclone's log levels.
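A sketch of the adapter, assuming the resty.Logger shape of
Errorf/Warnf/Debugf(format string, v ...interface{}):

```go
// protonLogger routes Proton-API-Bridge and go-proton-api log output
// through rclone's levelled logging.
type protonLogger struct{}

func (protonLogger) Errorf(format string, v ...interface{}) { fs.Errorf("protondrive", format, v...) }
func (protonLogger) Warnf(format string, v ...interface{})  { fs.Logf("protondrive", format, v...) }
func (protonLogger) Debugf(format string, v ...interface{}) { fs.Debugf("protondrive", format, v...) }
```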
The Proton Drive backend constructed the upstream Proton-API-Bridge
without ever passing rclone's HTTP transport. As a result none of
rclone's HTTP flags reached Proton: --dump headers, --dump bodies,
--no-check-certificate, --user-agent, --bind, --ca-cert, --header,
--tpslimit etc. all silently did nothing for this remote, and HTTP
traffic was invisible to -vv.
Pass fshttp.NewTransport(ctx) through the new Config.Transport hook on
the bridge, which forwards it to the updated go-proton-api's
WithTransport option and so to the underlying resty client.
next_page is not currently being returned on listings, which causes the
rclone listing code to go wrong. It used to be returned, so this is likely
a regression in Drime.
This changes the page counter to be calculated from current_page and
last_page. On the first page request, last_page is just current_page+1.
Drime appears to cap per_page at 200; as more pages are requested,
last_page increments by 1 until current_page = last_page.
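A sketch of the resulting paging loop (listPage and the
CurrentPage/LastPage field names are illustrative):

```go
page := 1
for {
	resp, err := f.listPage(ctx, dirID, page) // hypothetical helper wrapping the listing call
	if err != nil {
		return nil, err
	}
	entries = append(entries, resp.Data...)
	// last_page is only ever one ahead until the final page is reached,
	// so keep going until the server reports current_page == last_page.
	if resp.CurrentPage >= resp.LastPage {
		break
	}
	page++
}
```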
Add read-only iCloud Photos support to the existing iclouddrive
backend via the `service = photos` config option.
Also includes auth improvements on top of #9209's SRP authentication.
**Photos features:**
- 3-level hierarchy: libraries (Personal + Shared Photo Library) →
albums → photos/videos
- Server-side smart albums (All Photos, Videos, Favorites,
Screenshots, Live, Bursts, Panoramas, Slo-mo, Time-lapse, Portrait,
Long Exposure, Animated, Hidden, Recently Deleted)
- User-created albums and nested album folders
- Live Photo `.MOV` companions as first-class entries
- Edited photo versions (`-edited` suffix) and RAW alternatives
- Duplicate filename dedup for camera counter wrap collisions
- Parallel cold listing for large albums
- Delta sync via CloudKit `changes/zone` - warm listings near-instant from disk cache
- Disk cache (libraries, albums, photos) with atomic writes for crash safety
- `ChangeNotify` support for FUSE mounts via `changes/zone` polling
- `ListR` support for `--fast-list` and recursive operations
- `--metadata` support - width, height, added-time, favorite, hidden
- Fresh download URLs per file - no stale URL failures on long copies
- FUSE mount documentation with recommended flags
**Auth improvements over #9209:**
- SMS 2FA fallback for users without trusted Apple devices
- Explicit push notification request - fixes iOS/macOS 26.4+ where 409
no longer auto-pushes
- Thread safety for concurrent FUSE callers (mutexes on session and client state)
- Session endpoint caching - skips ~5s `/validate` round-trip on warm start
- `Disconnect` support - clears auth state + disk cache
- PCS cookie support for Advanced Data Protection accounts, including
trusted-device approval for PCS cookies
Built on @coughlanio's Photos PoC (Closes #8734) and @mikegillan's SRP auth (#9209).
Fixes #7982
Co-authored-by: Chris Coughlan <chris@coughlan.io>
Implement multipart upload support with configurable chunk size and concurrency options
Enable OpenChunkWriter with per-chunk encryption
Enhance multipart upload handling with new upload cutoff and error management for small files
The fix for #8241, which set FileRequestIntent=Backup on
service.ClientOptions so the azfile SDK emits the
x-ms-file-request-intent header, was inadvertently dropped (released in
v1.73.0) in this commit:
846f193806 azureblob,azurefiles: factor the common auth into a library
This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:
400 MissingRequiredHeader: x-ms-file-request-intent
This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
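Roughly what the restored callback does (a sketch; it assumes the azfile
SDK's service.ShareTokenIntentBackup constant and that SetClientOptions
receives *service.ClientOptions):

```go
// Set the backup intent so the SDK emits x-ms-file-request-intent for
// TokenCredential (OAuth) authentication.
func setClientOptions(opts *service.ClientOptions) {
	intent := service.ShareTokenIntentBackup
	opts.FileRequestIntent = &intent
}
```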
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.
Previously, these failures would be ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().
This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
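A sketch of what registration looks like after the flip (handler name and
text are illustrative):

```go
rc.Add(rc.Call{
	Path:   "core/version",
	Fn:     rcVersion,
	Title:  "Shows the current version of rclone and the go runtime.",
	NoAuth: true, // read-only and safe, so exempt from auth-by-default
})
// Any call that does not set NoAuth now requires authentication.
```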
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
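Sketch of the change when building the list request:

```go
// Leave Delimiter nil for recursive listings so the query parameter is
// omitted entirely, rather than sending ?delimiter= with an empty value.
var delimiter *string
if !recurse {
	delimiter = aws.String("/")
}
input := &s3.ListObjectsV2Input{
	Bucket:    &bucket,
	Prefix:    &prefix,
	Delimiter: delimiter,
}
```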
Fixes #9342
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.
If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --azureblob-decompress is not set, the compressed files will be
downloaded as-is, providing compressed objects with intact size and hash
information.
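Roughly, with illustrative field names:

```go
// The Go runtime transparently decompresses objects stored with
// Content-Encoding: gzip, so the recorded size and MD5 no longer match
// what is downloaded.
if o.contentEncoding == "gzip" && o.fs.opt.Decompress {
	o.size = -1 // length unknown after decompression
	o.md5 = ""  // stored hash is for the compressed payload
}
```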
Fixes #9337
Set the User-Agent to include the APN prefix for Azure backends
(azureblob, azurefiles, onelake) to identify rclone as a Microsoft
Partner. The User-Agent is now:
APN/1.0 rclone/1.0 rclone/<version>
Add a ctx parameter to vfs.New() so callers can pass in context
carrying ConfigInfo and FilterInfo. The context is stripped of
cancellation but config and filter values are preserved into a fresh
background context.
`json:"entry_permissions"` is known to be either empty [] or of
structure {string: boolean}. This may have been a breaking API change on
Drime's side. Because EntryPermissions is not used, the type was changed
to `any` to accept both cases; otherwise we would need to implement custom
unmarshalling for that type.
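The loosened field, roughly (the containing struct name is illustrative):

```go
type Item struct {
	// ... other fields unchanged ...

	// EntryPermissions is either an empty JSON array or an object of
	// {string: bool}. rclone never reads it, so `any` accepts both shapes
	// without needing custom unmarshalling.
	EntryPermissions any `json:"entry_permissions"`
}
```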
The Global Acceleration Endpoint (cos.accelerate.myqcloud.com) of
Tencent COS does not seem to support "CreateBucket" (maybe also other
bucket management operations). Since the acceleration functionality must
be enabled per-bucket in the Tencent Cloud console, the bucket will
always exist before this endpoint is used, so this check can be safely
skipped.
Now, "no_check_bucket = true" will be auto set when using this endpoint.
Why "NewFs()": on-the-fly remotes (connection string remotes), for
example, ":s3,provider=TencentCOS,...:..." will also be fixed.
Why no unit test: I can't find a good way to test "NewFs()" without
leveraging live endpoints. I think we can extract all existing mutations
for different providers (e.g., AWS, Fastly, and Rabata) from "NewFs()"
to a new function in the future.
Some Tencent docs about this CDN endpoint:
- English: Global Acceleration Endpoint | https://www.tencentcloud.com/pt/document/product/436/40700
- Chinese: 对象存储 全球加速概述_腾讯云 | https://cloud.tencent.com/document/product/436/38866
Assisted-By: OpenCode
When a user has --s3-versions set but lacks the s3:GetBucketVersioning
permission, GetBucketVersioning returns an error and isVersioned() caches
the result as false. This caused CleanUpHidden (backend cleanup-hidden) to
silently exit with "bucket is not versioned so not removing old versions",
ignoring the user's explicit --s3-versions flag.
Fix this by trusting the explicit --s3-versions flag in purge(), bypassing
the GetBucketVersioning check when the user has explicitly declared the
bucket is versioned.
Previously ssh.InsecureIgnoreHostKey() was set unconditionally as the
default HostKeyCallback with no indication to the user.
This logs a warning pointing users to the documentation on how to
enable host key validation.
See: https://github.com/rclone/rclone/security/code-scanning/167
The Linkbox open API (/api/open/file_search) no longer returns download
URLs, breaking all downloads. This switches to using the web API
(/api/file/my_file_list/web) which requires email+password authentication
but returns working download URLs.
This will unfortunately require changing your existing rclone config.
- Add email, password, and web_token config options
- Add web API login via /api/user/login_email with token caching and retry
- Create separate CDN HTTP client with HTTP/2 disabled and browser
User-Agent to avoid CDN fingerprint blocking
- Remove searchOK regex and name-filtering (web API doesn't support it)
Apple IDs are case-insensitive, but the SRP proof computation (M1)
hashes the username client-side. The old plaintext signin let the
server normalize the case, but with SRP the client must match.
Lowercase the Apple ID before use so mixed-case IDs authenticate
correctly.
Reported-by: ArturKlauser
This fixes China mainland iCloud authentication by deriving the Origin
and Referer headers from authEndpoint instead of hardcoding idmsa.apple.com.
Fixes compatibility with PR #8818 (China region support) and PR #9209
(SRP authentication).
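A sketch of the derivation (exact placement in the client is illustrative):

```go
// Derive Origin/Referer from the configured auth endpoint so regions
// that use a different idmsa host (e.g. China mainland) work too.
u, err := url.Parse(authEndpoint)
if err != nil {
	return err
}
origin := u.Scheme + "://" + u.Host
req.Header.Set("Origin", origin)
req.Header.Set("Referer", origin+"/")
```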
Signed-off-by: Xiangzhe <xiangzhedev@gmail.com>
Apple has deprecated the legacy /appleauth/auth/signin endpoint and
now blocks it, causing "Invalid Session Token" errors for all users
when their trust token expires. The browser login flow now requires
SRP (Secure Remote Password), a cryptographic handshake that never
transmits the password.
Replace Session.SignIn() with a multi-step SRP-6a flow:
1. authStart - initialize session at /authorize/signin
2. authFederate - submit account name to /federate
3. authSRPInit - exchange client public value for salt/B at /signin/init
4. authSRPComplete - send M1/M2 proofs to /signin/complete
The SRP implementation uses the RFC 5054 2048-bit group with SHA-256
and Apple's NoUserNameInX variant. Password derivation supports both
s2k and s2k_fo protocols via SHA-256 + PBKDF2.
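A sketch of the password derivation, assuming the usual s2k / s2k_fo
behaviour (the detail that s2k_fo hex-encodes the SHA-256 digest before
PBKDF2 is an assumption, not taken from this commit):

```go
import (
	"crypto/sha256"
	"encoding/hex"

	"golang.org/x/crypto/pbkdf2"
)

// derivePassword implements the s2k / s2k_fo key derivation feeding the SRP
// x value: SHA-256 of the password (hex-encoded for s2k_fo), then
// PBKDF2-SHA256 with the server-supplied salt and iteration count.
func derivePassword(password string, salt []byte, iterations int, protocol string) []byte {
	digest := sha256.Sum256([]byte(password))
	msg := digest[:]
	if protocol == "s2k_fo" {
		// Assumption: s2k_fo feeds the hex form of the digest into PBKDF2.
		msg = []byte(hex.EncodeToString(digest[:]))
	}
	return pbkdf2.Key(msg, salt, iterations, 32, sha256.New)
}
```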
The 2FA and trust token flow is unchanged. Auth headers for all
idmsa.apple.com requests now include X-Apple-Auth-Attributes,
X-Apple-Frame-Id, and use Origin/Referer of https://idmsa.apple.com.
Fixes #8587
Commit a3e1312d accidentally replaced io.NopCloser(in) with a bare
io.Reader when assigning req.Body in uploadSinglepartPutObject.
rclone wraps upload readers in an accounting.Account for progress
tracking. When the AWS SDK calls Seek on the body, Account.Seek does
a type assert on the inner reader. With a bare io.Reader the type
is unexpected and causes:
operation error S3: PutObject, serialization failed: internal error:
Seek not implemented for io.nopCloser
With io.NopCloser(in) the type assert works correctly for both seekable
and non-seekable readers.
Restore io.NopCloser(in) to wrap the reader correctly in all cases.
Verified by running both before (regression confirmed) and after (fix
confirmed):
go test ./backend/s3/... ./fs/operations/... -remote TestS3:
Remove the POSIX_FADV_DONTNEED and POSIX_FADV_SEQUENTIAL calls
from the local backend. The DONTNEED calls cause severe spinlock
contention on parallel file systems (and any system with many
concurrent transfers), because each call triggers per-page cache
teardown under a global lock.
Observed on a 256-core system running rclone with 64 parallel
transfers over Lustre: 69% of all CPU cycles were spent in
kernel spinlock contention from the fadvise path, with effective
throughput well below hardware capability.
The kernel's own page reclaim (kswapd) handles eviction more
efficiently from a single context. Since rclone does not always
read files sequentially (e.g. multipart uploads rewind and
re-read blocks), FADV_SEQUENTIAL was also not reliably correct.
This is consistent with the non-Linux behavior (which never
called fadvise) and with restic's decision to remove identical
code (restic/restic#670).
Fixes #7886
This PR optimizes the PROPFIND requests in the webdav backend to only ask for
the specific properties rclone actually needs.
Currently, the generic webdav backend sends an empty XML body during directory
listing (listAll), which causes the server to fall back to allprops by default.
This forces the server to return properties we never use, such as
getcontenttype.
Fetching getcontenttype can be a very expensive operation on the server side.
For instance, in the official golang.org/x/net/webdav library, determining the
content type requires the server to open the file and read the first 500 bytes.
For a directory with 1,300 files in my environment, rclone ls time dropped from
~30s to ~4s (as fast as native ls).
For backwards compatibility this only applies to the `other` vendor
setting, though it could be expanded to more vendors.
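A sketch of a restricted PROPFIND body; the exact property set rclone
requests may differ slightly, but the point is to name only what is needed
instead of falling back to allprop:

```go
const propfindBody = `<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
 <d:prop>
  <d:displayname/>
  <d:getlastmodified/>
  <d:getcontentlength/>
  <d:resourcetype/>
  <d:getetag/>
 </d:prop>
</d:propfind>`
```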
AWS S3 requires Content-MD5 for PutObject with Object Lock parameters.
Since rclone passes a non-seekable io.Reader, the SDK cannot compute
checksums automatically. Buffer the body and compute MD5 manually for
singlepart PutObject and presigned request uploads when Object Lock
parameters are set. Multipart uploads are unaffected as Object Lock
headers go on CreateMultipartUpload which has no body.
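A sketch of the single-part path (variable names illustrative):

```go
// AWS requires Content-MD5 when Object Lock parameters are present, and
// the SDK cannot hash a non-seekable reader, so buffer and hash the body.
buf, err := io.ReadAll(in)
if err != nil {
	return err
}
sum := md5.Sum(buf)
req.Body = bytes.NewReader(buf)
req.ContentMD5 = aws.String(base64.StdEncoding.EncodeToString(sum[:]))
```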
Add object_lock_supported provider quirk (default true) to allow
skipping Object Lock integration tests on providers with incomplete
S3 API support. Set to false for GCS which uses non-standard
x-goog-bypass-governance-retention header and doesn't implement
PutObjectLegalHold/GetObjectLegalHold.
Add Multipart and Presigned subtests to Object Lock integration tests
to cover all three upload paths.
Fixes #9199