next_page is not currently being returned on listings, which causes
the rclone listing code to go wrong. It used to be returned, so this is
likely a regression in Drime.
This changes the page counter to be calculated from current_page and
last_page. On the first page request, last_page is just current_page+1.
Drime appears to cap per_page at 200. As more pages are requested,
last_page increments by 1 until current_page = last_page.
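A minimal sketch of the paging decision this implies (the struct and field names are illustrative, not the backend's actual types):

```go
// Illustrative response fields for one listing page.
type listPage struct {
	CurrentPage int `json:"current_page"`
	LastPage    int `json:"last_page"`
	PerPage     int `json:"per_page"` // capped to 200 by the server
}

// hasMore reports whether another page should be requested. last_page
// runs one ahead of current_page until the final page, where the two
// values become equal and the loop stops.
func hasMore(p listPage) bool {
	return p.CurrentPage < p.LastPage
}
```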
Add read-only iCloud Photos support to the existing iclouddrive
backend via the `service = photos` config option.
Also includes auth improvements on top of #9209's SRP authentication.
**Photos features:**
- 3-level hierarchy: libraries (Personal + Shared Photo Library) →
albums → photos/videos
- Server-side smart albums (All Photos, Videos, Favorites,
Screenshots, Live, Bursts, Panoramas, Slo-mo, Time-lapse, Portrait,
Long Exposure, Animated, Hidden, Recently Deleted)
- User-created albums and nested album folders
- Live Photo `.MOV` companions as first-class entries
- Edited photo versions (`-edited` suffix) and RAW alternatives
- Duplicate filename dedup for camera counter wrap collisions
- Parallel cold listing for large albums
- Delta sync via CloudKit `changes/zone` - warm listings near-instant from disk cache
- Disk cache (libraries, albums, photos) with atomic writes for crash safety
- `ChangeNotify` support for FUSE mounts via `changes/zone` polling
- `ListR` support for `--fast-list` and recursive operations
- `--metadata` support - width, height, added-time, favorite, hidden
- Fresh download URLs per file - no stale URL failures on long copies
- FUSE mount documentation with recommended flags
**Auth improvements over #9209:**
- SMS 2FA fallback for users without trusted Apple devices
- Explicit push notification request - fixes iOS/macOS 26.4+ where 409
no longer auto-pushes
- Thread safety for concurrent FUSE callers (mutexes on session and client state)
- Session endpoint caching - skips ~5s `/validate` round-trip on warm start
- `Disconnect` support - clears auth state + disk cache
- PCS cookie support for Advanced Data Protection accounts, including
trusted-device approval for PCS cookies
Built on @coughlanio's Photos PoC (Closes #8734) and @mikegillan's SRP auth (#9209).
Fixes #7982
Co-authored-by: Chris Coughlan <chris@coughlan.io>
Implement multipart upload support with configurable chunk size and concurrency options
Enable OpenChunkWriter with per-chunk encryption
Enhance multipart upload handling with new upload cutoff and error management for small files
The fix for #8241 set FileRequestIntent=Backup on
service.ClientOptions so that the azfile SDK emits the
x-ms-file-request-intent header. That fix was inadvertently dropped in
this commit (released in v1.73.0):
846f193806 azureblob,azurefiles: factor the common auth into a library
This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:
400 MissingRequiredHeader: x-ms-file-request-intent
This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
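A rough sketch of what the quirk changes in the CreateBucket error handling (the error codes are from the S3 API; the function itself is illustrative, not rclone's actual code):

```go
import (
	"errors"

	"github.com/aws/smithy-go"
)

// handleCreateBucketError sketches the effect of use_already_exists.
// With the quirk off (the new Ceph setting), BucketAlreadyExists is
// treated as "the bucket is there, carry on", because Ceph rgw returns
// it even for buckets the caller already owns.
func handleCreateBucketError(err error, useAlreadyExists bool) error {
	var apiErr smithy.APIError
	if err == nil || !errors.As(err, &apiErr) {
		return err
	}
	switch apiErr.ErrorCode() {
	case "BucketAlreadyOwnedByYou":
		return nil
	case "BucketAlreadyExists":
		if !useAlreadyExists {
			return nil
		}
	}
	return err
}
```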
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.
Previously, these failures would be ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().
This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
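A minimal sketch of the flipped check (the struct and handler wiring are simplified; only the NoAuth field comes from the description above):

```go
import "errors"

// Call describes one rc endpoint (simplified).
type Call struct {
	Path   string
	NoAuth bool // opt out of auth only for explicitly safe, read-only endpoints
}

// checkAuth enforces the new default-deny policy: every endpoint
// requires authentication unless it is explicitly marked NoAuth.
func checkAuth(call *Call, authenticated bool) error {
	if call.NoAuth || authenticated {
		return nil
	}
	return errors.New("authentication required")
}
```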
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
Fixes #9342
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
Go runtime automatically decompressed the object on download.
If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --azureblob-decompress is not set, the compressed files will be
downloaded as-is, providing compressed objects with intact size and hash
information.
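The effect is roughly the following (a sketch; rclone's actual Object type and option plumbing differ):

```go
import "strings"

// Illustrative object metadata.
type object struct {
	size            int64
	md5             string
	contentEncoding string
}

// adjustForDecompress clears the size and hash when the blob will be
// transparently decompressed on download, so length and hash checks are
// skipped instead of failing with a mismatch.
func adjustForDecompress(o *object, decompress bool) {
	if decompress && strings.Contains(o.contentEncoding, "gzip") {
		o.size = -1 // size after decompression is unknown
		o.md5 = ""  // stored hash covers the compressed data, so drop it
	}
}
```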
Fixes #9337
Set the User-Agent to include the APN prefix for Azure backends
(azureblob, azurefiles, onelake) to identify rclone as a Microsoft
Partner. The User-Agent is now:
APN/1.0 rclone/1.0 rclone/<version>
Add a ctx parameter to vfs.New() so callers can pass in a context
carrying ConfigInfo and FilterInfo. The context is stripped of
cancellation, but the config and filter values are preserved into a
fresh background context.
`json:"entry_permissions"` is known to be either empty [] or of
structure {string: boolean}. This may have been a breaking API change on
Drime's side. Because EntryPermissions is not used, the type was changed
to `any` to capture both cases, otherwise we could implement custom
unmarshalling for that type.
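A sketch of the permissive field (the struct and its other field are illustrative):

```go
// entry_permissions arrives either as [] or as an object such as
// {"edit": true}, so the field is declared as `any` and otherwise ignored.
type driveItem struct {
	Name             string `json:"name"` // illustrative sibling field
	EntryPermissions any    `json:"entry_permissions"`
}

// Both of these decode into driveItem without error:
//   {"name": "a", "entry_permissions": []}
//   {"name": "b", "entry_permissions": {"edit": true}}
```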
The Global Acceleration Endpoint (cos.accelerate.myqcloud.com) of
Tencent COS does not seem to support "CreateBucket" (maybe also other
bucket management operations). Since the acceleration functionality must
be enabled per-bucket in the Tencent Cloud console, the bucket will
always exist before this endpoint is used, so this check can be safely
skipped.
Now, "no_check_bucket = true" will be auto set when using this endpoint.
Why "NewFs()": on-the-fly remotes (connection string remotes), for
example, ":s3,provider=TencentCOS,...:..." will also be fixed.
Why no unit test: I can't find a good way to test "NewFs()" without
leveraging live endpoints. I think we can extract all existing mutations
for different providers (e.g., AWS, Fastly, and Rabata) from "NewFs()"
to a new function in the future.
Some Tencent docs about this CDN endpoint:
- English: Global Acceleration Endpoint | https://www.tencentcloud.com/pt/document/product/436/40700
- Chinese: 对象存储 全球加速概述_腾讯云 | https://cloud.tencent.com/document/product/436/38866
Assisted-By: OpenCode
When a user has --s3-versions set but lacks the s3:GetBucketVersioning
permission, GetBucketVersioning returns an error and isVersioned() caches
the result as false. This caused CleanUpHidden (backend cleanup-hidden) to
silently exit with "bucket is not versioned so not removing old versions",
ignoring the user's explicit --s3-versions flag.
Fix this by trusting the explicit --s3-versions flag in purge(), bypassing
the GetBucketVersioning check when the user has explicitly declared the
bucket is versioned.
Previously ssh.InsecureIgnoreHostKey() was set unconditionally as the
default HostKeyCallback with no indication to the user.
This logs a warning pointing users to the documentation on how to
enable host key validation.
See: https://github.com/rclone/rclone/security/code-scanning/167
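A sketch of the changed default (the log wording and the pointer to the known_hosts_file option are assumptions, not the exact message):

```go
import (
	"github.com/rclone/rclone/fs"
	"golang.org/x/crypto/ssh"
)

// defaultHostKeyCallback still skips verification, but now tells the
// user that it is doing so and where to read about enabling validation.
func defaultHostKeyCallback() ssh.HostKeyCallback {
	fs.Logf(nil, "sftp: host key verification is disabled by default; "+
		"see the sftp documentation (e.g. the known_hosts_file option) to enable it")
	return ssh.InsecureIgnoreHostKey()
}
```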
The Linkbox open API (/api/open/file_search) no longer returns download
URLs, breaking all downloads. This switches to using the web API
(/api/file/my_file_list/web) which requires email+password authentication
but returns working download URLs.
This will unfortunately require changing your existing rclone config.
- Add email, password, and web_token config options
- Add web API login via /api/user/login_email with token caching and retry
- Create a separate CDN HTTP client with HTTP/2 disabled and a browser
User-Agent to avoid CDN fingerprint blocking (see the sketch below)
- Remove searchOK regex and name-filtering (web API doesn't support it)
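The CDN client mentioned above is roughly of this shape (a sketch; the names are illustrative and the User-Agent value is set by callers):

```go
import (
	"crypto/tls"
	"net/http"
)

// newCDNClient builds a client with HTTP/2 disabled. Setting a non-nil,
// empty TLSNextProto map is the documented way to turn off HTTP/2 in
// net/http. Callers additionally send a browser-like User-Agent header
// on each request to avoid the CDN's fingerprint blocking.
func newCDNClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
		},
	}
}
```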
Apple IDs are case-insensitive, but the SRP proof computation (M1)
hashes the username client-side. The old plaintext signin let the
server normalize the case, but with SRP the client must match.
Lowercase the Apple ID before use so mixed-case IDs authenticate
correctly.
Reported-by: ArturKlauser
This fixes China mainland iCloud authentication by deriving the Origin
and Referer headers from authEndpoint instead of hardcoding idmsa.apple.com.
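The derivation is essentially this (a sketch):

```go
import "net/url"

// originFromEndpoint derives the Origin (and Referer) value from the
// configured auth endpoint instead of hardcoding idmsa.apple.com, so the
// China mainland endpoint gets matching headers.
func originFromEndpoint(authEndpoint string) (string, error) {
	u, err := url.Parse(authEndpoint)
	if err != nil {
		return "", err
	}
	return u.Scheme + "://" + u.Host, nil
}
```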
Fixes compatibility with PR #8818 (China region support) and PR #9209
(SRP authentication).
Signed-off-by: Xiangzhe <xiangzhedev@gmail.com>
Apple has deprecated the legacy /appleauth/auth/signin endpoint and
now blocks it, causing "Invalid Session Token" errors for all users
when their trust token expires. The browser login flow now requires
SRP (Secure Remote Password), a cryptographic handshake that never
transmits the password.
Replace Session.SignIn() with a multi-step SRP-6a flow:
1. authStart - initialize session at /authorize/signin
2. authFederate - submit account name to /federate
3. authSRPInit - exchange client public value for salt/B at /signin/init
4. authSRPComplete - send M1/M2 proofs to /signin/complete
The SRP implementation uses the RFC 5054 2048-bit group with SHA-256
and Apple's NoUserNameInX variant. Password derivation supports both
s2k and s2k_fo protocols via SHA-256 + PBKDF2.
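A sketch of the password derivation step described above (parameter plumbing is simplified and the s2k_fo handling is an assumption based on that description):

```go
import (
	"crypto/sha256"
	"encoding/hex"

	"golang.org/x/crypto/pbkdf2"
)

// derivePasswordKey hashes the password with SHA-256 and stretches it
// with PBKDF2 using the server-supplied salt and iteration count. For
// the s2k_fo protocol the digest is hex-encoded before stretching.
func derivePasswordKey(password string, salt []byte, iterations int, protocol string) []byte {
	digest := sha256.Sum256([]byte(password))
	secret := digest[:]
	if protocol == "s2k_fo" {
		secret = []byte(hex.EncodeToString(secret))
	}
	return pbkdf2.Key(secret, salt, iterations, sha256.Size, sha256.New)
}
```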
The 2FA and trust token flow is unchanged. Auth headers for all
idmsa.apple.com requests now include X-Apple-Auth-Attributes,
X-Apple-Frame-Id, and use Origin/Referer of https://idmsa.apple.com.
Fixes #8587
Commit a3e1312d accidentally replaced io.NopCloser(in) with a bare
io.Reader when assigning req.Body in uploadSinglepartPutObject.
rclone wraps upload readers in an accounting.Account for progress
tracking. When the AWS SDK calls Seek on the body, Account.Seek does
a type assert on the inner reader. With a bare io.Reader the type
is unexpected and causes:
operation error S3: PutObject, serialization failed: internal error:
Seek not implemented for io.nopCloser
With io.NopCloser(in) the type assert works correctly for both seekable
and non-seekable readers.
Restore io.NopCloser(in) to wrap the reader correctly in all cases.
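In short, the relevant assignment (sketched outside its real context):

```go
import (
	"io"

	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// buildPutObject sketches the fixed assignment: the reader is wrapped in
// io.NopCloser again so the accounting layer's Seek type assertion sees
// the concrete type it expects.
func buildPutObject(in io.Reader) *s3.PutObjectInput {
	req := &s3.PutObjectInput{}
	// Regression: req.Body = in  (bare io.Reader)
	req.Body = io.NopCloser(in)
	return req
}
```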
Verified by running both before (regression confirmed) and after (fix
confirmed):
go test ./backend/s3/... ./fs/operations/... -remote TestS3:
Remove the POSIX_FADV_DONTNEED and POSIX_FADV_SEQUENTIAL calls
from the local backend. The DONTNEED calls cause severe spinlock
contention on parallel file systems (and any system with many
concurrent transfers), because each call triggers per-page cache
teardown under a global lock.
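For reference, the removed calls were of this form (a Linux-only sketch using golang.org/x/sys/unix; the surrounding plumbing is omitted):

```go
import "golang.org/x/sys/unix"

// adviseSequential hinted that the file would be read sequentially.
func adviseSequential(fd int) error {
	return unix.Fadvise(fd, 0, 0, unix.FADV_SEQUENTIAL)
}

// adviseDontNeed asked the kernel to drop the file's pages from the page
// cache after reading. Each such call tears down cached pages under a
// global lock, which is what caused the spinlock contention.
func adviseDontNeed(fd int, off, length int64) error {
	return unix.Fadvise(fd, off, length, unix.FADV_DONTNEED)
}
```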
Observed on a 256-core system running rclone with 64 parallel
transfers over Lustre: 69% of all CPU cycles were spent in
kernel spinlock contention from the fadvise path, with effective
throughput well below hardware capability.
The kernel's own page reclaim (kswapd) handles eviction more
efficiently from a single context. Since rclone does not always
read files sequentially (e.g. multipart uploads rewind and
re-read blocks), FADV_SEQUENTIAL was also not reliably correct.
This is consistent with the non-Linux behavior (which never
called fadvise) and with restic's decision to remove identical
code (restic/restic#670).
Fixes#7886
This PR optimizes the PROPFIND requests in the webdav backend to only ask for
the specific properties rclone actually needs.
Currently, the generic webdav backend sends an empty XML body during directory
listing (listAll), which causes the server to fall back to allprops by default.
This forces the server to return properties we never use, such as
getcontenttype.
Fetching getcontenttype can be a very expensive operation on the server side.
For instance, in the official golang.org/x/net/webdav library, determining the
content type requires the server to open the file and read the first 500 bytes.
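The PROPFIND body changes from empty (the allprop fallback) to an explicit property list along these lines (the exact set of properties rclone requests may differ):

```go
// propfindBody restricts the PROPFIND to the properties a directory
// listing actually needs, instead of letting the server fall back to
// allprop and compute things like getcontenttype.
const propfindBody = `<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
 <d:prop>
  <d:displayname/>
  <d:resourcetype/>
  <d:getcontentlength/>
  <d:getlastmodified/>
  <d:getetag/>
 </d:prop>
</d:propfind>`
```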
For a directory with 1,300 files in my environment, rclone ls time dropped from
~30s to ~4s (as fast as native ls).
For backwards compatibility this only applies to the "other" vendor
setting; it could be expanded to other vendors later.
AWS S3 requires Content-MD5 for PutObject with Object Lock parameters.
Since rclone passes a non-seekable io.Reader, the SDK cannot compute
checksums automatically. Buffer the body and compute MD5 manually for
singlepart PutObject and presigned request uploads when Object Lock
parameters are set. Multipart uploads are unaffected as Object Lock
headers go on CreateMultipartUpload which has no body.
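The buffering step looks roughly like this (a sketch; the real code handles sizes, pooling and error wrapping differently):

```go
import (
	"bytes"
	"crypto/md5"
	"encoding/base64"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// addContentMD5 buffers a non-seekable body and sets Content-MD5 so that
// a PutObject carrying Object Lock parameters is accepted by AWS.
func addContentMD5(req *s3.PutObjectInput, in io.Reader) error {
	buf, err := io.ReadAll(in)
	if err != nil {
		return err
	}
	sum := md5.Sum(buf)
	req.ContentMD5 = aws.String(base64.StdEncoding.EncodeToString(sum[:]))
	req.Body = bytes.NewReader(buf)
	return nil
}
```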
Add object_lock_supported provider quirk (default true) to allow
skipping Object Lock integration tests on providers with incomplete
S3 API support. Set to false for GCS which uses non-standard
x-goog-bypass-governance-retention header and doesn't implement
PutObjectLegalHold/GetObjectLegalHold.
Add Multipart and Presigned subtests to Object Lock integration tests
to cover all three upload paths.
Fixes #9199
The WebDAV implementation already permits redirects on PROPFIND for
listing paths in the `listAll` method but does not permit this for
metadata in `readMetaDataForPath`. This results in a strange experience
for endpoints that heavily use redirects -
```
rclone lsl endpoint:
```
functions and lists `hello_world.txt` in its output but
```
rclone lsl endpoint:hello_world.txt
```
fails with an HTTP 307.
The git history for this setting indicates this was done to avoid
an issue where redirects cause a verb change to GET in the Go HTTP
client; it does not appear to be problematic with HTTP 307.
To fix this, a new `CheckRedirect` function is added in the `rest`
library which makes the client keep the same verb across redirects, and
this is enabled for the PROPFIND case.
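A minimal sketch of the mechanism (Go already preserves the verb for 307/308; rewriting it in CheckRedirect covers the other redirect codes too, and re-attaching the PROPFIND body is omitted here, so the actual `rest` implementation is more involved):

```go
import "net/http"

// keepMethodOnRedirect makes the client reuse the original verb (e.g.
// PROPFIND) on redirected requests instead of falling back to GET.
func keepMethodOnRedirect(client *http.Client) {
	client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
		if len(via) > 0 {
			req.Method = via[0].Method
		}
		return nil
	}
}
```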
When specifying --drime-workspace-id, a file greater than the limit at
which file uploads get chunked would ignore the specified ID and get put
into the default workspace instead.
Completes the fix described in commit 2360e65 by providing the
workspace ID to the Drime API call when closing the chunk writer.
Add AU East 1, EU South 1, JP Central 1, UK East 1, and US Central 1
regions and endpoints for Fastly Object Storage.
Also sort the entries alphabetically.
Add support for S3 Object Lock with the following new options:
- --s3-object-lock-mode: set retention mode (GOVERNANCE/COMPLIANCE/copy)
- --s3-object-lock-retain-until-date: set retention date (RFC3339/duration/copy)
- --s3-object-lock-legal-hold-status: set legal hold (ON/OFF/copy)
- --s3-bypass-governance-retention: bypass GOVERNANCE lock on delete
- --s3-bucket-object-lock-enabled: enable Object Lock on bucket creation
- --s3-object-lock-set-after-upload: apply lock via separate API calls
The special value "copy" preserves the source object's setting when used
with --metadata flag, enabling scenarios like cloning objects from
COMPLIANCE to GOVERNANCE mode while preserving the original retention date.
Includes integration tests that create a temporary Object Lock bucket covering:
- Retention Mode and Date
- Legal Hold
- Apply settings after upload
- Override protections using bypass-governance flag
The tests are gracefully skipped on providers that do not support Object Lock.
Fixes #4683
Closes #7894 #7893 #8866
Use URLPathEscapeAll instead of URLPathEscape for path encoding.
URLPathEscape relies on Go's url.URL.String() which only minimally
escapes paths - reserved sub-delimiter characters like semicolons and
equals signs pass through unescaped. Per RFC 3986 section 3.3, these
characters must be percent-encoded when used as literal values in
path segments.
Some WebDAV servers (notably dCache/Jetty) interpret unescaped
semicolons as path parameter delimiters, which truncates filenames
at the semicolon position. URLPathEscapeAll encodes everything
except [A-Za-z0-9/], which is safe for all servers.
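The difference can be sketched with a minimal encoder of the same shape (encode every byte outside [A-Za-z0-9/]):

```go
import (
	"fmt"
	"strings"
)

// pathEscapeAll percent-encodes everything except [A-Za-z0-9/], so ";"
// becomes "%3B" and "=" becomes "%3D" rather than passing through as
// they do with url.URL.String()-based escaping.
func pathEscapeAll(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c == '/',
			'A' <= c && c <= 'Z',
			'a' <= c && c <= 'z',
			'0' <= c && c <= '9':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, "%%%02X", c)
		}
	}
	return b.String()
}
```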
Fixes #9082