next_page is not currently being returned on listings, which causes
the rclone listing code to go wrong. It used to be returned, so this is
likely a regression in Drime.
This changes the page counter to calculate using current_page and
last_page. On the first page request, last_page is just current_page+1.
Drime appears to cap per_page at 200. As more pages are
requested, last_page increments by 1 until current_page = last_page.
Bump go-proton-api and Proton-API-Bridge to versions that send the new
NameSignatureEmail field and omit NodePassphraseSignature/SignatureEmail
for ordinary nodes, matching the schema accepted by the Proton Drive
API. Without this, rclone moveto, --backup-dir, server-side rename and
DirMove all failed with "value cannot be empty" / "outdated app" 422
errors.
Fixes #8512
Add read-only iCloud Photos support to the existing iclouddrive
backend via `service = photos` config option.
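A minimal config sketch (the remote name is illustrative; `type` and
`service` are the options named above):

```ini
[icloudphotos]
type = iclouddrive
service = photos
```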
Also includes auth improvements on top of #9209's SRP authentication.
**Photos features:**
- 3-level hierarchy: libraries (Personal + Shared Photo Library) →
albums → photos/videos
- server-side smart albums (All Photos, Videos, Favorites,
Screenshots, Live, Bursts, Panoramas, Slo-mo, Time-lapse, Portrait,
Long Exposure, Animated, Hidden, Recently Deleted)
- User-created albums and nested album folders
- Live Photo `.MOV` companions as first-class entries
- Edited photo versions (`-edited` suffix) and RAW alternatives
- Duplicate filename dedup for camera counter wrap collisions
- Parallel cold listing for large albums
- Delta sync via CloudKit `changes/zone` - warm listings near-instant from disk cache
- Disk cache (libraries, albums, photos) with atomic writes for crash safety
- `ChangeNotify` support for FUSE mounts via `changes/zone` polling
- `ListR` support for `--fast-list` and recursive operations
- `--metadata` support - width, height, added-time, favorite, hidden
- Fresh download URLs per file - no stale URL failures on long copies
- FUSE mount documentation with recommended flags
**Auth improvements over #9209:**
- SMS 2FA fallback for users without trusted Apple devices
- Explicit push notification request - fixes iOS/macOS 26.4+ where 409
no longer auto-pushes
- Thread safety for concurrent FUSE callers (mutexes on session and client state)
- Session endpoint caching - skips ~5s `/validate` round-trip on warm start
- `Disconnect` support - clears auth state + disk cache
- PCS cookie support for Advanced Data Protection accounts, including
trusted-device approval for PCS cookies
Built on @coughlanio's Photos PoC (Closes #8734) and @mikegillan's SRP auth (#9209).
Fixes #7982
Co-authored-by: Chris Coughlan <chris@coughlan.io>
On Windows, passing "*" as mountPoint to the mount/mount RC command
auto-assigns a drive letter (e.g. "Z:"), but the resolved letter was
never propagated back to mountlib. This caused liveMounts to be keyed
on the literal "*", breaking tracking of multiple mounts and making
unmount unreliable.
Change MountFn to return the actual mount point as an additional
return value. Update MountPoint.Mount() to store the resolved value,
and mountRc() to use it as the liveMounts key. The mount/mount RC
response now returns the actual mountPoint so callers can discover
which drive letter was assigned.
The background kicker goroutine had a bare select outside a for loop,
so the 5s ticker fired at most once before the goroutine exited. The
intent was to run every 5s for the lifetime of the Downloaders.
This wraps the select in a for loop so the ticker fires repeatedly
until ctx is cancelled.
In practice this was benign because every downloader exit and every
successful Write already calls kickWaiters, so the background kicker
is only load-bearing when a waiter is queued, no downloader is
running, and _ensureDownloader failed transiently. In that state,
before this fix, the waiter would hang until another Download() call
or Close() arrived; now it gets retried every 5s and will either
recover or accumulate enough errors to trip maxErrorCount and error
out cleanly.
Implement multipart upload support with configurable chunk size and concurrency options
Enable OpenChunkWriter with per-chunk encryption
Enhance multipart upload handling with new upload cutoff and error management for small files
Samsung TVs have a bug where they duplicate file extensions when both
the title contains an extension and the MIME type indicates the same
file type. For example, "photo.jpg" becomes "photo.jpg.jpg".
Remove extensions from <dc:title> while keeping them in the resource URL
and MIME type. This provides a cleaner display and prevents Samsung TVs
from incorrectly "fixing" what they perceive as missing extensions.
Samsung TVs have strict XML parsers that fail to interpret &#34;
(the numeric quote entity) correctly within DIDL-Lite metadata, causing
files to appear as empty folders. By replacing &#34; with &quot;
(the named quote entity) in all marshaled XML, Samsung TVs can now
properly parse the metadata and display files.
This handles the "Big 5" XML entities that might cause parsing issues:
- &#34; -> &quot; (double quotes)
- &#39; -> &apos; (apostrophes)
- &#38; -> &amp; (ampersands)
- &#60; -> &lt; (less than)
- &#62; -> &gt; (greater than)
While Go's xml.Marshal already uses named entities for &, <, >
characters, this ensures complete protection against any edge cases
where numeric entities might be generated. Samsung TVs are known
to have strict XML parsers that can't handle numeric entities.
Fixes #9346
Samsung TVs sometimes send Browse requests with empty ObjectID
parameters (<ObjectID></ObjectID>) which causes DLNA servers to
return errors. Default empty ObjectID to "0" (root container) to
maintain compatibility.
This fix is based on ReadyMedia/MiniDLNA Bug 311 which documented
the same issue and solution for Samsung TVs.
See #9346
Add xmlns:sec="http://www.sec.co.kr/" namespace to DIDL-Lite responses
as required by Samsung TV DLNA implementations. This namespace is used
by working DLNA servers like MediaBrowser/Emby for Samsung compatibility.
Based on research of open source DLNA servers that successfully work
with Samsung TVs.
See #9346
Containers (directories) never had their Date field set, producing
<dc:date>0001-01-01</dc:date> (Go's zero time) in DIDL-Lite metadata.
This invalid date can confuse strict DLNA clients.
Set the dc:date to the directory's modification time, and as a safety
net, omit the dc:date element entirely when the timestamp is zero.
See #9346
The childCount attribute on DLNA containers was hardcoded to 1
regardless of how many items the directory actually contained. Some
DLNA clients (notably Samsung TVs) use childCount to decide whether
to browse into a container. Report the actual number of directory
entries instead.
See #9346
Samsung TVs are strict DLNA clients that expect SOAP response arguments
in the order defined by the service SCPD (Service Control Protocol
Description). The Browse response was using a Go map which produces
random iteration order, causing arguments like Result, NumberReturned,
TotalMatches, and UpdateID to appear in unpredictable order. Samsung TVs
fail to parse such responses and never proceed to browse directory
children, showing "no content" to the user.
Replace the map[string]string return type with an ordered []soapArg
slice throughout the UPnPService.Handle() interface, ensuring response
arguments always appear in SCPD-defined order.
See #9346
Fix CVE-2026-32952: a malicious NTLM challenge message can cause a
slice out-of-bounds panic, which can crash any Go process using
ntlmssp.Negotiator as an HTTP transport.
rclone uses this in the webdav backend to access SharePoint.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The fix for #8241, which set FileRequestIntent=Backup on
service.ClientOptions so the azfile SDK emits the
x-ms-file-request-intent header, was inadvertently dropped in this
commit (released in v1.73.0):
846f193806 azureblob,azurefiles: factor the common auth into a library
This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:
400 MissingRequiredHeader: x-ms-file-request-intent
This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
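A sketch of the resulting provider entry (YAML structure simplified;
the key name is taken from the commit message):

```yaml
- name: Ceph
  quirks:
    # Ceph returns BucketAlreadyExists even for buckets the caller
    # owns, so treat it as "bucket already exists" and carry on.
    use_already_exists: false
```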
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
Parsing a WEBP image with an invalid, large size panics on 32-bit platforms.
This only affects users of the Internxt backend on 32-bit platforms.
See: https://pkg.go.dev/vuln/GO-2026-4961
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.
Previously, these failures were ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().
This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
The operations/fsinfo RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to instantiate arbitrary backends via
inline backend definitions.
See GHSA-jfwf-28xr-xw6q
Snapshot the NoAuth setting when the RC server is created rather than
reading it from the mutable options struct on each request. This
prevents any runtime mutation of rc.NoAuth (e.g. via options/set)
from disabling the auth gate for protected RC methods.
See GHSA-25qr-6mpr-f7qx
The options/set RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to mutate global runtime options
including rc.NoAuth, which disables the auth gate for all protected
RC methods. Require authentication for options/set.
See GHSA-25qr-6mpr-f7qx
The files.com integration tests for rcat/copyurl were failing because
fs/account.Account declared a ReadAt method even when the underlying
handle did not support it. The files.com SDK decided to use ReadAt to
speed up transfers, which then failed.
ReadAt and Seek methods were added in this commit to support the
archive command:
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This fixes the problem by adding new methods to the Account object,
WithSeeker/WithReaderAt/WithReadAtSeeker, which produce an object with
the desired methods or error if it isn't possible.
This stops Account advertising interfaces it can't implement, which is
bad Go practice.