The childCount attribute on DLNA containers was hardcoded to 1
regardless of how many items the directory actually contained. Some
DLNA clients (notably Samsung TVs) use childCount to decide whether
to browse into a container. Report the actual number of directory
entries instead.
See #9346
Samsung TVs are strict DLNA clients that expect SOAP response arguments
in the order defined by the service SCPD (Service Control Protocol
Description). The Browse response was using a Go map which produces
random iteration order, causing arguments like Result, NumberReturned,
TotalMatches, and UpdateID to appear in unpredictable order. Samsung TVs
fail to parse such responses and never proceed to browse directory
children, showing "no content" to the user.
Replace the map[string]string return type with an ordered []soapArg
slice throughout the UPnPService.Handle() interface, ensuring response
arguments always appear in SCPD-defined order.
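The idea can be sketched as follows. This is a minimal illustration, not the real rclone code: the `soapArg` name comes from the text above, but its fields and the rendering helper are assumptions.

```go
package main

import "fmt"

// soapArg is a name/value pair emitted in declaration order.
// (Field names here are illustrative; the real struct may differ.)
type soapArg struct {
	Name, Value string
}

// renderArgs serialises response arguments in exactly the order given,
// unlike ranging over a map[string]string, whose iteration order is
// deliberately randomised by the Go runtime.
func renderArgs(args []soapArg) string {
	out := ""
	for _, a := range args {
		out += fmt.Sprintf("<%s>%s</%s>", a.Name, a.Value, a.Name)
	}
	return out
}

func main() {
	// SCPD-defined order for the Browse response.
	args := []soapArg{
		{"Result", "..."},
		{"NumberReturned", "1"},
		{"TotalMatches", "1"},
		{"UpdateID", "0"},
	}
	fmt.Println(renderArgs(args))
}
```

With a slice, every response renders the arguments identically; with a map, two consecutive requests could see different orders.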
See #9346
Fix CVE-2026-32952: A malicious NTLM challenge message can cause a slice
out-of-bounds panic, which can crash any Go process using
ntlmssp.Negotiator as an HTTP transport.
This is in use in rclone in the webdav backend to access sharepoint.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The fix for #8241, which set FileRequestIntent=Backup on
service.ClientOptions so the azfile SDK emits the
x-ms-file-request-intent header, was inadvertently dropped in this
commit (released in v1.73.0):
846f193806 azureblob,azurefiles: factor the common auth into a library
This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:
400 MissingRequiredHeader: x-ms-file-request-intent
This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
Parsing a WEBP image with an invalid, large size panics on 32-bit platforms.
This only affects users on 32-bit platforms using the Internxt backend.
See: https://pkg.go.dev/vuln/GO-2026-4961
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.
Previously, these failures would be ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().
This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
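The inverted check can be sketched as below; the `Call` struct is reduced to the fields relevant here and is not the full rc.Call definition:

```go
package main

import "fmt"

// Call sketches the relevant fields after the flip: authentication is
// required unless NoAuth is set explicitly.
type Call struct {
	Path   string
	NoAuth bool
}

// needsAuth is the inverted check: the safe default is "authenticate".
// Forgetting to set NoAuth now fails closed instead of open.
func needsAuth(c Call) bool { return !c.NoAuth }

func main() {
	fmt.Println(needsAuth(Call{Path: "core/version", NoAuth: true})) // false
	fmt.Println(needsAuth(Call{Path: "operations/fsinfo"}))          // true: default
}
```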
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
The operations/fsinfo RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to instantiate arbitrary backends via
inline backend definitions.
See GHSA-jfwf-28xr-xw6q
Snapshot the NoAuth setting when the RC server is created rather than
reading it from the mutable options struct on each request. This
prevents any runtime mutation of rc.NoAuth (e.g. via options/set)
from disabling the auth gate for protected RC methods.
See GHSA-25qr-6mpr-f7qx
The options/set RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to mutate global runtime options
including rc.NoAuth, which disables the auth gate for all protected
RC methods. Require authentication for options/set.
See GHSA-25qr-6mpr-f7qx
The files.com integration tests for rcat/copyurl were failing because
fs/account.Account was declaring a ReadAt method when the underlying
handle did not support it. The files.com SDK chose to use the ReadAt
method to speed up transfers, which then failed.
ReadAt and Seek methods were added in this commit to support the
archive command:
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This fixes the problem by adding new methods to the Account object:
WithSeeker/WithReaderAt/WithReadAtSeeker, which produce an object with
the desired methods or return an error if that isn't possible.
This stops Account advertising things it can't do, which is bad Go
practice.
Until now test_all relied entirely on per-goroutine defer finish()
calls in fstest/runs to stop test servers. A Ctrl-C, kill, or panic
aborted those defers and left docker containers running, breaking the
next run.
Register testserver.CleanupAll with lib/atexit so SIGINT/SIGTERM
delivery runs the sweep automatically. Also defer atexit.Run for the
normal exit and unrecovered-panic paths, and call it explicitly
before os.Exit(1) since os.Exit does not fire defers. The fs.Fatalf
call sites above only fire before any server starts so they need no
explicit sweep.
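A minimal model of the lib/atexit pattern, assuming only that handlers must run exactly once whether triggered by a signal or an explicit call (names here are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"sync"
	"syscall"
)

var (
	handlers []func()
	runOnce  sync.Once
)

func register(f func()) { handlers = append(handlers, f) }

// run executes all handlers at most once, so the signal path, the
// deferred normal-exit path, and an explicit pre-os.Exit call can all
// invoke it safely.
func run() {
	runOnce.Do(func() {
		for _, f := range handlers {
			f()
		}
	})
}

func main() {
	register(func() { fmt.Println("sweeping test servers") })

	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
	go func() { <-ch; run(); os.Exit(1) }() // signal path

	defer run() // normal exit and unrecovered panics; os.Exit skips defers
}
```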
Cleanup today is entirely per-goroutine via the stop closure that Start
returns. If the driver process is killed or panics, those deferred
stops never run and the underlying container keeps running.
Track every remote Start has brought up in a process-local map, and
expose CleanupAll which force-stops each tracked remote via the new
run.bash "force-stop" verb. The returned stop closure is now
sync.Once-wrapped so it and CleanupAll can both fire harmlessly. No
callers yet; wired up in fstest/test_all in a follow-up commit.
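The tracking map and the sync.Once wrapping can be sketched like this (simplified; function names approximate the description above):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu      sync.Mutex
	running = map[string]func(){}
)

// start registers stopFn under name and returns a sync.Once-wrapped
// stop, so the caller's defer and cleanupAll can both fire harmlessly.
func start(name string, stopFn func()) (stop func()) {
	var once sync.Once
	wrapped := func() {
		once.Do(func() {
			stopFn()
			mu.Lock()
			delete(running, name)
			mu.Unlock()
		})
	}
	mu.Lock()
	running[name] = wrapped
	mu.Unlock()
	return wrapped
}

// cleanupAll force-stops every remote still tracked.
func cleanupAll() {
	mu.Lock()
	stops := make([]func(), 0, len(running))
	for _, s := range running {
		stops = append(stops, s)
	}
	mu.Unlock()
	for _, s := range stops {
		s()
	}
}

func main() {
	stop := start("TestWebdav", func() { fmt.Println("stopping TestWebdav") })
	stop()       // runs the real stop once
	cleanupAll() // no-op: already stopped
}
```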
run.bash holds a persistent refcount file in the shared state directory
so multiple concurrent tests can share a single container. If a prior
test_all run is killed (e.g. Ctrl-C), the count never reaches zero on
the next run and the container is never stopped - forcing manual
cleanup.
Three fixes, all in fstest/testserver/init.d/run.bash:
- On start, if the refcount is non-zero but no container is running,
treat it as zero. This stops stale counts leaking into future runs.
- reset now `rm -rf`s RUN_ROOT (the per-server state) instead of
RUN_BASE (the shared parent), which was clobbering sibling services.
- New force-stop verb unconditionally stops the container and zeroes
the refcount. This is the primitive that the Go-side cleanup sweep
will call at end-of-run.
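The refcount state machine can be modelled in a few lines. This is a Go model of the shell logic for illustration, not the actual run.bash code:

```go
package main

import "fmt"

type server struct {
	refcount int
	running  bool
}

func (s *server) start() {
	if s.refcount > 0 && !s.running {
		s.refcount = 0 // stale count from a killed run: treat as zero
	}
	s.refcount++
	s.running = true
}

func (s *server) stop() {
	if s.refcount > 0 {
		s.refcount--
	}
	if s.refcount == 0 {
		s.running = false
	}
}

// forceStop is the new primitive: stop unconditionally, zero the count.
func (s *server) forceStop() {
	s.refcount = 0
	s.running = false
}

func main() {
	s := &server{refcount: 3, running: false} // left over from a killed run
	s.start()
	fmt.Println(s.refcount) // 1, not 4: the stale count was reset
	s.forceStop()
	fmt.Println(s.running) // false
}
```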
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
Fixes #9342
The temp directory name used random.String(2) giving only 676 possible
values. When multiple concurrent tests started in the same second, they
shared the same timestamp prefix, causing name collisions and shared
temp directories. This led to lock file conflicts, listing file races,
and file deletion errors.
Increase to random.String(8) to make collisions effectively impossible.
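The keyspace arithmetic, assuming a 26-character alphabet (which matches the 676 = 26×26 figure above):

```go
package main

import "fmt"

// keyspace computes alphabet^n, the number of distinct suffixes
// random.String(n) can produce.
func keyspace(alphabet, n int) uint64 {
	out := uint64(1)
	for i := 0; i < n; i++ {
		out *= uint64(alphabet)
	}
	return out
}

func main() {
	fmt.Println(keyspace(26, 2)) // 676: collisions likely with concurrent tests
	fmt.Println(keyspace(26, 8)) // 208827064576: collisions effectively impossible
}
```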
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.
If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --azureblob-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.
Fixes #9337
Add a Hugo page at /backends/index.json that exports all the
backend YAML data from docs/data/backends/ as a single JSON file
for external consumption.
Set the User-Agent to include the APN prefix for Azure backends
(azureblob, azurefiles, onelake) to identify rclone as a Microsoft
Partner. The User-Agent is now:
APN/1.0 rclone/1.0 rclone/<version>
Add a ctx parameter to vfs.New() so callers can pass in context
carrying ConfigInfo and FilterInfo. The context is stripped of
cancellation but config and filter values are preserved into a fresh
background context.
Add a ctx field to the VFS struct, initialized in New() from the
existing cancellable context. Propagate this through the cache
subsystem hierarchy.
This ensures proper context cancellation when a VFS shuts down, rather
than using disconnected context.TODO() or context.Background() calls
throughout and paves the way for VFS to have its own config.
The flag.Lookup("test.v") check existed to skip opening a browser
during tests, but the tests don't exercise RunE, so this was never
used. The --no-open-browser flag is sufficient on its own.
Bind the RC server to localhost:0 and read the bound URL back via a
new rcserver.Server.URLs() accessor instead of pre-allocating a port
in cmd/gui. This removes the small TOCTOU race window between
freePort() closing its listener and rcserver claiming the same port.
- Fail gracefully if `make fetch-gui` hasn't been run
- Return errors instead of panics or fatal errors
- Don't run `make fetch-gui` on every make since we have it in the workflow
The fetch-gui-dist.sh script calls the GitHub releases API
unauthenticated, which is limited to 60 requests/hour per source IP.
GitHub Actions runners share outbound IPs, so this quota is regularly
exhausted.
Pass GITHUB_TOKEN (or GH_TOKEN) as an Authorization header when
present, raising the limit to 1000/hour, and wire secrets.GITHUB_TOKEN
into the workflow step. Local unauthenticated runs still work.
`json:"entry_permissions"` is known to be either empty [] or of
structure {string: boolean}. This may have been a breaking API change on
Drime's side. Because EntryPermissions is not used, the type was changed
to `any` to capture both cases, otherwise we could implement custom
unmarshalling for that type.
These Debugf calls in NewFilter() ran during GlobalOptionsInit(), before
InitLogging() configured the JSON log format. This caused plain-text
debug lines to leak to stderr when --use-json-log was set, breaking
tooling that expected only JSON output.
The resolved time values are already available via --dump filters so
this commit removes the debug messages.