The background kicker goroutine had a bare select outside a for loop,
so the 5s ticker fired at most once before the goroutine exited. The
intent was to run every 5s for the lifetime of the Downloaders.
This wraps the select in a for loop so the ticker fires repeatedly
until ctx is cancelled.
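A minimal sketch of the fixed shape (the Downloaders/kickWaiters names
come from the description below; the real method carries more state and
error handling):

    package downloaders

    import (
        "context"
        "time"
    )

    // minimal stand-ins for the real types and methods
    type Downloaders struct{}

    func (dls *Downloaders) kickWaiters() {}

    // kickerLoop shows the fix: the select is wrapped in a for loop so the
    // ticker keeps firing until ctx is cancelled. Without the for loop the
    // select ran once and the goroutine exited after the first tick.
    func (dls *Downloaders) kickerLoop(ctx context.Context) {
        ticker := time.NewTicker(5 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                dls.kickWaiters()
            case <-ctx.Done():
                return
            }
        }
    }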
In practice this was benign because every downloader exit and every
successful Write already calls kickWaiters, so the background kicker
is only load-bearing when a waiter is queued, no downloader is
running, and _ensureDownloader failed transiently. In that state,
before this fix, the waiter would hang until another Download() call
or Close() arrived; now it gets retried every 5s and will either
recover or accumulate enough errors to trip maxErrorCount and error
out cleanly.
Implement multipart upload support with configurable chunk size and concurrency options
Enable OpenChunkWriter with per-chunk encryption
Enhance multipart upload handling with new upload cutoff and error management for small files
Samsung TVs have a bug where they duplicate file extensions when both
the title contains an extension and the MIME type indicates the same
file type. For example, "photo.jpg" becomes "photo.jpg.jpg".
Remove extensions from <dc:title> while keeping them in the resource URL
and MIME type. This provides a cleaner display and prevents Samsung TVs
from incorrectly "fixing" what they perceive as missing extensions.
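A sketch of the idea, assuming a small helper (displayTitle is not the
real rclone function name):

    package dlna

    import (
        "path"
        "strings"
    )

    // displayTitle returns the name to put in <dc:title>. The extension is
    // stripped when a MIME type already conveys the file type, so Samsung
    // TVs do not append a duplicate extension. The resource URL keeps the
    // full filename.
    func displayTitle(filename, mimeType string) string {
        ext := path.Ext(filename)
        if ext != "" && mimeType != "" {
            return strings.TrimSuffix(filename, ext)
        }
        return filename
    }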
Samsung TVs have strict XML parsers that fail to interpret &#34;
(numeric quote entity) correctly within DIDL-Lite metadata, causing
files to appear as empty folders. By replacing &#34; with &quot;
(named quote entity) in all marshaled XML, Samsung TVs can now
properly parse the metadata and display files.
This handles the "Big 5" XML entities that might cause parsing issues:
- &#34; -> &quot; (double quotes)
- &#39; -> &apos; (apostrophes)
- &#38; -> &amp; (ampersands)
- &#60; -> &lt; (less than)
- &#62; -> &gt; (greater than)
While Go's xml.Marshal already uses named entities for &, <, >
characters, this ensures complete protection against any edge cases
where numeric entities might be generated. Samsung TVs are known
to have strict XML parsers that can't handle numeric entities.
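A sketch of the post-processing step, assuming a marshalDIDL helper
(not the exact rclone code):

    package dlna

    import (
        "encoding/xml"
        "strings"
    )

    // didlReplacer rewrites numeric entities into the named forms that
    // strict Samsung parsers accept.
    var didlReplacer = strings.NewReplacer(
        "&#34;", "&quot;",
        "&#39;", "&apos;",
        "&#38;", "&amp;",
        "&#60;", "&lt;",
        "&#62;", "&gt;",
    )

    // marshalDIDL marshals v and post-processes the output so no numeric
    // entities reach the client.
    func marshalDIDL(v any) (string, error) {
        out, err := xml.Marshal(v)
        if err != nil {
            return "", err
        }
        return didlReplacer.Replace(string(out)), nil
    }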
Fixes #9346
Samsung TVs sometimes send Browse requests with empty ObjectID
parameters (<ObjectID></ObjectID>) which causes DLNA servers to
return errors. Default empty ObjectID to "0" (root container) to
maintain compatibility.
This fix is based on ReadyMedia/MiniDLNA Bug 311 which documented
the same issue and solution for Samsung TVs.
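A minimal sketch of the default (normalizeObjectID is a hypothetical
helper name):

    package dlna

    // normalizeObjectID defaults an empty ObjectID to "0", the root
    // container, mirroring the MiniDLNA Bug 311 workaround described
    // above.
    func normalizeObjectID(id string) string {
        if id == "" {
            return "0"
        }
        return id
    }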
See #9346
Add xmlns:sec="http://www.sec.co.kr/" namespace to DIDL-Lite responses
as required by Samsung TV DLNA implementations. This namespace is used
by working DLNA servers like MediaBrowser/Emby for Samsung compatibility.
Based on research of open source DLNA servers that successfully work
with Samsung TVs.
See #9346
Containers (directories) never had their Date field set, producing
<dc:date>0001-01-01</dc:date> (Go's zero time) in DIDL-Lite metadata.
This invalid date can confuse strict DLNA clients.
Set the dc:date to the directory's modification time, and as a safety
net, omit the dc:date element entirely when the timestamp is zero.
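A sketch of the date handling, assuming a small helper (an empty string
can then drop the element, e.g. via an omitempty tag on the marshalled
field):

    package dlna

    import "time"

    // containerDate formats a directory modification time for dc:date and
    // returns "" for the zero time so the element is omitted entirely.
    func containerDate(modTime time.Time) string {
        if modTime.IsZero() {
            return ""
        }
        return modTime.Format("2006-01-02")
    }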
See #9346
The childCount attribute on DLNA containers was hardcoded to 1
regardless of how many items the directory actually contained. Some
DLNA clients (notably Samsung TVs) use childCount to decide whether
to browse into a container. Report the actual number of directory
entries instead.
See #9346
Samsung TVs are strict DLNA clients that expect SOAP response arguments
in the order defined by the service SCPD (Service Control Protocol
Description). The Browse response was using a Go map which produces
random iteration order, causing arguments like Result, NumberReturned,
TotalMatches, and UpdateID to appear in unpredictable order. Samsung TVs
fail to parse such responses and never proceed to browse directory
children, showing "no content" to the user.
Replace the map[string]string return type with an ordered []soapArg
slice throughout the UPnPService.Handle() interface, ensuring response
arguments always appear in SCPD-defined order.
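A sketch of the ordered-argument shape (types simplified from the
description above, not the exact rclone code):

    package upnp

    import "strconv"

    // soapArg is one SOAP response argument. A slice preserves the
    // SCPD-defined order that a map cannot guarantee.
    type soapArg struct {
        Name  string
        Value string
    }

    func browseResponse(result string, returned, total, updateID int) []soapArg {
        return []soapArg{
            {"Result", result},
            {"NumberReturned", strconv.Itoa(returned)},
            {"TotalMatches", strconv.Itoa(total)},
            {"UpdateID", strconv.Itoa(updateID)},
        }
    }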
See #9346
Fix CVE-2026-32952: A malicious NTLM challenge message can cause a slice out
of bounds panic, which can crash any Go process using ntlmssp.Negotiator as an
HTTP transport.
This is in use in rclone in the webdav backend to access sharepoint.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The fix for #8241, which set FileRequestIntent=Backup on
service.ClientOptions so the azfile SDK emits the
x-ms-file-request-intent header, was inadvertently dropped in this
commit (released in v1.73.0):
846f193806 azureblob,azurefiles: factor the common auth into a library
This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:
400 MissingRequiredHeader: x-ms-file-request-intent
This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
Parsing a WEBP image with an invalid, large size panics on 32-bit platforms.
This only affects users on 32-bit platforms using the Internxt backend.
See: https://pkg.go.dev/vuln/GO-2026-4961
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.
Previously, these failures would be ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().
This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
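A sketch of the flipped default (Call fields follow the description
above; the auth check helper is hypothetical):

    package rc

    // Call describes an RC method. The zero value of NoAuth now means
    // "authentication required"; endpoints must opt out explicitly.
    type Call struct {
        Path   string
        NoAuth bool
    }

    // needsAuth replaces the old "auth only if AuthRequired" check.
    func needsAuth(call *Call) bool {
        return !call.NoAuth
    }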
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
The operations/fsinfo RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to instantiate arbitrary backends via
inline backend definitions.
See GHSA-jfwf-28xr-xw6q
Snapshot the NoAuth setting when the RC server is created rather than
reading it from the mutable options struct on each request. This
prevents any runtime mutation of rc.NoAuth (e.g. via options/set)
from disabling the auth gate for protected RC methods.
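A minimal sketch of the snapshot pattern (struct and constructor names
are hypothetical):

    package rc

    // server captures the NoAuth option once at construction, so later
    // mutation of the options struct cannot widen access.
    type server struct {
        noAuth bool
    }

    func newServer(noAuthOpt bool) *server {
        return &server{noAuth: noAuthOpt}
    }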
See GHSA-25qr-6mpr-f7qx
The options/set RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to mutate global runtime options
including rc.NoAuth, which disables the auth gate for all protected
RC methods. Require authentication for options/set.
See GHSA-25qr-6mpr-f7qx
The files.com integration tests for rcat/copyurl were failing because
fs/accounting.Account was declaring a ReadAt method when the underlying
handle did not support it. The files.com SDK decided to use the ReadAt
method to speed up transfers, which then failed.
ReadAt and Seek methods were added in this commit to support the
archive command:
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This fixes the problem by adding new methods to the Account object
(WithSeeker/WithReaderAt/WithReadAtSeeker) which produce an object with
the desired methods, or an error if that isn't possible.
This stops Account advertising things it can't do, which is bad Go
practice.
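A sketch of the capability-gated wrapper pattern (method names from the
text above; bodies are hypothetical and omit the accounting bookkeeping):

    package accounting

    import (
        "errors"
        "io"
    )

    // Account wraps a transfer handle. It no longer declares ReadAt or
    // Seek itself; callers ask for an upgraded view.
    type Account struct {
        in io.Reader
    }

    type accountReaderAt struct {
        *Account
        ra io.ReaderAt
    }

    func (a accountReaderAt) ReadAt(p []byte, off int64) (int, error) {
        return a.ra.ReadAt(p, off)
    }

    // WithReaderAt returns an io.ReaderAt view of the Account, or an
    // error if the underlying handle does not implement io.ReaderAt.
    func (acc *Account) WithReaderAt() (io.ReaderAt, error) {
        ra, ok := acc.in.(io.ReaderAt)
        if !ok {
            return nil, errors.New("underlying reader does not support ReadAt")
        }
        return accountReaderAt{Account: acc, ra: ra}, nil
    }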
Until now test_all relied entirely on per-goroutine defer finish()
calls in fstest/runs to stop test servers. A Ctrl-C, kill, or panic
aborted those defers and left docker containers running, breaking the
next run.
Register testserver.CleanupAll with lib/atexit so SIGINT/SIGTERM
delivery runs the sweep automatically. Also defer atexit.Run for the
normal exit and unrecovered-panic paths, and call it explicitly
before os.Exit(1) since os.Exit does not fire defers. The fs.Fatalf
call sites above only fire before any server starts so they need no
explicit sweep.
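A sketch of the wiring, assuming testserver.CleanupAll is a plain func()
and eliding the real driver flow:

    package main

    import (
        "os"

        "github.com/rclone/rclone/fstest/testserver"
        "github.com/rclone/rclone/lib/atexit"
    )

    // runTests is a stand-in for the real test driver.
    func runTests() bool { return true }

    func main() {
        atexit.Register(testserver.CleanupAll) // sweep runs on SIGINT/SIGTERM
        defer atexit.Run()                     // normal exit and unrecovered panics

        if !runTests() {
            atexit.Run() // os.Exit does not fire defers, so sweep explicitly
            os.Exit(1)
        }
    }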
Cleanup today is entirely per-goroutine via the stop closure that Start
returns. If the driver process is killed or panics, those deferred
stops never run and the underlying container keeps running.
Track every remote Start has brought up in a process-local map, and
expose CleanupAll which force-stops each tracked remote via the new
run.bash "force-stop" verb. The returned stop closure is now
sync.Once-wrapped so it and CleanupAll can both fire harmlessly. No
callers yet; wired up in fstest/test_all in a follow-up commit.
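A sketch of the tracking (simplified Start signature; the real stop
logic shells out to run.bash):

    package testserver

    import "sync"

    var (
        runningMu sync.Mutex
        running   = map[string]func(){} // remote name -> force-stop closure
    )

    // Start brings up the server for name and returns a sync.Once-wrapped
    // stop closure, so the caller's deferred stop and CleanupAll can both
    // fire harmlessly.
    func Start(name string) (stop func()) {
        // ... bring the container up via run.bash ...
        var once sync.Once
        stop = func() {
            once.Do(func() {
                // ... run.bash stop / force-stop ...
            })
        }
        runningMu.Lock()
        running[name] = stop
        runningMu.Unlock()
        return stop
    }

    // CleanupAll force-stops every server started by this process.
    func CleanupAll() {
        runningMu.Lock()
        defer runningMu.Unlock()
        for _, stop := range running {
            stop()
        }
    }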
run.bash holds a persistent refcount file in the shared state directory
so multiple concurrent tests can share a single container. If a prior
test_all run is killed (e.g. Ctrl-C), the count never reaches zero on
the next run and the container is never stopped - forcing manual
cleanup.
Three fixes, all in fstest/testserver/init.d/run.bash:
- On start, if the refcount is non-zero but no container is running,
treat it as zero. This stops a stale count leaking into future runs.
- reset now rm -rf's RUN_ROOT (the per-server state) instead of
RUN_BASE (the shared parent), which was clobbering sibling services.
- New force-stop verb unconditionally stops the container and zeroes
the refcount. This is the primitive that the Go-side cleanup sweep
will call at end-of-run.
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
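A sketch of the listing input, using aws-sdk-go-v2 types for
illustration:

    package s3sketch

    import (
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    // listInput leaves Delimiter nil for recursive listings so no
    // ?delimiter= parameter is sent at all, instead of pointing it at an
    // empty string which servers like Archiware P5 reject.
    func listInput(bucket string, recurse bool) *s3.ListObjectsV2Input {
        input := &s3.ListObjectsV2Input{
            Bucket: aws.String(bucket),
        }
        if !recurse {
            input.Delimiter = aws.String("/") // hierarchical listing
        }
        return input
    }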
Fixes #9342
The temp directory name used random.String(2) giving only 676 possible
values. When multiple concurrent tests started in the same second, they
shared the same timestamp prefix, causing name collisions and shared
temp directories. This led to lock file conflicts, listing file races,
and file deletion errors.
Increase to random.String(8) to make collisions effectively impossible.
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
Go runtime automatically decompressed the object on download.
If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --azureblob-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.
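A hedged sketch of the metadata adjustment (struct and field names are
hypothetical, not the real backend types):

    package azureblob

    import "strings"

    // object holds the metadata relevant to this sketch.
    type object struct {
        size            int64
        md5             string
        contentEncoding string
    }

    // clearCompressedMeta erases size and hash when --azureblob-decompress
    // will cause the body to be decompressed on download, so the transfer
    // no longer fails the length/hash checks.
    func clearCompressedMeta(o *object, decompress bool) {
        if decompress && strings.EqualFold(o.contentEncoding, "gzip") {
            o.size = -1 // decompressed size is unknown up front
            o.md5 = ""  // stored hash describes the compressed bytes
        }
    }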
Fixes #9337