Before this change, --conflict-loser pathname assumed --conflict-resolve none,
following the legacy behavior prior to v1.66. This produced unexpected behavior
when used with a different --conflict-resolve option.
This change fixes the issue by ensuring that --conflict-loser pathname looks for
the correct name on the side not being renamed, when only one side should be
renamed.
https://forum.rclone.org/t/bisync-does-not-copy-the-winner-file-to-the-loser-site/53768
The metrics_addr option was registered twice: once explicitly and once
implicitly via AddPrefix(libhttp.ConfigInfo, "metrics", ...). Both
pointed at the same MetricsHTTP.ListenAddr field, so options/info
returned a duplicate entry.
Drop the explicit entry and use SetDefault to keep the empty default
(so the metrics server stays off unless configured), matching the
pattern already used for rc_addr.
Fixes #9419
When using rcat to upload a new version of a file that already existed,
the file upload would succeed, and the deletion of the old file was
then attempted. Drime appears to handle the deletion of the old file
automatically and returns HTTP status code 422 with the message
"The selected entry ids is invalid."
Before this change, the deletion, and therefore the rcat, would fail.
This is with file history enabled on my Drime account.
This change detects the error and ignores it since the file has
already been deleted.
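A minimal sketch of the check, with hypothetical names (deleteEntries
and api.Error stand in for whatever the Drime backend actually uses):

```go
// Hypothetical sketch: ignore Drime's 422 when the old entry is already gone.
err := o.fs.deleteEntries(ctx, oldIDs) // hypothetical helper
if err != nil {
	var apiErr *api.Error // hypothetical Drime error type with a Status field
	if errors.As(err, &apiErr) && apiErr.Status == http.StatusUnprocessableEntity {
		// Drime's file history has already removed the superseded file,
		// so treat "The selected entry ids is invalid." as success.
		err = nil
	}
}
```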
- CVE-2026-42501: cmd/go: malicious module proxy can bypass checksum database
- CVE-2026-39825: net/http/httputil: ReverseProxy forwards queries with more than urlmaxqueryparams parameters
- CVE-2026-39836: net: panic in Dial and LookupPort when handling NUL byte on Windows
- CVE-2026-42499: net/mail: quadratic string concatenation in consumePhrase
- CVE-2026-39820: net/mail: quadratic string concatenation in consumeComment
- CVE-2026-39819: cmd/go: "go bug" follows symlinks in predictable temporary filenames
- CVE-2026-39817: cmd/go: "go tool pack" does not sanitize output paths
- CVE-2026-33814: net/http: infinite loop in HTTP/2 transport when given bad SETTINGS_MAX_FRAME_SIZE
- CVE-2026-39826: html/template: escaper bypass leads to XSS
- CVE-2026-33811: net: crash when handling long CNAME response
- CVE-2026-39823: html/template: bypass of meta content URL escaping causes XSS
The bisync normalization test relies on uploading distinct NFC and NFD
versions of the same filename and on the backend supporting in-place
modtime updates. Dropbox normalizes unicode server-side (NFD -> NFC)
and can't set modtime in place, so the test inevitably takes a
different code path on Dropbox and the log diverges from the golden
output without any functional difference.
operations.NeedTransfer's equality check may have deleted pair.Dst as
a precursor to re-uploading it if SetModTime returns
ErrorCantSetModTimeWithoutDelete (e.g. Dropbox). If so, skip the eager
delete of the destination when --fix-case will rename it to a different
name. The rename itself replaces the destination, and any subsequent
re-upload happens at the correctly-cased path.
See: #8881
This reverts commit de67f29b3f.
This solved the original Dropbox "from_lookup/not_found" failure, but
broke --fix-case on case-sensitive backends that update modtime via a
server-side copy (such as S3 on Cloudflare R2).
At some point Drime recommended 200M as the upload cutoff for
switching to multipart upload. However, single part uploads have
stopped working for files in the roughly 100M-200M range.
Their docs now recommend 5M as the cutoff for multipart upload, so this
change updates the default.
The /s3/multipart/create and /s3/entries endpoints interpret relativePath
as an absolute path from the drive root, not relative to parent_id. When
root_folder_id was set to a non-root folder, files larger than
upload_cutoff ended up at the user's drive root instead of the configured
folder.
Resolve the absolute path of the Fs root once via GET /folders/{hash}/path
(cached on first OpenChunkWriter call) and use that to build the correct
relativePath.
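A hedged sketch of the caching, with hypothetical field and helper
names (rootPathOnce, fetchFolderPath):

```go
// Resolve the Fs root's absolute path once and reuse it for every
// multipart upload (hypothetical names throughout).
f.rootPathOnce.Do(func() {
	// GET /folders/{hash}/path
	f.rootPath, f.rootPathErr = f.fetchFolderPath(ctx, f.opt.RootFolderID)
})
if f.rootPathErr != nil {
	return nil, f.rootPathErr
}
// relativePath is absolute from the drive root, not relative to parent_id.
relativePath := path.Join(f.rootPath, remote)
```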
Fixes #9392
- Add Data Raven as a silver sponsor
- Add Impossible Cloud as a bronze sponsor
- Shuffle silver sponsors once per page load
- Remove TOC from sponsors page
When enabled, an out-of-space error during a local write is returned
as a fatal error that aborts the run, instead of being retried.
Without this option, ENOSPC errors are treated as retryable and
rclone may spin through the retry loop many times on a full disk
before giving up. That is fine for transient network errors but
unhelpful when the disk is genuinely full and the operator wants
the run to fail loudly. Default is off so existing behaviour is
unchanged.
Implementation follows the pattern suggested in the issue: a defer
at the top of Update wraps the error with fserrors.FatalError when
the option is on and the error is disk-full. Detection covers both
file.ErrDiskFull from the preallocate path and syscall.ENOSPC from
io.Copy or Close, via a small helper that uses fserrors.IsErrNoSpace.
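A minimal sketch of that pattern, assuming a hypothetical option name
(opt.FatalDiskFull) and helper (isDiskFull):

```go
// isDiskFull reports whether err indicates a full disk, covering both
// the preallocate path and plain writes (hypothetical helper).
func isDiskFull(err error) bool {
	return errors.Is(err, file.ErrDiskFull) || fserrors.IsErrNoSpace(err)
}

func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
	defer func() {
		if err != nil && o.fs.opt.FatalDiskFull && isDiskFull(err) {
			err = fserrors.FatalError(err) // abort the run instead of retrying
		}
	}()
	// ... existing upload logic ...
	return err
}
```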
Add three new regions and their endpoints for Fastly Object Storage:
- eu-west-1 (Paris)
- us-east-1 (Virginia)
- us-west-1 (Oregon)
These are distinct from the existing us-east, us-west and eu-central
endpoints, which are kept in place.
shouldRetry treated every non-nil error as retryable, so permanent
failures (auth, 4xx, not-found) burned through the LowLevelRetries
budget instead of returning fast.
This also fixes the pacer sleeps: pacer.MinSleep(1000) and
MaxSleep(10000) are time.Duration values, so they were 1µs and 10µs -
almost certainly intended as 10ms and 2s.
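A hedged sketch of both fixes, following the shouldRetry pattern used
elsewhere in rclone's backends (retryErrorCodes is assumed to be the
usual slice of retryable HTTP status codes):

```go
const (
	minSleep = 10 * time.Millisecond
	maxSleep = 2 * time.Second
)

// shouldRetry lets permanent failures (auth, 4xx, not-found) return
// immediately instead of burning through the low-level retry budget.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if fserrors.ContextError(ctx, &err) {
		return false, err
	}
	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}

// In NewFs: pacer.MinSleep and pacer.MaxSleep take time.Duration, so a
// bare literal like 1000 means 1000ns (1µs), not 1000ms.
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep)))
```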
The TransformFile tests in fs/sync call operations.TransformFile
immediately after MoveDir. On eventually-consistent backends the
internal NewObject lookup can momentarily fail with "object not
found", making the tests flaky.
This wraps the two operations.TransformFile calls in TestTransformFile
and TestManualTransformFile with fstest.Retry.
When --fix-case was used (e.g. by bisync) on backends that can't set
modification times in place - such as Dropbox - files whose content
matched but whose modtimes differed would fail to rename with a
"from_lookup/not_found" error and abort the operation.
This happened because operations.NeedTransfer was called before the
fix-case rename. NeedTransfer's equality check would delete the
destination as a precursor to re-uploading it (the standard way to
update a modtime on these backends), so by the time the rename ran the
file no longer existed on the remote.
Fix by running the fix-case rename first, so that any subsequent
delete/re-upload happens at the correctly-cased destination path.
See: #8881
The test set the short idle timeout before creating the test Fs, which
made fs.NewFs fail to read the FTP welcome banner within 1s on slow CI
hosts. Restore the long timeout while NewFs dials the control
connection, then apply the short idle timeout before the upload so the
data connection still exercises the close race that shut_timeout fixes.
The previous wording "Already have a token - refresh?" was misleading
because answering yes triggers a full re-authorization flow, not an
OAuth2 refresh token grant. Updated to "Token already configured -
replace it?" to accurately describe what happens.
Also updated the SugarSync backend which has its own copy of the prompt,
and the docs for box, drive, and onedrive that reference it.
Enable on-the-fly response compression for WebDAV when the client sends
Accept-Encoding and the response content type is suitable for
compression.
This adds compression for the WebDAV responses that benefit most in
practice, notably PROPFIND XML responses and text file downloads.
I tested this with Cyberduck, which sends
`Accept-Encoding: gzip,deflate` and accepted the compressed responses.
Range requests are explicitly left uncompressed.
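A minimal sketch of the Range exclusion, assuming a generic
http.Handler wrapper around whatever compression middleware is used:

```go
// noCompressRanges bypasses the compressing handler for Range requests,
// since compressed bodies would break partial-content semantics.
func noCompressRanges(compress func(http.Handler) http.Handler) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		compressed := compress(next)
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if r.Header.Get("Range") != "" {
				next.ServeHTTP(w, r) // serve Range requests uncompressed
				return
			}
			compressed.ServeHTTP(w, r)
		})
	}
}
```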
Fixes #5777
Before this change, the GUI server sent all static files uncompressed,
meaning the browser had to download the full size of every JS, CSS,
and HTML asset.
After this change, the GUI server uses chi's Compress middleware at
level 5, which negotiates gzip or deflate encoding based on the
client's Accept-Encoding header.
This reduces transfer sizes significantly for the web UI assets, for
example assets/index-CvfdU_RR.js is 874 KB uncompressed, and
265 KB compressed.
This is consistent with how rclone serve http, webdav, and restic
already compress their responses.
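A minimal sketch of the wiring, with the handler variable name assumed:

```go
import "github.com/go-chi/chi/v5/middleware"

// Compress negotiates gzip or deflate from the client's Accept-Encoding;
// level 5 trades compression ratio against CPU cost.
handler = middleware.Compress(5)(handler)
```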
The stscreds.AssumeRoleProvider from AWS SDK Go v2 does not cache
credentials by itself. The SDK only auto-wraps providers with
aws.CredentialsCache when they are loaded via
config.LoadDefaultConfig; when assigned directly to
aws.Config.Credentials it must be wrapped manually, as documented on
stscreds.NewAssumeRoleProvider.
Without the cache, configurations using role_arn would call AssumeRole
once per S3 request, flooding STS and CloudTrail.
See: https://forum.rclone.org/t/aws-iam-roles-credentials-arent-cached/53732
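A minimal sketch of the manual wrapping, per the stscreds
documentation (stsClient being a *sts.Client from sts.NewFromConfig):

```go
// AssumeRoleProvider does not cache; wrap it so AssumeRole is called
// only when the cached credentials expire, not once per S3 request.
provider := stscreds.NewAssumeRoleProvider(stsClient, roleARN)
cfg.Credentials = aws.NewCredentialsCache(provider)
```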
When a Proton Drive file has no active revision attributes,
readMetaDataForLink returns a nil FileSystemAttrs and Object.originalSize
is left as nil. Object.Open then dereferenced this nil pointer when
calling fs.FixRangeOption, causing a SIGSEGV during copy.
Use Object.Size() instead, which already implements the correct fallback
to the link size when originalSize is unavailable.
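A minimal sketch of the change in Object.Open (the exact line shapes
are assumed from the description above):

```go
// Before: fs.FixRangeOption(options, *o.originalSize) - panics when the
// file has no active revision attributes and originalSize is nil.
// After: Size() already falls back to the link size in that case.
fs.FixRangeOption(options, o.Size())
```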
This updates the github.com/rclone/Proton-API-Bridge package to fix a
segfault when reading files with no metadata.
Fixes #9377
Fixes #9117
Previously all log output produced by Proton-API-Bridge (stdlib log)
and go-proton-api (logrus + resty's logger) bypassed rclone's
logging: it ignored -v / -vv levels and didn't reach --log-file.
Add a small adapter implementing the resty.Logger / bridge Logger
shape that calls fs.Errorf / fs.Logf / fs.Debugf, and pass it via
the new Config.Logger hook. The bridge in turn forwards the same
value to go-proton-api's WithLogger option, so HTTP-layer warnings
and the formerly-hardcoded logrus warnings inside go-proton-api
also surface through rclone's log levels.
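A minimal sketch of such an adapter, assuming the resty.Logger method
set (Errorf/Warnf/Debugf):

```go
// protonLogger routes bridge and go-proton-api log output through
// rclone's levelled logging so -v/-vv and --log-file apply.
type protonLogger struct{}

func (protonLogger) Errorf(format string, v ...interface{}) { fs.Errorf("protondrive", format, v...) }
func (protonLogger) Warnf(format string, v ...interface{})  { fs.Logf("protondrive", format, v...) }
func (protonLogger) Debugf(format string, v ...interface{}) { fs.Debugf("protondrive", format, v...) }
```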
The Proton Drive backend constructed the upstream Proton-API-Bridge
without ever passing rclone's HTTP transport. As a result none of
rclone's HTTP flags reached Proton: --dump headers, --dump bodies,
--no-check-certificate, --user-agent, --bind, --ca-cert, --header,
--tpslimit etc. all silently did nothing for this remote, and HTTP
traffic was invisible to -vv.
Pass fshttp.NewTransport(ctx) through the new Config.Transport hook on
the bridge, which forwards it to the updated go-proton-api's
WithTransport option and so to the underlying resty client.
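A one-line sketch of the wiring, with the bridge config value name
assumed from the description:

```go
// fshttp.NewTransport honours --dump, --user-agent, --bind, --ca-cert,
// --tpslimit etc.; the bridge forwards it to go-proton-api's WithTransport.
bridgeConfig.Transport = fshttp.NewTransport(ctx) // the new Config.Transport hook
```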
Avoid side effects by using own logger instance
- Importing fs/log only sets rclone's private logger via fs.SetLogger,
so internal rclone logging works from the moment the package is
imported but the process-wide slog default is left untouched.
- slog.SetDefault and slog.SetLogLoggerLevel move into InitLogging,
which is called explicitly from the CLI (cmd/cmd.go), the librclone
wrapper and the integration test framework. So rclone-as-a-program
keeps capturing log.Print/log.Fatal and slog.Default() output as
before.
Library consumers that import fs/log without calling InitLogging now
keep their own slog default and can safely route rclone output back
into it via log.Handler.SetOutput without recursing.
Fixes#8907
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The S3 ListObjects response from `rclone serve s3` returned objects
sorted by modification time instead of by object key. This made the
listing order incompatible with S3 clients, which expect lexicographic
key ordering.
In particular, `aws s3 sync` assumes both source and destination
iterators are ordered by key. With the old modtime ordering it could
misidentify files as missing or outdated and re-download objects that
were already up to date.
Change the pager to sort returned objects by key and add a regression
test which uses keys and modtimes arranged so the old behaviour would
fail.
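A minimal sketch of the ordering fix, with the pager's slice and field
names assumed:

```go
// ListObjects results must be sorted lexicographically by key, not by
// modtime, so clients like `aws s3 sync` can merge listings by key.
sort.Slice(objects, func(i, j int) bool {
	return objects[i].Key < objects[j].Key
})
```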
Fixes #9002
Previously `make fetch-gui` extracted the GUI release into cmd/gui/dist/
and the unpacked tree was embedded uncompressed via `//go:embed dist`.
This change commits the GUI bundle (dist.zip) and its release tag
(dist.tag) to the repo and embeds them, so:
- the rclone binary is smaller
- `go build` works on a fresh clone without first running fetch-gui
- a given commit pins an exact GUI version
The "Fetch GUI" step was removed from .github/workflows/build.yml.