Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.
Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:
- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list
See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
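The flipped default can be sketched like this (a minimal illustration, assuming a simplified rc.Call and auth gate; only the Path and NoAuth names come from the change itself):

```go
package main

import "fmt"

// Call is a stripped-down stand-in for rc.Call: endpoints now opt OUT
// of authentication with NoAuth instead of opting in with AuthRequired.
type Call struct {
	Path   string
	NoAuth bool
}

// needsAuth is the default-deny check: every endpoint requires
// authentication unless explicitly marked NoAuth.
func needsAuth(c Call) bool {
	return !c.NoAuth
}

func main() {
	fmt.Println(needsAuth(Call{Path: "core/version", NoAuth: true})) // false
	fmt.Println(needsAuth(Call{Path: "options/set"}))                // true
}
```

A forgotten flag now fails closed (endpoint requires auth) rather than failing open.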
The operations/fsinfo RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to instantiate arbitrary backends via
inline backend definitions.
See GHSA-jfwf-28xr-xw6q
Snapshot the NoAuth setting when the RC server is created rather than
reading it from the mutable options struct on each request. This
prevents any runtime mutation of rc.NoAuth (e.g. via options/set)
from disabling the auth gate for protected RC methods.
See GHSA-25qr-6mpr-f7qx
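The snapshot-at-creation pattern looks roughly like this (names are illustrative, not the actual rcserver internals):

```go
package main

import "fmt"

// Options stands in for the mutable global options struct.
type Options struct {
	NoAuth bool
}

// Server copies the auth setting once at construction and never
// re-reads the shared options, so later mutation cannot widen access.
type Server struct {
	noAuth bool
}

func NewServer(opt *Options) *Server {
	return &Server{noAuth: opt.NoAuth} // snapshot, not a pointer
}

func (s *Server) authRequired() bool { return !s.noAuth }

func main() {
	opt := &Options{NoAuth: false}
	srv := NewServer(opt)
	opt.NoAuth = true // simulated runtime mutation via options/set
	fmt.Println(srv.authRequired()) // still true: the gate holds
}
```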
The options/set RC endpoint was registered without AuthRequired,
allowing unauthenticated callers to mutate global runtime options
including rc.NoAuth, which disables the auth gate for all protected
RC methods. Require authentication for options/set.
See GHSA-25qr-6mpr-f7qx
The files.com integration tests for rcat/copyurl were failing because
fs/account.Account declared a ReadAt method even when the underlying
handle did not support it. The files.com SDK detected the ReadAt
method and used it to speed up transfers, which then failed.
ReadAt and Seek methods were added in this commit to support the
archive command:
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This fixes the problem by adding new methods to the Account object,
WithSeeker/WithReaderAt/WithReadAtSeeker, which return an object with
the desired methods or an error if that isn't possible.
This stops Account advertising capabilities it doesn't have, which is
bad Go practice.
Until now test_all relied entirely on per-goroutine defer finish()
calls in fstest/runs to stop test servers. A Ctrl-C, kill, or panic
aborted those defers and left docker containers running, breaking the
next run.
Register testserver.CleanupAll with lib/atexit so SIGINT/SIGTERM
delivery runs the sweep automatically. Also defer atexit.Run for the
normal exit and unrecovered-panic paths, and call it explicitly
before os.Exit(1) since os.Exit does not fire defers. The fs.Fatalf
call sites above only fire before any server starts so they need no
explicit sweep.
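The wiring can be sketched with the standard library standing in for rclone's lib/atexit (Register/Run below are modeled on its interface, not copied from it):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

var handlers []func()

// Register queues fn to run at exit, like atexit.Register.
func Register(fn func()) { handlers = append(handlers, fn) }

// Run fires every handler once; callable from a defer and again
// explicitly before os.Exit (which skips defers) without double-firing.
func Run() {
	for _, fn := range handlers {
		fn()
	}
	handlers = nil
}

func main() {
	Register(func() { fmt.Println("sweeping test servers") })

	// Run the sweep on SIGINT/SIGTERM as well as on normal exit.
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-ch
		Run()
		os.Exit(1) // os.Exit does not fire defers, so Run first
	}()

	defer Run() // normal-exit and unrecovered-panic path
	fmt.Println("running tests")
}
```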
Cleanup today is entirely per-goroutine via the stop closure that Start
returns. If the driver process is killed or panics, those deferred
stops never run and the underlying container keeps running.
Track every remote Start has brought up in a process-local map, and
expose CleanupAll which force-stops each tracked remote via the new
run.bash "force-stop" verb. The returned stop closure is now
sync.Once-wrapped so it and CleanupAll can both fire harmlessly. No
callers yet; wired up in fstest/test_all in a follow-up commit.
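The tracking map and sync.Once-wrapped stop can be sketched like this (names mirror the commit's description, not the exact testserver API):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu      sync.Mutex
	tracked = map[string]func(){} // remote name -> force-stop closure
)

// Start brings up a test server and returns a stop closure that is
// safe to call more than once: the per-test defer and CleanupAll can
// both fire harmlessly.
func Start(name string) (stop func()) {
	var once sync.Once
	stop = func() {
		once.Do(func() {
			fmt.Println("stopping", name)
			mu.Lock()
			delete(tracked, name)
			mu.Unlock()
		})
	}
	mu.Lock()
	tracked[name] = stop
	mu.Unlock()
	return stop
}

// CleanupAll force-stops every remote Start has brought up.
func CleanupAll() {
	mu.Lock()
	stops := make([]func(), 0, len(tracked))
	for _, s := range tracked {
		stops = append(stops, s)
	}
	mu.Unlock()
	for _, s := range stops {
		s()
	}
}

func main() {
	stop := Start("TestServer")
	stop()       // stops the container and untracks it
	stop()       // no-op thanks to sync.Once
	CleanupAll() // nothing left to sweep
}
```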
run.bash holds a persistent refcount file in the shared state directory
so multiple concurrent tests can share a single container. If a prior
test_all run is killed (e.g. Ctrl-C), the count never reaches zero on
the next run and the container is never stopped - forcing manual
cleanup.
Three fixes, all in fstest/testserver/init.d/run.bash:
- On start, if the refcount is non-zero but no container is running,
treat it as zero. This stops stale counts leaking into future runs.
- reset now `rm -rf`s RUN_ROOT (the per-server state) instead of
RUN_BASE (the shared parent), which was clobbering sibling services.
- New force-stop verb unconditionally stops the container and zeroes
the refcount. This is the primitive that the Go-side cleanup sweep
will call at end-of-run.
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
Fixes #9342
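The difference between a nil pointer and a pointer to "" can be illustrated with plain query encoding (this is a stand-in sketch using net/url, not the S3 SDK's request marshalling):

```go
package main

import (
	"fmt"
	"net/url"
)

// listQuery shows why nil omits the parameter entirely while a pointer
// to an empty string still emits "delimiter=" on the wire.
func listQuery(delimiter *string) string {
	v := url.Values{}
	if delimiter != nil {
		v.Set("delimiter", *delimiter)
	}
	return v.Encode()
}

func main() {
	empty := ""
	fmt.Printf("empty string: %q\n", listQuery(&empty)) // "delimiter="
	fmt.Printf("nil pointer:  %q\n", listQuery(nil))    // ""
}
```

Servers like Archiware P5 only see the parameter in the first case, which is what triggered the rejection.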
The temp directory name used random.String(2) giving only 676 possible
values. When multiple concurrent tests started in the same second, they
shared the same timestamp prefix, causing name collisions and shared
temp directories. This led to lock file conflicts, listing file races,
and file deletion errors.
Increase to random.String(8) to make collisions effectively impossible.
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur because
the Go runtime automatically decompressed the object on download.
If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --azureblob-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.
Fixes #9337
Add a Hugo page at /backends/index.json that exports all the
backend YAML data from docs/data/backends/ as a single JSON file
for external consumption.
Set the User-Agent to include the APN prefix for Azure backends
(azureblob, azurefiles, onelake) to identify rclone as a Microsoft
Partner. The User-Agent is now:
APN/1.0 rclone/1.0 rclone/<version>
Add a ctx parameter to vfs.New() so callers can pass in context
carrying ConfigInfo and FilterInfo. The context is stripped of
cancellation but config and filter values are preserved into a fresh
background context.
Add a ctx field to the VFS struct, initialized in New() from the
existing cancellable context. Propagate this through the cache
subsystem hierarchy.
This ensures proper context cancellation when a VFS shuts down, rather
than using disconnected context.TODO() or context.Background() calls
throughout and paves the way for VFS to have its own config.
The flag.Lookup("test.v") check existed to skip opening a browser
during tests, but the tests don't exercise RunE, so this was never
used. The --no-open-browser flag is sufficient on its own.
Bind the RC server to localhost:0 and read the bound URL back via a
new rcserver.Server.URLs() accessor instead of pre-allocating a port
in cmd/gui. This removes the small TOCTOU race window between
freePort() closing its listener and rcserver claiming the same port.
- Fail gracefully if `make fetch-gui` hasn't been run
- Return errors instead of panicking or logging fatal errors
- Don't run `make fetch-gui` on every make since we have it in the workflow
The fetch-gui-dist.sh script calls the GitHub releases API
unauthenticated, which is limited to 60 requests/hour per source IP.
GitHub Actions runners share outbound IPs, so this quota is regularly
exhausted.
Pass GITHUB_TOKEN (or GH_TOKEN) as an Authorization header when
present, raising the limit to 1000/hour, and wire secrets.GITHUB_TOKEN
into the workflow step. Local unauthenticated runs still work.
The `json:"entry_permissions"` field is known to be either an empty
array [] or an object of the form {string: boolean}. This may have
been a breaking API change on Drime's side. Because EntryPermissions
is not otherwise used, its type was changed to `any` to capture both
cases; if it is needed later, custom unmarshalling could be
implemented for that type instead.
These Debugf calls in NewFilter() ran during GlobalOptionsInit(), before
InitLogging() configured the JSON log format. This caused plain-text
debug lines to leak to stderr when --use-json-log was set, breaking
tooling that expected only JSON output.
The resolved time values are already available via --dump filters so
this commit removes the debug messages.
This adds a new gui command which runs an embedded copy of the GUI at
https://github.com/rclone/rclone-web/
The GUI release is fetched as part of the CI build.
The Global Acceleration Endpoint (cos.accelerate.myqcloud.com) of
Tencent COS does not seem to support "CreateBucket" (maybe also other
bucket management operations). Since the acceleration functionality must
be enabled per-bucket in the Tencent Cloud console, the bucket will
always exist before this endpoint is used, so this check can be safely
skipped.
Now "no_check_bucket = true" is set automatically when using this
endpoint.
Why in "NewFs()": this also fixes on-the-fly remotes (connection
string remotes), for example ":s3,provider=TencentCOS,...:...".
Why no unit test: I can't find a good way to test "NewFs()" without
leveraging live endpoints. I think we can extract all existing mutations
for different providers (e.g., AWS, Fastly, and Rabata) from "NewFs()"
to a new function in the future.
Some Tencent docs about this CDN endpoint:
- English: Global Acceleration Endpoint | https://www.tencentcloud.com/pt/document/product/436/40700
- Chinese: 对象存储 全球加速概述_腾讯云 | https://cloud.tencent.com/document/product/436/38866
Assisted-By: OpenCode
When a user has --s3-versions set but lacks the s3:GetBucketVersioning
permission, GetBucketVersioning returns an error and isVersioned() caches
the result as false. This caused CleanUpHidden (backend cleanup-hidden) to
silently exit with "bucket is not versioned so not removing old versions",
ignoring the user's explicit --s3-versions flag.
Fix this by trusting the explicit --s3-versions flag in purge(), bypassing
the GetBucketVersioning check when the user has explicitly declared the
bucket is versioned.
Previously ssh.InsecureIgnoreHostKey() was set unconditionally as the
default HostKeyCallback with no indication to the user.
This logs a warning pointing users to the documentation on how to
enable host key validation.
See: https://github.com/rclone/rclone/security/code-scanning/167
Before this change, when a cache item was in its grace period (with
HandleCaching) and the file was reopened, _checkObject ran before the
grace timer recovery check. If the remote object's fingerprint had
changed, _checkObject removed the cache file from disk. However the
grace recovery path still reused the now-stale fd pointing to a
deleted inode, skipping _createFile entirely. This left no cache file
on disk, causing cache.Exists() to return false and breaking the
rename-while-writing logic.
Fix this by checking that the cache file still exists before reusing
the fd in grace recovery. If the file was removed, close the stale fd
and the downloaders, and fall through to _createFile.
Also update the fingerprint in item.rename after setting the new object,
preventing unnecessary cache invalidation when a file is reopened after
a rename.
This was discovered in the integration tests on backends that update
modtime on rename (like mailru).