The Linkbox open API (/api/open/file_search) no longer returns download
URLs, breaking all downloads. This switches to using the web API
(/api/file/my_file_list/web) which requires email+password authentication
but returns working download URLs.
This will unfortunately require changing your existing rclone config.
- Add email, password, and web_token config options
- Add web API login via /api/user/login_email with token caching and retry
- Create separate CDN HTTP client with HTTP/2 disabled and browser
User-Agent to avoid CDN fingerprint blocking
- Remove searchOK regex and name-filtering (web API doesn't support it)
In commit 0db3e7a2a0 ("vfs: fix slow nfs serve by adding
--vfs-handle-caching") we added --vfs-handle-caching but forgot to
disable it for the TestRWCacheUpdate test.
Add a configurable grace period (default 5s) that delays closing file
handles and downloaders when the last handle closes. If a new handle
opens within the grace period, it reuses the existing resources.
This fixes a 40x performance degradation of serve nfs relative to serve sftp
caused by go-nfs opening/reading/closing on every NFS READ RPC, which
destroyed read-ahead prefetch before it could accumulate.
The grace period only applies to non-dirty files so that writeback
proceeds immediately on close.
Fixes #9251
Apple IDs are case-insensitive, but the SRP proof computation (M1)
hashes the username client-side. The old plaintext signin let the
server normalize the case, but with SRP the client must match.
Lowercase the Apple ID before use so mixed-case IDs authenticate
correctly.
Reported-by: ArturKlauser
This fixes China mainland iCloud authentication by deriving the Origin
and Referer headers from authEndpoint instead of hardcoding idmsa.apple.com.
Fixes compatibility with PR #8818 (China region support) and PR #9209
(SRP authentication).
Signed-off-by: Xiangzhe <xiangzhedev@gmail.com>
Apple has deprecated the legacy /appleauth/auth/signin endpoint and
now blocks it, causing "Invalid Session Token" errors for all users
when their trust token expires. The browser login flow now requires
SRP (Secure Remote Password), a cryptographic handshake that never
transmits the password.
Replace Session.SignIn() with a multi-step SRP-6a flow:
1. authStart - initialize session at /authorize/signin
2. authFederate - submit account name to /federate
3. authSRPInit - exchange client public value for salt/B at /signin/init
4. authSRPComplete - send M1/M2 proofs to /signin/complete
The SRP implementation uses the RFC 5054 2048-bit group with SHA-256
and Apple's NoUserNameInX variant. Password derivation supports both
s2k and s2k_fo protocols via SHA-256 + PBKDF2.
The 2FA and trust token flow is unchanged. Auth headers for all
idmsa.apple.com requests now include X-Apple-Auth-Attributes,
X-Apple-Frame-Id, and use Origin/Referer of https://idmsa.apple.com.
Fixes #8587
- replace Bootstrap/jQuery with purpose-built CSS and JS
- remove backend icons from navbar and content pages
- replace remaining FontAwesome icons with inline SVGs, remove FontAwesome
- modernize CSS styling for menus, typography, cards, tables, and code blocks
- add copy-to-clipboard buttons on code blocks using SVG icon
- move TOC to left sidebar with responsive overlay drawer
- add sticky header, top scrollbar and first column for wide tables
- add left/right arrow buttons to scrollable tables
- hide homepage logo on mobile
- add wide menus with a filter box for Commands and Storage Systems
- add dark mode support based on browser preference
- fix CSS/JS cache busting to use build time
Lockfiles with invalid JSON content caused bisync to fail permanently
because lockFileIsExpired() logged the decode error but still fell
through to the "valid lock file" path with zero-value TimeExpires.
Now when a JSON decode error is detected:
- If --max-lock is set (< basicallyforever): treat garbled lockfile as
expired, mark listings failed, and proceed (safe assumption: the
previous bisync run crashed and left garbage).
- If --max-lock is not set (default): log a clear error telling the
user the lockfile needs manual inspection, and return false.
Make ctest build and run on Windows in addition to Linux/macOS:
- Add OS detection in Makefile using ifeq ($(OS),Windows_NT)
- Use .lib extension and .exe suffix on Windows
- Link Windows system libraries (winmm, ws2_32, ole32)
- Remove unused dlfcn.h include that prevented compilation on Windows
Fix memory management to use RcloneFreeString instead of free for
strings returned by RcloneRPC, as documented in the librclone README.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Commit a3e1312d accidentally replaced io.NopCloser(in) with a bare
io.Reader when assigning req.Body in uploadSinglepartPutObject.
rclone wraps upload readers in an accounting.Account for progress
tracking. When the AWS SDK calls Seek on the body, Account.Seek does
a type assert on the inner reader. With a bare io.Reader the type
is unexpected and causes:
operation error S3: PutObject, serialization failed: internal error:
Seek not implemented for io.nopCloser
With io.NopCloser(in) the type assert works correctly for both seekable
and non-seekable readers.
Restore io.NopCloser(in) to wrap the reader correctly in all cases.
Verified by running both before (regression confirmed) and after (fix
confirmed):
go test ./backend/s3/... ./fs/operations/... -remote TestS3:
Remove the POSIX_FADV_DONTNEED and POSIX_FADV_SEQUENTIAL calls
from the local backend. The DONTNEED calls cause severe spinlock
contention on parallel file systems (and any system with many
concurrent transfers), because each call triggers per-page cache
teardown under a global lock.
Observed on a 256-core system running rclone with 64 parallel
transfers over Lustre: 69% of all CPU cycles were spent in
kernel spinlock contention from the fadvise path, with effective
throughput well below hardware capability.
The kernel's own page reclaim (kswapd) handles eviction more
efficiently from a single context. Since rclone does not always
read files sequentially (e.g. multipart uploads rewind and
re-read blocks), FADV_SEQUENTIAL was also not reliably correct.
This is consistent with the non-Linux behavior (which never
called fadvise) and with restic's decision to remove identical
code (restic/restic#670).
Fixes #7886
This PR optimizes the PROPFIND requests in the webdav backend to ask only
for the specific properties rclone actually needs.
Currently, the generic webdav backend sends an empty XML body during directory
listing (listAll), which causes the server to fall back to allprops by default.
This forces the server to return properties we never use, such as
getcontenttype.
Fetching getcontenttype can be a very expensive operation on the server side.
For instance, in the official golang.org/x/net/webdav library, determining the
content type requires the server to open the file and read the first 500 bytes.
For a directory with 1,300 files in my environment, rclone ls time dropped from
~30s to ~4s (as fast as native ls).
For backwards compatibility this currently applies only to the "other"
vendor; it could be expanded to more vendors later.
AWS S3 requires Content-MD5 for PutObject with Object Lock parameters.
Since rclone passes a non-seekable io.Reader, the SDK cannot compute
checksums automatically. Buffer the body and compute MD5 manually for
singlepart PutObject and presigned request uploads when Object Lock
parameters are set. Multipart uploads are unaffected as Object Lock
headers go on CreateMultipartUpload which has no body.
Add object_lock_supported provider quirk (default true) to allow
skipping Object Lock integration tests on providers with incomplete
S3 API support. Set to false for GCS which uses non-standard
x-goog-bypass-governance-retention header and doesn't implement
PutObjectLegalHold/GetObjectLegalHold.
Add Multipart and Presigned subtests to Object Lock integration tests
to cover all three upload paths.
Fixes #9199
URLPathEscapeAll was only passing [A-Za-z0-9/] through unencoded, causing
it to percent-encode the RFC 3986 unreserved characters (-, ., _, ~).
Per RFC 3986 §2.3, URI producers should never percent-encode unreserved
characters, and although the spec defines the encoded and unencoded forms
as equivalent, servers that perform strict path matching without
normalising percent-encodings reject the over-encoded form with a 404.
Before: /files/my-report.pdf → /files/my%2Dreport%2Epdf
After: /files/my-report.pdf → /files/my-report.pdf
Characters outside the unreserved set (spaces, semicolons, colons, etc.)
continue to be encoded as before.