2485 Commits

John Volk
4343b80949 drime: fix file doesn't exists error when trying to delete
When using rcat to upload a new version of a file that already existed,
the file upload would succeed, and rclone would then attempt to delete
the old file. Drime appears to delete the old file automatically and
returns HTTP status code 422 with the message "The selected entry ids
is invalid."

Before this change the deletion, and hence the rcat, would fail. This
was observed with file history enabled on my Drime account.

This change detects the error and ignores it, since the file has
already been deleted.
2026-05-11 13:04:49 +01:00
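The detect-and-ignore pattern described above could be sketched as follows; `apiError` and `ignoreAlreadyDeleted` are illustrative names, not the backend's actual identifiers:

```go
package main

import (
	"errors"
	"fmt"
)

// apiError is a hypothetical stand-in for the backend's decoded JSON
// error body.
type apiError struct {
	Status  int
	Message string
}

func (e *apiError) Error() string { return fmt.Sprintf("%d: %s", e.Status, e.Message) }

// ignoreAlreadyDeleted swallows the 422 error Drime returns when the old
// file version was already removed server-side, so the overall rcat
// operation can succeed.
func ignoreAlreadyDeleted(err error) error {
	var apiErr *apiError
	if errors.As(err, &apiErr) && apiErr.Status == 422 {
		return nil // file is already gone - treat the deletion as done
	}
	return err
}

func main() {
	fmt.Println(ignoreAlreadyDeleted(&apiError{422, "The selected entry ids is invalid."}))
	fmt.Println(ignoreAlreadyDeleted(&apiError{500, "server error"}))
}
```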
Nick Craig-Wood
c00756810a http: don't list parent directory when pointing at a single file
When an HTTP URL points to a single file, listing the parent
directory is unnecessary and may fail entirely on servers that
disable directory listings but still serve HEAD/GET on the file.

Remember the file name in the Fs and short-circuit List to return
just that one object.

See: https://forum.rclone.org/t/how-to-combine-on-the-fly-http-archive-remote-to-list-crc32s-in-a-http-hosted-zip/53761
2026-05-11 11:05:22 +01:00
Nick Craig-Wood
c55634bdf8 drime: fix uploads of 100..200M files
At some point Drime recommended 200M as the upload cutoff for switching
to multipart upload. However, single part uploads of files in the
100M-200M range have stopped working.

Their docs now recommend 5M as the cutoff for multipart upload, so this
changes the default.
2026-05-07 17:38:27 +01:00
Nick Craig-Wood
667903dca0 drime: fix large file uploads landing in drive root instead of configured folder
The /s3/multipart/create and /s3/entries endpoints interpret relativePath
as an absolute path from the drive root, not relative to parent_id. When
root_folder_id was set to a non-root folder, files larger than
upload_cutoff ended up at the user's drive root instead of the configured
folder.

Resolve the absolute path of the Fs root once via GET /folders/{hash}/path
(cached on first OpenChunkWriter call) and use that to build the correct
relativePath.

Fixes #9392
2026-05-07 17:38:27 +01:00
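The resolve-once-and-cache step described above can be sketched with `sync.OnceValues`; the function and path here are illustrative stand-ins for the real `GET /folders/{hash}/path` call:

```go
package main

import (
	"fmt"
	"sync"
)

var resolveCalls int

// resolveRootPath stands in for the GET /folders/{hash}/path request; in
// the real backend this is a network round trip, so it should run at
// most once per Fs.
func resolveRootPath(hash string) (string, error) {
	resolveCalls++
	return "/My Drive/backups", nil // illustrative path
}

// rootPath caches both return values of the first call, so every
// subsequent OpenChunkWriter reuses the resolved absolute path.
var rootPath = sync.OnceValues(func() (string, error) {
	return resolveRootPath("abc123")
})

func main() {
	for i := 0; i < 3; i++ {
		p, _ := rootPath()
		fmt.Println(p)
	}
	fmt.Println("calls:", resolveCalls)
}
```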
ferrumclaudepilgrim
1bbe758bc5 local: add --local-fatal-if-no-space flag - fixes #8011
When enabled, an out-of-space error during a local write returns a
fatal error that aborts the run, instead of being retried.

Without this option, ENOSPC errors are treated as retryable and
rclone may spin through the retry loop many times on a full disk
before giving up. That is fine for transient network errors but
unhelpful when the disk is genuinely full and the operator wants
the run to fail loudly. Default is off so existing behaviour is
unchanged.

Implementation follows the pattern suggested in the issue: a defer
at the top of Update wraps the error with fserrors.FatalError when
the option is on and the error is disk-full. Detection covers both
file.ErrDiskFull from the preallocate path and syscall.ENOSPC from
io.Copy or Close, via a small helper that uses fserrors.IsErrNoSpace.
2026-05-07 10:38:47 +01:00
Leon Brocard
4c8bfb7500 s3: add new Fastly Object Storage regions
Add three new regions and their endpoints for Fastly Object Storage:

- eu-west-1 (Paris)
- us-east-1 (Virginia)
- us-west-1 (Oregon)

These are distinct from the existing us-east, us-west and eu-central
endpoints, which are kept in place.
2026-05-07 10:36:26 +01:00
Nick Craig-Wood
0c8d098b7f cloudinary: fix retrying every error and fix pacer sleep units
shouldRetry treated every non-nil error as retryable, so permanent
failures (auth, 4xx, not-found) burned through the LowLevelRetries
budget instead of returning fast.

This also fixes the pacer sleeps: pacer.MinSleep(1000) and
MaxSleep(10000) are time.Duration values, so they were 1µs and 10µs
rather than the 10ms and 2s almost certainly intended.
2026-05-06 17:47:53 +01:00
Nick Craig-Wood
03b06ac459 ftp: fix flaky UploadTimeout test on slow integration servers
The test set the short idle timeout before creating the test Fs, which
made fs.NewFs fail to read the FTP welcome banner within 1s on slow CI
hosts. Restore the long timeout while NewFs dials the control
connection, then apply the short idle timeout before the upload so the
data connection still exercises the close race that shut_timeout fixes.
2026-05-06 11:40:07 +01:00
KTibow
7200e377dd oauthutil: clarify token replacement prompt wording
The previous wording "Already have a token - refresh?" was misleading
because answering yes triggers a full re-authorization flow, not an
OAuth2 refresh token grant. Updated to "Token already configured -
replace it?" to accurately describe what happens.

Also updated the SugarSync backend which has its own copy of the prompt,
and the docs for box, drive, and onedrive that reference it.
2026-05-06 10:51:16 +01:00
Nick Craig-Wood
9d4c912e0e s3: fix STS call per request by caching AssumeRole credentials
The stscreds.AssumeRoleProvider from AWS SDK Go v2 does not cache
credentials by itself. The SDK only auto-wraps providers with
aws.CredentialsCache when they are loaded via
config.LoadDefaultConfig; when assigned directly to
aws.Config.Credentials it must be wrapped manually, as documented on
stscreds.NewAssumeRoleProvider.

Without the cache, configurations using role_arn would call AssumeRole
once per S3 request, flooding STS and CloudTrail.

See: https://forum.rclone.org/t/aws-iam-roles-credentials-arent-cached/53732
2026-05-05 15:47:18 +01:00
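A minimal sketch of why the wrapper matters, with the AWS SDK types replaced by local stand-ins (`stsProvider` plays stscreds.AssumeRoleProvider, `cache` plays aws.CredentialsCache); this is an illustration of the caching behaviour, not rclone's or the SDK's actual code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type credentials struct {
	token   string
	expires time.Time
}

type provider interface{ retrieve() credentials }

// stsProvider stands in for stscreds.AssumeRoleProvider: every retrieve()
// is a full AssumeRole network call, and it does no caching of its own.
type stsProvider struct{ calls int }

func (p *stsProvider) retrieve() credentials {
	p.calls++
	return credentials{token: "tok", expires: time.Now().Add(time.Hour)}
}

// cache plays the role of aws.CredentialsCache: it reuses credentials
// until they expire, so one AssumeRole call serves many S3 requests.
type cache struct {
	mu    sync.Mutex
	inner provider
	cur   credentials
}

func (c *cache) retrieve() credentials {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Now().Before(c.cur.expires) {
		return c.cur
	}
	c.cur = c.inner.retrieve()
	return c.cur
}

func main() {
	sts := &stsProvider{}
	c := &cache{inner: sts}
	for i := 0; i < 100; i++ { // 100 S3 requests
		c.retrieve()
	}
	fmt.Println(sts.calls) // without the cache wrapper this would be 100
}
```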
Nick Craig-Wood
0737599cd4 protondrive: fix segfault when copying files missing revision metadata
When a Proton Drive file has no active revision attributes,
readMetaDataForLink returns a nil FileSystemAttrs and Object.originalSize
is left as nil. Object.Open then dereferenced this nil pointer when
calling fs.FixRangeOption, causing a SIGSEGV during copy.

Use Object.Size() instead, which already implements the correct fallback
to the link size when originalSize is unavailable.

This updates the github.com/rclone/Proton-API-Bridge package to fix a
segfault when reading files with no metadata.

Fixes #9377
Fixes #9117
2026-05-05 15:02:34 +01:00
Nick Craig-Wood
3b2011c7a0 protondrive: route library logging through rclone's logger
Previously all log output produced by Proton-API-Bridge (stdlib log)
and go-proton-api (logrus + resty's logger) bypassed rclone's
logging: it ignored -v / -vv levels and didn't reach --log-file.

Add a small adapter implementing the resty.Logger / bridge Logger
shape that calls fs.Errorf / fs.Logf / fs.Debugf, and pass it via
the new Config.Logger hook. The bridge in turn forwards the same
value to go-proton-api's WithLogger option, so HTTP-layer warnings
and the formerly-hardcoded logrus warnings inside go-proton-api
also surface through rclone's log levels.
2026-05-05 09:43:39 +01:00
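The adapter shape described above can be sketched as follows; the interface mirrors resty's Logger, and the fs.Errorf/fs.Logf/fs.Debugf calls are simulated with level prefixes so the example is self-contained:

```go
package main

import "fmt"

// restyLogger mirrors the shape of resty's Logger interface.
type restyLogger interface {
	Errorf(format string, v ...any)
	Warnf(format string, v ...any)
	Debugf(format string, v ...any)
}

// adapter forwards each level to rclone's leveled logging; the prefixes
// here stand in for fs.Errorf, fs.Logf and fs.Debugf respectively.
type adapter struct{ out *[]string }

func (a adapter) Errorf(f string, v ...any) { *a.out = append(*a.out, "ERROR "+fmt.Sprintf(f, v...)) }
func (a adapter) Warnf(f string, v ...any)  { *a.out = append(*a.out, "NOTICE "+fmt.Sprintf(f, v...)) }
func (a adapter) Debugf(f string, v ...any) { *a.out = append(*a.out, "DEBUG "+fmt.Sprintf(f, v...)) }

func main() {
	var lines []string
	var l restyLogger = adapter{&lines}
	l.Warnf("token refresh in %ds", 30)
	fmt.Println(lines[0])
}
```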
Nick Craig-Wood
ef26e6d26d protondrive: route HTTP through rclone's transport
The Proton Drive backend constructed the upstream Proton-API-Bridge
without ever passing rclone's HTTP transport. As a result none of
rclone's HTTP flags reached Proton: --dump headers, --dump bodies,
--no-check-certificate, --user-agent, --bind, --ca-cert, --header,
--tpslimit etc. all silently did nothing for this remote, and HTTP
traffic was invisible to -vv.

Pass fshttp.NewTransport(ctx) through the new Config.Transport hook on
the bridge, which forwards it to the updated go-proton-api's
WithTransport option and so to the underlying resty client.
2026-05-05 09:43:39 +01:00
王一赫
18899a58f3 Add Huawei Drive support
Add Huawei Drive backend implementation and tests

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2026-05-01 13:41:07 +01:00
Nick Craig-Wood
7c3909589c s3: add Impossible Cloud as a new S3 provider 2026-05-01 12:47:07 +01:00
John Volk
306fb0a304 drime: fix listings of large directories
next_page is not currently being returned on listings, which causes the
rclone listing code to go wrong. It used to be returned, so this is
likely a regression in Drime.

This changes the page counter to be calculated from current_page and
last_page. On the first page request, last_page is just current_page+1,
and Drime appears to cap per_page at 200. As more pages are requested,
last_page increments by 1 until current_page = last_page.
2026-05-01 12:37:38 +01:00
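The termination condition above can be sketched like this; `listPage` simulates the quirky server behaviour described in the commit, not Drime's real API:

```go
package main

import "fmt"

// listPage simulates the paging quirk: next_page is missing, and
// last_page is reported as current_page+1 until the final page, where it
// equals current_page. total is the real number of pages (5 below).
func listPage(page, total int) (currentPage, lastPage int) {
	if page < total {
		return page, page + 1
	}
	return page, page
}

func main() {
	// Terminate on current_page == last_page rather than on next_page.
	pages := 0
	for page := 1; ; page++ {
		cur, last := listPage(page, 5)
		pages++
		if cur >= last {
			break
		}
	}
	fmt.Println(pages) // all 5 pages fetched, no early stop
}
```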
Yakov Till
d0c469c3c0 iclouddrive: add read only iCloud Photos support and SRP authentication
Add read-only iCloud Photos support to the existing iclouddrive
backend via `service = photos` config option.

Also includes auth improvements on top of #9209's SRP authentication.

**Photos features:**
- 3-level hierarchy: libraries (Personal + Shared Photo Library) →
  albums → photos/videos
- server-side smart albums (All Photos, Videos, Favorites,
  Screenshots, Live, Bursts, Panoramas, Slo-mo, Time-lapse, Portrait,
  Long Exposure, Animated, Hidden, Recently Deleted)
- User-created albums and nested album folders
- Live Photo `.MOV` companions as first-class entries
- Edited photo versions (`-edited` suffix) and RAW alternatives
- Duplicate filename dedup for camera counter wrap collisions
- Parallel cold listing for large albums
- Delta sync via CloudKit `changes/zone` - warm listings near-instant from disk cache
- Disk cache (libraries, albums, photos) with atomic writes for crash safety
- `ChangeNotify` support for FUSE mounts via `changes/zone` polling
- `ListR` support for `--fast-list` and recursive operations
- `--metadata` support - width, height, added-time, favorite, hidden
- Fresh download URLs per file - no stale URL failures on long copies
- FUSE mount documentation with recommended flags

**Auth improvements over #9209:**
- SMS 2FA fallback for users without trusted Apple devices
- Explicit push notification request - fixes iOS/macOS 26.4+ where 409
  no longer auto-pushes
- Thread safety for concurrent FUSE callers (mutexes on session and client state)
- Session endpoint caching - skips ~5s `/validate` round-trip on warm start
- `Disconnect` support - clears auth state + disk cache
- PCS cookie support for Advanced Data Protection accounts, including
  trusted-device approval for PCS cookies

Built on @coughlanio's Photos PoC (Closes #8734) and @mikegillan's SRP auth (#9209).

Fixes #7982
Co-authored-by: Chris Coughlan <chris@coughlan.io>
2026-04-27 16:55:31 +01:00
José Zúniga
c385d8586a internxt: implement multi-part uploads
Implement multipart upload support with configurable chunk size and concurrency options

Enable OpenChunkWriter with per-chunk encryption

Enhance multipart upload handling with new upload cutoff and error management for small files
2026-04-24 17:20:18 +01:00
Nick Craig-Wood
90028ab3da azurefiles: fix missing x-ms-file-request-intent header with OAuth - fixes #9367
The fix for #8241, which set FileRequestIntent=Backup on
service.ClientOptions so that the azfile SDK emits the
x-ms-file-request-intent header, was inadvertently dropped in this
commit (released in v1.73.0):

846f193806 azureblob,azurefiles: factor the common auth into a library

This broke azurefiles with OAuth (service principal secret,
certificate, MSI, etc.) with:

    400 MissingRequiredHeader: x-ms-file-request-intent

This restores it in the azurefiles SetClientOptions callback. The SDK
only emits the header for TokenCredential auth, so shared-key and SAS
paths are unaffected.
2026-04-24 15:55:53 +01:00
tdawe
ae5d388ea3 protondrive: align backend with newer Proton SDK stack
Send SDK-era app headers for move and upload compatibility.
2026-04-24 14:40:29 +01:00
Jan Heylen
a1ad9b3f46 s3: fix bucket creation failing on Ceph/radosgw
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:

    BucketAlreadyExists: 409 Conflict

when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).

The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.

The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.

This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.

Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
2026-04-23 19:13:29 +01:00
Chris
65ef7d8e6c s3: add HCP provider and list_versions_oldest_first quirk
Hitachi Content Platform (HCP) returns object versions in ascending
chronological order (oldest first), unlike the S3 standard which
returns them newest first. This causes --s3-version-at to return the
wrong version when used with HCP.

Add a new list_versions_oldest_first quirk which reverses the Versions
and DeleteMarkers lists before merging, so the existing versionAt
filter works correctly regardless of backend sort order.

Add HCP as a new provider with this quirk enabled by default.

See: https://docs.hitachivantara.com/r/en-us/content-platform/9.6.x/mk-95hcph002/using-the-hitachi-api-for-amazon-s3/working-with-buckets/listing-bucket-contents-version-2
2026-04-20 13:45:18 +01:00
Andrew Gunnerson
c744949d91 mega: fix crash when logging in with previous auth keys fails
When Mega.LoginWithKeys() fails to make the API request, it leaves the
object in a state where Mega.FS.root is nil because it could never query
any information about the filesystem tree. An easy way for this to
happen is if the device is not connected to the internet.

Previously, these failures would be ignored, but Fs.findRoot() on the
rclone side is written in a way that assumes the go-mega filesystem will
have a non-nil root. This leads to an immediate nil pointer dereference
when NewFs() calls Fs.findRoot().

This commit fixes the problem by making LoginWithKeys() failures hard
failures, similar to the MultiFactorLogin() path.

Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
2026-04-20 13:37:45 +01:00
Nick Craig-Wood
94e7adaeba pcloud: fix recursive listing from the root - fixes #9315
pCloud now disallows recursive listing from the root, so this change
lists the root normally and then uses recursive listings for
subdirectories.
2026-04-20 12:16:10 +01:00
Nick Craig-Wood
f191448b0d rc: flip auth default so all endpoints require auth unless opted out
Replace AuthRequired bool with NoAuth bool on the rc.Call struct and
flip the auth check logic. Previously endpoints were unauthenticated
by default and had to opt in with AuthRequired: true, which led to
security vulnerabilities when developers forgot to set the flag.

Now all endpoints require authentication by default. Only explicitly
safe read-only endpoints are marked with NoAuth: true:

- rc/noop
- rc/error
- rc/list
- core/version
- core/stats
- core/group-list
- core/transferred
- core/du
- cache/stats
- vfs/list
- vfs/stats
- vfs/queue
- job/status
- job/list

See GHSA-25qr-6mpr-f7qx, GHSA-jfwf-28xr-xw6q
2026-04-19 13:31:27 +01:00
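The fail-closed shape of the change can be sketched as follows; `call` and `allowed` are simplified stand-ins for rc.Call and the server's auth check:

```go
package main

import "fmt"

// call mirrors the shape of the change: the old opt-in AuthRequired bool
// is replaced by an opt-out NoAuth bool, so a forgotten flag now fails
// closed (requires auth) instead of open.
type call struct {
	Path   string
	NoAuth bool
}

func allowed(c call, authenticated bool) bool {
	return authenticated || c.NoAuth
}

func main() {
	fmt.Println(allowed(call{Path: "core/version", NoAuth: true}, false)) // safe endpoint stays open
	fmt.Println(allowed(call{Path: "config/create"}, false))              // default now requires auth
	fmt.Println(allowed(call{Path: "config/create"}, true))               // authenticated callers pass
}
```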
Nick Craig-Wood
e12c250705 s3: fix empty delimiter parameter rejected by Archiware P5 server
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.

Fixes #9342
2026-04-15 18:01:14 +01:00
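The nil-versus-empty-pointer distinction is easy to see with net/url; this sketch is an illustration of the serialization difference, not the backend's actual request-building code:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildQuery shows the distinction: a pointer to "" serializes as an
// empty "delimiter=" parameter, while nil omits it entirely, which is
// what strict servers like Archiware P5 need for recursive listings.
func buildQuery(delimiter *string) string {
	v := url.Values{}
	if delimiter != nil {
		v.Set("delimiter", *delimiter)
	}
	return v.Encode()
}

func main() {
	empty := ""
	fmt.Printf("%q\n", buildQuery(&empty)) // "delimiter="
	fmt.Printf("%q\n", buildQuery(nil))    // ""
}
```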
ZRHan
3be3347e86 webdav: optimize performance by using Depth=0 for metadata requests 2026-04-15 17:40:27 +01:00
Nick Craig-Wood
dd5250ca55 azureblob: add --azureblob-decompress flag to download gzip-encoded files
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.

If --azureblob-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.

If --azureblob-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.

Fixes #9337
2026-04-13 18:05:52 +01:00
Nick Craig-Wood
bbd7297b33 azureblob/auth: add Microsoft Partner Network User-Agent prefix
Set the User-Agent to include the APN prefix for Azure backends
(azureblob, azurefiles, onelake) to identify rclone as a Microsoft
Partner. The User-Agent is now:

    APN/1.0 rclone/1.0 rclone/<version>
2026-04-13 15:27:20 +01:00
Nick Craig-Wood
7b8994ab32 vfs: add context parameter to New() for config propagation
Add a ctx parameter to vfs.New() so callers can pass in context
carrying ConfigInfo and FilterInfo. The context is stripped of
cancellation but config and filter values are preserved into a fresh
background context.
2026-04-13 12:48:38 +01:00
a1pcm
3ad0178b5b drime: fix User.EntryPermissions JSON unmarshalling
`json:"entry_permissions"` is known to be either an empty [] or an
object of shape {string: boolean}. This may have been a breaking API
change on Drime's side. Because EntryPermissions is not used, its type
was changed to `any` to capture both cases; the alternative would be to
implement custom unmarshalling for that type.
2026-04-10 20:48:21 +01:00
Enduriel
1a924aa746 filen: make multi-threaded upload chunks individually retryable 2026-04-10 20:46:55 +01:00
Mozi
082031cc85 s3: fix TencentCOS CDN endpoint failing on bucket check
The Global Acceleration Endpoint (cos.accelerate.myqcloud.com) of
Tencent COS does not seem to support "CreateBucket" (maybe also other
bucket management operations). Since the acceleration functionality must
be enabled per-bucket in the Tencent Cloud console, the bucket will
always exist before this endpoint is used, so this check can be safely
skipped.

Now "no_check_bucket = true" is set automatically when using this
endpoint.

Why do this in "NewFs()"? So that on-the-fly remotes (connection string
remotes), for example ":s3,provider=TencentCOS,...:...", are also
fixed.

Why no unit test: I can't find a good way to test "NewFs()" without
leveraging live endpoints. I think we can extract all existing mutations
for different providers (e.g., AWS, Fastly, and Rabata) from "NewFs()"
to a new function in the future.

Some Tencent docs about this CDN endpoint:
- English: Global Acceleration Endpoint | https://www.tencentcloud.com/pt/document/product/436/40700
- Chinese: 对象存储 全球加速概述_腾讯云 | https://cloud.tencent.com/document/product/436/38866

Assisted-By: OpenCode
2026-04-09 17:36:29 +01:00
Chris
40b064993e s3: fix --s3-versions flag ignored by cleanup-hidden when GetBucketVersioning fails
When a user has --s3-versions set but lacks the s3:GetBucketVersioning
permission, GetBucketVersioning returns an error and isVersioned() caches
the result as false. This caused CleanUpHidden (backend cleanup-hidden) to
silently exit with "bucket is not versioned so not removing old versions",
ignoring the user's explicit --s3-versions flag.

Fix this by trusting the explicit --s3-versions flag in purge(), bypassing
the GetBucketVersioning check when the user has explicitly declared the
bucket is versioned.
2026-04-09 17:08:03 +01:00
Brais Couce
d15f1142ef iclouddrive: fix 'directory not found' error when the directory contains accent marks 2026-04-09 17:04:20 +01:00
Nick Craig-Wood
3658470022 sftp: warn the user if no host key validation is configured
Previously ssh.InsecureIgnoreHostKey() was set unconditionally as the
default HostKeyCallback with no indication to the user.

This logs a warning pointing users to the documentation on how to
enable host key validation.

See: https://github.com/rclone/rclone/security/code-scanning/167
2026-04-09 17:00:45 +01:00
Nick Craig-Wood
20eaad4b6d linkbox: fix downloading files by using web API - fixes #8665
The Linkbox open API (/api/open/file_search) no longer returns download
URLs, breaking all downloads. This switches to using the web API
(/api/file/my_file_list/web) which requires email+password authentication
but returns working download URLs.

This will unfortunately require changing your existing rclone config.

- Add email, password, and web_token config options
- Add web API login via /api/user/login_email with token caching and retry
- Create separate CDN HTTP client with HTTP/2 disabled and browser
  User-Agent to avoid CDN fingerprint blocking
- Remove searchOK regex and name-filtering (web API doesn't support it)
2026-04-08 08:49:42 +01:00
albertony
cb9bdf629c jottacloud: add encoding of percent character to default backend encoding
Fixes #9153
2026-04-06 08:28:28 +01:00
Mike GIllan
4a00a4dc4b iclouddrive: lowercase Apple ID for SRP authentication
Apple IDs are case-insensitive, but the SRP proof computation (M1)
hashes the username client-side. The old plaintext signin let the
server normalize the case, but with SRP the client must match.
Lowercase the Apple ID before use so mixed-case IDs authenticate
correctly.

Reported-by: ArturKlauser
2026-04-02 17:52:56 +01:00
Xiangzhe
2610beb18d iclouddrive: use dynamic origin for SRP auth headers
This fixes China mainland iCloud authentication by deriving the Origin
and Referer headers from authEndpoint instead of hardcoding idmsa.apple.com.

Fixes compatibility with PR #8818 (China region support) and PR #9209
(SRP authentication).

Signed-off-by: Xiangzhe <xiangzhedev@gmail.com>
2026-04-02 17:52:56 +01:00
Mike GIllan
35e4f60548 iclouddrive: replace plaintext signin with SRP authentication
Apple has deprecated the legacy /appleauth/auth/signin endpoint and
now blocks it, causing "Invalid Session Token" errors for all users
when their trust token expires. The browser login flow now requires
SRP (Secure Remote Password), a cryptographic handshake that never
transmits the password.

Replace Session.SignIn() with a multi-step SRP-6a flow:
1. authStart - initialize session at /authorize/signin
2. authFederate - submit account name to /federate
3. authSRPInit - exchange client public value for salt/B at /signin/init
4. authSRPComplete - send M1/M2 proofs to /signin/complete

The SRP implementation uses the RFC 5054 2048-bit group with SHA-256
and Apple's NoUserNameInX variant. Password derivation supports both
s2k and s2k_fo protocols via SHA-256 + PBKDF2.

The 2FA and trust token flow is unchanged. Auth headers for all
idmsa.apple.com requests now include X-Apple-Auth-Attributes,
X-Apple-Frame-Id, and use Origin/Referer of https://idmsa.apple.com.

Fixes #8587
2026-04-02 17:52:56 +01:00
jinkeyuu
e9fddaabeb s3: add UCloud Object Storage provider (#9230)
Co-authored-by: jinyu.han <jinyu.han@ucloud.cn>
2026-03-31 11:45:40 +01:00
Chris
434edba275 s3: fix regression where PutObject fails with non-seekable readers
Commit a3e1312d accidentally replaced io.NopCloser(in) with a bare
io.Reader when assigning req.Body in uploadSinglepartPutObject.

rclone wraps upload readers in an accounting.Account for progress
tracking. When the AWS SDK calls Seek on the body, Account.Seek does
a type assert on the inner reader. With a bare io.Reader the type
is unexpected and causes:

  operation error S3: PutObject, serialization failed: internal error:
  Seek not implemented for io.nopCloser

With io.NopCloser(in) the type assert works correctly for both seekable
and non-seekable readers.

Restore io.NopCloser(in) to wrap the reader correctly in all cases.

Verified by running both before (regression confirmed) and after (fix
confirmed):
  go test ./backend/s3/... ./fs/operations/... -remote TestS3:
2026-03-31 10:22:27 +01:00
Bjoern Franke
7a63990df2 Add OVHcloud storage classes
Added OVHcloud storage classes according to
https://help.ovhcloud.com/csm/en-ie-public-cloud-storage-s3-choosing-right-storage-class?id=kb_article_view&sysparm_article=KB0047293
2026-03-25 21:48:56 +00:00
Patrick Farrell
7ca667d35d local: remove fadvise calls that cause spinlock contention
Remove the POSIX_FADV_DONTNEED and POSIX_FADV_SEQUENTIAL calls
from the local backend. The DONTNEED calls cause severe spinlock
contention on parallel file systems (and any system with many
concurrent transfers), because each call triggers per-page cache
teardown under a global lock.

Observed on a 256-core system running rclone with 64 parallel
transfers over Lustre: 69% of all CPU cycles were spent in
kernel spinlock contention from the fadvise path, with effective
throughput well below hardware capability.

The kernel's own page reclaim (kswapd) handles eviction more
efficiently from a single context. Since rclone does not always
read files sequentially (e.g. multipart uploads rewind and
re-read blocks), FADV_SEQUENTIAL was also not reliably correct.

This is consistent with the non-Linux behavior (which never
called fadvise) and with restic's decision to remove identical
code (restic/restic#670).

Fixes #7886
2026-03-24 10:17:36 +00:00
ZRHan
523a29a4e9 webdav: request only required properties in listAll to improve performance
This PR optimizes the PROPFIND requests in the webdav backend to only ask for
the specific properties rclone actually needs.

Currently, the generic webdav backend sends an empty XML body during directory
listing (listAll), which causes the server to fall back to allprops by default.
This forces the server to return properties we never use, such as
getcontenttype.

Fetching getcontenttype can be a very expensive operation on the server side.
For instance, in the official golang.org/x/net/webdav library, determining the
content type requires the server to open the file and read the first 500 bytes.

For a directory with 1,300 files in my environment, rclone ls time dropped from
~30s to ~4s (as fast as native ls).

For backwards compatibility this change only applies to the "other"
vendor, but it could be expanded to more vendors.
2026-03-16 17:29:45 +00:00
Chris
a3e1312d9d s3: fix Content-MD5 for Object Lock uploads and add GCS quirk
AWS S3 requires Content-MD5 for PutObject with Object Lock parameters.
Since rclone passes a non-seekable io.Reader, the SDK cannot compute
checksums automatically. Buffer the body and compute MD5 manually for
singlepart PutObject and presigned request uploads when Object Lock
parameters are set. Multipart uploads are unaffected as Object Lock
headers go on CreateMultipartUpload which has no body.

Add object_lock_supported provider quirk (default true) to allow
skipping Object Lock integration tests on providers with incomplete
S3 API support. Set to false for GCS which uses non-standard
x-goog-bypass-governance-retention header and doesn't implement
PutObjectLegalHold/GetObjectLegalHold.

Add Multipart and Presigned subtests to Object Lock integration tests
to cover all three upload paths.

Fixes #9199
2026-03-14 22:18:43 +00:00
Marco Ferretti
e987d4f351 s3: add multi tenant support for Cubbit 2026-03-14 22:15:47 +00:00
Bhagyashreek8
69ccbacf30 s3: IBM COS: provide ibm_iam_endpoint as a configurable param for IBM IAM-based auth 2026-03-12 10:04:16 +00:00
Nick Craig-Wood
f0dfe9280c b2: add server side copy real time accounting 2026-03-03 14:01:11 +00:00