Add three new regions and their endpoints for Fastly Object Storage:
- eu-west-1 (Paris)
- us-east-1 (Virginia)
- us-west-1 (Oregon)
These are distinct from the existing us-east, us-west and eu-central
endpoints, which are kept in place.
The stscreds.AssumeRoleProvider from AWS SDK Go v2 does not cache
credentials by itself. The SDK only auto-wraps providers with
aws.CredentialsCache when they are loaded via
config.LoadDefaultConfig; a provider assigned directly to
aws.Config.Credentials must be wrapped manually, as documented on
stscreds.NewAssumeRoleProvider.
Without the cache, configurations using role_arn would call AssumeRole
once per S3 request, flooding STS and CloudTrail.
See: https://forum.rclone.org/t/aws-iam-roles-credentials-arent-cached/53732
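A minimal sketch of the manual wrapping, assuming typical aws-sdk-go-v2
usage (stsClient and roleARN stand in for whatever the loaded config
produced):
    import (
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/credentials/stscreds"
        "github.com/aws/aws-sdk-go-v2/service/sts"
    )

    func cachedAssumeRole(cfg aws.Config, stsClient *sts.Client, roleARN string) aws.Config {
        // AssumeRoleProvider does no caching on its own.
        provider := stscreds.NewAssumeRoleProvider(stsClient, roleARN)
        // aws.CredentialsCache reuses the temporary credentials until they
        // are close to expiry instead of calling AssumeRole per request.
        cfg.Credentials = aws.NewCredentialsCache(provider)
        return cfg
    }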
Before this change, uploading to an existing bucket on Ceph (radosgw)
could fail with:
BucketAlreadyExists: 409 Conflict
when rclone attempted to create the destination bucket (which it does
by default unless --s3-no-check-bucket is set).
The Ceph rgw S3 implementation never returns BucketAlreadyOwnedByYou;
it returns BucketAlreadyExists for every CreateBucket on an existing
bucket, even one the caller owns. With the use_already_exists quirk
set to true, rclone wraps BucketAlreadyExists as a non-retriable error
and aborts the transfer.
The Ceph provider used to set useAlreadyExists = true explicitly. When
the s3 providers were converted to YAML files in f28c83c, Ceph did not
set use_already_exists so it picked up the default of true (via
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)),
which matched the previous behaviour but is the wrong setting for
Ceph.
This sets use_already_exists: false for the Ceph provider so rclone
ignores BucketAlreadyExists on CreateBucket and continues with the
upload.
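A rough sketch of the resulting behaviour (illustrative only, not the
actual rclone code; err is the error returned by CreateBucket and
useAlreadyExists reflects the quirk):
    import (
        "errors"
        "github.com/aws/smithy-go"
    )

    func ignoreCreateBucketError(err error, useAlreadyExists bool) error {
        var apiErr smithy.APIError
        if errors.As(err, &apiErr) {
            switch apiErr.ErrorCode() {
            case "BucketAlreadyOwnedByYou":
                return nil // always safe: the caller owns the bucket
            case "BucketAlreadyExists":
                if !useAlreadyExists {
                    // Ceph cannot distinguish "ours" from "someone
                    // else's", so carry on and let the object PUT fail
                    // if the bucket really belongs to another user.
                    return nil
                }
            }
        }
        return err
    }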
Side effect: this partially reverts #7871 for the Ceph provider. If a
user tries to create a bucket on Ceph that is actually owned by
someone else, rclone will no longer fail fast at CreateBucket time;
the subsequent object PUT will fail instead. This is unavoidable on
Ceph since the server does not distinguish "already owned by you" from
"owned by someone else".
Some S3-compatible servers (e.g. Archiware P5) reject requests with an
empty `?delimiter=` query parameter. For recursive listings, pass `nil`
instead of a pointer to an empty string so the parameter is omitted
entirely from the request.
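A sketch of the difference, using aws-sdk-go-v2 field names (bucket is
illustrative):
    // Before: the delimiter was sent even when empty, which serialises
    // as "?delimiter=" and is rejected by e.g. Archiware P5.
    req := &s3.ListObjectsV2Input{
        Bucket:    aws.String(bucket),
        Delimiter: aws.String(""),
    }
    // After: for recursive listings leave the pointer nil so the query
    // parameter is omitted from the request entirely.
    req.Delimiter = nil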
Fixes #9342
The Global Acceleration Endpoint (cos.accelerate.myqcloud.com) of
Tencent COS does not appear to support "CreateBucket" (and possibly
other bucket management operations). Since acceleration must be enabled
per bucket in the Tencent Cloud console, the bucket will always exist
before this endpoint is used, so the bucket check can safely be
skipped.
With this change, "no_check_bucket = true" is set automatically when
this endpoint is used.
Why "NewFs()": doing it there means on-the-fly remotes (connection
string remotes), for example ":s3,provider=TencentCOS,...:...", are
fixed as well.
Why no unit test: I can't find a good way to test "NewFs()" without
leveraging live endpoints. I think we can extract all existing mutations
for different providers (e.g., AWS, Fastly, and Rabata) from "NewFs()"
to a new function in the future.
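A rough sketch of the check in NewFs() (illustrative, not the exact
code):
    // The acceleration endpoint cannot create buckets, and acceleration
    // can only be enabled on a bucket that already exists, so skip the
    // bucket existence check.
    if opt.Provider == "TencentCOS" &&
        strings.Contains(opt.Endpoint, "cos.accelerate.myqcloud.com") {
        opt.NoCheckBucket = true
    }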
Some Tencent docs about this CDN endpoint:
- English: Global Acceleration Endpoint | https://www.tencentcloud.com/pt/document/product/436/40700
- Chinese: 对象存储 全球加速概述_腾讯云 | https://cloud.tencent.com/document/product/436/38866
Assisted-By: OpenCode
When a user has --s3-versions set but lacks the s3:GetBucketVersioning
permission, GetBucketVersioning returns an error and isVersioned() caches
the result as false. This caused CleanUpHidden (backend cleanup-hidden) to
silently exit with "bucket is not versioned so not removing old versions",
ignoring the user's explicit --s3-versions flag.
Fix this by trusting the explicit --s3-versions flag in purge(), bypassing
the GetBucketVersioning check when the user has explicitly declared the
bucket is versioned.
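A sketch of the idea in purge() (illustrative, not the exact rclone
code):
    // Trust an explicit --s3-versions flag; only ask the bucket when
    // the user has not already declared it versioned.
    versioned := f.opt.Versions
    if !versioned {
        versioned = f.isVersioned(ctx)
    }
    if !versioned {
        fs.Logf(f, "bucket is not versioned so not removing old versions")
        return nil
    }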
Commit a3e1312d accidentally replaced io.NopCloser(in) with a bare
io.Reader when assigning req.Body in uploadSinglepartPutObject.
rclone wraps upload readers in an accounting.Account for progress
tracking. When the AWS SDK calls Seek on the body, Account.Seek does
a type assert on the inner reader. With a bare io.Reader the type
is unexpected and causes:
operation error S3: PutObject, serialization failed: internal error:
Seek not implemented for io.nopCloser
With io.NopCloser(in) the type assert works correctly for both seekable
and non-seekable readers.
Restore io.NopCloser(in) to wrap the reader correctly in all cases.
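Sketch of the restored assignment (req is the PutObject request and in
is the upload reader):
    // Wrapping in io.NopCloser keeps the body in the shape that
    // Account.Seek's type assertion expects, for both seekable and
    // non-seekable readers.
    req.Body = io.NopCloser(in)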
Verified by running both before (regression confirmed) and after (fix
confirmed):
go test ./backend/s3/... ./fs/operations/... -remote TestS3:
AWS S3 requires Content-MD5 for PutObject with Object Lock parameters.
Since rclone passes a non-seekable io.Reader, the SDK cannot compute
checksums automatically. Buffer the body and compute MD5 manually for
singlepart PutObject and presigned request uploads when Object Lock
parameters are set. Multipart uploads are unaffected as Object Lock
headers go on CreateMultipartUpload which has no body.
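A minimal sketch of the buffering, assuming a non-seekable reader in
(not the exact rclone code):
    // Read the whole body so the MD5 can be computed up front.
    buf, err := io.ReadAll(in)
    if err != nil {
        return err
    }
    sum := md5.Sum(buf)
    // Replace the body with a seekable reader and attach the checksum
    // that requests carrying Object Lock parameters require.
    req.Body = bytes.NewReader(buf)
    req.ContentMD5 = aws.String(base64.StdEncoding.EncodeToString(sum[:]))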
Add object_lock_supported provider quirk (default true) to allow
skipping Object Lock integration tests on providers with incomplete
S3 API support. Set to false for GCS which uses non-standard
x-goog-bypass-governance-retention header and doesn't implement
PutObjectLegalHold/GetObjectLegalHold.
Add Multipart and Presigned subtests to Object Lock integration tests
to cover all three upload paths.
Fixes #9199
Add AU East 1, EU South 1, JP Central 1, UK East 1, and US Central 1
regions and endpoints for Fastly Object Storage.
Also sort the entries alphabetically.
Add support for S3 Object Lock with the following new options:
- --s3-object-lock-mode: set retention mode (GOVERNANCE/COMPLIANCE/copy)
- --s3-object-lock-retain-until-date: set retention date (RFC3339/duration/copy)
- --s3-object-lock-legal-hold-status: set legal hold (ON/OFF/copy)
- --s3-bypass-governance-retention: bypass GOVERNANCE lock on delete
- --s3-bucket-object-lock-enabled: enable Object Lock on bucket creation
- --s3-object-lock-set-after-upload: apply lock via separate API calls
The special value "copy" preserves the source object's setting when used
with --metadata flag, enabling scenarios like cloning objects from
COMPLIANCE to GOVERNANCE mode while preserving the original retention date.
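A rough sketch of how the "copy" value could be resolved (illustrative
only; the srcMetadata lookup and its key name are hypothetical):
    mode := opt.ObjectLockMode
    if mode == "copy" {
        // With --metadata set, take the value from the source object's
        // metadata instead of the flag ("object-lock-mode" is a
        // hypothetical key used here for illustration).
        mode = srcMetadata["object-lock-mode"]
    }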
Includes integration tests that create a temporary Object Lock bucket covering:
- Retention Mode and Date
- Legal Hold
- Apply settings after upload
- Override protections using bypass-governance flag
The tests are gracefully skipped on providers that do not support Object Lock.
Fixes #4683 Closes #7894 #7893 #8866
StackPath's object storage service has been discontinued and its S3
endpoints are no longer operational.
Before this change, users could select StackPath as an S3 provider
during configuration, but connections would fail as the endpoints no
longer respond and the service has been discontinued.
After this change, StackPath is removed from the list of supported
S3 providers, preventing users from attempting to configure a
non-functional service.
Fixes#9148
- Add new Fastly provider with US East, US West, and EU Central regions
- Add `etag_is_not_md5` quirk for providers with mandatory encryption
- Disable server-side copy for Fastly (not supported)
#8947 implemented support for the If-Match and If-None-Match headers
for S3 PUT Object requests; however, this support did not extend to
multipart copy and upload requests. For those, the headers are applied
by including them in the CompleteMultipartUpload request.
This also updates the auto-generated code, which was needed for
multipart copy.
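A sketch of the completing request with a conditional header attached
(aws-sdk-go-v2 field names; bucket, key, uploadID and parts are
illustrative):
    _, err := client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
        Bucket:          aws.String(bucket),
        Key:             aws.String(key),
        UploadId:        aws.String(uploadID),
        MultipartUpload: &types.CompletedMultipartUpload{Parts: parts},
        // Only complete the upload if no object already exists at the key.
        IfNoneMatch: aws.String("*"),
    })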
The If-Match and If-None-Match headers were being dropped rather than
passed through on the PutObject request to S3. These headers make
requests conditional, which allows AWS S3 bucket policies to prevent
objects from being overwritten.
Before this change, you had to modify a fragile data structure
containing all the providers. This often led to entries being out of
order, duplicates, and merge conflicts, and left the changes for a
single provider scattered across the file.
After this change, new providers are defined in an easy to edit YAML file,
one per provider.
The config output has been tested before and after for all providers
and any changes are cosmetic only.