Compare commits

...

18 Commits

Author SHA1 Message Date
Viktor Scharf
2b4a88feb9 trigger full-ci 2025-11-06 17:44:21 +01:00
Ralf Haferkamp
96042752bb Bump reva
Fixes: #1774
2025-11-06 17:24:42 +01:00
Viktor Scharf
4ffb79b680 lint fix 2025-11-06 16:31:21 +01:00
Viktor Scharf
a0f90fee1a Sync share before action 2025-11-06 16:31:21 +01:00
opencloudeu
d8859757d9 [tx] updated from transifex 2025-11-06 00:02:59 +00:00
Ralf Haferkamp
bb776c7556 fix typo
Co-authored-by: Benedikt Kulmann <benedikt@kulmann.biz>
2025-11-05 11:57:48 +01:00
Ralf Haferkamp
177afc41c7 fix: set global signing secret fallback correctly
When falling back to the transfer secret we need to set the global
cfg.URLSigningSecret as well, otherwise the Validate() step fails.
2025-11-05 11:57:48 +01:00
Ralf Haferkamp
500487f2fa test: fix a few more collaborative posixfs tests (#1777)
wait for postprocessing to complete before accessing files on disk

Related: #1747
2025-11-04 16:46:34 +01:00
Ralf Haferkamp
8a7d51ca88 Apply typo fixes from code review
Co-authored-by: Michael Barz <michael.barz@zeitgestalten.eu>
2025-11-04 16:45:08 +01:00
Ralf Haferkamp
a2f9cadd9f feat(collaboration): Set IsAnonymousUser flag for Collabora
Closes: #796
2025-11-04 16:45:08 +01:00
Ralf Haferkamp
30ef495c92 feat(collaboration): Set IsAdminUser property for Collabora
This sets the 'IsAdminUser' property correctly in the CheckFileInfo
response. For that, a new permission 'WebOffice.Manage' is introduced. By
default this permission is only assigned to the Admin role.
Users with this permission get access to certain admin features in
Collabora (e.g. the 'Server Audit' dashboard)

Closes: #796
2025-11-04 16:45:08 +01:00
dependabot[bot]
2da203613a build(deps): bump github.com/gabriel-vasile/mimetype (#1775)
Bumps [github.com/gabriel-vasile/mimetype](https://github.com/gabriel-vasile/mimetype) from 1.4.10 to 1.4.11.
- [Release notes](https://github.com/gabriel-vasile/mimetype/releases)
- [Commits](https://github.com/gabriel-vasile/mimetype/compare/v1.4.10...v1.4.11)

---
updated-dependencies:
- dependency-name: github.com/gabriel-vasile/mimetype
  dependency-version: 1.4.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-04 16:02:02 +01:00
Ralf Haferkamp
fcff855e16 feat: Add fallback for OC_URL_SIGNING_SECRET
When OC_URL_SIGNING_SECRET is not set, fall back to the value of the
reva transfer token. This allows upgrades of an instance that was created
before OC_URL_SIGNING_SECRET was introduced to be handled more gracefully.

Unfortunately this still only works reliably for single-instance
deployments (or instances that were bootstrapped using 'opencloud init'),
which are guaranteed to have the transfer token available.

When running 'proxy' and 'ocdav' as separate services, the upgrade might
still require manual intervention.
2025-11-04 16:01:00 +01:00
Ralf Haferkamp
37609e52df feat!: Make the url signing secret a mandatory config option
This is required to allow the web office to download images for
insertion into documents.

The secret is generated by `opencloud init`, and the server now refuses
to start without a secret being set. (Breaking Change)

Also the setting is now moved to the shared options as all involved
services need the same secret to work properly.

Related: https://github.com/opencloud-eu/web/issues/704
2025-11-04 16:01:00 +01:00
Ralf Haferkamp
589cee4ab3 collaboration: Enable InsertRemoteImage option
Related: https://github.com/opencloud-eu/web/issues/704
2025-11-04 16:01:00 +01:00
dependabot[bot]
dcaa1ceadb build(deps): bump github.com/nats-io/nats-server/v2
Bumps [github.com/nats-io/nats-server/v2](https://github.com/nats-io/nats-server) from 2.12.0 to 2.12.1.
- [Release notes](https://github.com/nats-io/nats-server/releases)
- [Changelog](https://github.com/nats-io/nats-server/blob/main/.goreleaser.yml)
- [Commits](https://github.com/nats-io/nats-server/compare/v2.12.0...v2.12.1)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats-server/v2
  dependency-version: 2.12.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-04 15:41:03 +01:00
dependabot[bot]
6e0bb09aff build(deps): bump github.com/onsi/ginkgo/v2 from 2.27.1 to 2.27.2
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.27.1 to 2.27.2.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.27.1...v2.27.2)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.27.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-04 12:51:49 +01:00
Viktor Scharf
59eb411024 correct STORAGE_USERS_POSIX_WATCH_FS env typo in CI (#1746)
* correct env typo

* set STORAGE_USERS_POSIX_SCAN_DEBOUNCE_DELAY=0

* tests: wait for postprocessing to finish before accessing files

---------

Co-authored-by: Ralf Haferkamp <r.haferkamp@opencloud.eu>
2025-11-04 10:22:06 +01:00
181 changed files with 11076 additions and 2583 deletions

View File

@@ -1481,7 +1481,7 @@ def multiServiceE2ePipeline(ctx, watch_fs_enabled = False):
}
if watch_fs_enabled:
extra_server_environment["STORAGE_USES_POSIX_WATCH_FS"] = True
extra_server_environment["STORAGE_USERS_POSIX_WATCH_FS"] = True
storage_users_environment = {
"OC_CORS_ALLOW_ORIGINS": "%s,https://%s:9201" % (OC_URL, OC_SERVER_NAME),
@@ -2073,6 +2073,7 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, depend
"WEB_DEBUG_ADDR": "0.0.0.0:9104",
"WEBDAV_DEBUG_ADDR": "0.0.0.0:9119",
"WEBFINGER_DEBUG_ADDR": "0.0.0.0:9279",
"STORAGE_USERS_POSIX_SCAN_DEBOUNCE_DELAY": 0,
}
if storage == "posix":
@@ -2110,7 +2111,7 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, depend
environment["SEARCH_EXTRACTOR_CS3SOURCE_INSECURE"] = True
if watch_fs_enabled:
environment["STORAGE_USES_POSIX_WATCH_FS"] = True
environment["STORAGE_USERS_POSIX_WATCH_FS"] = True
# Pass in "default" accounts_hash_difficulty to not set this environment variable.
# That will allow OpenCloud to use whatever its built-in default is.

22
go.mod
View File

@@ -18,7 +18,7 @@ require (
github.com/davidbyttow/govips/v2 v2.16.0
github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8
github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e
github.com/gabriel-vasile/mimetype v1.4.10
github.com/gabriel-vasile/mimetype v1.4.11
github.com/ggwhite/go-masker v1.1.0
github.com/go-chi/chi/v5 v5.2.3
github.com/go-chi/render v1.0.3
@@ -55,17 +55,17 @@ require (
github.com/mitchellh/mapstructure v1.5.0
github.com/mna/pigeon v1.3.0
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826
github.com/nats-io/nats-server/v2 v2.12.0
github.com/nats-io/nats-server/v2 v2.12.1
github.com/nats-io/nats.go v1.47.0
github.com/oklog/run v1.2.0
github.com/olekukonko/tablewriter v1.1.0
github.com/onsi/ginkgo v1.16.5
github.com/onsi/ginkgo/v2 v2.27.1
github.com/onsi/ginkgo/v2 v2.27.2
github.com/onsi/gomega v1.38.2
github.com/open-policy-agent/opa v1.9.0
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251030154544-cac8a0257da6
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251106122902-c13e27f55362
github.com/opensearch-project/opensearch-go/v4 v4.5.0
github.com/orcaman/concurrent-map v1.0.0
github.com/pkg/errors v0.9.1
@@ -161,7 +161,7 @@ require (
github.com/bombsimon/logrusr/v3 v3.1.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/ceph/go-ceph v0.35.0 // indirect
github.com/ceph/go-ceph v0.36.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cevaris/ordered_map v0.0.0-20190319150403-3adeae072e73 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
@@ -234,9 +234,9 @@ require (
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/gomodule/redigo v1.9.2 // indirect
github.com/gomodule/redigo v1.9.3 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/go-tpm v0.9.5 // indirect
github.com/google/go-tpm v0.9.6 // indirect
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
github.com/google/renameio/v2 v2.0.0 // indirect
github.com/gookit/goutil v0.7.1 // indirect
@@ -257,6 +257,7 @@ require (
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.11 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/kovidgoyal/go-parallel v1.0.1 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lestrrat-go/blackmagic v1.0.4 // indirect
@@ -281,10 +282,10 @@ require (
github.com/mendsley/gojwk v0.0.0-20141217222730-4d5ec6e58103 // indirect
github.com/miekg/dns v1.1.57 // indirect
github.com/mileusna/useragent v1.3.5 // indirect
github.com/minio/crc64nvme v1.0.2 // indirect
github.com/minio/crc64nvme v1.1.0 // indirect
github.com/minio/highwayhash v1.0.3 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.95 // indirect
github.com/minio/minio-go/v7 v7.0.97 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
@@ -371,14 +372,13 @@ require (
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.1 // indirect
go.uber.org/automaxprocs v1.6.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/mod v0.28.0 // indirect
golang.org/x/sys v0.37.0 // indirect
golang.org/x/time v0.13.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.37.0 // indirect
google.golang.org/genproto v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4 // indirect

46
go.sum
View File

@@ -210,8 +210,8 @@ github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1x
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/ceph/go-ceph v0.35.0 h1:wcDUbsjeNJ7OfbWCE7I5prqUL794uXchopw3IvrGQkk=
github.com/ceph/go-ceph v0.35.0/go.mod h1:ILF8WKhQQ2p2YuX1oWigkmsfT39U8T/HS2NrqxExq2s=
github.com/ceph/go-ceph v0.36.0 h1:IDE4vEF+4fmjve+CPjD1WStgfQ+Lh6vD+9PMUI712KI=
github.com/ceph/go-ceph v0.36.0/go.mod h1:fGCbndVDLuHW7q2954d6y+tgPFOBnRLqJRe2YXyngw4=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -349,8 +349,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/gabriel-vasile/mimetype v1.4.10 h1:zyueNbySn/z8mJZHLt6IPw0KoZsiQNszIpU+bX4+ZK0=
github.com/gabriel-vasile/mimetype v1.4.10/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gabriel-vasile/mimetype v1.4.11 h1:AQvxbp830wPhHTqc1u7nzoLT+ZFxGY7emj5DR5DYFik=
github.com/gabriel-vasile/mimetype v1.4.11/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/gdexlab/go-render v1.0.1 h1:rxqB3vo5s4n1kF0ySmoNeSPRYkEsyHgln4jFIQY7v0U=
github.com/gdexlab/go-render v1.0.1/go.mod h1:wRi5nW2qfjiGj4mPukH4UV0IknS1cHD4VgFTmJX5JzM=
github.com/getkin/kin-openapi v0.13.0/go.mod h1:WGRs2ZMM1Q8LR1QBEwUxC6RJEfaBcD0s+pcEVXFuAjw=
@@ -536,8 +536,8 @@ github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8l
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangci/lint-1 v0.0.0-20181222135242-d2cdd8c08219/go.mod h1:/X8TswGSh1pIozq4ZwCfxS0WA5JGXguxk94ar/4c87Y=
github.com/gomodule/redigo v1.9.2 h1:HrutZBLhSIU8abiSfW8pj8mPhOyMYjZT/wcA4/L9L9s=
github.com/gomodule/redigo v1.9.2/go.mod h1:KsU3hiK/Ay8U42qpaJk+kuNa3C+spxapWpM+ywhcgtw=
github.com/gomodule/redigo v1.9.3 h1:dNPSXeXv6HCq2jdyWfjgmhBdqnR6PRO3m/G05nvpPC8=
github.com/gomodule/redigo v1.9.3/go.mod h1:KsU3hiK/Ay8U42qpaJk+kuNa3C+spxapWpM+ywhcgtw=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/flatbuffers v25.2.10+incompatible h1:F3vclr7C3HpB1k9mxCGRMXq6FdUalZ6H/pNX4FP1v0Q=
@@ -565,8 +565,8 @@ github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/go-tika v0.3.1 h1:l+jr10hDhZjcgxFRfcQChRLo1bPXQeLFluMyvDhXTTA=
github.com/google/go-tika v0.3.1/go.mod h1:DJh5N8qxXIl85QkqmXknd+PeeRkUOTbvwyYf7ieDz6c=
github.com/google/go-tpm v0.9.5 h1:ocUmnDebX54dnW+MQWGQRbdaAcJELsa6PqZhJ48KwVU=
github.com/google/go-tpm v0.9.5/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/go-tpm v0.9.6 h1:Ku42PT4LmjDu1H5C5ISWLlpI1mj+Zq7sPGKoRw2XROA=
github.com/google/go-tpm v0.9.6/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -723,6 +723,8 @@ github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYW
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=
github.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/kobergj/gowebdav v0.0.0-20250102091030-aa65266db202 h1:A1xJ2NKgiYFiaHiLl9B5yw/gUBACSs9crDykTS3GuQI=
github.com/kobergj/gowebdav v0.0.0-20250102091030-aa65266db202/go.mod h1:bHA7t77X/QFExdeAnDzK6vKM34kEZAcE1OX4MfiwjkE=
github.com/kolo/xmlrpc v0.0.0-20200310150728-e0350524596b/go.mod h1:o03bZfuBwAXHetKXuInt4S7omeXUu62/A845kiycsSQ=
@@ -837,14 +839,14 @@ github.com/miekg/dns v1.1.57 h1:Jzi7ApEIzwEPLHWRcafCN9LZSBbqQpxjt/wpgvg7wcM=
github.com/miekg/dns v1.1.57/go.mod h1:uqRjCRUuEAA6qsOiJvDd+CFo/vW+y5WR6SNmHE55hZk=
github.com/mileusna/useragent v1.3.5 h1:SJM5NzBmh/hO+4LGeATKpaEX9+b4vcGg2qXGLiNGDws=
github.com/mileusna/useragent v1.3.5/go.mod h1:3d8TOmwL/5I8pJjyVDteHtgDGcefrFUX4ccGOMKNYYc=
github.com/minio/crc64nvme v1.0.2 h1:6uO1UxGAD+kwqWWp7mBFsi5gAse66C4NXO8cmcVculg=
github.com/minio/crc64nvme v1.0.2/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/crc64nvme v1.1.0 h1:e/tAguZ+4cw32D+IO/8GSf5UVr9y+3eJcxZI2WOO/7Q=
github.com/minio/crc64nvme v1.1.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/highwayhash v1.0.3 h1:kbnuUMoHYyVl7szWjSxJnxw11k2U709jqFPPmIUyD6Q=
github.com/minio/highwayhash v1.0.3/go.mod h1:GGYsuwP/fPD6Y9hMiXuapVvlIUEhFhMTh0rxU3ik1LQ=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.95 h1:ywOUPg+PebTMTzn9VDsoFJy32ZuARN9zhB+K3IYEvYU=
github.com/minio/minio-go/v7 v7.0.95/go.mod h1:wOOX3uxS334vImCNRVyIDdXX9OsXDm89ToynKgqUKlo=
github.com/minio/minio-go/v7 v7.0.97 h1:lqhREPyfgHTB/ciX8k2r8k0D93WaFqxbJX36UZq5occ=
github.com/minio/minio-go/v7 v7.0.97/go.mod h1:re5VXuo0pwEtoNLsNuSr0RrLfT/MBtohwdaSmPPSRSk=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
@@ -899,8 +901,8 @@ github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRW
github.com/namedotcom/go v0.0.0-20180403034216-08470befbe04/go.mod h1:5sN+Lt1CaY4wsPvgQH/jsuJi4XO2ssZbdsIizr4CVC8=
github.com/nats-io/jwt/v2 v2.8.0 h1:K7uzyz50+yGZDO5o772eRE7atlcSEENpL7P+b74JV1g=
github.com/nats-io/jwt/v2 v2.8.0/go.mod h1:me11pOkwObtcBNR8AiMrUbtVOUGkqYjMQZ6jnSdVUIA=
github.com/nats-io/nats-server/v2 v2.12.0 h1:OIwe8jZUqJFrh+hhiyKu8snNib66qsx806OslqJuo74=
github.com/nats-io/nats-server/v2 v2.12.0/go.mod h1:nr8dhzqkP5E/lDwmn+A2CvQPMd1yDKXQI7iGg3lAvww=
github.com/nats-io/nats-server/v2 v2.12.1 h1:0tRrc9bzyXEdBLcHr2XEjDzVpUxWx64aZBm7Rl1QDrA=
github.com/nats-io/nats-server/v2 v2.12.1/go.mod h1:OEaOLmu/2e6J9LzUt2OuGjgNem4EpYApO5Rpf26HDs8=
github.com/nats-io/nats.go v1.47.0 h1:YQdADw6J/UfGUd2Oy6tn4Hq6YHxCaJrVKayxxFqYrgM=
github.com/nats-io/nats.go v1.47.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
@@ -933,8 +935,8 @@ github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.27.1 h1:0LJC8MpUSQnfnp4n/3W3GdlmJP3ENGF0ZPzjQGLPP7s=
github.com/onsi/ginkgo/v2 v2.27.1/go.mod h1:wmy3vCqiBjirARfVhAqFpYt8uvX0yaFe+GudAqqcCqA=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
@@ -948,8 +950,8 @@ github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89 h1:W1ms+l
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89/go.mod h1:vigJkNss1N2QEceCuNw/ullDehncuJNFB6mEnzfq9UI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76 h1:vD/EdfDUrv4omSFjrinT8Mvf+8D7f9g4vgQ2oiDrVUI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76/go.mod h1:pzatilMEHZFT3qV7C/X3MqOa3NlRQuYhlRhZTL+hN6Q=
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251030154544-cac8a0257da6 h1:BUrCUrRqBg04MJuhnIK4H1KNK4aebK6H/AYcHjQ0DM4=
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251030154544-cac8a0257da6/go.mod h1:Qm0CibFYrFc096OhWWL14nsGiFoE6g/4oMFHV5CqU+Q=
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251106122902-c13e27f55362 h1:O9oHbqPnC+tAQTbaLD4Tj6I5jmSmTLaQCynTHkFP+cI=
github.com/opencloud-eu/reva/v2 v2.39.2-0.20251106122902-c13e27f55362/go.mod h1:hOCR1OHAhGY8ecpq6sIS5Ru1ZOC/hBgNz+sYf6CrO9Y=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
@@ -1001,8 +1003,6 @@ github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:Om
github.com/pquerna/cachecontrol v0.2.0 h1:vBXSNuE5MYP9IJ5kjsdo8uq+w41jSPgvba2DEnkRx9k=
github.com/pquerna/cachecontrol v0.2.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQnrHV5K9mBcUI=
github.com/pquerna/otp v1.3.0/go.mod h1:dkJfzwRKNiegxyNb54X/3fLwhCynbMspSyWKnvi1AEg=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/alertmanager v0.28.1 h1:BK5pCoAtaKg01BYRUJhEDV1tqJMEtYBGzPw8QdvnnvA=
github.com/prometheus/alertmanager v0.28.1/go.mod h1:0StpPUDDHi1VXeM7p2yYfeZgLVi/PPlt39vo9LQUHxM=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@@ -1308,8 +1308,6 @@ go.opentelemetry.io/proto/otlp v1.7.1/go.mod h1:b2rVh6rfI/s2pHWNlB7ILJcRALpcNDzK
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
@@ -1595,8 +1593,8 @@ golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.13.0 h1:eUlYslOIt32DgYD6utsuUeHs4d7AsEYLuIAdg7FlYgI=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -68,7 +68,7 @@ func CreateConfig(insecure, forceOverwrite, diff bool, configPath, adminPassword
systemUserID, adminUserID, graphApplicationID, storageUsersMountID, serviceAccountID string
idmServicePassword, idpServicePassword, ocAdminServicePassword, revaServicePassword string
tokenManagerJwtSecret, collaborationWOPISecret, machineAuthAPIKey, systemUserAPIKey string
revaTransferSecret, thumbnailsTransferSecret, serviceAccountSecret string
revaTransferSecret, thumbnailsTransferSecret, serviceAccountSecret, urlSigningSecret string
)
if diff {
@@ -95,6 +95,13 @@ func CreateConfig(insecure, forceOverwrite, diff bool, configPath, adminPassword
revaTransferSecret = oldCfg.TransferSecret
thumbnailsTransferSecret = oldCfg.Thumbnails.Thumbnail.TransferSecret
serviceAccountSecret = oldCfg.Graph.ServiceAccount.ServiceAccountSecret
urlSigningSecret = oldCfg.URLSigningSecret
if urlSigningSecret == "" {
urlSigningSecret, err = generators.GenerateRandomPassword(passwordLength)
if err != nil {
return fmt.Errorf("could not generate random secret for urlSigningSecret: %s", err)
}
}
} else {
systemUserID = uuid.Must(uuid.NewV4()).String()
adminUserID = uuid.Must(uuid.NewV4()).String()
@@ -142,13 +149,17 @@ func CreateConfig(insecure, forceOverwrite, diff bool, configPath, adminPassword
if err != nil {
return fmt.Errorf("could not generate random password for revaTransferSecret: %s", err)
}
urlSigningSecret, err = generators.GenerateRandomPassword(passwordLength)
if err != nil {
return fmt.Errorf("could not generate random secret for urlSigningSecret: %s", err)
}
thumbnailsTransferSecret, err = generators.GenerateRandomPassword(passwordLength)
if err != nil {
return fmt.Errorf("could not generate random password for thumbnailsTransferSecret: %s", err)
}
serviceAccountSecret, err = generators.GenerateRandomPassword(passwordLength)
if err != nil {
return fmt.Errorf("could not generate random password for thumbnailsTransferSecret: %s", err)
return fmt.Errorf("could not generate random secret for serviceAccountSecret: %s", err)
}
}
@@ -164,6 +175,7 @@ func CreateConfig(insecure, forceOverwrite, diff bool, configPath, adminPassword
MachineAuthAPIKey: machineAuthAPIKey,
SystemUserAPIKey: systemUserAPIKey,
TransferSecret: revaTransferSecret,
URLSigningSecret: urlSigningSecret,
SystemUserID: systemUserID,
AdminUserID: adminUserID,
Idm: IdmService{
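
For readers who want the init-time behaviour above in one piece: a minimal, self-contained sketch of generating a URL signing secret only when the existing config has none. The `generateSecret` helper here is illustrative; the real code uses `generators.GenerateRandomPassword(passwordLength)` as shown in the hunk above.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// generateSecret is an illustrative stand-in for the generators helper used by
// `opencloud init`; it returns a random, URL-safe secret built from n random bytes.
func generateSecret(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

func main() {
	urlSigningSecret := "" // value read from an existing opencloud.yaml, if any
	if urlSigningSecret == "" {
		// same shape as the diff above: only generate a new secret when the
		// existing config does not already provide one
		s, err := generateSecret(32)
		if err != nil {
			panic(err)
		}
		urlSigningSecret = s
	}
	fmt.Println("url_signing_secret:", urlSigningSecret)
}
```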

View File

@@ -19,6 +19,7 @@ type OpenCloudConfig struct {
MachineAuthAPIKey string `yaml:"machine_auth_api_key"`
SystemUserAPIKey string `yaml:"system_user_api_key"`
TransferSecret string `yaml:"transfer_secret"`
URLSigningSecret string `yaml:"url_signing_secret"`
SystemUserID string `yaml:"system_user_id"`
AdminUserID string `yaml:"admin_user_id"`
Graph GraphService `yaml:"graph"`

View File

@@ -78,6 +78,7 @@ type Config struct {
TokenManager *shared.TokenManager `yaml:"token_manager"`
MachineAuthAPIKey string `yaml:"machine_auth_api_key" env:"OC_MACHINE_AUTH_API_KEY" desc:"Machine auth API key used to validate internal requests necessary for the access to resources from other services." introductionVersion:"1.0.0"`
TransferSecret string `yaml:"transfer_secret" env:"OC_TRANSFER_SECRET" desc:"Transfer secret for signing file up- and download requests." introductionVersion:"1.0.0"`
URLSigningSecret string `yaml:"url_signing_secret" env:"OC_URL_SIGNING_SECRET" desc:"The shared secret used to sign URLs e.g. for image downloads by the web office suite." introductionVersion:"%%NEXT%%"`
SystemUserID string `yaml:"system_user_id" env:"OC_SYSTEM_USER_ID" desc:"ID of the OpenCloud storage-system system user. Admins need to set the ID for the storage-system system user in this config option which is then used to reference the user. Any reasonable long string is possible, preferably this would be an UUIDv4 format." introductionVersion:"1.0.0"`
SystemUserAPIKey string `yaml:"system_user_api_key" env:"OC_SYSTEM_USER_API_KEY" desc:"API key for the storage-system system user." introductionVersion:"1.0.0"`
AdminUserID string `yaml:"admin_user_id" env:"OC_ADMIN_USER_ID" desc:"ID of a user, that should receive admin privileges. Consider that the UUID can be encoded in some LDAP deployment configurations like in .ldif files. These need to be decoded beforehand." introductionVersion:"1.0.0"`

View File

@@ -100,6 +100,14 @@ func EnsureCommons(cfg *config.Config) {
cfg.Commons.TransferSecret = cfg.TransferSecret
}
// make sure url signing secret is set and copy it to the commons part
// fall back to transfer secret for url signing secret to avoid
// issues when upgrading from an older release
if cfg.URLSigningSecret == "" {
cfg.URLSigningSecret = cfg.TransferSecret
}
cfg.Commons.URLSigningSecret = cfg.URLSigningSecret
// copy metadata user id to the commons part if set
if cfg.SystemUserID != "" {
cfg.Commons.SystemUserID = cfg.SystemUserID
@@ -128,6 +136,10 @@ func Validate(cfg *config.Config) error {
return shared.MissingRevaTransferSecretError("opencloud")
}
if cfg.URLSigningSecret == "" {
return shared.MissingURLSigningSecret("opencloud")
}
if cfg.MachineAuthAPIKey == "" {
return shared.MissingMachineAuthApiKeyError("opencloud")
}
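
The hunks above lose their indentation in this view, so here is a compact, self-contained sketch of the fallback and validation flow they implement. The config struct and function name are simplified stand-ins for the real EnsureCommons/Validate code; only the control flow is meant to match.

```go
package main

import (
	"errors"
	"fmt"
)

// config is a stripped-down stand-in for the shared OpenCloud config; only the
// two fields relevant to the fallback are kept.
type config struct {
	TransferSecret   string
	URLSigningSecret string
}

// ensureURLSigningSecret mirrors the change above: when OC_URL_SIGNING_SECRET is
// unset, reuse the reva transfer secret so that upgraded single-instance
// deployments keep working; validation still fails when neither secret is set.
func ensureURLSigningSecret(cfg *config) error {
	if cfg.URLSigningSecret == "" {
		cfg.URLSigningSecret = cfg.TransferSecret
	}
	if cfg.URLSigningSecret == "" {
		return errors.New("the URL signing secret has not been set")
	}
	return nil
}

func main() {
	cfg := &config{TransferSecret: "legacy-transfer-secret"}
	fmt.Println(ensureURLSigningSecret(cfg), cfg.URLSigningSecret)
}
```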

View File

@@ -93,3 +93,11 @@ func MissingWOPISecretError(service string) error {
"the config/corresponding environment variable).",
service, defaults.BaseConfigPath())
}
func MissingURLSigningSecret(service string) error {
return fmt.Errorf("The URL signing secret has not been set properly in your config for %s. "+
"Make sure your %s config contains the proper values "+
"(e.g. by using 'opencloud init --diff' and applying the patch or setting a value manually in "+
"the config/corresponding environment variable).",
service, defaults.BaseConfigPath())
}

View File

@@ -80,6 +80,7 @@ type Commons struct {
Reva *Reva `yaml:"reva"`
MachineAuthAPIKey string `mask:"password" yaml:"machine_auth_api_key" env:"OC_MACHINE_AUTH_API_KEY" desc:"Machine auth API key used to validate internal requests necessary for the access to resources from other services." introductionVersion:"1.0.0"`
TransferSecret string `mask:"password" yaml:"transfer_secret,omitempty" env:"REVA_TRANSFER_SECRET" desc:"The secret used for signing the requests towards the data gateway for up- and downloads." introductionVersion:"1.0.0"`
URLSigningSecret string `yaml:"url_signing_secret" env:"OC_URL_SIGNING_SECRET" desc:"The shared secret used to sign URLs e.g. for image downloads by the web office suite." introductionVersion:"%%NEXT%%"`
SystemUserID string `yaml:"system_user_id" env:"OC_SYSTEM_USER_ID" desc:"ID of the OpenCloud storage-system system user. Admins need to set the ID for the storage-system system user in this config option which is then used to reference the user. Any reasonable long string is possible, preferably this would be an UUIDv4 format." introductionVersion:"1.0.0"`
SystemUserAPIKey string `mask:"password" yaml:"system_user_api_key" env:"SYSTEM_USER_API_KEY" desc:"API key for all system users." introductionVersion:"1.0.0"`
AdminUserID string `yaml:"admin_user_id" env:"OC_ADMIN_USER_ID" desc:"ID of a user, that should receive admin privileges. Consider that the UUID can be encoded in some LDAP deployment configurations like in .ldif files. These need to be decoded beforehand." introductionVersion:"1.0.0"`

View File

@@ -0,0 +1,103 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# Jiri Grönroos <jiri.gronroos@iki.fi>, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fi\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#: pkg/service/response.go:44
msgid "description"
msgstr "kuvaus"
#: pkg/service/response.go:43
msgid "display name"
msgstr "näyttönimi"
#: pkg/service/response.go:42
msgid "expiration date"
msgstr "vanhenemispäivä"
#: pkg/service/response.go:41
msgid "password"
msgstr "salasana"
#: pkg/service/response.go:40
msgid "permission"
msgstr "käyttöoikeus"
#: pkg/service/response.go:39
msgid "some field"
msgstr "jokin kieltä"
#: pkg/service/response.go:26
msgid "{resource} was downloaded via public link {token}"
msgstr "{resource} ladattiin julkisen linkin {token} kautta"
#: pkg/service/response.go:24
msgid "{user} added {resource} to {folder}"
msgstr "{user} lisäsi {resource} kansioon {folder}"
#: pkg/service/response.go:36
msgid "{user} added {sharee} as member of {space}"
msgstr "{user} lisäsi {sharee} jäseneksi avaruuteen {space}"
#: pkg/service/response.go:27
msgid "{user} deleted {resource} from {folder}"
msgstr "{user} poisti {resource} kansiosta {folder}"
#: pkg/service/response.go:28
msgid "{user} moved {resource} to {folder}"
msgstr "{user} siirsi {resource} kansioon {folder}"
#: pkg/service/response.go:35
msgid "{user} removed link to {resource}"
msgstr "{user} poisti linkin resurssiin {resource}"
#: pkg/service/response.go:32
msgid "{user} removed {sharee} from {resource}"
msgstr "{user} poisti {sharee} resurssista {resource}"
#: pkg/service/response.go:37
msgid "{user} removed {sharee} from {space}"
msgstr "{user} poisti {sharee} avaruudesta {space}"
#: pkg/service/response.go:29
msgid "{user} renamed {oldResource} to {resource}"
msgstr "{user} muutti kohteen {oldResource} uudeksi nimeksi {resource}"
#: pkg/service/response.go:33
msgid "{user} shared {resource} via link"
msgstr "{user} jakoi {resource} linkin kautta"
#: pkg/service/response.go:30
msgid "{user} shared {resource} with {sharee}"
msgstr "{user} jakoi {resource} käyttäjän {sharee} kanssa"
#: pkg/service/response.go:34
msgid "{user} updated {field} for a link {token} on {resource}"
msgstr ""
"{user} päivitti kentän {field} linkkiin {token} resurssissa {resource}"
#: pkg/service/response.go:31
msgid "{user} updated {field} for the {resource}"
msgstr "{user} päivitti kentän {field} resurssille {resource}"
#: pkg/service/response.go:25
msgid "{user} updated {resource} in {folder}"
msgstr "{user} päivitti {resource} kansiossa {folder}"

View File

@@ -1198,6 +1198,7 @@ func (f *FileConnector) CheckFileInfo(ctx context.Context) (*ConnectorResponse,
isAnonymousUser := true
isPublicShare := false
isAdminUser := false
user := ctxpkg.ContextMustGetUser(ctx)
if user.String() != "" {
// if we have a wopiContext.User
@@ -1207,6 +1208,12 @@ func (f *FileConnector) CheckFileInfo(ctx context.Context) (*ConnectorResponse,
isAnonymousUser = false
userFriendlyName = user.GetDisplayName()
userId = hexEncodedWopiUserId
isAdminUser, err = utils.CheckPermission(ctx, "WebOffice.Manage", gwc)
if err != nil {
logger.Error().Err(err).Msg("CheckPermission failed")
isAdminUser = false
}
}
}
@@ -1259,6 +1266,7 @@ func (f *FileConnector) CheckFileInfo(ctx context.Context) (*ConnectorResponse,
fileinfo.KeyFileVersionURL: createVersionsUrl(privateLinkURL),
fileinfo.KeyEnableOwnerTermination: true, // only for collabora
fileinfo.KeyEnableInsertRemoteImage: true,
fileinfo.KeySupportsExtendedLockLength: true,
fileinfo.KeySupportsGetLock: true,
fileinfo.KeySupportsLocks: true,
@@ -1267,6 +1275,7 @@ func (f *FileConnector) CheckFileInfo(ctx context.Context) (*ConnectorResponse,
fileinfo.KeySupportsRename: true,
fileinfo.KeyIsAnonymousUser: isAnonymousUser,
fileinfo.KeyIsAdminUser: isAdminUser,
fileinfo.KeyUserFriendlyName: userFriendlyName,
fileinfo.KeyUserID: userId,
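
For readers skimming the diff, a small self-contained sketch of how the new IsAdminUser flag is derived. The permissionChecker type is a stand-in for the gateway CheckPermission call reached via utils.CheckPermission in the hunk above; only the decision logic is reproduced.

```go
package main

import "fmt"

// permissionChecker is a stand-in for the gateway CheckPermission RPC used in
// the real connector; a plain function value keeps the sketch self-contained.
type permissionChecker func(permission string) (bool, error)

// isAdminUser mirrors the logic above: only authenticated users can get the
// flag, it requires the "WebOffice.Manage" permission, and any error from the
// check degrades to "not an admin".
func isAdminUser(authenticated bool, check permissionChecker) bool {
	if !authenticated {
		return false
	}
	ok, err := check("WebOffice.Manage")
	if err != nil {
		// the real code logs the error and falls back to false
		return false
	}
	return ok
}

func main() {
	allow := func(string) (bool, error) { return true, nil }
	fmt.Println(isAdminUser(true, allow))  // true
	fmt.Println(isAdminUser(false, allow)) // false
}
```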

View File

@@ -13,6 +13,7 @@ import (
auth "github.com/cs3org/go-cs3apis/cs3/auth/provider/v1beta1"
gateway "github.com/cs3org/go-cs3apis/cs3/gateway/v1beta1"
userv1beta1 "github.com/cs3org/go-cs3apis/cs3/identity/user/v1beta1"
permissions "github.com/cs3org/go-cs3apis/cs3/permissions/v1beta1"
link "github.com/cs3org/go-cs3apis/cs3/sharing/link/v1beta1"
providerv1beta1 "github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1"
typesv1beta1 "github.com/cs3org/go-cs3apis/cs3/types/v1beta1"
@@ -1671,6 +1672,13 @@ var _ = Describe("FileConnector", func() {
}
ctx = ctxpkg.ContextSetUser(ctx, u)
gatewayClient.On("CheckPermission", mock.Anything, mock.Anything).Return(
&permissions.CheckPermissionResponse{
Status: status.NewOK(ctx),
},
nil,
)
gatewayClient.On("Stat", mock.Anything, mock.Anything).Times(1).Return(&providerv1beta1.StatResponse{
Status: status.NewOK(ctx),
Info: &providerv1beta1.ResourceInfo{
@@ -1792,6 +1800,8 @@ var _ = Describe("FileConnector", func() {
UserCanRename: false,
BreadcrumbDocName: "test.txt",
PostMessageOrigin: "https://cloud.opencloud.test",
EnableInsertRemoteImage: true,
IsAnonymousUser: true,
}
response, err := fc.CheckFileInfo(ctx)
@@ -1889,7 +1899,7 @@ var _ = Describe("FileConnector", func() {
UserCanRename: false,
UserCanReview: false,
UserCanWrite: false,
EnableInsertRemoteImage: false,
EnableInsertRemoteImage: true,
UserID: "guest-zzz000",
UserFriendlyName: "guest zzz000",
FileSharingURL: "https://cloud.opencloud.test/f/storageid$spaceid%21opaqueid?details=sharing",
@@ -1928,6 +1938,13 @@ var _ = Describe("FileConnector", func() {
}
ctx = ctxpkg.ContextSetUser(ctx, u)
gatewayClient.On("CheckPermission", mock.Anything, mock.Anything).Return(
&permissions.CheckPermissionResponse{
Status: status.NewOK(ctx),
},
nil,
)
gatewayClient.On("Stat", mock.Anything, mock.Anything).Times(1).Return(&providerv1beta1.StatResponse{
Status: status.NewOK(ctx),
Info: &providerv1beta1.ResourceInfo{
@@ -1975,6 +1992,8 @@ var _ = Describe("FileConnector", func() {
UserCanRename: false,
BreadcrumbDocName: "test.txt",
PostMessageOrigin: "https://cloud.opencloud.test",
EnableInsertRemoteImage: true,
IsAdminUser: true,
}
response, err := fc.CheckFileInfo(ctx)
@@ -2001,6 +2020,13 @@ var _ = Describe("FileConnector", func() {
}
ctx = ctxpkg.ContextSetUser(ctx, u)
gatewayClient.On("CheckPermission", mock.Anything, mock.Anything).Return(
&permissions.CheckPermissionResponse{
Status: status.NewOK(ctx),
},
nil,
)
gatewayClient.On("Stat", mock.Anything, mock.Anything).Times(1).Return(&providerv1beta1.StatResponse{
Status: status.NewOK(ctx),
Info: &providerv1beta1.ResourceInfo{
@@ -2045,6 +2071,7 @@ var _ = Describe("FileConnector", func() {
FileVersionURL: "https://cloud.opencloud.test/f/storageid$spaceid%21opaqueid?details=versions",
HostEditURL: "https://cloud.opencloud.test/external-onlyoffice/path/to/test.txt?fileId=storageid%24spaceid%21opaqueid&view_mode=write",
PostMessageOrigin: "https://cloud.opencloud.test",
EnableInsertRemoteImage: true,
}
// change wopi app provider

View File

@@ -56,6 +56,10 @@ type Collabora struct {
SaveAsPostmessage bool `json:"SaveAsPostmessage,omitempty"`
// If set to true, it allows the document owner (the one with OwnerId =UserId) to send a closedocument message (see protocol.txt)
EnableOwnerTermination bool `json:"EnableOwnerTermination,omitempty"`
// If set to true, the user has administrator rights in the integration. Some functionality of Collabora Online, such as update check and server audit, is supposed to be shown to administrators only.
IsAdminUser bool `json:"IsAdminUser"`
// If set to true, some functionality of Collabora which is supposed to be shown to authenticated users only is hidden
IsAnonymousUser bool `json:"IsAnonymousUser,omitempty"`
// JSON object that contains additional info about the user, namely the avatar image.
//UserExtraInfo -> requires definition, currently not used
@@ -131,6 +135,10 @@ func (cinfo *Collabora) SetProperties(props map[string]interface{}) {
//UserPrivateInfo -> requires definition, currently not used
case KeyWatermarkText:
cinfo.WatermarkText = value.(string)
case KeyIsAdminUser:
cinfo.IsAdminUser = value.(bool)
case KeyIsAnonymousUser:
cinfo.IsAnonymousUser = value.(bool)
case KeyEnableShare:
cinfo.EnableShare = value.(bool)
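
As a quick illustration of how the two new keys flow through SetProperties, a tiny self-contained sketch with a trimmed-down Collabora struct; only the fields touched by this hunk are kept, the real struct carries many more CheckFileInfo properties.

```go
package main

import "fmt"

// Collabora is trimmed down to the two fields this hunk adds.
type Collabora struct {
	IsAdminUser     bool `json:"IsAdminUser"`
	IsAnonymousUser bool `json:"IsAnonymousUser,omitempty"`
}

// SetProperties mirrors the type-switching style of the real implementation
// for the two new keys.
func (c *Collabora) SetProperties(props map[string]interface{}) {
	for key, value := range props {
		switch key {
		case "IsAdminUser":
			c.IsAdminUser = value.(bool)
		case "IsAnonymousUser":
			c.IsAnonymousUser = value.(bool)
		}
	}
}

func main() {
	info := &Collabora{}
	info.SetProperties(map[string]interface{}{
		"IsAdminUser":     true,
		"IsAnonymousUser": false,
	})
	fmt.Printf("%+v\n", *info)
}
```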

View File

@@ -50,6 +50,7 @@ const (
KeyIsAnonymousUser = "IsAnonymousUser"
KeyIsEduUser = "IsEduUser"
KeyIsAdminUser = "IsAdminUser"
KeyLicenseCheckForEditIsEnabled = "LicenseCheckForEditIsEnabled"
KeyUserFriendlyName = "UserFriendlyName"
KeyUserInfo = "UserInfo"

View File

@@ -0,0 +1,132 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# Jiri Grönroos <jiri.gronroos@iki.fi>, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fi\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#. UnifiedRole Editor, Role DisplayName (resolves directly)
#. UnifiedRole EditorListGrants, Role DisplayName (resolves directly)
#. UnifiedRole SpaseEditor, Role DisplayName (resolves directly)
#. UnifiedRole FileEditor, Role DisplayName (resolves directly)
#. UnifiedRole FileEditorListGrants, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:116 pkg/unifiedrole/roles.go:122
#: pkg/unifiedrole/roles.go:128 pkg/unifiedrole/roles.go:140
#: pkg/unifiedrole/roles.go:146
msgid "Can edit"
msgstr "Voi muokata"
#. UnifiedRole SpaseEditorWithoutVersions, Role DisplayName (resolves
#. directly)
#: pkg/unifiedrole/roles.go:134
msgid "Can edit without versions"
msgstr "Voi muokata ilman versioita"
#. UnifiedRole Manager, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:158
msgid "Can manage"
msgstr "Voi hallita"
#. UnifiedRole EditorLite, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:152
msgid "Can upload"
msgstr "Voi lähettää"
#. UnifiedRole Viewer, Role DisplayName (resolves directly)
#. UnifiedRole Viewer, Role DisplayName (resolves directly)
#. UnifiedRole SpaseViewer, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:98 pkg/unifiedrole/roles.go:104
#: pkg/unifiedrole/roles.go:110
msgid "Can view"
msgstr "Voi nähdä"
#. UnifiedRole SecureViewer, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:164
msgid "Can view (secure)"
msgstr "Voi nähdä (turvallinen)"
#. UnifiedRole FullDenial, Role DisplayName (resolves directly)
#: pkg/unifiedrole/roles.go:170
msgid "Cannot access"
msgstr "Ei pääsyä"
#. UnifiedRole FullDenial, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:167
msgid "Deny all access."
msgstr "Estä kaikki pääsy."
#. default description for new spaces
#: pkg/service/v0/spacetemplates.go:32
msgid "Here you can add a description for this Space."
msgstr "Täällä voit lisätä kuvauksen avaruudelle."
#. UnifiedRole Viewer, Role Description (resolves directly)
#. UnifiedRole SpaceViewer, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:95 pkg/unifiedrole/roles.go:107
msgid "View and download."
msgstr "Näytä ja lataa."
#. UnifiedRole SecureViewer, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:161
msgid "View only documents, images and PDFs. Watermarks will be applied."
msgstr "Näytä vain asiakirjat, kuvat ja PDF:t. Vesileimat lisätään."
#. UnifiedRole FileEditor, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:137
msgid "View, download and edit."
msgstr "Näytä, lataa ja muokkaa."
#. UnifiedRole ViewerListGrants, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:101
msgid "View, download and show all invited people."
msgstr "Näytä, lataa ja näytä kaikki kutsutut henkilöt."
#. UnifiedRole EditorLite, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:149
msgid "View, download and upload."
msgstr "Näytä, lataa ja lähetä."
#. UnifiedRole FileEditorListGrants, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:143
msgid "View, download, edit and show all invited people."
msgstr "Näytä, lataa, muokkaa ja näytä kaikki kaikki kutsutut henkilöt."
#. UnifiedRole Editor, Role Description (resolves directly)
#. UnifiedRole SpaseEditorWithoutVersions, Role Description (resolves
#. directly)
#: pkg/unifiedrole/roles.go:113 pkg/unifiedrole/roles.go:131
msgid "View, download, upload, edit, add and delete."
msgstr "Näytä, lataa lähetä, muokkaa, lisää ja poista."
#. UnifiedRole Manager, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:155
msgid "View, download, upload, edit, add, delete and manage members."
msgstr "Näytä, lataa, lähetä, muokkaa, lisää, poista ja hallitse jäseniä."
#. UnifiedRoleListGrants Editor, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:119
msgid "View, download, upload, edit, add, delete and show all invited people."
msgstr ""
"Näytä, lataa, lähetä, muokkaa, lisää, poista ja näytä kaikki kutsutut "
"henkilöt."
#. UnifiedRole SpaseEditor, Role Description (resolves directly)
#: pkg/unifiedrole/roles.go:125
msgid "View, download, upload, edit, add, delete including the history."
msgstr "Näytä, lataa, lähetä, muokkaa, lisää, poista mukaan lukien historia."

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-10-16 08:04+0000\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: LinkinWires <darkinsonic13@gmail.com>, 2025\n"
"Language-Team: Ukrainian (https://app.transifex.com/opencloud-eu/teams/204053/uk/)\n"

View File

@@ -95,7 +95,7 @@ func Server(cfg *config.Config) *cli.Command {
ocdav.WithTraceProvider(traceProvider),
ocdav.RegisterTTL(registry.GetRegisterTTL()),
ocdav.RegisterInterval(registry.GetRegisterInterval()),
ocdav.URLSigningSharedSecret(cfg.URLSigningSharedSecret),
ocdav.URLSigningSharedSecret(cfg.Commons.URLSigningSecret),
}
s, err := ocdav.Service(opts...)

View File

@@ -34,9 +34,8 @@ type Config struct {
MachineAuthAPIKey string `yaml:"machine_auth_api_key" env:"OC_MACHINE_AUTH_API_KEY;OCDAV_MACHINE_AUTH_API_KEY" desc:"Machine auth API key used to validate internal requests necessary for the access to resources from other services." introductionVersion:"1.0.0"`
URLSigningSharedSecret string `yaml:"url_signing_shared_secret" env:"OC_URL_SIGNING_SHARED_SECRET" desc:"The shared secret used to sign URLs." introductionVersion:"4.0.0"`
Context context.Context `yaml:"-"`
Status Status `yaml:"-"`
Context context.Context `yaml:"-"`
Status Status `yaml:"-"`
AllowPropfindDepthInfinity bool `yaml:"allow_propfind_depth_infinity" env:"OCDAV_ALLOW_PROPFIND_DEPTH_INFINITY" desc:"Allow the use of depth infinity in PROPFINDS. When enabled, a propfind will traverse through all subfolders. If many subfolders are expected, depth infinity can cause heavy server load and/or delayed response times." introductionVersion:"1.0.0"`
}

View File

@@ -37,9 +37,14 @@ func Validate(cfg *config.Config) error {
if cfg.TokenManager.JWTSecret == "" {
return shared.MissingJWTTokenError(cfg.Service.Name)
}
if cfg.MachineAuthAPIKey == "" {
return shared.MissingMachineAuthApiKeyError(cfg.Service.Name)
}
if cfg.Commons.URLSigningSecret == "" {
return shared.MissingURLSigningSecret(cfg.Service.Name)
}
return nil
}

View File

@@ -311,15 +311,11 @@ func loadMiddlewares(logger log.Logger, cfg *config.Config,
RevaGatewaySelector: gatewaySelector,
})
var signURLVerifier signedurl.Verifier
if cfg.PreSignedURL.JWTSigningSharedSecret != "" {
var err error
signURLVerifier, err = signedurl.NewJWTSignedURL(signedurl.WithSecret(cfg.PreSignedURL.JWTSigningSharedSecret))
if err != nil {
logger.Fatal().Err(err).Msg("Failed to initialize signed URL configuration.")
}
signURLVerifier, err := signedurl.NewJWTSignedURL(signedurl.WithSecret(cfg.Commons.URLSigningSecret))
if err != nil {
logger.Fatal().Err(err).Msg("Failed to initialize signed URL configuration.")
}
authenticators = append(authenticators, middleware.SignedURLAuthenticator{
Logger: logger,
PreSignedURLConfig: cfg.PreSignedURL,

View File

@@ -180,10 +180,9 @@ type StaticSelectorConf struct {
// PreSignedURL is the config for the pre-signed url middleware
type PreSignedURL struct {
AllowedHTTPMethods []string `yaml:"allowed_http_methods"`
Enabled bool `yaml:"enabled" env:"PROXY_ENABLE_PRESIGNEDURLS" desc:"Allow OCS to get a signing key to sign requests." introductionVersion:"1.0.0"`
SigningKeys *SigningKeys `yaml:"signing_keys"`
JWTSigningSharedSecret string `yaml:"url_signing_shared_secret" env:"OC_URL_SIGNING_SHARED_SECRET" desc:"The shared secret used to sign URLs." introductionVersion:"4.0.0"`
AllowedHTTPMethods []string `yaml:"allowed_http_methods"`
Enabled bool `yaml:"enabled" env:"PROXY_ENABLE_PRESIGNEDURLS" desc:"Allow OCS to get a signing key to sign requests." introductionVersion:"1.0.0"`
SigningKeys *SigningKeys `yaml:"signing_keys"`
}
// SigningKeys is a store configuration.

View File

@@ -56,9 +56,14 @@ func Validate(cfg *config.Config) error {
if cfg.ServiceAccount.ServiceAccountID == "" {
return shared.MissingServiceAccountID(cfg.Service.Name)
}
if cfg.ServiceAccount.ServiceAccountSecret == "" {
return shared.MissingServiceAccountSecret(cfg.Service.Name)
}
if cfg.Commons.URLSigningSecret == "" {
return shared.MissingURLSigningSecret(cfg.Service.Name)
}
return nil
}

View File

@@ -0,0 +1,144 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# Jiri Grönroos <jiri.gronroos@iki.fi>, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fi\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#. name of the notification option 'Space Shared'
#: pkg/store/defaults/templates.go:20
msgid "Added as space member"
msgstr "Lisätty avaruuden jäseneksi"
#. translation for the 'daily' email interval option
#: pkg/store/defaults/templates.go:50
msgid "Daily"
msgstr "Päivittäin"
#. name of the notification option 'Email Interval'
#: pkg/store/defaults/templates.go:44
msgid "Email sending interval"
msgstr "Sähköpostin lähetyksen aikaväli"
#. name of the notification option 'File Rejected'
#: pkg/store/defaults/templates.go:40
msgid "File rejected"
msgstr "Tiedosto hylätty"
#. translation for the 'instant' email interval option
#: pkg/store/defaults/templates.go:48
msgid "Instant"
msgstr "Välittömästi"
#. translation for the 'never' email interval option
#: pkg/store/defaults/templates.go:54
msgid "Never"
msgstr "Ei koskaan"
#. description of the notification option 'Space Shared'
#: pkg/store/defaults/templates.go:22
msgid "Notify when I have been added as a member to a space"
msgstr "Ilmoita kun minut lisätty jäseneksi avaruuteen"
#. description of the notification option 'Space Unshared'
#: pkg/store/defaults/templates.go:26
msgid "Notify when I have been removed as member from a space"
msgstr "Ilmoita kun jäsenyyteni avaruudesta on poistettu"
#. description of the notification option 'Share Received'
#: pkg/store/defaults/templates.go:10
msgid "Notify when I have received a share"
msgstr "Ilmoita kun olen vastaanottanut jaon"
#. description of the notification option 'File Rejected'
#: pkg/store/defaults/templates.go:42
msgid ""
"Notify when a file I uploaded was rejected because of a virus infection or "
"policy violation"
msgstr ""
#. description of the notification option 'Share Removed'
#: pkg/store/defaults/templates.go:14
msgid "Notify when a received share has been removed"
msgstr "Ilmoita kun vastaanotettu jako on poistettu"
#. description of the notification option 'Share Expired'
#: pkg/store/defaults/templates.go:18
msgid "Notify when a received share has expired"
msgstr "Ilmoita kun vastaanotettu jako on vanhentunut"
#. description of the notification option 'Space Deleted'
#: pkg/store/defaults/templates.go:38
msgid "Notify when a space I am member of has been deleted"
msgstr ""
#. description of the notification option 'Space Disabled'
#: pkg/store/defaults/templates.go:34
msgid "Notify when a space I am member of has been disabled"
msgstr ""
#. description of the notification option 'Space Membership Expired'
#: pkg/store/defaults/templates.go:30
msgid "Notify when a space membership has expired"
msgstr ""
#. name of the notification option 'Space Unshared'
#: pkg/store/defaults/templates.go:24
msgid "Removed as space member"
msgstr ""
#. description of the notification option 'Email Interval'
#: pkg/store/defaults/templates.go:46
msgid "Selected value:"
msgstr "Valittu arvo:"
#. name of the notification option 'Share Expired'
#: pkg/store/defaults/templates.go:16
msgid "Share Expired"
msgstr "Jako vanheni"
#. name of the notification option 'Share Received'
#: pkg/store/defaults/templates.go:8
msgid "Share Received"
msgstr "Jako vastaanotettu"
#. name of the notification option 'Share Removed'
#: pkg/store/defaults/templates.go:12
msgid "Share Removed"
msgstr "Jako poistettu"
#. name of the notification option 'Space Deleted'
#: pkg/store/defaults/templates.go:36
msgid "Space deleted"
msgstr "Avaruus poistettu"
#. name of the notification option 'Space Disabled'
#: pkg/store/defaults/templates.go:32
msgid "Space disabled"
msgstr "Avaruus poistettu käytöstä"
#. name of the notification option 'Space Membership Expired'
#: pkg/store/defaults/templates.go:28
msgid "Space membership expired"
msgstr "Avaruuden jäsenyys vanhentunut"
#. translation for the 'weekly' email interval option
#: pkg/store/defaults/templates.go:52
msgid "Weekly"
msgstr "Viikottain"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-10-16 08:04+0000\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: LinkinWires <darkinsonic13@gmail.com>, 2025\n"
"Language-Team: Ukrainian (https://app.transifex.com/opencloud-eu/teams/204053/uk/)\n"

View File

@@ -140,6 +140,7 @@ func generateBundleAdminRole() *settingsmsg.Bundle {
SetProjectSpaceQuotaPermission(All),
SettingsManagementPermission(All),
SpaceAbilityPermission(All),
WebOfficeManagementPermssion(All),
WriteFavoritesPermission(Own),
},
}
@@ -659,9 +660,9 @@ func DefaultRoleAssignments(cfg *config.Config) []*settingsmsg.UserRoleAssignmen
RoleId: BundleUUIDRoleUser,
},
{
AccountUuid: "60708dda-e897-11ef-919f-bbb7437d6ec2",
RoleId: BundleUUIDRoleUser,
},
AccountUuid: "60708dda-e897-11ef-919f-bbb7437d6ec2",
RoleId: BundleUUIDRoleUser,
},
{
// additional admin user
AccountUuid: "cd88bf9a-dd7f-11ef-a609-7f78deb2345f", // demo user "dennis"

View File

@@ -621,3 +621,22 @@ func WriteFavoritesPermission(c settingsmsg.Permission_Constraint) *settingsmsg.
},
}
}
// WebOfficeManagementPermssion is the permission that grants access to the admin features of the WebOffice suite
func WebOfficeManagementPermssion(c settingsmsg.Permission_Constraint) *settingsmsg.Setting {
return &settingsmsg.Setting{
Id: "27a29046-a816-424f-bd71-2ffb9029162f",
Name: "WebOffice.Manage",
DisplayName: "Manage WebOffice",
Description: "This permission gives access to the admin features in the WebOffice suite.",
Resource: &settingsmsg.Resource{
Type: settingsmsg.Resource_TYPE_SYSTEM,
},
Value: &settingsmsg.Setting_PermissionValue{
PermissionValue: &settingsmsg.Permission{
Operation: settingsmsg.Permission_OPERATION_READWRITE,
Constraint: c,
},
},
}
}

View File

@@ -0,0 +1,117 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# Jiri Grönroos <jiri.gronroos@iki.fi>, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fi\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#: pkg/service/templates.go:39
msgid "Access to Space {space} lost"
msgstr "Pääsy avaruuteen {space} menetetty"
#: pkg/service/templates.go:54
msgid "Access to {resource} expired"
msgstr "Pääsy resurssiin {resource} vanheni"
#: pkg/service/templates.go:59
msgid ""
"Attention! The instance will be shut down and deprovisioned on {date}. "
"Download all your data before that date as no access past that date is "
"possible."
msgstr ""
"Huomio! Tämä instanssi sammutetaan ja poistetaan {date}. Lataa datasi ennen "
"kyseistä päivää, sillä pääsy sen jälkeen ei ole mahdollista."
#: pkg/service/templates.go:14
msgid "File {resource} was deleted because it violates the policies"
msgstr "Tiedosto {resource} poistettiin, koska loukkaa käytäntöjä"
#: pkg/service/templates.go:58
msgid "Instance will be shut down and deprovisioned"
msgstr "Instanssi sammutetaan ja poistetaan käytöstä"
#: pkg/service/templates.go:38
msgid "Membership expired"
msgstr "Jäsenyys vanhentunut"
#: pkg/service/templates.go:13
msgid "Policies enforced"
msgstr "Käytännöt pakotettu"
#: pkg/service/templates.go:23
msgid "Removed from Space"
msgstr "Poistettu avaruudesta"
#: pkg/service/templates.go:43
msgid "Resource shared"
msgstr "Resurssi jaettu"
#: pkg/service/templates.go:48
msgid "Resource unshared"
msgstr "Resurssin jako lopetettu"
#: pkg/service/templates.go:53
msgid "Share expired"
msgstr "Jako vanhentunut"
#: pkg/service/templates.go:33
msgid "Space deleted"
msgstr "Jako poistettu"
#: pkg/service/templates.go:28
msgid "Space disabled"
msgstr "Avaruus poistettu käytöstä"
#: pkg/service/templates.go:18
msgid "Space shared"
msgstr "Avaruus jaettu"
#: pkg/service/templates.go:8
msgid "Virus found"
msgstr "Virus löydetty"
#: pkg/service/templates.go:9
msgid "Virus found in {resource}. Upload not possible. Virus: {virus}"
msgstr ""
"Virus paikannettu resurssiin {resource}. Lähetys ei ole mahdollista. Virus: "
"{virus}"
#: pkg/service/templates.go:19
msgid "{user} added you to Space {space}"
msgstr "{user} lisäsi sinut avaruuteen {space}"
#: pkg/service/templates.go:34
msgid "{user} deleted Space {space}"
msgstr "{user} poisti avaruuden {space}"
#: pkg/service/templates.go:29
msgid "{user} disabled Space {space}"
msgstr "{user} poisti käytöstä avaruuden {space}"
#: pkg/service/templates.go:24
msgid "{user} removed you from Space {space}"
msgstr "{user} poisti sinut avaruudesta {space}"
#: pkg/service/templates.go:44
msgid "{user} shared {resource} with you"
msgstr "{user} jakoi {resource} kanssasi"
#: pkg/service/templates.go:49
msgid "{user} unshared {resource} with you"
msgstr "{user} lopettu resurssin {resource} jakamisen kanssasi"

View File

@@ -719,6 +719,10 @@ class CliContext implements Context {
* @return void
*/
public function theAdministratorCopiesFileToFolder(string $user, string $file, string $folder): void {
// this downloads the file using WebDAV and thereby checks whether it's still in
// postprocessing. So it's effectively a check for finished postprocessing
$this->featureContext->userDownloadsFileUsingTheAPI($user, $file);
$userUuid = $this->featureContext->getAttributeOfCreatedUser($user, 'id');
$storagePath = $this->getUsersStoragePath();
@@ -743,6 +747,10 @@ class CliContext implements Context {
* @return void
*/
public function theAdministratorRenamesFile(string $user, string $file, string $newName): void {
// this downloads the file using WebDAV and thereby checks whether it's still in
// postprocessing. So it's effectively a check for finished postprocessing
$this->featureContext->userDownloadsFileUsingTheAPI($user, $file);
$userUuid = $this->featureContext->getAttributeOfCreatedUser($user, 'id');
$storagePath = $this->getUsersStoragePath();
@@ -767,6 +775,10 @@ class CliContext implements Context {
* @return void
*/
public function theAdministratorMovesFileToFolder(string $user, string $file, string $folder): void {
// this downloads the file using WebDAV and thereby checks whether it's still in
// postprocessing. So it's effectively a check for finished postprocessing
$this->featureContext->userDownloadsFileUsingTheAPI($user, $file);
$userUuid = $this->featureContext->getAttributeOfCreatedUser($user, 'id');
$storagePath = $this->getUsersStoragePath();
@@ -831,6 +843,10 @@ class CliContext implements Context {
* @return void
*/
public function theAdministratorCopiesFileToSpace(string $user, string $file, string $space): void {
// this downloads the file using WebDAV and thereby checks whether it's still in
// postprocessing. So it's effectively a check for finished postprocessing
$this->featureContext->userDownloadsFileUsingTheAPI($user, $file);
$userUuid = $this->featureContext->getAttributeOfCreatedUser($user, 'id');
$usersStoragePath = $this->getUsersStoragePath();
$projectsStoragePath = $this->getProjectsStoragePath();
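The comment repeated in these steps relies on one property: a WebDAV download only succeeds once postprocessing is finished, so issuing one before touching files on disk acts as a synchronisation point. A minimal Go sketch of that idea, with a placeholder URL and credentials that are not the suite's actual configuration:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForPostprocessing polls a WebDAV GET until it returns 200 OK.
// While a file is still in postprocessing the server answers with an error
// status, so a successful download implies processing has finished.
func waitForPostprocessing(url, user, pass string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		req.SetBasicAuth(user, pass)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // safe to access the file on disk now
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("file still in postprocessing after %s", timeout)
}

func main() {
	// Placeholder URL and credentials, for illustration only.
	err := waitForPostprocessing(
		"https://cloud.example.org/remote.php/webdav/lorem.txt",
		"alice", "secret", 10*time.Second)
	fmt.Println(err)
}
```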

View File

@@ -116,8 +116,8 @@ _ocdav: api compatibility, return correct status code_
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:375](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L375)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)
#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

View File

@@ -116,8 +116,8 @@ _ocdav: api compatibility, return correct status code_
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:375](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L375)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)
#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

View File

@@ -23,6 +23,7 @@ Feature: sharing
| sharee | grp1 |
| shareType | group |
| permissionsRole | File Editor |
And user "Brian" has a share "textfile0.txt" synced
And user "Brian" has moved file "/Shares/textfile0.txt" to "/Shares/anotherName.txt"
When user "Alice" deletes the last share using the sharing API
Then the OCS status code should be "<ocs-status-code>"
@@ -89,6 +90,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | Editor |
And user "Brian" has a share "shared" synced
When user "Brian" deletes file "/Shares/shared/shared_file.txt" using the WebDAV API
Then the HTTP status code should be "204"
And as "Brian" file "/Shares/shared/shared_file.txt" should not exist
@@ -114,6 +116,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | Editor |
And user "Brian" has a share "shared" synced
When user "Brian" deletes folder "/Shares/shared/sub" using the WebDAV API
Then the HTTP status code should be "204"
And as "Brian" folder "/Shares/shared/sub" should not exist
@@ -165,6 +168,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | Viewer |
And user "Brian" has a share "shared" synced
When user "Brian" deletes file "/Shares/shared/shared_file.txt" using the WebDAV API
Then the HTTP status code should be "403"
And as "Alice" file "/shared/shared_file.txt" should exist
@@ -187,6 +191,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | Uploader |
And user "Brian" has a share "shared" synced
When user "Brian" deletes file "/Shares/shared/shared_file.txt" using the WebDAV API
Then the HTTP status code should be "403"
And as "Alice" file "/shared/shared_file.txt" should exist
@@ -208,6 +213,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | Uploader |
And user "Brian" has a share "shared" synced
And user "Brian" has uploaded file "filesForUpload/textfile.txt" to "/Shares/shared/textfile.txt"
When user "Brian" deletes file "/Shares/shared/textfile.txt" using the WebDAV API
Then the HTTP status code should be "403"
@@ -237,6 +243,7 @@ Feature: sharing
| sharee | grp1 |
| shareType | group |
| permissionsRole | <permission-role> |
And user "Brian" has a share "<sync-entry>" synced
When user "Brian" deletes the last share of user "Alice" using the sharing API
Then the OCS status code should be "996"
And the HTTP status code should be "<http-status-code>"
@@ -244,11 +251,11 @@ Feature: sharing
And as "Brian" entry "<received-entry>" should exist
And as "Carol" entry "<received-entry>" should exist
Examples:
| entry-to-share | permission-role | ocs-api-version | http-status-code | received-entry |
| /shared/shared_file.txt | File Editor | 1 | 200 | /Shares/shared_file.txt |
| /shared/shared_file.txt | File Editor | 2 | 500 | /Shares/shared_file.txt |
| /shared | Editor | 1 | 200 | /Shares/shared |
| /shared | Editor | 2 | 500 | /Shares/shared |
| entry-to-share | permission-role | ocs-api-version | http-status-code | received-entry | sync-entry |
| /shared/shared_file.txt | File Editor | 1 | 200 | /Shares/shared_file.txt | shared_file.txt |
| /shared/shared_file.txt | File Editor | 2 | 500 | /Shares/shared_file.txt | shared_file.txt |
| /shared | Editor | 1 | 200 | /Shares/shared | shared |
| /shared | Editor | 2 | 500 | /Shares/shared | shared |
Scenario Outline: individual share recipient tries to delete the share
@@ -262,17 +269,18 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | <permission-role> |
And user "Brian" has a share "<sync-entry>" synced
When user "Brian" deletes the last share of user "Alice" using the sharing API
Then the OCS status code should be "996"
And the HTTP status code should be "<http-status-code>"
And as "Alice" entry "<entry-to-share>" should exist
And as "Brian" entry "<received-entry>" should exist
Examples:
| entry-to-share | permission-role | ocs-api-version | http-status-code | received-entry |
| /shared/shared_file.txt | File Editor | 1 | 200 | /Shares/shared_file.txt |
| /shared/shared_file.txt | File Editor | 2 | 500 | /Shares/shared_file.txt |
| /shared | Editor | 1 | 200 | /Shares/shared |
| /shared | Editor | 2 | 500 | /Shares/shared |
| entry-to-share | permission-role | ocs-api-version | http-status-code | received-entry | sync-entry |
| /shared/shared_file.txt | File Editor | 1 | 200 | /Shares/shared_file.txt | shared_file.txt |
| /shared/shared_file.txt | File Editor | 2 | 500 | /Shares/shared_file.txt | shared_file.txt |
| /shared | Editor | 1 | 200 | /Shares/shared | shared |
| /shared | Editor | 2 | 500 | /Shares/shared | shared |
@issue-720
Scenario Outline: request PROPFIND after sharer deletes the collaborator
@@ -284,6 +292,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | File Editor |
And user "Brian" has a share "textfile0.txt" synced
When user "Alice" deletes the last share using the sharing API
Then the OCS status code should be "<ocs-status-code>"
And the HTTP status code should be "200"
@@ -304,6 +313,7 @@ Feature: sharing
| sharee | Brian |
| shareType | user |
| permissionsRole | File Editor |
And user "Brian" has a share "textfile0.txt" synced
When user "Brian" tries to delete the last share of user "Alice" using the sharing API
Then the HTTP status code should be "<http-status-code>"
And the OCS status code should be "996"

View File

@@ -290,6 +290,7 @@ Feature: upload file to shared folder
And user "Alice" sends a chunk to the last created TUS Location with offset "5" and data "56789" with checksum "MD5 099ebea48ea9666a7da2177267983138" using the TUS protocol on the WebDAV API
And user "Alice" shares file "textFile.txt" with user "Brian" using the sharing API
Then the HTTP status code should be "200"
And user "Brian" has a share "textFile.txt" synced
And the content of file "/Shares/textFile.txt" for user "Brian" should be "0123456789"
Examples:
| dav-path-version |

View File

@@ -1,4 +1,4 @@
//go:build !(octopus || pacific || quincy || reef || squid) && ceph_preview
//go:build !(octopus || pacific || quincy || reef || squid)
package admin

View File

@@ -0,0 +1,9 @@
# GitHub obeys this ignore file by default.
# Run this command locally to ignore formatting commits in `git blame`:
# git config blame.ignoreRevsFile .git-blame-ignore-revs
# Added a new column to supported_mimes.md
# The supported_mimes.md file was a nice way to find when a file format was
# introduced. However, when I added a new column to the table, the whole
# git blame for the file got poisoned.
eb497f9bc5d31c6eab2929a112051218670137ba

View File

@@ -97,7 +97,9 @@ or from a [file](https://pkg.go.dev/github.com/gabriel-vasile/mimetype#DetectFil
</div>
## Contributing
Contributions are unexpected but welcome. When submitting a PR for detection of
a new file format, please make sure to add a record to the list of testcases
from [mimetype_test.go](mimetype_test.go). For complex files a record can be added
in the [testdata](testdata) directory.
Contributions are never expected but very much welcome.
[mimetype_tests](https://github.com/gabriel-vasile/mimetype_tests/actions/workflows/test.yml)
shows which file formats are most often misidentified and can help prioritise.
When submitting a PR for detection of a new file format, please make sure to
add a record to the list of testcases in [mimetype_test.go](mimetype_test.go).
For complex files a record can be added in the [testdata](testdata) directory.

View File

@@ -2,6 +2,7 @@ package charset
import (
"bytes"
"strings"
"unicode/utf8"
"github.com/gabriel-vasile/mimetype/internal/markup"
@@ -141,27 +142,25 @@ func FromXML(content []byte) string {
return FromPlain(content)
}
func fromXML(s scan.Bytes) string {
xml := []byte("<?XML")
xml := []byte("<?xml")
lxml := len(xml)
for {
if len(s) == 0 {
return ""
}
for scan.ByteIsWS(s.Peek()) {
s.Advance(1)
}
s.TrimLWS()
if len(s) <= lxml {
return ""
}
if !s.Match(xml, scan.IgnoreCase) {
s = s[1:] // safe to slice instead of s.Advance(1) because bounds are checked
continue
i, k := s.Search(xml, 0)
if i == -1 {
return ""
}
aName, aVal, hasMore := "", "", true
s.Advance(i + k)
var aName, aVal []byte
hasMore := true
for hasMore {
aName, aVal, hasMore = markup.GetAnAttribute(&s)
if aName == "encoding" && aVal != "" {
return aVal
if scan.Bytes(aName).Match([]byte("encoding"), 0) != -1 && len(aVal) != 0 {
return string(aVal)
}
}
}
@@ -198,10 +197,10 @@ func fromHTML(s scan.Bytes) string {
return ""
}
// Abort when <body is reached.
if s.Match(body, scan.IgnoreCase) {
if s.Match(body, scan.IgnoreCase) != -1 {
return ""
}
if !s.Match(meta, scan.IgnoreCase) {
if s.Match(meta, scan.IgnoreCase) == -1 {
s = s[1:] // safe to slice instead of s.Advance(1) because bounds are checked
continue
}
@@ -215,14 +214,16 @@ func fromHTML(s scan.Bytes) string {
needPragma := dontKnow
charset := ""
aName, aVal, hasMore := "", "", true
var aNameB, aValB []byte
hasMore := true
for hasMore {
aName, aVal, hasMore = markup.GetAnAttribute(&s)
aNameB, aValB, hasMore = markup.GetAnAttribute(&s)
aName := strings.ToLower(string(aNameB))
if attrList[aName] {
continue
}
// processing step
if len(aName) == 0 && len(aVal) == 0 {
if len(aName) == 0 && len(aValB) == 0 {
if needPragma == dontKnow {
continue
}
@@ -231,15 +232,18 @@ func fromHTML(s scan.Bytes) string {
}
}
attrList[aName] = true
if aName == "http-equiv" && scan.Bytes(aVal).Match([]byte("CONTENT-TYPE"), scan.IgnoreCase) {
gotPragma = true
} else if aName == "content" {
charset = string(extractCharsetFromMeta(scan.Bytes(aVal)))
switch aName {
case "http-equiv":
if scan.Bytes(aValB).Match([]byte("CONTENT-TYPE"), scan.IgnoreCase) != -1 {
gotPragma = true
}
case "content":
charset = string(extractCharsetFromMeta(scan.Bytes(aValB)))
if len(charset) != 0 {
needPragma = doNeedPragma
}
} else if aName == "charset" {
charset = aVal
case "charset":
charset = string(aValB)
needPragma = doNotNeedPragma
}
}
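The rewritten fromXML searches case-insensitively for the `<?xml` prolog and then walks its attributes until it finds `encoding=`. A standalone illustration of the expected input/output behaviour, using a regular expression instead of the library's internal scan/markup helpers:

```go
package main

import (
	"fmt"
	"regexp"
)

// prologRe matches an XML prolog and captures the declared encoding.
var prologRe = regexp.MustCompile(`(?i)<\?xml[^>]*\bencoding=["']([^"']+)["']`)

// encodingFromXML returns the encoding declared in an XML prolog, or ""
// when no declaration is present. Illustrative only; the library walks the
// attributes with its own markup scanner rather than a regular expression.
func encodingFromXML(content []byte) string {
	if m := prologRe.FindSubmatch(content); m != nil {
		return string(m[1])
	}
	return ""
}

func main() {
	doc := []byte(`<?xml version="1.0" encoding="ISO-8859-1"?><root/>`)
	fmt.Println(encodingFromXML(doc)) // ISO-8859-1

	noDecl := []byte(`<?xml version="1.0"?><root/>`)
	fmt.Printf("%q\n", encodingFromXML(noDecl)) // ""
}
```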

View File

@@ -257,8 +257,11 @@ out:
return 0
}
// openArray is used instead of an inline []byte{'['} to avoid memory allocations.
var openArray = []byte{'['}
func (p *parserState) consumeArray(b []byte, qs []query, lvl int) (n int) {
p.appendPath([]byte{'['}, qs)
p.appendPath(openArray, qs)
if len(b) == 0 {
return 0
}
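The new package-level `openArray` keeps the hot path allocation-free: an inline `[]byte{'['}` escapes into `appendPath` and may be heap-allocated on every call, while the package-level slice is allocated exactly once. A small sketch of the pattern, independent of the parser itself:

```go
package main

import "fmt"

// Allocated once at package init; reused by every call.
var openArray = []byte{'['}

// appendPath mimics a function whose argument escapes (it is retained),
// which is what forces a heap allocation for an inline []byte literal.
func appendPath(dst [][]byte, seg []byte) [][]byte {
	return append(dst, seg)
}

func withLiteral(n int) [][]byte {
	var path [][]byte
	for i := 0; i < n; i++ {
		path = appendPath(path, []byte{'['}) // may allocate on every iteration
	}
	return path
}

func withPackageVar(n int) [][]byte {
	var path [][]byte
	for i := 0; i < n; i++ {
		path = appendPath(path, openArray) // reuses the single allocation
	}
	return path
}

func main() {
	fmt.Println(len(withLiteral(3)), len(withPackageVar(3)))
}
```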

View File

@@ -5,43 +5,80 @@ import (
"encoding/binary"
)
var (
// SevenZ matches a 7z archive.
SevenZ = prefix([]byte{0x37, 0x7A, 0xBC, 0xAF, 0x27, 0x1C})
// Gzip matches gzip files based on http://www.zlib.org/rfc-gzip.html#header-trailer.
Gzip = prefix([]byte{0x1f, 0x8b})
// Fits matches a Flexible Image Transport System file.
Fits = prefix([]byte{
// SevenZ matches a 7z archive.
func SevenZ(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x37, 0x7A, 0xBC, 0xAF, 0x27, 0x1C})
}
// Gzip matches gzip files based on http://www.zlib.org/rfc-gzip.html#header-trailer.
func Gzip(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x1f, 0x8b})
}
// Fits matches a Flexible Image Transport System file.
func Fits(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{
0x53, 0x49, 0x4D, 0x50, 0x4C, 0x45, 0x20, 0x20, 0x3D, 0x20,
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x54,
})
// Xar matches an eXtensible ARchive format file.
Xar = prefix([]byte{0x78, 0x61, 0x72, 0x21})
// Bz2 matches a bzip2 file.
Bz2 = prefix([]byte{0x42, 0x5A, 0x68})
// Ar matches an ar (Unix) archive file.
Ar = prefix([]byte{0x21, 0x3C, 0x61, 0x72, 0x63, 0x68, 0x3E})
// Deb matches a Debian package file.
Deb = offset([]byte{
}
// Xar matches an eXtensible ARchive format file.
func Xar(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x78, 0x61, 0x72, 0x21})
}
// Bz2 matches a bzip2 file.
func Bz2(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x42, 0x5A, 0x68})
}
// Ar matches an ar (Unix) archive file.
func Ar(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x21, 0x3C, 0x61, 0x72, 0x63, 0x68, 0x3E})
}
// Deb matches a Debian package file.
func Deb(raw []byte, _ uint32) bool {
return offset(raw, []byte{
0x64, 0x65, 0x62, 0x69, 0x61, 0x6E, 0x2D,
0x62, 0x69, 0x6E, 0x61, 0x72, 0x79,
}, 8)
// Warc matches a Web ARChive file.
Warc = prefix([]byte("WARC/1.0"), []byte("WARC/1.1"))
// Cab matches a Microsoft Cabinet archive file.
Cab = prefix([]byte("MSCF\x00\x00\x00\x00"))
// Xz matches an xz compressed stream based on https://tukaani.org/xz/xz-file-format.txt.
Xz = prefix([]byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00})
// Lzip matches an Lzip compressed file.
Lzip = prefix([]byte{0x4c, 0x5a, 0x49, 0x50})
// RPM matches an RPM or Delta RPM package file.
RPM = prefix([]byte{0xed, 0xab, 0xee, 0xdb}, []byte("drpm"))
// Cpio matches a cpio archive file.
Cpio = prefix([]byte("070707"), []byte("070701"), []byte("070702"))
// RAR matches a RAR archive file.
RAR = prefix([]byte("Rar!\x1A\x07\x00"), []byte("Rar!\x1A\x07\x01\x00"))
)
}
// Warc matches a Web ARChive file.
func Warc(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("WARC/1.0")) ||
bytes.HasPrefix(raw, []byte("WARC/1.1"))
}
// Cab matches a Microsoft Cabinet archive file.
func Cab(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("MSCF\x00\x00\x00\x00"))
}
// Xz matches an xz compressed stream based on https://tukaani.org/xz/xz-file-format.txt.
func Xz(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00})
}
// Lzip matches an Lzip compressed file.
func Lzip(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x4c, 0x5a, 0x49, 0x50})
}
// RPM matches an RPM or Delta RPM package file.
func RPM(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0xed, 0xab, 0xee, 0xdb}) ||
bytes.HasPrefix(raw, []byte("drpm"))
}
// RAR matches a RAR archive file.
func RAR(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("Rar!\x1A\x07\x00")) ||
bytes.HasPrefix(raw, []byte("Rar!\x1A\x07\x01\x00"))
}
// InstallShieldCab matches an InstallShield Cabinet archive file.
func InstallShieldCab(raw []byte, _ uint32) bool {
@@ -78,6 +115,17 @@ func CRX(raw []byte, limit uint32) bool {
return Zip(raw[zipOffset:], limit)
}
// Cpio matches a cpio archive file.
func Cpio(raw []byte, _ uint32) bool {
if len(raw) < 6 {
return false
}
return binary.LittleEndian.Uint16(raw) == 070707 || // binary cpio
bytes.HasPrefix(raw, []byte("070707")) || // portable ASCII cpios
bytes.HasPrefix(raw, []byte("070701")) ||
bytes.HasPrefix(raw, []byte("070702"))
}
// Tar matches a (t)ape (ar)chive file.
// Tar files are divided into 512 bytes records. First record contains a 257
// bytes header padded with NUL.
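The rewritten Cpio matcher covers both flavours of the format: old binary cpio stores the magic number 070707 (octal) as a little-endian 16-bit integer, while the portable ASCII variants spell it out as the strings "070707", "070701" and "070702". A quick standalone check of the octal/byte equivalence:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	// 070707 octal == 29127 decimal == 0x71C7.
	const magic = 070707

	// A binary cpio archive therefore starts with the little-endian
	// bytes 0xC7 0x71, which is the first check in the matcher.
	binaryHeader := []byte{0xC7, 0x71, 0x00, 0x00}
	fmt.Println(binary.LittleEndian.Uint16(binaryHeader) == magic) // true

	// The portable ASCII variants spell the magic out as digits instead.
	asciiHeader := []byte("070701")
	fmt.Println(bytes.HasPrefix(asciiHeader, []byte("070701"))) // true
}
```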

View File

@@ -5,26 +5,50 @@ import (
"encoding/binary"
)
var (
// Flac matches a Free Lossless Audio Codec file.
Flac = prefix([]byte("\x66\x4C\x61\x43\x00\x00\x00\x22"))
// Midi matches a Musical Instrument Digital Interface file.
Midi = prefix([]byte("\x4D\x54\x68\x64"))
// Ape matches a Monkey's Audio file.
Ape = prefix([]byte("\x4D\x41\x43\x20\x96\x0F\x00\x00\x34\x00\x00\x00\x18\x00\x00\x00\x90\xE3"))
// MusePack matches a Musepack file.
MusePack = prefix([]byte("MPCK"))
// Au matches a Sun Microsystems au file.
Au = prefix([]byte("\x2E\x73\x6E\x64"))
// Amr matches an Adaptive Multi-Rate file.
Amr = prefix([]byte("\x23\x21\x41\x4D\x52"))
// Voc matches a Creative Voice file.
Voc = prefix([]byte("Creative Voice File"))
// M3u matches a Playlist file.
M3u = prefix([]byte("#EXTM3U"))
// AAC matches an Advanced Audio Coding file.
AAC = prefix([]byte{0xFF, 0xF1}, []byte{0xFF, 0xF9})
)
// Flac matches a Free Lossless Audio Codec file.
func Flac(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x66\x4C\x61\x43\x00\x00\x00\x22"))
}
// Midi matches a Musical Instrument Digital Interface file.
func Midi(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x4D\x54\x68\x64"))
}
// Ape matches a Monkey's Audio file.
func Ape(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x4D\x41\x43\x20\x96\x0F\x00\x00\x34\x00\x00\x00\x18\x00\x00\x00\x90\xE3"))
}
// MusePack matches a Musepack file.
func MusePack(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("MPCK"))
}
// Au matches a Sun Microsystems au file.
func Au(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x2E\x73\x6E\x64"))
}
// Amr matches an Adaptive Multi-Rate file.
func Amr(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x23\x21\x41\x4D\x52"))
}
// Voc matches a Creative Voice file.
func Voc(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("Creative Voice File"))
}
// M3u matches a Playlist file.
func M3u(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("#EXTM3U"))
}
// AAC matches an Advanced Audio Coding file.
func AAC(raw []byte, _ uint32) bool {
return len(raw) > 1 && ((raw[0] == 0xFF && raw[1] == 0xF1) || (raw[0] == 0xFF && raw[1] == 0xF9))
}
// Mp3 matches an mp3 file.
func Mp3(raw []byte, limit uint32) bool {

View File

@@ -6,26 +6,52 @@ import (
"encoding/binary"
)
var (
// Lnk matches Microsoft lnk binary format.
Lnk = prefix([]byte{0x4C, 0x00, 0x00, 0x00, 0x01, 0x14, 0x02, 0x00})
// Wasm matches a web assembly File Format file.
Wasm = prefix([]byte{0x00, 0x61, 0x73, 0x6D})
// Exe matches a Windows/DOS executable file.
Exe = prefix([]byte{0x4D, 0x5A})
// Elf matches an Executable and Linkable Format file.
Elf = prefix([]byte{0x7F, 0x45, 0x4C, 0x46})
// Nes matches a Nintendo Entertainment system ROM file.
Nes = prefix([]byte{0x4E, 0x45, 0x53, 0x1A})
// SWF matches an Adobe Flash swf file.
SWF = prefix([]byte("CWS"), []byte("FWS"), []byte("ZWS"))
// Torrent has bencoded text in the beginning.
Torrent = prefix([]byte("d8:announce"))
// PAR1 matches a parquet file.
Par1 = prefix([]byte{0x50, 0x41, 0x52, 0x31})
// CBOR matches a Concise Binary Object Representation https://cbor.io/
CBOR = prefix([]byte{0xD9, 0xD9, 0xF7})
)
// Lnk matches Microsoft lnk binary format.
func Lnk(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x4C, 0x00, 0x00, 0x00, 0x01, 0x14, 0x02, 0x00})
}
// Wasm matches a web assembly File Format file.
func Wasm(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x00, 0x61, 0x73, 0x6D})
}
// Exe matches a Windows/DOS executable file.
func Exe(raw []byte, _ uint32) bool {
return len(raw) > 1 && raw[0] == 0x4D && raw[1] == 0x5A
}
// Elf matches an Executable and Linkable Format file.
func Elf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x7F, 0x45, 0x4C, 0x46})
}
// Nes matches a Nintendo Entertainment system ROM file.
func Nes(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x4E, 0x45, 0x53, 0x1A})
}
// SWF matches an Adobe Flash swf file.
func SWF(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("CWS")) ||
bytes.HasPrefix(raw, []byte("FWS")) ||
bytes.HasPrefix(raw, []byte("ZWS"))
}
// Torrent has bencoded text in the beginning.
func Torrent(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("d8:announce"))
}
// PAR1 matches a parquet file.
func Par1(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x50, 0x41, 0x52, 0x31})
}
// CBOR matches a Concise Binary Object Representation https://cbor.io/
func CBOR(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0xD9, 0xD9, 0xF7})
}
// Java bytecode and Mach-O binaries share the same magic number.
// More info here https://github.com/threatstack/libmagic/blob/master/magic/Magdir/cafebabe
@@ -168,8 +194,10 @@ func Marc(raw []byte, limit uint32) bool {
//
// [glTF specification]: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html
// [IANA glTF entry]: https://www.iana.org/assignments/media-types/model/gltf-binary
var GLB = prefix([]byte("\x67\x6C\x54\x46\x02\x00\x00\x00"),
[]byte("\x67\x6C\x54\x46\x01\x00\x00\x00"))
func GLB(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x67\x6C\x54\x46\x02\x00\x00\x00")) ||
bytes.HasPrefix(raw, []byte("\x67\x6C\x54\x46\x01\x00\x00\x00"))
}
// TzIf matches a Time Zone Information Format (TZif) file.
// See more: https://tools.ietf.org/id/draft-murchison-tzdist-tzif-00.html#rfc.section.3

View File

@@ -1,13 +1,21 @@
package magic
var (
// Sqlite matches an SQLite database file.
Sqlite = prefix([]byte{
import "bytes"
// Sqlite matches an SQLite database file.
func Sqlite(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{
0x53, 0x51, 0x4c, 0x69, 0x74, 0x65, 0x20, 0x66,
0x6f, 0x72, 0x6d, 0x61, 0x74, 0x20, 0x33, 0x00,
})
// MsAccessAce matches a Microsoft Access database file.
MsAccessAce = offset([]byte("Standard ACE DB"), 4)
// MsAccessMdb matches legacy Microsoft Access database file (JET, 2003 and earlier).
MsAccessMdb = offset([]byte("Standard Jet DB"), 4)
)
}
// MsAccessAce matches a Microsoft Access database file.
func MsAccessAce(raw []byte, _ uint32) bool {
return offset(raw, []byte("Standard ACE DB"), 4)
}
// MsAccessMdb matches legacy Microsoft Access database file (JET, 2003 and earlier).
func MsAccessMdb(raw []byte, _ uint32) bool {
return offset(raw, []byte("Standard Jet DB"), 4)
}

View File

@@ -5,14 +5,31 @@ import (
"encoding/binary"
)
var (
// Fdf matches a Forms Data Format file.
Fdf = prefix([]byte("%FDF"))
// Mobi matches a Mobi file.
Mobi = offset([]byte("BOOKMOBI"), 60)
// Lit matches a Microsoft Lit file.
Lit = prefix([]byte("ITOLITLS"))
)
// Pdf matches a Portable Document Format file.
// https://github.com/file/file/blob/11010cc805546a3e35597e67e1129a481aed40e8/magic/Magdir/pdf
func Pdf(raw []byte, _ uint32) bool {
// usual pdf signature
return bytes.HasPrefix(raw, []byte("%PDF-")) ||
// new-line prefixed signature
bytes.HasPrefix(raw, []byte("\012%PDF-")) ||
// UTF-8 BOM prefixed signature
bytes.HasPrefix(raw, []byte("\xef\xbb\xbf%PDF-"))
}
// Fdf matches a Forms Data Format file.
func Fdf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("%FDF"))
}
// Mobi matches a Mobi file.
func Mobi(raw []byte, _ uint32) bool {
return offset(raw, []byte("BOOKMOBI"), 60)
}
// Lit matches a Microsoft Lit file.
func Lit(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("ITOLITLS"))
}
// PDF matches a Portable Document Format file.
// The %PDF- header should be the first thing inside the file but many

View File

@@ -4,14 +4,20 @@ import (
"bytes"
)
var (
// Woff matches a Web Open Font Format file.
Woff = prefix([]byte("wOFF"))
// Woff2 matches a Web Open Font Format version 2 file.
Woff2 = prefix([]byte("wOF2"))
// Otf matches an OpenType font file.
Otf = prefix([]byte{0x4F, 0x54, 0x54, 0x4F, 0x00})
)
// Woff matches a Web Open Font Format file.
func Woff(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("wOFF"))
}
// Woff2 matches a Web Open Font Format version 2 file.
func Woff2(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("wOF2"))
}
// Otf matches an OpenType font file.
func Otf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x4F, 0x54, 0x54, 0x4F, 0x00})
}
// Ttf matches a TrueType font file.
func Ttf(raw []byte, limit uint32) bool {

View File

@@ -4,24 +4,33 @@ import (
"bytes"
)
var (
// AVIF matches an AV1 Image File Format still or animated.
// Wikipedia page seems outdated listing image/avif-sequence for animations.
// https://github.com/AOMediaCodec/av1-avif/issues/59
AVIF = ftyp([]byte("avif"), []byte("avis"))
// ThreeGP matches a 3GPP file.
ThreeGP = ftyp(
// AVIF matches an AV1 Image File Format still or animated.
// Wikipedia page seems outdated listing image/avif-sequence for animations.
// https://github.com/AOMediaCodec/av1-avif/issues/59
func AVIF(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("avif"), []byte("avis"))
}
// ThreeGP matches a 3GPP file.
func ThreeGP(raw []byte, _ uint32) bool {
return ftyp(raw,
[]byte("3gp1"), []byte("3gp2"), []byte("3gp3"), []byte("3gp4"),
[]byte("3gp5"), []byte("3gp6"), []byte("3gp7"), []byte("3gs7"),
[]byte("3ge6"), []byte("3ge7"), []byte("3gg6"),
)
// ThreeG2 matches a 3GPP2 file.
ThreeG2 = ftyp(
}
// ThreeG2 matches a 3GPP2 file.
func ThreeG2(raw []byte, _ uint32) bool {
return ftyp(raw,
[]byte("3g24"), []byte("3g25"), []byte("3g26"), []byte("3g2a"),
[]byte("3g2b"), []byte("3g2c"), []byte("KDDI"),
)
// AMp4 matches an audio MP4 file.
AMp4 = ftyp(
}
// AMp4 matches an audio MP4 file.
func AMp4(raw []byte, _ uint32) bool {
return ftyp(raw,
// audio for Adobe Flash Player 9+
[]byte("F4A "), []byte("F4B "),
// Apple iTunes AAC-LC (.M4A) Audio
@@ -31,33 +40,61 @@ var (
// Nero Digital AAC Audio
[]byte("NDAS"),
)
// Mqv matches a Sony / Mobile QuickTime file.
Mqv = ftyp([]byte("mqt "))
// M4a matches an audio M4A file.
M4a = ftyp([]byte("M4A "))
// M4v matches an Apple M4V video file.
M4v = ftyp([]byte("M4V "), []byte("M4VH"), []byte("M4VP"))
// Heic matches a High Efficiency Image Coding (HEIC) file.
Heic = ftyp([]byte("heic"), []byte("heix"))
// HeicSequence matches a High Efficiency Image Coding (HEIC) file sequence.
HeicSequence = ftyp([]byte("hevc"), []byte("hevx"))
// Heif matches a High Efficiency Image File Format (HEIF) file.
Heif = ftyp([]byte("mif1"), []byte("heim"), []byte("heis"), []byte("avic"))
// HeifSequence matches a High Efficiency Image File Format (HEIF) file sequence.
HeifSequence = ftyp([]byte("msf1"), []byte("hevm"), []byte("hevs"), []byte("avcs"))
// Mj2 matches a Motion JPEG 2000 file: https://en.wikipedia.org/wiki/Motion_JPEG_2000.
Mj2 = ftyp([]byte("mj2s"), []byte("mjp2"), []byte("MFSM"), []byte("MGSV"))
// Dvb matches a Digital Video Broadcasting file: https://dvb.org.
// https://cconcolato.github.io/mp4ra/filetype.html
// https://github.com/file/file/blob/512840337ead1076519332d24fefcaa8fac36e06/magic/Magdir/animation#L135-L154
Dvb = ftyp(
}
// Mqv matches a Sony / Mobile QuickTime file.
func Mqv(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("mqt "))
}
// M4a matches an audio M4A file.
func M4a(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("M4A "))
}
// M4v matches an Apple M4V video file.
func M4v(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("M4V "), []byte("M4VH"), []byte("M4VP"))
}
// Heic matches a High Efficiency Image Coding (HEIC) file.
func Heic(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("heic"), []byte("heix"))
}
// HeicSequence matches a High Efficiency Image Coding (HEIC) file sequence.
func HeicSequence(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("hevc"), []byte("hevx"))
}
// Heif matches a High Efficiency Image File Format (HEIF) file.
func Heif(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("mif1"), []byte("heim"), []byte("heis"), []byte("avic"))
}
// HeifSequence matches a High Efficiency Image File Format (HEIF) file sequence.
func HeifSequence(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("msf1"), []byte("hevm"), []byte("hevs"), []byte("avcs"))
}
// Mj2 matches a Motion JPEG 2000 file: https://en.wikipedia.org/wiki/Motion_JPEG_2000.
func Mj2(raw []byte, _ uint32) bool {
return ftyp(raw, []byte("mj2s"), []byte("mjp2"), []byte("MFSM"), []byte("MGSV"))
}
// Dvb matches a Digital Video Broadcasting file: https://dvb.org.
// https://cconcolato.github.io/mp4ra/filetype.html
// https://github.com/file/file/blob/512840337ead1076519332d24fefcaa8fac36e06/magic/Magdir/animation#L135-L154
func Dvb(raw []byte, _ uint32) bool {
return ftyp(raw,
[]byte("dby1"), []byte("dsms"), []byte("dts1"), []byte("dts2"),
[]byte("dts3"), []byte("dxo "), []byte("dmb1"), []byte("dmpf"),
[]byte("drc1"), []byte("dv1a"), []byte("dv1b"), []byte("dv2a"),
[]byte("dv2b"), []byte("dv3a"), []byte("dv3b"), []byte("dvr1"),
[]byte("dvt1"), []byte("emsg"))
// TODO: add support for remaining video formats at ftyps.com.
)
}
// TODO: add support for remaining video formats at ftyps.com.
// QuickTime matches a QuickTime File Format file.
// https://www.loc.gov/preservation/digital/formats/fdd/fdd000052.shtml

View File

@@ -2,66 +2,127 @@ package magic
import "bytes"
var (
// Png matches a Portable Network Graphics file.
// https://www.w3.org/TR/PNG/
Png = prefix([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A})
// Apng matches an Animated Portable Network Graphics file.
// https://wiki.mozilla.org/APNG_Specification
Apng = offset([]byte("acTL"), 37)
// Jpg matches a Joint Photographic Experts Group file.
Jpg = prefix([]byte{0xFF, 0xD8, 0xFF})
// Jp2 matches a JPEG 2000 Image file (ISO 15444-1).
Jp2 = jpeg2k([]byte{0x6a, 0x70, 0x32, 0x20})
// Jpx matches a JPEG 2000 Image file (ISO 15444-2).
Jpx = jpeg2k([]byte{0x6a, 0x70, 0x78, 0x20})
// Jpm matches a JPEG 2000 Image file (ISO 15444-6).
Jpm = jpeg2k([]byte{0x6a, 0x70, 0x6D, 0x20})
// Gif matches a Graphics Interchange Format file.
Gif = prefix([]byte("GIF87a"), []byte("GIF89a"))
// Bmp matches a bitmap image file.
Bmp = prefix([]byte{0x42, 0x4D})
// Ps matches a PostScript file.
Ps = prefix([]byte("%!PS-Adobe-"))
// Psd matches a Photoshop Document file.
Psd = prefix([]byte("8BPS"))
// Ico matches an ICO file.
Ico = prefix([]byte{0x00, 0x00, 0x01, 0x00}, []byte{0x00, 0x00, 0x02, 0x00})
// Icns matches an ICNS (Apple Icon Image format) file.
Icns = prefix([]byte("icns"))
// Tiff matches a Tagged Image File Format file.
Tiff = prefix([]byte{0x49, 0x49, 0x2A, 0x00}, []byte{0x4D, 0x4D, 0x00, 0x2A})
// Bpg matches a Better Portable Graphics file.
Bpg = prefix([]byte{0x42, 0x50, 0x47, 0xFB})
// Xcf matches GIMP image data.
Xcf = prefix([]byte("gimp xcf"))
// Pat matches GIMP pattern data.
Pat = offset([]byte("GPAT"), 20)
// Gbr matches GIMP brush data.
Gbr = offset([]byte("GIMP"), 20)
// Hdr matches Radiance HDR image.
// https://web.archive.org/web/20060913152809/http://local.wasp.uwa.edu.au/~pbourke/dataformats/pic/
Hdr = prefix([]byte("#?RADIANCE\n"))
// Xpm matches X PixMap image data.
Xpm = prefix([]byte{0x2F, 0x2A, 0x20, 0x58, 0x50, 0x4D, 0x20, 0x2A, 0x2F})
// Jxs matches a JPEG XS coded image file (ISO/IEC 21122-3).
Jxs = prefix([]byte{0x00, 0x00, 0x00, 0x0C, 0x4A, 0x58, 0x53, 0x20, 0x0D, 0x0A, 0x87, 0x0A})
// Jxr matches Microsoft HD JXR photo file.
Jxr = prefix([]byte{0x49, 0x49, 0xBC, 0x01})
)
// Png matches a Portable Network Graphics file.
// https://www.w3.org/TR/PNG/
func Png(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A})
}
func jpeg2k(sig []byte) Detector {
return func(raw []byte, _ uint32) bool {
if len(raw) < 24 {
return false
}
// Apng matches an Animated Portable Network Graphics file.
// https://wiki.mozilla.org/APNG_Specification
func Apng(raw []byte, _ uint32) bool {
return offset(raw, []byte("acTL"), 37)
}
if !bytes.Equal(raw[4:8], []byte{0x6A, 0x50, 0x20, 0x20}) &&
!bytes.Equal(raw[4:8], []byte{0x6A, 0x50, 0x32, 0x20}) {
return false
}
return bytes.Equal(raw[20:24], sig)
// Jpg matches a Joint Photographic Experts Group file.
func Jpg(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0xFF, 0xD8, 0xFF})
}
// Jp2 matches a JPEG 2000 Image file (ISO 15444-1).
func Jp2(raw []byte, _ uint32) bool {
return jpeg2k(raw, []byte{0x6a, 0x70, 0x32, 0x20})
}
// Jpx matches a JPEG 2000 Image file (ISO 15444-2).
func Jpx(raw []byte, _ uint32) bool {
return jpeg2k(raw, []byte{0x6a, 0x70, 0x78, 0x20})
}
// Jpm matches a JPEG 2000 Image file (ISO 15444-6).
func Jpm(raw []byte, _ uint32) bool {
return jpeg2k(raw, []byte{0x6a, 0x70, 0x6D, 0x20})
}
// Gif matches a Graphics Interchange Format file.
func Gif(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("GIF87a")) ||
bytes.HasPrefix(raw, []byte("GIF89a"))
}
// Bmp matches a bitmap image file.
func Bmp(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x42, 0x4D})
}
// Ps matches a PostScript file.
func Ps(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("%!PS-Adobe-"))
}
// Psd matches a Photoshop Document file.
func Psd(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("8BPS"))
}
// Ico matches an ICO file.
func Ico(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x00, 0x00, 0x01, 0x00}) ||
bytes.HasPrefix(raw, []byte{0x00, 0x00, 0x02, 0x00})
}
// Icns matches an ICNS (Apple Icon Image format) file.
func Icns(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("icns"))
}
// Tiff matches a Tagged Image File Format file.
func Tiff(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x49, 0x49, 0x2A, 0x00}) ||
bytes.HasPrefix(raw, []byte{0x4D, 0x4D, 0x00, 0x2A})
}
// Bpg matches a Better Portable Graphics file.
func Bpg(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x42, 0x50, 0x47, 0xFB})
}
// Xcf matches GIMP image data.
func Xcf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("gimp xcf"))
}
// Pat matches GIMP pattern data.
func Pat(raw []byte, _ uint32) bool {
return offset(raw, []byte("GPAT"), 20)
}
// Gbr matches GIMP brush data.
func Gbr(raw []byte, _ uint32) bool {
return offset(raw, []byte("GIMP"), 20)
}
// Hdr matches Radiance HDR image.
// https://web.archive.org/web/20060913152809/http://local.wasp.uwa.edu.au/~pbourke/dataformats/pic/
func Hdr(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("#?RADIANCE\n"))
}
// Xpm matches X PixMap image data.
func Xpm(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x2F, 0x2A, 0x20, 0x58, 0x50, 0x4D, 0x20, 0x2A, 0x2F})
}
// Jxs matches a JPEG XS coded image file (ISO/IEC 21122-3).
func Jxs(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x00, 0x00, 0x00, 0x0C, 0x4A, 0x58, 0x53, 0x20, 0x0D, 0x0A, 0x87, 0x0A})
}
// Jxr matches Microsoft HD JXR photo file.
func Jxr(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x49, 0x49, 0xBC, 0x01})
}
func jpeg2k(raw []byte, sig []byte) bool {
if len(raw) < 24 {
return false
}
if !bytes.Equal(raw[4:8], []byte{0x6A, 0x50, 0x20, 0x20}) &&
!bytes.Equal(raw[4:8], []byte{0x6A, 0x50, 0x32, 0x20}) {
return false
}
return bytes.Equal(raw[20:24], sig)
}
// Webp matches a WebP file.
@@ -108,3 +169,20 @@ func Jxl(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0xFF, 0x0A}) ||
bytes.HasPrefix(raw, []byte("\x00\x00\x00\x0cJXL\x20\x0d\x0a\x87\x0a"))
}
// DXF matches Drawing Exchange Format AutoCAD file.
// There does not seem to be a clear specification and the files in the wild
// differ wildly.
// https://images.autodesk.com/adsk/files/autocad_2012_pdf_dxf-reference_enu.pdf
//
// I collected these signatures by downloading a few dozen files from
// http://cd.textfiles.com/amigaenv/DXF/OBJEKTE/ and
// https://sembiance.com/fileFormatSamples/poly/dxf/ and then
// xxd -l 16 {} | sort | uniq.
// These signatures are only for the ASCII version of DXF. There is a binary version too.
func DXF(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte(" 0\x0ASECTION\x0A")) ||
bytes.HasPrefix(raw, []byte(" 0\x0D\x0ASECTION\x0D\x0A")) ||
bytes.HasPrefix(raw, []byte("0\x0ASECTION\x0A")) ||
bytes.HasPrefix(raw, []byte("0\x0D\x0ASECTION\x0D\x0A"))
}
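The shared jpeg2k helper keys off two fixed positions: bytes 4–7 must contain the JPEG 2000 signature box type and bytes 20–23 the brand that distinguishes JP2, JPX and JPM. A sketch of a minimal 24-byte header that the Jp2 check above would accept; the layout only mirrors the two checks, it is not a complete, valid JP2 file:

```go
package main

import (
	"bytes"
	"fmt"
)

// jp2Header builds the first 24 bytes a JP2 file typically starts with:
// the signature box, then an ftyp box whose brand (bytes 20..23) is "jp2 ".
func jp2Header() []byte {
	h := make([]byte, 24)
	copy(h[0:4], []byte{0x00, 0x00, 0x00, 0x0C})  // signature box length
	copy(h[4:8], "jP  ")                          // signature box type, checked at raw[4:8]
	copy(h[8:12], []byte{0x0D, 0x0A, 0x87, 0x0A}) // signature box content
	copy(h[16:20], "ftyp")                        // file type box
	copy(h[20:24], "jp2 ")                        // brand, checked at raw[20:24]
	return h
}

func main() {
	raw := jp2Header()
	// Re-implementation of the matcher's two checks, for illustration only.
	ok := len(raw) >= 24 &&
		bytes.Equal(raw[4:8], []byte("jP  ")) &&
		bytes.Equal(raw[20:24], []byte("jp2 "))
	fmt.Println(ok) // true
}
```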

View File

@@ -3,7 +3,6 @@ package magic
import (
"bytes"
"fmt"
"github.com/gabriel-vasile/mimetype/internal/scan"
)
@@ -22,37 +21,20 @@ type (
}
)
// prefix creates a Detector which returns true if any of the provided signatures
// is the prefix of the raw input.
func prefix(sigs ...[]byte) Detector {
return func(raw []byte, limit uint32) bool {
for _, s := range sigs {
if bytes.HasPrefix(raw, s) {
return true
}
}
return false
}
}
// offset creates a Detector which returns true if the provided signature can be
// offset returns true if the provided signature can be
// found at offset in the raw input.
func offset(sig []byte, offset int) Detector {
return func(raw []byte, limit uint32) bool {
return len(raw) > offset && bytes.HasPrefix(raw[offset:], sig)
}
func offset(raw []byte, sig []byte, offset int) bool {
return len(raw) > offset && bytes.HasPrefix(raw[offset:], sig)
}
// ciPrefix is like prefix but the check is case insensitive.
func ciPrefix(sigs ...[]byte) Detector {
return func(raw []byte, limit uint32) bool {
for _, s := range sigs {
if ciCheck(s, raw) {
return true
}
func ciPrefix(raw []byte, sigs ...[]byte) bool {
for _, s := range sigs {
if ciCheck(s, raw) {
return true
}
return false
}
return false
}
func ciCheck(sig, raw []byte) bool {
if len(raw) < len(sig)+1 {
@@ -72,22 +54,18 @@ func ciCheck(sig, raw []byte) bool {
return true
}
// xml creates a Detector which returns true if any of the provided XML signatures
// matches the raw input.
func xml(sigs ...xmlSig) Detector {
return func(raw []byte, limit uint32) bool {
b := scan.Bytes(raw)
b.TrimLWS()
if len(b) == 0 {
return false
}
for _, s := range sigs {
if xmlCheck(s, b) {
return true
}
}
// xml returns true if any of the provided XML signatures matches the raw input.
func xml(b scan.Bytes, sigs ...xmlSig) bool {
b.TrimLWS()
if len(b) == 0 {
return false
}
for _, s := range sigs {
if xmlCheck(s, b) {
return true
}
}
return false
}
func xmlCheck(sig xmlSig, raw []byte) bool {
raw = raw[:min(len(raw), 512)]
@@ -103,28 +81,24 @@ func xmlCheck(sig xmlSig, raw []byte) bool {
return localNameIndex != -1 && localNameIndex < bytes.Index(raw, sig.xmlns)
}
// markup creates a Detector which returns true if any of the HTML signatures
// matches the raw input.
func markup(sigs ...[]byte) Detector {
return func(raw []byte, limit uint32) bool {
b := scan.Bytes(raw)
if bytes.HasPrefix(b, []byte{0xEF, 0xBB, 0xBF}) {
// We skip the UTF-8 BOM if present to ensure we correctly
// process any leading whitespace. The presence of the BOM
// is taken into account during charset detection in charset.go.
b.Advance(3)
}
b.TrimLWS()
if len(b) == 0 {
return false
}
for _, s := range sigs {
if markupCheck(s, b) {
return true
}
}
// markup returns true if any of the HTML signatures matches the raw input.
func markup(b scan.Bytes, sigs ...[]byte) bool {
if bytes.HasPrefix(b, []byte{0xEF, 0xBB, 0xBF}) {
// We skip the UTF-8 BOM if present to ensure we correctly
// process any leading whitespace. The presence of the BOM
// is taken into account during charset detection in charset.go.
b.Advance(3)
}
b.TrimLWS()
if len(b) == 0 {
return false
}
for _, s := range sigs {
if markupCheck(s, b) {
return true
}
}
return false
}
func markupCheck(sig, raw []byte) bool {
if len(raw) < len(sig)+1 {
@@ -149,29 +123,17 @@ func markupCheck(sig, raw []byte) bool {
return true
}
// ftyp creates a Detector which returns true if any of the FTYP signatures
// matches the raw input.
func ftyp(sigs ...[]byte) Detector {
return func(raw []byte, limit uint32) bool {
if len(raw) < 12 {
return false
}
for _, s := range sigs {
if bytes.Equal(raw[8:12], s) {
return true
}
}
// ftyp returns true if any of the FTYP signatures matches the raw input.
func ftyp(raw []byte, sigs ...[]byte) bool {
if len(raw) < 12 {
return false
}
}
func newXMLSig(localName, xmlns string) xmlSig {
ret := xmlSig{xmlns: []byte(xmlns)}
if localName != "" {
ret.localName = []byte(fmt.Sprintf("<%s", localName))
for _, s := range sigs {
if bytes.Equal(raw[8:12], s) {
return true
}
}
return ret
return false
}
// A valid shebang starts with the "#!" characters,
@@ -184,29 +146,17 @@ func newXMLSig(localName, xmlns string) xmlSig {
// #! /usr/bin/env php
//
// /usr/bin/env is the interpreter, php is the first and only argument.
func shebang(sigs ...[]byte) Detector {
return func(raw []byte, limit uint32) bool {
b := scan.Bytes(raw)
line := b.Line()
for _, s := range sigs {
if shebangCheck(s, line) {
return true
}
func shebang(b scan.Bytes, matchFlags scan.Flags, sigs ...[]byte) bool {
line := b.Line()
if len(line) < 2 || line[0] != '#' || line[1] != '!' {
return false
}
line = line[2:]
line.TrimLWS()
for _, s := range sigs {
if line.Match(s, matchFlags) != -1 {
return true
}
return false
}
}
func shebangCheck(sig []byte, raw scan.Bytes) bool {
if len(raw) < len(sig)+2 {
return false
}
if raw[0] != '#' || raw[1] != '!' {
return false
}
raw.Advance(2) // skip #! we checked above
raw.TrimLWS()
raw.TrimRWS()
return bytes.Equal(raw, sig)
return false
}
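The whole hunk documents the same refactor: matcher factories (prefix, offset, ciPrefix, ...) that returned closures are replaced by plain functions with the Detector signature. The sketch below contrasts the two styles outside the package; the Detector signature is taken from the diff, everything else is illustrative:

```go
package main

import (
	"bytes"
	"fmt"
)

// Detector mirrors the signature used throughout the magic package.
type Detector func(raw []byte, limit uint32) bool

// Old style: a factory returns a closure, so every matcher is a
// package-level variable holding a func value created at init time.
func prefix(sigs ...[]byte) Detector {
	return func(raw []byte, _ uint32) bool {
		for _, s := range sigs {
			if bytes.HasPrefix(raw, s) {
				return true
			}
		}
		return false
	}
}

var gzipOld = prefix([]byte{0x1f, 0x8b})

// New style: a plain function with the same signature, no closure and no
// init-time allocation.
func gzipNew(raw []byte, _ uint32) bool {
	return bytes.HasPrefix(raw, []byte{0x1f, 0x8b})
}

var _ Detector = gzipNew // still satisfies the Detector signature

func main() {
	sample := []byte{0x1f, 0x8b, 0x08, 0x00}
	fmt.Println(gzipOld(sample, 0), gzipNew(sample, 0)) // true true
}
```

Plain functions also show up under their own names in profiles and stack traces, which lines up with the allocation-avoidance comments elsewhere in this diff.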

View File

@@ -44,17 +44,6 @@ func Ole(raw []byte, limit uint32) bool {
return bytes.HasPrefix(raw, []byte{0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1})
}
// Aaf matches an Advanced Authoring Format file.
// See: https://pyaaf.readthedocs.io/en/latest/about.html
// See: https://en.wikipedia.org/wiki/Advanced_Authoring_Format
func Aaf(raw []byte, limit uint32) bool {
if len(raw) < 31 {
return false
}
return bytes.HasPrefix(raw[8:], []byte{0x41, 0x41, 0x46, 0x42, 0x0D, 0x00, 0x4F, 0x4D}) &&
(raw[30] == 0x09 || raw[30] == 0x0C)
}
// Doc matches a Microsoft Word 97-2003 file.
// See: https://github.com/decalage2/oletools/blob/412ee36ae45e70f42123e835871bac956d958461/oletools/common/clsid.py
func Doc(raw []byte, _ uint32) bool {
@@ -209,3 +198,14 @@ func matchOleClsid(in []byte, clsid []byte) bool {
return bytes.HasPrefix(in[clsidOffset:], clsid)
}
// WPD matches a WordPerfect document.
func WPD(raw []byte, _ uint32) bool {
if len(raw) < 10 {
return false
}
if !bytes.HasPrefix(raw, []byte("\xffWPC")) {
return false
}
return raw[8] == 1 && raw[9] == 10
}

View File

@@ -10,9 +10,9 @@ import (
"github.com/gabriel-vasile/mimetype/internal/scan"
)
var (
// HTML matches a Hypertext Markup Language file.
HTML = markup(
// HTML matches a Hypertext Markup Language file.
func HTML(raw []byte, _ uint32) bool {
return markup(raw,
[]byte("<!DOCTYPE HTML"),
[]byte("<HTML"),
[]byte("<HEAD"),
@@ -31,133 +31,254 @@ var (
[]byte("<P"),
[]byte("<!--"),
)
// XML matches an Extensible Markup Language file.
XML = markup([]byte("<?XML"))
// Owl2 matches an Owl ontology file.
Owl2 = xml(newXMLSig("Ontology", `xmlns="http://www.w3.org/2002/07/owl#"`))
// Rss matches a Rich Site Summary file.
Rss = xml(newXMLSig("rss", ""))
// Atom matches an Atom Syndication Format file.
Atom = xml(newXMLSig("feed", `xmlns="http://www.w3.org/2005/Atom"`))
// Kml matches a Keyhole Markup Language file.
Kml = xml(
newXMLSig("kml", `xmlns="http://www.opengis.net/kml/2.2"`),
newXMLSig("kml", `xmlns="http://earth.google.com/kml/2.0"`),
newXMLSig("kml", `xmlns="http://earth.google.com/kml/2.1"`),
newXMLSig("kml", `xmlns="http://earth.google.com/kml/2.2"`),
}
// XML matches an Extensible Markup Language file.
func XML(raw []byte, _ uint32) bool {
return markup(raw, []byte("<?XML"))
}
// Owl2 matches an Owl ontology file.
func Owl2(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<Ontology"), []byte(`xmlns="http://www.w3.org/2002/07/owl#"`)},
)
// Xliff matches a XML Localization Interchange File Format file.
Xliff = xml(newXMLSig("xliff", `xmlns="urn:oasis:names:tc:xliff:document:1.2"`))
// Collada matches a COLLAborative Design Activity file.
Collada = xml(newXMLSig("COLLADA", `xmlns="http://www.collada.org/2005/11/COLLADASchema"`))
// Gml matches a Geography Markup Language file.
Gml = xml(
newXMLSig("", `xmlns:gml="http://www.opengis.net/gml"`),
newXMLSig("", `xmlns:gml="http://www.opengis.net/gml/3.2"`),
newXMLSig("", `xmlns:gml="http://www.opengis.net/gml/3.3/exr"`),
}
// Rss matches a Rich Site Summary file.
func Rss(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<rss"), []byte{}},
)
// Gpx matches a GPS Exchange Format file.
Gpx = xml(newXMLSig("gpx", `xmlns="http://www.topografix.com/GPX/1/1"`))
// Tcx matches a Training Center XML file.
Tcx = xml(newXMLSig("TrainingCenterDatabase", `xmlns="http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2"`))
// X3d matches an Extensible 3D Graphics file.
X3d = xml(newXMLSig("X3D", `xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance"`))
// Amf matches an Additive Manufacturing XML file.
Amf = xml(newXMLSig("amf", ""))
// Threemf matches a 3D Manufacturing Format file.
Threemf = xml(newXMLSig("model", `xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02"`))
// Xfdf matches a XML Forms Data Format file.
Xfdf = xml(newXMLSig("xfdf", `xmlns="http://ns.adobe.com/xfdf/"`))
// VCard matches a Virtual Contact File.
VCard = ciPrefix([]byte("BEGIN:VCARD\n"), []byte("BEGIN:VCARD\r\n"))
// ICalendar matches a iCalendar file.
ICalendar = ciPrefix([]byte("BEGIN:VCALENDAR\n"), []byte("BEGIN:VCALENDAR\r\n"))
phpPageF = ciPrefix(
}
// Atom matches an Atom Syndication Format file.
func Atom(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<feed"), []byte(`xmlns="http://www.w3.org/2005/Atom"`)},
)
}
// Kml matches a Keyhole Markup Language file.
func Kml(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<kml"), []byte(`xmlns="http://www.opengis.net/kml/2.2"`)},
xmlSig{[]byte("<kml"), []byte(`xmlns="http://earth.google.com/kml/2.0"`)},
xmlSig{[]byte("<kml"), []byte(`xmlns="http://earth.google.com/kml/2.1"`)},
xmlSig{[]byte("<kml"), []byte(`xmlns="http://earth.google.com/kml/2.2"`)},
)
}
// Xliff matches a XML Localization Interchange File Format file.
func Xliff(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<xliff"), []byte(`xmlns="urn:oasis:names:tc:xliff:document:1.2"`)},
)
}
// Collada matches a COLLAborative Design Activity file.
func Collada(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<COLLADA"), []byte(`xmlns="http://www.collada.org/2005/11/COLLADASchema"`)},
)
}
// Gml matches a Geography Markup Language file.
func Gml(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte{}, []byte(`xmlns:gml="http://www.opengis.net/gml"`)},
xmlSig{[]byte{}, []byte(`xmlns:gml="http://www.opengis.net/gml/3.2"`)},
xmlSig{[]byte{}, []byte(`xmlns:gml="http://www.opengis.net/gml/3.3/exr"`)},
)
}
// Gpx matches a GPS Exchange Format file.
func Gpx(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<gpx"), []byte(`xmlns="http://www.topografix.com/GPX/1/1"`)},
)
}
// Tcx matches a Training Center XML file.
func Tcx(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<TrainingCenterDatabase"), []byte(`xmlns="http://www.garmin.com/xmlschemas/TrainingCenterDatabase/v2"`)},
)
}
// X3d matches an Extensible 3D Graphics file.
func X3d(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<X3D"), []byte(`xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance"`)},
)
}
// Amf matches an Additive Manufacturing XML file.
func Amf(raw []byte, _ uint32) bool {
return xml(raw, xmlSig{[]byte("<amf"), []byte{}})
}
// Threemf matches a 3D Manufacturing Format file.
func Threemf(raw []byte, _ uint32) bool {
return xml(raw,
xmlSig{[]byte("<model"), []byte(`xmlns="http://schemas.microsoft.com/3dmanufacturing/core/2015/02"`)},
)
}
// Xfdf matches a XML Forms Data Format file.
func Xfdf(raw []byte, _ uint32) bool {
return xml(raw, xmlSig{[]byte("<xfdf"), []byte(`xmlns="http://ns.adobe.com/xfdf/"`)})
}
// VCard matches a Virtual Contact File.
func VCard(raw []byte, _ uint32) bool {
return ciPrefix(raw, []byte("BEGIN:VCARD\n"), []byte("BEGIN:VCARD\r\n"))
}
// ICalendar matches a iCalendar file.
func ICalendar(raw []byte, _ uint32) bool {
return ciPrefix(raw, []byte("BEGIN:VCALENDAR\n"), []byte("BEGIN:VCALENDAR\r\n"))
}
func phpPageF(raw []byte, _ uint32) bool {
return ciPrefix(raw,
[]byte("<?PHP"),
[]byte("<?\n"),
[]byte("<?\r"),
[]byte("<? "),
)
phpScriptF = shebang(
}
func phpScriptF(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS,
[]byte("/usr/local/bin/php"),
[]byte("/usr/bin/php"),
[]byte("/usr/bin/env php"),
[]byte("/usr/bin/env -S php"),
)
// Js matches a Javascript file.
Js = shebang(
}
// Js matches a Javascript file.
func Js(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS,
[]byte("/bin/node"),
[]byte("/usr/bin/node"),
[]byte("/bin/nodejs"),
[]byte("/usr/bin/nodejs"),
[]byte("/usr/bin/env node"),
[]byte("/usr/bin/env -S node"),
[]byte("/usr/bin/env nodejs"),
[]byte("/usr/bin/env -S nodejs"),
)
// Lua matches a Lua programming language file.
Lua = shebang(
}
// Lua matches a Lua programming language file.
func Lua(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS|scan.FullWord,
[]byte("/usr/bin/lua"),
[]byte("/usr/local/bin/lua"),
[]byte("/usr/bin/env lua"),
[]byte("/usr/bin/env -S lua"),
)
// Perl matches a Perl programming language file.
Perl = shebang(
}
// Perl matches a Perl programming language file.
func Perl(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS|scan.FullWord,
[]byte("/usr/bin/perl"),
[]byte("/usr/bin/env perl"),
[]byte("/usr/bin/env -S perl"),
)
// Python matches a Python programming language file.
Python = shebang(
}
// Python matches a Python programming language file.
func Python(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS,
[]byte("/usr/bin/python"),
[]byte("/usr/local/bin/python"),
[]byte("/usr/bin/env python"),
[]byte("/usr/bin/env -S python"),
[]byte("/usr/bin/python2"),
[]byte("/usr/local/bin/python2"),
[]byte("/usr/bin/env python2"),
[]byte("/usr/bin/env -S python2"),
[]byte("/usr/bin/python3"),
[]byte("/usr/local/bin/python3"),
[]byte("/usr/bin/env python3"),
[]byte("/usr/bin/env -S python3"),
)
// Ruby matches a Ruby programming language file.
Ruby = shebang(
}
// Ruby matches a Ruby programming language file.
func Ruby(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS,
[]byte("/usr/bin/ruby"),
[]byte("/usr/local/bin/ruby"),
[]byte("/usr/bin/env ruby"),
[]byte("/usr/bin/env -S ruby"),
)
// Tcl matches a Tcl programming language file.
Tcl = shebang(
}
// Tcl matches a Tcl programming language file.
func Tcl(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS,
[]byte("/usr/bin/tcl"),
[]byte("/usr/local/bin/tcl"),
[]byte("/usr/bin/env tcl"),
[]byte("/usr/bin/env -S tcl"),
[]byte("/usr/bin/tclsh"),
[]byte("/usr/local/bin/tclsh"),
[]byte("/usr/bin/env tclsh"),
[]byte("/usr/bin/env -S tclsh"),
[]byte("/usr/bin/wish"),
[]byte("/usr/local/bin/wish"),
[]byte("/usr/bin/env wish"),
[]byte("/usr/bin/env -S wish"),
)
// Rtf matches a Rich Text Format file.
Rtf = prefix([]byte("{\\rtf"))
// Shell matches a shell script file.
Shell = shebang(
}
// Rtf matches a Rich Text Format file.
func Rtf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("{\\rtf"))
}
// Shell matches a shell script file.
func Shell(raw []byte, _ uint32) bool {
return shebang(raw,
scan.CompactWS|scan.FullWord,
[]byte("/bin/sh"),
[]byte("/bin/bash"),
[]byte("/usr/local/bin/bash"),
[]byte("/usr/bin/env bash"),
[]byte("/usr/bin/env -S bash"),
[]byte("/bin/csh"),
[]byte("/usr/local/bin/csh"),
[]byte("/usr/bin/env csh"),
[]byte("/usr/bin/env -S csh"),
[]byte("/bin/dash"),
[]byte("/usr/local/bin/dash"),
[]byte("/usr/bin/env dash"),
[]byte("/usr/bin/env -S dash"),
[]byte("/bin/ksh"),
[]byte("/usr/local/bin/ksh"),
[]byte("/usr/bin/env ksh"),
[]byte("/usr/bin/env -S ksh"),
[]byte("/bin/tcsh"),
[]byte("/usr/local/bin/tcsh"),
[]byte("/usr/bin/env tcsh"),
[]byte("/usr/bin/env -S tcsh"),
[]byte("/bin/zsh"),
[]byte("/usr/local/bin/zsh"),
[]byte("/usr/bin/env zsh"),
[]byte("/usr/bin/env -S zsh"),
)
)
}
// Text matches a plain text file.
//
@@ -183,10 +304,14 @@ func Text(raw []byte, _ uint32) bool {
// XHTML matches an XHTML file. This check depends on the XML check to have passed.
func XHTML(raw []byte, limit uint32) bool {
raw = raw[:min(len(raw), 4096)]
raw = raw[:min(len(raw), 1024)]
b := scan.Bytes(raw)
return b.Search([]byte("<!DOCTYPE HTML"), scan.CompactWS|scan.IgnoreCase) != -1 ||
b.Search([]byte("<HTML XMLNS="), scan.CompactWS|scan.IgnoreCase) != -1
i, _ := b.Search([]byte("<!DOCTYPE HTML"), scan.CompactWS|scan.IgnoreCase)
if i != -1 {
return true
}
i, _ = b.Search([]byte("<HTML XMLNS="), scan.CompactWS|scan.IgnoreCase)
return i != -1
}
// Php matches a PHP: Hypertext Preprocessor file.
@@ -294,11 +419,12 @@ func svgWithoutXMLDeclaration(s scan.Bytes) bool {
return false
}
targetName, targetVal := "xmlns", "http://www.w3.org/2000/svg"
aName, aVal, hasMore := "", "", true
targetName, targetVal := []byte("xmlns"), []byte("http://www.w3.org/2000/svg")
var aName, aVal []byte
hasMore := true
for hasMore {
aName, aVal, hasMore = mkup.GetAnAttribute(&s)
if aName == targetName && aVal == targetVal {
if bytes.Equal(aName, targetName) && bytes.Equal(aVal, targetVal) {
return true
}
if !hasMore {
@@ -325,10 +451,11 @@ func svgWithXMLDeclaration(s scan.Bytes) bool {
// version is a required attribute for XML.
hasVersion := false
aName, hasMore := "", true
var aName []byte
hasMore := true
for hasMore {
aName, _, hasMore = mkup.GetAnAttribute(&s)
if aName == "version" {
if bytes.Equal(aName, []byte("version")) {
hasVersion = true
break
}

View File

@@ -4,17 +4,23 @@ import (
"bytes"
)
var (
// Flv matches a Flash video file.
Flv = prefix([]byte("\x46\x4C\x56\x01"))
// Asf matches an Advanced Systems Format file.
Asf = prefix([]byte{
// Flv matches a Flash video file.
func Flv(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte("\x46\x4C\x56\x01"))
}
// Asf matches an Advanced Systems Format file.
func Asf(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{
0x30, 0x26, 0xB2, 0x75, 0x8E, 0x66, 0xCF, 0x11,
0xA6, 0xD9, 0x00, 0xAA, 0x00, 0x62, 0xCE, 0x6C,
})
// Rmvb matches a RealMedia Variable Bitrate file.
Rmvb = prefix([]byte{0x2E, 0x52, 0x4D, 0x46})
)
}
// Rmvb matches a RealMedia Variable Bitrate file.
func Rmvb(raw []byte, _ uint32) bool {
return bytes.HasPrefix(raw, []byte{0x2E, 0x52, 0x4D, 0x46})
}
// WebM matches a WebM file.
func WebM(raw []byte, limit uint32) bool {

View File

@@ -6,32 +6,65 @@ import (
"github.com/gabriel-vasile/mimetype/internal/scan"
)
var (
// Odt matches an OpenDocument Text file.
Odt = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.text"), 30)
// Ott matches an OpenDocument Text Template file.
Ott = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.text-template"), 30)
// Ods matches an OpenDocument Spreadsheet file.
Ods = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.spreadsheet"), 30)
// Ots matches an OpenDocument Spreadsheet Template file.
Ots = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.spreadsheet-template"), 30)
// Odp matches an OpenDocument Presentation file.
Odp = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.presentation"), 30)
// Otp matches an OpenDocument Presentation Template file.
Otp = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.presentation-template"), 30)
// Odg matches an OpenDocument Drawing file.
Odg = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.graphics"), 30)
// Otg matches an OpenDocument Drawing Template file.
Otg = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.graphics-template"), 30)
// Odf matches an OpenDocument Formula file.
Odf = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.formula"), 30)
// Odc matches an OpenDocument Chart file.
Odc = offset([]byte("mimetypeapplication/vnd.oasis.opendocument.chart"), 30)
// Epub matches an EPUB file.
Epub = offset([]byte("mimetypeapplication/epub+zip"), 30)
// Sxc matches an OpenOffice Spreadsheet file.
Sxc = offset([]byte("mimetypeapplication/vnd.sun.xml.calc"), 30)
)
// Odt matches an OpenDocument Text file.
func Odt(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.text"), 30)
}
// Ott matches an OpenDocument Text Template file.
func Ott(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.text-template"), 30)
}
// Ods matches an OpenDocument Spreadsheet file.
func Ods(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.spreadsheet"), 30)
}
// Ots matches an OpenDocument Spreadsheet Template file.
func Ots(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.spreadsheet-template"), 30)
}
// Odp matches an OpenDocument Presentation file.
func Odp(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.presentation"), 30)
}
// Otp matches an OpenDocument Presentation Template file.
func Otp(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.presentation-template"), 30)
}
// Odg matches an OpenDocument Drawing file.
func Odg(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.graphics"), 30)
}
// Otg matches an OpenDocument Drawing Template file.
func Otg(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.graphics-template"), 30)
}
// Odf matches an OpenDocument Formula file.
func Odf(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.formula"), 30)
}
// Odc matches an OpenDocument Chart file.
func Odc(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.oasis.opendocument.chart"), 30)
}
// Epub matches an EPUB file.
func Epub(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/epub+zip"), 30)
}
// Sxc matches an OpenOffice Spreadsheet file.
func Sxc(raw []byte, _ uint32) bool {
return offset(raw, []byte("mimetypeapplication/vnd.sun.xml.calc"), 30)
}
// Zip matches a zip archive.
func Zip(raw []byte, limit uint32) bool {
@@ -134,11 +167,11 @@ func msoxml(raw scan.Bytes, searchFor zipEntries, stopAfter int) bool {
// If the first is not one of the next usually expected entries,
// then abort this check.
if i == 0 {
if !bytes.Equal(f, []byte("[Content_Types].xml")) &&
!bytes.Equal(f, []byte("_rels/.rels")) &&
!bytes.Equal(f, []byte("docProps")) &&
!bytes.Equal(f, []byte("customXml")) &&
!bytes.Equal(f, []byte("[trash]")) {
if !bytes.Equal(f, []byte("[Content_Types].xml")) && // this is a file
!bytes.HasPrefix(f, []byte("_rels/")) && // these are directories
!bytes.HasPrefix(f, []byte("docProps/")) &&
!bytes.HasPrefix(f, []byte("customXml/")) &&
!bytes.HasPrefix(f, []byte("[trash]/")) {
return false
}
}

View File

@@ -8,46 +8,48 @@ import (
"github.com/gabriel-vasile/mimetype/internal/scan"
)
func GetAnAttribute(s *scan.Bytes) (name, val string, hasMore bool) {
// GetAnAttribute assumes we passed over an SGML tag and extracts first
// attribute and its value.
//
// Initially, this code existed inside charset/charset.go, because it was part of
// implementing the https://html.spec.whatwg.org/multipage/parsing.html#prescan-a-byte-stream-to-determine-its-encoding
// algorithm. But because extracting an attribute from a tag is the same for
// both HTML and XML, the code was moved here.
func GetAnAttribute(s *scan.Bytes) (name, val []byte, hasMore bool) {
for scan.ByteIsWS(s.Peek()) || s.Peek() == '/' {
s.Advance(1)
}
if s.Peek() == '>' {
return "", "", false
return nil, nil, false
}
// Allocate 10 to avoid resizes.
// Attribute names and values are contiguous slices of bytes in the input,
// so we can avoid allocating and instead return sub-slices of the input.
nameB := make([]byte, 0, 10)
origS, end := *s, 0
// step 4 and 5
for {
// bap means byte at position in the specification.
bap := s.Pop()
if bap == 0 {
return "", "", false
return nil, nil, false
}
if bap == '=' && len(nameB) > 0 {
if bap == '=' && end > 0 {
val, hasMore := getAValue(s)
return string(nameB), string(val), hasMore
return origS[:end], val, hasMore
} else if scan.ByteIsWS(bap) {
for scan.ByteIsWS(s.Peek()) {
s.Advance(1)
}
if s.Peek() != '=' {
return string(nameB), "", true
return origS[:end], nil, true
}
s.Advance(1)
for scan.ByteIsWS(s.Peek()) {
s.Advance(1)
}
val, hasMore := getAValue(s)
return string(nameB), string(val), hasMore
return origS[:end], val, hasMore
} else if bap == '/' || bap == '>' {
return string(nameB), "", false
} else if bap >= 'A' && bap <= 'Z' {
nameB = append(nameB, bap+0x20)
} else {
nameB = append(nameB, bap)
return origS[:end], nil, false
} else { // for any ASCII, non-ASCII, just advance
end++
}
}
}
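A short illustrative snippet of how the new []byte-returning signature is consumed by callers elsewhere in this diff (scan and the markup package are internal, so this is a sketch rather than public API; it mirrors the svgWithoutXMLDeclaration caller shown earlier):

```go
// Sketch only: attribute extraction over a tag body.
s := scan.Bytes([]byte(` xmlns="http://www.w3.org/2000/svg">`))
name, val, hasMore := mkup.GetAnAttribute(&s)
// name and val are sub-slices of the input, so callers now compare with
// bytes.Equal instead of string equality.
if bytes.Equal(name, []byte("xmlns")) && bytes.Equal(val, []byte("http://www.w3.org/2000/svg")) {
	// matched the SVG namespace attribute
}
_ = hasMore
```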

View File

@@ -138,46 +138,85 @@ func (b *Bytes) Uint16() (uint16, bool) {
return v, true
}
type Flags int
const (
CompactWS = 1 << iota
// CompactWS makes one whitespace character in the pattern match one or more whitespace characters in the input.
CompactWS Flags = 1 << iota
// IgnoreCase will match lower case from pattern with lower case from input.
// IgnoreCase will match upper case from pattern with both lower and upper case from input.
// This flag is not really well named,
IgnoreCase
// FullWord ensures the match ends at a full word (it is followed by whitespace or the end of input).
FullWord
)
// Search for occurrences of pattern p inside b at any index.
func (b Bytes) Search(p []byte, flags int) int {
// It returns the index where p was found in b and how many bytes were needed
// for matching the pattern.
func (b Bytes) Search(p []byte, flags Flags) (i int, l int) {
lb, lp := len(b), len(p)
if lp == 0 {
return 0, 0
}
if lb == 0 {
return -1, 0
}
if flags == 0 {
return bytes.Index(b, p)
if i = bytes.Index(b, p); i == -1 {
return -1, 0
} else {
return i, lp
}
}
lb, lp := len(b), len(p)
for i := range b {
if lb-i < lp {
return -1
return -1, 0
}
if b[i:].Match(p, flags) {
return i
if l = b[i:].Match(p, flags); l != -1 {
return i, l
}
}
return 0
return -1, 0
}
// Match pattern p at index 0 of b.
func (b Bytes) Match(p []byte, flags int) bool {
// Match returns how many bytes were needed to match pattern p.
// It returns -1 if p does not match b.
func (b Bytes) Match(p []byte, flags Flags) int {
l := len(b)
if len(p) == 0 {
return 0
}
if l == 0 {
return -1
}
// If no flags, or scanning for full word at the end of pattern then
// do a fast HasPrefix check.
// For other flags it's not possible to use HasPrefix.
if flags == 0 || flags&FullWord > 0 {
if bytes.HasPrefix(b, p) {
b = b[len(p):]
p = p[len(p):]
goto out
}
return -1
}
for len(b) > 0 {
// If we finished all we we're looking for from p.
// If we finished all we were looking for from p.
if len(p) == 0 {
return true
goto out
}
if flags&IgnoreCase > 0 && isUpper(p[0]) {
if upper(b[0]) != p[0] {
return false
return -1
}
b, p = b[1:], p[1:]
} else if flags&CompactWS > 0 && ByteIsWS(p[0]) {
p = p[1:]
if !ByteIsWS(b[0]) {
return false
return -1
}
b = b[1:]
if !ByteIsWS(p[0]) {
@@ -185,12 +224,22 @@ func (b Bytes) Match(p []byte, flags int) bool {
}
} else {
if b[0] != p[0] {
return false
return -1
}
b, p = b[1:], p[1:]
}
}
return true
out:
// If p still has leftover characters, it means it didn't fully match b.
if len(p) > 0 {
return -1
}
if flags&FullWord > 0 {
if len(b) > 0 && !ByteIsWS(b[0]) {
return -1
}
}
return l - len(b)
}
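To make the flag semantics concrete, a small illustrative sketch (scan is an internal package, so this is not callable from outside the module; the behavior is what the comments above describe):

```go
b := scan.Bytes([]byte("<!doctype   html>"))
// CompactWS lets the single space in the pattern match the whole run of
// spaces in the input; IgnoreCase lets upper-case pattern bytes match
// either case in the input.
i, l := b.Search([]byte("<!DOCTYPE HTML"), scan.CompactWS|scan.IgnoreCase)
// i is the match index (0 here) and l counts the input bytes consumed,
// which can exceed len(pattern) when a whitespace run is compacted.
_, _ = i, l
```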
func isUpper(c byte) bool {

View File

@@ -2,6 +2,7 @@ package mimetype
import (
"mime"
"strings"
"github.com/gabriel-vasile/mimetype/internal/charset"
"github.com/gabriel-vasile/mimetype/internal/magic"
@@ -109,7 +110,7 @@ func (m *MIME) match(in []byte, readLimit uint32) *MIME {
// Limit the number of bytes searched for to 1024.
charset = f(in[:min(len(in), 1024)])
}
if m == root {
if m == root || charset == "" {
return m
}
@@ -126,6 +127,27 @@ func (m *MIME) flatten() []*MIME {
return out
}
// hierarchy returns an easy to read list of ancestors for m.
// For example, application/json would return json>txt>root.
func (m *MIME) hierarchy() string {
h := ""
for m := m; m != nil; m = m.Parent() {
e := strings.TrimPrefix(m.Extension(), ".")
if e == "" {
// There are some MIME without extensions. When generating the hierarchy,
// it would be confusing to use empty string as extension.
// Use the subtype instead; ex: application/x-executable -> x-executable.
e = strings.Split(m.String(), "/")[1]
if m.Is("application/octet-stream") {
// for octet-stream use root, because it's short and used in many places
e = "root"
}
}
h += ">" + e
}
return strings.TrimPrefix(h, ">")
}
// clone creates a new MIME with the provided optional MIME parameters.
func (m *MIME) clone(charset string) *MIME {
clonedMIME := m.mime
@@ -155,7 +177,10 @@ func (m *MIME) cloneHierarchy(charset string) *MIME {
}
func (m *MIME) lookup(mime string) *MIME {
for _, n := range append(m.aliases, m.mime) {
if mime == m.mime {
return m
}
for _, n := range m.aliases {
if n == mime {
return m
}

View File

@@ -12,7 +12,7 @@ import (
"sync/atomic"
)
var defaultLimit uint32 = 3072
const defaultLimit uint32 = 3072
// readLimit is the maximum number of bytes from the input used when detecting.
var readLimit uint32 = defaultLimit
@@ -112,15 +112,18 @@ func SetLimit(limit uint32) {
}
// Extend adds detection for other file formats.
// It is equivalent to calling Extend() on the root mime type "application/octet-stream".
// It is equivalent to calling Extend() on the root MIME type "application/octet-stream".
func Extend(detector func(raw []byte, limit uint32) bool, mime, extension string, aliases ...string) {
root.Extend(detector, mime, extension, aliases...)
}
// Lookup finds a MIME object by its string representation.
// The representation can be the main mime type, or any of its aliases.
func Lookup(mime string) *MIME {
// The representation can be the main MIME type, or any of its aliases.
func Lookup(m string) *MIME {
// We store the MIME types without optional params, so
// perform parsing to extract the target MIME type without optional params.
m, _, _ = mime.ParseMediaType(m)
mu.RLock()
defer mu.RUnlock()
return root.lookup(mime)
return root.lookup(m)
}
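As a usage sketch of the two exported entry points touched here (Lookup now strips optional parameters via mime.ParseMediaType before resolving; the "application/x-foo" registration below is purely hypothetical):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	// Optional parameters such as charset are now ignored when resolving.
	if m := mimetype.Lookup("application/json; charset=utf-8"); m != nil {
		fmt.Println(m.String(), m.Extension()) // application/json .json
	}

	// Register a custom detector; hypothetical format starting with "FOO!".
	mimetype.Extend(func(raw []byte, limit uint32) bool {
		return bytes.HasPrefix(raw, []byte("FOO!"))
	}, "application/x-foo", ".foo")
	// Detect should now be able to report application/x-foo for matching content.
	fmt.Println(mimetype.Detect([]byte("FOO! payload")).String())
}
```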

View File

@@ -1,196 +1,197 @@
## 191 Supported MIME types
## 192 Supported MIME types
This file is automatically generated when running tests. Do not edit manually.
Extension | MIME type | Aliases
--------- | --------- | -------
**n/a** | application/octet-stream | -
**.xpm** | image/x-xpixmap | -
**.7z** | application/x-7z-compressed | -
**.zip** | application/zip | application/x-zip, application/x-zip-compressed
**.docx** | application/vnd.openxmlformats-officedocument.wordprocessingml.document | -
**.pptx** | application/vnd.openxmlformats-officedocument.presentationml.presentation | -
**.xlsx** | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | -
**.epub** | application/epub+zip | -
**.apk** | application/vnd.android.package-archive | -
**.jar** | application/java-archive | application/jar, application/jar-archive, application/x-java-archive
**.odt** | application/vnd.oasis.opendocument.text | application/x-vnd.oasis.opendocument.text
**.ott** | application/vnd.oasis.opendocument.text-template | application/x-vnd.oasis.opendocument.text-template
**.ods** | application/vnd.oasis.opendocument.spreadsheet | application/x-vnd.oasis.opendocument.spreadsheet
**.ots** | application/vnd.oasis.opendocument.spreadsheet-template | application/x-vnd.oasis.opendocument.spreadsheet-template
**.odp** | application/vnd.oasis.opendocument.presentation | application/x-vnd.oasis.opendocument.presentation
**.otp** | application/vnd.oasis.opendocument.presentation-template | application/x-vnd.oasis.opendocument.presentation-template
**.odg** | application/vnd.oasis.opendocument.graphics | application/x-vnd.oasis.opendocument.graphics
**.otg** | application/vnd.oasis.opendocument.graphics-template | application/x-vnd.oasis.opendocument.graphics-template
**.odf** | application/vnd.oasis.opendocument.formula | application/x-vnd.oasis.opendocument.formula
**.odc** | application/vnd.oasis.opendocument.chart | application/x-vnd.oasis.opendocument.chart
**.sxc** | application/vnd.sun.xml.calc | -
**.kmz** | application/vnd.google-earth.kmz | -
**.vsdx** | application/vnd.ms-visio.drawing.main+xml | -
**.pdf** | application/pdf | application/x-pdf
**.fdf** | application/vnd.fdf | -
**n/a** | application/x-ole-storage | -
**.msi** | application/x-ms-installer | application/x-windows-installer, application/x-msi
**.aaf** | application/octet-stream | -
**.msg** | application/vnd.ms-outlook | -
**.xls** | application/vnd.ms-excel | application/msexcel
**.pub** | application/vnd.ms-publisher | -
**.ppt** | application/vnd.ms-powerpoint | application/mspowerpoint
**.doc** | application/msword | application/vnd.ms-word
**.ps** | application/postscript | -
**.psd** | image/vnd.adobe.photoshop | image/x-psd, application/photoshop
**.p7s** | application/pkcs7-signature | -
**.ogg** | application/ogg | application/x-ogg
**.oga** | audio/ogg | -
**.ogv** | video/ogg | -
**.png** | image/png | -
**.png** | image/vnd.mozilla.apng | -
**.jpg** | image/jpeg | -
**.jxl** | image/jxl | -
**.jp2** | image/jp2 | -
**.jpf** | image/jpx | -
**.jpm** | image/jpm | video/jpm
**.jxs** | image/jxs | -
**.gif** | image/gif | -
**.webp** | image/webp | -
**.exe** | application/vnd.microsoft.portable-executable | -
**n/a** | application/x-elf | -
**n/a** | application/x-object | -
**n/a** | application/x-executable | -
**.so** | application/x-sharedlib | -
**n/a** | application/x-coredump | -
**.a** | application/x-archive | application/x-unix-archive
**.deb** | application/vnd.debian.binary-package | -
**.tar** | application/x-tar | -
**.xar** | application/x-xar | -
**.bz2** | application/x-bzip2 | -
**.fits** | application/fits | image/fits
**.tiff** | image/tiff | -
**.bmp** | image/bmp | image/x-bmp, image/x-ms-bmp
**.123** | application/vnd.lotus-1-2-3 | -
**.ico** | image/x-icon | -
**.mp3** | audio/mpeg | audio/x-mpeg, audio/mp3
**.flac** | audio/flac | -
**.midi** | audio/midi | audio/mid, audio/sp-midi, audio/x-mid, audio/x-midi
**.ape** | audio/ape | -
**.mpc** | audio/musepack | -
**.amr** | audio/amr | audio/amr-nb
**.wav** | audio/wav | audio/x-wav, audio/vnd.wave, audio/wave
**.aiff** | audio/aiff | audio/x-aiff
**.au** | audio/basic | -
**.mpeg** | video/mpeg | -
**.mov** | video/quicktime | -
**.mp4** | video/mp4 | -
**.avif** | image/avif | -
**.3gp** | video/3gpp | video/3gp, audio/3gpp
**.3g2** | video/3gpp2 | video/3g2, audio/3gpp2
**.mp4** | audio/mp4 | audio/x-mp4a
**.mqv** | video/quicktime | -
**.m4a** | audio/x-m4a | -
**.m4v** | video/x-m4v | -
**.heic** | image/heic | -
**.heic** | image/heic-sequence | -
**.heif** | image/heif | -
**.heif** | image/heif-sequence | -
**.mj2** | video/mj2 | -
**.dvb** | video/vnd.dvb.file | -
**.webm** | video/webm | audio/webm
**.avi** | video/x-msvideo | video/avi, video/msvideo
**.flv** | video/x-flv | -
**.mkv** | video/x-matroska | -
**.asf** | video/x-ms-asf | video/asf, video/x-ms-wmv
**.aac** | audio/aac | -
**.voc** | audio/x-unknown | -
**.m3u** | application/vnd.apple.mpegurl | audio/mpegurl
**.rmvb** | application/vnd.rn-realmedia-vbr | -
**.gz** | application/gzip | application/x-gzip, application/x-gunzip, application/gzipped, application/gzip-compressed, application/x-gzip-compressed, gzip/document
**.class** | application/x-java-applet | -
**.swf** | application/x-shockwave-flash | -
**.crx** | application/x-chrome-extension | -
**.ttf** | font/ttf | font/sfnt, application/x-font-ttf, application/font-sfnt
**.woff** | font/woff | -
**.woff2** | font/woff2 | -
**.otf** | font/otf | -
**.ttc** | font/collection | -
**.eot** | application/vnd.ms-fontobject | -
**.wasm** | application/wasm | -
**.shx** | application/vnd.shx | -
**.shp** | application/vnd.shp | -
**.dbf** | application/x-dbf | -
**.dcm** | application/dicom | -
**.rar** | application/x-rar-compressed | application/x-rar
**.djvu** | image/vnd.djvu | -
**.mobi** | application/x-mobipocket-ebook | -
**.lit** | application/x-ms-reader | -
**.bpg** | image/bpg | -
**.cbor** | application/cbor | -
**.sqlite** | application/vnd.sqlite3 | application/x-sqlite3
**.dwg** | image/vnd.dwg | image/x-dwg, application/acad, application/x-acad, application/autocad_dwg, application/dwg, application/x-dwg, application/x-autocad, drawing/dwg
**.nes** | application/vnd.nintendo.snes.rom | -
**.lnk** | application/x-ms-shortcut | -
**.macho** | application/x-mach-binary | -
**.qcp** | audio/qcelp | -
**.icns** | image/x-icns | -
**.hdr** | image/vnd.radiance | -
**.mrc** | application/marc | -
**.mdb** | application/x-msaccess | -
**.accdb** | application/x-msaccess | -
**.zst** | application/zstd | -
**.cab** | application/vnd.ms-cab-compressed | -
**.rpm** | application/x-rpm | -
**.xz** | application/x-xz | -
**.lz** | application/lzip | application/x-lzip
**.torrent** | application/x-bittorrent | -
**.cpio** | application/x-cpio | -
**n/a** | application/tzif | -
**.xcf** | image/x-xcf | -
**.pat** | image/x-gimp-pat | -
**.gbr** | image/x-gimp-gbr | -
**.glb** | model/gltf-binary | -
**.cab** | application/x-installshield | -
**.jxr** | image/jxr | image/vnd.ms-photo
**.parquet** | application/vnd.apache.parquet | application/x-parquet
**.one** | application/onenote | -
**.chm** | application/vnd.ms-htmlhelp | -
**.txt** | text/plain | -
**.svg** | image/svg+xml | -
**.html** | text/html | -
**.xml** | text/xml | application/xml
**.rss** | application/rss+xml | text/rss
**.atom** | application/atom+xml | -
**.x3d** | model/x3d+xml | -
**.kml** | application/vnd.google-earth.kml+xml | -
**.xlf** | application/x-xliff+xml | -
**.dae** | model/vnd.collada+xml | -
**.gml** | application/gml+xml | -
**.gpx** | application/gpx+xml | -
**.tcx** | application/vnd.garmin.tcx+xml | -
**.amf** | application/x-amf | -
**.3mf** | application/vnd.ms-package.3dmanufacturing-3dmodel+xml | -
**.xfdf** | application/vnd.adobe.xfdf | -
**.owl** | application/owl+xml | -
**.html** | application/xhtml+xml | -
**.php** | text/x-php | -
**.js** | text/javascript | application/x-javascript, application/javascript
**.lua** | text/x-lua | -
**.pl** | text/x-perl | -
**.py** | text/x-python | text/x-script.python, application/x-python
**.rb** | text/x-ruby | application/x-ruby
**.json** | application/json | -
**.geojson** | application/geo+json | -
**.har** | application/json | -
**.gltf** | model/gltf+json | -
**.ndjson** | application/x-ndjson | -
**.rtf** | text/rtf | application/rtf
**.srt** | application/x-subrip | application/x-srt, text/x-srt
**.tcl** | text/x-tcl | application/x-tcl
**.csv** | text/csv | -
**.tsv** | text/tab-separated-values | -
**.vcf** | text/vcard | -
**.ics** | text/calendar | -
**.warc** | application/warc | -
**.vtt** | text/vtt | -
**.sh** | text/x-shellscript | text/x-sh, application/x-shellscript, application/x-sh
**.pbm** | image/x-portable-bitmap | -
**.pgm** | image/x-portable-graymap | -
**.ppm** | image/x-portable-pixmap | -
**.pam** | image/x-portable-arbitrarymap | -
Extension | MIME type <br> Aliases | Hierarchy
--------- | ---------------------- | ---------
**n/a** | **application/octet-stream** | root
**.xpm** | **image/x-xpixmap** | xpm>root
**.7z** | **application/x-7z-compressed** | 7z>root
**.zip** | **application/zip** <br> application/x-zip, application/x-zip-compressed | zip>root
**.docx** | **application/vnd.openxmlformats-officedocument.wordprocessingml.document** | docx>zip>root
**.pptx** | **application/vnd.openxmlformats-officedocument.presentationml.presentation** | pptx>zip>root
**.xlsx** | **application/vnd.openxmlformats-officedocument.spreadsheetml.sheet** | xlsx>zip>root
**.epub** | **application/epub+zip** | epub>zip>root
**.apk** | **application/vnd.android.package-archive** | apk>zip>root
**.jar** | **application/java-archive** <br> application/jar, application/jar-archive, application/x-java-archive | jar>zip>root
**.odt** | **application/vnd.oasis.opendocument.text** <br> application/x-vnd.oasis.opendocument.text | odt>zip>root
**.ott** | **application/vnd.oasis.opendocument.text-template** <br> application/x-vnd.oasis.opendocument.text-template | ott>odt>zip>root
**.ods** | **application/vnd.oasis.opendocument.spreadsheet** <br> application/x-vnd.oasis.opendocument.spreadsheet | ods>zip>root
**.ots** | **application/vnd.oasis.opendocument.spreadsheet-template** <br> application/x-vnd.oasis.opendocument.spreadsheet-template | ots>ods>zip>root
**.odp** | **application/vnd.oasis.opendocument.presentation** <br> application/x-vnd.oasis.opendocument.presentation | odp>zip>root
**.otp** | **application/vnd.oasis.opendocument.presentation-template** <br> application/x-vnd.oasis.opendocument.presentation-template | otp>odp>zip>root
**.odg** | **application/vnd.oasis.opendocument.graphics** <br> application/x-vnd.oasis.opendocument.graphics | odg>zip>root
**.otg** | **application/vnd.oasis.opendocument.graphics-template** <br> application/x-vnd.oasis.opendocument.graphics-template | otg>odg>zip>root
**.odf** | **application/vnd.oasis.opendocument.formula** <br> application/x-vnd.oasis.opendocument.formula | odf>zip>root
**.odc** | **application/vnd.oasis.opendocument.chart** <br> application/x-vnd.oasis.opendocument.chart | odc>zip>root
**.sxc** | **application/vnd.sun.xml.calc** | sxc>zip>root
**.kmz** | **application/vnd.google-earth.kmz** | kmz>zip>root
**.vsdx** | **application/vnd.ms-visio.drawing.main+xml** | vsdx>zip>root
**.pdf** | **application/pdf** <br> application/x-pdf | pdf>root
**.fdf** | **application/vnd.fdf** | fdf>root
**n/a** | **application/x-ole-storage** | x-ole-storage>root
**.msi** | **application/x-ms-installer** <br> application/x-windows-installer, application/x-msi | msi>x-ole-storage>root
**.msg** | **application/vnd.ms-outlook** | msg>x-ole-storage>root
**.xls** | **application/vnd.ms-excel** <br> application/msexcel | xls>x-ole-storage>root
**.pub** | **application/vnd.ms-publisher** | pub>x-ole-storage>root
**.ppt** | **application/vnd.ms-powerpoint** <br> application/mspowerpoint | ppt>x-ole-storage>root
**.doc** | **application/msword** <br> application/vnd.ms-word | doc>x-ole-storage>root
**.ps** | **application/postscript** | ps>root
**.psd** | **image/vnd.adobe.photoshop** <br> image/x-psd, application/photoshop | psd>root
**.p7s** | **application/pkcs7-signature** | p7s>root
**.ogg** | **application/ogg** <br> application/x-ogg | ogg>root
**.oga** | **audio/ogg** | oga>ogg>root
**.ogv** | **video/ogg** | ogv>ogg>root
**.png** | **image/png** | png>root
**.png** | **image/vnd.mozilla.apng** | png>png>root
**.jpg** | **image/jpeg** | jpg>root
**.jxl** | **image/jxl** | jxl>root
**.jp2** | **image/jp2** | jp2>root
**.jpf** | **image/jpx** | jpf>root
**.jpm** | **image/jpm** <br> video/jpm | jpm>root
**.jxs** | **image/jxs** | jxs>root
**.gif** | **image/gif** | gif>root
**.webp** | **image/webp** | webp>root
**.exe** | **application/vnd.microsoft.portable-executable** | exe>root
**n/a** | **application/x-elf** | x-elf>root
**n/a** | **application/x-object** | x-object>x-elf>root
**n/a** | **application/x-executable** | x-executable>x-elf>root
**.so** | **application/x-sharedlib** | so>x-elf>root
**n/a** | **application/x-coredump** | x-coredump>x-elf>root
**.a** | **application/x-archive** <br> application/x-unix-archive | a>root
**.deb** | **application/vnd.debian.binary-package** | deb>a>root
**.tar** | **application/x-tar** | tar>root
**.xar** | **application/x-xar** | xar>root
**.bz2** | **application/x-bzip2** | bz2>root
**.fits** | **application/fits** <br> image/fits | fits>root
**.tiff** | **image/tiff** | tiff>root
**.bmp** | **image/bmp** <br> image/x-bmp, image/x-ms-bmp | bmp>root
**.123** | **application/vnd.lotus-1-2-3** | 123>root
**.ico** | **image/x-icon** | ico>root
**.mp3** | **audio/mpeg** <br> audio/x-mpeg, audio/mp3 | mp3>root
**.flac** | **audio/flac** | flac>root
**.midi** | **audio/midi** <br> audio/mid, audio/sp-midi, audio/x-mid, audio/x-midi | midi>root
**.ape** | **audio/ape** | ape>root
**.mpc** | **audio/musepack** | mpc>root
**.amr** | **audio/amr** <br> audio/amr-nb | amr>root
**.wav** | **audio/wav** <br> audio/x-wav, audio/vnd.wave, audio/wave | wav>root
**.aiff** | **audio/aiff** <br> audio/x-aiff | aiff>root
**.au** | **audio/basic** | au>root
**.mpeg** | **video/mpeg** | mpeg>root
**.mov** | **video/quicktime** | mov>root
**.mp4** | **video/mp4** | mp4>root
**.avif** | **image/avif** | avif>mp4>root
**.3gp** | **video/3gpp** <br> video/3gp, audio/3gpp | 3gp>mp4>root
**.3g2** | **video/3gpp2** <br> video/3g2, audio/3gpp2 | 3g2>mp4>root
**.mp4** | **audio/mp4** <br> audio/x-mp4a | mp4>mp4>root
**.mqv** | **video/quicktime** | mqv>mp4>root
**.m4a** | **audio/x-m4a** | m4a>mp4>root
**.m4v** | **video/x-m4v** | m4v>mp4>root
**.heic** | **image/heic** | heic>mp4>root
**.heic** | **image/heic-sequence** | heic>mp4>root
**.heif** | **image/heif** | heif>mp4>root
**.heif** | **image/heif-sequence** | heif>mp4>root
**.mj2** | **video/mj2** | mj2>mp4>root
**.dvb** | **video/vnd.dvb.file** | dvb>mp4>root
**.webm** | **video/webm** <br> audio/webm | webm>root
**.avi** | **video/x-msvideo** <br> video/avi, video/msvideo | avi>root
**.flv** | **video/x-flv** | flv>root
**.mkv** | **video/x-matroska** | mkv>root
**.asf** | **video/x-ms-asf** <br> video/asf, video/x-ms-wmv | asf>root
**.aac** | **audio/aac** | aac>root
**.voc** | **audio/x-unknown** | voc>root
**.m3u** | **application/vnd.apple.mpegurl** <br> audio/mpegurl | m3u>root
**.rmvb** | **application/vnd.rn-realmedia-vbr** | rmvb>root
**.gz** | **application/gzip** <br> application/x-gzip, application/x-gunzip, application/gzipped, application/gzip-compressed, application/x-gzip-compressed, gzip/document | gz>root
**.class** | **application/x-java-applet** | class>root
**.swf** | **application/x-shockwave-flash** | swf>root
**.crx** | **application/x-chrome-extension** | crx>root
**.ttf** | **font/ttf** <br> font/sfnt, application/x-font-ttf, application/font-sfnt | ttf>root
**.woff** | **font/woff** | woff>root
**.woff2** | **font/woff2** | woff2>root
**.otf** | **font/otf** | otf>root
**.ttc** | **font/collection** | ttc>root
**.eot** | **application/vnd.ms-fontobject** | eot>root
**.wasm** | **application/wasm** | wasm>root
**.shx** | **application/vnd.shx** | shx>root
**.shp** | **application/vnd.shp** | shp>shx>root
**.dbf** | **application/x-dbf** | dbf>root
**.dcm** | **application/dicom** | dcm>root
**.rar** | **application/x-rar-compressed** <br> application/x-rar | rar>root
**.djvu** | **image/vnd.djvu** | djvu>root
**.mobi** | **application/x-mobipocket-ebook** | mobi>root
**.lit** | **application/x-ms-reader** | lit>root
**.bpg** | **image/bpg** | bpg>root
**.cbor** | **application/cbor** | cbor>root
**.sqlite** | **application/vnd.sqlite3** <br> application/x-sqlite3 | sqlite>root
**.dwg** | **image/vnd.dwg** <br> image/x-dwg, application/acad, application/x-acad, application/autocad_dwg, application/dwg, application/x-dwg, application/x-autocad, drawing/dwg | dwg>root
**.nes** | **application/vnd.nintendo.snes.rom** | nes>root
**.lnk** | **application/x-ms-shortcut** | lnk>root
**.macho** | **application/x-mach-binary** | macho>root
**.qcp** | **audio/qcelp** | qcp>root
**.icns** | **image/x-icns** | icns>root
**.hdr** | **image/vnd.radiance** | hdr>root
**.mrc** | **application/marc** | mrc>root
**.mdb** | **application/x-msaccess** | mdb>root
**.accdb** | **application/x-msaccess** | accdb>root
**.zst** | **application/zstd** | zst>root
**.cab** | **application/vnd.ms-cab-compressed** | cab>root
**.rpm** | **application/x-rpm** | rpm>root
**.xz** | **application/x-xz** | xz>root
**.lz** | **application/lzip** <br> application/x-lzip | lz>root
**.torrent** | **application/x-bittorrent** | torrent>root
**.cpio** | **application/x-cpio** | cpio>root
**n/a** | **application/tzif** | tzif>root
**.xcf** | **image/x-xcf** | xcf>root
**.pat** | **image/x-gimp-pat** | pat>root
**.gbr** | **image/x-gimp-gbr** | gbr>root
**.glb** | **model/gltf-binary** | glb>root
**.cab** | **application/x-installshield** | cab>root
**.jxr** | **image/jxr** <br> image/vnd.ms-photo | jxr>root
**.parquet** | **application/vnd.apache.parquet** <br> application/x-parquet | parquet>root
**.one** | **application/onenote** | one>root
**.chm** | **application/vnd.ms-htmlhelp** | chm>root
**.wpd** | **application/vnd.wordperfect** | wpd>root
**.dxf** | **image/vnd.dxf** | dxf>root
**.txt** | **text/plain** | txt>root
**.svg** | **image/svg+xml** | svg>txt>root
**.html** | **text/html** | html>txt>root
**.xml** | **text/xml** <br> application/xml | xml>txt>root
**.rss** | **application/rss+xml** <br> text/rss | rss>xml>txt>root
**.atom** | **application/atom+xml** | atom>xml>txt>root
**.x3d** | **model/x3d+xml** | x3d>xml>txt>root
**.kml** | **application/vnd.google-earth.kml+xml** | kml>xml>txt>root
**.xlf** | **application/x-xliff+xml** | xlf>xml>txt>root
**.dae** | **model/vnd.collada+xml** | dae>xml>txt>root
**.gml** | **application/gml+xml** | gml>xml>txt>root
**.gpx** | **application/gpx+xml** | gpx>xml>txt>root
**.tcx** | **application/vnd.garmin.tcx+xml** | tcx>xml>txt>root
**.amf** | **application/x-amf** | amf>xml>txt>root
**.3mf** | **application/vnd.ms-package.3dmanufacturing-3dmodel+xml** | 3mf>xml>txt>root
**.xfdf** | **application/vnd.adobe.xfdf** | xfdf>xml>txt>root
**.owl** | **application/owl+xml** | owl>xml>txt>root
**.html** | **application/xhtml+xml** | html>xml>txt>root
**.php** | **text/x-php** | php>txt>root
**.js** | **text/javascript** <br> application/x-javascript, application/javascript | js>txt>root
**.lua** | **text/x-lua** | lua>txt>root
**.pl** | **text/x-perl** | pl>txt>root
**.py** | **text/x-python** <br> text/x-script.python, application/x-python | py>txt>root
**.rb** | **text/x-ruby** <br> application/x-ruby | rb>txt>root
**.json** | **application/json** | json>txt>root
**.geojson** | **application/geo+json** | geojson>json>txt>root
**.har** | **application/json** | har>json>txt>root
**.gltf** | **model/gltf+json** | gltf>json>txt>root
**.ndjson** | **application/x-ndjson** | ndjson>txt>root
**.rtf** | **text/rtf** <br> application/rtf | rtf>txt>root
**.srt** | **application/x-subrip** <br> application/x-srt, text/x-srt | srt>txt>root
**.tcl** | **text/x-tcl** <br> application/x-tcl | tcl>txt>root
**.csv** | **text/csv** | csv>txt>root
**.tsv** | **text/tab-separated-values** | tsv>txt>root
**.vcf** | **text/vcard** | vcf>txt>root
**.ics** | **text/calendar** | ics>txt>root
**.warc** | **application/warc** | warc>txt>root
**.vtt** | **text/vtt** | vtt>txt>root
**.sh** | **text/x-shellscript** <br> text/x-sh, application/x-shellscript, application/x-sh | sh>txt>root
**.pbm** | **image/x-portable-bitmap** | pbm>txt>root
**.pgm** | **image/x-portable-graymap** | pgm>txt>root
**.ppm** | **image/x-portable-pixmap** | ppm>txt>root
**.pam** | **image/x-portable-arbitrarymap** | pam>txt>root

View File

@@ -24,7 +24,7 @@ var root = newMIME("application/octet-stream", "",
woff2, otf, ttc, eot, wasm, shx, dbf, dcm, rar, djvu, mobi, lit, bpg, cbor,
sqlite3, dwg, nes, lnk, macho, qcp, icns, hdr, mrc, mdb, accdb, zstd, cab,
rpm, xz, lzip, torrent, cpio, tzif, xcf, pat, gbr, glb, cabIS, jxr, parquet,
oneNote, chm,
oneNote, chm, wpd, dxf,
// Keep text last because it is the slowest check.
text,
)
@@ -65,10 +65,9 @@ var (
jar = newMIME("application/java-archive", ".jar", magic.Jar).
alias("application/jar", "application/jar-archive", "application/x-java-archive")
apk = newMIME("application/vnd.android.package-archive", ".apk", magic.APK)
ole = newMIME("application/x-ole-storage", "", magic.Ole, msi, aaf, msg, xls, pub, ppt, doc)
ole = newMIME("application/x-ole-storage", "", magic.Ole, msi, msg, xls, pub, ppt, doc)
msi = newMIME("application/x-ms-installer", ".msi", magic.Msi).
alias("application/x-windows-installer", "application/x-msi")
aaf = newMIME("application/octet-stream", ".aaf", magic.Aaf)
doc = newMIME("application/msword", ".doc", magic.Doc).
alias("application/vnd.ms-word")
ppt = newMIME("application/vnd.ms-powerpoint", ".ppt", magic.Ppt).
@@ -286,4 +285,6 @@ var (
cbor = newMIME("application/cbor", ".cbor", magic.CBOR)
oneNote = newMIME("application/onenote", ".one", magic.One)
chm = newMIME("application/vnd.ms-htmlhelp", ".chm", magic.CHM)
wpd = newMIME("application/vnd.wordperfect", ".wpd", magic.WPD)
dxf = newMIME("image/vnd.dxf", ".dxf", magic.DXF)
)

View File

@@ -332,7 +332,10 @@ func DialURLContext(ctx context.Context, rawurl string, options ...DialOption) (
return nil, err
}
if u.Scheme != "redis" && u.Scheme != "rediss" {
switch u.Scheme {
case "redis", "rediss", "valkey", "valkeys":
// valid scheme
default:
return nil, fmt.Errorf("invalid redis URL scheme: %s", u.Scheme)
}
@@ -386,7 +389,7 @@ func DialURLContext(ctx context.Context, rawurl string, options ...DialOption) (
return nil, fmt.Errorf("invalid database: %s", u.Path[1:])
}
options = append(options, DialUseTLS(u.Scheme == "rediss"))
options = append(options, DialUseTLS(u.Scheme == "rediss" || u.Scheme == "valkeys"))
return DialContext(ctx, "tcp", address, options...)
}
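A minimal sketch of the newly accepted schemes (redigo's DialURLContext; host, credentials and database number are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// "valkeys" behaves like "rediss": the connection is dialed with TLS.
	conn, err := redis.DialURLContext(context.Background(),
		"valkeys://default:secret@valkey.example.com:6379/0")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if _, err := conn.Do("PING"); err != nil {
		log.Fatal(err)
	}
}
```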

View File

@@ -477,7 +477,7 @@ var errScanStructValue = errors.New("redigo.ScanStruct: value must be non-nil po
// ScanStruct uses exported field names to match values in the response. Use
// 'redis' field tag to override the name:
//
// Field int `redis:"myName"`
// Field int `redis:"myName"`
//
// Fields with the tag redis:"-" are ignored.
//
@@ -513,9 +513,9 @@ func ScanStruct(src []interface{}, dest interface{}) error {
continue
}
name, ok := src[i].([]byte)
name, ok := convertToBulk(src[i])
if !ok {
return fmt.Errorf("redigo.ScanStruct: key %d not a bulk string value", i)
return fmt.Errorf("redigo.ScanStruct: key %d not a bulk string value got type: %T", i, src[i])
}
fs := ss.fieldSpec(name)
@@ -530,6 +530,19 @@ func ScanStruct(src []interface{}, dest interface{}) error {
return nil
}
// convertToBulk converts src to a []byte if src is a string or bulk string
// and returns true. Otherwise nil and false are returned.
func convertToBulk(src interface{}) ([]byte, bool) {
switch v := src.(type) {
case []byte:
return v, true
case string:
return []byte(v), true
default:
return nil, false
}
}
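For reference, a sketch of the ScanStruct flow this touches (it now also accepts plain string keys, not only bulk strings; `conn` is assumed to be an established redigo connection and the key and fields are illustrative):

```go
type account struct {
	Name  string `redis:"name"`
	Score int    `redis:"score"`
	Note  string `redis:"-"` // ignored by ScanStruct
}

func loadAccount(conn redis.Conn, key string) (account, error) {
	var a account
	values, err := redis.Values(conn.Do("HGETALL", key))
	if err != nil {
		return a, err
	}
	// values alternates field names and values; names may now arrive as
	// either bulk strings ([]byte) or plain strings.
	err = redis.ScanStruct(values, &a)
	return a, err
}
```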
var (
errScanSliceValue = errors.New("redigo.ScanSlice: dest must be non-nil pointer to a struct")
)

View File

@@ -6,7 +6,6 @@
# Folders
_obj
_test
vendor
# Architecture specific extensions/prefixes
*.[568vq]
@@ -23,11 +22,3 @@ _testmain.go
*.exe
*.test
*.prof
*.pprof
*.out
*.log
coverage.txt
/bin
cover.out
cover.html

27
vendor/github.com/klauspost/crc32/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

42
vendor/github.com/klauspost/crc32/README.md generated vendored Normal file
View File

@@ -0,0 +1,42 @@
# 2025 revival
For IEEE checksums, AVX512 can be used to speed up CRC32 checksums by approximately 2x.
Castagnoli checksums (CRC32C) can also be computed with AVX512,
but the performance gain is not significant enough to justify the downsides of using it at this point.
# crc32
This package is a drop-in replacement for the standard library `hash/crc32` package,
that features AVX 512 optimizations on x64 platforms, for a 2x speedup for IEEE CRC32 checksums.
# usage
Install using `go get github.com/klauspost/crc32`. This library is based on Go 1.24
Replace `import "hash/crc32"` with `import "github.com/klauspost/crc32"` and you are good to go.
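A minimal sketch of the drop-in usage described above (only the import path changes relative to the standard library; the helpers used here are shown in the vendored crc32.go below):

```go
package main

import (
	"fmt"

	"github.com/klauspost/crc32"
)

func main() {
	data := []byte("hello, checksums")

	// One-shot IEEE checksum (the polynomial used by gzip, zip, png, ...).
	fmt.Printf("%08x\n", crc32.ChecksumIEEE(data))

	// Castagnoli (CRC32-C) through the streaming hash.Hash32 interface.
	h := crc32.New(crc32.MakeTable(crc32.Castagnoli))
	h.Write(data)
	fmt.Printf("%08x\n", h.Sum32())
}
```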
# changes
* 2025: Revived and updated to Go 1.24, with AVX 512 optimizations.
# performance
AVX512 is enabled above 1KB input size. This rather high limit is because AVX512 may be slower to ramp up than
the regular SSE4 implementation for smaller inputs. This is not reflected in the benchmarks below.
| Benchmark | Old MB/s | New MB/s | Speedup |
|-----------------------------------------------|----------|----------|---------|
| BenchmarkCRC32/poly=IEEE/size=512/align=0-32 | 17996.39 | 17969.94 | 1.00x |
| BenchmarkCRC32/poly=IEEE/size=512/align=1-32 | 18021.48 | 17945.55 | 1.00x |
| BenchmarkCRC32/poly=IEEE/size=1kB/align=0-32 | 19921.70 | 45613.77 | 2.29x |
| BenchmarkCRC32/poly=IEEE/size=1kB/align=1-32 | 19946.60 | 46819.09 | 2.35x |
| BenchmarkCRC32/poly=IEEE/size=4kB/align=0-32 | 21538.65 | 48600.93 | 2.26x |
| BenchmarkCRC32/poly=IEEE/size=4kB/align=1-32 | 21449.20 | 48477.84 | 2.26x |
| BenchmarkCRC32/poly=IEEE/size=32kB/align=0-32 | 21785.49 | 46013.10 | 2.11x |
| BenchmarkCRC32/poly=IEEE/size=32kB/align=1-32 | 21946.47 | 45954.10 | 2.09x |
cpu: AMD Ryzen 9 9950X 16-Core Processor
# license
Standard Go license. See [LICENSE](LICENSE) for details.

253
vendor/github.com/klauspost/crc32/crc32.go generated vendored Normal file
View File

@@ -0,0 +1,253 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package crc32 implements the 32-bit cyclic redundancy check, or CRC-32,
// checksum. See https://en.wikipedia.org/wiki/Cyclic_redundancy_check for
// information.
//
// Polynomials are represented in LSB-first form also known as reversed representation.
//
// See https://en.wikipedia.org/wiki/Mathematics_of_cyclic_redundancy_checks#Reversed_representations_and_reciprocal_polynomials
// for information.
package crc32
import (
"encoding/binary"
"errors"
"hash"
"sync"
"sync/atomic"
)
// The size of a CRC-32 checksum in bytes.
const Size = 4
// Predefined polynomials.
const (
// IEEE is by far and away the most common CRC-32 polynomial.
// Used by ethernet (IEEE 802.3), v.42, fddi, gzip, zip, png, ...
IEEE = 0xedb88320
// Castagnoli's polynomial, used in iSCSI.
// Has better error detection characteristics than IEEE.
// https://dx.doi.org/10.1109/26.231911
Castagnoli = 0x82f63b78
// Koopman's polynomial.
// Also has better error detection characteristics than IEEE.
// https://dx.doi.org/10.1109/DSN.2002.1028931
Koopman = 0xeb31d82e
)
// Table is a 256-word table representing the polynomial for efficient processing.
type Table [256]uint32
// This file makes use of functions implemented in architecture-specific files.
// The interface that they implement is as follows:
//
// // archAvailableIEEE reports whether an architecture-specific CRC32-IEEE
// // algorithm is available.
// archAvailableIEEE() bool
//
// // archInitIEEE initializes the architecture-specific CRC32-IEEE algorithm.
// // It can only be called if archAvailableIEEE() returns true.
// archInitIEEE()
//
// // archUpdateIEEE updates the given CRC32-IEEE. It can only be called if
// // archInitIEEE() was previously called.
// archUpdateIEEE(crc uint32, p []byte) uint32
//
// // archAvailableCastagnoli reports whether an architecture-specific
// // CRC32-C algorithm is available.
// archAvailableCastagnoli() bool
//
// // archInitCastagnoli initializes the architecture-specific CRC32-C
// // algorithm. It can only be called if archAvailableCastagnoli() returns
// // true.
// archInitCastagnoli()
//
// // archUpdateCastagnoli updates the given CRC32-C. It can only be called
// // if archInitCastagnoli() was previously called.
// archUpdateCastagnoli(crc uint32, p []byte) uint32
// castagnoliTable points to a lazily initialized Table for the Castagnoli
// polynomial. MakeTable will always return this value when asked to make a
// Castagnoli table so we can compare against it to find when the caller is
// using this polynomial.
var castagnoliTable *Table
var castagnoliTable8 *slicing8Table
var updateCastagnoli func(crc uint32, p []byte) uint32
var haveCastagnoli atomic.Bool
var castagnoliInitOnce = sync.OnceFunc(func() {
castagnoliTable = simpleMakeTable(Castagnoli)
if archAvailableCastagnoli() {
archInitCastagnoli()
updateCastagnoli = archUpdateCastagnoli
} else {
// Initialize the slicing-by-8 table.
castagnoliTable8 = slicingMakeTable(Castagnoli)
updateCastagnoli = func(crc uint32, p []byte) uint32 {
return slicingUpdate(crc, castagnoliTable8, p)
}
}
haveCastagnoli.Store(true)
})
// IEEETable is the table for the [IEEE] polynomial.
var IEEETable = simpleMakeTable(IEEE)
// ieeeTable8 is the slicing8Table for IEEE
var ieeeTable8 *slicing8Table
var updateIEEE func(crc uint32, p []byte) uint32
var ieeeInitOnce = sync.OnceFunc(func() {
if archAvailableIEEE() {
archInitIEEE()
updateIEEE = archUpdateIEEE
} else {
// Initialize the slicing-by-8 table.
ieeeTable8 = slicingMakeTable(IEEE)
updateIEEE = func(crc uint32, p []byte) uint32 {
return slicingUpdate(crc, ieeeTable8, p)
}
}
})
// MakeTable returns a [Table] constructed from the specified polynomial.
// The contents of this [Table] must not be modified.
func MakeTable(poly uint32) *Table {
switch poly {
case IEEE:
ieeeInitOnce()
return IEEETable
case Castagnoli:
castagnoliInitOnce()
return castagnoliTable
default:
return simpleMakeTable(poly)
}
}
// digest represents the partial evaluation of a checksum.
type digest struct {
crc uint32
tab *Table
}
// New creates a new [hash.Hash32] computing the CRC-32 checksum using the
// polynomial represented by the [Table]. Its Sum method will lay the
// value out in big-endian byte order. The returned Hash32 also
// implements [encoding.BinaryMarshaler] and [encoding.BinaryUnmarshaler] to
// marshal and unmarshal the internal state of the hash.
func New(tab *Table) hash.Hash32 {
if tab == IEEETable {
ieeeInitOnce()
}
return &digest{0, tab}
}
// NewIEEE creates a new [hash.Hash32] computing the CRC-32 checksum using
// the [IEEE] polynomial. Its Sum method will lay the value out in
// big-endian byte order. The returned Hash32 also implements
// [encoding.BinaryMarshaler] and [encoding.BinaryUnmarshaler] to marshal
// and unmarshal the internal state of the hash.
func NewIEEE() hash.Hash32 { return New(IEEETable) }
func (d *digest) Size() int { return Size }
func (d *digest) BlockSize() int { return 1 }
func (d *digest) Reset() { d.crc = 0 }
const (
magic = "crc\x01"
marshaledSize = len(magic) + 4 + 4
)
func (d *digest) AppendBinary(b []byte) ([]byte, error) {
b = append(b, magic...)
b = binary.BigEndian.AppendUint32(b, tableSum(d.tab))
b = binary.BigEndian.AppendUint32(b, d.crc)
return b, nil
}
func (d *digest) MarshalBinary() ([]byte, error) {
return d.AppendBinary(make([]byte, 0, marshaledSize))
}
func (d *digest) UnmarshalBinary(b []byte) error {
if len(b) < len(magic) || string(b[:len(magic)]) != magic {
return errors.New("hash/crc32: invalid hash state identifier")
}
if len(b) != marshaledSize {
return errors.New("hash/crc32: invalid hash state size")
}
if tableSum(d.tab) != binary.BigEndian.Uint32(b[4:]) {
return errors.New("hash/crc32: tables do not match")
}
d.crc = binary.BigEndian.Uint32(b[8:])
return nil
}
func update(crc uint32, tab *Table, p []byte, checkInitIEEE bool) uint32 {
switch {
case haveCastagnoli.Load() && tab == castagnoliTable:
return updateCastagnoli(crc, p)
case tab == IEEETable:
if checkInitIEEE {
ieeeInitOnce()
}
return updateIEEE(crc, p)
default:
return simpleUpdate(crc, tab, p)
}
}
// Update returns the result of adding the bytes in p to the crc.
func Update(crc uint32, tab *Table, p []byte) uint32 {
// Unfortunately, because IEEETable is exported, IEEE may be used without a
// call to MakeTable. We have to make sure it gets initialized in that case.
return update(crc, tab, p, true)
}
func (d *digest) Write(p []byte) (n int, err error) {
// We only create digest objects through New() which takes care of
// initialization in this case.
d.crc = update(d.crc, d.tab, p, false)
return len(p), nil
}
func (d *digest) Sum32() uint32 { return d.crc }
func (d *digest) Sum(in []byte) []byte {
s := d.Sum32()
return append(in, byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}
// Checksum returns the CRC-32 checksum of data
// using the polynomial represented by the [Table].
func Checksum(data []byte, tab *Table) uint32 { return Update(0, tab, data) }
// ChecksumIEEE returns the CRC-32 checksum of data
// using the [IEEE] polynomial.
func ChecksumIEEE(data []byte) uint32 {
ieeeInitOnce()
return updateIEEE(0, data)
}
// tableSum returns the IEEE checksum of table t.
func tableSum(t *Table) uint32 {
var a [1024]byte
b := a[:0]
if t != nil {
for _, x := range t {
b = binary.BigEndian.AppendUint32(b, x)
}
}
return ChecksumIEEE(b)
}

253
vendor/github.com/klauspost/crc32/crc32_amd64.go generated vendored Normal file
View File

@@ -0,0 +1,253 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// AMD64-specific hardware-assisted CRC32 algorithms. See crc32.go for a
// description of the interface that each architecture-specific file
// implements.
package crc32
import (
"unsafe"
"golang.org/x/sys/cpu"
)
// This file contains the code to call the SSE 4.2 version of the Castagnoli
// and IEEE CRC.
// castagnoliSSE42 is defined in crc32_amd64.s and uses the SSE 4.2 CRC32
// instruction.
//
//go:noescape
func castagnoliSSE42(crc uint32, p []byte) uint32
// castagnoliSSE42Triple is defined in crc32_amd64.s and uses the SSE 4.2 CRC32
// instruction.
//
//go:noescape
func castagnoliSSE42Triple(
crcA, crcB, crcC uint32,
a, b, c []byte,
rounds uint32,
) (retA uint32, retB uint32, retC uint32)
// ieeeCLMUL is defined in crc_amd64.s and uses the PCLMULQDQ
// instruction as well as SSE 4.1.
//
//go:noescape
func ieeeCLMUL(crc uint32, p []byte) uint32
// castagnoliCLMULAvx512 is defined in crc_amd64.s and uses the PCLMULQDQ
// instruction as well as SSE 4.1.
//
//go:noescape
func castagnoliCLMULAvx512(crc uint32, p []byte) uint32
// ieeeCLMULAvx512 is defined in crc_amd64.s and uses the PCLMULQDQ
// instruction as well as SSE 4.1.
//
//go:noescape
func ieeeCLMULAvx512(crc uint32, p []byte) uint32
const castagnoliK1 = 168
const castagnoliK2 = 1344
type sse42Table [4]Table
var castagnoliSSE42TableK1 *sse42Table
var castagnoliSSE42TableK2 *sse42Table
func archAvailableCastagnoli() bool {
return cpu.X86.HasSSE42
}
func archInitCastagnoli() {
if !cpu.X86.HasSSE42 {
panic("arch-specific Castagnoli not available")
}
castagnoliSSE42TableK1 = new(sse42Table)
castagnoliSSE42TableK2 = new(sse42Table)
// See description in updateCastagnoli.
// t[0][i] = CRC(i000, O)
// t[1][i] = CRC(0i00, O)
// t[2][i] = CRC(00i0, O)
// t[3][i] = CRC(000i, O)
// where O is a sequence of K zeros.
var tmp [castagnoliK2]byte
for b := 0; b < 4; b++ {
for i := 0; i < 256; i++ {
val := uint32(i) << uint32(b*8)
castagnoliSSE42TableK1[b][i] = castagnoliSSE42(val, tmp[:castagnoliK1])
castagnoliSSE42TableK2[b][i] = castagnoliSSE42(val, tmp[:])
}
}
}
// castagnoliShift computes the CRC32-C of K1 or K2 zeroes (depending on the
// table given) with the given initial crc value. This corresponds to
// CRC(crc, O) in the description in updateCastagnoli.
func castagnoliShift(table *sse42Table, crc uint32) uint32 {
return table[3][crc>>24] ^
table[2][(crc>>16)&0xFF] ^
table[1][(crc>>8)&0xFF] ^
table[0][crc&0xFF]
}
func archUpdateCastagnoli(crc uint32, p []byte) uint32 {
if !cpu.X86.HasSSE42 {
panic("not available")
}
// This method is inspired from the algorithm in Intel's white paper:
// "Fast CRC Computation for iSCSI Polynomial Using CRC32 Instruction"
// The same strategy of splitting the buffer in three is used but the
// combining calculation is different; the complete derivation is explained
// below.
//
// -- The basic idea --
//
// The CRC32 instruction (available in SSE4.2) can process 8 bytes at a
// time. In recent Intel architectures the instruction takes 3 cycles;
// however the processor can pipeline up to three instructions if they
// don't depend on each other.
//
// Roughly this means that we can process three buffers in about the same
// time we can process one buffer.
//
// The idea is then to split the buffer in three, CRC the three pieces
// separately and then combine the results.
//
// Combining the results requires precomputed tables, so we must choose a
// fixed buffer length to optimize. The longer the length, the faster; but
// only buffers longer than this length will use the optimization. We choose
// two cutoffs and compute tables for both:
// - one around 512: 168*3=504
// - one around 4KB: 1344*3=4032
//
// -- The nitty gritty --
//
// Let CRC(I, X) be the non-inverted CRC32-C of the sequence X (with
// initial non-inverted CRC I). This function has the following properties:
// (a) CRC(I, AB) = CRC(CRC(I, A), B)
// (b) CRC(I, A xor B) = CRC(I, A) xor CRC(0, B)
//
// Say we want to compute CRC(I, ABC) where A, B, C are three sequences of
// K bytes each, where K is a fixed constant. Let O be the sequence of K zero
// bytes.
//
// CRC(I, ABC) = CRC(I, ABO xor C)
// = CRC(I, ABO) xor CRC(0, C)
// = CRC(CRC(I, AB), O) xor CRC(0, C)
// = CRC(CRC(I, AO xor B), O) xor CRC(0, C)
// = CRC(CRC(I, AO) xor CRC(0, B), O) xor CRC(0, C)
// = CRC(CRC(CRC(I, A), O) xor CRC(0, B), O) xor CRC(0, C)
//
// The castagnoliSSE42Triple function can compute CRC(I, A), CRC(0, B),
// and CRC(0, C) efficiently. We just need to find a way to quickly compute
// CRC(uvwx, O) given a 4-byte initial value uvwx. We can precompute these
// values; since we can't have a 32-bit table, we break it up into four
// 8-bit tables:
//
// CRC(uvwx, O) = CRC(u000, O) xor
// CRC(0v00, O) xor
// CRC(00w0, O) xor
// CRC(000x, O)
//
// We can compute tables corresponding to the four terms for all 8-bit
// values.
crc = ^crc
// Disabled, since it is not significantly faster than the SSE 4.2 version, even on Zen 5.
if false && len(p) >= 2048 && cpu.X86.HasAVX512F && cpu.X86.HasAVX512VL && cpu.X86.HasAVX512VPCLMULQDQ && cpu.X86.HasPCLMULQDQ {
left := len(p) & 15
do := len(p) - left
crc = castagnoliCLMULAvx512(crc, p[:do])
return ^castagnoliSSE42(crc, p[do:])
}
// If a buffer is long enough to use the optimization, process the first few
// bytes to align the buffer to an 8 byte boundary (if necessary).
if len(p) >= castagnoliK1*3 {
delta := int(uintptr(unsafe.Pointer(&p[0])) & 7)
if delta != 0 {
delta = 8 - delta
crc = castagnoliSSE42(crc, p[:delta])
p = p[delta:]
}
}
// Process 3*K2 at a time.
for len(p) >= castagnoliK2*3 {
// Compute CRC(I, A), CRC(0, B), and CRC(0, C).
crcA, crcB, crcC := castagnoliSSE42Triple(
crc, 0, 0,
p, p[castagnoliK2:], p[castagnoliK2*2:],
castagnoliK2/24)
// CRC(I, AB) = CRC(CRC(I, A), O) xor CRC(0, B)
crcAB := castagnoliShift(castagnoliSSE42TableK2, crcA) ^ crcB
// CRC(I, ABC) = CRC(CRC(I, AB), O) xor CRC(0, C)
crc = castagnoliShift(castagnoliSSE42TableK2, crcAB) ^ crcC
p = p[castagnoliK2*3:]
}
// Process 3*K1 at a time.
for len(p) >= castagnoliK1*3 {
// Compute CRC(I, A), CRC(0, B), and CRC(0, C).
crcA, crcB, crcC := castagnoliSSE42Triple(
crc, 0, 0,
p, p[castagnoliK1:], p[castagnoliK1*2:],
castagnoliK1/24)
// CRC(I, AB) = CRC(CRC(I, A), O) xor CRC(0, B)
crcAB := castagnoliShift(castagnoliSSE42TableK1, crcA) ^ crcB
// CRC(I, ABC) = CRC(CRC(I, AB), O) xor CRC(0, C)
crc = castagnoliShift(castagnoliSSE42TableK1, crcAB) ^ crcC
p = p[castagnoliK1*3:]
}
// Use the simple implementation for what's left.
crc = castagnoliSSE42(crc, p)
return ^crc
}
func archAvailableIEEE() bool {
return cpu.X86.HasPCLMULQDQ && cpu.X86.HasSSE41
}
var archIeeeTable8 *slicing8Table
func archInitIEEE() {
if !cpu.X86.HasPCLMULQDQ || !cpu.X86.HasSSE41 {
panic("not available")
}
// We still use slicing-by-8 for small buffers.
archIeeeTable8 = slicingMakeTable(IEEE)
}
func archUpdateIEEE(crc uint32, p []byte) uint32 {
if !cpu.X86.HasPCLMULQDQ || !cpu.X86.HasSSE41 {
panic("not available")
}
if len(p) >= 64 {
if len(p) >= 1024 && cpu.X86.HasAVX512F && cpu.X86.HasAVX512VL && cpu.X86.HasAVX512VPCLMULQDQ && cpu.X86.HasPCLMULQDQ {
left := len(p) & 15
do := len(p) - left
crc = ^ieeeCLMULAvx512(^crc, p[:do])
p = p[do:]
} else {
left := len(p) & 15
do := len(p) - left
crc = ^ieeeCLMUL(^crc, p[:do])
p = p[do:]
}
}
if len(p) == 0 {
return crc
}
return slicingUpdate(crc, archIeeeTable8, p)
}
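The identity derived in the long comment inside archUpdateCastagnoli above can be checked numerically. The following standalone sketch is not part of the vendored package: it rebuilds a plain, non-inverted, table-driven CRC32-C update (the same construction as simplePopulateTable in crc32_generic.go below) and verifies the ABC decomposition that castagnoliSSE42Triple relies on; the identifiers crcNI and tab are local to the sketch.

```go
package main

import "fmt"

// Reversed CRC32-C (Castagnoli) polynomial.
const castagnoliPoly = 0x82f63b78

var tab [256]uint32

func init() {
    for i := range tab {
        crc := uint32(i)
        for j := 0; j < 8; j++ {
            if crc&1 == 1 {
                crc = (crc >> 1) ^ castagnoliPoly
            } else {
                crc >>= 1
            }
        }
        tab[i] = crc
    }
}

// crcNI is the non-inverted CRC(I, X) from the comment: no ^crc on entry or exit.
func crcNI(crc uint32, p []byte) uint32 {
    for _, v := range p {
        crc = tab[byte(crc)^v] ^ (crc >> 8)
    }
    return crc
}

func main() {
    const k = 8 // block size K; the real code uses castagnoliK1/castagnoliK2
    a := []byte("AAAAAAAA")
    b := []byte("BBBBBBBB")
    c := []byte("CCCCCCCC")
    zero := make([]byte, k) // O: K zero bytes
    i := uint32(0x12345678) // arbitrary initial (non-inverted) CRC

    abc := append(append(append([]byte(nil), a...), b...), c...)
    direct := crcNI(i, abc)

    // CRC(I, ABC) = CRC(CRC(CRC(I, A), O) xor CRC(0, B), O) xor CRC(0, C)
    folded := crcNI(crcNI(crcNI(i, a), zero)^crcNI(0, b), zero) ^ crcNI(0, c)

    fmt.Println(direct == folded) // prints: true
}
```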

527
vendor/github.com/klauspost/crc32/crc32_amd64.s generated vendored Normal file
View File

@@ -0,0 +1,527 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// castagnoliSSE42 updates the (non-inverted) crc with the given buffer.
//
// func castagnoliSSE42(crc uint32, p []byte) uint32
TEXT ·castagnoliSSE42(SB), NOSPLIT, $0
MOVL crc+0(FP), AX // CRC value
MOVQ p+8(FP), SI // data pointer
MOVQ p_len+16(FP), CX // len(p)
// If there are fewer than 8 bytes to process, skip alignment.
CMPQ CX, $8
JL less_than_8
MOVQ SI, BX
ANDQ $7, BX
JZ aligned
// Process the first few bytes to 8-byte align the input.
// BX = 8 - BX. We need to process this many bytes to align.
SUBQ $1, BX
XORQ $7, BX
BTQ $0, BX
JNC align_2
CRC32B (SI), AX
DECQ CX
INCQ SI
align_2:
BTQ $1, BX
JNC align_4
CRC32W (SI), AX
SUBQ $2, CX
ADDQ $2, SI
align_4:
BTQ $2, BX
JNC aligned
CRC32L (SI), AX
SUBQ $4, CX
ADDQ $4, SI
aligned:
// The input is now 8-byte aligned and we can process 8-byte chunks.
CMPQ CX, $8
JL less_than_8
CRC32Q (SI), AX
ADDQ $8, SI
SUBQ $8, CX
JMP aligned
less_than_8:
// We may have some bytes left over; process 4 bytes, then 2, then 1.
BTQ $2, CX
JNC less_than_4
CRC32L (SI), AX
ADDQ $4, SI
less_than_4:
BTQ $1, CX
JNC less_than_2
CRC32W (SI), AX
ADDQ $2, SI
less_than_2:
BTQ $0, CX
JNC done
CRC32B (SI), AX
done:
MOVL AX, ret+32(FP)
RET
// castagnoliSSE42Triple updates three (non-inverted) crcs with (24*rounds)
// bytes from each buffer.
//
// func castagnoliSSE42Triple(
// crc1, crc2, crc3 uint32,
// a, b, c []byte,
// rounds uint32,
// ) (retA uint32, retB uint32, retC uint32)
TEXT ·castagnoliSSE42Triple(SB), NOSPLIT, $0
MOVL crcA+0(FP), AX
MOVL crcB+4(FP), CX
MOVL crcC+8(FP), DX
MOVQ a+16(FP), R8 // data pointer
MOVQ b+40(FP), R9 // data pointer
MOVQ c+64(FP), R10 // data pointer
MOVL rounds+88(FP), R11
loop:
CRC32Q (R8), AX
CRC32Q (R9), CX
CRC32Q (R10), DX
CRC32Q 8(R8), AX
CRC32Q 8(R9), CX
CRC32Q 8(R10), DX
CRC32Q 16(R8), AX
CRC32Q 16(R9), CX
CRC32Q 16(R10), DX
ADDQ $24, R8
ADDQ $24, R9
ADDQ $24, R10
DECQ R11
JNZ loop
MOVL AX, retA+96(FP)
MOVL CX, retB+100(FP)
MOVL DX, retC+104(FP)
RET
// CRC32 polynomial data
//
// These constants are lifted from the
// Linux kernel, since they avoid the costly
// PSHUFB 16 byte reversal proposed in the
// original Intel paper.
DATA r2r1<>+0(SB)/8, $0x154442bd4
DATA r2r1<>+8(SB)/8, $0x1c6e41596
DATA r4r3<>+0(SB)/8, $0x1751997d0
DATA r4r3<>+8(SB)/8, $0x0ccaa009e
DATA rupoly<>+0(SB)/8, $0x1db710641
DATA rupoly<>+8(SB)/8, $0x1f7011641
DATA r5<>+0(SB)/8, $0x163cd6124
GLOBL r2r1<>(SB), RODATA, $16
GLOBL r4r3<>(SB), RODATA, $16
GLOBL rupoly<>(SB), RODATA, $16
GLOBL r5<>(SB), RODATA, $8
// Based on https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
// len(p) must be at least 64, and must be a multiple of 16.
// func ieeeCLMUL(crc uint32, p []byte) uint32
TEXT ·ieeeCLMUL(SB), NOSPLIT, $0
MOVL crc+0(FP), X0 // Initial CRC value
MOVQ p+8(FP), SI // data pointer
MOVQ p_len+16(FP), CX // len(p)
MOVOU (SI), X1
MOVOU 16(SI), X2
MOVOU 32(SI), X3
MOVOU 48(SI), X4
PXOR X0, X1
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
CMPQ CX, $64 // Less than 64 bytes left
JB remain64
MOVOA r2r1<>+0(SB), X0
loopback64:
MOVOA X1, X5
MOVOA X2, X6
MOVOA X3, X7
MOVOA X4, X8
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0, X0, X2
PCLMULQDQ $0, X0, X3
PCLMULQDQ $0, X0, X4
// Load next early
MOVOU (SI), X11
MOVOU 16(SI), X12
MOVOU 32(SI), X13
MOVOU 48(SI), X14
PCLMULQDQ $0x11, X0, X5
PCLMULQDQ $0x11, X0, X6
PCLMULQDQ $0x11, X0, X7
PCLMULQDQ $0x11, X0, X8
PXOR X5, X1
PXOR X6, X2
PXOR X7, X3
PXOR X8, X4
PXOR X11, X1
PXOR X12, X2
PXOR X13, X3
PXOR X14, X4
ADDQ $0x40, DI
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
CMPQ CX, $64 // Less than 64 bytes left?
JGE loopback64
// Fold result into a single register (X1)
remain64:
MOVOA r4r3<>+0(SB), X0
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X2, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X3, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X4, X1
// If there is less than 16 bytes left we are done
CMPQ CX, $16
JB finish
// Encode 16 bytes
remain16:
MOVOU (SI), X10
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X10, X1
SUBQ $16, CX
ADDQ $16, SI
CMPQ CX, $16
JGE remain16
finish:
// Fold final result into 32 bits and return it
PCMPEQB X3, X3
PCLMULQDQ $1, X1, X0
PSRLDQ $8, X1
PXOR X0, X1
MOVOA X1, X2
MOVQ r5<>+0(SB), X0
// Creates 32 bit mask. Note that we don't care about upper half.
PSRLQ $32, X3
PSRLDQ $4, X2
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
MOVOA rupoly<>+0(SB), X0
MOVOA X1, X2
PAND X3, X1
PCLMULQDQ $0x10, X0, X1
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
PEXTRD $1, X1, AX
MOVL AX, ret+32(FP)
RET
DATA r2r1X<>+0(SB)/8, $0x154442bd4
DATA r2r1X<>+8(SB)/8, $0x1c6e41596
DATA r2r1X<>+16(SB)/8, $0x154442bd4
DATA r2r1X<>+24(SB)/8, $0x1c6e41596
DATA r2r1X<>+32(SB)/8, $0x154442bd4
DATA r2r1X<>+40(SB)/8, $0x1c6e41596
DATA r2r1X<>+48(SB)/8, $0x154442bd4
DATA r2r1X<>+56(SB)/8, $0x1c6e41596
GLOBL r2r1X<>(SB), RODATA, $64
// Based on https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
// len(p) must be at least 128, and must be a multiple of 16.
// func ieeeCLMULAvx512(crc uint32, p []byte) uint32
TEXT ·ieeeCLMULAvx512(SB), NOSPLIT, $0
MOVL crc+0(FP), AX // Initial CRC value
MOVQ p+8(FP), SI // data pointer
MOVQ p_len+16(FP), CX // len(p)
VPXORQ Z0, Z0, Z0
VMOVDQU64 (SI), Z1
VMOVQ AX, X0
VPXORQ Z0, Z1, Z1 // Merge initial CRC value into Z1
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
VMOVDQU64 r2r1X<>+0(SB), Z0
loopback64:
// Load next early
VMOVDQU64 (SI), Z11
VPCLMULQDQ $0x11, Z0, Z1, Z5
VPCLMULQDQ $0, Z0, Z1, Z1
VPTERNLOGD $0x96, Z11, Z5, Z1 // Combine results with xor into Z1
ADDQ $0x40, DI
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
CMPQ CX, $64 // Less than 64 bytes left?
JGE loopback64
// Fold result into a single register (X1)
remain64:
VEXTRACTF32X4 $1, Z1, X2 // X2: Second 128-bit lane
VEXTRACTF32X4 $2, Z1, X3 // X3: Third 128-bit lane
VEXTRACTF32X4 $3, Z1, X4 // X4: Fourth 128-bit lane
MOVOA r4r3<>+0(SB), X0
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X2, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X3, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X4, X1
// If there is less than 16 bytes left we are done
CMPQ CX, $16
JB finish
// Encode 16 bytes
remain16:
MOVOU (SI), X10
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X10, X1
SUBQ $16, CX
ADDQ $16, SI
CMPQ CX, $16
JGE remain16
finish:
// Fold final result into 32 bits and return it
PCMPEQB X3, X3
PCLMULQDQ $1, X1, X0
PSRLDQ $8, X1
PXOR X0, X1
MOVOA X1, X2
MOVQ r5<>+0(SB), X0
// Creates 32 bit mask. Note that we don't care about upper half.
PSRLQ $32, X3
PSRLDQ $4, X2
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
MOVOA rupoly<>+0(SB), X0
MOVOA X1, X2
PAND X3, X1
PCLMULQDQ $0x10, X0, X1
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
PEXTRD $1, X1, AX
MOVL AX, ret+32(FP)
VZEROUPPER
RET
// Castagnoli polynomial constants
DATA r2r1C<>+0(SB)/8, $0x0740eef02
DATA r2r1C<>+8(SB)/8, $0x09e4addf8
DATA r2r1C<>+16(SB)/8, $0x0740eef02
DATA r2r1C<>+24(SB)/8, $0x09e4addf8
DATA r2r1C<>+32(SB)/8, $0x0740eef02
DATA r2r1C<>+40(SB)/8, $0x09e4addf8
DATA r2r1C<>+48(SB)/8, $0x0740eef02
DATA r2r1C<>+56(SB)/8, $0x09e4addf8
GLOBL r2r1C<>(SB), RODATA, $64
DATA r4r3C<>+0(SB)/8, $0xf20c0dfe
DATA r4r3C<>+8(SB)/8, $0x14cd00bd6
DATA rupolyC<>+0(SB)/8, $0x105ec76f0
DATA rupolyC<>+8(SB)/8, $0xdea713f1
DATA r5C<>+0(SB)/8, $0xdd45aab8
GLOBL r4r3C<>(SB), RODATA, $16
GLOBL rupolyC<>(SB), RODATA, $16
GLOBL r5C<>(SB), RODATA, $8
// Based on https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
// len(p) must be at least 128, and must be a multiple of 16.
// func castagnoliCLMULAvx512(crc uint32, p []byte) uint32
TEXT ·castagnoliCLMULAvx512(SB), NOSPLIT, $0
MOVL crc+0(FP), AX // Initial CRC value
MOVQ p+8(FP), SI // data pointer
MOVQ p_len+16(FP), CX // len(p)
VPXORQ Z0, Z0, Z0
VMOVDQU64 (SI), Z1
VMOVQ AX, X0
VPXORQ Z0, Z1, Z1 // Merge initial CRC value into Z1
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
VMOVDQU64 r2r1C<>+0(SB), Z0
loopback64:
// Load next early
VMOVDQU64 (SI), Z11
VPCLMULQDQ $0x11, Z0, Z1, Z5
VPCLMULQDQ $0, Z0, Z1, Z1
VPTERNLOGD $0x96, Z11, Z5, Z1 // Combine results with xor into Z1
ADDQ $0x40, DI
ADDQ $64, SI // buf+=64
SUBQ $64, CX // len-=64
CMPQ CX, $64 // Less than 64 bytes left?
JGE loopback64
// Fold result into a single register (X1)
remain64:
VEXTRACTF32X4 $1, Z1, X2 // X2: Second 128-bit lane
VEXTRACTF32X4 $2, Z1, X3 // X3: Third 128-bit lane
VEXTRACTF32X4 $3, Z1, X4 // X4: Fourth 128-bit lane
MOVOA r4r3C<>+0(SB), X0
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X2, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X3, X1
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X4, X1
// If there is less than 16 bytes left we are done
CMPQ CX, $16
JB finish
// Encode 16 bytes
remain16:
MOVOU (SI), X10
MOVOA X1, X5
PCLMULQDQ $0, X0, X1
PCLMULQDQ $0x11, X0, X5
PXOR X5, X1
PXOR X10, X1
SUBQ $16, CX
ADDQ $16, SI
CMPQ CX, $16
JGE remain16
finish:
// Fold final result into 32 bits and return it
PCMPEQB X3, X3
PCLMULQDQ $1, X1, X0
PSRLDQ $8, X1
PXOR X0, X1
MOVOA X1, X2
MOVQ r5C<>+0(SB), X0
// Creates 32 bit mask. Note that we don't care about upper half.
PSRLQ $32, X3
PSRLDQ $4, X2
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
MOVOA rupolyC<>+0(SB), X0
MOVOA X1, X2
PAND X3, X1
PCLMULQDQ $0x10, X0, X1
PAND X3, X1
PCLMULQDQ $0, X0, X1
PXOR X2, X1
PEXTRD $1, X1, AX
MOVL AX, ret+32(FP)
VZEROUPPER
RET

50
vendor/github.com/klauspost/crc32/crc32_arm64.go generated vendored Normal file
View File

@@ -0,0 +1,50 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// ARM64-specific hardware-assisted CRC32 algorithms. See crc32.go for a
// description of the interface that each architecture-specific file
// implements.
package crc32
import "golang.org/x/sys/cpu"
func castagnoliUpdate(crc uint32, p []byte) uint32
func ieeeUpdate(crc uint32, p []byte) uint32
func archAvailableCastagnoli() bool {
return cpu.ARM64.HasCRC32
}
func archInitCastagnoli() {
if !cpu.ARM64.HasCRC32 {
panic("arch-specific crc32 instruction for Castagnoli not available")
}
}
func archUpdateCastagnoli(crc uint32, p []byte) uint32 {
if !cpu.ARM64.HasCRC32 {
panic("arch-specific crc32 instruction for Castagnoli not available")
}
return ^castagnoliUpdate(^crc, p)
}
func archAvailableIEEE() bool {
return cpu.ARM64.HasCRC32
}
func archInitIEEE() {
if !cpu.ARM64.HasCRC32 {
panic("arch-specific crc32 instruction for IEEE not available")
}
}
func archUpdateIEEE(crc uint32, p []byte) uint32 {
if !cpu.ARM64.HasCRC32 {
panic("arch-specific crc32 instruction for IEEE not available")
}
return ^ieeeUpdate(^crc, p)
}

97
vendor/github.com/klauspost/crc32/crc32_arm64.s generated vendored Normal file
View File

@@ -0,0 +1,97 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// castagnoliUpdate updates the non-inverted crc with the given data.
// func castagnoliUpdate(crc uint32, p []byte) uint32
TEXT ·castagnoliUpdate(SB), NOSPLIT, $0-36
MOVWU crc+0(FP), R9 // CRC value
MOVD p+8(FP), R13 // data pointer
MOVD p_len+16(FP), R11 // len(p)
update:
CMP $16, R11
BLT less_than_16
LDP.P 16(R13), (R8, R10)
CRC32CX R8, R9
CRC32CX R10, R9
SUB $16, R11
JMP update
less_than_16:
TBZ $3, R11, less_than_8
MOVD.P 8(R13), R10
CRC32CX R10, R9
less_than_8:
TBZ $2, R11, less_than_4
MOVWU.P 4(R13), R10
CRC32CW R10, R9
less_than_4:
TBZ $1, R11, less_than_2
MOVHU.P 2(R13), R10
CRC32CH R10, R9
less_than_2:
TBZ $0, R11, done
MOVBU (R13), R10
CRC32CB R10, R9
done:
MOVWU R9, ret+32(FP)
RET
// ieeeUpdate updates the non-inverted crc with the given data.
// func ieeeUpdate(crc uint32, p []byte) uint32
TEXT ·ieeeUpdate(SB), NOSPLIT, $0-36
MOVWU crc+0(FP), R9 // CRC value
MOVD p+8(FP), R13 // data pointer
MOVD p_len+16(FP), R11 // len(p)
update:
CMP $16, R11
BLT less_than_16
LDP.P 16(R13), (R8, R10)
CRC32X R8, R9
CRC32X R10, R9
SUB $16, R11
JMP update
less_than_16:
TBZ $3, R11, less_than_8
MOVD.P 8(R13), R10
CRC32X R10, R9
less_than_8:
TBZ $2, R11, less_than_4
MOVWU.P 4(R13), R10
CRC32W R10, R9
less_than_4:
TBZ $1, R11, less_than_2
MOVHU.P 2(R13), R10
CRC32H R10, R9
less_than_2:
TBZ $0, R11, done
MOVBU (R13), R10
CRC32B R10, R9
done:
MOVWU R9, ret+32(FP)
RET

91
vendor/github.com/klauspost/crc32/crc32_generic.go generated vendored Normal file
View File

@@ -0,0 +1,91 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This file contains CRC32 algorithms that are not specific to any architecture
// and don't use hardware acceleration.
//
// The simple (and slow) CRC32 implementation only uses a 256*4 bytes table.
//
// The slicing-by-8 algorithm is a faster implementation that uses a bigger
// table (8*256*4 bytes).
package crc32
import "encoding/binary"
// simpleMakeTable allocates and constructs a Table for the specified
// polynomial. The table is suitable for use with the simple algorithm
// (simpleUpdate).
func simpleMakeTable(poly uint32) *Table {
t := new(Table)
simplePopulateTable(poly, t)
return t
}
// simplePopulateTable constructs a Table for the specified polynomial, suitable
// for use with simpleUpdate.
func simplePopulateTable(poly uint32, t *Table) {
for i := 0; i < 256; i++ {
crc := uint32(i)
for j := 0; j < 8; j++ {
if crc&1 == 1 {
crc = (crc >> 1) ^ poly
} else {
crc >>= 1
}
}
t[i] = crc
}
}
// simpleUpdate uses the simple algorithm to update the CRC, given a table that
// was previously computed using simpleMakeTable.
func simpleUpdate(crc uint32, tab *Table, p []byte) uint32 {
crc = ^crc
for _, v := range p {
crc = tab[byte(crc)^v] ^ (crc >> 8)
}
return ^crc
}
// Use slicing-by-8 when payload >= this value.
const slicing8Cutoff = 16
// slicing8Table is array of 8 Tables, used by the slicing-by-8 algorithm.
type slicing8Table [8]Table
// slicingMakeTable constructs a slicing8Table for the specified polynomial. The
// table is suitable for use with the slicing-by-8 algorithm (slicingUpdate).
func slicingMakeTable(poly uint32) *slicing8Table {
t := new(slicing8Table)
simplePopulateTable(poly, &t[0])
for i := 0; i < 256; i++ {
crc := t[0][i]
for j := 1; j < 8; j++ {
crc = t[0][crc&0xFF] ^ (crc >> 8)
t[j][i] = crc
}
}
return t
}
// slicingUpdate uses the slicing-by-8 algorithm to update the CRC, given a
// table that was previously computed using slicingMakeTable.
func slicingUpdate(crc uint32, tab *slicing8Table, p []byte) uint32 {
if len(p) >= slicing8Cutoff {
crc = ^crc
for len(p) > 8 {
crc ^= binary.LittleEndian.Uint32(p)
crc = tab[0][p[7]] ^ tab[1][p[6]] ^ tab[2][p[5]] ^ tab[3][p[4]] ^
tab[4][crc>>24] ^ tab[5][(crc>>16)&0xFF] ^
tab[6][(crc>>8)&0xFF] ^ tab[7][crc&0xFF]
p = p[8:]
}
crc = ^crc
}
if len(p) == 0 {
return crc
}
return simpleUpdate(crc, &tab[0], p)
}
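From a caller's point of view these generic fallbacks are reached through the package's exported API, which the vendored sources suggest is hash/crc32-compatible (Table, IEEE, Castagnoli, the slicing tables). A minimal usage sketch under that assumption; the payload and split point are arbitrary:

```go
package main

import (
    "fmt"

    "github.com/klauspost/crc32"
)

func main() {
    data := []byte("some payload to checksum")
    tab := crc32.MakeTable(crc32.Castagnoli)

    // One-shot checksum; on supported CPUs this dispatches to the
    // hardware-assisted paths, otherwise to slicingUpdate/simpleUpdate.
    sum := crc32.Checksum(data, tab)

    // Streaming via Update gives the same result as the one-shot call.
    var crc uint32
    crc = crc32.Update(crc, tab, data[:10])
    crc = crc32.Update(crc, tab, data[10:])

    fmt.Println(sum == crc) // prints: true
}
```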

50
vendor/github.com/klauspost/crc32/crc32_loong64.go generated vendored Normal file
View File

@@ -0,0 +1,50 @@
// Copyright 2024 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// LoongArch64-specific hardware-assisted CRC32 algorithms. See crc32.go for a
// description of the interface that each architecture-specific file
// implements.
package crc32
import "golang.org/x/sys/cpu"
func castagnoliUpdate(crc uint32, p []byte) uint32
func ieeeUpdate(crc uint32, p []byte) uint32
func archAvailableCastagnoli() bool {
return cpu.Loong64.HasCRC32
}
func archInitCastagnoli() {
if !cpu.Loong64.HasCRC32 {
panic("arch-specific crc32 instruction for Castagnoli not available")
}
}
func archUpdateCastagnoli(crc uint32, p []byte) uint32 {
if !cpu.Loong64.HasCRC32 {
panic("arch-specific crc32 instruction for Castagnoli not available")
}
return ^castagnoliUpdate(^crc, p)
}
func archAvailableIEEE() bool {
return cpu.Loong64.HasCRC32
}
func archInitIEEE() {
if !cpu.Loong64.HasCRC32 {
panic("arch-specific crc32 instruction for IEEE not available")
}
}
func archUpdateIEEE(crc uint32, p []byte) uint32 {
if !cpu.Loong64.HasCRC32 {
panic("arch-specific crc32 instruction for IEEE not available")
}
return ^ieeeUpdate(^crc, p)
}

160
vendor/github.com/klauspost/crc32/crc32_loong64.s generated vendored Normal file
View File

@@ -0,0 +1,160 @@
// Copyright 2024 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// castagnoliUpdate updates the non-inverted crc with the given data.
// func castagnoliUpdate(crc uint32, p []byte) uint32
TEXT ·castagnoliUpdate(SB), NOSPLIT, $0-36
MOVWU crc+0(FP), R4 // a0 = CRC value
MOVV p+8(FP), R5 // a1 = data pointer
MOVV p_len+16(FP), R6 // a2 = len(p)
SGT $8, R6, R12
BNE R12, less_than_8
AND $7, R5, R12
BEQ R12, aligned
// Process the first few bytes to 8-byte align the input.
// t0 = 8 - t0. We need to process this many bytes to align.
SUB $1, R12
XOR $7, R12
AND $1, R12, R13
BEQ R13, align_2
MOVB (R5), R13
CRCCWBW R4, R13, R4
ADDV $1, R5
ADDV $-1, R6
align_2:
AND $2, R12, R13
BEQ R13, align_4
MOVH (R5), R13
CRCCWHW R4, R13, R4
ADDV $2, R5
ADDV $-2, R6
align_4:
AND $4, R12, R13
BEQ R13, aligned
MOVW (R5), R13
CRCCWWW R4, R13, R4
ADDV $4, R5
ADDV $-4, R6
aligned:
// The input is now 8-byte aligned and we can process 8-byte chunks.
SGT $8, R6, R12
BNE R12, less_than_8
MOVV (R5), R13
CRCCWVW R4, R13, R4
ADDV $8, R5
ADDV $-8, R6
JMP aligned
less_than_8:
// We may have some bytes left over; process 4 bytes, then 2, then 1.
AND $4, R6, R12
BEQ R12, less_than_4
MOVW (R5), R13
CRCCWWW R4, R13, R4
ADDV $4, R5
ADDV $-4, R6
less_than_4:
AND $2, R6, R12
BEQ R12, less_than_2
MOVH (R5), R13
CRCCWHW R4, R13, R4
ADDV $2, R5
ADDV $-2, R6
less_than_2:
BEQ R6, done
MOVB (R5), R13
CRCCWBW R4, R13, R4
done:
MOVW R4, ret+32(FP)
RET
// ieeeUpdate updates the non-inverted crc with the given data.
// func ieeeUpdate(crc uint32, p []byte) uint32
TEXT ·ieeeUpdate(SB), NOSPLIT, $0-36
MOVWU crc+0(FP), R4 // a0 = CRC value
MOVV p+8(FP), R5 // a1 = data pointer
MOVV p_len+16(FP), R6 // a2 = len(p)
SGT $8, R6, R12
BNE R12, less_than_8
AND $7, R5, R12
BEQ R12, aligned
// Process the first few bytes to 8-byte align the input.
// t0 = 8 - t0. We need to process this many bytes to align.
SUB $1, R12
XOR $7, R12
AND $1, R12, R13
BEQ R13, align_2
MOVB (R5), R13
CRCWBW R4, R13, R4
ADDV $1, R5
ADDV $-1, R6
align_2:
AND $2, R12, R13
BEQ R13, align_4
MOVH (R5), R13
CRCWHW R4, R13, R4
ADDV $2, R5
ADDV $-2, R6
align_4:
AND $4, R12, R13
BEQ R13, aligned
MOVW (R5), R13
CRCWWW R4, R13, R4
ADDV $4, R5
ADDV $-4, R6
aligned:
// The input is now 8-byte aligned and we can process 8-byte chunks.
SGT $8, R6, R12
BNE R12, less_than_8
MOVV (R5), R13
CRCWVW R4, R13, R4
ADDV $8, R5
ADDV $-8, R6
JMP aligned
less_than_8:
// We may have some bytes left over; process 4 bytes, then 2, then 1.
AND $4, R6, R12
BEQ R12, less_than_4
MOVW (R5), R13
CRCWWW R4, R13, R4
ADDV $4, R5
ADDV $-4, R6
less_than_4:
AND $2, R6, R12
BEQ R12, less_than_2
MOVH (R5), R13
CRCWHW R4, R13, R4
ADDV $2, R5
ADDV $-2, R6
less_than_2:
BEQ R6, done
MOVB (R5), R13
CRCWBW R4, R13, R4
done:
MOVW R4, ret+32(FP)
RET

15
vendor/github.com/klauspost/crc32/crc32_otherarch.go generated vendored Normal file
View File

@@ -0,0 +1,15 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !amd64 && !s390x && !ppc64le && !arm64 && !loong64
package crc32
func archAvailableIEEE() bool { return false }
func archInitIEEE() { panic("not available") }
func archUpdateIEEE(crc uint32, p []byte) uint32 { panic("not available") }
func archAvailableCastagnoli() bool { return false }
func archInitCastagnoli() { panic("not available") }
func archUpdateCastagnoli(crc uint32, p []byte) uint32 { panic("not available") }

88
vendor/github.com/klauspost/crc32/crc32_ppc64le.go generated vendored Normal file
View File

@@ -0,0 +1,88 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package crc32
import (
"unsafe"
)
const (
vecMinLen = 16
vecAlignMask = 15 // align to 16 bytes
crcIEEE = 1
crcCast = 2
)
//go:noescape
func ppc64SlicingUpdateBy8(crc uint32, table8 *slicing8Table, p []byte) uint32
// this function requires the buffer to be 16 byte aligned and > 16 bytes long.
//
//go:noescape
func vectorCrc32(crc uint32, poly uint32, p []byte) uint32
var archCastagnoliTable8 *slicing8Table
func archInitCastagnoli() {
archCastagnoliTable8 = slicingMakeTable(Castagnoli)
}
func archUpdateCastagnoli(crc uint32, p []byte) uint32 {
if len(p) >= 4*vecMinLen {
// If not aligned then process the initial unaligned bytes
if uint64(uintptr(unsafe.Pointer(&p[0])))&uint64(vecAlignMask) != 0 {
align := uint64(uintptr(unsafe.Pointer(&p[0]))) & uint64(vecAlignMask)
newlen := vecMinLen - align
crc = ppc64SlicingUpdateBy8(crc, archCastagnoliTable8, p[:newlen])
p = p[newlen:]
}
// p should be aligned now
aligned := len(p) & ^vecAlignMask
crc = vectorCrc32(crc, crcCast, p[:aligned])
p = p[aligned:]
}
if len(p) == 0 {
return crc
}
return ppc64SlicingUpdateBy8(crc, archCastagnoliTable8, p)
}
func archAvailableIEEE() bool {
return true
}
func archAvailableCastagnoli() bool {
return true
}
var archIeeeTable8 *slicing8Table
func archInitIEEE() {
// We still use slicing-by-8 for small buffers.
archIeeeTable8 = slicingMakeTable(IEEE)
}
// archUpdateIEEE calculates the checksum of p using vectorizedIEEE.
func archUpdateIEEE(crc uint32, p []byte) uint32 {
// Check if vector code should be used. If not aligned, then handle those
// first up to the aligned bytes.
if len(p) >= 4*vecMinLen {
if uint64(uintptr(unsafe.Pointer(&p[0])))&uint64(vecAlignMask) != 0 {
align := uint64(uintptr(unsafe.Pointer(&p[0]))) & uint64(vecAlignMask)
newlen := vecMinLen - align
crc = ppc64SlicingUpdateBy8(crc, archIeeeTable8, p[:newlen])
p = p[newlen:]
}
aligned := len(p) & ^vecAlignMask
crc = vectorCrc32(crc, crcIEEE, p[:aligned])
p = p[aligned:]
}
if len(p) == 0 {
return crc
}
return ppc64SlicingUpdateBy8(crc, archIeeeTable8, p)
}
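The pointer and length arithmetic in archUpdateCastagnoli and archUpdateIEEE above (unaligned prefix via the slicing tables, 16-byte-aligned bulk via vectorCrc32, then the tail) can be sketched in isolation. This is a standalone illustration, not code from the package; the helper name split and the fixed 16-byte alignment are local to the sketch:

```go
package main

import (
    "fmt"
    "unsafe"
)

const alignMask = 15 // align to 16 bytes, as in vecAlignMask above

// split cuts p into an unaligned prefix, a 16-byte-aligned bulk whose length
// is a multiple of 16, and whatever is left over.
func split(p []byte) (prefix, bulk, tail []byte) {
    if len(p) == 0 {
        return nil, nil, nil
    }
    // Bytes needed to reach the next 16-byte boundary.
    pre := 0
    if mis := int(uintptr(unsafe.Pointer(&p[0])) & alignMask); mis != 0 {
        pre = 16 - mis
        if pre > len(p) {
            pre = len(p)
        }
    }
    prefix, p = p[:pre], p[pre:]
    n := len(p) &^ alignMask // largest multiple of 16 that fits
    return prefix, p[:n], p[n:]
}

func main() {
    buf := make([]byte, 100)
    pre, bulk, tail := split(buf[3:])
    // The exact split depends on the allocation's alignment.
    fmt.Println(len(pre), len(bulk), len(tail))
}
```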

736
vendor/github.com/klauspost/crc32/crc32_ppc64le.s generated vendored Normal file
View File

@@ -0,0 +1,736 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// The vectorized implementation found below is a derived work
// from code written by Anton Blanchard <anton@au.ibm.com> found
// at https://github.com/antonblanchard/crc32-vpmsum. The original
// is dual licensed under GPL and Apache 2. As the copyright holder
// for the work, IBM has contributed this new work under
// the golang license.
// Changes include porting to Go assembler with modifications for
// the Go ABI for ppc64le.
#include "textflag.h"
#define POWER8_OFFSET 132
#define off16 R16
#define off32 R17
#define off48 R18
#define off64 R19
#define off80 R20
#define off96 R21
#define off112 R22
#define const1 V24
#define const2 V25
#define byteswap V26
#define mask_32bit V27
#define mask_64bit V28
#define zeroes V29
#define MAX_SIZE 32*1024
#define REFLECT
TEXT ·ppc64SlicingUpdateBy8(SB), NOSPLIT|NOFRAME, $0-44
MOVWZ crc+0(FP), R3 // incoming crc
MOVD table8+8(FP), R4 // *Table
MOVD p+16(FP), R5
MOVD p_len+24(FP), R6 // p len
CMP $0, R6 // len == 0?
BNE start
MOVW R3, ret+40(FP) // return crc
RET
start:
NOR R3, R3, R7 // ^crc
MOVWZ R7, R7 // 32 bits
CMP R6, $16
MOVD R6, CTR
BLT short
SRAD $3, R6, R8 // 8 byte chunks
MOVD R8, CTR
loop:
MOVWZ 0(R5), R8 // 0-3 bytes of p ?Endian?
MOVWZ 4(R5), R9 // 4-7 bytes of p
MOVD R4, R10 // &tab[0]
XOR R7, R8, R7 // crc ^= byte[0:3]
RLDICL $40, R9, $56, R17 // p[7]
SLD $2, R17, R17 // p[7]*4
RLDICL $40, R7, $56, R8 // crc>>24
SLD $2, R8, R8 // crc>>24*4
RLDICL $48, R9, $56, R18 // p[6]
SLD $2, R18, R18 // p[6]*4
MOVWZ (R10)(R17), R21 // tab[0][p[7]]
ADD $1024, R10, R10 // tab[1]
RLDICL $56, R9, $56, R19 // p[5]
SLD $2, R19, R19 // p[5]*4:1
MOVWZ (R10)(R18), R22 // tab[1][p[6]]
ADD $1024, R10, R10 // tab[2]
XOR R21, R22, R21 // xor done R22
CLRLSLDI $56, R9, $2, R20
MOVWZ (R10)(R19), R23 // tab[2][p[5]]
ADD $1024, R10, R10 // &tab[3]
XOR R21, R23, R21 // xor done R23
MOVWZ (R10)(R20), R24 // tab[3][p[4]]
ADD $1024, R10, R10 // &tab[4]
XOR R21, R24, R21 // xor done R24
MOVWZ (R10)(R8), R25 // tab[4][crc>>24]
RLDICL $48, R7, $56, R24 // crc>>16&0xFF
XOR R21, R25, R21 // xor done R25
ADD $1024, R10, R10 // &tab[5]
SLD $2, R24, R24 // crc>>16&0xFF*4
MOVWZ (R10)(R24), R26 // tab[5][crc>>16&0xFF]
XOR R21, R26, R21 // xor done R26
RLDICL $56, R7, $56, R25 // crc>>8
ADD $1024, R10, R10 // &tab[6]
SLD $2, R25, R25 // crc>>8&FF*2
MOVBZ R7, R26 // crc&0xFF
MOVWZ (R10)(R25), R27 // tab[6][crc>>8&0xFF]
ADD $1024, R10, R10 // &tab[7]
SLD $2, R26, R26 // crc&0xFF*2
XOR R21, R27, R21 // xor done R27
ADD $8, R5 // p = p[8:]
MOVWZ (R10)(R26), R28 // tab[7][crc&0xFF]
XOR R21, R28, R21 // xor done R28
MOVWZ R21, R7 // crc for next round
BDNZ loop
ANDCC $7, R6, R8 // any leftover bytes
BEQ done // none --> done
MOVD R8, CTR // byte count
PCALIGN $16 // align short loop
short:
MOVBZ 0(R5), R8 // get v
XOR R8, R7, R8 // byte(crc)^v -> R8
RLDIC $2, R8, $54, R8 // rldicl r8,r8,2,22
SRD $8, R7, R14 // crc>>8
MOVWZ (R4)(R8), R10
ADD $1, R5
XOR R10, R14, R7 // loop crc in R7
BDNZ short
done:
NOR R7, R7, R7 // ^crc
MOVW R7, ret+40(FP) // return crc
RET
#ifdef BYTESWAP_DATA
DATA ·byteswapcons+0(SB)/8, $0x0706050403020100
DATA ·byteswapcons+8(SB)/8, $0x0f0e0d0c0b0a0908
GLOBL ·byteswapcons+0(SB), RODATA, $16
#endif
TEXT ·vectorCrc32(SB), NOSPLIT|NOFRAME, $0-36
MOVWZ crc+0(FP), R3 // incoming crc
MOVWZ ctab+4(FP), R14 // crc poly id
MOVD p+8(FP), R4
MOVD p_len+16(FP), R5 // p len
// R3 = incoming crc
// R14 = constant table identifier
// R5 = address of bytes
// R6 = length of bytes
// defines for index loads
MOVD $16, off16
MOVD $32, off32
MOVD $48, off48
MOVD $64, off64
MOVD $80, off80
MOVD $96, off96
MOVD $112, off112
MOVD $0, R15
MOVD R3, R10 // save initial crc
NOR R3, R3, R3 // ^crc
MOVWZ R3, R3 // 32 bits
VXOR zeroes, zeroes, zeroes // clear the V reg
VSPLTISW $-1, V0
VSLDOI $4, V29, V0, mask_32bit
VSLDOI $8, V29, V0, mask_64bit
VXOR V8, V8, V8
MTVSRD R3, VS40 // crc initial value VS40 = V8
#ifdef REFLECT
VSLDOI $8, zeroes, V8, V8 // or: VSLDOI V29,V8,V27,4 for top 32 bits?
#else
VSLDOI $4, V8, zeroes, V8
#endif
#ifdef BYTESWAP_DATA
MOVD $·byteswapcons(SB), R3
LVX (R3), byteswap
#endif
CMPU R5, $256 // length of bytes
BLT short
RLDICR $0, R5, $56, R6 // chunk to process
// First step for larger sizes
l1:
MOVD $32768, R7
MOVD R7, R9
CMP R6, R7 // compare R6, R7 (MAX SIZE)
BGT top // less than MAX, just do remainder
MOVD R6, R7
top:
SUB R7, R6, R6
// mainloop does 128 bytes at a time
SRD $7, R7
// determine the offset into the constants table to start with.
// Each constant is 128 bytes, used against 16 bytes of data.
SLD $4, R7, R8
SRD $3, R9, R9
SUB R8, R9, R8
// The last iteration is reduced in a separate step
ADD $-1, R7
MOVD R7, CTR
// Determine which constant table (depends on poly)
CMP R14, $1
BNE castTable
MOVD $·IEEEConst(SB), R3
BR startConst
castTable:
MOVD $·CastConst(SB), R3
startConst:
ADD R3, R8, R3 // starting point in constants table
VXOR V0, V0, V0 // clear the V regs
VXOR V1, V1, V1
VXOR V2, V2, V2
VXOR V3, V3, V3
VXOR V4, V4, V4
VXOR V5, V5, V5
VXOR V6, V6, V6
VXOR V7, V7, V7
LVX (R3), const1 // loading constant values
CMP R15, $1 // Identify warm up pass
BEQ next
// First warm up pass: load the bytes to process
LVX (R4), V16
LVX (R4+off16), V17
LVX (R4+off32), V18
LVX (R4+off48), V19
LVX (R4+off64), V20
LVX (R4+off80), V21
LVX (R4+off96), V22
LVX (R4+off112), V23
ADD $128, R4 // bump up to next 128 bytes in buffer
VXOR V16, V8, V16 // xor in initial CRC in V8
next:
BC 18, 0, first_warm_up_done
ADD $16, R3 // bump up to next constants
LVX (R3), const2 // table values
VPMSUMD V16, const1, V8 // second warm up pass
LVX (R4), V16 // load from buffer
OR $0, R2, R2
VPMSUMD V17, const1, V9 // vpmsumd with constants
LVX (R4+off16), V17 // load next from buffer
OR $0, R2, R2
VPMSUMD V18, const1, V10 // vpmsumd with constants
LVX (R4+off32), V18 // load next from buffer
OR $0, R2, R2
VPMSUMD V19, const1, V11 // vpmsumd with constants
LVX (R4+off48), V19 // load next from buffer
OR $0, R2, R2
VPMSUMD V20, const1, V12 // vpmsumd with constants
LVX (R4+off64), V20 // load next from buffer
OR $0, R2, R2
VPMSUMD V21, const1, V13 // vpmsumd with constants
LVX (R4+off80), V21 // load next from buffer
OR $0, R2, R2
VPMSUMD V22, const1, V14 // vpmsumd with constants
LVX (R4+off96), V22 // load next from buffer
OR $0, R2, R2
VPMSUMD V23, const1, V15 // vpmsumd with constants
LVX (R4+off112), V23 // load next from buffer
ADD $128, R4 // bump up to next 128 bytes in buffer
BC 18, 0, first_cool_down
cool_top:
LVX (R3), const1 // constants
ADD $16, R3 // inc to next constants
OR $0, R2, R2
VXOR V0, V8, V0 // xor in previous vpmsumd
VPMSUMD V16, const2, V8 // vpmsumd with constants
LVX (R4), V16 // buffer
OR $0, R2, R2
VXOR V1, V9, V1 // xor in previous
VPMSUMD V17, const2, V9 // vpmsumd with constants
LVX (R4+off16), V17 // next in buffer
OR $0, R2, R2
VXOR V2, V10, V2 // xor in previous
VPMSUMD V18, const2, V10 // vpmsumd with constants
LVX (R4+off32), V18 // next in buffer
OR $0, R2, R2
VXOR V3, V11, V3 // xor in previous
VPMSUMD V19, const2, V11 // vpmsumd with constants
LVX (R4+off48), V19 // next in buffer
LVX (R3), const2 // get next constant
OR $0, R2, R2
VXOR V4, V12, V4 // xor in previous
VPMSUMD V20, const1, V12 // vpmsumd with constants
LVX (R4+off64), V20 // next in buffer
OR $0, R2, R2
VXOR V5, V13, V5 // xor in previous
VPMSUMD V21, const1, V13 // vpmsumd with constants
LVX (R4+off80), V21 // next in buffer
OR $0, R2, R2
VXOR V6, V14, V6 // xor in previous
VPMSUMD V22, const1, V14 // vpmsumd with constants
LVX (R4+off96), V22 // next in buffer
OR $0, R2, R2
VXOR V7, V15, V7 // xor in previous
VPMSUMD V23, const1, V15 // vpmsumd with constants
LVX (R4+off112), V23 // next in buffer
ADD $128, R4 // bump up buffer pointer
BDNZ cool_top // are we done?
first_cool_down:
// load the constants
// xor in the previous value
// vpmsumd the result with constants
LVX (R3), const1
ADD $16, R3
VXOR V0, V8, V0
VPMSUMD V16, const1, V8
OR $0, R2, R2
VXOR V1, V9, V1
VPMSUMD V17, const1, V9
OR $0, R2, R2
VXOR V2, V10, V2
VPMSUMD V18, const1, V10
OR $0, R2, R2
VXOR V3, V11, V3
VPMSUMD V19, const1, V11
OR $0, R2, R2
VXOR V4, V12, V4
VPMSUMD V20, const1, V12
OR $0, R2, R2
VXOR V5, V13, V5
VPMSUMD V21, const1, V13
OR $0, R2, R2
VXOR V6, V14, V6
VPMSUMD V22, const1, V14
OR $0, R2, R2
VXOR V7, V15, V7
VPMSUMD V23, const1, V15
OR $0, R2, R2
second_cool_down:
VXOR V0, V8, V0
VXOR V1, V9, V1
VXOR V2, V10, V2
VXOR V3, V11, V3
VXOR V4, V12, V4
VXOR V5, V13, V5
VXOR V6, V14, V6
VXOR V7, V15, V7
#ifdef REFLECT
VSLDOI $4, V0, zeroes, V0
VSLDOI $4, V1, zeroes, V1
VSLDOI $4, V2, zeroes, V2
VSLDOI $4, V3, zeroes, V3
VSLDOI $4, V4, zeroes, V4
VSLDOI $4, V5, zeroes, V5
VSLDOI $4, V6, zeroes, V6
VSLDOI $4, V7, zeroes, V7
#endif
LVX (R4), V8
LVX (R4+off16), V9
LVX (R4+off32), V10
LVX (R4+off48), V11
LVX (R4+off64), V12
LVX (R4+off80), V13
LVX (R4+off96), V14
LVX (R4+off112), V15
ADD $128, R4
VXOR V0, V8, V16
VXOR V1, V9, V17
VXOR V2, V10, V18
VXOR V3, V11, V19
VXOR V4, V12, V20
VXOR V5, V13, V21
VXOR V6, V14, V22
VXOR V7, V15, V23
MOVD $1, R15
CMP $0, R6
ADD $128, R6
BNE l1
ANDCC $127, R5
SUBC R5, $128, R6
ADD R3, R6, R3
SRD $4, R5, R7
MOVD R7, CTR
LVX (R3), V0
LVX (R3+off16), V1
LVX (R3+off32), V2
LVX (R3+off48), V3
LVX (R3+off64), V4
LVX (R3+off80), V5
LVX (R3+off96), V6
LVX (R3+off112), V7
ADD $128, R3
VPMSUMW V16, V0, V0
VPMSUMW V17, V1, V1
VPMSUMW V18, V2, V2
VPMSUMW V19, V3, V3
VPMSUMW V20, V4, V4
VPMSUMW V21, V5, V5
VPMSUMW V22, V6, V6
VPMSUMW V23, V7, V7
// now reduce the tail
CMP $0, R7
BEQ next1
LVX (R4), V16
LVX (R3), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off16), V16
LVX (R3+off16), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off32), V16
LVX (R3+off32), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off48), V16
LVX (R3+off48), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off64), V16
LVX (R3+off64), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off80), V16
LVX (R3+off80), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
BC 18, 0, next1
LVX (R4+off96), V16
LVX (R3+off96), V17
VPMSUMW V16, V17, V16
VXOR V0, V16, V0
next1:
VXOR V0, V1, V0
VXOR V2, V3, V2
VXOR V4, V5, V4
VXOR V6, V7, V6
VXOR V0, V2, V0
VXOR V4, V6, V4
VXOR V0, V4, V0
barrett_reduction:
CMP R14, $1
BNE barcstTable
MOVD $·IEEEBarConst(SB), R3
BR startbarConst
barcstTable:
MOVD $·CastBarConst(SB), R3
startbarConst:
LVX (R3), const1
LVX (R3+off16), const2
VSLDOI $8, V0, V0, V1
VXOR V0, V1, V0
#ifdef REFLECT
VSPLTISB $1, V1
VSL V0, V1, V0
#endif
VAND V0, mask_64bit, V0
#ifndef REFLECT
VPMSUMD V0, const1, V1
VSLDOI $8, zeroes, V1, V1
VPMSUMD V1, const2, V1
VXOR V0, V1, V0
VSLDOI $8, V0, zeroes, V0
#else
VAND V0, mask_32bit, V1
VPMSUMD V1, const1, V1
VAND V1, mask_32bit, V1
VPMSUMD V1, const2, V1
VXOR V0, V1, V0
VSLDOI $4, V0, zeroes, V0
#endif
MFVSRD VS32, R3 // VS32 = V0
NOR R3, R3, R3 // return ^crc
MOVW R3, ret+32(FP)
RET
first_warm_up_done:
LVX (R3), const1
ADD $16, R3
VPMSUMD V16, const1, V8
VPMSUMD V17, const1, V9
VPMSUMD V18, const1, V10
VPMSUMD V19, const1, V11
VPMSUMD V20, const1, V12
VPMSUMD V21, const1, V13
VPMSUMD V22, const1, V14
VPMSUMD V23, const1, V15
BR second_cool_down
short:
CMP $0, R5
BEQ zero
// compute short constants
CMP R14, $1
BNE castshTable
MOVD $·IEEEConst(SB), R3
ADD $4080, R3
BR startshConst
castshTable:
MOVD $·CastConst(SB), R3
ADD $4080, R3
startshConst:
SUBC R5, $256, R6 // sub from 256
ADD R3, R6, R3
// calculate where to start
SRD $4, R5, R7
MOVD R7, CTR
VXOR V19, V19, V19
VXOR V20, V20, V20
LVX (R4), V0
LVX (R3), V16
VXOR V0, V8, V0
VPMSUMW V0, V16, V0
BC 18, 0, v0
LVX (R4+off16), V1
LVX (R3+off16), V17
VPMSUMW V1, V17, V1
BC 18, 0, v1
LVX (R4+off32), V2
LVX (R3+off32), V16
VPMSUMW V2, V16, V2
BC 18, 0, v2
LVX (R4+off48), V3
LVX (R3+off48), V17
VPMSUMW V3, V17, V3
BC 18, 0, v3
LVX (R4+off64), V4
LVX (R3+off64), V16
VPMSUMW V4, V16, V4
BC 18, 0, v4
LVX (R4+off80), V5
LVX (R3+off80), V17
VPMSUMW V5, V17, V5
BC 18, 0, v5
LVX (R4+off96), V6
LVX (R3+off96), V16
VPMSUMW V6, V16, V6
BC 18, 0, v6
LVX (R4+off112), V7
LVX (R3+off112), V17
VPMSUMW V7, V17, V7
BC 18, 0, v7
ADD $128, R3
ADD $128, R4
LVX (R4), V8
LVX (R3), V16
VPMSUMW V8, V16, V8
BC 18, 0, v8
LVX (R4+off16), V9
LVX (R3+off16), V17
VPMSUMW V9, V17, V9
BC 18, 0, v9
LVX (R4+off32), V10
LVX (R3+off32), V16
VPMSUMW V10, V16, V10
BC 18, 0, v10
LVX (R4+off48), V11
LVX (R3+off48), V17
VPMSUMW V11, V17, V11
BC 18, 0, v11
LVX (R4+off64), V12
LVX (R3+off64), V16
VPMSUMW V12, V16, V12
BC 18, 0, v12
LVX (R4+off80), V13
LVX (R3+off80), V17
VPMSUMW V13, V17, V13
BC 18, 0, v13
LVX (R4+off96), V14
LVX (R3+off96), V16
VPMSUMW V14, V16, V14
BC 18, 0, v14
LVX (R4+off112), V15
LVX (R3+off112), V17
VPMSUMW V15, V17, V15
VXOR V19, V15, V19
v14:
VXOR V20, V14, V20
v13:
VXOR V19, V13, V19
v12:
VXOR V20, V12, V20
v11:
VXOR V19, V11, V19
v10:
VXOR V20, V10, V20
v9:
VXOR V19, V9, V19
v8:
VXOR V20, V8, V20
v7:
VXOR V19, V7, V19
v6:
VXOR V20, V6, V20
v5:
VXOR V19, V5, V19
v4:
VXOR V20, V4, V20
v3:
VXOR V19, V3, V19
v2:
VXOR V20, V2, V20
v1:
VXOR V19, V1, V19
v0:
VXOR V20, V0, V20
VXOR V19, V20, V0
BR barrett_reduction
zero:
// This case is the original crc, so just return it
MOVW R10, ret+32(FP)
RET

91
vendor/github.com/klauspost/crc32/crc32_s390x.go generated vendored Normal file
View File

@@ -0,0 +1,91 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package crc32
import "golang.org/x/sys/cpu"
const (
vxMinLen = 64
vxAlignMask = 15 // align to 16 bytes
)
// hasVX reports whether the machine has the z/Architecture
// vector facility installed and enabled.
var hasVX = cpu.S390X.HasVX
// vectorizedCastagnoli implements CRC32 using vector instructions.
// It is defined in crc32_s390x.s.
//
//go:noescape
func vectorizedCastagnoli(crc uint32, p []byte) uint32
// vectorizedIEEE implements CRC32 using vector instructions.
// It is defined in crc32_s390x.s.
//
//go:noescape
func vectorizedIEEE(crc uint32, p []byte) uint32
func archAvailableCastagnoli() bool {
return hasVX
}
var archCastagnoliTable8 *slicing8Table
func archInitCastagnoli() {
if !hasVX {
panic("not available")
}
// We still use slicing-by-8 for small buffers.
archCastagnoliTable8 = slicingMakeTable(Castagnoli)
}
// archUpdateCastagnoli calculates the checksum of p using
// vectorizedCastagnoli.
func archUpdateCastagnoli(crc uint32, p []byte) uint32 {
if !hasVX {
panic("not available")
}
// Use vectorized function if data length is above threshold.
if len(p) >= vxMinLen {
aligned := len(p) & ^vxAlignMask
crc = vectorizedCastagnoli(crc, p[:aligned])
p = p[aligned:]
}
if len(p) == 0 {
return crc
}
return slicingUpdate(crc, archCastagnoliTable8, p)
}
func archAvailableIEEE() bool {
return hasVX
}
var archIeeeTable8 *slicing8Table
func archInitIEEE() {
if !hasVX {
panic("not available")
}
// We still use slicing-by-8 for small buffers.
archIeeeTable8 = slicingMakeTable(IEEE)
}
// archUpdateIEEE calculates the checksum of p using vectorizedIEEE.
func archUpdateIEEE(crc uint32, p []byte) uint32 {
if !hasVX {
panic("not available")
}
// Use vectorized function if data length is above threshold.
if len(p) >= vxMinLen {
aligned := len(p) & ^vxAlignMask
crc = vectorizedIEEE(crc, p[:aligned])
p = p[aligned:]
}
if len(p) == 0 {
return crc
}
return slicingUpdate(crc, archIeeeTable8, p)
}

225
vendor/github.com/klauspost/crc32/crc32_s390x.s generated vendored Normal file
View File

@@ -0,0 +1,225 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
#include "textflag.h"
// Vector register range containing CRC-32 constants
#define CONST_PERM_LE2BE V9
#define CONST_R2R1 V10
#define CONST_R4R3 V11
#define CONST_R5 V12
#define CONST_RU_POLY V13
#define CONST_CRC_POLY V14
// The CRC-32 constant block contains reduction constants to fold and
// process particular chunks of the input data stream in parallel.
//
// Note that the constant definitions below are extended in order to compute
// intermediate results with a single VECTOR GALOIS FIELD MULTIPLY instruction.
// The rightmost doubleword can be 0 to prevent contribution to the result or
// can be multiplied by 1 to perform an XOR without the need for a separate
// VECTOR EXCLUSIVE OR instruction.
//
// The polynomials used are bit-reflected:
//
// IEEE: P'(x) = 0x0edb88320
// Castagnoli: P'(x) = 0x082f63b78
// IEEE polynomial constants
DATA ·crclecons+0(SB)/8, $0x0F0E0D0C0B0A0908 // LE-to-BE mask
DATA ·crclecons+8(SB)/8, $0x0706050403020100
DATA ·crclecons+16(SB)/8, $0x00000001c6e41596 // R2
DATA ·crclecons+24(SB)/8, $0x0000000154442bd4 // R1
DATA ·crclecons+32(SB)/8, $0x00000000ccaa009e // R4
DATA ·crclecons+40(SB)/8, $0x00000001751997d0 // R3
DATA ·crclecons+48(SB)/8, $0x0000000000000000
DATA ·crclecons+56(SB)/8, $0x0000000163cd6124 // R5
DATA ·crclecons+64(SB)/8, $0x0000000000000000
DATA ·crclecons+72(SB)/8, $0x00000001F7011641 // u'
DATA ·crclecons+80(SB)/8, $0x0000000000000000
DATA ·crclecons+88(SB)/8, $0x00000001DB710641 // P'(x) << 1
GLOBL ·crclecons(SB), RODATA, $144
// Castagnoli polynomial constants
DATA ·crcclecons+0(SB)/8, $0x0F0E0D0C0B0A0908 // LE-to-BE mask
DATA ·crcclecons+8(SB)/8, $0x0706050403020100
DATA ·crcclecons+16(SB)/8, $0x000000009e4addf8 // R2
DATA ·crcclecons+24(SB)/8, $0x00000000740eef02 // R1
DATA ·crcclecons+32(SB)/8, $0x000000014cd00bd6 // R4
DATA ·crcclecons+40(SB)/8, $0x00000000f20c0dfe // R3
DATA ·crcclecons+48(SB)/8, $0x0000000000000000
DATA ·crcclecons+56(SB)/8, $0x00000000dd45aab8 // R5
DATA ·crcclecons+64(SB)/8, $0x0000000000000000
DATA ·crcclecons+72(SB)/8, $0x00000000dea713f1 // u'
DATA ·crcclecons+80(SB)/8, $0x0000000000000000
DATA ·crcclecons+88(SB)/8, $0x0000000105ec76f0 // P'(x) << 1
GLOBL ·crcclecons(SB), RODATA, $144
// The CRC-32 function(s) use these calling conventions:
//
// Parameters:
//
// R2: Initial CRC value, typically ~0; and final CRC (return) value.
// R3: Input buffer pointer, performance might be improved if the
// buffer is on a doubleword boundary.
// R4: Length of the buffer, must be 64 bytes or greater.
//
// Register usage:
//
// R5: CRC-32 constant pool base pointer.
// V0: Initial CRC value and intermediate constants and results.
// V1..V4: Data for CRC computation.
// V5..V8: Next data chunks that are fetched from the input buffer.
//
// V9..V14: CRC-32 constants.
// func vectorizedIEEE(crc uint32, p []byte) uint32
TEXT ·vectorizedIEEE(SB), NOSPLIT, $0
MOVWZ crc+0(FP), R2 // R2 stores the CRC value
MOVD p+8(FP), R3 // data pointer
MOVD p_len+16(FP), R4 // len(p)
MOVD $·crclecons(SB), R5
BR vectorizedBody<>(SB)
// func vectorizedCastagnoli(crc uint32, p []byte) uint32
TEXT ·vectorizedCastagnoli(SB), NOSPLIT, $0
MOVWZ crc+0(FP), R2 // R2 stores the CRC value
MOVD p+8(FP), R3 // data pointer
MOVD p_len+16(FP), R4 // len(p)
// R5: crc-32 constant pool base pointer, constant is used to reduce crc
MOVD $·crcclecons(SB), R5
BR vectorizedBody<>(SB)
TEXT vectorizedBody<>(SB), NOSPLIT, $0
XOR $0xffffffff, R2 // NOTW R2
VLM 0(R5), CONST_PERM_LE2BE, CONST_CRC_POLY
// Load the initial CRC value into the rightmost word of V0
VZERO V0
VLVGF $3, R2, V0
// Crash if the input size is less than 64-bytes.
CMP R4, $64
BLT crash
// Load a 64-byte data chunk and XOR with CRC
VLM 0(R3), V1, V4 // 64-bytes into V1..V4
// Reflect the data if the CRC operation is in the bit-reflected domain
VPERM V1, V1, CONST_PERM_LE2BE, V1
VPERM V2, V2, CONST_PERM_LE2BE, V2
VPERM V3, V3, CONST_PERM_LE2BE, V3
VPERM V4, V4, CONST_PERM_LE2BE, V4
VX V0, V1, V1 // V1 ^= CRC
ADD $64, R3 // BUF = BUF + 64
ADD $(-64), R4
// Check remaining buffer size and jump to proper folding method
CMP R4, $64
BLT less_than_64bytes
fold_64bytes_loop:
// Load the next 64-byte data chunk into V5 to V8
VLM 0(R3), V5, V8
VPERM V5, V5, CONST_PERM_LE2BE, V5
VPERM V6, V6, CONST_PERM_LE2BE, V6
VPERM V7, V7, CONST_PERM_LE2BE, V7
VPERM V8, V8, CONST_PERM_LE2BE, V8
// Perform a GF(2) multiplication of the doublewords in V1 with
// the reduction constants in V0. The intermediate result is
// then folded (accumulated) with the next data chunk in V5 and
// stored in V1. Repeat this step for the register contents
// in V2, V3, and V4 respectively.
VGFMAG CONST_R2R1, V1, V5, V1
VGFMAG CONST_R2R1, V2, V6, V2
VGFMAG CONST_R2R1, V3, V7, V3
VGFMAG CONST_R2R1, V4, V8, V4
// Adjust buffer pointer and length for next loop
ADD $64, R3 // BUF = BUF + 64
ADD $(-64), R4 // LEN = LEN - 64
CMP R4, $64
BGE fold_64bytes_loop
less_than_64bytes:
// Fold V1 to V4 into a single 128-bit value in V1
VGFMAG CONST_R4R3, V1, V2, V1
VGFMAG CONST_R4R3, V1, V3, V1
VGFMAG CONST_R4R3, V1, V4, V1
// Check whether to continue with 64-bit folding
CMP R4, $16
BLT final_fold
fold_16bytes_loop:
VL 0(R3), V2 // Load next data chunk
VPERM V2, V2, CONST_PERM_LE2BE, V2
VGFMAG CONST_R4R3, V1, V2, V1 // Fold next data chunk
// Adjust buffer pointer and size for folding next data chunk
ADD $16, R3
ADD $-16, R4
// Process remaining data chunks
CMP R4, $16
BGE fold_16bytes_loop
final_fold:
VLEIB $7, $0x40, V9
VSRLB V9, CONST_R4R3, V0
VLEIG $0, $1, V0
VGFMG V0, V1, V1
VLEIB $7, $0x20, V9 // Shift by words
VSRLB V9, V1, V2 // Store remaining bits in V2
VUPLLF V1, V1 // Split rightmost doubleword
VGFMAG CONST_R5, V1, V2, V1 // V1 = (V1 * R5) XOR V2
// The input values to the Barret reduction are the degree-63 polynomial
// in V1 (R(x)), degree-32 generator polynomial, and the reduction
// constant u. The Barret reduction result is the CRC value of R(x) mod
// P(x).
//
// The Barret reduction algorithm is defined as:
//
// 1. T1(x) = floor( R(x) / x^32 ) GF2MUL u
// 2. T2(x) = floor( T1(x) / x^32 ) GF2MUL P(x)
// 3. C(x) = R(x) XOR T2(x) mod x^32
//
// Note: To compensate the division by x^32, use the vector unpack
// instruction to move the leftmost word into the leftmost doubleword
// of the vector register. The rightmost doubleword is multiplied
// with zero to not contribute to the intermediate results.
// T1(x) = floor( R(x) / x^32 ) GF2MUL u
VUPLLF V1, V2
VGFMG CONST_RU_POLY, V2, V2
// Compute the GF(2) product of the CRC polynomial in VO with T1(x) in
// V2 and XOR the intermediate result, T2(x), with the value in V1.
// The final result is in the rightmost word of V2.
VUPLLF V2, V2
VGFMAG CONST_CRC_POLY, V2, V1, V2
done:
VLGVF $2, V2, R2
XOR $0xffffffff, R2 // NOTW R2
MOVWZ R2, ret+32(FP)
RET
crash:
MOVD $0, (R0) // input size is less than 64-bytes
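The Barrett reduction spelled out in the comments above can be reproduced with plain integer code. The sketch below is not part of the package and works in the non-reflected domain (the vector code above is bit-reflected, so its constants differ), but the three steps are identical; it checks the result against straightforward polynomial long division, and all names in it are local to the sketch:

```go
package main

import (
    "fmt"
    "math/bits"
)

// clmul64 multiplies two GF(2) polynomials; the product must fit in 64 bits.
func clmul64(a, b uint64) uint64 {
    var r uint64
    for b != 0 {
        i := bits.TrailingZeros64(b)
        r ^= a << i
        b &= b - 1
    }
    return r
}

// polyMod reduces a degree-<64 polynomial r modulo the degree-32 polynomial p
// (bit 32 set) by long division.
func polyMod(r, p uint64) uint32 {
    for i := 63; i >= 32; i-- {
        if r&(1<<i) != 0 {
            r ^= p << (i - 32)
        }
    }
    return uint32(r)
}

// barrettConst returns u = floor(x^64 / p), the reduction constant.
func barrettConst(p uint64) uint64 {
    q := uint64(1) << 32   // leading quotient term x^32
    r := (p ^ 1<<32) << 32 // x^64 xor p*x^32, degree < 64
    for i := 63; i >= 32; i-- {
        if r&(1<<i) != 0 {
            q |= 1 << (i - 32)
            r ^= p << (i - 32)
        }
    }
    return q
}

// barrett computes R(x) mod P(x) with the three steps from the comment above.
func barrett(r, p, u uint64) uint32 {
    t1 := clmul64(r>>32, u)  // 1. T1(x) = floor(R(x)/x^32) GF2MUL u
    t2 := clmul64(t1>>32, p) // 2. T2(x) = floor(T1(x)/x^32) GF2MUL P(x)
    return uint32(r ^ t2)    // 3. C(x)  = R(x) xor T2(x) mod x^32
}

func main() {
    const p = uint64(0x104C11DB7) // non-reflected IEEE polynomial P(x)
    u := barrettConst(p)
    r := uint64(0x0123456789ABCDEF) // arbitrary degree-<64 input R(x)
    fmt.Printf("%08x %08x\n", barrett(r, p, u), polyMod(r, p)) // both values match
}
```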

3285
vendor/github.com/klauspost/crc32/crc32_table_ppc64le.s generated vendored Normal file
View File

File diff suppressed because it is too large

7
vendor/github.com/klauspost/crc32/gen.go generated vendored Normal file
View File

@@ -0,0 +1,7 @@
// Copyright 2023 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen_const_ppc64le.go
package crc32

View File

@@ -125,14 +125,19 @@ func update(crc uint64, p []byte) uint64 {
p = p[align:]
}
runs := len(p) / 128
crc = updateAsm(crc, p[:128*runs])
if hasAsm512 && runs >= 8 {
// Use 512-bit wide instructions for >= 1KB.
crc = updateAsm512(crc, p[:128*runs])
} else {
crc = updateAsm(crc, p[:128*runs])
}
return update(crc, p[128*runs:])
}
buildSlicing8TablesOnce()
crc = ^crc
// table comparison is somewhat expensive, so avoid it for small sizes
for len(p) >= 64 {
if len(p) >= 64 {
var helperTable = slicing8TableNVME
// Update using slicing-by-8
for len(p) > 8 {

View File

@@ -11,5 +11,7 @@ import (
)
var hasAsm = cpuid.CPU.Supports(cpuid.SSE2, cpuid.CLMUL, cpuid.SSE4)
var hasAsm512 = cpuid.CPU.Supports(cpuid.AVX512F, cpuid.VPCLMULQDQ, cpuid.AVX512VL, cpuid.CLMUL)
func updateAsm(crc uint64, p []byte) (checksum uint64)
func updateAsm512(crc uint64, p []byte) (checksum uint64)

View File

@@ -155,3 +155,153 @@ skip128:
NOTQ AX
MOVQ AX, checksum+32(FP)
RET
// Constants, pre-splatted.
DATA ·asmConstantsPoly<>+0x00(SB)/8, $0xa1ca681e733f9c40
DATA ·asmConstantsPoly<>+0x08(SB)/8, $0
DATA ·asmConstantsPoly<>+0x10(SB)/8, $0xa1ca681e733f9c40
DATA ·asmConstantsPoly<>+0x18(SB)/8, $0
DATA ·asmConstantsPoly<>+0x20(SB)/8, $0xa1ca681e733f9c40
DATA ·asmConstantsPoly<>+0x28(SB)/8, $0
DATA ·asmConstantsPoly<>+0x30(SB)/8, $0xa1ca681e733f9c40
DATA ·asmConstantsPoly<>+0x38(SB)/8, $0
// Upper
DATA ·asmConstantsPoly<>+0x40(SB)/8, $0
DATA ·asmConstantsPoly<>+0x48(SB)/8, $0x5f852fb61e8d92dc
DATA ·asmConstantsPoly<>+0x50(SB)/8, $0
DATA ·asmConstantsPoly<>+0x58(SB)/8, $0x5f852fb61e8d92dc
DATA ·asmConstantsPoly<>+0x60(SB)/8, $0
DATA ·asmConstantsPoly<>+0x68(SB)/8, $0x5f852fb61e8d92dc
DATA ·asmConstantsPoly<>+0x70(SB)/8, $0
DATA ·asmConstantsPoly<>+0x78(SB)/8, $0x5f852fb61e8d92dc
GLOBL ·asmConstantsPoly<>(SB), (NOPTR+RODATA), $128
TEXT ·updateAsm512(SB), $0-40
MOVQ crc+0(FP), AX // checksum
MOVQ p_base+8(FP), SI // start pointer
MOVQ p_len+16(FP), CX // length of buffer
NOTQ AX
SHRQ $7, CX
CMPQ CX, $1
VPXORQ Z8, Z8, Z8 // Initialize ZMM8 to zero
JLT skip128
VMOVDQU64 0x00(SI), Z0
VMOVDQU64 0x40(SI), Z4
MOVQ $·asmConstantsPoly<>(SB), BX
VMOVQ AX, X8
// XOR initialization value into lower 64 bits of ZMM0
VPXORQ Z8, Z0, Z0
CMPQ CX, $1
JE tail128
VMOVDQU64 0(BX), Z8
VMOVDQU64 64(BX), Z9
PCALIGN $16
loop128:
VMOVDQU64 0x80(SI), Z1
VMOVDQU64 0xc0(SI), Z5
ADDQ $128, SI
SUBQ $1, CX
VPCLMULQDQ $0x00, Z8, Z0, Z10
VPCLMULQDQ $0x11, Z9, Z0, Z0
VPTERNLOGD $0x96, Z1, Z10, Z0 // Combine results with xor into Z0
VPCLMULQDQ $0x00, Z8, Z4, Z10
VPCLMULQDQ $0x11, Z9, Z4, Z4
VPTERNLOGD $0x96, Z5, Z10, Z4 // Combine results with xor into Z4
CMPQ CX, $1
JGT loop128
tail128:
// Extract X0 to X3 from ZMM0
VEXTRACTF32X4 $1, Z0, X1 // X1: Second 128-bit lane
VEXTRACTF32X4 $2, Z0, X2 // X2: Third 128-bit lane
VEXTRACTF32X4 $3, Z0, X3 // X3: Fourth 128-bit lane
// Extract X4 to X7 from ZMM4
VEXTRACTF32X4 $1, Z4, X5 // X5: Second 128-bit lane
VEXTRACTF32X4 $2, Z4, X6 // X6: Third 128-bit lane
VEXTRACTF32X4 $3, Z4, X7 // X7: Fourth 128-bit lane
MOVQ $0xd083dd594d96319d, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X0, X11
MOVQ $0x946588403d4adcbc, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X0
PXOR X11, X7
PXOR X0, X7
MOVQ $0x3c255f5ebc414423, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X1, X11
MOVQ $0x34f5a24e22d66e90, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X1
PXOR X11, X1
PXOR X7, X1
MOVQ $0x7b0ab10dd0f809fe, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X2, X11
MOVQ $0x03363823e6e791e5, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X2
PXOR X11, X2
PXOR X1, X2
MOVQ $0x0c32cdb31e18a84a, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X3, X11
MOVQ $0x62242240ace5045a, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X3
PXOR X11, X3
PXOR X2, X3
MOVQ $0xbdd7ac0ee1a4a0f0, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X4, X11
MOVQ $0xa3ffdc1fe8e82a8b, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X4
PXOR X11, X4
PXOR X3, X4
MOVQ $0xb0bc2e589204f500, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X5, X11
MOVQ $0xe1e0bb9d45d7a44c, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X5
PXOR X11, X5
PXOR X4, X5
MOVQ $0xeadc41fd2ba3d420, AX
MOVQ AX, X11
PCLMULQDQ $0x00, X6, X11
MOVQ $0x21e9761e252621ac, AX
PINSRQ $0x1, AX, X12
PCLMULQDQ $0x11, X12, X6
PXOR X11, X6
PXOR X5, X6
MOVQ AX, X5
PCLMULQDQ $0x00, X6, X5
PSHUFD $0xee, X6, X6
PXOR X5, X6
MOVQ $0x27ecfa329aef9f77, AX
MOVQ AX, X4
PCLMULQDQ $0x00, X4, X6
PEXTRQ $0, X6, BX
MOVQ $0x34d926535897936b, AX
MOVQ AX, X4
PCLMULQDQ $0x00, X4, X6
PXOR X5, X6
PEXTRQ $1, X6, AX
XORQ BX, AX
skip128:
NOTQ AX
MOVQ AX, checksum+32(FP)
VZEROUPPER
RET

View File

@@ -11,5 +11,7 @@ import (
)
var hasAsm = cpuid.CPU.Supports(cpuid.ASIMD, cpuid.PMULL, cpuid.SHA3)
var hasAsm512 = false
func updateAsm(crc uint64, p []byte) (checksum uint64)
func updateAsm512(crc uint64, p []byte) (checksum uint64) { panic("should not be reached") }

View File

@@ -7,5 +7,7 @@
package crc64nvme
var hasAsm = false
var hasAsm512 = false
func updateAsm(crc uint64, p []byte) (checksum uint64) { panic("should not be reached") }
func updateAsm512(crc uint64, p []byte) (checksum uint64) { panic("should not be reached") }

125
vendor/github.com/minio/minio-go/v7/CLAUDE.md generated vendored Normal file
View File

@@ -0,0 +1,125 @@
CLAUDE.md
=========
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Commands
--------
### Testing
```bash
# Run all tests with race detection (requires MinIO server at localhost:9000)
SERVER_ENDPOINT=localhost:9000 ACCESS_KEY=minioadmin SECRET_KEY=minioadmin ENABLE_HTTPS=1 MINT_MODE=full go test -race -v ./...
# Run tests without race detection
go test ./...
# Run short tests only (no functional tests)
go test -short -race ./...
# Run functional tests
go build -race functional_tests.go
SERVER_ENDPOINT=localhost:9000 ACCESS_KEY=minioadmin SECRET_KEY=minioadmin ENABLE_HTTPS=1 MINT_MODE=full ./functional_tests
# Run functional tests without TLS
SERVER_ENDPOINT=localhost:9000 ACCESS_KEY=minioadmin SECRET_KEY=minioadmin ENABLE_HTTPS=0 MINT_MODE=full ./functional_tests
```
### Linting and Code Quality
```bash
# Run all checks (lint, vet, test, examples, functional tests)
make checks
# Run linter only
make lint
# Run vet and staticcheck
make vet
# Alternative: run golangci-lint directly
golangci-lint run --timeout=5m --config ./.golangci.yml
```
### Building Examples
```bash
# Build all examples
make examples
# Build a specific example
cd examples/s3 && go build -mod=mod putobject.go
```
Architecture
------------
### Core Client Structure
The MinIO Go SDK is organized around a central `Client` struct (api.go:52) that implements Amazon S3 compatible methods. Key architectural patterns:
1. **Modular API Organization**: API methods are split into logical files:
- `api-bucket-*.go`: Bucket operations (lifecycle, encryption, versioning, etc.)
- `api-object-*.go`: Object operations (legal hold, retention, tagging, etc.)
- `api-get-*.go`, `api-put-*.go`: GET and PUT operations
- `api-list.go`: Listing operations
- `api-stat.go`: Status/info operations
2. **Credential Management**: The `pkg/credentials/` package provides various credential providers:
- Static credentials
- Environment variables (AWS/MinIO)
- IAM roles
- STS (Security Token Service) variants
- File-based credentials
- Chain provider for fallback mechanisms
3. **Request Signing**: The `pkg/signer/` package handles AWS signature versions:
- V2 signatures (legacy)
- V4 signatures (standard)
- Streaming signatures for large uploads
4. **Transport Layer**: Custom HTTP transport with:
- Retry logic with configurable max retries
- Health status monitoring
- Tracing support via httptrace
- Bucket location caching (`bucketLocCache`)
- Session caching for credentials
5. **Helper Packages**:
- `pkg/encrypt/`: Server-side encryption utilities
- `pkg/notification/`: Event notification handling
- `pkg/policy/`: Bucket policy management
- `pkg/lifecycle/`: Object lifecycle rules
- `pkg/tags/`: Object and bucket tagging
- `pkg/s3utils/`: S3 utility functions
- `pkg/kvcache/`: Key-value caching
- `pkg/singleflight/`: Deduplication of concurrent requests
### Testing Strategy
- Unit tests alongside implementation files (`*_test.go`)
- Comprehensive functional tests in `functional_tests.go` requiring a live MinIO server
- Example programs in `examples/` directory demonstrating API usage
- Build tag `//go:build mint` for integration tests
### Error Handling
- Custom error types in `api-error-response.go`
- HTTP status code mapping
- Retry logic for transient failures
- Detailed error context preservation
Important Patterns
------------------
1. **Context Usage**: All API methods accept `context.Context` for cancellation and timeout control
2. **Options Pattern**: Methods use Options structs for optional parameters (e.g., `PutObjectOptions`, `GetObjectOptions`); see the sketch after this list
3. **Streaming Support**: Large file operations use io.Reader/Writer interfaces for memory efficiency
4. **Bucket Lookup Types**: Supports both path-style and virtual-host-style S3 URLs
5. **MD5/SHA256 Hashing**: Configurable hash functions for integrity checks via `md5Hasher` and `sha256Hasher`
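A minimal sketch tying these patterns together (context propagation, the per-call Options struct, streaming via io.Reader), assuming the v7 API as vendored here; the endpoint, credentials, bucket and object names are placeholders:

```go
package main

import (
    "context"
    "log"
    "strings"
    "time"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    client, err := minio.New("localhost:9000", &minio.Options{
        Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
        Secure: false,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Every API call takes a context for cancellation and timeouts.
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    body := strings.NewReader("hello")
    info, err := client.PutObject(ctx, "my-bucket", "hello.txt", body, int64(body.Len()),
        minio.PutObjectOptions{ContentType: "text/plain"})
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("uploaded %s (%d bytes, etag %s)", info.Key, info.Size, info.ETag)
}
```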

View File

@@ -1,22 +1,23 @@
### Developer Guidelines
``minio-go`` welcomes your contribution. To make the process as seamless as possible, we ask for the following:
`minio-go` welcomes your contribution. To make the process as seamless as possible, we ask for the following:
* Go ahead and fork the project and make your changes. We encourage pull requests to discuss code changes.
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
- Go ahead and fork the project and make your changes. We encourage pull requests to discuss code changes.
* When you're ready to create a pull request, be sure to:
- Have test cases for the new code. If you have questions about how to do it, please ask in your pull request.
- Run `go fmt`
- Squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- Make sure `go test -race ./...` and `go build` completes.
NOTE: go test runs functional tests and requires you to have a AWS S3 account. Set them as environment variables
``ACCESS_KEY`` and ``SECRET_KEY``. To run shorter version of the tests please use ``go test -short -race ./...``
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
* Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project
- `minio-go` project is strictly conformant with Golang style
- if you happen to observe offending code, please feel free to send a pull request
- When you're ready to create a pull request, be sure to:
- Have test cases for the new code. If you have questions about how to do it, please ask in your pull request.
- Run `go fmt`
- Squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- Make sure `go test -race ./...` and `go build` completes. NOTE: go test runs functional tests and requires you to have a AWS S3 account. Set them as environment variables`ACCESS_KEY` and `SECRET_KEY`. To run shorter version of the tests please use `go test -short -race ./...`
- Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project
- `minio-go` project is strictly conformant with Golang style
- if you happen to observe offending code, please feel free to send a pull request

View File

@@ -1,11 +1,15 @@
For maintainers only
====================
Responsibilities
----------------
Please read through [Maintainer Responsibility](https://gist.github.com/abperiasamy/f4d9b31d3186bbd26522)
### Making new releases
Tag and sign your release commit. Note that this step requires access to MinIO's trusted private key.
```sh
$ export GNUPGHOME=/media/${USER}/minio/trusted
$ git tag -s 4.0.0
@@ -14,6 +18,7 @@ $ git push --tags
```
### Update version
Once the release has been made, update the `libraryVersion` constant in `api.go` to the next release version.
```sh
@@ -22,14 +27,17 @@ $ grep libraryVersion api.go
```
Commit your changes
```
$ git commit -a -m "Update version for next release" --author "MinIO Trusted <trusted@min.io>"
```
### Announce
Announce the new release by adding release notes at https://github.com/minio/minio-go/releases from the `trusted@min.io` account. Release notes require two sections, `highlights` and `changelog`. Highlights is a bulleted list of salient features in this release, and Changelog contains the list of all commits since the last release.
To generate `changelog`
```sh
$ git log --no-color --pretty=format:'-%d %s (%cr) <%an>' <last_release_tag>..<latest_release_tag>
```

View File

@@ -1,13 +1,14 @@
MinIO Go Client SDK for Amazon S3 Compatible Cloud Storage [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Sourcegraph](https://sourcegraph.com/github.com/minio/minio-go/-/badge.svg)](https://sourcegraph.com/github.com/minio/minio-go?badge) [![Apache V2 License](https://img.shields.io/badge/license-Apache%20V2-blue.svg)](https://github.com/minio/minio-go/blob/master/LICENSE)
==================================================================================================================================================================================================================================================================================================================================================================================================================
The MinIO Go Client SDK provides straightforward APIs to access any Amazon S3 compatible object storage.
This Quickstart Guide covers how to install the MinIO client SDK, connect to MinIO, and create a sample file uploader. For a complete list of APIs and examples, see the [godoc documentation](https://pkg.go.dev/github.com/minio/minio-go/v7) or [Go Client API Reference](https://min.io/docs/minio/linux/developers/go/API.html).
These examples presume a working [Go development environment](https://golang.org/doc/install) and the [MinIO `mc` command line tool](https://min.io/docs/minio/linux/reference/minio-mc.html).
Download from Github
--------------------
From your project directory:
@@ -15,12 +16,13 @@ From your project directory:
go get github.com/minio/minio-go/v7
```
Initialize a MinIO Client Object
--------------------------------
The MinIO client requires the following parameters to connect to an Amazon S3 compatible object storage:
| Parameter | Description |
|-------------------|------------------------------------------------------------|
| `endpoint` | URL to object storage service. |
| `_minio.Options_` | All the options such as credentials, custom transport etc. |
@@ -53,83 +55,82 @@ func main() {
}
```
Example - File Uploader
-----------------------
This sample code connects to an object storage server, creates a bucket, and uploads a file to the bucket. It uses the MinIO `play` server, a public MinIO cluster located at [https://play.min.io](https://play.min.io).
The `play` server runs the latest stable version of MinIO and may be used for testing and development. The access credentials shown in this example are open to the public and all data uploaded to `play` should be considered public and non-protected.
### FileUploader.go
This example does the following:
- Connects to the MinIO `play` server using the provided credentials.
- Creates a bucket named `testbucket`.
- Uploads a file named `testdata` from `/tmp`.
- Verifies the file was created using `mc ls`.
```go
// FileUploader.go MinIO example
package main

import (
    "context"
    "log"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    ctx := context.Background()
    endpoint := "play.min.io"
    accessKeyID := "Q3AM3UQ867SPQQA43P2F"
    secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
    useSSL := true

    // Initialize minio client object.
    minioClient, err := minio.New(endpoint, &minio.Options{
        Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
        Secure: useSSL,
    })
    if err != nil {
        log.Fatalln(err)
    }

    // Make a new bucket called testbucket.
    bucketName := "testbucket"
    location := "us-east-1"

    err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
    if err != nil {
        // Check to see if we already own this bucket (which happens if you run this twice)
        exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
        if errBucketExists == nil && exists {
            log.Printf("We already own %s\n", bucketName)
        } else {
            log.Fatalln(err)
        }
    } else {
        log.Printf("Successfully created %s\n", bucketName)
    }

    // Upload the test file
    // Change the value of filePath if the file is in another location
    objectName := "testdata"
    filePath := "/tmp/testdata"
    contentType := "application/octet-stream"

    // Upload the test file with FPutObject
    info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType})
    if err != nil {
        log.Fatalln(err)
    }

    log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size)
}
```
**1. Create a test file containing data:**
@@ -168,145 +169,150 @@ mc ls play/testbucket
[2023-11-01 14:27:55 UTC] 20KiB STANDARD TestDataFile
```
API Reference
-------------
The full API Reference is available here.
- [Complete API Reference](https://min.io/docs/minio/linux/developers/go/API.html)
### API Reference : Bucket Operations
- [`MakeBucket`](https://min.io/docs/minio/linux/developers/go/API.html#MakeBucket)
- [`ListBuckets`](https://min.io/docs/minio/linux/developers/go/API.html#ListBuckets)
- [`BucketExists`](https://min.io/docs/minio/linux/developers/go/API.html#BucketExists)
- [`RemoveBucket`](https://min.io/docs/minio/linux/developers/go/API.html#RemoveBucket)
- [`ListObjects`](https://min.io/docs/minio/linux/developers/go/API.html#ListObjects)
- [`ListIncompleteUploads`](https://min.io/docs/minio/linux/developers/go/API.html#ListIncompleteUploads)
### API Reference : Bucket policy Operations
- [`SetBucketPolicy`](https://min.io/docs/minio/linux/developers/go/API.html#SetBucketPolicy)
- [`GetBucketPolicy`](https://min.io/docs/minio/linux/developers/go/API.html#GetBucketPolicy)
### API Reference : Bucket notification Operations
- [`SetBucketNotification`](https://min.io/docs/minio/linux/developers/go/API.html#SetBucketNotification)
- [`GetBucketNotification`](https://min.io/docs/minio/linux/developers/go/API.html#GetBucketNotification)
- [`RemoveAllBucketNotification`](https://min.io/docs/minio/linux/developers/go/API.html#RemoveAllBucketNotification)
- [`ListenBucketNotification`](https://min.io/docs/minio/linux/developers/go/API.html#ListenBucketNotification) (MinIO Extension)
- [`ListenNotification`](https://min.io/docs/minio/linux/developers/go/API.html#ListenNotification) (MinIO Extension)
### API Reference : File Object Operations
- [`FPutObject`](https://min.io/docs/minio/linux/developers/go/API.html#FPutObject)
- [`FGetObject`](https://min.io/docs/minio/linux/developers/go/API.html#FGetObject)
### API Reference : Object Operations
- [`GetObject`](https://min.io/docs/minio/linux/developers/go/API.html#GetObject)
- [`PutObject`](https://min.io/docs/minio/linux/developers/go/API.html#PutObject)
- [`PutObjectStreaming`](https://min.io/docs/minio/linux/developers/go/API.html#PutObjectStreaming)
- [`StatObject`](https://min.io/docs/minio/linux/developers/go/API.html#StatObject)
- [`CopyObject`](https://min.io/docs/minio/linux/developers/go/API.html#CopyObject)
- [`RemoveObject`](https://min.io/docs/minio/linux/developers/go/API.html#RemoveObject)
- [`RemoveObjects`](https://min.io/docs/minio/linux/developers/go/API.html#RemoveObjects)
- [`RemoveIncompleteUpload`](https://min.io/docs/minio/linux/developers/go/API.html#RemoveIncompleteUpload)
- [`SelectObjectContent`](https://min.io/docs/minio/linux/developers/go/API.html#SelectObjectContent)
### API Reference : Presigned Operations
- [`PresignedGetObject`](https://min.io/docs/minio/linux/developers/go/API.html#PresignedGetObject)
- [`PresignedPutObject`](https://min.io/docs/minio/linux/developers/go/API.html#PresignedPutObject)
- [`PresignedHeadObject`](https://min.io/docs/minio/linux/developers/go/API.html#PresignedHeadObject)
- [`PresignedPostPolicy`](https://min.io/docs/minio/linux/developers/go/API.html#PresignedPostPolicy)
### API Reference : Client custom settings
- [`SetAppInfo`](https://min.io/docs/minio/linux/developers/go/API.html#SetAppInfo)
- [`TraceOn`](https://min.io/docs/minio/linux/developers/go/API.html#TraceOn)
- [`TraceOff`](https://min.io/docs/minio/linux/developers/go/API.html#TraceOff)
Full Examples
-------------
### Full Examples : Bucket Operations
- [makebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/makebucket.go)
- [listbuckets.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbuckets.go)
- [bucketexists.go](https://github.com/minio/minio-go/blob/master/examples/s3/bucketexists.go)
- [removebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucket.go)
- [listobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjects.go)
- [listobjectsV2.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjectsV2.go)
- [listincompleteuploads.go](https://github.com/minio/minio-go/blob/master/examples/s3/listincompleteuploads.go)
### Full Examples : Bucket policy Operations
- [setbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketpolicy.go)
- [getbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketpolicy.go)
- [listbucketpolicies.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbucketpolicies.go)
### Full Examples : Bucket lifecycle Operations
- [setbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketlifecycle.go)
- [getbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketlifecycle.go)
### Full Examples : Bucket encryption Operations
- [setbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketencryption.go)
- [getbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketencryption.go)
- [removebucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucketencryption.go)
### Full Examples : Bucket replication Operations
- [setbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketreplication.go)
- [getbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketreplication.go)
- [removebucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucketreplication.go)
### Full Examples : Bucket notification Operations
- [setbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketnotification.go)
- [getbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketnotification.go)
- [removeallbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeallbucketnotification.go)
- [listenbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listenbucketnotification.go) (MinIO Extension)
- [listennotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listen-notification.go) (MinIO Extension)
### Full Examples : File Object Operations
- [fputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputobject.go)
- [fgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fgetobject.go)
### Full Examples : Object Operations
- [putobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/putobject.go)
- [getobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/getobject.go)
- [statobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/statobject.go)
- [copyobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/copyobject.go)
- [removeobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobject.go)
- [removeincompleteupload.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeincompleteupload.go)
- [removeobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobjects.go)
### Full Examples : Encrypted Object Operations
- [put-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/put-encrypted-object.go)
- [get-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/get-encrypted-object.go)
- [fput-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputencrypted-object.go)
### Full Examples : Presigned Operations
- [presignedgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedgetobject.go)
- [presignedputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedputobject.go)
- [presignedheadobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedheadobject.go)
- [presignedpostpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedpostpolicy.go)
Explore Further
---------------
- [Godoc Documentation](https://pkg.go.dev/github.com/minio/minio-go/v7)
- [Complete Documentation](https://min.io/docs/minio/kubernetes/upstream/index.html)
- [MinIO Go Client SDK API Reference](https://min.io/docs/minio/linux/developers/go/API.html)
Contribute
----------
[Contributors Guide](https://github.com/minio/minio-go/blob/master/CONTRIBUTING.md)
License
-------
This SDK is distributed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0), see [LICENSE](https://github.com/minio/minio-go/blob/master/LICENSE) and [NOTICE](https://github.com/minio/minio-go/blob/master/NOTICE) for more information.

View File

@@ -26,7 +26,15 @@ import (
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketCors sets the cors configuration for the bucket
// SetBucketCors sets the Cross-Origin Resource Sharing (CORS) configuration for the bucket.
// If corsConfig is nil, the existing CORS configuration will be removed.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - corsConfig: CORS configuration to apply (nil to remove existing configuration)
//
// Returns an error if the operation fails.
func (c *Client) SetBucketCors(ctx context.Context, bucketName string, corsConfig *cors.Config) error {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
@@ -90,7 +98,14 @@ func (c *Client) removeBucketCors(ctx context.Context, bucketName string) error
return nil
}
// GetBucketCors returns the current cors
// GetBucketCors retrieves the Cross-Origin Resource Sharing (CORS) configuration from the bucket.
// If no CORS configuration exists, returns nil with no error.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the CORS configuration or an error if the operation fails.
func (c *Client) GetBucketCors(ctx context.Context, bucketName string) (*cors.Config, error) {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
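A short usage sketch for the two CORS calls documented above; the bucket name is a placeholder and the client/context setup is assumed.

```go
// Sketch: read the bucket's CORS configuration and clear it if one is set.
func clearCors(ctx context.Context, c *minio.Client) error {
    cfg, err := c.GetBucketCors(ctx, "example-bucket")
    if err != nil {
        return err
    }
    if cfg == nil {
        return nil // no CORS configuration present
    }
    // Per the doc comment above, passing nil removes the existing configuration.
    return c.SetBucketCors(ctx, "example-bucket", nil)
}
```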

View File

@@ -28,6 +28,14 @@ import (
)
// SetBucketEncryption sets the default encryption configuration on an existing bucket.
// The encryption configuration specifies the default encryption behavior for objects uploaded to the bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - config: Server-side encryption configuration to apply
//
// Returns an error if the operation fails or if config is nil.
func (c *Client) SetBucketEncryption(ctx context.Context, bucketName string, config *sse.Configuration) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -69,7 +77,15 @@ func (c *Client) SetBucketEncryption(ctx context.Context, bucketName string, con
return nil
}
// RemoveBucketEncryption removes the default encryption configuration on a bucket with a context to control cancellations and timeouts.
// RemoveBucketEncryption removes the default encryption configuration from a bucket.
// After removal, the bucket will no longer apply default encryption to new objects.
// It uses the provided context to control cancellations and timeouts.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns an error if the operation fails.
func (c *Client) RemoveBucketEncryption(ctx context.Context, bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -97,8 +113,14 @@ func (c *Client) RemoveBucketEncryption(ctx context.Context, bucketName string)
return nil
}
// GetBucketEncryption gets the default encryption configuration
// on an existing bucket with a context to control cancellations and timeouts.
// GetBucketEncryption retrieves the default encryption configuration from a bucket.
// It uses the provided context to control cancellations and timeouts.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the bucket's encryption configuration or an error if the operation fails.
func (c *Client) GetBucketEncryption(ctx context.Context, bucketName string) (*sse.Configuration, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
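A hedged usage sketch for these three encryption calls, assuming the `pkg/sse` helper `sse.NewConfigurationSSES3()` for an SSE-S3 default; the bucket name is a placeholder.

```go
// Sketch: enable SSE-S3 default encryption, read it back, then remove it.
func cycleBucketEncryption(ctx context.Context, c *minio.Client) error {
    if err := c.SetBucketEncryption(ctx, "example-bucket", sse.NewConfigurationSSES3()); err != nil {
        return err
    }
    if _, err := c.GetBucketEncryption(ctx, "example-bucket"); err != nil {
        return err
    }
    return c.RemoveBucketEncryption(ctx, "example-bucket")
}
```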

View File

@@ -21,12 +21,12 @@ import (
"bufio"
"bytes"
"context"
"encoding/json"
"encoding/xml"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/internal/json"
"github.com/minio/minio-go/v7/pkg/notification"
"github.com/minio/minio-go/v7/pkg/s3utils"
)

View File

@@ -26,7 +26,16 @@ import (
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketPolicy sets the access permissions on an existing bucket.
// SetBucketPolicy sets the access permissions policy on an existing bucket.
// The policy should be a valid JSON string that conforms to the IAM policy format.
// If policy is an empty string, the existing bucket policy will be removed.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - policy: JSON policy string (empty string to remove existing policy)
//
// Returns an error if the operation fails.
func (c *Client) SetBucketPolicy(ctx context.Context, bucketName, policy string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -95,7 +104,14 @@ func (c *Client) removeBucketPolicy(ctx context.Context, bucketName string) erro
return nil
}
// GetBucketPolicy returns the current policy
// GetBucketPolicy retrieves the access permissions policy for the bucket.
// If no bucket policy exists, returns an empty string with no error.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the policy as a JSON string or an error if the operation fails.
func (c *Client) GetBucketPolicy(ctx context.Context, bucketName string) (string, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
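A sketch of setting and then clearing a bucket policy; the JSON below is a generic read-only policy for illustration, not one shipped with this library.

```go
// Sketch: grant anonymous read access, then remove the policy again.
func togglePolicy(ctx context.Context, c *minio.Client) error {
    policy := `{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"]
      }]
    }`
    if err := c.SetBucketPolicy(ctx, "example-bucket", policy); err != nil {
        return err
    }
    // Per the doc comment above, an empty string removes the existing policy.
    return c.SetBucketPolicy(ctx, "example-bucket", "")
}
```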

vendor/github.com/minio/minio-go/v7/api-bucket-qos.go generated vendored Normal file
View File

@@ -0,0 +1,212 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2025 MinIO, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"encoding/json"
"io"
"net/http"
"net/url"
"strings"
"github.com/minio/minio-go/v7/pkg/s3utils"
"gopkg.in/yaml.v3"
)
// QOSConfigVersionCurrent is the current version of the QoS configuration.
const QOSConfigVersionCurrent = "v1"
// QOSConfig represents the QoS configuration for a bucket.
type QOSConfig struct {
Version string `yaml:"version"`
Rules []QOSRule `yaml:"rules"`
}
// QOSRule represents a single QoS rule.
type QOSRule struct {
ID string `yaml:"id"`
Label string `yaml:"label,omitempty"`
Priority int `yaml:"priority"`
ObjectPrefix string `yaml:"objectPrefix"`
API string `yaml:"api"`
Rate int64 `yaml:"rate"`
Burst int64 `yaml:"burst"` // not required for concurrency limit
Limit string `yaml:"limit"` // "concurrency" or "rps"
}
// NewQOSConfig creates a new empty QoS configuration.
func NewQOSConfig() *QOSConfig {
return &QOSConfig{
Version: "v1",
Rules: []QOSRule{},
}
}
// GetBucketQOS retrieves the Quality of Service (QoS) configuration for the bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
//
// Returns the QoS configuration or an error if the operation fails.
func (c *Client) GetBucketQOS(ctx context.Context, bucket string) (*QOSConfig, error) {
var qosCfg QOSConfig
// Input validation.
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return nil, err
}
urlValues := make(url.Values)
urlValues.Set("qos", "")
// Execute GET on bucket to fetch qos.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucket,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucket, "")
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
if err = yaml.Unmarshal(b, &qosCfg); err != nil {
return nil, err
}
return &qosCfg, nil
}
// SetBucketQOS sets the Quality of Service (QoS) configuration for a bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - qosCfg: QoS configuration to apply
//
// Returns an error if the operation fails.
func (c *Client) SetBucketQOS(ctx context.Context, bucket string, qosCfg *QOSConfig) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return err
}
data, err := yaml.Marshal(qosCfg)
if err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("qos", "")
reqMetadata := requestMetadata{
bucketName: bucket,
queryValues: urlValues,
contentBody: strings.NewReader(string(data)),
contentLength: int64(len(data)),
}
// Execute PUT to upload a new bucket QoS configuration.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucket, "")
}
}
return nil
}
// CounterMetric returns stats for a counter
type CounterMetric struct {
Last1m uint64 `json:"last1m"`
Last1hr uint64 `json:"last1hr"`
Total uint64 `json:"total"`
}
// QOSMetric - metric for a qos rule per bucket
type QOSMetric struct {
APIName string `json:"apiName"`
Rule QOSRule `json:"rule"`
Totals CounterMetric `json:"totals"`
Throttled CounterMetric `json:"throttleCount"`
ExceededRateLimit CounterMetric `json:"exceededRateLimitCount"`
ClientDisconnCount CounterMetric `json:"clientDisconnectCount"`
ReqTimeoutCount CounterMetric `json:"reqTimeoutCount"`
}
// QOSNodeStats represents stats for a bucket on a single node
type QOSNodeStats struct {
Stats []QOSMetric `json:"stats"`
NodeName string `json:"node"`
}
// GetBucketQOSMetrics retrieves Quality of Service (QoS) metrics for a bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - nodeName: Name of the node (empty string for all nodes)
//
// Returns QoS metrics per node or an error if the operation fails.
func (c *Client) GetBucketQOSMetrics(ctx context.Context, bucketName, nodeName string) (qs []QOSNodeStats, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return qs, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("qos-metrics", "")
if nodeName != "" {
urlValues.Set("node", nodeName)
}
// Execute GET on bucket to get qos metrics.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return qs, err
}
if resp.StatusCode != http.StatusOK {
return qs, httpRespToErrorResponse(resp, bucketName, "")
}
respBytes, err := io.ReadAll(resp.Body)
if err != nil {
return qs, err
}
if err := json.Unmarshal(respBytes, &qs); err != nil {
return qs, err
}
return qs, nil
}
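A usage sketch built from the QoS types above; the rule values and the API name are illustrative only, and the client/context setup is assumed.

```go
// Sketch: limit ListObjectsV2 calls under a prefix to 100 requests/second.
func throttleListing(ctx context.Context, c *minio.Client) error {
    cfg := minio.NewQOSConfig()
    cfg.Rules = append(cfg.Rules, minio.QOSRule{
        ID:           "list-throttle",
        Priority:     1,
        ObjectPrefix: "logs/",
        API:          "ListObjectsV2", // illustrative API name
        Rate:         100,
        Burst:        200,
        Limit:        "rps",
    })
    if err := c.SetBucketQOS(ctx, "example-bucket", cfg); err != nil {
        return err
    }
    _, err := c.GetBucketQOS(ctx, "example-bucket")
    return err
}
```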

View File

@@ -20,6 +20,7 @@ package minio
import (
"bytes"
"context"
"encoding/json"
"encoding/xml"
"io"
"net/http"
@@ -27,17 +28,30 @@ import (
"time"
"github.com/google/uuid"
"github.com/minio/minio-go/v7/internal/json"
"github.com/minio/minio-go/v7/pkg/replication"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// RemoveBucketReplication removes a replication config on an existing bucket.
// RemoveBucketReplication removes the replication configuration from an existing bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns an error if the operation fails.
func (c *Client) RemoveBucketReplication(ctx context.Context, bucketName string) error {
return c.removeBucketReplication(ctx, bucketName)
}
// SetBucketReplication sets a replication config on an existing bucket.
// SetBucketReplication sets the replication configuration on an existing bucket.
// If the provided configuration is empty, this method removes the existing replication configuration.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - cfg: Replication configuration to apply
//
// Returns an error if the operation fails.
func (c *Client) SetBucketReplication(ctx context.Context, bucketName string, cfg replication.Config) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -108,8 +122,14 @@ func (c *Client) removeBucketReplication(ctx context.Context, bucketName string)
return nil
}
// GetBucketReplication fetches bucket replication configuration.If config is not
// found, returns empty config with nil error.
// GetBucketReplication retrieves the bucket replication configuration.
// If no replication configuration is found, returns an empty config with nil error.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the replication configuration or an error if the operation fails.
func (c *Client) GetBucketReplication(ctx context.Context, bucketName string) (cfg replication.Config, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -155,7 +175,13 @@ func (c *Client) getBucketReplication(ctx context.Context, bucketName string) (c
return cfg, nil
}
// GetBucketReplicationMetrics fetches bucket replication status metrics
// GetBucketReplicationMetrics retrieves bucket replication status metrics.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the replication metrics or an error if the operation fails.
func (c *Client) GetBucketReplicationMetrics(ctx context.Context, bucketName string) (s replication.Metrics, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -200,8 +226,15 @@ func mustGetUUID() string {
return u.String()
}
// ResetBucketReplication kicks off replication of previously replicated objects if ExistingObjectReplication
// is enabled in the replication config
// ResetBucketReplication initiates replication of previously replicated objects.
// This requires ExistingObjectReplication to be enabled in the replication configuration.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - olderThan: Only replicate objects older than this duration (0 for all objects)
//
// Returns a reset ID that can be used to track the operation, or an error if the operation fails.
func (c *Client) ResetBucketReplication(ctx context.Context, bucketName string, olderThan time.Duration) (rID string, err error) {
rID = mustGetUUID()
_, err = c.resetBucketReplicationOnTarget(ctx, bucketName, olderThan, "", rID)
@@ -211,8 +244,16 @@ func (c *Client) ResetBucketReplication(ctx context.Context, bucketName string,
return rID, nil
}
// ResetBucketReplicationOnTarget kicks off replication of previously replicated objects if
// ExistingObjectReplication is enabled in the replication config
// ResetBucketReplicationOnTarget initiates replication of previously replicated objects to a specific target.
// This requires ExistingObjectReplication to be enabled in the replication configuration.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - olderThan: Only replicate objects older than this duration (0 for all objects)
// - tgtArn: ARN of the target to reset replication for
//
// Returns resync target information or an error if the operation fails.
func (c *Client) ResetBucketReplicationOnTarget(ctx context.Context, bucketName string, olderThan time.Duration, tgtArn string) (replication.ResyncTargetsInfo, error) {
return c.resetBucketReplicationOnTarget(ctx, bucketName, olderThan, tgtArn, mustGetUUID())
}
@@ -222,7 +263,7 @@ func (c *Client) ResetBucketReplicationOnTarget(ctx context.Context, bucketName
func (c *Client) resetBucketReplicationOnTarget(ctx context.Context, bucketName string, olderThan time.Duration, tgtArn, resetID string) (rinfo replication.ResyncTargetsInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return
return rinfo, err
}
// Get resources properly escaped and lined up before
// using them in http request.
@@ -256,7 +297,14 @@ func (c *Client) resetBucketReplicationOnTarget(ctx context.Context, bucketName
return rinfo, nil
}
// GetBucketReplicationResyncStatus gets the status of replication resync
// GetBucketReplicationResyncStatus retrieves the status of a replication resync operation.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - arn: ARN of the replication target (empty string for all targets)
//
// Returns resync status information or an error if the operation fails.
func (c *Client) GetBucketReplicationResyncStatus(ctx context.Context, bucketName, arn string) (rinfo replication.ResyncTargetsInfo, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -290,11 +338,18 @@ func (c *Client) GetBucketReplicationResyncStatus(ctx context.Context, bucketNam
return rinfo, nil
}
// CancelBucketReplicationResync cancels in progress replication resync
// CancelBucketReplicationResync cancels an in-progress replication resync operation.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - tgtArn: ARN of the replication target (empty string for all targets)
//
// Returns the ID of the canceled resync operation or an error if the operation fails.
func (c *Client) CancelBucketReplicationResync(ctx context.Context, bucketName string, tgtArn string) (id string, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return
return id, err
}
// Get resources properly escaped and lined up before
// using them in http request.
@@ -326,7 +381,13 @@ func (c *Client) CancelBucketReplicationResync(ctx context.Context, bucketName s
return id, nil
}
// GetBucketReplicationMetricsV2 fetches bucket replication status metrics
// GetBucketReplicationMetricsV2 retrieves bucket replication status metrics using the V2 API.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the V2 replication metrics or an error if the operation fails.
func (c *Client) GetBucketReplicationMetricsV2(ctx context.Context, bucketName string) (s replication.MetricsV2, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -362,7 +423,13 @@ func (c *Client) GetBucketReplicationMetricsV2(ctx context.Context, bucketName s
return s, nil
}
// CheckBucketReplication validates if replication is set up properly for a bucket
// CheckBucketReplication validates whether replication is properly configured for a bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns nil if replication is valid, or an error describing the validation failure.
func (c *Client) CheckBucketReplication(ctx context.Context, bucketName string) (err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
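A small sketch tying the read-only replication calls together; the bucket name is a placeholder and a `log` import is assumed.

```go
// Sketch: validate replication and, if valid, fetch the config and metrics.
func inspectReplication(ctx context.Context, c *minio.Client) error {
    if err := c.CheckBucketReplication(ctx, "example-bucket"); err != nil {
        return err
    }
    cfg, err := c.GetBucketReplication(ctx, "example-bucket")
    if err != nil {
        return err
    }
    log.Printf("replication has %d rule(s)", len(cfg.Rules))
    _, err = c.GetBucketReplicationMetricsV2(ctx, "example-bucket")
    return err
}
```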

View File

@@ -29,8 +29,14 @@ import (
"github.com/minio/minio-go/v7/pkg/tags"
)
// GetBucketTagging fetch tagging configuration for a bucket with a
// context to control cancellations and timeouts.
// GetBucketTagging fetches the tagging configuration for a bucket.
// It uses the provided context to control cancellations and timeouts.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns the bucket's tags or an error if the operation fails.
func (c *Client) GetBucketTagging(ctx context.Context, bucketName string) (*tags.Tags, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -61,8 +67,15 @@ func (c *Client) GetBucketTagging(ctx context.Context, bucketName string) (*tags
return tags.ParseBucketXML(resp.Body)
}
// SetBucketTagging sets tagging configuration for a bucket
// with a context to control cancellations and timeouts.
// SetBucketTagging sets the tagging configuration for a bucket.
// It uses the provided context to control cancellations and timeouts.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - tags: Tag set to apply to the bucket
//
// Returns an error if the operation fails or if tags is nil.
func (c *Client) SetBucketTagging(ctx context.Context, bucketName string, tags *tags.Tags) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -104,8 +117,14 @@ func (c *Client) SetBucketTagging(ctx context.Context, bucketName string, tags *
return nil
}
// RemoveBucketTagging removes tagging configuration for a
// bucket with a context to control cancellations and timeouts.
// RemoveBucketTagging removes the tagging configuration from a bucket.
// It uses the provided context to control cancellations and timeouts.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
//
// Returns an error if the operation fails.
func (c *Client) RemoveBucketTagging(ctx context.Context, bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
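A usage sketch for the tagging calls above, assuming `tags.NewTags` from `pkg/tags` for constructing a bucket tag set; names and values are placeholders.

```go
// Sketch: tag a bucket, read the tags back, then remove them.
func cycleBucketTags(ctx context.Context, c *minio.Client) error {
    t, err := tags.NewTags(map[string]string{"team": "storage", "env": "dev"}, false) // false = bucket tags
    if err != nil {
        return err
    }
    if err := c.SetBucketTagging(ctx, "example-bucket", t); err != nil {
        return err
    }
    got, err := c.GetBucketTagging(ctx, "example-bucket")
    if err != nil {
        return err
    }
    log.Printf("bucket tags: %v", got.ToMap())
    return c.RemoveBucketTagging(ctx, "example-bucket")
}
```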

View File

@@ -42,13 +42,15 @@ type CopyDestOptions struct {
// provided key. If it is nil, no encryption is performed.
Encryption encrypt.ServerSide
ChecksumType ChecksumType
// `userMeta` is the user-metadata key-value pairs to be set on the
// destination. The keys are automatically prefixed with `x-amz-meta-`
// if needed. If nil is passed, and if only a single source (of any
// size) is provided in the ComposeObject call, then metadata from the
// source is copied to the destination.
// if no user-metadata is provided, it is copied from source
// (when there is only once source object in the compose
// (when there is only one source object in the compose
// request)
UserMetadata map[string]string
// UserMetadata is only set to destination if ReplaceMetadata is true
@@ -140,6 +142,9 @@ func (opts CopyDestOptions) Marshal(header http.Header) {
if !opts.Expires.IsZero() {
header.Set("Expires", opts.Expires.UTC().Format(http.TimeFormat))
}
if opts.ChecksumType.IsSet() {
header.Set(amzChecksumAlgo, opts.ChecksumType.String())
}
if opts.ReplaceMetadata {
header.Set("x-amz-metadata-directive", replaceDirective)
@@ -345,7 +350,7 @@ func (c *Client) copyObjectPartDo(ctx context.Context, srcBucket, srcObject, des
})
defer closeResponse(resp)
if err != nil {
return
return p, err
}
// Check if we got an error response.
@@ -580,7 +585,7 @@ func partsRequired(size int64) int64 {
// it is not the last part.
func calculateEvenSplits(size int64, src CopySrcOptions) (startIndex, endIndex []int64) {
if size == 0 {
return
return startIndex, endIndex
}
reqParts := partsRequired(size)
@@ -617,5 +622,5 @@ func calculateEvenSplits(size int64, src CopySrcOptions) (startIndex, endIndex [
startIndex[j], endIndex[j] = cStart, cEnd
}
return
return startIndex, endIndex
}
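A sketch of how these destination options are typically used with `CopyObject`; the bucket/object names and the metadata are placeholders.

```go
// Sketch: server-side copy that replaces the destination's user metadata.
func copyWithNewMetadata(ctx context.Context, c *minio.Client) error {
    src := minio.CopySrcOptions{
        Bucket: "source-bucket",
        Object: "source-object",
    }
    dst := minio.CopyDestOptions{
        Bucket:          "dest-bucket",
        Object:          "dest-object",
        ReplaceMetadata: true,
        UserMetadata:    map[string]string{"reviewed": "true"},
    }
    _, err := c.CopyObject(ctx, dst, src)
    return err
}
```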

View File

@@ -20,6 +20,7 @@ package minio
import (
"bytes"
"encoding/xml"
"errors"
"fmt"
"io"
"net/http"
@@ -128,9 +129,18 @@ func httpRespToErrorResponse(resp *http.Response, bucketName, objectName string)
Server: resp.Header.Get("Server"),
}
_, success := successStatus[resp.StatusCode]
errBody, err := xmlDecodeAndBody(resp.Body, &errResp)
// Xml decoding failed with no body, fall back to HTTP headers.
if err != nil {
var unmarshalErr xml.UnmarshalError
if success && errors.As(err, &unmarshalErr) {
// This is a successful message so not an error response
// return nil,
return nil
}
switch resp.StatusCode {
case http.StatusNotFound:
if objectName == "" {

View File

@@ -127,7 +127,7 @@ func (o *ObjectAttributes) parseResponse(resp *http.Response) (err error) {
}
o.ObjectAttributesResponse = *response
return
return err
}
// GetObjectAttributes API combines HeadObject and ListParts.

View File

@@ -69,7 +69,7 @@ func (c *Client) FGetObject(ctx context.Context, bucketName, objectName, filePat
}
// Write to a temporary file "fileName.part.minio" before saving.
filePartPath := filePath + sum256Hex([]byte(objectStat.ETag)) + ".part.minio"
filePartPath := filepath.Join(filepath.Dir(filePath), sum256Hex([]byte(filepath.Base(filePath)+objectStat.ETag))+".part.minio")
// If exists, open in append mode. If not create it as a part file.
filePart, err := os.OpenFile(filePartPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o600)
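For reference, a minimal sketch of the caller-facing FGetObject API whose temporary part file is being relocated here; the paths are placeholders and the client/context setup is assumed.

```go
// Sketch: download an object to a local file. The library first writes to a
// temporary ".part.minio" file in the destination directory and renames it
// once the download completes.
func downloadObject(ctx context.Context, c *minio.Client) error {
    return c.FGetObject(ctx, "example-bucket", "example-object",
        "/tmp/example-object", minio.GetObjectOptions{})
}
```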

View File

@@ -0,0 +1,332 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2025 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"encoding/json"
"io"
"iter"
"net/http"
"net/url"
"strings"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// This file contains the inventory API extension for MinIO server. It is not
// compatible with AWS S3.
func makeInventoryReqMetadata(bucket string, urlParams ...string) requestMetadata {
urlValues := make(url.Values)
urlValues.Set("minio-inventory", "")
// If an odd number of parameters is given, we skip the last pair to avoid
// an out of bounds access.
for i := 0; i+1 < len(urlParams); i += 2 {
urlValues.Set(urlParams[i], urlParams[i+1])
}
return requestMetadata{
bucketName: bucket,
queryValues: urlValues,
}
}
// GenerateInventoryConfigYAML generates a YAML template for an inventory configuration.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - id: Unique identifier for the inventory configuration
//
// Returns a YAML template string that can be customized and used with PutBucketInventoryConfiguration.
func (c *Client) GenerateInventoryConfigYAML(ctx context.Context, bucket, id string) (string, error) {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return "", err
}
if id == "" {
return "", errInvalidArgument("inventory ID cannot be empty")
}
reqMeta := makeInventoryReqMetadata(bucket, "generate", "", "id", id)
resp, err := c.executeMethod(ctx, http.MethodGet, reqMeta)
defer closeResponse(resp)
if err != nil {
return "", err
}
if resp.StatusCode != http.StatusOK {
return "", httpRespToErrorResponse(resp, bucket, "")
}
buf := new(strings.Builder)
_, err = io.Copy(buf, resp.Body)
return buf.String(), err
}
// inventoryPutConfigOpts is a placeholder for future options that may be added.
type inventoryPutConfigOpts struct{}
// InventoryPutConfigOption is to allow for functional options for
// PutBucketInventoryConfiguration. It may be used in the future to customize
// the PutBucketInventoryConfiguration request, but currently does not do
// anything.
type InventoryPutConfigOption func(*inventoryPutConfigOpts)
// PutBucketInventoryConfiguration creates or updates an inventory configuration for a bucket.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - id: Unique identifier for the inventory configuration
// - yamlDef: YAML definition of the inventory configuration
//
// Returns an error if the operation fails, or if bucket name, id, or yamlDef is empty.
func (c *Client) PutBucketInventoryConfiguration(ctx context.Context, bucket string, id string, yamlDef string, _ ...InventoryPutConfigOption) error {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return err
}
if id == "" {
return errInvalidArgument("inventory ID cannot be empty")
}
if yamlDef == "" {
return errInvalidArgument("YAML definition cannot be empty")
}
reqMeta := makeInventoryReqMetadata(bucket, "id", id)
reqMeta.contentBody = strings.NewReader(yamlDef)
reqMeta.contentLength = int64(len(yamlDef))
reqMeta.contentMD5Base64 = sumMD5Base64([]byte(yamlDef))
resp, err := c.executeMethod(ctx, http.MethodPut, reqMeta)
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucket, "")
}
return nil
}
// GetBucketInventoryConfiguration retrieves the inventory configuration for a bucket.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - id: Unique identifier for the inventory configuration
//
// Returns the inventory configuration or an error if the operation fails or if the configuration doesn't exist.
func (c *Client) GetBucketInventoryConfiguration(ctx context.Context, bucket, id string) (*InventoryConfiguration, error) {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return nil, err
}
if id == "" {
return nil, errInvalidArgument("inventory ID cannot be empty")
}
reqMeta := makeInventoryReqMetadata(bucket, "id", id)
resp, err := c.executeMethod(ctx, http.MethodGet, reqMeta)
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucket, "")
}
decoder := json.NewDecoder(resp.Body)
var ic InventoryConfiguration
err = decoder.Decode(&ic)
if err != nil {
return nil, err
}
return &ic, nil
}
// DeleteBucketInventoryConfiguration deletes an inventory configuration from a bucket.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - id: Unique identifier for the inventory configuration to delete
//
// Returns an error if the operation fails or if the configuration doesn't exist.
func (c *Client) DeleteBucketInventoryConfiguration(ctx context.Context, bucket, id string) error {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return err
}
if id == "" {
return errInvalidArgument("inventory ID cannot be empty")
}
reqMeta := makeInventoryReqMetadata(bucket, "id", id)
resp, err := c.executeMethod(ctx, http.MethodDelete, reqMeta)
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucket, "")
}
return nil
}
// InventoryConfiguration represents a single bucket inventory configuration.
type InventoryConfiguration struct {
Bucket string `json:"bucket"`
ID string `json:"id"`
User string `json:"user"`
YamlDef string `json:"yamlDef,omitempty"`
}
// InventoryListResult represents the result of listing inventory
// configurations.
type InventoryListResult struct {
Items []InventoryConfiguration `json:"items"`
NextContinuationToken string `json:"nextContinuationToken,omitempty"`
}
// ListBucketInventoryConfigurations lists up to 100 inventory configurations for a bucket.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - continuationToken: Token for pagination (empty string for first request)
//
// Returns a list result with configurations and a continuation token for the next page, or an error.
func (c *Client) ListBucketInventoryConfigurations(ctx context.Context, bucket, continuationToken string) (lr *InventoryListResult, err error) {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return nil, err
}
reqMeta := makeInventoryReqMetadata(bucket, "continuation-token", continuationToken)
resp, err := c.executeMethod(ctx, http.MethodGet, reqMeta)
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucket, "")
}
decoder := json.NewDecoder(resp.Body)
err = decoder.Decode(&lr)
if err != nil {
return nil, err
}
return lr, nil
}
// ListBucketInventoryConfigurationsIterator returns an iterator that lists all inventory configurations
// for a bucket. This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
//
// Returns an iterator that yields InventoryConfiguration values and errors. The iterator automatically
// handles pagination and fetches all configurations.
func (c *Client) ListBucketInventoryConfigurationsIterator(ctx context.Context, bucket string) iter.Seq2[InventoryConfiguration, error] {
return func(yield func(InventoryConfiguration, error) bool) {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
yield(InventoryConfiguration{}, err)
return
}
var continuationToken string
for {
listResult, err := c.ListBucketInventoryConfigurations(ctx, bucket, continuationToken)
if err != nil {
yield(InventoryConfiguration{}, err)
return
}
for _, item := range listResult.Items {
if !yield(item, nil) {
return
}
}
if listResult.NextContinuationToken == "" {
return
}
continuationToken = listResult.NextContinuationToken
}
}
}
// InventoryJobStatus represents the status of an inventory job.
type InventoryJobStatus struct {
Bucket string `json:"bucket"`
ID string `json:"id"`
User string `json:"user"`
AccessKey string `json:"accessKey"`
Schedule string `json:"schedule"`
State string `json:"state"`
NextScheduledTime time.Time `json:"nextScheduledTime,omitempty"`
StartTime time.Time `json:"startTime,omitempty"`
EndTime time.Time `json:"endTime,omitempty"`
LastUpdate time.Time `json:"lastUpdate,omitempty"`
Scanned string `json:"scanned,omitempty"`
Matched string `json:"matched,omitempty"`
ScannedCount uint64 `json:"scannedCount,omitempty"`
MatchedCount uint64 `json:"matchedCount,omitempty"`
RecordsWritten uint64 `json:"recordsWritten,omitempty"`
OutputFilesCount uint64 `json:"outputFilesCount,omitempty"`
ExecutionTime string `json:"executionTime,omitempty"`
NumStarts uint64 `json:"numStarts,omitempty"`
NumErrors uint64 `json:"numErrors,omitempty"`
NumLockLosses uint64 `json:"numLockLosses,omitempty"`
ManifestPath string `json:"manifestPath,omitempty"`
RetryAttempts uint64 `json:"retryAttempts,omitempty"`
LastFailTime time.Time `json:"lastFailTime,omitempty"`
LastFailErrors []string `json:"lastFailErrors,omitempty"`
}
// GetBucketInventoryJobStatus retrieves the status of an inventory job for a bucket.
// This is a MinIO-specific API and is not compatible with AWS S3.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucket: Name of the bucket
// - id: Unique identifier for the inventory job
//
// Returns the inventory job status including execution state, progress, and error information, or an error if the operation fails.
func (c *Client) GetBucketInventoryJobStatus(ctx context.Context, bucket, id string) (*InventoryJobStatus, error) {
if err := s3utils.CheckValidBucketName(bucket); err != nil {
return nil, err
}
if id == "" {
return nil, errInvalidArgument("inventory ID cannot be empty")
}
reqMeta := makeInventoryReqMetadata(bucket, "id", id, "status", "")
resp, err := c.executeMethod(ctx, http.MethodGet, reqMeta)
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucket, "")
}
decoder := json.NewDecoder(resp.Body)
var jStatus InventoryJobStatus
err = decoder.Decode(&jStatus)
if err != nil {
return nil, err
}
return &jStatus, nil
}
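The new inventory API surface above covers create/update, read, list, job status, and delete. A minimal end-to-end sketch follows, assuming a reachable MinIO endpoint and Go 1.23+ for the range-over-func iterator; the endpoint, credentials, bucket name, inventory ID, and YAML body are placeholders, and the YAML schema itself is defined by the server rather than by this client change. ListBucketInventoryConfigurations can also be called directly with a continuation token when manual pagination is preferred.

// Sketch: managing a bucket inventory configuration end to end.
// All names and the YAML body below are hypothetical.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// The YAML definition is passed through verbatim; its schema is
	// determined by the MinIO server, not by this client API.
	yamlDef := "..." // hypothetical inventory definition

	// Create or update the configuration.
	if err := client.PutBucketInventoryConfiguration(ctx, "my-bucket", "daily-report", yamlDef); err != nil {
		log.Fatal(err)
	}

	// Read it back.
	ic, err := client.GetBucketInventoryConfiguration(ctx, "my-bucket", "daily-report")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("config owned by:", ic.User)

	// Iterate over all configurations; pagination is handled internally.
	for cfg, err := range client.ListBucketInventoryConfigurationsIterator(ctx, "my-bucket") {
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("found config:", cfg.ID)
	}

	// Check the job status, then clean up.
	st, err := client.GetBucketInventoryJobStatus(ctx, "my-bucket", "daily-report")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("job state:", st.State)

	if err := client.DeleteBucketInventoryConfiguration(ctx, "my-bucket", "daily-report"); err != nil {
		log.Fatal(err)
	}
}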


@@ -80,7 +80,16 @@ func newObjectLegalHold(status *LegalHoldStatus) (*objectLegalHold, error) {
return legalHold, nil
}
// PutObjectLegalHold : sets object legal hold for a given object and versionID.
// PutObjectLegalHold sets the legal hold status for an object and specific version.
// Legal hold prevents an object version from being overwritten or deleted, regardless of retention settings.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - opts: Options including Status (LegalHoldEnabled or LegalHoldDisabled) and optional VersionID
//
// Returns an error if the operation fails or if the status is invalid.
func (c *Client) PutObjectLegalHold(ctx context.Context, bucketName, objectName string, opts PutObjectLegalHoldOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -134,7 +143,15 @@ func (c *Client) PutObjectLegalHold(ctx context.Context, bucketName, objectName
return nil
}
// GetObjectLegalHold gets legal-hold status of given object.
// GetObjectLegalHold retrieves the legal hold status for an object and specific version.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - opts: Options including optional VersionID to target a specific version
//
// Returns the legal hold status (LegalHoldEnabled or LegalHoldDisabled) or an error if the operation fails.
func (c *Client) GetObjectLegalHold(ctx context.Context, bucketName, objectName string, opts GetObjectLegalHoldOptions) (status *LegalHoldStatus, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
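The expanded comments document PutObjectLegalHold and GetObjectLegalHold, but the diff carries no usage example. A minimal sketch, assuming a connected *minio.Client, hypothetical bucket/object names, and a bucket created with object locking enabled:

// Sketch: enable a legal hold on an object and read it back.
// "client" setup is as in the inventory sketch above; names are hypothetical.
package examples

import (
	"context"
	"fmt"

	"github.com/minio/minio-go/v7"
)

func legalHoldExample(ctx context.Context, client *minio.Client) error {
	status := minio.LegalHoldEnabled
	err := client.PutObjectLegalHold(ctx, "locked-bucket", "contract.pdf",
		minio.PutObjectLegalHoldOptions{Status: &status})
	if err != nil {
		return err
	}

	current, err := client.GetObjectLegalHold(ctx, "locked-bucket", "contract.pdf",
		minio.GetObjectLegalHoldOptions{})
	if err != nil {
		return err
	}
	fmt.Println("legal hold:", *current) // "ON" while the hold is active
	return nil
}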


@@ -62,7 +62,16 @@ type PutObjectRetentionOptions struct {
VersionID string
}
// PutObjectRetention sets object retention for a given object and versionID.
// PutObjectRetention sets the retention configuration for an object and specific version.
// Object retention prevents an object version from being deleted or overwritten for a specified period.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - opts: Options including Mode (GOVERNANCE or COMPLIANCE), RetainUntilDate, optional VersionID, and GovernanceBypass
//
// Returns an error if the operation fails or if the retention settings are invalid.
func (c *Client) PutObjectRetention(ctx context.Context, bucketName, objectName string, opts PutObjectRetentionOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -125,7 +134,15 @@ func (c *Client) PutObjectRetention(ctx context.Context, bucketName, objectName
return nil
}
// GetObjectRetention gets retention of given object.
// GetObjectRetention retrieves the retention configuration for an object and specific version.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - versionID: Optional version ID to target a specific version (empty string for current version)
//
// Returns the retention mode (GOVERNANCE or COMPLIANCE), retain-until date, and any error.
func (c *Client) GetObjectRetention(ctx context.Context, bucketName, objectName, versionID string) (mode *RetentionMode, retainUntilDate *time.Time, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
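A corresponding retention sketch under the same assumptions (hypothetical names, object locking enabled on the bucket, a 30-day window chosen purely for illustration):

// Sketch: place a GOVERNANCE-mode retention on an object and read it back.
package examples

import (
	"context"
	"fmt"
	"time"

	"github.com/minio/minio-go/v7"
)

func retentionExample(ctx context.Context, client *minio.Client) error {
	mode := minio.Governance
	until := time.Now().Add(30 * 24 * time.Hour) // hypothetical retention window
	err := client.PutObjectRetention(ctx, "locked-bucket", "invoice.pdf",
		minio.PutObjectRetentionOptions{
			Mode:            &mode,
			RetainUntilDate: &until,
		})
	if err != nil {
		return err
	}

	gotMode, gotUntil, err := client.GetObjectRetention(ctx, "locked-bucket", "invoice.pdf", "")
	if err != nil {
		return err
	}
	fmt.Printf("retained in %s mode until %v\n", *gotMode, gotUntil)
	return nil
}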


@@ -40,8 +40,17 @@ type AdvancedObjectTaggingOptions struct {
ReplicationProxyRequest string
}
// PutObjectTagging replaces or creates object tag(s) and can target
// a specific object version in a versioned bucket.
// PutObjectTagging replaces or creates object tag(s) and can target a specific object version
// in a versioned bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - otags: Tags to apply to the object
// - opts: Options including VersionID to target a specific version
//
// Returns an error if the operation fails.
func (c *Client) PutObjectTagging(ctx context.Context, bucketName, objectName string, otags *tags.Tags, opts PutObjectTaggingOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
@@ -96,8 +105,16 @@ type GetObjectTaggingOptions struct {
Internal AdvancedObjectTaggingOptions
}
// GetObjectTagging fetches object tag(s) with options to target
// a specific object version in a versioned bucket.
// GetObjectTagging retrieves object tag(s) with options to target a specific object version
// in a versioned bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - opts: Options including VersionID to target a specific version
//
// Returns the object's tags or an error if the operation fails.
func (c *Client) GetObjectTagging(ctx context.Context, bucketName, objectName string, opts GetObjectTaggingOptions) (*tags.Tags, error) {
// Get resources properly escaped and lined up before
// using them in http request.
@@ -139,8 +156,16 @@ type RemoveObjectTaggingOptions struct {
Internal AdvancedObjectTaggingOptions
}
// RemoveObjectTagging removes object tag(s) with options to control a specific object
// version in a versioned bucket
// RemoveObjectTagging removes object tag(s) with options to target a specific object version
// in a versioned bucket.
//
// Parameters:
// - ctx: Context for request cancellation and timeout
// - bucketName: Name of the bucket
// - objectName: Name of the object
// - opts: Options including VersionID to target a specific version
//
// Returns an error if the operation fails.
func (c *Client) RemoveObjectTagging(ctx context.Context, bucketName, objectName string, opts RemoveObjectTaggingOptions) error {
// Get resources properly escaped and lined up before
// using them in http request.
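And a tagging round trip under the same assumptions (hypothetical bucket, object, and tag values), using the tags helper package from minio-go:

// Sketch: tag an object, read the tags back, then remove them.
package examples

import (
	"context"
	"fmt"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/tags"
)

func taggingExample(ctx context.Context, client *minio.Client) error {
	objTags, err := tags.NewTags(map[string]string{"project": "alpha", "tier": "hot"}, true)
	if err != nil {
		return err
	}

	if err := client.PutObjectTagging(ctx, "my-bucket", "report.csv", objTags,
		minio.PutObjectTaggingOptions{}); err != nil {
		return err
	}

	got, err := client.GetObjectTagging(ctx, "my-bucket", "report.csv",
		minio.GetObjectTaggingOptions{})
	if err != nil {
		return err
	}
	fmt.Println("tags:", got.ToMap())

	return client.RemoveObjectTagging(ctx, "my-bucket", "report.csv",
		minio.RemoveObjectTaggingOptions{})
}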

Some files were not shown because too many files have changed in this diff.