Compare commits

139 Commits

Author SHA1 Message Date
Michael Barz
ae3967d2e5 Update settings.yml 2025-03-26 16:07:09 +01:00
Michael Barz
7d0eee14c9 Merge pull request #496 from opencloud-eu/ready-release-go
feat: add ready release go
2025-03-26 16:00:47 +01:00
Viktor Scharf
afb49f2e87 Merge pull request #492 from opencloud-eu/provideOptionToUseDecomposedFs
add option to run opencloud_full with decomposed
2025-03-26 15:58:31 +01:00
Michael Barz
0f2fdc4f86 feat: add ready release go 2025-03-26 15:25:09 +01:00
Benedikt Kulmann
7dcdc53127 [full-ci] fix(collaboration): hide SaveAs and ExportAs buttons in web office (#471)
* fix(collaboration): hide SaveAs and ExportAs buttons in collabora

---------

Co-authored-by: Viktor Scharf <v.scharf@opencloud.eu>
Co-authored-by: Viktor Scharf <scharf.vi@gmail.com>
2025-03-26 15:06:30 +01:00
Viktor Scharf
39544371f8 add decomposed.yml 2025-03-26 13:20:02 +01:00
Artur Neumann
e4402d9b17 Merge pull request #489 from opencloud-eu/check-for-without-remote-php-bug
[full-ci] add one more TUS test to expected to fail file
2025-03-26 17:15:34 +05:45
Artur Neumann
5561d5f354 [full-ci] add TUS upload test to expected to fail file 2025-03-26 16:00:28 +05:45
Andre Duffeck
10fb2d79e6 Merge pull request #488 from aduffeck/fix-docker-compose-perm-issues
Fix permission errors when uploading a new instance logo
2025-03-26 09:51:21 +01:00
André Duffeck
e37bedda1c Fix permission errors when uploading a new instance logo
Fixes #475
2025-03-26 09:20:50 +01:00
Ralf Haferkamp
0702b8bf9f Merge pull request #476 from rhafer/custom-idp-clients
docs(idp): Document how to add custom OIDC clients
2025-03-26 08:02:48 +01:00
Artur Neumann
a414a2015d Merge pull request #467 from opencloud-eu/update-expected-failure
[full-ci]Remove mtime 500 issue from expected failure
2025-03-26 10:03:54 +05:45
Ralf Haferkamp
292c8e5b63 Merge pull request #484 from rhafer/bump-jwt-v4
Bump golang-jwt/jwt/v4 to 4.5.2
2025-03-25 21:24:47 +01:00
Michael Barz
51994d6398 Merge pull request #480 from opencloud-eu/dependabot/go_modules/github.com/grpc-ecosystem/grpc-gateway/v2-2.26.3
build(deps): bump github.com/grpc-ecosystem/grpc-gateway/v2 from 2.26.1 to 2.26.3
2025-03-25 19:27:02 +01:00
Michael Barz
9d72a7cbe6 Merge pull request #474 from aduffeck/bump-reva-f3ec73475
Bump reva
2025-03-25 19:26:02 +01:00
Michael Barz
88721d8d52 Merge pull request #483 from opencloud-eu/update-base-docker
chore: update alpine to 3.21
2025-03-25 19:23:45 +01:00
Michael Barz
72038e5cea Merge pull request #481 from opencloud-eu/debug-docker
fix: add missing debug docker
2025-03-25 19:22:25 +01:00
amrita-shrestha
2d16e6e693 Remove mtime 500 issue from expected failure 2025-03-25 22:44:28 +05:45
André Duffeck
2344092b81 Bump reva 2025-03-25 17:52:20 +01:00
Michael Barz
2277ba8ffe chore: update alpine to 3.21 2025-03-25 17:43:30 +01:00
Ralf Haferkamp
d9041f2f47 Bump golang-jwt/jwt/v4 to 4.5.2 (CVE-2025-30204)
Closes #482
2025-03-25 17:42:41 +01:00
Ralf Haferkamp
17267e899e Merge pull request #479 from rhafer/downgrade-nats
Downgrade nats.go to 1.39.1
2025-03-25 17:42:23 +01:00
Ralf Haferkamp
c3be42950a Downgrade nats.go to 1.39.1
1.40.0 seems to trigger some panics in CI, which need to be
investigated in more detail.

Workaround for: #478
2025-03-25 17:09:57 +01:00
Michael Barz
3249d233f5 fix: add missing debug docker 2025-03-25 17:09:20 +01:00
dependabot[bot]
3eed688e5d build(deps): bump github.com/grpc-ecosystem/grpc-gateway/v2
Bumps [github.com/grpc-ecosystem/grpc-gateway/v2](https://github.com/grpc-ecosystem/grpc-gateway) from 2.26.1 to 2.26.3.
- [Release notes](https://github.com/grpc-ecosystem/grpc-gateway/releases)
- [Changelog](https://github.com/grpc-ecosystem/grpc-gateway/blob/main/.goreleaser.yml)
- [Commits](https://github.com/grpc-ecosystem/grpc-gateway/compare/v2.26.1...v2.26.3)

---
updated-dependencies:
- dependency-name: github.com/grpc-ecosystem/grpc-gateway/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-25 15:11:36 +00:00
Jannik Stehle
ddf32022de Merge pull request #473 from opencloud-eu/chore/bump-collabora-to-v24.04.13.2.1
chore: bump Collabora in deployment example and fix entrypoint
2025-03-25 13:21:54 +01:00
Ralf Haferkamp
24e5e19825 docs(idp): Document how to add custom OIDC clients 2025-03-25 13:02:40 +01:00
Jannik Stehle
6375de8167 chore: bump Collabora in deployment example and fix entrypoint
Bumps Collabora in the deployment example to `24.04.13.2.1` and fixes the entrypoint. The entrypoint seems to have changed with newer versions of the Docker image, so we need to specify it manually to make the start commands work.
2025-03-25 11:41:12 +01:00
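A minimal sketch of such an override in a Compose file (the image tag matches the commit; the service name and entrypoint script are assumptions based on upstream Collabora images, not taken from this repository):

```yaml
# Hypothetical docker-compose fragment: pin the Collabora image and set
# the entrypoint explicitly, since newer images changed their default.
services:
  collabora:
    image: collabora/code:24.04.13.2.1
    entrypoint: ["/start-collabora-online.sh"]  # assumed script name
```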
Viktor Scharf
01755c5955 Merge pull request #472 from opencloud-eu/addAuthAppToOcmTestSetup
add auth app to ocm test setup
2025-03-25 10:50:23 +01:00
Viktor Scharf
52c72c852f add auth app to ocm test setup 2025-03-25 09:52:08 +01:00
Ralf Haferkamp
270a6b65ab Merge pull request #469 from opencloud-eu/opencloudCS3Validator
use opencloudeu/cs3api-validator in CI
2025-03-25 08:10:47 +01:00
Ralf Haferkamp
cb700cafae Merge pull request #464 from opencloud-eu/dependabot/go_modules/github.com/nats-io/nats.go-1.40.0
build(deps): bump github.com/nats-io/nats.go from 1.39.1 to 1.40.0
2025-03-25 08:10:07 +01:00
Artur Neumann
659d246482 use opencloudeu/cs3api-validator in CI 2025-03-25 12:18:07 +05:45
dependabot[bot]
fae29f107c build(deps): bump github.com/nats-io/nats.go from 1.39.1 to 1.40.0
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.39.1 to 1.40.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.39.1...v1.40.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-25 05:58:50 +00:00
Andre Duffeck
8bc17593cb Merge pull request #459 from rhafer/issue/447
fix cli driver initialization for "posix"
2025-03-24 22:32:28 +01:00
Andre Duffeck
7e71089ed6 Merge pull request #466 from opencloud-eu/fix_full_envfile
Clean invalid documentation links
2025-03-24 22:25:37 +01:00
Klaas Freitag
f94eedaee3 Clean invalid documentation links 2025-03-24 17:23:20 +01:00
Andre Duffeck
8584184063 Merge pull request #462 from aduffeck/fix-452
Do not cache when there was an error gathering the data
2025-03-24 16:27:51 +01:00
Viktor Scharf
f0c4e383ca Merge pull request #460 from rhafer/appTokenTestJsonCS3
fix(test): Run app-auth test with jsoncs3 backend
2025-03-24 15:31:26 +01:00
André Duffeck
8b7f12126f Do not cache when there was an error gathering the data
Fixes #452
2025-03-24 15:12:30 +01:00
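The underlying pattern of this fix is generic: only write to the cache when gathering the data succeeded, so a transient error is retried instead of being served forever. A minimal Python sketch of the idea (function and cache names are illustrative, not the project's actual API):

```python
def get_quota(cache, space_id, gather):
    """Return quota data for a space, caching only successful results."""
    if space_id in cache:
        return cache[space_id]
    data, err = gather(space_id)
    if err is not None:
        # Do not cache on error: a later call should retry rather than
        # keep serving a stale or empty result.
        return None
    cache[space_id] = data
    return data

cache = {}
ok = lambda sid: ({"used": 42}, None)
fail = lambda sid: (None, "backend unavailable")

assert get_quota(cache, "s1", fail) is None
assert "s1" not in cache               # error result was not cached
assert get_quota(cache, "s1", ok) == {"used": 42}
assert "s1" in cache                   # success was cached
```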
Ralf Haferkamp
f0e79468f2 Merge pull request #436 from opencloud-eu/dependabot/go_modules/github.com/spf13/afero-1.14.0
build(deps): bump github.com/spf13/afero from 1.12.0 to 1.14.0
2025-03-24 14:40:37 +01:00
Ralf Haferkamp
de485a69d7 fix(test): Run app-auth test with jsoncs3 backend 2025-03-24 14:37:18 +01:00
dependabot[bot]
f788703c55 build(deps): bump github.com/spf13/afero from 1.12.0 to 1.14.0
Bumps [github.com/spf13/afero](https://github.com/spf13/afero) from 1.12.0 to 1.14.0.
- [Release notes](https://github.com/spf13/afero/releases)
- [Commits](https://github.com/spf13/afero/compare/v1.12.0...v1.14.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/afero
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 11:19:40 +00:00
Ralf Haferkamp
44dea094d7 fix cli driver initialization for "posix"
Fixes: #447
2025-03-24 12:16:32 +01:00
Andre Duffeck
248ff08c59 Merge pull request #451 from aduffeck/bump-reva-61a430b
Bump reva to pull in the latest fixes
2025-03-24 10:46:57 +01:00
André Duffeck
ac54b446a5 Adapt expectation on number of versions of same-mtime uploads 2025-03-24 10:01:35 +01:00
André Duffeck
073da849e0 Fix backup command to understand all versions 2025-03-24 09:48:26 +01:00
André Duffeck
c2ce9a136d Bump reva 2025-03-24 08:47:22 +01:00
André Duffeck
467fc2af0c Adapt to auth-app changes 2025-03-24 08:47:22 +01:00
André Duffeck
73c179e447 Adapt expected failures 2025-03-21 15:10:10 +01:00
André Duffeck
ac8351bd3b Adapt test expectations, now that .mlock files are properly removed 2025-03-21 15:10:06 +01:00
André Duffeck
00d49804cf Bump reva to pull in the latest fixes 2025-03-21 12:29:25 +01:00
Klaas Freitag
5953f950ef Bump opencloud version in bare-metal-simple script 2025-03-21 10:18:13 +01:00
Andre Duffeck
858dd4110f Merge pull request #446 from opencloud-eu/issue/390
fix(storage-users): 'uploads sessions' command crash
2025-03-21 08:03:02 +01:00
Ralf Haferkamp
7fc336d57b Merge pull request #437 from opencloud-eu/dependabot/go_modules/github.com/KimMachineGun/automemlimit-0.7.1
build(deps): bump github.com/KimMachineGun/automemlimit from 0.7.0 to 0.7.1
2025-03-20 17:14:06 +01:00
Ralf Haferkamp
7be2e4693a Merge pull request #433 from rhafer/appauth-jsoncs3
Switch to jsoncs3 backend for app tokens and enable service by default
2025-03-20 16:08:02 +01:00
Ralf Haferkamp
544b354a42 fix(storage-users): 'uploads sessions' command crash
When started with the '--resume' command line switch, the
'storage-users uploads sessions' command was crashing because it did
not initialize the event queue correctly.

Fixes: #390
2025-03-20 16:02:56 +01:00
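The shape of this bug is common: a code path only exercised behind a flag (here `--resume`) uses a dependency that is never set up on that path. A hedged Python sketch of the failure mode and the guard that fixes it (class and method names are illustrative, not OpenCloud's actual code):

```python
class SessionLister:
    """Lists upload sessions; optionally resumes them via an event queue."""

    def __init__(self, resume=False):
        self.resume = resume
        self.events = None  # event queue: only needed on the resume path

    def init_events(self):
        self.events = []

    def list_sessions(self):
        sessions = ["upload-1", "upload-2"]
        if self.resume:
            # Without this guard the resume path dereferences an
            # uninitialized queue and crashes.
            if self.events is None:
                self.init_events()
            self.events.append("resume-requested")
        return sessions

# The flagged path now works instead of crashing.
assert SessionLister(resume=True).list_sessions() == ["upload-1", "upload-2"]
```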
dependabot[bot]
99c814e45a build(deps): bump github.com/KimMachineGun/automemlimit
Bumps [github.com/KimMachineGun/automemlimit](https://github.com/KimMachineGun/automemlimit) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/KimMachineGun/automemlimit/releases)
- [Commits](https://github.com/KimMachineGun/automemlimit/compare/v0.7.0...v0.7.1)

---
updated-dependencies:
- dependency-name: github.com/KimMachineGun/automemlimit
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-20 15:01:12 +00:00
Ralf Haferkamp
6055c1476c bump reva to fix app token delete in UI 2025-03-20 15:17:13 +01:00
Ralf Haferkamp
213874dbec auth-app: Allow to set label via API
Instead of using the same hardcoded label for all app tokens, allow
specifying one via the API.
When an app token is generated via the impersonation feature, we add the
string ` (Impersonation)` to the label.
2025-03-20 15:09:47 +01:00
Ralf Haferkamp
67ce1d51e2 Update auth-app service documentation 2025-03-20 15:09:47 +01:00
Ralf Haferkamp
cda94ce584 Start auth-app service by default
Co-Authored-By: André Duffeck <a.duffeck@opencloud.eu>
2025-03-20 15:09:47 +01:00
Ralf Haferkamp
fa34a073fd app-auth: Introduce config for the jsoncs3 backend 2025-03-20 15:09:47 +01:00
Andre Duffeck
036b3669c2 Merge pull request #435 from aduffeck/fix-ci
Always run CLI tests with the decomposed storage driver
2025-03-20 15:01:24 +01:00
André Duffeck
fe2e4b6c23 Bump reva 2025-03-20 12:45:46 +01:00
André Duffeck
d58f066376 Re-enable mtime related tests 2025-03-20 10:22:58 +01:00
André Duffeck
073e902626 Bump web in CI 2025-03-20 08:43:33 +01:00
Ralf Haferkamp
0e63721c13 Merge pull request #431 from opencloud-eu/fix-multiarch-image-org
fix: org name in multiarch dev build
2025-03-20 08:13:22 +01:00
Artur Neumann
547cfaee75 Merge pull request #440 from opencloud-eu/fixLocalSetup
fix local setup
2025-03-20 09:46:03 +05:45
André Duffeck
4569064400 Adapt expected failures 2025-03-19 18:24:03 +01:00
André Duffeck
5aa51dcfda Use a separate expected failures file for posix 2025-03-19 18:24:03 +01:00
Viktor Scharf
1d0fcfbca4 fix local setup 2025-03-19 18:10:23 +01:00
André Duffeck
31f24a65b3 Bump reva 2025-03-19 17:41:47 +01:00
André Duffeck
05c2b39281 Always run CLI tests with the decomposed storage driver 2025-03-19 17:41:47 +01:00
Ralf Haferkamp
c69a103e6a Merge pull request #439 from rhafer/woodpecker-disable-exclude
Disable the 'exclude' patterns on the path conditional for now
2025-03-19 17:39:54 +01:00
Michael Barz
f264a5aaac Merge pull request #434 from rhafer/rm-edition
Completely remove "edition" from capabilities
2025-03-19 17:26:02 +01:00
Ralf Haferkamp
1a54b4ce90 Disable the 'exclude' patterns on the path conditional for now
the 'exclude' feature
(https://woodpecker-ci.org/docs/usage/workflow-syntax#path) does not
seem to provide what we need. It seems to skip the build as soon as
one of the changed files matches an exclude pattern, while we only
want to skip if ALL changed files match. So skip this condition for now.
2025-03-19 17:06:31 +01:00
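The distinction the commit describes is essentially any() vs. all(): Woodpecker skips the build if any changed file matches an exclude pattern, while the desired behavior is to skip only when every changed file matches. In Python terms (file names and patterns are made up for illustration):

```python
from fnmatch import fnmatch

changed = ["docs/readme.md", "services/idp/main.go"]
excludes = ["docs/**", "*.md"]

def matches(path):
    # True if the path matches at least one exclude pattern.
    return any(fnmatch(path, pat) for pat in excludes)

# Woodpecker's behavior: skip as soon as one changed file matches.
skip_woodpecker = any(matches(f) for f in changed)
# Desired behavior: skip only when ALL changed files match.
skip_desired = all(matches(f) for f in changed)

assert skip_woodpecker is True   # docs/readme.md matches an exclude
assert skip_desired is False     # main.go does not, so the build should run
```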
Ralf Haferkamp
0c2da6e8fd Completely remove "edition" from capabilities
This removes the "edition" value from the capabilities. We don't need it
anymore.
2025-03-19 15:34:38 +01:00
Michael Barz
492dee61e8 Merge pull request #415 from opencloud-eu/runCS3ApiTestsInCI
run CS3 API tests in CI
2025-03-19 14:10:42 +01:00
Benedikt Kulmann
8c9725b46e fix: org name in multiarch dev build 2025-03-19 13:34:45 +01:00
Andre Duffeck
47e82a3d17 Merge pull request #237 from opencloud-eu/make-posixfs-the-default
[posix] change storage users default to posixfs
2025-03-19 12:58:43 +01:00
André Duffeck
a701f973de Run ci with posix by default, allow for switching to decomposed 2025-03-19 12:25:28 +01:00
André Duffeck
94e668b194 Sort env vars 2025-03-19 12:23:28 +01:00
André Duffeck
1f7ce9818d Bump reva 2025-03-19 12:23:28 +01:00
André Duffeck
255f45034a Add more missing options to posix fs 2025-03-19 12:23:28 +01:00
André Duffeck
57836949d8 Add missing config bits to the posix driver 2025-03-19 12:23:28 +01:00
André Duffeck
4d0d0f2a87 Add storage to the pipeline name 2025-03-19 12:23:26 +01:00
André Duffeck
38857f0b27 Bump reva 2025-03-19 12:21:58 +01:00
André Duffeck
38c521e54a Support OC_SPACES_MAX_QUOTA with the posix driver 2025-03-19 12:21:58 +01:00
Jörn Friedrich Dreyer
15cb8680ef change storage users default to posixfs
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-03-19 12:21:58 +01:00
Michael Barz
1175b47b04 Merge pull request #426 from opencloud-eu/dependabot/go_modules/golang.org/x/image-0.25.0
build(deps): bump golang.org/x/image from 0.24.0 to 0.25.0
2025-03-19 11:59:04 +01:00
Michael Barz
13f6299463 Merge pull request #427 from opencloud-eu/dev-docs
fix: fix path exclusion glob patterns
2025-03-19 11:57:40 +01:00
Artur Neumann
049b5e4c7e Merge pull request #430 from opencloud-eu/woodpecker-cleanup
Cleanup woodpecker
2025-03-19 16:37:10 +05:45
Ralf Haferkamp
bca032deec Merge pull request #421 from opencloud-eu/feat/clear-edition
feat: clear edition fallback
2025-03-19 11:33:13 +01:00
Michael Barz
8418384209 fix: correct exclude path glob patterns 2025-03-19 11:30:56 +01:00
prashant-gurung899
f323a5f1c0 cleanup woodpecker
Signed-off-by: prashant-gurung899 <prasantgrg777@gmail.com>
2025-03-19 15:48:35 +05:45
Artur Neumann
e527832ece run CS3 API tests in CI 2025-03-19 14:41:50 +05:45
Artur Neumann
bcac922216 Merge pull request #419 from opencloud-eu/runApiTestInCI
enable main API test suite to run in CI
2025-03-19 14:30:56 +05:45
Artur Neumann
88ac6e79e9 Merge pull request #416 from opencloud-eu/runWopiTestsInCI
Run wopi tests in CI
2025-03-19 14:30:42 +05:45
Ralf Haferkamp
c42dec66e9 Merge pull request #411 from opencloud-eu/fix-367
feat: add post logout redirect uris for mobile clients
2025-03-19 09:20:16 +01:00
dependabot[bot]
fb645ba8fb build(deps): bump golang.org/x/image from 0.24.0 to 0.25.0
Bumps [golang.org/x/image](https://github.com/golang/image) from 0.24.0 to 0.25.0.
- [Commits](https://github.com/golang/image/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/image
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-19 08:15:41 +00:00
Ralf Haferkamp
270858087f Merge pull request #425 from opencloud-eu/dependabot/go_modules/go.opentelemetry.io/contrib/zpages-0.60.0
build(deps): bump go.opentelemetry.io/contrib/zpages from 0.57.0 to 0.60.0
2025-03-19 09:13:02 +01:00
Artur Neumann
cb5254084f Merge pull request #413 from opencloud-eu/test/cliCommands-CI
Run `cliCommands` tests pipeline in CI
2025-03-19 13:57:08 +05:45
prashant-gurung899
5447743481 run cliCommands tests in CI
Signed-off-by: prashant-gurung899 <prasantgrg777@gmail.com>
2025-03-19 12:12:48 +05:45
Artur Neumann
a48750a1a6 enable main API test suite to run in CI 2025-03-19 11:19:35 +05:45
Artur Neumann
6d75ab58b9 run WOPI tests in CI 2025-03-19 09:30:35 +05:45
dependabot[bot]
ec30eae0cc build(deps): bump go.opentelemetry.io/contrib/zpages
Bumps [go.opentelemetry.io/contrib/zpages](https://github.com/open-telemetry/opentelemetry-go-contrib) from 0.57.0 to 0.60.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go-contrib/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go-contrib/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go-contrib/compare/zpages/v0.57.0...zpages/v0.60.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/contrib/zpages
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-18 15:03:17 +00:00
Benedikt Kulmann
cceb574950 Merge pull request #422 from opencloud-eu/bump-version-v1.1.0
chore: bump version to v1.1.0
2025-03-18 16:01:39 +01:00
Benedikt Kulmann
431963b2e0 chore: bump version to v1.1.0 2025-03-18 14:36:04 +01:00
Jannik Stehle
08bba95428 feat: clear edition fallback
Clears the `edition` fallback to "Community" in the capabilities. We want this to be an empty string without any fallback.
2025-03-18 14:12:02 +01:00
Benedikt Kulmann
8b001a671e Merge pull request #420 from opencloud-eu/binary-release
feat: add release build step
2025-03-18 14:12:00 +01:00
Michael Barz
02c1ff9bee feat: add release build step 2025-03-18 13:55:17 +01:00
Michael Barz
205e12c516 Merge pull request #417 from opencloud-eu/chore/bump-web-v2.0.0
chore: bump web to v2.0.0
2025-03-18 12:31:59 +01:00
Michael Barz
c6a8dde4c7 Merge pull request #418 from opencloud-eu/optimize-docker-build
feat: optimize docker build
2025-03-18 12:15:36 +01:00
Michael Barz
29f701efa2 feat: optimize docker build 2025-03-18 12:15:16 +01:00
Jannik Stehle
b40f4ba398 chore: bump web to v2.0.0 2025-03-18 10:53:43 +01:00
Viktor Scharf
49a44fd4bc Merge pull request #405 from opencloud-eu/runTestwithPosix
run test with posix
2025-03-18 10:48:36 +01:00
Viktor Scharf
a7b0911363 run test with posix 2025-03-18 09:55:46 +01:00
Artur Neumann
4e8cad8df3 fix finding existing caches in CI (#381) 2025-03-18 13:45:33 +05:45
Ralf Haferkamp
1a48cda106 Merge pull request #410 from opencloud-eu/dependabot/npm_and_yarn/services/idp/babel/core-7.26.10
build(deps-dev): bump @babel/core from 7.26.9 to 7.26.10 in /services/idp
2025-03-18 08:13:19 +01:00
Ralf Haferkamp
d03bcf784a Merge pull request #412 from opencloud-eu/dependabot/go_modules/github.com/open-policy-agent/opa-1.2.0
build(deps): bump github.com/open-policy-agent/opa from 1.1.0 to 1.2.0
2025-03-18 08:12:50 +01:00
Ralf Haferkamp
1f259397bc Merge pull request #374 from opencloud-eu/dependabot/go_modules/github.com/riandyrn/otelchi-0.12.1
build(deps): bump github.com/riandyrn/otelchi from 0.12.0 to 0.12.1
2025-03-18 08:11:16 +01:00
Artur Neumann
3655b04258 fix finding existing caches in CI 2025-03-18 12:52:10 +05:45
Artur Neumann
648a7b8a59 Merge pull request #380 from opencloud-eu/enableMoreE2Etests
[full-ci] run more E2E tests in CI
2025-03-18 09:54:48 +05:45
dependabot[bot]
871fdd15d7 build(deps): bump github.com/open-policy-agent/opa from 1.1.0 to 1.2.0
Bumps [github.com/open-policy-agent/opa](https://github.com/open-policy-agent/opa) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/open-policy-agent/opa/releases)
- [Changelog](https://github.com/open-policy-agent/opa/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-policy-agent/opa/compare/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: github.com/open-policy-agent/opa
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 17:29:55 +00:00
Michael Barz
cff4c78e8e feat: add post logout redirect uris for mobile clients 2025-03-17 17:46:18 +01:00
dependabot[bot]
4275ba2dcd build(deps-dev): bump @babel/core in /services/idp
Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.26.9 to 7.26.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.10/packages/babel-core)

---
updated-dependencies:
- dependency-name: "@babel/core"
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 16:43:48 +00:00
Ralf Haferkamp
98bce5e215 Merge pull request #409 from opencloud-eu/dependabot/npm_and_yarn/services/idp/babel-loader-10.0.0
build(deps-dev): bump babel-loader from 9.2.1 to 10.0.0 in /services/idp
2025-03-17 17:42:06 +01:00
dependabot[bot]
0d48832f3c build(deps-dev): bump babel-loader from 9.2.1 to 10.0.0 in /services/idp
Bumps [babel-loader](https://github.com/babel/babel-loader) from 9.2.1 to 10.0.0.
- [Release notes](https://github.com/babel/babel-loader/releases)
- [Changelog](https://github.com/babel/babel-loader/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel-loader/compare/v9.2.1...v10.0.0)

---
updated-dependencies:
- dependency-name: babel-loader
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 15:07:19 +00:00
Ralf Haferkamp
43c3b814cc Merge pull request #401 from rhafer/posixfs-template-userid
posixfs: Use userid as the foldername for personal space
2025-03-17 15:54:50 +01:00
Viktor Scharf
19ccfe2678 Merge pull request #386 from opencloud-eu/removeExpectedToFailDublications
[full-ci] delete duplicated lines from expected-failures
2025-03-17 15:34:44 +01:00
Viktor Scharf
1183f7d274 Merge pull request #407 from opencloud-eu/revaBump2.28
bump reva 2.28
2025-03-17 15:29:09 +01:00
Viktor Scharf
e15989e993 bump reva 2.28 2025-03-17 14:49:17 +01:00
Artur Neumann
e70ff761e6 fix expected to fail files format 2025-03-17 18:56:56 +05:45
Artur Neumann
0d505db241 move tests that fail in both with/without remote.php 2025-03-17 18:39:02 +05:45
Artur Neumann
3a48c54618 adjust / fix lines in expected to fail that were forgotten 2025-03-17 18:36:28 +05:45
Artur Neumann
8ef69ed68b [full-ci] delete duplicated lines from expected-failures 2025-03-17 18:13:13 +05:45
Ralf Haferkamp
90328a7ed1 posixfs: Use userid as the foldername for personal space
This avoids losing the user's personal space after renaming the user.

Closes: #192
2025-03-17 10:54:52 +01:00
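Keying the on-disk folder by a stable user ID instead of the mutable username means a rename no longer changes the path. A small Python sketch of the idea (paths and field names are illustrative):

```python
import os

def space_path(root, user):
    # Use the immutable userid, not the username, as the folder name:
    # renaming the user then keeps pointing at the same directory.
    return os.path.join(root, "users", user["id"])

user = {"id": "4c510ada-c86b-4815-8820-42cdf82c3d51", "username": "einstein"}
before = space_path("/var/lib/opencloud", user)
user["username"] = "albert"  # the user gets renamed
after = space_path("/var/lib/opencloud", user)

assert before == after  # the personal space is not lost on rename
```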
dependabot[bot]
e4e7dffdb7 build(deps): bump github.com/riandyrn/otelchi from 0.12.0 to 0.12.1
Bumps [github.com/riandyrn/otelchi](https://github.com/riandyrn/otelchi) from 0.12.0 to 0.12.1.
- [Release notes](https://github.com/riandyrn/otelchi/releases)
- [Changelog](https://github.com/riandyrn/otelchi/blob/master/CHANGELOG.md)
- [Commits](https://github.com/riandyrn/otelchi/compare/v0.12.0...v0.12.1)

---
updated-dependencies:
- dependency-name: github.com/riandyrn/otelchi
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-13 14:28:48 +00:00
427 changed files with 41131 additions and 5727 deletions


@@ -1,2 +1 @@
_extends: gh-labels


@@ -110,3 +110,15 @@ debug-linux-docker-amd64: release-dirs
-ldflags '-extldflags "-static" $(DEBUG_LDFLAGS) $(DOCKER_LDFLAGS)' \
-o '$(DIST)/binaries/$(EXECUTABLE)-linux-amd64' \
./cmd/$(NAME)
debug-linux-docker-arm64: release-dirs
GOOS=linux \
GOARCH=arm64 \
go build \
-gcflags="all=-N -l" \
-tags 'netgo $(TAGS)' \
-buildmode=exe \
-trimpath \
-ldflags '-extldflags "-static" $(DEBUG_LDFLAGS) $(DOCKER_LDFLAGS)' \
-o '$(DIST)/binaries/$(EXECUTABLE)-linux-arm64' \
./cmd/$(NAME)

.vscode/launch.json vendored

@@ -101,6 +101,9 @@
"APP_PROVIDER_GRPC_ADDR": "127.0.0.1:10164",
"APP_REGISTRY_DEBUG_ADDR": "127.0.0.1:10243",
"APP_REGISTRY_GRPC_ADDR": "127.0.0.1:10242",
"AUTH_APP_DEBUG_ADDR": "127.0.0.1:10245",
"AUTH_APP_GRPC_ADDR": "127.0.0.1:10246",
"AUTH_APP_HTTP_ADDR": "127.0.0.1:10247",
"AUTH_BASIC_DEBUG_ADDR": "127.0.0.1:10147",
"AUTH_BASIC_GRPC_ADDR": "127.0.0.1:10146",
"AUTH_MACHINE_DEBUG_ADDR": "127.0.0.1:10167",


@@ -1,3 +1,3 @@
# The test runner source for UI tests
WEB_COMMITID=74c8df4f64d9bf957a0652fb92e01529efa3c0b3
WEB_COMMITID=a85b8b2f0b22d2e8fa133fee2dae8cc866c0c8c2
WEB_BRANCH=main


@@ -28,14 +28,14 @@ OC_CI_GOLANG = "docker.io/golang:1.24"
OC_CI_NODEJS = "owncloudci/nodejs:%s"
OC_CI_PHP = "owncloudci/php:%s"
OC_CI_WAIT_FOR = "owncloudci/wait-for:latest"
OC_CS3_API_VALIDATOR = "owncloud/cs3api-validator:0.2.1"
OC_CS3_API_VALIDATOR = "opencloudeu/cs3api-validator:latest"
OC_LITMUS = "owncloudci/litmus:latest"
OC_UBUNTU = "owncloud/ubuntu:20.04"
ONLYOFFICE_DOCUMENT_SERVER = "onlyoffice/documentserver:7.5.1"
PLUGINS_CODACY = "plugins/codacy:1"
PLUGINS_DOCKER_BUILDX = "woodpeckerci/plugin-docker-buildx:latest"
PLUGINS_GH_PAGES = "plugins/gh-pages:1"
PLUGINS_GITHUB_RELEASE = "plugins/github-release:1"
PLUGINS_GITHUB_RELEASE = "woodpeckerci/plugin-release"
PLUGINS_GIT_ACTION = "plugins/git-action:1"
PLUGINS_GIT_PUSH = "appleboy/drone-git-push"
PLUGINS_MANIFEST = "plugins/manifest:1"
@@ -44,6 +44,7 @@ PLUGINS_S3_CACHE = "plugins/s3-cache:1"
PLUGINS_SLACK = "plugins/slack:1"
REDIS = "redis:6-alpine"
SONARSOURCE_SONAR_SCANNER_CLI = "sonarsource/sonar-scanner-cli:11.0"
READY_RELEASE_GO = "woodpeckerci/plugin-ready-release-go:latest"
DEFAULT_PHP_VERSION = "8.2"
DEFAULT_NODEJS_VERSION = "20"
@@ -53,13 +54,14 @@ dirs = {
"web": "/woodpecker/src/github.com/opencloud-eu/opencloud/webTestRunner",
"zip": "/woodpecker/src/github.com/opencloud-eu/opencloud/zip",
"webZip": "/woodpecker/src/github.com/opencloud-eu/opencloud/zip/web.tar.gz",
"webPnpmZip": "/woodpecker/src/github.com/opencloud-eu/opencloud/zip/pnpm-store.tar.gz",
"webPnpmZip": "/woodpecker/src/github.com/opencloud-eu/opencloud/zip/web-pnpm.tar.gz",
"baseGo": "/go/src/github.com/opencloud-eu/opencloud",
"gobinTar": "go-bin.tar.gz",
"gobinTarPath": "/go/src/github.com/opencloud-eu/opencloud/go-bin.tar.gz",
"opencloudConfig": "tests/config/woodpecker/opencloud-config.json",
"ocis": "/woodpecker/src/github.com/opencloud-eu/opencloud/srv/app/tmp/ocis",
"ocisRevaDataRoot": "/woodpecker/src/github.com/opencloud-eu/opencloud/srv/app/tmp/ocis/owncloud/data",
"opencloudRevaDataRoot": "/woodpecker/src/github.com/opencloud-eu/opencloud/srv/app/tmp/ocis/owncloud/data",
"multiServiceOcBaseDataPath": "/woodpecker/src/github.com/opencloud-eu/opencloud/multiServiceData",
"ocWrapper": "/woodpecker/src/github.com/opencloud-eu/opencloud/tests/ocwrapper",
"bannedPasswordList": "tests/config/woodpecker/banned-password-list.txt",
"ocmProviders": "tests/config/woodpecker/providers.json",
@@ -79,10 +81,10 @@ OC_FED_DOMAIN = "%s:10200" % FED_OC_SERVER_NAME
# configuration
config = {
"cs3ApiTests": {
"skip": True,
"skip": False,
},
"wopiValidatorTests": {
"skip": True,
"skip": False,
},
"k6LoadTests": {
"skip": True,
@@ -266,7 +268,7 @@ config = {
"suites": [
"apiCollaboration",
],
"skip": True,
"skip": False,
"collaborationServiceNeeded": True,
"extraServerEnvironment": {
"GATEWAY_GRPC_ADDR": "0.0.0.0:9142",
@@ -278,16 +280,12 @@ config = {
],
"skip": False,
"withRemotePhp": [True],
"extraServerEnvironment": {
"OC_ADD_RUN_SERVICES": "auth-app",
"PROXY_ENABLE_APP_AUTH": True,
},
},
"cliCommands": {
"suites": [
"cliCommands",
],
"skip": True,
"skip": False,
"withRemotePhp": [True],
"antivirusNeeded": True,
"extraServerEnvironment": {
@@ -295,12 +293,13 @@ config = {
"ANTIVIRUS_CLAMAV_SOCKET": "tcp://clamav:3310",
"OC_ASYNC_UPLOADS": True,
"OC_ADD_RUN_SERVICES": "antivirus",
"STORAGE_USERS_DRIVER": "decomposed",
},
},
},
"apiTests": {
"numberOfParts": 7,
"skip": True,
"skip": False,
"skipExceptParts": [],
},
"e2eTests": {
@@ -310,14 +309,14 @@ config = {
"xsuites": ["search", "app-provider", "app-provider-onlyOffice", "app-store", "keycloak", "oidc", "ocm"], # suites to skip
},
"search": {
"skip": True,
"skip": False,
"suites": ["search"], # suites to run
"tikaNeeded": True,
},
},
"e2eMultiService": {
"testSuites": {
"skip": True,
"skip": False,
"suites": [
"smoke",
"shares",
@@ -344,6 +343,8 @@ config = {
"codestyle": True,
}
GRAPH_AVAILABLE_ROLES = "b1e2218d-eef8-4d4c-b82d-0f1a1b48f3b5,a8d5fe5e-96e3-418d-825b-534dbdf22b99,fb6c3e19-e378-47e5-b277-9732f9de6e21,58c63c02-1d89-4572-916a-870abc5a1b7d,2d00ce52-1fc2-4dbc-8b95-a73b73395f5a,1c996275-f1c9-4e71-abdf-a42f6495e960,312c0871-5ef7-4b3a-85b6-0e4074c64049,aa97fe03-7980-45ac-9e50-b325749fd7e6,63e64e19-8d43-42ec-a738-2b6af2610efa"
# workspace for pipeline to cache Go dependencies between steps of a pipeline
# to be used in combination with stepVolumeGo
workspace = \
@@ -395,7 +396,7 @@ def getPipelineNames(pipelines = []):
"""getPipelineNames returns names of pipelines as a string array
Args:
pipelines: array of drone pipelines
pipelines: array of woodpecker pipelines
Returns:
names of the given pipelines as string array
@@ -406,10 +407,10 @@ def getPipelineNames(pipelines = []):
return names
def main(ctx):
"""main is the entrypoint for drone
"""main is the entrypoint for woodpecker
Args:
ctx: drone passes a context with information which the pipeline can be adapted to
ctx: woodpecker passes a context with information which the pipeline can be adapted to
Returns:
none
@@ -418,7 +419,7 @@ def main(ctx):
pipelines = []
build_release_helpers = \
changelog() + \
readyReleaseGo() + \
docs()
build_release_helpers.append(
@@ -508,7 +509,7 @@ def testOpencloudAndUploadResults(ctx):
# FIXME: RE-ENABLE THIS ASAP!!! #
######################################################################
#security_scan = scanOcis(ctx)
#security_scan = scanOpencloud(ctx)
#return [security_scan, pipeline, scan_result_upload]
return [pipeline]
@@ -518,11 +519,15 @@ def testPipelines(ctx):
if config["litmus"]:
pipelines += litmus(ctx, "decomposed")
storage = "posix"
if "[decomposed]" in ctx.build.title.lower():
storage = "decomposed"
if "skip" not in config["cs3ApiTests"] or not config["cs3ApiTests"]["skip"]:
pipelines.append(cs3ApiTests(ctx, "ocis", "default"))
pipelines.append(cs3ApiTests(ctx, storage, "default"))
if "skip" not in config["wopiValidatorTests"] or not config["wopiValidatorTests"]["skip"]:
pipelines.append(wopiValidatorTests(ctx, "ocis", "builtin", "default"))
pipelines.append(wopiValidatorTests(ctx, "ocis", "cs3", "default"))
pipelines.append(wopiValidatorTests(ctx, storage, "builtin", "default"))
pipelines.append(wopiValidatorTests(ctx, storage, "cs3", "default"))
pipelines += localApiTestPipeline(ctx)
@@ -552,6 +557,9 @@ def getGoBinForTesting(ctx):
"exclude": skipIfUnchanged(ctx, "unit-tests"),
},
},
{
"event": "tag",
},
],
"workspace": workspace,
}]
@@ -559,15 +567,8 @@ def getGoBinForTesting(ctx):
def checkGoBinCache():
return [{
"name": "check-go-bin-cache",
"image": OC_UBUNTU,
"environment": {
"CACHE_ENDPOINT": {
"from_secret": "cache_s3_server",
},
"CACHE_BUCKET": {
"from_secret": "cache_s3_bucket",
},
},
"image": MINIO_MC,
"environment": MINIO_MC_ENV,
"commands": [
"bash -x %s/tests/config/woodpecker/check_go_bin_cache.sh %s %s" % (dirs["baseGo"], dirs["baseGo"], dirs["gobinTar"]),
],
@@ -579,6 +580,8 @@ def cacheGoBin():
"name": "bingo-get",
"image": OC_CI_GOLANG,
"commands": [
". ./.env",
"if $BIN_CACHE_FOUND; then exit 0; fi",
"make bingo-update",
],
"environment": CI_HTTP_PROXY_ENV,
@@ -587,6 +590,8 @@ def cacheGoBin():
"name": "archive-go-bin",
"image": OC_UBUNTU,
"commands": [
". ./.env",
"if $BIN_CACHE_FOUND; then exit 0; fi",
"tar -czvf %s /go/bin" % dirs["gobinTarPath"],
],
},
@@ -595,6 +600,8 @@ def cacheGoBin():
"image": MINIO_MC,
"environment": MINIO_MC_ENV,
"commands": [
". ./.env",
"if $BIN_CACHE_FOUND; then exit 0; fi",
# .bingo folder will change after 'bingo-get'
# so get the stored hash of a .bingo folder
"BINGO_HASH=$(cat %s/.bingo_hash)" % dirs["baseGo"],
@@ -688,7 +695,7 @@ def testOpencloud(ctx):
"workspace": workspace,
}
def scanOcis(ctx):
def scanOpencloud(ctx):
steps = restoreGoBinCache() + makeGoGenerate("") + [
{
"name": "govulncheck",
@@ -886,9 +893,9 @@ def localApiTestPipeline(ctx):
if ctx.build.event == "cron" or "full-ci" in ctx.build.title.lower():
with_remote_php.append(False)
storages = ["decomposed"]
if "posix" in ctx.build.title.lower():
storages = ["posix"]
storages = ["posix"]
if "[decomposed]" in ctx.build.title.lower():
storages = ["decomposed"]
defaults = {
"suites": {},
@@ -915,7 +922,7 @@ def localApiTestPipeline(ctx):
for storage in params["storages"]:
for run_with_remote_php in params["withRemotePhp"]:
pipeline = {
"name": "%s-%s%s" % ("CLI" if name.startswith("cli") else "API", name, "-withoutRemotePhp" if not run_with_remote_php else ""),
"name": "%s-%s%s-%s" % ("CLI" if name.startswith("cli") else "API", name, "-withoutRemotePhp" if not run_with_remote_php else "", storage),
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
(tikaService() if params["tikaNeeded"] else []) +
(waitForServices("online-offices", ["collabora:9980", "onlyoffice:443", "fakeoffice:8080"]) if params["collaborationServiceNeeded"] else []) +
@@ -924,7 +931,7 @@ def localApiTestPipeline(ctx):
(waitForEmailService() if params["emailNeeded"] else []) +
(opencloudServer(storage, params["accounts_hash_difficulty"], deploy_type = "federation", extra_server_environment = params["extraServerEnvironment"]) if params["federationServer"] else []) +
((wopiCollaborationService("fakeoffice") + wopiCollaborationService("collabora") + wopiCollaborationService("onlyoffice")) if params["collaborationServiceNeeded"] else []) +
(ocisHealthCheck("wopi", ["wopi-collabora:9304", "wopi-onlyoffice:9304", "wopi-fakeoffice:9304"]) if params["collaborationServiceNeeded"] else []) +
(openCloudHealthCheck("wopi", ["wopi-collabora:9304", "wopi-onlyoffice:9304", "wopi-fakeoffice:9304"]) if params["collaborationServiceNeeded"] else []) +
localApiTests(ctx, name, params["suites"], storage, params["extraEnvironment"], run_with_remote_php) +
logRequests(),
"services": (emailService() if params["emailNeeded"] else []) +
@@ -981,7 +988,7 @@ def localApiTests(ctx, name, suites, storage = "decomposed", extra_environment =
def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
return {
"name": "cs3ApiTests",
"name": "cs3ApiTests-%s" % storage,
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
opencloudServer(storage, accounts_hash_difficulty, [], [], "cs3api_validator") +
[
@@ -990,6 +997,8 @@ def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
"image": OC_CS3_API_VALIDATOR,
"environment": {},
"commands": [
"apk --no-cache add curl",
"curl '%s/graph/v1.0/users' -k -X POST --data-raw '{\"onPremisesSamAccountName\":\"marie\",\"displayName\":\"Marie Curie\",\"mail\":\"marie@opencloud.eu\",\"passwordProfile\":{\"password\":\"radioactivity\"}}' -uadmin:admin" % OC_URL,
"/usr/bin/cs3api-validator /var/lib/cs3api-validator --endpoint=%s:9142" % OC_SERVER_NAME,
],
},
@@ -1045,7 +1054,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
]
else:
extra_server_environment = {
"OCIS_EXCLUDE_RUN_SERVICES": "app-provider",
"OC_EXCLUDE_RUN_SERVICES": "app-provider",
}
wopiServer = wopiCollaborationService("fakeoffice")
@@ -1083,9 +1092,9 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
})
return {
"name": "wopiValidatorTests-%s" % wopiServerType,
"name": "wopiValidatorTests-%s-%s" % (wopiServerType, storage),
"services": fakeOffice(),
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
fakeOffice() +
waitForServices("fake-office", ["fakeoffice:8080"]) +
opencloudServer(storage, accounts_hash_difficulty, deploy_type = "wopi_validator", extra_server_environment = extra_server_environment) +
wopiServer +
@@ -1126,13 +1135,16 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
],
}
def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = False, storage = "posix", accounts_hash_difficulty = 4):
filterTags = "~@skipOnGraph&&~@skipOnOcis-%s-Storage" % ("OC" if storage == "owncloud" else "OCIS")
def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = False, accounts_hash_difficulty = 4):
storage = "posix"
if "[decomposed]" in ctx.build.title.lower():
storage = "decomposed"
filterTags = "~@skipOnGraph&&~@skipOnOpencloud-%s-Storage" % storage
test_dir = "%s/tests/acceptance" % dirs["base"]
expected_failures_file = "%s/expected-failures-API-on-decomposed-storage.md" % (test_dir)
expected_failures_file = "%s/expected-failures-API-on-%s-storage.md" % (test_dir, storage)
return {
"name": "Core-API-Tests-%s%s" % (part_number, "-withoutRemotePhp" if not with_remote_php else ""),
"name": "Core-API-Tests-%s%s-%s" % (part_number, "-withoutRemotePhp" if not with_remote_php else "", storage),
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
opencloudServer(storage, accounts_hash_difficulty, with_wrapper = True) +
[
@@ -1141,7 +1153,7 @@ def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = Fa
"image": OC_CI_PHP % DEFAULT_PHP_VERSION,
"environment": {
"TEST_SERVER_URL": OC_URL,
"OCIS_REVA_DATA_ROOT": "%s" % (dirs["ocisRevaDataRoot"] if storage == "owncloud" else ""),
"OC_REVA_DATA_ROOT": "%s" % (dirs["opencloudRevaDataRoot"] if storage == "owncloud" else ""),
"SEND_SCENARIO_LINE_REFERENCES": True,
"STORAGE_DRIVER": storage,
"BEHAT_FILTER_TAGS": filterTags,
@@ -1150,7 +1162,7 @@ def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = Fa
"ACCEPTANCE_TEST_TYPE": "core-api",
"EXPECTED_FAILURES_FILE": expected_failures_file,
"UPLOAD_DELETE_WAIT_TIME": "1" if storage == "owncloud" else 0,
"OCIS_WRAPPER_URL": "http://%s:5200" % OC_SERVER_NAME,
"OC_WRAPPER_URL": "http://%s:5200" % OC_SERVER_NAME,
"WITH_REMOTE_PHP": with_remote_php,
},
"commands": [
@@ -1190,13 +1202,10 @@ def apiTests(ctx):
"withRemotePhp": with_remote_php,
}
if "posix" in ctx.build.title.lower():
storage = "posix"
for runPart in range(1, config["apiTests"]["numberOfParts"] + 1):
for run_with_remote_php in defaults["withRemotePhp"]:
if (not debugPartsEnabled or (debugPartsEnabled and runPart in debugParts)):
pipelines.append(coreApiTests(ctx, runPart, config["apiTests"]["numberOfParts"], run_with_remote_php, storage))
pipelines.append(coreApiTests(ctx, runPart, config["apiTests"]["numberOfParts"], run_with_remote_php))
return pipelines
@@ -1212,9 +1221,8 @@ def e2eTestPipeline(ctx):
extra_server_environment = {
"OC_PASSWORD_POLICY_BANNED_PASSWORDS_LIST": "%s" % dirs["bannedPasswordList"],
"OC_SHOW_USER_EMAIL_IN_RESULTS": True,
"FRONTEND_OCS_ENABLE_DENIALS": True,
# Needed for enabling all roles
"GRAPH_AVAILABLE_ROLES": "b1e2218d-eef8-4d4c-b82d-0f1a1b48f3b5,a8d5fe5e-96e3-418d-825b-534dbdf22b99,fb6c3e19-e378-47e5-b277-9732f9de6e21,58c63c02-1d89-4572-916a-870abc5a1b7d,2d00ce52-1fc2-4dbc-8b95-a73b73395f5a,1c996275-f1c9-4e71-abdf-a42f6495e960,312c0871-5ef7-4b3a-85b6-0e4074c64049,aa97fe03-7980-45ac-9e50-b325749fd7e6,63e64e19-8d43-42ec-a738-2b6af2610efa",
"GRAPH_AVAILABLE_ROLES": "%s" % GRAPH_AVAILABLE_ROLES,
}
e2e_trigger = [
@@ -1242,6 +1250,10 @@ def e2eTestPipeline(ctx):
if (ctx.build.event == "tag"):
return pipelines
storage = "posix"
if "[decomposed]" in ctx.build.title.lower():
storage = "decomposed"
for name, suite in config["e2eTests"].items():
if "skip" in suite and suite["skip"]:
continue
@@ -1265,7 +1277,7 @@ def e2eTestPipeline(ctx):
restoreWebCache() + \
restoreWebPnpmCache() + \
(tikaService() if params["tikaNeeded"] else []) + \
opencloudServer(extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"])
opencloudServer(storage, extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"])
step_e2e = {
"name": "e2e-tests",
@@ -1296,7 +1308,7 @@ def e2eTestPipeline(ctx):
"bash run-e2e.sh %s --run-part %d" % (e2e_args, run_part),
]
pipelines.append({
"name": "e2e-tests-%s-%s" % (name, run_part),
"name": "e2e-tests-%s-%s-%s" % (name, run_part, storage),
"steps": steps_before + [run_e2e] + steps_after,
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"when": e2e_trigger,
@@ -1304,7 +1316,7 @@ def e2eTestPipeline(ctx):
else:
step_e2e["commands"].append("bash run-e2e.sh %s" % e2e_args)
pipelines.append({
"name": "e2e-tests-%s" % name,
"name": "e2e-tests-%s-%s" % (name, storage),
"steps": steps_before + [step_e2e] + steps_after,
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"when": e2e_trigger,
@@ -1322,20 +1334,18 @@ def multiServiceE2ePipeline(ctx):
"tikaNeeded": False,
}
e2e_trigger = {
"when": [
{
"event": ["push", "manual"],
"branch": "main",
e2e_trigger = [
{
"event": ["push", "manual"],
"branch": "main",
},
{
"event": "pull_request",
"path": {
"exclude": skipIfUnchanged(ctx, "e2e-tests"),
},
{
"event": "pull_request",
"path": {
"exclude": skipIfUnchanged(ctx, "e2e-tests"),
},
},
],
}
},
]
if ("skip-e2e" in ctx.build.title.lower()):
return pipelines
@@ -1344,33 +1354,37 @@ def multiServiceE2ePipeline(ctx):
if (not "full-ci" in ctx.build.title.lower() and ctx.build.event != "cron"):
return pipelines
storage = "decomposed"
if "posix" in ctx.build.title.lower():
storage = "posix"
storage = "posix"
if "[decomposed]" in ctx.build.title.lower():
storage = "decomposed"
extra_server_environment = {
"OCIS_PASSWORD_POLICY_BANNED_PASSWORDS_LIST": "%s" % dirs["bannedPasswordList"],
"OCIS_JWT_SECRET": "some-ocis-jwt-secret",
"OCIS_SERVICE_ACCOUNT_ID": "service-account-id",
"OCIS_SERVICE_ACCOUNT_SECRET": "service-account-secret",
"OCIS_EXCLUDE_RUN_SERVICES": "storage-users",
"OCIS_GATEWAY_GRPC_ADDR": "0.0.0.0:9142",
"OC_PASSWORD_POLICY_BANNED_PASSWORDS_LIST": "%s" % dirs["bannedPasswordList"],
"OC_JWT_SECRET": "some-opencloud-jwt-secret",
"OC_SERVICE_ACCOUNT_ID": "service-account-id",
"OC_SERVICE_ACCOUNT_SECRET": "service-account-secret",
"OC_EXCLUDE_RUN_SERVICES": "storage-users",
"OC_GATEWAY_GRPC_ADDR": "0.0.0.0:9142",
"SETTINGS_GRPC_ADDR": "0.0.0.0:9191",
"GATEWAY_STORAGE_USERS_MOUNT_ID": "storage-users-id",
"OC_SHOW_USER_EMAIL_IN_RESULTS": True,
# Needed for enabling all roles
"GRAPH_AVAILABLE_ROLES": "%s" % GRAPH_AVAILABLE_ROLES,
}
storage_users_environment = {
"OCIS_CORS_ALLOW_ORIGINS": "%s,https://%s:9201" % (OC_URL, OC_SERVER_NAME),
"STORAGE_USERS_JWT_SECRET": "some-ocis-jwt-secret",
"OC_CORS_ALLOW_ORIGINS": "%s,https://%s:9201" % (OC_URL, OC_SERVER_NAME),
"STORAGE_USERS_JWT_SECRET": "some-opencloud-jwt-secret",
"STORAGE_USERS_MOUNT_ID": "storage-users-id",
"STORAGE_USERS_SERVICE_ACCOUNT_ID": "service-account-id",
"STORAGE_USERS_SERVICE_ACCOUNT_SECRET": "service-account-secret",
"STORAGE_USERS_GATEWAY_GRPC_ADDR": "%s:9142" % OC_SERVER_NAME,
"STORAGE_USERS_EVENTS_ENDPOINT": "%s:9233" % OC_SERVER_NAME,
"STORAGE_USERS_DATA_GATEWAY_URL": "%s/data" % OC_URL,
"OCIS_CACHE_STORE": "nats-js-kv",
"OCIS_CACHE_STORE_NODES": "%s:9233" % OC_SERVER_NAME,
"OC_CACHE_STORE": "nats-js-kv",
"OC_CACHE_STORE_NODES": "%s:9233" % OC_SERVER_NAME,
"MICRO_REGISTRY_ADDRESS": "%s:9233" % OC_SERVER_NAME,
"OC_BASE_DATA_PATH": dirs["multiServiceOcBaseDataPath"],
}
storage_users1_environment = {
"STORAGE_USERS_GRPC_ADDR": "storageusers1:9157",
@@ -1390,13 +1404,9 @@ def multiServiceE2ePipeline(ctx):
for item in storage_users_environment:
storage_users2_environment[item] = storage_users_environment[item]
storage_volume = [{
"name": "storage",
"path": "/root/.ocis",
}]
storage_users_services = startOcisService("storage-users", "storageusers1", storage_users1_environment, storage_volume) + \
startOcisService("storage-users", "storageusers2", storage_users2_environment, storage_volume) + \
ocisHealthCheck("storage-users", ["storageusers1:9159", "storageusers2:9159"])
storage_users_services = startOpenCloudService("storage-users", "storageusers1", storage_users1_environment) + \
startOpenCloudService("storage-users", "storageusers2", storage_users2_environment) + \
openCloudHealthCheck("storage-users", ["storageusers1:9159", "storageusers2:9159"])
for _, suite in config["e2eMultiService"].items():
if "skip" in suite and suite["skip"]:
@@ -1425,7 +1435,7 @@ def multiServiceE2ePipeline(ctx):
"name": "e2e-tests",
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"environment": {
"BASE_URL_OCIS": OC_DOMAIN,
"OC_BASE_URL": OC_DOMAIN,
"HEADLESS": True,
"RETRY": "1",
},
@@ -1433,14 +1443,15 @@ def multiServiceE2ePipeline(ctx):
"cd %s/tests/e2e" % dirs["web"],
"bash run-e2e.sh %s" % e2e_args,
],
}] + logTracingResults()
}]
# + logTracingResults()
# uploadTracingResult(ctx) + \
pipelines.append({
"name": "e2e-tests-multi-service",
"steps": steps,
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"workspace": e2e_trigger,
"when": e2e_trigger,
})
return pipelines
@@ -1505,7 +1516,7 @@ def dockerReleases(ctx):
build_type = "daily"
# dockerhub repo
# - "owncloud/ocis-rolling"
# - "opencloudeu/opencloud-rolling"
repo = docker_repo_slug + "-rolling"
docker_repos.append(repo)
@@ -1556,13 +1567,20 @@ def dockerRelease(ctx, repo, build_type):
"image": PLUGINS_DOCKER_BUILDX,
"settings": {
"dry_run": True,
"platforms": "linux/amd64,linux/arm64",
"platforms": "linux/amd64", # do dry run only on the native platform
"repo": "%s,quay.io/%s" % (repo, repo),
"auto_tag": False if build_type == "daily" else True,
"tag": "daily" if build_type == "daily" else "",
"default_tag": "daily",
"dockerfile": "opencloud/docker/Dockerfile.multiarch",
"build_args": build_args,
"pull_image": False,
"http_proxy": {
"from_secret": "ci_http_proxy",
},
"https_proxy": {
"from_secret": "ci_http_proxy",
},
},
"when": [
{
@@ -1575,12 +1593,19 @@ def dockerRelease(ctx, repo, build_type):
"image": PLUGINS_DOCKER_BUILDX,
"settings": {
"repo": "%s,quay.io/%s" % (repo, repo),
"platforms": "linux/amd64,linux/arm64",
"platforms": "linux/amd64,linux/arm64", # we can add remote builders
"auto_tag": False if build_type == "daily" else True,
"tag": "daily" if build_type == "daily" else "",
"default_tag": "daily",
"dockerfile": "opencloud/docker/Dockerfile.multiarch",
"build_args": build_args,
"pull_image": False,
"http_proxy": {
"from_secret": "ci_http_proxy",
},
"https_proxy": {
"from_secret": "ci_http_proxy",
},
"logins": [
{
"registry": "https://index.docker.io/v1/",
@@ -1633,7 +1658,7 @@ def dockerRelease(ctx, repo, build_type):
def binaryReleases(ctx):
pipelines = []
depends_on = getPipelineNames(testOpencloudAndUploadResults(ctx) + testPipelines(ctx))
depends_on = getPipelineNames(getGoBinForTesting(ctx))
for os in config["binaryReleases"]["os"]:
pipelines.append(binaryRelease(ctx, os, depends_on))
@@ -1670,19 +1695,6 @@ def binaryRelease(ctx, arch, depends_on = []):
},
],
},
{
"name": "changelog",
"image": OC_CI_GOLANG,
"environment": CI_HTTP_PROXY_ENV,
"commands": [
"make changelog CHANGELOG_VERSION=%s" % ctx.build.ref.replace("refs/tags/v", ""),
],
"when": [
{
"event": "tag",
},
],
},
{
"name": "release",
"image": PLUGINS_GITHUB_RELEASE,
@@ -1691,10 +1703,9 @@ def binaryRelease(ctx, arch, depends_on = []):
"from_secret": "github_token",
},
"files": [
"ocis/dist/release/*",
"opencloud/dist/release/*",
],
"title": ctx.build.ref.replace("refs/tags/v", ""),
"note": "ocis/dist/CHANGELOG.md",
"overwrite": True,
"prerelease": len(ctx.build.ref.split("-")) > 1,
},
@@ -1764,19 +1775,6 @@ def licenseCheck(ctx):
"cd third-party-licenses && tar -czf ../third-party-licenses.tar.gz *",
],
},
{
"name": "changelog",
"image": OC_CI_GOLANG,
"environment": CI_HTTP_PROXY_ENV,
"commands": [
"make changelog CHANGELOG_VERSION=%s" % ctx.build.ref.replace("refs/tags/v", "").split("-")[0],
],
"when": [
{
"event": "tag",
},
],
},
{
"name": "release",
"image": PLUGINS_GITHUB_RELEASE,
@@ -1788,7 +1786,6 @@ def licenseCheck(ctx):
"third-party-licenses.tar.gz",
],
"title": ctx.build.ref.replace("refs/tags/v", ""),
"note": "ocis/dist/CHANGELOG.md",
"overwrite": True,
"prerelease": len(ctx.build.ref.split("-")) > 1,
},
@@ -1814,53 +1811,20 @@ def licenseCheck(ctx):
"workspace": workspace,
}
def changelog():
def readyReleaseGo():
return [{
"name": "changelog",
"name": "ready-release-go",
"steps": [
{
"name": "generate",
"image": OC_CI_GOLANG,
"environment": CI_HTTP_PROXY_ENV,
"commands": [
"make -C opencloud changelog",
],
},
{
"name": "diff",
"image": OC_CI_ALPINE,
"commands": [
"git diff",
],
},
{
"name": "output",
"image": OC_CI_ALPINE,
"commands": [
"cat CHANGELOG.md",
],
},
{
"name": "publish",
"image": PLUGINS_GIT_PUSH,
"name": "release-helper",
"image": READY_RELEASE_GO,
"settings": {
"branch": "main",
"remote": "ssh://git@github.com/%s.git" % repo_slug,
"commit": True,
"ssh_key": {
"from_secret": "ssh_key",
"git_email": "devops@opencloud.eu",
"forge_type": "github",
"forge_token": {
"from_secret": "github_token",
},
"commit_message": "Automated changelog update [skip ci]",
"author_email": "devops@opencloud.eu",
"author_name": "openclouders",
"rebase": True,
},
"when": [
{
"event": ["push", "manual"],
"branch": "main",
},
],
},
],
"when": [
@@ -1868,9 +1832,6 @@ def changelog():
"event": ["push", "manual"],
"branch": "main",
},
{
"event": "pull_request",
},
],
}]
@@ -2020,7 +1981,6 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volume
"OC_URL": OC_URL,
"OC_CONFIG_DIR": "/root/.opencloud/config", # needed for checking config later
"STORAGE_USERS_DRIVER": "%s" % (storage),
"STORAGE_USERS_ID_CACHE_STORE": "nats-js-kv",
"PROXY_ENABLE_BASIC_AUTH": True,
"WEB_UI_CONFIG_FILE": "%s/%s" % (dirs["base"], dirs["opencloudConfig"]),
"OC_LOG_LEVEL": "error",
@@ -2072,6 +2032,9 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volume
"WEBFINGER_DEBUG_ADDR": "0.0.0.0:9279",
}
if storage == "posix":
environment["STORAGE_USERS_ID_CACHE_STORE"] = "nats-js-kv"
if deploy_type == "":
environment["FRONTEND_OCS_ENABLE_DENIALS"] = True
@@ -2084,7 +2047,7 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volume
if deploy_type == "wopi_validator":
environment["GATEWAY_GRPC_ADDR"] = "0.0.0.0:9142" # make gateway available to wopi server
environment["APP_PROVIDER_EXTERNAL_ADDR"] = "com.owncloud.api.app-provider"
environment["APP_PROVIDER_EXTERNAL_ADDR"] = "eu.opencloud.api.app-provider"
environment["APP_PROVIDER_DRIVER"] = "wopi"
environment["APP_PROVIDER_WOPI_APP_NAME"] = "FakeOffice"
environment["APP_PROVIDER_WOPI_APP_URL"] = "http://fakeoffice:8080"
@@ -2115,6 +2078,7 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volume
wrapper_commands = [
"make -C %s build" % dirs["ocWrapper"],
"env | sort",
"%s/bin/ocwrapper serve --bin %s --url %s --admin-username admin --admin-password admin" % (dirs["ocWrapper"], dirs["opencloudBin"], environment["OC_URL"]),
]
@@ -2158,15 +2122,14 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volume
return steps
def startOcisService(service = None, name = None, environment = {}, volumes = []):
def startOpenCloudService(service = None, name = None, environment = {}):
"""
Starts an OCIS service in a detached container.
Starts an OpenCloud service in a detached container.
Args:
service (str): The name of the service to start.
name (str): The name of the container.
environment (dict): The environment variables to set in the container.
volumes (list): The volumes to mount in the container.
Returns:
list: A list of pipeline steps to start the service.
@@ -2197,7 +2160,7 @@ def redis():
},
]
def redisForOCStorage(storage = "ocis"):
def redisForOCStorage(storage = "decomposed"):
if storage == "owncloud":
return redis()
else:
@@ -2216,25 +2179,29 @@ def build():
]
def skipIfUnchanged(ctx, type):
## FIXME: the 'exclude' feature (https://woodpecker-ci.org/docs/usage/workflow-syntax#path) does not seem to provide
# what we need. It seems to skip the build as soon as one of the changed files matches an exclude pattern; we only
# want to skip if ALL changed files match. So skip this condition for now:
return []
if "full-ci" in ctx.build.title.lower() or ctx.build.event == "tag" or ctx.build.event == "cron":
return []
base = [
"^.github/.*",
"^.vscode/.*",
"^changelog/.*",
"^docs/.*",
"^deployments/.*",
".github/**",
".vscode/**",
"docs/**",
"deployments/**",
"CHANGELOG.md",
"CONTRIBUTING.md",
"LICENSE",
"README.md",
]
unit = [
".*_test.go$",
"**/*_test.go",
]
acceptance = [
"^tests/acceptance/.*",
"tests/acceptance/**",
]
skip = []
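The FIXME above disables `skipIfUnchanged` because Woodpecker's `exclude` paths skip a build as soon as any changed file matches, while the desired semantics are to skip only when every changed file matches. Those semantics could be sketched as a small helper (hypothetical, not in the pipeline) using glob patterns like the ones in the lists above:

```python
from fnmatch import fnmatch

def should_skip(changed_files, skip_patterns):
    """Skip only if EVERY changed file matches at least one skip pattern;
    a single non-matching file keeps the build running."""
    return all(
        any(fnmatch(path, pattern) for pattern in skip_patterns)
        for path in changed_files
    )
```

For example, `should_skip(["docs/a.md"], ["docs/**"])` is `True`, while adding `"main.go"` to the change set makes it `False`.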
@@ -2253,16 +2220,16 @@ def skipIfUnchanged(ctx, type):
def example_deploys(ctx):
on_merge_deploy = [
"ocis_full/master.yml",
"ocis_full/onlyoffice-master.yml",
"opencloud_full/master.yml",
"opencloud_full/onlyoffice-master.yml",
]
nightly_deploy = [
"ocis_ldap/rolling.yml",
"ocis_keycloak/rolling.yml",
"ocis_full/production.yml",
"ocis_full/rolling.yml",
"ocis_full/onlyoffice-rolling.yml",
"ocis_full/s3-rolling.yml",
"opencloud_ldap/rolling.yml",
"opencloud_keycloak/rolling.yml",
"opencloud_full/production.yml",
"opencloud_full/rolling.yml",
"opencloud_full/onlyoffice-rolling.yml",
"opencloud_full/s3-rolling.yml",
]
# if on master branch:
@@ -2456,10 +2423,10 @@ def pipelineSanityChecks(ctx, pipelines):
"""pipelineSanityChecks helps the CI developers to find errors before running it
These sanity checks are only executed when converting starlark to yaml.
Error outputs are only visible when the conversion is done with the drone cli.
Error outputs are only visible when the conversion is done with the woodpecker cli.
Args:
ctx: drone passes a context with information which the pipeline can be adapted to
ctx: woodpecker passes a context with information which the pipeline can be adapted to
pipelines: pipelines to be checked, normally you should run this on the return value of main()
Returns:
@@ -2652,15 +2619,8 @@ def getWoodpeckerEnvAndCheckScript(ctx):
def checkForWebCache(name):
return {
"name": "check-for-%s-cache" % name,
"image": OC_UBUNTU,
"environment": {
"CACHE_ENDPOINT": {
"from_secret": "cache_s3_server",
},
"CACHE_BUCKET": {
"from_secret": "cache_s3_bucket",
},
},
"image": MINIO_MC,
"environment": MINIO_MC_ENV,
"commands": [
"bash -x check_web_cache.sh %s" % name,
],
@@ -2672,6 +2632,7 @@ def cloneWeb():
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"commands": [
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
"rm -rf %s" % dirs["web"],
"git clone -b $WEB_BRANCH --single-branch --no-tags https://github.com/opencloud-eu/web.git %s" % dirs["web"],
"cd %s && git checkout $WEB_COMMITID" % dirs["web"],
@@ -2687,6 +2648,8 @@ def generateWebPnpmCache(ctx):
"name": "install-pnpm",
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"commands": [
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
"cd %s" % dirs["web"],
'npm install --silent --global --force "$(jq -r ".packageManager" < package.json)"',
"pnpm config set store-dir ./.pnpm-store",
@@ -2697,6 +2660,8 @@ def generateWebPnpmCache(ctx):
"name": "zip-pnpm",
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"commands": [
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
# zip the pnpm deps before caching
"if [ ! -d '%s' ]; then mkdir -p %s; fi" % (dirs["zip"], dirs["zip"]),
"cd %s" % dirs["web"],
@@ -2708,7 +2673,8 @@ def generateWebPnpmCache(ctx):
"image": MINIO_MC,
"environment": MINIO_MC_ENV,
"commands": [
"source ./.woodpecker.env",
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
# cache using the minio/mc client to the public bucket (long term bucket)
"mc alias set s3 $MC_HOST $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY",
"mc cp -r -a %s s3/$CACHE_BUCKET/opencloud/web-test-runner/$WEB_COMMITID" % dirs["webPnpmZip"],
@@ -2725,6 +2691,8 @@ def generateWebCache(ctx):
"name": "zip-web",
"image": OC_UBUNTU,
"commands": [
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
"if [ ! -d '%s' ]; then mkdir -p %s; fi" % (dirs["zip"], dirs["zip"]),
"tar -czvf %s webTestRunner" % dirs["webZip"],
],
@@ -2734,7 +2702,8 @@ def generateWebCache(ctx):
"image": MINIO_MC,
"environment": MINIO_MC_ENV,
"commands": [
"source ./.woodpecker.env",
". ./.woodpecker.env",
"if $WEB_CACHE_FOUND; then exit 0; fi",
# cache using the minio/mc client to the 'owncloud' bucket (long term bucket)
"mc alias set s3 $MC_HOST $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY",
"mc cp -r -a %s s3/$CACHE_BUCKET/opencloud/web-test-runner/$WEB_COMMITID" % dirs["webZip"],
@@ -2770,7 +2739,7 @@ def restoreWebPnpmCache():
"commands": [
"source ./.woodpecker.env",
"mc alias set s3 $MC_HOST $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY",
"mc cp -r -a s3/$CACHE_BUCKET/opencloud/web-test-runner/$WEB_COMMITID/pnpm-store.tar.gz %s" % dirs["zip"],
"mc cp -r -a s3/$CACHE_BUCKET/opencloud/web-test-runner/$WEB_COMMITID/web-pnpm.tar.gz %s" % dirs["zip"],
],
}, {
# we need to install again because the node_modules are not cached
@@ -2821,7 +2790,6 @@ def fakeOffice():
{
"name": "fakeoffice",
"image": OC_CI_ALPINE,
"detach": True,
"environment": {},
"commands": [
"sh %s/tests/config/woodpecker/serve-hosting-discovery.sh" % (dirs["base"]),
@@ -2842,7 +2810,7 @@ def wopiCollaborationService(name):
"COLLABORATION_APP_PROOF_DISABLE": True,
"COLLABORATION_APP_INSECURE": True,
"COLLABORATION_CS3API_DATAGATEWAY_INSECURE": True,
"OCIS_JWT_SECRET": "some-ocis-jwt-secret",
"OC_JWT_SECRET": "some-opencloud-jwt-secret",
"COLLABORATION_WOPI_SECRET": "some-wopi-secret",
}
@@ -2863,7 +2831,7 @@ def wopiCollaborationService(name):
environment["COLLABORATION_WOPI_SRC"] = "http://%s:9300" % service_name
return startOcisService("collaboration", service_name, environment)
return startOpenCloudService("collaboration", service_name, environment)
def tikaService():
return [{
@@ -2893,18 +2861,18 @@ def logRequests():
}]
def k6LoadTests(ctx):
ocis_remote_environment = {
"SSH_OCIS_REMOTE": {
"from_secret": "k6_ssh_ocis_remote",
opencloud_remote_environment = {
"SSH_OC_REMOTE": {
"from_secret": "k6_ssh_opencloud_remote",
},
"SSH_OCIS_USERNAME": {
"from_secret": "k6_ssh_ocis_user",
"SSH_OC_USERNAME": {
"from_secret": "k6_ssh_opencloud_user",
},
"SSH_OCIS_PASSWORD": {
"from_secret": "k6_ssh_ocis_pass",
"SSH_OC_PASSWORD": {
"from_secret": "k6_ssh_opencloud_pass",
},
"TEST_SERVER_URL": {
"from_secret": "k6_ssh_ocis_server_url",
"from_secret": "k6_ssh_opencloud_server_url",
},
}
k6_remote_environment = {
@@ -2919,14 +2887,14 @@ def k6LoadTests(ctx):
},
}
environment = {}
environment.update(ocis_remote_environment)
environment.update(opencloud_remote_environment)
environment.update(k6_remote_environment)
if "skip" in config["k6LoadTests"] and config["k6LoadTests"]["skip"]:
return []
ocis_git_base_url = "https://raw.githubusercontent.com/opencloud-eu/opencloud"
script_link = "%s/%s/tests/config/woodpecker/run_k6_tests.sh" % (ocis_git_base_url, ctx.build.commit)
opencloud_git_base_url = "https://raw.githubusercontent.com/opencloud-eu/opencloud"
script_link = "%s/%s/tests/config/woodpecker/run_k6_tests.sh" % (opencloud_git_base_url, ctx.build.commit)
event_array = ["cron"]
@@ -2948,13 +2916,13 @@ def k6LoadTests(ctx):
],
},
{
"name": "ocis-log",
"name": "opencloud-log",
"image": OC_CI_ALPINE,
"environment": ocis_remote_environment,
"environment": opencloud_remote_environment,
"commands": [
"curl -s -o run_k6_tests.sh %s" % script_link,
"apk add --no-cache openssh-client sshpass",
"sh %s/run_k6_tests.sh --ocis-log" % (dirs["base"]),
"sh %s/run_k6_tests.sh --opencloud-log" % (dirs["base"]),
],
"when": [
{
@@ -2993,7 +2961,7 @@ def waitForServices(name, services = []):
],
}]
def ocisHealthCheck(name, services = []):
def openCloudHealthCheck(name, services = []):
commands = []
timeout = 300
curl_command = ["timeout %s bash -c 'while [ $(curl -s %s/%s ", "-w %{http_code} -o /dev/null) != 200 ]; do sleep 1; done'"]
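The renamed `openCloudHealthCheck` assembles a polling command from the `curl_command` parts above. A sketch of the generated shape for a single service endpoint (assumed URL, mirroring the template strings):

```python
def health_check_command(url, timeout=300):
    """Sketch of the polling loop the pipeline generates: curl the endpoint
    once per second until it answers 200, or give up after `timeout` seconds."""
    return (
        "timeout %s bash -c 'while [ $(curl -s %s "
        "-w %%{http_code} -o /dev/null) != 200 ]; do sleep 1; done'" % (timeout, url)
    )
```

E.g. `health_check_command("http://wopi-collabora:9304/healthz")` yields one of the `timeout 300 bash -c '…'` step commands.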
@@ -3011,9 +2979,7 @@ def collaboraService():
return [
{
"name": "collabora",
"type": "docker",
"image": COLLABORA_CODE,
"detach": True,
"environment": {
"DONT_GEN_SSL_CERT": "set",
"extra_params": "--o:ssl.enable=true --o:ssl.termination=true --o:welcome.enable=false --o:net.frame_ancestors=%s" % OC_URL,
@@ -3029,9 +2995,7 @@ def onlyofficeService():
return [
{
"name": "onlyoffice",
"type": "docker",
"image": ONLYOFFICE_DOCUMENT_SERVER,
"detach": True,
"environment": {
"WOPI_ENABLED": True,
"USE_UNAUTHORIZED_STORAGE": True, # self signed certificates


@@ -37,7 +37,7 @@ function backup_file () {
# URL pattern of the download file
# https://github.com/opencloud-eu/opencloud/releases/download/v1.0.0/opencloud-1.0.0-linux-amd64
dlversion="${OC_VERSION:-1.0.0}"
dlversion="${OC_VERSION:-1.1.0}"
dlurl="https://github.com/opencloud-eu/opencloud/releases/download/v${dlversion}/"
sandbox="opencloud-sandbox-${dlversion}"


@@ -50,11 +50,11 @@ OC_DOMAIN=
ADMIN_PASSWORD=
# Demo users should not be created on a production instance,
# because their passwords are public. Defaults to "false".
# Also see: https://doc.opencloud.eu/opencloud/latest/deployment/general/general-info.html#demo-users-and-groups
# If demo users is set to "true", the following user accounts are created automatically:
# alan, mary, margaret, dennis and lynn - the password is 'demo' for all.
DEMO_USERS=
# Define the OpenCloud log level used.
# For more details see:
# https://doc.opencloud.eu/opencloud/latest/deployment/services/env-vars-special-scope.html
#
LOG_LEVEL=
# Define the kind of logging.
# The default log can be read by machines.
@@ -64,8 +64,6 @@ LOG_LEVEL=
# Define the OpenCloud storage location. Set the paths for config and data to a local path.
# Note that especially the data directory can grow big.
# Leaving it default stores data in docker internal volumes.
# For more details see:
# https://doc.opencloud.eu/opencloud/next/deployment/general/general-info.html#default-paths
# OC_CONFIG_DIR=/your/local/opencloud/config
# OC_DATA_DIR=/your/local/opencloud/data
@@ -74,7 +72,7 @@ LOG_LEVEL=
# Per default, S3 storage is disabled and the decomposed storage driver is used.
# To enable S3 storage, uncomment the following line and configure the S3 storage.
# For more details see:
# https://doc.opencloud.eu/opencloud/next/deployment/storage/decomposeds3.html
# https://docs.opencloud.eu/docs/admin/configuration/storage-decomposeds3
# Note: the leading colon is required to enable the service.
#DECOMPOSEDS3=:decomposeds3.yml
# Configure the S3 storage endpoint. Defaults to "http://minio:9000" for testing purposes.
@@ -94,16 +92,14 @@ DECOMPOSEDS3_BUCKET=
# Minio domain. Defaults to "minio.opencloud.test".
MINIO_DOMAIN=
# POSIX Storage configuration - optional
# OpenCloud supports posix storage as primary storage.
# Per default, S3 storage is disabled and the decomposed storage driver is used.
# To enable POSIX storage, uncomment the following line.
# OpenCloud uses POSIX storage as the default primary storage.
# By default, Decomposed storage is disabled, and the POSIX storage driver is used.
# To enable Decomposed storage, uncomment the following line.
# Note: the leading colon is required to enable the service.
#POSIX=:posix.yml
#DECOMPOSED=:decomposed.yml
# Define SMTP settings if you would like to send OpenCloud email notifications.
# For more details see:
# https://doc.opencloud.eu/opencloud/latest/deployment/services/s-list/notifications.html
#
# NOTE: when configuring Inbucket, these settings have no effect, see inbucket.yml for details.
# SMTP host to connect to.
SMTP_HOST=
@@ -205,7 +201,6 @@ COLLABORA_SSL_VERIFICATION=false
### Debugging - Monitoring ###
# Please see documentation at: https://opencloud.dev/opencloud/deployment/monitoring-tracing/
# Note: the leading colon is required to enable the service.
#MONITORING=:monitoring_tracing/monitoring.yml
@@ -246,4 +241,4 @@ COMPOSE_PATH_SEPARATOR=:
# This MUST be the last line as it assembles the supplemental compose files to be used.
# ALL supplemental configs must be added here, whether commented or not.
# Each var must either be empty or contain :path/file.yml
COMPOSE_FILE=docker-compose.yml${OPENCLOUD:-}${TIKA:-}${DECOMPOSEDS3:-}${DECOMPOSEDS3_MINIO:-}${POSIX:-}${COLLABORA:-}${MONITORING:-}${IMPORTER:-}${CLAMAV:-}${ONLYOFFICE:-}${INBUCKET:-}${EXTENSIONS:-}${UNZIP:-}${DRAWIO:-}${JSONVIEWER:-}${PROGRESSBARS:-}${EXTERNALSITES:-}
COMPOSE_FILE=docker-compose.yml${OPENCLOUD:-}${TIKA:-}${DECOMPOSEDS3:-}${DECOMPOSEDS3_MINIO:-}${DECOMPOSED:-}${COLLABORA:-}${MONITORING:-}${IMPORTER:-}${CLAMAV:-}${ONLYOFFICE:-}${INBUCKET:-}${EXTENSIONS:-}${UNZIP:-}${DRAWIO:-}${JSONVIEWER:-}${PROGRESSBARS:-}${EXTERNALSITES:-}
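The final `COMPOSE_FILE` line relies on two mechanisms: shell default-value expansion (`${VAR:-}` yields an empty string when the variable is unset) and Docker Compose's support for a colon-separated list of compose files. A minimal sketch of why the leading colon in each feature variable matters (`MONITORING` and `TIKA` values here are illustrative):

```shell
# Each feature variable is either empty or ":path/file.yml", so a
# disabled feature contributes nothing to the assembled list.
COMPOSE_PATH_SEPARATOR=":"                        # tells docker compose how to split the list
MONITORING=":monitoring_tracing/monitoring.yml"   # enabled: leading colon included
TIKA=""                                           # disabled: empty, expands to nothing
COMPOSE_FILE="docker-compose.yml${TIKA:-}${MONITORING:-}"
echo "$COMPOSE_FILE"
# prints: docker-compose.yml:monitoring_tracing/monitoring.yml
```

Without the leading colon inside the variable, an enabled fragment would fuse onto the previous filename instead of forming a separate list entry.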


@@ -53,7 +53,7 @@ services:
restart: always
collabora:
image: collabora/code:24.04.12.2.1
image: collabora/code:24.04.13.2.1
# release notes: https://www.collaboraonline.com/release-notes/
networks:
opencloud-net:
@@ -80,6 +80,7 @@ services:
logging:
driver: ${LOG_DRIVER:-local}
restart: always
command: ["bash", "-c", "coolconfig generate-proof-key ; /start-collabora-online.sh"]
entrypoint: ['/bin/bash', '-c']
command: ['coolconfig generate-proof-key && /start-collabora-online.sh']
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:9980/hosting/discovery" ]

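The collabora change above splits the single `command` into an `entrypoint`/`command` pair and swaps `;` for `&&`. The behavioral difference: with `;` the second command runs regardless of the first one's exit status, while `&&` short-circuits on failure. A small sketch, with `false` and `echo` standing in for `coolconfig generate-proof-key` and `/start-collabora-online.sh`:

```shell
# ";" ignores the exit status of the first command; "&&" stops on failure.
with_semicolon=$(false ; echo "server started")
with_and=$(false && echo "server started")
echo "semicolon: '$with_semicolon'"   # semicolon: 'server started'
echo "and:       '$with_and'"         # and:       ''
```

So with the new form, Collabora only starts once the proof key has been generated successfully, rather than starting against a possibly missing key.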

@@ -0,0 +1,6 @@
---
services:
opencloud:
environment:
STORAGE_USERS_DRIVER: decomposed


@@ -1,10 +0,0 @@
---
services:
opencloud:
environment:
# activate posix storage driver for users
STORAGE_USERS_DRIVER: posix
# keep system data on decomposed storage since these are only small files atm
STORAGE_SYSTEM_DRIVER: decomposed
# posix requires a shared cache store
STORAGE_USERS_ID_CACHE_STORE: "nats-js-kv"

go.mod

@@ -5,7 +5,7 @@ go 1.24.1
require (
dario.cat/mergo v1.0.1
github.com/CiscoM31/godata v1.0.10
github.com/KimMachineGun/automemlimit v0.7.0
github.com/KimMachineGun/automemlimit v0.7.1
github.com/Masterminds/semver v1.5.0
github.com/MicahParks/keyfunc/v2 v2.1.0
github.com/Nerzal/gocloak/v13 v13.9.0
@@ -13,7 +13,7 @@ require (
github.com/beevik/etree v1.5.0
github.com/blevesearch/bleve/v2 v2.4.4
github.com/cenkalti/backoff v2.2.1+incompatible
github.com/coreos/go-oidc/v3 v3.12.0
github.com/coreos/go-oidc/v3 v3.13.0
github.com/cs3org/go-cs3apis v0.0.0-20241105092511-3ad35d174fc1
github.com/davidbyttow/govips/v2 v2.16.0
github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8
@@ -35,14 +35,14 @@ require (
github.com/go-micro/plugins/v4/wrapper/trace/opentelemetry v1.2.0
github.com/go-playground/validator/v10 v10.25.0
github.com/gofrs/uuid v4.4.0+incompatible
github.com/golang-jwt/jwt/v5 v5.2.1
github.com/golang-jwt/jwt/v5 v5.2.2
github.com/golang/protobuf v1.5.4
github.com/google/go-cmp v0.7.0
github.com/google/go-tika v0.3.1
github.com/google/uuid v1.6.0
github.com/gookit/config/v2 v2.2.5
github.com/gorilla/mux v1.8.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3
github.com/invopop/validation v0.8.0
github.com/jellydator/ttlcache/v2 v2.11.1
github.com/jellydator/ttlcache/v3 v3.3.0
@@ -55,29 +55,29 @@ require (
github.com/mitchellh/mapstructure v1.5.0
github.com/mna/pigeon v1.3.0
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826
github.com/nats-io/nats-server/v2 v2.10.26
github.com/nats-io/nats-server/v2 v2.11.0
github.com/nats-io/nats.go v1.39.1
github.com/oklog/run v1.1.0
github.com/olekukonko/tablewriter v0.0.5
github.com/onsi/ginkgo v1.16.5
github.com/onsi/ginkgo/v2 v2.23.0
github.com/onsi/gomega v1.36.2
github.com/open-policy-agent/opa v1.1.0
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250314084055-d2fcfe6b3445
github.com/onsi/ginkgo/v2 v2.23.3
github.com/onsi/gomega v1.36.3
github.com/open-policy-agent/opa v1.2.0
github.com/opencloud-eu/reva/v2 v2.28.1-0.20250325103543-f3ec73475a58
github.com/orcaman/concurrent-map v1.0.0
github.com/owncloud/libre-graph-api-go v1.0.5-0.20240829135935-80dc00d6f5ea
github.com/pkg/errors v0.9.1
github.com/pkg/xattr v0.4.10
github.com/prometheus/client_golang v1.21.1
github.com/r3labs/sse/v2 v2.10.0
github.com/riandyrn/otelchi v0.12.0
github.com/riandyrn/otelchi v0.12.1
github.com/rogpeppe/go-internal v1.14.1
github.com/rs/cors v1.11.1
github.com/rs/zerolog v1.33.0
github.com/shamaton/msgpack/v2 v2.2.2
github.com/shamaton/msgpack/v2 v2.2.3
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.12.0
github.com/spf13/cobra v1.8.1
github.com/spf13/afero v1.14.0
github.com/spf13/cobra v1.9.1
github.com/stretchr/testify v1.10.0
github.com/test-go/testify v1.1.4
github.com/thejerf/suture/v4 v4.0.6
@@ -88,9 +88,9 @@ require (
github.com/xhit/go-simple-mail/v2 v2.16.0
go-micro.dev/v4 v4.11.0
go.etcd.io/bbolt v1.4.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.59.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0
go.opentelemetry.io/contrib/zpages v0.57.0
go.opentelemetry.io/contrib/zpages v0.60.0
go.opentelemetry.io/otel v1.35.0
go.opentelemetry.io/otel/exporters/jaeger v1.17.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0
@@ -98,13 +98,13 @@ require (
go.opentelemetry.io/otel/trace v1.35.0
golang.org/x/crypto v0.36.0
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac
golang.org/x/image v0.24.0
golang.org/x/image v0.25.0
golang.org/x/net v0.37.0
golang.org/x/oauth2 v0.28.0
golang.org/x/sync v0.12.0
golang.org/x/term v0.30.0
golang.org/x/text v0.23.0
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb
google.golang.org/grpc v1.71.0
google.golang.org/protobuf v1.36.5
gopkg.in/yaml.v2 v2.4.0
@@ -116,14 +116,13 @@ require (
contrib.go.opencensus.io/exporter/prometheus v0.4.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/BurntSushi/toml v1.4.0 // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/sprig v2.22.0+incompatible // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/OneOfOne/xxhash v1.2.8 // indirect
github.com/ProtonMail/go-crypto v1.1.5 // indirect
github.com/RoaringBitmap/roaring v1.9.3 // indirect
github.com/agnivade/levenshtein v1.2.0 // indirect
github.com/agnivade/levenshtein v1.2.1 // indirect
github.com/ajg/form v1.5.1 // indirect
github.com/alexedwards/argon2id v1.0.0 // indirect
github.com/amoghe/go-crypt v0.0.0-20220222110647-20eada5f5964 // indirect
@@ -160,7 +159,7 @@ require (
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cornelk/hashmap v1.0.8 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.5 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
github.com/crewjam/httperr v0.2.0 // indirect
github.com/crewjam/saml v0.4.14 // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect
@@ -198,7 +197,7 @@ require (
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-redis/redis/v8 v8.11.5 // indirect
github.com/go-resty/resty/v2 v2.7.0 // indirect
github.com/go-sql-driver/mysql v1.9.0 // indirect
github.com/go-sql-driver/mysql v1.9.1 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/go-test/deep v1.1.0 // indirect
@@ -210,12 +209,13 @@ require (
github.com/goccy/go-yaml v1.11.2 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.1 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/gomodule/redigo v1.9.2 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/go-tpm v0.9.3 // indirect
github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad // indirect
github.com/google/renameio/v2 v2.0.0 // indirect
github.com/gookit/color v1.5.4 // indirect
@@ -254,7 +254,7 @@ require (
github.com/minio/crc64nvme v1.0.1 // indirect
github.com/minio/highwayhash v1.0.3 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.87 // indirect
github.com/minio/minio-go/v7 v7.0.88 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@@ -288,6 +288,7 @@ require (
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/sercand/kuberesolver/v5 v5.1.1 // indirect
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
github.com/sethvargo/go-diceware v0.5.0 // indirect
github.com/sethvargo/go-password v0.3.1 // indirect
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c // indirect
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92 // indirect
@@ -308,9 +309,9 @@ require (
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.19 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.19 // indirect
go.etcd.io/etcd/client/v3 v3.5.19 // indirect
go.etcd.io/etcd/api/v3 v3.5.20 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.20 // indirect
go.etcd.io/etcd/client/v3 v3.5.20 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 // indirect
@@ -321,11 +322,11 @@ require (
go.uber.org/zap v1.23.0 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/time v0.10.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.31.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect

go.sum

@@ -61,15 +61,15 @@ github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbt
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/CiscoM31/godata v1.0.10 h1:DZdJ6M8QNh4HquvDDOqNLu6h77Wl86KGK7Qlbmb90sk=
github.com/CiscoM31/godata v1.0.10/go.mod h1:ZMiT6JuD3Rm83HEtiTx4JEChsd25YCrxchKGag/sdTc=
github.com/DeepDiver1975/secure v0.0.0-20240611112133-abc838fb797c h1:ocsNvQ2tNHme4v/lTs17HROamc7mFzZfzWcg4m+UXN0=
github.com/DeepDiver1975/secure v0.0.0-20240611112133-abc838fb797c/go.mod h1:BmF5hyM6tXczk3MpQkFf1hpKSRqCyhqcbiQtiAF7+40=
github.com/KimMachineGun/automemlimit v0.7.0 h1:7G06p/dMSf7G8E6oq+f2uOPuVncFyIlDI/pBWK49u88=
github.com/KimMachineGun/automemlimit v0.7.0/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
github.com/KimMachineGun/automemlimit v0.7.1 h1:QcG/0iCOLChjfUweIMC3YL5Xy9C3VBeNmCZHrZfJMBw=
github.com/KimMachineGun/automemlimit v0.7.1/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
@@ -84,8 +84,6 @@ github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA
github.com/Nerzal/gocloak/v13 v13.9.0 h1:YWsJsdM5b0yhM2Ba3MLydiOlujkBry4TtdzfIzSVZhw=
github.com/Nerzal/gocloak/v13 v13.9.0/go.mod h1:YYuDcXZ7K2zKECyVP7pPqjKxx2AzYSpKDj8d6GuyM10=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OneOfOne/xxhash v1.2.8 h1:31czK/TI9sNkxIKfaUfGlU47BAxQ0ztGgd9vPyqimf8=
github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/OpenDNS/vegadns2client v0.0.0-20180418235048-a3fa4a771d87/go.mod h1:iGLljf5n9GjT6kc0HBvyI1nOKnGQbNB66VzSNbK5iks=
github.com/ProtonMail/go-crypto v1.1.5 h1:eoAQfK2dwL+tFSFpr7TbOaPNUbPiJj4fLYwwGE1FQO4=
github.com/ProtonMail/go-crypto v1.1.5/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
@@ -93,8 +91,8 @@ github.com/RoaringBitmap/roaring v1.9.3 h1:t4EbC5qQwnisr5PrP9nt0IRhRTb9gMUgQF4t4
github.com/RoaringBitmap/roaring v1.9.3/go.mod h1:6AXUsoIEzDTFFQCe1RbGA6uFONMhvejWj5rqITANK90=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/agnivade/levenshtein v1.2.0 h1:U9L4IOT0Y3i0TIlUIDJ7rVUziKi/zPbrJGaFrtYH3SY=
github.com/agnivade/levenshtein v1.2.0/go.mod h1:QVVI16kDrtSuwcpd0p1+xMC6Z/VfhtCyDIjcwga4/DU=
github.com/agnivade/levenshtein v1.2.1 h1:EHBY3UOn1gwdy/VbFwgo4cxecRznFk7fKWN1KOX7eoM=
github.com/agnivade/levenshtein v1.2.1/go.mod h1:QVVI16kDrtSuwcpd0p1+xMC6Z/VfhtCyDIjcwga4/DU=
github.com/ajg/form v1.5.1 h1:t9c7v8JUKu/XxOGBU0yjNpaMloxGEJhUkqFRq0ibGeU=
github.com/ajg/form v1.5.1/go.mod h1:uL1WgH+h2mgNtvBq0339dVnzXdBETtL2LeUXaIv25UY=
github.com/akamai/AkamaiOPEN-edgegrid-golang v1.1.0/go.mod h1:kX6YddBkXqqywAe8c9LyvgTCyFuZCTMF4cRPQhc3Fy8=
@@ -113,6 +111,8 @@ github.com/amoghe/go-crypt v0.0.0-20220222110647-20eada5f5964 h1:I9YN9WMo3SUh7p/
github.com/amoghe/go-crypt v0.0.0-20220222110647-20eada5f5964/go.mod h1:eFiR01PwTcpbzXtdMces7zxg6utvFM5puiWHpWB8D/k=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/antithesishq/antithesis-sdk-go v0.4.3-default-no-op h1:+OSa/t11TFhqfrX0EOSqQBDJ0YlpmK0rDSiB19dg9M0=
github.com/antithesishq/antithesis-sdk-go v0.4.3-default-no-op/go.mod h1:IUpT2DPAKh6i/YhSbt6Gl3v2yvUZjmKncl7U91fup7E=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0 h1:jfIu9sQUG6Ig+0+Ap1h4unLjW6YQJpKZVmUzxsD4E/Q=
github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0/go.mod h1:t2tdKJDJF9BV14lnkjHmOQgcvEKgtqs5a1N3LNdJhGE=
@@ -222,8 +222,8 @@ github.com/cloudflare/cloudflare-go v0.14.0/go.mod h1:EnwdgGMaFOruiPZRFSgn+TsQ3h
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-oidc/v3 v3.12.0 h1:sJk+8G2qq94rDI6ehZ71Bol3oUHy63qNYmkiSjrc/Jo=
github.com/coreos/go-oidc/v3 v3.12.0/go.mod h1:gE3LgjOgFoHi9a4ce4/tJczr0Ai2/BoDhf0r5lltWI0=
github.com/coreos/go-oidc/v3 v3.13.0 h1:M66zd0pcc5VxvBNM4pB331Wrsanby+QomQYjN8HamW8=
github.com/coreos/go-oidc/v3 v3.13.0/go.mod h1:HaZ3szPaZ0e4r6ebqvsLWlk2Tn+aejfmrfah6hnSYEU=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
@@ -235,9 +235,8 @@ github.com/cornelk/hashmap v1.0.8/go.mod h1:RfZb7JO3RviW/rT6emczVuC/oxpdz4UsSB2L
github.com/cpu/goacmedns v0.1.1/go.mod h1:MuaouqEhPAHxsbqjgnck5zeghuwBP1dLnPoobeGqugQ=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.5 h1:ZtcqGrnekaHpVLArFSe4HK5DoKx1T0rq2DwVB0alcyc=
github.com/cpuguy83/go-md2man/v2 v2.0.5/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.6 h1:XJtiaUW6dEEqVuZiMTn1ldk455QWwEIsMIJlo5vtkx0=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/crewjam/httperr v0.2.0 h1:b2BfXR8U3AlIHwNeFFvZ+BV1LFvKLlzMjzaTnZMybNo=
github.com/crewjam/httperr v0.2.0/go.mod h1:Jlz+Sg/XqBQhyMjdDiC+GNNRzZTD7x39Gu3pglZ5oH4=
@@ -419,8 +418,8 @@ github.com/go-redis/redis/v8 v8.11.5/go.mod h1:gREzHqY1hg6oD9ngVRbLStwAWKhA0FEgq
github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48/go.mod h1:dZGr0i9PLlaaTD4H/hoZIDjQ+r6xq8mgbRzHZf7f2J8=
github.com/go-resty/resty/v2 v2.7.0 h1:me+K9p3uhSmXtrBZ4k9jcEAfJmuC8IivWHwaLZwPrFY=
github.com/go-resty/resty/v2 v2.7.0/go.mod h1:9PWDzw47qPphMRFfhsyk0NnSgvluHcljSMVIq3w7q0I=
github.com/go-sql-driver/mysql v1.9.0 h1:Y0zIbQXhQKmQgTp44Y1dp3wTXcn804QoTptLZT1vtvo=
github.com/go-sql-driver/mysql v1.9.0/go.mod h1:pDetrLJeA3oMujJuvXc8RJoasr589B6A9fwzD3QMrqw=
github.com/go-sql-driver/mysql v1.9.1 h1:FrjNGn/BsJQjVRuSa8CBrM5BWA9BWoXXat3KrtSb/iI=
github.com/go-sql-driver/mysql v1.9.1/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
@@ -456,10 +455,10 @@ github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zV
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/goji/httpauth v0.0.0-20160601135302-2da839ab0f4d/go.mod h1:nnjvkQ9ptGaCkuDUx6wNykzzlUixGxvkme+H/lnzb+A=
github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551 h1:gtexQ/VGyN+VVFRXSFiguSNcXmS6rkKT+X7FdIrTtfo=
github.com/golang/geo v0.0.0-20210211234256-740aa86cb551/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -529,6 +528,8 @@ github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/go-tika v0.3.1 h1:l+jr10hDhZjcgxFRfcQChRLo1bPXQeLFluMyvDhXTTA=
github.com/google/go-tika v0.3.1/go.mod h1:DJh5N8qxXIl85QkqmXknd+PeeRkUOTbvwyYf7ieDz6c=
github.com/google/go-tpm v0.9.3 h1:+yx0/anQuGzi+ssRqeD6WpXjW2L/V0dItUayO0i9sRc=
github.com/google/go-tpm v0.9.3/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -580,8 +581,8 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vb
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1 h1:e9Rjr40Z98/clHv5Yg79Is0NtosR5LXRvdr7o/6NwbA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1/go.mod h1:tIxuGz/9mpox++sgp9fJjHO0+q1X9/UOWd798aAm22M=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
@@ -788,8 +789,8 @@ github.com/minio/highwayhash v1.0.3 h1:kbnuUMoHYyVl7szWjSxJnxw11k2U709jqFPPmIUyD
github.com/minio/highwayhash v1.0.3/go.mod h1:GGYsuwP/fPD6Y9hMiXuapVvlIUEhFhMTh0rxU3ik1LQ=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.87 h1:nkr9x0u53PespfxfUqxP3UYWiE2a41gaofgNnC4Y8WQ=
github.com/minio/minio-go/v7 v7.0.87/go.mod h1:33+O8h0tO7pCeCWwBVa07RhVVfB/3vS4kEX7rwYKmIg=
github.com/minio/minio-go/v7 v7.0.88 h1:v8MoIJjwYxOkehp+eiLIuvXk87P2raUtoU5klrAAshs=
github.com/minio/minio-go/v7 v7.0.88/go.mod h1:33+O8h0tO7pCeCWwBVa07RhVVfB/3vS4kEX7rwYKmIg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
@@ -826,8 +827,8 @@ github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRW
github.com/namedotcom/go v0.0.0-20180403034216-08470befbe04/go.mod h1:5sN+Lt1CaY4wsPvgQH/jsuJi4XO2ssZbdsIizr4CVC8=
github.com/nats-io/jwt/v2 v2.7.3 h1:6bNPK+FXgBeAqdj4cYQ0F8ViHRbi7woQLq4W29nUAzE=
github.com/nats-io/jwt/v2 v2.7.3/go.mod h1:GvkcbHhKquj3pkioy5put1wvPxs78UlZ7D/pY+BgZk4=
github.com/nats-io/nats-server/v2 v2.10.26 h1:2i3rAsn4x5/2eOt2NEmuI/iSb8zfHpIUI7yiaOWbo2c=
github.com/nats-io/nats-server/v2 v2.10.26/go.mod h1:SGzoWGU8wUVnMr/HJhEMv4R8U4f7hF4zDygmRxpNsvg=
github.com/nats-io/nats-server/v2 v2.11.0 h1:fdwAT1d6DZW/4LUz5rkvQUe5leGEwjjOQYntzVRKvjE=
github.com/nats-io/nats-server/v2 v2.11.0/go.mod h1:leXySghbdtXSUmWem8K9McnJ6xbJOb0t9+NQ5HTRZjI=
github.com/nats-io/nats.go v1.39.1 h1:oTkfKBmz7W047vRxV762M67ZdXeOtUgvbBaNoQ+3PPk=
github.com/nats-io/nats.go v1.39.1/go.mod h1:MgRb8oOdigA6cYpEPhXJuRVH6UE/V4jblJ2jQ27IXYM=
github.com/nats-io/nkeys v0.4.10 h1:glmRrpCmYLHByYcePvnTBEAwawwapjCPMjy2huw20wc=
@@ -855,17 +856,17 @@ github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.23.0 h1:FA1xjp8ieYDzlgS5ABTpdUDB7wtngggONc8a7ku2NqQ=
github.com/onsi/ginkgo/v2 v2.23.0/go.mod h1:zXTP6xIp3U8aVuXN8ENK9IXRaTjFnpVB9mGmaSRvxnM=
github.com/onsi/ginkgo/v2 v2.23.3 h1:edHxnszytJ4lD9D5Jjc4tiDkPBZ3siDeJJkUZJJVkp0=
github.com/onsi/ginkgo/v2 v2.23.3/go.mod h1:zXTP6xIp3U8aVuXN8ENK9IXRaTjFnpVB9mGmaSRvxnM=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/open-policy-agent/opa v1.1.0 h1:HMz2evdEMTyNqtdLjmu3Vyx06BmhNYAx67Yz3Ll9q2s=
github.com/open-policy-agent/opa v1.1.0/go.mod h1:T1pASQ1/vwfTa+e2fYcfpLCvWgYtqtiUv+IuA/dLPQs=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250314084055-d2fcfe6b3445 h1:At2GtwEeNls1P60RpBa9QQridCtFQNW/pnQ5tybT8X0=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250314084055-d2fcfe6b3445/go.mod h1:yCscyJJ7FX/HA2fexM2i1OyKSZnJgdq1vnoXgXKmnn8=
github.com/onsi/gomega v1.36.3 h1:hID7cr8t3Wp26+cYnfcjR6HpJ00fdogN6dqZ1t6IylU=
github.com/onsi/gomega v1.36.3/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU4KU0=
github.com/open-policy-agent/opa v1.2.0 h1:88NDVCM0of1eO6Z4AFeL3utTEtMuwloFmWWU7dRV1z0=
github.com/open-policy-agent/opa v1.2.0/go.mod h1:30euUmOvuBoebRCcJ7DMF42bRBOPznvt0ACUMYDUGVY=
github.com/opencloud-eu/reva/v2 v2.28.1-0.20250325103543-f3ec73475a58 h1:sWVVkEAz3EQOigCRQqbpgd+YzArj6HWbVUyDqtj4Frw=
github.com/opencloud-eu/reva/v2 v2.28.1-0.20250325103543-f3ec73475a58/go.mod h1:BBTT/JIHofRQu1VdFStlXRlrwAMD3wCnVkNAx3jsfO8=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
@@ -968,8 +969,8 @@ github.com/rainycape/memcache v0.0.0-20150622160815-1031fa0ce2f2/go.mod h1:7tZKc
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 h1:MkV+77GLUNo5oJ0jf870itWm3D0Sjh7+Za9gazKc5LQ=
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/riandyrn/otelchi v0.12.0 h1:7aXphKyzut8849DDb/0LWyCPq4mfnikpggEmmW3b38U=
github.com/riandyrn/otelchi v0.12.0/go.mod h1:weZZeUJURvtCcbWsdb7Y6F8KFZGedJlSrgUjq9VirV8=
github.com/riandyrn/otelchi v0.12.1 h1:FdRKK3/RgZ/T+d+qTH5Uw3MFx0KwRF38SkdfTMMq/m8=
github.com/riandyrn/otelchi v0.12.1/go.mod h1:weZZeUJURvtCcbWsdb7Y6F8KFZGedJlSrgUjq9VirV8=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
@@ -1003,10 +1004,12 @@ github.com/sercand/kuberesolver/v5 v5.1.1 h1:CYH+d67G0sGBj7q5wLK61yzqJJ8gLLC8aep
github.com/sercand/kuberesolver/v5 v5.1.1/go.mod h1:Fs1KbKhVRnB2aDWN12NjKCB+RgYMWZJ294T3BtmVCpQ=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
github.com/sethvargo/go-diceware v0.5.0 h1:exrQ7GpaBo00GqRVM1N8ChXSsi3oS7tjQiIehsD+yR0=
github.com/sethvargo/go-diceware v0.5.0/go.mod h1:Lg1SyPS7yQO6BBgTN5r4f2MUDkqGfLWsOjHPY0kA8iw=
github.com/sethvargo/go-password v0.3.1 h1:WqrLTjo7X6AcVYfC6R7GtSyuUQR9hGyAj/f1PYQZCJU=
github.com/sethvargo/go-password v0.3.1/go.mod h1:rXofC1zT54N7R8K/h1WDUdkf9BOx5OptoxrMBcrXzvs=
github.com/shamaton/msgpack/v2 v2.2.2 h1:GOIg0c9LV04VwzOOqZSrmsv/JzjNOOMxnS/HvOHGdgs=
github.com/shamaton/msgpack/v2 v2.2.2/go.mod h1:6khjYnkx73f7VQU7wjcFS9DFjs+59naVWJv1TB7qdOI=
github.com/shamaton/msgpack/v2 v2.2.3 h1:uDOHmxQySlvlUYfQwdjxyybAOzjlQsD1Vjy+4jmO9NM=
github.com/shamaton/msgpack/v2 v2.2.3/go.mod h1:6khjYnkx73f7VQU7wjcFS9DFjs+59naVWJv1TB7qdOI=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c h1:aqg5Vm5dwtvL+YgDpBcK1ITf3o96N/K7/wsRXQnUTEs=
@@ -1035,13 +1038,13 @@ github.com/spacewander/go-suffix-tree v0.0.0-20191010040751-0865e368c784/go.mod
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.4.1/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4=
github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA=
github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -1143,12 +1146,12 @@ github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQ
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.4.0 h1:TU77id3TnN/zKr7CO/uk+fBCwF2jGcMuw2B/FMAzYIk=
go.etcd.io/bbolt v1.4.0/go.mod h1:AsD+OCi/qPN1giOX1aiLAha3o1U8rAz65bvN4j0sRuk=
go.etcd.io/etcd/api/v3 v3.5.19 h1:w3L6sQZGsWPuBxRQ4m6pPP3bVUtV8rjW033EGwlr0jw=
go.etcd.io/etcd/api/v3 v3.5.19/go.mod h1:QqKGViq4KTgOG43dr/uH0vmGWIaoJY3ggFi6ZH0TH/U=
go.etcd.io/etcd/client/pkg/v3 v3.5.19 h1:9VsyGhg0WQGjDWWlDI4VuaS9PZJGNbPkaHEIuLwtixk=
go.etcd.io/etcd/client/pkg/v3 v3.5.19/go.mod h1:qaOi1k4ZA9lVLejXNvyPABrVEe7VymMF2433yyRQ7O0=
go.etcd.io/etcd/client/v3 v3.5.19 h1:+4byIz6ti3QC28W0zB0cEZWwhpVHXdrKovyycJh1KNo=
go.etcd.io/etcd/client/v3 v3.5.19/go.mod h1:FNzyinmMIl0oVsty1zA3hFeUrxXI/JpEnz4sG+POzjU=
go.etcd.io/etcd/api/v3 v3.5.20 h1:aKfz3nPZECWoZJXMSH9y6h2adXjtOHaHTGEVCuCmaz0=
go.etcd.io/etcd/api/v3 v3.5.20/go.mod h1:QqKGViq4KTgOG43dr/uH0vmGWIaoJY3ggFi6ZH0TH/U=
go.etcd.io/etcd/client/pkg/v3 v3.5.20 h1:sZIAtra+xCo56gdf6BR62to/hiie5Bwl7hQIqMzVTEM=
go.etcd.io/etcd/client/pkg/v3 v3.5.20/go.mod h1:qaOi1k4ZA9lVLejXNvyPABrVEe7VymMF2433yyRQ7O0=
go.etcd.io/etcd/client/v3 v3.5.20 h1:jMT2MwQEhyvhQg49Cec+1ZHJzfUf6ZgcmV0GjPv0tIQ=
go.etcd.io/etcd/client/v3 v3.5.20/go.mod h1:J5lbzYRMUR20YolS5UjlqqMcu3/wdEvG5VNBhzyo3m0=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
@@ -1161,12 +1164,12 @@ go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.59.0 h1:rgMkmiGfix9vFJDcDi1PK8WEQP4FLQwLDfhp5ZLpFeE=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.59.0/go.mod h1:ijPqXp5P6IRRByFVVg9DY8P5HkxkHE5ARIa+86aXPf4=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 h1:sbiXRNDSWJOTobXh5HyQKjq6wUC5tNybqjIqDpAY4CU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0/go.mod h1:69uWxva0WgAA/4bu2Yy70SLDBwZXuQ6PbBpbsa5iZrQ=
go.opentelemetry.io/contrib/zpages v0.57.0 h1:mHFZlTkyrUJcuBhpytPSaVPiVkqri96RKUDk01d83eQ=
go.opentelemetry.io/contrib/zpages v0.57.0/go.mod h1:u/SScNsxj6TacMBA6KCJZjXVC1uwkdVgLFyHrOe0x9M=
go.opentelemetry.io/contrib/zpages v0.60.0 h1:wOM9ie1Hz4H88L9KE6GrGbKJhfm+8F1NfW/Y3q9Xt+8=
go.opentelemetry.io/contrib/zpages v0.60.0/go.mod h1:xqfToSRGh2MYUsfyErNz8jnNDPlnpZqWM/y6Z2Cx7xw=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4=
@@ -1242,8 +1245,8 @@ golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac/go.mod h1:hH+7mtFmImwwcMvScy
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
golang.org/x/image v0.24.0 h1:AN7zRgVsbvmTfNyqIbbOraYL8mSwcKncEj8ofjgzcMQ=
golang.org/x/image v0.24.0/go.mod h1:4b/ITuLfqYq1hqZcjofwctIhi7sZh2WaCjvsBNjjya8=
golang.org/x/image v0.25.0 h1:Y6uW6rH1y5y/LK1J8BPWZtr6yZ7hrsy6hFrXjgsc2fQ=
golang.org/x/image v0.25.0/go.mod h1:tCAmOEGthTtkalusGp1g3xa2gke8J6c2N565dTyl9Rs=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1478,8 +1481,8 @@ golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.10.0 h1:3usCWA8tQn0L8+hFJQNgzpWbd89begxN66o1Ojdn5L4=
golang.org/x/time v0.10.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1597,10 +1600,10 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a h1:nwKuGPlUAt+aR+pcrkfFRrTU1BVrSmYyYMxYbUIVHr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2 h1:DMTIbak9GhdaSxEjvVzAeNZvyc03I61duqNbnm3SU0M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb h1:p31xT4yrYrSM/G4Sn2+TNUkVhFCbG9y8itM2S6Th950=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb h1:TLPQVbx1GJ8VKZxz52VAxl1EBgKXXbTiU9Fc5fZeLn4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=


@@ -30,10 +30,10 @@ dev-docker-multiarch:
docker buildx rm opencloudbuilder || true
docker buildx create --platform linux/arm64,linux/amd64 --name opencloudbuilder
docker buildx use opencloudbuilder
cd .. && docker buildx build --platform linux/arm64,linux/amd64 --output type=docker --file opencloud/docker/Dockerfile.multiarch --tag opencloud-eu/opencloud:dev-multiarch .
cd .. && docker buildx build --platform linux/arm64,linux/amd64 --output type=docker --file opencloud/docker/Dockerfile.multiarch --tag opencloudeu/opencloud:dev-multiarch .
docker buildx rm opencloudbuilder
.PHONY: debug-docker
debug-docker:
$(MAKE) --no-print-directory debug-linux-docker-$(GOARCH)
docker build -f docker/Dockerfile.linux.debug.$(GOARCH) -t opencloud-eu/opencloud:debug .
docker build -f docker/Dockerfile.linux.debug.$(GOARCH) -t opencloudeu/opencloud:debug .


@@ -1,4 +1,4 @@
FROM amd64/alpine:3.20
FROM amd64/alpine:3.21
ARG VERSION=""
ARG REVISION=""
@@ -22,6 +22,8 @@ RUN addgroup -g 1000 -S opencloud-group && \
adduser -S --ingroup opencloud-group --uid 1000 opencloud-user --home /var/lib/opencloud
RUN mkdir -p /var/lib/opencloud && \
# Pre-create the web directory to avoid permission issues
mkdir -p /var/lib/opencloud/web/assets/apps && \
chown -R opencloud-user:opencloud-group /var/lib/opencloud && \
chmod -R 751 /var/lib/opencloud && \
mkdir -p /etc/opencloud && \


@@ -1,4 +1,4 @@
FROM arm64v8/alpine:3.20
FROM arm64v8/alpine:3.21
ARG VERSION=""
ARG REVISION=""
@@ -22,6 +22,8 @@ RUN addgroup -g 1000 -S opencloud-group && \
adduser -S --ingroup opencloud-group --uid 1000 opencloud-user --home /var/lib/opencloud
RUN mkdir -p /var/lib/opencloud && \
# Pre-create the web directory to avoid permission issues
mkdir -p /var/lib/opencloud/web/assets/apps && \
chown -R opencloud-user:opencloud-group /var/lib/opencloud && \
chmod -R 751 /var/lib/opencloud && \
mkdir -p /etc/opencloud && \


@@ -1,4 +1,4 @@
FROM amd64/alpine:latest
FROM amd64/alpine:edge
ARG VERSION=""
ARG REVISION=""
@@ -22,6 +22,8 @@ RUN addgroup -g 1000 -S opencloud-group && \
adduser -S --ingroup opencloud-group --uid 1000 opencloud-user --home /var/lib/opencloud
RUN mkdir -p /var/lib/opencloud && \
# Pre-create the web directory to avoid permission issues
mkdir -p /var/lib/opencloud/web/assets/apps && \
chown -R opencloud-user:opencloud-group /var/lib/opencloud && \
chmod -R 751 /var/lib/opencloud && \
mkdir -p /etc/opencloud && \


@@ -0,0 +1,43 @@
FROM arm64v8/alpine:edge
ARG VERSION=""
ARG REVISION=""
RUN apk add --no-cache attr bash ca-certificates curl delve inotify-tools libc6-compat mailcap tree vips patch && \
echo 'hosts: files dns' >| /etc/nsswitch.conf
LABEL maintainer="OpenCloud GmbH <devops@opencloud.eu>" \
org.opencontainers.image.title="OpenCloud" \
org.opencontainers.image.vendor="OpenCloud GmbH" \
org.opencontainers.image.authors="OpenCloud GmbH" \
org.opencontainers.image.description="OpenCloud is a modern file-sync and share platform" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.documentation="https://github.com/opencloud-eu/opencloud" \
org.opencontainers.image.url="https://hub.docker.com/r/opencloud-eu/opencloud" \
org.opencontainers.image.source="https://github.com/opencloud-eu/opencloud" \
org.opencontainers.image.version="${VERSION}" \
org.opencontainers.image.revision="${REVISION}"
RUN addgroup -g 1000 -S opencloud-group && \
adduser -S --ingroup opencloud-group --uid 1000 opencloud-user --home /var/lib/opencloud
RUN mkdir -p /var/lib/opencloud && \
# Pre-create the web directory to avoid permission issues
mkdir -p /var/lib/opencloud/web/assets/apps && \
chown -R opencloud-user:opencloud-group /var/lib/opencloud && \
chmod -R 751 /var/lib/opencloud && \
mkdir -p /etc/opencloud && \
chown -R opencloud-user:opencloud-group /etc/opencloud && \
chmod -R 751 /etc/opencloud
VOLUME [ "/var/lib/opencloud", "/etc/opencloud" ]
WORKDIR /var/lib/opencloud
USER 1000
EXPOSE 9200/tcp
ENTRYPOINT ["/usr/bin/opencloud"]
CMD ["server"]
COPY dist/binaries/opencloud-linux-arm64 /usr/bin/opencloud


@@ -1,4 +1,4 @@
FROM golang:alpine3.20 AS build
FROM golang:alpine3.21 AS build
ARG TARGETOS
ARG TARGETARCH
ARG VERSION
@@ -37,6 +37,8 @@ RUN addgroup -g 1000 -S opencloud-group && \
adduser -S --ingroup opencloud-group --uid 1000 opencloud-user --home /var/lib/opencloud
RUN mkdir -p /var/lib/opencloud && \
# Pre-create the web directory to avoid permission issues
mkdir -p /var/lib/opencloud/web/assets/apps && \
chown -R opencloud-user:opencloud-group /var/lib/opencloud && \
chmod -R 751 /var/lib/opencloud && \
mkdir -p /etc/opencloud && \


@@ -28,7 +28,7 @@ var (
// regex to determine if a node is trashed or versioned.
// 9113a718-8285-4b32-9042-f930f1a58ac2.REV.2024-05-22T07:32:53.89969726Z
_versionRegex = regexp.MustCompile(`\.REV\.[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?Z$`)
_versionRegex = regexp.MustCompile(`\.REV\.[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?Z(\.\d+)?$`)
// 9113a718-8285-4b32-9042-f930f1a58ac2.T.2024-05-23T08:25:20.006571811Z <- this HAS a symlink
_trashRegex = regexp.MustCompile(`\.T\.[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?Z$`)
)
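The widened `_versionRegex` above adds an optional numeric suffix after the `Z` of the version timestamp. A standalone sketch (not the actual package layout) showing what the new pattern accepts:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as in the change above: a version node name is the node ID plus
// ".REV.<timestamp>" and, newly, an optional numeric suffix after the Z.
var versionRegex = regexp.MustCompile(`\.REV\.[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?Z(\.\d+)?$`)

func main() {
	// Plain version name (matched before and after the change).
	fmt.Println(versionRegex.MatchString("9113a718-8285-4b32-9042-f930f1a58ac2.REV.2024-05-22T07:32:53.89969726Z"))
	// Version name with a numeric suffix (only matched by the new pattern).
	fmt.Println(versionRegex.MatchString("9113a718-8285-4b32-9042-f930f1a58ac2.REV.2024-05-22T07:32:53.89969726Z.1"))
}
```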


@@ -11,17 +11,9 @@ import (
"strings"
"time"
authapp "github.com/opencloud-eu/opencloud/services/auth-app/pkg/command"
"github.com/cenkalti/backoff"
"github.com/mohae/deepcopy"
"github.com/olekukonko/tablewriter"
notifications "github.com/opencloud-eu/opencloud/services/notifications/pkg/command"
"github.com/opencloud-eu/reva/v2/pkg/events/stream"
"github.com/opencloud-eu/reva/v2/pkg/logger"
"github.com/opencloud-eu/reva/v2/pkg/rgrpc/todo/pool"
"github.com/thejerf/suture/v4"
occfg "github.com/opencloud-eu/opencloud/pkg/config"
"github.com/opencloud-eu/opencloud/pkg/log"
ogrpc "github.com/opencloud-eu/opencloud/pkg/service/grpc"
@@ -31,6 +23,7 @@ import (
appProvider "github.com/opencloud-eu/opencloud/services/app-provider/pkg/command"
appRegistry "github.com/opencloud-eu/opencloud/services/app-registry/pkg/command"
audit "github.com/opencloud-eu/opencloud/services/audit/pkg/command"
authapp "github.com/opencloud-eu/opencloud/services/auth-app/pkg/command"
authbasic "github.com/opencloud-eu/opencloud/services/auth-basic/pkg/command"
authmachine "github.com/opencloud-eu/opencloud/services/auth-machine/pkg/command"
authservice "github.com/opencloud-eu/opencloud/services/auth-service/pkg/command"
@@ -44,6 +37,7 @@ import (
idp "github.com/opencloud-eu/opencloud/services/idp/pkg/command"
invitations "github.com/opencloud-eu/opencloud/services/invitations/pkg/command"
nats "github.com/opencloud-eu/opencloud/services/nats/pkg/command"
notifications "github.com/opencloud-eu/opencloud/services/notifications/pkg/command"
ocdav "github.com/opencloud-eu/opencloud/services/ocdav/pkg/command"
ocm "github.com/opencloud-eu/opencloud/services/ocm/pkg/command"
ocs "github.com/opencloud-eu/opencloud/services/ocs/pkg/command"
@@ -64,6 +58,10 @@ import (
web "github.com/opencloud-eu/opencloud/services/web/pkg/command"
webdav "github.com/opencloud-eu/opencloud/services/webdav/pkg/command"
webfinger "github.com/opencloud-eu/opencloud/services/webfinger/pkg/command"
"github.com/opencloud-eu/reva/v2/pkg/events/stream"
"github.com/opencloud-eu/reva/v2/pkg/logger"
"github.com/opencloud-eu/reva/v2/pkg/rgrpc/todo/pool"
"github.com/thejerf/suture/v4"
)
var (
@@ -160,6 +158,11 @@ func NewService(ctx context.Context, options ...Option) (*Service, error) {
cfg.AppRegistry.Commons = cfg.Commons
return appRegistry.Execute(cfg.AppRegistry)
})
reg(3, opts.Config.AuthApp.Service.Name, func(ctx context.Context, cfg *occfg.Config) error {
cfg.AuthApp.Context = ctx
cfg.AuthApp.Commons = cfg.Commons
return authapp.Execute(cfg.AuthApp)
})
reg(3, opts.Config.AuthBasic.Service.Name, func(ctx context.Context, cfg *occfg.Config) error {
cfg.AuthBasic.Context = ctx
cfg.AuthBasic.Commons = cfg.Commons
@@ -324,11 +327,6 @@ func NewService(ctx context.Context, options ...Option) (*Service, error) {
cfg.Audit.Commons = cfg.Commons
return audit.Execute(cfg.Audit)
})
areg(opts.Config.AuthApp.Service.Name, func(ctx context.Context, cfg *occfg.Config) error {
cfg.AuthApp.Context = ctx
cfg.AuthApp.Commons = cfg.Commons
return authapp.Execute(cfg.AuthApp)
})
areg(opts.Config.Policies.Service.Name, func(ctx context.Context, cfg *occfg.Config) error {
cfg.Policies.Context = ctx
cfg.Policies.Commons = cfg.Commons


@@ -16,7 +16,7 @@ var (
// LatestTag is the latest released version plus the dev meta version.
// Will be overwritten by the release pipeline
// Needs a manual change for every tagged release
LatestTag = "1.1.0-alpha.1+dev"
LatestTag = "1.1.0+dev"
// Date indicates the build date.
// This has been removed; it looks like you can only replace static strings with recent Go versions

release-config.ts Normal file

@@ -0,0 +1,49 @@
export default {
changeTypes: [
{
title: '💥 Breaking changes',
labels: ['breaking', 'Type:Breaking-Change'],
bump: 'major',
weight: 3
},
{
title: '🔒 Security',
labels: ['security', 'Type:Security'],
bump: 'patch',
weight: 2
},
{
title: '✨ Features',
labels: ['feature', 'Type:Feature'],
bump: 'minor',
weight: 1
},
{
title: '📈 Enhancement',
labels: ['enhancement', 'refactor', 'Type:Enhancement'],
bump: 'minor'
},
{
title: '🐛 Bug Fixes',
labels: ['bug', 'Type:Bug'],
bump: 'patch'
},
{
title: '📚 Documentation',
labels: ['docs', 'documentation', 'Type:Documentation'],
bump: 'patch'
},
{
title: '✅ Tests',
labels: ['test', 'tests', 'Type:Test'],
bump: 'patch'
},
{
title: '📦️ Dependencies',
labels: ['dependency', 'dependencies', 'Type:Dependencies'],
bump: 'patch',
weight: -1
}
],
useVersionPrefixV: true,
}


@@ -1,53 +1,44 @@
# Auth-App
The auth-app service provides authentication for 3rd party apps.
The auth-app service provides authentication for 3rd party apps unable to use
OpenID Connect. The service is enabled by default and started automatically. It
is possible to disable the service by setting:
## The `auth` Service Family
OpenCloud uses several authentication services for different use cases. All services that start with `auth-` are part of the authentication service family. Each member authenticates requests with different scopes. As of now, these services exist:
- `auth-app` handles authentication of external 3rd party apps
- `auth-basic` handles basic authentication
- `auth-bearer` handles oidc authentication
- `auth-machine` handles interservice authentication when a user is impersonated
- `auth-service` handles interservice authentication when using service accounts
## Service Startup
Because this service is not started automatically, a manual start needs to be initiated which can be done in several ways. To configure the service usage, an environment variable for the proxy service needs to be set to allow app authentication.
```bash
OC_ADD_RUN_SERVICES=auth-app # deployment specific. Add the service to the manual startup list, use with binary deployments. Alternatively you can start the service explicitly via the command line.
PROXY_ENABLE_APP_AUTH=true # mandatory, allow app authentication. In case of a distributed environment, this envvar needs to be set in the proxy service.
OC_EXCLUDE_RUN_SERVICES=auth-app # deployment specific. Removes service from the list of automatically started services, use with single-binary deployments
PROXY_ENABLE_APP_AUTH=false # mandatory, disables app authentication. In case of a distributed environment, this envvar needs to be set in the proxy service.
```
## App Tokens
App Tokens are used to authenticate 3rd party access via https like when using curl (apps) to access an API endpoint. These apps need to authenticate themselves as no logged in user authenticates the request. To be able to use an app token, one must first create a token. There are different options of creating a token.
App Tokens are passwords specifically generated to be used by 3rd party applications
for authentication when accessing the OpenCloud API endpoints. To
be able to use an app token, one must first create a token. There are different
options of creating a token.
### Via CLI (dev only)
## Important Security Note
Replace the `user-name` with an existing user. For the `token-expiration`, you can use any time abbreviation from the following list: `h, m, s`. Examples: `72h`, `1h`, `1m` or `1s`. The default is `72h`.
When using an external IDP for authentication, App Tokens are NOT invalidated
when the user is disabled or locked in that external IDP. That means the user
will still be able to use their existing App Tokens for authentication for as
long as the App Tokens are valid.
```bash
opencloud auth-app create --user-name={user-name} --expiration={token-expiration}
```
Once generated, these tokens can be used to authenticate requests to OpenCloud. They are passed as part of the request as `Basic Auth` header.
## Managing App Tokens
### Via API
The `auth-app` service provides an API to create (POST), list (GET) and delete (DELETE) tokens at the `/auth-app/tokens` endpoint.
Please note: This API is preliminary. In the future we will provide endpoints
in the `graph` service for allowing the management of App Tokens.
When using curl for the respective command, you need to authenticate with a header. To do so, get the currently active bearer token from the browser's developer console. Consider that this token has a short lifetime. In any example, replace `<your host[:port]>` with the URL:port of your OpenCloud instance, and `{token}` and `{value}` accordingly. Note that the active bearer token authenticates the user the token was issued for.
The `auth-app` service provides an API to create (POST), list (GET) and delete (DELETE) tokens at the `/auth-app/tokens` endpoint.
* **Create a token**\
The POST request requires:
* An `expiry` key/value pair in the form of `expiry=<number><h|m|s>`\
Example: `expiry=72h`
* An active bearer token
```bash
curl --request POST 'https://<your host:9200>/auth-app/tokens?expiry={value}' \
--header 'accept: application/json' \
--header 'authorization: Bearer {token}'
--header 'accept: application/json'
```
Example output:
```
@@ -59,14 +50,19 @@ When using curl for the respective command, you need to authenticate with a head
}
```
Note that this is the only time the app token is returned in cleartext. To use the token,
please copy it from the response.
* **List tokens**\
The GET request only requires an active bearer token for authentication:\
Note that `--request GET` is technically not required because it is curl's default.
```bash
curl --request GET 'https://<your host:9200>/auth-app/tokens' \
--header 'accept: application/json' \
--header 'authorization: Bearer {token}'
--header 'accept: application/json'
```
Note that the `token` value in the response to the "List Tokens" request is not the actual
app token, but a hashed value of the token. So this value cannot be used for authenticating
with the token.
Example output:
```
[
@@ -87,20 +83,23 @@ When using curl for the respective command, you need to authenticate with a head
* **Delete a token**\
The DELETE request requires:
* A `token` key/value pair in the form of `token=<token_issued>`\
Example: `token=Z3s2K7816M4vuSpd5`
* An active bearer token
* A `token` key/value pair in the form of `token=<token_issued>`. The value needs to be the hashed value as returned by the `List Tokens` response.\
Example: `token=$2$Z3s2K7816M4vuSpd5`
```bash
curl --request DELETE 'https://<your host:9200>/auth-app/tokens?token={value}' \
--header 'accept: application/json' \
--header 'authorization: Bearer {token}'
--header 'accept: application/json'
```
### Via Impersonation API
When setting the environment variable `AUTH_APP_ENABLE_IMPERSONATION` to `true`, admins will be able to use the `/auth-app/tokens` endpoint to create tokens for other users but using their own bearer token for authentication. This can be important for migration scenarios, but should not be considered for regular tasks on a production system for security reasons.
When setting the environment variable `AUTH_APP_ENABLE_IMPERSONATION` to
`true`, admins will be able to use the `/auth-app/tokens` endpoint to create
tokens for other users. This can be important for migration scenarios, but
should not be considered for regular tasks on a production system for security
reasons.
To impersonate, the respective requests from the CLI commands above extend with the following parameters, where you can use one or the other:
To impersonate, the respective requests from the CLI commands above are extended
with the following parameters, where you can use one or the other:
* The `userID` in the form of: `userID={value}`\
Example:\
@@ -114,6 +113,27 @@ Example:\
A final create request would then look like:
```bash
curl --request POST 'https://<your host:9200>/auth-app/tokens?expiry={value}&userName={value}' \
--header 'accept: application/json' \
--header 'authorization: Bearer {token}'
--header 'accept: application/json'
```
### Via CLI (developer only)
As the CLI uses the internal CS3 APIs, it needs access to the reva gateway
service. It is mainly intended for developer (and admin) usage.
Replace the `user-name` with an existing user. For the `token-expiration`, you
can use any time abbreviation from the following list: `h, m, s`. Examples:
`72h`, `1h`, `1m` or `1s`. The default is `72h`.
```bash
opencloud auth-app create --user-name={user-name} --expiration={token-expiration}
```
## Authenticating using App Tokens
To authenticate using an App Token, simply use the username for which the token was generated
and the token value as returned by the "Create Token" request.
```bash
curl -u <username>:<tokenvalue> 'https://<your host>/graph/v1.0/me' \
--header 'accept: application/json'
```


@@ -28,9 +28,40 @@ type Config struct {
AllowImpersonation bool `yaml:"allow_impersonation" env:"AUTH_APP_ENABLE_IMPERSONATION" desc:"Allows admins to create app tokens for other users. Used for migration. Do NOT use in productive deployments." introductionVersion:"1.0.0"`
StorageDriver string `yaml:"storage_driver" env:"AUTH_APP_STORAGE_DRIVER" desc:"Driver to be used to persist the app tokens. Supported values are 'jsoncs3', 'json'." introductionVersion:"%%NEXT%%"`
StorageDrivers StorageDrivers `yaml:"storage_drivers"`
Context context.Context `yaml:"-"`
}
type StorageDrivers struct {
JSONCS3 JSONCS3Driver `yaml:"jsoncs3"`
}
type JSONCS3Driver struct {
ProviderAddr string `yaml:"provider_addr" env:"AUTH_APP_JSONCS3_PROVIDER_ADDR" desc:"GRPC address of the STORAGE-SYSTEM service." introductionVersion:"%%NEXT%%"`
SystemUserID string `yaml:"system_user_id" env:"OC_SYSTEM_USER_ID;AUTH_APP_JSONCS3_SYSTEM_USER_ID" desc:"ID of the OpenCloud STORAGE-SYSTEM system user. Admins need to set the ID for the STORAGE-SYSTEM system user in this config option which is then used to reference the user. Any reasonable long string is possible, preferably this would be an UUIDv4 format." introductionVersion:"%%NEXT%%"`
SystemUserIDP string `yaml:"system_user_idp" env:"OC_SYSTEM_USER_IDP;AUTH_APP_JSONCS3_SYSTEM_USER_IDP" desc:"IDP of the OpenCloud STORAGE-SYSTEM system user." introductionVersion:"%%NEXT%%"`
SystemUserAPIKey string `yaml:"system_user_api_key" env:"OC_SYSTEM_USER_API_KEY;AUTH_APP_JSONCS3_SYSTEM_USER_API_KEY" desc:"API key for the STORAGE-SYSTEM system user." introductionVersion:"%%NEXT%%"`
PasswordGenerator string `yaml:"password_generator" env:"AUTH_APP_JSONCS3_PASSWORD_GENERATOR" desc:"The password generator that should be used for generating app tokens. Supported values are: 'diceware' and 'random'." introductionVersion:"%%NEXT%%"`
PasswordGeneratorOptions PasswordGeneratorOptions `yaml:"password_generator_options"`
}
type PasswordGeneratorOptions struct {
DicewareOptions DicewareOptions `yaml:"diceware"`
RandPWOpts RandPWOpts `yaml:"random"`
}
// DicewareOptions defines the config options for the "diceware" password generator
type DicewareOptions struct {
NumberOfWords int `yaml:"number_of_words" env:"AUTH_APP_JSONCS3_DICEWARE_NUMBER_OF_WORDS" desc:"The number of words the generated passphrase will have." introductionVersion:"%%NEXT%%"`
}
// RandPWOpts defines the config options for the "random" password generator
type RandPWOpts struct {
PasswordLength int `yaml:"password_length" env:"AUTH_APP_JSONCS3_RANDOM_PASSWORD_LENGTH" desc:"The number of characters the generated passwords will have." introductionVersion:"%%NEXT%%"`
}
// Log defines the logging configuration
type Log struct {
Level string `yaml:"level" env:"OC_LOG_LEVEL;AUTH_APP_LOG_LEVEL" desc:"The log level. Valid values are: 'panic', 'fatal', 'error', 'warn', 'info', 'debug', 'trace'." introductionVersion:"1.0.0"`
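Based on the env var names declared in the new config fields above, selecting the jsoncs3 driver and its password generator could look like this (a sketch; the values are examples, not mandates from this diff):

```shell
# Assumed usage of the env vars introduced above (example values).
export AUTH_APP_STORAGE_DRIVER=jsoncs3
export AUTH_APP_JSONCS3_PASSWORD_GENERATOR=diceware
export AUTH_APP_JSONCS3_DICEWARE_NUMBER_OF_WORDS=6
# Or, for fixed-length random tokens:
# export AUTH_APP_JSONCS3_PASSWORD_GENERATOR=random
# export AUTH_APP_JSONCS3_RANDOM_PASSWORD_LENGTH=32
```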


@@ -44,6 +44,19 @@ func DefaultConfig() *config.Config {
Service: config.Service{
Name: "auth-app",
},
StorageDriver: "jsoncs3",
StorageDrivers: config.StorageDrivers{
JSONCS3: config.JSONCS3Driver{
ProviderAddr: "eu.opencloud.api.storage-system",
SystemUserIDP: "internal",
PasswordGenerator: "diceware",
PasswordGeneratorOptions: config.PasswordGeneratorOptions{
DicewareOptions: config.DicewareOptions{
NumberOfWords: 6,
},
},
},
},
Reva: shared.DefaultRevaConfig(),
}
}
@@ -85,6 +98,14 @@ func EnsureDefaults(cfg *config.Config) {
cfg.MachineAuthAPIKey = cfg.Commons.MachineAuthAPIKey
}
if cfg.StorageDrivers.JSONCS3.SystemUserAPIKey == "" && cfg.Commons != nil && cfg.Commons.SystemUserAPIKey != "" {
cfg.StorageDrivers.JSONCS3.SystemUserAPIKey = cfg.Commons.SystemUserAPIKey
}
if cfg.StorageDrivers.JSONCS3.SystemUserID == "" && cfg.Commons != nil && cfg.Commons.SystemUserID != "" {
cfg.StorageDrivers.JSONCS3.SystemUserID = cfg.Commons.SystemUserID
}
if cfg.TokenManager == nil && cfg.Commons != nil && cfg.Commons.TokenManager != nil {
cfg.TokenManager = &config.TokenManager{
JWTSecret: cfg.Commons.TokenManager.JWTSecret,


@@ -11,6 +11,14 @@ import (
func AuthAppConfigFromStruct(cfg *config.Config) map[string]interface{} {
appAuthJSON := filepath.Join(defaults.BaseDataPath(), "appauth.json")
jsonCS3pwGenOpt := map[string]any{}
switch cfg.StorageDrivers.JSONCS3.PasswordGenerator {
case "random":
jsonCS3pwGenOpt["token_strength"] = cfg.StorageDrivers.JSONCS3.PasswordGeneratorOptions.RandPWOpts.PasswordLength
case "diceware":
jsonCS3pwGenOpt["number_of_words"] = cfg.StorageDrivers.JSONCS3.PasswordGeneratorOptions.DicewareOptions.NumberOfWords
}
rcfg := map[string]interface{}{
"shared": map[string]interface{}{
"jwt_secret": cfg.TokenManager.JWTSecret,
@@ -36,11 +44,19 @@ func AuthAppConfigFromStruct(cfg *config.Config) map[string]interface{} {
},
},
"applicationauth": map[string]interface{}{
"driver": "json",
"driver": cfg.StorageDriver,
"drivers": map[string]interface{}{
"json": map[string]interface{}{
"file": appAuthJSON,
},
"jsoncs3": map[string]interface{}{
"provider_addr": cfg.StorageDrivers.JSONCS3.ProviderAddr,
"service_user_id": cfg.StorageDrivers.JSONCS3.SystemUserID,
"service_user_idp": cfg.StorageDrivers.JSONCS3.SystemUserIDP,
"machine_auth_apikey": cfg.StorageDrivers.JSONCS3.SystemUserAPIKey,
"password_generator": cfg.StorageDrivers.JSONCS3.PasswordGenerator,
"generator_config": jsonCS3pwGenOpt,
},
},
},
},


@@ -99,7 +99,10 @@ func (a *AuthAppService) HandleCreate(w http.ResponseWriter, r *http.Request) {
return
}
label := "Generated via API"
label := q.Get("label")
if label == "" {
label = "Generated via API"
}
// Impersonated request
userID, userName := q.Get("userID"), q.Get("userName")
@@ -131,7 +134,7 @@ func (a *AuthAppService) HandleCreate(w http.ResponseWriter, r *http.Request) {
return
}
label = "Generated via Impersonation API"
label = label + " (Impersonation)"
}
scopes, err := scope.AddOwnerScope(map[string]*authpb.Scope{})
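With the change above, `HandleCreate` reads an optional `label` query parameter and falls back to "Generated via API". A token-creation request supplying a custom label could then look like the existing README examples (sketch; `{token}` is an active bearer token, `backup-script` an example label):

```shell
curl --request POST \
  'https://<your host:9200>/auth-app/tokens?expiry=72h&label=backup-script' \
  --header 'accept: application/json' \
  --header 'authorization: Bearer {token}'
```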


@@ -1272,6 +1272,10 @@ func (f *FileConnector) CheckFileInfo(ctx context.Context) (*ConnectorResponse,
fileinfo.KeyPostMessageOrigin: f.cfg.Commons.OpenCloudURL,
fileinfo.KeyLicenseCheckForEditIsEnabled: f.cfg.App.LicenseCheckEnable,
// set to true for Collabora until we have a web embed mode for "Save As" and "Export As"
// see the FIXME in ./fileinfo/collabora.go and https://github.com/opencloud-eu/web/issues/422
fileinfo.KeyUserCanNotWriteRelative: false,
}
switch wopiContext.ViewMode {


@@ -1780,7 +1780,7 @@ var _ = Describe("FileConnector", func() {
OwnerID: "61616262636340637573746f6d496470", // hex of aabbcc@customIdp
Size: int64(998877),
BaseFileName: "test.txt",
UserCanNotWriteRelative: false,
UserCanNotWriteRelative: true,
DisableExport: true,
DisableCopy: true,
DisablePrint: true,
@@ -1962,7 +1962,7 @@ var _ = Describe("FileConnector", func() {
OwnerID: "61616262636340637573746f6d496470", // hex of aabbcc@customIdp
Size: int64(998877),
BaseFileName: "test.txt",
UserCanNotWriteRelative: false,
UserCanNotWriteRelative: true,
DisableExport: true,
DisableCopy: true,
DisablePrint: true,


@@ -99,7 +99,7 @@ func (cinfo *Collabora) SetProperties(props map[string]interface{}) {
case KeyUserCanWrite:
cinfo.UserCanWrite = value.(bool)
case KeyUserCanNotWriteRelative:
cinfo.UserCanNotWriteRelative = value.(bool)
cinfo.UserCanNotWriteRelative = true // FIXME: set to `value.(bool)` again for https://github.com/opencloud-eu/web/issues/422
case KeyUserID:
cinfo.UserID = value.(string)
case KeyUserFriendlyName:


@@ -34,7 +34,6 @@ type Config struct {
EnableFederatedSharingIncoming bool `yaml:"enable_federated_sharing_incoming" env:"OC_ENABLE_OCM;FRONTEND_ENABLE_FEDERATED_SHARING_INCOMING" desc:"Changing this value is NOT supported. Enables support for incoming federated sharing for clients. The backend behaviour is not changed." introductionVersion:"1.0.0"`
EnableFederatedSharingOutgoing bool `yaml:"enable_federated_sharing_outgoing" env:"OC_ENABLE_OCM;FRONTEND_ENABLE_FEDERATED_SHARING_OUTGOING" desc:"Changing this value is NOT supported. Enables support for outgoing federated sharing for clients. The backend behaviour is not changed." introductionVersion:"1.0.0"`
SearchMinLength int `yaml:"search_min_length" env:"FRONTEND_SEARCH_MIN_LENGTH" desc:"Minimum number of characters to enter before a client should start a search for Share receivers. This setting can be used to customize the user experience if e.g too many results are displayed." introductionVersion:"1.0.0"`
Edition string `yaml:"edition" env:"OC_EDITION;FRONTEND_EDITION" desc:"Edition of OpenCloud. Used for branding purposes." introductionVersion:"1.0.0"`
DisableSSE bool `yaml:"disable_sse" env:"OC_DISABLE_SSE;FRONTEND_DISABLE_SSE" desc:"When set to true, clients are informed that the Server-Sent Events endpoint is not accessible." introductionVersion:"1.0.0"`
DefaultLinkPermissions int `yaml:"default_link_permissions" env:"FRONTEND_DEFAULT_LINK_PERMISSIONS" desc:"Defines the default permissions a link is being created with. Possible values are 0 (= internal link, for instance members only) and 1 (= public link with viewer permissions). Defaults to 1." introductionVersion:"1.0.0"`


@@ -87,7 +87,6 @@ func DefaultConfig() *config.Config {
DefaultUploadProtocol: "tus",
DefaultLinkPermissions: 1,
SearchMinLength: 3,
Edition: "Community",
Checksums: config.Checksums{
SupportedTypes: []string{"sha1", "md5", "adler32"},
PreferredUploadType: "sha1",


@@ -208,7 +208,6 @@ func FrontendConfigFromStruct(cfg *config.Config, logger log.Logger) (map[string
"needsDbUpgrade": false,
"version": version.Legacy,
"versionstring": version.LegacyString,
"edition": cfg.Edition,
"productname": "OpenCloud",
"product": "OpenCloud",
"productversion": version.GetString(),
@@ -339,7 +338,7 @@ func FrontendConfigFromStruct(cfg *config.Config, logger log.Logger) (map[string
},
"version": map[string]interface{}{
"product": "OpenCloud",
"edition": "Community",
"edition": "",
"major": version.ParsedLegacy().Major(),
"minor": version.ParsedLegacy().Minor(),
"micro": version.ParsedLegacy().Patch(),


@@ -677,21 +677,34 @@ func (g Graph) getSpecialDriveItems(ctx context.Context, baseURL *url.URL, space
}
var spaceItems []libregraph.DriveItem
var err error
doCache := true
spaceItems = g.fetchSpecialDriveItem(ctx, spaceItems, SpaceImageSpecialFolderName, imageNode, space, baseURL)
spaceItems = g.fetchSpecialDriveItem(ctx, spaceItems, ReadmeSpecialFolderName, readmeNode, space, baseURL)
spaceItems, err = g.fetchSpecialDriveItem(ctx, spaceItems, SpaceImageSpecialFolderName, imageNode, space, baseURL)
if err != nil {
doCache = false
g.logger.Debug().Err(err).Str("ID", imageNode).Msg("Could not get space image")
}
spaceItems, err = g.fetchSpecialDriveItem(ctx, spaceItems, ReadmeSpecialFolderName, readmeNode, space, baseURL)
if err != nil {
doCache = false
g.logger.Debug().Err(err).Str("ID", readmeNode).Msg("Could not get space readme")
}
// cache properties
spacePropertiesEntry := specialDriveItemEntry{
specialDriveItems: spaceItems,
rootMtime: space.GetMtime(),
}
g.specialDriveItemsCache.Set(cachekey, spacePropertiesEntry, time.Duration(g.config.Spaces.ExtendedSpacePropertiesCacheTTL))
if doCache {
g.specialDriveItemsCache.Set(cachekey, spacePropertiesEntry, time.Duration(g.config.Spaces.ExtendedSpacePropertiesCacheTTL))
}
return spaceItems
}
func (g Graph) fetchSpecialDriveItem(ctx context.Context, spaceItems []libregraph.DriveItem, itemName string, itemNode string, space *storageprovider.StorageSpace, baseURL *url.URL) []libregraph.DriveItem {
func (g Graph) fetchSpecialDriveItem(ctx context.Context, spaceItems []libregraph.DriveItem, itemName string, itemNode string, space *storageprovider.StorageSpace, baseURL *url.URL) ([]libregraph.DriveItem, error) {
var ref *storageprovider.Reference
if itemNode != "" {
rid, _ := storagespace.ParseID(itemNode)
@@ -700,12 +713,15 @@ func (g Graph) fetchSpecialDriveItem(ctx context.Context, spaceItems []libregrap
ref = &storageprovider.Reference{
ResourceId: &rid,
}
spaceItem := g.getSpecialDriveItem(ctx, ref, itemName, baseURL, space)
spaceItem, err := g.getSpecialDriveItem(ctx, ref, itemName, baseURL, space)
if err != nil {
return spaceItems, err
}
if spaceItem != nil {
spaceItems = append(spaceItems, *spaceItem)
}
}
return spaceItems
return spaceItems, nil
}
// generates a space root stat cache key used to detect changes in a space
@@ -730,10 +746,10 @@ type specialDriveItemEntry struct {
rootMtime *types.Timestamp
}
func (g Graph) getSpecialDriveItem(ctx context.Context, ref *storageprovider.Reference, itemName string, baseURL *url.URL, space *storageprovider.StorageSpace) *libregraph.DriveItem {
func (g Graph) getSpecialDriveItem(ctx context.Context, ref *storageprovider.Reference, itemName string, baseURL *url.URL, space *storageprovider.StorageSpace) (*libregraph.DriveItem, error) {
var spaceItem *libregraph.DriveItem
if ref.GetResourceId().GetSpaceId() == "" && ref.GetResourceId().GetOpaqueId() == "" {
return nil
return nil, nil
}
// FIXME we should send a fieldmask 'path' and return it as the Path property to save an additional call to the storage.
@@ -741,16 +757,14 @@ func (g Graph) getSpecialDriveItem(ctx context.Context, ref *storageprovider.Ref
// and Path should always be relative to the space root OR the resource the current user can access ...
spaceItem, err := g.getDriveItem(ctx, ref)
if err != nil {
g.logger.Debug().Err(err).Str("ID", ref.GetResourceId().GetOpaqueId()).Str("name", itemName).Msg("Could not get item info")
return nil
return nil, err
}
itemPath := ref.GetPath()
if itemPath == "" {
// lookup by id
itemPath, err = g.getPathForResource(ctx, *ref.GetResourceId())
if err != nil {
g.logger.Debug().Err(err).Str("ID", ref.GetResourceId().GetOpaqueId()).Str("name", itemName).Msg("Could not get item path")
return nil
return nil, err
}
}
spaceItem.SpecialFolder = &libregraph.SpecialFolder{Name: libregraph.PtrString(itemName)}
@@ -758,5 +772,5 @@ func (g Graph) getSpecialDriveItem(ctx context.Context, ref *storageprovider.Ref
webdavURL.Path = path.Join(webdavURL.Path, space.GetId().GetOpaqueId(), itemPath)
spaceItem.WebDavUrl = libregraph.PtrString(webdavURL.String())
return spaceItem
return spaceItem, nil
}


@@ -7,3 +7,76 @@ It is mainly targeted at smaller installations. For larger setups it is recommen
By default, it is configured to use the OpenCloud IDM service as its LDAP backend for looking up and authenticating users. Other backends, such as an external LDAP server, can be configured via a set of [environment variables](https://docs.opencloud.eu/services/idp/configuration/#environment-variables).
Note that the translations provided by the IDP service are not maintained by OpenCloud but are part of the embedded [LibreGraph Connect Identifier](https://github.com/libregraph/lico/tree/master/identifier) package.
## Configuration
### Custom Clients
By default, the `idp` service generates an OIDC client configuration suitable for
using OpenCloud with the standard client applications (Web, Desktop, iOS and
Android). If you need to configure additional clients, it is possible to inject a
custom configuration via `yaml`. This can be done by adding a `clients` section
to the `idp` section of the main configuration file (`opencloud.yaml`). This section
needs to contain the configuration for all clients (including the standard clients).
For example, if you want to add a (public) client for use with oidc-agent, you
would need to add this snippet to the `idp` section in `opencloud.yaml`.
```yaml
clients:
- id: web
name: OpenCloud Web App
trusted: true
secret: ""
redirect_uris:
- https://opencloud.k8s:9200/
- https://opencloud.k8s:9200/oidc-callback.html
- https://opencloud.k8s:9200/oidc-silent-redirect.html
post_logout_redirect_uris: []
origins:
- https://opencloud.k8s:9200
application_type: ""
- id: OpenCloudDesktop
name: OpenCloud Desktop Client
trusted: false
secret: ""
redirect_uris:
- http://127.0.0.1
- http://localhost
post_logout_redirect_uris: []
origins: []
application_type: native
- id: OpenCloudAndroid
name: OpenCloud Android App
trusted: false
secret: ""
redirect_uris:
- oc://android.opencloud.eu
post_logout_redirect_uris:
- oc://android.opencloud.eu
origins: []
application_type: native
- id: OpenCloudIOS
name: OpenCloud iOS App
trusted: false
secret: ""
redirect_uris:
- oc://ios.opencloud.eu
post_logout_redirect_uris:
- oc://ios.opencloud.eu
origins: []
application_type: native
- id: oidc-agent
name: OIDC Agent
trusted: false
secret: ""
redirect_uris:
- http://127.0.0.1
- http://localhost
post_logout_redirect_uris: []
origins: []
application_type: native
```
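A malformed `clients` section can lock the standard apps out of the login flow, so it can be worth sanity-checking the YAML before restarting the service. The following Python sketch (illustrative only, not part of OpenCloud; it assumes PyYAML is installed) checks that every client entry carries the fields shown in the snippet above:

```python
# Illustrative sanity check for a custom `clients` section. The field names
# are taken from the example snippet above; this script is NOT an official
# OpenCloud tool.
import yaml  # assumes PyYAML is installed

REQUIRED_KEYS = {
    "id", "name", "trusted", "secret", "redirect_uris",
    "post_logout_redirect_uris", "origins", "application_type",
}

def check_clients(snippet: str) -> list:
    """Return the ids of all clients, raising if any required key is missing."""
    clients = yaml.safe_load(snippet)["clients"]
    ids = []
    for client in clients:
        missing = REQUIRED_KEYS - client.keys()
        if missing:
            raise ValueError(f"client {client.get('id')!r} is missing {sorted(missing)}")
        ids.append(client["id"])
    return ids

snippet = """
clients:
  - id: oidc-agent
    name: OIDC Agent
    trusted: false
    secret: ""
    redirect_uris:
      - http://127.0.0.1
      - http://localhost
    post_logout_redirect_uris: []
    origins: []
    application_type: native
"""
print(check_clients(snippet))  # → ['oidc-agent']
```

Running this against your full `clients` list before a restart catches missing keys early; remember that the section replaces the generated defaults, so the standard clients must also pass the check.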


@@ -105,11 +105,11 @@
"web-vitals": "^3.5.2"
},
"devDependencies": {
"@babel/core": "7.26.9",
"@babel/core": "7.26.10",
"@typescript-eslint/eslint-plugin": "^4.33.0",
"@typescript-eslint/parser": "^4.33.0",
"babel-eslint": "^10.1.0",
"babel-loader": "9.2.1",
"babel-loader": "10.0.0",
"babel-plugin-named-asset-import": "^0.3.8",
"babel-preset-react-app": "^10.1.0",
"case-sensitive-paths-webpack-plugin": "2.4.0",


@@ -60,13 +60,14 @@ type Asset struct {
}
type Client struct {
ID string `yaml:"id"`
Name string `yaml:"name"`
Trusted bool `yaml:"trusted"`
Secret string `yaml:"secret"`
RedirectURIs []string `yaml:"redirect_uris"`
Origins []string `yaml:"origins"`
ApplicationType string `yaml:"application_type"`
ID string `yaml:"id"`
Name string `yaml:"name"`
Trusted bool `yaml:"trusted"`
Secret string `yaml:"secret"`
RedirectURIs []string `yaml:"redirect_uris"`
PostLogoutRedirectURIs []string `yaml:"post_logout_redirect_uris"`
Origins []string `yaml:"origins"`
ApplicationType string `yaml:"application_type"`
}
type Settings struct {


@@ -101,6 +101,9 @@ func DefaultConfig() *config.Config {
RedirectURIs: []string{
"oc://android.opencloud.eu",
},
PostLogoutRedirectURIs: []string{
"oc://android.opencloud.eu",
},
},
{
ID: "OpenCloudIOS",
@@ -109,6 +112,9 @@ func DefaultConfig() *config.Config {
RedirectURIs: []string{
"oc://ios.opencloud.eu",
},
PostLogoutRedirectURIs: []string{
"oc://ios.opencloud.eu",
},
},
},
Ldap: config.Ldap{


File diff suppressed because it is too large.


@@ -77,7 +77,6 @@ func Server(cfg *config.Config) *cli.Command {
ocdav.Product(cfg.Status.Product),
ocdav.Version(cfg.Status.Version),
ocdav.VersionString(cfg.Status.VersionString),
ocdav.Edition(cfg.Status.Edition),
ocdav.MachineAuthAPIKey(cfg.MachineAuthAPIKey),
ocdav.Broker(broker.NoOp{}),
// ocdav.FavoriteManager() // FIXME needs a proper persistence implementation https://github.com/owncloud/ocis/issues/1228


@@ -81,5 +81,4 @@ type Status struct {
Product string
ProductName string
ProductVersion string
Edition string `yaml:"edition" env:"OC_EDITION;OCDAV_EDITION" desc:"Edition of OpenCloud. Used for branding purposes." introductionVersion:"1.0.0"`
}


@@ -92,7 +92,6 @@ func DefaultConfig() *config.Config {
ProductVersion: version.GetString(),
Product: "OpenCloud",
ProductName: "OpenCloud",
Edition: "Community",
},
}
}


@@ -352,7 +352,7 @@ func loadMiddlewares(logger log.Logger, cfg *config.Config,
middleware.CredentialsByUserAgent(cfg.AuthMiddleware.CredentialsByUserAgent),
middleware.Logger(logger),
middleware.OIDCIss(cfg.OIDC.Issuer),
middleware.EnableBasicAuth(cfg.EnableBasicAuth),
middleware.EnableBasicAuth(cfg.EnableBasicAuth || cfg.AuthMiddleware.AllowAppAuth),
middleware.TraceProvider(traceProvider),
),
middleware.AccountResolver(


@@ -100,6 +100,9 @@ func DefaultConfig() *config.Config {
Cluster: "opencloud-cluster",
EnableTLS: false,
},
AuthMiddleware: config.AuthMiddleware{
AllowAppAuth: true,
},
}
}


@@ -12,7 +12,7 @@ When hard-stopping OpenCloud, for example with the `kill <pid>` command (SIGKILL
* With the value of the environment variable `STORAGE_USERS_GRACEFUL_SHUTDOWN_TIMEOUT`, the `storage-users` service will delay its shutdown, giving it time to finish writing necessary data. This delay can be necessary if there is a lot of data to be saved and/or if storage access/throughput is slow. In such a case you would receive an error log entry informing you that not all data could be saved in time. To prevent such occurrences, you must increase the default value.
* If a shutdown error has been logged, the command-line maintenance tool [Inspect and Manipulate Node Metadata](https://doc.opencloud.eu/maintenance/commands/commands.html#inspect-and-manipulate-node-metadata) can help to fix the issue. Please contact support for details.
* If a shutdown error has been logged, the command-line maintenance tool to inspect and manipulate node metadata can help to fix the issue. Please contact support for details.
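In a docker-compose deployment, the timeout can be raised like this (a minimal sketch: the service name and compose layout are assumptions; only the variable name comes from the text above):

```yaml
# Sketch only: give storage-users up to 60 seconds to flush data on shutdown.
services:
  opencloud:
    environment:
      STORAGE_USERS_GRACEFUL_SHUTDOWN_TIMEOUT: "60"
```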
## CLI Commands
@@ -142,8 +142,7 @@ opencloud storage-users uploads sessions --processing=false --has-virus=false --
### Manage Trash-Bin Items
This command set provides commands to get an overview of trash-bin items, restore items and purge old items of `personal` spaces and `project` spaces (spaces that have been created manually). `trash-bin` commands require a `spaceID` as parameter. See [List all spaces
](https://doc.opencloud.eu/apis/http/graph/spaces/#list-all-spaces-get-drives) or [Listing Space IDs](https://doc.opencloud.eu/maintenance/space-ids/space-ids.html) for details of how to get them.
This command set provides commands to get an overview of trash-bin items, restore items and purge old items of `personal` spaces and `project` spaces (spaces that have been created manually). `trash-bin` commands require a `spaceID` as parameter.
```bash
opencloud storage-users trash-bin <command>
@@ -170,10 +169,10 @@ The behaviour of the `purge-expired` command can be configured by using the foll
Used to obtain space trash-bin information. It defaults to the system admin user (`OC_ADMIN_USER_ID`) but can be set individually. Note that `OC_ADMIN_USER_ID` is only assigned automatically when using the single binary deployment and must be assigned manually in all other deployments. The command only considers spaces to which the assigned user has access and delete permission.
* `STORAGE_USERS_PURGE_TRASH_BIN_PERSONAL_DELETE_BEFORE`\
Has a default value of `720h` which equals `30 days`. This means, the command will delete all files older than `30 days`. The value is human-readable, for valid values see the duration type described in the [Environment Variable Types](https://doc.opencloud.eu/deployment/services/envvar-types-description.html). A value of `0` is equivalent to disable and prevents the deletion of `personal space` trash-bin files.
Has a default value of `720h`, which equals `30 days`. This means the command will delete all files older than `30 days`. The value is human-readable. A value of `0` disables the deletion of `personal space` trash-bin files.
* `STORAGE_USERS_PURGE_TRASH_BIN_PROJECT_DELETE_BEFORE`\
Has a default value of `720h` which equals `30 days`. This means, the command will delete all files older than `30 days`. The value is human-readable, for valid values see the duration type described in the [Environment Variable Types](https://doc.opencloud.eu/latest/deployment/services/envvar-types-description.html). A value of `0` is equivalent to disable and prevents the deletion of `project space` trash-bin files.
Has a default value of `720h`, which equals `30 days`. This means the command will delete all files older than `30 days`. The value is human-readable. A value of `0` disables the deletion of `project space` trash-bin files.
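The `DELETE_BEFORE` values use hour-based duration strings such as `720h`. As a quick sanity check when picking a retention window for `purge-expired`, they convert to days like this (illustrative Python, not part of OpenCloud; it assumes the hour-based format shown above):

```python
# Illustrative only: convert an hour-based duration string like "720h" into
# days, to double-check retention settings such as
# STORAGE_USERS_PURGE_TRASH_BIN_PERSONAL_DELETE_BEFORE.
def hours_to_days(duration: str) -> float:
    if not duration.endswith("h"):
        raise ValueError("expected an hour-based duration like '720h'")
    return float(duration[:-1]) / 24

print(hours_to_days("720h"))   # → 30.0 (the documented default)
print(hours_to_days("2160h"))  # → 90.0 (a 90-day retention window)
```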
#### List and Restore Trash-Bin Items


@@ -98,13 +98,26 @@ func ListUploadSessions(cfg *config.Config) *cli.Command {
return configlog.ReturnFatal(parser.ParseConfig(cfg))
},
Action: func(c *cli.Context) error {
var err error
f, ok := registry.NewFuncs[cfg.Driver]
if !ok {
fmt.Fprintf(os.Stderr, "Unknown filesystem driver '%s'\n", cfg.Driver)
os.Exit(1)
}
drivers := revaconfig.StorageProviderDrivers(cfg)
fs, err := f(drivers[cfg.Driver].(map[string]interface{}), nil, nil)
var fsStream events.Stream
if cfg.Driver == "posix" {
// We need to init the posix driver with 'scanfs' disabled
drivers["posix"] = revaconfig.Posix(cfg, false)
// Also posix refuses to start without an events stream
fsStream, err = event.NewStream(cfg)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to create event stream for posix driver: %v\n", err)
os.Exit(1)
}
}
fs, err := f(drivers[cfg.Driver].(map[string]interface{}), fsStream, nil)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to initialize filesystem driver '%s'\n", cfg.Driver)
return err
@@ -117,7 +130,7 @@ func ListUploadSessions(cfg *config.Config) *cli.Command {
}
var stream events.Stream
if c.Bool("restart") {
if c.Bool("restart") || c.Bool("resume") {
stream, err = event.NewStream(cfg)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to create event stream: %v\n", err)


@@ -194,14 +194,21 @@ type OwnCloudSQLDriver struct {
// PosixDriver is the storage driver configuration when using 'posix' storage driver
type PosixDriver struct {
// Root is the absolute path to the location of the data
Root string `yaml:"root" env:"STORAGE_USERS_POSIX_ROOT" desc:"The directory where the filesystem storage will store its data. If not defined, the root directory derives from $OC_BASE_DATA_PATH/storage/users." introductionVersion:"1.0.0"`
PersonalSpaceAliasTemplate string `yaml:"personalspacealias_template" env:"STORAGE_USERS_POSIX_PERSONAL_SPACE_ALIAS_TEMPLATE" desc:"Template string to construct personal space aliases." introductionVersion:"1.0.0"`
PersonalSpacePathTemplate string `yaml:"personalspacepath_template" env:"STORAGE_USERS_POSIX_PERSONAL_SPACE_PATH_TEMPLATE" desc:"Template string to construct the paths of the personal space roots." introductionVersion:"1.0.0"`
GeneralSpaceAliasTemplate string `yaml:"generalspacealias_template" env:"STORAGE_USERS_POSIX_GENERAL_SPACE_ALIAS_TEMPLATE" desc:"Template string to construct general space aliases." introductionVersion:"1.0.0"`
GeneralSpacePathTemplate string `yaml:"generalspacepath_template" env:"STORAGE_USERS_POSIX_GENERAL_SPACE_PATH_TEMPLATE" desc:"Template string to construct the paths of the projects space roots." introductionVersion:"1.0.0"`
PermissionsEndpoint string `yaml:"permissions_endpoint" env:"STORAGE_USERS_PERMISSION_ENDPOINT;STORAGE_USERS_POSIX_PERMISSIONS_ENDPOINT" desc:"Endpoint of the permissions service. The endpoints can differ for 'decomposed', 'posix' and 'decomposeds3'." introductionVersion:"1.0.0"`
AsyncUploads bool `yaml:"async_uploads" env:"OC_ASYNC_UPLOADS" desc:"Enable asynchronous file uploads." introductionVersion:"1.0.0"`
ScanDebounceDelay time.Duration `yaml:"scan_debounce_delay" env:"STORAGE_USERS_POSIX_SCAN_DEBOUNCE_DELAY" desc:"The time in milliseconds to wait before scanning the filesystem for changes after a change has been detected." introductionVersion:"1.0.0"`
Root string `yaml:"root" env:"STORAGE_USERS_POSIX_ROOT" desc:"The directory where the filesystem storage will store its data. If not defined, the root directory derives from $OC_BASE_DATA_PATH/storage/users." introductionVersion:"1.0.0"`
Propagator string `yaml:"propagator" env:"OC_DECOMPOSEDFS_PROPAGATOR;STORAGE_USERS_POSIX_PROPAGATOR" desc:"The propagator used for the posix driver. At the moment, only 'sync' is fully supported, 'async' is available as an experimental option." introductionVersion:"2.0.0"`
AsyncPropagatorOptions AsyncPropagatorOptions `yaml:"async_propagator_options"`
PersonalSpaceAliasTemplate string `yaml:"personalspacealias_template" env:"STORAGE_USERS_POSIX_PERSONAL_SPACE_ALIAS_TEMPLATE" desc:"Template string to construct personal space aliases." introductionVersion:"1.0.0"`
PersonalSpacePathTemplate string `yaml:"personalspacepath_template" env:"STORAGE_USERS_POSIX_PERSONAL_SPACE_PATH_TEMPLATE" desc:"Template string to construct the paths of the personal space roots." introductionVersion:"1.0.0"`
GeneralSpaceAliasTemplate string `yaml:"generalspacealias_template" env:"STORAGE_USERS_POSIX_GENERAL_SPACE_ALIAS_TEMPLATE" desc:"Template string to construct general space aliases." introductionVersion:"1.0.0"`
GeneralSpacePathTemplate string `yaml:"generalspacepath_template" env:"STORAGE_USERS_POSIX_GENERAL_SPACE_PATH_TEMPLATE" desc:"Template string to construct the paths of the projects space roots." introductionVersion:"1.0.0"`
PermissionsEndpoint string `yaml:"permissions_endpoint" env:"STORAGE_USERS_PERMISSION_ENDPOINT;STORAGE_USERS_POSIX_PERMISSIONS_ENDPOINT" desc:"Endpoint of the permissions service. The endpoints can differ for 'decomposed', 'posix' and 'decomposeds3'." introductionVersion:"1.0.0"`
AsyncUploads bool `yaml:"async_uploads" env:"OC_ASYNC_UPLOADS" desc:"Enable asynchronous file uploads." introductionVersion:"1.0.0"`
ScanDebounceDelay time.Duration `yaml:"scan_debounce_delay" env:"STORAGE_USERS_POSIX_SCAN_DEBOUNCE_DELAY" desc:"The time in milliseconds to wait before scanning the filesystem for changes after a change has been detected." introductionVersion:"1.0.0"`
MaxQuota uint64 `yaml:"max_quota" env:"OC_SPACES_MAX_QUOTA;STORAGE_USERS_POSIX_MAX_QUOTA" desc:"Set a global max quota for spaces in bytes. A value of 0 equals unlimited. If not using the global OC_SPACES_MAX_QUOTA, you must define the FRONTEND_MAX_QUOTA in the frontend service." introductionVersion:"2.0.0"`
MaxAcquireLockCycles int `yaml:"max_acquire_lock_cycles" env:"STORAGE_USERS_POSIX_MAX_ACQUIRE_LOCK_CYCLES" desc:"When trying to lock files, OpenCloud will try this amount of times to acquire the lock before failing. After each try it will wait for an increasing amount of time. Values of 0 or below will be ignored and the default value will be used." introductionVersion:"2.0.0"`
LockCycleDurationFactor int `yaml:"lock_cycle_duration_factor" env:"STORAGE_USERS_POSIX_LOCK_CYCLE_DURATION_FACTOR" desc:"When trying to lock files, OpenCloud will multiply the cycle with this factor and use it as a millisecond timeout. Values of 0 or below will be ignored and the default value will be used." introductionVersion:"2.0.0"`
MaxConcurrency int `yaml:"max_concurrency" env:"OC_MAX_CONCURRENCY;STORAGE_USERS_POSIX_MAX_CONCURRENCY" desc:"Maximum number of concurrent go-routines. Higher values can potentially get work done faster but will also cause more load on the system. Values of 0 or below will be ignored and the default value will be used." introductionVersion:"2.0.0"`
DisableVersioning bool `yaml:"disable_versioning" env:"OC_DISABLE_VERSIONING" desc:"Disables versioning of files. When set to true, new uploads with the same filename will overwrite existing files instead of creating a new version." introductionVersion:"2.0.0"`
UseSpaceGroups bool `yaml:"use_space_groups" env:"STORAGE_USERS_POSIX_USE_SPACE_GROUPS" desc:"Use space groups to manage permissions on spaces." introductionVersion:"1.0.0"`


@@ -91,7 +91,7 @@ func DefaultConfig() *config.Config {
TransferExpires: 86400,
UploadExpiration: 24 * 60 * 60,
GracefulShutdownTimeout: 30,
Driver: "decomposed",
Driver: "posix",
Drivers: config.Drivers{
OwnCloudSQL: config.OwnCloudSQLDriver{
Root: filepath.Join(defaults.BaseDataPath(), "storage", "owncloud"),
@@ -143,7 +143,7 @@ func DefaultConfig() *config.Config {
UseSpaceGroups: false,
Root: filepath.Join(defaults.BaseDataPath(), "storage", "users"),
PersonalSpaceAliasTemplate: "{{.SpaceType}}/{{.User.Username | lower}}",
PersonalSpacePathTemplate: "users/{{.User.Username}}",
PersonalSpacePathTemplate: "users/{{.User.Id.OpaqueId}}",
GeneralSpaceAliasTemplate: "{{.SpaceType}}/{{.SpaceName | replace \" \" \"-\" | lower}}",
GeneralSpacePathTemplate: "projects/{{.SpaceId}}",
PermissionsEndpoint: "eu.opencloud.api.settings",
@@ -165,7 +165,7 @@ func DefaultConfig() *config.Config {
TTL: 24 * 60 * time.Second,
},
IDCache: config.IDCache{
Store: "memory",
Store: "nats-js-kv",
Nodes: []string{"127.0.0.1:9233"},
Database: "ids-storage-users",
TTL: 24 * 60 * time.Second,


@@ -87,15 +87,26 @@ func Local(cfg *config.Config) map[string]interface{} {
// Posix is the config mapping for the Posix storage driver
func Posix(cfg *config.Config, enableFSScan bool) map[string]interface{} {
return map[string]interface{}{
"root": cfg.Drivers.Posix.Root,
"personalspacepath_template": cfg.Drivers.Posix.PersonalSpacePathTemplate,
"generalspacepath_template": cfg.Drivers.Posix.GeneralSpacePathTemplate,
"permissionssvc": cfg.Drivers.Posix.PermissionsEndpoint,
"permissionssvc_tls_mode": cfg.Commons.GRPCClientTLS.Mode,
"treetime_accounting": true,
"treesize_accounting": true,
"asyncfileuploads": cfg.Drivers.Posix.AsyncUploads,
"scan_debounce_delay": cfg.Drivers.Posix.ScanDebounceDelay,
"root": cfg.Drivers.Posix.Root,
"personalspacepath_template": cfg.Drivers.Posix.PersonalSpacePathTemplate,
"personalspacealias_template": cfg.Drivers.Posix.PersonalSpaceAliasTemplate,
"generalspacepath_template": cfg.Drivers.Posix.GeneralSpacePathTemplate,
"generalspacealias_template": cfg.Drivers.Posix.GeneralSpaceAliasTemplate,
"permissionssvc": cfg.Drivers.Posix.PermissionsEndpoint,
"permissionssvc_tls_mode": cfg.Commons.GRPCClientTLS.Mode,
"treetime_accounting": true,
"treesize_accounting": true,
"asyncfileuploads": cfg.Drivers.Posix.AsyncUploads,
"scan_debounce_delay": cfg.Drivers.Posix.ScanDebounceDelay,
"max_quota": cfg.Drivers.Posix.MaxQuota,
"disable_versioning": cfg.Drivers.Posix.DisableVersioning,
"propagator": cfg.Drivers.Posix.Propagator,
"async_propagator_options": map[string]interface{}{
"propagation_delay": cfg.Drivers.Posix.AsyncPropagatorOptions.PropagationDelay,
},
"max_acquire_lock_cycles": cfg.Drivers.Posix.MaxAcquireLockCycles,
"lock_cycle_duration_factor": cfg.Drivers.Posix.LockCycleDurationFactor,
"max_concurrency": cfg.Drivers.Posix.MaxConcurrency,
"idcache": map[string]interface{}{
"cache_store": cfg.IDCache.Store,
"cache_nodes": cfg.IDCache.Nodes,
@@ -114,6 +125,15 @@ func Posix(cfg *config.Config, enableFSScan bool) map[string]interface{} {
"cache_auth_username": cfg.FilemetadataCache.AuthUsername,
"cache_auth_password": cfg.FilemetadataCache.AuthPassword,
},
"events": map[string]interface{}{
"numconsumers": cfg.Events.NumConsumers,
},
"tokens": map[string]interface{}{
"transfer_shared_secret": cfg.Commons.TransferSecret,
"transfer_expires": cfg.TransferExpires,
"download_endpoint": cfg.DataServerURL,
"datagateway_endpoint": cfg.DataGatewayURL,
},
"use_space_groups": cfg.Drivers.Posix.UseSpaceGroups,
"enable_fs_revisions": cfg.Drivers.Posix.EnableFSRevisions,
"scan_fs": enableFSScan,


@@ -1,6 +1,6 @@
SHELL := bash
NAME := web
WEB_ASSETS_VERSION = v1.0.0
WEB_ASSETS_VERSION = v2.0.0
WEB_ASSETS_BRANCH = main
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI


@@ -214,17 +214,6 @@ class CapabilitiesContext implements Context {
$this->featureContext->theHTTPStatusCodeShouldBe(200, '', $response);
$responseXmlObject = HttpRequestHelper::getResponseXml($response, __METHOD__)->data->capabilities;
$edition = $this->getParameterValueFromXml(
$responseXmlObject,
'core',
'status@@@edition'
);
if (!\strlen($edition)) {
Assert::fail(
"Cannot get edition from core capabilities"
);
}
$product = $this->getParameterValueFromXml(
$responseXmlObject,
@@ -249,7 +238,6 @@ class CapabilitiesContext implements Context {
);
}
$jsonExpectedDecoded['edition'] = $edition;
$jsonExpectedDecoded['product'] = $product;
$jsonExpectedDecoded['productname'] = $productName;


@@ -2042,17 +2042,6 @@ class FeatureContext extends BehatVariablesContext {
);
}
/**
* @return string
*/
public function getEditionFromStatus(): string {
$decodedResponse = $this->getJsonDecodedStatusPhp();
if (isset($decodedResponse['edition'])) {
return $decodedResponse['edition'];
}
return '';
}
/**
* @return string|null
*/
@@ -2282,14 +2271,6 @@ class FeatureContext extends BehatVariablesContext {
],
"parameter" => []
],
[
"code" => "%edition%",
"function" => [
$this,
"getEditionFromStatus"
],
"parameter" => []
],
[
"code" => "%version%",
"function" => [


@@ -15,7 +15,7 @@ services:
STORAGE_SYSTEM_DRIVER_OC_ROOT: /srv/app/tmp/opencloud/storage/metadata
SHARING_USER_JSON_FILE: /srv/app/tmp/opencloud/shares.json
PROXY_ENABLE_BASIC_AUTH: "true"
WEB_UI_CONFIG_FILE: /woodpecker/src/github.com/opencloud-eu/opencloud/tests/config/drone/opencloud-config.json
WEB_UI_CONFIG_FILE: /woodpecker/src/github.com/opencloud-eu/opencloud/tests/config/woodpecker/opencloud-config.json
ACCOUNTS_HASH_DIFFICULTY: 4
OC_INSECURE: "true"
IDM_CREATE_DEMO_USERS: "true"


@@ -153,15 +153,6 @@ _ocdav: api compatibility, return correct status code_
- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)
#### [Uploading with the same mtime and filename causes "internal server errors"](https://github.com/owncloud/ocis/issues/10496)
- [coreApiWebdavUpload/uploadFile.feature:400](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUpload/uploadFile.feature#L400)
- [coreApiWebdavUpload/uploadFile.feature:401](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUpload/uploadFile.feature#L401)
- [coreApiWebdavUpload/uploadFile.feature:402](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUpload/uploadFile.feature#L402)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L79)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:80](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L80)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:81](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L81)
### Won't fix
Not everything needs to be implemented for opencloud.


@@ -0,0 +1,163 @@
## Scenarios from core API tests that are expected to fail with decomposed storage while running with the Graph API
### File
Basic file management like up and download, move, copy, properties, trash, versions and chunking.
#### [Custom dav properties with namespaces are rendered incorrectly](https://github.com/owncloud/ocis/issues/2140)
_ocdav: double-check the webdav property parsing when custom namespaces are used_
- [coreApiWebdavProperties/setFileProperties.feature:128](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L128)
- [coreApiWebdavProperties/setFileProperties.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L129)
- [coreApiWebdavProperties/setFileProperties.feature:130](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L130)
### Sync
Synchronization features like etag propagation, setting mtime and locking files
#### [Uploading an old method chunked file with checksum should fail using new DAV path](https://github.com/owncloud/ocis/issues/2323)
- [coreApiMain/checksums.feature:233](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L233)
- [coreApiMain/checksums.feature:234](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L234)
- [coreApiMain/checksums.feature:235](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L235)
### Share
#### [d:quota-available-bytes in dprop of PROPFIND give wrong response value](https://github.com/owncloud/ocis/issues/8197)
- [coreApiWebdavProperties/getQuota.feature:57](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L57)
- [coreApiWebdavProperties/getQuota.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L58)
- [coreApiWebdavProperties/getQuota.feature:59](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L59)
- [coreApiWebdavProperties/getQuota.feature:73](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L73)
- [coreApiWebdavProperties/getQuota.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L74)
- [coreApiWebdavProperties/getQuota.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L75)
#### [deleting a file inside a received shared folder is moved to the trash-bin of the sharer not the receiver](https://github.com/owncloud/ocis/issues/1124)
- [coreApiTrashbin/trashbinSharingToShares.feature:54](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L54)
- [coreApiTrashbin/trashbinSharingToShares.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L55)
- [coreApiTrashbin/trashbinSharingToShares.feature:56](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L56)
- [coreApiTrashbin/trashbinSharingToShares.feature:83](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L83)
- [coreApiTrashbin/trashbinSharingToShares.feature:84](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L84)
- [coreApiTrashbin/trashbinSharingToShares.feature:85](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L85)
- [coreApiTrashbin/trashbinSharingToShares.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L142)
- [coreApiTrashbin/trashbinSharingToShares.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L143)
- [coreApiTrashbin/trashbinSharingToShares.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L144)
- [coreApiTrashbin/trashbinSharingToShares.feature:202](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L202)
- [coreApiTrashbin/trashbinSharingToShares.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L203)
### Other
API, search, favorites, config, capabilities, non-existent endpoints, CORS, and others
#### [sending MKCOL requests to another or non-existing user's webDav endpoints as normal user should return 404](https://github.com/owncloud/ocis/issues/5049)
_ocdav: api compatibility, return correct status code_
- [coreApiAuth/webDavMKCOLAuth.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L42)
- [coreApiAuth/webDavMKCOLAuth.feature:53](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L53)
#### [trying to lock file of another user gives http 500](https://github.com/owncloud/ocis/issues/2176)
- [coreApiAuth/webDavLOCKAuth.feature:46](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L46)
- [coreApiAuth/webDavLOCKAuth.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L58)
#### [Support for favorites](https://github.com/owncloud/ocis/issues/1228)
- [coreApiFavorites/favorites.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L101)
- [coreApiFavorites/favorites.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L102)
- [coreApiFavorites/favorites.feature:103](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L103)
- [coreApiFavorites/favorites.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L124)
- [coreApiFavorites/favorites.feature:125](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L125)
- [coreApiFavorites/favorites.feature:126](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L126)
- [coreApiFavorites/favorites.feature:189](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L189)
- [coreApiFavorites/favorites.feature:190](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L190)
- [coreApiFavorites/favorites.feature:191](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L191)
- [coreApiFavorites/favorites.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L145)
- [coreApiFavorites/favorites.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L146)
- [coreApiFavorites/favorites.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L147)
- [coreApiFavorites/favorites.feature:174](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L174)
- [coreApiFavorites/favorites.feature:175](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L175)
- [coreApiFavorites/favorites.feature:176](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L176)
- [coreApiFavorites/favoritesSharingToShares.feature:91](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L91)
- [coreApiFavorites/favoritesSharingToShares.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L92)
- [coreApiFavorites/favoritesSharingToShares.feature:93](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L93)
#### [WWW-Authenticate header for unauthenticated requests is not clear](https://github.com/owncloud/ocis/issues/2285)
- [coreApiWebdavOperations/refuseAccess.feature:21](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L21)
- [coreApiWebdavOperations/refuseAccess.feature:22](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L22)
#### [PATCH request for TUS upload with wrong checksum gives incorrect response](https://github.com/owncloud/ocis/issues/1755)
- [coreApiWebdavUploadTUS/checksums.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L74)
- [coreApiWebdavUploadTUS/checksums.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L75)
- [coreApiWebdavUploadTUS/checksums.feature:76](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L76)
- [coreApiWebdavUploadTUS/checksums.feature:77](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L77)
- [coreApiWebdavUploadTUS/checksums.feature:78](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L78)
- [coreApiWebdavUploadTUS/checksums.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L79)
- [coreApiWebdavUploadTUS/checksums.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L147)
- [coreApiWebdavUploadTUS/checksums.feature:148](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L148)
- [coreApiWebdavUploadTUS/checksums.feature:149](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L149)
- [coreApiWebdavUploadTUS/checksums.feature:192](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L192)
- [coreApiWebdavUploadTUS/checksums.feature:193](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L193)
- [coreApiWebdavUploadTUS/checksums.feature:194](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L194)
- [coreApiWebdavUploadTUS/checksums.feature:195](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L195)
- [coreApiWebdavUploadTUS/checksums.feature:196](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L196)
- [coreApiWebdavUploadTUS/checksums.feature:197](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L197)
- [coreApiWebdavUploadTUS/checksums.feature:240](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L240)
- [coreApiWebdavUploadTUS/checksums.feature:241](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L241)
- [coreApiWebdavUploadTUS/checksums.feature:242](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L242)
- [coreApiWebdavUploadTUS/checksums.feature:243](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L243)
- [coreApiWebdavUploadTUS/checksums.feature:244](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L244)
- [coreApiWebdavUploadTUS/checksums.feature:245](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L245)
- [coreApiWebdavUploadTUS/uploadToShare.feature:255](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L255)
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:375](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L375)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)
- [coreApiWebdavMove2/moveFile.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L143)
- [coreApiWebdavMove1/moveFolder.feature:36](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L36)
- [coreApiWebdavMove1/moveFolder.feature:50](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L50)
- [coreApiWebdavMove1/moveFolder.feature:64](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L64)
#### [Trying to delete other user's trashbin item returns 409 for spaces path instead of 404](https://github.com/owncloud/ocis/issues/9791)
- [coreApiTrashbin/trashbinDelete.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinDelete.feature#L92)
#### [MOVE a file into same folder with same name returns 404 instead of 403](https://github.com/owncloud/ocis/issues/1976)
- [coreApiWebdavMove2/moveFile.feature:100](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L100)
- [coreApiWebdavMove2/moveFile.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L101)
- [coreApiWebdavMove2/moveFile.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L102)
- [coreApiWebdavMove1/moveFolder.feature:217](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L217)
- [coreApiWebdavMove1/moveFolder.feature:218](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L218)
- [coreApiWebdavMove1/moveFolder.feature:219](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L219)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:334](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L334)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:337](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L337)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:340](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L340)
#### [COPY file/folder to same name is possible (but 500 code error for folder with spaces path)](https://github.com/owncloud/ocis/issues/8711)
- [coreApiSharePublicLink2/copyFromPublicLink.feature:198](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiSharePublicLink2/copyFromPublicLink.feature#L198)
- [coreApiWebdavProperties/copyFile.feature:1094](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1094)
- [coreApiWebdavProperties/copyFile.feature:1095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1095)
- [coreApiWebdavProperties/copyFile.feature:1096](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1096)
#### [Trying to restore personal file to file of share received folder returns 403 but the share file is deleted (new dav path)](https://github.com/owncloud/ocis/issues/10356)
- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)
### Won't fix
Not everything needs to be implemented for OpenCloud.
- _Blacklisted ignored files are no longer required because OpenCloud can handle `.htaccess` files without the security implications that Apache introduces when serving user-provided files._
Note: always keep an empty line at the end of this file; the bash script that processes it requires the last line to end with a newline.
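The trailing-newline requirement can be sketched as a quick pre-flight check. This is a minimal sketch only: the filename and the exact behavior of the real bash script are assumptions, not taken from the source.

```python
# Minimal sketch (hypothetical filename): verify that the expected-failures
# file ends with a newline before handing it to the processing script.
from pathlib import Path

path = Path("expected-failures.md")  # hypothetical name for illustration
path.write_text("#### Some issue\n- some test line\n")

# A file whose last byte is b"\n" satisfies the trailing-newline requirement.
ends_with_newline = path.read_bytes().endswith(b"\n")
print("trailing newline present:", ends_with_newline)
```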


@@ -36,14 +36,9 @@
- [apiSearch1/search.feature:321](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L321)
- [apiSearch1/search.feature:324](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L324)
- [apiSearch1/search.feature:356](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L356)
- [apiSearch1/search.feature:369](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L369)
- [apiSearch1/search.feature:396](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L396)
- [apiSearch1/search.feature:410](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L410)
- [apiSearch1/search.feature:437](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L437)
- [apiSearch1/search.feature:438](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L438)
- [apiSearch1/search.feature:439](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L439)
- [apiSearch1/search.feature:465](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L465)
- [apiSearch1/search.feature:466](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L466)
- [apiSearch1/search.feature:467](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L467)
- [apiSearch2/mediaTypeSearch.feature:31](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch2/mediaTypeSearch.feature#L31)
- [apiSearch2/mediaTypeSearch.feature:32](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch2/mediaTypeSearch.feature#L32)
- [apiSearch2/mediaTypeSearch.feature:33](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch2/mediaTypeSearch.feature#L33)
@@ -154,7 +149,7 @@
- [apiSpacesShares/shareSubItemOfSpaceViaPublicLink.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpacesShares/shareSubItemOfSpaceViaPublicLink.feature#L147)
- [apiSharingNgLinkSharePermission/createLinkShare.feature:473](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkSharePermission/createLinkShare.feature#L473)
- [apiSharingNgLinkSharePermission/createLinkShare.feature:1208](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkSharePermission/createLinkShare.feature#L1208)
- [apiSharingNgLinkSharePermission/updateLinkShare.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkSharePermission/updateLinkShare.feature#L203)
- [apiSharingNgLinkSharePermission/updateLinkShare.feature:204](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkSharePermission/updateLinkShare.feature#L204)
- [apiSharingNgLinkSharePermission/updateLinkShare.feature:282](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkSharePermission/updateLinkShare.feature#L282)
- [apiSharingNgLinkShareRoot/updateLinkShare.feature:10](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkShareRoot/updateLinkShare.feature#L10)
- [apiSharingNgLinkShareRoot/updateLinkShare.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSharingNgLinkShareRoot/updateLinkShare.feature#L42)
@@ -193,7 +188,7 @@
- [apiSpaces/uploadSpaces.feature:95](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpaces/uploadSpaces.feature#L95)
- [apiSpaces/uploadSpaces.feature:112](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpaces/uploadSpaces.feature#L112)
- [apiSpaces/uploadSpaces.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpaces/uploadSpaces.feature#L129)
- [apiSpacesShares/shareSpacesViaLink.feature:61](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpacesShares/shareSpacesViaLink.feature#L61)
- [apiSpacesShares/shareSpacesViaLink.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSpacesShares/shareSpacesViaLink.feature#L58)
- [apiDepthInfinity/propfind.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiDepthInfinity/propfind.feature#L74)
- [apiDepthInfinity/propfind.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiDepthInfinity/propfind.feature#L124)
- [apiLocks/lockFiles.feature:490](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiLocks/lockFiles.feature#L490)
@@ -209,17 +204,11 @@
- [apiLocks/unlockFiles.feature:322](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiLocks/unlockFiles.feature#L322)
- [apiLocks/unlockFiles.feature:323](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiLocks/unlockFiles.feature#L323)
- [apiAntivirus/antivirus.feature:114](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L114)
- [apiAntivirus/antivirus.feature:115](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L115)
- [apiAntivirus/antivirus.feature:116](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L116)
- [apiAntivirus/antivirus.feature:117](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L117)
- [apiAntivirus/antivirus.feature:118](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L118)
- [apiAntivirus/antivirus.feature:119](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L119)
- [apiAntivirus/antivirus.feature:140](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L140)
- [apiAntivirus/antivirus.feature:141](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L141)
- [apiAntivirus/antivirus.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L142)
- [apiAntivirus/antivirus.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L143)
- [apiAntivirus/antivirus.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L144)
- [apiAntivirus/antivirus.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L145)
- [apiAntivirus/antivirus.feature:356](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L356)
- [apiAntivirus/antivirus.feature:357](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L357)
- [apiAntivirus/antivirus.feature:358](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L358)
@@ -310,6 +299,8 @@
- [coreApiWebdavOperations/propfind.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/propfind.feature#L55)
- [coreApiWebdavOperations/propfind.feature:69](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/propfind.feature#L69)
- [coreApiWebdavUpload/uploadFile.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUpload/uploadFile.feature#L376)
- [apiActivities/shareActivities.feature:1956](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/shareActivities.feature#L1956)
- [apiActivities/shareActivities.feature:2095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/shareActivities.feature#L2095)
#### [Cannot create new TUS upload resource using /webdav without remote.php - returns html instead](https://github.com/owncloud/ocis/issues/10346)
@@ -346,6 +337,7 @@
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:39](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L39)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:51](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L51)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:65](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L65)
- [coreApiWebdavUploadTUS/uploadFileMtime.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtime.feature#L79)
- [coreApiWebdavUploadTUS/uploadFileMtimeShares.feature:29](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtimeShares.feature#L29)
- [coreApiWebdavUploadTUS/uploadFileMtimeShares.feature:48](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtimeShares.feature#L48)
- [coreApiWebdavUploadTUS/uploadFileMtimeShares.feature:69](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFileMtimeShares.feature#L69)


@@ -22,7 +22,7 @@ Feature: create auth-app token
],
"properties": {
"token": {
"pattern": "^[a-zA-Z0-9]{16}$"
"pattern": "^([a-zA-Z]+ ){5}[a-zA-Z]+$"
},
"label": {
"const": "Generated via API"
@@ -58,7 +58,7 @@ Feature: create auth-app token
],
"properties": {
"token": {
"pattern": "^\\$2a\\$11\\$[A-Za-z0-9./]{53}$"
"pattern": "^\\$argon2id\\$v=19\\$m=65536,t=1,p=16\\$.+$"
},
"label": {
"const": "Generated via API"
@@ -75,7 +75,7 @@ Feature: create auth-app token
],
"properties": {
"token": {
"pattern": "^\\$2a\\$11\\$[A-Za-z0-9./]{53}$"
"pattern": "^\\$argon2id\\$v=19\\$m=65536,t=1,p=16\\$.+$"
},
"label": {
"const": "Generated via CLI"
@@ -92,10 +92,10 @@ Feature: create auth-app token
],
"properties": {
"token": {
"pattern": "^\\$2a\\$11\\$[A-Za-z0-9./]{53}$"
"pattern": "^\\$argon2id\\$v=19\\$m=65536,t=1,p=16\\$.+$"
},
"label": {
"const": "Generated via Impersonation API"
"const": "Generated via API (Impersonation)"
}
}
}
@@ -121,10 +121,10 @@ Feature: create auth-app token
],
"properties": {
"token": {
"pattern": "^[a-zA-Z0-9]{16}$"
"pattern": "^([a-zA-Z]+ ){5}[a-zA-Z]+$"
},
"label": {
"const": "Generated via Impersonation API"
"const": "Generated via API (Impersonation)"
}
}
}
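The pattern changes in this hunk swap a 16-character alphanumeric token for a six-word passphrase, and a bcrypt hash (`$2a$11$…`) for an argon2id hash. A minimal sketch of what the new schema patterns accept; the sample values below are invented for illustration.

```python
# Sketch of the updated JSON-schema regexes (copied from the hunk above).
import re

# New token format: six space-separated alphabetic words.
passphrase_re = re.compile(r"^([a-zA-Z]+ ){5}[a-zA-Z]+$")
# New stored-hash format: argon2id with the parameters from the schema.
argon2id_re = re.compile(r"^\$argon2id\$v=19\$m=65536,t=1,p=16\$.+$")

# Invented sample values, only to exercise the patterns.
assert passphrase_re.match("correct horse battery staple mellow tree")
assert not passphrase_re.match("AbCd1234EfGh5678")  # old 16-char token style

assert argon2id_re.match("$argon2id$v=19$m=65536,t=1,p=16$c29tZXNhbHQ$aGFzaA")
assert not argon2id_re.match("$2a$11$" + "a" * 53)  # old bcrypt style
print("patterns behave as expected")
```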


@@ -183,7 +183,7 @@ Feature: check file info with different wopi apps
"const": true
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true
@@ -581,7 +581,7 @@ Feature: check file info with different wopi apps
"const": <user-can-write>
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true
@@ -691,7 +691,7 @@ Feature: check file info with different wopi apps
"const": true
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true
@@ -1077,7 +1077,7 @@ Feature: check file info with different wopi apps
"const": true
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true
@@ -1424,7 +1424,7 @@ Feature: check file info with different wopi apps
"const": true
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true
@@ -1810,7 +1810,7 @@ Feature: check file info with different wopi apps
"const": <user-can-write>
},
"UserCanNotWriteRelative": {
"const": false
"const": true
},
"EnableOwnerTermination": {
"const": true


@@ -27,7 +27,7 @@ Feature: backup consistency
Then the command should be successful
And the command output should contain "💚 No inconsistency found. The backup in '%storage_path%' seems to be valid."
@issue-9498
@issue-9498 @issue-391
Scenario: check backup consistency after uploading file multiple times via TUS
Given user "Alice" uploads a file "filesForUpload/textfile.txt" to "/today.txt" with mtime "today" via TUS inside of the space "Personal" using the WebDAV API
And user "Alice" uploads a file "filesForUpload/textfile.txt" to "/today.txt" with mtime "today" via TUS inside of the space "Personal" using the WebDAV API
@@ -39,9 +39,9 @@ Feature: backup consistency
And the administrator has started the server
When user "Alice" gets the number of versions of file "today.txt"
Then the HTTP status code should be "207"
And the number of versions should be "1"
And the number of versions should be "2"
@issue-9498
@issue-9498 @issue-428
Scenario: check backup consistency after uploading a file multiple times
Given user "Alice" has uploaded file with content "hello world" to "/textfile0.txt"
And user "Alice" has uploaded file with content "hello world" to "/textfile0.txt"
@@ -53,4 +53,4 @@ Feature: backup consistency
And the administrator has started the server
When user "Alice" gets the number of versions of file "/textfile0.txt"
Then the HTTP status code should be "207"
And the number of versions should be "2"
And the number of versions should be "2"


@@ -11,7 +11,7 @@ Feature: remove file versions via CLI command
And user "Alice" has uploaded file with content "This is version 3" to "textfile.txt"
When the administrator removes all the file versions using the CLI
Then the command should be successful
And the command output should contain " Deleted 2 revisions (6 files / 2 blobs)"
And the command output should contain " Deleted 2 revisions (4 files / 2 blobs)"
When user "Alice" gets the number of versions of file "textfile.txt"
Then the HTTP status code should be "207"
And the number of versions should be "0"
@@ -26,7 +26,7 @@ Feature: remove file versions via CLI command
And user "Alice" has uploaded file with content "This is version 3" to "anotherFile.txt"
When the administrator removes the versions of file "randomFile.txt" of user "Alice" from space "Personal" using the CLI
Then the command should be successful
And the command output should contain " Deleted 2 revisions (6 files / 2 blobs)"
And the command output should contain " Deleted 2 revisions (4 files / 2 blobs)"
When user "Alice" gets the number of versions of file "randomFile.txt"
Then the HTTP status code should be "207"
And the number of versions should be "0"
@@ -52,7 +52,7 @@ Feature: remove file versions via CLI command
And we save it into "EPSUM_FILEID"
When the administrator removes the file versions of space "projectSpace" using the CLI
Then the command should be successful
And the command output should contain " Deleted 4 revisions (12 files / 4 blobs)"
And the command output should contain " Deleted 4 revisions (8 files / 4 blobs)"
When user "Alice" gets the number of versions of file "file.txt"
Then the HTTP status code should be "207"
And the number of versions should be "2"


@@ -193,17 +193,12 @@ Feature: capabilities
"status": {
"type": "object",
"required": [
"edition",
"product",
"productname",
"version",
"versionstring"
],
"properties": {
"edition": {
"type": "string",
"enum": ["%edition%"]
},
"product": {
"type": "string",
"enum": ["%productname%"]
@@ -230,7 +225,6 @@ Feature: capabilities
"type": "object",
"required": [
"string",
"edition",
"product"
],
"properties": {
@@ -238,10 +232,6 @@ Feature: capabilities
"type": "string",
"enum": ["%versionstring%"]
},
"edition": {
"type": "string",
"enum": ["%edition%"]
},
"product": {
"type": "string",
"enum": ["%productname%"]


@@ -47,7 +47,6 @@ Feature: default capabilities for normal user
"required": [
"version",
"versionstring",
"edition",
"productname"
],
"properties": {
@@ -57,9 +56,6 @@ Feature: default capabilities for normal user
"versionstring": {
"const": "%versionstring%"
},
"edition": {
"const": "%edition%"
},
"productname": {
"const": "%productname%"
}


@@ -8,5 +8,5 @@ Feature: Status
When the administrator requests status.php
Then the status.php response should include
"""
{"installed":true,"maintenance":false,"needsDbUpgrade":false,"version":"$CURRENT_VERSION","versionstring":"$CURRENT_VERSION_STRING","edition":"$EDITION","productname":"$PRODUCTNAME","product":"$PRODUCT"}
{"installed":true,"maintenance":false,"needsDbUpgrade":false,"version":"$CURRENT_VERSION","versionstring":"$CURRENT_VERSION_STRING","productname":"$PRODUCTNAME","product":"$PRODUCT"}
"""


@@ -282,14 +282,14 @@ Feature: dav-versions
| new |
| spaces |
@issue-10496
@issue-391
Scenario Outline: upload the same file more than twice with the same mtime and only one version is available
Given using <dav-path-version> DAV path
And user "Alice" has uploaded file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT"
And user "Alice" has uploaded file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT"
When user "Alice" uploads file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT" using the WebDAV API
Then the HTTP status code should be "204"
And the version folder of file "/file.txt" for user "Alice" should contain "1" element
And the version folder of file "/file.txt" for user "Alice" should contain "2" element
And as "Alice" the mtime of the file "file.txt" should be "Thu, 08 Aug 2019 04:18:13 GMT"
Examples:
| dav-path-version |
@@ -303,7 +303,7 @@ Feature: dav-versions
And user "Alice" has uploaded file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT"
When user "Alice" restores version index "1" of file "/file.txt" using the WebDAV API
Then the HTTP status code should be "204"
And the version folder of file "/file.txt" for user "Alice" should contain "0" element
And the version folder of file "/file.txt" for user "Alice" should contain "1" element
@skipOnReva
Scenario: sharer of a file can see the old version information when the sharee changes the content of the file


@@ -59,7 +59,7 @@ Feature: upload file
And user "Alice" has uploaded file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT" using the TUS protocol
When user "Alice" uploads file "filesForUpload/textfile.txt" to "file.txt" with mtime "Thu, 08 Aug 2019 04:18:13 GMT" using the TUS protocol on the WebDAV API
Then as "Alice" the mtime of the file "file.txt" should be "Thu, 08 Aug 2019 04:18:13 GMT"
And the version folder of file "file.txt" for user "Alice" should contain "1" element
And the version folder of file "file.txt" for user "Alice" should contain "2" element
Examples:
| dav-path-version |
| old |


@@ -17,6 +17,9 @@ export APP_PROVIDER_DEBUG_ADDR=127.0.0.1:10165
export APP_PROVIDER_GRPC_ADDR=127.0.0.1:10164
export APP_REGISTRY_DEBUG_ADDR=127.0.0.1:10243
export APP_REGISTRY_GRPC_ADDR=127.0.0.1:10242
export AUTH_APP_DEBUG_ADDR=127.0.0.1:10245
export AUTH_APP_GRPC_ADDR=127.0.0.1:10246
export AUTH_APP_HTTP_ADDR=127.0.0.1:10247
export AUTH_BASIC_DEBUG_ADDR=127.0.0.1:10147
export AUTH_BASIC_GRPC_ADDR=127.0.0.1:10146
export AUTH_MACHINE_DEBUG_ADDR=127.0.0.1:10167


@@ -15,13 +15,17 @@ BINGO_DIR="$ROOT_PATH/.bingo"
BINGO_HASH=$(cat "$BINGO_DIR"/* | sha256sum | cut -d ' ' -f 1)
URL="$CACHE_ENDPOINT/$CACHE_BUCKET/opencloud/go-bin/$BINGO_HASH/$2"
if curl --output /dev/null --silent --head --fail "$URL"; then
mc alias set s3 "$MC_HOST" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"
if mc ls --json s3/"$CACHE_BUCKET"/opencloud/go-bin/"$BINGO_HASH"/$2 | grep "\"status\":\"success\""; then
echo "[INFO] Go bin cache with has '$BINGO_HASH' exists."
# https://discourse.drone.io/t/how-to-exit-a-pipeline-early-without-failing/3951
# exit a Pipeline early without failing
exit 78
ENV="BIN_CACHE_FOUND=true\n"
else
# store the hash of the .bingo folder in the '.bingo_hash' file
echo "$BINGO_HASH" >"$ROOT_PATH/.bingo_hash"
echo "[INFO] Go bin cache with hash '$BINGO_HASH' does not exist."
ENV="BIN_CACHE_FOUND=false\n"
fi
echo -e $ENV >> .env


@@ -10,16 +10,15 @@ fi
echo "Checking web version - $WEB_COMMITID in cache"
URL="$CACHE_ENDPOINT/$CACHE_BUCKET/opencloud/web-test-runner/$WEB_COMMITID/$1.tar.gz"
mc alias set s3 "$MC_HOST" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"
echo "Checking for the web cache at '$URL'."
if curl --output /dev/null --silent --head --fail "$URL"
if mc ls --json s3/"$CACHE_BUCKET"/opencloud/web-test-runner/"$WEB_COMMITID"/"$1".tar.gz | grep "\"status\":\"success\"";
then
echo "$1 cache with commit id $WEB_COMMITID already available."
# https://discourse.drone.io/t/how-to-exit-a-pipeline-early-without-failing/3951
# exit a Pipeline early without failing
exit 78
ENV="WEB_CACHE_FOUND=true\n"
else
echo "$1 cache with commit id $WEB_COMMITID was not available."
ENV="WEB_CACHE_FOUND=false\n"
fi
echo -e $ENV >> .woodpecker.env
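Both cache scripts no longer exit the pipeline early with code 78; they instead record the result as a `*_CACHE_FOUND` flag in an env file for later steps. A hypothetical sketch of how a consuming step could read that flag (only the variable name and file name match the script above; the consuming step itself is assumed):

```shell
#!/bin/sh
# Simulate what the check script writes on a cache hit.
printf 'WEB_CACHE_FOUND=true\n' > .woodpecker.env

# A later step sources the env file and branches on the flag
# instead of relying on the old "exit 78" early-exit behaviour.
. ./.woodpecker.env
if [ "$WEB_CACHE_FOUND" = "true" ]; then
    echo "cache hit - skipping web build"
else
    echo "cache miss - building web"
fi
```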


@@ -1,5 +1,5 @@
#!/bin/sh
while true; do
echo -e "HTTP/1.1 200 OK\n\n$(cat /drone/src/tests/config/drone/hosting-discovery.xml)" | nc -l -k -p 8080
echo -e "HTTP/1.1 200 OK\n\n$(cat /woodpecker/src/github.com/opencloud-eu/opencloud/tests/config/woodpecker/hosting-discovery.xml)" | nc -l -k -p 8080
done


@@ -3,7 +3,7 @@ reflection interface similar to Go's standard library `json` and `xml` packages.
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
Documentation: https://godocs.io/github.com/BurntSushi/toml
Documentation: https://pkg.go.dev/github.com/BurntSushi/toml
See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show


@@ -196,6 +196,19 @@ func (md *MetaData) PrimitiveDecode(primValue Primitive, v any) error {
return md.unify(primValue.undecoded, rvalue(v))
}
// markDecodedRecursive is a helper to mark any key under the given tmap as
// decoded, recursing as needed
func markDecodedRecursive(md *MetaData, tmap map[string]any) {
for key := range tmap {
md.decoded[md.context.add(key).String()] = struct{}{}
if tmap, ok := tmap[key].(map[string]any); ok {
md.context = append(md.context, key)
markDecodedRecursive(md, tmap)
md.context = md.context[0 : len(md.context)-1]
}
}
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
@@ -222,6 +235,16 @@ func (md *MetaData) unify(data any, rv reflect.Value) error {
if err != nil {
return md.parseErr(err)
}
// Assume the Unmarshaler decoded everything, so mark all keys under
// this table as decoded.
if tmap, ok := data.(map[string]any); ok {
markDecodedRecursive(md, tmap)
}
if aot, ok := data.([]map[string]any); ok {
for _, tmap := range aot {
markDecodedRecursive(md, tmap)
}
}
return nil
}
if v, ok := rvi.(encoding.TextUnmarshaler); ok {
@@ -540,12 +563,14 @@ func (md *MetaData) badtype(dst string, data any) error {
func (md *MetaData) parseErr(err error) error {
k := md.context.String()
d := string(md.data)
return ParseError{
LastKey: k,
Position: md.keyInfo[k].pos,
Line: md.keyInfo[k].pos.Line,
Message: err.Error(),
err: err,
input: string(md.data),
LastKey: k,
Position: md.keyInfo[k].pos.withCol(d),
Line: md.keyInfo[k].pos.Line,
input: d,
}
}
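The new `markDecodedRecursive` helper walks every key under a table, recording it under its dotted context path so `MetaData.Undecoded()` no longer reports keys that a custom `Unmarshaler` consumed. A self-contained sketch of the same traversal idea over plain nested maps, outside the decoder's internal `MetaData` state (the names here are illustrative, not the library's):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// markRecursive records every key under a nested map, joining the
// key path with dots, and recurses into sub-tables - mirroring what
// markDecodedRecursive does with md.context and md.decoded.
func markRecursive(decoded map[string]struct{}, context []string, tmap map[string]any) {
	for key, val := range tmap {
		decoded[strings.Join(append(context, key), ".")] = struct{}{}
		if sub, ok := val.(map[string]any); ok {
			markRecursive(decoded, append(context, key), sub)
		}
	}
}

func main() {
	data := map[string]any{
		"server": map[string]any{
			"host": "localhost",
			"port": 8080,
		},
	}
	decoded := map[string]struct{}{}
	markRecursive(decoded, nil, data)

	keys := make([]string, 0, len(decoded))
	for k := range decoded {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Println(keys) // [server server.host server.port]
}
```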


@@ -402,31 +402,30 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
var mapKeysDirect, mapKeysSub []reflect.Value
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
mapKeysSub = append(mapKeysSub, k)
mapKeysSub = append(mapKeysSub, mapKey)
} else {
mapKeysDirect = append(mapKeysDirect, k)
mapKeysDirect = append(mapKeysDirect, mapKey)
}
}
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
writeMapKeys := func(mapKeys []reflect.Value, trailC bool) {
sort.Slice(mapKeys, func(i, j int) bool { return mapKeys[i].String() < mapKeys[j].String() })
for i, mapKey := range mapKeys {
val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
val := eindirect(rv.MapIndex(mapKey))
if isNil(val) {
continue
}
if inline {
enc.writeKeyValue(Key{mapKey}, val, true)
enc.writeKeyValue(Key{mapKey.String()}, val, true)
if trailC || i != len(mapKeys)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(mapKey), val)
enc.encode(key.add(mapKey.String()), val)
}
}
}
@@ -441,8 +440,6 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
}
}
const is32Bit = (32 << (^uint(0) >> 63)) == 32
func pointerTo(t reflect.Type) reflect.Type {
if t.Kind() == reflect.Ptr {
return pointerTo(t.Elem())
@@ -477,15 +474,14 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
frv := eindirect(rv.Field(i))
if is32Bit {
// Copy so it works correct on 32bit archs; not clear why this
// is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
// This also works fine on 64bit, but 32bit archs are somewhat
// rare and this is a wee bit faster.
copyStart := make([]int, len(start))
copy(copyStart, start)
start = copyStart
}
// Need to make a copy because ... ehm, I don't know why... I guess
// allocating a new array can cause it to fail(?)
//
// Done for: https://github.com/BurntSushi/toml/issues/430
// Previously only on 32bit for: https://github.com/BurntSushi/toml/issues/314
copyStart := make([]int, len(start))
copy(copyStart, start)
start = copyStart
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
@@ -507,7 +503,7 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
}
addFields(rt, rv, nil)
writeFields := func(fields [][]int) {
writeFields := func(fields [][]int, totalFields int) {
for _, fieldIndex := range fields {
fieldType := rt.FieldByIndex(fieldIndex)
fieldVal := rv.FieldByIndex(fieldIndex)
@@ -537,7 +533,7 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
if inline {
enc.writeKeyValue(Key{keyName}, fieldVal, true)
if fieldIndex[0] != len(fields)-1 {
if fieldIndex[0] != totalFields-1 {
enc.wf(", ")
}
} else {
@@ -549,8 +545,10 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
if inline {
enc.wf("{")
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
l := len(fieldsDirect) + len(fieldsSub)
writeFields(fieldsDirect, l)
writeFields(fieldsSub, l)
if inline {
enc.wf("}")
}


@@ -67,21 +67,36 @@ type ParseError struct {
// Position of an error.
type Position struct {
Line int // Line number, starting at 1.
Col int // Error column, starting at 1.
Start int // Start of error, as byte offset starting at 0.
Len int // Lenght in bytes.
Len int // Length of the error in bytes.
}
func (p Position) withCol(tomlFile string) Position {
var (
pos int
lines = strings.Split(tomlFile, "\n")
)
for i := range lines {
ll := len(lines[i]) + 1 // +1 for the removed newline
if pos+ll >= p.Start {
p.Col = p.Start - pos + 1
if p.Col < 1 { // Should never happen, but just in case.
p.Col = 1
}
break
}
pos += ll
}
return p
}
func (pe ParseError) Error() string {
msg := pe.Message
if msg == "" { // Error from errorf()
msg = pe.err.Error()
}
if pe.LastKey == "" {
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, pe.Message)
}
return fmt.Sprintf("toml: line %d (last key %q): %s",
pe.Position.Line, pe.LastKey, msg)
pe.Position.Line, pe.LastKey, pe.Message)
}
// ErrorWithPosition returns the error with detailed location context.
@@ -92,26 +107,19 @@ func (pe ParseError) ErrorWithPosition() string {
return pe.Error()
}
var (
lines = strings.Split(pe.input, "\n")
col = pe.column(lines)
b = new(strings.Builder)
)
msg := pe.Message
if msg == "" {
msg = pe.err.Error()
}
// TODO: don't show control characters as literals? This may not show up
// well everywhere.
var (
lines = strings.Split(pe.input, "\n")
b = new(strings.Builder)
)
if pe.Position.Len == 1 {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
msg, pe.Position.Line, col+1)
pe.Message, pe.Position.Line, pe.Position.Col)
} else {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
msg, pe.Position.Line, col, col+pe.Position.Len)
pe.Message, pe.Position.Line, pe.Position.Col, pe.Position.Col+pe.Position.Len-1)
}
if pe.Position.Line > 2 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, expandTab(lines[pe.Position.Line-3]))
@@ -129,7 +137,7 @@ func (pe ParseError) ErrorWithPosition() string {
diff := len(expanded) - len(lines[pe.Position.Line-1])
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, expanded)
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col+diff), strings.Repeat("^", pe.Position.Len))
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", pe.Position.Col-1+diff), strings.Repeat("^", pe.Position.Len))
return b.String()
}
@@ -151,23 +159,6 @@ func (pe ParseError) ErrorWithUsage() string {
return m
}
func (pe ParseError) column(lines []string) int {
var pos, col int
for i := range lines {
ll := len(lines[i]) + 1 // +1 for the removed newline
if pos+ll >= pe.Position.Start {
col = pe.Position.Start - pos
if col < 0 { // Should never happen, but just in case.
col = 0
}
break
}
pos += ll
}
return col
}
func expandTab(s string) string {
var (
b strings.Builder


@@ -275,7 +275,9 @@ func (lx *lexer) errorPos(start, length int, err error) stateFn {
func (lx *lexer) errorf(format string, values ...any) stateFn {
if lx.atEOF {
pos := lx.getPos()
pos.Line--
if lx.pos >= 1 && lx.input[lx.pos-1] == '\n' {
pos.Line--
}
pos.Len = 1
pos.Start = lx.pos - 1
lx.items <- item{typ: itemError, pos: pos, err: fmt.Errorf(format, values...)}
@@ -492,6 +494,9 @@ func lexKeyEnd(lx *lexer) stateFn {
lx.emit(itemKeyEnd)
return lexSkip(lx, lexValue)
default:
if r == '\n' {
return lx.errorPrevLine(fmt.Errorf("expected '.' or '=', but got %q instead", r))
}
return lx.errorf("expected '.' or '=', but got %q instead", r)
}
}
@@ -560,6 +565,9 @@ func lexValue(lx *lexer) stateFn {
if r == eof {
return lx.errorf("unexpected EOF; expected value")
}
if r == '\n' {
return lx.errorPrevLine(fmt.Errorf("expected value but found %q instead", r))
}
return lx.errorf("expected value but found %q instead", r)
}
@@ -1111,7 +1119,7 @@ func lexBaseNumberOrDate(lx *lexer) stateFn {
case 'x':
r = lx.peek()
if !isHex(r) {
lx.errorf("not a hexidecimal number: '%s%c'", lx.current(), r)
lx.errorf("not a hexadecimal number: '%s%c'", lx.current(), r)
}
return lexHexInteger
}
@@ -1259,23 +1267,6 @@ func isBinary(r rune) bool { return r == '0' || r == '1' }
func isOctal(r rune) bool { return r >= '0' && r <= '7' }
func isHex(r rune) bool { return (r >= '0' && r <= '9') || (r|0x20 >= 'a' && r|0x20 <= 'f') }
func isBareKeyChar(r rune, tomlNext bool) bool {
if tomlNext {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' || r == '-' ||
r == 0xb2 || r == 0xb3 || r == 0xb9 || (r >= 0xbc && r <= 0xbe) ||
(r >= 0xc0 && r <= 0xd6) || (r >= 0xd8 && r <= 0xf6) || (r >= 0xf8 && r <= 0x037d) ||
(r >= 0x037f && r <= 0x1fff) ||
(r >= 0x200c && r <= 0x200d) || (r >= 0x203f && r <= 0x2040) ||
(r >= 0x2070 && r <= 0x218f) || (r >= 0x2460 && r <= 0x24ff) ||
(r >= 0x2c00 && r <= 0x2fef) || (r >= 0x3001 && r <= 0xd7ff) ||
(r >= 0xf900 && r <= 0xfdcf) || (r >= 0xfdf0 && r <= 0xfffd) ||
(r >= 0x10000 && r <= 0xeffff)
}
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' || r == '-'
return (r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') || r == '_' || r == '-'
}


@@ -135,9 +135,6 @@ func (k Key) maybeQuoted(i int) string {
// Like append(), but only increase the cap by 1.
func (k Key) add(piece string) Key {
if cap(k) > len(k) {
return append(k, piece)
}
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece


@@ -50,7 +50,6 @@ func parse(data string) (p *parser, err error) {
// it anyway.
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
data = data[2:]
//lint:ignore S1017 https://github.com/dominikh/go-tools/issues/1447
} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
data = data[3:]
}
@@ -65,7 +64,7 @@ func parse(data string) (p *parser, err error) {
if i := strings.IndexRune(data[:ex], 0); i > -1 {
return nil, ParseError{
Message: "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
Position: Position{Line: 1, Start: i, Len: 1},
Position: Position{Line: 1, Col: 1, Start: i, Len: 1},
Line: 1,
input: data,
}
@@ -92,8 +91,9 @@ func parse(data string) (p *parser, err error) {
func (p *parser) panicErr(it item, err error) {
panic(ParseError{
Message: err.Error(),
err: err,
Position: it.pos,
Position: it.pos.withCol(p.lx.input),
Line: it.pos.Len,
LastKey: p.current(),
})
@@ -102,7 +102,7 @@ func (p *parser) panicErr(it item, err error) {
func (p *parser) panicItemf(it item, format string, v ...any) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: it.pos,
Position: it.pos.withCol(p.lx.input),
Line: it.pos.Len,
LastKey: p.current(),
})
@@ -111,7 +111,7 @@ func (p *parser) panicItemf(it item, format string, v ...any) {
func (p *parser) panicf(format string, v ...any) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: p.pos,
Position: p.pos.withCol(p.lx.input),
Line: p.pos.Line,
LastKey: p.current(),
})
@@ -123,10 +123,11 @@ func (p *parser) next() item {
if it.typ == itemError {
if it.err != nil {
panic(ParseError{
Position: it.pos,
Message: it.err.Error(),
err: it.err,
Position: it.pos.withCol(p.lx.input),
Line: it.pos.Line,
LastKey: p.current(),
err: it.err,
})
}
@@ -527,7 +528,7 @@ func numUnderscoresOK(s string) bool {
}
}
// isHexis a superset of all the permissable characters surrounding an
// isHex is a superset of all the permissible characters surrounding an
// underscore.
accept = isHex(r)
}

View File

@@ -269,11 +269,9 @@ func parseMountInfoLine(line string) (mountInfo, error) {
return mountInfo{}, fmt.Errorf("invalid separator")
}
fields1 := strings.Split(fieldss[0], " ")
fields1 := strings.SplitN(fieldss[0], " ", 7)
if len(fields1) < 6 {
return mountInfo{}, fmt.Errorf("not enough fields before separator: %v", fields1)
} else if len(fields1) > 7 {
return mountInfo{}, fmt.Errorf("too many fields before separator: %v", fields1)
} else if len(fields1) == 6 {
fields1 = append(fields1, "")
}


@@ -1,4 +0,0 @@
*.txt
*.pprof
cmap2/
cache/


@@ -1,13 +0,0 @@
language: go
sudo: false
go:
- "1.10"
- "1.11"
- "1.12"
- master
script:
- go test -tags safe ./...
- go test ./...
-


@@ -1,74 +0,0 @@
# xxhash [![GoDoc](https://godoc.org/github.com/OneOfOne/xxhash?status.svg)](https://godoc.org/github.com/OneOfOne/xxhash) [![Build Status](https://travis-ci.org/OneOfOne/xxhash.svg?branch=master)](https://travis-ci.org/OneOfOne/xxhash) [![Coverage](https://gocover.io/_badge/github.com/OneOfOne/xxhash)](https://gocover.io/github.com/OneOfOne/xxhash)
This is a native Go implementation of the excellent [xxhash](https://github.com/Cyan4973/xxHash)* algorithm, an extremely fast non-cryptographic Hash algorithm, working at speeds close to RAM limits.
* The C implementation is ([Copyright](https://github.com/Cyan4973/xxHash/blob/master/LICENSE) (c) 2012-2014, Yann Collet)
## Install
go get github.com/OneOfOne/xxhash
## Features
* On Go 1.7+ the pure go version is faster than CGO for all inputs.
* Supports ChecksumString{32,64} xxhash{32,64}.WriteString, which uses no copies when it can, falls back to copy on appengine.
* The native version falls back to a less optimized version on appengine due to the lack of unsafe.
* Almost as fast as the mostly pure assembly version written by the brilliant [cespare](https://github.com/cespare/xxhash), while also supporting seeds.
* To manually toggle the appengine version build with `-tags safe`.
## Benchmark
### Core i7-4790 @ 3.60GHz, Linux 4.12.6-1-ARCH (64bit), Go tip (+ff90f4af66 2017-08-19)
```bash
➤ go test -bench '64' -count 5 -tags cespare | benchstat /dev/stdin
name time/op
# https://github.com/cespare/xxhash
XXSum64Cespare/Func-8 160ns ± 2%
XXSum64Cespare/Struct-8 173ns ± 1%
XXSum64ShortCespare/Func-8 6.78ns ± 1%
XXSum64ShortCespare/Struct-8 19.6ns ± 2%
# this package (default mode, using unsafe)
XXSum64/Func-8 170ns ± 1%
XXSum64/Struct-8 182ns ± 1%
XXSum64Short/Func-8 13.5ns ± 3%
XXSum64Short/Struct-8 20.4ns ± 0%
# this package (appengine, *not* using unsafe)
XXSum64/Func-8 241ns ± 5%
XXSum64/Struct-8 243ns ± 6%
XXSum64Short/Func-8 15.2ns ± 2%
XXSum64Short/Struct-8 23.7ns ± 5%
CRC64ISO-8 1.23µs ± 1%
CRC64ISOString-8 2.71µs ± 4%
CRC64ISOShort-8 22.2ns ± 3%
Fnv64-8 2.34µs ± 1%
Fnv64Short-8 74.7ns ± 8%
```
## Usage
```go
h := xxhash.New64()
// r, err := os.Open("......")
// defer f.Close()
r := strings.NewReader(F)
io.Copy(h, r)
fmt.Println("xxhash.Backend:", xxhash.Backend)
fmt.Println("File checksum:", h.Sum64())
```
[<kbd>playground</kbd>](https://play.golang.org/p/wHKBwfu6CPV)
## TODO
* Rewrite the 32bit version to be more optimized.
* General cleanup as the Go inliner gets smarter.
## License
This project is released under the Apache v2. license. See [LICENSE](LICENSE) for more details.


@@ -1,294 +0,0 @@
package xxhash
import (
"encoding/binary"
"errors"
"hash"
)
const (
prime32x1 uint32 = 2654435761
prime32x2 uint32 = 2246822519
prime32x3 uint32 = 3266489917
prime32x4 uint32 = 668265263
prime32x5 uint32 = 374761393
prime64x1 uint64 = 11400714785074694791
prime64x2 uint64 = 14029467366897019727
prime64x3 uint64 = 1609587929392839161
prime64x4 uint64 = 9650029242287828579
prime64x5 uint64 = 2870177450012600261
maxInt32 int32 = (1<<31 - 1)
// precomputed zero Vs for seed 0
zero64x1 = 0x60ea27eeadc0b5d6
zero64x2 = 0xc2b2ae3d27d4eb4f
zero64x3 = 0x0
zero64x4 = 0x61c8864e7a143579
)
const (
magic32 = "xxh\x07"
magic64 = "xxh\x08"
marshaled32Size = len(magic32) + 4*7 + 16
marshaled64Size = len(magic64) + 8*6 + 32 + 1
)
func NewHash32() hash.Hash { return New32() }
func NewHash64() hash.Hash { return New64() }
// Checksum32 returns the checksum of the input data with the seed set to 0.
func Checksum32(in []byte) uint32 {
return Checksum32S(in, 0)
}
// ChecksumString32 returns the checksum of the input data, without creating a copy, with the seed set to 0.
func ChecksumString32(s string) uint32 {
return ChecksumString32S(s, 0)
}
type XXHash32 struct {
mem [16]byte
ln, memIdx int32
v1, v2, v3, v4 uint32
seed uint32
}
// Size returns the number of bytes Sum will return.
func (xx *XXHash32) Size() int {
return 4
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (xx *XXHash32) BlockSize() int {
return 16
}
// NewS32 creates a new hash.Hash32 computing the 32bit xxHash checksum starting with the specific seed.
func NewS32(seed uint32) (xx *XXHash32) {
xx = &XXHash32{
seed: seed,
}
xx.Reset()
return
}
// New32 creates a new hash.Hash32 computing the 32bit xxHash checksum starting with the seed set to 0.
func New32() *XXHash32 {
return NewS32(0)
}
func (xx *XXHash32) Reset() {
xx.v1 = xx.seed + prime32x1 + prime32x2
xx.v2 = xx.seed + prime32x2
xx.v3 = xx.seed
xx.v4 = xx.seed - prime32x1
xx.ln, xx.memIdx = 0, 0
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (xx *XXHash32) Sum(in []byte) []byte {
s := xx.Sum32()
return append(in, byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (xx *XXHash32) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaled32Size)
b = append(b, magic32...)
b = appendUint32(b, xx.v1)
b = appendUint32(b, xx.v2)
b = appendUint32(b, xx.v3)
b = appendUint32(b, xx.v4)
b = appendUint32(b, xx.seed)
b = appendInt32(b, xx.ln)
b = appendInt32(b, xx.memIdx)
b = append(b, xx.mem[:]...)
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (xx *XXHash32) UnmarshalBinary(b []byte) error {
if len(b) < len(magic32) || string(b[:len(magic32)]) != magic32 {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaled32Size {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic32):]
b, xx.v1 = consumeUint32(b)
b, xx.v2 = consumeUint32(b)
b, xx.v3 = consumeUint32(b)
b, xx.v4 = consumeUint32(b)
b, xx.seed = consumeUint32(b)
b, xx.ln = consumeInt32(b)
b, xx.memIdx = consumeInt32(b)
copy(xx.mem[:], b)
return nil
}
// Checksum64 an alias for Checksum64S(in, 0)
func Checksum64(in []byte) uint64 {
return Checksum64S(in, 0)
}
// ChecksumString64 returns the checksum of the input data, without creating a copy, with the seed set to 0.
func ChecksumString64(s string) uint64 {
return ChecksumString64S(s, 0)
}
type XXHash64 struct {
v1, v2, v3, v4 uint64
seed uint64
ln uint64
mem [32]byte
memIdx int8
}
// Size returns the number of bytes Sum will return.
func (xx *XXHash64) Size() int {
return 8
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (xx *XXHash64) BlockSize() int {
return 32
}
// NewS64 creates a new hash.Hash64 computing the 64bit xxHash checksum starting with the specific seed.
func NewS64(seed uint64) (xx *XXHash64) {
xx = &XXHash64{
seed: seed,
}
xx.Reset()
return
}
// New64 creates a new hash.Hash64 computing the 64bit xxHash checksum starting with the seed set to 0x0.
func New64() *XXHash64 {
return NewS64(0)
}
func (xx *XXHash64) Reset() {
xx.ln, xx.memIdx = 0, 0
xx.v1, xx.v2, xx.v3, xx.v4 = resetVs64(xx.seed)
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (xx *XXHash64) Sum(in []byte) []byte {
s := xx.Sum64()
return append(in, byte(s>>56), byte(s>>48), byte(s>>40), byte(s>>32), byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (xx *XXHash64) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaled64Size)
b = append(b, magic64...)
b = appendUint64(b, xx.v1)
b = appendUint64(b, xx.v2)
b = appendUint64(b, xx.v3)
b = appendUint64(b, xx.v4)
b = appendUint64(b, xx.seed)
b = appendUint64(b, xx.ln)
b = append(b, byte(xx.memIdx))
b = append(b, xx.mem[:]...)
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (xx *XXHash64) UnmarshalBinary(b []byte) error {
if len(b) < len(magic64) || string(b[:len(magic64)]) != magic64 {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaled64Size {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic64):]
b, xx.v1 = consumeUint64(b)
b, xx.v2 = consumeUint64(b)
b, xx.v3 = consumeUint64(b)
b, xx.v4 = consumeUint64(b)
b, xx.seed = consumeUint64(b)
b, xx.ln = consumeUint64(b)
xx.memIdx = int8(b[0])
b = b[1:]
copy(xx.mem[:], b)
return nil
}
func appendInt32(b []byte, x int32) []byte { return appendUint32(b, uint32(x)) }
func appendUint32(b []byte, x uint32) []byte {
var a [4]byte
binary.LittleEndian.PutUint32(a[:], x)
return append(b, a[:]...)
}
func appendUint64(b []byte, x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return append(b, a[:]...)
}
func consumeInt32(b []byte) ([]byte, int32) { bn, x := consumeUint32(b); return bn, int32(x) }
func consumeUint32(b []byte) ([]byte, uint32) { x := u32(b); return b[4:], x }
func consumeUint64(b []byte) ([]byte, uint64) { x := u64(b); return b[8:], x }
// force the compiler to use ROTL instructions
func rotl32_1(x uint32) uint32 { return (x << 1) | (x >> (32 - 1)) }
func rotl32_7(x uint32) uint32 { return (x << 7) | (x >> (32 - 7)) }
func rotl32_11(x uint32) uint32 { return (x << 11) | (x >> (32 - 11)) }
func rotl32_12(x uint32) uint32 { return (x << 12) | (x >> (32 - 12)) }
func rotl32_13(x uint32) uint32 { return (x << 13) | (x >> (32 - 13)) }
func rotl32_17(x uint32) uint32 { return (x << 17) | (x >> (32 - 17)) }
func rotl32_18(x uint32) uint32 { return (x << 18) | (x >> (32 - 18)) }
func rotl64_1(x uint64) uint64 { return (x << 1) | (x >> (64 - 1)) }
func rotl64_7(x uint64) uint64 { return (x << 7) | (x >> (64 - 7)) }
func rotl64_11(x uint64) uint64 { return (x << 11) | (x >> (64 - 11)) }
func rotl64_12(x uint64) uint64 { return (x << 12) | (x >> (64 - 12)) }
func rotl64_18(x uint64) uint64 { return (x << 18) | (x >> (64 - 18)) }
func rotl64_23(x uint64) uint64 { return (x << 23) | (x >> (64 - 23)) }
func rotl64_27(x uint64) uint64 { return (x << 27) | (x >> (64 - 27)) }
func rotl64_31(x uint64) uint64 { return (x << 31) | (x >> (64 - 31)) }
func mix64(h uint64) uint64 {
h ^= h >> 33
h *= prime64x2
h ^= h >> 29
h *= prime64x3
h ^= h >> 32
return h
}
func resetVs64(seed uint64) (v1, v2, v3, v4 uint64) {
if seed == 0 {
return zero64x1, zero64x2, zero64x3, zero64x4
}
return (seed + prime64x1 + prime64x2), (seed + prime64x2), (seed), (seed - prime64x1)
}
// borrowed from cespare
func round64(h, v uint64) uint64 {
h += v * prime64x2
h = rotl64_31(h)
h *= prime64x1
return h
}
func mergeRound64(h, v uint64) uint64 {
v = round64(0, v)
h ^= v
h = h*prime64x1 + prime64x4
return h
}


@@ -1,161 +0,0 @@
package xxhash
func u32(in []byte) uint32 {
return uint32(in[0]) | uint32(in[1])<<8 | uint32(in[2])<<16 | uint32(in[3])<<24
}
func u64(in []byte) uint64 {
return uint64(in[0]) | uint64(in[1])<<8 | uint64(in[2])<<16 | uint64(in[3])<<24 | uint64(in[4])<<32 | uint64(in[5])<<40 | uint64(in[6])<<48 | uint64(in[7])<<56
}
// Checksum32S returns the checksum of the input bytes with the specific seed.
func Checksum32S(in []byte, seed uint32) (h uint32) {
var i int
if len(in) > 15 {
var (
v1 = seed + prime32x1 + prime32x2
v2 = seed + prime32x2
v3 = seed + 0
v4 = seed - prime32x1
)
for ; i < len(in)-15; i += 16 {
in := in[i : i+16 : len(in)]
v1 += u32(in[0:4:len(in)]) * prime32x2
v1 = rotl32_13(v1) * prime32x1
v2 += u32(in[4:8:len(in)]) * prime32x2
v2 = rotl32_13(v2) * prime32x1
v3 += u32(in[8:12:len(in)]) * prime32x2
v3 = rotl32_13(v3) * prime32x1
v4 += u32(in[12:16:len(in)]) * prime32x2
v4 = rotl32_13(v4) * prime32x1
}
h = rotl32_1(v1) + rotl32_7(v2) + rotl32_12(v3) + rotl32_18(v4)
} else {
h = seed + prime32x5
}
h += uint32(len(in))
for ; i <= len(in)-4; i += 4 {
in := in[i : i+4 : len(in)]
h += u32(in[0:4:len(in)]) * prime32x3
h = rotl32_17(h) * prime32x4
}
for ; i < len(in); i++ {
h += uint32(in[i]) * prime32x5
h = rotl32_11(h) * prime32x1
}
h ^= h >> 15
h *= prime32x2
h ^= h >> 13
h *= prime32x3
h ^= h >> 16
return
}
func (xx *XXHash32) Write(in []byte) (n int, err error) {
i, ml := 0, int(xx.memIdx)
n = len(in)
xx.ln += int32(n)
if d := 16 - ml; ml > 0 && ml+len(in) > 16 {
xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in[:d]))
ml, in = 16, in[d:len(in):len(in)]
} else if ml+len(in) < 16 {
xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in))
return
}
if ml > 0 {
i += 16 - ml
xx.memIdx += int32(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in))
in := xx.mem[:16:len(xx.mem)]
xx.v1 += u32(in[0:4:len(in)]) * prime32x2
xx.v1 = rotl32_13(xx.v1) * prime32x1
xx.v2 += u32(in[4:8:len(in)]) * prime32x2
xx.v2 = rotl32_13(xx.v2) * prime32x1
xx.v3 += u32(in[8:12:len(in)]) * prime32x2
xx.v3 = rotl32_13(xx.v3) * prime32x1
xx.v4 += u32(in[12:16:len(in)]) * prime32x2
xx.v4 = rotl32_13(xx.v4) * prime32x1
xx.memIdx = 0
}
for ; i <= len(in)-16; i += 16 {
in := in[i : i+16 : len(in)]
xx.v1 += u32(in[0:4:len(in)]) * prime32x2
xx.v1 = rotl32_13(xx.v1) * prime32x1
xx.v2 += u32(in[4:8:len(in)]) * prime32x2
xx.v2 = rotl32_13(xx.v2) * prime32x1
xx.v3 += u32(in[8:12:len(in)]) * prime32x2
xx.v3 = rotl32_13(xx.v3) * prime32x1
xx.v4 += u32(in[12:16:len(in)]) * prime32x2
xx.v4 = rotl32_13(xx.v4) * prime32x1
}
if len(in)-i != 0 {
xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in[i:len(in):len(in)]))
}
return
}
func (xx *XXHash32) Sum32() (h uint32) {
var i int32
if xx.ln > 15 {
h = rotl32_1(xx.v1) + rotl32_7(xx.v2) + rotl32_12(xx.v3) + rotl32_18(xx.v4)
} else {
h = xx.seed + prime32x5
}
h += uint32(xx.ln)
if xx.memIdx > 0 {
for ; i < xx.memIdx-3; i += 4 {
in := xx.mem[i : i+4 : len(xx.mem)]
h += u32(in[0:4:len(in)]) * prime32x3
h = rotl32_17(h) * prime32x4
}
for ; i < xx.memIdx; i++ {
h += uint32(xx.mem[i]) * prime32x5
h = rotl32_11(h) * prime32x1
}
}
h ^= h >> 15
h *= prime32x2
h ^= h >> 13
h *= prime32x3
h ^= h >> 16
return
}
// Checksum64S returns the 64bit xxhash checksum for a single input
func Checksum64S(in []byte, seed uint64) uint64 {
if len(in) == 0 && seed == 0 {
return 0xef46db3751d8e999
}
if len(in) > 31 {
return checksum64(in, seed)
}
return checksum64Short(in, seed)
}

View File

@@ -1,183 +0,0 @@
// +build appengine safe ppc64le ppc64be mipsle mips s390x
package xxhash
// Backend returns the current version of xxhash being used.
const Backend = "GoSafe"
func ChecksumString32S(s string, seed uint32) uint32 {
return Checksum32S([]byte(s), seed)
}
func (xx *XXHash32) WriteString(s string) (int, error) {
if len(s) == 0 {
return 0, nil
}
return xx.Write([]byte(s))
}
func ChecksumString64S(s string, seed uint64) uint64 {
return Checksum64S([]byte(s), seed)
}
func (xx *XXHash64) WriteString(s string) (int, error) {
if len(s) == 0 {
return 0, nil
}
return xx.Write([]byte(s))
}
func checksum64(in []byte, seed uint64) (h uint64) {
var (
v1, v2, v3, v4 = resetVs64(seed)
i int
)
for ; i < len(in)-31; i += 32 {
in := in[i : i+32 : len(in)]
v1 = round64(v1, u64(in[0:8:len(in)]))
v2 = round64(v2, u64(in[8:16:len(in)]))
v3 = round64(v3, u64(in[16:24:len(in)]))
v4 = round64(v4, u64(in[24:32:len(in)]))
}
h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)
h = mergeRound64(h, v1)
h = mergeRound64(h, v2)
h = mergeRound64(h, v3)
h = mergeRound64(h, v4)
h += uint64(len(in))
for ; i < len(in)-7; i += 8 {
h ^= round64(0, u64(in[i:len(in):len(in)]))
h = rotl64_27(h)*prime64x1 + prime64x4
}
for ; i < len(in)-3; i += 4 {
h ^= uint64(u32(in[i:len(in):len(in)])) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
}
for ; i < len(in); i++ {
h ^= uint64(in[i]) * prime64x5
h = rotl64_11(h) * prime64x1
}
return mix64(h)
}
func checksum64Short(in []byte, seed uint64) uint64 {
var (
h = seed + prime64x5 + uint64(len(in))
i int
)
for ; i < len(in)-7; i += 8 {
k := u64(in[i : i+8 : len(in)])
h ^= round64(0, k)
h = rotl64_27(h)*prime64x1 + prime64x4
}
for ; i < len(in)-3; i += 4 {
h ^= uint64(u32(in[i:i+4:len(in)])) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
}
for ; i < len(in); i++ {
h ^= uint64(in[i]) * prime64x5
h = rotl64_11(h) * prime64x1
}
return mix64(h)
}
func (xx *XXHash64) Write(in []byte) (n int, err error) {
var (
ml = int(xx.memIdx)
d = 32 - ml
)
n = len(in)
xx.ln += uint64(n)
if ml+len(in) < 32 {
xx.memIdx += int8(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in))
return
}
i, v1, v2, v3, v4 := 0, xx.v1, xx.v2, xx.v3, xx.v4
if ml > 0 && ml+len(in) > 32 {
xx.memIdx += int8(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in[:d:len(in)]))
in = in[d:len(in):len(in)]
in := xx.mem[0:32:len(xx.mem)]
v1 = round64(v1, u64(in[0:8:len(in)]))
v2 = round64(v2, u64(in[8:16:len(in)]))
v3 = round64(v3, u64(in[16:24:len(in)]))
v4 = round64(v4, u64(in[24:32:len(in)]))
xx.memIdx = 0
}
for ; i < len(in)-31; i += 32 {
in := in[i : i+32 : len(in)]
v1 = round64(v1, u64(in[0:8:len(in)]))
v2 = round64(v2, u64(in[8:16:len(in)]))
v3 = round64(v3, u64(in[16:24:len(in)]))
v4 = round64(v4, u64(in[24:32:len(in)]))
}
if len(in)-i != 0 {
xx.memIdx += int8(copy(xx.mem[xx.memIdx:], in[i:len(in):len(in)]))
}
xx.v1, xx.v2, xx.v3, xx.v4 = v1, v2, v3, v4
return
}
func (xx *XXHash64) Sum64() (h uint64) {
var i int
if xx.ln > 31 {
v1, v2, v3, v4 := xx.v1, xx.v2, xx.v3, xx.v4
h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)
h = mergeRound64(h, v1)
h = mergeRound64(h, v2)
h = mergeRound64(h, v3)
h = mergeRound64(h, v4)
} else {
h = xx.seed + prime64x5
}
h += uint64(xx.ln)
if xx.memIdx > 0 {
in := xx.mem[:xx.memIdx]
for ; i < int(xx.memIdx)-7; i += 8 {
in := in[i : i+8 : len(in)]
k := u64(in[0:8:len(in)])
k *= prime64x2
k = rotl64_31(k)
k *= prime64x1
h ^= k
h = rotl64_27(h)*prime64x1 + prime64x4
}
for ; i < int(xx.memIdx)-3; i += 4 {
in := in[i : i+4 : len(in)]
h ^= uint64(u32(in[0:4:len(in)])) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
}
for ; i < int(xx.memIdx); i++ {
h ^= uint64(in[i]) * prime64x5
h = rotl64_11(h) * prime64x1
}
}
return mix64(h)
}

View File

@@ -1,240 +0,0 @@
// +build !safe
// +build !appengine
// +build !ppc64le
// +build !mipsle
// +build !ppc64be
// +build !mips
// +build !s390x
package xxhash
import (
"reflect"
"unsafe"
)
// Backend returns the current version of xxhash being used.
const Backend = "GoUnsafe"
// ChecksumString32S returns the checksum of the input data, without creating a copy, with the specific seed.
func ChecksumString32S(s string, seed uint32) uint32 {
if len(s) == 0 {
return Checksum32S(nil, seed)
}
ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
return Checksum32S((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)], seed)
}
func (xx *XXHash32) WriteString(s string) (int, error) {
if len(s) == 0 {
return 0, nil
}
ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
return xx.Write((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)])
}
// ChecksumString64S returns the checksum of the input data, without creating a copy, with the specific seed.
func ChecksumString64S(s string, seed uint64) uint64 {
if len(s) == 0 {
return Checksum64S(nil, seed)
}
ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
return Checksum64S((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)], seed)
}
func (xx *XXHash64) WriteString(s string) (int, error) {
if len(s) == 0 {
return 0, nil
}
ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
return xx.Write((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)])
}
//go:nocheckptr
func checksum64(in []byte, seed uint64) uint64 {
var (
wordsLen = len(in) >> 3
words = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]
v1, v2, v3, v4 = resetVs64(seed)
h uint64
i int
)
for ; i < len(words)-3; i += 4 {
words := (*[4]uint64)(unsafe.Pointer(&words[i]))
v1 = round64(v1, words[0])
v2 = round64(v2, words[1])
v3 = round64(v3, words[2])
v4 = round64(v4, words[3])
}
h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)
h = mergeRound64(h, v1)
h = mergeRound64(h, v2)
h = mergeRound64(h, v3)
h = mergeRound64(h, v4)
h += uint64(len(in))
for _, k := range words[i:] {
h ^= round64(0, k)
h = rotl64_27(h)*prime64x1 + prime64x4
}
if in = in[wordsLen<<3 : len(in) : len(in)]; len(in) > 3 {
words := (*[1]uint32)(unsafe.Pointer(&in[0]))
h ^= uint64(words[0]) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
in = in[4:len(in):len(in)]
}
for _, b := range in {
h ^= uint64(b) * prime64x5
h = rotl64_11(h) * prime64x1
}
return mix64(h)
}
//go:nocheckptr
func checksum64Short(in []byte, seed uint64) uint64 {
var (
h = seed + prime64x5 + uint64(len(in))
i int
)
if len(in) > 7 {
var (
wordsLen = len(in) >> 3
words = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]
)
for i := range words {
h ^= round64(0, words[i])
h = rotl64_27(h)*prime64x1 + prime64x4
}
i = wordsLen << 3
}
if in = in[i:len(in):len(in)]; len(in) > 3 {
words := (*[1]uint32)(unsafe.Pointer(&in[0]))
h ^= uint64(words[0]) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
in = in[4:len(in):len(in)]
}
for _, b := range in {
h ^= uint64(b) * prime64x5
h = rotl64_11(h) * prime64x1
}
return mix64(h)
}
func (xx *XXHash64) Write(in []byte) (n int, err error) {
mem, idx := xx.mem[:], int(xx.memIdx)
xx.ln, n = xx.ln+uint64(len(in)), len(in)
if idx+len(in) < 32 {
xx.memIdx += int8(copy(mem[idx:len(mem):len(mem)], in))
return
}
var (
v1, v2, v3, v4 = xx.v1, xx.v2, xx.v3, xx.v4
i int
)
if d := 32 - int(idx); d > 0 && int(idx)+len(in) > 31 {
copy(mem[idx:len(mem):len(mem)], in[:len(in):len(in)])
words := (*[4]uint64)(unsafe.Pointer(&mem[0]))
v1 = round64(v1, words[0])
v2 = round64(v2, words[1])
v3 = round64(v3, words[2])
v4 = round64(v4, words[3])
if in, xx.memIdx = in[d:len(in):len(in)], 0; len(in) == 0 {
goto RET
}
}
for ; i < len(in)-31; i += 32 {
words := (*[4]uint64)(unsafe.Pointer(&in[i]))
v1 = round64(v1, words[0])
v2 = round64(v2, words[1])
v3 = round64(v3, words[2])
v4 = round64(v4, words[3])
}
if len(in)-i != 0 {
xx.memIdx += int8(copy(mem[xx.memIdx:len(mem):len(mem)], in[i:len(in):len(in)]))
}
RET:
xx.v1, xx.v2, xx.v3, xx.v4 = v1, v2, v3, v4
return
}
func (xx *XXHash64) Sum64() (h uint64) {
if seed := xx.seed; xx.ln > 31 {
v1, v2, v3, v4 := xx.v1, xx.v2, xx.v3, xx.v4
h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)
h = mergeRound64(h, v1)
h = mergeRound64(h, v2)
h = mergeRound64(h, v3)
h = mergeRound64(h, v4)
} else if seed == 0 {
h = prime64x5
} else {
h = seed + prime64x5
}
h += uint64(xx.ln)
if xx.memIdx == 0 {
return mix64(h)
}
var (
in = xx.mem[:xx.memIdx:xx.memIdx]
wordsLen = len(in) >> 3
words = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]
)
for _, k := range words {
h ^= round64(0, k)
h = rotl64_27(h)*prime64x1 + prime64x4
}
if in = in[wordsLen<<3 : len(in) : len(in)]; len(in) > 3 {
words := (*[1]uint32)(unsafe.Pointer(&in[0]))
h ^= uint64(words[0]) * prime64x1
h = rotl64_23(h)*prime64x2 + prime64x3
in = in[4:len(in):len(in)]
}
for _, b := range in {
h ^= uint64(b) * prime64x5
h = rotl64_11(h) * prime64x1
}
return mix64(h)
}

View File

@@ -104,7 +104,7 @@ func (r *roffRenderer) RenderNode(w io.Writer, node *blackfriday.Node, entering
node.Parent.Prev.Type == blackfriday.Heading &&
node.Parent.Prev.FirstChild != nil &&
bytes.EqualFold(node.Parent.Prev.FirstChild.Literal, []byte("NAME")) {
before, after, found := bytes.Cut(node.Literal, []byte(" - "))
before, after, found := bytesCut(node.Literal, []byte(" - "))
escapeSpecialChars(w, before)
if found {
out(w, ` \- `)
@@ -406,3 +406,12 @@ func escapeSpecialCharsLine(w io.Writer, text []byte) {
w.Write([]byte{'\\', text[i]}) // nolint: errcheck
}
}
// bytesCut is a copy of [bytes.Cut] to provide compatibility with go1.17
// and older. We can remove this once we drop support for go1.17 and older.
func bytesCut(s, sep []byte) (before, after []byte, found bool) {
if i := bytes.Index(s, sep); i >= 0 {
return s[:i], s[i+len(sep):], true
}
return s, nil, false
}
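The `bytesCut` backport above behaves exactly like Go 1.18's `bytes.Cut`. A minimal self-contained sketch (the man-page input string here is illustrative, not from the vendored renderer):

```go
package main

import (
	"bytes"
	"fmt"
)

// bytesCut mirrors the vendored backport of bytes.Cut for go1.17 and older:
// split s around the first occurrence of sep.
func bytesCut(s, sep []byte) (before, after []byte, found bool) {
	if i := bytes.Index(s, sep); i >= 0 {
		return s[:i], s[i+len(sep):], true
	}
	return s, nil, false
}

func main() {
	// The roff renderer uses this to split a NAME section like "cmd - description".
	before, after, found := bytesCut([]byte("ls - list directory contents"), []byte(" - "))
	fmt.Printf("%q %q %v\n", before, after, found)
}
```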

View File

@@ -24,6 +24,7 @@ Artur Melanchyk <artur.melanchyk@gmail.com>
Asta Xie <xiemengjun at gmail.com>
B Lamarche <blam413 at gmail.com>
Bes Dollma <bdollma@thousandeyes.com>
Bogdan Constantinescu <bog.con.bc at gmail.com>
Brian Hendriks <brian at dolthub.com>
Bulat Gaifullin <gaifullinbf at gmail.com>
Caine Jette <jette at alum.mit.edu>

View File

@@ -1,5 +1,16 @@
# Changelog
## v1.9.1 (2025-03-21)
### Major Changes
* Add Charset() option. (#1679)
### Bugfixes
* go.mod: fix go version format (#1682)
* Fix FormatDSN missing ConnectionAttributes (#1619)
## v1.9.0 (2025-02-18)
### Major Changes

View File

@@ -44,7 +44,6 @@ type Config struct {
DBName string // Database name
Params map[string]string // Connection parameters
ConnectionAttributes string // Connection Attributes, comma-delimited string of user-defined "key:value" pairs
charsets []string // Connection charset. When set, this will be set in SET NAMES <charset> query
Collation string // Connection collation. When set, this will be set in SET NAMES <charset> COLLATE <collation> query
Loc *time.Location // Location for time.Time values
MaxAllowedPacket int // Max packet size allowed
@@ -81,6 +80,7 @@ type Config struct {
beforeConnect func(context.Context, *Config) error // Invoked before a connection is established
pubKey *rsa.PublicKey // Server public key
timeTruncate time.Duration // Truncate time.Time values to the specified duration
charsets []string // Connection charset. When set, this will be set in SET NAMES <charset> query
}
// Functional Options Pattern
@@ -135,6 +135,21 @@ func EnableCompression(yes bool) Option {
}
}
// Charset sets the connection charset and collation.
//
// charset is the connection charset.
// collation is the connection collation. It can be null or empty string.
//
// When collation is not specified, `SET NAMES <charset>` command is sent when the connection is established.
// When collation is specified, `SET NAMES <charset> COLLATE <collation>` command is sent when the connection is established.
func Charset(charset, collation string) Option {
return func(cfg *Config) error {
cfg.charsets = []string{charset}
cfg.Collation = collation
return nil
}
}
func (cfg *Config) Clone() *Config {
cp := *cfg
if cp.TLS != nil {
@@ -307,6 +322,10 @@ func (cfg *Config) FormatDSN() string {
writeDSNParam(&buf, &hasParam, "columnsWithAlias", "true")
}
if cfg.ConnectionAttributes != "" {
writeDSNParam(&buf, &hasParam, "connectionAttributes", url.QueryEscape(cfg.ConnectionAttributes))
}
if cfg.compress {
writeDSNParam(&buf, &hasParam, "compress", "true")
}
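The new `Charset()` option follows the driver's functional options pattern noted above: an `Option` is a closure that mutates the `Config`. A minimal self-contained sketch of that pattern, with types loosely modeled on (but not identical to) the go-sql-driver/mysql ones:

```go
package main

import "fmt"

// Config loosely mimics the driver config; field names are illustrative.
type Config struct {
	charsets  []string
	Collation string
}

// Option is a functional option that mutates a Config.
type Option func(*Config) error

// Charset mirrors the shape of the new Charset() option: it records the
// charset and optional collation for a later `SET NAMES` statement.
func Charset(charset, collation string) Option {
	return func(cfg *Config) error {
		cfg.charsets = []string{charset}
		cfg.Collation = collation
		return nil
	}
}

// Apply runs each option against the config in order.
func (cfg *Config) Apply(opts ...Option) error {
	for _, opt := range opts {
		if err := opt(cfg); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var cfg Config
	if err := cfg.Apply(Charset("utf8mb4", "utf8mb4_general_ci")); err != nil {
		panic(err)
	}
	fmt.Println(cfg.charsets[0], cfg.Collation)
}
```

With an empty collation the driver would send `SET NAMES utf8mb4`; with one set, `SET NAMES utf8mb4 COLLATE utf8mb4_general_ci`.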

View File

@@ -7,6 +7,8 @@ import (
"strings"
)
const tokenDelimiter = "."
type Parser struct {
// If populated, only these methods will be considered valid.
//
@@ -122,9 +124,10 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyf
// It's only ever useful in cases where you know the signature is valid (because it has
// been checked previously in the stack) and you want to extract values from it.
func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
parts = strings.Split(tokenString, ".")
if len(parts) != 3 {
return nil, parts, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed)
var ok bool
parts, ok = splitToken(tokenString)
if !ok {
return nil, nil, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed)
}
token = &Token{Raw: tokenString}
@@ -174,3 +177,30 @@ func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Toke
return token, parts, nil
}
// splitToken splits a token string into three parts: header, claims, and signature. It will only
// return true if the token contains exactly two delimiters and three parts. In all other cases, it
// will return nil parts and false.
func splitToken(token string) ([]string, bool) {
parts := make([]string, 3)
header, remain, ok := strings.Cut(token, tokenDelimiter)
if !ok {
return nil, false
}
parts[0] = header
claims, remain, ok := strings.Cut(remain, tokenDelimiter)
if !ok {
return nil, false
}
parts[1] = claims
// One more cut to ensure the signature is the last part of the token and there are no more
// delimiters. This avoids an issue where malicious input could contain additional delimiters
// causing unnecessary overhead parsing tokens.
signature, _, unexpected := strings.Cut(remain, tokenDelimiter)
if unexpected {
return nil, false
}
parts[2] = signature
return parts, true
}
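The two-`Cut` approach above replaces `strings.Split` so that a token with extra delimiters is rejected after at most three cuts, instead of allocating a slice proportional to the number of delimiters in malicious input. A self-contained sketch reproducing the helper for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

const tokenDelimiter = "."

// splitToken reproduces the vendored helper: it returns the three token
// parts only when the input contains exactly two delimiters.
func splitToken(token string) ([]string, bool) {
	parts := make([]string, 3)
	header, remain, ok := strings.Cut(token, tokenDelimiter)
	if !ok {
		return nil, false
	}
	parts[0] = header
	claims, remain, ok := strings.Cut(remain, tokenDelimiter)
	if !ok {
		return nil, false
	}
	parts[1] = claims
	// A third cut must find no further delimiter, or the token is malformed.
	signature, _, unexpected := strings.Cut(remain, tokenDelimiter)
	if unexpected {
		return nil, false
	}
	parts[2] = signature
	return parts, true
}

func main() {
	p, ok := splitToken("aaa.bbb.ccc")
	fmt.Println(ok, p)
	_, ok = splitToken("aaa.bbb.ccc.ddd") // extra delimiter: rejected
	fmt.Println(ok)
}
```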

View File

@@ -10,11 +10,11 @@ implementation of [JSON Web
Tokens](https://datatracker.ietf.org/doc/html/rfc7519).
Starting with [v4.0.0](https://github.com/golang-jwt/jwt/releases/tag/v4.0.0)
this project adds Go module support, but maintains backwards compatibility with
this project adds Go module support, but maintains backward compatibility with
older `v3.x.y` tags and upstream `github.com/dgrijalva/jwt-go`. See the
[`MIGRATION_GUIDE.md`](./MIGRATION_GUIDE.md) for more information. Version
v5.0.0 introduces major improvements to the validation of tokens, but is not
entirely backwards compatible.
entirely backward compatible.
> After the original author of the library suggested migrating the maintenance
> of `jwt-go`, a dedicated team of open source maintainers decided to clone the
@@ -24,7 +24,7 @@ entirely backwards compatible.
**SECURITY NOTICE:** Some older versions of Go have a security issue in the
crypto/elliptic. Recommendation is to upgrade to at least 1.15 See issue
crypto/elliptic. The recommendation is to upgrade to at least 1.15. See issue
[dgrijalva/jwt-go#216](https://github.com/dgrijalva/jwt-go/issues/216) for more
detail.
@@ -32,7 +32,7 @@ detail.
what you
expect](https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/).
This library attempts to make it easy to do the right thing by requiring key
types match the expected alg, but you should take the extra step to verify it in
types to match the expected alg, but you should take the extra step to verify it in
your usage. See the examples provided.
### Supported Go versions
@@ -41,7 +41,7 @@ Our support of Go versions is aligned with Go's [version release
policy](https://golang.org/doc/devel/release#policy). So we will support a major
version of Go until there are two newer major releases. We no longer support
building jwt-go with unsupported Go versions, as these contain security
vulnerabilities which will not be fixed.
vulnerabilities that will not be fixed.
## What the heck is a JWT?
@@ -117,7 +117,7 @@ notable differences:
This library is considered production ready. Feedback and feature requests are
appreciated. The API should be considered stable. There should be very few
backwards-incompatible changes outside of major version updates (and only with
backward-incompatible changes outside of major version updates (and only with
good reason).
This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull
@@ -125,8 +125,8 @@ requests will land on `main`. Periodically, versions will be tagged from
`main`. You can find all the releases on [the project releases
page](https://github.com/golang-jwt/jwt/releases).
**BREAKING CHANGES:*** A full list of breaking changes is available in
`VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating
**BREAKING CHANGES:** A full list of breaking changes is available in
`VERSION_HISTORY.md`. See [`MIGRATION_GUIDE.md`](./MIGRATION_GUIDE.md) for more information on updating
your code.
## Extensions

View File

@@ -2,11 +2,11 @@
## Supported Versions
As of February 2022 (and until this document is updated), the latest version `v4` is supported.
As of November 2024 (and until this document is updated), the latest version `v5` is supported. In critical cases, we might supply back-ported patches for `v4`.
## Reporting a Vulnerability
If you think you found a vulnerability, and even if you are not sure, please report it to jwt-go-security@googlegroups.com or one of the other [golang-jwt maintainers](https://github.com/orgs/golang-jwt/people). Please try be explicit, describe steps to reproduce the security issue with code example(s).
If you think you found a vulnerability, and even if you are not sure, please report it via a [GitHub Security Advisory](https://github.com/golang-jwt/jwt/security/advisories/new). Please try to be explicit, and describe steps to reproduce the security issue with code example(s).
You will receive a response within a timely manner. If the issue is confirmed, we will do our best to release a patch as soon as possible given the complexity of the problem.

View File

@@ -8,6 +8,8 @@ import (
"strings"
)
const tokenDelimiter = "."
type Parser struct {
// If populated, only these methods will be considered valid.
validMethods []string
@@ -136,9 +138,10 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyf
// It's only ever useful in cases where you know the signature is valid (since it has already
// been or will be checked elsewhere in the stack) and you want to extract values from it.
func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) {
parts = strings.Split(tokenString, ".")
if len(parts) != 3 {
return nil, parts, newError("token contains an invalid number of segments", ErrTokenMalformed)
var ok bool
parts, ok = splitToken(tokenString)
if !ok {
return nil, nil, newError("token contains an invalid number of segments", ErrTokenMalformed)
}
token = &Token{Raw: tokenString}
@@ -196,6 +199,33 @@ func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Toke
return token, parts, nil
}
// splitToken splits a token string into three parts: header, claims, and signature. It will only
// return true if the token contains exactly two delimiters and three parts. In all other cases, it
// will return nil parts and false.
func splitToken(token string) ([]string, bool) {
parts := make([]string, 3)
header, remain, ok := strings.Cut(token, tokenDelimiter)
if !ok {
return nil, false
}
parts[0] = header
claims, remain, ok := strings.Cut(remain, tokenDelimiter)
if !ok {
return nil, false
}
parts[1] = claims
// One more cut to ensure the signature is the last part of the token and there are no more
// delimiters. This avoids an issue where malicious input could contain additional delimiters
// causing unnecessary overhead parsing tokens.
signature, _, unexpected := strings.Cut(remain, tokenDelimiter)
if unexpected {
return nil, false
}
parts[2] = signature
return parts, true
}
// DecodeSegment decodes a JWT specific base64url encoding. This function will
// take into account whether the [Parser] is configured with additional options,
// such as [WithStrictDecoding] or [WithPaddingAllowed].

View File

@@ -75,7 +75,7 @@ func (t *Token) SignedString(key interface{}) (string, error) {
}
// SigningString generates the signing string. This is the most expensive part
// of the whole deal. Unless you need this for something special, just go
// of the whole deal. Unless you need this for something special, just go
// straight for the SignedString.
func (t *Token) SigningString() (string, error) {
h, err := json.Marshal(t.Header)

View File

@@ -1,3 +1,4 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
@@ -178,10 +179,24 @@
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

35
vendor/github.com/google/go-tpm/legacy/tpm2/README.md generated vendored Normal file
View File

@@ -0,0 +1,35 @@
# TPM 2.0 client library
## Tests
This library contains unit tests in `github.com/google/go-tpm/tpm2`, which just
tests that various encoding and error checking functions work correctly. It also
contains more comprehensive integration tests in
`github.com/google/go-tpm/tpm2/test`, which run actual commands on a TPM.
By default, these integration tests are run against the
[`go-tpm-tools`](https://github.com/google/go-tpm-tools)
simulator, which is based on the
[Microsoft Reference TPM2 code](https://github.com/microsoft/ms-tpm-20-ref). To
run both the unit and integration tests, run (in this directory)
```bash
go test . ./test
```
These integration tests can also be run against a real TPM device. This is
slightly more complex as the tests often need to be built as a normal user and
then executed as root. For example,
```bash
# Build the test binary without running it
go test -c github.com/google/go-tpm/tpm2/test
# Execute the test binary as root
sudo ./test.test --tpm-path=/dev/tpmrm0
```
On Linux, the `--tpm-path` flag causes the integration tests to be run against a
real TPM located at that path (usually `/dev/tpmrm0` or `/dev/tpm0`). On Windows, the story is similar, except that
the `--use-tbs` flag is used instead.
Tip: if your TPM host is remote and you don't want to install Go on it, this
same two-step process can be used. The test binary can be copied to a remote
host and run without extra installation (as the test binary has very few
*runtime* dependencies).

View File

@@ -0,0 +1,575 @@
// Copyright (c) 2018, Google LLC All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tpm2
import (
"crypto"
"crypto/elliptic"
"fmt"
"strings"
// Register the relevant hash implementations to prevent a runtime failure.
_ "crypto/sha1"
_ "crypto/sha256"
_ "crypto/sha512"
"github.com/google/go-tpm/tpmutil"
)
var hashInfo = []struct {
alg Algorithm
hash crypto.Hash
}{
{AlgSHA1, crypto.SHA1},
{AlgSHA256, crypto.SHA256},
{AlgSHA384, crypto.SHA384},
{AlgSHA512, crypto.SHA512},
{AlgSHA3_256, crypto.SHA3_256},
{AlgSHA3_384, crypto.SHA3_384},
{AlgSHA3_512, crypto.SHA3_512},
}
// MAX_DIGEST_BUFFER is the maximum size of []byte request or response fields.
// Typically used for chunking of big blobs of data (such as for hashing or
// encryption).
const maxDigestBuffer = 1024
// Algorithm represents a TPM_ALG_ID value.
type Algorithm uint16
// HashToAlgorithm looks up the TPM2 algorithm corresponding to the provided crypto.Hash
func HashToAlgorithm(hash crypto.Hash) (Algorithm, error) {
for _, info := range hashInfo {
if info.hash == hash {
return info.alg, nil
}
}
return AlgUnknown, fmt.Errorf("go hash algorithm #%d has no TPM2 algorithm", hash)
}
// IsNull returns true if a is AlgNull or zero (unset).
func (a Algorithm) IsNull() bool {
return a == AlgNull || a == AlgUnknown
}
// UsesCount returns true if a signature algorithm uses count value.
func (a Algorithm) UsesCount() bool {
return a == AlgECDAA
}
// UsesHash returns true if the algorithm requires the use of a hash.
func (a Algorithm) UsesHash() bool {
return a == AlgOAEP
}
// Hash returns a crypto.Hash based on the given TPM_ALG_ID.
// An error is returned if the given algorithm is not a hash algorithm or is not available.
func (a Algorithm) Hash() (crypto.Hash, error) {
for _, info := range hashInfo {
if info.alg == a {
if !info.hash.Available() {
return crypto.Hash(0), fmt.Errorf("go hash algorithm #%d not available", info.hash)
}
return info.hash, nil
}
}
return crypto.Hash(0), fmt.Errorf("hash algorithm not supported: 0x%x", a)
}
func (a Algorithm) String() string {
var s strings.Builder
var err error
switch a {
case AlgUnknown:
_, err = s.WriteString("AlgUnknown")
case AlgRSA:
_, err = s.WriteString("RSA")
case AlgSHA1:
_, err = s.WriteString("SHA1")
case AlgHMAC:
_, err = s.WriteString("HMAC")
case AlgAES:
_, err = s.WriteString("AES")
case AlgKeyedHash:
_, err = s.WriteString("KeyedHash")
case AlgXOR:
_, err = s.WriteString("XOR")
case AlgSHA256:
_, err = s.WriteString("SHA256")
case AlgSHA384:
_, err = s.WriteString("SHA384")
case AlgSHA512:
_, err = s.WriteString("SHA512")
case AlgNull:
_, err = s.WriteString("AlgNull")
case AlgRSASSA:
_, err = s.WriteString("RSASSA")
case AlgRSAES:
_, err = s.WriteString("RSAES")
case AlgRSAPSS:
_, err = s.WriteString("RSAPSS")
case AlgOAEP:
_, err = s.WriteString("OAEP")
case AlgECDSA:
_, err = s.WriteString("ECDSA")
case AlgECDH:
_, err = s.WriteString("ECDH")
case AlgECDAA:
_, err = s.WriteString("ECDAA")
case AlgKDF2:
_, err = s.WriteString("KDF2")
case AlgECC:
_, err = s.WriteString("ECC")
case AlgSymCipher:
_, err = s.WriteString("SymCipher")
case AlgSHA3_256:
_, err = s.WriteString("SHA3_256")
case AlgSHA3_384:
_, err = s.WriteString("SHA3_384")
case AlgSHA3_512:
_, err = s.WriteString("SHA3_512")
case AlgCTR:
_, err = s.WriteString("CTR")
case AlgOFB:
_, err = s.WriteString("OFB")
case AlgCBC:
_, err = s.WriteString("CBC")
case AlgCFB:
_, err = s.WriteString("CFB")
case AlgECB:
_, err = s.WriteString("ECB")
default:
return fmt.Sprintf("Alg?<%d>", int(a))
}
if err != nil {
return fmt.Sprintf("Writing to string builder failed: %v", err)
}
return s.String()
}
// Supported Algorithms.
const (
AlgUnknown Algorithm = 0x0000
AlgRSA Algorithm = 0x0001
AlgSHA1 Algorithm = 0x0004
AlgHMAC Algorithm = 0x0005
AlgAES Algorithm = 0x0006
AlgKeyedHash Algorithm = 0x0008
AlgXOR Algorithm = 0x000A
AlgSHA256 Algorithm = 0x000B
AlgSHA384 Algorithm = 0x000C
AlgSHA512 Algorithm = 0x000D
AlgNull Algorithm = 0x0010
AlgRSASSA Algorithm = 0x0014
AlgRSAES Algorithm = 0x0015
AlgRSAPSS Algorithm = 0x0016
AlgOAEP Algorithm = 0x0017
AlgECDSA Algorithm = 0x0018
AlgECDH Algorithm = 0x0019
AlgECDAA Algorithm = 0x001A
AlgKDF2 Algorithm = 0x0021
AlgECC Algorithm = 0x0023
AlgSymCipher Algorithm = 0x0025
AlgSHA3_256 Algorithm = 0x0027
AlgSHA3_384 Algorithm = 0x0028
AlgSHA3_512 Algorithm = 0x0029
AlgCTR Algorithm = 0x0040
AlgOFB Algorithm = 0x0041
AlgCBC Algorithm = 0x0042
AlgCFB Algorithm = 0x0043
AlgECB Algorithm = 0x0044
)
// HandleType defines a type of handle.
type HandleType uint8
// Supported handle types
const (
HandleTypePCR HandleType = 0x00
HandleTypeNVIndex HandleType = 0x01
HandleTypeHMACSession HandleType = 0x02
HandleTypeLoadedSession HandleType = 0x02
HandleTypePolicySession HandleType = 0x03
HandleTypeSavedSession HandleType = 0x03
HandleTypePermanent HandleType = 0x40
HandleTypeTransient HandleType = 0x80
HandleTypePersistent HandleType = 0x81
)
// SessionType defines the type of session created in StartAuthSession.
type SessionType uint8
// Supported session types.
const (
SessionHMAC SessionType = 0x00
SessionPolicy SessionType = 0x01
SessionTrial SessionType = 0x03
)
// SessionAttributes represents an attribute of a session.
type SessionAttributes byte
// Session Attributes (Structures 8.4 TPMA_SESSION)
const (
AttrContinueSession SessionAttributes = 1 << iota
AttrAuditExclusive
AttrAuditReset
_ // bit 3 reserved
_ // bit 4 reserved
AttrDecrypt
AttrEcrypt
AttrAudit
)
// EmptyAuth represents the empty authorization value.
var EmptyAuth []byte
// KeyProp is a bitmask used in Attributes field of key templates. Individual
// flags should be OR-ed to form a full mask.
type KeyProp uint32
// Key properties.
const (
FlagFixedTPM KeyProp = 0x00000002
FlagStClear KeyProp = 0x00000004
FlagFixedParent KeyProp = 0x00000010
FlagSensitiveDataOrigin KeyProp = 0x00000020
FlagUserWithAuth KeyProp = 0x00000040
FlagAdminWithPolicy KeyProp = 0x00000080
FlagNoDA KeyProp = 0x00000400
FlagRestricted KeyProp = 0x00010000
FlagDecrypt KeyProp = 0x00020000
FlagSign KeyProp = 0x00040000
FlagSealDefault = FlagFixedTPM | FlagFixedParent
FlagSignerDefault = FlagSign | FlagRestricted | FlagFixedTPM |
FlagFixedParent | FlagSensitiveDataOrigin | FlagUserWithAuth
FlagStorageDefault = FlagDecrypt | FlagRestricted | FlagFixedTPM |
FlagFixedParent | FlagSensitiveDataOrigin | FlagUserWithAuth
)
// TPMProp represents a Property Tag (TPM_PT) used with calls to GetCapability(CapabilityTPMProperties).
type TPMProp uint32
// TPM Capability Properties, see TPM 2.0 Spec, Rev 1.38, Table 23.
// Fixed TPM Properties (PT_FIXED)
const (
FamilyIndicator TPMProp = 0x100 + iota
SpecLevel
SpecRevision
SpecDayOfYear
SpecYear
Manufacturer
VendorString1
VendorString2
VendorString3
VendorString4
VendorTPMType
FirmwareVersion1
FirmwareVersion2
InputMaxBufferSize
TransientObjectsMin
PersistentObjectsMin
LoadedObjectsMin
ActiveSessionsMax
PCRCount
PCRSelectMin
ContextGapMax
_ // (PT_FIXED + 21) is skipped
NVCountersMax
NVIndexMax
MemoryMethod
ClockUpdate
ContextHash
ContextSym
ContextSymSize
OrderlyCount
CommandMaxSize
ResponseMaxSize
DigestMaxSize
ObjectContextMaxSize
SessionContextMaxSize
PSFamilyIndicator
PSSpecLevel
PSSpecRevision
PSSpecDayOfYear
PSSpecYear
SplitSigningMax
TotalCommands
LibraryCommands
VendorCommands
NVMaxBufferSize
TPMModes
CapabilityMaxBufferSize
)
// Variable TPM Properties (PT_VAR)
const (
TPMAPermanent TPMProp = 0x200 + iota
TPMAStartupClear
HRNVIndex
HRLoaded
HRLoadedAvail
HRActive
HRActiveAvail
HRTransientAvail
CurrentPersistent
AvailPersistent
NVCounters
NVCountersAvail
AlgorithmSet
LoadedCurves
LockoutCounter
MaxAuthFail
LockoutInterval
LockoutRecovery
NVWriteRecovery
AuditCounter0
AuditCounter1
)
// Allowed ranges of different kinds of Handles (TPM_HANDLE)
// These constants have type TPMProp for backwards compatibility.
const (
PCRFirst TPMProp = 0x00000000
HMACSessionFirst TPMProp = 0x02000000
LoadedSessionFirst TPMProp = 0x02000000
PolicySessionFirst TPMProp = 0x03000000
ActiveSessionFirst TPMProp = 0x03000000
TransientFirst TPMProp = 0x80000000
PersistentFirst TPMProp = 0x81000000
PersistentLast TPMProp = 0x81FFFFFF
PlatformPersistent TPMProp = 0x81800000
NVIndexFirst TPMProp = 0x01000000
NVIndexLast TPMProp = 0x01FFFFFF
PermanentFirst TPMProp = 0x40000000
PermanentLast TPMProp = 0x4000010F
)
// Reserved Handles.
const (
HandleOwner tpmutil.Handle = 0x40000001 + iota
HandleRevoke
HandleTransport
HandleOperator
HandleAdmin
HandleEK
HandleNull
HandleUnassigned
HandlePasswordSession
HandleLockout
HandleEndorsement
HandlePlatform
)
// Capability identifies some TPM property or state type.
type Capability uint32
// TPM Capabilities.
const (
CapabilityAlgs Capability = iota
CapabilityHandles
CapabilityCommands
CapabilityPPCommands
CapabilityAuditCommands
CapabilityPCRs
CapabilityTPMProperties
CapabilityPCRProperties
CapabilityECCCurves
CapabilityAuthPolicies
)
// TPM Structure Tags. Tags are used to disambiguate structures, similar to Alg
// values: tag value defines what kind of data lives in a nested field.
const (
TagNull tpmutil.Tag = 0x8000
TagNoSessions tpmutil.Tag = 0x8001
TagSessions tpmutil.Tag = 0x8002
TagAttestCertify tpmutil.Tag = 0x8017
TagAttestQuote tpmutil.Tag = 0x8018
TagAttestCreation tpmutil.Tag = 0x801a
TagAuthSecret tpmutil.Tag = 0x8023
TagHashCheck tpmutil.Tag = 0x8024
TagAuthSigned tpmutil.Tag = 0x8025
)
// StartupType instructs the TPM on how to handle its state during Shutdown or
// Startup.
type StartupType uint16
// Startup types
const (
StartupClear StartupType = iota
StartupState
)
// EllipticCurve identifies specific EC curves.
type EllipticCurve uint16
// ECC curves supported by TPM 2.0 spec.
const (
CurveNISTP192 = EllipticCurve(iota + 1)
CurveNISTP224
CurveNISTP256
CurveNISTP384
CurveNISTP521
CurveBNP256 = EllipticCurve(iota + 10)
CurveBNP638
CurveSM2P256 = EllipticCurve(0x0020)
)
var toGoCurve = map[EllipticCurve]elliptic.Curve{
CurveNISTP224: elliptic.P224(),
CurveNISTP256: elliptic.P256(),
CurveNISTP384: elliptic.P384(),
CurveNISTP521: elliptic.P521(),
}
// Supported TPM operations.
const (
CmdNVUndefineSpaceSpecial tpmutil.Command = 0x0000011F
CmdEvictControl tpmutil.Command = 0x00000120
CmdUndefineSpace tpmutil.Command = 0x00000122
CmdClear tpmutil.Command = 0x00000126
CmdHierarchyChangeAuth tpmutil.Command = 0x00000129
CmdDefineSpace tpmutil.Command = 0x0000012A
CmdCreatePrimary tpmutil.Command = 0x00000131
CmdIncrementNVCounter tpmutil.Command = 0x00000134
CmdWriteNV tpmutil.Command = 0x00000137
CmdWriteLockNV tpmutil.Command = 0x00000138
CmdDictionaryAttackLockReset tpmutil.Command = 0x00000139
CmdDictionaryAttackParameters tpmutil.Command = 0x0000013A
CmdPCREvent tpmutil.Command = 0x0000013C
CmdPCRReset tpmutil.Command = 0x0000013D
CmdSequenceComplete tpmutil.Command = 0x0000013E
CmdStartup tpmutil.Command = 0x00000144
CmdShutdown tpmutil.Command = 0x00000145
CmdActivateCredential tpmutil.Command = 0x00000147
CmdCertify tpmutil.Command = 0x00000148
CmdCertifyCreation tpmutil.Command = 0x0000014A
CmdReadNV tpmutil.Command = 0x0000014E
CmdReadLockNV tpmutil.Command = 0x0000014F
CmdPolicySecret tpmutil.Command = 0x00000151
CmdCreate tpmutil.Command = 0x00000153
CmdECDHZGen tpmutil.Command = 0x00000154
CmdImport tpmutil.Command = 0x00000156
CmdLoad tpmutil.Command = 0x00000157
CmdQuote tpmutil.Command = 0x00000158
CmdRSADecrypt tpmutil.Command = 0x00000159
CmdSequenceUpdate tpmutil.Command = 0x0000015C
CmdSign tpmutil.Command = 0x0000015D
CmdUnseal tpmutil.Command = 0x0000015E
CmdPolicySigned tpmutil.Command = 0x00000160
CmdContextLoad tpmutil.Command = 0x00000161
CmdContextSave tpmutil.Command = 0x00000162
CmdECDHKeyGen tpmutil.Command = 0x00000163
CmdEncryptDecrypt tpmutil.Command = 0x00000164
CmdFlushContext tpmutil.Command = 0x00000165
CmdLoadExternal tpmutil.Command = 0x00000167
CmdMakeCredential tpmutil.Command = 0x00000168
CmdReadPublicNV tpmutil.Command = 0x00000169
CmdPolicyCommandCode tpmutil.Command = 0x0000016C
CmdPolicyOr tpmutil.Command = 0x00000171
CmdReadPublic tpmutil.Command = 0x00000173
CmdRSAEncrypt tpmutil.Command = 0x00000174
CmdStartAuthSession tpmutil.Command = 0x00000176
CmdGetCapability tpmutil.Command = 0x0000017A
CmdGetRandom tpmutil.Command = 0x0000017B
CmdHash tpmutil.Command = 0x0000017D
CmdPCRRead tpmutil.Command = 0x0000017E
CmdPolicyPCR tpmutil.Command = 0x0000017F
CmdReadClock tpmutil.Command = 0x00000181
CmdPCRExtend tpmutil.Command = 0x00000182
CmdEventSequenceComplete tpmutil.Command = 0x00000185
CmdHashSequenceStart tpmutil.Command = 0x00000186
CmdPolicyGetDigest tpmutil.Command = 0x00000189
CmdPolicyPassword tpmutil.Command = 0x0000018C
CmdEncryptDecrypt2 tpmutil.Command = 0x00000193
)
// Regular TPM 2.0 devices use 24-bit mask (3 bytes) for PCR selection.
const sizeOfPCRSelect = 3
const defaultRSAExponent = 1<<16 + 1
// NVAttr is a bitmask used in Attributes field of NV indexes. Individual
// flags should be OR-ed to form a full mask.
type NVAttr uint32
// NV Attributes
const (
AttrPPWrite NVAttr = 0x00000001
AttrOwnerWrite NVAttr = 0x00000002
AttrAuthWrite NVAttr = 0x00000004
AttrPolicyWrite NVAttr = 0x00000008
AttrPolicyDelete NVAttr = 0x00000400
AttrWriteLocked NVAttr = 0x00000800
AttrWriteAll NVAttr = 0x00001000
AttrWriteDefine NVAttr = 0x00002000
AttrWriteSTClear NVAttr = 0x00004000
AttrGlobalLock NVAttr = 0x00008000
AttrPPRead NVAttr = 0x00010000
AttrOwnerRead NVAttr = 0x00020000
AttrAuthRead NVAttr = 0x00040000
AttrPolicyRead NVAttr = 0x00080000
AttrNoDA NVAttr = 0x02000000
AttrOrderly NVAttr = 0x04000000
AttrClearSTClear NVAttr = 0x08000000
AttrReadLocked NVAttr = 0x10000000
AttrWritten NVAttr = 0x20000000
AttrPlatformCreate NVAttr = 0x40000000
AttrReadSTClear NVAttr = 0x80000000
)
var permMap = map[NVAttr]string{
AttrPPWrite: "PPWrite",
AttrOwnerWrite: "OwnerWrite",
AttrAuthWrite: "AuthWrite",
AttrPolicyWrite: "PolicyWrite",
AttrPolicyDelete: "PolicyDelete",
AttrWriteLocked: "WriteLocked",
AttrWriteAll: "WriteAll",
AttrWriteDefine: "WriteDefine",
AttrWriteSTClear: "WriteSTClear",
AttrGlobalLock: "GlobalLock",
AttrPPRead: "PPRead",
AttrOwnerRead: "OwnerRead",
AttrAuthRead: "AuthRead",
AttrPolicyRead: "PolicyRead",
AttrNoDA: "NoDA",
AttrOrderly: "Orderly",
AttrClearSTClear: "ClearSTClear",
AttrReadLocked: "ReadLocked",
AttrWritten: "Written",
AttrPlatformCreate: "PlatformCreate",
AttrReadSTClear: "ReadSTClear",
}
// String returns a textual representation of the set of NVAttr
func (p NVAttr) String() string {
var retString strings.Builder
for iterator, item := range permMap {
if (p & iterator) != 0 {
retString.WriteString(item + " + ")
}
}
if retString.String() == "" {
return "Permission/s not found"
}
return strings.TrimSuffix(retString.String(), " + ")
}

vendor/github.com/google/go-tpm/legacy/tpm2/error.go generated vendored Normal file

@@ -0,0 +1,362 @@
package tpm2
import (
"fmt"
"github.com/google/go-tpm/tpmutil"
)
type (
// RCFmt0 holds Format 0 error codes
RCFmt0 uint8
// RCFmt1 holds Format 1 error codes
RCFmt1 uint8
// RCWarn holds error codes used in warnings
RCWarn uint8
// RCIndex is used to reference arguments, handles and sessions in errors
RCIndex uint8
)
// Format 0 error codes.
const (
RCInitialize RCFmt0 = 0x00
RCFailure RCFmt0 = 0x01
RCSequence RCFmt0 = 0x03
RCPrivate RCFmt0 = 0x0B
RCHMAC RCFmt0 = 0x19
RCDisabled RCFmt0 = 0x20
RCExclusive RCFmt0 = 0x21
RCAuthType RCFmt0 = 0x24
RCAuthMissing RCFmt0 = 0x25
RCPolicy RCFmt0 = 0x26
RCPCR RCFmt0 = 0x27
RCPCRChanged RCFmt0 = 0x28
RCUpgrade RCFmt0 = 0x2D
RCTooManyContexts RCFmt0 = 0x2E
RCAuthUnavailable RCFmt0 = 0x2F
RCReboot RCFmt0 = 0x30
RCUnbalanced RCFmt0 = 0x31
RCCommandSize RCFmt0 = 0x42
RCCommandCode RCFmt0 = 0x43
RCAuthSize RCFmt0 = 0x44
RCAuthContext RCFmt0 = 0x45
RCNVRange RCFmt0 = 0x46
RCNVSize RCFmt0 = 0x47
RCNVLocked RCFmt0 = 0x48
RCNVAuthorization RCFmt0 = 0x49
RCNVUninitialized RCFmt0 = 0x4A
RCNVSpace RCFmt0 = 0x4B
RCNVDefined RCFmt0 = 0x4C
RCBadContext RCFmt0 = 0x50
RCCPHash RCFmt0 = 0x51
RCParent RCFmt0 = 0x52
RCNeedsTest RCFmt0 = 0x53
RCNoResult RCFmt0 = 0x54
RCSensitive RCFmt0 = 0x55
)
var fmt0Msg = map[RCFmt0]string{
RCInitialize: "TPM not initialized by TPM2_Startup or already initialized",
RCFailure: "commands not being accepted because of a TPM failure",
RCSequence: "improper use of a sequence handle",
RCPrivate: "not currently used",
RCHMAC: "not currently used",
RCDisabled: "the command is disabled",
RCExclusive: "command failed because audit sequence required exclusivity",
RCAuthType: "authorization handle is not correct for command",
RCAuthMissing: "command requires an authorization session for handle and it is not present",
RCPolicy: "policy failure in math operation or an invalid authPolicy value",
RCPCR: "PCR check fail",
RCPCRChanged: "PCR have changed since checked",
RCUpgrade: "TPM is in field upgrade mode unless called via TPM2_FieldUpgradeData(), then it is not in field upgrade mode",
RCTooManyContexts: "context ID counter is at maximum",
RCAuthUnavailable: "authValue or authPolicy is not available for selected entity",
RCReboot: "a _TPM_Init and Startup(CLEAR) is required before the TPM can resume operation",
RCUnbalanced: "the protection algorithms (hash and symmetric) are not reasonably balanced; the digest size of the hash must be larger than the key size of the symmetric algorithm",
RCCommandSize: "command commandSize value is inconsistent with contents of the command buffer; either the size is not the same as the octets loaded by the hardware interface layer or the value is not large enough to hold a command header",
RCCommandCode: "command code not supported",
RCAuthSize: "the value of authorizationSize is out of range or the number of octets in the Authorization Area is greater than required",
RCAuthContext: "use of an authorization session with a context command or another command that cannot have an authorization session",
RCNVRange: "NV offset+size is out of range",
RCNVSize: "Requested allocation size is larger than allowed",
RCNVLocked: "NV access locked",
RCNVAuthorization: "NV access authorization fails in command actions",
RCNVUninitialized: "an NV Index is used before being initialized or the state saved by TPM2_Shutdown(STATE) could not be restored",
RCNVSpace: "insufficient space for NV allocation",
RCNVDefined: "NV Index or persistent object already defined",
RCBadContext: "context in TPM2_ContextLoad() is not valid",
RCCPHash: "cpHash value already set or not correct for use",
RCParent: "handle for parent is not a valid parent",
RCNeedsTest: "some function needs testing",
RCNoResult: "returned when an internal function cannot process a request due to an unspecified problem; this code is usually related to invalid parameters that are not properly filtered by the input unmarshaling code",
RCSensitive: "the sensitive area did not unmarshal correctly after decryption",
}
// Format 1 error codes.
const (
RCAsymmetric = 0x01
RCAttributes = 0x02
RCHash = 0x03
RCValue = 0x04
RCHierarchy = 0x05
RCKeySize = 0x07
RCMGF = 0x08
RCMode = 0x09
RCType = 0x0A
RCHandle = 0x0B
RCKDF = 0x0C
RCRange = 0x0D
RCAuthFail = 0x0E
RCNonce = 0x0F
RCPP = 0x10
RCScheme = 0x12
RCSize = 0x15
RCSymmetric = 0x16
RCTag = 0x17
RCSelector = 0x18
RCInsufficient = 0x1A
RCSignature = 0x1B
RCKey = 0x1C
RCPolicyFail = 0x1D
RCIntegrity = 0x1F
RCTicket = 0x20
RCReservedBits = 0x21
RCBadAuth = 0x22
RCExpired = 0x23
RCPolicyCC = 0x24
RCBinding = 0x25
RCCurve = 0x26
RCECCPoint = 0x27
)
var fmt1Msg = map[RCFmt1]string{
RCAsymmetric: "asymmetric algorithm not supported or not correct",
RCAttributes: "inconsistent attributes",
RCHash: "hash algorithm not supported or not appropriate",
RCValue: "value is out of range or is not correct for the context",
RCHierarchy: "hierarchy is not enabled or is not correct for the use",
RCKeySize: "key size is not supported",
RCMGF: "mask generation function not supported",
RCMode: "mode of operation not supported",
RCType: "the type of the value is not appropriate for the use",
RCHandle: "the handle is not correct for the use",
RCKDF: "unsupported key derivation function or function not appropriate for use",
RCRange: "value was out of allowed range",
RCAuthFail: "the authorization HMAC check failed and DA counter incremented",
RCNonce: "invalid nonce size or nonce value mismatch",
RCPP: "authorization requires assertion of PP",
RCScheme: "unsupported or incompatible scheme",
RCSize: "structure is the wrong size",
RCSymmetric: "unsupported symmetric algorithm or key size, or not appropriate for instance",
RCTag: "incorrect structure tag",
RCSelector: "union selector is incorrect",
RCInsufficient: "the TPM was unable to unmarshal a value because there were not enough octets in the input buffer",
RCSignature: "the signature is not valid",
RCKey: "key fields are not compatible with the selected use",
RCPolicyFail: "a policy check failed",
RCIntegrity: "integrity check failed",
RCTicket: "invalid ticket",
RCReservedBits: "reserved bits not set to zero as required",
RCBadAuth: "authorization failure without DA implications",
RCExpired: "the policy has expired",
RCPolicyCC: "the commandCode in the policy is not the commandCode of the command or the command code in a policy command references a command that is not implemented",
RCBinding: "public and sensitive portions of an object are not cryptographically bound",
RCCurve: "curve not supported",
RCECCPoint: "point is not on the required curve",
}
// Warning codes.
const (
RCContextGap RCWarn = 0x01
RCObjectMemory RCWarn = 0x02
RCSessionMemory RCWarn = 0x03
RCMemory RCWarn = 0x04
RCSessionHandles RCWarn = 0x05
RCObjectHandles RCWarn = 0x06
RCLocality RCWarn = 0x07
RCYielded RCWarn = 0x08
RCCanceled RCWarn = 0x09
RCTesting RCWarn = 0x0A
RCReferenceH0 RCWarn = 0x10
RCReferenceH1 RCWarn = 0x11
RCReferenceH2 RCWarn = 0x12
RCReferenceH3 RCWarn = 0x13
RCReferenceH4 RCWarn = 0x14
RCReferenceH5 RCWarn = 0x15
RCReferenceH6 RCWarn = 0x16
RCReferenceS0 RCWarn = 0x18
RCReferenceS1 RCWarn = 0x19
RCReferenceS2 RCWarn = 0x1A
RCReferenceS3 RCWarn = 0x1B
RCReferenceS4 RCWarn = 0x1C
RCReferenceS5 RCWarn = 0x1D
RCReferenceS6 RCWarn = 0x1E
RCNVRate RCWarn = 0x20
RCLockout RCWarn = 0x21
RCRetry RCWarn = 0x22
RCNVUnavailable RCWarn = 0x23
)
var warnMsg = map[RCWarn]string{
RCContextGap: "gap for context ID is too large",
RCObjectMemory: "out of memory for object contexts",
RCSessionMemory: "out of memory for session contexts",
RCMemory: "out of shared object/session memory or need space for internal operations",
RCSessionHandles: "out of session handles",
RCObjectHandles: "out of object handles",
RCLocality: "bad locality",
RCYielded: "the TPM has suspended operation on the command; forward progress was made and the command may be retried",
RCCanceled: "the command was canceled",
RCTesting: "TPM is performing self-tests",
RCReferenceH0: "the 1st handle in the handle area references a transient object or session that is not loaded",
RCReferenceH1: "the 2nd handle in the handle area references a transient object or session that is not loaded",
RCReferenceH2: "the 3rd handle in the handle area references a transient object or session that is not loaded",
RCReferenceH3: "the 4th handle in the handle area references a transient object or session that is not loaded",
RCReferenceH4: "the 5th handle in the handle area references a transient object or session that is not loaded",
RCReferenceH5: "the 6th handle in the handle area references a transient object or session that is not loaded",
RCReferenceH6: "the 7th handle in the handle area references a transient object or session that is not loaded",
RCReferenceS0: "the 1st authorization session handle references a session that is not loaded",
RCReferenceS1: "the 2nd authorization session handle references a session that is not loaded",
RCReferenceS2: "the 3rd authorization session handle references a session that is not loaded",
RCReferenceS3: "the 4th authorization session handle references a session that is not loaded",
RCReferenceS4: "the 5th authorization session handle references a session that is not loaded",
RCReferenceS5: "the 6th authorization session handle references a session that is not loaded",
RCReferenceS6: "the 7th authorization session handle references a session that is not loaded",
RCNVRate: "the TPM is rate-limiting accesses to prevent wearout of NV",
RCLockout: "authorizations for objects subject to DA protection are not allowed at this time because the TPM is in DA lockout mode",
RCRetry: "the TPM was not able to start the command",
RCNVUnavailable: "the command may require writing of NV and NV is not currently accessible",
}
// Indexes for arguments, handles and sessions.
const (
RC1 RCIndex = iota + 1
RC2
RC3
RC4
RC5
RC6
RC7
RC8
RC9
RCA
RCB
RCC
RCD
RCE
RCF
)
const unknownCode = "unknown error code"
// Error is returned for all Format 0 errors from the TPM. It is used for general
// errors not specific to a parameter, handle or session.
type Error struct {
Code RCFmt0
}
func (e Error) Error() string {
msg := fmt0Msg[e.Code]
if msg == "" {
msg = unknownCode
}
return fmt.Sprintf("error code 0x%x : %s", e.Code, msg)
}
// VendorError represents a vendor-specific error response. These types of responses
// are not decoded and Code contains the complete response code.
type VendorError struct {
Code uint32
}
func (e VendorError) Error() string {
return fmt.Sprintf("vendor error code 0x%x", e.Code)
}
// Warning is typically used to report transient errors.
type Warning struct {
Code RCWarn
}
func (w Warning) Error() string {
msg := warnMsg[w.Code]
if msg == "" {
msg = unknownCode
}
return fmt.Sprintf("warning code 0x%x : %s", w.Code, msg)
}
// ParameterError describes an error related to a parameter, and the parameter number.
type ParameterError struct {
Code RCFmt1
Parameter RCIndex
}
func (e ParameterError) Error() string {
msg := fmt1Msg[e.Code]
if msg == "" {
msg = unknownCode
}
return fmt.Sprintf("parameter %d, error code 0x%x : %s", e.Parameter, e.Code, msg)
}
// HandleError describes an error related to a handle, and the handle number.
type HandleError struct {
Code RCFmt1
Handle RCIndex
}
func (e HandleError) Error() string {
msg := fmt1Msg[e.Code]
if msg == "" {
msg = unknownCode
}
return fmt.Sprintf("handle %d, error code 0x%x : %s", e.Handle, e.Code, msg)
}
// SessionError describes an error related to a session, and the session number.
type SessionError struct {
Code RCFmt1
Session RCIndex
}
func (e SessionError) Error() string {
msg := fmt1Msg[e.Code]
if msg == "" {
msg = unknownCode
}
return fmt.Sprintf("session %d, error code 0x%x : %s", e.Session, e.Code, msg)
}
// Decode a TPM2 response code and return the appropriate error. Logic
// according to the "Response Code Evaluation" chart in Part 1 of the TPM 2.0
// spec.
func decodeResponse(code tpmutil.ResponseCode) error {
if code == tpmutil.RCSuccess {
return nil
}
if code&0x180 == 0 { // Bits 7:8 == 0 is a TPM1 error
return fmt.Errorf("response status 0x%x", code)
}
if code&0x80 == 0 { // Bit 7 unset
if code&0x400 > 0 { // Bit 10 set, vendor specific code
return VendorError{uint32(code)}
}
if code&0x800 > 0 { // Bit 11 set, warning with code in bit 0:6
return Warning{RCWarn(code & 0x7f)}
}
// error with code in bit 0:6
return Error{RCFmt0(code & 0x7f)}
}
if code&0x40 > 0 { // Bit 6 set, code in 0:5, parameter number in 8:11
return ParameterError{RCFmt1(code & 0x3f), RCIndex((code & 0xf00) >> 8)}
}
if code&0x800 == 0 { // Bit 11 unset, code in 0:5, handle in 8:10
return HandleError{RCFmt1(code & 0x3f), RCIndex((code & 0x700) >> 8)}
}
// Code in 0:5, Session in 8:10
return SessionError{RCFmt1(code & 0x3f), RCIndex((code & 0x700) >> 8)}
}
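The bit tests in `decodeResponse` follow the "Response Code Evaluation" chart: bits 7:8 distinguish TPM 1.2 codes, bit 7 selects Format 0 vs Format 1, and bits 6, 10, and 11 pick vendor, warning, parameter, handle, or session cases. A standalone sketch tracing the same classification on concrete codes (it returns a label rather than the typed errors above):

```go
package main

import "fmt"

// classify mirrors the branch structure of decodeResponse, returning a
// label instead of a typed error value.
func classify(code uint32) string {
	if code == 0 {
		return "success"
	}
	if code&0x180 == 0 { // bits 7:8 clear: a TPM 1.2 style code
		return "tpm1"
	}
	if code&0x80 == 0 { // bit 7 clear: Format 0
		if code&0x400 > 0 { // bit 10: vendor specific
			return "vendor"
		}
		if code&0x800 > 0 { // bit 11: warning, code in bits 0:6
			return fmt.Sprintf("warning 0x%x", code&0x7f)
		}
		return fmt.Sprintf("error 0x%x", code&0x7f)
	}
	if code&0x40 > 0 { // bit 6 set: parameter number in bits 8:11
		return fmt.Sprintf("parameter %d, error 0x%x", (code&0xf00)>>8, code&0x3f)
	}
	if code&0x800 == 0 { // bit 11 clear: handle number in bits 8:10
		return fmt.Sprintf("handle %d, error 0x%x", (code&0x700)>>8, code&0x3f)
	}
	return fmt.Sprintf("session %d, error 0x%x", (code&0x700)>>8, code&0x3f)
}

func main() {
	// RCRetry-style warning (0x22) with bits 8 and 11 set.
	fmt.Println(classify(0x922))
	// RCValue (0x04) reported against parameter 2: bits 6 and 7 set.
	fmt.Println(classify(0x2c4))
}
```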

vendor/github.com/google/go-tpm/legacy/tpm2/kdf.go generated vendored Normal file

@@ -0,0 +1,116 @@
// Copyright (c) 2018, Google LLC All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tpm2
import (
"crypto"
"crypto/hmac"
"encoding/binary"
"hash"
)
// KDFa implements TPM 2.0's default key derivation function, as defined in
// section 11.4.9.2 of the TPM revision 2 specification part 1.
// See: https://trustedcomputinggroup.org/resource/tpm-library-specification/
// The key & label parameters must not be zero length.
// The label parameter is a non-null-terminated string.
// The contextU & contextV parameters are optional.
// Deprecated: Use KDFaHash.
func KDFa(hashAlg Algorithm, key []byte, label string, contextU, contextV []byte, bits int) ([]byte, error) {
h, err := hashAlg.Hash()
if err != nil {
return nil, err
}
return KDFaHash(h, key, label, contextU, contextV, bits), nil
}
// KDFe implements TPM 2.0's ECDH key derivation function, as defined in
// section 11.4.9.3 of the TPM revision 2 specification part 1.
// See: https://trustedcomputinggroup.org/resource/tpm-library-specification/
// The z parameter is the x coordinate of one party's private ECC key multiplied
// by the other party's public ECC point.
// The use parameter is a non-null-terminated string.
// The partyUInfo and partyVInfo are the x coordinates of the initiator's and
// the responder's ECC points, respectively.
// Deprecated: Use KDFeHash.
func KDFe(hashAlg Algorithm, z []byte, use string, partyUInfo, partyVInfo []byte, bits int) ([]byte, error) {
h, err := hashAlg.Hash()
if err != nil {
return nil, err
}
return KDFeHash(h, z, use, partyUInfo, partyVInfo, bits), nil
}
// KDFaHash implements TPM 2.0's default key derivation function, as defined in
// section 11.4.9.2 of the TPM revision 2 specification part 1.
// See: https://trustedcomputinggroup.org/resource/tpm-library-specification/
// The key & label parameters must not be zero length.
// The label parameter is a non-null-terminated string.
// The contextU & contextV parameters are optional.
func KDFaHash(h crypto.Hash, key []byte, label string, contextU, contextV []byte, bits int) []byte {
mac := hmac.New(h.New, key)
out := kdf(mac, bits, func() {
mac.Write([]byte(label))
mac.Write([]byte{0}) // Terminating null character for C-string.
mac.Write(contextU)
mac.Write(contextV)
binary.Write(mac, binary.BigEndian, uint32(bits))
})
return out
}
// KDFeHash implements TPM 2.0's ECDH key derivation function, as defined in
// section 11.4.9.3 of the TPM revision 2 specification part 1.
// See: https://trustedcomputinggroup.org/resource/tpm-library-specification/
// The z parameter is the x coordinate of one party's private ECC key multiplied
// by the other party's public ECC point.
// The use parameter is a non-null-terminated string.
// The partyUInfo and partyVInfo are the x coordinates of the initiator's and
// the responder's ECC points, respectively.
func KDFeHash(h crypto.Hash, z []byte, use string, partyUInfo, partyVInfo []byte, bits int) []byte {
hash := h.New()
out := kdf(hash, bits, func() {
hash.Write(z)
hash.Write([]byte(use))
hash.Write([]byte{0}) // Terminating null character for C-string.
hash.Write(partyUInfo)
hash.Write(partyVInfo)
})
return out
}
func kdf(h hash.Hash, bits int, update func()) []byte {
bytes := (bits + 7) / 8
out := []byte{}
for counter := 1; len(out) < bytes; counter++ {
h.Reset()
binary.Write(h, binary.BigEndian, uint32(counter))
update()
out = h.Sum(out)
}
// out's length is a multiple of hash size, so there will be excess
// bytes if bytes isn't a multiple of hash size.
out = out[:bytes]
// As mentioned in the KDFa and KDFe specs mentioned above,
// the unused bits of the most significant octet are masked off.
if maskBits := uint8(bits % 8); maskBits > 0 {
out[0] &= (1 << maskBits) - 1
}
return out
}
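The `kdf` helper above is counter-mode derivation: each iteration hashes a big-endian counter followed by the caller-supplied fields, output blocks are concatenated until enough bytes exist, and surplus bits in the most significant octet are masked off. A self-contained sketch of the same pattern specialized to HMAC-SHA256, following the KDFa field layout (the inputs are illustrative, not TPM test vectors):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// kdfaSketch mirrors KDFaHash for HMAC-SHA256: per iteration it MACs
// counter || label || 0x00 || contextU || contextV || bits.
func kdfaSketch(key []byte, label string, contextU, contextV []byte, bits int) []byte {
	mac := hmac.New(sha256.New, key)
	nbytes := (bits + 7) / 8
	out := []byte{}
	for counter := uint32(1); len(out) < nbytes; counter++ {
		mac.Reset()
		binary.Write(mac, binary.BigEndian, counter)
		mac.Write([]byte(label))
		mac.Write([]byte{0}) // terminating null character, as in the vendored code
		mac.Write(contextU)
		mac.Write(contextV)
		binary.Write(mac, binary.BigEndian, uint32(bits))
		out = mac.Sum(out)
	}
	out = out[:nbytes] // trim excess whole bytes
	if maskBits := uint8(bits % 8); maskBits > 0 {
		out[0] &= (1 << maskBits) - 1 // mask unused bits of the top octet
	}
	return out
}

func main() {
	// 20 bits -> 3 bytes, top nibble masked to zero.
	k := kdfaSketch([]byte("seed"), "STORAGE", nil, nil, 20)
	fmt.Printf("%d bytes, top octet 0x%02x\n", len(k), k[0])
}
```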


@@ -0,0 +1,57 @@
//go:build !windows
// Copyright (c) 2019, Google LLC All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tpm2
import (
"errors"
"fmt"
"io"
"os"
"github.com/google/go-tpm/tpmutil"
)
// OpenTPM opens a channel to the TPM at the given path. If the file is a
// device, then it treats it like a normal TPM device, and if the file is a
// Unix domain socket, then it opens a connection to the socket.
//
// This function may also be invoked with no paths, as tpm2.OpenTPM(). In this
// case, the default paths on Linux (/dev/tpmrm0 then /dev/tpm0) will be used.
func OpenTPM(path ...string) (tpm io.ReadWriteCloser, err error) {
switch len(path) {
case 0:
tpm, err = tpmutil.OpenTPM("/dev/tpmrm0")
if errors.Is(err, os.ErrNotExist) {
tpm, err = tpmutil.OpenTPM("/dev/tpm0")
}
case 1:
tpm, err = tpmutil.OpenTPM(path[0])
default:
return nil, errors.New("cannot specify multiple paths to tpm2.OpenTPM")
}
if err != nil {
return nil, err
}
// Make sure this is a TPM 2.0
_, err = GetManufacturer(tpm)
if err != nil {
tpm.Close()
return nil, fmt.Errorf("open %s: device is not a TPM 2.0", path)
}
return tpm, nil
}


@@ -1,4 +1,6 @@
// Copyright 2018-2024 CERN
//go:build windows
// Copyright (c) 2018, Google LLC All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -11,15 +13,27 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// In applying this license, CERN does not waive the privileges and immunities
// granted to it by virtue of its status as an Intergovernmental Organization
// or submit itself to any jurisdiction.
package loader
package tpm2
import (
// Load core storage filesystem backends.
_ "github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix"
// Add your own here
"fmt"
"io"
"github.com/google/go-tpm/tpmutil"
"github.com/google/go-tpm/tpmutil/tbs"
)
// OpenTPM opens a channel to the TPM.
func OpenTPM() (io.ReadWriteCloser, error) {
info, err := tbs.GetDeviceInfo()
if err != nil {
return nil, err
}
if info.TPMVersion != tbs.TPMVersion20 {
return nil, fmt.Errorf("openTPM: device is not a TPM 2.0")
}
return tpmutil.OpenTPM()
}
