Compare commits


34 Commits

Author SHA1 Message Date
Artur Neumann
903346a712 fix translation 2025-03-13 16:51:41 +05:45
Andre Duffeck
5d577930ae Merge pull request #366 from aduffeck/bump-reva-766c69
Bump reva
2025-03-13 09:27:52 +01:00
André Duffeck
c897ec321f Get rid of 'retry' dependency 2025-03-13 08:51:42 +01:00
André Duffeck
9f91963036 Use official golang image 2025-03-13 08:51:42 +01:00
André Duffeck
d772aea84a Adapt to changed signatures in reva 2025-03-13 08:51:42 +01:00
André Duffeck
b6db5f7677 Bump reva 2025-03-13 08:51:41 +01:00
Artur Neumann
747df432d0 Merge pull request #357 from opencloud-eu/test/localApitests-ci
Run localApiTests in CI
2025-03-13 12:56:38 +05:45
prashant-gurung899
b56c74cfea run localApiTests in CI
Signed-off-by: prashant-gurung899 <prasantgrg777@gmail.com>
2025-03-13 12:36:53 +05:45
Ralf Haferkamp
3ecb89733d Merge pull request #364 from opencloud-eu/dependabot/go_modules/github.com/leonelquinteros/gotext-1.7.1
build(deps): bump github.com/leonelquinteros/gotext from 1.7.0 to 1.7.1
2025-03-12 17:17:23 +01:00
Ralf Haferkamp
fde6a96045 Merge pull request #365 from opencloud-eu/dependabot/go_modules/golang.org/x/oauth2-0.28.0
build(deps): bump golang.org/x/oauth2 from 0.26.0 to 0.28.0
2025-03-12 17:16:50 +01:00
dependabot[bot]
38d3c7b826 build(deps): bump golang.org/x/oauth2 from 0.26.0 to 0.28.0
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from 0.26.0 to 0.28.0.
- [Commits](https://github.com/golang/oauth2/compare/v0.26.0...v0.28.0)

---
updated-dependencies:
- dependency-name: golang.org/x/oauth2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-12 14:44:59 +00:00
dependabot[bot]
202b7cc7e6 build(deps): bump github.com/leonelquinteros/gotext from 1.7.0 to 1.7.1
Bumps [github.com/leonelquinteros/gotext](https://github.com/leonelquinteros/gotext) from 1.7.0 to 1.7.1.
- [Release notes](https://github.com/leonelquinteros/gotext/releases)
- [Commits](https://github.com/leonelquinteros/gotext/compare/v1.7.0...v1.7.1)

---
updated-dependencies:
- dependency-name: github.com/leonelquinteros/gotext
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-12 14:44:53 +00:00
Michael Barz
7de08f90b3 Merge pull request #343 from opencloud-eu/litmus-in-ci
run webDAV litmus tests in CI
2025-03-12 12:29:28 +01:00
Artur Neumann
8fd3c1fb10 remove commented line 2025-03-12 16:32:05 +05:45
Artur Neumann
322f395b39 rename config folder 2025-03-12 16:01:54 +05:45
Artur Neumann
dfa782cccc run WebDAV litmus tests in CI 2025-03-12 15:41:46 +05:45
Michael Barz
a000dd9612 fix: wrong trigger 2025-03-12 10:20:35 +01:00
Michael Barz
b0cc4e59fa fix: wrong trigger and branch 2025-03-12 10:07:42 +01:00
Ralf Haferkamp
314a79710c Merge pull request #349 from opencloud-eu/dependabot/go_modules/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc-1.35.0
Bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc from 1.34.0 to 1.35.0
2025-03-12 09:59:40 +01:00
Artur Neumann
90cdca03f1 Merge pull request #354 from opencloud-eu/remove-s3-upload
fix: remove unneeded upload steps
2025-03-12 14:24:21 +05:45
Michael Barz
f413c618e5 fix: remove unneeded upload steps 2025-03-12 09:28:09 +01:00
dependabot[bot]
7cfc6eb429 Bump go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc
Bumps [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc](https://github.com/open-telemetry/opentelemetry-go) from 1.34.0 to 1.35.0.
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-go/compare/v1.34.0...v1.35.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-12 08:02:35 +00:00
Ralf Haferkamp
bb55aa8405 Merge pull request #348 from opencloud-eu/dependabot/go_modules/google.golang.org/protobuf-1.36.5
Bump google.golang.org/protobuf from 1.36.3 to 1.36.5
2025-03-12 09:00:49 +01:00
dependabot[bot]
0834c15c87 Bump google.golang.org/protobuf from 1.36.3 to 1.36.5
Bumps google.golang.org/protobuf from 1.36.3 to 1.36.5.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-12 07:40:31 +00:00
Artur Neumann
b075b177d6 tests: add pipeline config (#341)
Co-authored-by: Michael Barz <michael.barz@zeitgestalten.eu>
2025-03-12 12:22:13 +05:45
Ralf Haferkamp
52af0283f1 Merge pull request #347 from opencloud-eu/remove-wikipedia-from-app-switcher
Remove wikipedia from app switcher in full deployment example
2025-03-11 15:23:25 +01:00
Alex Ackermann
a07cf4d682 Remove wikipedia from app switcher in full deployment example 2025-03-11 15:07:46 +01:00
Alex
f5d4c0cc3d Fix csp.yaml for full deployment example (#345) 2025-03-11 13:27:30 +01:00
Ralf Haferkamp
a245a45e9c Merge pull request #336 from opencloud-eu/dependabot/npm_and_yarn/services/idp/babel-preset-react-app-10.1.0
Bump babel-preset-react-app from 10.0.1 to 10.1.0 in /services/idp
2025-03-11 12:26:28 +01:00
Florian Schade
ad4b27b928 Merge pull request #344 from rhafer/make-generate-test-go
Reintroduce check for go before including bingo Makefile
2025-03-11 11:47:45 +01:00
Ralf Haferkamp
48edc9a5d1 Reintroduce check for go before including bingo Makefile
This re-adds the check for go being installed before including the
bingo variables make file, to avoid repeated errors about a missing
go binary when running 'make node-generate' in CI (the node
container doesn't have go installed)
2025-03-11 11:19:20 +01:00
dependabot[bot]
6268247434 Bump babel-preset-react-app from 10.0.1 to 10.1.0 in /services/idp
Bumps [babel-preset-react-app](https://github.com/facebook/create-react-app/tree/HEAD/packages/babel-preset-react-app) from 10.0.1 to 10.1.0.
- [Release notes](https://github.com/facebook/create-react-app/releases)
- [Changelog](https://github.com/facebook/create-react-app/blob/main/CHANGELOG-1.x.md)
- [Commits](https://github.com/facebook/create-react-app/commits/babel-preset-react-app@10.1.0/packages/babel-preset-react-app)

---
updated-dependencies:
- dependency-name: babel-preset-react-app
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-11 08:22:45 +00:00
Ralf Haferkamp
e406d05cb4 Merge pull request #335 from opencloud-eu/dependabot/npm_and_yarn/services/idp/babel/core-7.26.9
Bump @babel/core from 7.22.11 to 7.26.9 in /services/idp
2025-03-11 09:21:07 +01:00
dependabot[bot]
88c169898c Bump @babel/core from 7.22.11 to 7.26.9 in /services/idp
Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.22.11 to 7.26.9.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.9/packages/babel-core)

---
updated-dependencies:
- dependency-name: "@babel/core"
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-10 15:30:54 +00:00
221 changed files with 4695 additions and 5890 deletions


@@ -1,4 +1,4 @@
"""oCIS CI definition
"""OpenCloud CI definition
"""
# Production release tags
@@ -24,7 +24,7 @@ OC_CI_BAZEL_BUILDIFIER = "owncloudci/bazel-buildifier:latest"
OC_CI_CLAMAVD = "owncloudci/clamavd"
OC_CI_DRONE_ANSIBLE = "owncloudci/drone-ansible:latest"
OC_CI_DRONE_SKIP_PIPELINE = "owncloudci/drone-skip-pipeline"
OC_CI_GOLANG = "owncloudci/golang:1.24"
OC_CI_GOLANG = "docker.io/golang:1.24"
OC_CI_NODEJS = "owncloudci/nodejs:%s"
OC_CI_PHP = "owncloudci/php:%s"
OC_CI_WAIT_FOR = "owncloudci/wait-for:latest"
@@ -56,21 +56,21 @@ dirs = {
"baseGo": "/go/src/github.com/opencloud-eu/opencloud",
"gobinTar": "go-bin.tar.gz",
"gobinTarPath": "/go/src/github.com/opencloud-eu/opencloud/go-bin.tar.gz",
"ocisConfig": "tests/config/drone/ocis-config.json",
"opencloudConfig": "tests/config/woodpecker/opencloud-config.json",
"ocis": "/woodpecker/src/github.com/opencloud-eu/opencloud/srv/app/tmp/ocis",
"ocisRevaDataRoot": "/woodpecker/src/github.com/opencloud-eu/opencloud/srv/app/tmp/ocis/owncloud/data",
"ocisWrapper": "/woodpecker/src/github.com/opencloud-eu/opencloud/tests/ociswrapper",
"bannedPasswordList": "tests/config/drone/banned-password-list.txt",
"ocmProviders": "tests/config/drone/providers.json",
"ocWrapper": "/woodpecker/src/github.com/opencloud-eu/opencloud/tests/ocwrapper",
"bannedPasswordList": "tests/config/woodpecker/banned-password-list.txt",
"ocmProviders": "tests/config/woodpecker/providers.json",
"opencloudBinPath": "opencloud/bin",
"opencloudBin": "opencloud/bin/opencloud",
"opencloudBinArtifact": "opencloud-binary-amd64",
}
# OCIS URLs
# OpenCloud URLs
OC_SERVER_NAME = "opencloud-server"
OCIS_URL = "https://%s:9200" % OC_SERVER_NAME
OCIS_DOMAIN = "%s:9200" % OC_SERVER_NAME
OC_URL = "https://%s:9200" % OC_SERVER_NAME
OC_DOMAIN = "%s:9200" % OC_SERVER_NAME
FED_OC_SERVER_NAME = "federation-opencloud-server"
OC_FED_URL = "https://%s:10200" % FED_OC_SERVER_NAME
OC_FED_DOMAIN = "%s:10200" % FED_OC_SERVER_NAME
@@ -78,13 +78,13 @@ OC_FED_DOMAIN = "%s:10200" % FED_OC_SERVER_NAME
# configuration
config = {
"cs3ApiTests": {
"skip": False,
"skip": True,
},
"wopiValidatorTests": {
"skip": False,
"skip": True,
},
"k6LoadTests": {
"skip": False,
"skip": True,
},
"localApiTests": {
"basic": {
@@ -98,13 +98,13 @@ config = {
"apiLocks",
"apiActivities",
],
"skip": False,
"skip": True,
},
"settings": {
"suites": [
"apiSettings",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
"emailNeeded": True,
"extraEnvironment": {
@@ -125,27 +125,27 @@ config = {
"apiGraph",
"apiServiceAvailability",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
},
"graphUserGroup": {
"suites": [
"apiGraphUserGroup",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
},
"spaces": {
"suites": [
"apiSpaces",
],
"skip": False,
"skip": True,
},
"spacesShares": {
"suites": [
"apiSpacesShares",
],
"skip": False,
"skip": True,
},
"spacesDavOperation": {
"suites": [
@@ -157,13 +157,13 @@ config = {
"suites": [
"apiSearch1",
],
"skip": False,
"skip": True,
},
"search2": {
"suites": [
"apiSearch2",
],
"skip": False,
"skip": True,
},
"sharingNg": {
"suites": [
@@ -171,23 +171,23 @@ config = {
"apiSharingNg1",
"apiSharingNg2",
],
"skip": False,
"skip": True,
},
"sharingNgShareInvitation": {
"suites": [
"apiSharingNgShareInvitation",
],
"skip": False,
"skip": True,
},
"sharingNgLinkShare": {
"suites": [
"apiSharingNgLinkSharePermission",
"apiSharingNgLinkShareRoot",
],
"skip": False,
"skip": True,
},
"accountsHashDifficulty": {
"skip": False,
"skip": True,
"suites": [
"apiAccountsHashDifficulty",
],
@@ -197,7 +197,7 @@ config = {
"suites": [
"apiNotification",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
"emailNeeded": True,
"extraEnvironment": {
@@ -217,7 +217,7 @@ config = {
"suites": [
"apiAntivirus",
],
"skip": False,
"skip": True,
"antivirusNeeded": True,
"extraServerEnvironment": {
"ANTIVIRUS_SCANNER_TYPE": "clamav",
@@ -232,14 +232,14 @@ config = {
"suites": [
"apiSearchContent",
],
"skip": False,
"skip": True,
"tikaNeeded": True,
},
"ocm": {
"suites": [
"apiOcm",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
"federationServer": True,
"emailNeeded": True,
@@ -265,7 +265,7 @@ config = {
"suites": [
"apiCollaboration",
],
"skip": False,
"skip": True,
"collaborationServiceNeeded": True,
"extraServerEnvironment": {
"GATEWAY_GRPC_ADDR": "0.0.0.0:9142",
@@ -275,7 +275,7 @@ config = {
"suites": [
"apiAuthApp",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
"extraServerEnvironment": {
"OCIS_ADD_RUN_SERVICES": "auth-app",
@@ -286,7 +286,7 @@ config = {
"suites": [
"cliCommands",
],
"skip": False,
"skip": True,
"withRemotePhp": [True],
"antivirusNeeded": True,
"extraServerEnvironment": {
@@ -299,24 +299,24 @@ config = {
},
"apiTests": {
"numberOfParts": 7,
"skip": False,
"skip": True,
"skipExceptParts": [],
},
"e2eTests": {
"part": {
"skip": False,
"skip": True,
"totalParts": 4, # divide and run all suites in parts (divide pipelines)
"xsuites": ["search", "app-provider", "oidc", "ocm"], # suites to skip
},
"search": {
"skip": False,
"skip": True,
"suites": ["search"], # suites to run
"tikaNeeded": True,
},
},
"e2eMultiService": {
"testSuites": {
"skip": False,
"skip": True,
"suites": [
"smoke",
"shares",
@@ -368,12 +368,12 @@ MINIO_MC_ENV = {
}
CI_HTTP_PROXY_ENV = {
# "HTTP_PROXY": {
# "from_secret": "ci_http_proxy",
# },
# "HTTPS_PROXY": {
# "from_secret": "ci_http_proxy",
# },
"HTTP_PROXY": {
"from_secret": "ci_http_proxy",
},
"HTTPS_PROXY": {
"from_secret": "ci_http_proxy",
},
}
def pipelineDependsOn(pipeline, dependant_pipelines):
@@ -427,12 +427,12 @@ def main(ctx):
checkTestSuitesInExpectedFailures(ctx) + \
buildWebCache(ctx) + \
getGoBinForTesting(ctx) + \
buildOcisBinaryForTesting(ctx) + \
buildOpencloudBinaryForTesting(ctx) + \
checkStarlark() + \
build_release_helpers + \
testOcisAndUploadResults(ctx)
testOpencloudAndUploadResults(ctx) + \
testPipelines(ctx)
# testPipelines(ctx)
# build_release_pipelines = \
# dockerReleases(ctx) + \
# binaryReleases(ctx)
@@ -492,8 +492,8 @@ def buildWebCache(ctx):
cachePipeline("web-pnpm", generateWebPnpmCache(ctx)),
]
def testOcisAndUploadResults(ctx):
pipeline = testOcis(ctx)
def testOpencloudAndUploadResults(ctx):
pipeline = testOpencloud(ctx)
######################################################################
# The triggers have been disabled for now, since the govulncheck can #
@@ -510,7 +510,7 @@ def testPipelines(ctx):
pipelines = []
if config["litmus"]:
pipelines += litmus(ctx, "ocis")
pipelines += litmus(ctx, "decomposed")
if "skip" not in config["cs3ApiTests"] or not config["cs3ApiTests"]["skip"]:
pipelines.append(cs3ApiTests(ctx, "ocis", "default"))
@@ -563,7 +563,7 @@ def checkGoBinCache():
},
},
"commands": [
"bash -x %s/tests/config/drone/check_go_bin_cache.sh %s %s" % (dirs["baseGo"], dirs["baseGo"], dirs["gobinTar"]),
"bash -x %s/tests/config/woodpecker/check_go_bin_cache.sh %s %s" % (dirs["baseGo"], dirs["baseGo"], dirs["gobinTar"]),
],
}]
@@ -620,7 +620,7 @@ def restoreGoBinCache():
},
]
def testOcis(ctx):
def testOpencloud(ctx):
steps = restoreGoBinCache() + makeGoGenerate("") + [
{
"name": "golangci-lint",
@@ -651,7 +651,7 @@ def testOcis(ctx):
},
"bucket": "cache",
"source": "cache/**/*",
"target": "%s/%s" % (repo_slug, ctx.build.commit + "-${DRONE_BUILD_NUMBER}"),
"target": "%s/%s" % (repo_slug, ctx.build.commit + "-${CI_PIPELINE_NUMBER}"),
"path_style": True,
"access_key": {
"from_secret": "cache_s3_access_key",
@@ -713,7 +713,7 @@ def scanOcis(ctx):
"workspace": workspace,
}
def buildOcisBinaryForTesting(ctx):
def buildOpencloudBinaryForTesting(ctx):
return [{
"name": "build_opencloud_binary_for_testing",
"steps": makeNodeGenerate("") +
@@ -885,7 +885,7 @@ def localApiTestPipeline(ctx):
"skip": False,
"extraEnvironment": {},
"extraServerEnvironment": {},
"storages": ["ocis"],
"storages": ["decomposed"],
"accounts_hash_difficulty": 4,
"emailNeeded": False,
"antivirusNeeded": False,
@@ -909,10 +909,10 @@ def localApiTestPipeline(ctx):
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
(tikaService() if params["tikaNeeded"] else []) +
(waitForServices("online-offices", ["collabora:9980", "onlyoffice:443", "fakeoffice:8080"]) if params["collaborationServiceNeeded"] else []) +
ocisServer(storage, params["accounts_hash_difficulty"], extra_server_environment = params["extraServerEnvironment"], with_wrapper = True, tika_enabled = params["tikaNeeded"]) +
opencloudServer(storage, params["accounts_hash_difficulty"], extra_server_environment = params["extraServerEnvironment"], with_wrapper = True, tika_enabled = params["tikaNeeded"]) +
(waitForClamavService() if params["antivirusNeeded"] else []) +
(waitForEmailService() if params["emailNeeded"] else []) +
(ocisServer(storage, params["accounts_hash_difficulty"], deploy_type = "federation", extra_server_environment = params["extraServerEnvironment"]) if params["federationServer"] else []) +
(opencloudServer(storage, params["accounts_hash_difficulty"], deploy_type = "federation", extra_server_environment = params["extraServerEnvironment"]) if params["federationServer"] else []) +
((wopiCollaborationService("fakeoffice") + wopiCollaborationService("collabora") + wopiCollaborationService("onlyoffice")) if params["collaborationServiceNeeded"] else []) +
(ocisHealthCheck("wopi", ["wopi-collabora:9304", "wopi-onlyoffice:9304", "wopi-fakeoffice:9304"]) if params["collaborationServiceNeeded"] else []) +
localApiTests(ctx, name, params["suites"], storage, params["extraEnvironment"], run_with_remote_php) +
@@ -920,7 +920,7 @@ def localApiTestPipeline(ctx):
"services": (emailService() if params["emailNeeded"] else []) +
(clamavService() if params["antivirusNeeded"] else []) +
((fakeOffice() + collaboraService() + onlyofficeService()) if params["collaborationServiceNeeded"] else []),
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx)),
"when": [
{
"event": ["push", "manual"],
@@ -937,21 +937,21 @@ def localApiTestPipeline(ctx):
pipelines.append(pipeline)
return pipelines
def localApiTests(ctx, name, suites, storage = "ocis", extra_environment = {}, with_remote_php = False):
def localApiTests(ctx, name, suites, storage = "decomposed", extra_environment = {}, with_remote_php = False):
test_dir = "%s/tests/acceptance" % dirs["base"]
expected_failures_file = "%s/expected-failures-localAPI-on-%s-storage.md" % (test_dir, storage.upper())
expected_failures_file = "%s/expected-failures-localAPI-on-%s-storage.md" % (test_dir, storage)
environment = {
"TEST_SERVER_URL": OCIS_URL,
"TEST_SERVER_URL": OC_URL,
"TEST_SERVER_FED_URL": OC_FED_URL,
"OCIS_REVA_DATA_ROOT": "%s" % (dirs["ocisRevaDataRoot"] if storage == "owncloud" else ""),
"SEND_SCENARIO_LINE_REFERENCES": True,
"STORAGE_DRIVER": storage,
"BEHAT_SUITES": ",".join(suites),
"BEHAT_FILTER_TAGS": "~@skip&&~@skipOnGraph&&~@skipOnOcis-%s-Storage" % ("OC" if storage == "owncloud" else "OCIS"),
"BEHAT_FILTER_TAGS": "~@skip&&~@skipOnGraph&&~@skipOnOpencloud-%s-Storage" % storage,
"EXPECTED_FAILURES_FILE": expected_failures_file,
"UPLOAD_DELETE_WAIT_TIME": "1" if storage == "owncloud" else 0,
"OCIS_WRAPPER_URL": "http://%s:5200" % OC_SERVER_NAME,
"OC_WRAPPER_URL": "http://%s:5200" % OC_SERVER_NAME,
"WITH_REMOTE_PHP": with_remote_php,
"COLLABORATION_SERVICE_URL": "http://wopi-fakeoffice:9300",
}
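The hunk above derives the Behat skip tag directly from the storage name instead of the old OC/OCIS special-casing. A minimal sketch of the new construction, using only names visible in the diff:

```python
def behat_filter_tags(storage="decomposed"):
    # New scheme from the hunk above: the skip tag embeds the storage
    # driver name directly (e.g. ~@skipOnOpencloud-decomposed-Storage),
    # replacing the old ternary on "owncloud" vs everything else.
    return "~@skip&&~@skipOnGraph&&~@skipOnOpencloud-%s-Storage" % storage
```

For example, `behat_filter_tags("posix")` yields a tag ending in `~@skipOnOpencloud-posix-Storage`, so renaming a storage driver no longer requires touching the filter logic.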
@@ -974,7 +974,7 @@ def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
return {
"name": "cs3ApiTests",
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
ocisServer(storage, accounts_hash_difficulty, [], [], "cs3api_validator") +
opencloudServer(storage, accounts_hash_difficulty, [], [], "cs3api_validator") +
[
{
"name": "cs3ApiTests",
@@ -985,7 +985,7 @@ def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
],
},
],
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx)),
"when": [
{
"event": ["push", "manual"],
@@ -1028,7 +1028,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
"image": "cs3org/wopiserver:v10.4.0",
"detach": True,
"commands": [
"cp %s/tests/config/drone/wopiserver.conf /etc/wopi/wopiserver.conf" % (dirs["base"]),
"cp %s/tests/config/woodpecker/wopiserver.conf /etc/wopi/wopiserver.conf" % (dirs["base"]),
"echo 123 > /etc/wopi/wopisecret",
"/app/wopiserver.py",
],
@@ -1078,7 +1078,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
fakeOffice() +
waitForServices("fake-office", ["fakeoffice:8080"]) +
ocisServer(storage, accounts_hash_difficulty, deploy_type = "wopi_validator", extra_server_environment = extra_server_environment) +
opencloudServer(storage, accounts_hash_difficulty, deploy_type = "wopi_validator", extra_server_environment = extra_server_environment) +
wopiServer +
waitForServices("wopi-fakeoffice", ["wopi-fakeoffice:9300"]) +
[
@@ -1087,10 +1087,10 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
"image": OC_CI_ALPINE,
"environment": {},
"commands": [
"curl -v -X PUT '%s/remote.php/webdav/test.wopitest' -k --fail --retry-connrefused --retry 7 --retry-all-errors -u admin:admin -D headers.txt" % OCIS_URL,
"curl -v -X PUT '%s/remote.php/webdav/test.wopitest' -k --fail --retry-connrefused --retry 7 --retry-all-errors -u admin:admin -D headers.txt" % OC_URL,
"cat headers.txt",
"export FILE_ID=$(cat headers.txt | sed -n -e 's/^.*Oc-Fileid: //p')",
"export URL=\"%s/app/open?app_name=FakeOffice&file_id=$FILE_ID\"" % OCIS_URL,
"export URL=\"%s/app/open?app_name=FakeOffice&file_id=$FILE_ID\"" % OC_URL,
"export URL=$(echo $URL | tr -d '[:cntrl:]')",
"curl -v -X POST \"$URL\" -k --fail --retry-connrefused --retry 7 --retry-all-errors -u admin:admin > open.json",
"cat open.json",
@@ -1102,7 +1102,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
},
] +
validatorTests,
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx)),
"when": [
{
"event": ["push", "manual"],
@@ -1125,13 +1125,13 @@ def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = Fa
return {
"name": "Core-API-Tests-%s%s" % (part_number, "-withoutRemotePhp" if not with_remote_php else ""),
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
ocisServer(storage, accounts_hash_difficulty, with_wrapper = True) +
opencloudServer(storage, accounts_hash_difficulty, with_wrapper = True) +
[
{
"name": "oC10ApiTests-%s" % part_number,
"image": OC_CI_PHP % DEFAULT_PHP_VERSION,
"environment": {
"TEST_SERVER_URL": OCIS_URL,
"TEST_SERVER_URL": OC_URL,
"OCIS_REVA_DATA_ROOT": "%s" % (dirs["ocisRevaDataRoot"] if storage == "owncloud" else ""),
"SEND_SCENARIO_LINE_REFERENCES": True,
"STORAGE_DRIVER": storage,
@@ -1153,7 +1153,7 @@ def coreApiTests(ctx, part_number = 1, number_of_parts = 1, with_remote_php = Fa
] +
logRequests(),
"services": redisForOCStorage(storage),
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx)),
"when": [
{
"event": ["push", "manual"],
@@ -1249,16 +1249,16 @@ def e2eTestPipeline(ctx):
restoreWebCache() + \
restoreWebPnpmCache() + \
(tikaService() if params["tikaNeeded"] else []) + \
ocisServer(extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"])
opencloudServer(extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"])
step_e2e = {
"name": "e2e-tests",
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"environment": {
"BASE_URL_OCIS": OCIS_DOMAIN,
"BASE_URL_OCIS": OC_DOMAIN,
"HEADLESS": True,
"RETRY": "1",
"WEB_UI_CONFIG_FILE": "%s/%s" % (dirs["base"], dirs["ocisConfig"]),
"WEB_UI_CONFIG_FILE": "%s/%s" % (dirs["base"], dirs["opencloudConfig"]),
"LOCAL_UPLOAD_DIR": "/uploads",
},
"commands": [
@@ -1281,7 +1281,7 @@ def e2eTestPipeline(ctx):
pipelines.append({
"name": "e2e-tests-%s-%s" % (name, run_part),
"steps": steps_before + [run_e2e] + steps_after,
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx) + buildWebCache(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"when": e2e_trigger,
})
else:
@@ -1289,7 +1289,7 @@ def e2eTestPipeline(ctx):
pipelines.append({
"name": "e2e-tests-%s" % name,
"steps": steps_before + [step_e2e] + steps_after,
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx) + buildWebCache(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"when": e2e_trigger,
})
@@ -1339,14 +1339,14 @@ def multiServiceE2ePipeline(ctx):
}
storage_users_environment = {
"OCIS_CORS_ALLOW_ORIGINS": "%s,https://%s:9201" % (OCIS_URL, OC_SERVER_NAME),
"OCIS_CORS_ALLOW_ORIGINS": "%s,https://%s:9201" % (OC_URL, OC_SERVER_NAME),
"STORAGE_USERS_JWT_SECRET": "some-ocis-jwt-secret",
"STORAGE_USERS_MOUNT_ID": "storage-users-id",
"STORAGE_USERS_SERVICE_ACCOUNT_ID": "service-account-id",
"STORAGE_USERS_SERVICE_ACCOUNT_SECRET": "service-account-secret",
"STORAGE_USERS_GATEWAY_GRPC_ADDR": "%s:9142" % OC_SERVER_NAME,
"STORAGE_USERS_EVENTS_ENDPOINT": "%s:9233" % OC_SERVER_NAME,
"STORAGE_USERS_DATA_GATEWAY_URL": "%s/data" % OCIS_URL,
"STORAGE_USERS_DATA_GATEWAY_URL": "%s/data" % OC_URL,
"OCIS_CACHE_STORE": "nats-js-kv",
"OCIS_CACHE_STORE_NODES": "%s:9233" % OC_SERVER_NAME,
"MICRO_REGISTRY_ADDRESS": "%s:9233" % OC_SERVER_NAME,
@@ -1398,13 +1398,13 @@ def multiServiceE2ePipeline(ctx):
restoreWebCache() + \
restoreWebPnpmCache() + \
tikaService() + \
ocisServer(extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"]) + \
opencloudServer(extra_server_environment = extra_server_environment, tika_enabled = params["tikaNeeded"]) + \
storage_users_services + \
[{
"name": "e2e-tests",
"image": OC_CI_NODEJS % DEFAULT_NODEJS_VERSION,
"environment": {
"BASE_URL_OCIS": OCIS_DOMAIN,
"BASE_URL_OCIS": OC_DOMAIN,
"HEADLESS": True,
"RETRY": "1",
},
@@ -1418,7 +1418,7 @@ def multiServiceE2ePipeline(ctx):
pipelines.append({
"name": "e2e-tests-multi-service",
"steps": steps,
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx) + buildWebCache(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx) + buildWebCache(ctx)),
"workspace": e2e_trigger,
})
return pipelines
@@ -1437,7 +1437,7 @@ def uploadTracingResult(ctx):
"path_style": True,
"source": "webTestRunner/reports/e2e/playwright/tracing/**/*",
"strip_prefix": "webTestRunner/reports/e2e/playwright/tracing",
"target": "/${DRONE_REPO}/${DRONE_BUILD_NUMBER}/tracing",
"target": "/${DRONE_REPO}/${CI_PIPELINE_NUMBER}/tracing",
},
"environment": {
"AWS_ACCESS_KEY_ID": {
@@ -1465,7 +1465,7 @@ def logTracingResults():
"commands": [
"cd %s/reports/e2e/playwright/tracing/" % dirs["web"],
'echo "To see the trace, please open the following link in the console"',
'for f in *.zip; do echo "npx playwright show-trace https://cache.owncloud.com/public/${DRONE_REPO}/${DRONE_BUILD_NUMBER}/tracing/$f \n"; done',
'for f in *.zip; do echo "npx playwright show-trace https://cache.owncloud.com/public/${DRONE_REPO}/${CI_PIPELINE_NUMBER}/tracing/$f \n"; done',
],
"when": {
"status": [
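The tracing hunks above swap the Drone build-number variable for Woodpecker's `${CI_PIPELINE_NUMBER}` while keeping the cache URL layout. A small sketch of how those `playwright show-trace` links are assembled (the helper name is illustrative, not from the pipeline):

```python
def trace_links(repo, pipeline_number, zip_names):
    # URL layout taken from the hunk above:
    # https://cache.owncloud.com/public/<repo>/<pipeline>/tracing/<file>
    base = "https://cache.owncloud.com/public/%s/%s/tracing" % (repo, pipeline_number)
    return ["npx playwright show-trace %s/%s" % (base, name) for name in zip_names]
```

In CI the repo and pipeline number come from `${DRONE_REPO}` and `${CI_PIPELINE_NUMBER}`, and the loop runs once per uploaded `*.zip` trace.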
@@ -1521,7 +1521,7 @@ def dockerRelease(ctx, arch, repo, build_type):
"REVISION=%s" % (ctx.build.commit),
"VERSION=%s" % (ctx.build.ref.replace("refs/tags/", "") if ctx.build.event == "tag" else "master"),
]
depends_on = getPipelineNames(testOcisAndUploadResults(ctx) + testPipelines(ctx))
depends_on = getPipelineNames(testOpencloudAndUploadResults(ctx) + testPipelines(ctx))
if ctx.build.event == "tag":
depends_on = []
@@ -1613,7 +1613,7 @@ def binaryReleases(ctx):
# uploads binary to https://download.owncloud.com/ocis/ocis/daily/
target = "/ocis/%s/daily" % (ctx.repo.name.replace("ocis-", ""))
depends_on = getPipelineNames(testOcisAndUploadResults(ctx) + testPipelines(ctx))
depends_on = getPipelineNames(testOpencloudAndUploadResults(ctx) + testPipelines(ctx))
if ctx.build.event == "tag":
depends_on = []
@@ -1657,25 +1657,6 @@ def binaryReleases(ctx):
return pipelines
def binaryRelease(ctx, arch, build_type, target, depends_on = []):
settings = {
"endpoint": {
"from_secret": "upload_s3_endpoint",
},
"access_key": {
"from_secret": "upload_s3_access_key",
},
"secret_key": {
"from_secret": "upload_s3_secret_key",
},
"bucket": {
"from_secret": "upload_s3_bucket",
},
"path_style": True,
"strip_prefix": "ocis/dist/release/",
"source": "ocis/dist/release/*",
"target": target,
}
return {
"name": "binaries-%s-%s" % (arch, build_type),
"steps": makeNodeGenerate("") +
@@ -1706,20 +1687,6 @@ def binaryRelease(ctx, arch, build_type, target, depends_on = []):
},
],
},
{
"name": "upload",
"image": PLUGINS_S3,
"settings": settings,
"when": [
{
"event": ["push", "manual"],
"branch": "main",
},
{
"event": "tag",
},
],
},
{
"name": "changelog",
"image": OC_CI_GOLANG,
@@ -1785,24 +1752,6 @@ def licenseCheck(ctx):
folder = "testing"
target = "/ocis/%s/%s/%s" % (ctx.repo.name.replace("ocis-", ""), folder, buildref)
settings = {
"endpoint": {
"from_secret": "upload_s3_endpoint",
},
"access_key": {
"from_secret": "upload_s3_access_key",
},
"secret_key": {
"from_secret": "upload_s3_secret_key",
},
"bucket": {
"from_secret": "upload_s3_bucket",
},
"path_style": True,
"source": "third-party-licenses.tar.gz",
"target": target,
}
return [{
"name": "check-licenses",
"steps": [
@@ -1843,20 +1792,6 @@ def licenseCheck(ctx):
"cd third-party-licenses && tar -czf ../third-party-licenses.tar.gz *",
],
},
{
"name": "upload",
"image": PLUGINS_S3,
"settings": settings,
"when": [
{
"event": "tag",
},
{
"event": ["push", "manual"],
"branch": "main",
},
],
},
{
"name": "changelog",
"image": OC_CI_GOLANG,
@@ -2004,7 +1939,7 @@ def changelog():
"push",
],
"message": "Automated changelog update [skip ci]",
"branch": "master",
"branch": "main",
"author_email": "devops@opencloud.eu",
"author_name": "openclouders",
"netrc_machine": "github.com",
@@ -2107,7 +2042,7 @@ def makeNodeGenerate(module):
},
"commands": [
"pnpm config set store-dir ./.pnpm-store",
"retry -t 3 '%s node-generate-prod'" % (make),
"for i in $(seq 3); do %s node-generate-prod && break || sleep 1; done" % (make),
],
},
]
@@ -2122,7 +2057,7 @@ def makeGoGenerate(module):
"name": "generate go",
"image": OC_CI_GOLANG,
"commands": [
"retry -t 3 '%s go-generate'" % (make),
"for i in $(seq 3); do %s go-generate && break || sleep 1; done" % (make),
],
"environment": CI_HTTP_PROXY_ENV,
},
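Both generate steps above drop the external `retry` helper in favor of a plain shell loop. One caveat with the `for i in $(seq 3); do cmd && break || sleep 1; done` one-liner is that the loop's exit status is that of the last command executed (`sleep 1` after the final failure), so the step can report success even when every attempt failed. A minimal Python sketch of the same retry idea that does propagate the final result (the helper name is mine, not from the pipeline):

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=1):
    """Run a shell command, retrying up to `attempts` times.

    Mirrors the diff's `for i in $(seq 3); do cmd && break || sleep 1; done`
    loop, but returns False when every attempt fails instead of silently
    succeeding.
    """
    for _ in range(attempts):
        if subprocess.run(cmd, shell=True).returncode == 0:
            return True
        time.sleep(delay)
    return False
```

Usage would be e.g. `run_with_retries("make go-generate")`; in the shell version the same effect could be had by appending an explicit `exit 1` after the loop.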
@@ -2153,35 +2088,34 @@ def notify(ctx):
{
"event": ["push", "manual"],
"branch": ["main", "release-*"],
"status": status,
},
{
"event": "tag",
"status": status,
},
],
"runs_on": status,
}
def ocisServer(storage = "ocis", accounts_hash_difficulty = 4, volumes = [], depends_on = [], deploy_type = "", extra_server_environment = {}, with_wrapper = False, tika_enabled = False):
def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, volumes = [], depends_on = [], deploy_type = "", extra_server_environment = {}, with_wrapper = False, tika_enabled = False):
user = "0:0"
container_name = OC_SERVER_NAME
environment = {
"OCIS_URL": OCIS_URL,
"OCIS_CONFIG_DIR": "/root/.ocis/config", # needed for checking config later
"OC_URL": OC_URL,
"OC_CONFIG_DIR": "/root/.opencloud/config", # needed for checking config later
"STORAGE_USERS_DRIVER": "%s" % (storage),
"PROXY_ENABLE_BASIC_AUTH": True,
"WEB_UI_CONFIG_FILE": "%s/%s" % (dirs["base"], dirs["ocisConfig"]),
"OCIS_LOG_LEVEL": "error",
"WEB_UI_CONFIG_FILE": "%s/%s" % (dirs["base"], dirs["opencloudConfig"]),
"OC_LOG_LEVEL": "error",
"IDM_CREATE_DEMO_USERS": True, # needed for litmus and cs3api-validator tests
"IDM_ADMIN_PASSWORD": "admin", # override the random admin password from `ocis init`
"IDM_ADMIN_PASSWORD": "admin", # override the random admin password from `opencloud init`
"FRONTEND_SEARCH_MIN_LENGTH": "2",
"OCIS_ASYNC_UPLOADS": True,
"OCIS_EVENTS_ENABLE_TLS": False,
"OC_ASYNC_UPLOADS": True,
"OC_EVENTS_ENABLE_TLS": False,
"NATS_NATS_HOST": "0.0.0.0",
"NATS_NATS_PORT": 9233,
"OCIS_JWT_SECRET": "some-ocis-jwt-secret",
"OC_JWT_SECRET": "some-opencloud-jwt-secret",
"EVENTHISTORY_STORE": "memory",
"OCIS_TRANSLATION_PATH": "%s/tests/config/translations" % dirs["base"],
"OC_TRANSLATION_PATH": "%s/tests/config/translations" % dirs["base"],
# debug addresses required for running services health tests
"ACTIVITYLOG_DEBUG_ADDR": "0.0.0.0:9197",
"APP_PROVIDER_DEBUG_ADDR": "0.0.0.0:9165",
@@ -2224,11 +2158,11 @@ def ocisServer(storage = "ocis", accounts_hash_difficulty = 4, volumes = [], dep
environment["FRONTEND_OCS_ENABLE_DENIALS"] = True
# fonts map for txt thumbnails (including unicode support)
environment["THUMBNAILS_TXT_FONTMAP_FILE"] = "%s/tests/config/drone/fontsMap.json" % (dirs["base"])
environment["THUMBNAILS_TXT_FONTMAP_FILE"] = "%s/tests/config/woodpecker/fontsMap.json" % (dirs["base"])
if deploy_type == "cs3api_validator":
environment["GATEWAY_GRPC_ADDR"] = "0.0.0.0:9142" # make gateway available to cs3api-validator
environment["OCIS_SHARING_PUBLIC_SHARE_MUST_HAVE_PASSWORD"] = False
environment["OC_SHARING_PUBLIC_SHARE_MUST_HAVE_PASSWORD"] = False
if deploy_type == "wopi_validator":
environment["GATEWAY_GRPC_ADDR"] = "0.0.0.0:9142" # make gateway available to wopi server
@@ -2238,10 +2172,10 @@ def ocisServer(storage = "ocis", accounts_hash_difficulty = 4, volumes = [], dep
environment["APP_PROVIDER_WOPI_APP_URL"] = "http://fakeoffice:8080"
environment["APP_PROVIDER_WOPI_INSECURE"] = True
environment["APP_PROVIDER_WOPI_WOPI_SERVER_EXTERNAL_URL"] = "http://wopi-fakeoffice:9300"
environment["APP_PROVIDER_WOPI_FOLDER_URL_BASE_URL"] = OCIS_URL
environment["APP_PROVIDER_WOPI_FOLDER_URL_BASE_URL"] = OC_URL
if deploy_type == "federation":
environment["OCIS_URL"] = OC_FED_URL
environment["OC_URL"] = OC_FED_URL
environment["PROXY_HTTP_ADDR"] = OC_FED_DOMAIN
container_name = FED_OC_SERVER_NAME
@@ -2252,7 +2186,7 @@ def ocisServer(storage = "ocis", accounts_hash_difficulty = 4, volumes = [], dep
environment["SEARCH_EXTRACTOR_CS3SOURCE_INSECURE"] = True
# Pass in "default" accounts_hash_difficulty to not set this environment variable.
# That will allow OCIS to use whatever its built-in default is.
# That will allow OpenCloud to use whatever its built-in default is.
# Otherwise pass in a value from 4 to about 11 or 12 (default 4, for making regular tests fast)
# The high values cause lots of CPU to be used when hashing passwords, and really slow down the tests.
if (accounts_hash_difficulty != "default"):
@@ -2262,43 +2196,50 @@ def ocisServer(storage = "ocis", accounts_hash_difficulty = 4, volumes = [], dep
environment[item] = extra_server_environment[item]
wrapper_commands = [
"make -C %s build" % dirs["ocisWrapper"],
"%s/bin/ociswrapper serve --bin %s --url %s --admin-username admin --admin-password admin" % (dirs["ocisWrapper"], dirs["opencloudBin"], environment["OCIS_URL"]),
"make -C %s build" % dirs["ocWrapper"],
"%s/bin/ocwrapper serve --bin %s --url %s --admin-username admin --admin-password admin" % (dirs["ocWrapper"], dirs["opencloudBin"], environment["OC_URL"]),
]
wait_for_ocis = {
wait_for_opencloud = {
"name": "wait-for-%s" % (container_name),
"image": OC_CI_ALPINE,
"commands": [
# wait for ocis-server to be ready (5 minutes)
# wait for opencloud-server to be ready (5 minutes)
"timeout 300 bash -c 'while [ $(curl -sk -uadmin:admin " +
"%s/graph/v1.0/users/admin " % environment["OCIS_URL"] +
"%s/graph/v1.0/users/admin " % environment["OC_URL"] +
"-w %{http_code} -o /dev/null) != 200 ]; do sleep 1; done'",
],
"depends_on": depends_on,
}
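The wait step above polls the graph endpoint with `curl` until it returns HTTP 200, bounded by `timeout 300`. A generic sketch of that readiness poll, assuming a stand-in `wait_until` helper and an arbitrary probe command in place of the real curl health check:

```shell
#!/bin/sh
# Poll a probe command once per second until it succeeds,
# returning non-zero once the deadline (in seconds) passes.
wait_until() {
  deadline=$(( $(date +%s) + $1 )); shift
  while ! "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

wait_until 5 true && echo "ready"  # prints: ready
```

In the pipeline the probe is the `curl -sk -uadmin:admin … -w %{http_code}` check against `$OC_URL`, with a 300-second deadline.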
return [
{
"name": container_name,
"image": OC_CI_GOLANG,
"detach": True,
"environment": environment,
"backend_options": {
"docker": {
"user": user,
},
opencloud_server = {
"name": container_name,
"image": OC_CI_GOLANG,
"detach": True,
"environment": environment,
"backend_options": {
"docker": {
"user": user,
},
"commands": [
"%s init --insecure true" % dirs["opencloudBin"],
"cat $OCIS_CONFIG_DIR/ocis.yaml",
"cp tests/config/drone/app-registry.yaml /root/.ocis/config/app-registry.yaml",
] + (wrapper_commands),
"depends_on": depends_on,
},
wait_for_ocis,
"commands": [
"%s init --insecure true" % dirs["opencloudBin"],
"cat $OC_CONFIG_DIR/opencloud.yaml",
"cp tests/config/woodpecker/app-registry.yaml $OC_CONFIG_DIR/app-registry.yaml",
] + (wrapper_commands),
}
steps = [
opencloud_server,
wait_for_opencloud,
]
    # an empty depends_on list makes the steps run in parallel, which we don't want
if depends_on:
steps[0]["depends_on"] = depends_on
steps[1]["depends_on"] = depends_on
return steps
def startOcisService(service = None, name = None, environment = {}, volumes = []):
"""
Starts an OCIS service in a detached container.
@@ -2350,7 +2291,7 @@ def build():
"name": "build",
"image": OC_CI_GOLANG,
"commands": [
"retry -t 3 'make -C opencloud build'",
"for i in $(seq 3); do make -C opencloud build && break || sleep 1; done",
],
"environment": CI_HTTP_PROXY_ENV,
},
@@ -2577,7 +2518,7 @@ def genericCachePurge(flush_path):
def genericBuildArtifactCache(ctx, name, action, path):
if action == "rebuild" or action == "restore":
cache_path = "%s/%s/%s" % ("cache", repo_slug, ctx.build.commit + "-${DRONE_BUILD_NUMBER}")
cache_path = "%s/%s/%s" % ("cache", repo_slug, ctx.build.commit + "-${CI_PIPELINE_NUMBER}")
name = "%s_build_artifact_cache" % (name)
return genericCache(name, action, [path], cache_path)
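The cache path above keys build artifacts by repo slug, commit, and CI pipeline number, so each build gets a distinct cache entry. A hypothetical sketch of that scheme (function name and values are illustrative only; the real code interpolates `${CI_PIPELINE_NUMBER}` at runtime):

```python
# Build the per-build artifact cache path: cache/<repo>/<commit>-<pipeline>.
def build_artifact_cache_path(repo_slug, commit, pipeline_number):
    return "%s/%s/%s" % ("cache", repo_slug, commit + "-" + pipeline_number)

print(build_artifact_cache_path("opencloud-eu/opencloud", "abc123", "42"))
```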
@@ -2679,7 +2620,7 @@ def litmus(ctx, storage):
result = {
"name": "litmus",
"steps": restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
ocisServer(storage) +
opencloudServer(storage) +
setupForLitmus() +
[
{
@@ -2688,7 +2629,7 @@ def litmus(ctx, storage):
"environment": environment,
"commands": [
"source .env",
'export LITMUS_URL="%s/remote.php/webdav"' % OCIS_URL,
'export LITMUS_URL="%s/remote.php/webdav"' % OC_URL,
litmusCommand,
],
},
@@ -2698,7 +2639,7 @@ def litmus(ctx, storage):
"environment": environment,
"commands": [
"source .env",
'export LITMUS_URL="%s/remote.php/dav/files/admin"' % OCIS_URL,
'export LITMUS_URL="%s/remote.php/dav/files/admin"' % OC_URL,
litmusCommand,
],
},
@@ -2708,7 +2649,7 @@ def litmus(ctx, storage):
"environment": environment,
"commands": [
"source .env",
'export LITMUS_URL="%s/remote.php/dav/files/admin/Shares/new_folder/"' % OCIS_URL,
'export LITMUS_URL="%s/remote.php/dav/files/admin/Shares/new_folder/"' % OC_URL,
litmusCommand,
],
},
@@ -2718,7 +2659,7 @@ def litmus(ctx, storage):
"environment": environment,
"commands": [
"source .env",
'export LITMUS_URL="%s/remote.php/webdav/Shares/new_folder/"' % OCIS_URL,
'export LITMUS_URL="%s/remote.php/webdav/Shares/new_folder/"' % OC_URL,
litmusCommand,
],
},
@@ -2742,13 +2683,13 @@ def litmus(ctx, storage):
"environment": environment,
"commands": [
"source .env",
"export LITMUS_URL='%s/remote.php/dav/spaces/'$SPACE_ID" % OCIS_URL,
"export LITMUS_URL='%s/remote.php/dav/spaces/'$SPACE_ID" % OC_URL,
litmusCommand,
],
},
],
"services": redisForOCStorage(storage),
"depends_on": getPipelineNames(buildOcisBinaryForTesting(ctx)),
"depends_on": getPipelineNames(buildOpencloudBinaryForTesting(ctx)),
"when": [
{
"event": ["push", "manual"],
@@ -2771,10 +2712,10 @@ def setupForLitmus():
"name": "setup-for-litmus",
"image": OC_UBUNTU,
"environment": {
"TEST_SERVER_URL": OCIS_URL,
"TEST_SERVER_URL": OC_URL,
},
"commands": [
"bash ./tests/config/drone/setup-for-litmus.sh",
"bash ./tests/config/woodpecker/setup-for-litmus.sh",
"cat .env",
],
}]
@@ -2782,7 +2723,7 @@ def setupForLitmus():
def getDroneEnvAndCheckScript(ctx):
ocis_git_base_url = "https://raw.githubusercontent.com/opencloud-eu/opencloud"
path_to_drone_env = "%s/%s/.drone.env" % (ocis_git_base_url, ctx.build.commit)
path_to_check_script = "%s/%s/tests/config/drone/check_web_cache.sh" % (ocis_git_base_url, ctx.build.commit)
path_to_check_script = "%s/%s/tests/config/woodpecker/check_web_cache.sh" % (ocis_git_base_url, ctx.build.commit)
return {
"name": "get-drone-env-and-check-script",
"image": OC_UBUNTU,
@@ -2833,7 +2774,7 @@ def generateWebPnpmCache(ctx):
"cd %s" % dirs["web"],
'npm install --silent --global --force "$(jq -r ".packageManager" < package.json)"',
"pnpm config set store-dir ./.pnpm-store",
"retry -t 3 'pnpm install'",
"for i in $(seq 3); do pnpm install && break || sleep 1; done",
],
},
{
@@ -2925,7 +2866,7 @@ def restoreWebPnpmCache():
"tar -xvf %s" % dirs["webPnpmZip"],
'npm install --silent --global --force "$(jq -r ".packageManager" < package.json)"',
"pnpm config set store-dir ./.pnpm-store",
"retry -t 3 'pnpm install'",
"for i in $(seq 3); do pnpm install && break || sleep 1; done",
],
}]
@@ -2967,7 +2908,7 @@ def fakeOffice():
"detach": True,
"environment": {},
"commands": [
"sh %s/tests/config/drone/serve-hosting-discovery.sh" % (dirs["base"]),
"sh %s/tests/config/woodpecker/serve-hosting-discovery.sh" % (dirs["base"]),
],
},
]
@@ -3070,7 +3011,7 @@ def k6LoadTests(ctx):
return []
ocis_git_base_url = "https://raw.githubusercontent.com/opencloud-eu/opencloud"
script_link = "%s/%s/tests/config/drone/run_k6_tests.sh" % (ocis_git_base_url, ctx.build.commit)
script_link = "%s/%s/tests/config/woodpecker/run_k6_tests.sh" % (ocis_git_base_url, ctx.build.commit)
event_array = ["cron"]
@@ -3160,7 +3101,7 @@ def collaboraService():
"detach": True,
"environment": {
"DONT_GEN_SSL_CERT": "set",
"extra_params": "--o:ssl.enable=true --o:ssl.termination=true --o:welcome.enable=false --o:net.frame_ancestors=%s" % OCIS_URL,
"extra_params": "--o:ssl.enable=true --o:ssl.termination=true --o:welcome.enable=false --o:net.frame_ancestors=%s" % OC_URL,
},
"commands": [
"coolconfig generate-proof-key",
@@ -3181,7 +3122,7 @@ def onlyofficeService():
"USE_UNAUTHORIZED_STORAGE": True, # self signed certificates
},
"commands": [
"cp %s/tests/config/drone/only-office.json /etc/onlyoffice/documentserver/local.json" % dirs["base"],
"cp %s/tests/config/woodpecker/only-office.json /etc/onlyoffice/documentserver/local.json" % dirs["base"],
"openssl req -x509 -newkey rsa:4096 -keyout onlyoffice.key -out onlyoffice.crt -sha256 -days 365 -batch -nodes",
"mkdir -p /var/www/onlyoffice/Data/certs",
"cp onlyoffice.key /var/www/onlyoffice/Data/certs/",


@@ -15,9 +15,3 @@ external-sites:
color: "#E2BAFF"
icon: cloud
priority: 50
- name: Wikipedia
url: "https://www.wikipedia.org"
target: external
color: "#0D856F"
icon: book
priority: 51


@@ -6,7 +6,7 @@ directives:
- 'blob:'
- 'https://${COMPANION_DOMAIN|companion.opencloud.test}/'
- 'wss://${COMPANION_DOMAIN|companion.opencloud.test}/'
- 'https://raw.githubusercontent.com/opencloud/awesome-apps/'
- 'https://raw.githubusercontent.com/opencloud-eu/awesome-apps/'
default-src:
- '''none'''
font-src:
@@ -26,7 +26,7 @@ directives:
- '''self'''
- 'data:'
- 'blob:'
- 'https://raw.githubusercontent.com/opencloud/awesome-apps/'
- 'https://raw.githubusercontent.com/opencloud-eu/awesome-apps/'
  # In contrast to bash and docker, the default is given after the | character
- 'https://${ONLYOFFICE_DOMAIN|onlyoffice.opencloud.test}/'
- 'https://${COLLABORA_DOMAIN|collabora.opencloud.test}/'


@@ -15,4 +15,3 @@ If you want to contribute to the dev docs, please visit [OpenCloud on Github](ht
Contents will be transferred during the build process.
A change to trigger CI

go.mod

@@ -1,6 +1,6 @@
module github.com/opencloud-eu/opencloud
go 1.24.0
go 1.24.1
require (
dario.cat/mergo v1.0.1
@@ -42,14 +42,14 @@ require (
github.com/google/uuid v1.6.0
github.com/gookit/config/v2 v2.2.5
github.com/gorilla/mux v1.8.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.0
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1
github.com/invopop/validation v0.8.0
github.com/jellydator/ttlcache/v2 v2.11.1
github.com/jellydator/ttlcache/v3 v3.3.0
github.com/jinzhu/now v1.1.5
github.com/justinas/alice v1.2.0
github.com/kovidgoyal/imaging v1.6.3
github.com/leonelquinteros/gotext v1.7.0
github.com/leonelquinteros/gotext v1.7.1
github.com/libregraph/idm v0.5.0
github.com/libregraph/lico v0.65.1
github.com/mitchellh/mapstructure v1.5.0
@@ -63,7 +63,7 @@ require (
github.com/onsi/ginkgo/v2 v2.23.0
github.com/onsi/gomega v1.36.2
github.com/open-policy-agent/opa v1.1.0
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250304172421-22b1ead80cdd
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250312134906-766c69c5d1be
github.com/orcaman/concurrent-map v1.0.0
github.com/owncloud/libre-graph-api-go v1.0.5-0.20240829135935-80dc00d6f5ea
github.com/pkg/errors v0.9.1
@@ -82,7 +82,7 @@ require (
github.com/test-go/testify v1.1.4
github.com/thejerf/suture/v4 v4.0.6
github.com/tidwall/gjson v1.18.0
github.com/tus/tusd/v2 v2.6.0
github.com/tus/tusd/v2 v2.7.1
github.com/unrolled/secure v1.16.0
github.com/urfave/cli/v2 v2.27.5
github.com/xhit/go-simple-mail/v2 v2.16.0
@@ -93,20 +93,20 @@ require (
go.opentelemetry.io/contrib/zpages v0.57.0
go.opentelemetry.io/otel v1.35.0
go.opentelemetry.io/otel/exporters/jaeger v1.17.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0
go.opentelemetry.io/otel/sdk v1.35.0
go.opentelemetry.io/otel/trace v1.35.0
golang.org/x/crypto v0.34.0
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c
golang.org/x/crypto v0.36.0
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac
golang.org/x/image v0.24.0
golang.org/x/net v0.35.0
golang.org/x/oauth2 v0.26.0
golang.org/x/sync v0.11.0
golang.org/x/term v0.29.0
golang.org/x/text v0.22.0
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f
google.golang.org/grpc v1.70.0
google.golang.org/protobuf v1.36.3
golang.org/x/net v0.37.0
golang.org/x/oauth2 v0.28.0
golang.org/x/sync v0.12.0
golang.org/x/term v0.30.0
golang.org/x/text v0.23.0
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a
google.golang.org/grpc v1.71.0
google.golang.org/protobuf v1.36.5
gopkg.in/yaml.v2 v2.4.0
gotest.tools/v3 v3.5.2
stash.kopano.io/kgol/rndm v1.1.2
@@ -308,24 +308,24 @@ require (
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.18 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.18 // indirect
go.etcd.io/etcd/client/v3 v3.5.18 // indirect
go.etcd.io/etcd/api/v3 v3.5.19 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.19 // indirect
go.etcd.io/etcd/client/v3 v3.5.19 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.23.0 // indirect
golang.org/x/mod v0.23.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/time v0.10.0 // indirect
golang.org/x/tools v0.30.0 // indirect
golang.org/x/tools v0.31.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2 // indirect
gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect

go.sum

@@ -581,8 +581,8 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vb
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.0 h1:VD1gqscl4nYs1YxVuSdemTrSgTKrwOWDK0FVFMqm+Cg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.0/go.mod h1:4EgsQoS4TOhJizV+JTFg40qx1Ofh3XmXEQNBpgvNT40=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1 h1:e9Rjr40Z98/clHv5Yg79Is0NtosR5LXRvdr7o/6NwbA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1/go.mod h1:tIxuGz/9mpox++sgp9fJjHO0+q1X9/UOWd798aAm22M=
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
@@ -721,8 +721,8 @@ github.com/labstack/echo/v4 v4.1.11/go.mod h1:i541M3Fj6f76NZtHSj7TXnyM8n2gaodfvf
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/leonelquinteros/gotext v1.7.0 h1:jcJmF4AXqyamP7vuw2MMIKs+O3jAEmvrc5JQiI8Ht/8=
github.com/leonelquinteros/gotext v1.7.0/go.mod h1:qJdoQuERPpccw7L70uoU+K/BvTfRBHYsisCQyFLXyvw=
github.com/leonelquinteros/gotext v1.7.1 h1:/JNPeE3lY5JeVYv2+KBpz39994W3W9fmZCGq3eO9Ri8=
github.com/leonelquinteros/gotext v1.7.1/go.mod h1:I0WoFDn9u2D3VbPnnDPT8mzZu0iSXG8iih+AH2fHHqg=
github.com/libregraph/idm v0.5.0 h1:tDMwKbAOZzdeDYMxVlY5PbSqRKO7dbAW9KT42A51WSk=
github.com/libregraph/idm v0.5.0/go.mod h1:BGMwIQ/6orJSPVzJ1x6kgG2JyG9GY05YFmbsnaD80k0=
github.com/libregraph/lico v0.65.1 h1:7ENAoAgbetZkJSwa1dMMP5WvXMTQ5E/3LI4uRDhwjEk=
@@ -865,8 +865,8 @@ github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/open-policy-agent/opa v1.1.0 h1:HMz2evdEMTyNqtdLjmu3Vyx06BmhNYAx67Yz3Ll9q2s=
github.com/open-policy-agent/opa v1.1.0/go.mod h1:T1pASQ1/vwfTa+e2fYcfpLCvWgYtqtiUv+IuA/dLPQs=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250304172421-22b1ead80cdd h1:n4w8SHHuk74HOHy7ZEsL6zw2N/SEarqr482jAh5C/p4=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250304172421-22b1ead80cdd/go.mod h1:Zi6h/WupAKzY/umvteGJCY3q4GOvTyrlm4JZfiuHeds=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250312134906-766c69c5d1be h1:dxKsVUzdKIGf1hfGdr1GH6NFfo0xuIcPma/qjHNG8mU=
github.com/opencloud-eu/reva/v2 v2.27.3-0.20250312134906-766c69c5d1be/go.mod h1:sqlExPoEnEd0KdfoSKogV8PrwEBY3l06icoa4gJnGnU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
@@ -1097,8 +1097,8 @@ github.com/toorop/go-dkim v0.0.0-20201103131630-e1cd1a0a5208/go.mod h1:BzWtXXrXz
github.com/transip/gotransip/v6 v6.2.0/go.mod h1:pQZ36hWWRahCUXkFWlx9Hs711gLd8J4qdgLdRzmtY+g=
github.com/trustelem/zxcvbn v1.0.1 h1:mp4JFtzdDYGj9WYSD3KQSkwwUumWNFzXaAjckaTYpsc=
github.com/trustelem/zxcvbn v1.0.1/go.mod h1:zonUyKeh7sw6psPf/e3DtRqkRyZvAbOfjNz/aO7YQ5s=
github.com/tus/tusd/v2 v2.6.0 h1:Je243QDKnFTvm/WkLH2bd1oQ+7trolrflRWyuI0PdWI=
github.com/tus/tusd/v2 v2.6.0/go.mod h1:1Eb1lBoSRBfYJ/mQfFVjyw8ZdNMdBqW17vgQKl3Ah9g=
github.com/tus/tusd/v2 v2.7.1 h1:TGJjhv9RYXDmsTz8ug/qSd9vQpmD0Ik0G0IPo80Qmc0=
github.com/tus/tusd/v2 v2.7.1/go.mod h1:PLdIMQ/ge+5ADgGKcL3FgTaPs+7wB0JIiI5HQXAiJE8=
github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/urfave/cli v1.22.4/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
@@ -1144,12 +1144,12 @@ github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQ
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.4.0 h1:TU77id3TnN/zKr7CO/uk+fBCwF2jGcMuw2B/FMAzYIk=
go.etcd.io/bbolt v1.4.0/go.mod h1:AsD+OCi/qPN1giOX1aiLAha3o1U8rAz65bvN4j0sRuk=
go.etcd.io/etcd/api/v3 v3.5.18 h1:Q4oDAKnmwqTo5lafvB+afbgCDF7E35E4EYV2g+FNGhs=
go.etcd.io/etcd/api/v3 v3.5.18/go.mod h1:uY03Ob2H50077J7Qq0DeehjM/A9S8PhVfbQ1mSaMopU=
go.etcd.io/etcd/client/pkg/v3 v3.5.18 h1:mZPOYw4h8rTk7TeJ5+3udUkfVGBqc+GCjOJYd68QgNM=
go.etcd.io/etcd/client/pkg/v3 v3.5.18/go.mod h1:BxVf2o5wXG9ZJV+/Cu7QNUiJYk4A29sAhoI5tIRsCu4=
go.etcd.io/etcd/client/v3 v3.5.18 h1:nvvYmNHGumkDjZhTHgVU36A9pykGa2K4lAJ0yY7hcXA=
go.etcd.io/etcd/client/v3 v3.5.18/go.mod h1:kmemwOsPU9broExyhYsBxX4spCTDX3yLgPMWtpBXG6E=
go.etcd.io/etcd/api/v3 v3.5.19 h1:w3L6sQZGsWPuBxRQ4m6pPP3bVUtV8rjW033EGwlr0jw=
go.etcd.io/etcd/api/v3 v3.5.19/go.mod h1:QqKGViq4KTgOG43dr/uH0vmGWIaoJY3ggFi6ZH0TH/U=
go.etcd.io/etcd/client/pkg/v3 v3.5.19 h1:9VsyGhg0WQGjDWWlDI4VuaS9PZJGNbPkaHEIuLwtixk=
go.etcd.io/etcd/client/pkg/v3 v3.5.19/go.mod h1:qaOi1k4ZA9lVLejXNvyPABrVEe7VymMF2433yyRQ7O0=
go.etcd.io/etcd/client/v3 v3.5.19 h1:+4byIz6ti3QC28W0zB0cEZWwhpVHXdrKovyycJh1KNo=
go.etcd.io/etcd/client/v3 v3.5.19/go.mod h1:FNzyinmMIl0oVsty1zA3hFeUrxXI/JpEnz4sG+POzjU=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
@@ -1172,10 +1172,10 @@ go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0/go.mod h1:nPCqOnEH9rNLKqH/+rrUjiMzHJdV1BlpKcTwRTyKkKI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 h1:OeNbIYk/2C15ckl7glBlOBp5+WlYsOElzTNmiPW/x60=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0/go.mod h1:7Bept48yIeqxP2OZ9/AqIpYS94h2or0aB4FypJTc8ZM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0 h1:tgJ0uaNS4c98WRNUEx5U3aDlrDOI5Rs+1Vifcw4DJ8U=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0/go.mod h1:U7HYyW0zt/a9x5J1Kjs+r1f/d4ZHnYFclhYY2+YbeoE=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 h1:1fTNlAIJZGWLP5FVu0fikVry1IsiUnXjf7QFvoNN3Xw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0/go.mod h1:zjPK58DtkqQFn+YUMbx0M2XV3QgKU0gS9LeGohREyK4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0 h1:m639+BofXTvcY1q8CGs4ItwQarYtJPOWmVobfM1HpVI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0/go.mod h1:LjReUci/F4BUyv+y4dwnq3h/26iNOeC3wAIqgvTIZVo=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY=
@@ -1196,8 +1196,8 @@ go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/ratelimit v0.0.0-20180316092928-c15da0234277/go.mod h1:2X8KaoNd1J0lZV+PxJk/5+DGbO/tpwLR1m++a7FnB/Y=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
@@ -1226,8 +1226,8 @@ golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDf
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.34.0 h1:+/C6tk6rf/+t5DhUketUbD1aNGqiSX3j15Z6xuIDlBA=
golang.org/x/crypto v0.34.0/go.mod h1:dy7dXNW32cAb/6/PRuTNsix8T+vJAqvuIy5Bli/x0YQ=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1238,8 +1238,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c h1:7dEasQXItcW1xKJ2+gg5VOiBnqWrJc+rq0DPKyvvdbY=
golang.org/x/exp v0.0.0-20241009180824-f66d83c29e7c/go.mod h1:NQtJDoLvd6faHhE7m4T/1IY708gDefGGjR/iUW8yQQ8=
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac h1:l5+whBCLH3iH2ZNHYLbAe58bo7yrN4mVcnkHDYz5vvs=
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac/go.mod h1:hH+7mtFmImwwcMvScyxUhjuVHR3HGaDPMn9rMSUUbxo=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
@@ -1269,8 +1269,8 @@ golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.23.0 h1:Zb7khfcRGKk+kqfxFaP5tZqCnDZMjC5VtUBs87Hr6QM=
golang.org/x/mod v0.23.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1327,8 +1327,8 @@ golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1336,8 +1336,8 @@ golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4Iltr
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.26.0 h1:afQXWNNaeC4nvZ0Ed9XvCCzXM6UHJG7iCg0W4fPqSBE=
golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/oauth2 v0.28.0 h1:CrgCKl8PPAVtLnU3c+EDw6x11699EWlsDeWNWKdIOkc=
golang.org/x/oauth2 v0.28.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1356,8 +1356,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180622082034-63fc586f45fe/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1439,8 +1439,8 @@ golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -1453,8 +1453,8 @@ golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1471,8 +1471,8 @@ golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1535,8 +1535,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.30.0 h1:BgcpHewrV5AUp2G9MebG4XPFI1E2W41zU1SaqVA9vJY=
golang.org/x/tools v0.30.0/go.mod h1:c347cR/OJfw5TI+GfX7RUPNMdDRRbjvYTS0jPyvsVtY=
golang.org/x/tools v0.31.0 h1:0EedkvKDbh+qistFTd0Bcwe/YLh4vHwWEkiI0toFIBU=
golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1598,10 +1598,10 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f h1:gap6+3Gk41EItBuyi4XX/bp4oqJ3UwuIMl25yGinuAA=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:Ic02D47M+zbarjYYUlK57y316f2MoN0gjAwI3f2S95o=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a h1:nwKuGPlUAt+aR+pcrkfFRrTU1BVrSmYyYMxYbUIVHr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2 h1:DMTIbak9GhdaSxEjvVzAeNZvyc03I61duqNbnm3SU0M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250219182151-9fdb1cabc7b2/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -1617,8 +1617,8 @@ google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3Iji
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.70.0 h1:pWFv03aZoHzlRKHWicjsZytKAiYCtNS0dHbXnIdq7jQ=
google.golang.org/grpc v1.70.0/go.mod h1:ofIJqVKDXx/JiXrwr2IG4/zwdH9txy3IlF40RmcJSQw=
google.golang.org/grpc v1.71.0 h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=
google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
google.golang.org/grpc/examples v0.0.0-20211102180624-670c133e568e h1:m7aQHHqd0q89mRwhwS9Bx2rjyl/hsFAeta+uGrHsQaU=
google.golang.org/grpc/examples v0.0.0-20211102180624-670c133e568e/go.mod h1:gID3PKrg7pWKntu9Ss6zTLJ0ttC0X9IHgREOCZwbCVU=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
@@ -1635,8 +1635,8 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/cenkalti/backoff.v1 v1.1.0 h1:Arh75ttbsvlpVA7WtVpH4u9h6Zl46xuptxqLxPiSo4Y=
gopkg.in/cenkalti/backoff.v1 v1.1.0/go.mod h1:J6Vskwqd+OMVJl8C33mmtxTBs2gyzfv7UDAkHu8BrjI=


@@ -7,7 +7,9 @@ ifdef ENABLE_VIPS
TAGS := ${TAGS},enable_vips
endif
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../.bingo/Variables.mk
endif
include ../.make/default.mk
include ../.make/recursion.mk
include ../.make/go.mk


@@ -313,9 +313,9 @@ func setCmd(cfg *config.Config) *cli.Command {
func backend(root, backend string) metadata.Backend {
switch backend {
case "xattrs":
return metadata.NewXattrsBackend(root, cache.Config{})
return metadata.NewXattrsBackend(cache.Config{})
case "mpk":
return metadata.NewMessagePackBackend(root, cache.Config{})
return metadata.NewMessagePackBackend(cache.Config{})
}
return metadata.NullBackend{}
}
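
The change above drops the `root` argument from the metadata backend constructors. The surrounding function is a plain switch-based factory with a null-object fallback; a self-contained sketch of that shape (the types below are stand-ins, not reva's actual backends):

```go
package main

import "fmt"

// Backend stands in for the metadata backend interface; the real reva
// constructors take cache configuration and more.
type Backend interface{ Name() string }

type XattrsBackend struct{}

func (XattrsBackend) Name() string { return "xattrs" }

type MessagePackBackend struct{}

func (MessagePackBackend) Name() string { return "mpk" }

// NullBackend is the fallback returned for unknown backend names.
type NullBackend struct{}

func (NullBackend) Name() string { return "null" }

// newBackend mirrors the switch in the diff: known names map to concrete
// backends, anything else falls through to the null implementation.
func newBackend(kind string) Backend {
	switch kind {
	case "xattrs":
		return XattrsBackend{}
	case "mpk":
		return MessagePackBackend{}
	}
	return NullBackend{}
}

func main() {
	fmt.Println(newBackend("mpk").Name())
	fmt.Println(newBackend("bogus").Name())
}
```

The null-object fallback keeps callers from having to nil-check the returned backend.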


@@ -1,7 +1,9 @@
SHELL := bash
NAME := pkg
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../.bingo/Variables.mk
endif
include ../.make/default.mk
include ../.make/recursion.mk
include ../.make/go.mk


@@ -1,7 +1,9 @@
SHELL := bash
NAME := protogen
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../.bingo/Variables.mk
endif
include ../.make/default.mk
include ../.make/recursion.mk
include ../.make/generate.mk


@@ -3,7 +3,10 @@ NAME := activitylog
OUTPUT_DIR = ./pkg/service/l10n
TEMPLATE_FILE = ./pkg/service/l10n/activitylog.pot
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := antivirus
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := app-provider
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := app-registry
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := audit
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := auth-app
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := auth-basic
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := auth-bearer
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := auth-machine
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := auth-service
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := clientlog
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := collaboration
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := eventhistory
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := frontend
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := gateway
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -3,7 +3,10 @@ NAME := graph
OUTPUT_DIR = ./pkg/l10n
TEMPLATE_FILE = ./pkg/l10n/graph.pot
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := groups
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -2,7 +2,10 @@ SHELL := bash
NAME := idm
TAGS := disable_crypt
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := idp
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -105,13 +105,13 @@
"web-vitals": "^3.5.2"
},
"devDependencies": {
"@babel/core": "7.22.11",
"@babel/core": "7.26.9",
"@typescript-eslint/eslint-plugin": "^4.33.0",
"@typescript-eslint/parser": "^4.33.0",
"babel-eslint": "^10.1.0",
"babel-loader": "9.2.1",
"babel-plugin-named-asset-import": "^0.3.8",
"babel-preset-react-app": "^10.0.1",
"babel-preset-react-app": "^10.1.0",
"case-sensitive-paths-webpack-plugin": "2.4.0",
"cldr": "^7.5.0",
"css-loader": "5.2.7",


File diff suppressed because it is too large.


@@ -1,7 +1,10 @@
SHELL := bash
NAME := invitations
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := nats
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -3,7 +3,10 @@ NAME := notifications
OUTPUT_DIR = ./pkg/email/l10n
TEMPLATE_FILE = ./pkg/email/l10n/notifications.pot
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := ocdav
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := ocm
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := ocs
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := policies
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := postprocessing
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := proxy
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := search
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -3,7 +3,10 @@ NAME := settings
OUTPUT_DIR = ./pkg/service/v0/l10n
TEMPLATE_FILE = ./pkg/service/v0/l10n/settings.pot
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := sharing
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := sse
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := storage-publiclink
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := storage-shares
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := storage-system
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := storage-users
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := thumbnails
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -3,7 +3,10 @@ NAME := userlog
OUTPUT_DIR = ./pkg/service/l10n
TEMPLATE_FILE = ./pkg/service/l10n/userlog.pot
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -376,7 +376,7 @@ func composeMessage(nt NotificationTemplate, locale, defaultLocale, path string,
func loadTemplates(nt NotificationTemplate, locale, defaultLocale, path string) (string, string) {
t := l10n.NewTranslatorFromCommonConfig(defaultLocale, _domain, path, _translationFS, "l10n/locale").Locale(locale)
return t.Get("%s", nt.Subject), t.Get("%s", nt.Message)
return t.Get("%s", nt.Subject), t.Get(nt.Message)
}
func executeTemplate(raw string, vars map[string]interface{}) (string, error) {
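
In gotext, the first argument to `Get` is the msgid used for the catalog lookup, so the old call keyed the lookup by the literal string `"%s"` and `nt.Message` was never translated; the change passes the message itself as the key. A minimal stand-in (a plain map, not gotext's real catalog) illustrating the difference:

```go
package main

import "fmt"

// translate mimics a gettext-style lookup: the first argument is the
// msgid used as catalog key; unknown keys fall back to the key itself,
// and any remaining args are applied printf-style.
func translate(catalog map[string]string, msgid string, args ...interface{}) string {
	msg, ok := catalog[msgid]
	if !ok {
		msg = msgid
	}
	return fmt.Sprintf(msg, args...)
}

func main() {
	// German entry keyed by the English source message.
	catalog := map[string]string{"File shared": "Datei geteilt"}

	// Keyed by "%s" the catalog is never consulted: "%s" is not a msgid,
	// so the English text passes through untranslated.
	fmt.Println(translate(catalog, "%s", "File shared"))

	// Keyed by the message itself, the translation is found.
	fmt.Println(translate(catalog, "File shared"))
}
```

The subject line keeps the `Get("%s", …)` form in both versions of the diff; only the message lookup changed.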


@@ -1,7 +1,10 @@
SHELL := bash
NAME := users
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -3,7 +3,10 @@ NAME := web
WEB_ASSETS_VERSION = v1.0.0
WEB_ASSETS_BRANCH = main
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := webdav
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -1,7 +1,10 @@
SHELL := bash
NAME := webfinger
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI
include ../../.bingo/Variables.mk
endif
include ../../.make/default.mk
include ../../.make/recursion.mk
include ../../.make/go.mk


@@ -126,6 +126,15 @@ func populateFieldValueFromPath(msgValue protoreflect.Message, fieldPath []strin
}
}
// Check if oneof already set
if of := fieldDescriptor.ContainingOneof(); of != nil && !of.IsSynthetic() {
if f := msgValue.WhichOneof(of); f != nil {
if fieldDescriptor.Message() == nil || fieldDescriptor.FullName() != f.FullName() {
return fmt.Errorf("field already set for oneof %q", of.FullName().Name())
}
}
}
// If this is the last element, we're done
if i == len(fieldPath)-1 {
break
@@ -140,13 +149,6 @@ func populateFieldValueFromPath(msgValue protoreflect.Message, fieldPath []strin
msgValue = msgValue.Mutable(fieldDescriptor).Message()
}
// Check if oneof already set
if of := fieldDescriptor.ContainingOneof(); of != nil && !of.IsSynthetic() {
if f := msgValue.WhichOneof(of); f != nil {
return fmt.Errorf("field already set for oneof %q", of.FullName().Name())
}
}
switch {
case fieldDescriptor.IsList():
return populateRepeatedField(fieldDescriptor, msgValue.Mutable(fieldDescriptor).List(), values)


@@ -53,7 +53,7 @@ func New(root string) (*Blobstore, error) {
}
// Upload stores some data in the blobstore under the given key
func (bs *Blobstore) Upload(node *node.Node, source string) error {
func (bs *Blobstore) Upload(node *node.Node, source, _copyTarget string) error {
if node.BlobID == "" {
return ErrBlobIDEmpty
}


@@ -77,7 +77,7 @@ func New(endpoint, region, bucket, accessKey, secretKey string, defaultPutOption
}
// Upload stores some data in the blobstore under the given key
func (bs *Blobstore) Upload(node *node.Node, source string) error {
func (bs *Blobstore) Upload(node *node.Node, source, _copyTarget string) error {
reader, err := os.Open(source)
if err != nil {
return errors.Wrap(err, "can not open source file to upload")


@@ -81,7 +81,11 @@ func (bs *Blobstore) Upload(node *node.Node, source, copyTarget string) error {
return err
}
file.Seek(0, 0)
_, err = file.Seek(0, 0)
if err != nil {
return err
}
copyFile, err := os.OpenFile(copyTarget, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
if err != nil {
return errors.Wrapf(err, "could not open copy target '%s' for writing", copyTarget)


@@ -59,6 +59,11 @@ func New(m map[string]interface{}) (*Options, error) {
m["metadata_backend"] = "hybrid"
}
// debounced scan delay
if o.ScanDebounceDelay == 0 {
o.ScanDebounceDelay = 10 * time.Millisecond
}
do, err := decomposedoptions.New(m)
if err != nil {
return nil, err


@@ -78,7 +78,7 @@ func New(m map[string]interface{}, stream events.Stream, log *zerolog.Logger) (s
var lu *lookup.Lookup
switch o.MetadataBackend {
case "xattrs":
lu = lookup.New(metadata.NewXattrsBackend(o.Root, o.FileMetadataCache), um, o, &timemanager.Manager{})
lu = lookup.New(metadata.NewXattrsBackend(o.FileMetadataCache), um, o, &timemanager.Manager{})
case "hybrid":
lu = lookup.New(metadata.NewHybridBackend(1024, // start offloading grants after 1KB
func(n metadata.MetadataNode) string {


@@ -259,56 +259,71 @@ func (tb *Trashbin) ListRecycle(ctx context.Context, spaceID string, key, relati
}
// RestoreRecycleItem restores the specified item
func (tb *Trashbin) RestoreRecycleItem(ctx context.Context, spaceID string, key, relativePath string, restoreRef *provider.Reference) error {
func (tb *Trashbin) RestoreRecycleItem(ctx context.Context, spaceID string, key, relativePath string, restoreRef *provider.Reference) (*node.Node, error) {
_, span := tracer.Start(ctx, "RestoreRecycleItem")
defer span.End()
trashRoot := filepath.Join(tb.lu.InternalPath(spaceID, spaceID), ".Trash")
trashPath := filepath.Clean(filepath.Join(trashRoot, "files", key+".trashitem", relativePath))
restorePath := ""
// TODO why can we not use NodeFromResource here? It will use walk path. Do trashed items have a problem with that?
restoreBaseNode, err := tb.lu.NodeFromID(ctx, restoreRef.GetResourceId())
if err != nil {
return err
if restoreRef != nil {
restoreBaseNode, err := tb.lu.NodeFromID(ctx, restoreRef.GetResourceId())
if err != nil {
return nil, err
}
restorePath = filepath.Join(restoreBaseNode.InternalPath(), restoreRef.GetPath())
} else {
originalPath, _, err := tb.readInfoFile(trashRoot, key)
if err != nil {
return nil, err
}
restorePath = filepath.Join(tb.lu.InternalPath(spaceID, spaceID), originalPath, relativePath)
}
restorePath := filepath.Join(restoreBaseNode.InternalPath(), restoreRef.GetPath())
// TODO the decomposed trash also checks the permissions on the restore node
_, id, _, err := tb.lu.MetadataBackend().IdentifyPath(ctx, trashPath)
if err != nil {
return err
return nil, err
}
// update parent id in case it was restored to a different location
_, parentID, _, err := tb.lu.MetadataBackend().IdentifyPath(ctx, filepath.Dir(restorePath))
if err != nil {
return err
return nil, err
}
if len(parentID) == 0 {
return fmt.Errorf("trashbin: parent id not found for %s", restorePath)
return nil, fmt.Errorf("trashbin: parent id not found for %s", restorePath)
}
trashNode := &trashNode{spaceID: spaceID, id: id, path: trashPath}
err = tb.lu.MetadataBackend().Set(ctx, trashNode, prefixes.ParentidAttr, []byte(parentID))
if err != nil {
return err
return nil, err
}
// restore the item
err = os.Rename(trashPath, restorePath)
if err != nil {
return err
return nil, err
}
if err := tb.lu.CacheID(ctx, spaceID, string(id), restorePath); err != nil {
tb.log.Error().Err(err).Str("spaceID", spaceID).Str("id", string(id)).Str("path", restorePath).Msg("trashbin: error caching id")
if err := tb.lu.CacheID(ctx, spaceID, id, restorePath); err != nil {
tb.log.Error().Err(err).Str("spaceID", spaceID).Str("id", id).Str("path", restorePath).Msg("trashbin: error caching id")
}
restoredNode, err := tb.lu.NodeFromID(ctx, &provider.ResourceId{SpaceId: spaceID, OpaqueId: id})
if err != nil {
return nil, err
}
// cleanup trash info
if relativePath == "." || relativePath == "/" {
return os.Remove(filepath.Join(trashRoot, "info", key+".trashinfo"))
return restoredNode, os.Remove(filepath.Join(trashRoot, "info", key+".trashinfo"))
} else {
return nil
return restoredNode, nil
}
}
// PurgeRecycleItem purges the specified item, all its children and all their revisions


@@ -197,7 +197,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
})
}
if err := t.setDirty(filepath.Dir(path), true); err != nil {
return err
t.log.Error().Err(err).Str("path", path).Bool("isDir", isDir).Msg("failed to mark directory as dirty")
}
t.scanDebouncer.Debounce(scanItem{
Path: filepath.Dir(path),
@@ -208,7 +208,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
// 2. New directory
// -> scan directory
if err := t.setDirty(path, true); err != nil {
return err
t.log.Error().Err(err).Str("path", path).Bool("isDir", isDir).Msg("failed to mark directory as dirty")
}
t.scanDebouncer.Debounce(scanItem{
Path: path,
@@ -244,8 +244,13 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("scanning path (ActionMoveFrom)")
// 6. file/directory moved out of the watched directory
// -> update directory
if err := t.setDirty(filepath.Dir(path), true); err != nil {
return err
err := t.HandleFileDelete(path)
if err != nil {
t.log.Error().Err(err).Str("path", path).Bool("isDir", isDir).Msg("failed to handle deleted item")
}
err = t.setDirty(filepath.Dir(path), true)
if err != nil {
t.log.Error().Err(err).Str("path", path).Bool("isDir", isDir).Msg("failed to mark directory as dirty")
}
go func() { _ = t.WarmupIDCache(filepath.Dir(path), false, true) }()
@@ -258,7 +263,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
err := t.HandleFileDelete(path)
if err != nil {
return err
t.log.Error().Err(err).Str("path", path).Bool("isDir", isDir).Msg("failed to handle deleted item")
}
t.scanDebouncer.Debounce(scanItem{
@@ -342,7 +347,7 @@ func (t *Tree) findSpaceId(path string) (string, node.Attributes, error) {
}
}
return string(spaceID), spaceAttrs, nil
return spaceID, spaceAttrs, nil
}
spaceCandidate = filepath.Dir(spaceCandidate)
}
@@ -387,7 +392,7 @@ func (t *Tree) assimilate(item scanItem) error {
// the file has an id set, we already know it from the past
n := node.NewBaseNode(spaceID, id, t.lookup)
previousPath, ok := t.lookup.GetCachedID(context.Background(), spaceID, string(id))
previousPath, ok := t.lookup.GetCachedID(context.Background(), spaceID, id)
previousParentID, _ := t.lookup.MetadataBackend().Get(context.Background(), n, prefixes.ParentidAttr)
// compare metadata mtime with actual mtime. if it matches AND the path hasn't changed (move operation)
@@ -418,10 +423,10 @@ func (t *Tree) assimilate(item scanItem) error {
// this is a move
t.log.Debug().Str("path", item.Path).Msg("move detected")
if err := t.lookup.CacheID(context.Background(), spaceID, string(id), item.Path); err != nil {
t.log.Error().Err(err).Str("spaceID", spaceID).Str("id", string(id)).Str("path", item.Path).Msg("could not cache id")
if err := t.lookup.CacheID(context.Background(), spaceID, id, item.Path); err != nil {
t.log.Error().Err(err).Str("spaceID", spaceID).Str("id", id).Str("path", item.Path).Msg("could not cache id")
}
_, attrs, err := t.updateFile(item.Path, string(id), spaceID)
_, attrs, err := t.updateFile(item.Path, id, spaceID)
if err != nil {
return err
}
@@ -471,11 +476,11 @@ func (t *Tree) assimilate(item scanItem) error {
} else {
// This item had already been assimilated in the past. Update the path
t.log.Debug().Str("path", item.Path).Msg("updating cached path")
if err := t.lookup.CacheID(context.Background(), spaceID, string(id), item.Path); err != nil {
t.log.Error().Err(err).Str("spaceID", spaceID).Str("id", string(id)).Str("path", item.Path).Msg("could not cache id")
if err := t.lookup.CacheID(context.Background(), spaceID, id, item.Path); err != nil {
t.log.Error().Err(err).Str("spaceID", spaceID).Str("id", id).Str("path", item.Path).Msg("could not cache id")
}
_, _, err := t.updateFile(item.Path, string(id), spaceID)
_, _, err := t.updateFile(item.Path, id, spaceID)
if err != nil {
return err
}
@@ -753,7 +758,7 @@ func (t *Tree) WarmupIDCache(root string, assimilate, onlyDirty bool) error {
}
spaceID, _, _, err = t.lookup.MetadataBackend().IdentifyPath(context.Background(), spaceCandidate)
if err == nil {
if err == nil && len(spaceID) > 0 {
err = scopeSpace(path)
if err != nil {
return err


@@ -223,44 +223,6 @@ func (tp *Tree) DownloadRevision(ctx context.Context, ref *provider.Reference, r
return ri, reader, nil
}
func (tp *Tree) getRevisionNode(ctx context.Context, ref *provider.Reference, revisionKey string, hasPermission func(*provider.ResourcePermissions) bool) (*node.Node, error) {
_, span := tracer.Start(ctx, "getRevisionNode")
defer span.End()
log := appctx.GetLogger(ctx)
// verify revision key format
kp := strings.SplitN(revisionKey, node.RevisionIDDelimiter, 2)
if len(kp) != 2 {
log.Error().Str("revisionKey", revisionKey).Msg("malformed revisionKey")
return nil, errtypes.NotFound(revisionKey)
}
log.Debug().Str("revisionKey", revisionKey).Msg("DownloadRevision")
spaceID := ref.ResourceId.SpaceId
// check if the node is available and has not been deleted
n, err := node.ReadNode(ctx, tp.lookup, spaceID, kp[0], false, nil, false)
if err != nil {
return nil, err
}
if !n.Exists {
err = errtypes.NotFound(filepath.Join(n.ParentID, n.Name))
return nil, err
}
p, err := tp.permissions.AssemblePermissions(ctx, n)
switch {
case err != nil:
return nil, err
case !hasPermission(p):
return nil, errtypes.PermissionDenied(filepath.Join(n.ParentID, n.Name))
}
// Set space owner in context
storagespace.ContextSendSpaceOwnerID(ctx, n.SpaceOwnerOrManager(ctx))
return n, nil
}
func (tp *Tree) RestoreRevision(ctx context.Context, srcNode, targetNode metadata.MetadataNode) error {
source := srcNode.InternalPath()
target := targetNode.InternalPath()
@@ -275,7 +237,10 @@ func (tp *Tree) RestoreRevision(ctx context.Context, srcNode, targetNode metadat
return err
}
defer wf.Close()
wf.Truncate(0)
err = wf.Truncate(0)
if err != nil {
return err
}
if _, err := io.Copy(wf, rf); err != nil {
return err
@@ -293,7 +258,11 @@ func (tp *Tree) RestoreRevision(ctx context.Context, srcNode, targetNode metadat
// always set the node mtime to the current time
mtime := time.Now()
os.Chtimes(target, mtime, mtime)
err = os.Chtimes(target, mtime, mtime)
if err != nil {
return errtypes.InternalError("failed to update times:" + err.Error())
}
err = tp.lookup.MetadataBackend().SetMultiple(ctx, targetNode,
map[string][]byte{
prefixes.MTimeAttr: []byte(mtime.UTC().Format(time.RFC3339Nano)),

View File

@@ -26,7 +26,6 @@ import (
"io/fs"
"os"
"path/filepath"
"regexp"
"strings"
"time"
@@ -62,13 +61,6 @@ func init() {
tracer = otel.Tracer("github.com/cs3org/reva/pkg/storage/pkg/decomposedfs/tree")
}
// Blobstore defines an interface for storing blobs in a blobstore
type Blobstore interface {
Upload(node *node.Node, source, copyTarget string) error
Download(node *node.Node) (io.ReadCloser, error)
Delete(node *node.Node) error
}
type Watcher interface {
Watch(path string)
}
@@ -82,7 +74,7 @@ type scanItem struct {
// Tree manages a hierarchical tree
type Tree struct {
lookup *lookup.Lookup
blobstore Blobstore
blobstore node.Blobstore
trashbin *trashbin.Trashbin
propagator propagator.Propagator
permissions permissions.Permissions
@@ -103,7 +95,7 @@ type Tree struct {
type PermissionCheckFunc func(rp *provider.ResourcePermissions) bool
// New returns a new instance of Tree
func New(lu node.PathLookup, bs Blobstore, um usermapper.Mapper, trashbin *trashbin.Trashbin, permissions permissions.Permissions, o *options.Options, es events.Stream, cache store.Store, log *zerolog.Logger) (*Tree, error) {
func New(lu node.PathLookup, bs node.Blobstore, um usermapper.Mapper, trashbin *trashbin.Trashbin, permissions permissions.Permissions, o *options.Options, es events.Stream, cache store.Store, log *zerolog.Logger) (*Tree, error) {
scanQueue := make(chan scanItem)
t := &Tree{
lookup: lu.(*lookup.Lookup),
@@ -242,7 +234,10 @@ func (t *Tree) TouchFile(ctx context.Context, n *node.Node, markprocessing bool,
if err != nil {
return err
}
t.lookup.TimeManager().OverrideMtime(ctx, n, &attributes, nodeMTime)
err = t.lookup.TimeManager().OverrideMtime(ctx, n, &attributes, nodeMTime)
if err != nil {
return err
}
} else {
fi, err := f.Stat()
if err != nil {
@@ -572,13 +567,13 @@ func (t *Tree) DeleteBlob(node *node.Node) error {
}
// BuildSpaceIDIndexEntry returns the entry for the space id index
func (t *Tree) BuildSpaceIDIndexEntry(spaceID, nodeID string) string {
return nodeID
func (t *Tree) BuildSpaceIDIndexEntry(spaceID string) string {
return spaceID
}
// ResolveSpaceIDIndexEntry returns the node id for the space id index entry
func (t *Tree) ResolveSpaceIDIndexEntry(spaceid, entry string) (string, string, error) {
return spaceid, entry, nil
func (t *Tree) ResolveSpaceIDIndexEntry(spaceID string) (string, error) {
return spaceID, nil
}
// InitNewNode initializes a new node
@@ -652,8 +647,6 @@ func (t *Tree) createDirNode(ctx context.Context, n *node.Node) (err error) {
return n.SetXattrsWithContext(ctx, attributes, false)
}
var nodeIDRegep = regexp.MustCompile(`.*/nodes/([^.]*).*`)
func (t *Tree) isIgnored(path string) bool {
return isLockFile(path) || isTrash(path) || t.isUpload(path) || t.isInternal(path)
}

View File

@@ -130,7 +130,7 @@ type Decomposedfs struct {
}
// NewDefault returns an instance with default components
func NewDefault(m map[string]interface{}, bs tree.Blobstore, es events.Stream, log *zerolog.Logger) (storage.FS, error) {
func NewDefault(m map[string]interface{}, bs node.Blobstore, es events.Stream, log *zerolog.Logger) (storage.FS, error) {
if log == nil {
log = &zerolog.Logger{}
}
@@ -143,9 +143,9 @@ func NewDefault(m map[string]interface{}, bs tree.Blobstore, es events.Stream, l
var lu *lookup.Lookup
switch o.MetadataBackend {
case "xattrs":
lu = lookup.New(metadata.NewXattrsBackend(o.Root, o.FileMetadataCache), o, &timemanager.Manager{})
lu = lookup.New(metadata.NewXattrsBackend(o.FileMetadataCache), o, &timemanager.Manager{})
case "messagepack":
lu = lookup.New(metadata.NewMessagePackBackend(o.Root, o.FileMetadataCache), o, &timemanager.Manager{})
lu = lookup.New(metadata.NewMessagePackBackend(o.FileMetadataCache), o, &timemanager.Manager{})
default:
return nil, fmt.Errorf("unknown metadata backend %s, only 'messagepack' or 'xattrs' (default) supported", o.MetadataBackend)
}
@@ -1292,7 +1292,23 @@ func (fs *Decomposedfs) RestoreRecycleItem(ctx context.Context, space *provider.
return errtypes.NotFound(key)
}
return fs.trashbin.RestoreRecycleItem(ctx, spaceID, key, relativePath, restoreRef)
restoredNode, err := fs.trashbin.RestoreRecycleItem(ctx, spaceID, key, relativePath, restoreRef)
if err != nil {
return err
}
var sizeDiff int64
if restoredNode.IsDir(ctx) {
treeSize, err := restoredNode.GetTreeSize(ctx)
if err != nil {
return err
}
sizeDiff = int64(treeSize)
} else {
sizeDiff = restoredNode.Blobsize
}
return fs.tp.Propagate(ctx, restoredNode, sizeDiff)
}
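The restore path above propagates a size delta up the tree: a restored directory contributes its full tree size, a plain file its blob size. A minimal sketch of that rule (hypothetical helper name; the real code reads these values from the restored node):

```go
package main

import "fmt"

// restoreSizeDiff mirrors the size-diff rule used after restoring a trash
// item: directories add their whole tree size, files add their blob size.
func restoreSizeDiff(isDir bool, treeSize uint64, blobSize int64) int64 {
	if isDir {
		return int64(treeSize)
	}
	return blobSize
}

func main() {
	fmt.Println(restoreSizeDiff(true, 4096, 0))  // directory: tree size
	fmt.Println(restoreSizeDiff(false, 0, 1234)) // file: blob size
}
```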
func (fs *Decomposedfs) PurgeRecycleItem(ctx context.Context, space *provider.Reference, key, relativePath string) error {

View File

@@ -40,7 +40,6 @@ import (
// MessagePackBackend persists the attributes in messagepack format inside the file
type MessagePackBackend struct {
rootPath string
metaCache cache.FileMetadataCache
}
@@ -51,9 +50,8 @@ type readWriteCloseSeekTruncater interface {
}
// NewMessagePackBackend returns a new MessagePackBackend instance
func NewMessagePackBackend(rootPath string, o cache.Config) MessagePackBackend {
func NewMessagePackBackend(o cache.Config) MessagePackBackend {
return MessagePackBackend{
rootPath: filepath.Clean(rootPath),
metaCache: cache.GetFileMetadataCache(o),
}
}
@@ -63,7 +61,7 @@ func (MessagePackBackend) Name() string { return "messagepack" }
// IdentifyPath returns the id and mtime of a file
func (b MessagePackBackend) IdentifyPath(_ context.Context, path string) (string, string, time.Time, error) {
metaPath := filepath.Join(path + ".mpk")
metaPath := filepath.Clean(path + ".mpk")
source, err := os.Open(metaPath)
// No cached entry found. Read from storage and store in cache
if err != nil {

View File

@@ -37,12 +37,11 @@ import (
// XattrsBackend stores the file attributes in extended attributes
type XattrsBackend struct {
rootPath string
metaCache cache.FileMetadataCache
}
// NewXattrsBackend returns a new XattrsBackend instance
func NewXattrsBackend(rootPath string, o cache.Config) XattrsBackend {
func NewXattrsBackend(o cache.Config) XattrsBackend {
return XattrsBackend{
metaCache: cache.GetFileMetadataCache(o),
}

View File

@@ -86,6 +86,13 @@ const (
ProcessingStatus = "processing:"
)
// Blobstore defines an interface for storing blobs in a blobstore
type Blobstore interface {
Upload(node *Node, source, copyTarget string) error
Download(node *Node) (io.ReadCloser, error)
Delete(node *Node) error
}
type TimeManager interface {
// OverrideMtime overrides the mtime of the node, either on the node itself or in the given attributes, depending on the implementation
OverrideMtime(ctx context.Context, n *Node, attrs *Attributes, mtime time.Time) error
@@ -129,8 +136,8 @@ type Tree interface {
ReadBlob(node *Node) (io.ReadCloser, error)
DeleteBlob(node *Node) error
BuildSpaceIDIndexEntry(spaceID, nodeID string) string
ResolveSpaceIDIndexEntry(spaceID, entry string) (string, string, error)
BuildSpaceIDIndexEntry(spaceID string) string
ResolveSpaceIDIndexEntry(spaceID string) (string, error)
CreateRevision(ctx context.Context, n *Node, version string, f *lockedfile.File) (string, error)
ListRevisions(ctx context.Context, ref *provider.Reference) ([]*provider.FileVersion, error)

View File

@@ -129,7 +129,7 @@ func (tb *DecomposedfsTrashbin) ListRecycle(ctx context.Context, spaceID string,
}
item := &provider.RecycleItem{
Type: provider.ResourceType(typeInt),
Size: uint64(size),
Size: size,
Key: filepath.Join(key, relativePath),
DeletionTime: deletionTime,
Ref: &provider.Reference{
@@ -345,7 +345,7 @@ func (tb *DecomposedfsTrashbin) listTrashRoot(ctx context.Context, spaceID strin
}
// RestoreRecycleItem restores the specified item
func (tb *DecomposedfsTrashbin) RestoreRecycleItem(ctx context.Context, spaceID string, key, relativePath string, restoreRef *provider.Reference) error {
func (tb *DecomposedfsTrashbin) RestoreRecycleItem(ctx context.Context, spaceID string, key, relativePath string, restoreRef *provider.Reference) (*node.Node, error) {
_, span := tracer.Start(ctx, "RestoreRecycleItem")
defer span.End()
@@ -353,7 +353,7 @@ func (tb *DecomposedfsTrashbin) RestoreRecycleItem(ctx context.Context, spaceID
if restoreRef != nil {
tn, err := tb.fs.lu.NodeFromResource(ctx, restoreRef)
if err != nil {
return err
return nil, err
}
targetNode = tn
@@ -361,19 +361,19 @@ func (tb *DecomposedfsTrashbin) RestoreRecycleItem(ctx context.Context, spaceID
rn, parent, restoreFunc, err := tb.fs.tp.(*tree.Tree).RestoreRecycleItemFunc(ctx, spaceID, key, relativePath, targetNode)
if err != nil {
return err
return nil, err
}
// check permissions of deleted node
rp, err := tb.fs.p.AssembleTrashPermissions(ctx, rn)
switch {
case err != nil:
return err
return nil, err
case !rp.RestoreRecycleItem:
if rp.Stat {
return errtypes.PermissionDenied(key)
return nil, errtypes.PermissionDenied(key)
}
return errtypes.NotFound(key)
return nil, errtypes.NotFound(key)
}
// Set space owner in context
@@ -383,13 +383,13 @@ func (tb *DecomposedfsTrashbin) RestoreRecycleItem(ctx context.Context, spaceID
pp, err := tb.fs.p.AssemblePermissions(ctx, parent)
switch {
case err != nil:
return err
return nil, err
case !pp.InitiateFileUpload:
// share receiver cannot restore to a shared resource to which she does not have write permissions.
if rp.Stat {
return errtypes.PermissionDenied(key)
return nil, errtypes.PermissionDenied(key)
}
return errtypes.NotFound(key)
return nil, errtypes.NotFound(key)
}
// Run the restore func

View File

@@ -30,6 +30,8 @@ import (
"sync/atomic"
"time"
"maps"
userv1beta1 "github.com/cs3org/go-cs3apis/cs3/identity/user/v1beta1"
v1beta11 "github.com/cs3org/go-cs3apis/cs3/rpc/v1beta1"
provider "github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1"
@@ -248,7 +250,7 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
var (
spaceID = spaceIDAny
nodeID = spaceIDAny
entry = spaceIDAny
requestedUserID *userv1beta1.UserId
)
@@ -266,8 +268,8 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
spaceTypes[filter[i].GetSpaceType()] = struct{}{}
}
case provider.ListStorageSpacesRequest_Filter_TYPE_ID:
_, spaceID, nodeID, _ = storagespace.SplitID(filter[i].GetId().OpaqueId)
if strings.Contains(nodeID, "/") {
_, spaceID, entry, _ = storagespace.SplitID(filter[i].GetId().OpaqueId)
if strings.Contains(entry, "/") {
return []*provider.StorageSpace{}, nil
}
case provider.ListStorageSpacesRequest_Filter_TYPE_USER:
@@ -296,11 +298,11 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
// /path/to/root/spaces/personal/nodeid
// /path/to/root/spaces/shared/nodeid
if spaceID != spaceIDAny && nodeID != spaceIDAny {
if spaceID != spaceIDAny && entry != spaceIDAny {
// try directly reading the node
n, err := node.ReadNode(ctx, fs.lu, spaceID, nodeID, true, nil, false) // permission to read disabled space is checked later
n, err := node.ReadNode(ctx, fs.lu, spaceID, entry, true, nil, false) // permission to read disabled space is checked later
if err != nil {
appctx.GetLogger(ctx).Error().Err(err).Str("id", nodeID).Msg("could not read node")
appctx.GetLogger(ctx).Error().Err(err).Str("id", entry).Msg("could not read node")
return nil, err
}
if !n.Exists {
@@ -332,12 +334,10 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
return nil, errors.Wrap(err, "error reading user index")
}
if nodeID == spaceIDAny {
for spaceID, nodeID := range allMatches {
matches[spaceID] = nodeID
}
if entry == spaceIDAny {
maps.Copy(matches, allMatches)
} else {
matches[allMatches[nodeID]] = allMatches[nodeID]
matches[allMatches[entry]] = allMatches[entry]
}
// get Groups for userid
@@ -359,12 +359,10 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
return nil, errors.Wrap(err, "error reading group index")
}
if nodeID == spaceIDAny {
for spaceID, nodeID := range allMatches {
matches[spaceID] = nodeID
}
if entry == spaceIDAny {
maps.Copy(matches, allMatches)
} else {
matches[allMatches[nodeID]] = allMatches[nodeID]
matches[allMatches[entry]] = allMatches[entry]
}
}
@@ -389,12 +387,10 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
return nil, errors.Wrap(err, "error reading type index")
}
if nodeID == spaceIDAny {
for spaceID, nodeID := range allMatches {
matches[spaceID] = nodeID
}
if entry == spaceIDAny {
maps.Copy(matches, allMatches)
} else {
matches[allMatches[nodeID]] = allMatches[nodeID]
matches[allMatches[entry]] = allMatches[entry]
}
}
}
@@ -417,9 +413,9 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
// Distribute work
errg.Go(func() error {
defer close(work)
for spaceID, nodeID := range matches {
for spaceID, entry := range matches {
select {
case work <- []string{spaceID, nodeID}:
case work <- []string{spaceID, entry}:
case <-ctx.Done():
return ctx.Err()
}
@@ -435,15 +431,15 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
for i := 0; i < numWorkers; i++ {
errg.Go(func() error {
for match := range work {
spaceID, nodeID, err := fs.tp.ResolveSpaceIDIndexEntry(match[0], match[1])
spaceID, err := fs.tp.ResolveSpaceIDIndexEntry(match[1])
if err != nil {
appctx.GetLogger(ctx).Error().Err(err).Str("id", nodeID).Msg("resolve space id index entry, skipping")
appctx.GetLogger(ctx).Error().Err(err).Str("id", spaceID).Msg("resolve space id index entry, skipping")
continue
}
n, err := node.ReadNode(ctx, fs.lu, spaceID, nodeID, true, nil, true)
n, err := node.ReadNode(ctx, fs.lu, spaceID, spaceID, true, nil, true)
if err != nil {
appctx.GetLogger(ctx).Error().Err(err).Str("id", nodeID).Msg("could not read node, skipping")
appctx.GetLogger(ctx).Error().Err(err).Str("id", spaceID).Msg("could not read node, skipping")
continue
}
@@ -459,7 +455,7 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
case errtypes.NotFound:
// ok
default:
appctx.GetLogger(ctx).Error().Err(err).Str("id", nodeID).Msg("could not convert to storage space")
appctx.GetLogger(ctx).Error().Err(err).Str("id", spaceID).Msg("could not convert to storage space")
}
continue
}
@@ -497,9 +493,9 @@ func (fs *Decomposedfs) ListStorageSpaces(ctx context.Context, filter []*provide
}
// if there are no matches (or they happened to be spaces for the owner) and the node is a child return a space
if int64(len(matches)) <= numShares.Load() && nodeID != spaceID {
if int64(len(matches)) <= numShares.Load() && entry != spaceID {
// try node id
n, err := node.ReadNode(ctx, fs.lu, spaceID, nodeID, true, nil, false) // permission to read disabled space is checked in storageSpaceFromNode
n, err := node.ReadNode(ctx, fs.lu, spaceID, entry, true, nil, false) // permission to read disabled space is checked in storageSpaceFromNode
if err != nil {
return nil, err
}
@@ -817,7 +813,7 @@ func (fs *Decomposedfs) DeleteStorageSpace(ctx context.Context, req *provider.De
// - for decomposedfs/decomposeds3 it is the relative link to the space root
// - for the posixfs it is the node id
func (fs *Decomposedfs) updateIndexes(ctx context.Context, grantee *provider.Grantee, spaceType, spaceID, nodeID string) error {
target := fs.tp.BuildSpaceIDIndexEntry(spaceID, nodeID)
target := fs.tp.BuildSpaceIDIndexEntry(spaceID)
err := fs.linkStorageSpaceType(ctx, spaceType, spaceID, target)
if err != nil {
return err

View File

@@ -23,13 +23,14 @@ import (
provider "github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1"
"github.com/opencloud-eu/reva/v2/pkg/storage"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/node"
)
type Trashbin interface {
Setup(storage.FS) error
ListRecycle(ctx context.Context, spaceID, key, relativePath string) ([]*provider.RecycleItem, error)
RestoreRecycleItem(ctx context.Context, spaceID, key, relativePath string, restoreRef *provider.Reference) error
RestoreRecycleItem(ctx context.Context, spaceID, key, relativePath string, restoreRef *provider.Reference) (*node.Node, error)
PurgeRecycleItem(ctx context.Context, spaceID, key, relativePath string) error
EmptyRecycle(ctx context.Context, spaceID string) error
}

View File

@@ -56,17 +56,10 @@ func init() {
tracer = otel.Tracer("github.com/cs3org/reva/pkg/storage/utils/decomposedfs/tree")
}
// Blobstore defines an interface for storing blobs in a blobstore
type Blobstore interface {
Upload(node *node.Node, source string) error
Download(node *node.Node) (io.ReadCloser, error)
Delete(node *node.Node) error
}
// Tree manages a hierarchical tree
type Tree struct {
lookup node.PathLookup
blobstore Blobstore
blobstore node.Blobstore
propagator propagator.Propagator
permissions permissions.Permissions
@@ -79,7 +72,7 @@ type Tree struct {
type PermissionCheckFunc func(rp *provider.ResourcePermissions) bool
// New returns a new instance of Tree
func New(lu node.PathLookup, bs Blobstore, o *options.Options, p permissions.Permissions, cache store.Store, log *zerolog.Logger) *Tree {
func New(lu node.PathLookup, bs node.Blobstore, o *options.Options, p permissions.Permissions, cache store.Store, log *zerolog.Logger) *Tree {
return &Tree{
lookup: lu,
blobstore: bs,
@@ -524,7 +517,7 @@ func (t *Tree) Delete(ctx context.Context, n *node.Node) (err error) {
}
// RestoreRecycleItemFunc returns a node and a function to restore it from the trash.
func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPath string, targetNode *node.Node) (*node.Node, *node.Node, func() error, error) {
func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPath string, targetNode *node.Node) (*node.Node, *node.Node, func() (*node.Node, error), error) {
_, span := tracer.Start(ctx, "RestoreRecycleItemFunc")
defer span.End()
logger := appctx.GetLogger(ctx)
@@ -555,9 +548,9 @@ func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPa
return nil, nil, nil, err
}
fn := func() error {
fn := func() (*node.Node, error) {
if targetNode.Exists {
return errtypes.AlreadyExists("origin already exists")
return nil, errtypes.AlreadyExists("origin already exists")
}
parts := strings.SplitN(recycleNode.ID, node.TrashIDDelimiter, 2)
@@ -567,18 +560,18 @@ func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPa
// add the entry for the parent dir
err = os.Symlink("../../../../../"+lookup.Pathify(originalId, 4, 2), filepath.Join(targetNode.ParentPath(), targetNode.Name))
if err != nil {
return err
return nil, err
}
// attempt to rename only if we're not in a subfolder
if recycleNode.ID != restoreNode.ID {
err = os.Rename(recycleNode.InternalPath(), restoreNode.InternalPath())
if err != nil {
return err
return nil, err
}
err = t.lookup.MetadataBackend().Rename(recycleNode, restoreNode)
if err != nil {
return err
return nil, err
}
}
@@ -590,7 +583,7 @@ func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPa
attrs.SetString(prefixes.ParentidAttr, targetNode.ParentID)
if err = t.lookup.MetadataBackend().SetMultiple(ctx, restoreNode, map[string][]byte(attrs), true); err != nil {
return errors.Wrap(err, "Decomposedfs: could not update recycle node")
return nil, errors.Wrap(err, "Decomposedfs: could not update recycle node")
}
// delete item link in trash
@@ -598,7 +591,7 @@ func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPa
if trashPath != "" && trashPath != "/" {
resolvedTrashRoot, err := filepath.EvalSymlinks(trashItem)
if err != nil {
return errors.Wrap(err, "Decomposedfs: could not resolve trash root")
return nil, errors.Wrap(err, "Decomposedfs: could not resolve trash root")
}
deletePath = filepath.Join(resolvedTrashRoot, trashPath)
if err = os.Remove(deletePath); err != nil {
@@ -609,18 +602,11 @@ func (t *Tree) RestoreRecycleItemFunc(ctx context.Context, spaceid, key, trashPa
logger.Error().Err(err).Str("trashItem", trashItem).Str("deletePath", deletePath).Str("trashPath", trashPath).Msg("error recursively deleting trash item")
}
}
var sizeDiff int64
if recycleNode.IsDir(ctx) {
treeSize, err := recycleNode.GetTreeSize(ctx)
if err != nil {
return err
}
sizeDiff = int64(treeSize)
} else {
sizeDiff = recycleNode.Blobsize
}
return t.Propagate(ctx, targetNode, sizeDiff)
rn := node.New(restoreNode.SpaceID, restoreNode.ID, targetNode.ParentID, targetNode.Name, recycleNode.Blobsize, recycleNode.BlobID, recycleNode.Type(ctx), recycleNode.Owner(), t.lookup)
rn.SpaceRoot = targetNode.SpaceRoot
rn.Exists = true
// the recycle node has an id with the trash timestamp, but the propagation is only interested in the parent id
return rn, nil
}
return recycleNode, parent, fn, nil
}
@@ -801,7 +787,7 @@ func (t *Tree) Propagate(ctx context.Context, n *node.Node, sizeDiff int64) (err
// WriteBlob writes a blob to the blobstore
func (t *Tree) WriteBlob(node *node.Node, source string) error {
return t.blobstore.Upload(node, source)
return t.blobstore.Upload(node, source, "")
}
// ReadBlob reads a blob from the blobstore
@@ -826,13 +812,14 @@ func (t *Tree) DeleteBlob(node *node.Node) error {
}
// BuildSpaceIDIndexEntry returns the entry for the space id index
func (t *Tree) BuildSpaceIDIndexEntry(spaceID, nodeID string) string {
func (t *Tree) BuildSpaceIDIndexEntry(spaceID string) string {
return "../../../spaces/" + lookup.Pathify(spaceID, 1, 2) + "/nodes/" + lookup.Pathify(spaceID, 4, 2)
}
// ResolveSpaceIDIndexEntry returns the node id for the space id index entry
func (t *Tree) ResolveSpaceIDIndexEntry(_, entry string) (string, string, error) {
return ReadSpaceAndNodeFromIndexLink(entry)
func (t *Tree) ResolveSpaceIDIndexEntry(entry string) (string, error) {
spaceID, _, err := ReadSpaceAndNodeFromIndexLink(entry)
return spaceID, err
}
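The index entry built above shards the space id into a nested relative path via `lookup.Pathify`. A rough sketch of that sharding, under the assumption that `Pathify(id, depth, width)` splits the first `depth` segments of `width` characters off the id and appends the remainder (`pathify` and `buildSpaceIDIndexEntry` here are illustrative stand-ins, not the real reva helpers):

```go
package main

import (
	"fmt"
	"strings"
)

// pathify shards an id into up to depth segments of width characters,
// joined by slashes, with the remainder appended as the final element.
func pathify(id string, depth, width int) string {
	var parts []string
	i := 0
	for ; i < depth && (i+1)*width < len(id); i++ {
		parts = append(parts, id[i*width:(i+1)*width])
	}
	parts = append(parts, id[i*width:])
	return strings.Join(parts, "/")
}

// buildSpaceIDIndexEntry mimics the relative-link layout shown in the diff.
func buildSpaceIDIndexEntry(spaceID string) string {
	return "../../../spaces/" + pathify(spaceID, 1, 2) + "/nodes/" + pathify(spaceID, 4, 2)
}

func main() {
	fmt.Println(pathify("deadbeef-1234", 4, 2)) // de/ad/be/ef/-1234
	fmt.Println(buildSpaceIDIndexEntry("deadbeef-1234"))
}
```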
// ReadSpaceAndNodeFromIndexLink reads a symlink and parses space and node id if the link has the correct format, eg:

View File

@@ -125,7 +125,7 @@ func (store DecomposedFsStore) List(ctx context.Context) ([]*DecomposedFsSession
func (store DecomposedFsStore) Get(ctx context.Context, id string) (*DecomposedFsSession, error) {
sessionPath := sessionPath(store.root, id)
match := _idRegexp.FindStringSubmatch(sessionPath)
if match == nil || len(match) < 2 {
if len(match) < 2 {
return nil, fmt.Errorf("invalid upload path")
}

View File

@@ -125,7 +125,7 @@ func (store OcisStore) List(ctx context.Context) ([]*OcisSession, error) {
func (store OcisStore) Get(ctx context.Context, id string) (*OcisSession, error) {
sessionPath := sessionPath(store.root, id)
match := _idRegexp.FindStringSubmatch(sessionPath)
if match == nil || len(match) < 2 {
if len(match) < 2 {
return nil, fmt.Errorf("invalid upload path")
}

View File

@@ -16,7 +16,7 @@
// granted to it by virtue of its status as an Intergovernmental Organization
// or submit itself to any jurisdiction.
// Code generated by mockery v2.40.2. DO NOT EDIT.
// Code generated by mockery v2.53.2. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,22 @@
// Code generated by mockery v2.46.3. DO NOT EDIT.
// Copyright 2018-2022 CERN
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// In applying this license, CERN does not waive the privileges and immunities
// granted to it by virtue of its status as an Intergovernmental Organization
// or submit itself to any jurisdiction.
// Code generated by mockery v2.53.2. DO NOT EDIT.
package mocks

View File

@@ -14,6 +14,8 @@ type StoreComposer struct {
Concater ConcaterDataStore
UsesLengthDeferrer bool
LengthDeferrer LengthDeferrerDataStore
ContentServer ContentServerDataStore
UsesContentServer bool
}
// NewStoreComposer creates a new and empty store composer.
@@ -85,3 +87,8 @@ func (store *StoreComposer) UseLengthDeferrer(ext LengthDeferrerDataStore) {
store.UsesLengthDeferrer = ext != nil
store.LengthDeferrer = ext
}
func (store *StoreComposer) UseContentServer(ext ContentServerDataStore) {
store.UsesContentServer = ext != nil
store.ContentServer = ext
}

View File

@@ -75,6 +75,13 @@ type Config struct {
// If the error is non-nil, the error will be forwarded to the client. Furthermore,
// HTTPResponse will be ignored and the error value can contain values for the HTTP response.
PreFinishResponseCallback func(hook HookEvent) (HTTPResponse, error)
// PreUploadTerminateCallback will be invoked on DELETE requests before an upload is terminated,
// giving the application the opportunity to reject the termination. For example, to ensure resources
// used by other services are not deleted.
// If the callback returns no error, optional values from HTTPResponse will be contained in the HTTP response.
// If the error is non-nil, the error will be forwarded to the client. Furthermore,
// HTTPResponse will be ignored and the error value can contain values for the HTTP response.
PreUploadTerminateCallback func(hook HookEvent) (HTTPResponse, error)
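A callback of this shape can veto termination by returning an error, which the handler forwards to the client. A minimal sketch with local stand-in types (`HTTPResponse` and `HookEvent` shapes assumed for illustration; the real types live in the tusd handler package):

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the handler's hook types.
type HTTPResponse struct{ StatusCode int }
type HookEvent struct{ UploadID string }

// newTerminateGuard returns a PreUploadTerminateCallback-style function that
// rejects DELETE-based termination while the upload is still referenced.
func newTerminateGuard(inUse map[string]bool) func(HookEvent) (HTTPResponse, error) {
	return func(ev HookEvent) (HTTPResponse, error) {
		if inUse[ev.UploadID] {
			// A non-nil error aborts the termination and is sent to the client.
			return HTTPResponse{}, errors.New("upload is still in use")
		}
		return HTTPResponse{}, nil
	}
}

func main() {
	guard := newTerminateGuard(map[string]bool{"abc": true})

	_, err := guard(HookEvent{UploadID: "abc"})
	fmt.Println(err != nil) // termination rejected

	_, err = guard(HookEvent{UploadID: "xyz"})
	fmt.Println(err == nil) // termination allowed
}
```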
// GracefulRequestCompletionTimeout is the timeout for operations to complete after an HTTP
// request has ended (successfully or by error). For example, if an HTTP request is interrupted,
// instead of stopping immediately, the handler and data store will be given some additional

View File

@@ -3,6 +3,7 @@ package handler
import (
"context"
"io"
"net/http"
)
type MetaData map[string]string
@@ -191,3 +192,21 @@ type Lock interface {
// Unlock releases an existing lock for the given upload.
Unlock() error
}
type ServableUpload interface {
// ServeContent serves the uploaded data as specified by the GET request.
// It allows data stores to delegate the handling of range requests and conditional
// requests to their underlying providers.
// The tusd handler will set the Content-Type and Content-Disposition headers
// before calling ServeContent, but the implementation can override them.
// After calling ServeContent, the handler will not take any further action
// other than handling a potential error.
ServeContent(ctx context.Context, w http.ResponseWriter, r *http.Request) error
}
// ContentServerDataStore is the interface for DataStores that can serve content directly.
// When the handler serves a GET request, it will pass the request to ServeContent
// and delegate its handling to the DataStore, instead of using GetReader to obtain the content.
type ContentServerDataStore interface {
AsServableUpload(upload Upload) ServableUpload
}

View File

@@ -60,6 +60,7 @@ var (
ErrInvalidUploadDeferLength = NewError("ERR_INVALID_UPLOAD_LENGTH_DEFER", "invalid Upload-Defer-Length header", http.StatusBadRequest)
ErrUploadStoppedByServer = NewError("ERR_UPLOAD_STOPPED", "upload has been stopped by server", http.StatusBadRequest)
ErrUploadRejectedByServer = NewError("ERR_UPLOAD_REJECTED", "upload creation has been rejected by server", http.StatusBadRequest)
ErrUploadTerminationRejected = NewError("ERR_UPLOAD_TERMINATION_REJECTED", "upload termination has been rejected by server", http.StatusBadRequest)
ErrUploadInterrupted = NewError("ERR_UPLOAD_INTERRUPTED", "upload has been interrupted by another request for this upload resource", http.StatusBadRequest)
ErrServerShutdown = NewError("ERR_SERVER_SHUTDOWN", "request has been interrupted because the server is shutting down", http.StatusServiceUnavailable)
ErrOriginNotAllowed = NewError("ERR_ORIGIN_NOT_ALLOWED", "request origin is not allowed", http.StatusForbidden)
@@ -177,10 +178,10 @@ func (handler *UnroutedHandler) Middleware(h http.Handler) http.Handler {
// We also update the write deadline, but make sure that it is larger than the read deadline, so we
// can still write a response in the case of a read timeout.
if err := c.resC.SetReadDeadline(time.Now().Add(handler.config.NetworkTimeout)); err != nil {
c.log.Warn("NetworkControlError", "error", err)
c.log.WarnContext(c, "NetworkControlError", "error", err)
}
if err := c.resC.SetWriteDeadline(time.Now().Add(2 * handler.config.NetworkTimeout)); err != nil {
c.log.Warn("NetworkControlError", "error", err)
c.log.WarnContext(c, "NetworkControlError", "error", err)
}
// Allow overriding the HTTP method. The reason for this is
@@ -190,7 +191,7 @@ func (handler *UnroutedHandler) Middleware(h http.Handler) http.Handler {
r.Method = newMethod
}
c.log.Info("RequestIncoming")
c.log.InfoContext(c, "RequestIncoming")
handler.Metrics.incRequestsTotal(r.Method)
@@ -405,7 +406,7 @@ func (handler *UnroutedHandler) PostFile(w http.ResponseWriter, r *http.Request)
handler.Metrics.incUploadsCreated()
c.log = c.log.With("id", id)
c.log.Info("UploadCreated", "size", size, "url", url)
c.log.InfoContext(c, "UploadCreated", "size", size, "url", url)
if handler.config.NotifyCreatedUploads {
handler.CreatedUploads <- newHookEvent(c, info)
@@ -572,7 +573,7 @@ func (handler *UnroutedHandler) PostFileV2(w http.ResponseWriter, r *http.Reques
handler.Metrics.incUploadsCreated()
c.log = c.log.With("id", id)
c.log.Info("UploadCreated", "size", info.Size, "url", url)
c.log.InfoContext(c, "UploadCreated", "size", info.Size, "url", url)
if handler.config.NotifyCreatedUploads {
handler.CreatedUploads <- newHookEvent(c, info)
@@ -891,7 +892,7 @@ func (handler *UnroutedHandler) writeChunk(c *httpContext, resp HTTPResponse, up
maxSize = length
}
c.log.Info("ChunkWriteStart", "maxSize", maxSize, "offset", offset)
c.log.InfoContext(c, "ChunkWriteStart", "maxSize", maxSize, "offset", offset)
var bytesWritten int64
var err error
@@ -907,12 +908,12 @@ func (handler *UnroutedHandler) writeChunk(c *httpContext, resp HTTPResponse, up
// Update the read deadline for every successful read operation. This ensures that the request handler
// keeps going while data is transmitted but that dead connections can also time out and be cleaned up.
if err := c.resC.SetReadDeadline(time.Now().Add(handler.config.NetworkTimeout)); err != nil {
c.log.Warn("NetworkTimeoutError", "error", err)
c.log.WarnContext(c, "NetworkTimeoutError", "error", err)
}
// The write deadline is updated accordingly to ensure that we can also write responses.
if err := c.resC.SetWriteDeadline(time.Now().Add(2 * handler.config.NetworkTimeout)); err != nil {
c.log.Warn("NetworkTimeoutError", "error", err)
c.log.WarnContext(c, "NetworkTimeoutError", "error", err)
}
}
@@ -935,7 +936,7 @@ func (handler *UnroutedHandler) writeChunk(c *httpContext, resp HTTPResponse, up
// it in the response, if the store did not also return an error.
bodyErr := c.body.hasError()
if bodyErr != nil {
c.log.Error("BodyReadError", "error", bodyErr.Error())
c.log.ErrorContext(c, "BodyReadError", "error", bodyErr.Error())
if err == nil {
err = bodyErr
}
@@ -947,12 +948,12 @@ func (handler *UnroutedHandler) writeChunk(c *httpContext, resp HTTPResponse, up
if terminateErr := handler.terminateUpload(c, upload, info); terminateErr != nil {
// We only log this error and not show it to the user since this
// termination error is not relevant to the uploading client
c.log.Error("UploadStopTerminateError", "error", terminateErr.Error())
c.log.ErrorContext(c, "UploadStopTerminateError", "error", terminateErr.Error())
}
}
}
c.log.Info("ChunkWriteComplete", "bytesWritten", bytesWritten)
c.log.InfoContext(c, "ChunkWriteComplete", "bytesWritten", bytesWritten)
// Send new offset to client
newOffset := offset + bytesWritten
@@ -1003,7 +1004,7 @@ func (handler *UnroutedHandler) emitFinishEvents(c *httpContext, resp HTTPRespon
resp = resp.MergeWith(resp2)
}
c.log.Info("UploadFinished", "size", info.Size)
c.log.InfoContext(c, "UploadFinished", "size", info.Size)
handler.Metrics.incUploadsFinished()
if handler.config.NotifyCompleteUploads {
@@ -1047,6 +1048,7 @@ func (handler *UnroutedHandler) GetFile(w http.ResponseWriter, r *http.Request)
return
}
// Fall back to the existing GetReader implementation if ContentServerDataStore is not implemented
contentType, contentDisposition := filterContentType(info)
resp := HTTPResponse{
StatusCode: http.StatusOK,
@@ -1058,6 +1060,27 @@ func (handler *UnroutedHandler) GetFile(w http.ResponseWriter, r *http.Request)
Body: "", // Body is intentionally left empty, and we copy it manually later.
}
// If the data store implements ContentServerDataStore, delegate the handling
// of GET requests to the data store.
// Otherwise, we will use the existing GetReader implementation.
if handler.composer.UsesContentServer {
servableUpload := handler.composer.ContentServer.AsServableUpload(upload)
// Pass file type and name to the implementation, but it may override them.
w.Header().Set("Content-Type", resp.Header["Content-Type"])
w.Header().Set("Content-Disposition", resp.Header["Content-Disposition"])
// Use loggingResponseWriter to get the ResponseOutgoing log entry that
// normally handler.sendResp would produce.
loggingW := &loggingResponseWriter{ResponseWriter: w, logger: c.log}
err = servableUpload.ServeContent(c, loggingW, r)
if err != nil {
handler.sendError(c, err)
}
return
}
// If no data has been uploaded yet, respond with an empty "204 No Content" status.
if info.Offset == 0 {
resp.StatusCode = http.StatusNoContent
@@ -1065,6 +1088,15 @@ func (handler *UnroutedHandler) GetFile(w http.ResponseWriter, r *http.Request)
return
}
if handler.composer.UsesContentServer {
servableUpload := handler.composer.ContentServer.AsServableUpload(upload)
err = servableUpload.ServeContent(c, w, r)
if err != nil {
handler.sendError(c, err)
}
return
}
src, err := upload.GetReader(c)
if err != nil {
handler.sendError(c, err)
@@ -1172,7 +1204,7 @@ func (handler *UnroutedHandler) DelFile(w http.ResponseWriter, r *http.Request)
}
var info FileInfo
if handler.config.NotifyTerminatedUploads {
if handler.config.NotifyTerminatedUploads || handler.config.PreUploadTerminateCallback != nil {
info, err = upload.GetInfo(c)
if err != nil {
handler.sendError(c, err)
@@ -1180,15 +1212,26 @@ func (handler *UnroutedHandler) DelFile(w http.ResponseWriter, r *http.Request)
}
}
resp := HTTPResponse{
StatusCode: http.StatusNoContent,
}
if handler.config.PreUploadTerminateCallback != nil {
resp2, err := handler.config.PreUploadTerminateCallback(newHookEvent(c, info))
if err != nil {
handler.sendError(c, err)
return
}
resp = resp.MergeWith(resp2)
}
err = handler.terminateUpload(c, upload, info)
if err != nil {
handler.sendError(c, err)
return
}
handler.sendResp(c, HTTPResponse{
StatusCode: http.StatusNoContent,
})
handler.sendResp(c, resp)
}
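The control flow added to DelFile — the callback may veto termination by returning an error, or customize the 204 reply, which is then merged — can be sketched with stand-in types (the names mirror the diff but are simplified assumptions, not tusd's actual definitions):

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the handler types used in DelFile.
type HTTPResponse struct {
	StatusCode int
	Header     map[string]string
}

// MergeWith lets a callback override parts of the planned response,
// mirroring resp = resp.MergeWith(resp2) in the diff.
func (r HTTPResponse) MergeWith(o HTTPResponse) HTTPResponse {
	if o.StatusCode != 0 {
		r.StatusCode = o.StatusCode
	}
	for k, v := range o.Header {
		if r.Header == nil {
			r.Header = map[string]string{}
		}
		r.Header[k] = v
	}
	return r
}

type FileInfo struct{ Offset, Size int64 }

// preTerminate mimics a PreUploadTerminateCallback: returning an error
// vetoes the DELETE; a non-zero response customizes the reply.
func preTerminate(info FileInfo) (HTTPResponse, error) {
	if info.Offset == info.Size {
		return HTTPResponse{}, errors.New("completed uploads may not be terminated")
	}
	return HTTPResponse{Header: map[string]string{"X-Terminated-By": "policy"}}, nil
}

func main() {
	resp := HTTPResponse{StatusCode: 204}
	r2, err := preTerminate(FileInfo{Offset: 10, Size: 100})
	if err != nil {
		panic(err)
	}
	resp = resp.MergeWith(r2)
	fmt.Println(resp.StatusCode, resp.Header["X-Terminated-By"])
}
```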
// terminateUpload passes a given upload to the DataStore's Terminater,
@@ -1208,7 +1251,7 @@ func (handler *UnroutedHandler) terminateUpload(c *httpContext, upload Upload, i
handler.TerminatedUploads <- newHookEvent(c, info)
}
c.log.Info("UploadTerminated")
c.log.InfoContext(c, "UploadTerminated")
handler.Metrics.incUploadsTerminated()
return nil
@@ -1222,7 +1265,7 @@ func (handler *UnroutedHandler) sendError(c *httpContext, err error) {
var detailedErr Error
if !errors.As(err, &detailedErr) {
c.log.Error("InternalServerError", "message", err.Error())
c.log.ErrorContext(c, "InternalServerError", "message", err.Error())
detailedErr = NewError("ERR_INTERNAL_SERVER_ERROR", err.Error(), http.StatusInternalServerError)
}
@@ -1240,7 +1283,7 @@ func (handler *UnroutedHandler) sendError(c *httpContext, err error) {
func (handler *UnroutedHandler) sendResp(c *httpContext, resp HTTPResponse) {
resp.writeTo(c.res)
c.log.Info("ResponseOutgoing", "status", resp.StatusCode, "body", resp.Body)
c.log.InfoContext(c, "ResponseOutgoing", "status", resp.StatusCode, "body", resp.Body)
}
// Make an absolute URL to the given upload id. If the base path is absolute
@@ -1323,6 +1366,14 @@ func getHostAndProtocol(r *http.Request, allowForwarded bool) (host, proto strin
}
}
// Remove default ports
if proto == "http" {
host = strings.TrimSuffix(host, ":80")
}
if proto == "https" {
host = strings.TrimSuffix(host, ":443")
}
return
}
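The default-port stripping added above is equivalent to this small helper (a sketch; `stripDefaultPort` is an assumed name, the logic is the `strings.TrimSuffix` calls from the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// stripDefaultPort drops the scheme's default port so generated upload URLs
// stay canonical: example.com:443 and example.com produce the same URL.
func stripDefaultPort(host, proto string) string {
	switch proto {
	case "http":
		return strings.TrimSuffix(host, ":80")
	case "https":
		return strings.TrimSuffix(host, ":443")
	}
	return host
}

func main() {
	fmt.Println(stripDefaultPort("example.com:443", "https"))
	fmt.Println(stripDefaultPort("example.com:8080", "https"))
}
```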
@@ -1393,7 +1444,7 @@ func (handler *UnroutedHandler) lockUpload(c *httpContext, id string) (Lock, err
// No need to wrap this in a sync.OnceFunc because c.cancel will be a noop after the first call.
releaseLock := func() {
c.log.Info("UploadInterrupted")
c.log.InfoContext(c, "UploadInterrupted")
c.cancel(ErrUploadInterrupted)
}
@@ -1671,3 +1722,20 @@ func validateUploadId(newId string) error {
return nil
}
// loggingResponseWriter is a wrapper around http.ResponseWriter that logs the
// final status code similar to UnroutedHandler.sendResp.
type loggingResponseWriter struct {
http.ResponseWriter
logger *slog.Logger
}
func (w *loggingResponseWriter) WriteHeader(statusCode int) {
if statusCode >= 200 {
w.logger.Info("ResponseOutgoing", "status", statusCode)
}
w.ResponseWriter.WriteHeader(statusCode)
}
// Unwrap provides access to the underlying http.ResponseWriter.
func (w *loggingResponseWriter) Unwrap() http.ResponseWriter { return w.ResponseWriter }
@@ -26,7 +26,7 @@ import (
var (
// MinClusterVersion is the min cluster version this etcd binary is compatible with.
MinClusterVersion = "3.0.0"
Version = "3.5.18"
Version = "3.5.19"
APIVersion = "unknown"
// Git SHA Value will be set during build


@@ -5,5 +5,5 @@ package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
// Version is the current release version of the OpenTelemetry OTLP trace exporter in use.
func Version() string {
return "1.34.0"
return "1.35.0"
}


@@ -1,6 +1,21 @@
Releases
========
v1.11.0 (2023-03-28)
====================
- `Errors` now supports any error that implements multiple-error
interface.
- Add `Every` function to allow checking if all errors in the chain
satisfies `errors.Is` against the target error.
v1.10.0 (2023-03-08)
====================
- Comply with Go 1.20's multiple-error interface.
- Drop Go 1.18 support.
Per the support policy, only Go 1.19 and 1.20 are supported now.
- Drop all non-test external dependencies.
v1.9.0 (2022-12-12)
===================


@@ -2,9 +2,29 @@
`multierr` allows combining one or more Go `error`s together.
## Features
- **Idiomatic**:
multierr follows best practices in Go, and keeps your code idiomatic.
- It keeps the underlying error type hidden,
allowing you to deal in `error` values exclusively.
- It provides APIs to safely append into an error from a `defer` statement.
- **Performant**:
multierr is optimized for performance:
- It avoids allocations where possible.
- It utilizes slice resizing semantics to optimize common cases
like appending into the same error object from a loop.
- **Interoperable**:
multierr interoperates with the Go standard library's error APIs seamlessly:
- The `errors.Is` and `errors.As` functions *just work*.
- **Lightweight**:
multierr comes with virtually no dependencies.
## Installation
go get -u go.uber.org/multierr
```bash
go get -u go.uber.org/multierr@latest
```
## Status

vendor/go.uber.org/multierr/error.go (generated, vendored)

@@ -1,4 +1,4 @@
// Copyright (c) 2017-2021 Uber Technologies, Inc.
// Copyright (c) 2017-2023 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
@@ -147,8 +147,7 @@ import (
"io"
"strings"
"sync"
"go.uber.org/atomic"
"sync/atomic"
)
var (
@@ -196,23 +195,7 @@ type errorGroup interface {
//
// Callers of this function are free to modify the returned slice.
func Errors(err error) []error {
if err == nil {
return nil
}
// Note that we're casting to multiError, not errorGroup. Our contract is
// that returned errors MAY implement errorGroup. Errors, however, only
// has special behavior for multierr-specific error objects.
//
// This behavior can be expanded in the future but I think it's prudent to
// start with as little as possible in terms of contract and possibility
// of misuse.
eg, ok := err.(*multiError)
if !ok {
return []error{err}
}
return append(([]error)(nil), eg.Errors()...)
return extractErrors(err)
}
// multiError is an error that holds one or more errors.
@@ -227,8 +210,6 @@ type multiError struct {
errors []error
}
var _ errorGroup = (*multiError)(nil)
// Errors returns the list of underlying errors.
//
// This slice MUST NOT be modified.
@@ -239,33 +220,6 @@ func (merr *multiError) Errors() []error {
return merr.errors
}
// As attempts to find the first error in the error list that matches the type
// of the value that target points to.
//
// This function allows errors.As to traverse the values stored on the
// multierr error.
func (merr *multiError) As(target interface{}) bool {
for _, err := range merr.Errors() {
if errors.As(err, target) {
return true
}
}
return false
}
// Is attempts to match the provided error against errors in the error list.
//
// This function allows errors.Is to traverse the values stored on the
// multierr error.
func (merr *multiError) Is(target error) bool {
for _, err := range merr.Errors() {
if errors.Is(err, target) {
return true
}
}
return false
}
func (merr *multiError) Error() string {
if merr == nil {
return ""
@@ -281,6 +235,17 @@ func (merr *multiError) Error() string {
return result
}
// Every compares every error in the given err against the given target error
// using [errors.Is], and returns true only if every comparison returned true.
func Every(err error, target error) bool {
for _, e := range extractErrors(err) {
if !errors.Is(e, target) {
return false
}
}
return true
}
func (merr *multiError) Format(f fmt.State, c rune) {
if c == 'v' && f.Flag('+') {
merr.writeMultiline(f)

Some files were not shown because too many files have changed in this diff.