Mirror of https://github.com/opencloud-eu/opencloud.git
Synced 2026-01-29 00:11:21 -05:00

Compare commits: 37 commits (stablecite...showTraceL)
Commits in this compare (SHA1):

ade34ac342, 7cc92aebf9, 38bfe7ed3e, a248ace6d4, 7b7a5ea3f5, 2c75042f52,
d49123b36b, b40c3f068c, 8d3b2e3eeb, e8b1834706, a732547512, f0d807d719,
601ebc0f0f, 87ef2d97fa, e86e95db24, 7b367c765c, dccc3a0f21, 22a7eaa005,
2f7542c36e, 3eac173644, 42fd54dd35, 21207eba40, e33ff722f7, 4b6e44d15c,
82033a0c2f, 7759adb4a9, 0fc4af406b, 92884a4cbd, 063217c3e6, e9bd5c4058,
df529acdce, 3654897f60, 84ff31c7b6, 123b641fcb, 98d876b120, 9264617a28,
beab7ce18c
.github/release_template.md (vendored): 132 lines changed
@@ -3,79 +3,83 @@

### Prerequisites

* [ ] DEV/QA: [Kickoff meeting](https://???)
* [ ] DEV/QA: Define client versions and provide a list of breaking changes for the desktop/mobile teams
* [ ] DEV/QA: Check new strings and align with clients
* [ ] DEV/DOCS: Create list of pending docs tasks
* [ ] DEV: Create branch `release-x.x.x-rc.x` -> CODEFREEZE
* [ ] DEV: Bump OpenCloud version in the necessary files
  * [ ] DEV: `changelog/CHANGELOG.tmpl`
  * [ ] DEV: `pkg/version/version.go`
  * [ ] DEV: `sonar-project.properties`
* [ ] DEV: Prepare changelog folder in `changelog/x.x.x_????_??_??`
* [ ] DEV: Check successful CI run on release branch
* [ ] DEV: Create signed tag `vx.y.z-rc.x`
* [ ] DEV: Check successful CI run on `vx.y.z-rc.x` tag / BLOCKING for all further activity
* [ ] DEV: Merge back release branch
* [ ] DEV: Bump released deployments to `vx.y.z-rc.x`
  * [ ] DEV: https://cloud.opencloud.eu/
  * [ ] DEV: needs snapshot and migration
* [ ] DEV/QA: Bump web version
* [ ] DEV/QA: Bump reva version
* [ ] DEV/QA: Create rc tag `vx.y.z-rc.x`
* [ ] DEV: Update introductionVersion
* [ ] DEV: Add new production version

### QA Phase

* [ ] QA: Confirmatory testing (if needed)
* [ ] QA: [Compatibility test](???)
* [ ] QA: [Performance test](https://github.com/opencloud-eu/cdperf/tree/main/packages/k6-tests/src)
* [ ] QA: Documentation test:
  * [ ] QA: Single binary setup
  * [ ] QA: Docker setup
  * [ ] QA: Docker-compose setup
  * [ ] QA: helm/k8s setup
* [ ] QA: e2e with different deployments:
  * [ ] QA: [wopi](???.works)
  * [ ] QA: [traefik](???.works)
  * [ ] QA: [ldap](???.works)
* [ ] QA: Compatibility test with posix fs
* [ ] QA: Compatibility test with decomposed fs
* [ ] DEV/QA: Performance test
  * [ ] STORAGE_USERS_DRIVER=posix
    * [ ] 75 VUs, 60 min
    * [ ] 75 VUs, 60 min
  * [ ] STORAGE_USERS_DRIVER=decomposed
    * [ ] 75 VUs, 60 min
    * [ ] 75 VUs, 60 min
* [ ] QA: Documentation test
  * [ ] QA: Review documentation
  * [ ] QA: Verify that all new features are documented
  * [ ] QA: Create upgrade documentation
  * [ ] QA: Check installation guides

* [ ] QA: e2e with different storage:
  * [ ] QA: local
  * [ ] QA: nfs
  * [ ] QA: s3
  * [ ] QA: decomposed
  * [ ] QA: decomposeds3
  * [ ] QA: posix
  * [ ] QA: posix with watch_fs enabled
* [ ] QA: e2e with different deployments:
  * [ ] QA: e2e tests against opencloud-charts
  * [ ] QA: binary
  * [ ] QA: multitenancy
  * [ ] QA: docker using the [docker-compose test plan](https://github.com/opencloud-eu/qa/blob/main/.github/ISSUE_TEMPLATE/docker-compose_test_plan_template.md)
* [ ] QA: Different clients:
  * [ ] QA: desktop (define version) https://github.com/opencloud-eu/client/releases
    * [ ] QA: against macOS: smoke test
    * [ ] QA: against Windows: smoke test
    * [ ] QA: against macOS: exploratory testing
    * [ ] QA: against Windows: exploratory testing
    * [ ] QA: against Linux (use auto tests)
  * [ ] QA: android (define version) https://github.com/opencloud-eu/android/releases
  * [ ] QA: ios (define version)
* [ ] QA: [Smoke test](???) on Web Office (Collabora, ONLYOFFICE, Microsoft Office)
* [ ] QA: Smoke test of the Hello extension
* [ ] QA: [Smoke test](???) of ldap
* [ ] QA: Collect the errors found
* [ ] QA: Check the German docs translation
* [ ] QA: German desktop translations at 100%
* [ ] QA: Exploratory testing

### After QA Phase

### Collected bugs

* [ ] Please place all bugs found here

* [ ] Brief company-wide heads-up via mail @tbsbdr
* [ ] Create a list of changed ENV vars and send it to release-coordination@opencloud.eu
  * [ ] Variable name
  * [ ] Introduced in version
  * [ ] Default value
  * [ ] Description
  * [ ] Dependencies on other components
* [ ] DEV: Create branch `release-x.x.x`
* [ ] DEV: Bump OpenCloud version in the necessary files
  * [ ] DEV: `pkg/version/version.go`
  * [ ] DEV: `sonar-project.properties`
  * [ ] DEV: released deployment versions
* [ ] DEV: Prepare changelog folder in `changelog/x.x.x_???`
* [ ] Release notes + breaking changes @tbsbdr
* [ ] Migration + breaking-changes admin doc @???
* [ ] DEV: Create final signed tag
* [ ] DEV: Check successful CI run on `vx.y.z` tag / BLOCKING for all further activity
* [ ] Merge release notes

### After QA Phase (IT related)

### Post-release communication

* [ ] DEV: Create a `docs-stable-x.y` branch based on the docs folder in the OpenCloud repo @micbar
* [ ] DEV/QA: Ping documentation in RC about the new release tag (for the opencloud/helm chart version bump in the docs)
* [ ] DEV/QA: Ping marketing to update all download links (download mirrors are updated on the full hour; wait with the ping until the download is actually available)
* [ ] DEV/QA: Ping @??? once the demo instances are running this release
* [ ] DEV: Merge back release branch
* [ ] DEV: Create `stable-x.y` branch in the OpenCloud repo from the final tag
  * [ ] QA: bump version in `pkg/version.go`
  * [ ] QA: run CI
  * [ ] DEV/QA: create final tag
  * [ ] QA: observe CI run on the tag
* [ ] DEV/QA: Create a new `stable-*` branch
  * [ ] [opencloud](https://github.com/opencloud-eu/opencloud/branches)
  * [ ] [web](https://github.com/opencloud-eu/web/branches)
  * [ ] [reva](https://github.com/opencloud-eu/reva/branches)
  * [ ] [opencloud-compose](https://github.com/opencloud-eu/opencloud-compose/branches)
* [ ] DEV/QA: Publish release notes to the docs
* [ ] DEV/QA: Update [demo.opencloud.eu](https://demo.opencloud.eu/)

### After QA Phase (Marketing / Product / Sales related)

* [ ] Notify marketing that the release is ready @tbsbdr
* [ ] Announce in the public [matrix channel](https://matrix.to/#/#opencloud:matrix.org)
* [ ] Press information @AnneGo137
* [ ] Blog entry @AnneGo137
* [ ] Internal meeting (Groupe pre-webinar) @db-ot
* [ ] Partner briefing (partners should be informed about new features) @matthias
* [ ] Webinar DE & EN @AnneGo137
  * [ ] Presentation DE @tbsbdr / @db-ot
  * [ ] Presentation EN @tbsbdr / @db-ot
* [ ] Extend the website @AnneGo137
  * [ ] Features @AnneGo137
  * [ ] Service & Support: new Enterprise features @tbsbdr
* [ ] OpenCloud_Benefits.pdf updates @AnneGo137
* [ ] Welcome files: features as media @tbsbdr
* [ ] Flyer update @AnneGo137
* [ ] Sales presentation @matthias
CI pipeline definition (Starlark):

```diff
@@ -24,6 +24,7 @@ OC_CI_NODEJS_ALPINE = "quay.io/opencloudeu/nodejs-alpine-ci:24"
 OC_CI_PHP = "quay.io/opencloudeu/php-alpine-ci:%s"
 OC_CI_WAIT_FOR = "quay.io/opencloudeu/wait-for-ci:latest"
 OC_CS3_API_VALIDATOR = "opencloudeu/cs3api-validator:latest"
+OC_CI_WOPI_VALIDATOR = "quay.io/opencloudeu/wopi-validator-ci:latest"
 OC_LITMUS = "owncloudci/litmus:latest"
 ONLYOFFICE_DOCUMENT_SERVER = "onlyoffice/documentserver:7.5.1"
 PLUGINS_DOCKER_BUILDX = "woodpeckerci/plugin-docker-buildx:latest"
@@ -200,13 +201,6 @@ config = {
         ],
         "skip": False,
     },
-    "accountsHashDifficulty": {
-        "skip": False,
-        "suites": [
-            "apiAccountsHashDifficulty",
-        ],
-        "accounts_hash_difficulty": "default",
-    },
     "notification": {
         "suites": [
             "apiNotification",
@@ -233,7 +227,6 @@ config = {
         ],
         "skip": False,
         "antivirusNeeded": True,
-        "generateVirusFiles": True,
         "extraServerEnvironment": {
             "ANTIVIRUS_SCANNER_TYPE": "clamav",
             "ANTIVIRUS_CLAMAV_SOCKET": "tcp://clamav:3310",
@@ -300,7 +293,6 @@ config = {
         "skip": False,
         "withRemotePhp": [True],
         "antivirusNeeded": True,
-        "generateVirusFiles": True,
         "extraServerEnvironment": {
             "ANTIVIRUS_SCANNER_TYPE": "clamav",
             "ANTIVIRUS_CLAMAV_SOCKET": "tcp://clamav:3310",
@@ -666,10 +658,10 @@ def testPipelines(ctx):
     storage = "decomposed"

     if "skip" not in config["cs3ApiTests"] or not config["cs3ApiTests"]["skip"]:
-        pipelines += cs3ApiTests(ctx, storage, "default")
+        pipelines += cs3ApiTests(ctx, storage)
     if "skip" not in config["wopiValidatorTests"] or not config["wopiValidatorTests"]["skip"]:
-        pipelines += wopiValidatorTests(ctx, storage, "builtin", "default")
-        pipelines += wopiValidatorTests(ctx, storage, "cs3", "default")
+        pipelines += wopiValidatorTests(ctx, storage, "builtin")
+        pipelines += wopiValidatorTests(ctx, storage, "cs3")

     pipelines += localApiTestPipeline(ctx)
     pipelines += coreApiTestPipeline(ctx)
@@ -1058,12 +1050,12 @@ def codestyle(ctx):

     return pipelines

-def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
+def cs3ApiTests(ctx, storage):
     pipeline = {
         "name": "test-cs3-API-%s" % storage,
         "steps": evaluateWorkflowStep() +
                  restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
-                 opencloudServer(storage, accounts_hash_difficulty, deploy_type = "cs3api_validator") +
+                 opencloudServer(storage, deploy_type = "cs3api_validator") +
                  [
                      {
                          "name": "cs3ApiTests",
@@ -1094,7 +1086,7 @@ def cs3ApiTests(ctx, storage, accounts_hash_difficulty = 4):
     ])
     return [pipeline]

-def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty = 4):
+def wopiValidatorTests(ctx, storage, wopiServerType):
     testgroups = [
         "BaseWopiViewing",
         "CheckFileInfoSchema",
@@ -1137,7 +1129,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
     for testgroup in testgroups:
         validatorTests.append({
             "name": "wopiValidatorTests-%s" % testgroup,
-            "image": "owncloudci/wopi-validator",
+            "image": OC_CI_WOPI_VALIDATOR,
             "commands": [
                 "export WOPI_TOKEN=$(cat accesstoken)",
                 "echo $WOPI_TOKEN",
@@ -1153,7 +1145,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
     for builtinOnlyGroup in builtinOnlyTestGroups:
         validatorTests.append({
             "name": "wopiValidatorTests-%s" % builtinOnlyGroup,
-            "image": "owncloudci/wopi-validator",
+            "image": OC_CI_WOPI_VALIDATOR,
             "commands": [
                 "export WOPI_TOKEN=$(cat accesstoken)",
                 "echo $WOPI_TOKEN",
@@ -1172,7 +1164,7 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
         "steps": evaluateWorkflowStep() +
                  restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
                  waitForServices("fake-office", ["fakeoffice:8080"]) +
-                 opencloudServer(storage, accounts_hash_difficulty, deploy_type = "wopi_validator", extra_server_environment = extra_server_environment) +
+                 opencloudServer(storage, deploy_type = "wopi_validator", extra_server_environment = extra_server_environment) +
                  wopiServer +
                  waitForServices("wopi-fakeoffice", ["wopi-fakeoffice:9300"]) +
                  [
@@ -1217,29 +1209,21 @@ def wopiValidatorTests(ctx, storage, wopiServerType, accounts_hash_difficulty =
 def localApiTestPipeline(ctx):
     pipelines = []

-    with_remote_php = [True]
-    enable_watch_fs = [False]
-    if ctx.build.event == "cron":
-        with_remote_php.append(False)
-        enable_watch_fs.append(True)
-
     defaults = {
         "suites": {},
         "skip": False,
         "extraTestEnvironment": {},
         "extraServerEnvironment": {},
         "storages": ["posix"],
-        "accounts_hash_difficulty": 4,
         "emailNeeded": False,
         "antivirusNeeded": False,
         "tikaNeeded": False,
         "federationServer": False,
         "collaborationServiceNeeded": False,
         "extraCollaborationEnvironment": {},
-        "withRemotePhp": with_remote_php,
-        "enableWatchFs": enable_watch_fs,
+        "withRemotePhp": [True],
+        "enableWatchFs": [False],
         "ldapNeeded": False,
         "generateVirusFiles": False,
     }

     if "localApiTests" in config:
@@ -1254,6 +1238,14 @@ def localApiTestPipeline(ctx):
         if "[decomposed]" in ctx.build.title.lower() or name.startswith("cli"):
             params["storages"] = ["decomposed"]

+        if ctx.build.event == "cron":
+            params["withRemotePhp"] = [True, False]
+            params["enableWatchFs"] = [True, False]
+
+        # override withRemotePhp if specified in the suite config
+        if "withRemotePhp" in matrix:
+            params["withRemotePhp"] = matrix["withRemotePhp"]
+
         for storage in params["storages"]:
             for run_with_remote_php in params["withRemotePhp"]:
                 for run_with_watch_fs_enabled in params["enableWatchFs"]:
```
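The triple loop above fans each suite out into one pipeline per combination of storage driver, remote-PHP mode, and watch_fs mode. A minimal Go sketch of that matrix expansion, using hypothetical names (`Params`, `isCron`) purely for illustration:

```go
package main

import "fmt"

// Params mirrors the suite defaults from the Starlark config above
// (a hypothetical Go rendering, not part of the repository).
type Params struct {
	Storages      []string
	WithRemotePhp []bool
	EnableWatchFs []bool
}

func main() {
	p := Params{
		Storages:      []string{"posix"},
		WithRemotePhp: []bool{true},
		EnableWatchFs: []bool{false},
	}
	// On cron builds the matrix is widened, as in the diff above.
	isCron := true
	if isCron {
		p.WithRemotePhp = []bool{true, false}
		p.EnableWatchFs = []bool{true, false}
	}
	// Cartesian product: one pipeline per combination.
	for _, storage := range p.Storages {
		for _, remotePhp := range p.WithRemotePhp {
			for _, watchFs := range p.EnableWatchFs {
				fmt.Printf("pipeline: storage=%s remotePhp=%t watchFs=%t\n",
					storage, remotePhp, watchFs)
			}
		}
	}
}
```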
```diff
@@ -1278,16 +1270,15 @@ def localApiTestPipeline(ctx):
                      (waitForLdapService() if params["ldapNeeded"] else []) +
                      opencloudServer(
                          storage,
-                         params["accounts_hash_difficulty"],
                          extra_server_environment = params["extraServerEnvironment"],
                          with_wrapper = True,
                          tika_enabled = params["tikaNeeded"],
                          watch_fs_enabled = run_with_watch_fs_enabled,
                      ) +
-                     (opencloudServer(storage, params["accounts_hash_difficulty"], deploy_type = "federation", extra_server_environment = params["extraServerEnvironment"], watch_fs_enabled = run_with_watch_fs_enabled) if params["federationServer"] else []) +
+                     (opencloudServer(storage, deploy_type = "federation", extra_server_environment = params["extraServerEnvironment"], watch_fs_enabled = run_with_watch_fs_enabled) if params["federationServer"] else []) +
                      ((wopiCollaborationService("fakeoffice") + wopiCollaborationService("collabora") + wopiCollaborationService("onlyoffice")) if params["collaborationServiceNeeded"] else []) +
                      (openCloudHealthCheck("wopi", ["wopi-collabora:9304", "wopi-onlyoffice:9304", "wopi-fakeoffice:9304"]) if params["collaborationServiceNeeded"] else []) +
-                     localApiTest(params["suites"], storage, params["extraTestEnvironment"], run_with_remote_php, params["generateVirusFiles"]) +
+                     localApiTest(params["suites"], storage, params["extraTestEnvironment"], run_with_remote_php) +
                      logRequests(),
         "services": (emailService() if params["emailNeeded"] else []) +
                     (clamavService() if params["antivirusNeeded"] else []) +
@@ -1311,9 +1302,9 @@ def localApiTestPipeline(ctx):
             pipelines.append(pipeline)
     return pipelines

-def localApiTest(suites, storage = "decomposed", extra_environment = {}, with_remote_php = False, generate_virus_files = False):
+def localApiTest(suites, storage = "decomposed", extra_environment = {}, with_remote_php = False):
     test_dir = "%s/tests/acceptance" % dirs["base"]
-    expected_failures_file = "%s/expected-failures-localAPI-on-%s-storage.md" % (test_dir, storage)
+    expected_failures_file = "%s/expected-failures-%s-storage.md" % (test_dir, storage)

     environment = {
         "TEST_SERVER_URL": OC_URL,
@@ -1336,11 +1327,6 @@ def localApiTest(suites, storage = "decomposed", extra_environment = {}, with_re

     commands = []

-    # Generate EICAR virus test files if needed
-    if generate_virus_files:
-        commands.append("chmod +x %s/tests/acceptance/scripts/generate-virus-files.sh" % dirs["base"])
-        commands.append("bash %s/tests/acceptance/scripts/generate-virus-files.sh" % dirs["base"])
-
     # Merge expected failures
     if not with_remote_php:
         commands.append("cat %s/expected-failures-without-remotephp.md >> %s" % (test_dir, expected_failures_file))
@@ -1363,7 +1349,6 @@ def coreApiTestPipeline(ctx):
         "numberOfParts": 7,
         "skipExceptParts": [],
         "skip": False,
-        "accounts_hash_difficulty": 4,
     }

     pipelines = []
@@ -1384,6 +1369,10 @@ def coreApiTestPipeline(ctx):
         params["withRemotePhp"] = [True, False]
         params["enableWatchFs"] = [True, False]

+    # override withRemotePhp if specified in the suite config
+    if "withRemotePhp" in matrix:
+        params["withRemotePhp"] = matrix["withRemotePhp"]
+
     debugParts = params["skipExceptParts"]
     debugPartsEnabled = (len(debugParts) != 0)

@@ -1405,7 +1394,6 @@ def coreApiTestPipeline(ctx):
                      restoreBuildArtifactCache(ctx, dirs["opencloudBinArtifact"], dirs["opencloudBinPath"]) +
                      opencloudServer(
                          storage,
-                         params["accounts_hash_difficulty"],
                          with_wrapper = True,
                          watch_fs_enabled = run_with_watch_fs_enabled,
                      ) +
@@ -1439,7 +1427,7 @@ def coreApiTestPipeline(ctx):
 def coreApiTest(part_number = 1, number_of_parts = 1, with_remote_php = False, storage = "posix"):
     filter_tags = "~@skipOnOpencloud-%s-Storage" % storage
     test_dir = "%s/tests/acceptance" % dirs["base"]
-    expected_failures_file = "%s/expected-failures-API-on-%s-storage.md" % (test_dir, storage)
+    expected_failures_file = "%s/expected-failures-%s-storage.md" % (test_dir, storage)

     return [{
         "name": "api-tests",
@@ -1770,7 +1758,7 @@ def uploadTracingResult(ctx):
             "mc cp -a %s/reports/e2e/playwright/tracing/* s3/$PUBLIC_BUCKET/web/tracing/$CI_REPO_NAME/$CI_PIPELINE_NUMBER/" % dirs["web"],
             "cd %s/reports/e2e/playwright/tracing/" % dirs["web"],
             'echo "To see the trace, please open the following link in the console"',
-            'for f in *.zip; do echo "npx playwright show-trace $MC_HOST/$PUBLIC_BUCKET/web/tracing/$CI_REPO_NAME/$CI_PIPELINE_NUMBER/$f \n"; done',
+            'for f in *.zip; do echo "npx playwright show-trace https://$MC_HOST/$PUBLIC_BUCKET/web/tracing/$CI_REPO_NAME/$CI_PIPELINE_NUMBER/$f \n"; done',
         ],
         "when": {
             "status": status,
@@ -2323,7 +2311,7 @@ def notifyMatrix(ctx):

     return result

-def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, depends_on = [], deploy_type = "", extra_server_environment = {}, with_wrapper = False, tika_enabled = False, watch_fs_enabled = False):
+def opencloudServer(storage = "decomposed", depends_on = [], deploy_type = "", extra_server_environment = {}, with_wrapper = False, tika_enabled = False, watch_fs_enabled = False):
     user = "0:0"
     container_name = OC_SERVER_NAME
     environment = {
@@ -2419,13 +2407,6 @@ def opencloudServer(storage = "decomposed", accounts_hash_difficulty = 4, depend
     if watch_fs_enabled:
         environment["STORAGE_USERS_POSIX_WATCH_FS"] = True

-    # Pass in "default" accounts_hash_difficulty to not set this environment variable.
-    # That will allow OpenCloud to use whatever its built-in default is.
-    # Otherwise pass in a value from 4 to about 11 or 12 (default 4, for making regular tests fast)
-    # The high values cause lots of CPU to be used when hashing passwords, and really slow down the tests.
-    if accounts_hash_difficulty != "default":
-        environment["ACCOUNTS_HASH_DIFFICULTY"] = accounts_hash_difficulty
-
     for item in extra_server_environment:
         environment[item] = extra_server_environment[item]

@@ -2622,6 +2603,7 @@ def translation_sync(ctx):
     }]

 def checkStarlark(ctx):
+    S3_HOST = "s3.ci.opencloud.eu"
     return [{
         "name": "check-starlark",
         "steps": [
@@ -2629,6 +2611,10 @@ def checkStarlark(ctx):
                 "name": "format-check-starlark",
                 "image": OC_CI_BAZEL_BUILDIFIER,
                 "commands": [
+                    "echo https://S3_HOST",
+                    "echo https://%s/public" % S3_HOST,
+                    "echo %s" % CACHE_S3_SERVER,
+                    "echo 'https://s3.ci.opencloud.eu'",
                     "buildifier --mode=check .woodpecker.star",
                 ],
             },
@@ -3292,7 +3278,7 @@ def wopiCollaborationService(name):
         environment["COLLABORATION_APP_ADDR"] = "https://onlyoffice"
         environment["COLLABORATION_APP_ICON"] = "https://onlyoffice/web-apps/apps/documenteditor/main/resources/img/favicon.ico"
     elif name == "fakeoffice":
-        environment["COLLABORATION_SERVICE_NAME"] = "collboration-fakeoficce"
+        environment["COLLABORATION_SERVICE_NAME"] = "collaboration-fakeoffice"
         environment["COLLABORATION_APP_NAME"] = "FakeOffice"
         environment["COLLABORATION_APP_PRODUCT"] = "Microsoft"
         environment["COLLABORATION_APP_ADDR"] = "http://fakeoffice:8080"
```
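The comments deleted in the `opencloudServer` hunk explain the removed `ACCOUNTS_HASH_DIFFICULTY` knob: values much above 4 burn CPU on password hashing and slow the tests down. A small Go sketch of that trade-off, assuming the difficulty behaves like a bcrypt cost (an assumption for illustration; the actual hasher is not part of this diff):

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// bcrypt work roughly doubles per cost increment, which is why the
	// CI config kept the difficulty at 4 to keep regular test runs fast.
	// bcrypt.MinCost is 4; 11 is near the upper end the old comment named.
	for _, cost := range []int{bcrypt.MinCost, 8, 11} {
		start := time.Now()
		if _, err := bcrypt.GenerateFromPassword([]byte("secret"), cost); err != nil {
			panic(err)
		}
		fmt.Printf("cost %2d: %v\n", cost, time.Since(start))
	}
}
```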
go.mod: 56 lines changed
```diff
@@ -11,7 +11,7 @@ require (
 	github.com/Nerzal/gocloak/v13 v13.9.0
 	github.com/bbalet/stopwords v1.0.0
 	github.com/beevik/etree v1.6.0
-	github.com/blevesearch/bleve/v2 v2.5.5
+	github.com/blevesearch/bleve/v2 v2.5.7
 	github.com/cenkalti/backoff v2.2.1+incompatible
 	github.com/coreos/go-oidc/v3 v3.17.0
 	github.com/cs3org/go-cs3apis v0.0.0-20250908152307-4ca807afe54e
@@ -20,7 +20,7 @@ require (
 	github.com/dutchcoders/go-clamd v0.0.0-20170520113014-b970184f4d9e
 	github.com/gabriel-vasile/mimetype v1.4.12
 	github.com/ggwhite/go-masker v1.1.0
-	github.com/go-chi/chi/v5 v5.2.3
+	github.com/go-chi/chi/v5 v5.2.4
 	github.com/go-chi/render v1.0.3
 	github.com/go-jose/go-jose/v3 v3.0.4
 	github.com/go-ldap/ldap/v3 v3.4.12
@@ -41,13 +41,13 @@ require (
 	github.com/google/uuid v1.6.0
 	github.com/gookit/config/v2 v2.2.7
 	github.com/gorilla/mux v1.8.1
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4
 	github.com/invopop/validation v0.8.0
 	github.com/jellydator/ttlcache/v2 v2.11.1
 	github.com/jellydator/ttlcache/v3 v3.4.0
 	github.com/jinzhu/now v1.1.5
 	github.com/justinas/alice v1.2.0
-	github.com/kovidgoyal/imaging v1.8.18
+	github.com/kovidgoyal/imaging v1.8.19
 	github.com/leonelquinteros/gotext v1.7.2
 	github.com/libregraph/idm v0.5.0
 	github.com/libregraph/lico v0.66.0
@@ -55,13 +55,13 @@ require (
 	github.com/mna/pigeon v1.3.0
 	github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826
 	github.com/nats-io/nats-server/v2 v2.12.3
-	github.com/nats-io/nats.go v1.47.0
+	github.com/nats-io/nats.go v1.48.0
 	github.com/oklog/run v1.2.0
-	github.com/olekukonko/tablewriter v1.1.1
+	github.com/olekukonko/tablewriter v1.1.2
 	github.com/onsi/ginkgo v1.16.5
-	github.com/onsi/ginkgo/v2 v2.27.2
-	github.com/onsi/gomega v1.38.2
-	github.com/open-policy-agent/opa v1.11.1
+	github.com/onsi/ginkgo/v2 v2.27.5
+	github.com/onsi/gomega v1.39.0
+	github.com/open-policy-agent/opa v1.12.3
 	github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89
 	github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76
 	github.com/opencloud-eu/reva/v2 v2.41.1-0.20260107152322-93760b632993
@@ -75,9 +75,9 @@ require (
 	github.com/rogpeppe/go-internal v1.14.1
 	github.com/rs/cors v1.11.1
 	github.com/rs/zerolog v1.34.0
-	github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af
+	github.com/sirupsen/logrus v1.9.4
 	github.com/spf13/afero v1.15.0
-	github.com/spf13/cobra v1.10.1
+	github.com/spf13/cobra v1.10.2
 	github.com/spf13/pflag v1.0.10
 	github.com/spf13/viper v1.21.0
 	github.com/stretchr/testify v1.11.1
@@ -95,22 +95,22 @@ require (
 	go-micro.dev/v4 v4.11.0
 	go.etcd.io/bbolt v1.4.3
 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.64.0
-	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0
-	go.opentelemetry.io/contrib/zpages v0.63.0
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0
+	go.opentelemetry.io/contrib/zpages v0.64.0
 	go.opentelemetry.io/otel v1.39.0
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0
-	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0
+	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.39.0
 	go.opentelemetry.io/otel/sdk v1.39.0
 	go.opentelemetry.io/otel/trace v1.39.0
-	golang.org/x/crypto v0.46.0
+	golang.org/x/crypto v0.47.0
 	golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac
-	golang.org/x/image v0.34.0
-	golang.org/x/net v0.48.0
+	golang.org/x/image v0.35.0
+	golang.org/x/net v0.49.0
 	golang.org/x/oauth2 v0.34.0
 	golang.org/x/sync v0.19.0
-	golang.org/x/term v0.38.0
-	golang.org/x/text v0.32.0
-	google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217
+	golang.org/x/term v0.39.0
+	golang.org/x/text v0.33.0
+	google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b
 	google.golang.org/grpc v1.78.0
 	google.golang.org/protobuf v1.36.11
 	gopkg.in/yaml.v2 v2.4.0
@@ -157,7 +157,7 @@ require (
 	github.com/blevesearch/zapx/v13 v13.4.2 // indirect
 	github.com/blevesearch/zapx/v14 v14.4.2 // indirect
 	github.com/blevesearch/zapx/v15 v15.4.2 // indirect
-	github.com/blevesearch/zapx/v16 v16.2.7 // indirect
+	github.com/blevesearch/zapx/v16 v16.2.8 // indirect
 	github.com/bluele/gcache v0.0.2 // indirect
 	github.com/bombsimon/logrusr/v3 v3.1.0 // indirect
 	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
@@ -165,9 +165,9 @@ require (
 	github.com/ceph/go-ceph v0.37.0 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/cevaris/ordered_map v0.0.0-20190319150403-3adeae072e73 // indirect
-	github.com/clipperhouse/displaywidth v0.3.1 // indirect
+	github.com/clipperhouse/displaywidth v0.6.0 // indirect
 	github.com/clipperhouse/stringish v0.1.1 // indirect
-	github.com/clipperhouse/uax29/v2 v2.2.0 // indirect
+	github.com/clipperhouse/uax29/v2 v2.3.0 // indirect
 	github.com/cloudflare/circl v1.6.1 // indirect
 	github.com/containerd/errdefs v1.0.0 // indirect
 	github.com/containerd/errdefs/pkg v0.3.0 // indirect
@@ -312,7 +312,7 @@ require (
 	github.com/nxadm/tail v1.4.8 // indirect
 	github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6 // indirect
 	github.com/olekukonko/errors v1.1.0 // indirect
-	github.com/olekukonko/ll v0.1.2 // indirect
+	github.com/olekukonko/ll v0.1.3 // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
 	github.com/opencontainers/image-spec v1.1.1 // indirect
 	github.com/opentracing/opentracing-go v1.2.0 // indirect
@@ -390,12 +390,12 @@ require (
 	go.uber.org/zap v1.27.0 // indirect
 	go.yaml.in/yaml/v2 v2.4.3 // indirect
 	go.yaml.in/yaml/v3 v3.0.4 // indirect
-	golang.org/x/mod v0.30.0 // indirect
-	golang.org/x/sys v0.39.0 // indirect
+	golang.org/x/mod v0.31.0 // indirect
+	golang.org/x/sys v0.40.0 // indirect
 	golang.org/x/time v0.14.0 // indirect
-	golang.org/x/tools v0.39.0 // indirect
+	golang.org/x/tools v0.40.0 // indirect
 	google.golang.org/genproto v0.0.0-20250303144028-a0af3efb3deb // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect
 	gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
 	gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
 	gopkg.in/warnings.v0 v0.1.2 // indirect
```
go.sum: 112 lines changed
```diff
@@ -151,8 +151,8 @@ github.com/bits-and-blooms/bitset v1.12.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6
 github.com/bits-and-blooms/bitset v1.22.0 h1:Tquv9S8+SGaS3EhyA+up3FXzmkhxPGjQQCkcs2uw7w4=
 github.com/bits-and-blooms/bitset v1.22.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6a/7QIWpPxHddWR8=
 github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
-github.com/blevesearch/bleve/v2 v2.5.5 h1:lzC89QUCco+y1qBnJxGqm4AbtsdsnlUvq0kXok8n3C8=
-github.com/blevesearch/bleve/v2 v2.5.5/go.mod h1:t5WoESS5TDteTdnjhhvpA1BpLYErOBX2IQViTMLK7wo=
+github.com/blevesearch/bleve/v2 v2.5.7 h1:2d9YrL5zrX5EBBW++GOaEKjE+NPWeZGaX77IM26m1Z8=
+github.com/blevesearch/bleve/v2 v2.5.7/go.mod h1:yj0NlS7ocGC4VOSAedqDDMktdh2935v2CSWOCDMHdSA=
 github.com/blevesearch/bleve_index_api v1.2.11 h1:bXQ54kVuwP8hdrXUSOnvTQfgK0KI1+f9A0ITJT8tX1s=
 github.com/blevesearch/bleve_index_api v1.2.11/go.mod h1:rKQDl4u51uwafZxFrPD1R7xFOwKnzZW7s/LSeK4lgo0=
 github.com/blevesearch/geo v0.2.4 h1:ECIGQhw+QALCZaDcogRTNSJYQXRtC8/m8IKiA706cqk=
@@ -185,8 +185,8 @@ github.com/blevesearch/zapx/v14 v14.4.2 h1:2SGHakVKd+TrtEqpfeq8X+So5PShQ5nW6GNxT
 github.com/blevesearch/zapx/v14 v14.4.2/go.mod h1:rz0XNb/OZSMjNorufDGSpFpjoFKhXmppH9Hi7a877D8=
 github.com/blevesearch/zapx/v15 v15.4.2 h1:sWxpDE0QQOTjyxYbAVjt3+0ieu8NCE0fDRaFxEsp31k=
 github.com/blevesearch/zapx/v15 v15.4.2/go.mod h1:1pssev/59FsuWcgSnTa0OeEpOzmhtmr/0/11H0Z8+Nw=
-github.com/blevesearch/zapx/v16 v16.2.7 h1:xcgFRa7f/tQXOwApVq7JWgPYSlzyUMmkuYa54tMDuR0=
-github.com/blevesearch/zapx/v16 v16.2.7/go.mod h1:murSoCJPCk25MqURrcJaBQ1RekuqSCSfMjXH4rHyA14=
+github.com/blevesearch/zapx/v16 v16.2.8 h1:SlnzF0YGtSlrsOE3oE7EgEX6BIepGpeqxs1IjMbHLQI=
+github.com/blevesearch/zapx/v16 v16.2.8/go.mod h1:murSoCJPCk25MqURrcJaBQ1RekuqSCSfMjXH4rHyA14=
 github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
 github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
 github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
@@ -223,12 +223,12 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
 github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
 github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
 github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
-github.com/clipperhouse/displaywidth v0.3.1 h1:k07iN9gD32177o1y4O1jQMzbLdCrsGJh+blirVYybsk=
-github.com/clipperhouse/displaywidth v0.3.1/go.mod h1:tgLJKKyaDOCadywag3agw4snxS5kYEuYR6Y9+qWDDYM=
+github.com/clipperhouse/displaywidth v0.6.0 h1:k32vueaksef9WIKCNcoqRNyKbyvkvkysNYnAWz2fN4s=
+github.com/clipperhouse/displaywidth v0.6.0/go.mod h1:R+kHuzaYWFkTm7xoMmK1lFydbci4X2CicfbGstSGg0o=
 github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
 github.com/clipperhouse/stringish v0.1.1/go.mod h1:v/WhFtE1q0ovMta2+m+UbpZ+2/HEXNWYXQgCt4hdOzA=
-github.com/clipperhouse/uax29/v2 v2.2.0 h1:ChwIKnQN3kcZteTXMgb1wztSgaU+ZemkgWdohwgs8tY=
-github.com/clipperhouse/uax29/v2 v2.2.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
+github.com/clipperhouse/uax29/v2 v2.3.0 h1:SNdx9DVUqMoBuBoW3iLOj4FQv3dN5mDtuqwuhIGpJy4=
+github.com/clipperhouse/uax29/v2 v2.3.0/go.mod h1:Wn1g7MK6OoeDT0vL+Q0SQLDz/KpfsVRgg6W7ihQeh4g=
 github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
 github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
 github.com/cloudflare/cloudflare-go v0.14.0/go.mod h1:EnwdgGMaFOruiPZRFSgn+TsQ3hQ7C/YWzIGLeu5c304=
@@ -381,8 +381,8 @@ github.com/go-asn1-ber/asn1-ber v1.4.1/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkPro
 github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667 h1:BP4M0CvQ4S3TGls2FvczZtj5Re/2ZzkV9VwqPHH/3Bo=
 github.com/go-asn1-ber/asn1-ber v1.5.8-0.20250403174932-29230038a667/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
 github.com/go-chi/chi v4.0.2+incompatible/go.mod h1:eB3wogJHnLi3x/kFX2A+IbTBlXxmMeXJVKy9tTv1XzQ=
-github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
-github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
+github.com/go-chi/chi/v5 v5.2.4 h1:WtFKPHwlywe8Srng8j2BhOD9312j9cGUxG1SP4V2cR4=
+github.com/go-chi/chi/v5 v5.2.4/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
 github.com/go-chi/render v1.0.3 h1:AsXqd2a1/INaIfUSKq3G5uA8weYx20FOsM7uSoCyyt4=
 github.com/go-chi/render v1.0.3/go.mod h1:/gr3hVkmYR0YlEy3LxCuVRFzEu9Ruok+gFqbIofjao0=
 github.com/go-cmd/cmd v1.0.5/go.mod h1:y8q8qlK5wQibcw63djSl/ntiHUHXHGdCkPk0j4QeW4s=
@@ -624,8 +624,8 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vb
 github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
 github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
 github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 h1:kEISI/Gx67NzH3nJxAmY/dGac80kKZgZt134u7Y/k1s=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4/go.mod h1:6Nz966r3vQYCqIzWsuEl9d7cf7mRhtDmm++sOxlnfxI=
 github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
 github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
 github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
@@ -745,8 +745,8 @@ github.com/kovidgoyal/go-parallel v1.1.1 h1:1OzpNjtrUkBPq3UaqrnvOoB2F9RttSt811ui
 github.com/kovidgoyal/go-parallel v1.1.1/go.mod h1:BJNIbe6+hxyFWv7n6oEDPj3PA5qSw5OCtf0hcVxWJiw=
 github.com/kovidgoyal/go-shm v1.0.0 h1:HJEel9D1F9YhULvClEHJLawoRSj/1u/EDV7MJbBPgQo=
 github.com/kovidgoyal/go-shm v1.0.0/go.mod h1:Yzb80Xf9L3kaoB2RGok9hHwMIt7Oif61kT6t3+VnZds=
-github.com/kovidgoyal/imaging v1.8.18 h1:42JCqJnQBzBo0hGllLEJVYDARWXPP9MT3HgiTno9Chc=
-github.com/kovidgoyal/imaging v1.8.18/go.mod h1:bqjHpeAxSuTLvKob6HuqAr9td2wP9G54Snbgd+1QLoU=
+github.com/kovidgoyal/imaging v1.8.19 h1:zWJdQqF2tfSKjvoB7XpLRhVGbYsze++M0iaqZ4ZkhNk=
+github.com/kovidgoyal/imaging v1.8.19/go.mod h1:I0q8RdoEuyc4G8GFOF9CaluTUHQSf68d6TmsqpvfRI8=
 github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
 github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -916,8 +916,8 @@ github.com/nats-io/jwt/v2 v2.8.0 h1:K7uzyz50+yGZDO5o772eRE7atlcSEENpL7P+b74JV1g=
 github.com/nats-io/jwt/v2 v2.8.0/go.mod h1:me11pOkwObtcBNR8AiMrUbtVOUGkqYjMQZ6jnSdVUIA=
 github.com/nats-io/nats-server/v2 v2.12.3 h1:KRv+1n7lddMVgkJPQer+pt36TcO0ENxjilBmeWdjcHs=
 github.com/nats-io/nats-server/v2 v2.12.3/go.mod h1:MQXjG9WjyXKz9koWzUc3jYUMKD8x3CLmTNy91IQQz3Y=
-github.com/nats-io/nats.go v1.47.0 h1:YQdADw6J/UfGUd2Oy6tn4Hq6YHxCaJrVKayxxFqYrgM=
-github.com/nats-io/nats.go v1.47.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
+github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
+github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
 github.com/nats-io/nkeys v0.4.12 h1:nssm7JKOG9/x4J8II47VWCL1Ds29avyiQDRn0ckMvDc=
 github.com/nats-io/nkeys v0.4.12/go.mod h1:MT59A1HYcjIcyQDJStTfaOY6vhy9XTUjOFo+SVsvpBg=
 github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
@@ -940,25 +940,25 @@ github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6 h1:zrbMGy9YXpIeTnGj
 github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6/go.mod h1:rEKTHC9roVVicUIfZK7DYrdIoM0EOr8mK1Hj5s3JjH0=
 github.com/olekukonko/errors v1.1.0 h1:RNuGIh15QdDenh+hNvKrJkmxxjV4hcS50Db478Ou5sM=
 github.com/olekukonko/errors v1.1.0/go.mod h1:ppzxA5jBKcO1vIpCXQ9ZqgDh8iwODz6OXIGKU8r5m4Y=
-github.com/olekukonko/ll v0.1.2 h1:lkg/k/9mlsy0SxO5aC+WEpbdT5K83ddnNhAepz7TQc0=
-github.com/olekukonko/ll v0.1.2/go.mod h1:b52bVQRRPObe+yyBl0TxNfhesL0nedD4Cht0/zx55Ew=
+github.com/olekukonko/ll v0.1.3 h1:sV2jrhQGq5B3W0nENUISCR6azIPf7UBUpVq0x/y70Fg=
+github.com/olekukonko/ll v0.1.3/go.mod h1:b52bVQRRPObe+yyBl0TxNfhesL0nedD4Cht0/zx55Ew=
 github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
-github.com/olekukonko/tablewriter v1.1.1 h1:b3reP6GCfrHwmKkYwNRFh2rxidGHcT6cgxj/sHiDDx0=
-github.com/olekukonko/tablewriter v1.1.1/go.mod h1:De/bIcTF+gpBDB3Alv3fEsZA+9unTsSzAg/ZGADCtn4=
+github.com/olekukonko/tablewriter v1.1.2 h1:L2kI1Y5tZBct/O/TyZK1zIE9GlBj/TVs+AY5tZDCDSc=
+github.com/olekukonko/tablewriter v1.1.2/go.mod h1:z7SYPugVqGVavWoA2sGsFIoOVNmEHxUAAMrhXONtfkg=
 github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
 github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
 github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
 github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
 github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
-github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
-github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
+github.com/onsi/ginkgo/v2 v2.27.5 h1:ZeVgZMx2PDMdJm/+w5fE/OyG6ILo1Y3e+QX4zSR0zTE=
+github.com/onsi/ginkgo/v2 v2.27.5/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
 github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
 github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
 github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
-github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
-github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
-github.com/open-policy-agent/opa v1.11.1 h1:4bMlG6DjRZTRAswRyF+KUCgxHu1Gsk0h9EbZ4W9REvM=
-github.com/open-policy-agent/opa v1.11.1/go.mod h1:QimuJO4T3KYxWzrmAymqlFvsIanCjKrGjmmC8GgAdgE=
+github.com/onsi/gomega v1.39.0 h1:y2ROC3hKFmQZJNFeGAMeHZKkjBL65mIZcvrLQBF9k6Q=
+github.com/onsi/gomega v1.39.0/go.mod h1:ZCU1pkQcXDO5Sl9/VVEGlDyp+zm0m1cmeG5TOzLgdh4=
+github.com/open-policy-agent/opa v1.12.3 h1:qe3m/w52baKC/HJtippw+hYBUKCzuBCPjB+D5P9knfc=
+github.com/open-policy-agent/opa v1.12.3/go.mod h1:RnDgm04GA1RjEXJvrsG9uNT/+FyBNmozcPvA2qz60M4=
 github.com/opencloud-eu/go-micro-plugins/v4/store/nats-js-kv v0.0.0-20250512152754-23325793059a h1:Sakl76blJAaM6NxylVkgSzktjo2dS504iDotEFJsh3M=
 github.com/opencloud-eu/go-micro-plugins/v4/store/nats-js-kv v0.0.0-20250512152754-23325793059a/go.mod h1:pjcozWijkNPbEtX5SIQaxEW/h8VAVZYTLx+70bmB3LY=
 github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89 h1:W1ms+lP5lUUIzjRGDg93WrQfZJZCaV1ZP3KeyXi8bzY=
@@ -1138,8 +1138,8 @@ github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPx
 github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
 github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
 github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af h1:Sp5TG9f7K39yfB+If0vjp97vuT74F72r8hfRpP8jLU0=
-github.com/sirupsen/logrus v1.9.4-0.20230606125235-dd1b4c2e81af/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
+github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
 github.com/skeema/knownhosts v1.3.0 h1:AM+y0rI04VksttfwjkSTNQorvGqmwATnvnAHpSgc0LY=
 github.com/skeema/knownhosts v1.3.0/go.mod h1:sPINvnADmT/qYH1kfv+ePMmOBTH6Tbl7b5LvTDjFK7M=
 github.com/skratchdot/open-golang v0.0.0-20160302144031-75fb7ed4208c/go.mod h1:sUM3LWHvSMaG192sy56D9F7CNvL7jUJVXoqM1QKLnog=
@@ -1164,8 +1164,8 @@ github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU
 github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
 github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
 github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
-github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
-github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
+github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
+github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
 github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
 github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
 github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -1311,10 +1311,10 @@ go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.64.0 h1:RN3ifU8y4prNWeEnQp2kRRHz8UwonAEYZl8tUzHEXAk=
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.64.0/go.mod h1:habDz3tEWiFANTo6oUE99EmaFUrCNYAAg3wiVmusm70=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
-go.opentelemetry.io/contrib/zpages v0.63.0 h1:TppOKuZGbqXMgsfjqq3i09N5Vbo1JLtLImUqiTPGnX4=
-go.opentelemetry.io/contrib/zpages v0.63.0/go.mod h1:5F8uugz75ay/MMhRRhxAXY33FuaI8dl7jTxefrIy5qk=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ=
+go.opentelemetry.io/contrib/zpages v0.64.0 h1:iMybqKVR8AHHxFX4DuEWJ9dY75+9E7+IPwUK3Ll7NxM=
+go.opentelemetry.io/contrib/zpages v0.64.0/go.mod h1:DnkiyoQ7Yx/NmmKn10b6M2YBXreUqq0qhFa/kYgSZME=
 go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
 go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 h1:f0cb2XPmrqn4XMy9PNliTgRKJgS5WcL/u0/WRYGz4t0=
@@ -1323,8 +1323,8 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0 h1:in9O8
 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0/go.mod h1:Rp0EXBm5tfnv0WL+ARyO/PHBEaEAT8UUHQ6AGJcSq6c=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 h1:kJxSDN4SgWWTjG/hPp3O7LCGLcHXFlvS2/FFOrwL+SE=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0/go.mod h1:mgIOzS7iZeKJdeB8/NYHrJ48fdGc71Llo5bJ1J4DWUE=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.39.0 h1:8UPA4IbVZxpsD76ihGOQiFml99GPAEZLohDXvqHdi6U=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.39.0/go.mod h1:MZ1T/+51uIVKlRzGw1Fo46KEWThjlCBZKl2LzY5nv4g=
 go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
 go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
 go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
@@ -1375,8 +1375,8 @@ golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf
 golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
 golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
 golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
-golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
-golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
+golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
+golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1392,8 +1392,8 @@ golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac/go.mod h1:hH+7mtFmImwwcMvScy
 golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
 golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
 golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
-golang.org/x/image v0.34.0 h1:33gCkyw9hmwbZJeZkct8XyR11yH889EQt/QH4VmXMn8=
-golang.org/x/image v0.34.0/go.mod h1:2RNFBZRB+vnwwFil8GkMdRvrJOFd1AzdZI6vOY+eJVU=
+golang.org/x/image v0.35.0 h1:LKjiHdgMtO8z7Fh18nGY6KDcoEtVfsgLDPeLyguqb7I=
+golang.org/x/image v0.35.0/go.mod h1:MwPLTVgvxSASsxdLzKrl8BRFuyqMyGhLwmC+TO1Sybk=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
 golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
 golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1418,8 +1418,8 @@ golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
 golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
-golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
-golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
+golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=
+golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=
 golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1473,8 +1473,8 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
 golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
 golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
 golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
-golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
-golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
+golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
+golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1586,8 +1586,8 @@ golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
-golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
+golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -1599,8 +1599,8 @@ golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
 golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
 golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
 golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
-golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
-golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
+golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
+golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
 golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1615,8 +1615,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
 golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
 golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
 golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
-golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
-golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
+golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
+golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
 golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
 golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1679,8 +1679,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
 golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
 golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
 golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
-golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
-golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
+golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA=
+golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc=
 golang.org/x/tools/godoc v0.1.0-deprecated h1:o+aZ1BOj6Hsx/GBdJO/s815sqftjSnrZZwyYTHODvtk=
 golang.org/x/tools/godoc v0.1.0-deprecated/go.mod h1:qM63CriJ961IHWmnWa9CjZnBndniPt4a3CK0PVB9bIg=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1744,10 +1744,10 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
 google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
 google.golang.org/genproto v0.0.0-20250303144028-a0af3efb3deb h1:ITgPrl429bc6+2ZraNSzMDk3I95nmQln2fuPstKwFDE=
 google.golang.org/genproto v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:sAo5UzpjUwgFBCzupwhcLcxHVDK7vG5IqI30YnwX2eE=
-google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
-google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
+google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b h1:uA40e2M6fYRBf0+8uN5mLlqUtV192iiksiICIBkYJ1E=
+google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:Xa7le7qx2vmqB/SzWUBa7KdMjpdpAHlh5QCSnjessQk=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
 google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
 google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
```
@@ -5,6 +5,7 @@ import (
	"bytes"
	"fmt"
	"io"
	"regexp"
	"strings"
)

@@ -77,7 +78,7 @@ func (md MD) WriteToc(w io.Writer) (int64, error) {
			// main title not in toc
			continue
		}
		link := fmt.Sprintf("#%s", strings.ToLower(strings.Replace(h.Header, " ", "-", -1)))
		link := fmt.Sprintf("#%s", toAnchor(h.Header))
		s := fmt.Sprintf("%s* [%s](%s)\n", strings.Repeat(" ", h.Level-2), h.Header, link)
		n, err := w.Write([]byte(s))
		if err != nil {
@@ -137,3 +138,12 @@ func headingFromString(s string) Heading {
		Header: strings.TrimPrefix(con, " "),
	}
}

func toAnchor(header string) string {
	// Remove everything except letters, numbers, and spaces
	reg := regexp.MustCompile(`[^a-zA-Z0-9 ]+`)
	anchor := reg.ReplaceAllString(header, "")
	// Replace spaces with hyphens and convert to lowercase
	anchor = strings.ReplaceAll(anchor, " ", "-")
	return strings.ToLower(anchor)
}
@@ -11,7 +11,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: \n"
|
||||
"Report-Msgid-Bugs-To: EMAIL\n"
|
||||
"POT-Creation-Date: 2025-12-25 00:05+0000\n"
|
||||
"POT-Creation-Date: 2026-01-14 00:09+0000\n"
|
||||
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
|
||||
"Last-Translator: ii kaka, 2025\n"
|
||||
"Language-Team: Japanese (https://app.transifex.com/opencloud-eu/teams/204053/ja/)\n"
|
||||
|
||||
@@ -11,7 +11,7 @@ msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: \n"
|
||||
"Report-Msgid-Bugs-To: EMAIL\n"
|
||||
"POT-Creation-Date: 2025-12-25 00:05+0000\n"
|
||||
"POT-Creation-Date: 2026-01-14 00:09+0000\n"
|
||||
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
|
||||
"Last-Translator: ii kaka, 2025\n"
|
||||
"Language-Team: Japanese (https://app.transifex.com/opencloud-eu/teams/204053/ja/)\n"
|
||||
|
||||
@@ -1,6 +1,11 @@
# Webfinger

The webfinger service provides an RFC7033 WebFinger lookup of OpenCloud instances relevant for a given user account via endpoints a the /.well-known/webfinger implementation.
The webfinger service provides an RFC7033 WebFinger lookup of OpenCloud resources relevant for a given user account at the /.well-known/webfinger endpoint.

1. An [OpenID Connect Discovery](#openid-connect-discovery) for the IdP, based on the OpenCloud URL.
2. An [Authenticated Instance Discovery](#authenticated-instance-discovery), based on the user account.

These two requests are only needed for discovery.

## OpenID Connect Discovery

@@ -18,7 +23,7 @@ Clients can make an unauthenticated `GET https://drive.opencloud.test/.well-know
}
```

Here, the `resource` takes the instance domain URI, but an `acct:` URI works as well.
Here, the `resource` takes the instance domain URI, but an `acct:` URI works as well.

## Authenticated Instance Discovery

@@ -58,14 +63,14 @@ webfinger:
  - claim: email
    regex: alan@example\.org
    href: "https://{{.preferred_username}}.cloud.opencloud.test"
    title:
    title:
      "en": "OpenCloud Instance for Alan"
      "de": "OpenCloud Instanz für Alan"
    break: true
  - claim: "email"
    regex: mary@example\.org
    href: "https://{{.preferred_username}}.cloud.opencloud.test"
    title:
    title:
      "en": "OpenCloud Instance for Mary"
      "de": "OpenCloud Instanz für Mary"
    break: false
346 tests/README.md
@@ -4,166 +4,129 @@ To run tests in the test suite you have two options. You may go the easy way and

Both ways to run tests with the test suites are described here.

## Table of Contents

- [Running Test Suite in Docker](#running-test-suite-in-docker)
  - [Running API Tests](#running-api-tests)
    - [Run Tests With Required Services](#run-tests-with-required-services)
    - [Run Tests Only](#run-tests-only)
    - [Skip Local Image Build While Running Tests](#skip-local-image-build-while-running-tests)
    - [Check Test Logs](#check-test-logs)
    - [Cleanup the Setup](#cleanup-the-setup)
  - [Running WOPI Validator Tests](#running-wopi-validator-tests)
- [Running Test Suite in Local Environment](#running-test-suite-in-local-environment)
  - [Running Tests With And Without `remote.php`](#running-tests-with-and-without-remotephp)
  - [Running ENV Config Tests (@env-Config)](#running-env-config-tests-env-config)
  - [Running Test Suite With Email Service (@email)](#running-test-suite-with-email-service-email)
  - [Running Test Suite With Tika Service (@tikaServiceNeeded)](#running-test-suite-with-tika-service-tikaserviceneeded)
  - [Running Test Suite With Antivirus Service (@antivirus)](#running-test-suite-with-antivirus-service-antivirus)
  - [Running Test Suite With Federated Sharing (@ocm)](#running-test-suite-with-federated-sharing-ocm)
  - [Running Text Preview Tests Containing Unicode Characters](#running-text-preview-tests-containing-unicode-characters)
- [Running All API Tests Locally](#running-all-api-tests-locally)

## Running Test Suite in Docker

Let's see what is available. Invoke the following command from within the root of the OpenCloud repository.
Check the available commands and environment variables with:

```bash
make -C tests/acceptance/docker help
```

Basically we have two sources for feature tests and test suites:
### Running API Tests

- [OpenCloud feature test and test suites](https://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features)
- [tests and test suites transferred from core, they have prefix coreApi](https://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features)
#### Run Tests With Required Services

At the moment, both can be applied to OpenCloud.
We can run a single feature or a single test suite with different storage drivers.

As a storage backend, we support the OpenCloud native storage, also called `decomposed`. This stores files directly on disk. Along with that we also provide the `decomposeds3` and `posix` storage drivers.
1. Run a specific feature file:

You can invoke two types of test suite runs:
```bash
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature' \
make -C tests/acceptance/docker run-api-tests
```

- run a full test suite, which consists of multiple feature tests
- run a single feature or single scenario in a feature
or a single scenario in a feature:

### Run Full Test Suite
```bash
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature:24' \
make -C tests/acceptance/docker run-api-tests
```

#### Local OpenCloud Tests (prefix `api`)
2. Run a specific test suite:

The names of the full test suite make targets have the same naming as in the CI pipeline. See the available local OpenCloud specific test suites [here](https://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features). They can be run with `decomposed` storage, `decomposeds3` storage and `posix` storage.
```bash
BEHAT_SUITE='apiGraphUserGroup' \
make -C tests/acceptance/docker run-api-tests
```

For example, command:
3. Run with a different storage driver (default is `posix`):

```bash
STORAGE_DRIVER='posix' \
BEHAT_SUITE='apiGraphUserGroup' \
make -C tests/acceptance/docker run-api-tests
```

4. Run the tests that require an email server (tests tagged with `@email`). Provide `START_EMAIL=true` while running the tests:

```bash
START_EMAIL=true \
BEHAT_FEATURE='tests/acceptance/features/apiNotification/emailNotification.feature' \
make -C tests/acceptance/docker run-api-tests
```

5. Run the tests that require the tika service (tests tagged with `@tikaServiceNeeded`). Provide `START_TIKA=true` while running the tests:

```bash
START_TIKA=true \
BEHAT_FEATURE='tests/acceptance/features/apiSearchContent/contentSearch.feature' \
make -C tests/acceptance/docker run-api-tests
```

6. Run the tests that require an antivirus service (tests tagged with `@antivirus`). Provide `START_ANTIVIRUS=true` while running the tests:

```bash
START_ANTIVIRUS=true \
BEHAT_FEATURE='tests/acceptance/features/apiAntivirus/antivirus.feature' \
make -C tests/acceptance/docker run-api-tests
```

7. Run the wopi tests. Provide `ENABLE_WOPI=true` while running the tests:
```bash
ENABLE_WOPI=true \
BEHAT_FEATURE='tests/acceptance/features/apiCollaboration/checkFileInfo.feature' \
make -C tests/acceptance/docker run-api-tests
```

#### Run Tests Only

If you want to re-run the tests because of some failures or any other reason, you can use the following command to run only the tests without starting the services again.
Also, this command can be used to run the tests against an already hosted OpenCloud server by providing the `TEST_SERVER_URL` and `USE_BEARER_TOKEN` environment variables; see the sketch after the code block below.

> [!NOTE]
> You can utilize the following environment variables:
>
> - `BEHAT_FEATURE`
> - `BEHAT_SUITE`
> - `USE_BEARER_TOKEN`
> - `TEST_SERVER_URL`

```bash
make -C tests/acceptance/docker localApiTests-apiGraph-decomposed
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature:24' \
make -C tests/acceptance/docker run-test-only
```

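For instance, a sketch of running a whole suite against an already running server (the URL below is a placeholder for your own deployment; the variables and target are the documented ones):

```bash
# run only the tests, against an externally hosted OpenCloud server
TEST_SERVER_URL='https://opencloud.example.com:9200' \
USE_BEARER_TOKEN=true \
BEHAT_SUITE='apiGraphUserGroup' \
make -C tests/acceptance/docker run-test-only
```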
runs the same tests as the `localApiTests-apiGraph-decomposed` CI pipeline, which runs the OpenCloud test suite "apiGraph" against the OpenCloud server with `decomposed` storage.
#### Skip Local Image Build While Running Tests

command:

```bash
make -C tests/acceptance/docker localApiTests-apiGraph-decomposeds3
```

runs the OpenCloud test suite `apiGraph` against the OpenCloud server with `decomposeds3` storage.

And command:

```bash
make -C tests/acceptance/docker localApiTests-apiGraph-posix
```

runs the OpenCloud test suite `apiGraph` against the OpenCloud server with `posix` storage.

Note:
While running the tests, the OpenCloud server is started with [ocwrapper](https://github.com/opencloud-eu/opencloud/blob/main/tests/ocwrapper/README.md) (i.e. `WITH_WRAPPER=true`) by default. In order to run the tests without ocwrapper, provide `WITH_WRAPPER=false` when running the tests. For example:

```bash
WITH_WRAPPER=false \
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature:26' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

But some test suites that are tagged with `@env-config` require the OpenCloud server to be run with ocwrapper. So, running those tests requires `WITH_WRAPPER=true` (default setting).

Note:
To run the tests that require an email server (tests tagged with `@email`), you need to provide `START_EMAIL=true` while running the tests.

```bash
START_EMAIL=true \
BEHAT_FEATURE='tests/acceptance/features/apiNotification/emailNotification.feature' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

Note:
To run the tests that require the tika service (tests tagged with `@tikaServiceNeeded`), you need to provide `START_TIKA=true` while running the tests.

```bash
START_TIKA=true \
BEHAT_FEATURE='tests/acceptance/features/apiSearchContent/contentSearch.feature' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

Note:
To run the tests that require an antivirus service (tests tagged with `@antivirus`), you need to provide the following environment variables while running the tests.

```bash
START_ANTIVIRUS=true \
OC_ASYNC_UPLOADS=true \
OC_ADD_RUN_SERVICES=antivirus \
POSTPROCESSING_STEPS=virusscan \
BEHAT_FEATURE='tests/acceptance/features/apiAntivirus/antivirus.feature' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

#### Tests Transferred From Core (prefix `coreApi`)

Command `make -C tests/acceptance/docker Core-API-Tests-decomposed-storage-3` runs the same tests as the `Core-API-Tests-decomposed-storage-3` CI pipeline, which runs the third (out of ten) test suite groups transferred from core against the OpenCloud server with `decomposed` storage.

And `make -C tests/acceptance/docker Core-API-Tests-decomposeds3-storage-3` runs the third (out of ten) test suite groups transferred from core against the OpenCloud server with `decomposeds3` storage.

### Run Single Feature Test

The tests for a single feature (a feature file) can also be run against the different storage backends. To do that, multiple make targets with the schema **test-_\<test-source\>_-feature-_\<storage-backend\>_** are available. To select a single feature you have to add an additional `BEHAT_FEATURE=<path-to-feature-file>` parameter when invoking the make command.

For example:

```bash
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

Note:
`BEHAT_FEATURE` must point to a valid feature file.

And to run a single scenario in a feature, you can do:

Note:
A specific scenario from a feature can be run by adding `:<line-number>` at the end of the feature file path. For example, to run the scenario at line 26 of the feature file `apiGraphUserGroup/createUser.feature`, simply add the line number like this: `apiGraphUserGroup/createUser.feature:26`. Note that the line numbers mentioned in the examples might not always point to a scenario, so always check the line numbers before running the test.

```bash
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature:26' \
make -C tests/acceptance/docker test-opencloud-feature-decomposed-storage
```

Similarly, with `decomposeds3` storage:

```bash
# run a whole feature
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature' \
make -C tests/acceptance/docker test-opencloud-feature-decomposeds3-storage

# run a single scenario
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature:26' \
make -C tests/acceptance/docker test-opencloud-feature-decomposeds3-storage
```

In the same way, tests transferred from core can be run as:

```bash
# run a whole feature
BEHAT_FEATURE='tests/acceptance/features/coreApiAuth/webDavAuth.feature' \
make -C tests/acceptance/docker test-core-feature-decomposed-storage

# run a single scenario
BEHAT_FEATURE='tests/acceptance/features/coreApiAuth/webDavAuth.feature:15' \
make -C tests/acceptance/docker test-core-feature-decomposed-storage
```

Note:
The test suites transferred from core are prefixed with `coreApi`.

### OpenCloud Image to Be Tested (Skip Local Image Build)

By default, the tests will be run against the docker image built from your current working state of the OpenCloud repository. For some purposes it might also be handy to use an OpenCloud image from Docker Hub. Therefore, you can provide the optional flag `OC_IMAGE_TAG=...` which must contain an available docker tag of the [opencloud-eu/opencloud registry on Docker Hub](https://hub.docker.com/r/opencloud-eu/opencloud) (e.g. 'latest').
While running the tests, the OpenCloud docker image is built with the `opencloudeu/opencloud:dev` tag. If you want to skip building the local image, you can use the `OC_IMAGE_TAG` env which must contain an available docker tag of the [opencloudeu/opencloud registry on Docker Hub](https://hub.docker.com/r/opencloudeu/opencloud) (e.g. 'latest').

```bash
OC_IMAGE_TAG=latest \
make -C tests/acceptance/docker localApiTests-apiGraph-opencloud
BEHAT_FEATURE='tests/acceptance/features/apiGraphUserGroup/createUser.feature' \
make -C tests/acceptance/docker run-api-tests
```

### Test Log Output
#### Check Test Logs

While a test is running or when it is finished, you can attach to the logs generated by the tests.

@@ -171,15 +134,49 @@ While a test is running or when it is finished, you can attach to the logs gener
make -C tests/acceptance/docker show-test-logs
```

Note:
The log output is opened in `less`. You can navigate up and down with your cursors. By pressing "F" you can follow the latest line of the output.
> [!NOTE]
> The log output is opened in `less`. You can navigate up and down with your cursors. By pressing "F" you can follow the latest line of the output.

### Cleanup
#### Cleanup the Setup

During testing we start redis and OpenCloud docker containers. These will not be stopped automatically. You can stop them with:
Run the following command to clean all the resources created while running the tests:

```bash
make -C tests/acceptance/docker clean
make -C tests/acceptance/docker clean-all
```

### Running WOPI Validator Tests

#### Available Test Groups

```text
BaseWopiViewing
CheckFileInfoSchema
EditFlows
Locks
AccessTokens
GetLock
ExtendedLockLength
FileVersion
Features
PutRelativeFile
RenameFileIfCreateChildFileIsNotSupported
```

#### Run Test

```bash
TEST_GROUP=BaseWopiViewing docker compose -f tests/acceptance/docker/src/wopi-validator-test.yml up -d
```

#### Run Test (macOS)

Use the arm image for macOS to run the validator tests.

```bash
WOPI_VALIDATOR_IMAGE=scharfvi/wopi-validator \
TEST_GROUP=BaseWopiViewing \
docker compose -f tests/acceptance/docker/src/wopi-validator-test.yml up -d
```

## Running Test Suite in Local Environment
@@ -245,7 +242,7 @@ A specific scenario from a feature can be run by adding `:<line-number>` at the
### Use Existing Tests for BDD

As a lot of scenarios are written for core, we can use those tests for Behaviour driven development in OpenCloud.
Every scenario that does not work in OpenCloud with `decomposed` storage, is listed in `tests/acceptance/expected-failures-API-on-decomposed-storage.md` with a link to the related issue.
Every scenario that does not work in OpenCloud with `decomposed` storage, is listed in `tests/acceptance/expected-failures-decomposed-storage.md` with a link to the related issue.

Those scenarios are run in the ordinary acceptance test pipeline in CI. The scenarios that fail are checked against the
expected failures. If there are any differences then the CI pipeline fails.
@@ -269,7 +266,7 @@ If you want to work on a specific issue
5. remove those tests from the expected failures file
6. make a PR that has the fixed code, and the relevant lines removed from the expected failures file.

## Running Tests With And Without `remote.php`
### Running Tests With And Without `remote.php`

By default, the tests are run with `remote.php` enabled. If you want to run the tests without `remote.php`, you can disable it by setting the environment variable `WITH_REMOTE_PHP=false` while running the tests.

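The hunk below only shows the tail of the README's example; a minimal sketch of the full invocation, assuming a locally started server:

```bash
# disable remote.php for this test run
WITH_REMOTE_PHP=false \
TEST_SERVER_URL="https://localhost:9200" \
make test-acceptance-api
```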
@@ -279,11 +276,11 @@ TEST_SERVER_URL="https://localhost:9200" \
make test-acceptance-api
```

## Running ENV Config Tests (@env-Config)
### Running ENV Config Tests (@env-Config)

Test suites tagged with `@env-config` are used to test the environment variables that are used to configure OpenCloud. These tests are special tests that require the OpenCloud server to be run using [ocwrapper](https://github.com/opencloud-eu/opencloud/blob/main/tests/ocwrapper/README.md).

### Run OpenCloud With ocwrapper
#### Run OpenCloud With ocwrapper

```bash
# working dir: OpenCloud repo root dir
@@ -301,7 +298,7 @@ PROXY_ENABLE_BASIC_AUTH=true \
./bin/ocwrapper serve --bin=../../opencloud/bin/opencloud
```

### Run the Tests
#### Run the Tests

```bash
OC_WRAPPER_URL=http://localhost:5200 \
@@ -310,7 +307,7 @@ BEHAT_FEATURE=tests/acceptance/features/apiAsyncUpload/delayPostprocessing.featu
make test-acceptance-api
```

### Writing New ENV Config Tests
#### Writing New ENV Config Tests

While writing tests for a new OpenCloud ENV configuration, please make sure to follow these guidelines:

@@ -318,11 +315,11 @@ While writing tests for a new OpenCloud ENV configuration, please make sure to f
2. Use `OcConfigHelper.php` for helper functions - provides functions to reconfigure the running OpenCloud instance.
3. Recommended: add the new step implementations in `OcConfigContext.php`

## Running Test Suite With Email Service (@email)
### Running Test Suite With Email Service (@email)

Test suites that are tagged with `@email` require an email service. We use inbucket as the email service in our tests.

### Setup Inbucket
#### Setup Inbucket

Run the following command to set up inbucket:

@@ -330,7 +327,7 @@ Run the following command to setup inbucket
docker run -d -p9000:9000 -p2500:2500 --name inbucket inbucket/inbucket
```

### Run OpenCloud
#### Run OpenCloud

Documentation for environment variables is available [here](https://docs.opencloud.eu/services/notifications/#environment-variables).

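The hunk below shows only the tail of the README's command. A minimal sketch of starting the server against the inbucket container above (the `NOTIFICATIONS_SMTP_HOST`/`PORT`/`INSECURE` names are assumptions based on the notifications service; the port matches inbucket's SMTP port):

```bash
# a sketch; cross-check variable names against the linked notifications docs
NOTIFICATIONS_SMTP_HOST=localhost \
NOTIFICATIONS_SMTP_PORT=2500 \
NOTIFICATIONS_SMTP_INSECURE=true \
NOTIFICATIONS_SMTP_SENDER="OpenCloud <noreply@example.com>" \
OC_ADD_RUN_SERVICES=notifications \
opencloud/bin/opencloud server
```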
@@ -349,7 +346,7 @@ NOTIFICATIONS_SMTP_SENDER="OpenCloud <noreply@example.com>" \
opencloud/bin/opencloud server
```

### Run the Acceptance Test
#### Run the Acceptance Test

Run the acceptance test with the following command:

@@ -361,11 +358,11 @@ BEHAT_FEATURE="tests/acceptance/features/apiNotification/emailNotification.featu
make test-acceptance-api
```

## Running Test Suite With Tika Service (@tikaServiceNeeded)
### Running Test Suite With Tika Service (@tikaServiceNeeded)

Test suites that are tagged with `@tikaServiceNeeded` require the tika service.

### Setup Tika Service
#### Setup Tika Service

Run the following docker command to set up the tika service:

@@ -373,7 +370,7 @@ Run the following docker command to setup tika service
docker run -d -p 127.0.0.1:9998:9998 apache/tika
```

### Run OpenCloud
#### Run OpenCloud

TODO: Documentation related to the content based search and tika extractor will be added later.

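Until then, a minimal sketch of the variables involved (the Tika URL matches the container above; `SEARCH_EXTRACTOR_TIKA_TIKA_URL` is an assumed variable name, cross-check with the search service docs). The hunk below shows the tail of the README's own command:

```bash
# a sketch; SEARCH_EXTRACTOR_TIKA_TIKA_URL is an assumption
SEARCH_EXTRACTOR_TYPE=tika \
SEARCH_EXTRACTOR_TIKA_TIKA_URL=http://localhost:9998 \
SEARCH_EXTRACTOR_CS3SOURCE_INSECURE=true \
opencloud/bin/opencloud server
```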
@@ -391,7 +388,7 @@ SEARCH_EXTRACTOR_CS3SOURCE_INSECURE=true \
opencloud/bin/opencloud server
```

### Run the Acceptance Test
#### Run the Acceptance Test

Run the acceptance test with the following command:

@@ -401,15 +398,15 @@ BEHAT_FEATURE="tests/acceptance/features/apiSearchContent/contentSearch.feature"
make test-acceptance-api
```

## Running Test Suite With Antivirus Service (@antivirus)
### Running Test Suite With Antivirus Service (@antivirus)

Test suites that are tagged with `@antivirus` require an antivirus service. TODO: The available antivirus options and the configuration related to them will be added later. This documentation is only going to use `clamav` as the antivirus.

### Setup clamAV
#### Setup clamAV

#### 1. Setup Locally
**Option 1. Setup Locally**

##### Linux OS user
Linux OS user:

Run the following command to set up clamAV and the clamAV daemon:

@@ -426,7 +423,7 @@ sudo service clamav-daemon status
Note:
The commands are Ubuntu-specific and may differ according to your system. You can find information related to the installation of clamAV in their official documentation [here](https://docs.clamav.net/manual/Installing/Packages.html).

##### Mac OS user
Mac OS user:

Install ClamAV using the guide [here](https://gist.github.com/mendozao/3ea393b91f23a813650baab9964425b9)
Start the ClamAV daemon:
@@ -435,7 +432,7 @@ Start ClamAV daemon
/your/location/to/brew/Cellar/clamav/1.1.0/sbin/clamd
```

#### 2. Setup clamAV With Docker
**Option 2. Setup clamAV With Docker**

Run `clamAV` through docker:

@@ -443,7 +440,7 @@ Run `clamAV` through docker
docker run -d -p 3310:3310 opencloudeu/clamav-ci:latest
```

### Run OpenCloud
#### Run OpenCloud

As the `antivirus` service is not enabled by default, we need to enable it while running the OpenCloud server. We also need to enable `async upload`, and since the virus scan is performed in the post-processing step, we need to set that as well. Documentation for the environment variables related to antivirus is available [here](https://docs.opencloud.eu/services/antivirus/#environment-variables).

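A minimal sketch combining the variables described above, using the docker socket from Option 2 (`ANTIVIRUS_SCANNER_TYPE` is an assumed variable name; see the linked docs and the socket options in the hunk below):

```bash
# a sketch; ANTIVIRUS_SCANNER_TYPE is an assumption
ANTIVIRUS_SCANNER_TYPE=clamav \
ANTIVIRUS_CLAMAV_SOCKET="tcp://host.docker.internal:3310" \
OC_ASYNC_UPLOADS=true \
OC_ADD_RUN_SERVICES=antivirus \
POSTPROCESSING_STEPS=virusscan \
opencloud/bin/opencloud server
```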
@@ -469,15 +466,6 @@ For antivirus running locally on Linux OS, use `ANTIVIRUS_CLAMAV_SOCKET= "/var/ru
For antivirus running locally on Mac OS, use `ANTIVIRUS_CLAMAV_SOCKET= "/tmp/clamd.sock"`.
For antivirus running with docker, use `ANTIVIRUS_CLAMAV_SOCKET= "tcp://host.docker.internal:3310"`.

### Create virus files

The antivirus tests require EICAR test files, which are not stored in the repository.
They are generated dynamically when needed for testing.

```bash
tests/acceptance/scripts/generate-virus-files.sh
```

#### Run the Acceptance Test

Run the acceptance test with the following command:
@@ -488,11 +476,11 @@ BEHAT_FEATURE="tests/acceptance/features/apiAntivirus/antivirus.feature" \
make test-acceptance-api
```

## Running Test Suite With Federated Sharing (@ocm)
### Running Test Suite With Federated Sharing (@ocm)

Test suites that are tagged with `@ocm` require running two different OpenCloud instances. TODO: More detailed information and configuration related to it will be added later.

### Setup First OpenCloud Instance
#### Setup First OpenCloud Instance

```bash
# init OpenCloud
@@ -514,15 +502,15 @@ opencloud/bin/opencloud server

The first OpenCloud instance should be available at: https://localhost:9200/

### Setup Second OpenCloud Instance
#### Setup Second OpenCloud Instance

You can run the second OpenCloud instance in two ways:

#### Using `.vscode/launch.json`
**Option 1. Using `.vscode/launch.json`**

From the `Run and Debug` panel of VSCode, select `Fed OpenCloud Server` and start the debugger.

#### Using env file
**Option 2. Using env file**

```bash
# init OpenCloud
@@ -550,7 +538,7 @@ BEHAT_FEATURE="tests/acceptance/features/apiOcm/ocm.feature" \
make test-acceptance-api
```

## Running Text Preview Tests Containing Unicode Characters
### Running Text Preview Tests Containing Unicode Characters

There are some tests that check the text preview of files containing Unicode characters. The OpenCloud server by default cannot generate the thumbnail of such files correctly, but it provides an environment variable to allow the use of custom fonts that support Unicode characters. So to run such tests successfully, we have to run the OpenCloud server with this environment variable.

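A minimal sketch of such a run (`THUMBNAILS_TXT_FONTMAP_FILE` is an assumed variable name for the thumbnails service; the sample file path is taken from the hunk below):

```bash
# a sketch; THUMBNAILS_TXT_FONTMAP_FILE is an assumption
THUMBNAILS_TXT_FONTMAP_FILE=tests/config/drone/fontsMap.json \
opencloud/bin/opencloud server
```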
@@ -573,15 +561,15 @@ The sample `fontsMap.json` file is located in `tests/config/drone/fontsMap.json`
### Build dev docker

```bash
make -C opencloud dev-docker
make -C opencloud dev-docker
```

### Choose STORAGE_DRIVER

By default, the system uses `posix` storage. However, you can override this by setting the `STORAGE_DRIVER` environment variable.

### Run a script that starts the openCloud server in the docker and runs the API tests locally (for debugging purposes)
### Run a script that starts the openCloud server in the docker and runs the API tests locally (for debugging purposes)

```bash
STORAGE_DRIVER=posix ./tests/acceptance/run_api_tests.sh
STORAGE_DRIVER=posix ./tests/acceptance/run_api_tests.sh
```
@@ -62,7 +62,7 @@ class HttpRequestHelper {

	/**
	 *
	 * @param string|null $url
	 * @param string $url
	 * @param string|null $xRequestId
	 * @param string|null $method
	 * @param string|null $user
@@ -80,8 +80,8 @@ class HttpRequestHelper {
	 * @throws GuzzleException
	 */
	public static function sendRequestOnce(
		?string $url,
		?string $xRequestId,
		string $url,
		?string $xRequestId = null,
		?string $method = 'GET',
		?string $user = null,
		?string $password = null,
@@ -100,7 +100,7 @@ class HttpRequestHelper {
		$parsedUrl = parse_url($url);
		$baseUrl = $parsedUrl['scheme'] . '://' . $parsedUrl['host'];
		$baseUrl .= isset($parsedUrl['port']) ? ':' . $parsedUrl['port'] : '';
		$testUrl = $baseUrl . "/graph/v1.0/use/$user";
		$testUrl = $baseUrl . "/graph/v1.0/user/$user";
		if (OcHelper::isTestingOnReva()) {
			$url = $baseUrl . "/ocs/v2.php/cloud/users/$user";
		}
@@ -195,13 +195,20 @@ class UploadHelper extends Assert {
	}

	/**
	 * get the path of a file from FilesForUpload directory
	 *
	 * @param string|null $name name of the file to upload
	 * get the path of the acceptance tests directory
	 *
	 * @return string
	 */
	public static function getUploadFilesDir(?string $name): string {
		return \getenv("FILES_FOR_UPLOAD") . $name;
	public static function getAcceptanceTestsDir(): string {
		return \dirname(__FILE__) . "/../";
	}

	/**
	 * get the path of the filesForUpload directory
	 *
	 * @return string
	 */
	public static function getFilesForUploadDir(): string {
		return \dirname(__FILE__) . "/../filesForUpload/";
	}
}
@@ -25,6 +25,7 @@ use PHPUnit\Framework\Assert;
use Psr\Http\Message\ResponseInterface;
use TestHelpers\WebDavHelper;
use TestHelpers\BehatHelper;
use TestHelpers\UploadHelper;

require_once 'bootstrap.php';

@@ -49,7 +50,7 @@ class ChecksumContext implements Context {
		string $checksum
	): ResponseInterface {
		$file = \file_get_contents(
			$this->featureContext->acceptanceTestsDirLocation() . $source
			UploadHelper::getAcceptanceTestsDir() . $source
		);
		return $this->featureContext->makeDavRequest(
			$user,
@@ -37,11 +37,13 @@ use TestHelpers\SetupHelper;
use TestHelpers\HttpRequestHelper;
use TestHelpers\HttpLogger;
use TestHelpers\OcHelper;
use TestHelpers\StorageDriver;
use TestHelpers\GraphHelper;
use TestHelpers\WebDavHelper;
use TestHelpers\SettingsHelper;
use TestHelpers\OcConfigHelper;
use TestHelpers\BehatHelper;
use TestHelpers\UploadHelper;
use Swaggest\JsonSchema\InvalidValue as JsonSchemaException;
use Swaggest\JsonSchema\Exception\ArrayException;
use Swaggest\JsonSchema\Exception\ConstException;
@@ -561,6 +563,38 @@ class FeatureContext extends BehatVariablesContext {
		}
	}

	/**
	 * @BeforeScenario @antivirus
	 *
	 * @return void
	 * @throws Exception
	 */
	public function createTestVirusFiles(): void {
		$uploadDir = UploadHelper::getFilesForUploadDir() . 'filesWithVirus/';
		$virusFile = $uploadDir . 'eicar.com';
		$virusZipFile = $uploadDir . 'eicar_com.zip';

		if (file_exists($virusFile) && file_exists($virusZipFile)) {
			return;
		}

		if (!is_dir($uploadDir)) {
			mkdir($uploadDir, 0755);
		}

		$res1 = HttpRequestHelper::sendRequestOnce('https://secure.eicar.org/eicar.com');
		if ($res1->getStatusCode() !== 200) {
			throw new Exception("Could not download eicar.com test virus file");
		}
		file_put_contents($virusFile, $res1->getBody()->getContents());

		$res2 = HttpRequestHelper::sendRequestOnce('https://secure.eicar.org/eicar_com.zip');
		// check the status before writing the file, mirroring the eicar.com handling above
		if ($res2->getStatusCode() !== 200) {
			throw new Exception("Could not download eicar_com.zip test virus file");
		}
		file_put_contents($virusZipFile, $res2->getBody()->getContents());
	}

	/**
	 *
	 * @BeforeScenario
@@ -2595,18 +2629,11 @@ class FeatureContext extends BehatVariablesContext {
		return "work_tmp";
	}

	/**
	 * @return string
	 */
	public function acceptanceTestsDirLocation(): string {
		return \dirname(__FILE__) . "/../";
	}

	/**
	 * @return string
	 */
	public function workStorageDirLocation(): string {
		return $this->acceptanceTestsDirLocation() . $this->temporaryStorageSubfolderName() . "/";
		return UploadHelper::getAcceptanceTestsDir() . $this->temporaryStorageSubfolderName() . "/";
	}

	/**
@@ -2967,10 +2994,10 @@ class FeatureContext extends BehatVariablesContext {
	public static function isExpectedToFail(string $scenarioLine): bool {
		$expectedFailFile = \getenv('EXPECTED_FAILURES_FILE');
		if (!$expectedFailFile) {
			$expectedFailFile = __DIR__ . '/../expected-failures-localAPI-on-decomposed-storage.md';
			if (\strpos($scenarioLine, "coreApi") === 0) {
				$expectedFailFile = __DIR__ . '/../expected-failures-API-on-decomposed-storage.md';
			if (OcHelper::getStorageDriver() === StorageDriver::POSIX) {
				$expectedFailFile = __DIR__ . '/../expected-failures-posix-storage.md';
			}
			$expectedFailFile = __DIR__ . '/../expected-failures-decomposed-storage.md';
		}

		$reader = \fopen($expectedFailFile, 'r');
@@ -26,6 +26,7 @@ use Psr\Http\Message\ResponseInterface;
use TestHelpers\HttpRequestHelper;
use TestHelpers\WebDavHelper;
use TestHelpers\BehatHelper;
use TestHelpers\UploadHelper;

require_once 'bootstrap.php';

@@ -862,7 +863,7 @@ class PublicWebDavContext implements Context {
		string $destination,
	): void {
		$content = \file_get_contents(
			$this->featureContext->acceptanceTestsDirLocation() . $source
			UploadHelper::getAcceptanceTestsDir() . $source
		);
		$response = $this->publicUploadContent(
			$destination,
@@ -888,7 +889,7 @@ class PublicWebDavContext implements Context {
		string $password
	): void {
		$content = \file_get_contents(
			$this->featureContext->acceptanceTestsDirLocation() . $source
			UploadHelper::getAcceptanceTestsDir() . $source
		);
		$response = $this->publicUploadContent(
			$destination,
@@ -32,6 +32,7 @@ use Psr\Http\Message\ResponseInterface;
use TestHelpers\HttpRequestHelper;
use TestHelpers\WebDavHelper;
use TestHelpers\BehatHelper;
use TestHelpers\UploadHelper;

require_once 'bootstrap.php';

@@ -364,7 +365,7 @@ class TUSContext implements Context {
		$client->setChecksumAlgorithm('sha1');
		$client->setApiPath(WebDavHelper::getDavPath($davPathVersion, $suffixPath));
		$client->setMetadata($uploadMetadata);
		$sourceFile = $this->featureContext->acceptanceTestsDirLocation() . $source;
		$sourceFile = UploadHelper::getAcceptanceTestsDir() . $source;
		$client->setKey((string)rand())->file($sourceFile, $destination);
		$this->featureContext->pauseUploadDelete();

@@ -518,7 +519,7 @@ class TUSContext implements Context {
	 */
	public function writeDataToTempFile(string $content): string {
		$temporaryFileName = \tempnam(
			$this->featureContext->acceptanceTestsDirLocation(),
			UploadHelper::getAcceptanceTestsDir(),
			"tus-upload-test-"
		);
		if ($temporaryFileName === false) {
@@ -1648,7 +1648,7 @@ trait WebDav {
		?bool $isGivenStep = false
	): ResponseInterface {
		$user = $this->getActualUsername($user);
		$file = \fopen($this->acceptanceTestsDirLocation() . $source, 'r');
		$file = \fopen(UploadHelper::getAcceptanceTestsDir() . $source, 'r');
		$this->pauseUploadDelete();
		$response = $this->makeDavRequest(
			$user,
@@ -1781,7 +1781,7 @@ trait WebDav {
		}
		return $this->uploadFileWithHeaders(
			$user,
			$this->acceptanceTestsDirLocation() . $source,
			UploadHelper::getAcceptanceTestsDir() . $source,
			$destination,
			$headers,
			$noOfChunks
@@ -2222,7 +2222,7 @@ trait WebDav {
			$this->getBaseUrl(),
			$user,
			$this->getPasswordForUser($user),
			$this->acceptanceTestsDirLocation() . $source,
			UploadHelper::getAcceptanceTestsDir() . $source,
			$destination,
			$this->getStepLineRef(),
			["X-OC-Mtime" => $mtime],
@@ -2257,7 +2257,7 @@ trait WebDav {
			$this->getBaseUrl(),
			$user,
			$this->getPasswordForUser($user),
			$this->acceptanceTestsDirLocation() . $source,
			UploadHelper::getAcceptanceTestsDir() . $source,
			$destination,
			$this->getStepLineRef(),
			["X-OC-Mtime" => $mtime],
@@ -18,7 +18,7 @@ log_success() {

SCRIPT_PATH=$(dirname "$0")
PATH_TO_SUITES="${SCRIPT_PATH}/features"
EXPECTED_FAILURE_FILES=("expected-failures-localAPI-on-decomposed-storage.md" "expected-failures-API-on-decomposed-storage.md" "expected-failures-without-remotephp.md")
EXPECTED_FAILURE_FILES=("expected-failures-decomposed-storage.md" "expected-failures-posix-storage.md" "expected-failures-without-remotephp.md")
# contains all the suites names inside tests/acceptance/features
AVAILABLE_SUITES=($(ls -l "$PATH_TO_SUITES" | grep '^d' | awk '{print $NF}'))

@@ -3,9 +3,9 @@ default:
      "": "%paths.base%/../bootstrap"

  suites:
    apiAccountsHashDifficulty:
    apiSpaces:
      paths:
        - "%paths.base%/../features/apiAccountsHashDifficulty"
        - "%paths.base%/../features/apiSpaces"
      context: &common_ldap_suite_context
        parameters:
          ldapAdminPassword: admin
@@ -18,21 +18,6 @@ default:
          adminPassword: admin
          regularUserPassword: 123456
        - SettingsContext:
        - GraphContext:
        - SpacesContext:
        - CapabilitiesContext:
        - FilesVersionsContext:
        - NotificationContext:
        - OCSContext:
        - PublicWebDavContext:

    apiSpaces:
      paths:
        - "%paths.base%/../features/apiSpaces"
      context: *common_ldap_suite_context
      contexts:
        - FeatureContext: *common_feature_context_params
        - SettingsContext:
        - SpacesContext:
        - CapabilitiesContext:
        - FilesVersionsContext:
@@ -442,7 +427,7 @@ default:
        - AuthAppContext:
        - CliContext:
        - OcConfigContext:


    apiTenancy:
      paths:
        - "%paths.base%/../features/apiTenancy"
@@ -1,3 +1,4 @@
.ONESHELL:
SHELL := bash

# define standard colors
@@ -5,47 +6,43 @@ BLACK := $(shell tput -Txterm setaf 0)
RED := $(shell tput -Txterm setaf 1)
GREEN := $(shell tput -Txterm setaf 2)
YELLOW := $(shell tput -Txterm setaf 3)
LIGHTPURPLE := $(shell tput -Txterm setaf 4)
BLUE := $(shell tput -Txterm setaf 4)
PURPLE := $(shell tput -Txterm setaf 5)
BLUE := $(shell tput -Txterm setaf 6)
CYAN := $(shell tput -Txterm setaf 6)
WHITE := $(shell tput -Txterm setaf 7)

RESET := $(shell tput -Txterm sgr0)

## default values only for sub-make calls
ifeq ($(LOCAL_TEST),true)
COMPOSE_FILE ?= src/opencloud-base.yml:src/tika.yml
ifeq ($(START_EMAIL),true)
COMPOSE_FILE := $(COMPOSE_FILE):src/email.yml
endif
else
COMPOSE_FILE ?= src/redis.yml:src/opencloud-base.yml:src/acceptance.yml
endif

## user input
BEHAT_FEATURE ?=

ifdef OC_IMAGE_TAG
BUILD_DEV_IMAGE := 0
else
BUILD_DEV_IMAGE := 1
endif
OC_IMAGE_TAG ?= dev
COMPOSE_FILE := src/opencloud-base.yml

# run tests with ocwrapper by default
WITH_WRAPPER ?= true
OC_WRAPPER := ../../ocwrapper/bin/ocwrapper

ifdef START_TIKA
ifeq ($(START_TIKA),true)
COMPOSE_FILE := $(COMPOSE_FILE):src/tika.yml
SEARCH_EXTRACTOR_TYPE := tika
else
SEARCH_EXTRACTOR_TYPE := basic
endif
# enable tika for full text extraction
ifeq ($(START_TIKA),true)
COMPOSE_FILE := src/tika.yml:$(COMPOSE_FILE)
export SEARCH_EXTRACTOR_TYPE := tika
else
SEARCH_EXTRACTOR_TYPE := basic
export SEARCH_EXTRACTOR_TYPE := basic
endif

# enable email server
ifeq ($(START_EMAIL),true)
COMPOSE_FILE := src/email.yml:$(COMPOSE_FILE)
export OC_ADD_RUN_SERVICES := notifications
endif

# enable antivirus
ifeq ($(START_ANTIVIRUS),true)
COMPOSE_FILE := src/antivirus.yml:$(COMPOSE_FILE)
export OC_ADD_RUN_SERVICES := $(OC_ADD_RUN_SERVICES) antivirus
export POSTPROCESSING_STEPS := virusscan
endif

# enable wopi services
ifeq ($(ENABLE_WOPI),true)
COMPOSE_FILE := $(COMPOSE_FILE):src/wopi.yml
endif

# default to posix
@@ -53,223 +50,92 @@ STORAGE_DRIVER ?= posix
ifeq ($(STORAGE_DRIVER),posix)
# posix requires an additional driver config
COMPOSE_FILE := $(COMPOSE_FILE):src/posix.yml
else ifeq ($(STORAGE_DRIVER),decomposeds3)
COMPOSE_FILE := src/ceph.yml:$(COMPOSE_FILE)
endif

# static
DIVIDE_INTO_NUM_PARTS := 10
PARTS = 1 2 3 4 5 6 7 8 9 10
LOCAL_API_SUITES = $(shell ls ../features | grep ^api*)
# use latest as default tag if OC_IMAGE is provided but no tag is set
ifneq ($(strip $(OC_IMAGE)),)
ifeq ($(strip $(OC_IMAGE_TAG)),)
OC_IMAGE_TAG := latest
endif
endif

COMPOSE_PROJECT_NAME := opencloud-acceptance-tests

# Export variables for sub-make calls
export COMPOSE_PROJECT_NAME
export COMPOSE_FILE

export OC_IMAGE
export OC_IMAGE_TAG

# test configurations
export STORAGE_DRIVER
export WITH_WRAPPER
export BEHAT_SUITE
export BEHAT_FEATURE
export TEST_SERVER_URL
export USE_BEARER_TOKEN

## make definition
.PHONY: help
help:
	@echo "Please use 'make <target>' where <target> is one of the following:"
	@echo -e "Test suites: ${CYAN}https://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features${RESET}"
	@echo -e "Testing docs: ${CYAN}https://github.com/opencloud-eu/opencloud/tree/main/tests/README.md${RESET}"
	@echo
	@echo -e "${PURPLE}docs: https://docs.opencloud.eu/opencloud/development/testing/#testing-with-test-suite-in-docker${RESET}\n"
	@echo "Available commands (targets):"
	@echo -e " ${GREEN}run-api-tests\t\t${RESET}Build dev image, start services and run the tests."
	@echo -e " ${GREEN}start-services\t${RESET}Start service containers."
	@echo -e " ${GREEN}run-test-only\t\t${RESET}Run the tests only."
	@echo -e " ${GREEN}show-test-logs\t${RESET}Show the test logs."
	@echo -e " ${GREEN}ps\t\t\t${RESET}Show the running containers."
	@echo
	@echo -e "OpenCloud feature tests and test suites can be found here:"
	@echo -e "\thttps://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features"
	@echo -e " ${YELLOW}clean-dev-image\t${RESET}Delete the docker image built during acceptance tests."
	@echo -e " ${YELLOW}clean-containers\t${RESET}Delete the docker containers and volumes."
	@echo -e " ${YELLOW}clean-tests\t\t${RESET}Delete test dependencies and results."
	@echo -e " ${YELLOW}clean-all\t\t${RESET}Clean all resources: images, containers, volumes, test files."
	@echo
	@echo -e "test suites that test core compatibility are found here and they start with prefix coreApi-:"
	@echo -e "\thttps://github.com/opencloud-eu/opencloud/tree/main/tests/acceptance/features"
	@echo "Available environment variables:"
	@echo -e " ${PURPLE}OC_IMAGE\t\t${RESET}${CYAN}[image_repo]${RESET} OpenCloud image to use. If provided, the dev image build is skipped."
	@echo -e " ${PURPLE}OC_IMAGE_TAG\t\t${RESET}${CYAN}[image_tag]${RESET} OpenCloud image tag to use. If provided, the dev image build is skipped."
	@echo -e " ${PURPLE}WITH_WRAPPER\t\t${RESET}${CYAN}[true|false]${RESET} Start OpenCloud server using ocwrapper. Default: ${YELLOW}true${RESET}"
	@echo -e " ${PURPLE}STORAGE_DRIVER\t${RESET}${CYAN}[posix|decomposed|decomposeds3]${RESET} Storage driver to use. Default: ${YELLOW}posix${RESET}"
	@echo -e " ${PURPLE}BEHAT_FEATURE\t\t${RESET}${RESET}${CYAN}[path]${RESET} Path to a feature file. Example: ${YELLOW}tests/acceptance/features/apiGraph/changeRole.feature${RESET}"
	@echo -e " ${PURPLE}BEHAT_SUITE\t\t${RESET}${RESET}${CYAN}[suite_name]${RESET} Test suite to run. Example: ${YELLOW}apiGraph${RESET}"
	@echo -e " ${PURPLE}TEST_SERVER_URL\t${RESET}${CYAN}[url]${RESET} URL of the OpenCloud server to test against."
	@echo -e " ${PURPLE}USE_BEARER_TOKEN\t${RESET}${CYAN}[true|false]${RESET} Use a bearer token for authentication. Default: ${YELLOW}false${RESET}"
	@echo
	@echo -e "The OpenCloud to be tested will be built from your current working state."
	@echo -e "You also can select the OpenCloud Docker image for all tests by setting"
	@echo -e "\tmake ... ${YELLOW}OC_IMAGE_TAG=latest${RESET}"
	@echo -e "where ${YELLOW}latest${RESET} is an example for any valid Docker image tag from"
	@echo -e "https://hub.docker.com/r/opencloud-eu/opencloud."
	@echo
	@echo -e "${GREEN}Run full OpenCloud test suites with decomposed storage:${RESET}\n"
	@echo -e "\tmake localApiTests-apiAccountsHashDifficulty-decomposed\t\t${BLUE}run apiAccountsHashDifficulty test suite, where available test suites are apiAccountsHashDifficulty apiArchiver apiContract apiGraph apiSpaces apiSpacesShares apiAsyncUpload apiCors${RESET}"
	@echo
	@echo -e "${GREEN}Run full OpenCloud test suites with decomposeds3 storage:${RESET}\n"
	@echo -e "\tmake localApiTests-apiAccountsHashDifficulty-decomposeds3\t\t${BLUE}run apiAccountsHashDifficulty test suite, where available test suites are apiAccountsHashDifficulty apiArchiver apiContract apiGraph apiSpaces apiSpacesShares apiAsyncUpload apiCors${RESET}"
	@echo
	@echo -e "${GREEN}Run full OpenCloud test suites with decomposed storage:${RESET}\n"
	@echo -e "\tmake Core-API-Tests-decomposed-storage-${RED}X${RESET}\t\t${BLUE}run test suite number X, where ${RED}X = 1 .. 10${RESET}"
	@echo
	@echo -e "${GREEN}Run full OpenCloud test suites with decomposeds3 storage:${RESET}\n"
	@echo -e "\tmake Core-API-Tests-decomposeds3-storage-${RED}X${RESET}\t\t${BLUE}run test suite number X, where ${RED}X = 1 .. 10${RESET}"
	@echo
	@echo -e "${GREEN}Run an OpenCloud feature test with decomposed storage:${RESET}\n"
	@echo -e "\tmake test-opencloud-feature-decomposed-storage ${YELLOW}BEHAT_FEATURE='...'${RESET}\t${BLUE}run single feature test${RESET}"
	@echo
	@echo -e "\twhere ${YELLOW}BEHAT_FEATURE='...'${RESET} contains a relative path to the feature definition."
	@echo -e "\texample: ${RED}tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature${RESET}"
	@echo
	@echo -e "${GREEN}Run an OpenCloud feature test with decomposeds3 storage:${RESET}\n"
	@echo -e "\tmake test-opencloud-feature-decomposeds3-storage ${YELLOW}BEHAT_FEATURE='...'${RESET}\t${BLUE}run single feature test${RESET}"
	@echo
	@echo -e "\twhere ${YELLOW}BEHAT_FEATURE='...'${RESET} contains a relative path to the feature definition."
	@echo -e "\texample: ${RED}tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature${RESET}"
	@echo
	@echo -e "\twhere ${YELLOW}BEHAT_FEATURE='...'${RESET} contains a relative path to the feature definition."
	@echo -e "\texample: ${RED}tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature${RESET}"
	@echo
	@echo -e "${GREEN}Run a core test against OpenCloud with decomposed storage:${RESET}\n"
	@echo -e "\tmake test-core-feature-decomposed-storage ${YELLOW}BEHAT_FEATURE='...'${RESET}\t${BLUE}run single feature test${RESET}"
	@echo
	@echo -e "\twhere ${YELLOW}BEHAT_FEATURE='...'${RESET} contains a relative path to the feature definition."
	@echo -e "\texample: ${RED}tests/acceptance/features/coreApiAuth/webDavAuth.feature${RESET}"
	@echo
	@echo -e "${GREEN}Run a core test against OpenCloud with decomposeds3 storage:${RESET}\n"
	@echo -e "\tmake test-core-feature-decomposeds3-storage ${YELLOW}BEHAT_FEATURE='...'${RESET}\t${BLUE}run single feature test${RESET}"
	@echo
	@echo -e "\twhere ${YELLOW}BEHAT_FEATURE='...'${RESET} contains a relative path to the feature definition."
	@echo -e "\texample: ${RED}tests/acceptance/features/coreApiAuth/webDavAuth.feature${RESET}"
	@echo
	@echo
	@echo -e "${GREEN}Show output of tests:${RESET}\n"
	@echo -e "\tmake show-test-logs\t\t${BLUE}show output of running or finished tests${RESET}"
	@echo
	@echo
	@echo -e "${GREEN}Clean up after testing:${RESET}\n"
	@echo -e "\tmake clean\t${BLUE}clean up all${RESET}"
	@echo -e "\tmake clean-docker-container\t\t${BLUE}stops and removes used docker containers${RESET}"
	@echo -e "\tmake clean-docker-volumes\t\t${BLUE}removes used docker volumes (used for caching)${RESET}"
	@echo
.PHONY: test-opencloud-feature-decomposed-storage
|
||||
test-opencloud-feature-decomposed-storage: ## test a OpenCloud feature with decomposed storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature:10'
|
||||
@TEST_SOURCE=opencloud \
|
||||
STORAGE_DRIVER=decomposed \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
@echo -e "Example usage:"
|
||||
@echo -e " ${PURPLE}WITH_WRAPPER${RESET}=${YELLOW}false${RESET} \\"
|
||||
@echo -e " ${PURPLE}STORAGE_DRIVER${RESET}=${YELLOW}posix${RESET} \\"
|
||||
@echo -e " ${PURPLE}BEHAT_FEATURE${RESET}=${YELLOW}tests/acceptance/features/apiGraph/changeRole.feature${RESET} \\"
|
||||
@echo -e " make ${GREEN}run-api-tests${RESET}"
|
||||
|
||||
.PHONY: test-opencloud-feature-decomposeds3-storage
|
||||
test-opencloud-feature-decomposeds3-storage: ## test a OpenCloud feature with decomposeds3 storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature:10'
|
||||
@TEST_SOURCE=opencloud \
|
||||
STORAGE_DRIVER=decomposeds3 \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
START_CEPH=1 \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
.PHONY: run-api-tests
|
||||
run-api-tests: $(OC_WRAPPER) build-dev-image clean-containers
|
||||
@echo "${BLUE}[INFO]${RESET} Compose project: ${YELLOW}$(COMPOSE_PROJECT_NAME)${RESET}"
|
||||
@echo "${BLUE}[INFO]${RESET} Compose file: ${YELLOW}$(COMPOSE_FILE)${RESET}"
|
||||
@echo "${BLUE}[INFO]${RESET} Using storage driver: ${YELLOW}$(STORAGE_DRIVER)${RESET}"
|
||||
# force use local server when using this command
|
||||
export TEST_SERVER_URL=https://opencloud-server:9200
|
||||
$(MAKE) --no-print-directory start-services
|
||||
$(MAKE) --no-print-directory run-test-only
|
||||
|
||||
.PHONY: test-opencloud-feature-posix-storage
|
||||
test-opencloud-feature-posix-storage: ## test a OpenCloud feature with posix storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature:10'
|
||||
@TEST_SOURCE=opencloud \
|
||||
STORAGE_DRIVER=posix \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
.PHONY: test-core-feature-decomposed-storage
|
||||
test-core-feature-decomposed-storage: ## test a core feature with decomposed storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/coreApiAuth/webDavAuth.feature'
|
||||
@TEST_SOURCE=core \
|
||||
STORAGE_DRIVER=decomposed \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
.PHONY: test-core-feature-decomposeds3-storage
|
||||
test-core-feature-decomposeds3-storage: ## test a core feature with decomposeds3 storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/coreApiAuth/webDavAuth.feature'
|
||||
@TEST_SOURCE=core \
|
||||
STORAGE_DRIVER=decomposeds3 \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
START_CEPH=1 \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
.PHONY: test-opencloud-feature-posix-storage
|
||||
test-core-opencloud-feature-posix-storage: ## test a core feature with posix storage, usage: make ... BEHAT_FEATURE='tests/acceptance/features/apiAccountsHashDifficulty/addUser.feature:10'
|
||||
@TEST_SOURCE=core \
|
||||
STORAGE_DRIVER=posix \
|
||||
BEHAT_FEATURE=$(BEHAT_FEATURE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
localSuiteOpencloud = $(addprefix localApiTests-, $(addsuffix -decomposed,${LOCAL_API_SUITES}))
|
||||
.PHONY: $(localSuiteOpencloud)
|
||||
$(localSuiteOpencloud): ## run local api test suite with decomposed storage
|
||||
@$(eval BEHAT_SUITE=$(shell echo "$@" | cut -d'-' -f2))
|
||||
@TEST_SOURCE=opencloud \
|
||||
STORAGE_DRIVER=decomposed \
|
||||
BEHAT_SUITE=$(BEHAT_SUITE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
localSuiteDecomposedS3 = $(addprefix localApiTests-, $(addsuffix -decomposeds3,${LOCAL_API_SUITES}))
|
||||
.PHONY: $(localSuiteDecomposedS3)
|
||||
$(localSuiteDecomposedS3): ## run local api test suite with s3 storage
|
||||
@$(eval BEHAT_SUITE=$(shell echo "$@" | cut -d'-' -f2))
|
||||
@TEST_SOURCE=opencloud \
|
||||
STORAGE_DRIVER=decomposeds3 \
|
||||
BEHAT_SUITE=$(BEHAT_SUITE) \
|
||||
$(MAKE) --no-print-directory testSuite
|
||||
|
||||
localSuitePosix = $(addprefix localApiTests-, $(addsuffix -posix,${LOCAL_API_SUITES}))

.PHONY: $(localSuitePosix)
$(localSuitePosix): ## run local api test suite with posix storage
	@$(eval BEHAT_SUITE=$(shell echo "$@" | cut -d'-' -f2))
	@TEST_SOURCE=opencloud \
	STORAGE_DRIVER=posix \
	BEHAT_SUITE=$(BEHAT_SUITE) \
	$(MAKE) --no-print-directory testSuite

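To make the pattern rules above concrete: assuming LOCAL_API_SUITES contained a suite named apiGraph (a hypothetical value here), the generated target and the derived suite name would look like this:

	# hypothetical expansion: localApiTests-<suite>-<storage>
	make localApiTests-apiGraph-posix
	# inside the rule: echo "localApiTests-apiGraph-posix" | cut -d'-' -f2  ->  BEHAT_SUITE=apiGraph
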
targetsOC = $(addprefix Core-API-Tests-decomposed-storage-,$(PARTS))

.PHONY: $(targetsOC)
$(targetsOC):
	@$(eval RUN_PART=$(shell echo "$@" | tr -dc '0-9'))
	@TEST_SOURCE=core \
	STORAGE_DRIVER=decomposed \
	RUN_PART=$(RUN_PART) \
	$(MAKE) --no-print-directory testSuite

targetsDecomposedS3 = $(addprefix Core-API-Tests-decomposeds3-storage-,$(PARTS))

.PHONY: $(targetsDecomposedS3)
$(targetsDecomposedS3):
	@$(eval RUN_PART=$(shell echo "$@" | tr -dc '0-9'))
	@TEST_SOURCE=core \
	STORAGE_DRIVER=decomposeds3 \
	RUN_PART=$(RUN_PART) \
	$(MAKE) --no-print-directory testSuite

.PHONY: testSuite
testSuite: $(OC_WRAPPER) build-dev-image clean-docker-container
	@if [ -n "${START_CEPH}" ]; then \
		COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
		COMPOSE_FILE=src/ceph.yml \
		docker compose run start_ceph; \
	fi; \
	if [ "${START_EMAIL}" == "true" ]; then \
		COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
		COMPOSE_FILE=src/email.yml \
		docker compose run start_email; \
	fi; \
	if [ "${START_ANTIVIRUS}" == "true" ]; then \
		COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
		COMPOSE_FILE=src/antivirus.yml \
		docker compose run start_antivirus; \
	fi; \
	if [ "${START_TIKA}" == "true" ]; then \
		COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
		COMPOSE_FILE=src/tika.yml \
		docker compose run tika-service; \
	fi; \
	COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	COMPOSE_FILE=$(COMPOSE_FILE) \
	STORAGE_DRIVER=$(STORAGE_DRIVER) \
	TEST_SOURCE=$(TEST_SOURCE) \
	WITH_WRAPPER=$(WITH_WRAPPER) \
	OC_ASYNC_UPLOADS=$(OC_ASYNC_UPLOADS) \
	OC_ADD_RUN_SERVICES=$(OC_ADD_RUN_SERVICES) \
	POSTPROCESSING_STEPS=$(POSTPROCESSING_STEPS) \
	SEARCH_EXTRACTOR_TYPE=$(SEARCH_EXTRACTOR_TYPE) \
	OC_IMAGE_TAG=$(OC_IMAGE_TAG) \
	BEHAT_SUITE=$(BEHAT_SUITE) \
	BEHAT_FEATURE=$(BEHAT_FEATURE) \
	DIVIDE_INTO_NUM_PARTS=$(DIVIDE_INTO_NUM_PARTS) \
	RUN_PART=$(RUN_PART) \

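The optional side services in testSuite are gated by plain environment flags rather than separate targets; a sketch of enabling two of them for a direct run (variable names as used above):

	# start the email and tika helpers before the test run
	START_EMAIL=true START_TIKA=true \
	make testSuite TEST_SOURCE=opencloud STORAGE_DRIVER=posix
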
.PHONY: start-services
start-services: $(OC_WRAPPER) ## start services
	docker compose up -d --build --force-recreate

.PHONY: run-test-only
run-test-only:
	docker compose -f src/acceptance.yml up

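run-api-tests earlier in this Makefile is essentially these two steps chained; when iterating, they can be run separately so the server stack is only (re)built once (sketch):

	make start-services   # boot the compose stack
	make run-test-only    # re-run just the acceptance-tests container
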
.PHONY: show-test-logs
show-test-logs: ## show logs of test
	@COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	COMPOSE_FILE=$(COMPOSE_FILE) \
show-test-logs: ## show test logs
	docker compose logs --no-log-prefix -f acceptance-tests | less

.PHONY: ps
ps: ## show docker status
	@COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	COMPOSE_FILE=$(COMPOSE_FILE) \
ps: ## show running containers
	docker compose ps

$(OC_WRAPPER):
@@ -279,55 +145,21 @@ $(OC_WRAPPER):

.PHONY: build-dev-image
build-dev-image:
	@if [ $(BUILD_DEV_IMAGE) -eq 1 ]; then \
	@if [ -z "$(OC_IMAGE)" ] && [ -z "$(OC_IMAGE_TAG)" ]; then \
		$(MAKE) --no-print-directory -C ../../../opencloud dev-docker \
	; fi;

.PHONY: clean-dev-docker-image
clean-dev-docker-image: ## clean docker image built during acceptance tests
	@docker image rm opencloud-eu/opencloud:dev || true

.PHONY: clean-dev-image
clean-dev-image: ## clean docker image built during acceptance tests
	@docker image rm opencloudeu/opencloud:dev || true

.PHONY: clean-docker-container
clean-docker-container: ## clean docker containers created during acceptance tests
	@COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	COMPOSE_FILE=$(COMPOSE_FILE) \
	BEHAT_SUITE="" \
	DIVIDE_INTO_NUM_PARTS="" \
	OC_IMAGE_TAG="" \
	RUN_PART="" \
	STORAGE_DRIVER="" \
	TEST_SOURCE="" \
	docker compose down --remove-orphans

.PHONY: clean-docker-volumes
clean-docker-volumes: ## clean docker volumes created during acceptance tests
	@COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	COMPOSE_FILE=$(COMPOSE_FILE) \
	BEHAT_SUITE="" \
	DIVIDE_INTO_NUM_PARTS="" \
	OC_IMAGE_TAG="" \
	RUN_PART="" \
	STORAGE_DRIVER="" \
	TEST_SOURCE="" \

.PHONY: clean-containers
clean-containers: ## clean docker containers created during acceptance tests
	docker compose down --remove-orphans -v

.PHONY: clean-files
clean-files:

.PHONY: clean-tests
clean-tests:
	@$(MAKE) --no-print-directory -C ../../../. clean-tests

.PHONY: clean
clean: clean-docker-container clean-docker-volumes clean-dev-docker-image clean-files ## clean all

.PHONY: start-server
start-server: $(OC_WRAPPER) ## build and start server
	@echo "Build and start server..."
	COMPOSE_FILE=$(COMPOSE_FILE) \
	COMPOSE_PROJECT_NAME=$(COMPOSE_PROJECT_NAME) \
	OC_IMAGE_TAG=dev \
	WITH_WRAPPER=$(WITH_WRAPPER) \
	TEST_SOURCE=opencloud \
	STORAGE_DRIVER=$(STORAGE_DRIVER) \
	OC_ASYNC_UPLOADS=true \
	SEARCH_EXTRACTOR_TYPE=tika \
	OC_ADD_RUN_SERVICES=notifications \
	docker compose up -d --build --force-recreate

clean-all: clean-containers clean-dev-image clean-tests ## clean all

@@ -5,14 +5,12 @@ services:
    command: /bin/bash /test/run-tests.sh
    environment:
      OC_ROOT: /woodpecker/src/github.com/opencloud-eu/opencloud
      TEST_SERVER_URL: https://opencloud-server:9200
      TEST_SERVER_URL: ${TEST_SERVER_URL:-https://opencloud-server:9200}
      OC_WRAPPER_URL: http://opencloud-server:5200
      STORAGE_DRIVER: $STORAGE_DRIVER
      TEST_SOURCE: $TEST_SOURCE
      STORAGE_DRIVER: ${STORAGE_DRIVER:-posix}
      BEHAT_SUITE: ${BEHAT_SUITE:-}
      BEHAT_FEATURE: ${BEHAT_FEATURE:-}
      DIVIDE_INTO_NUM_PARTS: $DIVIDE_INTO_NUM_PARTS
      RUN_PART: $RUN_PART
      USE_BEARER_TOKEN: ${USE_BEARER_TOKEN:-false}
      # email
      EMAIL_HOST: email
      EMAIL_PORT: 9000

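The ${VAR:-default} interpolations introduced in this hunk make docker compose fall back to the default when the variable is unset or empty. The effective values can be checked without starting anything (sketch, using the compose file shown above):

	# with STORAGE_DRIVER unset, the rendered config shows the fallback 'posix'
	docker compose -f src/acceptance.yml config | grep STORAGE_DRIVER
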
10  tests/acceptance/docker/src/onlyoffice-entrypoint.sh  Normal file
@@ -0,0 +1,10 @@
#!/bin/bash

set -e

mkdir -p /var/www/onlyoffice/Data/certs
cd /var/www/onlyoffice/Data/certs
openssl req -x509 -newkey rsa:4096 -keyout onlyoffice.key -out onlyoffice.crt -sha256 -days 365 -batch -nodes
chmod 400 /var/www/onlyoffice/Data/certs/onlyoffice.key

/app/ds/run-document-server.sh

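The entrypoint above mints a fresh self-signed certificate on every container start. If needed, the result can be inspected inside the container with a standard openssl call (sketch, container paths as above):

	openssl x509 -in /var/www/onlyoffice/Data/certs/onlyoffice.crt -noout -subject -dates
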
@@ -1,12 +1,12 @@
services:
  opencloud-server:
    image: opencloudeu/opencloud:dev
    entrypoint: [ "/bin/sh", "/usr/bin/serve-opencloud.sh" ]
    image: ${OC_IMAGE:-opencloudeu/opencloud}:${OC_IMAGE_TAG:-dev}
    entrypoint: ["/bin/sh", "/usr/bin/serve-opencloud.sh"]
    user: root
    environment:
      WITH_WRAPPER: $WITH_WRAPPER
      WITH_WRAPPER: ${WITH_WRAPPER:-true}
      OC_URL: "https://opencloud-server:9200"
      STORAGE_USERS_DRIVER: $STORAGE_DRIVER
      STORAGE_USERS_DRIVER: ${STORAGE_DRIVER:-posix}
      STORAGE_USERS_POSIX_WATCH_FS: "true"
      STORAGE_USERS_DRIVER_LOCAL_ROOT: /srv/app/tmp/opencloud/local/root
      STORAGE_USERS_DRIVER_OC_ROOT: /srv/app/tmp/opencloud/storage/users
@@ -14,15 +14,12 @@ services:
      SHARING_USER_JSON_FILE: /srv/app/tmp/opencloud/shares.json
      PROXY_ENABLE_BASIC_AUTH: "true"
      WEB_UI_CONFIG_FILE: /woodpecker/src/github.com/opencloud-eu/opencloud/tests/config/woodpecker/opencloud-config.json
      ACCOUNTS_HASH_DIFFICULTY: 4
      OC_INSECURE: "true"
      IDM_CREATE_DEMO_USERS: "true"
      IDM_ADMIN_PASSWORD: "admin"
      FRONTEND_SEARCH_MIN_LENGTH: "2"
      OC_ASYNC_UPLOADS: $OC_ASYNC_UPLOADS
      OC_ADD_RUN_SERVICES: $OC_ADD_RUN_SERVICES
      OC_ADD_RUN_SERVICES: ${OC_ADD_RUN_SERVICES:-}
      PROXY_HTTP_ADDR: "0.0.0.0:9200"
      OC_JWT_SECRET: "some-random-jwt-secret"

      # decomposeds3 specific settings
      STORAGE_USERS_DECOMPOSEDS3_ENDPOINT: http://ceph:8080
@@ -42,19 +39,19 @@ services:
      ANTIVIRUS_CLAMAV_SOCKET: tcp://clamav:3310

      # postprocessing step
      POSTPROCESSING_STEPS: $POSTPROCESSING_STEPS
      POSTPROCESSING_STEPS: ${POSTPROCESSING_STEPS:-}

      # tika
      SEARCH_EXTRACTOR_TYPE: $SEARCH_EXTRACTOR_TYPE
      SEARCH_EXTRACTOR_TYPE: ${SEARCH_EXTRACTOR_TYPE:-basic}
      SEARCH_EXTRACTOR_TIKA_TIKA_URL: "http://tika:9998"
      SEARCH_EXTRACTOR_CS3SOURCE_INSECURE: "true"

      # fonts map for txt thumbnails (including unicode support)
      THUMBNAILS_TXT_FONTMAP_FILE: "/woodpecker/src/github.com/opencloud-eu/opencloud/tests/config/drone/fontsMap.json"
    ports:
      - '9200:9200'
      - '5200:5200' ## ocwrapper
      - '9174:9174' ## notifications debug
      - "9200:9200"
      - "5200:5200" ## ocwrapper
      - "9174:9174" ## notifications debug
    volumes:
      - ../../../config:/woodpecker/src/github.com/opencloud-eu/opencloud/tests/config
      - ../../../ocwrapper/bin/ocwrapper:/usr/bin/ocwrapper

@@ -2,8 +2,6 @@
services:
  opencloud-server:
    environment:
      # activate posix storage driver for users
      STORAGE_USERS_DRIVER: posix
      # posix requires a shared cache store
      STORAGE_USERS_ID_CACHE_STORE: "nats-js-kv"
      STORAGE_USERS_POSIX_WATCH_FS: "true"

@@ -1,4 +0,0 @@
services:
  redis:
    image: redis:6
    command: ["--databases", "1"]

@@ -1,70 +1,42 @@
#!/bin/bash

#mkdir -p /drone/src/vendor-bin/behat
#cp /tmp/vendor-bin/behat/composer.json /drone/src/vendor-bin/behat/composer.json
mkdir -p "${OC_ROOT}/vendor-bin/behat"
if [ ! -f "${OC_ROOT}/vendor-bin/behat/composer.json" ]; then
    cp /tmp/vendor-bin/behat/composer.json "${OC_ROOT}/vendor-bin/behat/composer.json"
fi

git config --global advice.detachedHead false

## CONFIGURE TEST
BEHAT_FILTER_TAGS='~@skip'
EXPECTED_FAILURES_FILE=''

if [ "$TEST_SOURCE" = "core" ]; then
    export ACCEPTANCE_TEST_TYPE='core-api'
    if [ "$STORAGE_DRIVER" = "decomposed" ]; then
        export OC_REVA_DATA_ROOT=''
        export BEHAT_FILTER_TAGS='~@skipOnOpencloud-decomposed-Storage'
        export EXPECTED_FAILURES_FILE='/drone/src/tests/acceptance/expected-failures-API-on-decomposed-storage.md'
    elif [ "$STORAGE_DRIVER" = "decomposeds3" ]; then
        export BEHAT_FILTER_TAGS='~@skip&&~@skipOnOpencloud-decomposeds3-Storage'
        export OC_REVA_DATA_ROOT=''
    else
        echo "non-existing STORAGE selected"
        exit 1
    fi

    unset BEHAT_SUITE

elif [ "$TEST_SOURCE" = "opencloud" ]; then
    if [ "$STORAGE_DRIVER" = "decomposed" ]; then
        export BEHAT_FILTER_TAGS='~@skip&&~@skipOnOpencloud-decomposed-Storage'
        export OC_REVA_DATA_ROOT=''
    elif [ "$STORAGE_DRIVER" = "decomposeds3" ]; then
        export BEHAT_FILTER_TAGS='~@skip&&~@skipOnOpencloud-decomposeds3-Storage'
        export OC_REVA_DATA_ROOT=''
    elif [ "$STORAGE_DRIVER" = "posix" ]; then
        export BEHAT_FILTER_TAGS='~@skip&&~@skipOnOpencloud-posix-Storage'
        export OC_REVA_DATA_ROOT=''
    else
        echo "non-existing storage selected"
        exit 1
    fi

    unset DIVIDE_INTO_NUM_PARTS
    unset RUN_PART
else
    echo "non-existing TEST_SOURCE selected"
    exit 1
if [ "$STORAGE_DRIVER" = "posix" ]; then
    BEHAT_FILTER_TAGS+='&&~@skipOnOpencloud-posix-Storage'
    EXPECTED_FAILURES_FILE="${OC_ROOT}/tests/acceptance/expected-failures-posix-storage.md"
elif [ "$STORAGE_DRIVER" = "decomposed" ]; then
    BEHAT_FILTER_TAGS+='&&~@skipOnOpencloud-decomposed-Storage'
    EXPECTED_FAILURES_FILE="${OC_ROOT}/tests/acceptance/expected-failures-decomposed-storage.md"
fi

if [ ! -z "$BEHAT_FEATURE" ]; then
    echo "feature selected: $BEHAT_FEATURE"
    # allow running without filters if it's a feature
export BEHAT_FILTER_TAGS
export EXPECTED_FAILURES_FILE

if [ -n "$BEHAT_FEATURE" ]; then
    export BEHAT_FEATURE
    echo "[INFO] Running feature: $BEHAT_FEATURE"
    # allow running without filters if it's a feature
    unset BEHAT_FILTER_TAGS
    unset DIVIDE_INTO_NUM_PARTS
    unset RUN_PART
    unset BEHAT_SUITE
    unset EXPECTED_FAILURES_FILE
else
elif [ -n "$BEHAT_SUITE" ]; then
    export BEHAT_SUITE
    echo "[INFO] Running suite: $BEHAT_SUITE"
    unset BEHAT_FEATURE
fi

## RUN TEST
sleep 10
make -C "$OC_ROOT" test-acceptance-api

if [[ -z "$TEST_SOURCE" ]]; then
    echo "non-existing TEST_SOURCE selected"
    exit 1
else
    sleep 10
    make -C $OC_ROOT test-acceptance-api
fi

chmod -R 777 vendor-bin/**/vendor vendor-bin/**/composer.lock tests/acceptance/output
chmod -R 777 "${OC_ROOT}/vendor-bin/"*"/vendor" "${OC_ROOT}/vendor-bin/"*"/composer.lock" "${OC_ROOT}/tests/acceptance/output" 2>/dev/null || true

45  tests/acceptance/docker/src/run-wopi-validator.sh  Executable file
@@ -0,0 +1,45 @@
#!/bin/sh

set -xe

if [ -z "$TEST_GROUP" ]; then
    echo "TEST_GROUP not set"
    exit 1
fi

echo "Waiting for collaboration WOPI endpoint..."

until curl -s http://collaboration:9304 >/dev/null; do
    echo "Waiting for collaboration WOPI endpoint..."
    sleep 2
done

echo "Collaboration is up"

if [ -z "$OC_URL" ]; then
    OC_URL="https://opencloud-server:9200"
fi

curl -vk -X DELETE "$OC_URL/remote.php/webdav/test.wopitest" -u admin:admin
curl -vk -X PUT "$OC_URL/remote.php/webdav/test.wopitest" -u admin:admin -D headers.txt
cat headers.txt
FILE_ID="$(cat headers.txt | sed -n -e 's/^.*oc-fileid: //Ip')"
export FILE_ID
URL="$OC_URL/app/open?app_name=FakeOffice&file_id=$FILE_ID"
URL="$(echo "$URL" | tr -d '[:cntrl:]')"
export URL
curl -vk -X POST "$URL" -u admin:admin > open.json
cat open.json
cat open.json | jq .form_parameters.access_token | tr -d '"' > accesstoken
cat open.json | jq .form_parameters.access_token_ttl | tr -d '"' > accesstokenttl
WOPI_FILE_ID="$(cat open.json | jq .app_url | sed -n -e 's/^.*files%2F//p' | tr -d '"')"
echo "http://collaboration:9300/wopi/files/$WOPI_FILE_ID" > wopisrc

WOPI_TOKEN=$(cat accesstoken)
export WOPI_TOKEN
WOPI_TTL=$(cat accesstokenttl)
export WOPI_TTL
WOPI_SRC=$(cat wopisrc)
export WOPI_SRC

/app/Microsoft.Office.WopiValidator -s -t "$WOPI_TOKEN" -w "$WOPI_SRC" -l "$WOPI_TTL" --testgroup "$TEST_GROUP"

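A side note on the token extraction above: jq's -r (raw output) flag strips the surrounding quotes directly, which would make the tr -d '"' steps unnecessary (sketch against the same open.json):

	WOPI_TOKEN=$(jq -r .form_parameters.access_token open.json)
	WOPI_TTL=$(jq -r .form_parameters.access_token_ttl open.json)
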
86  tests/acceptance/docker/src/wopi-validator-test.yml  Normal file
@@ -0,0 +1,86 @@
services:
  fakeoffice:
    image: owncloudci/alpine:latest
    entrypoint: /bin/sh
    command:
      [
        "-c",
        "while true; do echo -e \"HTTP/1.1 200 OK\n\n$(cat /hosting-discovery.xml)\" | nc -l -k -p 8080; done",
      ]
    ports:
      - 8080:8080
    extra_hosts:
      - opencloud.local:${DOCKER_HOST:-host-gateway}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080"]
    volumes:
      - ./../../../config/woodpecker/hosting-discovery.xml:/hosting-discovery.xml

  opencloud:
    image: opencloudeu/opencloud:dev
    container_name: opencloud-server
    ports:
      - 9200:9200
    entrypoint: /bin/sh
    command: ["-c", "opencloud init || true; sleep 10; opencloud server"]
    environment:
      OC_URL: https://opencloud-server:9200
      OC_CONFIG_DIR: /etc/opencloud
      STORAGE_USERS_DRIVER: posix
      PROXY_ENABLE_BASIC_AUTH: true
      OC_LOG_LEVEL: error
      OC_LOG_COLOR: false
      OC_INSECURE: true
      IDM_ADMIN_PASSWORD: admin
      GATEWAY_GRPC_ADDR: 0.0.0.0:9142
      NATS_NATS_HOST: 0.0.0.0
      NATS_NATS_PORT: 9233
    volumes:
      - config:/etc/opencloud
    depends_on:
      fakeoffice:
        condition: service_healthy

  collaboration:
    image: opencloudeu/opencloud:dev
    restart: unless-stopped
    ports:
      - 9300:9300
    entrypoint:
      - /bin/sh
    command: ["-c", "opencloud collaboration server"]
    environment:
      OC_CONFIG_DIR: /etc/opencloud
      MICRO_REGISTRY: nats-js-kv
      MICRO_REGISTRY_ADDRESS: opencloud:9233
      COLLABORATION_LOG_LEVEL: info
      COLLABORATION_GRPC_ADDR: 0.0.0.0:9301
      COLLABORATION_HTTP_ADDR: 0.0.0.0:9300
      COLLABORATION_DEBUG_ADDR: 0.0.0.0:9304
      COLLABORATION_APP_PROOF_DISABLE: true
      COLLABORATION_APP_INSECURE: true
      COLLABORATION_CS3API_DATAGATEWAY_INSECURE: true
      COLLABORATION_WOPI_SECRET: some-wopi-secret
      COLLABORATION_SERVICE_NAME: collaboration-fakeoffice
      COLLABORATION_APP_NAME: FakeOffice
      COLLABORATION_APP_PRODUCT: Microsoft
      COLLABORATION_APP_ADDR: http://fakeoffice:8080
      COLLABORATION_WOPI_SRC: http://collaboration:9300
    volumes:
      - config:/etc/opencloud
    depends_on:
      - opencloud

  wopi-validator:
    image: ${WOPI_VALIDATOR_IMAGE:-opencloudeu/wopi-validator-ci}
    volumes:
      - ./run-wopi-validator.sh:/app/run-wopi-validator.sh
    environment:
      TEST_GROUP: ${TEST_GROUP:-PutRelativeFile}
    entrypoint: /app/run-wopi-validator.sh
    depends_on:
      - collaboration
    restart: "on-failure"

volumes:
  config:

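The validator stack above is self-contained: fakeoffice serves the WOPI discovery XML, the collaboration service bridges it to the server, and wopi-validator drives the chosen TEST_GROUP (defaulting to PutRelativeFile as declared above). A direct run could look like this (sketch, path relative to tests/acceptance/docker):

	TEST_GROUP=PutRelativeFile docker compose -f src/wopi-validator-test.yml up
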
105  tests/acceptance/docker/src/wopi.yml  Normal file
@@ -0,0 +1,105 @@
x-common_config: &common_config
  image: opencloudeu/opencloud:dev
  restart: unless-stopped
  entrypoint: /bin/sh
  command: ["-c", "opencloud collaboration server"]
  user: root

x-common_env: &common_env
  OC_CONFIG_DIR: /etc/opencloud
  MICRO_REGISTRY: nats-js-kv
  MICRO_REGISTRY_ADDRESS: opencloud-server:9233
  COLLABORATION_LOG_LEVEL: info
  COLLABORATION_GRPC_ADDR: 0.0.0.0:9301
  COLLABORATION_HTTP_ADDR: 0.0.0.0:9300
  COLLABORATION_DEBUG_ADDR: 0.0.0.0:9304
  COLLABORATION_APP_PROOF_DISABLE: true
  COLLABORATION_APP_INSECURE: true
  COLLABORATION_CS3API_DATAGATEWAY_INSECURE: true
  COLLABORATION_WOPI_SECRET: some-wopi-secret

x-config_volume: &config_volume
  - config:/etc/opencloud

x-depends_on: &depends_on
  - opencloud-server

services:
  opencloud-server:
    environment:
      OC_CONFIG_DIR: /etc/opencloud
      GATEWAY_GRPC_ADDR: 0.0.0.0:9142
      NATS_NATS_HOST: 0.0.0.0
      NATS_NATS_PORT: 9233
    volumes: *config_volume

  fakeoffice:
    image: alpine:latest
    entrypoint: /bin/sh
    command:
      [
        "-c",
        "while true; do echo -e \"HTTP/1.1 200 OK\n\n$(cat /fakeoffice-discovery.xml)\" | nc -l -k -p 8080; done",
      ]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://fakeoffice:8080"]
    volumes:
      - ./../../../config/woodpecker/hosting-discovery.xml:/fakeoffice-discovery.xml

  collabora:
    image: collabora/code:24.04.5.1.1
    environment:
      DONT_GEN_SSL_CERT: set
      extra_params: --o:ssl.enable=true --o:ssl.termination=true --o:welcome.enable=false --o:net.frame_ancestors=https://opencloud-server:9200
    entrypoint: /bin/sh
    command: ["-c", "coolconfig generate-proof-key; /start-collabora-online.sh"]

  onlyoffice:
    image: onlyoffice/documentserver:7.5.1
    environment:
      WOPI_ENABLED: true
      USE_UNAUTHORIZED_STORAGE: true
    entrypoint: bash /entrypoint.sh
    volumes:
      - ./onlyoffice-entrypoint.sh:/entrypoint.sh

  collaboration-fakeoffice:
    <<: *common_config
    environment:
      <<: *common_env
      COLLABORATION_SERVICE_NAME: collaboration-fakeoffice
      COLLABORATION_APP_NAME: FakeOffice
      COLLABORATION_APP_PRODUCT: Microsoft
      COLLABORATION_APP_ADDR: http://fakeoffice:8080
      COLLABORATION_WOPI_SRC: http://collaboration-fakeoffice:9300
    volumes: *config_volume
    depends_on: *depends_on

  collaboration-collabora:
    <<: *common_config
    environment:
      <<: *common_env
      COLLABORATION_SERVICE_NAME: collaboration-collabora
      COLLABORATION_APP_NAME: Collabora
      COLLABORATION_APP_PRODUCT: Collabora
      COLLABORATION_APP_ADDR: https://collabora:9980
      COLLABORATION_APP_ICON: https://collabora:9980/favicon.ico
      COLLABORATION_WOPI_SRC: http://collaboration-collabora:9300
    volumes: *config_volume
    depends_on: *depends_on

  collaboration-onlyoffice:
    <<: *common_config
    environment:
      <<: *common_env
      COLLABORATION_SERVICE_NAME: collaboration-onlyoffice
      COLLABORATION_APP_NAME: OnlyOffice
      COLLABORATION_APP_PRODUCT: OnlyOffice
      COLLABORATION_APP_ADDR: https://onlyoffice
      COLLABORATION_APP_ICON: https://onlyoffice/web-apps/apps/documenteditor/main/resources/img/favicon.ico
      COLLABORATION_WOPI_SRC: http://collaboration-onlyoffice:9300
    volumes: *config_volume
    depends_on: *depends_on

volumes:
  config:

@@ -1,175 +0,0 @@
## Scenarios from core API tests that are expected to fail with decomposed storage while running with the Graph API

### File

Basic file management like up and download, move, copy, properties, trash, versions and chunking.

#### [Custom dav properties with namespaces are rendered incorrectly](https://github.com/owncloud/ocis/issues/2140)

_ocdav: double-check the webdav property parsing when custom namespaces are used_

- [coreApiWebdavProperties/setFileProperties.feature:128](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L128)
- [coreApiWebdavProperties/setFileProperties.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L129)
- [coreApiWebdavProperties/setFileProperties.feature:130](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L130)

### Sync

Synchronization features like etag propagation, setting mtime and locking files

#### [Uploading an old method chunked file with checksum should fail using new DAV path](https://github.com/owncloud/ocis/issues/2323)

- [coreApiMain/checksums.feature:233](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L233)
- [coreApiMain/checksums.feature:234](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L234)
- [coreApiMain/checksums.feature:235](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L235)

### Share

#### [d:quota-available-bytes in dprop of PROPFIND give wrong response value](https://github.com/owncloud/ocis/issues/8197)

- [coreApiWebdavProperties/getQuota.feature:57](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L57)
- [coreApiWebdavProperties/getQuota.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L58)
- [coreApiWebdavProperties/getQuota.feature:59](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L59)
- [coreApiWebdavProperties/getQuota.feature:73](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L73)
- [coreApiWebdavProperties/getQuota.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L74)
- [coreApiWebdavProperties/getQuota.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L75)

#### [deleting a file inside a received shared folder is moved to the trash-bin of the sharer not the receiver](https://github.com/owncloud/ocis/issues/1124)

- [coreApiTrashbin/trashbinSharingToShares.feature:54](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L54)
- [coreApiTrashbin/trashbinSharingToShares.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L55)
- [coreApiTrashbin/trashbinSharingToShares.feature:56](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L56)
- [coreApiTrashbin/trashbinSharingToShares.feature:83](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L83)
- [coreApiTrashbin/trashbinSharingToShares.feature:84](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L84)
- [coreApiTrashbin/trashbinSharingToShares.feature:85](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L85)
- [coreApiTrashbin/trashbinSharingToShares.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L142)
- [coreApiTrashbin/trashbinSharingToShares.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L143)
- [coreApiTrashbin/trashbinSharingToShares.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L144)
- [coreApiTrashbin/trashbinSharingToShares.feature:202](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L202)
- [coreApiTrashbin/trashbinSharingToShares.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L203)

### Other

API, search, favorites, config, capabilities, not existing endpoints, CORS and others

#### [sending MKCOL requests to another or non-existing user's webDav endpoints as normal user should return 404](https://github.com/owncloud/ocis/issues/5049)

_ocdav: api compatibility, return correct status code_

- [coreApiAuth/webDavMKCOLAuth.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L42)
- [coreApiAuth/webDavMKCOLAuth.feature:53](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L53)

#### [trying to lock file of another user gives http 500](https://github.com/owncloud/ocis/issues/2176)

- [coreApiAuth/webDavLOCKAuth.feature:46](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L46)
- [coreApiAuth/webDavLOCKAuth.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L58)

#### [Support for favorites](https://github.com/owncloud/ocis/issues/1228)

- [coreApiFavorites/favorites.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L101)
- [coreApiFavorites/favorites.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L102)
- [coreApiFavorites/favorites.feature:103](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L103)
- [coreApiFavorites/favorites.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L124)
- [coreApiFavorites/favorites.feature:125](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L125)
- [coreApiFavorites/favorites.feature:126](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L126)
- [coreApiFavorites/favorites.feature:189](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L189)
- [coreApiFavorites/favorites.feature:190](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L190)
- [coreApiFavorites/favorites.feature:191](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L191)
- [coreApiFavorites/favorites.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L145)
- [coreApiFavorites/favorites.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L146)
- [coreApiFavorites/favorites.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L147)
- [coreApiFavorites/favorites.feature:174](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L174)
- [coreApiFavorites/favorites.feature:175](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L175)
- [coreApiFavorites/favorites.feature:176](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L176)
- [coreApiFavorites/favoritesSharingToShares.feature:91](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L91)
- [coreApiFavorites/favoritesSharingToShares.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L92)
- [coreApiFavorites/favoritesSharingToShares.feature:93](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L93)

#### [WWW-Authenticate header for unauthenticated requests is not clear](https://github.com/owncloud/ocis/issues/2285)

- [coreApiWebdavOperations/refuseAccess.feature:21](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L21)
- [coreApiWebdavOperations/refuseAccess.feature:22](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L22)

#### [PATCH request for TUS upload with wrong checksum gives incorrect response](https://github.com/owncloud/ocis/issues/1755)

- [coreApiWebdavUploadTUS/checksums.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L74)
- [coreApiWebdavUploadTUS/checksums.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L75)
- [coreApiWebdavUploadTUS/checksums.feature:76](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L76)
- [coreApiWebdavUploadTUS/checksums.feature:77](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L77)
- [coreApiWebdavUploadTUS/checksums.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L79)
- [coreApiWebdavUploadTUS/checksums.feature:78](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L78)
- [coreApiWebdavUploadTUS/checksums.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L147)
- [coreApiWebdavUploadTUS/checksums.feature:148](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L148)
- [coreApiWebdavUploadTUS/checksums.feature:149](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L149)
- [coreApiWebdavUploadTUS/checksums.feature:192](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L192)
- [coreApiWebdavUploadTUS/checksums.feature:193](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L193)
- [coreApiWebdavUploadTUS/checksums.feature:194](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L194)
- [coreApiWebdavUploadTUS/checksums.feature:195](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L195)
- [coreApiWebdavUploadTUS/checksums.feature:196](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L196)
- [coreApiWebdavUploadTUS/checksums.feature:197](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L197)
- [coreApiWebdavUploadTUS/checksums.feature:240](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L240)
- [coreApiWebdavUploadTUS/checksums.feature:241](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L241)
- [coreApiWebdavUploadTUS/checksums.feature:242](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L242)
- [coreApiWebdavUploadTUS/checksums.feature:243](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L243)
- [coreApiWebdavUploadTUS/checksums.feature:244](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L244)
- [coreApiWebdavUploadTUS/checksums.feature:245](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L245)
- [coreApiWebdavUploadTUS/uploadToShare.feature:255](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L255)
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)

#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

- [coreApiWebdavMove2/moveFile.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L143)
- [coreApiWebdavMove1/moveFolder.feature:36](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L36)
- [coreApiWebdavMove1/moveFolder.feature:50](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L50)
- [coreApiWebdavMove1/moveFolder.feature:64](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L64)

#### [Trying to delete other user's trashbin item returns 409 for spaces path instead of 404](https://github.com/owncloud/ocis/issues/9791)

- [coreApiTrashbin/trashbinDelete.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinDelete.feature#L92)

#### [MOVE a file into same folder with same name returns 404 instead of 403](https://github.com/owncloud/ocis/issues/1976)

- [coreApiWebdavMove2/moveFile.feature:100](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L100)
- [coreApiWebdavMove2/moveFile.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L101)
- [coreApiWebdavMove2/moveFile.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L102)
- [coreApiWebdavMove1/moveFolder.feature:217](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L217)
- [coreApiWebdavMove1/moveFolder.feature:218](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L218)
- [coreApiWebdavMove1/moveFolder.feature:219](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L219)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:334](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L334)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:337](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L337)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:340](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L340)

#### [COPY file/folder to same name is possible (but 500 code error for folder with spaces path)](https://github.com/owncloud/ocis/issues/8711)

- [coreApiSharePublicLink2/copyFromPublicLink.feature:198](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiSharePublicLink2/copyFromPublicLink.feature#L198)
- [coreApiWebdavProperties/copyFile.feature:1094](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1094)
- [coreApiWebdavProperties/copyFile.feature:1095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1095)
- [coreApiWebdavProperties/copyFile.feature:1096](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1096)

#### [Trying to restore personal file to file of share received folder returns 403 but the share file is deleted (new dav path)](https://github.com/owncloud/ocis/issues/10356)

- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)

#### [Preview. UTF characters do not display on prievew](https://github.com/opencloud-eu/opencloud/issues/1451)

- [coreApiWebdavPreviews/previews.feature:249](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L249)
- [coreApiWebdavPreviews/previews.feature:250](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L250)
- [coreApiWebdavPreviews/previews.feature:251](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L251)

#### [Preview of text file truncated](https://github.com/opencloud-eu/opencloud/issues/1452)

- [coreApiWebdavPreviews/previews.feature:263](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L263)
- [coreApiWebdavPreviews/previews.feature:264](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L264)
- [coreApiWebdavPreviews/previews.feature:265](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L265)

### Won't fix

Not everything needs to be implemented for opencloud.

- _Blacklisted ignored files are no longer required because opencloud can handle `.htaccess` files without security implications introduced by serving user provided files with apache._

Note: always have an empty line at the end of this file.
The bash script that processes this file requires that the last line has a newline on the end.
@@ -1,175 +0,0 @@
|
||||
## Scenarios from core API tests that are expected to fail with decomposed storage while running with the Graph API
|
||||
|
||||
### File
|
||||
|
||||
Basic file management like up and download, move, copy, properties, trash, versions and chunking.
|
||||
|
||||
#### [Custom dav properties with namespaces are rendered incorrectly](https://github.com/owncloud/ocis/issues/2140)
|
||||
|
||||
_ocdav: double-check the webdav property parsing when custom namespaces are used_
|
||||
|
||||
- [coreApiWebdavProperties/setFileProperties.feature:128](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L128)
|
||||
- [coreApiWebdavProperties/setFileProperties.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L129)
|
||||
- [coreApiWebdavProperties/setFileProperties.feature:130](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L130)
|
||||
|
||||
### Sync
|
||||
|
||||
Synchronization features like etag propagation, setting mtime and locking files
|
||||
|
||||
#### [Uploading an old method chunked file with checksum should fail using new DAV path](https://github.com/owncloud/ocis/issues/2323)
|
||||
|
||||
- [coreApiMain/checksums.feature:233](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L233)
|
||||
- [coreApiMain/checksums.feature:234](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L234)
|
||||
- [coreApiMain/checksums.feature:235](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L235)
|
||||
|
||||
### Share
|
||||
|
||||
#### [d:quota-available-bytes in dprop of PROPFIND give wrong response value](https://github.com/owncloud/ocis/issues/8197)
|
||||
|
||||
- [coreApiWebdavProperties/getQuota.feature:57](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L57)
|
||||
- [coreApiWebdavProperties/getQuota.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L58)
|
||||
- [coreApiWebdavProperties/getQuota.feature:59](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L59)
|
||||
- [coreApiWebdavProperties/getQuota.feature:73](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L73)
|
||||
- [coreApiWebdavProperties/getQuota.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L74)
|
||||
- [coreApiWebdavProperties/getQuota.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L75)
|
||||
|
||||
#### [deleting a file inside a received shared folder is moved to the trash-bin of the sharer not the receiver](https://github.com/owncloud/ocis/issues/1124)
|
||||
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:54](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L54)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L55)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:56](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L56)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:83](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L83)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:84](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L84)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:85](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L85)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L142)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L143)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L144)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:202](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L202)
|
||||
- [coreApiTrashbin/trashbinSharingToShares.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L203)
|
||||
|
||||
### Other
|
||||
|
||||
API, search, favorites, config, capabilities, not existing endpoints, CORS and others
|
||||
|
||||
#### [sending MKCOL requests to another or non-existing user's webDav endpoints as normal user should return 404](https://github.com/owncloud/ocis/issues/5049)
|
||||
|
||||
_ocdav: api compatibility, return correct status code_
|
||||
|
||||
- [coreApiAuth/webDavMKCOLAuth.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L42)
|
||||
- [coreApiAuth/webDavMKCOLAuth.feature:53](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L53)
|
||||
|
||||
#### [trying to lock file of another user gives http 500](https://github.com/owncloud/ocis/issues/2176)
|
||||
|
||||
- [coreApiAuth/webDavLOCKAuth.feature:46](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L46)
|
||||
- [coreApiAuth/webDavLOCKAuth.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L58)
|
||||
|
||||
#### [Support for favorites](https://github.com/owncloud/ocis/issues/1228)
|
||||
|
||||
- [coreApiFavorites/favorites.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L101)
|
||||
- [coreApiFavorites/favorites.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L102)
|
||||
- [coreApiFavorites/favorites.feature:103](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L103)
|
||||
- [coreApiFavorites/favorites.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L124)
|
||||
- [coreApiFavorites/favorites.feature:125](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L125)
|
||||
- [coreApiFavorites/favorites.feature:126](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L126)
|
||||
- [coreApiFavorites/favorites.feature:189](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L189)
|
||||
- [coreApiFavorites/favorites.feature:190](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L190)
|
||||
- [coreApiFavorites/favorites.feature:191](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L191)
|
||||
- [coreApiFavorites/favorites.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L145)
|
||||
- [coreApiFavorites/favorites.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L146)
|
||||
- [coreApiFavorites/favorites.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L147)
|
||||
- [coreApiFavorites/favorites.feature:174](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L174)
|
||||
- [coreApiFavorites/favorites.feature:175](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L175)
|
||||
- [coreApiFavorites/favorites.feature:176](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L176)
|
||||
- [coreApiFavorites/favoritesSharingToShares.feature:91](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L91)
|
||||
- [coreApiFavorites/favoritesSharingToShares.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L92)
|
||||
- [coreApiFavorites/favoritesSharingToShares.feature:93](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L93)

#### [WWW-Authenticate header for unauthenticated requests is not clear](https://github.com/owncloud/ocis/issues/2285)

- [coreApiWebdavOperations/refuseAccess.feature:21](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L21)
- [coreApiWebdavOperations/refuseAccess.feature:22](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L22)

#### [PATCH request for TUS upload with wrong checksum gives incorrect response](https://github.com/owncloud/ocis/issues/1755)

_(A sketch of the rejected request shape follows this list.)_

- [coreApiWebdavUploadTUS/checksums.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L74)
- [coreApiWebdavUploadTUS/checksums.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L75)
- [coreApiWebdavUploadTUS/checksums.feature:76](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L76)
- [coreApiWebdavUploadTUS/checksums.feature:77](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L77)
- [coreApiWebdavUploadTUS/checksums.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L79)
- [coreApiWebdavUploadTUS/checksums.feature:78](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L78)
- [coreApiWebdavUploadTUS/checksums.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L147)
- [coreApiWebdavUploadTUS/checksums.feature:148](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L148)
- [coreApiWebdavUploadTUS/checksums.feature:149](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L149)
- [coreApiWebdavUploadTUS/checksums.feature:192](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L192)
- [coreApiWebdavUploadTUS/checksums.feature:193](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L193)
- [coreApiWebdavUploadTUS/checksums.feature:194](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L194)
- [coreApiWebdavUploadTUS/checksums.feature:195](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L195)
- [coreApiWebdavUploadTUS/checksums.feature:196](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L196)
- [coreApiWebdavUploadTUS/checksums.feature:197](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L197)
- [coreApiWebdavUploadTUS/checksums.feature:240](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L240)
- [coreApiWebdavUploadTUS/checksums.feature:241](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L241)
- [coreApiWebdavUploadTUS/checksums.feature:242](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L242)
- [coreApiWebdavUploadTUS/checksums.feature:243](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L243)
- [coreApiWebdavUploadTUS/checksums.feature:244](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L244)
- [coreApiWebdavUploadTUS/checksums.feature:245](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L245)
- [coreApiWebdavUploadTUS/uploadToShare.feature:255](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L255)
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)
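
For context on the scenarios above: per the tus protocol's checksum extension, a PATCH whose `Upload-Checksum` header does not match the uploaded bytes should be rejected with `460 Checksum Mismatch`. A rough sketch of such a request; the host, credentials, and upload URL are placeholders, not values from the test suite:

```bash
# Hypothetical upload URL, normally returned by a prior tus creation POST.
UPLOAD_URL="https://localhost:9200/remote.php/dav/files/alice/my-upload"
printf 'hello' > chunk.bin
# The sha1 value below deliberately does not match the body, so a
# spec-compliant server should answer "460 Checksum Mismatch".
curl -k -u alice:secret -X PATCH "$UPLOAD_URL" \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Offset: 0" \
  -H "Content-Type: application/offset+octet-stream" \
  -H "Upload-Checksum: sha1 c2hhMS1taXNtYXRjaA==" \
  --data-binary @chunk.bin
```
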
#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

- [coreApiWebdavMove2/moveFile.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L143)
- [coreApiWebdavMove1/moveFolder.feature:36](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L36)
- [coreApiWebdavMove1/moveFolder.feature:50](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L50)
- [coreApiWebdavMove1/moveFolder.feature:64](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L64)

#### [Trying to delete other user's trashbin item returns 409 for spaces path instead of 404](https://github.com/owncloud/ocis/issues/9791)

- [coreApiTrashbin/trashbinDelete.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinDelete.feature#L92)

#### [MOVE a file into same folder with same name returns 404 instead of 403](https://github.com/owncloud/ocis/issues/1976)

- [coreApiWebdavMove2/moveFile.feature:100](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L100)
- [coreApiWebdavMove2/moveFile.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L101)
- [coreApiWebdavMove2/moveFile.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L102)
- [coreApiWebdavMove1/moveFolder.feature:217](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L217)
- [coreApiWebdavMove1/moveFolder.feature:218](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L218)
- [coreApiWebdavMove1/moveFolder.feature:219](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L219)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:334](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L334)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:337](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L337)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:340](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L340)

#### [COPY file/folder to same name is possible (but 500 code error for folder with spaces path)](https://github.com/owncloud/ocis/issues/8711)

- [coreApiSharePublicLink2/copyFromPublicLink.feature:198](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiSharePublicLink2/copyFromPublicLink.feature#L198)
- [coreApiWebdavProperties/copyFile.feature:1094](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1094)
- [coreApiWebdavProperties/copyFile.feature:1095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1095)
- [coreApiWebdavProperties/copyFile.feature:1096](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1096)

#### [Trying to restore personal file to file of share received folder returns 403 but the share file is deleted (new dav path)](https://github.com/owncloud/ocis/issues/10356)

- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)
#### [Preview. UTF characters do not display on preview](https://github.com/opencloud-eu/opencloud/issues/1451)
- [coreApiWebdavPreviews/previews.feature:249](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L249)
- [coreApiWebdavPreviews/previews.feature:250](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L250)
- [coreApiWebdavPreviews/previews.feature:251](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L251)
#### [Preview of text file truncated](https://github.com/opencloud-eu/opencloud/issues/1452)

- [coreApiWebdavPreviews/previews.feature:263](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L263)
- [coreApiWebdavPreviews/previews.feature:264](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L264)
- [coreApiWebdavPreviews/previews.feature:265](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L265)

### Won't fix

Not everything needs to be implemented for opencloud.

- _Blacklisted ignored files are no longer required because opencloud can handle `.htaccess` files without security implications introduced by serving user provided files with apache._

Note: always have an empty line at the end of this file.
The bash script that processes this file requires that the last line ends with a newline.
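
The trailing-newline requirement is easy to check mechanically. A minimal sketch, assuming a hypothetical file name (the actual processing script is not part of this document):

```bash
# Exit non-zero if the expected-failures file does not end with a newline.
# "tail -c 1" prints the final byte; the command substitution strips a
# trailing newline, so the result is empty exactly when the file is valid.
file="expected-failures.md"   # hypothetical name
if [ -n "$(tail -c 1 "$file")" ]; then
    echo "error: $file must end with a newline" >&2
    exit 1
fi
```
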
@@ -19,8 +19,6 @@

#### [Settings service user can list other people's assignments](https://github.com/owncloud/ocis/issues/5032)

- [apiAccountsHashDifficulty/assignRole.feature:27](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAccountsHashDifficulty/assignRole.feature#L27)
- [apiAccountsHashDifficulty/assignRole.feature:28](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAccountsHashDifficulty/assignRole.feature#L28)
- [apiGraph/getAssignedRole.feature:31](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L31)
- [apiGraph/getAssignedRole.feature:32](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L32)
- [apiGraph/getAssignedRole.feature:33](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L33)

@@ -193,8 +191,8 @@

- [apiServiceAvailability/serviceAvailabilityCheck.feature:123](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiServiceAvailability/serviceAvailabilityCheck.feature#L123)

#### [Skip tests for different languages](https://github.com/opencloud-eu/opencloud/issues/183)

- [apiActivities/activities.feature:2598](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/activities.feature#L2598)

- [apiActivities/activities.feature:2598](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/activities.feature#L2598)

#### [Missing properties in REPORT response](https://github.com/owncloud/ocis/issues/9780), [d:getetag property has empty value in REPORT response](https://github.com/owncloud/ocis/issues/9783)

@@ -205,6 +203,160 @@
- [apiSearch1/search.feature:466](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L466)
- [apiSearch1/search.feature:467](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L467)

## Scenarios from core API tests that are expected to fail with decomposed storage

### File

Basic file management like upload and download, move, copy, properties, trash, versions and chunking.

#### [Custom dav properties with namespaces are rendered incorrectly](https://github.com/owncloud/ocis/issues/2140)

_ocdav: double-check the webdav property parsing when custom namespaces are used_
_(A request sketch follows the scenario list below.)_
- [coreApiWebdavProperties/setFileProperties.feature:128](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L128)
- [coreApiWebdavProperties/setFileProperties.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L129)
- [coreApiWebdavProperties/setFileProperties.feature:130](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L130)
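
A rough sketch of the kind of PROPPATCH these scenarios send, setting a property in a custom XML namespace; the host, credentials, and file path are placeholders:

```bash
# The "x" prefix binds a custom namespace; the linked scenarios check that
# such properties round-trip with the correct namespace in PROPFIND output.
curl -k -u alice:secret -X PROPPATCH \
  https://localhost:9200/remote.php/dav/files/alice/testfile.txt \
  -H "Content-Type: application/xml" \
  -d '<?xml version="1.0"?>
<d:propertyupdate xmlns:d="DAV:" xmlns:x="http://example.org/ns">
  <d:set><d:prop><x:customprop>custom value</x:customprop></d:prop></d:set>
</d:propertyupdate>'
```
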
### Sync

Synchronization features like etag propagation, setting mtime and locking files.

#### [Uploading an old method chunked file with checksum should fail using new DAV path](https://github.com/owncloud/ocis/issues/2323)

- [coreApiMain/checksums.feature:233](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L233)
- [coreApiMain/checksums.feature:234](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L234)
- [coreApiMain/checksums.feature:235](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L235)
### Share

#### [d:quota-available-bytes in dprop of PROPFIND give wrong response value](https://github.com/owncloud/ocis/issues/8197)

- [coreApiWebdavProperties/getQuota.feature:57](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L57)
- [coreApiWebdavProperties/getQuota.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L58)
- [coreApiWebdavProperties/getQuota.feature:59](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L59)
- [coreApiWebdavProperties/getQuota.feature:73](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L73)
- [coreApiWebdavProperties/getQuota.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L74)
- [coreApiWebdavProperties/getQuota.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L75)
#### [deleting a file inside a received shared folder is moved to the trash-bin of the sharer not the receiver](https://github.com/owncloud/ocis/issues/1124)

- [coreApiTrashbin/trashbinSharingToShares.feature:54](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L54)
- [coreApiTrashbin/trashbinSharingToShares.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L55)
- [coreApiTrashbin/trashbinSharingToShares.feature:56](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L56)
- [coreApiTrashbin/trashbinSharingToShares.feature:83](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L83)
- [coreApiTrashbin/trashbinSharingToShares.feature:84](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L84)
- [coreApiTrashbin/trashbinSharingToShares.feature:85](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L85)
- [coreApiTrashbin/trashbinSharingToShares.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L142)
- [coreApiTrashbin/trashbinSharingToShares.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L143)
- [coreApiTrashbin/trashbinSharingToShares.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L144)
- [coreApiTrashbin/trashbinSharingToShares.feature:202](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L202)
- [coreApiTrashbin/trashbinSharingToShares.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L203)
### Other

API, search, favorites, config, capabilities, non-existing endpoints, CORS and others

#### [sending MKCOL requests to another or non-existing user's webDav endpoints as normal user should return 404](https://github.com/owncloud/ocis/issues/5049)

_ocdav: api compatibility, return correct status code_

- [coreApiAuth/webDavMKCOLAuth.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L42)
- [coreApiAuth/webDavMKCOLAuth.feature:53](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L53)

#### [trying to lock file of another user gives http 500](https://github.com/owncloud/ocis/issues/2176)

- [coreApiAuth/webDavLOCKAuth.feature:46](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L46)
- [coreApiAuth/webDavLOCKAuth.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L58)
#### [Support for favorites](https://github.com/owncloud/ocis/issues/1228)

- [coreApiFavorites/favorites.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L101)
- [coreApiFavorites/favorites.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L102)
- [coreApiFavorites/favorites.feature:103](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L103)
- [coreApiFavorites/favorites.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L124)
- [coreApiFavorites/favorites.feature:125](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L125)
- [coreApiFavorites/favorites.feature:126](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L126)
- [coreApiFavorites/favorites.feature:189](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L189)
- [coreApiFavorites/favorites.feature:190](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L190)
- [coreApiFavorites/favorites.feature:191](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L191)
- [coreApiFavorites/favorites.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L145)
- [coreApiFavorites/favorites.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L146)
- [coreApiFavorites/favorites.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L147)
- [coreApiFavorites/favorites.feature:174](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L174)
- [coreApiFavorites/favorites.feature:175](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L175)
- [coreApiFavorites/favorites.feature:176](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L176)
- [coreApiFavorites/favoritesSharingToShares.feature:91](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L91)
- [coreApiFavorites/favoritesSharingToShares.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L92)
- [coreApiFavorites/favoritesSharingToShares.feature:93](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L93)
#### [WWW-Authenticate header for unauthenticated requests is not clear](https://github.com/owncloud/ocis/issues/2285)

- [coreApiWebdavOperations/refuseAccess.feature:21](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L21)
- [coreApiWebdavOperations/refuseAccess.feature:22](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L22)

#### [PATCH request for TUS upload with wrong checksum gives incorrect response](https://github.com/owncloud/ocis/issues/1755)

- [coreApiWebdavUploadTUS/checksums.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L74)
- [coreApiWebdavUploadTUS/checksums.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L75)
- [coreApiWebdavUploadTUS/checksums.feature:76](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L76)
- [coreApiWebdavUploadTUS/checksums.feature:77](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L77)
- [coreApiWebdavUploadTUS/checksums.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L79)
- [coreApiWebdavUploadTUS/checksums.feature:78](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L78)
- [coreApiWebdavUploadTUS/checksums.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L147)
- [coreApiWebdavUploadTUS/checksums.feature:148](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L148)
- [coreApiWebdavUploadTUS/checksums.feature:149](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L149)
- [coreApiWebdavUploadTUS/checksums.feature:192](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L192)
- [coreApiWebdavUploadTUS/checksums.feature:193](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L193)
- [coreApiWebdavUploadTUS/checksums.feature:194](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L194)
- [coreApiWebdavUploadTUS/checksums.feature:195](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L195)
- [coreApiWebdavUploadTUS/checksums.feature:196](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L196)
- [coreApiWebdavUploadTUS/checksums.feature:197](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L197)
- [coreApiWebdavUploadTUS/checksums.feature:240](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L240)
- [coreApiWebdavUploadTUS/checksums.feature:241](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L241)
- [coreApiWebdavUploadTUS/checksums.feature:242](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L242)
- [coreApiWebdavUploadTUS/checksums.feature:243](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L243)
- [coreApiWebdavUploadTUS/checksums.feature:244](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L244)
- [coreApiWebdavUploadTUS/checksums.feature:245](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L245)
- [coreApiWebdavUploadTUS/uploadToShare.feature:255](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L255)
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)

#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

- [coreApiWebdavMove2/moveFile.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L143)
- [coreApiWebdavMove1/moveFolder.feature:36](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L36)
- [coreApiWebdavMove1/moveFolder.feature:50](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L50)
- [coreApiWebdavMove1/moveFolder.feature:64](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L64)

#### [Trying to delete other user's trashbin item returns 409 for spaces path instead of 404](https://github.com/owncloud/ocis/issues/9791)

- [coreApiTrashbin/trashbinDelete.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinDelete.feature#L92)

#### [MOVE a file into same folder with same name returns 404 instead of 403](https://github.com/owncloud/ocis/issues/1976)

- [coreApiWebdavMove2/moveFile.feature:100](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L100)
- [coreApiWebdavMove2/moveFile.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L101)
- [coreApiWebdavMove2/moveFile.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L102)
- [coreApiWebdavMove1/moveFolder.feature:217](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L217)
- [coreApiWebdavMove1/moveFolder.feature:218](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L218)
- [coreApiWebdavMove1/moveFolder.feature:219](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L219)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:334](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L334)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:337](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L337)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:340](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L340)

#### [COPY file/folder to same name is possible (but 500 code error for folder with spaces path)](https://github.com/owncloud/ocis/issues/8711)

- [coreApiSharePublicLink2/copyFromPublicLink.feature:198](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiSharePublicLink2/copyFromPublicLink.feature#L198)
- [coreApiWebdavProperties/copyFile.feature:1094](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1094)
- [coreApiWebdavProperties/copyFile.feature:1095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1095)
- [coreApiWebdavProperties/copyFile.feature:1096](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1096)

#### [Trying to restore personal file to file of share received folder returns 403 but the share file is deleted (new dav path)](https://github.com/owncloud/ocis/issues/10356)

- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)
#### [Preview. UTF characters do not display on preview](https://github.com/opencloud-eu/opencloud/issues/1451)
@@ -218,5 +370,11 @@

- [coreApiWebdavPreviews/previews.feature:264](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L264)
- [coreApiWebdavPreviews/previews.feature:265](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L265)

### Won't fix

Not everything needs to be implemented for opencloud.

- _Blacklisted ignored files are no longer required because opencloud can handle `.htaccess` files without security implications introduced by serving user provided files with apache._

Note: always have an empty line at the end of this file.
The bash script that processes this file requires that the last line ends with a newline.
@@ -1,4 +1,4 @@
## Scenarios from OpenCloud API tests that are expected to fail with decomposed storage
## Scenarios from OpenCloud API tests that are expected to fail with posix storage

#### [Downloading the archive of the resource (files | folder) using resource path is not possible](https://github.com/owncloud/ocis/issues/4637)
@@ -19,8 +19,6 @@

#### [Settings service user can list other people's assignments](https://github.com/owncloud/ocis/issues/5032)

- [apiAccountsHashDifficulty/assignRole.feature:27](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAccountsHashDifficulty/assignRole.feature#L27)
- [apiAccountsHashDifficulty/assignRole.feature:28](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAccountsHashDifficulty/assignRole.feature#L28)
- [apiGraph/getAssignedRole.feature:31](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L31)
- [apiGraph/getAssignedRole.feature:32](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L32)
- [apiGraph/getAssignedRole.feature:33](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiGraph/getAssignedRole.feature#L33)

@@ -193,8 +191,8 @@

- [apiServiceAvailability/serviceAvailabilityCheck.feature:123](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiServiceAvailability/serviceAvailabilityCheck.feature#L123)

#### [Skip tests for different languages](https://github.com/opencloud-eu/opencloud/issues/183)

- [apiActivities/activities.feature:2598](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/activities.feature#L2598)

- [apiActivities/activities.feature:2598](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiActivities/activities.feature#L2598)

#### [Missing properties in REPORT response](https://github.com/owncloud/ocis/issues/9780), [d:getetag property has empty value in REPORT response](https://github.com/owncloud/ocis/issues/9783)

@@ -205,5 +203,178 @@
- [apiSearch1/search.feature:466](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L466)
- [apiSearch1/search.feature:467](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiSearch1/search.feature#L467)

## Scenarios from core API tests that are expected to fail with posix storage

### File

Basic file management like upload and download, move, copy, properties, trash, versions and chunking.

#### [Custom dav properties with namespaces are rendered incorrectly](https://github.com/owncloud/ocis/issues/2140)

_ocdav: double-check the webdav property parsing when custom namespaces are used_
- [coreApiWebdavProperties/setFileProperties.feature:128](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L128)
- [coreApiWebdavProperties/setFileProperties.feature:129](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L129)
- [coreApiWebdavProperties/setFileProperties.feature:130](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/setFileProperties.feature#L130)

### Sync

Synchronization features like etag propagation, setting mtime and locking files.

#### [Uploading an old method chunked file with checksum should fail using new DAV path](https://github.com/owncloud/ocis/issues/2323)

- [coreApiMain/checksums.feature:233](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L233)
- [coreApiMain/checksums.feature:234](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L234)
- [coreApiMain/checksums.feature:235](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiMain/checksums.feature#L235)
### Share

#### [d:quota-available-bytes in dprop of PROPFIND give wrong response value](https://github.com/owncloud/ocis/issues/8197)

- [coreApiWebdavProperties/getQuota.feature:57](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L57)
- [coreApiWebdavProperties/getQuota.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L58)
- [coreApiWebdavProperties/getQuota.feature:59](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L59)
- [coreApiWebdavProperties/getQuota.feature:73](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L73)
- [coreApiWebdavProperties/getQuota.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L74)
- [coreApiWebdavProperties/getQuota.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/getQuota.feature#L75)
#### [deleting a file inside a received shared folder is moved to the trash-bin of the sharer not the receiver](https://github.com/owncloud/ocis/issues/1124)

- [coreApiTrashbin/trashbinSharingToShares.feature:54](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L54)
- [coreApiTrashbin/trashbinSharingToShares.feature:55](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L55)
- [coreApiTrashbin/trashbinSharingToShares.feature:56](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L56)
- [coreApiTrashbin/trashbinSharingToShares.feature:83](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L83)
- [coreApiTrashbin/trashbinSharingToShares.feature:84](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L84)
- [coreApiTrashbin/trashbinSharingToShares.feature:85](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L85)
- [coreApiTrashbin/trashbinSharingToShares.feature:142](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L142)
- [coreApiTrashbin/trashbinSharingToShares.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L143)
- [coreApiTrashbin/trashbinSharingToShares.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L144)
- [coreApiTrashbin/trashbinSharingToShares.feature:202](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L202)
- [coreApiTrashbin/trashbinSharingToShares.feature:203](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L203)
### Other

API, search, favorites, config, capabilities, non-existing endpoints, CORS and others

#### [sending MKCOL requests to another or non-existing user's webDav endpoints as normal user should return 404](https://github.com/owncloud/ocis/issues/5049)

_ocdav: api compatibility, return correct status code_

- [coreApiAuth/webDavMKCOLAuth.feature:42](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L42)
- [coreApiAuth/webDavMKCOLAuth.feature:53](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavMKCOLAuth.feature#L53)

#### [trying to lock file of another user gives http 500](https://github.com/owncloud/ocis/issues/2176)

- [coreApiAuth/webDavLOCKAuth.feature:46](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L46)
- [coreApiAuth/webDavLOCKAuth.feature:58](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiAuth/webDavLOCKAuth.feature#L58)
#### [Support for favorites](https://github.com/owncloud/ocis/issues/1228)

- [coreApiFavorites/favorites.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L101)
- [coreApiFavorites/favorites.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L102)
- [coreApiFavorites/favorites.feature:103](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L103)
- [coreApiFavorites/favorites.feature:124](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L124)
- [coreApiFavorites/favorites.feature:125](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L125)
- [coreApiFavorites/favorites.feature:126](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L126)
- [coreApiFavorites/favorites.feature:189](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L189)
- [coreApiFavorites/favorites.feature:190](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L190)
- [coreApiFavorites/favorites.feature:191](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L191)
- [coreApiFavorites/favorites.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L145)
- [coreApiFavorites/favorites.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L146)
- [coreApiFavorites/favorites.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L147)
- [coreApiFavorites/favorites.feature:174](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L174)
- [coreApiFavorites/favorites.feature:175](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L175)
- [coreApiFavorites/favorites.feature:176](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favorites.feature#L176)
- [coreApiFavorites/favoritesSharingToShares.feature:91](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L91)
- [coreApiFavorites/favoritesSharingToShares.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L92)
- [coreApiFavorites/favoritesSharingToShares.feature:93](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiFavorites/favoritesSharingToShares.feature#L93)
#### [WWW-Authenticate header for unauthenticated requests is not clear](https://github.com/owncloud/ocis/issues/2285)

- [coreApiWebdavOperations/refuseAccess.feature:21](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L21)
- [coreApiWebdavOperations/refuseAccess.feature:22](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavOperations/refuseAccess.feature#L22)

#### [PATCH request for TUS upload with wrong checksum gives incorrect response](https://github.com/owncloud/ocis/issues/1755)

- [coreApiWebdavUploadTUS/checksums.feature:74](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L74)
- [coreApiWebdavUploadTUS/checksums.feature:75](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L75)
- [coreApiWebdavUploadTUS/checksums.feature:76](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L76)
- [coreApiWebdavUploadTUS/checksums.feature:77](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L77)
- [coreApiWebdavUploadTUS/checksums.feature:79](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L79)
- [coreApiWebdavUploadTUS/checksums.feature:78](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L78)
- [coreApiWebdavUploadTUS/checksums.feature:147](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L147)
- [coreApiWebdavUploadTUS/checksums.feature:148](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L148)
- [coreApiWebdavUploadTUS/checksums.feature:149](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L149)
- [coreApiWebdavUploadTUS/checksums.feature:192](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L192)
- [coreApiWebdavUploadTUS/checksums.feature:193](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L193)
- [coreApiWebdavUploadTUS/checksums.feature:194](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L194)
- [coreApiWebdavUploadTUS/checksums.feature:195](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L195)
- [coreApiWebdavUploadTUS/checksums.feature:196](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L196)
- [coreApiWebdavUploadTUS/checksums.feature:197](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L197)
- [coreApiWebdavUploadTUS/checksums.feature:240](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L240)
- [coreApiWebdavUploadTUS/checksums.feature:241](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L241)
- [coreApiWebdavUploadTUS/checksums.feature:242](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L242)
- [coreApiWebdavUploadTUS/checksums.feature:243](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L243)
- [coreApiWebdavUploadTUS/checksums.feature:244](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L244)
- [coreApiWebdavUploadTUS/checksums.feature:245](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/checksums.feature#L245)
- [coreApiWebdavUploadTUS/uploadToShare.feature:255](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L255)
- [coreApiWebdavUploadTUS/uploadToShare.feature:256](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L256)
- [coreApiWebdavUploadTUS/uploadToShare.feature:279](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L279)
- [coreApiWebdavUploadTUS/uploadToShare.feature:280](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L280)
- [coreApiWebdavUploadTUS/uploadToShare.feature:376](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L376)
- [coreApiWebdavUploadTUS/uploadToShare.feature:377](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadToShare.feature#L377)

#### [Renaming resource to banned name is allowed in spaces webdav](https://github.com/owncloud/ocis/issues/3099)

- [coreApiWebdavMove2/moveFile.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L143)
- [coreApiWebdavMove1/moveFolder.feature:36](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L36)
- [coreApiWebdavMove1/moveFolder.feature:50](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L50)
- [coreApiWebdavMove1/moveFolder.feature:64](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L64)

#### [Trying to delete other user's trashbin item returns 409 for spaces path instead of 404](https://github.com/owncloud/ocis/issues/9791)

- [coreApiTrashbin/trashbinDelete.feature:92](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinDelete.feature#L92)

#### [MOVE a file into same folder with same name returns 404 instead of 403](https://github.com/owncloud/ocis/issues/1976)

- [coreApiWebdavMove2/moveFile.feature:100](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L100)
- [coreApiWebdavMove2/moveFile.feature:101](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L101)
- [coreApiWebdavMove2/moveFile.feature:102](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveFile.feature#L102)
- [coreApiWebdavMove1/moveFolder.feature:217](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L217)
- [coreApiWebdavMove1/moveFolder.feature:218](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L218)
- [coreApiWebdavMove1/moveFolder.feature:219](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove1/moveFolder.feature#L219)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:334](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L334)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:337](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L337)
- [coreApiWebdavMove2/moveShareOnOpencloud.feature:340](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavMove2/moveShareOnOpencloud.feature#L340)

#### [COPY file/folder to same name is possible (but 500 code error for folder with spaces path)](https://github.com/owncloud/ocis/issues/8711)

- [coreApiSharePublicLink2/copyFromPublicLink.feature:198](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiSharePublicLink2/copyFromPublicLink.feature#L198)
- [coreApiWebdavProperties/copyFile.feature:1094](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1094)
- [coreApiWebdavProperties/copyFile.feature:1095](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1095)
- [coreApiWebdavProperties/copyFile.feature:1096](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavProperties/copyFile.feature#L1096)

#### [Trying to restore personal file to file of share received folder returns 403 but the share file is deleted (new dav path)](https://github.com/owncloud/ocis/issues/10356)

- [coreApiTrashbin/trashbinSharingToShares.feature:277](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiTrashbin/trashbinSharingToShares.feature#L277)
#### [Preview. UTF characters do not display on preview](https://github.com/opencloud-eu/opencloud/issues/1451)
- [coreApiWebdavPreviews/previews.feature:249](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L249)
- [coreApiWebdavPreviews/previews.feature:250](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L250)
- [coreApiWebdavPreviews/previews.feature:251](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L251)

#### [Preview of text file truncated](https://github.com/opencloud-eu/opencloud/issues/1452)

- [coreApiWebdavPreviews/previews.feature:263](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L263)
- [coreApiWebdavPreviews/previews.feature:264](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L264)
- [coreApiWebdavPreviews/previews.feature:265](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavPreviews/previews.feature#L265)

### Won't fix

Not everything needs to be implemented for opencloud.

- _Blacklisted ignored files are no longer required because opencloud can handle `.htaccess` files without security implications introduced by serving user provided files with apache._

Note: always have an empty line at the end of this file.
The bash script that processes this file requires that the last line ends with a newline.
@@ -201,9 +201,9 @@

- [apiAntivirus/antivirus.feature:143](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L143)
- [apiAntivirus/antivirus.feature:144](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L144)
- [apiAntivirus/antivirus.feature:145](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L145)
- [apiAntivirus/antivirus.feature:356](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L356)
- [apiAntivirus/antivirus.feature:357](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L357)
- [apiAntivirus/antivirus.feature:358](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L358)
- [apiAntivirus/antivirus.feature:359](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiAntivirus/antivirus.feature#L359)
- [apiCollaboration/wopi.feature:956](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiCollaboration/wopi.feature#L956)
- [apiCollaboration/wopi.feature:957](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiCollaboration/wopi.feature#L957)
- [apiCollaboration/wopi.feature:958](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/apiCollaboration/wopi.feature#L958)

@@ -320,7 +320,6 @@

- [coreApiWebdavUploadTUS/uploadFile.feature:122](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L122)
- [coreApiWebdavUploadTUS/uploadFile.feature:133](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L133)
- [coreApiWebdavUploadTUS/uploadFile.feature:146](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L146)
- [coreApiWebdavUploadTUS/uploadFile.feature:168](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L168)
- [coreApiWebdavUploadTUS/uploadFile.feature:187](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L187)
- [coreApiWebdavUploadTUS/uploadFile.feature:199](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L199)
- [coreApiWebdavUploadTUS/uploadFile.feature:212](https://github.com/opencloud-eu/opencloud/blob/main/tests/acceptance/features/coreApiWebdavUploadTUS/uploadFile.feature#L212)
@@ -1,16 +0,0 @@

@skipOnReva
Feature: add user
  As an admin
  I want to be able to add users and store their password with the full hash difficulty
  So that I can give people controlled individual access to resources on the OpenCloud server


  Scenario: admin creates a user
    When the user "Admin" creates a new user with the following attributes using the Graph API:
      | userName    | brand-new-user  |
      | displayName | Brand New User  |
      | email       | new@example.org |
      | password    | %alt1%          |
    Then the HTTP status code should be "201"
    And user "brand-new-user" should exist
    And user "brand-new-user" should be able to upload file "filesForUpload/lorem.txt" to "lorem.txt"
@@ -1,64 +0,0 @@

Feature: assign role
  As an admin,
  I want to assign roles to users
  So that I can provide them different authority


  Scenario Outline: only admin user can see all existing roles
    Given user "Alice" has been created with default attributes
    And the administrator has given "Alice" the role "<user-role>" using the settings api
    When user "Alice" tries to get all existing roles using the settings API
    Then the HTTP status code should be "<http-status-code>"
    Examples:
      | user-role   | http-status-code |
      | Admin       | 201              |
      | Space Admin | 201              |
      | User        | 201              |

  @issue-5032
  Scenario Outline: only admin user can see assignments list
    Given user "Alice" has been created with default attributes
    And the administrator has given "Alice" the role "<user-role>" using the settings api
    When user "Alice" tries to get list of assignment using the settings API
    Then the HTTP status code should be "<http-status-code>"
    Examples:
      | user-role   | http-status-code |
      | Admin       | 201              |
      | Space Admin | 401              |
      | User        | 401              |


  Scenario Outline: a user cannot change own role
    Given user "Alice" has been created with default attributes
    And the administrator has given "Alice" the role "<user-role>" using the settings api
    When user "Alice" changes his own role to "<desired-role>"
    Then the HTTP status code should be "400"
    And user "Alice" should have the role "<user-role>"
    Examples:
      | user-role   | desired-role |
      | Admin       | User         |
      | Admin       | Space Admin  |
      | Space Admin | Admin        |
      | Space Admin | Space Admin  |
      | User        | Admin        |
      | User        | Space Admin  |


  Scenario Outline: only admin user can change the role for another user
    Given these users have been created with default attributes:
      | username |
      | Alice    |
      | Brian    |
    And the administrator has given "Alice" the role "<user-role>" using the settings api
    When user "Alice" changes the role "<desired-role>" for user "Brian"
    Then the HTTP status code should be "<http-status-code>"
    And user "Brian" should have the role "<expected-role>"
    Examples:
      | user-role   | desired-role | http-status-code | expected-role |
      | Admin       | User         | 201              | User          |
      | Admin       | Space Admin  | 201              | Space Admin   |
      | Admin       | Admin        | 201              | Admin         |
      | Space Admin | Admin        | 400              | User          |
      | Space Admin | Space Admin  | 400              | User          |
      | User        | Admin        | 400              | User          |
      | User        | Space Admin  | 400              | User          |
@@ -1,18 +0,0 @@

@skipOnReva
Feature: sharing
  As a user
  I want to be able to share files when passwords are stored with the full hash difficulty
  So that I can give people secure controlled access to my data


  Scenario Outline: creating a share of a file with a user
    Given using OCS API version "<ocs-api-version>"
    And user "Alice" has been created with default attributes
    And user "Alice" has uploaded file with content "OpenCloud test text file 0" to "/textfile0.txt"
    And user "Brian" has been created with default attributes
    When user "Alice" shares file "textfile0.txt" with user "Brian" using the sharing API
    And the content of file "/Shares/textfile0.txt" for user "Brian" should be "OpenCloud test text file 0"
    Examples:
      | ocs-api-version |
      | 1               |
      | 2               |
@@ -1,21 +0,0 @@

@skipOnReva
Feature: upload file
  As a user
  I want to be able to upload files when passwords are stored with the full hash difficulty
  So that I can store and share files securely between multiple client systems


  Scenario Outline: upload a file and check download content
    Given using OCS API version "<ocs-api-version>"
    And user "Alice" has been created with default attributes
    And using <dav-path-version> DAV path
    When user "Alice" uploads file with content "uploaded content" to "/upload.txt" using the WebDAV API
    Then the content of file "/upload.txt" for user "Alice" should be "uploaded content"
    Examples:
      | ocs-api-version | dav-path-version |
      | 1               | old              |
      | 1               | new              |
      | 1               | spaces           |
      | 2               | old              |
      | 2               | new              |
      | 2               | spaces           |
@@ -1,31 +0,0 @@

@skipOnReva
Feature: attempt to PUT files with invalid password
  As an admin
  I want the system to be secure when passwords are stored with the full hash difficulty
  So that unauthorised users do not have access to data

  Background:
    Given user "Alice" has been created with default attributes
    And user "Alice" has created folder "/PARENT"


  Scenario: send PUT requests to webDav endpoints as normal user with wrong password
    When user "Alice" requests these endpoints with "PUT" including body "doesnotmatter" using password "invalid" about user "Alice"
      | endpoint                                |
      | /webdav/textfile0.txt                   |
      | /dav/files/%username%/textfile0.txt     |
      | /webdav/PARENT                          |
      | /dav/files/%username%/PARENT            |
      | /dav/files/%username%/PARENT/parent.txt |
    Then the HTTP status code of responses on all endpoints should be "401"


  Scenario: send PUT requests to webDav endpoints as normal user with no password
    When user "Alice" requests these endpoints with "PUT" including body "doesnotmatter" using password "" about user "Alice"
      | endpoint                                |
      | /webdav/textfile0.txt                   |
      | /dav/files/%username%/textfile0.txt     |
      | /webdav/PARENT                          |
      | /dav/files/%username%/PARENT            |
      | /dav/files/%username%/PARENT/parent.txt |
    Then the HTTP status code of responses on all endpoints should be "401"
@@ -21,7 +21,7 @@ Feature: List upload sessions via CLI command

    And the CLI response should not contain these entries:
      | file0.txt |


  @antivirus
  Scenario: list all upload sessions that are currently in postprocessing
    Given the following configs have been set:
      | config | value |

@@ -39,7 +39,7 @@ Feature: List upload sessions via CLI command

    And the CLI response should not contain these entries:
      | virusFile.txt |


  @antivirus
  Scenario: list all upload sessions that are infected by virus
    Given the following configs have been set:
      | config | value |

@@ -109,7 +109,7 @@ Feature: List upload sessions via CLI command

    And the CLI response should not contain these entries:
      | file2.txt |


  @antivirus
  Scenario: clean all upload sessions that are not in post-processing
    Given the following configs have been set:
      | config | value |

@@ -126,7 +126,7 @@ Feature: List upload sessions via CLI command

    And the CLI response should not contain these entries:
      | file1.txt |


  @antivirus
  Scenario: clean upload sessions that are not in post-processing and is not virus infected
    Given the following configs have been set:
      | config | value |
@@ -1,19 +0,0 @@

#!/bin/bash
# tests/acceptance/scripts/generate-virus-files.sh

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TARGET_DIR="$SCRIPT_DIR/../filesForUpload/filesWithVirus"

echo "Generating EICAR test files..."

mkdir -p "$TARGET_DIR"

cd "$TARGET_DIR"

echo "Downloading eicar.com..."
curl -s -o eicar.com https://secure.eicar.org/eicar.com

echo "Downloading eicar_com.zip..."
curl -s -o eicar_com.zip https://secure.eicar.org/eicar_com.zip
@@ -1,4 +1,32 @@

#!/usr/bin/env bash

# Terminal colors
if [ -n "${PLAIN_OUTPUT}" ]; then
  # No colors
  TC_GREEN=""
  TC_RED=""
  TC_CYAN=""
  TC_RESET=""
else
  # Colors
  TC_GREEN="\033[32m"
  TC_RED="\033[31m"
  TC_CYAN="\033[36m"
  TC_RESET="\033[0m"
fi

function log_failed(){
  printf "${TC_RED}[FAILED] %s\n${TC_RESET}" "$1"
}

function log_info(){
  printf "${TC_CYAN}[INFO] %s\n${TC_RESET}" "$1"
}

function log_error(){
  printf "${TC_RED}[ERROR] %s\n${TC_RESET}" "$1"
}

[[ "${DEBUG}" == "true" ]] && set -x

# from http://stackoverflow.com/a/630387

@@ -31,15 +59,8 @@ if [ -n "${PLAIN_OUTPUT}" ]
then
  # explicitly tell Behat to not do colored output
  COLORS_OPTION="--no-colors"
  # Use the Bash "null" command to do nothing, rather than use tput to set a color
  RED_COLOR=":"
  GREEN_COLOR=":"
  YELLOW_COLOR=":"
else
  COLORS_OPTION="--colors"
  RED_COLOR="tput setaf 1"
  GREEN_COLOR="tput setaf 2"
  YELLOW_COLOR="tput setaf 3"
fi

# The following environment variables can be specified:

@@ -61,7 +82,7 @@ then
  LINT_STATUS=$?
  if [ ${LINT_STATUS} -ne 0 ]
  then
    echo "Error: expected failures file ${EXPECTED_FAILURES_FILE} is invalid"
    log_error "Expected failures file ${EXPECTED_FAILURES_FILE} is invalid"
    exit ${LINT_STATUS}
  fi
fi

@@ -224,7 +245,7 @@ function run_behat_tests() {
  # So exit the tests and do not lint expected failures when undefined steps exist.
  if [[ ${SCENARIO_RESULTS} == *"undefined"* ]]
  then
    ${RED_COLOR}; echo -e "Undefined steps: There were some undefined steps found."
    log_error "Undefined steps: There were some undefined steps found."
    exit 1
  fi
  # If there were no scenarios in the requested suite or feature that match

@@ -237,7 +258,7 @@ function run_behat_tests() {
  MATCHING_COUNT=`grep -ca '^No scenarios$' ${TEST_LOG_FILE}`
  if [ ${MATCHING_COUNT} -eq 1 ]
  then
    echo "Information: no matching scenarios were found."
    log_info "No matching scenarios were found."
    BEHAT_EXIT_STATUS=0
  else
    # Find the count of scenarios that passed and failed

@@ -280,9 +301,9 @@ function run_behat_tests() {
  then
    if [ -n "${BEHAT_SUITE_TO_RUN}" ]
    then
      echo "Checking expected failures for suite ${BEHAT_SUITE_TO_RUN}"
      log_info "Checking expected failures for suite: ${BEHAT_SUITE_TO_RUN}"
    else
      echo "Checking expected failures"
      log_info "Checking expected failures..."
    fi

    # Check that every failed scenario is in the list of expected failures

@@ -295,7 +316,7 @@ function run_behat_tests() {
    grep "\[${SUITE_SCENARIO}\]" "${EXPECTED_FAILURES_FILE}" > /dev/null
    if [ $? -ne 0 ]
    then
      echo "Error: Scenario ${SUITE_SCENARIO} failed but was not expected to fail."
      log_error "Scenario ${SUITE_SCENARIO} failed but was not expected to fail."
      UNEXPECTED_FAILED_SCENARIOS+=("${SUITE_SCENARIO}")
    fi
  done

@@ -336,7 +357,7 @@ function run_behat_tests() {
    echo "${FAILED_SCENARIO_PATHS}" | grep ${SUITE_SCENARIO}$ > /dev/null
    if [ $? -ne 0 ]
    then
      echo "Info: Scenario ${SUITE_SCENARIO} was expected to fail but did not fail."
      log_error "Scenario ${SUITE_SCENARIO} was expected to fail but did not fail."
      UNEXPECTED_PASSED_SCENARIOS+=("${SUITE_SCENARIO}")
    fi
  done < ${EXPECTED_FAILURES_FILE}

@@ -373,7 +394,7 @@ function run_behat_tests() {
    :
  else
    echo ""
    echo "The following tests were skipped because they are tagged @skip:"
    log_info "The following tests were skipped because they are tagged @skip:"
    cat "${DRY_RUN_FILE}" | tee -a ${TEST_LOG_FILE}
  fi
  rm -f "${DRY_RUN_FILE}"

@@ -526,7 +547,6 @@ fi

export IPV4_URL
export IPV6_URL
export FILES_FOR_UPLOAD="${SCRIPT_PATH}/filesForUpload/"

TEST_LOG_FILE=$(mktemp)
SCENARIOS_THAT_PASSED=0

@@ -568,10 +588,6 @@ for i in "${!BEHAT_SUITES[@]}"
  done
done

TOTAL_SCENARIOS=$((SCENARIOS_THAT_PASSED + SCENARIOS_THAT_FAILED))

echo "runsh: Total ${TOTAL_SCENARIOS} scenarios (${SCENARIOS_THAT_PASSED} passed, ${SCENARIOS_THAT_FAILED} failed)"

# 3 types of things can have gone wrong:
#   - some scenario failed (and it was not expected to fail)
#   - some scenario passed (but it was expected to fail)

@@ -643,37 +659,42 @@ fi

if [ -n "${EXPECTED_FAILURES_FILE}" ]
then
  echo "runsh: Exit code after checking expected failures: ${FINAL_EXIT_STATUS}"
  log_info "Exit code after checking expected failures: ${FINAL_EXIT_STATUS}"
fi

if [ "${UNEXPECTED_FAILURE}" = true ]
then
  ${YELLOW_COLOR}; echo "runsh: Total unexpected failed scenarios throughout the test run:"
  ${RED_COLOR}; printf "%s\n" "${UNEXPECTED_FAILED_SCENARIOS[@]}"
  log_failed "Total unexpected failed scenarios:"
  printf "${TC_RED}- %s\n${TC_RESET}" "${UNEXPECTED_FAILED_SCENARIOS[@]}"
  echo ""
else
  ${GREEN_COLOR}; echo "runsh: There were no unexpected failures."
  log_info "There were no unexpected failures."
fi

if [ "${UNEXPECTED_SUCCESS}" = true ]
then
  ${YELLOW_COLOR}; echo "runsh: Total unexpected passed scenarios throughout the test run:"
  ${RED_COLOR}; printf "%s\n" "${ACTUAL_UNEXPECTED_PASS[@]}"
  log_error "Total unexpected passed scenarios:"
  printf "${TC_GREEN}- %s\n${TC_RESET}" "${ACTUAL_UNEXPECTED_PASS[@]}"
  echo ""
else
  ${GREEN_COLOR}; echo "runsh: There were no unexpected success."
  log_info "There were no unexpected success."
fi

if [ "${UNEXPECTED_BEHAT_EXIT_STATUS}" = true ]
then
  ${YELLOW_COLOR}; echo "runsh: The following Behat test runs exited with non-zero status:"
  ${RED_COLOR}; printf "%s\n" "${UNEXPECTED_BEHAT_EXIT_STATUSES[@]}"
  log_error "The following Behat test runs exited with non-zero status:"
  printf "${TC_RED}%s\n${TC_RESET}" "${UNEXPECTED_BEHAT_EXIT_STATUSES[@]}"
fi

TOTAL_SCENARIOS=$((SCENARIOS_THAT_PASSED + SCENARIOS_THAT_FAILED))
printf "Summary: %s scenarios (${TC_GREEN}%s passed${TC_RESET}, ${TC_RED}%s failed${TC_RESET})" "${TOTAL_SCENARIOS}" "${SCENARIOS_THAT_PASSED}" "${SCENARIOS_THAT_FAILED}"
echo ""

# # sync the file-system so all output will be flushed to storage.
# # In drone we sometimes see that the last lines of output are missing from the
# # drone log.
# # In CI, we sometimes see that the last lines of output are missing.
# sync

# # If we are running in drone CI, then sleep for a bit to (hopefully) let the
# # If we are running in CI, then sleep for a bit to (hopefully) let the
# # drone agent send all the output to the drone server.
# if [ -n "${CI_REPO}" ]
# then
@@ -1,7 +1,7 @@

#!/bin/bash

# Set required environment variables
export LOCAL_TEST=true
export START_TIKA=true
export START_EMAIL=true
export WITH_WRAPPER=true
export STORAGE_DRIVER=${STORAGE_DRIVER:-posix}

@@ -10,11 +10,15 @@ export TEST_ROOT_PATH="/drone/src/tests"

# LOCAL TEST WITHOUT EXTRA ENVS
TEST_SERVER_URL="https://opencloud-server:9200"
OC_WRAPPER_URL="http://opencloud-server:5200"
EXPECTED_FAILURES_FILE="tests/acceptance/expected-failures-localAPI-on-decomposed-storage.md"
EXPECTED_FAILURES_FILE_FROM_CORE="tests/acceptance/expected-failures-API-on-decomposed-storage.md"

if [ "$STORAGE_DRIVER" = "posix" ]; then
  EXPECTED_FAILURES_FILE="tests/acceptance/expected-failures-posix-storage.md"
else
  EXPECTED_FAILURES_FILE="tests/acceptance/expected-failures-decomposed-storage.md"
fi

# Start server
make -C tests/acceptance/docker start-server
make -C tests/acceptance/docker start-services

# Wait until the server responds with HTTP 200
echo "Waiting for server to start..."

@@ -60,7 +64,6 @@ SUITES=(
  "apiSharingNgShareInvitation"
  "apiSharingNgLinkSharePermission"
  "apiSharingNgLinkShareRoot"
  "apiAccountsHashDifficulty"
  "apiSearchContent"
  "apiNotification"
)

@@ -139,7 +142,7 @@ for SUITE in "${CORE_SUITES[@]}"; do
  LOG_FILE="$LOG_DIR/${SUITE}.log"

  # Run suite
  make test-acceptance-api TEST_SERVER_URL=$TEST_SERVER_URL EXPECTED_FAILURES_FILE=$EXPECTED_FAILURES_FILE_FROM_CORE BEHAT_SUITE=$SUITE SEND_SCENARIO_LINE_REFERENCES=true > "$LOG_FILE" 2>&1
  make test-acceptance-api TEST_SERVER_URL=$TEST_SERVER_URL EXPECTED_FAILURES_FILE=$EXPECTED_FAILURES_FILE BEHAT_SUITE=$SUITE SEND_SCENARIO_LINE_REFERENCES=true > "$LOG_FILE" 2>&1

  # Check if suite was successful
  if [ $? -eq 0 ]; then
@@ -1,13 +1,13 @@

#!/bin/bash

# Set required environment variables
export LOCAL_TEST=true
export START_TIKA=true
export WITH_WRAPPER=false

TEST_SERVER_URL="https://opencloud-server:9200"

# Start server
make -C tests/acceptance/docker start-server
make -C tests/acceptance/docker start-services

# Wait until the server responds with HTTP 200
echo "Waiting for server to start..."
vendor/github.com/blevesearch/bleve/v2/.travis.yml (25 changes, generated, vendored)
@@ -1,25 +0,0 @@

sudo: false

language: go

go:
  - "1.21.x"
  - "1.22.x"
  - "1.23.x"

script:
  - go get golang.org/x/tools/cmd/cover
  - go get github.com/mattn/goveralls
  - go get github.com/kisielk/errcheck
  - go get -u github.com/FiloSottile/gvt
  - gvt restore
  - go test -race -v $(go list ./... | grep -v vendor/)
  - go vet $(go list ./... | grep -v vendor/)
  - go test ./test -v -indexType scorch
  - errcheck -ignorepkg fmt $(go list ./... | grep -v vendor/);
  - scripts/project-code-coverage.sh
  - scripts/build_children.sh

notifications:
  email:
    - fts-team@couchbase.com
vendor/github.com/blevesearch/bleve/v2/README.md (2 changes, generated, vendored)
@@ -1,7 +1,7 @@

#  bleve

[](https://github.com/blevesearch/bleve/actions/workflows/tests.yml?query=event%3Apush+branch%3Amaster)
[](https://coveralls.io/github/blevesearch/bleve?branch=master)
[](https://coveralls.io/github/blevesearch/bleve)
[](https://pkg.go.dev/github.com/blevesearch/bleve/v2)
[](https://app.gitter.im/#/room/#blevesearch_bleve:gitter.im)
[](https://goreportcard.com/report/github.com/blevesearch/bleve/v2)
vendor/github.com/blevesearch/bleve/v2/document/field_geoshape.go (4 changes, generated, vendored)
@@ -180,7 +180,7 @@ func NewGeoShapeFieldFromShapeWithIndexingOptions(name string, arrayPositions []

	// docvalues are always enabled for geoshape fields, even if the
	// indexing options are set to not include docvalues.
	options = options | index.DocValues
	options |= index.DocValues

	return &GeoShapeField{
		shape: shape,

@@ -232,7 +232,7 @@ func NewGeometryCollectionFieldFromShapesWithIndexingOptions(name string,

	// docvalues are always enabled for geoshape fields, even if the
	// indexing options are set to not include docvalues.
	options = options | index.DocValues
	options |= index.DocValues

	return &GeoShapeField{
		shape: shape,
vendor/github.com/blevesearch/bleve/v2/document/field_vector.go (4 changes, generated, vendored)
@@ -109,6 +109,10 @@ func NewVectorField(name string, arrayPositions []uint64,

func NewVectorFieldWithIndexingOptions(name string, arrayPositions []uint64,
	vector []float32, dims int, similarity, vectorIndexOptimizedFor string,
	options index.FieldIndexingOptions) *VectorField {
	// ensure the options are set to not store/index term vectors/doc values
	options &^= index.StoreField | index.IncludeTermVectors | index.DocValues
	// skip freq/norms for vector field
	options |= index.SkipFreqNorm

	return &VectorField{
		name: name,
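The constructor above leans on two bit operations: Go's AND NOT assignment (`&^=`) clears flags regardless of their current state, and `|=` forces one on. A minimal, self-contained sketch of that technique; the flag names are illustrative stand-ins, not bleve's actual constants:

```go
package main

import "fmt"

// FieldIndexingOptions mimics the bitmask style of bleve's index options;
// these flag names are illustrative, not bleve's real constants.
type FieldIndexingOptions uint8

const (
	StoreField FieldIndexingOptions = 1 << iota
	IncludeTermVectors
	DocValues
	SkipFreqNorm
)

func main() {
	opts := StoreField | IncludeTermVectors | DocValues

	// &^= (AND NOT) clears the listed bits whatever their current state,
	// which is how the vector-field constructor forces those features off.
	opts &^= StoreField | IncludeTermVectors | DocValues

	// |= forces a bit on, matching the SkipFreqNorm line above.
	opts |= SkipFreqNorm

	fmt.Printf("%04b\n", opts) // prints 1000: only SkipFreqNorm is set
}
```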
vendor/github.com/blevesearch/bleve/v2/fusion/rrf.go (154 changes, generated, vendored)
@@ -17,113 +17,125 @@ package fusion

import (
	"fmt"
	"sort"

	"github.com/blevesearch/bleve/v2/search"
)

// formatRRFMessage builds the explanation string for a single component of the
// Reciprocal Rank Fusion calculation.
func formatRRFMessage(weight float64, rank int, rankConstant int) string {
	return fmt.Sprintf("rrf score (weight=%.3f, rank=%d, rank_constant=%d), normalized score of", weight, rank, rankConstant)
}

// ReciprocalRankFusion performs a reciprocal rank fusion on the search results.
func ReciprocalRankFusion(hits search.DocumentMatchCollection, weights []float64, rankConstant int, windowSize int, numKNNQueries int, explain bool) FusionResult {
	if len(hits) == 0 {
		return FusionResult{
			Hits: hits,
// ReciprocalRankFusion applies Reciprocal Rank Fusion across the primary FTS
// results and each KNN sub-query. Ranks are limited to `windowSize` per source,
// weighted, and combined into a single fused score, with optional explanation
// details.
func ReciprocalRankFusion(hits search.DocumentMatchCollection, weights []float64, rankConstant int, windowSize int, numKNNQueries int, explain bool) *FusionResult {
	nHits := len(hits)
	if nHits == 0 || windowSize == 0 {
		return &FusionResult{
			Hits:     search.DocumentMatchCollection{},
			Total:    0,
			MaxScore: 0.0,
		}
	}

	// Create a map of document ID to a slice of ranks.
	// The first element of the slice is the rank from the FTS search,
	// and the subsequent elements are the ranks from the KNN searches.
	docRanks := make(map[string][]int)
	limit := min(nHits, windowSize)

	// Pre-assign rank lists to each candidate document
	for _, hit := range hits {
		docRanks[hit.ID] = make([]int, numKNNQueries+1)
	// precompute rank+scores to prevent additional division ops later
	rankReciprocals := make([]float64, limit)
	for i := range rankReciprocals {
		rankReciprocals[i] = 1.0 / float64(rankConstant+i+1)
	}

	// Only a max of `window_size` elements need to be counted for. Stop
	// calculating rank once this threshold is hit.
	sort.Slice(hits, func(a, b int) bool {
		return scoreSortFunc()(hits[a], hits[b]) < 0
	})
	// Only consider top windowSize docs for rescoring
	for i := range min(windowSize, len(hits)) {
		if hits[i].Score != 0.0 {
			// Skip if Score is 0, since that means the document was not
			// found as part of FTS, and only in KNN.
			docRanks[hits[i].ID][0] = i + 1
	// init explanations if required
	var fusionExpl map[*search.DocumentMatch][]*search.Explanation
	if explain {
		fusionExpl = make(map[*search.DocumentMatch][]*search.Explanation, nHits)
	}

	// The code here mainly deals with obtaining rank/score for fts hits.
	// First sort hits by score
	sortDocMatchesByScore(hits)

	// Calculate fts rank+scores
	ftsWeight := weights[0]
	for i := 0; i < nHits; i++ {
		if i < windowSize {
			hit := hits[i]

			// No fts scores from this hit onwards, break loop
			if hit.Score == 0.0 {
				break
			}

			contrib := ftsWeight * rankReciprocals[i]
			hit.Score = contrib

			if explain {
				expl := getFusionExplAt(
					hit,
					0,
					contrib,
					formatRRFMessage(ftsWeight, i+1, rankConstant),
				)
				fusionExpl[hit] = append(fusionExpl[hit], expl)
			}
		} else {
			// These FTS hits are not counted in the results, so set to 0
			hits[i].Score = 0.0
		}
	}

	// Allocate knnDocs and reuse it within the loop
	knnDocs := make([]*search.DocumentMatch, 0, len(hits))
	// Code from here is to calculate knn ranks and scores
	// iterate over each knn query and calculate knn rank+scores
	for queryIdx := 0; queryIdx < numKNNQueries; queryIdx++ {
		knnWeight := weights[queryIdx+1]
		// Sorts hits in decreasing order of hit.ScoreBreakdown[i]
		sortDocMatchesByBreakdown(hits, queryIdx)

	// For each KNN query, rank the documents based on their KNN score.
	for i := range numKNNQueries {
		knnDocs = knnDocs[:0]
		for i := 0; i < nHits; i++ {
			// break if score breakdown doesn't exist (sort function puts these hits at the end)
			// or if we go past the windowSize
			_, scoreBreakdownExists := scoreBreakdownForQuery(hits[i], queryIdx)
			if i >= windowSize || !scoreBreakdownExists {
				break
			}

		for _, hit := range hits {
			if _, ok := hit.ScoreBreakdown[i]; ok {
				knnDocs = append(knnDocs, hit)
			hit := hits[i]
			contrib := knnWeight * rankReciprocals[i]
			hit.Score += contrib

			if explain {
				expl := getFusionExplAt(
					hit,
					queryIdx+1,
					contrib,
					formatRRFMessage(knnWeight, i+1, rankConstant),
				)
				fusionExpl[hit] = append(fusionExpl[hit], expl)
			}
		}

		// Sort the documents based on their score for this KNN query.
		sort.Slice(knnDocs, func(a, b int) bool {
			return scoreBreakdownSortFunc(i)(knnDocs[a], knnDocs[b]) < 0
		})

		// Update the ranks of the documents in the docRanks map.
		// Only consider top windowSize docs for rescoring.
		for j := range min(windowSize, len(knnDocs)) {
			docRanks[knnDocs[j].ID][i+1] = j + 1
		}
	}

	// Calculate the RRF score for each document.
	var maxScore float64
	for _, hit := range hits {
		var rrfScore float64
		var explChildren []*search.Explanation
		if explain {
			explChildren = make([]*search.Explanation, 0, numKNNQueries+1)
			finalizeFusionExpl(hit, fusionExpl[hit])
		}
		for i, rank := range docRanks[hit.ID] {
			if rank > 0 {
				partialRrfScore := weights[i] * 1.0 / float64(rankConstant+rank)
				if explain {
					expl := getFusionExplAt(
						hit,
						i,
						partialRrfScore,
						formatRRFMessage(weights[i], rank, rankConstant),
					)
					explChildren = append(explChildren, expl)
				}
				rrfScore += partialRrfScore
			}
		}
		hit.Score = rrfScore
		hit.ScoreBreakdown = nil
		if rrfScore > maxScore {
			maxScore = rrfScore
		}

		if explain {
			finalizeFusionExpl(hit, explChildren)
		if hit.Score > maxScore {
			maxScore = hit.Score
		}
	}

	sort.Sort(hits)
	if len(hits) > windowSize {
	sortDocMatchesByScore(hits)
	if nHits > windowSize {
		hits = hits[:windowSize]
	}
	return FusionResult{
	return &FusionResult{
		Hits:     hits,
		Total:    uint64(len(hits)),
		MaxScore: maxScore,
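Stripped of the sorting and explanation plumbing, the fused score accumulated above is a weighted sum of reciprocal ranks, one term per source in which the document appears. A toy sketch of that arithmetic with made-up weights, ranks, and constant:

```go
package main

import "fmt"

// Toy version of the score the fused RRF loops accumulate: each source
// contributes weight * 1/(rankConstant + rank) for the document's rank in
// that source. All names and numbers here are illustrative.
func main() {
	const rankConstant = 60
	weights := []float64{1.0, 0.5} // FTS weight, then one KNN query's weight
	ranks := []int{2, 1}           // doc ranks 2nd in FTS, 1st in the KNN list

	var fused float64
	for i, rank := range ranks {
		fused += weights[i] / float64(rankConstant+rank)
	}
	fmt.Printf("fused RRF score: %.6f\n", fused) // 1.0/62 + 0.5/61 ≈ 0.024326
}
```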
vendor/github.com/blevesearch/bleve/v2/fusion/rsf.go (200 changes, generated, vendored)
@@ -16,145 +16,147 @@ package fusion

import (
	"fmt"
	"sort"

	"github.com/blevesearch/bleve/v2/search"
)

// formatRSFMessage builds the explanation string associated with a single
// component of the Relative Score Fusion calculation.
func formatRSFMessage(weight float64, normalizedScore float64, minScore float64, maxScore float64) string {
	return fmt.Sprintf("rsf score (weight=%.3f, normalized=%.6f, min=%.6f, max=%.6f), normalized score of",
		weight, normalizedScore, minScore, maxScore)
}

// RelativeScoreFusion normalizes scores based on min/max values for FTS and each KNN query, then applies weights.
func RelativeScoreFusion(hits search.DocumentMatchCollection, weights []float64, windowSize int, numKNNQueries int, explain bool) FusionResult {
	if len(hits) == 0 {
		return FusionResult{
			Hits: hits,
// RelativeScoreFusion normalizes the best-scoring documents from the primary
// FTS query and each KNN query, scales those normalized values by the supplied
// weights, and combines them into a single fused score. Only the top
// `windowSize` documents per source are considered, and explanations are
// materialized lazily when requested.
func RelativeScoreFusion(hits search.DocumentMatchCollection, weights []float64, windowSize int, numKNNQueries int, explain bool) *FusionResult {
	nHits := len(hits)
	if nHits == 0 || windowSize == 0 {
		return &FusionResult{
			Hits:     search.DocumentMatchCollection{},
			Total:    0,
			MaxScore: 0.0,
		}
	}

	rsfScores := make(map[string]float64)

	// contains the docs under consideration for scoring.
	// Reused for fts and knn hits
	scoringDocs := make([]*search.DocumentMatch, 0, len(hits))
	var explMap map[string][]*search.Explanation
	// init explanations if required
	var fusionExpl map[*search.DocumentMatch][]*search.Explanation
	if explain {
		explMap = make(map[string][]*search.Explanation)
		fusionExpl = make(map[*search.DocumentMatch][]*search.Explanation, nHits)
	}
	// remove non-fts hits

	// Code here for calculating fts results
	// Sort by fts scores
	sortDocMatchesByScore(hits)

	// ftsLimit holds the total number of fts hits to consider for rsf
	ftsLimit := 0
	for _, hit := range hits {
		if hit.Score != 0.0 {
			scoringDocs = append(scoringDocs, hit)
		if hit.Score == 0.0 {
			break
		}
		ftsLimit++
	}
	// sort hits by fts score
	sort.Slice(scoringDocs, func(a, b int) bool {
		return scoreSortFunc()(scoringDocs[a], scoringDocs[b]) < 0
	})
	// Reslice to correct size
	if len(scoringDocs) > windowSize {
		scoringDocs = scoringDocs[:windowSize]
	}
	ftsLimit = min(ftsLimit, windowSize)

	var min, max float64
	if len(scoringDocs) > 0 {
		min, max = scoringDocs[len(scoringDocs)-1].Score, scoringDocs[0].Score
	}
	// calculate fts scores
	if ftsLimit > 0 {
		max := hits[0].Score
		min := hits[ftsLimit-1].Score
		denom := max - min
		weight := weights[0]

	for _, hit := range scoringDocs {
		var tempRsfScore float64
		if max > min {
			tempRsfScore = (hit.Score - min) / (max - min)
		} else {
			tempRsfScore = 1.0
		}

		if explain {
			// create and replace new explanation
			expl := getFusionExplAt(
				hit,
				0,
				tempRsfScore,
				formatRSFMessage(weights[0], tempRsfScore, min, max),
			)
			explMap[hit.ID] = append(explMap[hit.ID], expl)
		}

		rsfScores[hit.ID] = weights[0] * tempRsfScore
	}

	for i := range numKNNQueries {
		scoringDocs = scoringDocs[:0]
		for _, hit := range hits {
			if _, exists := hit.ScoreBreakdown[i]; exists {
				scoringDocs = append(scoringDocs, hit)
		for i := 0; i < ftsLimit; i++ {
			hit := hits[i]
			norm := 1.0
			if denom > 0 {
				norm = (hit.Score - min) / denom
			}
		}

		sort.Slice(scoringDocs, func(a, b int) bool {
			return scoreBreakdownSortFunc(i)(scoringDocs[a], scoringDocs[b]) < 0
		})

		if len(scoringDocs) > windowSize {
			scoringDocs = scoringDocs[:windowSize]
		}

		if len(scoringDocs) > 0 {
			min, max = scoringDocs[len(scoringDocs)-1].ScoreBreakdown[i], scoringDocs[0].ScoreBreakdown[i]
		} else {
			min, max = 0.0, 0.0
		}

		for _, hit := range scoringDocs {
			var tempRsfScore float64
			if max > min {
				tempRsfScore = (hit.ScoreBreakdown[i] - min) / (max - min)
			} else {
				tempRsfScore = 1.0
			}

			contrib := weight * norm
			if explain {
				expl := getFusionExplAt(
					hit,
					i+1,
					tempRsfScore,
					formatRSFMessage(weights[i+1], tempRsfScore, min, max),
					0,
					norm,
					formatRSFMessage(weight, norm, min, max),
				)
				explMap[hit.ID] = append(explMap[hit.ID], expl)
				fusionExpl[hit] = append(fusionExpl[hit], expl)
			}

			rsfScores[hit.ID] += weights[i+1] * tempRsfScore
			hit.Score = contrib
		}
		for i := ftsLimit; i < nHits; i++ {
			// These FTS hits are not counted in the results, so set to 0
			hits[i].Score = 0.0
		}
	}

	var maxScore float64
	for _, hit := range hits {
		if rsfScore, exists := rsfScores[hit.ID]; exists {
			hit.Score = rsfScore
			if rsfScore > maxScore {
				maxScore = rsfScore
	// Code from here is for calculating knn scores
	for queryIdx := 0; queryIdx < numKNNQueries; queryIdx++ {
		sortDocMatchesByBreakdown(hits, queryIdx)

		// knnLimit holds the total number of knn hits retrieved for a specific knn query
		knnLimit := 0
		for _, hit := range hits {
			if _, ok := scoreBreakdownForQuery(hit, queryIdx); !ok {
				break
			}
			if explain {
				finalizeFusionExpl(hit, explMap[hit.ID])
			}
		} else {
			hit.Score = 0.0
			knnLimit++
		}
		knnLimit = min(knnLimit, windowSize)

		// if limit is 0, skip calculating
		if knnLimit == 0 {
			continue
		}

		max, _ := scoreBreakdownForQuery(hits[0], queryIdx)
		min, _ := scoreBreakdownForQuery(hits[knnLimit-1], queryIdx)
		denom := max - min
		weight := weights[queryIdx+1]

		for i := 0; i < knnLimit; i++ {
			hit := hits[i]
			score, _ := scoreBreakdownForQuery(hit, queryIdx)
			norm := 1.0
			if denom > 0 {
				norm = (score - min) / denom
			}
			contrib := weight * norm
			if explain {
				expl := getFusionExplAt(
					hit,
					queryIdx+1,
					norm,
					formatRSFMessage(weight, norm, min, max),
				)
				fusionExpl[hit] = append(fusionExpl[hit], expl)
			}
			hit.Score += contrib
		}
	}

	// Finalize scores
	var maxScore float64
	for _, hit := range hits {
		if explain {
			finalizeFusionExpl(hit, fusionExpl[hit])
		}
		if hit.Score > maxScore {
			maxScore = hit.Score
		}
		hit.ScoreBreakdown = nil
	}

	sort.Sort(hits)
	sortDocMatchesByScore(hits)

	if len(hits) > windowSize {
	if nHits > windowSize {
		hits = hits[:windowSize]
	}

	return FusionResult{
	return &FusionResult{
		Hits:     hits,
		Total:    uint64(len(hits)),
		MaxScore: maxScore,
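The normalization step above is plain min-max scaling within each source's top window, degenerating to 1.0 when all scores in the window are equal, then multiplied by the source weight. A toy sketch with illustrative values:

```go
package main

import "fmt"

// Toy version of the min-max normalization RelativeScoreFusion applies within
// each source's top window. All values here are illustrative.
func normalize(score, lo, hi float64) float64 {
	if hi > lo {
		return (score - lo) / (hi - lo)
	}
	return 1.0 // all window scores equal: everyone normalizes to 1.0
}

func main() {
	scores := []float64{2.0, 1.5, 0.5} // top-window scores, sorted descending
	lo, hi := scores[len(scores)-1], scores[0]
	weight := 0.7

	for _, s := range scores {
		fmt.Printf("raw=%.2f contribution=%.4f\n", s, weight*normalize(s, lo, hi))
	}
	// raw=2.00 contribution=0.7000
	// raw=1.50 contribution=0.4667
	// raw=0.50 contribution=0.0000
}
```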
vendor/github.com/blevesearch/bleve/v2/fusion/util.go (125 changes, generated, vendored)
@@ -16,70 +16,82 @@
package fusion

import (
	"sort"

	"github.com/blevesearch/bleve/v2/search"
)

// scoreBreakdownSortFunc returns a comparison function for sorting DocumentMatch objects
// by their ScoreBreakdown at the specified index in descending order.
// In case of ties, documents with lower HitNumber (earlier hits) are preferred.
// If either document is missing the ScoreBreakdown for the specified index,
// it's treated as having a score of 0.0.
func scoreBreakdownSortFunc(idx int) func(i, j *search.DocumentMatch) int {
	return func(i, j *search.DocumentMatch) int {
		// Safely extract scores, defaulting to 0.0 if missing
		iScore := 0.0
		jScore := 0.0

		if i.ScoreBreakdown != nil {
			if score, ok := i.ScoreBreakdown[idx]; ok {
				iScore = score
			}
		}

		if j.ScoreBreakdown != nil {
			if score, ok := j.ScoreBreakdown[idx]; ok {
				jScore = score
			}
		}

		// Sort by score in descending order (higher scores first)
		if iScore > jScore {
			return -1
		} else if iScore < jScore {
			return 1
		}

		// Break ties by HitNumber in ascending order (lower HitNumber wins)
		if i.HitNumber < j.HitNumber {
			return -1
		} else if i.HitNumber > j.HitNumber {
			return 1
		}

		return 0 // Equal scores and HitNumbers
// sortDocMatchesByScore orders the provided collection in-place by the primary
// score in descending order, breaking ties with the original `HitNumber` to
// ensure deterministic output.
func sortDocMatchesByScore(hits search.DocumentMatchCollection) {
	if len(hits) < 2 {
		return
	}

	sort.Slice(hits, func(a, b int) bool {
		i := hits[a]
		j := hits[b]
		if i.Score == j.Score {
			return i.HitNumber < j.HitNumber
		}
		return i.Score > j.Score
	})
}

func scoreSortFunc() func(i, j *search.DocumentMatch) int {
	return func(i, j *search.DocumentMatch) int {
		// Sort by score in descending order
		if i.Score > j.Score {
			return -1
		} else if i.Score < j.Score {
			return 1
		}

		// Break ties by HitNumber
		if i.HitNumber < j.HitNumber {
			return -1
		} else if i.HitNumber > j.HitNumber {
			return 1
		}

		return 0
// scoreBreakdownForQuery fetches the score for a specific KNN query index from
// the provided hit. The boolean return indicates whether the score is present.
func scoreBreakdownForQuery(hit *search.DocumentMatch, idx int) (float64, bool) {
	if hit == nil || hit.ScoreBreakdown == nil {
		return 0, false
	}

	score, ok := hit.ScoreBreakdown[idx]
	return score, ok
}

// sortDocMatchesByBreakdown orders the hits in-place using the KNN score for
// the supplied query index (descending), breaking ties with `HitNumber` and
// placing hits without a score at the end.
func sortDocMatchesByBreakdown(hits search.DocumentMatchCollection, queryIdx int) {
	if len(hits) < 2 {
		return
	}

	sort.SliceStable(hits, func(a, b int) bool {
		left := hits[a]
		right := hits[b]

		var leftScore float64
		leftOK := false
		if left != nil && left.ScoreBreakdown != nil {
			leftScore, leftOK = left.ScoreBreakdown[queryIdx]
		}

		var rightScore float64
		rightOK := false
		if right != nil && right.ScoreBreakdown != nil {
			rightScore, rightOK = right.ScoreBreakdown[queryIdx]
		}

		if leftOK && rightOK {
			if leftScore == rightScore {
				return left.HitNumber < right.HitNumber
			}
			return leftScore > rightScore
		}

		if leftOK != rightOK {
			return leftOK
		}

		return left.HitNumber < right.HitNumber
	})
}

// getFusionExplAt copies the existing explanation child at the requested index
// and wraps it in a new node describing how the fusion algorithm adjusted the
// score.
func getFusionExplAt(hit *search.DocumentMatch, i int, value float64, message string) *search.Explanation {
	return &search.Explanation{
		Value: value,

@@ -88,6 +100,9 @@ func getFusionExplAt(hit *search.DocumentMatch, i int, value float64, message st
	}
}

// finalizeFusionExpl installs the collection of fusion explanation children and
// updates the root message so the caller sees the fused score as the sum of its
// parts.
func finalizeFusionExpl(hit *search.DocumentMatch, explChildren []*search.Explanation) {
	hit.Expl.Children = explChildren
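The sort helpers above all share one comparator shape: primary key descending, then the original hit number ascending, so ties resolve deterministically. A minimal sketch with an illustrative struct in place of search.DocumentMatch:

```go
package main

import (
	"fmt"
	"sort"
)

// Illustrative stand-in for bleve's DocumentMatch, reduced to the two fields
// the fusion comparators actually consult.
type match struct {
	score     float64
	hitNumber int
}

func main() {
	hits := []match{{1.0, 2}, {2.5, 0}, {1.0, 1}}

	sort.SliceStable(hits, func(a, b int) bool {
		if hits[a].score == hits[b].score {
			return hits[a].hitNumber < hits[b].hitNumber // deterministic tie-break
		}
		return hits[a].score > hits[b].score // higher score first
	})

	fmt.Println(hits) // [{2.5 0} {1 1} {1 2}]
}
```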
vendor/github.com/blevesearch/bleve/v2/index/scorch/event.go (62 changes, generated, vendored)
@@ -35,43 +35,45 @@ type Event struct {

// EventKind represents an event code for OnEvent() callbacks.
type EventKind int

// EventKindCloseStart is fired when a Scorch.Close() has begun.
var EventKindCloseStart = EventKind(1)
const (
	// EventKindCloseStart is fired when a Scorch.Close() has begun.
	EventKindCloseStart EventKind = iota

// EventKindClose is fired when a scorch index has been fully closed.
var EventKindClose = EventKind(2)
	// EventKindClose is fired when a scorch index has been fully closed.
	EventKindClose

// EventKindMergerProgress is fired when the merger has completed a
// round of merge processing.
var EventKindMergerProgress = EventKind(3)
	// EventKindMergerProgress is fired when the merger has completed a
	// round of merge processing.
	EventKindMergerProgress

// EventKindPersisterProgress is fired when the persister has completed
// a round of persistence processing.
var EventKindPersisterProgress = EventKind(4)
	// EventKindPersisterProgress is fired when the persister has completed
	// a round of persistence processing.
	EventKindPersisterProgress

// EventKindBatchIntroductionStart is fired when Batch() is invoked which
// introduces a new segment.
var EventKindBatchIntroductionStart = EventKind(5)
	// EventKindBatchIntroductionStart is fired when Batch() is invoked which
	// introduces a new segment.
	EventKindBatchIntroductionStart

// EventKindBatchIntroduction is fired when Batch() completes.
var EventKindBatchIntroduction = EventKind(6)
	// EventKindBatchIntroduction is fired when Batch() completes.
	EventKindBatchIntroduction

// EventKindMergeTaskIntroductionStart is fired when the merger is about to
// start the introduction of merged segment from a single merge task.
var EventKindMergeTaskIntroductionStart = EventKind(7)
	// EventKindMergeTaskIntroductionStart is fired when the merger is about to
	// start the introduction of merged segment from a single merge task.
	EventKindMergeTaskIntroductionStart

// EventKindMergeTaskIntroduction is fired when the merger has completed
// the introduction of merged segment from a single merge task.
var EventKindMergeTaskIntroduction = EventKind(8)
	// EventKindMergeTaskIntroduction is fired when the merger has completed
	// the introduction of merged segment from a single merge task.
	EventKindMergeTaskIntroduction

// EventKindPreMergeCheck is fired before the merge begins to check if
// the caller should proceed with the merge.
var EventKindPreMergeCheck = EventKind(9)
	// EventKindPreMergeCheck is fired before the merge begins to check if
	// the caller should proceed with the merge.
	EventKindPreMergeCheck

// EventKindIndexStart is fired when Index() is invoked which
// creates a new Document object from an interface using the index mapping.
var EventKindIndexStart = EventKind(10)
	// EventKindIndexStart is fired when Index() is invoked which
	// creates a new Document object from an interface using the index mapping.
	EventKindIndexStart

// EventKindPurgerCheck is fired before the purge code is invoked and decides
// whether to execute or not. For unit test purposes
var EventKindPurgerCheck = EventKind(11)
	// EventKindPurgerCheck is fired before the purge code is invoked and decides
	// whether to execute or not. For unit test purposes
	EventKindPurgerCheck
)
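The refactor above replaces individually numbered package-level vars with a single typed const block. A minimal sketch of the iota pattern; note that plain iota numbers from 0, whereas the old vars were numbered from 1:

```go
package main

import "fmt"

// Sketch of the const/iota pattern the diff adopts: one typed block keeps the
// event codes contiguous and non-reassignable, unlike package-level vars.
type EventKind int

const (
	EventKindCloseStart EventKind = iota // 0
	EventKindClose                       // 1
	EventKindMergerProgress              // 2
)

func main() {
	fmt.Println(EventKindCloseStart, EventKindClose, EventKindMergerProgress) // 0 1 2
}
```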
vendor/github.com/blevesearch/bleve/v2/index/scorch/introducer.go (11 changes, generated, vendored)
@@ -24,6 +24,8 @@ import (
	segment "github.com/blevesearch/scorch_segment_api/v2"
)

const introducer = "introducer"

type segmentIntroduction struct {
	id   uint64
	data segment.Segment

@@ -50,10 +52,11 @@ type epochWatcher struct {
func (s *Scorch) introducerLoop() {
	defer func() {
		if r := recover(); r != nil {
			s.fireAsyncError(&AsyncPanicError{
				Source: "introducer",
				Path:   s.path,
			})
			s.fireAsyncError(NewScorchError(
				introducer,
				fmt.Sprintf("panic: %v, path: %s", r, s.path),
				ErrAsyncPanic,
			))
		}

		s.asyncTasks.Done()
vendor/github.com/blevesearch/bleve/v2/index/scorch/merge.go (24 changes, generated, vendored)
@@ -29,13 +29,16 @@ import (
	segment "github.com/blevesearch/scorch_segment_api/v2"
)

const merger = "merger"

func (s *Scorch) mergerLoop() {
	defer func() {
		if r := recover(); r != nil {
			s.fireAsyncError(&AsyncPanicError{
				Source: "merger",
				Path:   s.path,
			})
			s.fireAsyncError(NewScorchError(
				merger,
				fmt.Sprintf("panic: %v, path: %s", r, s.path),
				ErrAsyncPanic,
			))
		}

		s.asyncTasks.Done()

@@ -45,7 +48,11 @@ func (s *Scorch) mergerLoop() {
	var ctrlMsg *mergerCtrl
	mergePlannerOptions, err := s.parseMergePlannerOptions()
	if err != nil {
		s.fireAsyncError(fmt.Errorf("mergePlannerOption json parsing err: %v", err))
		s.fireAsyncError(NewScorchError(
			merger,
			fmt.Sprintf("mergerPlannerOptions json parsing err: %v", err),
			ErrOptionsParse,
		))
		return
	}
	ctrlMsgDflt := &mergerCtrl{ctx: context.Background(),

@@ -110,7 +117,12 @@ OUTER:
			ctrlMsg = nil
			break OUTER
		}
		s.fireAsyncError(fmt.Errorf("merging err: %v", err))

		s.fireAsyncError(NewScorchError(
			merger,
			fmt.Sprintf("merging err: %v", err),
			ErrPersist,
		))
		_ = ourSnapshot.DecRef()
		atomic.AddUint64(&s.stats.TotFileMergeLoopErr, 1)
		continue OUTER
vendor/github.com/blevesearch/bleve/v2/index/scorch/persister.go (35 changes, generated, vendored)
@@ -38,6 +38,8 @@ import (
	bolt "go.etcd.io/bbolt"
)

const persister = "persister"

// DefaultPersisterNapTimeMSec is kept to zero as this helps in direct
// persistence of segments with the default safe batch option.
// If the default safe batch option results in high number of

@@ -95,10 +97,11 @@ type notificationChan chan struct{}
func (s *Scorch) persisterLoop() {
	defer func() {
		if r := recover(); r != nil {
			s.fireAsyncError(&AsyncPanicError{
				Source: "persister",
				Path:   s.path,
			})
			s.fireAsyncError(NewScorchError(
				persister,
				fmt.Sprintf("panic: %v, path: %s", r, s.path),
				ErrAsyncPanic,
			))
		}

		s.asyncTasks.Done()

@@ -112,7 +115,11 @@ func (s *Scorch) persisterLoop() {

	po, err := s.parsePersisterOptions()
	if err != nil {
		s.fireAsyncError(fmt.Errorf("persisterOptions json parsing err: %v", err))
		s.fireAsyncError(NewScorchError(
			persister,
			fmt.Sprintf("persisterOptions json parsing err: %v", err),
			ErrOptionsParse,
		))
		return
	}

@@ -173,7 +180,11 @@ OUTER:
			// the retry attempt
			unpersistedCallbacks = append(unpersistedCallbacks, ourPersistedCallbacks...)

			s.fireAsyncError(fmt.Errorf("got err persisting snapshot: %v", err))
			s.fireAsyncError(NewScorchError(
				persister,
				fmt.Sprintf("got err persisting snapshot: %v", err),
				ErrPersist,
			))
			_ = ourSnapshot.DecRef()
			atomic.AddUint64(&s.stats.TotPersistLoopErr, 1)
			continue OUTER

@@ -1060,13 +1071,21 @@ func (s *Scorch) loadSegment(segmentBucket *bolt.Bucket) (*SegmentSnapshot, erro
func (s *Scorch) removeOldData() {
	removed, err := s.removeOldBoltSnapshots()
	if err != nil {
		s.fireAsyncError(fmt.Errorf("got err removing old bolt snapshots: %v", err))
		s.fireAsyncError(NewScorchError(
			persister,
			fmt.Sprintf("got err removing old bolt snapshots: %v", err),
			ErrCleanup,
		))
	}
	atomic.AddUint64(&s.stats.TotSnapshotsRemovedFromMetaStore, uint64(removed))

	err = s.removeOldZapFiles()
	if err != nil {
		s.fireAsyncError(fmt.Errorf("got err removing old zap files: %v", err))
		s.fireAsyncError(NewScorchError(
			persister,
			fmt.Sprintf("got err removing old zap files: %v", err),
			ErrCleanup,
		))
	}
}
vendor/github.com/blevesearch/bleve/v2/index/scorch/scorch.go (43 changes, generated, vendored)
@@ -88,14 +88,45 @@ type Scorch struct {
	spatialPlugin index.SpatialAnalyzerPlugin
}

// AsyncPanicError is passed to scorch asyncErrorHandler when panic occurs in scorch background process
type AsyncPanicError struct {
	Source string
	Path   string
type ScorchErrorType string

func (t ScorchErrorType) Error() string {
	return string(t)
}

func (e *AsyncPanicError) Error() string {
	return fmt.Sprintf("%s panic when processing %s", e.Source, e.Path)
// ErrType values for ScorchError
const (
	ErrAsyncPanic   = ScorchErrorType("async panic error")
	ErrPersist      = ScorchErrorType("persist error")
	ErrCleanup      = ScorchErrorType("cleanup error")
	ErrOptionsParse = ScorchErrorType("options parse error")
)

// ScorchError is passed to onAsyncError when errors are
// fired from scorch background processes
type ScorchError struct {
	Source  string
	ErrMsg  string
	ErrType ScorchErrorType
}

func (e *ScorchError) Error() string {
	return fmt.Sprintf("source: %s, %v: %s", e.Source, e.ErrType, e.ErrMsg)
}

// Lets the onAsyncError function verify what type of
// error is fired using errors.Is(...). This lets the function
// handle errors differently.
func (e *ScorchError) Unwrap() error {
	return e.ErrType
}

func NewScorchError(source, errMsg string, errType ScorchErrorType) error {
	return &ScorchError{
		Source:  source,
		ErrMsg:  errMsg,
		ErrType: errType,
	}
}

type internalStats struct {
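Because ScorchError.Unwrap returns its ErrType sentinel, callers can branch on the category of a background error with errors.Is. A sketch of a hypothetical handler built only on the API introduced in this diff:

```go
package main

import (
	"errors"
	"log"

	scorch "github.com/blevesearch/bleve/v2/index/scorch"
)

// onAsyncError distinguishes scorch background errors by category.
// ScorchError.Unwrap returns the ErrType sentinel, so errors.Is matches it;
// the handler wiring itself is illustrative.
func onAsyncError(err error) {
	switch {
	case errors.Is(err, scorch.ErrAsyncPanic):
		log.Printf("background goroutine panicked: %v", err)
	case errors.Is(err, scorch.ErrPersist):
		log.Printf("persist or merge failure: %v", err)
	case errors.Is(err, scorch.ErrCleanup):
		log.Printf("non-fatal cleanup failure: %v", err)
	default:
		log.Printf("scorch async error: %v", err)
	}
}

func main() {
	onAsyncError(scorch.NewScorchError("merger", "merging err: disk full", scorch.ErrPersist))
}
```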
vendor/github.com/blevesearch/bleve/v2/index/scorch/snapshot_index.go (34 changes, generated, vendored)
@@ -23,7 +23,6 @@ import (
	"path/filepath"
	"reflect"
	"sort"
	"strings"
	"sync"
	"sync/atomic"

@@ -147,7 +146,7 @@ func (is *IndexSnapshot) newIndexSnapshotFieldDict(field string,
	makeItr func(i segment.TermDictionary) segment.DictionaryIterator,
	randomLookup bool,
) (*IndexSnapshotFieldDict, error) {
	results := make(chan *asynchSegmentResult)
	results := make(chan *asynchSegmentResult, len(is.segment))
	var totalBytesRead uint64
	var fieldCardinality int64
	for _, s := range is.segment {

@@ -281,10 +280,13 @@ func (is *IndexSnapshot) FieldDictRange(field string, startTerm []byte,
// to use as the end key in a traditional (inclusive, exclusive]
// start/end range
func calculateExclusiveEndFromPrefix(in []byte) []byte {
	if len(in) == 0 {
		return nil
	}
	rv := make([]byte, len(in))
	copy(rv, in)
	for i := len(rv) - 1; i >= 0; i-- {
		rv[i] = rv[i] + 1
		rv[i]++
		if rv[i] != 0 {
			return rv // didn't overflow, so stop
		}
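calculateExclusiveEndFromPrefix computes a key that sorts after every key sharing the given prefix, by incrementing the last byte and carrying when a byte overflows past 0xFF. A standalone sketch of the same idea; unlike the vendored version, this variant truncates after the carry rather than keeping trailing zero bytes:

```go
package main

import "fmt"

// exclusiveEnd returns the smallest byte slice that sorts after every key
// sharing the prefix, or nil when the prefix is empty or all 0xFF bytes.
func exclusiveEnd(prefix []byte) []byte {
	if len(prefix) == 0 {
		return nil
	}
	end := make([]byte, len(prefix))
	copy(end, prefix)
	for i := len(end) - 1; i >= 0; i-- {
		end[i]++
		if end[i] != 0 {
			return end[:i+1] // no overflow at this byte: done
		}
		// byte wrapped to 0: carry into the previous byte
	}
	return nil // prefix was all 0xFF; no finite exclusive upper bound
}

func main() {
	fmt.Printf("%q\n", exclusiveEnd([]byte("abc")))     // "abd"
	fmt.Printf("%q\n", exclusiveEnd([]byte{'a', 0xff})) // "b"
}
```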
@@ -391,7 +393,7 @@ func (is *IndexSnapshot) FieldDictContains(field string) (index.FieldDictContain
|
||||
}
|
||||
|
||||
func (is *IndexSnapshot) DocIDReaderAll() (index.DocIDReader, error) {
|
||||
results := make(chan *asynchSegmentResult)
|
||||
results := make(chan *asynchSegmentResult, len(is.segment))
|
||||
for index, segment := range is.segment {
|
||||
go func(index int, segment *SegmentSnapshot) {
|
||||
results <- &asynchSegmentResult{
|
||||
@@ -405,7 +407,7 @@ func (is *IndexSnapshot) DocIDReaderAll() (index.DocIDReader, error) {
|
||||
}
|
||||
|
||||
func (is *IndexSnapshot) DocIDReaderOnly(ids []string) (index.DocIDReader, error) {
|
||||
results := make(chan *asynchSegmentResult)
|
||||
results := make(chan *asynchSegmentResult, len(is.segment))
|
||||
for index, segment := range is.segment {
|
||||
go func(index int, segment *SegmentSnapshot) {
|
||||
docs, err := segment.DocNumbers(ids)
|
||||
@@ -451,7 +453,7 @@ func (is *IndexSnapshot) newDocIDReader(results chan *asynchSegmentResult) (inde
|
||||
func (is *IndexSnapshot) Fields() ([]string, error) {
|
||||
// FIXME not making this concurrent for now as it's not used in hot path
|
||||
// of any searches at the moment (just a debug aid)
|
||||
fieldsMap := map[string]struct{}{}
|
||||
fieldsMap := make(map[string]struct{})
|
||||
for _, segment := range is.segment {
|
||||
fields := segment.Fields()
|
||||
for _, field := range fields {
|
||||
@@ -765,7 +767,7 @@ func (is *IndexSnapshot) recycleTermFieldReader(tfr *IndexSnapshotTermFieldReade
|
||||
|
||||
is.m2.Lock()
|
||||
if is.fieldTFRs == nil {
|
||||
is.fieldTFRs = map[string][]*IndexSnapshotTermFieldReader{}
|
||||
is.fieldTFRs = make(map[string][]*IndexSnapshotTermFieldReader)
|
||||
}
|
||||
if len(is.fieldTFRs[tfr.field]) < is.getFieldTFRCacheThreshold() {
|
||||
tfr.bytesRead = 0
|
||||
@@ -813,7 +815,7 @@ func (is *IndexSnapshot) documentVisitFieldTermsOnSegment(
|
||||
// Filter out fields that have been completely deleted or had their
|
||||
// docvalues data deleted from both visitable fields and required fields
|
||||
filterUpdatedFields := func(fields []string) []string {
|
||||
filteredFields := make([]string, 0)
|
||||
filteredFields := make([]string, 0, len(fields))
|
||||
for _, field := range fields {
|
||||
if info, ok := is.updatedFields[field]; ok &&
|
||||
(info.DocValues || info.Deleted) {
|
||||
@@ -978,15 +980,17 @@ func subtractStrings(a, b []string) []string {
|
||||
return a
|
||||
}
|
||||
|
||||
// Create a map for O(1) lookups
|
||||
bMap := make(map[string]struct{}, len(b))
|
||||
for _, bs := range b {
|
||||
bMap[bs] = struct{}{}
|
||||
}
|
||||
|
||||
rv := make([]string, 0, len(a))
|
||||
OUTER:
|
||||
for _, as := range a {
|
||||
for _, bs := range b {
|
||||
if as == bs {
|
||||
continue OUTER
|
||||
}
|
||||
if _, exists := bMap[as]; !exists {
|
||||
rv = append(rv, as)
|
||||
}
|
||||
rv = append(rv, as)
|
||||
}
|
||||
return rv
|
||||
}
|
||||
@@ -1279,7 +1283,7 @@ func (is *IndexSnapshot) TermFrequencies(field string, limit int, descending boo
|
||||
sort.Slice(termFreqs, func(i, j int) bool {
|
||||
if termFreqs[i].Frequency == termFreqs[j].Frequency {
|
||||
// If frequencies are equal, sort by term lexicographically
|
||||
return strings.Compare(termFreqs[i].Term, termFreqs[j].Term) < 0
|
||||
return termFreqs[i].Term < termFreqs[j].Term
|
||||
}
|
||||
if descending {
|
||||
return termFreqs[i].Frequency > termFreqs[j].Frequency
|
||||
|
||||
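The calculateExclusiveEndFromPrefix hunk is worth reading on its own: to turn a prefix into an exclusive range end, increment the last byte that does not wrap to zero. A standalone copy of the patched logic; the hunk cuts off after the loop, so the nil fallback for an all-0xFF prefix is an assumption:

	// calculateExclusiveEndFromPrefix: "app" -> "apq", so the range
	// [prefix, end) covers every key starting with "app".
	func calculateExclusiveEndFromPrefix(in []byte) []byte {
		if len(in) == 0 {
			return nil
		}
		rv := make([]byte, len(in))
		copy(rv, in)
		for i := len(rv) - 1; i >= 0; i-- {
			rv[i]++
			if rv[i] != 0 {
				return rv // didn't overflow, so stop
			}
		}
		// every byte wrapped (prefix was all 0xFF): no finite exclusive
		// end exists, so scan to the end of the keyspace (assumed behavior)
		return nil
	}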
vendor/github.com/blevesearch/bleve/v2/index/scorch/snapshot_vector_index.go | 8 (generated, vendored)

@@ -37,14 +37,10 @@ func (is *IndexSnapshot) VectorReader(ctx context.Context, vector []float32,
 		snapshot:         is,
 		searchParams:     searchParams,
 		eligibleSelector: eligibleSelector,
 		postings:         make([]segment_api.VecPostingsList, len(is.segment)),
 		iterators:        make([]segment_api.VecPostingsIterator, len(is.segment)),
 	}

-	if rv.postings == nil {
-		rv.postings = make([]segment_api.VecPostingsList, len(is.segment))
-	}
-	if rv.iterators == nil {
-		rv.iterators = make([]segment_api.VecPostingsIterator, len(is.segment))
-	}
-	// initialize postings and iterators within the OptimizeVR's Finish()
 	return rv, nil
 }
vendor/github.com/blevesearch/bleve/v2/index_alias_impl.go | 61 (generated, vendored)

@@ -18,7 +18,6 @@ import (
 	"context"
 	"fmt"
 	"sort"
-	"strings"
 	"sync"
 	"time"

@@ -905,7 +904,7 @@ func preSearchDataSearch(ctx context.Context, req *SearchRequest, flags *preSear
 // which would happen in the case of an alias tree and depending on the level of the tree, the preSearchData
 // needs to be redistributed to the indexes at that level
 func redistributePreSearchData(req *SearchRequest, indexes []Index) (map[string]map[string]interface{}, error) {
-	rv := make(map[string]map[string]interface{})
+	rv := make(map[string]map[string]interface{}, len(indexes))
 	for _, index := range indexes {
 		rv[index.Name()] = make(map[string]interface{})
 	}

@@ -1202,23 +1201,16 @@ func (i *indexAliasImpl) TermFrequencies(field string, limit int, descending boo
 		})
 	}

-	if descending {
-		sort.Slice(rvTermFreqs, func(i, j int) bool {
-			if rvTermFreqs[i].Frequency == rvTermFreqs[j].Frequency {
-				// If frequencies are equal, sort by term lexicographically
-				return strings.Compare(rvTermFreqs[i].Term, rvTermFreqs[j].Term) < 0
-			}
-			return rvTermFreqs[i].Frequency > rvTermFreqs[j].Frequency
-		})
-	} else {
-		sort.Slice(rvTermFreqs, func(i, j int) bool {
-			if rvTermFreqs[i].Frequency == rvTermFreqs[j].Frequency {
-				// If frequencies are equal, sort by term lexicographically
-				return strings.Compare(rvTermFreqs[i].Term, rvTermFreqs[j].Term) < 0
-			}
-			return rvTermFreqs[i].Frequency < rvTermFreqs[j].Frequency
-		})
-	}
+	sort.Slice(rvTermFreqs, func(i, j int) bool {
+		if rvTermFreqs[i].Frequency == rvTermFreqs[j].Frequency {
+			// If frequencies are equal, sort by term lexicographically
+			return rvTermFreqs[i].Term < rvTermFreqs[j].Term
+		}
+		if descending {
+			return rvTermFreqs[i].Frequency > rvTermFreqs[j].Frequency
+		}
+		return rvTermFreqs[i].Frequency < rvTermFreqs[j].Frequency
+	})

 	if limit > len(rvTermFreqs) {
 		limit = len(rvTermFreqs)

@@ -1272,25 +1264,22 @@ func (i *indexAliasImpl) CentroidCardinalities(field string, limit int, descendi
 		close(asyncResults)
 	}()

-	rvCentroidCardinalitiesResult := make([]index.CentroidCardinality, 0, limit)
+	rvCentroidCardinalities := make([]index.CentroidCardinality, 0, limit*len(i.indexes))
 	for asr := range asyncResults {
-		asr = append(asr, rvCentroidCardinalitiesResult...)
-		if descending {
-			sort.Slice(asr, func(i, j int) bool {
-				return asr[i].Cardinality > asr[j].Cardinality
-			})
-		} else {
-			sort.Slice(asr, func(i, j int) bool {
-				return asr[i].Cardinality < asr[j].Cardinality
-			})
-		}
-
-		if limit > len(asr) {
-			limit = len(asr)
-		}
-
-		rvCentroidCardinalitiesResult = asr[:limit]
+		rvCentroidCardinalities = append(rvCentroidCardinalities, asr...)
 	}

-	return rvCentroidCardinalitiesResult, nil
+	sort.Slice(rvCentroidCardinalities, func(i, j int) bool {
+		if descending {
+			return rvCentroidCardinalities[i].Cardinality > rvCentroidCardinalities[j].Cardinality
+		} else {
+			return rvCentroidCardinalities[i].Cardinality < rvCentroidCardinalities[j].Cardinality
+		}
+	})
+
+	if limit > len(rvCentroidCardinalities) {
+		limit = len(rvCentroidCardinalities)
+	}
+
+	return rvCentroidCardinalities[:limit], nil
 }
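Both TermFrequencies fixes above collapse two near-identical sort.Slice calls into one comparator: the tie-break comes first, then the direction flag picks the order. A self-contained sketch of that comparator shape (termFreq is a stand-in type, not bleve's):

	import "sort"

	type termFreq struct {
		Term      string
		Frequency int
	}

	// One comparator handles both directions; equal frequencies always
	// fall back to lexicographic order, so results are deterministic.
	func sortTermFreqs(freqs []termFreq, descending bool) {
		sort.Slice(freqs, func(i, j int) bool {
			if freqs[i].Frequency == freqs[j].Frequency {
				return freqs[i].Term < freqs[j].Term
			}
			if descending {
				return freqs[i].Frequency > freqs[j].Frequency
			}
			return freqs[i].Frequency < freqs[j].Frequency
		})
	}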
vendor/github.com/blevesearch/bleve/v2/index_impl.go | 24 (generated, vendored)

@@ -20,6 +20,7 @@ import (
 	"io"
 	"os"
 	"path/filepath"
+	"regexp"
 	"strconv"
 	"sync"
 	"sync/atomic"

@@ -859,6 +860,26 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
 			} else {
 				// build terms facet
 				facetBuilder := facet.NewTermsFacetBuilder(facetRequest.Field, facetRequest.Size)
+
+				// Set prefix filter if provided
+				if facetRequest.TermPrefix != "" {
+					facetBuilder.SetPrefixFilter(facetRequest.TermPrefix)
+				}
+
+				// Set regex filter if provided
+				if facetRequest.TermPattern != "" {
+					// Use cached compiled pattern if available, otherwise compile it now
+					if facetRequest.compiledPattern != nil {
+						facetBuilder.SetRegexFilter(facetRequest.compiledPattern)
+					} else {
+						regex, err := regexp.Compile(facetRequest.TermPattern)
+						if err != nil {
+							return nil, fmt.Errorf("error compiling regex pattern for facet '%s': %v", facetName, err)
+						}
+						facetBuilder.SetRegexFilter(regex)
+					}
+				}
+
 				facetsBuilder.Add(facetName, facetBuilder)
 			}
 		}

@@ -1304,6 +1325,9 @@ func (f *indexImplFieldDict) Cardinality() int {

 // helper function to remove duplicate entries from slice of strings
 func deDuplicate(fields []string) []string {
+	if len(fields) == 0 {
+		return fields
+	}
 	entries := make(map[string]struct{})
 	ret := []string{}
 	for _, entry := range fields {
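The deDuplicate hunk is truncated before the loop body, so for reference, here is a plausible completion with the new early return; the loop body shown is a reconstruction, not the verbatim vendored code:

	// deDuplicate removes duplicate entries while preserving first-seen order.
	func deDuplicate(fields []string) []string {
		if len(fields) == 0 {
			return fields
		}
		entries := make(map[string]struct{})
		ret := []string{}
		for _, entry := range fields {
			if _, exists := entries[entry]; !exists {
				entries[entry] = struct{}{}
				ret = append(ret, entry)
			}
		}
		return ret
	}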
vendor/github.com/blevesearch/bleve/v2/index_update.go | 50 (generated, vendored)

@@ -92,7 +92,7 @@ func DeletedFields(ori, upd *mapping.IndexMappingImpl) (map[string]*index.Update
 	// Compare both the mappings based on the document paths
 	// and create a list of index, docvalues, store differences
 	// for every single field possible
-	fieldInfo := make(map[string]*index.UpdateFieldInfo)
+	fieldInfo := make(map[string]*index.UpdateFieldInfo, len(oriPaths))
 	for path, info := range oriPaths {
 		err = addFieldInfo(fieldInfo, info, updPaths[path])
 		if err != nil {

@@ -109,13 +109,13 @@ func DeletedFields(ori, upd *mapping.IndexMappingImpl) (map[string]*index.Update
 		// A field cannot be completely deleted with any dynamic value turned on
 		if info.Deleted {
 			if upd.IndexDynamic {
-				return nil, fmt.Errorf("Mapping cannot be removed when index dynamic is true")
+				return nil, fmt.Errorf("mapping cannot be removed when index dynamic is true")
 			}
 			if upd.StoreDynamic {
-				return nil, fmt.Errorf("Mapping cannot be removed when store dynamic is true")
+				return nil, fmt.Errorf("mapping cannot be removed when store dynamic is true")
 			}
 			if upd.DocValuesDynamic {
-				return nil, fmt.Errorf("Mapping cannot be removed when docvalues dynamic is true")
+				return nil, fmt.Errorf("mapping cannot be removed when docvalues dynamic is true")
 			}
 		}
 	}

@@ -191,14 +191,14 @@ func checkUpdatedMapping(ori, upd *mapping.DocumentMapping) error {
 	// Simple checks to ensure no new field mappings present
 	// in updated
+	// Create a map of original field names for O(1) lookup
+	oriFieldNames := make(map[string]bool, len(ori.Fields))
+	for _, fMapping := range ori.Fields {
+		oriFieldNames[fMapping.Name] = true
+	}
+
 	for _, updFMapping := range upd.Fields {
-		var oriFMapping *mapping.FieldMapping
-		for _, fMapping := range ori.Fields {
-			if updFMapping.Name == fMapping.Name {
-				oriFMapping = fMapping
-			}
-		}
-		if oriFMapping == nil {
+		if !oriFieldNames[updFMapping.Name] {
 			return fmt.Errorf("updated index mapping contains new fields")
 		}
 	}

@@ -238,10 +238,8 @@ func addPathInfo(paths map[string]*pathInfo, name string, mp *mapping.DocumentMa
 	// Recursively add path information for all child mappings
 	for cName, cMapping := range mp.Properties {
-		var pathName string
-		if name == "" {
-			pathName = cName
-		} else {
+		pathName := cName
+		if name != "" {
 			pathName = name + "." + cName
 		}
 		addPathInfo(paths, pathName, cMapping, im, pInfo, rootName)

@@ -460,9 +458,6 @@ func addFieldInfo(fInfo map[string]*index.UpdateFieldInfo, ori, upd *pathInfo) e
 			}
 		}
 	}
-	if err != nil {
-		return err
-	}

 	return nil
 }

@@ -567,19 +562,18 @@ func compareFieldMapping(original, updated *mapping.FieldMapping) (*index.Update
 // In such a situation, any conflicting changes found will abort the update process
 func validateFieldInfo(newInfo *index.UpdateFieldInfo, fInfo map[string]*index.UpdateFieldInfo,
 	ori *pathInfo, oriFMapInfo *fieldMapInfo) error {
+	// Determine field name
+	fieldName := oriFMapInfo.fieldMapping.Name
+	if fieldName == "" {
+		fieldName = oriFMapInfo.parent.path
+	}
+
+	// Construct full name with parent path
 	var name string
 	if oriFMapInfo.parent.parentPath == "" {
-		if oriFMapInfo.fieldMapping.Name == "" {
-			name = oriFMapInfo.parent.path
-		} else {
-			name = oriFMapInfo.fieldMapping.Name
-		}
+		name = fieldName
 	} else {
-		if oriFMapInfo.fieldMapping.Name == "" {
-			name = oriFMapInfo.parent.parentPath + "." + oriFMapInfo.parent.path
-		} else {
-			name = oriFMapInfo.parent.parentPath + "." + oriFMapInfo.fieldMapping.Name
-		}
+		name = oriFMapInfo.parent.parentPath + "." + fieldName
 	}
 	if (newInfo.Deleted || newInfo.Index || newInfo.DocValues || newInfo.Store) && ori.dynamic {
 		return fmt.Errorf("updated field is under a dynamic property")
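The checkUpdatedMapping change is the standard set-membership refactor: one O(n) pass builds a map, then each lookup is O(1), replacing the quadratic nested scan that was removed. Reduced to its essentials over plain string slices:

	// hasNewFields reports whether updated contains a name absent from
	// original: build the set once, then test membership per entry.
	func hasNewFields(original, updated []string) bool {
		known := make(map[string]bool, len(original))
		for _, name := range original {
			known[name] = true
		}
		for _, name := range updated {
			if !known[name] {
				return true
			}
		}
		return false
	}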
vendor/github.com/blevesearch/bleve/v2/mapping/document.go | 10 (generated, vendored)

@@ -52,7 +52,7 @@ type DocumentMapping struct {
 }

 func (dm *DocumentMapping) Validate(cache *registry.Cache,
-	parentName string, fieldAliasCtx map[string]*FieldMapping,
+	path []string, fieldAliasCtx map[string]*FieldMapping,
 ) error {
 	var err error
 	if dm.DefaultAnalyzer != "" {

@@ -68,11 +68,7 @@ func (dm *DocumentMapping) Validate(cache *registry.Cache,
 		}
 	}
 	for propertyName, property := range dm.Properties {
-		newParent := propertyName
-		if parentName != "" {
-			newParent = fmt.Sprintf("%s.%s", parentName, propertyName)
-		}
-		err = property.Validate(cache, newParent, fieldAliasCtx)
+		err = property.Validate(cache, append(path, propertyName), fieldAliasCtx)
 		if err != nil {
 			return err
 		}

@@ -96,7 +92,7 @@ func (dm *DocumentMapping) Validate(cache *registry.Cache,
 			return err
 		}
 		}
-		err := validateFieldMapping(field, parentName, fieldAliasCtx)
+		err := validateFieldMapping(field, path, fieldAliasCtx)
 		if err != nil {
 			return err
 		}
vendor/github.com/blevesearch/bleve/v2/mapping/index.go | 7 (generated, vendored)

@@ -191,13 +191,16 @@ func (im *IndexMappingImpl) Validate() error {
 			return err
 		}
 	}
+	// fieldAliasCtx is used to detect any field alias conflicts across the entire mapping
+	// the map will hold the fully qualified field name to FieldMapping, so we can
+	// check for conflicts as we validate each DocumentMapping.
 	fieldAliasCtx := make(map[string]*FieldMapping)
-	err = im.DefaultMapping.Validate(im.cache, "", fieldAliasCtx)
+	err = im.DefaultMapping.Validate(im.cache, []string{}, fieldAliasCtx)
 	if err != nil {
 		return err
 	}
 	for _, docMapping := range im.TypeMapping {
-		err = docMapping.Validate(im.cache, "", fieldAliasCtx)
+		err = docMapping.Validate(im.cache, []string{}, fieldAliasCtx)
 		if err != nil {
 			return err
 		}
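Across document.go and index.go the parentName string threaded through Validate becomes a path []string, so each recursion level appends a segment instead of running fmt.Sprintf. The encodePath helper that later turns the slice into a fully qualified name is referenced but not shown in this diff; a dotted join is the obvious reading, sketched here as an assumption:

	import "strings"

	// encodePath (assumed implementation, body not shown in this diff):
	// join the accumulated path segments into the fully qualified name,
	// e.g. encodePath([]string{"author", "name"}) == "author.name".
	func encodePath(path []string) string {
		return strings.Join(path, ".")
	}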
vendor/github.com/blevesearch/bleve/v2/mapping/mapping_no_vectors.go | 2 (generated, vendored)

@@ -38,7 +38,7 @@ func (fm *FieldMapping) processVectorBase64(propertyMightBeVector interface{},
 // -----------------------------------------------------------------------------
 // document validation functions

-func validateFieldMapping(field *FieldMapping, parentName string,
+func validateFieldMapping(field *FieldMapping, path []string,
 	fieldAliasCtx map[string]*FieldMapping) error {
 	return validateFieldType(field)
 }
vendor/github.com/blevesearch/bleve/v2/mapping/mapping_vectors.go | 150 (generated, vendored)

@@ -20,6 +20,7 @@ package mapping
 import (
 	"fmt"
 	"reflect"
+	"slices"

 	"github.com/blevesearch/bleve/v2/document"
 	"github.com/blevesearch/bleve/v2/util"

@@ -141,15 +142,27 @@ func (fm *FieldMapping) processVector(propertyMightBeVector interface{},
 	if !ok {
 		return false
 	}
+	// Apply defaults for similarity and optimization if not set
+	similarity := fm.Similarity
+	if similarity == "" {
+		similarity = index.DefaultVectorSimilarityMetric
+	}
+	vectorIndexOptimizedFor := fm.VectorIndexOptimizedFor
+	if vectorIndexOptimizedFor == "" {
+		vectorIndexOptimizedFor = index.DefaultIndexOptimization
+	}
 	// normalize raw vector if similarity is cosine
-	if fm.Similarity == index.CosineSimilarity {
-		vector = NormalizeVector(vector)
+	// Since the vector can be multi-vector (flattened array of multiple vectors),
+	// we use NormalizeMultiVector to normalize each sub-vector independently.
+	if similarity == index.CosineSimilarity {
+		vector = NormalizeMultiVector(vector, fm.Dims)
 	}

 	fieldName := getFieldName(pathString, path, fm)

 	options := fm.Options()
 	field := document.NewVectorFieldWithIndexingOptions(fieldName, indexes, vector,
-		fm.Dims, fm.Similarity, fm.VectorIndexOptimizedFor, options)
+		fm.Dims, similarity, vectorIndexOptimizedFor, options)
 	context.doc.AddField(field)

 	// "_all" composite field is not applicable for vector field

@@ -163,20 +176,29 @@ func (fm *FieldMapping) processVectorBase64(propertyMightBeVectorBase64 interfac
 	if !ok {
 		return
 	}

+	// Apply defaults for similarity and optimization if not set
+	similarity := fm.Similarity
+	if similarity == "" {
+		similarity = index.DefaultVectorSimilarityMetric
+	}
+	vectorIndexOptimizedFor := fm.VectorIndexOptimizedFor
+	if vectorIndexOptimizedFor == "" {
+		vectorIndexOptimizedFor = index.DefaultIndexOptimization
+	}
 	decodedVector, err := document.DecodeVector(encodedString)
 	if err != nil || len(decodedVector) != fm.Dims {
 		return
 	}
-	// normalize raw vector if similarity is cosine
-	if fm.Similarity == index.CosineSimilarity {
+	// normalize raw vector if similarity is cosine, multi-vector is not supported
+	// for base64 encoded vectors, so we use NormalizeVector directly.
+	if similarity == index.CosineSimilarity {
 		decodedVector = NormalizeVector(decodedVector)
 	}

 	fieldName := getFieldName(pathString, path, fm)
 	options := fm.Options()
 	field := document.NewVectorFieldWithIndexingOptions(fieldName, indexes, decodedVector,
-		fm.Dims, fm.Similarity, fm.VectorIndexOptimizedFor, options)
+		fm.Dims, similarity, vectorIndexOptimizedFor, options)
 	context.doc.AddField(field)

 	// "_all" composite field is not applicable for vector_base64 field

@@ -186,87 +208,121 @@ func (fm *FieldMapping) processVectorBase64(propertyMightBeVectorBase64 interfac
 // -----------------------------------------------------------------------------
 // document validation functions

-func validateFieldMapping(field *FieldMapping, parentName string,
+func validateFieldMapping(field *FieldMapping, path []string,
 	fieldAliasCtx map[string]*FieldMapping) error {
 	switch field.Type {
 	case "vector", "vector_base64":
-		return validateVectorFieldAlias(field, parentName, fieldAliasCtx)
+		return validateVectorFieldAlias(field, path, fieldAliasCtx)
 	default: // non-vector field
 		return validateFieldType(field)
 	}
 }

-func validateVectorFieldAlias(field *FieldMapping, parentName string,
+func validateVectorFieldAlias(field *FieldMapping, path []string,
 	fieldAliasCtx map[string]*FieldMapping) error {

-	if field.Name == "" {
-		field.Name = parentName
+	// fully qualified field name
+	pathString := encodePath(path)
+	// check if field has a name set, else use path to compute effective name
+	effectiveFieldName := getFieldName(pathString, path, field)
+	// Compute effective values for validation
+	effectiveSimilarity := field.Similarity
+	if effectiveSimilarity == "" {
+		effectiveSimilarity = index.DefaultVectorSimilarityMetric
+	}
+	effectiveOptimizedFor := field.VectorIndexOptimizedFor
+	if effectiveOptimizedFor == "" {
+		effectiveOptimizedFor = index.DefaultIndexOptimization
 	}

-	if field.Similarity == "" {
-		field.Similarity = index.DefaultVectorSimilarityMetric
-	}
-
-	if field.VectorIndexOptimizedFor == "" {
-		field.VectorIndexOptimizedFor = index.DefaultIndexOptimization
-	}
-	if _, exists := index.SupportedVectorIndexOptimizations[field.VectorIndexOptimizedFor]; !exists {
-		// if an unsupported config is provided, override to default
-		field.VectorIndexOptimizedFor = index.DefaultIndexOptimization
-	}
-
 	// following fields are not applicable for vector
 	// thus, we set them to default values
 	field.IncludeInAll = false
 	field.IncludeTermVectors = false
 	field.Store = false
 	field.DocValues = false
 	field.SkipFreqNorm = true

-	// # If alias is present, validate the field options as per the alias
+	// # If alias is present, validate the field options as per the alias.
 	// note: reading from a nil map is safe
-	if fieldAlias, ok := fieldAliasCtx[field.Name]; ok {
+	if fieldAlias, ok := fieldAliasCtx[effectiveFieldName]; ok {
 		if field.Dims != fieldAlias.Dims {
 			return fmt.Errorf("field: '%s', invalid alias "+
-				"(different dimensions %d and %d)", fieldAlias.Name, field.Dims,
+				"(different dimensions %d and %d)", effectiveFieldName, field.Dims,
 				fieldAlias.Dims)
 		}

-		if field.Similarity != fieldAlias.Similarity {
+		// Compare effective similarity values
+		aliasSimilarity := fieldAlias.Similarity
+		if aliasSimilarity == "" {
+			aliasSimilarity = index.DefaultVectorSimilarityMetric
+		}
+		if effectiveSimilarity != aliasSimilarity {
 			return fmt.Errorf("field: '%s', invalid alias "+
-				"(different similarity values %s and %s)", fieldAlias.Name,
-				field.Similarity, fieldAlias.Similarity)
+				"(different similarity values %s and %s)", effectiveFieldName,
+				effectiveSimilarity, aliasSimilarity)
 		}

+		// Compare effective vector index optimization values
+		aliasOptimizedFor := fieldAlias.VectorIndexOptimizedFor
+		if aliasOptimizedFor == "" {
+			aliasOptimizedFor = index.DefaultIndexOptimization
+		}
+		if effectiveOptimizedFor != aliasOptimizedFor {
+			return fmt.Errorf("field: '%s', invalid alias "+
+				"(different vector index optimization values %s and %s)", effectiveFieldName,
+				effectiveOptimizedFor, aliasOptimizedFor)
+		}
+
 		return nil
 	}

 	// # Validate field options

 	// Vector dimensions must be within allowed range
 	if field.Dims < MinVectorDims || field.Dims > MaxVectorDims {
 		return fmt.Errorf("field: '%s', invalid vector dimension: %d,"+
-			" value should be in range (%d, %d)", field.Name, field.Dims,
+			" value should be in range [%d, %d]", effectiveFieldName, field.Dims,
 			MinVectorDims, MaxVectorDims)
 	}

-	if _, ok := index.SupportedVectorSimilarityMetrics[field.Similarity]; !ok {
+	// Similarity metric must be supported
+	if _, ok := index.SupportedVectorSimilarityMetrics[effectiveSimilarity]; !ok {
 		return fmt.Errorf("field: '%s', invalid similarity "+
-			"metric: '%s', valid metrics are: %+v", field.Name, field.Similarity,
+			"metric: '%s', valid metrics are: %+v", effectiveFieldName, effectiveSimilarity,
 			reflect.ValueOf(index.SupportedVectorSimilarityMetrics).MapKeys())
 	}
+	// Vector index optimization must be supported
+	if _, ok := index.SupportedVectorIndexOptimizations[effectiveOptimizedFor]; !ok {
+		return fmt.Errorf("field: '%s', invalid vector index "+
+			"optimization: '%s', valid optimizations are: %+v", effectiveFieldName,
+			effectiveOptimizedFor,
+			reflect.ValueOf(index.SupportedVectorIndexOptimizations).MapKeys())
+	}

 	if fieldAliasCtx != nil { // writing to a nil map is unsafe
-		fieldAliasCtx[field.Name] = field
+		fieldAliasCtx[effectiveFieldName] = field
 	}

 	return nil
 }

+// NormalizeVector normalizes a single vector to unit length.
+// It makes a copy of the input vector to avoid modifying it in-place.
 func NormalizeVector(vec []float32) []float32 {
-	// make a copy of the vector to avoid modifying the original
-	// vector in-place
-	vecCopy := make([]float32, len(vec))
-	copy(vecCopy, vec)
+	vecCopy := slices.Clone(vec)
 	// normalize the vector copy using in-place normalization provided by faiss
 	return faiss.NormalizeVector(vecCopy)
 }

+// NormalizeMultiVector normalizes each sub-vector of size `dims` independently.
+// For a flattened array containing multiple vectors, each sub-vector is
+// normalized separately to unit length.
+// It makes a copy of the input vector to avoid modifying it in-place.
+func NormalizeMultiVector(vec []float32, dims int) []float32 {
+	if len(vec) == 0 || dims <= 0 || len(vec)%dims != 0 {
+		return vec
+	}
+	// Single vector - delegate to NormalizeVector
+	if len(vec) == dims {
+		return NormalizeVector(vec)
+	}
+	// Multi-vector - make a copy to avoid modifying the original
+	result := slices.Clone(vec)
+	// Normalize each sub-vector in-place
+	for i := 0; i < len(result); i += dims {
+		faiss.NormalizeVector(result[i : i+dims])
+	}
+	return result
+}
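NormalizeMultiVector above delegates the in-place step to faiss.NormalizeVector (cgo). For intuition, a pure-Go sketch of the same per-sub-vector L2 normalization; the explicit math here replaces the faiss call and is not the vendored implementation:

	import (
		"math"
		"slices"
	)

	// normalize scales one sub-vector to unit length in place.
	func normalize(sub []float32) {
		var sum float64
		for _, v := range sub {
			sum += float64(v) * float64(v)
		}
		norm := float32(math.Sqrt(sum))
		if norm == 0 {
			return // zero vector: nothing to scale
		}
		for i := range sub {
			sub[i] /= norm
		}
	}

	// normalizeMulti treats vec as len(vec)/dims flattened sub-vectors and
	// normalizes each independently, on a copy so the caller's slice is untouched.
	func normalizeMulti(vec []float32, dims int) []float32 {
		if len(vec) == 0 || dims <= 0 || len(vec)%dims != 0 {
			return vec
		}
		out := slices.Clone(vec)
		for i := 0; i < len(out); i += dims {
			normalize(out[i : i+dims])
		}
		return out
	}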
vendor/github.com/blevesearch/bleve/v2/rescorer.go | 6 (generated, vendored)

@@ -99,7 +99,7 @@ func (r *rescorer) rescore(ftsHits, knnHits search.DocumentMatchCollection) (sea
 	switch r.req.Score {
 	case ScoreRRF:
-		res := fusion.ReciprocalRankFusion(
+		fusionResult = fusion.ReciprocalRankFusion(
 			mergedHits,
 			r.origBoosts,
 			r.req.Params.ScoreRankConstant,

@@ -107,16 +107,14 @@ func (r *rescorer) rescore(ftsHits, knnHits search.DocumentMatchCollection) (sea
 			numKNNQueries(r.req),
 			r.req.Explain,
 		)
-		fusionResult = &res
 	case ScoreRSF:
-		res := fusion.RelativeScoreFusion(
+		fusionResult = fusion.RelativeScoreFusion(
 			mergedHits,
 			r.origBoosts,
 			r.req.Params.ScoreWindowSize,
 			numKNNQueries(r.req),
 			r.req.Explain,
 		)
-		fusionResult = &res
 	}

 	return fusionResult.Hits, fusionResult.Total, fusionResult.MaxScore
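For context on what ReciprocalRankFusion computes: RRF scores a document as the sum over rankings of 1/(k + rank), so agreement across rankings outweighs any single high score. A hedged, minimal sketch of the formula only; the fusion package's real functions take merged hits, boosts, and an explain flag, not this shape:

	// rrf fuses several rankings (each an ordered slice of doc IDs) into
	// one score map: score(d) = sum over rankings of 1/(k + rank(d)).
	func rrf(rankings [][]string, k float64) map[string]float64 {
		scores := make(map[string]float64)
		for _, ranking := range rankings {
			for rank, docID := range ranking {
				scores[docID] += 1.0 / (k + float64(rank+1))
			}
		}
		return scores
	}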
vendor/github.com/blevesearch/bleve/v2/search.go | 101 (generated, vendored)

@@ -17,8 +17,10 @@ package bleve
 import (
 	"fmt"
 	"reflect"
+	"regexp"
 	"sort"
 	"strconv"
+	"strings"
 	"time"

 	"github.com/blevesearch/bleve/v2/analysis"

@@ -147,8 +149,13 @@ type numericRange struct {
 type FacetRequest struct {
 	Size           int             `json:"size"`
 	Field          string          `json:"field"`
+	TermPrefix     string          `json:"term_prefix,omitempty"`
+	TermPattern    string          `json:"term_pattern,omitempty"`
 	NumericRanges  []*numericRange `json:"numeric_ranges,omitempty"`
 	DateTimeRanges []*dateTimeRange `json:"date_ranges,omitempty"`

+	// Compiled regex pattern (cached during validation)
+	compiledPattern *regexp.Regexp
 }

// NewFacetRequest creates a facet on the specified
@@ -161,7 +168,26 @@ func NewFacetRequest(field string, size int) *FacetRequest {
 	}
 }

+// SetPrefixFilter sets the prefix filter for term facets.
+func (fr *FacetRequest) SetPrefixFilter(prefix string) {
+	fr.TermPrefix = prefix
+}
+
+// SetRegexFilter sets the regex pattern filter for term facets.
+func (fr *FacetRequest) SetRegexFilter(pattern string) {
+	fr.TermPattern = pattern
+}
+
 func (fr *FacetRequest) Validate() error {
+	// Validate regex pattern if provided and cache the compiled regex
+	if fr.TermPattern != "" {
+		compiled, err := regexp.Compile(fr.TermPattern)
+		if err != nil {
+			return fmt.Errorf("invalid term pattern: %v", err)
+		}
+		fr.compiledPattern = compiled
+	}
+
 	nrCount := len(fr.NumericRanges)
 	drCount := len(fr.DateTimeRanges)
 	if nrCount > 0 && drCount > 0 {

@@ -546,49 +572,74 @@ func (sr *SearchResult) Size() int {
 }

 func (sr *SearchResult) String() string {
-	rv := ""
+	rv := &strings.Builder{}
 	if sr.Total > 0 {
-		if sr.Request != nil && sr.Request.Size > 0 {
-			rv = fmt.Sprintf("%d matches, showing %d through %d, took %s\n", sr.Total, sr.Request.From+1, sr.Request.From+len(sr.Hits), sr.Took)
+		switch {
+		case sr.Request != nil && sr.Request.Size > 0:
+			start := sr.Request.From + 1
+			end := sr.Request.From + len(sr.Hits)
+			fmt.Fprintf(rv, "%d matches, showing %d through %d, took %s\n", sr.Total, start, end, sr.Took)
 			for i, hit := range sr.Hits {
-				rv += fmt.Sprintf("%5d. %s (%f)\n", i+sr.Request.From+1, hit.ID, hit.Score)
-				for fragmentField, fragments := range hit.Fragments {
-					rv += fmt.Sprintf("\t%s\n", fragmentField)
-					for _, fragment := range fragments {
-						rv += fmt.Sprintf("\t\t%s\n", fragment)
-					}
-				}
-				for otherFieldName, otherFieldValue := range hit.Fields {
-					if _, ok := hit.Fragments[otherFieldName]; !ok {
-						rv += fmt.Sprintf("\t%s\n", otherFieldName)
-						rv += fmt.Sprintf("\t\t%v\n", otherFieldValue)
-					}
-				}
+				rv = formatHit(rv, hit, start+i)
 			}
-		} else {
-			rv = fmt.Sprintf("%d matches, took %s\n", sr.Total, sr.Took)
+		case sr.Request == nil:
+			fmt.Fprintf(rv, "%d matches, took %s\n", sr.Total, sr.Took)
+			for i, hit := range sr.Hits {
+				rv = formatHit(rv, hit, i+1)
+			}
+		default:
+			fmt.Fprintf(rv, "%d matches, took %s\n", sr.Total, sr.Took)
 		}
 	} else {
-		rv = "No matches"
+		fmt.Fprintf(rv, "No matches\n")
 	}
 	if len(sr.Facets) > 0 {
-		rv += "Facets:\n"
+		fmt.Fprintf(rv, "Facets:\n")
 		for fn, f := range sr.Facets {
-			rv += fmt.Sprintf("%s(%d)\n", fn, f.Total)
+			fmt.Fprintf(rv, "%s(%d)\n", fn, f.Total)
 			for _, t := range f.Terms.Terms() {
-				rv += fmt.Sprintf("\t%s(%d)\n", t.Term, t.Count)
+				fmt.Fprintf(rv, "\t%s(%d)\n", t.Term, t.Count)
 			}
 			for _, n := range f.NumericRanges {
-				rv += fmt.Sprintf("\t%s(%d)\n", n.Name, n.Count)
+				fmt.Fprintf(rv, "\t%s(%d)\n", n.Name, n.Count)
 			}
 			for _, d := range f.DateRanges {
-				rv += fmt.Sprintf("\t%s(%d)\n", d.Name, d.Count)
+				fmt.Fprintf(rv, "\t%s(%d)\n", d.Name, d.Count)
 			}
 			if f.Other != 0 {
-				rv += fmt.Sprintf("\tOther(%d)\n", f.Other)
+				fmt.Fprintf(rv, "\tOther(%d)\n", f.Other)
 			}
 		}
 	}
-	return rv
+	return rv.String()
 }

+// formatHit is a helper function to format a single hit in the search result for
+// the String() method of SearchResult
+func formatHit(rv *strings.Builder, hit *search.DocumentMatch, hitNumber int) *strings.Builder {
+	fmt.Fprintf(rv, "%5d. %s (%f)\n", hitNumber, hit.ID, hit.Score)
+	for fragmentField, fragments := range hit.Fragments {
+		fmt.Fprintf(rv, "\t%s\n", fragmentField)
+		for _, fragment := range fragments {
+			fmt.Fprintf(rv, "\t\t%s\n", fragment)
+		}
+	}
+	for otherFieldName, otherFieldValue := range hit.Fields {
+		if _, ok := hit.Fragments[otherFieldName]; !ok {
+			fmt.Fprintf(rv, "\t%s\n", otherFieldName)
+			fmt.Fprintf(rv, "\t\t%v\n", otherFieldValue)
+		}
+	}
+	if len(hit.DecodedSort) > 0 {
+		fmt.Fprintf(rv, "\t_sort: [")
+		for k, v := range hit.DecodedSort {
+			if k > 0 {
+				fmt.Fprintf(rv, ", ")
+			}
+			fmt.Fprintf(rv, "%v", v)
+		}
+		fmt.Fprintf(rv, "]\n")
+	}
+	return rv
+}
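From the caller's side, the new FacetRequest filters wire up as below. SetPrefixFilter, SetRegexFilter, and the compile-once caching in Validate are exactly what the hunks above add; the index contents and field names here are made up for illustration:

	import (
		"log"

		"github.com/blevesearch/bleve/v2"
	)

	fr := bleve.NewFacetRequest("tags", 10)
	fr.SetPrefixFilter("lang:")          // keep only terms starting with "lang:"
	fr.SetRegexFilter(`lang:[a-z][a-z]`) // and matching this pattern
	if err := fr.Validate(); err != nil { // compiles and caches the regex once
		log.Fatal(err)
	}
	req := bleve.NewSearchRequest(bleve.NewMatchAllQuery())
	req.AddFacet("languages", fr)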
vendor/github.com/blevesearch/bleve/v2/search/facet/facet_builder_terms.go | 60 (generated, vendored)

@@ -15,7 +15,9 @@
 package facet

 import (
+	"bytes"
 	"reflect"
+	"regexp"
 	"sort"

 	"github.com/blevesearch/bleve/v2/search"

@@ -30,12 +32,14 @@ func init() {
 }

 type TermsFacetBuilder struct {
-	size       int
-	field      string
-	termsCount map[string]int
-	total      int
-	missing    int
-	sawValue   bool
+	size        int
+	field       string
+	prefixBytes []byte
+	regex       *regexp.Regexp
+	termsCount  map[string]int
+	total       int
+	missing     int
+	sawValue    bool
 }

 func NewTermsFacetBuilder(field string, size int) *TermsFacetBuilder {

@@ -48,7 +52,16 @@ func NewTermsFacetBuilder(field string, size int) *TermsFacetBuilder {

 func (fb *TermsFacetBuilder) Size() int {
 	sizeInBytes := reflectStaticSizeTermsFacetBuilder + size.SizeOfPtr +
-		len(fb.field)
+		len(fb.field) +
+		len(fb.prefixBytes) +
+		size.SizeOfPtr // regex pointer (does not include actual regexp.Regexp object size)
+
+	// Estimate regex object size if present.
+	if fb.regex != nil {
+		// This is only the static size of regexp.Regexp struct, not including heap allocations.
+		sizeInBytes += int(reflect.TypeOf(*fb.regex).Size())
+		// NOTE: Actual memory usage of regexp.Regexp may be higher due to internal allocations.
+	}

 	for k := range fb.termsCount {
 		sizeInBytes += size.SizeOfString + len(k) +

@@ -62,10 +75,39 @@ func (fb *TermsFacetBuilder) Field() string {
 	return fb.field
 }

+// SetPrefixFilter sets the prefix filter for term facets.
+func (fb *TermsFacetBuilder) SetPrefixFilter(prefix string) {
+	if prefix != "" {
+		fb.prefixBytes = []byte(prefix)
+	} else {
+		fb.prefixBytes = nil
+	}
+}
+
+// SetRegexFilter sets the compiled regex filter for term facets.
+func (fb *TermsFacetBuilder) SetRegexFilter(regex *regexp.Regexp) {
+	fb.regex = regex
+}
+
 func (fb *TermsFacetBuilder) UpdateVisitor(term []byte) {
-	fb.sawValue = true
-	fb.termsCount[string(term)] = fb.termsCount[string(term)] + 1
+	// Total represents all terms visited, not just matching ones.
+	// This is necessary for the "Other" calculation.
 	fb.total++
+
+	// Fast prefix check on []byte - zero allocation
+	if len(fb.prefixBytes) > 0 && !bytes.HasPrefix(term, fb.prefixBytes) {
+		return
+	}
+
+	// Fast regex check on []byte - zero allocation
+	if fb.regex != nil && !fb.regex.Match(term) {
+		return
+	}
+
+	// Only convert to string if term matches filters
+	termStr := string(term)
+	fb.sawValue = true
+	fb.termsCount[termStr] = fb.termsCount[termStr] + 1
 }

 func (fb *TermsFacetBuilder) StartDoc() {
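The ordering in UpdateVisitor matters: bytes.HasPrefix and (*regexp.Regexp).Match both operate on the raw []byte, so terms that fail a filter are rejected before the string(term) conversion allocates. The filter logic, isolated from the builder:

	import (
		"bytes"
		"regexp"
	)

	// matches applies the same two checks as UpdateVisitor above: a cheap
	// byte-prefix test first, then the compiled regex. Neither allocates
	// for non-matching terms; string conversion happens only on a match.
	func matches(term, prefix []byte, re *regexp.Regexp) bool {
		if len(prefix) > 0 && !bytes.HasPrefix(term, prefix) {
			return false
		}
		if re != nil && !re.Match(term) {
			return false
		}
		return true
	}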
vendor/github.com/blevesearch/bleve/v2/search/query/boolean.go | 5 (generated, vendored)

@@ -15,7 +15,6 @@
 package query

 import (
-	"bytes"
 	"context"
 	"encoding/json"
 	"fmt"

@@ -203,7 +202,7 @@ func (q *BooleanQuery) Searcher(ctx context.Context, i index.IndexReader, m mapp
 			return false
 		}
 		// Compare document IDs
-		cmp := bytes.Compare(refDoc.IndexInternalID, d.IndexInternalID)
+		cmp := refDoc.IndexInternalID.Compare(d.IndexInternalID)
 		if cmp < 0 {
 			// filterSearcher is behind the current document, Advance() it
 			refDoc, err = filterSearcher.Advance(sctx, d.IndexInternalID)

@@ -211,7 +210,7 @@ func (q *BooleanQuery) Searcher(ctx context.Context, i index.IndexReader, m mapp
 				return false
 			}
 			// After advance, check if they're now equal
-			return bytes.Equal(refDoc.IndexInternalID, d.IndexInternalID)
+			cmp = refDoc.IndexInternalID.Compare(d.IndexInternalID)
 		}
 		// cmp >= 0: either equal (match) or filterSearcher is ahead (no match)
 		return cmp == 0
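The compare/advance loop above is one step of a classic merge join over two ordered ID streams: advance whichever side lags, match only on equality. The same shape, reduced to sorted int slices:

	// intersect walks two sorted ID lists with two pointers, the same
	// pattern the filterSearcher advance logic follows per document.
	func intersect(a, b []int) []int {
		var out []int
		i, j := 0, 0
		for i < len(a) && j < len(b) {
			switch {
			case a[i] < b[j]:
				i++ // a is behind, advance it
			case a[i] > b[j]:
				j++ // b is behind, advance it
			default:
				out = append(out, a[i]) // equal: a match
				i++
				j++
			}
		}
		return out
	}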
vendor/github.com/blevesearch/bleve/v2/search/query/knn.go | 2 (generated, vendored)

@@ -53,7 +53,7 @@ func (q *KNNQuery) SetK(k int64) {
 	q.K = k
 }

-func (q *KNNQuery) SetFieldVal(field string) {
+func (q *KNNQuery) SetField(field string) {
 	q.VectorField = field
 }
vendor/github.com/blevesearch/bleve/v2/search/scorer/scorer_disjunction.go | 10 (generated, vendored)

@@ -88,7 +88,10 @@ func (s *DisjunctionQueryScorer) Score(ctx *search.SearchContext, constituents [
 func (s *DisjunctionQueryScorer) ScoreAndExplBreakdown(ctx *search.SearchContext, constituents []*search.DocumentMatch,
 	matchingIdxs []int, originalPositions []int, countTotal int) *search.DocumentMatch {

-	scoreBreakdown := make(map[int]float64)
+	rv := constituents[0]
+	if rv.ScoreBreakdown == nil {
+		rv.ScoreBreakdown = make(map[int]float64, len(constituents))
+	}
 	var childrenExplanations []*search.Explanation
 	if s.options.Explain {
 		// since we want to notify which expl belongs to which matched searcher within the disjunction searcher

@@ -104,7 +107,7 @@ func (s *DisjunctionQueryScorer) ScoreAndExplBreakdown(ctx *search.SearchContext
 			// scorer used in disjunction heap searcher
 			index = matchingIdxs[i]
 		}
-		scoreBreakdown[index] = docMatch.Score
+		rv.ScoreBreakdown[index] = docMatch.Score
 		if s.options.Explain {
 			childrenExplanations[index] = docMatch.Expl
 		}

@@ -113,9 +116,6 @@ func (s *DisjunctionQueryScorer) ScoreAndExplBreakdown(ctx *search.SearchContext
 	if s.options.Explain {
 		explBreakdown = &search.Explanation{Children: childrenExplanations}
 	}
-
-	rv := constituents[0]
-	rv.ScoreBreakdown = scoreBreakdown
 	rv.Expl = explBreakdown
 	rv.FieldTermLocations = search.MergeFieldTermLocations(
 		rv.FieldTermLocations, constituents[1:])
vendor/github.com/blevesearch/bleve/v2/search/search.go | 11 (generated, vendored)

@@ -207,20 +207,29 @@ func (dm *DocumentMatch) Reset() *DocumentMatch {
 	indexInternalID := dm.IndexInternalID
 	// remember the []interface{} used for sort
 	sort := dm.Sort
+	// remember the []string used for decoded sort
+	decodedSort := dm.DecodedSort
 	// remember the FieldTermLocations backing array
 	ftls := dm.FieldTermLocations
 	for i := range ftls { // recycle the ArrayPositions of each location
 		ftls[i].Location.ArrayPositions = ftls[i].Location.ArrayPositions[:0]
 	}
+	// remember the score breakdown map
+	scoreBreakdown := dm.ScoreBreakdown
+	// clear out the score breakdown map
+	clear(scoreBreakdown)
 	// idiom to copy over from empty DocumentMatch (0 allocations)
 	*dm = DocumentMatch{}
 	// reuse the []byte already allocated (and reset len to 0)
 	dm.IndexInternalID = indexInternalID[:0]
 	// reuse the []interface{} already allocated (and reset len to 0)
 	dm.Sort = sort[:0]
-	dm.DecodedSort = dm.DecodedSort[:0]
+	// reuse the []string already allocated (and reset len to 0)
+	dm.DecodedSort = decodedSort[:0]
 	// reuse the FieldTermLocations already allocated (and reset len to 0)
 	dm.FieldTermLocations = ftls[:0]
+	// reuse the score breakdown map already allocated (after clearing it)
+	dm.ScoreBreakdown = scoreBreakdown
 	return dm
 }
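The Reset method above also fixes a subtle bug: the old code read dm.DecodedSort after *dm had already been zeroed, so the backing array was lost. The recycling idiom in miniature, on a made-up type:

	// reset zeroes the struct while keeping its allocations: save the
	// backing storage, zero, then re-attach slices at length 0 and maps
	// after clear() (which empties buckets without freeing them).
	type match struct {
		ids    []byte
		scores map[int]float64
	}

	func (m *match) reset() *match {
		ids := m.ids       // save before zeroing, unlike the old bug
		scores := m.scores
		clear(scores)      // no-op on a nil map, empties otherwise
		*m = match{}
		m.ids = ids[:0]
		m.scores = scores
		return m
	}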
vendor/github.com/blevesearch/bleve/v2/search/searcher/search_knn.go | 2 (generated, vendored)

@@ -84,7 +84,7 @@ func (s *KNNSearcher) VectorOptimize(ctx context.Context, octx index.VectorOptim

 func (s *KNNSearcher) Advance(ctx *search.SearchContext, ID index.IndexInternalID) (
 	*search.DocumentMatch, error) {
-	knnMatch, err := s.vectorReader.Next(s.vd.Reset())
+	knnMatch, err := s.vectorReader.Advance(ID, s.vd.Reset())
 	if err != nil {
 		return nil, err
 	}
vendor/github.com/blevesearch/bleve/v2/search_knn.go | 30 (generated, vendored)

@@ -288,10 +288,15 @@ func createKNNQuery(req *SearchRequest, knnFilterResults map[int]index.EligibleD
 		// If it's a filtered kNN but has no eligible filter hits, then
 		// do not run the kNN query.
 		if selector, exists := knnFilterResults[i]; exists && selector == nil {
+			// if the kNN query is filtered and has no eligible filter hits, then
+			// do not run the kNN query, so we add a match_none query to the subQueries.
+			// this will ensure that the score breakdown is set to 0 for this kNN query.
+			subQueries = append(subQueries, NewMatchNoneQuery())
+			kArray = append(kArray, 0)
 			continue
 		}
 		knnQuery := query.NewKNNQuery(knn.Vector)
-		knnQuery.SetFieldVal(knn.Field)
+		knnQuery.SetField(knn.Field)
 		knnQuery.SetK(knn.K)
 		knnQuery.SetBoost(knn.Boost.Value())
 		knnQuery.SetParams(knn.Params)

@@ -381,7 +386,7 @@ func addSortAndFieldsToKNNHits(req *SearchRequest, knnHits []*search.DocumentMat
 	return nil
 }

-func (i *indexImpl) runKnnCollector(ctx context.Context, req *SearchRequest, reader index.IndexReader, preSearch bool) ([]*search.DocumentMatch, error) {
+func (i *indexImpl) runKnnCollector(ctx context.Context, req *SearchRequest, reader index.IndexReader, preSearch bool) (knnHits []*search.DocumentMatch, err error) {
 	// Maps the index of a KNN query in the request to its pre-filter result:
 	// - If the KNN query is **not filtered**, the value will be `nil`.
 	// - If the KNN query **is filtered**, the value will be an eligible document selector

@@ -401,21 +406,33 @@ func (i *indexImpl) runKnnCollector(ctx context.Context, req *SearchRequest, rea
 			continue
 		}
 		// Applies to all supported types of queries.
-		filterSearcher, _ := filterQ.Searcher(ctx, reader, i.m, search.SearcherOptions{
+		filterSearcher, err := filterQ.Searcher(ctx, reader, i.m, search.SearcherOptions{
 			Score: "none", // just want eligible hits --> don't compute scores if not needed
 		})
+		if err != nil {
+			return nil, err
+		}
 		// Using the index doc count to determine collector size since we do not
 		// have an estimate of the number of eligible docs in the index yet.
 		indexDocCount, err := i.DocCount()
 		if err != nil {
+			// close the searcher before returning
+			filterSearcher.Close()
 			return nil, err
 		}
 		filterColl := collector.NewEligibleCollector(int(indexDocCount))
 		err = filterColl.Collect(ctx, filterSearcher, reader)
 		if err != nil {
+			// close the searcher before returning
+			filterSearcher.Close()
 			return nil, err
 		}
 		knnFilterResults[idx] = filterColl.EligibleSelector()
+		// Close the filter searcher, as we are done with it.
+		err = filterSearcher.Close()
+		if err != nil {
+			return nil, err
+		}
 	}

 	// Add the filter hits when creating the kNN query

@@ -429,12 +446,17 @@ func (i *indexImpl) runKnnCollector(ctx context.Context, req *SearchRequest, rea
 	if err != nil {
 		return nil, err
 	}
+	defer func() {
+		if serr := knnSearcher.Close(); err == nil && serr != nil {
+			err = serr
+		}
+	}()
 	knnCollector := collector.NewKNNCollector(kArray, sumOfK)
 	err = knnCollector.Collect(ctx, knnSearcher, reader)
 	if err != nil {
 		return nil, err
 	}
-	knnHits := knnCollector.Results()
+	knnHits = knnCollector.Results()
 	if !preSearch {
 		knnHits = finalizeKNNResults(req, knnHits)
 	}
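The signature change to named returns is what makes the deferred Close useful: a failed Close surfaces as the function's error, but never masks an earlier one. The same pattern over a plain io.ReadCloser:

	import "io"

	// collect demonstrates the named-return + deferred Close idiom that
	// runKnnCollector adopts above: the Close error is reported only if
	// no earlier error already occurred.
	func collect(src io.ReadCloser) (n int64, err error) {
		defer func() {
			if cerr := src.Close(); err == nil && cerr != nil {
				err = cerr
			}
		}()
		return io.Copy(io.Discard, src)
	}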
298
vendor/github.com/blevesearch/zapx/v16/faiss_vector_posting.go
generated
vendored
298
vendor/github.com/blevesearch/zapx/v16/faiss_vector_posting.go
generated
vendored
@@ -19,15 +19,11 @@ package zap
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"encoding/json"
|
||||
"math"
|
||||
"reflect"
|
||||
|
||||
"github.com/RoaringBitmap/roaring/v2"
|
||||
"github.com/RoaringBitmap/roaring/v2/roaring64"
|
||||
"github.com/bits-and-blooms/bitset"
|
||||
index "github.com/blevesearch/bleve_index_api"
|
||||
faiss "github.com/blevesearch/go-faiss"
|
||||
segment "github.com/blevesearch/scorch_segment_api/v2"
|
||||
)
|
||||
|
||||
@@ -272,45 +268,7 @@ func (vpItr *VecPostingsIterator) BytesWritten() uint64 {
|
||||
return 0
|
||||
}
|
||||
|
||||
// vectorIndexWrapper conforms to scorch_segment_api's VectorIndex interface
|
||||
type vectorIndexWrapper struct {
|
||||
search func(qVector []float32, k int64,
|
||||
params json.RawMessage) (segment.VecPostingsList, error)
|
||||
searchWithFilter func(qVector []float32, k int64, eligibleDocIDs []uint64,
|
||||
params json.RawMessage) (segment.VecPostingsList, error)
|
||||
close func()
|
||||
size func() uint64
|
||||
|
||||
obtainKCentroidCardinalitiesFromIVFIndex func(limit int, descending bool) (
|
||||
[]index.CentroidCardinality, error)
|
||||
}
|
||||
|
||||
func (i *vectorIndexWrapper) Search(qVector []float32, k int64,
|
||||
params json.RawMessage) (
|
||||
segment.VecPostingsList, error) {
|
||||
return i.search(qVector, k, params)
|
||||
}
|
||||
|
||||
func (i *vectorIndexWrapper) SearchWithFilter(qVector []float32, k int64,
|
||||
eligibleDocIDs []uint64, params json.RawMessage) (
|
||||
segment.VecPostingsList, error) {
|
||||
return i.searchWithFilter(qVector, k, eligibleDocIDs, params)
|
||||
}
|
||||
|
||||
func (i *vectorIndexWrapper) Close() {
|
||||
i.close()
|
||||
}
|
||||
|
||||
func (i *vectorIndexWrapper) Size() uint64 {
|
||||
return i.size()
|
||||
}
|
||||
|
||||
func (i *vectorIndexWrapper) ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) (
|
||||
[]index.CentroidCardinality, error) {
|
||||
return i.obtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
|
||||
}
|
||||
|
||||
// InterpretVectorIndex returns a construct of closures (vectorIndexWrapper)
|
||||
// InterpretVectorIndex returns a struct based implementation (vectorIndexWrapper)
|
||||
// that will allow the caller to -
|
||||
// (1) search within an attached vector index
|
||||
// (2) search limited to a subset of documents within an attached vector index
|
||||
@@ -319,248 +277,18 @@ func (i *vectorIndexWrapper) ObtainKCentroidCardinalitiesFromIVFIndex(limit int,
|
||||
func (sb *SegmentBase) InterpretVectorIndex(field string, requiresFiltering bool,
|
||||
except *roaring.Bitmap) (
|
||||
segment.VectorIndex, error) {
|
||||
// Params needed for the closures
|
||||
var vecIndex *faiss.IndexImpl
|
||||
var vecDocIDMap map[int64]uint32
|
||||
var docVecIDMap map[uint32][]int64
|
||||
var vectorIDsToExclude []int64
|
||||
var fieldIDPlus1 uint16
|
||||
var vecIndexSize uint64
|
||||
|
||||
// Utility function to add the corresponding docID and scores for each vector
|
||||
// returned after the kNN query to the newly
|
||||
// created vecPostingsList
|
||||
addIDsToPostingsList := func(pl *VecPostingsList, ids []int64, scores []float32) {
|
||||
for i := 0; i < len(ids); i++ {
|
||||
vecID := ids[i]
|
||||
// Checking if it's present in the vecDocIDMap.
|
||||
// If -1 is returned as an ID(insufficient vectors), this will ensure
|
||||
// it isn't added to the final postings list.
|
||||
if docID, ok := vecDocIDMap[vecID]; ok {
|
||||
code := getVectorCode(docID, scores[i])
|
||||
pl.postings.Add(code)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
wrapVecIndex = &vectorIndexWrapper{
|
||||
search: func(qVector []float32, k int64, params json.RawMessage) (
|
||||
segment.VecPostingsList, error) {
|
||||
// 1. returned postings list (of type PostingsList) has two types of information - docNum and its score.
|
||||
// 2. both the values can be represented using roaring bitmaps.
|
||||
// 3. the Iterator (of type PostingsIterator) returned would operate in terms of VecPostings.
|
||||
// 4. VecPostings would just have the docNum and the score. Every call of Next()
|
||||
// and Advance just returns the next VecPostings. The caller would do a vp.Number()
|
||||
// and the Score() to get the corresponding values
|
||||
rv := &VecPostingsList{
|
||||
except: nil, // todo: handle the except bitmap within postings iterator.
|
||||
postings: roaring64.New(),
|
||||
}
|
||||
|
||||
if vecIndex == nil || vecIndex.D() != len(qVector) {
|
||||
// vector index not found or dimensionality mismatched
|
||||
return rv, nil
|
||||
}
|
||||
|
||||
scores, ids, err := vecIndex.SearchWithoutIDs(qVector, k,
|
||||
vectorIDsToExclude, params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
addIDsToPostingsList(rv, ids, scores)
|
||||
|
||||
return rv, nil
|
||||
},
|
||||
searchWithFilter: func(qVector []float32, k int64,
|
||||
eligibleDocIDs []uint64, params json.RawMessage) (
|
||||
segment.VecPostingsList, error) {
|
||||
// 1. returned postings list (of type PostingsList) has two types of information - docNum and its score.
|
||||
// 2. both the values can be represented using roaring bitmaps.
|
||||
// 3. the Iterator (of type PostingsIterator) returned would operate in terms of VecPostings.
|
||||
// 4. VecPostings would just have the docNum and the score. Every call of Next()
|
||||
// and Advance just returns the next VecPostings. The caller would do a vp.Number()
|
||||
// and the Score() to get the corresponding values
|
||||
rv := &VecPostingsList{
|
||||
except: nil, // todo: handle the except bitmap within postings iterator.
|
||||
postings: roaring64.New(),
|
||||
}
|
||||
if vecIndex == nil || vecIndex.D() != len(qVector) {
|
||||
// vector index not found or dimensionality mismatched
|
||||
return rv, nil
|
||||
}
|
||||
// Check and proceed only if non-zero documents eligible per the filter query.
|
||||
if len(eligibleDocIDs) == 0 {
|
||||
return rv, nil
|
||||
}
|
||||
// If every element in the index is eligible (full selectivity),
|
||||
// then this can basically be considered unfiltered kNN.
|
||||
if len(eligibleDocIDs) == int(sb.numDocs) {
|
||||
scores, ids, err := vecIndex.SearchWithoutIDs(qVector, k,
|
||||
vectorIDsToExclude, params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
addIDsToPostingsList(rv, ids, scores)
|
||||
return rv, nil
|
||||
}
|
||||
// vector IDs corresponding to the local doc numbers to be
|
||||
// considered for the search
|
||||
vectorIDsToInclude := make([]int64, 0, len(eligibleDocIDs))
|
||||
for _, id := range eligibleDocIDs {
|
||||
vecIDs := docVecIDMap[uint32(id)]
|
||||
// In the common case where vecIDs has only one element, which occurs
|
||||
// when a document has only one vector field, we can
|
||||
// avoid the unnecessary overhead of slice unpacking (append(vecIDs...)).
|
||||
// Directly append the single element for efficiency.
|
||||
if len(vecIDs) == 1 {
|
||||
vectorIDsToInclude = append(vectorIDsToInclude, vecIDs[0])
|
||||
} else {
|
||||
vectorIDsToInclude = append(vectorIDsToInclude, vecIDs...)
|
||||
}
|
||||
}
|
||||
// In case a doc has invalid vector fields but valid non-vector fields,
|
||||
// filter hit IDs may be ineligible for the kNN since the document does
|
||||
// not have any/valid vectors.
|
||||
if len(vectorIDsToInclude) == 0 {
|
||||
return rv, nil
|
||||
}
|
||||
// If the index is not an IVF index, then the search can be
|
||||
// performed directly, using the Flat index.
|
||||
if !vecIndex.IsIVFIndex() {
|
||||
// vector IDs corresponding to the local doc numbers to be
|
||||
// considered for the search
|
||||
scores, ids, err := vecIndex.SearchWithIDs(qVector, k,
|
||||
vectorIDsToInclude, params)
|
||||
if err != nil {
|
||||
return nil, err
			}
			addIDsToPostingsList(rv, ids, scores)
			return rv, nil
		}
		// Determining which clusters, identified by centroid ID,
		// have at least one eligible vector and hence, ought to be
		// probed.
		clusterVectorCounts, err := vecIndex.ObtainClusterVectorCountsFromIVFIndex(vectorIDsToInclude)
		if err != nil {
			return nil, err
		}
		var selector faiss.Selector
		// If there are more elements to be included than excluded, it
		// might be quicker to use an exclusion selector as a filter
		// instead of an inclusion selector.
		if float32(len(eligibleDocIDs))/float32(len(docVecIDMap)) > 0.5 {
			// Use a bitset to efficiently track eligible document IDs.
			// This reduces the lookup cost when checking if a document ID is eligible,
			// compared to using a map or slice.
			bs := bitset.New(uint(len(eligibleDocIDs)))
			for _, docID := range eligibleDocIDs {
				bs.Set(uint(docID))
			}
			ineligibleVectorIDs := make([]int64, 0, len(vecDocIDMap)-len(vectorIDsToInclude))
			for docID, vecIDs := range docVecIDMap {
				// Check if the document ID is NOT in the eligible set, marking it as ineligible.
				if !bs.Test(uint(docID)) {
					// In the common case where vecIDs has only one element, which occurs
					// when a document has only one vector field, we can
					// avoid the unnecessary overhead of slice unpacking (append(vecIDs...)).
					// Directly append the single element for efficiency.
					if len(vecIDs) == 1 {
						ineligibleVectorIDs = append(ineligibleVectorIDs, vecIDs[0])
					} else {
						ineligibleVectorIDs = append(ineligibleVectorIDs, vecIDs...)
					}
				}
			}
			selector, err = faiss.NewIDSelectorNot(ineligibleVectorIDs)
		} else {
			selector, err = faiss.NewIDSelectorBatch(vectorIDsToInclude)
		}
		if err != nil {
			return nil, err
		}
		// If no error occurred during the creation of the selector, then
		// it should be deleted once the search is complete.
		defer selector.Delete()
		// Ordering the retrieved centroid IDs by increasing order
		// of distance i.e. decreasing order of proximity to query vector.
		centroidIDs := make([]int64, 0, len(clusterVectorCounts))
		for centroidID := range clusterVectorCounts {
			centroidIDs = append(centroidIDs, centroidID)
		}
		closestCentroidIDs, centroidDistances, err :=
			vecIndex.ObtainClustersWithDistancesFromIVFIndex(qVector, centroidIDs)
		if err != nil {
			return nil, err
		}
		// Getting the nprobe value set at index time.
		nprobe := int(vecIndex.GetNProbe())
		// Determining the minimum number of centroids to be probed
		// to ensure that at least 'k' vectors are collected while
		// examining at least 'nprobe' centroids.
		var eligibleDocsTillNow int64
		minEligibleCentroids := len(closestCentroidIDs)
		for i, centroidID := range closestCentroidIDs {
			eligibleDocsTillNow += clusterVectorCounts[centroidID]
			// Stop once we've examined at least 'nprobe' centroids and
			// collected at least 'k' vectors.
			if eligibleDocsTillNow >= k && i+1 >= nprobe {
				minEligibleCentroids = i + 1
				break
			}
		}
		// Search the clusters specified by 'closestCentroidIDs' for
		// vectors whose IDs are present in 'vectorIDsToInclude'
		scores, ids, err := vecIndex.SearchClustersFromIVFIndex(
			selector, closestCentroidIDs, minEligibleCentroids,
			k, qVector, centroidDistances, params)
		if err != nil {
			return nil, err
		}
		addIDsToPostingsList(rv, ids, scores)
		return rv, nil
	},
	close: func() {
		// skipping the closing because the index is cached and it's being
		// deferred to a later point of time.
		sb.vecIndexCache.decRef(fieldIDPlus1)
	},
	size: func() uint64 {
		return vecIndexSize
	},
	obtainKCentroidCardinalitiesFromIVFIndex: func(limit int, descending bool) ([]index.CentroidCardinality, error) {
		if vecIndex == nil || !vecIndex.IsIVFIndex() {
			return nil, nil
		}

		cardinalities, centroids, err := vecIndex.ObtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
		if err != nil {
			return nil, err
		}
		centroidCardinalities := make([]index.CentroidCardinality, len(cardinalities))
		for i, cardinality := range cardinalities {
			centroidCardinalities[i] = index.CentroidCardinality{
				Centroid:    centroids[i],
				Cardinality: cardinality,
			}
		}
		return centroidCardinalities, nil
	},
}

	err error
)

fieldIDPlus1 = sb.fieldsMap[field]
rv := &vectorIndexWrapper{sb: sb}
fieldIDPlus1 := sb.fieldsMap[field]
if fieldIDPlus1 <= 0 {
	return wrapVecIndex, nil
	return rv, nil
}
rv.fieldIDPlus1 = fieldIDPlus1

vectorSection := sb.fieldsSectionsMap[fieldIDPlus1-1][SectionFaissVectorIndex]
// check if the field has a vector section in the segment.
if vectorSection <= 0 {
	return wrapVecIndex, nil
	return rv, nil
}

pos := int(vectorSection)
@@ -574,15 +302,19 @@ func (sb *SegmentBase) InterpretVectorIndex(field string, requiresFiltering bool
	pos += n
}

vecIndex, vecDocIDMap, docVecIDMap, vectorIDsToExclude, err =
var err error
rv.vecIndex, rv.vecDocIDMap, rv.docVecIDMap, rv.vectorIDsToExclude, err =
	sb.vecIndexCache.loadOrCreate(fieldIDPlus1, sb.mem[pos:], requiresFiltering,
		except)

if vecIndex != nil {
	vecIndexSize = vecIndex.Size()
if err != nil {
	return nil, err
}

return wrapVecIndex, err
if rv.vecIndex != nil {
	rv.vecIndexSize = rv.vecIndex.Size()
}

return rv, nil
}

func (sb *SegmentBase) UpdateFieldStats(stats segment.FieldStats) {

645 vendor/github.com/blevesearch/zapx/v16/faiss_vector_wrapper.go (generated, vendored, new file)
@@ -0,0 +1,645 @@
// Copyright (c) 2025 Couchbase, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//	http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//go:build vectors
// +build vectors

package zap

import (
	"encoding/json"
	"math"
	"slices"

	"github.com/RoaringBitmap/roaring/v2/roaring64"
	"github.com/bits-and-blooms/bitset"
	index "github.com/blevesearch/bleve_index_api"
	faiss "github.com/blevesearch/go-faiss"
	segment "github.com/blevesearch/scorch_segment_api/v2"
)

// MaxMultiVectorDocSearchRetries limits repeated searches when deduplicating
// multi-vector documents. Each retry excludes previously seen vectors to find
// new unique documents. Acts as a safeguard against pathological data distributions.
var MaxMultiVectorDocSearchRetries = 100

// vectorIndexWrapper conforms to scorch_segment_api's VectorIndex interface
type vectorIndexWrapper struct {
	vecIndex           *faiss.IndexImpl
	vecDocIDMap        map[int64]uint32
	docVecIDMap        map[uint32][]int64
	vectorIDsToExclude []int64
	fieldIDPlus1       uint16
	vecIndexSize       uint64

	sb *SegmentBase
}

func (v *vectorIndexWrapper) Search(qVector []float32, k int64,
	params json.RawMessage) (
	segment.VecPostingsList, error) {
	// 1. returned postings list (of type PostingsList) has two types of information - docNum and its score.
	// 2. both the values can be represented using roaring bitmaps.
	// 3. the Iterator (of type PostingsIterator) returned would operate in terms of VecPostings.
	// 4. VecPostings would just have the docNum and the score. Every call of Next()
	//    and Advance just returns the next VecPostings. The caller would do a vp.Number()
	//    and the Score() to get the corresponding values
	rv := &VecPostingsList{
		except:   nil, // todo: handle the except bitmap within postings iterator.
		postings: roaring64.New(),
	}

	if v.vecIndex == nil || v.vecIndex.D() != len(qVector) {
		// vector index not found or dimensionality mismatched
		return rv, nil
	}

	if v.sb.numDocs == 0 {
		return rv, nil
	}

	rs, err := v.searchWithoutIDs(qVector, k,
		v.vectorIDsToExclude, params)
	if err != nil {
		return nil, err
	}

	v.addIDsToPostingsList(rv, rs)

	return rv, nil
}

func (v *vectorIndexWrapper) SearchWithFilter(qVector []float32, k int64,
	eligibleDocIDs []uint64, params json.RawMessage) (
	segment.VecPostingsList, error) {
	// If every element in the index is eligible (full selectivity),
	// then this can basically be considered unfiltered kNN.
	if len(eligibleDocIDs) == int(v.sb.numDocs) {
		return v.Search(qVector, k, params)
	}
	// 1. returned postings list (of type PostingsList) has two types of information - docNum and its score.
	// 2. both the values can be represented using roaring bitmaps.
	// 3. the Iterator (of type PostingsIterator) returned would operate in terms of VecPostings.
	// 4. VecPostings would just have the docNum and the score. Every call of Next()
	//    and Advance just returns the next VecPostings. The caller would do a vp.Number()
	//    and the Score() to get the corresponding values
	rv := &VecPostingsList{
		except:   nil, // todo: handle the except bitmap within postings iterator.
		postings: roaring64.New(),
	}
	if v.vecIndex == nil || v.vecIndex.D() != len(qVector) {
		// vector index not found or dimensionality mismatched
		return rv, nil
	}
	// Check and proceed only if non-zero documents eligible per the filter query.
	if len(eligibleDocIDs) == 0 {
		return rv, nil
	}

	// vector IDs corresponding to the local doc numbers to be
	// considered for the search
	vectorIDsToInclude := make([]int64, 0, len(eligibleDocIDs))
	for _, id := range eligibleDocIDs {
		vecIDs := v.docVecIDMap[uint32(id)]
		// In the common case where vecIDs has only one element, which occurs
		// when a document has only one vector field, we can
		// avoid the unnecessary overhead of slice unpacking (append(vecIDs...)).
		// Directly append the single element for efficiency.
		if len(vecIDs) == 1 {
			vectorIDsToInclude = append(vectorIDsToInclude, vecIDs[0])
		} else {
			vectorIDsToInclude = append(vectorIDsToInclude, vecIDs...)
		}
	}
	// In case a doc has invalid vector fields but valid non-vector fields,
	// filter hit IDs may be ineligible for the kNN since the document does
	// not have any/valid vectors.
	if len(vectorIDsToInclude) == 0 {
		return rv, nil
	}
	// If the index is not an IVF index, then the search can be
	// performed directly, using the Flat index.
	if !v.vecIndex.IsIVFIndex() {
		// vector IDs corresponding to the local doc numbers to be
		// considered for the search
		rs, err := v.searchWithIDs(qVector, k,
			vectorIDsToInclude, params)
		if err != nil {
			return nil, err
		}
		v.addIDsToPostingsList(rv, rs)
		return rv, nil
	}
	// Determining which clusters, identified by centroid ID,
	// have at least one eligible vector and hence, ought to be
	// probed.
	clusterVectorCounts, err := v.vecIndex.ObtainClusterVectorCountsFromIVFIndex(vectorIDsToInclude)
	if err != nil {
		return nil, err
	}
	var ids []int64
	var include bool
	// If there are more elements to be included than excluded, it
	// might be quicker to use an exclusion selector as a filter
	// instead of an inclusion selector.
	if float32(len(eligibleDocIDs))/float32(len(v.docVecIDMap)) > 0.5 {
		// Use a bitset to efficiently track eligible document IDs.
		// This reduces the lookup cost when checking if a document ID is eligible,
		// compared to using a map or slice.
		bs := bitset.New(uint(v.sb.numDocs))
		for _, docID := range eligibleDocIDs {
			bs.Set(uint(docID))
		}
		ineligibleVectorIDs := make([]int64, 0, len(v.vecDocIDMap)-len(vectorIDsToInclude))
		for docID, vecIDs := range v.docVecIDMap {
			// Check if the document ID is NOT in the eligible set, marking it as ineligible.
			if !bs.Test(uint(docID)) {
				// In the common case where vecIDs has only one element, which occurs
				// when a document has only one vector field, we can
				// avoid the unnecessary overhead of slice unpacking (append(vecIDs...)).
				// Directly append the single element for efficiency.
				if len(vecIDs) == 1 {
					ineligibleVectorIDs = append(ineligibleVectorIDs, vecIDs[0])
				} else {
					ineligibleVectorIDs = append(ineligibleVectorIDs, vecIDs...)
				}
			}
		}
		ids = ineligibleVectorIDs
		include = false
	} else {
		ids = vectorIDsToInclude
		include = true
	}
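	// Illustrative numbers (assumed, not from the code): with 1,000 docs of
	// which 900 are eligible, the ratio 0.9 > 0.5 picks the exclusion branch,
	// so the selector covers only the ~100 ineligible docs' vectors instead
	// of the ~900 eligible ones.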
	// Ordering the retrieved centroid IDs by increasing order
	// of distance i.e. decreasing order of proximity to query vector.
	centroidIDs := make([]int64, 0, len(clusterVectorCounts))
	for centroidID := range clusterVectorCounts {
		centroidIDs = append(centroidIDs, centroidID)
	}
	closestCentroidIDs, centroidDistances, err :=
		v.vecIndex.ObtainClustersWithDistancesFromIVFIndex(qVector, centroidIDs)
	if err != nil {
		return nil, err
	}
	// Getting the nprobe value set at index time.
	nprobe := int(v.vecIndex.GetNProbe())
	// Determining the minimum number of centroids to be probed
	// to ensure that at least 'k' vectors are collected while
	// examining at least 'nprobe' centroids.
	// centroidsToProbe range: [nprobe, number of eligible centroids]
	var eligibleVecsTillNow int64
	centroidsToProbe := len(closestCentroidIDs)
	for i, centroidID := range closestCentroidIDs {
		eligibleVecsTillNow += clusterVectorCounts[centroidID]
		// Stop once we've examined at least 'nprobe' centroids and
		// collected at least 'k' vectors.
		if eligibleVecsTillNow >= k && i+1 >= nprobe {
			centroidsToProbe = i + 1
			break
		}
	}
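	// Worked example (values assumed): with nprobe = 3, k = 10 and
	// per-centroid eligible counts [4, 3, 5, ...] in proximity order, the
	// loop stops at i = 2, since 4+3+5 = 12 >= k and 3 >= nprobe, leaving
	// centroidsToProbe = 3.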
	// Search the clusters specified by 'closestCentroidIDs' for
	// vectors whose IDs are present in 'vectorIDsToInclude'
	rs, err := v.searchClustersFromIVFIndex(
		ids, include, closestCentroidIDs, centroidsToProbe,
		k, qVector, centroidDistances, params)
	if err != nil {
		return nil, err
	}
	v.addIDsToPostingsList(rv, rs)
	return rv, nil
}

func (v *vectorIndexWrapper) Close() {
	// skipping the closing because the index is cached and it's being
	// deferred to a later point of time.
	v.sb.vecIndexCache.decRef(v.fieldIDPlus1)
}

func (v *vectorIndexWrapper) Size() uint64 {
	return v.vecIndexSize
}

func (v *vectorIndexWrapper) ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) (
	[]index.CentroidCardinality, error) {
	if v.vecIndex == nil || !v.vecIndex.IsIVFIndex() {
		return nil, nil
	}

	cardinalities, centroids, err := v.vecIndex.ObtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
	if err != nil {
		return nil, err
	}
	centroidCardinalities := make([]index.CentroidCardinality, len(cardinalities))
	for i, cardinality := range cardinalities {
		centroidCardinalities[i] = index.CentroidCardinality{
			Centroid:    centroids[i],
			Cardinality: cardinality,
		}
	}
	return centroidCardinalities, nil
}

// Utility function to add the corresponding docID and scores for each unique
// docID retrieved from the vector index search to the newly created vecPostingsList
func (v *vectorIndexWrapper) addIDsToPostingsList(pl *VecPostingsList, rs resultSet) {
	rs.iterate(func(docID uint32, score float32) {
		// transform the docID and score to vector code format
		code := getVectorCode(docID, score)
		// add to postings list, this ensures ordered storage
		// based on the docID since it occupies the upper 32 bits
		pl.postings.Add(code)
	})
}
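
// Sketch of the assumed code layout (getVectorCode is defined elsewhere in
// this package, so this is an illustration, not its definition): the docID
// occupies the upper 32 bits and the float32 score bits the lower 32,
// roughly uint64(docID)<<32 | uint64(math.Float32bits(score)). Sorting the
// resulting codes therefore sorts by docID first, which is what makes the
// roaring bitmap storage docID-ordered.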

// docSearch performs a search on the vector index to retrieve
// top k documents based on the provided search function.
// It handles deduplication of documents that may have multiple
// vectors associated with them.
// The prepareNextIter function is used to set up the state
// for the next iteration, if more searches are needed to find
// k unique documents. The callback receives the number of iterations
// done so far and the vector ids retrieved in the last search. While
// preparing the next iteration, prepareNextIter may also decide that no
// further searches are needed, and return false to stop the loop.
func (v *vectorIndexWrapper) docSearch(k int64, numDocs uint64,
	search func() (scores []float32, labels []int64, err error),
	prepareNextIter func(numIter int, labels []int64) bool) (resultSet, error) {
	// create a result set to hold top K docIDs and their scores
	rs := newResultSet(k, numDocs)
	// flag to indicate if we have exhausted the vector index
	var exhausted bool
	// keep track of number of iterations done, we execute the loop more than once only when
	// we have multi-vector documents leading to duplicates in docIDs retrieved
	numIter := 0
	// get the metric type of the index to help with deduplication logic
	metricType := v.vecIndex.MetricType()
	// we keep searching until we have k unique docIDs or we have exhausted the vector index
	// or we have reached the maximum number of deduplication iterations allowed
	for numIter < MaxMultiVectorDocSearchRetries && rs.size() < k && !exhausted {
		// search the vector index
		numIter++
		scores, labels, err := search()
		if err != nil {
			return nil, err
		}
		// process the retrieved ids and scores, getting the corresponding docIDs
		// for each vector id retrieved, and storing the best score for each unique docID
		// the moment we see a -1 for a vector id, we stop processing further since
		// it indicates there are no more vectors to be retrieved and break out of the loop
		// by setting the exhausted flag
		for i, vecID := range labels {
			if vecID == -1 {
				exhausted = true
				break
			}
			docID, exists := v.getDocIDForVectorID(vecID)
			if !exists {
				continue
			}
			score := scores[i]
			prevScore, exists := rs.get(docID)
			if !exists {
				// first time seeing this docID, so just store it
				rs.put(docID, score)
				continue
			}
			// we have seen this docID before, so we must compare scores
			// check the index metric type first to check how we compare distances/scores
			// and store the best score for the docID accordingly
			// for inner product, higher the score, better the match
			// for euclidean distance, lower the score/distance, better the match
			// so we invert the comparison accordingly
			switch metricType {
			case faiss.MetricInnerProduct: // similarity metrics like dot product => higher is better
				if score > prevScore {
					rs.put(docID, score)
				}
			case faiss.MetricL2:
				fallthrough
			default: // distance metrics like euclidean distance => lower is better
				if score < prevScore {
					rs.put(docID, score)
				}
			}
		}
		// if we still have fewer than k unique docIDs, prepare for the next iteration, provided
		// we have not exhausted the index
		if rs.size() < k && !exhausted {
			// prepare state for next iteration
			shouldContinue := prepareNextIter(numIter, labels)
			if !shouldContinue {
				break
			}
		}
	}
	// at this point we either have k unique docIDs or we have exhausted
	// the vector index or we have reached the maximum number of deduplication iterations allowed
	// or the prepareNextIter function decided to break out of the loop
	return rs, nil
}
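
// Illustrative trace (values assumed): a k = 2 search over a multi-vector
// segment might return labels [7, 3, 9] where vectors 7 and 3 map to the
// same docID. docSearch keeps the better of their two scores, still holds
// only one unique doc plus the doc for vector 9, and when that is fewer
// than k it invokes prepareNextIter before searching again; a label of -1
// would instead mark the index as exhausted and end the loop.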

// searchWithoutIDs performs a search on the vector index to retrieve the top K documents while
// excluding any vector IDs specified in the exclude slice.
func (v *vectorIndexWrapper) searchWithoutIDs(qVector []float32, k int64, exclude []int64, params json.RawMessage) (
	resultSet, error) {
	return v.docSearch(k, v.sb.numDocs,
		func() ([]float32, []int64, error) {
			return v.vecIndex.SearchWithoutIDs(qVector, k, exclude, params)
		},
		func(numIter int, labels []int64) bool {
			// if this is the first loop iteration and we have < k unique docIDs,
			// we must clone the existing exclude slice before appending to it
			// to avoid modifying the original slice passed in by the caller
			if numIter == 1 {
				exclude = slices.Clone(exclude)
			}
			// prepare the exclude list for the next iteration by adding
			// the vector ids retrieved in this iteration
			exclude = append(exclude, labels...)
			// with exclude list updated, we can proceed to the next iteration
			return true
		})
}

// searchWithIDs performs a search on the vector index to retrieve the top K documents while only
// considering the vector IDs specified in the include slice.
func (v *vectorIndexWrapper) searchWithIDs(qVector []float32, k int64, include []int64, params json.RawMessage) (
	resultSet, error) {
	// if the number of iterations > 1, we will be modifying the include slice
	// to exclude vector ids already seen, so we use this set to track the
	// include set for the next iteration, this is reused across iterations
	// and allocated only once, when numIter == 1
	var includeSet map[int64]struct{}
	return v.docSearch(k, v.sb.numDocs,
		func() ([]float32, []int64, error) {
			return v.vecIndex.SearchWithIDs(qVector, k, include, params)
		},
		func(numIter int, labels []int64) bool {
			// if this is the first loop iteration and we have < k unique docIDs,
			// we clone the existing include slice before modifying it
			if numIter == 1 {
				include = slices.Clone(include)
				// build the include set for subsequent iterations
				includeSet = make(map[int64]struct{}, len(include))
				for _, id := range include {
					includeSet[id] = struct{}{}
				}
			}
			// prepare the include list for the next iteration
			// by removing the vector ids retrieved in this iteration
			// from the include set
			for _, id := range labels {
				delete(includeSet, id)
			}
			// now build the next include slice from the set
			include = include[:0]
			for id := range includeSet {
				include = append(include, id)
			}
			// only continue searching if we still have vector ids to include
			return len(include) != 0
		})
}

// searchClustersFromIVFIndex performs a search on the IVF vector index to retrieve the top K documents
// while either including or excluding the vector IDs specified in the ids slice, depending on the include flag.
// It takes into account the eligible centroid IDs and ensures that at least centroidsToProbe are probed.
// If after a few iterations we haven't found enough documents, it dynamically increases the number of
// clusters searched (up to the number of eligible centroids) to ensure we can find k unique documents.
func (v *vectorIndexWrapper) searchClustersFromIVFIndex(ids []int64, include bool, eligibleCentroidIDs []int64,
	centroidsToProbe int, k int64, x, centroidDis []float32, params json.RawMessage) (
	resultSet, error) {
	// if the number of iterations > 1, we will be modifying the include slice
	// to exclude vector ids already seen, so we use this set to track the
	// include set for the next iteration, this is reused across iterations
	// and allocated only once, when numIter == 1
	var includeSet map[int64]struct{}
	var totalEligibleCentroids = len(eligibleCentroidIDs)
	// Threshold for when to start increasing: after 2 iterations without
	// finding enough documents, we start increasing centroidsToProbe,
	// up to the total number of eligible centroids available
	const nprobeIncreaseThreshold = 2
	return v.docSearch(k, v.sb.numDocs,
		func() ([]float32, []int64, error) {
			// build the selector based on whatever ids is as of now and the
			// include/exclude flag
			selector, err := v.getSelector(ids, include)
			if err != nil {
				return nil, nil, err
			}
			// once the main search is done we must free the selector
			defer selector.Delete()
			return v.vecIndex.SearchClustersFromIVFIndex(selector, eligibleCentroidIDs,
				centroidsToProbe, k, x, centroidDis, params)
		},
		func(numIter int, labels []int64) bool {
			// if this is the first loop iteration and we have < k unique docIDs,
			// we must clone the existing ids slice before modifying it to avoid
			// modifying the original slice passed in by the caller
			if numIter == 1 {
				ids = slices.Clone(ids)
				if include {
					// build the include set for subsequent iterations
					// by adding all the ids initially present in the ids slice
					includeSet = make(map[int64]struct{}, len(ids))
					for _, id := range ids {
						includeSet[id] = struct{}{}
					}
				}
			}
			// if we have iterated at least nprobeIncreaseThreshold times
			// and still have not found enough unique docIDs, we increase
			// the number of centroids to probe for the next iteration
			// to try and find more vectors/documents
			if numIter >= nprobeIncreaseThreshold && centroidsToProbe < len(eligibleCentroidIDs) {
				// Calculate how much to increase: increase by 50% of the remaining centroids to probe,
				// but at least by 1 to ensure progress.
				increaseAmount := max((totalEligibleCentroids-centroidsToProbe)/2, 1)
				// Update centroidsToProbe, ensuring it does not exceed the total eligible centroids
				centroidsToProbe = min(centroidsToProbe+increaseAmount, len(eligibleCentroidIDs))
			}
			// prepare the exclude/include list for the next iteration
			if include {
				// removing the vector ids retrieved in this iteration
				// from the include set and rebuild the ids slice from the set
				for _, id := range labels {
					delete(includeSet, id)
				}
				// now build the next include slice from the set
				ids = ids[:0]
				for id := range includeSet {
					ids = append(ids, id)
				}
				// only continue searching if we still have vector ids to include
				return len(ids) != 0
			} else {
				// appending the vector ids retrieved in this iteration
				// to the exclude list
				ids = append(ids, labels...)
				// with exclude list updated, we can proceed to the next iteration
				return true
			}
		})
}

// Utility function to get a faiss.Selector based on the include/exclude flag
// and the vector ids provided, if include is true, it returns an inclusion selector,
// else it returns an exclusion selector. The caller must ensure to free the selector
// by calling selector.Delete() when done using it.
func (v *vectorIndexWrapper) getSelector(ids []int64, include bool) (selector faiss.Selector, err error) {
	if include {
		selector, err = faiss.NewIDSelectorBatch(ids)
	} else {
		selector, err = faiss.NewIDSelectorNot(ids)
	}
	if err != nil {
		return nil, err
	}
	return selector, nil
}

// Utility function to get the docID for a given vectorID, used for the
// deduplication logic, to map vectorIDs back to their corresponding docIDs
func (v *vectorIndexWrapper) getDocIDForVectorID(vecID int64) (uint32, bool) {
	docID, exists := v.vecDocIDMap[vecID]
	return docID, exists
}

// resultSet is a data structure to hold (docID, score) pairs while ensuring
// that each docID is unique. It supports efficient insertion, retrieval,
// and iteration over the stored pairs.
type resultSet interface {
	// Add a (docID, score) pair to the result set.
	put(docID uint32, score float32)
	// Get the score for a given docID. Returns false if docID not present.
	get(docID uint32) (float32, bool)
	// Iterate over all (docID, score) pairs in the result set.
	iterate(func(docID uint32, score float32))
	// Get the size of the result set.
	size() int64
}

// resultSetSliceThreshold defines the threshold ratio of k to total documents
// in the index, below which a map-based resultSet is used, and above which
// a slice-based resultSet is used.
// It is derived using the following reasoning:
//
//	Let N = total number of documents
//	Let K = number of top K documents to retrieve
//
// Memory usage if the Result Set uses a map[uint32]float32 of size K underneath:
//
//	~20 bytes per entry (key + value + map overhead)
//	Total ≈ 20 * K bytes
//
// Memory usage if the Result Set uses a slice of float32 of size N underneath:
//
//	4 bytes per entry
//	Total ≈ 4 * N bytes
//
// We want the threshold below which a map is more memory-efficient than a slice:
//
//	20K < 4N
//	K/N < 4/20
//
// Therefore, if the ratio of K to N is less than 0.2 (4/20), we use a map-based resultSet.
const resultSetSliceThreshold float64 = 0.2
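
// Worked example (numbers assumed): for N = 10,000 docs the slice always
// costs ~40 KB, while the map costs ~20K bytes; the map wins while
// 20K < 4N, i.e. K < 2,000 = 0.2N. A k = 100 query uses ~2 KB as a map,
// whereas k = 5,000 would cost ~100 KB and the slice is chosen instead.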

// newResultSet creates a new resultSet
func newResultSet(k int64, numDocs uint64) resultSet {
	// if numDocs is zero (empty index), just use the map-based resultSet, as it's a no-op.
	// Otherwise decide based on the percentage of documents being retrieved: if we require
	// greater than 20% of total documents, use the slice-based resultSet for better memory
	// efficiency, else use the map-based resultSet
	if numDocs == 0 || float64(k)/float64(numDocs) < resultSetSliceThreshold {
		return newResultSetMap(k)
	}
	return newResultSetSlice(numDocs)
}

type resultSetMap struct {
	data map[uint32]float32
}

func newResultSetMap(k int64) resultSet {
	return &resultSetMap{
		data: make(map[uint32]float32, k),
	}
}

func (rs *resultSetMap) put(docID uint32, score float32) {
	rs.data[docID] = score
}

func (rs *resultSetMap) get(docID uint32) (float32, bool) {
	score, exists := rs.data[docID]
	return score, exists
}

func (rs *resultSetMap) iterate(f func(docID uint32, score float32)) {
	for docID, score := range rs.data {
		f(docID, score)
	}
}

func (rs *resultSetMap) size() int64 {
	return int64(len(rs.data))
}

type resultSetSlice struct {
	count int64
	data  []float32
}

func newResultSetSlice(numDocs uint64) resultSet {
	data := make([]float32, numDocs)
	// scores can be negative, so initialize to a sentinel value, which is NaN
	sentinel := float32(math.NaN())
	for i := range data {
		data[i] = sentinel
	}
	return &resultSetSlice{
		count: 0,
		data:  data,
	}
}

func (rs *resultSetSlice) put(docID uint32, score float32) {
	// only increment count if this docID was not already present
	if math.IsNaN(float64(rs.data[docID])) {
		rs.count++
	}
	rs.data[docID] = score
}

func (rs *resultSetSlice) get(docID uint32) (float32, bool) {
	score := rs.data[docID]
	if math.IsNaN(float64(score)) {
		return 0, false
	}
	return score, true
}

func (rs *resultSetSlice) iterate(f func(docID uint32, score float32)) {
	for docID, score := range rs.data {
		if !math.IsNaN(float64(score)) {
			f(uint32(docID), score)
		}
	}
}

func (rs *resultSetSlice) size() int64 {
	return rs.count
}

60 vendor/github.com/clipperhouse/displaywidth/CHANGELOG.md (generated, vendored, new file)
@@ -0,0 +1,60 @@
# Changelog

## [0.6.0]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.5.0...v0.6.0)

### Added
- New `StringGraphemes` and `BytesGraphemes` methods, for iterating over the
  widths of grapheme clusters.

### Changed
- Added ASCII fast paths

## [0.5.0]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.4.1...v0.5.0)

### Added
- Unicode 16 support
- Improved emoji presentation handling per Unicode TR51

### Changed
- Corrected VS15 (U+FE0E) handling: now preserves base character width (no-op) per Unicode TR51
- Performance optimizations: reduced property lookups

### Fixed
- VS15 variation selector now correctly preserves base character width instead of forcing width 1

## [0.4.1]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.4.0...v0.4.1)

### Changed
- Updated uax29 dependency
- Improved flag handling

## [0.4.0]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.3.1...v0.4.0)

### Added
- Support for variation selectors (VS15, VS16) and regional indicator pairs (flags)

## [0.3.1]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.3.0...v0.3.1)

### Added
- Fuzz testing support

### Changed
- Updated stringish dependency

## [0.3.0]

[Compare](https://github.com/clipperhouse/displaywidth/compare/v0.2.0...v0.3.0)

### Changed
- Dropped compatibility with go-runewidth
- Trie implementation cleanup

110 vendor/github.com/clipperhouse/displaywidth/README.md (generated, vendored)
@@ -5,6 +5,7 @@ A high-performance Go package for measuring the monospace display width of strin
[](https://pkg.go.dev/github.com/clipperhouse/displaywidth)
[](https://github.com/clipperhouse/displaywidth/actions/workflows/gotest.yml)
[](https://github.com/clipperhouse/displaywidth/actions/workflows/gofuzz.yml)

## Install
```bash
go get github.com/clipperhouse/displaywidth
@@ -32,84 +33,91 @@ func main() {
}
```

For most purposes, you should use the `String` or `Bytes` methods.


### Options

You can specify East Asian Width and Strict Emoji Neutral settings. If
unspecified, the default is `EastAsianWidth: false, StrictEmojiNeutral: true`.
You can specify East Asian Width settings. When false (default),
[East Asian Ambiguous characters](https://www.unicode.org/reports/tr11/#Ambiguous)
are treated as width 1. When true, East Asian Ambiguous characters are treated
as width 2.

```go
options := displaywidth.Options{
	EastAsianWidth:     true,
	StrictEmojiNeutral: false,
myOptions := displaywidth.Options{
	EastAsianWidth: true,
}

width := options.String("Hello, 世界!")
width := myOptions.String("Hello, 世界!")
fmt.Println(width)
```

## Details
## Technical details

This package implements the Unicode East Asian Width standard (UAX #11) and is
intended to be compatible with `go-runewidth`. It operates on bytes without
decoding runes for better performance.
This package implements the Unicode East Asian Width standard
([UAX #11](https://www.unicode.org/reports/tr11/)), and handles
[variation selectors](https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_block)),
and [regional indicator pairs](https://en.wikipedia.org/wiki/Regional_indicator_symbol)
(flags). We implement [Unicode TR51](https://unicode.org/reports/tr51/).
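
An illustrative consequence of the above (my reading of TR51, not an excerpt
from the package docs): a regional indicator pair and a VS16 sequence each
measure as a single grapheme of width 2.

```go
displaywidth.String("🇺🇸") // 2: one flag, i.e. a regional indicator pair
displaywidth.String("☂️") // 2: U+2602 followed by VS16 (emoji presentation)
```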

`clipperhouse/displaywidth`, `mattn/go-runewidth`, and `rivo/uniseg` will
give the same outputs for most real-world text. See extensive details in the
[compatibility analysis](comparison/COMPATIBILITY_ANALYSIS.md).

If you wish to investigate the core logic, see the `lookupProperties` and `width`
functions in [width.go](width.go#L135). The essential trie generation logic is in
`buildPropertyBitmap` in [unicode.go](internal/gen/unicode.go#L317).

I (@clipperhouse) am keeping an eye on [emerging standards and test suites](https://www.jeffquast.com/post/state-of-terminal-emulation-2025/).

## Prior Art

[mattn/go-runewidth](https://github.com/mattn/go-runewidth)

[rivo/uniseg](https://github.com/rivo/uniseg)

[x/text/width](https://pkg.go.dev/golang.org/x/text/width)

[x/text/internal/triegen](https://pkg.go.dev/golang.org/x/text/internal/triegen)

## Benchmarks

Part of my motivation is the insight that we can avoid decoding runes for better performance.

```bash
cd comparison
go test -bench=. -benchmem
```

```
goos: darwin
goarch: arm64
pkg: github.com/clipperhouse/displaywidth
pkg: github.com/clipperhouse/displaywidth/comparison
cpu: Apple M2
BenchmarkStringDefault/displaywidth-8        10537 ns/op   160.10 MB/s   0 B/op   0 allocs/op
BenchmarkStringDefault/go-runewidth-8        14162 ns/op   119.12 MB/s   0 B/op   0 allocs/op
BenchmarkString_EAW/displaywidth-8           10776 ns/op   156.55 MB/s   0 B/op   0 allocs/op
BenchmarkString_EAW/go-runewidth-8           23987 ns/op    70.33 MB/s   0 B/op   0 allocs/op
BenchmarkString_StrictEmoji/displaywidth-8   10892 ns/op   154.88 MB/s   0 B/op   0 allocs/op
BenchmarkString_StrictEmoji/go-runewidth-8   14552 ns/op   115.93 MB/s   0 B/op   0 allocs/op
BenchmarkString_ASCII/displaywidth-8          1116 ns/op   114.72 MB/s   0 B/op   0 allocs/op
BenchmarkString_ASCII/go-runewidth-8          1178 ns/op   108.67 MB/s   0 B/op   0 allocs/op
BenchmarkString_Unicode/displaywidth-8       896.9 ns/op   148.29 MB/s   0 B/op   0 allocs/op
BenchmarkString_Unicode/go-runewidth-8        1434 ns/op    92.72 MB/s   0 B/op   0 allocs/op
BenchmarkStringWidth_Emoji/displaywidth-8     3033 ns/op   238.74 MB/s   0 B/op   0 allocs/op
BenchmarkStringWidth_Emoji/go-runewidth-8     4841 ns/op   149.56 MB/s   0 B/op   0 allocs/op
BenchmarkString_Mixed/displaywidth-8          4064 ns/op   124.74 MB/s   0 B/op   0 allocs/op
BenchmarkString_Mixed/go-runewidth-8          4696 ns/op   107.97 MB/s   0 B/op   0 allocs/op
BenchmarkString_ControlChars/displaywidth-8  320.6 ns/op   102.93 MB/s   0 B/op   0 allocs/op
BenchmarkString_ControlChars/go-runewidth-8  373.8 ns/op    88.28 MB/s   0 B/op   0 allocs/op
BenchmarkRuneDefault/displaywidth-8          335.5 ns/op   411.35 MB/s   0 B/op   0 allocs/op
BenchmarkRuneDefault/go-runewidth-8          681.2 ns/op   202.58 MB/s   0 B/op   0 allocs/op
BenchmarkRuneWidth_EAW/displaywidth-8        146.7 ns/op   374.80 MB/s   0 B/op   0 allocs/op
BenchmarkRuneWidth_EAW/go-runewidth-8        495.6 ns/op   110.98 MB/s   0 B/op   0 allocs/op
BenchmarkRuneWidth_ASCII/displaywidth-8      63.00 ns/op   460.33 MB/s   0 B/op   0 allocs/op
BenchmarkRuneWidth_ASCII/go-runewidth-8      68.90 ns/op   420.91 MB/s   0 B/op   0 allocs/op

BenchmarkString_Mixed/clipperhouse/displaywidth-8  10469 ns/op   161.15 MB/s   0 B/op   0 allocs/op
BenchmarkString_Mixed/mattn/go-runewidth-8         14250 ns/op   118.39 MB/s   0 B/op   0 allocs/op
BenchmarkString_Mixed/rivo/uniseg-8                19258 ns/op    87.60 MB/s   0 B/op   0 allocs/op

BenchmarkString_EastAsian/clipperhouse/displaywidth-8  10518 ns/op   160.39 MB/s   0 B/op   0 allocs/op
BenchmarkString_EastAsian/mattn/go-runewidth-8         23827 ns/op    70.80 MB/s   0 B/op   0 allocs/op
BenchmarkString_EastAsian/rivo/uniseg-8                19537 ns/op    86.35 MB/s   0 B/op   0 allocs/op

BenchmarkString_ASCII/clipperhouse/displaywidth-8   1027 ns/op   124.61 MB/s   0 B/op   0 allocs/op
BenchmarkString_ASCII/mattn/go-runewidth-8          1166 ns/op   109.78 MB/s   0 B/op   0 allocs/op
BenchmarkString_ASCII/rivo/uniseg-8                 1551 ns/op    82.52 MB/s   0 B/op   0 allocs/op

BenchmarkString_Emoji/clipperhouse/displaywidth-8   3164 ns/op   228.84 MB/s   0 B/op   0 allocs/op
BenchmarkString_Emoji/mattn/go-runewidth-8          4728 ns/op   153.13 MB/s   0 B/op   0 allocs/op
BenchmarkString_Emoji/rivo/uniseg-8                 6489 ns/op   111.57 MB/s   0 B/op   0 allocs/op

BenchmarkRune_Mixed/clipperhouse/displaywidth-8     3429 ns/op   491.96 MB/s   0 B/op   0 allocs/op
BenchmarkRune_Mixed/mattn/go-runewidth-8            5308 ns/op   317.81 MB/s   0 B/op   0 allocs/op

BenchmarkRune_EastAsian/clipperhouse/displaywidth-8   3419 ns/op   493.49 MB/s   0 B/op   0 allocs/op
BenchmarkRune_EastAsian/mattn/go-runewidth-8         15321 ns/op   110.11 MB/s   0 B/op   0 allocs/op

BenchmarkRune_ASCII/clipperhouse/displaywidth-8     254.4 ns/op   503.19 MB/s   0 B/op   0 allocs/op
BenchmarkRune_ASCII/mattn/go-runewidth-8            264.3 ns/op   484.31 MB/s   0 B/op   0 allocs/op

BenchmarkRune_Emoji/clipperhouse/displaywidth-8     1374 ns/op   527.02 MB/s   0 B/op   0 allocs/op
BenchmarkRune_Emoji/mattn/go-runewidth-8            2210 ns/op   327.66 MB/s   0 B/op   0 allocs/op
```

I use a similar technique in [this grapheme cluster library](https://github.com/clipperhouse/uax29).

## Compatibility

`displaywidth` will mostly give the same outputs as `go-runewidth`, but there are some differences:

- Unicode category Mn (Nonspacing Mark): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes.
- Unicode category Cf (Format): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes.
- Unicode category Mc (Spacing Mark): `displaywidth` will return width 1, `go-runewidth` may return width 0 for some runes.
- Unicode category Cs (Surrogate): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes. Surrogates are not valid UTF-8; some packages may turn them into the replacement character (U+FFFD).
- Unicode category Zl (Line separator): `displaywidth` will return width 0, `go-runewidth` may return width 1.
- Unicode category Zp (Paragraph separator): `displaywidth` will return width 0, `go-runewidth` may return width 1.
- Unicode Noncharacters (U+FFFE and U+FFFF): `displaywidth` will return width 0, `go-runewidth` may return width 1.

See `TestCompatibility` for more details.

72 vendor/github.com/clipperhouse/displaywidth/graphemes.go (generated, vendored, new file)
@@ -0,0 +1,72 @@
package displaywidth

import (
	"github.com/clipperhouse/stringish"
	"github.com/clipperhouse/uax29/v2/graphemes"
)

// Graphemes is an iterator over grapheme clusters.
//
// Iterate using the Next method, and get the width of the current grapheme
// using the Width method.
type Graphemes[T stringish.Interface] struct {
	iter    graphemes.Iterator[T]
	options Options
}

// Next advances the iterator to the next grapheme cluster.
func (g *Graphemes[T]) Next() bool {
	return g.iter.Next()
}

// Value returns the current grapheme cluster.
func (g *Graphemes[T]) Value() T {
	return g.iter.Value()
}

// Width returns the display width of the current grapheme cluster.
func (g *Graphemes[T]) Width() int {
	return graphemeWidth(g.Value(), g.options)
}

// StringGraphemes returns an iterator over grapheme clusters for the given
// string.
//
// Iterate using the Next method, and get the width of the current grapheme
// using the Width method.
func StringGraphemes(s string) Graphemes[string] {
	return DefaultOptions.StringGraphemes(s)
}

// StringGraphemes returns an iterator over grapheme clusters for the given
// string, with the given options.
//
// Iterate using the Next method, and get the width of the current grapheme
// using the Width method.
func (options Options) StringGraphemes(s string) Graphemes[string] {
	return Graphemes[string]{
		iter:    graphemes.FromString(s),
		options: options,
	}
}

// BytesGraphemes returns an iterator over grapheme clusters for the given
// []byte.
//
// Iterate using the Next method, and get the width of the current grapheme
// using the Width method.
func BytesGraphemes(s []byte) Graphemes[[]byte] {
	return DefaultOptions.BytesGraphemes(s)
}

// BytesGraphemes returns an iterator over grapheme clusters for the given
// []byte, with the given options.
//
// Iterate using the Next method, and get the width of the current grapheme
// using the Width method.
func (options Options) BytesGraphemes(s []byte) Graphemes[[]byte] {
	return Graphemes[[]byte]{
		iter:    graphemes.FromBytes(s),
		options: options,
	}
}
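
// Illustrative usage of the iterator above (a sketch; assumes a caller that
// imports fmt and this package):
//
//	g := displaywidth.StringGraphemes("héllo 🇺🇸")
//	for g.Next() {
//		fmt.Printf("%q -> %d\n", g.Value(), g.Width())
//	}
//
// Each iteration yields one grapheme cluster; the flag at the end is a
// single cluster of width 2.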

91 vendor/github.com/clipperhouse/displaywidth/tables.go (generated, vendored, new file)
@@ -0,0 +1,91 @@
package displaywidth

// propertyWidths is a jump table of sorts, instead of a switch
var propertyWidths = [5]int{
	_Default:              1,
	_Zero_Width:           0,
	_East_Asian_Wide:      2,
	_East_Asian_Ambiguous: 1,
	_Emoji:                2,
}

// asciiWidths is a lookup table for single-byte character widths. Printable
// ASCII characters have width 1, control characters have width 0.
//
// It is intended for valid single-byte UTF-8, which means <128.
//
// If you look up an index >= 128, that is either:
//   - invalid UTF-8, or
//   - a multi-byte UTF-8 sequence, in which case you should be operating on
//     the grapheme cluster, and not using this table
//
// We will return a default value of 1 in those cases, so as not to panic.
var asciiWidths = [256]int8{
	// Control characters (0x00-0x1F): width 0
	0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
	// Printable ASCII (0x20-0x7E): width 1
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	// DEL (0x7F): width 0
	0,
	// >= 128
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
	1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
}
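
// Worked lookups, consistent with the table above: asciiWidths['A'] == 1
// (printable), asciiWidths['\t'] == 0 (tab is a control character here),
// and asciiWidths[0x7F] == 0 (DEL).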

// asciiProperties is a lookup table for single-byte character properties.
// It is intended for valid single-byte UTF-8, which means <128.
//
// If you look up an index >= 128, that is either:
//   - invalid UTF-8, or
//   - a multi-byte UTF-8 sequence, in which case you should be operating on
//     the grapheme cluster, and not using this table
//
// We will return a default value of _Default in those cases, so as not to
// panic.
var asciiProperties = [256]property{
	// Control characters (0x00-0x1F): _Zero_Width
	_Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width,
	_Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width,
	_Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width,
	_Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width, _Zero_Width,
	// Printable ASCII (0x20-0x7E): _Default
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default,
	// DEL (0x7F): _Zero_Width
	_Zero_Width,
	// >= 128
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
	_Default, _Default, _Default, _Default, _Default, _Default, _Default, _Default,
}

2727 vendor/github.com/clipperhouse/displaywidth/trie.go (generated, vendored)
File diff suppressed because it is too large.
280 vendor/github.com/clipperhouse/displaywidth/width.go (generated, vendored)
@@ -7,153 +7,205 @@ import (
|
||||
"github.com/clipperhouse/uax29/v2/graphemes"
|
||||
)
|
||||
|
||||
// String calculates the display width of a string
|
||||
// using the [DefaultOptions]
|
||||
// Options allows you to specify the treatment of ambiguous East Asian
|
||||
// characters. When EastAsianWidth is false (default), ambiguous East Asian
|
||||
// characters are treated as width 1. When EastAsianWidth is true, ambiguous
|
||||
// East Asian characters are treated as width 2.
|
||||
type Options struct {
|
||||
EastAsianWidth bool
|
||||
}
|
||||
|
||||
// DefaultOptions is the default options for the display width
|
||||
// calculation, which is EastAsianWidth: false.
|
||||
var DefaultOptions = Options{EastAsianWidth: false}
|
||||
|
||||
// String calculates the display width of a string,
|
||||
// by iterating over grapheme clusters in the string
|
||||
// and summing their widths.
|
||||
func String(s string) int {
|
||||
return DefaultOptions.String(s)
|
||||
}
|
||||
|
||||
// Bytes calculates the display width of a []byte
|
||||
// using the [DefaultOptions]
|
||||
// String calculates the display width of a string, for the given options, by
|
||||
// iterating over grapheme clusters in the string and summing their widths.
|
||||
func (options Options) String(s string) int {
|
||||
// Optimization: no need to parse grapheme
|
||||
switch len(s) {
|
||||
case 0:
|
||||
return 0
|
||||
case 1:
|
||||
return int(asciiWidths[s[0]])
|
||||
}
|
||||
|
||||
width := 0
|
||||
g := graphemes.FromString(s)
|
||||
for g.Next() {
|
||||
width += graphemeWidth(g.Value(), options)
|
||||
}
|
||||
return width
|
||||
}
|
||||
|
||||
// Bytes calculates the display width of a []byte,
|
||||
// by iterating over grapheme clusters in the byte slice
|
||||
// and summing their widths.
|
||||
func Bytes(s []byte) int {
|
||||
return DefaultOptions.Bytes(s)
|
||||
}
|
||||
|
||||
// Bytes calculates the display width of a []byte, for the given options, by
|
||||
// iterating over grapheme clusters in the slice and summing their widths.
|
||||
func (options Options) Bytes(s []byte) int {
|
||||
// Optimization: no need to parse grapheme
|
||||
switch len(s) {
|
||||
case 0:
|
||||
return 0
|
||||
case 1:
|
||||
return int(asciiWidths[s[0]])
|
||||
}
|
||||
|
||||
width := 0
|
||||
g := graphemes.FromBytes(s)
|
||||
for g.Next() {
|
||||
width += graphemeWidth(g.Value(), options)
|
||||
}
|
||||
return width
|
||||
}
|
||||
|
||||
// Rune calculates the display width of a rune. You
|
||||
// should almost certainly use [String] or [Bytes] for
|
||||
// most purposes.
|
||||
//
|
||||
// The smallest unit of display width is a grapheme
|
||||
// cluster, not a rune. Iterating over runes to measure
|
||||
// width is incorrect in many cases.
|
||||
func Rune(r rune) int {
|
||||
return DefaultOptions.Rune(r)
|
||||
}
|
||||
|
||||
type Options struct {
|
||||
EastAsianWidth bool
|
||||
StrictEmojiNeutral bool
|
||||
}
|
||||
|
||||
var DefaultOptions = Options{
|
||||
EastAsianWidth: false,
|
||||
StrictEmojiNeutral: true,
|
||||
}
|
||||
|
||||
// String calculates the display width of a string
|
||||
// for the given options
|
||||
func (options Options) String(s string) int {
|
||||
if len(s) == 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
total := 0
|
||||
g := graphemes.FromString(s)
|
||||
for g.Next() {
|
||||
// The first character in the grapheme cluster determines the width;
|
||||
// modifiers and joiners do not contribute to the width.
|
||||
props, _ := lookupProperties(g.Value())
|
||||
total += props.width(options)
|
||||
}
|
||||
return total
|
||||
}
|
||||
|
||||
// BytesOptions calculates the display width of a []byte
|
||||
// for the given options
|
||||
func (options Options) Bytes(s []byte) int {
|
||||
if len(s) == 0 {
|
||||
return 0
|
||||
}
|
||||
|
||||
total := 0
|
||||
g := graphemes.FromBytes(s)
|
||||
for g.Next() {
|
||||
// The first character in the grapheme cluster determines the width;
|
||||
// modifiers and joiners do not contribute to the width.
|
||||
props, _ := lookupProperties(g.Value())
|
||||
total += props.width(options)
|
||||
}
|
||||
return total
|
||||
}

// Rune calculates the display width of a rune, for the given options.
//
// You should almost certainly use [String] or [Bytes] for most purposes.
//
// The smallest unit of display width is a grapheme cluster, not a rune.
// Iterating over runes to measure width is incorrect in many cases.
func (options Options) Rune(r rune) int {
	// Fast path for ASCII
	if r < utf8.RuneSelf {
		if isASCIIControl(byte(r)) {
			// Control (0x00-0x1F) and DEL (0x7F)
			return 0
		}
		// ASCII printable (0x20-0x7E)
		return 1
		return int(asciiWidths[byte(r)])
	}

	// Surrogates (U+D800-U+DFFF) are invalid UTF-8 and have zero width
	// Other packages might turn them into the replacement character (U+FFFD)
	// in which case, we won't see it.
	// Surrogates (U+D800-U+DFFF) are invalid UTF-8.
	if r >= 0xD800 && r <= 0xDFFF {
		return 0
	}

	// Stack-allocated to avoid heap allocation
	var buf [4]byte // UTF-8 is at most 4 bytes
	var buf [4]byte
	n := utf8.EncodeRune(buf[:], r)
	// Skip the grapheme iterator and directly lookup properties
	props, _ := lookupProperties(buf[:n])
	return props.width(options)

	// Skip the grapheme iterator
	return lookupProperties(buf[:n]).width(options)
}

func isASCIIControl(b byte) bool {
	return b < 0x20 || b == 0x7F
}

const defaultWidth = 1

// is returns true if the property flag is set
func (p property) is(flag property) bool {
	return p&flag != 0
}

// lookupProperties returns the properties for the first character in a string
func lookupProperties[T stringish.Interface](s T) (property, int) {
	if len(s) == 0 {
		return 0, 0
// graphemeWidth returns the display width of a grapheme cluster.
// The passed string must be a single grapheme cluster.
func graphemeWidth[T stringish.Interface](s T, options Options) int {
	// Optimization: no need to look up properties
	switch len(s) {
	case 0:
		return 0
	case 1:
		return int(asciiWidths[s[0]])
	}

	// Fast path for ASCII characters (single byte)
	b := s[0]
	if b < utf8.RuneSelf { // Single-byte ASCII
		if isASCIIControl(b) {
			// Control characters (0x00-0x1F) and DEL (0x7F) - width 0
			return _ZeroWidth, 1
	return lookupProperties(s).width(options)
}

// isRIPrefix checks if the slice matches the Regional Indicator prefix
// (F0 9F 87). It assumes len(s) >= 3.
func isRIPrefix[T stringish.Interface](s T) bool {
	return s[0] == 0xF0 && s[1] == 0x9F && s[2] == 0x87
}

// isVS16 checks if the slice matches VS16 (U+FE0F) UTF-8 encoding
// (EF B8 8F). It assumes len(s) >= 3.
func isVS16[T stringish.Interface](s T) bool {
	return s[0] == 0xEF && s[1] == 0xB8 && s[2] == 0x8F
}
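// Illustrative check, not from the vendored source: the byte patterns the two
// helpers above test for. In UTF-8, U+1F1E6 (REGIONAL INDICATOR SYMBOL LETTER A)
// encodes as F0 9F 87 A6, and U+FE0F (VARIATION SELECTOR-16) as EF B8 8F.
//
//	fmt.Printf("% X\n", []byte(string(rune(0x1F1E6)))) // F0 9F 87 A6
//	fmt.Printf("% X\n", []byte(string(rune(0xFE0F))))  // EF B8 8F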

// lookupProperties returns the properties for a grapheme.
// The passed string must be at least one byte long.
//
// Callers must handle zero and single-byte strings upstream, both as an
// optimization, and to reduce the scope of this function.
func lookupProperties[T stringish.Interface](s T) property {
	l := len(s)

	if s[0] < utf8.RuneSelf {
		// Check for variation selector after ASCII (e.g., keycap sequences like 1️⃣)
		if l >= 4 {
			// Subslice may help eliminate bounds checks
			vs := s[1:4]
			if isVS16(vs) {
				// VS16 requests emoji presentation (width 2)
				return _Emoji
			}
			// VS15 (0x8E) requests text presentation but does not affect width,
			// in my reading of Unicode TR51. Falls through to _Default.
		}
		// ASCII printable characters (0x20-0x7E) - width 1
		// Return 0 properties, width calculation will default to 1
		return 0, 1
		return asciiProperties[s[0]]
	}

	// Use the generated trie for lookup
	props, size := lookup(s)
	return property(props), size
	// Regional indicator pair (flag)
	if l >= 8 {
		// Subslice may help eliminate bounds checks
		ri := s[:8]
		// First rune
		if isRIPrefix(ri[0:3]) {
			b3 := ri[3]
			if b3 >= 0xA6 && b3 <= 0xBF {
				// Second rune
				if isRIPrefix(ri[4:7]) {
					b7 := ri[7]
					if b7 >= 0xA6 && b7 <= 0xBF {
						return _Emoji
					}
				}
			}
		}
	}

	p, sz := lookup(s)

	// Variation Selectors
	if sz > 0 && l >= sz+3 {
		// Subslice may help eliminate bounds checks
		vs := s[sz : sz+3]
		if isVS16(vs) {
			// VS16 requests emoji presentation (width 2)
			return _Emoji
		}
		// VS15 (0x8E) requests text presentation but does not affect width,
		// in my reading of Unicode TR51. Falls through to return the base
		// character's property.
	}

	return property(p)
}
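// Illustrative sketch, not from the vendored source: why the regional
// indicator branch above checks for 8 bytes. A flag such as "🇩🇪" is two
// regional indicator runes (U+1F1E9 U+1F1EA), each 4 bytes in UTF-8, and is
// classified as _Emoji, which renders at width 2.
//
//	s := "🇩🇪"
//	fmt.Println(len(s))                // 8
//	fmt.Printf("% X\n", []byte(s)[:4]) // F0 9F 87 A9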

// width determines the display width of a character based on its properties
const _Default property = 0
const boundsCheck = property(len(propertyWidths) - 1)

// width determines the display width of a character based on its properties,
// and configuration options
func (p property) width(options Options) int {
	if p == 0 {
		// Character not in trie, use default behavior
		return defaultWidth
	}

	if p.is(_ZeroWidth) {
		return 0
	}

	if options.EastAsianWidth {
		if p.is(_East_Asian_Ambiguous) {
			return 2
		}
		if p.is(_East_Asian_Ambiguous|_Emoji) && !options.StrictEmojiNeutral {
			return 2
		}
	}

	if p.is(_East_Asian_Full_Wide) {
	if options.EastAsianWidth && p == _East_Asian_Ambiguous {
		return 2
	}

	// Default width for all other characters
	return defaultWidth
	// Bounds check may help the compiler eliminate its bounds check,
	// and safety of course.
	if p > boundsCheck {
		return 1 // default width
	}

	return propertyWidths[p]
}
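// Illustrative sketch, not from the vendored source: the bitmask test behind
// property.is above, shown with hypothetical flag values; the real constants
// come from the generated trie.
//
//	const (
//		fZeroWidth = property(1 << 0)
//		fEmoji     = property(1 << 1)
//	)
//	p := fEmoji
//	p.is(fEmoji)     // true
//	p.is(fZeroWidth) // false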
24
vendor/github.com/clipperhouse/uax29/v2/graphemes/README.md
generated
vendored
@@ -1,5 +1,9 @@
An implementation of grapheme cluster boundaries from [Unicode text segmentation](https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries) (UAX 29), for Unicode version 15.0.0.

[](https://pkg.go.dev/github.com/clipperhouse/uax29/v2/graphemes)

## Quick start

```
@@ -18,15 +22,14 @@ for tokens.Next() { // Next() returns true until end of data
}
```

[](https://pkg.go.dev/github.com/clipperhouse/uax29/v2/graphemes)

_A grapheme is a “single visible character”, which might be as simple as a single letter, or a complex emoji that consists of several Unicode code points._
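
A hedged illustration of that definition, assuming the `FromString` constructor shown in the Quick start: a skin-toned emoji is a single grapheme built from two code points.

```
g := graphemes.FromString("👍🏽e")
for g.Next() {
	fmt.Printf("%q\n", g.Value()) // "👍🏽", then "e"
}
```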

## Conformance

We use the Unicode [test suite](https://unicode.org/reports/tr41/tr41-26.html#Tests29). Status:
We use the Unicode [test suite](https://unicode.org/reports/tr41/tr41-26.html#Tests29).

## APIs

@@ -71,9 +74,18 @@ for tokens.Next() { // Next() returns true until end of data
}
```

### Performance
### Benchmarks

On a Mac M2 laptop, we see around 200MB/s, or around 100 million graphemes per second. You should see ~constant memory, and no allocations.
On a Mac M2 laptop, we see around 200MB/s, or around 100 million graphemes per second, and no allocations.

```
goos: darwin
goarch: arm64
pkg: github.com/clipperhouse/uax29/graphemes/comparative
cpu: Apple M2
BenchmarkGraphemes/clipperhouse/uax29-8   173805 ns/op   201.16 MB/s   0 B/op   0 allocs/op
BenchmarkGraphemes/rivo/uniseg-8         2045128 ns/op    17.10 MB/s   0 B/op   0 allocs/op
```

### Invalid inputs
7
vendor/github.com/clipperhouse/uax29/v2/graphemes/iterator.go
generated
vendored
@@ -1,8 +1,11 @@
package graphemes

import "github.com/clipperhouse/uax29/v2/internal/iterators"
import (
	"github.com/clipperhouse/stringish"
	"github.com/clipperhouse/uax29/v2/internal/iterators"
)

type Iterator[T iterators.Stringish] struct {
type Iterator[T stringish.Interface] struct {
	*iterators.Iterator[T]
}
4
vendor/github.com/clipperhouse/uax29/v2/graphemes/splitfunc.go
generated
vendored
@@ -3,7 +3,7 @@ package graphemes
import (
	"bufio"

	"github.com/clipperhouse/uax29/v2/internal/iterators"
	"github.com/clipperhouse/stringish"
)

// is determines if lookup intersects propert(ies)
@@ -18,7 +18,7 @@ const _Ignore = _Extend
// See https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries.
var SplitFunc bufio.SplitFunc = splitFunc[[]byte]

func splitFunc[T iterators.Stringish](data T, atEOF bool) (advance int, token T, err error) {
func splitFunc[T stringish.Interface](data T, atEOF bool) (advance int, token T, err error) {
	var empty T
	if len(data) == 0 {
		return 0, empty, nil
6
vendor/github.com/clipperhouse/uax29/v2/graphemes/trie.go
generated
vendored
@@ -1,10 +1,10 @@
package graphemes

import "github.com/clipperhouse/stringish"

// generated by github.com/clipperhouse/uax29/v2
// from https://www.unicode.org/Public/15.0.0/ucd/auxiliary/GraphemeBreakProperty.txt

import "github.com/clipperhouse/uax29/v2/internal/iterators"

type property uint16

const (
@@ -27,7 +27,7 @@ const (
// lookup returns the trie value for the first UTF-8 encoding in s and
// the width in bytes of this encoding. The size will be 0 if s does not
// hold enough bytes to complete the encoding. len(s) must be greater than 0.
func lookup[T iterators.Stringish](s T) (v property, sz int) {
func lookup[T stringish.Interface](s T) (v property, sz int) {
	c0 := s[0]
	switch {
	case c0 < 0x80: // is ASCII
27
vendor/github.com/clipperhouse/uax29/v2/internal/iterators/iterator.go
generated
vendored
@@ -1,14 +1,12 @@
package iterators

type Stringish interface {
	[]byte | string
}
import "github.com/clipperhouse/stringish"

type SplitFunc[T Stringish] func(T, bool) (int, T, error)
type SplitFunc[T stringish.Interface] func(T, bool) (int, T, error)

// Iterator is a generic iterator for words that are either []byte or string.
// Iterate while Next() is true, and access the word via Value().
type Iterator[T Stringish] struct {
type Iterator[T stringish.Interface] struct {
	split SplitFunc[T]
	data  T
	start int
@@ -16,7 +14,7 @@ type Iterator[T Stringish] struct {
}

// New creates a new Iterator for the given data and SplitFunc.
func New[T Stringish](split SplitFunc[T], data T) *Iterator[T] {
func New[T stringish.Interface](split SplitFunc[T], data T) *Iterator[T] {
	return &Iterator[T]{
		split: split,
		data:  data,
@@ -83,3 +81,20 @@ func (iter *Iterator[T]) Reset() {
	iter.start = 0
	iter.pos = 0
}

func (iter *Iterator[T]) First() T {
	if len(iter.data) == 0 {
		return iter.data
	}
	advance, _, err := iter.split(iter.data, true)
	if err != nil {
		panic(err)
	}
	if advance <= 0 {
		panic("SplitFunc returned a zero or negative advance")
	}
	if advance > len(iter.data) {
		panic("SplitFunc advanced beyond the end of the data")
	}
	return iter.data[:advance]
}
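// Illustrative sketch, not from the vendored source: what the new First
// method enables for callers, via the front-ends that embed this iterator
// (e.g. the graphemes package shown earlier in this diff).
//
//	first := graphemes.FromString("héllo").First()
//	fmt.Println(first) // "h"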
2
vendor/github.com/go-chi/chi/v5/chi.go
generated
vendored
@@ -1,6 +1,6 @@
// Package chi is a small, idiomatic and composable router for building HTTP services.
//
// chi requires Go 1.14 or newer.
// chi supports the four most recent major versions of Go.
//
// Example:
//
9
vendor/github.com/go-chi/chi/v5/middleware/content_charset.go
generated
vendored
@@ -2,6 +2,7 @@ package middleware

import (
	"net/http"
	"slices"
	"strings"
)

@@ -29,13 +30,7 @@ func contentEncoding(ce string, charsets ...string) bool {
	_, ce = split(strings.ToLower(ce), ";")
	_, ce = split(ce, "charset=")
	ce, _ = split(ce, ";")
	for _, c := range charsets {
		if ce == c {
			return true
		}
	}

	return false
	return slices.Contains(charsets, ce)
}
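// Note, not from the vendored source: slices.Contains (Go 1.21+) is the
// stdlib equivalent of the membership loop it replaces above. Illustrative:
//
//	charsets := []string{"utf-8", "latin-1"}
//	slices.Contains(charsets, "utf-8")  // true
//	slices.Contains(charsets, "utf-16") // false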

// Split a string in two parts, cleaning any whitespace.
6
vendor/github.com/go-chi/chi/v5/middleware/request_id.go
generated
vendored
@@ -25,7 +25,7 @@ const RequestIDKey ctxKeyRequestID = 0
var RequestIDHeader = "X-Request-Id"

var prefix string
var reqid uint64
var reqid atomic.Uint64

// A quick note on the statistics here: we're trying to calculate the chance that
// two randomly generated base62 prefixes will collide. We use the formula from
@@ -69,7 +69,7 @@ func RequestID(next http.Handler) http.Handler {
		ctx := r.Context()
		requestID := r.Header.Get(RequestIDHeader)
		if requestID == "" {
			myid := atomic.AddUint64(&reqid, 1)
			myid := reqid.Add(1)
			requestID = fmt.Sprintf("%s-%06d", prefix, myid)
		}
		ctx = context.WithValue(ctx, RequestIDKey, requestID)
@@ -92,5 +92,5 @@ func GetReqID(ctx context.Context) string {

// NextRequestID generates the next request ID in the sequence.
func NextRequestID() uint64 {
	return atomic.AddUint64(&reqid, 1)
	return reqid.Add(1)
}
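// Note, not from the vendored source: the change above swaps a package-level
// uint64 plus atomic.AddUint64 for sync/atomic's Uint64 type, which cannot be
// accessed non-atomically by accident. Equivalent illustrative counter:
//
//	var c atomic.Uint64
//	c.Add(1) // 1
//	c.Add(1) // 2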
11
vendor/github.com/go-chi/chi/v5/middleware/strip.go
generated
vendored
@@ -47,15 +47,22 @@ func RedirectSlashes(next http.Handler) http.Handler {
		} else {
			path = r.URL.Path
		}

		if len(path) > 1 && path[len(path)-1] == '/' {
			// Trim all leading and trailing slashes (e.g., "//evil.com", "/some/path//")
			path = "/" + strings.Trim(path, "/")
			// Normalize backslashes to forward slashes to prevent "/\evil.com" style redirects
			// that some clients may interpret as protocol-relative.
			path = strings.ReplaceAll(path, `\`, `/`)

			// Collapse leading/trailing slashes and force a single leading slash.
			path = "/" + strings.Trim(path, "/")

			if r.URL.RawQuery != "" {
				path = fmt.Sprintf("%s?%s", path, r.URL.RawQuery)
			}
			http.Redirect(w, r, path, 301)
			return
		}

		next.ServeHTTP(w, r)
	}
	return http.HandlerFunc(fn)
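// Illustrative sketch, not from the vendored source: what the normalization
// above does to a hostile path such as "/\evil.com/".
//
//	p := strings.ReplaceAll(`/\evil.com/`, `\`, `/`) // "//evil.com/"
//	p = "/" + strings.Trim(p, "/")                   // "/evil.com"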
6
vendor/github.com/go-chi/chi/v5/mux.go
generated
vendored
@@ -467,8 +467,10 @@ func (mx *Mux) routeHTTP(w http.ResponseWriter, r *http.Request) {

	// Find the route
	if _, _, h := mx.tree.FindRoute(rctx, method, routePath); h != nil {
		if supportsPathValue {
			setPathValue(rctx, r)
		// Set http.Request path values from our request context
		for i, key := range rctx.URLParams.Keys {
			value := rctx.URLParams.Values[i]
			r.SetPathValue(key, value)
		}
		if supportsPattern {
			setPattern(rctx, r)
21
vendor/github.com/go-chi/chi/v5/path_value.go
generated
vendored
@@ -1,21 +0,0 @@
//go:build go1.22 && !tinygo
// +build go1.22,!tinygo

package chi

import "net/http"

// supportsPathValue is true if the Go version is 1.22 and above.
//
// If this is true, `net/http.Request` has methods `SetPathValue` and `PathValue`.
const supportsPathValue = true

// setPathValue sets the path values in the Request value
// based on the provided request context.
func setPathValue(rctx *Context, r *http.Request) {
	for i, key := range rctx.URLParams.Keys {
		value := rctx.URLParams.Values[i]
		r.SetPathValue(key, value)
	}
}
19
vendor/github.com/go-chi/chi/v5/path_value_fallback.go
generated
vendored
@@ -1,19 +0,0 @@
//go:build !go1.22 || tinygo
// +build !go1.22 tinygo

package chi

import "net/http"

// supportsPathValue is true if the Go version is 1.22 and above.
//
// If this is true, `net/http.Request` has methods `SetPathValue` and `PathValue`.
const supportsPathValue = false

// setPathValue sets the path values in the Request value
// based on the provided request context.
//
// setPathValue is only supported in Go 1.22 and above so
// this is just a blank function so that it compiles.
func setPathValue(rctx *Context, r *http.Request) {
}
21
vendor/github.com/go-chi/chi/v5/tree.go
generated
vendored
@@ -71,6 +71,7 @@ func RegisterMethod(method string) {
	}
	mt := methodTyp(2 << n)
	methodMap[method] = mt
	reverseMethodMap[mt] = method
	mALL |= mt
}

@@ -328,7 +329,7 @@ func (n *node) replaceChild(label, tail byte, child *node) {

func (n *node) getEdge(ntyp nodeTyp, label, tail byte, prefix string) *node {
	nds := n.children[ntyp]
	for i := 0; i < len(nds); i++ {
	for i := range nds {
		if nds[i].label == label && nds[i].tail == tail {
			if ntyp == ntRegexp && nds[i].prefix != prefix {
				continue
@@ -429,9 +430,7 @@ func (n *node) findRoute(rctx *Context, method methodTyp, path string) *node {
		}

		// serially loop through each node grouped by the tail delimiter
		for idx := 0; idx < len(nds); idx++ {
			xn = nds[idx]

		for _, xn = range nds {
			// label for param nodes is the delimiter byte
			p := strings.IndexByte(xsearch, xn.tail)

@@ -770,20 +769,14 @@ func patParamKeys(pattern string) []string {
	}
}

// longestPrefix finds the length of the shared prefix
// of two strings
func longestPrefix(k1, k2 string) int {
	max := len(k1)
	if l := len(k2); l < max {
		max = l
	}
	var i int
	for i = 0; i < max; i++ {
// longestPrefix finds the length of the shared prefix of two strings
func longestPrefix(k1, k2 string) (i int) {
	for i = 0; i < min(len(k1), len(k2)); i++ {
		if k1[i] != k2[i] {
			break
		}
	}
	return i
	return
}
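// Note, not from the vendored source: the refactor above preserves behavior
// while using the Go 1.21+ built-in min. Illustrative:
//
//	longestPrefix("/users/123", "/users/abc") // 7, the shared "/users/"
//	longestPrefix("abc", "abc")               // 3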

type nodes []*node

@@ -3476,6 +3476,9 @@ type JSONSchema_FieldConfiguration struct {
	// parameter. Use this to avoid having auto generated path parameter names
	// for overlapping paths.
	PathParamName string `protobuf:"bytes,47,opt,name=path_param_name,json=pathParamName,proto3" json:"path_param_name,omitempty"`
	// Declares this field to be deprecated. Allows for the generated OpenAPI
	// parameter to be marked as deprecated without affecting the proto field.
	Deprecated    bool `protobuf:"varint,49,opt,name=deprecated,proto3" json:"deprecated,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}
@@ -3512,10 +3515,21 @@ func (x *JSONSchema_FieldConfiguration) GetPathParamName() string {
	return ""
}

func (x *JSONSchema_FieldConfiguration) GetDeprecated() bool {
	if x != nil {
		return x.Deprecated
	}
	return false
}

func (x *JSONSchema_FieldConfiguration) SetPathParamName(v string) {
	x.PathParamName = v
}

func (x *JSONSchema_FieldConfiguration) SetDeprecated(v bool) {
	x.Deprecated = v
}

type JSONSchema_FieldConfiguration_builder struct {
	_ [0]func() // Prevents comparability and use of unkeyed literals for the builder.

@@ -3524,6 +3538,9 @@ type JSONSchema_FieldConfiguration_builder struct {
	// parameter. Use this to avoid having auto generated path parameter names
	// for overlapping paths.
	PathParamName string
	// Declares this field to be deprecated. Allows for the generated OpenAPI
	// parameter to be marked as deprecated without affecting the proto field.
	Deprecated bool
}

func (b0 JSONSchema_FieldConfiguration_builder) Build() *JSONSchema_FieldConfiguration {
@@ -3531,6 +3548,7 @@ func (b0 JSONSchema_FieldConfiguration_builder) Build() *JSONSchema_FieldConfigu
	b, x := &b0, m0
	_, _ = b, x
	x.PathParamName = b.PathParamName
	x.Deprecated = b.Deprecated
	return m0
}
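// Illustrative sketch, not from the vendored source: setting the new
// Deprecated field through the builder above; the package alias
// openapiv2options is hypothetical.
//
//	fc := openapiv2options.JSONSchema_FieldConfiguration_builder{
//		PathParamName: "id",
//		Deprecated:    true,
//	}.Build()
//	fc.GetDeprecated() // true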

@@ -3904,7 +3922,7 @@ var file_protoc_gen_openapiv2_options_openapiv2_proto_rawDesc = []byte{
	0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2c, 0x0a, 0x05,
	0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x16, 0x2e, 0x67, 0x6f,
	0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x56, 0x61,
	0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xd7,
	0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xf7,
	0x0a, 0x0a, 0x0a, 0x4a, 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x10, 0x0a,
	0x03, 0x72, 0x65, 0x66, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x72, 0x65, 0x66, 0x12,
	0x14, 0x0a, 0x05, 0x74, 0x69, 0x74, 0x6c, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
@@ -3968,11 +3986,13 @@ var file_protoc_gen_openapiv2_options_openapiv2_proto_rawDesc = []byte{
	0x67, 0x65, 0x6e, 0x5f, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x76, 0x32, 0x2e, 0x6f, 0x70,
	0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x4a, 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61,
	0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79,
	0x52, 0x0a, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x3c, 0x0a, 0x12,
	0x52, 0x0a, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x5c, 0x0a, 0x12,
	0x46, 0x69, 0x65, 0x6c, 0x64, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x61, 0x74, 0x69,
	0x6f, 0x6e, 0x12, 0x26, 0x0a, 0x0f, 0x70, 0x61, 0x74, 0x68, 0x5f, 0x70, 0x61, 0x72, 0x61, 0x6d,
	0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x2f, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x70, 0x61, 0x74,
	0x68, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x55, 0x0a, 0x0f, 0x45, 0x78,
	0x68, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x1e, 0x0a, 0x0a, 0x64, 0x65,
	0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x31, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a,
	0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x1a, 0x55, 0x0a, 0x0f, 0x45, 0x78,
	0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
	0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
	0x2c, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x16,
@@ -612,6 +612,9 @@ message JSONSchema {
    // parameter. Use this to avoid having auto generated path parameter names
    // for overlapping paths.
    string path_param_name = 47;
    // Declares this field to be deprecated. Allows for the generated OpenAPI
    // parameter to be marked as deprecated without affecting the proto field.
    bool deprecated = 49;
  }
  // Custom properties that start with "x-" such as "x-foo" used to describe
  // extra functionality that is not covered by the standard OpenAPI Specification.
@@ -3268,6 +3268,7 @@ func (b0 Scopes_builder) Build() *Scopes {
type JSONSchema_FieldConfiguration struct {
	state                    protoimpl.MessageState `protogen:"opaque.v1"`
	xxx_hidden_PathParamName string                 `protobuf:"bytes,47,opt,name=path_param_name,json=pathParamName,proto3" json:"path_param_name,omitempty"`
	xxx_hidden_Deprecated    bool                   `protobuf:"varint,49,opt,name=deprecated,proto3" json:"deprecated,omitempty"`
	unknownFields            protoimpl.UnknownFields
	sizeCache                protoimpl.SizeCache
}
@@ -3304,10 +3305,21 @@ func (x *JSONSchema_FieldConfiguration) GetPathParamName() string {
	return ""
}

func (x *JSONSchema_FieldConfiguration) GetDeprecated() bool {
	if x != nil {
		return x.xxx_hidden_Deprecated
	}
	return false
}

func (x *JSONSchema_FieldConfiguration) SetPathParamName(v string) {
	x.xxx_hidden_PathParamName = v
}

func (x *JSONSchema_FieldConfiguration) SetDeprecated(v bool) {
	x.xxx_hidden_Deprecated = v
}

type JSONSchema_FieldConfiguration_builder struct {
	_ [0]func() // Prevents comparability and use of unkeyed literals for the builder.

@@ -3316,6 +3328,9 @@ type JSONSchema_FieldConfiguration_builder struct {
	// parameter. Use this to avoid having auto generated path parameter names
	// for overlapping paths.
	PathParamName string
	// Declares this field to be deprecated. Allows for the generated OpenAPI
	// parameter to be marked as deprecated without affecting the proto field.
	Deprecated bool
}

func (b0 JSONSchema_FieldConfiguration_builder) Build() *JSONSchema_FieldConfiguration {
@@ -3323,6 +3338,7 @@ func (b0 JSONSchema_FieldConfiguration_builder) Build() *JSONSchema_FieldConfigu
	b, x := &b0, m0
	_, _ = b, x
	x.xxx_hidden_PathParamName = b.PathParamName
	x.xxx_hidden_Deprecated = b.Deprecated
	return m0
}

@@ -3696,7 +3712,7 @@ var file_protoc_gen_openapiv2_options_openapiv2_proto_rawDesc = []byte{
	0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2c, 0x0a, 0x05,
	0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x16, 0x2e, 0x67, 0x6f,
	0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x56, 0x61,
	0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xd7,
	0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0xf7,
	0x0a, 0x0a, 0x0a, 0x4a, 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x10, 0x0a,
	0x03, 0x72, 0x65, 0x66, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x72, 0x65, 0x66, 0x12,
	0x14, 0x0a, 0x05, 0x74, 0x69, 0x74, 0x6c, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
@@ -3760,11 +3776,13 @@ var file_protoc_gen_openapiv2_options_openapiv2_proto_rawDesc = []byte{
	0x67, 0x65, 0x6e, 0x5f, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x76, 0x32, 0x2e, 0x6f, 0x70,
	0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x4a, 0x53, 0x4f, 0x4e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61,
	0x2e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79,
	0x52, 0x0a, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x3c, 0x0a, 0x12,
	0x52, 0x0a, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x1a, 0x5c, 0x0a, 0x12,
	0x46, 0x69, 0x65, 0x6c, 0x64, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x61, 0x74, 0x69,
	0x6f, 0x6e, 0x12, 0x26, 0x0a, 0x0f, 0x70, 0x61, 0x74, 0x68, 0x5f, 0x70, 0x61, 0x72, 0x61, 0x6d,
	0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x2f, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x70, 0x61, 0x74,
	0x68, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x55, 0x0a, 0x0f, 0x45, 0x78,
	0x68, 0x50, 0x61, 0x72, 0x61, 0x6d, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x1e, 0x0a, 0x0a, 0x64, 0x65,
	0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x18, 0x31, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a,
	0x64, 0x65, 0x70, 0x72, 0x65, 0x63, 0x61, 0x74, 0x65, 0x64, 0x1a, 0x55, 0x0a, 0x0f, 0x45, 0x78,
	0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
	0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
	0x2c, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x16,
19
vendor/github.com/kovidgoyal/imaging/animation.go
generated
vendored
@@ -163,15 +163,20 @@ func (self *Image) populate_from_gif(g *gif.GIF) {
			Delay: gifmeta.CalculateFrameDelay(g.Delay[i], min_gap),
		}
		switch prev_disposal {
		case gif.DisposalNone, 0:
		case gif.DisposalNone, 0: // 1
			frame.ComposeOnto = frame.Number - 1
		case gif.DisposalPrevious:
		case gif.DisposalPrevious: // 3
			frame.ComposeOnto = prev_compose_onto
		case gif.DisposalBackground:
			// this is in contravention of the GIF spec but browsers and
			// gif2apng both do this, so follow them. Test image for this
			// is apple.gif
			frame.ComposeOnto = frame.Number - 1
		case gif.DisposalBackground: // 2
			if i > 0 && g.Delay[i-1] == 0 {
				// this is in contravention of the GIF spec but browsers and
				// gif2apng both do this, so follow them. Test images for this
				// are apple.gif and disposal-background-with-delay.gif
				frame.ComposeOnto = frame.Number - 1
			} else {
				// delay present, frame visible, so clear to background as the spec requires
				frame.ComposeOnto = 0
			}
		}
		prev_disposal, prev_compose_onto = g.Disposal[i], frame.ComposeOnto
		self.Frames = append(self.Frames, &frame)
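// Note, not from the vendored source: the numeric comments added above match
// the stdlib image/gif disposal constants the switch inspects:
// gif.DisposalNone = 0x01, gif.DisposalBackground = 0x02,
// gif.DisposalPrevious = 0x03; a Disposal value of 0 means unspecified.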
2
vendor/github.com/kovidgoyal/imaging/publish.py
generated
vendored
@@ -5,7 +5,7 @@ import os
import subprocess


VERSION = "1.8.18"
VERSION = "1.8.19"


def run(*args: str):
2
vendor/github.com/nats-io/nats.go/README.md
generated
vendored
@@ -23,7 +23,7 @@ A [Go](http://golang.org) client for the [NATS messaging system](https://nats.io)
go get github.com/nats-io/nats.go@latest

# To get a specific version:
go get github.com/nats-io/nats.go@v1.47.0
go get github.com/nats-io/nats.go@v1.48.0

# Note that the latest major version for NATS Server is v2:
go get github.com/nats-io/nats-server/v2@latest
Some files were not shown because too many files have changed in this diff.