Compare commits


35 Commits

Author SHA1 Message Date
OpenCloud Devops
86dbae6412 🎉 Release 4.1.0 (#1960)
* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.0.1

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0

* 🎉 Release 4.1.0
2025-12-15 19:51:20 +01:00
Benedikt Kulmann
87257623c6 chore: bump web to v4.3.0 (#2030) 2025-12-15 18:04:49 +01:00
Viktor Scharf
8aac5f6318 reva-bump-2.41.0 (#2032) 2025-12-15 17:03:58 +01:00
Anja Barz
4dcecbf5c0 fix typo (#2024) 2025-12-15 15:01:50 +01:00
Sawjan Gurung
cffeb4a690 [full-ci][tests-only] test: fix some test flakiness (#2003)
* test: check content after upload

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

test: check content with retry

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

* test: check empty body before json decoding

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

* test: wait post-processing for webdav requests if applicable

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

* test: check token before doing request

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

test: check body before json decoding

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

test: add wait step

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>

---------

Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>
2025-12-15 14:45:12 +05:45
opencloudeu
826640c2c5 [tx] updated from transifex 2025-12-14 00:03:10 +00:00
opencloudeu
8a4785e0e7 [tx] updated from transifex 2025-12-13 00:03:05 +00:00
Viktor Scharf
e7d8f3f446 show edition in the opencloud version output (#2019) 2025-12-12 15:01:25 +01:00
Prashant Gurung
b1c9159bd1 skip test related pipelines for ready-release-go PRs (#2011)
Signed-off-by: prashant-gurung899 <prasantgrg777@gmail.com>
2025-12-12 14:30:26 +05:45
Sawjan Gurung
446ae35701 Merge pull request #2007 from opencloud-eu/ci/fix-translation-pipeline
ci: fix translation pipeline
2025-12-12 10:46:26 +05:45
Florian Schade
8f323c775a Merge pull request #2001 from fschade/release-channel-capability
enhancement: introduce release channel capability
2025-12-11 09:33:02 +01:00
Florian Schade
40d8aacea4 enhancement: introduce build time edition channels
Be careful: env:OC_EDITION, env:FRONTEND_EDITION, and conf:edition were removed as part of this commit, with no deprecation period, because the edition is a build-time-only setting!
2025-12-10 16:21:42 +01:00
Michael Flemming
4c5d5fb218 Merge pull request #1947 from opencloud-eu/production_to_rolling_image
create a rolling release on all tag events [CI SKIP]
2025-12-10 10:29:40 +01:00
Michael 'Flimmy' Flemming
ec30bcc030 ci: add logic to do multiple docker releases for non-patch production-releases 2025-12-09 16:45:09 +01:00
Michael 'Flimmy' Flemming
61a591bcba ci: move config getter from nested if to for-loop 2025-12-09 16:34:29 +01:00
Michael 'Flimmy' Flemming
fc9a62a2d8 add comments to nested ifs 2025-12-09 16:25:16 +01:00
Benedikt Kulmann
b595461ae7 Merge pull request #1995 from opencloud-eu/enforce-server-url-trailing-slash
fix: enforce trailing slash for server url
2025-12-09 16:16:06 +01:00
Sawjan Gurung
a3a1397e2d Merge pull request #1993 from opencloud-eu/test/add-mismatch-offset-test
[full-ci][tests-only] test: add test to check mismatch offset during TUS upload
2025-12-09 17:16:29 +05:45
Jörn Friedrich Dreyer
dbabedb90b Merge pull request #1996 from opencloud-eu/fix-policies-link
[docs] update policies link

pipeline failure unrelated
2025-12-09 12:21:42 +01:00
Jörn Friedrich Dreyer
dd4f2fe529 update policies link
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-12-09 10:55:12 +01:00
Benedikt Kulmann
614d916978 fix: enforce trailing slash for server url 2025-12-09 10:14:08 +01:00
Saw-jan
ee16c0597c test: add test to check mismatch offset during TUS upload
Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>
2025-12-09 12:51:35 +05:45
Sawjan Gurung
8e2fab3c9d Merge pull request #1990 from opencloud-eu/test/fix-resource-checks
[full-ci][tests-only] test: proper resource existence check
2025-12-09 12:12:03 +05:45
Saw-jan
c3a7892889 test: proper resource existence check
Signed-off-by: Saw-jan <saw.jan.grg3e@gmail.com>
2025-12-09 10:51:06 +05:45
Viktor Scharf
db7c0a88dd set backport feature (#1973) 2025-12-06 17:13:07 +01:00
Michael Barz
64119b3f8a fix: enhance resource creation with detailed process information (#1978)
Co-authored-by: Stavros Kois <s.kois@outlook.com>
2025-12-05 15:08:39 +01:00
Viktor Scharf
6c171e11a2 check propfind after renaming data in file system (#1809) 2025-12-05 09:11:23 +01:00
Viktor Scharf
7318fde6a0 fix-get-attribute-test (#1974) 2025-12-04 14:24:57 +01:00
Viktor Scharf
6ce0cc6b1f replace golang image (#1955) 2025-12-03 15:02:25 +01:00
dependabot[bot]
3e81d1f1d8 build(deps): bump github.com/testcontainers/testcontainers-go
Bumps [github.com/testcontainers/testcontainers-go](https://github.com/testcontainers/testcontainers-go) from 0.39.0 to 0.40.0.
- [Release notes](https://github.com/testcontainers/testcontainers-go/releases)
- [Commits](https://github.com/testcontainers/testcontainers-go/compare/v0.39.0...v0.40.0)

---
updated-dependencies:
- dependency-name: github.com/testcontainers/testcontainers-go
  dependency-version: 0.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-03 12:12:51 +01:00
Jannik Stehle
d352d91210 Merge pull request #1965 from opencloud-eu/ci/chat-notification-url
ci: use secret for the chat notifications url
2025-12-03 11:53:49 +01:00
Jannik Stehle
434ba0a30a ci: use secret for the chat notifications url 2025-12-03 11:08:53 +01:00
opencloudeu
0f448f23a0 [tx] updated from transifex 2025-12-03 00:02:26 +00:00
Artur Neumann
babb97f8a6 Merge pull request #1956 from opencloud-eu/individual-it-patch-1
fix the link in quickstart script for itself
2025-12-02 21:11:25 +05:45
Artur Neumann
a6d637456d fix the link in quickstart script for itself
see also https://docs.opencloud.eu/docs/admin/intro#quick-start
2025-12-02 19:24:27 +05:45
100 changed files with 1372 additions and 1439 deletions

6
.backportrc.json Normal file
View File

@@ -0,0 +1,6 @@
{
"repoOwner": "opencloud-eu",
"repoName": "opencloud",
"targetBranchChoices": ["main", "stable-2.0", "stable-4.0"],
"fork": false
}

View File

@@ -1,4 +1,4 @@
# The test runner source for UI tests
WEB_COMMITID=50e3fff6a518361d59cba864a927470f313b6f91
WEB_BRANCH=stable-4.2
WEB_COMMITID=3120ea384c7a9d1f1ea0c328965951fc06d66900
WEB_BRANCH=main

View File

@@ -74,11 +74,11 @@ OC_FED_DOMAIN = "%s:10200" % FED_OC_SERVER_NAME
event = {
"base": {
"event": ["push", "manual"],
"branch": "stable-*",
"branch": "main",
},
"cron": {
"event": "cron",
"branch": "stable-*",
"branch": "main",
},
"pull_request": {
"event": "pull_request",
@@ -388,6 +388,8 @@ config = {
"production": {
# NOTE: needs to be updated if new production releases are determined
"tags": ["2.0", "4.0"],
# NOTE: needs to be set to true if patch releases are made from stable-X branches
"skip_rolling": "false",
"repo": docker_repo_slug,
"build_type": "production",
},
@@ -1612,31 +1614,40 @@ def uploadTracingResult(ctx):
def dockerReleases(ctx):
pipelines = []
docker_repos = []
docker_releases = []
build_type = ""
# only make releases on tag events
if ctx.build.event == "tag":
tag = ctx.build.ref.replace("refs/tags/v", "").lower()
# iterate over production tags to see if this is a production release
is_production = False
skip_rolling = False
for prod_tag in config["dockerReleases"]["production"]["tags"]:
if tag.startswith(prod_tag):
is_production = True
skip_rolling = config["dockerReleases"]["production"]["skip_rolling"]
break
if is_production:
docker_repos.append(config["dockerReleases"]["production"]["repo"])
build_type = config["dockerReleases"]["production"]["build_type"]
docker_releases.append("production")
# a new production release is also a rolling release
# unless skip_rolling is set in the config, i.e. for patch releases on a stable branch
if not skip_rolling:
docker_releases.append("rolling")
else:
docker_repos.append(config["dockerReleases"]["rolling"]["repo"])
build_type = config["dockerReleases"]["rolling"]["build_type"]
docker_releases.append("rolling")
# on non tag events, do daily build
else:
docker_repos.append(config["dockerReleases"]["daily"]["repo"])
build_type = config["dockerReleases"]["daily"]["build_type"]
docker_releases.append("daily")
for repo in docker_repos:
for releaseConfigName in docker_releases:
repo = config["dockerReleases"][releaseConfigName]["repo"]
build_type = config["dockerReleases"][releaseConfigName]["build_type"]
repo_pipelines = []
repo_pipelines.append(dockerRelease(ctx, repo, build_type))
@@ -1953,7 +1964,6 @@ def readyReleaseGo():
"image": READY_RELEASE_GO,
"settings": {
"git_email": "devops@opencloud.eu",
"release_branch": "stable-4.0",
"forge_type": "github",
"forge_token": {
"from_secret": "github_token",

View File

@@ -1,20 +1,40 @@
# Changelog
## [4.0.1](https://github.com/opencloud-eu/opencloud/releases/tag/v4.0.1) - 2025-12-15
## [4.1.0](https://github.com/opencloud-eu/opencloud/releases/tag/v4.1.0) - 2025-12-15
### ❤️ Thanks to all contributors! ❤️
@ScharfViktor, @fschade, @kulmann, @micbar, @prashant-gurung899
@JammingBen, @ScharfViktor, @Svanvith, @butonic, @flimmy, @fschade, @individual-it, @kulmann, @micbar, @prashant-gurung899, @saw-jan
### 📚 Documentation
- fix typo [[#2024](https://github.com/opencloud-eu/opencloud/pull/2024)]
- [docs] update policies link [[#1996](https://github.com/opencloud-eu/opencloud/pull/1996)]
- fix the link in quickstart script for itself [[#1956](https://github.com/opencloud-eu/opencloud/pull/1956)]
### ✅ Tests
- [stable-4.0] Port #2011 [[#2018](https://github.com/opencloud-eu/opencloud/pull/2018)]
- [full-ci][tests-only] test: fix some test flakiness [[#2003](https://github.com/opencloud-eu/opencloud/pull/2003)]
- [tests-only] Skip test related pipelines for ready-release-go PRs [[#2011](https://github.com/opencloud-eu/opencloud/pull/2011)]
- [full-ci][tests-only] test: add test to check mismatch offset during TUS upload [[#1993](https://github.com/opencloud-eu/opencloud/pull/1993)]
- [full-ci][tests-only] test: proper resource existence check [[#1990](https://github.com/opencloud-eu/opencloud/pull/1990)]
- check propfind after renaming data in file system [[#1809](https://github.com/opencloud-eu/opencloud/pull/1809)]
- fix-get-attribute-test [[#1974](https://github.com/opencloud-eu/opencloud/pull/1974)]
### 📈 Enhancement
- Show edition in opencloud version command [[#2019](https://github.com/opencloud-eu/opencloud/pull/2019)]
### 🐛 Bug Fixes
- [stable-4.0] fix: build time edition channels #2001 [[#2010](https://github.com/opencloud-eu/opencloud/pull/2010)]
- [stable-4.0] fix: enforce trailing slash for server url [[#2002](https://github.com/opencloud-eu/opencloud/pull/2002)]
- [stable-4.0] fix: enhance resource creation with detailed process information (#1978) [[#2000](https://github.com/opencloud-eu/opencloud/pull/2000)]
- fix: enforce trailing slash for server url [[#1995](https://github.com/opencloud-eu/opencloud/pull/1995)]
- fix: enhance resource creation with detailed process information [[#1978](https://github.com/opencloud-eu/opencloud/pull/1978)]
### 📦️ Dependencies
- chore: bump web to v4.3.0 [[#2030](https://github.com/opencloud-eu/opencloud/pull/2030)]
- reva-bump-2.41.0 [[#2032](https://github.com/opencloud-eu/opencloud/pull/2032)]
- build(deps): bump github.com/testcontainers/testcontainers-go from 0.39.0 to 0.40.0 [[#1931](https://github.com/opencloud-eu/opencloud/pull/1931)]
## [4.0.0](https://github.com/opencloud-eu/opencloud/releases/tag/v4.0.0) - 2025-12-01

View File

@@ -11,7 +11,7 @@ set -euo pipefail
# OC_VERSION: Version to download, e.g. OC_VERSION="1.2.0"
# Call this script directly from opencloud:
# curl -L https://opencloud.eu/quickinstall.sh | /bin/bash
# curl -L https://opencloud.eu/install | /bin/bash
# This function is borrowed from openSUSE's /usr/bin/old, thanks.
function backup_file () {

6
go.mod
View File

@@ -64,7 +64,7 @@ require (
github.com/open-policy-agent/opa v1.10.1
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76
github.com/opencloud-eu/reva/v2 v2.40.1
github.com/opencloud-eu/reva/v2 v2.41.0
github.com/opensearch-project/opensearch-go/v4 v4.5.0
github.com/orcaman/concurrent-map v1.0.0
github.com/pkg/errors v0.9.1
@@ -80,7 +80,7 @@ require (
github.com/spf13/cobra v1.10.1
github.com/stretchr/testify v1.11.1
github.com/test-go/testify v1.1.4
github.com/testcontainers/testcontainers-go v0.39.0
github.com/testcontainers/testcontainers-go v0.40.0
github.com/testcontainers/testcontainers-go/modules/opensearch v0.39.0
github.com/theckman/yacspin v0.13.12
github.com/thejerf/suture/v4 v4.0.6
@@ -189,7 +189,7 @@ require (
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/dlclark/regexp2 v1.4.0 // indirect
github.com/docker/docker v28.3.3+incompatible // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect

12
go.sum
View File

@@ -310,8 +310,8 @@ github.com/dlclark/regexp2 v1.4.0 h1:F1rxgk7p4uKjwIQxBs9oAXe5CqrXlCduYEJvrF4u93E
github.com/dlclark/regexp2 v1.4.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/dnsimple/dnsimple-go v0.63.0/go.mod h1:O5TJ0/U6r7AfT8niYNlmohpLbCSG+c71tQlGr9SeGrg=
github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
@@ -963,8 +963,8 @@ github.com/opencloud-eu/inotifywaitgo v0.0.0-20251111171128-a390bae3c5e9 h1:dIft
github.com/opencloud-eu/inotifywaitgo v0.0.0-20251111171128-a390bae3c5e9/go.mod h1:JWyDC6H+5oZRdUJUgKuaye+8Ph5hEs6HVzVoPKzWSGI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76 h1:vD/EdfDUrv4omSFjrinT8Mvf+8D7f9g4vgQ2oiDrVUI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76/go.mod h1:pzatilMEHZFT3qV7C/X3MqOa3NlRQuYhlRhZTL+hN6Q=
github.com/opencloud-eu/reva/v2 v2.40.1 h1:QwMkbGMhwDSwfk2WxbnTpIig2BugPBaVFjWcy2DSU3U=
github.com/opencloud-eu/reva/v2 v2.40.1/go.mod h1:DGH08n2mvtsQLkt8o15FV6m51FwSJJGhjR8Ty+iIJww=
github.com/opencloud-eu/reva/v2 v2.41.0 h1:oie8+sxcA+drREXRTqm0LmfUdy/mmaa6pA6wkdF6tF4=
github.com/opencloud-eu/reva/v2 v2.41.0/go.mod h1:DGH08n2mvtsQLkt8o15FV6m51FwSJJGhjR8Ty+iIJww=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
@@ -1188,8 +1188,8 @@ github.com/tchap/go-patricia/v2 v2.3.3 h1:xfNEsODumaEcCcY3gI0hYPZ/PcpVv5ju6RMAhg
github.com/tchap/go-patricia/v2 v2.3.3/go.mod h1:VZRHKAb53DLaG+nA9EaYYiaEx6YztwDlLElMsnSHD4k=
github.com/test-go/testify v1.1.4 h1:Tf9lntrKUMHiXQ07qBScBTSA0dhYQlu83hswqelv1iE=
github.com/test-go/testify v1.1.4/go.mod h1:rH7cfJo/47vWGdi4GPj16x3/t1xGOj2YxzmNQzk2ghU=
github.com/testcontainers/testcontainers-go v0.39.0 h1:uCUJ5tA+fcxbFAB0uP3pIK3EJ2IjjDUHFSZ1H1UxAts=
github.com/testcontainers/testcontainers-go v0.39.0/go.mod h1:qmHpkG7H5uPf/EvOORKvS6EuDkBUPE3zpVGaH9NL7f8=
github.com/testcontainers/testcontainers-go v0.40.0 h1:pSdJYLOVgLE8YdUY2FHQ1Fxu+aMnb6JfVz1mxk7OeMU=
github.com/testcontainers/testcontainers-go v0.40.0/go.mod h1:FSXV5KQtX2HAMlm7U3APNyLkkap35zNLxukw9oBi/MY=
github.com/testcontainers/testcontainers-go/modules/opensearch v0.39.0 h1:IkJUhR8AigQxv7qHZho/OtTU6JtiSdBGVh76o175JGo=
github.com/testcontainers/testcontainers-go/modules/opensearch v0.39.0/go.mod h1:B7AhrDmQ4QbpzA0BeWvqzaJ8vbwcdEQDzybr35sBRfw=
github.com/thanhpk/randstr v1.0.6 h1:psAOktJFD4vV9NEVb3qkhRSMvYh4ORRaj1+w/hn4B+o=

View File

@@ -33,6 +33,7 @@ func VersionCommand(cfg *config.Config) *cli.Command {
Category: "info",
Action: func(c *cli.Context) error {
fmt.Println("Version: " + version.GetString())
fmt.Printf("Edition: %s\n", version.Edition)
fmt.Printf("Compiled: %s\n", version.Compiled())
if c.Bool(_skipServiceListingFlagName) {
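
The added Edition line prints version.Edition, and the commit note earlier in this comparison stresses that the edition is fixed at build time (env:OC_EDITION, env:FRONTEND_EDITION, and conf:edition were removed). A minimal sketch of how such a build-time string is typically injected with Go linker flags; the import path and default value below are assumptions for illustration, not the repository's actual layout:

```go
// Package version holds build-time metadata. The Edition default can be
// overridden when building the binary, e.g.:
//
//	go build -ldflags "-X example.com/opencloud/pkg/version.Edition=rolling" ./...
package version

// Edition names the release channel/edition this binary was built for.
// It is a plain package-level variable so that -ldflags "-X ..." can replace it.
var Edition = "community"
```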

View File

@@ -34,7 +34,7 @@ var (
// LatestTag is the latest released version plus the dev meta version.
// Will be overwritten by the release pipeline
// Needs a manual change for every tagged release
LatestTag = "4.0.1+dev"
LatestTag = "4.0.0-rc.3+dev"
// Date indicates the build date.
// This has been removed, it looks like you can only replace static strings with recent go versions

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-12 00:01+0000\n"
"POT-Creation-Date: 2025-12-03 00:01+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Stephan Paternotte <stephan@paternottes.net>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/opencloud-eu/teams/204053/nl/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-17 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Mário Machado, 2025\n"
"Language-Team: Portuguese (https://app.transifex.com/opencloud-eu/teams/204053/pt/)\n"

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-12 00:01+0000\n"
"POT-Creation-Date: 2025-12-03 00:01+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Lulufox, 2025\n"
"Language-Team: Russian (https://app.transifex.com/opencloud-eu/teams/204053/ru/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Stephan Paternotte <stephan@paternottes.net>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/opencloud-eu/teams/204053/nl/)\n"

View File

@@ -14,7 +14,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Radoslaw Posim, 2025\n"
"Language-Team: Polish (https://app.transifex.com/opencloud-eu/teams/204053/pl/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-17 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Mário Machado, 2025\n"
"Language-Team: Portuguese (https://app.transifex.com/opencloud-eu/teams/204053/pt/)\n"

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-21 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Lulufox, 2025\n"
"Language-Team: Russian (https://app.transifex.com/opencloud-eu/teams/204053/ru/)\n"

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-23 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: miguel tapias, 2025\n"
"Language-Team: Spanish (https://app.transifex.com/opencloud-eu/teams/204053/es/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-24 00:02+0000\n"
"POT-Creation-Date: 2025-12-14 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Stephan Paternotte <stephan@paternottes.net>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/opencloud-eu/teams/204053/nl/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-18 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Mário Machado, 2025\n"
"Language-Team: Portuguese (https://app.transifex.com/opencloud-eu/teams/204053/pt/)\n"

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-16 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Lulufox, 2025\n"
"Language-Team: Russian (https://app.transifex.com/opencloud-eu/teams/204053/ru/)\n"

View File

@@ -164,4 +164,4 @@ A good example of how such a file should be formatted can be found in the [Apach
## Example Policies
The policies service contains a set of preconfigured example policies. See the [deployment examples](https://github.com/opencloud-eu/opencloud/tree/main/deployments/examples) directory for details. The contained policies disallow OpenCloud to create certain file types, both via the proxy middleware and the events service via postprocessing.
The policies service contains a set of preconfigured example policies. See the [devtools policies](https://github.com/opencloud-eu/opencloud/tree/main/devtools/deployments/service_policies/policies/) directory for details. The contained policies disallow OpenCloud to create certain file types, both via the proxy middleware and the events service via postprocessing.

View File

@@ -7,5 +7,5 @@ type HTTP struct {
Namespace string `yaml:"-"`
TLSCert string `yaml:"tls_cert" env:"PROXY_TRANSPORT_TLS_CERT" desc:"Path/File name of the TLS server certificate (in PEM format) for the external http services. If not defined, the root directory derives from $OC_BASE_DATA_PATH/proxy." introductionVersion:"1.0.0"`
TLSKey string `yaml:"tls_key" env:"PROXY_TRANSPORT_TLS_KEY" desc:"Path/File name for the TLS certificate key (in PEM format) for the server certificate to use for the external http services. If not defined, the root directory derives from $OC_BASE_DATA_PATH/proxy." introductionVersion:"1.0.0"`
TLS bool `yaml:"tls" env:"PROXY_TLS" desc:"Enable/Disable HTTPS for external HTTP services. Must be set to 'true' if the built-in IDP service an no reverse proxy is used. See the text description for details." introductionVersion:"1.0.0"`
TLS bool `yaml:"tls" env:"PROXY_TLS" desc:"Enable/Disable HTTPS for external HTTP services. Must be set to 'true' if the built-in IDP service and no reverse proxy is used. See the text description for details." introductionVersion:"1.0.0"`
}

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-24 00:02+0000\n"
"POT-Creation-Date: 2025-12-14 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-18 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: idoet <idoet@protonmail.ch>, 2025\n"
"Language-Team: Indonesian (https://app.transifex.com/opencloud-eu/teams/204053/id/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-12 00:01+0000\n"
"POT-Creation-Date: 2025-12-03 00:01+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Stephan Paternotte <stephan@paternottes.net>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/opencloud-eu/teams/204053/nl/)\n"

View File

@@ -13,7 +13,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Radoslaw Posim, 2025\n"
"Language-Team: Polish (https://app.transifex.com/opencloud-eu/teams/204053/pl/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-17 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Mário Machado, 2025\n"
"Language-Team: Portuguese (https://app.transifex.com/opencloud-eu/teams/204053/pt/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Stephan Paternotte <stephan@paternottes.net>, 2025\n"
"Language-Team: Dutch (https://app.transifex.com/opencloud-eu/teams/204053/nl/)\n"

View File

@@ -13,7 +13,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-15 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Radoslaw Posim, 2025\n"
"Language-Team: Polish (https://app.transifex.com/opencloud-eu/teams/204053/pl/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-17 00:02+0000\n"
"POT-Creation-Date: 2025-12-13 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Mário Machado, 2025\n"
"Language-Team: Portuguese (https://app.transifex.com/opencloud-eu/teams/204053/pt/)\n"

View File

@@ -1,6 +1,6 @@
SHELL := bash
NAME := web
WEB_ASSETS_VERSION = v4.2.1
WEB_ASSETS_VERSION = v4.3.0
WEB_ASSETS_BRANCH = main
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI

View File

@@ -79,6 +79,21 @@ Feature: create a resources using collaborative posixfs
And as "Alice" the final content of file "/new-name.txt" should be "content"
Scenario: check propfind after rename file
Given user "Alice" has uploaded file with content "content" to "test.txt"
When the administrator renames the file "test.txt" to "new-name.txt" for user "Alice" on the POSIX filesystem
Then the command should be successful
When user "Alice" sends PROPFIND request to space "Personal" using the WebDAV API
Then the HTTP status code should be "207"
And as user "Alice" the PROPFIND response should contain a resource "new-name.txt" with these key and value pairs:
| key | value |
| oc:fileid | %file_id_pattern% |
| oc:name | new-name.txt |
| oc:permissions | RDNVWZP |
| oc:privatelink | %base_url%/f/[0-9a-z-$%]+ |
| oc:size | 7 |
Scenario: rename a created file
Given the administrator has created the file "test.txt" with content "content" for user "Alice" on the POSIX filesystem
When the administrator renames the file "test.txt" to "test.md" for user "Alice" on the POSIX filesystem
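
The scenario added above verifies the WebDAV view after an on-disk rename by sending a PROPFIND to the personal space and expecting a 207 Multi-Status response. For readers unfamiliar with the request shape, here is a minimal Go sketch of a depth-1 PROPFIND; the URL, space id, and credentials are placeholders, not values from the test suite:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Minimal allprop PROPFIND body; WebDAV servers answer with 207 Multi-Status.
	body := `<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:"><d:allprop/></d:propfind>`

	req, err := http.NewRequest("PROPFIND",
		"https://cloud.example.com/remote.php/dav/spaces/<space-id>/", // placeholder URL
		strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("alice", "secret") // placeholder credentials
	req.Header.Set("Depth", "1")        // list the collection's direct children
	req.Header.Set("Content-Type", "application/xml")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	data, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status) // expect "207 Multi-Status"
	fmt.Println(string(data))
}
```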

View File

@@ -81,7 +81,6 @@ info:
{
"username": "string",
"password": "string",
"email": "string",
"serveraddress": "string"
}
```
@@ -637,6 +636,9 @@ definitions:
by the default (runc) runtime.
This field is omitted when empty.
**Deprecated**: This field is deprecated as kernel 6.12 has deprecated `memory.kmem.tcp.limit_in_bytes` field
for cgroups v1. This field will be removed in a future release.
type: "integer"
format: "int64"
MemoryReservation:
@@ -1531,37 +1533,6 @@ definitions:
items:
type: "string"
example: ["/bin/sh", "-c"]
# FIXME(thaJeztah): temporarily using a full example to remove some "omitempty" fields. Remove once the fields are removed.
example:
"User": "web:web"
"ExposedPorts": {
"80/tcp": {},
"443/tcp": {}
}
"Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
"Cmd": ["/bin/sh"]
"Healthcheck": {
"Test": ["string"],
"Interval": 0,
"Timeout": 0,
"Retries": 0,
"StartPeriod": 0,
"StartInterval": 0
}
"ArgsEscaped": true
"Volumes": {
"/app/data": {},
"/app/config": {}
}
"WorkingDir": "/public/"
"Entrypoint": []
"OnBuild": []
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
"StopSignal": "SIGTERM"
"Shell": ["/bin/sh", "-c"]
NetworkingConfig:
description: |
@@ -1608,6 +1579,8 @@ definitions:
Bridge:
description: |
Name of the default bridge interface when dockerd's --bridge flag is set.
Deprecated: This field is only set when the daemon is started with the --bridge flag specified.
type: "string"
example: "docker0"
SandboxID:
@@ -1965,6 +1938,11 @@ definitions:
Depending on how the image was created, this field may be empty and
is only set for images that were built/created locally. This field
is empty if the image was pulled from an image registry.
> **Deprecated**: This field is only set when using the deprecated
> legacy builder. It is included in API responses for informational
> purposes, but should not be depended on as it will be omitted
> once the legacy builder is removed.
type: "string"
x-nullable: false
example: ""
@@ -1990,6 +1968,11 @@ definitions:
The version of Docker that was used to build the image.
Depending on how the image was created, this field may be empty.
> **Deprecated**: This field is only set when using the deprecated
> legacy builder. It is included in API responses for informational
> purposes, but should not be depended on as it will be omitted
> once the legacy builder is removed.
type: "string"
x-nullable: false
example: "27.0.1"
@@ -2034,14 +2017,6 @@ definitions:
format: "int64"
x-nullable: false
example: 1239828
VirtualSize:
description: |
Total size of the image including all layers it is composed of.
Deprecated: this field is omitted in API v1.44, but kept for backward compatibility. Use Size instead.
type: "integer"
format: "int64"
example: 1239828
GraphDriver:
$ref: "#/definitions/DriverData"
RootFS:
@@ -2174,14 +2149,6 @@ definitions:
format: "int64"
x-nullable: false
example: 1239828
VirtualSize:
description: |-
Total size of the image including all layers it is composed of.
Deprecated: this field is omitted in API v1.44, but kept for backward compatibility. Use Size instead.
type: "integer"
format: "int64"
example: 172064416
Labels:
description: "User-defined key/value metadata."
type: "object"
@@ -2234,6 +2201,10 @@ definitions:
password:
type: "string"
email:
description: |
Email is an optional value associated with the username.
> **Deprecated**: This field is deprecated since docker 1.11 (API v1.23) and will be removed in a future release.
type: "string"
serveraddress:
type: "string"
@@ -3171,10 +3142,15 @@ definitions:
- Args
properties:
DockerVersion:
description: "Docker Version used to create the plugin"
description: |-
Docker Version used to create the plugin.
Depending on how the plugin was created, this field may be empty or omitted.
Deprecated: this field is no longer set, and will be removed in the next API version.
type: "string"
x-nullable: false
example: "17.06.0-ce"
x-omitempty: true
Description:
type: "string"
x-nullable: false
@@ -4392,6 +4368,7 @@ definitions:
A counter that triggers an update even if no relevant parameters have
been changed.
type: "integer"
format: "uint64"
Runtime:
description: |
Runtime is the type of runtime specified for the task executor.
@@ -6375,6 +6352,8 @@ definitions:
Kernel memory TCP limits are not supported when using cgroups v2, which
does not support the corresponding `memory.kmem.tcp.limit_in_bytes` cgroup.
**Deprecated**: This field is deprecated as kernel 6.12 has deprecated kernel memory TCP accounting.
type: "boolean"
example: true
CpuCfsPeriod:
@@ -6412,29 +6391,6 @@ definitions:
description: "Indicates IPv4 forwarding is enabled."
type: "boolean"
example: true
BridgeNfIptables:
description: |
Indicates if `bridge-nf-call-iptables` is available on the host when
the daemon was started.
<p><br /></p>
> **Deprecated**: netfilter module is now loaded on-demand and no longer
> during daemon startup, making this field obsolete. This field is always
> `false` and will be removed in a API v1.49.
type: "boolean"
example: false
BridgeNfIp6tables:
description: |
Indicates if `bridge-nf-call-ip6tables` is available on the host.
<p><br /></p>
> **Deprecated**: netfilter module is now loaded on-demand, and no longer
> during daemon startup, making this field obsolete. This field is always
> `false` and will be removed in a API v1.49.
type: "boolean"
example: false
Debug:
description: |
Indicates if the daemon is running in debug-mode / with debug-level

View File

@@ -1,6 +1,8 @@
package build
// CacheDiskUsage contains disk usage for the build cache.
//
// Deprecated: this type is no longer used and will be removed in the next release.
type CacheDiskUsage struct {
TotalSize int64
Reclaimable int64

View File

@@ -1,6 +1,8 @@
package container
// DiskUsage contains disk usage for containers.
//
// Deprecated: this type is no longer used and will be removed in the next release.
type DiskUsage struct {
TotalSize int64
Reclaimable int64

View File

@@ -394,7 +394,12 @@ type Resources struct {
// KernelMemory specifies the kernel memory limit (in bytes) for the container.
// Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes.
KernelMemory int64 `json:",omitempty"`
KernelMemory int64 `json:",omitempty"`
// Hard limit for kernel TCP buffer memory (in bytes).
//
// Deprecated: This field is deprecated and will be removed in the next release.
// Starting with 6.12, the kernel has deprecated kernel memory tcp accounting
// for cgroups v1.
KernelMemoryTCP int64 `json:",omitempty"` // Hard limit for kernel TCP buffer memory (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap

View File

@@ -13,8 +13,11 @@ type NetworkSettings struct {
}
// NetworkSettingsBase holds networking state for a container when inspecting it.
//
// Deprecated: Most fields in NetworkSettingsBase are deprecated. Fields which aren't deprecated will move to
// NetworkSettings in v29.0, and this struct will be removed.
type NetworkSettingsBase struct {
Bridge string // Bridge contains the name of the default bridge interface iff it was set through the daemon --bridge flag.
Bridge string // Deprecated: This field is only set when the daemon is started with the --bridge flag specified.
SandboxID string // SandboxID uniquely represents a container's network stack
SandboxKey string // SandboxKey identifies the sandbox
Ports nat.PortMap // Ports is a collection of PortBinding indexed by Port
@@ -35,18 +38,44 @@ type NetworkSettingsBase struct {
SecondaryIPv6Addresses []network.Address // Deprecated: This field is never set and will be removed in a future release.
}
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
// DefaultNetworkSettings holds the networking state for the default bridge, if the container is connected to that
// network.
//
// Deprecated: this struct is deprecated since Docker v1.11 and will be removed in v29. You should look for the default
// network in NetworkSettings.Networks instead.
type DefaultNetworkSettings struct {
EndpointID string // EndpointID uniquely represents a service endpoint in a Sandbox
Gateway string // Gateway holds the gateway address for the network
GlobalIPv6Address string // GlobalIPv6Address holds network's global IPv6 address
GlobalIPv6PrefixLen int // GlobalIPv6PrefixLen represents mask length of network's global IPv6 address
IPAddress string // IPAddress holds the IPv4 address for the network
IPPrefixLen int // IPPrefixLen represents mask length of network's IPv4 address
IPv6Gateway string // IPv6Gateway holds gateway address specific for IPv6
MacAddress string // MacAddress holds the MAC address for the network
// EndpointID uniquely represents a service endpoint in a Sandbox
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
EndpointID string
// Gateway holds the gateway address for the network
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
Gateway string
// GlobalIPv6Address holds network's global IPv6 address
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
GlobalIPv6Address string
// GlobalIPv6PrefixLen represents mask length of network's global IPv6 address
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
GlobalIPv6PrefixLen int
// IPAddress holds the IPv4 address for the network
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
IPAddress string
// IPPrefixLen represents mask length of network's IPv4 address
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
IPPrefixLen int
// IPv6Gateway holds gateway address specific for IPv6
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
IPv6Gateway string
// MacAddress holds the MAC address for the network
//
// Deprecated: This field will be removed in v29. You should look for the default network in NetworkSettings.Networks instead.
MacAddress string
}
// NetworkSettingsSummary provides a summary of container's networks

View File

@@ -0,0 +1,61 @@
package filters
import (
"encoding/json"
"github.com/docker/docker/api/types/versions"
)
// ToParamWithVersion encodes Args as a JSON string. If version is less than 1.22
// then the encoded format will use an older legacy format where the values are a
// list of strings, instead of a set.
//
// Deprecated: do not use in any new code; use ToJSON instead
func ToParamWithVersion(version string, a Args) (string, error) {
out, err := ToJSON(a)
if out == "" || err != nil {
return "", nil
}
if version != "" && versions.LessThan(version, "1.22") {
return encodeLegacyFilters(out)
}
return out, nil
}
// encodeLegacyFilters encodes Args in the legacy format as used in API v1.21 and older,
// where values are a list of strings, instead of a set.
//
// Don't use in any new code; use [filters.ToJSON] instead.
func encodeLegacyFilters(currentFormat string) (string, error) {
// The Args.fields field is not exported, but used to marshal JSON,
// so we'll marshal to the new format, then unmarshal to get the
// fields, and marshal again.
//
// This is far from optimal, but this code is only used for deprecated
// API versions, so should not be hit commonly.
var argsFields map[string]map[string]bool
err := json.Unmarshal([]byte(currentFormat), &argsFields)
if err != nil {
return "", err
}
buf, err := json.Marshal(convertArgsToSlice(argsFields))
if err != nil {
return "", err
}
return string(buf), nil
}
func convertArgsToSlice(f map[string]map[string]bool) map[string][]string {
m := map[string][]string{}
for k, v := range f {
values := []string{}
for kk := range v {
if v[kk] {
values = append(values, kk)
}
}
m[k] = values
}
return m
}
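
The new filters_deprecated.go above keeps the deprecated ToParamWithVersion as a shim that re-encodes the canonical JSON filter format into the pre-1.22 legacy shape (string lists instead of sets). A short usage sketch against the filters package as vendored here, for illustration:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	args := filters.NewArgs()
	args.Add("label", "com.example.team=infra")
	args.Add("status", "running")

	// Canonical encoding: values are a set, e.g. {"label":{"com.example.team=infra":true},...}
	canonical, err := filters.ToJSON(args)
	if err != nil {
		panic(err)
	}
	fmt.Println(canonical)

	// Deprecated shim: for API versions < 1.22 the values are emitted as string lists instead.
	legacy, err := filters.ToParamWithVersion("1.21", args)
	if err != nil {
		panic(err)
	}
	fmt.Println(legacy)
}
```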

View File

@@ -8,8 +8,6 @@ import (
"encoding/json"
"regexp"
"strings"
"github.com/docker/docker/api/types/versions"
)
// Args stores a mapping of keys to a set of multiple values.
@@ -63,24 +61,6 @@ func ToJSON(a Args) (string, error) {
return string(buf), err
}
// ToParamWithVersion encodes Args as a JSON string. If version is less than 1.22
// then the encoded format will use an older legacy format where the values are a
// list of strings, instead of a set.
//
// Deprecated: do not use in any new code; use ToJSON instead
func ToParamWithVersion(version string, a Args) (string, error) {
if a.Len() == 0 {
return "", nil
}
if version != "" && versions.LessThan(version, "1.22") {
buf, err := json.Marshal(convertArgsToSlice(a.fields))
return string(buf), err
}
return ToJSON(a)
}
// FromJSON decodes a JSON encoded string into Args
func FromJSON(p string) (Args, error) {
args := NewArgs()
@@ -320,17 +300,3 @@ func deprecatedArgs(d map[string][]string) map[string]map[string]bool {
}
return m
}
func convertArgsToSlice(f map[string]map[string]bool) map[string][]string {
m := map[string][]string{}
for k, v := range f {
values := []string{}
for kk := range v {
if v[kk] {
values = append(values, kk)
}
}
m[k] = values
}
return m
}

View File

@@ -1,6 +1,8 @@
package image
// DiskUsage contains disk usage for images.
//
// Deprecated: this type is no longer used and will be removed in the next release.
type DiskUsage struct {
TotalSize int64
Reclaimable int64

View File

@@ -48,6 +48,8 @@ type InspectResponse struct {
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
//
// Deprecated: this field is deprecated, and will be removed in the next release.
Parent string
// Comment is an optional message that can be set when committing or
@@ -80,6 +82,8 @@ type InspectResponse struct {
// DockerVersion is the version of Docker that was used to build the image.
//
// Depending on how the image was created, this field may be empty.
//
// Deprecated: this field is deprecated, and will be removed in the next release.
DockerVersion string
// Author is the name of the author that was specified when committing the

View File

@@ -4,8 +4,6 @@ import (
"errors"
"fmt"
"net"
"github.com/docker/docker/internal/multierror"
)
// EndpointSettings stores the network endpoint details
@@ -99,7 +97,7 @@ func (cfg *EndpointIPAMConfig) IsInRange(v4Subnets []NetworkSubnet, v6Subnets []
errs = append(errs, err)
}
return multierror.Join(errs...)
return errJoin(errs...)
}
func validateEndpointIPAddress(epAddr string, ipamSubnets []NetworkSubnet) error {
@@ -149,5 +147,5 @@ func (cfg *EndpointIPAMConfig) Validate() error {
}
}
return multierror.Join(errs...)
return errJoin(errs...)
}

View File

@@ -4,8 +4,7 @@ import (
"errors"
"fmt"
"net/netip"
"github.com/docker/docker/internal/multierror"
"strings"
)
// IPAM represents IP Address Management
@@ -72,7 +71,7 @@ func ValidateIPAM(ipam *IPAM, enableIPv6 bool) error {
}
}
if err := multierror.Join(errs...); err != nil {
if err := errJoin(errs...); err != nil {
return fmt.Errorf("invalid network config:\n%w", err)
}
@@ -132,3 +131,43 @@ func validateAddress(address string, subnet netip.Prefix, subnetFamily ipFamily)
return nil
}
func errJoin(errs ...error) error {
n := 0
for _, err := range errs {
if err != nil {
n++
}
}
if n == 0 {
return nil
}
e := &joinError{
errs: make([]error, 0, n),
}
for _, err := range errs {
if err != nil {
e.errs = append(e.errs, err)
}
}
return e
}
type joinError struct {
errs []error
}
func (e *joinError) Error() string {
if len(e.errs) == 1 {
return strings.TrimSpace(e.errs[0].Error())
}
stringErrs := make([]string, 0, len(e.errs))
for _, subErr := range e.errs {
stringErrs = append(stringErrs, strings.ReplaceAll(subErr.Error(), "\n", "\n\t"))
}
return "* " + strings.Join(stringErrs, "\n* ")
}
func (e *joinError) Unwrap() []error {
return e.errs
}

View File

@@ -42,7 +42,11 @@ type PluginConfig struct {
// Required: true
Description string `json:"Description"`
// Docker Version used to create the plugin
// Docker Version used to create the plugin.
//
// Depending on how the plugin was created, this field may be empty or omitted.
//
// Deprecated: this field is no longer set, and will be removed in the next API version.
DockerVersion string `json:"DockerVersion,omitempty"`
// documentation

View File

@@ -32,8 +32,8 @@ type AuthConfig struct {
Auth string `json:"auth,omitempty"`
// Email is an optional value associated with the username.
// This field is deprecated and will be removed in a later
// version of docker.
//
// Deprecated: This field is deprecated since docker 1.11 (API v1.23) and will be removed in the next release.
Email string `json:"email,omitempty"`
ServerAddress string `json:"serveraddress,omitempty"`

View File

@@ -1,5 +1,7 @@
package swarm
import "github.com/docker/docker/api/types/swarm/runtime"
// RuntimeType is the type of runtime used for the TaskSpec
type RuntimeType string
@@ -25,3 +27,11 @@ const (
type NetworkAttachmentSpec struct {
ContainerID string
}
// RuntimeSpec defines the base payload which clients can specify for creating
// a service with the plugin runtime.
type RuntimeSpec = runtime.PluginSpec
// RuntimePrivilege describes a permission the user has to accept
// upon installing a plugin.
type RuntimePrivilege = runtime.PluginPrivilege

View File

@@ -1,3 +0,0 @@
//go:generate protoc --gogofaster_out=import_path=github.com/docker/docker/api/types/swarm/runtime:. plugin.proto
package runtime

View File

@@ -1,808 +0,0 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: plugin.proto
package runtime
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
math_bits "math/bits"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
// PluginSpec defines the base payload which clients can specify for creating
// a service with the plugin runtime.
type PluginSpec struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Remote string `protobuf:"bytes,2,opt,name=remote,proto3" json:"remote,omitempty"`
Privileges []*PluginPrivilege `protobuf:"bytes,3,rep,name=privileges,proto3" json:"privileges,omitempty"`
Disabled bool `protobuf:"varint,4,opt,name=disabled,proto3" json:"disabled,omitempty"`
Env []string `protobuf:"bytes,5,rep,name=env,proto3" json:"env,omitempty"`
}
func (m *PluginSpec) Reset() { *m = PluginSpec{} }
func (m *PluginSpec) String() string { return proto.CompactTextString(m) }
func (*PluginSpec) ProtoMessage() {}
func (*PluginSpec) Descriptor() ([]byte, []int) {
return fileDescriptor_22a625af4bc1cc87, []int{0}
}
func (m *PluginSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PluginSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PluginSpec.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PluginSpec) XXX_Merge(src proto.Message) {
xxx_messageInfo_PluginSpec.Merge(m, src)
}
func (m *PluginSpec) XXX_Size() int {
return m.Size()
}
func (m *PluginSpec) XXX_DiscardUnknown() {
xxx_messageInfo_PluginSpec.DiscardUnknown(m)
}
var xxx_messageInfo_PluginSpec proto.InternalMessageInfo
func (m *PluginSpec) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *PluginSpec) GetRemote() string {
if m != nil {
return m.Remote
}
return ""
}
func (m *PluginSpec) GetPrivileges() []*PluginPrivilege {
if m != nil {
return m.Privileges
}
return nil
}
func (m *PluginSpec) GetDisabled() bool {
if m != nil {
return m.Disabled
}
return false
}
func (m *PluginSpec) GetEnv() []string {
if m != nil {
return m.Env
}
return nil
}
// PluginPrivilege describes a permission the user has to accept
// upon installing a plugin.
type PluginPrivilege struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
Value []string `protobuf:"bytes,3,rep,name=value,proto3" json:"value,omitempty"`
}
func (m *PluginPrivilege) Reset() { *m = PluginPrivilege{} }
func (m *PluginPrivilege) String() string { return proto.CompactTextString(m) }
func (*PluginPrivilege) ProtoMessage() {}
func (*PluginPrivilege) Descriptor() ([]byte, []int) {
return fileDescriptor_22a625af4bc1cc87, []int{1}
}
func (m *PluginPrivilege) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PluginPrivilege) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PluginPrivilege.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PluginPrivilege) XXX_Merge(src proto.Message) {
xxx_messageInfo_PluginPrivilege.Merge(m, src)
}
func (m *PluginPrivilege) XXX_Size() int {
return m.Size()
}
func (m *PluginPrivilege) XXX_DiscardUnknown() {
xxx_messageInfo_PluginPrivilege.DiscardUnknown(m)
}
var xxx_messageInfo_PluginPrivilege proto.InternalMessageInfo
func (m *PluginPrivilege) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *PluginPrivilege) GetDescription() string {
if m != nil {
return m.Description
}
return ""
}
func (m *PluginPrivilege) GetValue() []string {
if m != nil {
return m.Value
}
return nil
}
func init() {
proto.RegisterType((*PluginSpec)(nil), "PluginSpec")
proto.RegisterType((*PluginPrivilege)(nil), "PluginPrivilege")
}
func init() { proto.RegisterFile("plugin.proto", fileDescriptor_22a625af4bc1cc87) }
var fileDescriptor_22a625af4bc1cc87 = []byte{
// 225 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x29, 0xc8, 0x29, 0x4d,
0xcf, 0xcc, 0xd3, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x57, 0x9a, 0xc1, 0xc8, 0xc5, 0x15, 0x00, 0x16,
0x08, 0x2e, 0x48, 0x4d, 0x16, 0x12, 0xe2, 0x62, 0xc9, 0x4b, 0xcc, 0x4d, 0x95, 0x60, 0x54, 0x60,
0xd4, 0xe0, 0x0c, 0x02, 0xb3, 0x85, 0xc4, 0xb8, 0xd8, 0x8a, 0x52, 0x73, 0xf3, 0x4b, 0x52, 0x25,
0x98, 0xc0, 0xa2, 0x50, 0x9e, 0x90, 0x01, 0x17, 0x57, 0x41, 0x51, 0x66, 0x59, 0x66, 0x4e, 0x6a,
0x7a, 0x6a, 0xb1, 0x04, 0xb3, 0x02, 0xb3, 0x06, 0xb7, 0x91, 0x80, 0x1e, 0xc4, 0xb0, 0x00, 0x98,
0x44, 0x10, 0x92, 0x1a, 0x21, 0x29, 0x2e, 0x8e, 0x94, 0xcc, 0xe2, 0xc4, 0xa4, 0x9c, 0xd4, 0x14,
0x09, 0x16, 0x05, 0x46, 0x0d, 0x8e, 0x20, 0x38, 0x5f, 0x48, 0x80, 0x8b, 0x39, 0x35, 0xaf, 0x4c,
0x82, 0x55, 0x81, 0x59, 0x83, 0x33, 0x08, 0xc4, 0x54, 0x8a, 0xe5, 0xe2, 0x47, 0x33, 0x0c, 0xab,
0xf3, 0x14, 0xb8, 0xb8, 0x53, 0x52, 0x8b, 0x93, 0x8b, 0x32, 0x0b, 0x4a, 0x32, 0xf3, 0xf3, 0xa0,
0x6e, 0x44, 0x16, 0x12, 0x12, 0xe1, 0x62, 0x2d, 0x4b, 0xcc, 0x29, 0x4d, 0x05, 0xbb, 0x91, 0x33,
0x08, 0xc2, 0x71, 0x92, 0x38, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f, 0xe4,
0x18, 0x27, 0x3c, 0x96, 0x63, 0xb8, 0xf0, 0x58, 0x8e, 0xe1, 0xc6, 0x63, 0x39, 0x86, 0x24, 0x36,
0x70, 0xd0, 0x18, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0x37, 0xea, 0xe2, 0xca, 0x2a, 0x01, 0x00,
0x00,
}
func (m *PluginSpec) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *PluginSpec) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *PluginSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Env) > 0 {
for iNdEx := len(m.Env) - 1; iNdEx >= 0; iNdEx-- {
i -= len(m.Env[iNdEx])
copy(dAtA[i:], m.Env[iNdEx])
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Env[iNdEx])))
i--
dAtA[i] = 0x2a
}
}
if m.Disabled {
i--
if m.Disabled {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i--
dAtA[i] = 0x20
}
if len(m.Privileges) > 0 {
for iNdEx := len(m.Privileges) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Privileges[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintPlugin(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x1a
}
}
if len(m.Remote) > 0 {
i -= len(m.Remote)
copy(dAtA[i:], m.Remote)
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Remote)))
i--
dAtA[i] = 0x12
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dAtA[i:], m.Name)
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Name)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *PluginPrivilege) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *PluginPrivilege) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *PluginPrivilege) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Value) > 0 {
for iNdEx := len(m.Value) - 1; iNdEx >= 0; iNdEx-- {
i -= len(m.Value[iNdEx])
copy(dAtA[i:], m.Value[iNdEx])
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Value[iNdEx])))
i--
dAtA[i] = 0x1a
}
}
if len(m.Description) > 0 {
i -= len(m.Description)
copy(dAtA[i:], m.Description)
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Description)))
i--
dAtA[i] = 0x12
}
if len(m.Name) > 0 {
i -= len(m.Name)
copy(dAtA[i:], m.Name)
i = encodeVarintPlugin(dAtA, i, uint64(len(m.Name)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func encodeVarintPlugin(dAtA []byte, offset int, v uint64) int {
offset -= sovPlugin(v)
base := offset
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
dAtA[offset] = uint8(v)
return base
}
func (m *PluginSpec) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Name)
if l > 0 {
n += 1 + l + sovPlugin(uint64(l))
}
l = len(m.Remote)
if l > 0 {
n += 1 + l + sovPlugin(uint64(l))
}
if len(m.Privileges) > 0 {
for _, e := range m.Privileges {
l = e.Size()
n += 1 + l + sovPlugin(uint64(l))
}
}
if m.Disabled {
n += 2
}
if len(m.Env) > 0 {
for _, s := range m.Env {
l = len(s)
n += 1 + l + sovPlugin(uint64(l))
}
}
return n
}
func (m *PluginPrivilege) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Name)
if l > 0 {
n += 1 + l + sovPlugin(uint64(l))
}
l = len(m.Description)
if l > 0 {
n += 1 + l + sovPlugin(uint64(l))
}
if len(m.Value) > 0 {
for _, s := range m.Value {
l = len(s)
n += 1 + l + sovPlugin(uint64(l))
}
}
return n
}
func sovPlugin(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozPlugin(x uint64) (n int) {
return sovPlugin(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *PluginSpec) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: PluginSpec: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: PluginSpec: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Remote", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Remote = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Privileges", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Privileges = append(m.Privileges, &PluginPrivilege{})
if err := m.Privileges[len(m.Privileges)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Disabled", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
m.Disabled = bool(v != 0)
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Env", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Env = append(m.Env, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPlugin(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthPlugin
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *PluginPrivilege) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: PluginPrivilege: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: PluginPrivilege: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Description = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlugin
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Value = append(m.Value, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPlugin(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthPlugin
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipPlugin(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowPlugin
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowPlugin
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if dAtA[iNdEx-1] < 0x80 {
break
}
}
case 1:
iNdEx += 8
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowPlugin
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if length < 0 {
return 0, ErrInvalidLengthPlugin
}
iNdEx += length
case 3:
depth++
case 4:
if depth == 0 {
return 0, ErrUnexpectedEndOfGroupPlugin
}
depth--
case 5:
iNdEx += 4
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
if iNdEx < 0 {
return 0, ErrInvalidLengthPlugin
}
if depth == 0 {
return iNdEx, nil
}
}
return 0, io.ErrUnexpectedEOF
}
var (
ErrInvalidLengthPlugin = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowPlugin = fmt.Errorf("proto: integer overflow")
ErrUnexpectedEndOfGroupPlugin = fmt.Errorf("proto: unexpected end of group")
)


@@ -1,19 +0,0 @@
syntax = "proto3";
// PluginSpec defines the base payload which clients can specify for creating
// a service with the plugin runtime.
message PluginSpec {
string name = 1;
string remote = 2;
repeated PluginPrivilege privileges = 3;
bool disabled = 4;
repeated string env = 5;
}
// PluginPrivilege describes a permission the user has to accept
// upon installing a plugin.
message PluginPrivilege {
string name = 1;
string description = 2;
repeated string value = 3;
}


@@ -0,0 +1,27 @@
package runtime
import "fmt"
// PluginSpec defines the base payload which clients can specify for creating
// a service with the plugin runtime.
type PluginSpec struct {
Name string `json:"name,omitempty"`
Remote string `json:"remote,omitempty"`
Privileges []*PluginPrivilege `json:"privileges,omitempty"`
Disabled bool `json:"disabled,omitempty"`
Env []string `json:"env,omitempty"`
}
// PluginPrivilege describes a permission the user has to accept
// upon installing a plugin.
type PluginPrivilege struct {
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
Value []string `json:"value,omitempty"`
}
var (
ErrInvalidLengthPlugin = fmt.Errorf("proto: negative length found during unmarshaling") // Deprecated: this error was only used internally and is no longer used.
ErrIntOverflowPlugin = fmt.Errorf("proto: integer overflow") // Deprecated: this error was only used internally and is no longer used.
ErrUnexpectedEndOfGroupPlugin = fmt.Errorf("proto: unexpected end of group") // Deprecated: this error was only used internally and is no longer used.
)


@@ -4,7 +4,6 @@ import (
"time"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm/runtime"
)
// TaskState represents the state of a task.
@@ -77,7 +76,7 @@ type TaskSpec struct {
// NetworkAttachmentSpec is used if the `Runtime` field is set to
// `attachment`.
ContainerSpec *ContainerSpec `json:",omitempty"`
PluginSpec *runtime.PluginSpec `json:",omitempty"`
PluginSpec *RuntimeSpec `json:",omitempty"`
NetworkAttachmentSpec *NetworkAttachmentSpec `json:",omitempty"`
Resources *ResourceRequirements `json:",omitempty"`


@@ -1,17 +0,0 @@
package system
import (
"github.com/docker/docker/api/types/build"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/volume"
)
// DiskUsage contains response of Engine API for API 1.49 and greater:
// GET "/system/df"
type DiskUsage struct {
Images *image.DiskUsage
Containers *container.DiskUsage
Volumes *volume.DiskUsage
BuildCache *build.CacheDiskUsage
}


@@ -9,19 +9,23 @@ import (
// Info contains response of Engine API:
// GET "/info"
type Info struct {
ID string
Containers int
ContainersRunning int
ContainersPaused int
ContainersStopped int
Images int
Driver string
DriverStatus [][2]string
SystemStatus [][2]string `json:",omitempty"` // SystemStatus is only propagated by the Swarm standalone API
Plugins PluginsInfo
MemoryLimit bool
SwapLimit bool
KernelMemory bool `json:",omitempty"` // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
ID string
Containers int
ContainersRunning int
ContainersPaused int
ContainersStopped int
Images int
Driver string
DriverStatus [][2]string
SystemStatus [][2]string `json:",omitempty"` // SystemStatus is only propagated by the Swarm standalone API
Plugins PluginsInfo
MemoryLimit bool
SwapLimit bool
KernelMemory bool `json:",omitempty"` // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
// KernelMemoryLimit is not supported on cgroups v2.
//
// Deprecated: This field is deprecated and will be removed in the next release.
// Starting with kernel 6.12, the kernel has deprecated kernel memory tcp accounting
KernelMemoryTCP bool `json:",omitempty"` // KernelMemoryTCP is not supported on cgroups v2.
CPUCfsPeriod bool `json:"CpuCfsPeriod"`
CPUCfsQuota bool `json:"CpuCfsQuota"`


@@ -46,15 +46,16 @@ type NetworkSettings = container.NetworkSettings
// NetworkSettingsBase holds networking state for a container when inspecting it.
//
// Deprecated: use [container.NetworkSettingsBase].
type NetworkSettingsBase = container.NetworkSettingsBase
// Deprecated: [container.NetworkSettingsBase] will be removed in v29. Prefer
// accessing the fields it contains through [container.NetworkSettings].
type NetworkSettingsBase = container.NetworkSettingsBase //nolint:staticcheck // ignore SA1019: NetworkSettingsBase is deprecated in v28.4.
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
//
// Deprecated: use [container.DefaultNetworkSettings].
type DefaultNetworkSettings = container.DefaultNetworkSettings
type DefaultNetworkSettings = container.DefaultNetworkSettings //nolint:staticcheck // ignore SA1019: DefaultNetworkSettings is deprecated in v28.4.
// SummaryNetworkSettings provides a summary of container's networks
// in /containers/json.


@@ -1,6 +1,8 @@
package volume
// DiskUsage contains disk usage for volumes.
//
// Deprecated: this type is no longer used and will be removed in the next release.
type DiskUsage struct {
TotalSize int64
Reclaimable int64


@@ -463,7 +463,9 @@ func (cli *Client) dialer() func(context.Context) (net.Conn, error) {
case "unix":
return net.Dial(cli.proto, cli.addr)
case "npipe":
return sockets.DialPipe(cli.addr, 32*time.Second)
ctx, cancel := context.WithTimeout(ctx, 32*time.Second)
defer cancel()
return dialPipeContext(ctx, cli.addr)
default:
if tlsConfig := cli.tlsConfig(); tlsConfig != nil {
return tls.Dial(cli.proto, cli.addr, tlsConfig)


@@ -2,6 +2,17 @@
package client
import (
"context"
"net"
"syscall"
)
// DefaultDockerHost defines OS-specific default host if the DOCKER_HOST
// (EnvOverrideHost) environment variable is unset or empty.
const DefaultDockerHost = "unix:///var/run/docker.sock"
// dialPipeContext connects to a Windows named pipe. It is not supported on non-Windows.
func dialPipeContext(_ context.Context, _ string) (net.Conn, error) {
return nil, syscall.EAFNOSUPPORT
}


@@ -1,5 +1,17 @@
package client
import (
"context"
"net"
"github.com/Microsoft/go-winio"
)
// DefaultDockerHost defines OS-specific default host if the DOCKER_HOST
// (EnvOverrideHost) environment variable is unset or empty.
const DefaultDockerHost = "npipe:////./pipe/docker_engine"
// dialPipeContext connects to a Windows named pipe. It is not supported on non-Windows.
func dialPipeContext(ctx context.Context, addr string) (net.Conn, error) {
return winio.DialPipeContext(ctx, addr)
}


@@ -28,7 +28,7 @@ func (cli *Client) ContainerStats(ctx context.Context, containerID string, strea
return container.StatsResponseReader{
Body: resp.Body,
OSType: getDockerOS(resp.Header.Get("Server")),
OSType: resp.Header.Get("Ostype"),
}, nil
}
@@ -51,6 +51,6 @@ func (cli *Client) ContainerStatsOneShot(ctx context.Context, containerID string
return container.StatsResponseReader{
Body: resp.Body,
OSType: getDockerOS(resp.Header.Get("Server")),
OSType: resp.Header.Get("Ostype"),
}, nil
}


@@ -40,7 +40,7 @@ func (cli *Client) ImageBuild(ctx context.Context, buildContext io.Reader, optio
return build.ImageBuildResponse{
Body: resp.Body,
OSType: getDockerOS(resp.Header.Get("Server")),
OSType: resp.Header.Get("Ostype"),
}, nil
}


@@ -8,12 +8,9 @@ import (
cerrdefs "github.com/containerd/errdefs"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/internal/lazyregexp"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
var headerRegexp = lazyregexp.New(`\ADocker/.+\s\((.+)\)\z`)
type emptyIDError string
func (e emptyIDError) InvalidParameter() {}
@@ -31,16 +28,6 @@ func trimID(objType, id string) (string, error) {
return id, nil
}
// getDockerOS returns the operating system based on the server header from the daemon.
func getDockerOS(serverHeader string) string {
var osType string
matches := headerRegexp.FindStringSubmatch(serverHeader)
if len(matches) > 0 {
osType = matches[1]
}
return osType
}
// getFiltersQuery returns a url query with "filters" query term, based on the
// filters provided.
func getFiltersQuery(f filters.Args) (url.Values, error) {


@@ -1,90 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Code below was largely copied from golang.org/x/mod@v0.22;
// https://github.com/golang/mod/blob/v0.22.0/internal/lazyregexp/lazyre.go
// with some additional methods added.
// Package lazyregexp is a thin wrapper over regexp, allowing the use of global
// regexp variables without forcing them to be compiled at init.
package lazyregexp
import (
"os"
"regexp"
"strings"
"sync"
)
// Regexp is a wrapper around [regexp.Regexp], where the underlying regexp will be
// compiled the first time it is needed.
type Regexp struct {
str string
once sync.Once
rx *regexp.Regexp
}
func (r *Regexp) re() *regexp.Regexp {
r.once.Do(r.build)
return r.rx
}
func (r *Regexp) build() {
r.rx = regexp.MustCompile(r.str)
r.str = ""
}
func (r *Regexp) FindSubmatch(s []byte) [][]byte {
return r.re().FindSubmatch(s)
}
func (r *Regexp) FindAllStringSubmatch(s string, n int) [][]string {
return r.re().FindAllStringSubmatch(s, n)
}
func (r *Regexp) FindStringSubmatch(s string) []string {
return r.re().FindStringSubmatch(s)
}
func (r *Regexp) FindStringSubmatchIndex(s string) []int {
return r.re().FindStringSubmatchIndex(s)
}
func (r *Regexp) ReplaceAllString(src, repl string) string {
return r.re().ReplaceAllString(src, repl)
}
func (r *Regexp) FindString(s string) string {
return r.re().FindString(s)
}
func (r *Regexp) FindAllString(s string, n int) []string {
return r.re().FindAllString(s, n)
}
func (r *Regexp) MatchString(s string) bool {
return r.re().MatchString(s)
}
func (r *Regexp) ReplaceAllStringFunc(src string, repl func(string) string) string {
return r.re().ReplaceAllStringFunc(src, repl)
}
func (r *Regexp) SubexpNames() []string {
return r.re().SubexpNames()
}
var inTest = len(os.Args) > 0 && strings.HasSuffix(strings.TrimSuffix(os.Args[0], ".exe"), ".test")
// New creates a new lazy regexp, delaying the compiling work until it is first
// needed. If the code is being run as part of tests, the regexp compiling will
// happen immediately.
func New(str string) *Regexp {
lr := &Regexp{str: str}
if inTest {
// In tests, always compile the regexps early.
lr.re()
}
return lr
}


@@ -1,46 +0,0 @@
package multierror
import (
"strings"
)
// Join is a drop-in replacement for errors.Join with better formatting.
func Join(errs ...error) error {
n := 0
for _, err := range errs {
if err != nil {
n++
}
}
if n == 0 {
return nil
}
e := &joinError{
errs: make([]error, 0, n),
}
for _, err := range errs {
if err != nil {
e.errs = append(e.errs, err)
}
}
return e
}
type joinError struct {
errs []error
}
func (e *joinError) Error() string {
if len(e.errs) == 1 {
return strings.TrimSpace(e.errs[0].Error())
}
stringErrs := make([]string, 0, len(e.errs))
for _, subErr := range e.errs {
stringErrs = append(stringErrs, strings.ReplaceAll(subErr.Error(), "\n", "\n\t"))
}
return "* " + strings.Join(stringErrs, "\n* ")
}
func (e *joinError) Unwrap() []error {
return e.errs
}


@@ -151,9 +151,9 @@ type JSONMessage struct {
// Deprecated: this field is deprecated since docker v0.7.1 / API v1.8. Use the information in [Progress] instead. This field will be omitted in a future release.
ProgressMessage string `json:"progress,omitempty"`
ID string `json:"id,omitempty"`
From string `json:"from,omitempty"`
Time int64 `json:"time,omitempty"`
TimeNano int64 `json:"timeNano,omitempty"`
From string `json:"from,omitempty"` // Deprecated: this field is no longer set in stream responses and should not be used.
Time int64 `json:"time,omitempty"` // Deprecated: this field is no longer set in stream responses and should not be used.
TimeNano int64 `json:"timeNano,omitempty"` // Deprecated: this field is no longer set in stream responses and should not be used.
Error *JSONError `json:"errorDetail,omitempty"`
// ErrorMessage contains errors encountered during the operation.


@@ -46,10 +46,27 @@ type Options struct {
WatchRoot string `mapstructure:"watch_root"` // base directory for the watch. events will be considered relative to this path
WatchNotificationBrokers string `mapstructure:"watch_notification_brokers"`
NatsWatcher NatsWatcherConfig `mapstructure:"natswatcher"`
// InotifyWatcher specific options
InotifyStatsFrequency time.Duration `mapstructure:"inotify_stats_frequency"`
}
// NatsWatcherConfig is the configuration needed for a NATS watcher event stream.
type NatsWatcherConfig struct {
Endpoint string `mapstructure:"address"`
Cluster string `mapstructure:"clusterID"`
Stream string `mapstructure:"stream"`
Durable string `mapstructure:"durable-name"`
TLSInsecure bool `mapstructure:"tls-insecure"`
TLSRootCACertificate string `mapstructure:"tls-root-ca-cert"`
EnableTLS bool `mapstructure:"enable-tls"`
AuthUsername string `mapstructure:"username"`
AuthPassword string `mapstructure:"password"`
MaxAckPending int `mapstructure:"max-ack-pending"`
AckWait time.Duration `mapstructure:"ack-wait"`
}
// New returns a new Options instance for the given configuration
func New(m map[string]interface{}) (*Options, error) {
// default to hybrid metadatabackend for posixfs


@@ -21,6 +21,7 @@ package trashbin
import (
"context"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
@@ -589,6 +590,10 @@ func (tb *Trashbin) IsEmpty(ctx context.Context, spaceID string) bool {
}
dirItems, err := trash.ReadDir(1)
if err != nil {
if err == io.EOF {
// empty trash
return true
}
// if we cannot read the trash, we assume there are no trashed items
tb.log.Error().Err(err).Str("spaceID", spaceID).Msg("trashbin: error reading trash directory")
return true


@@ -39,6 +39,7 @@ import (
provider "github.com/cs3org/go-cs3apis/cs3/storage/provider/v1beta1"
"github.com/opencloud-eu/reva/v2/pkg/events"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/metadata"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/metadata/prefixes"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/node"
@@ -54,16 +55,6 @@ type ScanDebouncer struct {
mutex sync.Mutex
}
type EventAction int
const (
ActionCreate EventAction = iota
ActionUpdate
ActionMove
ActionDelete
ActionMoveFrom
)
type queueItem struct {
item scanItem
timer *time.Timer
@@ -190,10 +181,10 @@ func (t *Tree) workScanQueue() {
}
// Scan scans the given path and updates the id cache
func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
func (t *Tree) Scan(path string, action watcher.EventAction, isDir bool) error {
// cases:
switch action {
case ActionCreate:
case watcher.ActionCreate:
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("scanning path (ActionCreate)")
if !isDir {
// 1. New file (could be emitted as part of a new directory)
@@ -225,7 +216,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
})
}
case ActionUpdate:
case watcher.ActionUpdate:
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("scanning path (ActionUpdate)")
// 3. Updated file
// -> update file unless parent directory is being rescanned
@@ -241,7 +232,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
AssimilationCounter.WithLabelValues(_labelDir, _labelUpdated).Inc()
}
case ActionMove:
case watcher.ActionMove:
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("scanning path (ActionMove)")
// 4. Moved file
// -> update file
@@ -258,7 +249,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
AssimilationCounter.WithLabelValues(_labelDir, _labelMoved).Inc()
}
case ActionMoveFrom:
case watcher.ActionMoveFrom:
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("scanning path (ActionMoveFrom)")
// 6. file/directory moved out of the watched directory
// -> remove from caches
@@ -279,7 +270,7 @@ func (t *Tree) Scan(path string, action EventAction, isDir bool) error {
// We do not do metrics here because this has been handled in `ActionMove`
case ActionDelete:
case watcher.ActionDelete:
t.log.Debug().Str("path", path).Bool("isDir", isDir).Msg("handling deleted item")
// 7. Deleted file or directory
@@ -426,6 +417,15 @@ func (t *Tree) assimilate(item scanItem) error {
}
}
fi, err := os.Lstat(item.Path)
if err != nil {
return err
}
if !fi.IsDir() && !fi.Mode().IsRegular() {
t.log.Trace().Str("path", item.Path).Msg("skipping non-regular file")
return nil
}
if id != "" {
// the file has an id set, we already know it from the past
@@ -451,20 +451,10 @@ func (t *Tree) assimilate(item scanItem) error {
// compare metadata mtime with actual mtime. if it matches AND the path hasn't changed (move operation)
// we can skip the assimilation because the file was handled by us
fi, err := os.Lstat(item.Path)
if err != nil {
return err
}
if previousPath == item.Path && mtime.Equal(fi.ModTime()) {
return nil
}
if !fi.IsDir() && !fi.Mode().IsRegular() {
t.log.Trace().Str("path", item.Path).Msg("skipping non-regular file")
return nil
}
// was it moved or copied/restored with a clashing id?
if ok && len(parentID) > 0 && previousPath != item.Path {
_, err := os.Stat(previousPath)
@@ -675,6 +665,7 @@ assimilate:
}
var n *node.Node
sizeDiff := int64(0)
if fi.IsDir() {
// The Space's name attribute might not match the directory name. Use the name as
// it was set before. Also the space root doesn't have a 'type' attribute
@@ -712,44 +703,46 @@ assimilate:
n.SpaceRoot = &node.Node{BaseNode: node.BaseNode{SpaceID: spaceID, ID: spaceID}}
prevBlobSize, err := previousAttribs.Int64(prefixes.BlobsizeAttr)
if err == nil && prevBlobSize != fi.Size() {
// file size changed, trigger propagation of tree size changes
err = t.Propagate(context.Background(), n, fi.Size()-prevBlobSize)
if err != nil {
t.log.Error().Err(err).Str("path", path).Msg("could not propagate tree size changes")
}
if err != nil || prevBlobSize < 0 {
prevBlobSize = 0
}
if prevBlobSize != fi.Size() {
sizeDiff = fi.Size() - prevBlobSize
}
}
attributes.SetTime(prefixes.MTimeAttr, fi.ModTime())
n.SpaceRoot = &node.Node{BaseNode: node.BaseNode{SpaceID: spaceID, ID: spaceID}}
if t.options.EnableFSRevisions {
if !fi.IsDir() && t.options.EnableFSRevisions {
go func() {
// Copy the previous current version to a revision
currentNode := node.NewBaseNode(n.SpaceID, n.ID+node.CurrentIDDelimiter, t.lookup)
currentPath := currentNode.InternalPath()
stat, err := os.Stat(currentPath)
if err != nil {
t.log.Error().Err(err).Str("path", path).Str("currentPath", currentPath).Msg("could not stat current path")
return
}
revisionPath := t.lookup.VersionPath(n.SpaceID, n.ID, stat.ModTime().UTC().Format(time.RFC3339Nano))
if err == nil {
revisionPath := t.lookup.VersionPath(n.SpaceID, n.ID, stat.ModTime().UTC().Format(time.RFC3339Nano))
err = os.Rename(currentPath, revisionPath)
if err != nil {
t.log.Error().Err(err).Str("path", path).Str("revisionPath", revisionPath).Msg("could not create revision")
return
err = os.Rename(currentPath, revisionPath)
if err != nil {
t.log.Error().Err(err).Str("path", path).Str("revisionPath", revisionPath).Msg("could not create revision")
return
}
}
// Copy the new version to the current version
if err := os.MkdirAll(filepath.Dir(currentPath), 0700); err != nil {
t.log.Error().Err(err).Str("path", path).Str("currentPath", currentPath).Msg("could not create base path for current file")
return
}
w, err := os.OpenFile(currentPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
if err != nil {
t.log.Error().Err(err).Str("path", path).Str("currentPath", currentPath).Msg("could not open current path for writing")
return
}
defer w.Close()
r, err := os.OpenFile(n.InternalPath(), os.O_RDONLY, 0600)
r, err := os.OpenFile(path, os.O_RDONLY, 0600)
if err != nil {
t.log.Error().Err(err).Str("path", path).Msg("could not open file for reading")
return
@@ -775,7 +768,7 @@ assimilate:
}()
}
err = t.Propagate(context.Background(), n, 0)
err = t.Propagate(context.Background(), n, sizeDiff)
if err != nil {
return nil, nil, errors.Wrap(err, "failed to propagate")
}


@@ -8,6 +8,7 @@ import (
"encoding/json"
"path/filepath"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
kafka "github.com/segmentio/kafka-go"
@@ -97,17 +98,17 @@ func (w *CephFSWatcher) Watch(topic string) {
go func() {
switch {
case mask&CEPH_MDS_NOTIFY_DELETE > 0:
err = w.tree.Scan(path, ActionDelete, isDir)
err = w.tree.Scan(path, watcher.ActionDelete, isDir)
case mask&CEPH_MDS_NOTIFY_MOVED_TO > 0:
if ev.SrcMask > 0 {
// This is a move, clean up the old path
err = w.tree.Scan(filepath.Join(w.tree.options.WatchRoot, ev.SrcPath), ActionMoveFrom, isDir)
err = w.tree.Scan(filepath.Join(w.tree.options.WatchRoot, ev.SrcPath), watcher.ActionMoveFrom, isDir)
}
err = w.tree.Scan(path, ActionMove, isDir)
err = w.tree.Scan(path, watcher.ActionMove, isDir)
case mask&CEPH_MDS_NOTIFY_CREATE > 0:
err = w.tree.Scan(path, ActionCreate, isDir)
err = w.tree.Scan(path, watcher.ActionCreate, isDir)
case mask&CEPH_MDS_NOTIFY_CLOSE_WRITE > 0:
err = w.tree.Scan(path, ActionUpdate, isDir)
err = w.tree.Scan(path, watcher.ActionUpdate, isDir)
case mask&CEPH_MDS_NOTIFY_CLOSE > 0:
// ignore, already handled by CLOSE_WRITE
default:


@@ -26,6 +26,7 @@ import (
"strconv"
"time"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/rs/zerolog"
)
@@ -88,15 +89,15 @@ start:
go func() {
switch ev.Event {
case "CREATE":
err = w.tree.Scan(ev.Path, ActionCreate, false)
err = w.tree.Scan(ev.Path, watcher.ActionCreate, false)
case "CLOSE":
var bytesWritten int
bytesWritten, err = strconv.Atoi(ev.BytesWritten)
if err == nil && bytesWritten > 0 {
err = w.tree.Scan(ev.Path, ActionUpdate, false)
err = w.tree.Scan(ev.Path, watcher.ActionUpdate, false)
}
case "RENAME":
err = w.tree.Scan(ev.Path, ActionMove, false)
err = w.tree.Scan(ev.Path, watcher.ActionMove, false)
if warmupErr := w.tree.WarmupIDCache(ev.Path, false, false); warmupErr != nil {
w.log.Error().Err(warmupErr).Str("path", ev.Path).Msg("error warming up id cache")
}


@@ -26,6 +26,7 @@ import (
"strconv"
"strings"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/rs/zerolog"
kafka "github.com/segmentio/kafka-go"
)
@@ -77,21 +78,21 @@ func (w *GpfsWatchFolderWatcher) Watch(topic string) {
var err error
switch {
case strings.Contains(lwev.Event, "IN_DELETE"):
err = w.tree.Scan(path, ActionDelete, isDir)
err = w.tree.Scan(path, watcher.ActionDelete, isDir)
case strings.Contains(lwev.Event, "IN_MOVE_FROM"):
err = w.tree.Scan(path, ActionMoveFrom, isDir)
err = w.tree.Scan(path, watcher.ActionMoveFrom, isDir)
case strings.Contains(lwev.Event, "IN_CREATE"):
err = w.tree.Scan(path, ActionCreate, isDir)
err = w.tree.Scan(path, watcher.ActionCreate, isDir)
case strings.Contains(lwev.Event, "IN_CLOSE_WRITE"):
bytesWritten, convErr := strconv.Atoi(lwev.BytesWritten)
if convErr == nil && bytesWritten > 0 {
err = w.tree.Scan(path, ActionUpdate, isDir)
err = w.tree.Scan(path, watcher.ActionUpdate, isDir)
}
case strings.Contains(lwev.Event, "IN_MOVED_TO"):
err = w.tree.Scan(path, ActionMove, isDir)
err = w.tree.Scan(path, watcher.ActionMove, isDir)
}
if err != nil {
w.log.Error().Err(err).Str("path", path).Msg("error scanning path")


@@ -30,6 +30,7 @@ import (
"time"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/options"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/pablodz/inotifywaitgo/inotifywaitgo"
"github.com/rs/zerolog"
slogzerolog "github.com/samber/slog-zerolog/v2"
@@ -96,15 +97,15 @@ func (iw *InotifyWatcher) Watch(path string) {
var err error
switch e {
case inotifywaitgo.DELETE:
err = iw.tree.Scan(event.Filename, ActionDelete, event.IsDir)
err = iw.tree.Scan(event.Filename, watcher.ActionDelete, event.IsDir)
case inotifywaitgo.MOVED_FROM:
err = iw.tree.Scan(event.Filename, ActionMoveFrom, event.IsDir)
err = iw.tree.Scan(event.Filename, watcher.ActionMoveFrom, event.IsDir)
case inotifywaitgo.MOVED_TO:
err = iw.tree.Scan(event.Filename, ActionMove, event.IsDir)
err = iw.tree.Scan(event.Filename, watcher.ActionMove, event.IsDir)
case inotifywaitgo.CREATE:
err = iw.tree.Scan(event.Filename, ActionCreate, event.IsDir)
err = iw.tree.Scan(event.Filename, watcher.ActionCreate, event.IsDir)
case inotifywaitgo.CLOSE_WRITE:
err = iw.tree.Scan(event.Filename, ActionUpdate, event.IsDir)
err = iw.tree.Scan(event.Filename, watcher.ActionUpdate, event.IsDir)
case inotifywaitgo.CLOSE:
// ignore, already handled by CLOSE_WRITE
default:


@@ -47,6 +47,7 @@ import (
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/lookup"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/options"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/trashbin"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher/natswatcher"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/metadata"
"github.com/opencloud-eu/reva/v2/pkg/storage/pkg/decomposedfs/metadata/prefixes"
@@ -147,6 +148,11 @@ func New(lu node.PathLookup, bs node.Blobstore, um usermapper.Mapper, trashbin *
if err != nil {
return nil, err
}
case "natswatcher":
t.watcher, err = natswatcher.New(context.TODO(), t, o.NatsWatcher, o.WatchRoot, log)
if err != nil {
return nil, err
}
default:
t.watcher, err = NewInotifyWatcher(t, o, log)
if err != nil {
@@ -499,8 +505,18 @@ func (t *Tree) ListFolder(ctx context.Context, n *node.Node) ([]*node.Node, erro
_, nodeID, err := t.lookup.IDsForPath(ctx, path)
if err != nil {
t.log.Error().Err(err).Str("path", path).Msg("failed to get ids for entry")
continue
// we don't know about this node yet for some reason, assimilate it on the fly
t.log.Info().Err(err).Str("path", path).Msg("encountered unknown entity while listing the directory. Assimilate.")
err = t.assimilate(scanItem{Path: path})
if err != nil {
t.log.Error().Err(err).Str("path", path).Msg("failed to assimilate node")
continue
}
_, nodeID, err = t.lookup.IDsForPath(ctx, path)
if err != nil || nodeID == "" {
t.log.Error().Err(err).Str("path", path).Msg("still could not resolve node after assimilation")
continue
}
}
child, err := node.ReadNode(ctx, t.lookup, n.SpaceID, nodeID, false, n.SpaceRoot, true)
@@ -708,9 +724,23 @@ func (t *Tree) createDirNode(ctx context.Context, n *node.Node) (err error) {
t.log.Error().Err(err).Str("spaceID", n.SpaceID).Str("id", n.ID).Str("path", path).Msg("could not cache id")
}
// Write mtime from filesystem to metadata to prevent re-assimilation
d, err := os.Open(path)
if err != nil {
return err
}
fi, err := d.Stat()
if err != nil {
return err
}
mtime := fi.ModTime()
attributes := n.NodeMetadata(ctx)
attributes[prefixes.MTimeAttr] = []byte(mtime.UTC().Format(time.RFC3339Nano))
attributes[prefixes.IDAttr] = []byte(n.ID)
attributes[prefixes.TreesizeAttr] = []byte("0") // initialize as empty, TODO why bother? if it is not set we could treat it as 0?
if t.options.TreeTimeAccounting || t.options.TreeSizeAccounting {
attributes[prefixes.PropagationAttr] = []byte("1") // mark the node for propagation
}


@@ -0,0 +1,11 @@
package watcher
type EventAction int
const (
ActionCreate EventAction = iota
ActionUpdate
ActionMove
ActionDelete
ActionMoveFrom
)


@@ -0,0 +1,236 @@
package natswatcher
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"os"
"path/filepath"
"time"
"github.com/cenkalti/backoff"
"github.com/nats-io/nats.go"
"github.com/nats-io/nats.go/jetstream"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/options"
"github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher"
"github.com/rs/zerolog"
"github.com/vmihailenco/msgpack/v5"
)
// natsEvent represents the event encoded in MessagePack.
// we abbreviate the property names to save some space
type natsEvent struct {
Event string `msgpack:"e"`
Path string `msgpack:"p,omitempty"`
ToPath string `msgpack:"t,omitempty"`
IsDir bool `msgpack:"d,omitempty"`
}
// NatsWatcher consumes filesystem-style events from NATS JetStream.
type NatsWatcher struct {
ctx context.Context
tree Scannable
log *zerolog.Logger
watchRoot string
config options.NatsWatcherConfig
}
type Scannable interface {
Scan(path string, action watcher.EventAction, isDir bool) error
}
// New creates a new NATS watcher.
func New(ctx context.Context, tree Scannable, cfg options.NatsWatcherConfig, watchRoot string, log *zerolog.Logger) (*NatsWatcher, error) {
return &NatsWatcher{
ctx: ctx,
tree: tree,
log: log,
watchRoot: watchRoot,
config: cfg,
}, nil
}
// Watch starts consuming events from a NATS JetStream subject
func (w *NatsWatcher) Watch(path string) {
w.log.Info().Str("stream", w.config.Stream).Msg("starting NATS watcher with auto-reconnect")
for {
select {
case <-w.ctx.Done():
w.log.Debug().Msg("context cancelled, stopping NATS watcher")
return
default:
}
// Try to connect with exponential backoff
nc, js, err := w.connectWithBackoff()
if err != nil {
w.log.Error().Err(err).Msg("failed to establish NATS connection after retries")
time.Sleep(5 * time.Second)
continue
}
if err := w.consume(js); err != nil {
w.log.Error().Err(err).Msg("NATS consumer exited with error, reconnecting")
}
_ = nc.Drain()
nc.Close()
time.Sleep(2 * time.Second)
}
}
// connectWithBackoff repeatedly attempts to connect to NATS JetStream with exponential backoff.
func (w *NatsWatcher) connectWithBackoff() (*nats.Conn, jetstream.JetStream, error) {
var nc *nats.Conn
var js jetstream.JetStream
b := backoff.NewExponentialBackOff()
b.InitialInterval = 1 * time.Second
b.MaxInterval = 30 * time.Second
b.MaxElapsedTime = 0 // never stop
connect := func() error {
select {
case <-w.ctx.Done():
return backoff.Permanent(w.ctx.Err())
default:
}
var err error
nc, err = w.connect()
if err != nil {
w.log.Warn().Err(err).Msg("failed to connect to NATS, retrying")
return err
}
js, err = jetstream.New(nc)
if err != nil {
nc.Close()
w.log.Warn().Err(err).Msg("failed to create jetstream context, retrying")
return err
}
w.log.Info().Str("endpoint", w.config.Endpoint).Msg("connected to NATS JetStream")
return nil
}
if err := backoff.Retry(connect, backoff.WithContext(b, w.ctx)); err != nil {
return nil, nil, err
}
return nc, js, nil
}
// consume subscribes to JetStream and handles messages.
func (w *NatsWatcher) consume(js jetstream.JetStream) error {
stream, err := js.Stream(w.ctx, w.config.Stream)
if err != nil {
return fmt.Errorf("failed to get stream: %w", err)
}
consumer, err := stream.CreateOrUpdateConsumer(w.ctx, jetstream.ConsumerConfig{
Durable: w.config.Durable,
AckPolicy: jetstream.AckExplicitPolicy,
MaxAckPending: w.config.MaxAckPending,
AckWait: w.config.AckWait,
})
if err != nil {
return fmt.Errorf("failed to create consumer: %w", err)
}
w.log.Info().
Str("stream", w.config.Stream).
Msg("started consuming from JetStream")
_, err = consumer.Consume(func(msg jetstream.Msg) {
defer func() {
if ackErr := msg.Ack(); ackErr != nil {
w.log.Warn().Err(ackErr).Msg("failed to ack message")
}
}()
var ev natsEvent
if err := msgpack.Unmarshal(msg.Data(), &ev); err != nil {
w.log.Error().Err(err).Msg("failed to decode MessagePack event")
return
}
w.handleEvent(ev)
})
if err != nil {
return fmt.Errorf("consumer error: %w", err)
}
<-w.ctx.Done()
return w.ctx.Err()
}
// connect establishes a single NATS connection with optional TLS and auth.
func (w *NatsWatcher) connect() (*nats.Conn, error) {
var tlsConf *tls.Config
if w.config.EnableTLS {
var rootCAPool *x509.CertPool
if w.config.TLSRootCACertificate != "" {
rootCrtFile, err := os.ReadFile(w.config.TLSRootCACertificate)
if err != nil {
return nil, fmt.Errorf("failed to read root CA: %w", err)
}
rootCAPool = x509.NewCertPool()
rootCAPool.AppendCertsFromPEM(rootCrtFile)
w.config.TLSInsecure = false
}
tlsConf = &tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: w.config.TLSInsecure,
RootCAs: rootCAPool,
}
}
opts := []nats.Option{nats.Name("opencloud-posixfs-natswatcher")}
if tlsConf != nil {
opts = append(opts, nats.Secure(tlsConf))
}
if w.config.AuthUsername != "" && w.config.AuthPassword != "" {
opts = append(opts, nats.UserInfo(w.config.AuthUsername, w.config.AuthPassword))
}
return nats.Connect(w.config.Endpoint, opts...)
}
// handleEvent applies the event to the local tree.
func (w *NatsWatcher) handleEvent(ev natsEvent) {
var err error
// Determine the relevant path
path := filepath.Join(w.watchRoot, ev.Path)
switch ev.Event {
case "CREATE":
err = w.tree.Scan(path, watcher.ActionCreate, ev.IsDir)
case "MOVED_TO":
err = w.tree.Scan(path, watcher.ActionMove, ev.IsDir)
case "MOVE_FROM":
err = w.tree.Scan(path, watcher.ActionMoveFrom, ev.IsDir)
case "MOVE": // support event with source and target path
err = w.tree.Scan(path, watcher.ActionMoveFrom, ev.IsDir)
if err != nil {
w.log.Error().Err(err).Interface("event", ev).Msg("error processing event")
}
if ev.ToPath == "" {
w.log.Warn().Interface("event", ev).Msg("MOVE event missing target path")
} else {
err = w.tree.Scan(filepath.Join(w.watchRoot, ev.ToPath), watcher.ActionMove, ev.IsDir)
}
case "CLOSE_WRITE":
err = w.tree.Scan(path, watcher.ActionUpdate, ev.IsDir)
case "DELETE":
err = w.tree.Scan(path, watcher.ActionDelete, ev.IsDir)
default:
w.log.Warn().Str("event", ev.Event).Msg("unhandled event type")
}
if err != nil {
w.log.Error().Err(err).Interface("event", ev).Msg("error processing event")
}
}


@@ -2,6 +2,7 @@ package metadata
import (
"context"
"fmt"
"io"
"io/fs"
"os"
@@ -292,17 +293,19 @@ func (b HybridBackend) SetMultiple(ctx context.Context, n MetadataNode, attribs
}
}
xerrs := 0
total := 0
var xerr error
// error handling: Count if there are errors while setting the attribs.
// if there were any, return an error.
for key, val := range attribs {
total++
if xerr = xattr.Set(path, key, val); xerr != nil {
// log
xerrs++
}
}
if xerrs > 0 {
return errors.Wrap(xerr, "Failed to set all xattrs")
return fmt.Errorf("failed to set %d/%d xattrs: %w", xerrs, total, xerr)
}
attribs, err = b.getAll(ctx, n, true, false, false)


@@ -12,10 +12,12 @@ linters:
enable:
- errorlint
- gocritic
- govet
- misspell
- nakedret
- nolintlint
- perfsprint
- prealloc
- revive
- testifylint
- thelper
@@ -32,6 +34,11 @@ linters:
comparison: true
errorf: true
errorf-multi: true
govet:
disable:
- fieldalignment
- shadow
enable-all: true
revive:
rules:
- name: blank-imports
@@ -85,5 +92,6 @@ output:
formats:
text:
path: stdout
path-prefix: .
run:
relative-path-mode: gitroot
version: "2"


@@ -0,0 +1,229 @@
# AI Coding Agent Guidelines
This document provides guidelines for AI coding agents working on the Testcontainers for Go repository.
## Repository Overview
This is a **Go monorepo** containing:
- **Core library**: Root directory contains the main testcontainers-go library
- **Modules**: `./modules/` directory with 50+ technology-specific modules (postgres, redis, kafka, etc.)
- **Examples**: `./examples/` directory with example implementations
- **Module generator**: `./modulegen/` directory with tools to generate new modules
- **Documentation**: `./docs/` directory with MkDocs-based documentation
## Environment Setup
### Go Version
- **Required**: Go 1.24.7
- **Tool**: Use [gvm](https://github.com/andrewkroh/gvm) for version management
- **CRITICAL**: Always run this before ANY Go command:
```bash
# For Apple Silicon (M1/M2/M3)
eval "$(gvm 1.24.7 --arch=arm64)"
# For Intel/AMD (x86_64)
eval "$(gvm 1.24.7 --arch=amd64)"
```
### Project Structure
Each module in `./modules/` is a separate Go module with:
- `go.mod` / `go.sum` - Module dependencies
- `{module}.go` - Main module implementation
- `{module}_test.go` - Unit tests
- `examples_test.go` - Testable examples for documentation
- `Makefile` - Standard targets: `pre-commit`, `test-unit`
## Development Workflow
### Before Making Changes
1. **Read existing code** in similar modules for patterns
2. **Check documentation** in `docs/modules/index.md` for best practices
3. **Run tests** to ensure baseline passes
### Working with Modules
1. **Change to module directory**: `cd modules/{module-name}`
2. **Run pre-commit checks**: `make pre-commit` (linting, formatting, tidy)
3. **Run tests**: `make test-unit`
4. **Both together**: `make pre-commit test-unit`
### Git Workflow
- **Branch naming**: Use descriptive names like `chore-module-use-run`, `feat-add-xyz`, `fix-module-issue`
- **NEVER** use `main` branch for PRs (they will be auto-closed)
- **Commit format**: Conventional commits (enforced by CI)
```text
type(scope): description
Longer explanation if needed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
- **Commit types** (enforced): `security`, `fix`, `feat`, `docs`, `chore`, `deps`
- **Scope rules**:
- Optional (can be omitted for repo-level changes)
- Must be lowercase (uppercase scopes are rejected)
- Examples: `feat(redis)`, `chore(kafka)`, `docs`, `fix(postgres)`
- **Subject rules**:
- Must NOT start with uppercase letter
- ✅ Good: `feat(redis): add support for clustering`
- ❌ Bad: `feat(redis): Add support for clustering`
- **Breaking changes**: Add `!` after type: `feat(redis)!: remove deprecated API`
- **Always include co-author footer** when AI assists with changes
### Pull Requests
- **Title format**: Same as commit format (validated by CI)
- `type(scope): description`
- Examples: `feat(redis): add clustering support`, `docs: improve module guide`, `chore(kafka): update tests`
- **Title validation** enforced by `.github/workflows/conventions.yml`
- **Labels**: Use appropriate labels (`chore`, `breaking change`, `documentation`, etc.)
- **Body template**:
```markdown
## What does this PR do?
Brief description of changes.
## Why is it important?
Context and rationale.
## Related issues
- Relates to #issue-number
```
## Module Development Best Practices
**📖 Detailed guide**: See [`docs/modules/index.md`](docs/modules/index.md) for comprehensive module development documentation.
### Quick Reference
#### Container Struct
- **Name**: Use `Container`, not module-specific names like `PostgresContainer`
- **Fields**: Use private fields for state management
- **Embedding**: Always embed `testcontainers.Container`
```go
type Container struct {
testcontainers.Container
dbName string // private
user string // private
}
```
#### Run Function Pattern
Five-step implementation:
1. Process custom options (if using intermediate settings)
2. Build `moduleOpts` with defaults
3. Add conditional options based on settings
4. Append user options (allows overrides)
5. Call `testcontainers.Run` and return with proper error wrapping
```go
func Run(ctx context.Context, img string, opts ...testcontainers.ContainerCustomizer) (*Container, error) {
// See docs/modules/index.md for complete implementation
moduleOpts := []testcontainers.ContainerCustomizer{
testcontainers.WithExposedPorts("5432/tcp"),
// ... defaults
}
moduleOpts = append(moduleOpts, opts...)
ctr, err := testcontainers.Run(ctx, img, moduleOpts...)
if err != nil {
return nil, fmt.Errorf("run modulename: %w", err)
}
return &Container{Container: ctr}, nil
}
```
#### Container Options
- **Simple config**: Use built-in `testcontainers.With*` options
- **Complex logic**: Use `testcontainers.CustomizeRequestOption`
- **State transfer**: Create a custom `Option` type (see the sketch after the critical rules below)
**Critical rules:**
- ✅ Return struct types (not interfaces)
- ✅ Call built-in options directly: `testcontainers.WithFiles(f)(req)`
- ❌ Don't use `.Customize()` method
- ❌ Don't pass slices to variadic functions
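For the state-transfer case, a minimal sketch of a custom `Option` type is shown below. The `settings` struct, the `WithUser` option, and the exact `ContainerCustomizer` wiring are illustrative assumptions; check `docs/modules/index.md` and existing modules for the authoritative pattern.
```go
package mymodule

import "github.com/testcontainers/testcontainers-go"

// settings collects intermediate state that custom options hand over to Run.
type settings struct {
	user string
}

// Option is a module-specific option used to transfer state into settings.
type Option func(*settings)

// Customize makes Option satisfy testcontainers.ContainerCustomizer so it can
// be mixed with built-in options; it intentionally leaves the request untouched.
func (o Option) Customize(*testcontainers.GenericContainerRequest) error {
	return nil
}

// WithUser records the user in settings instead of mutating the request.
func WithUser(user string) Option {
	return func(s *settings) {
		s.user = user
	}
}
```
Inside `Run`, the custom options are applied to a `settings` value first (step 1 of the five-step pattern above), and the resulting state drives which conditional options get added.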
#### Common Patterns
- **Env inspection**: Use `strings.CutPrefix` with early exit (sketched below)
- **Variadic args**: Pass directly, not as slices
- **Option order**: defaults → user options → post-processing
- **Error format**: `fmt.Errorf("run modulename: %w", err)`
**For complete examples and detailed explanations**, see [`docs/modules/index.md`](docs/modules/index.md).
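As an illustration of the env-inspection pattern, here is a minimal sketch of a helper using `strings.CutPrefix` with an early exit; the helper name and the `KEY=VALUE` convention are assumptions for this example, not part of the testcontainers-go API.
```go
package mymodule

import "strings"

// lookupEnv scans a KEY=VALUE slice (for example, the Env list returned by a
// container inspect) for key and returns its value, exiting early on the
// first match.
func lookupEnv(env []string, key string) (string, bool) {
	for _, kv := range env {
		if v, ok := strings.CutPrefix(kv, key+"="); ok {
			return v, true // early exit on first match
		}
	}
	return "", false
}
```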
## Testing Guidelines
### Running Tests
- **From module directory**: `cd modules/{module} && make test-unit`
- **Pre-commit checks**: `make pre-commit` (run this first to catch lint issues)
- **Full check**: `make pre-commit test-unit`
### Test Patterns
- Use testable examples in `examples_test.go`
- Follow existing test patterns in similar modules
- Test both success and error cases
- Use `t.Parallel()` when tests are independent (see the sketch below)
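A minimal sketch of a parallel unit test for a hypothetical `mymodule` (the import path, image, and `Run` signature follow the Run-function pattern above and are placeholders):
```go
package mymodule_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/testcontainers/testcontainers-go/modules/mymodule" // hypothetical module path
)

func TestRunBasic(t *testing.T) {
	t.Parallel() // independent of other tests, so it may run in parallel

	ctx := context.Background()
	ctr, err := mymodule.Run(ctx, "docker.io/myimage:tag")
	require.NoError(t, err)

	// Terminate the container when the test finishes.
	t.Cleanup(func() {
		if err := ctr.Terminate(context.Background()); err != nil {
			t.Logf("terminate container: %v", err)
		}
	})
}
```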
### When Tests Fail
1. **Read the error message carefully** - it usually tells you exactly what's wrong
2. **Check if it's a lint issue** - run `make pre-commit` first
3. **Verify Go version** - ensure using Go 1.24.7
4. **Check Docker** - some tests require Docker daemon running
## Common Pitfalls to Avoid
### Code Issues
- ❌ Using interface types as return values
- ❌ Forgetting to run `eval "$(gvm 1.24.7 --arch=arm64)"`
- ❌ Not handling errors from built-in options
- ❌ Using module-specific container names (`PostgresContainer`)
- ❌ Calling `.Customize()` method instead of direct function call
### Git Issues
- ❌ Forgetting co-author footer in commits
- ❌ Not running tests before committing
- ❌ Committing files outside module scope (use `git add modules/{module}/`)
- ❌ Using uppercase in scope: `feat(Redis)` → use `feat(redis)`
- ❌ Starting subject with uppercase: `fix: Add feature` → use `fix: add feature`
- ❌ Using wrong commit type (only: `security`, `fix`, `feat`, `docs`, `chore`, `deps`)
- ❌ Creating PR from `main` branch (will be auto-closed)
### Testing Issues
- ❌ Running tests without pre-commit checks first
- ❌ Not changing to module directory before running make
- ❌ Forgetting to set Go version before testing
## Reference Documentation
For detailed information, see:
- **Module development**: `docs/modules/index.md` - Comprehensive best practices
- **Contributing**: `docs/contributing.md` - General contribution guidelines
- **Modules catalog**: [testcontainers.com/modules](https://testcontainers.com/modules/?language=go)
- **API docs**: [pkg.go.dev/github.com/testcontainers/testcontainers-go](https://pkg.go.dev/github.com/testcontainers/testcontainers-go)
## Module Generator
To create a new module:
```bash
cd modulegen
go run . new module --name mymodule --image "docker.io/myimage:tag"
```
This generates:
- Module scaffolding with proper structure
- Documentation template
- Test files with examples
- Makefile with standard targets
The generator uses templates in `modulegen/_template/` that follow current best practices.
## Need Help?
- **Slack**: [testcontainers.slack.com](https://slack.testcontainers.org/)
- **GitHub Discussions**: [github.com/testcontainers/testcontainers-go/discussions](https://github.com/testcontainers/testcontainers-go/discussions)
- **Issues**: Check existing issues or create a new one with detailed context


@@ -8,7 +8,7 @@ verify_ssl = true
[packages]
mkdocs = "==1.5.3"
mkdocs-codeinclude-plugin = "==0.2.1"
mkdocs-include-markdown-plugin = "==7.1.7"
mkdocs-include-markdown-plugin = "==7.2.0"
mkdocs-material = "==9.5.18"
mkdocs-markdownextradata-plugin = "==0.2.6"


@@ -1,7 +1,7 @@
{
"_meta": {
"hash": {
"sha256": "85cf0b145b1bf3625db055f19d76b73094afa3aa1e7283b348a814c0a294d1ed"
"sha256": "361ac26693514418dce9f92cca60528d549bd0b3f4710374cf77dafd399ae232"
},
"pipfile-spec": 6,
"requires": {
@@ -140,11 +140,11 @@
},
"click": {
"hashes": [
"sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202",
"sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"
"sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc",
"sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4"
],
"markers": "python_version >= '3.10'",
"version": "==8.2.1"
"version": "==8.3.0"
},
"colorama": {
"hashes": [
@@ -195,70 +195,98 @@
},
"markupsafe": {
"hashes": [
"sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4",
"sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30",
"sha256:1225beacc926f536dc82e45f8a4d68502949dc67eea90eab715dea3a21c1b5f0",
"sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9",
"sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396",
"sha256:1a9d3f5f0901fdec14d8d2f66ef7d035f2157240a433441719ac9a3fba440b13",
"sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028",
"sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca",
"sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557",
"sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832",
"sha256:3169b1eefae027567d1ce6ee7cae382c57fe26e82775f460f0b2778beaad66c0",
"sha256:3809ede931876f5b2ec92eef964286840ed3540dadf803dd570c3b7e13141a3b",
"sha256:38a9ef736c01fccdd6600705b09dc574584b89bea478200c5fbf112a6b0d5579",
"sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a",
"sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c",
"sha256:48032821bbdf20f5799ff537c7ac3d1fba0ba032cfc06194faffa8cda8b560ff",
"sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c",
"sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22",
"sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094",
"sha256:57cb5a3cf367aeb1d316576250f65edec5bb3be939e9247ae594b4bcbc317dfb",
"sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e",
"sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5",
"sha256:6af100e168aa82a50e186c82875a5893c5597a0c1ccdb0d8b40240b1f28b969a",
"sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d",
"sha256:6e296a513ca3d94054c2c881cc913116e90fd030ad1c656b3869762b754f5f8a",
"sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b",
"sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8",
"sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225",
"sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c",
"sha256:88b49a3b9ff31e19998750c38e030fc7bb937398b1f78cfa599aaef92d693144",
"sha256:8c4e8c3ce11e1f92f6536ff07154f9d49677ebaaafc32db9db4620bc11ed480f",
"sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87",
"sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d",
"sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93",
"sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf",
"sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158",
"sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84",
"sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb",
"sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48",
"sha256:b424c77b206d63d500bcb69fa55ed8d0e6a3774056bdc4839fc9298a7edca171",
"sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c",
"sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6",
"sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd",
"sha256:bbcb445fa71794da8f178f0f6d66789a28d7319071af7a496d4d507ed566270d",
"sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1",
"sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d",
"sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca",
"sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a",
"sha256:cfad01eed2c2e0c01fd0ecd2ef42c492f7f93902e39a42fc9ee1692961443a29",
"sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe",
"sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798",
"sha256:e07c3764494e3776c602c1e78e298937c3315ccc9043ead7e685b7f2b8d47b3c",
"sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8",
"sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f",
"sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f",
"sha256:eaa0a10b7f72326f1372a713e73c3f739b524b3af41feb43e4921cb529f5929a",
"sha256:eb7972a85c54febfb25b5c4b4f3af4dcc731994c7da0d8a0b4a6eb0640e1d178",
"sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0",
"sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79",
"sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430",
"sha256:fcabf5ff6eea076f859677f5f0b6b5c1a51e70a376b0579e0eadef8db48c6b50"
"sha256:0303439a41979d9e74d18ff5e2dd8c43ed6c6001fd40e5bf2e43f7bd9bbc523f",
"sha256:068f375c472b3e7acbe2d5318dea141359e6900156b5b2ba06a30b169086b91a",
"sha256:0bf2a864d67e76e5c9a34dc26ec616a66b9888e25e7b9460e1c76d3293bd9dbf",
"sha256:0db14f5dafddbb6d9208827849fad01f1a2609380add406671a26386cdf15a19",
"sha256:0eb9ff8191e8498cca014656ae6b8d61f39da5f95b488805da4bb029cccbfbaf",
"sha256:0f4b68347f8c5eab4a13419215bdfd7f8c9b19f2b25520968adfad23eb0ce60c",
"sha256:1085e7fbddd3be5f89cc898938f42c0b3c711fdcb37d75221de2666af647c175",
"sha256:116bb52f642a37c115f517494ea5feb03889e04df47eeff5b130b1808ce7c219",
"sha256:12c63dfb4a98206f045aa9563db46507995f7ef6d83b2f68eda65c307c6829eb",
"sha256:133a43e73a802c5562be9bbcd03d090aa5a1fe899db609c29e8c8d815c5f6de6",
"sha256:1353ef0c1b138e1907ae78e2f6c63ff67501122006b0f9abad68fda5f4ffc6ab",
"sha256:15d939a21d546304880945ca1ecb8a039db6b4dc49b2c5a400387cdae6a62e26",
"sha256:177b5253b2834fe3678cb4a5f0059808258584c559193998be2601324fdeafb1",
"sha256:1872df69a4de6aead3491198eaf13810b565bdbeec3ae2dc8780f14458ec73ce",
"sha256:1b4b79e8ebf6b55351f0d91fe80f893b4743f104bff22e90697db1590e47a218",
"sha256:1b52b4fb9df4eb9ae465f8d0c228a00624de2334f216f178a995ccdcf82c4634",
"sha256:1ba88449deb3de88bd40044603fafffb7bc2b055d626a330323a9ed736661695",
"sha256:1cc7ea17a6824959616c525620e387f6dd30fec8cb44f649e31712db02123dad",
"sha256:218551f6df4868a8d527e3062d0fb968682fe92054e89978594c28e642c43a73",
"sha256:26a5784ded40c9e318cfc2bdb30fe164bdb8665ded9cd64d500a34fb42067b1c",
"sha256:2713baf880df847f2bece4230d4d094280f4e67b1e813eec43b4c0e144a34ffe",
"sha256:2a15a08b17dd94c53a1da0438822d70ebcd13f8c3a95abe3a9ef9f11a94830aa",
"sha256:2f981d352f04553a7171b8e44369f2af4055f888dfb147d55e42d29e29e74559",
"sha256:32001d6a8fc98c8cb5c947787c5d08b0a50663d139f1305bac5885d98d9b40fa",
"sha256:3524b778fe5cfb3452a09d31e7b5adefeea8c5be1d43c4f810ba09f2ceb29d37",
"sha256:3537e01efc9d4dccdf77221fb1cb3b8e1a38d5428920e0657ce299b20324d758",
"sha256:35add3b638a5d900e807944a078b51922212fb3dedb01633a8defc4b01a3c85f",
"sha256:38664109c14ffc9e7437e86b4dceb442b0096dfe3541d7864d9cbe1da4cf36c8",
"sha256:3a7e8ae81ae39e62a41ec302f972ba6ae23a5c5396c8e60113e9066ef893da0d",
"sha256:3b562dd9e9ea93f13d53989d23a7e775fdfd1066c33494ff43f5418bc8c58a5c",
"sha256:457a69a9577064c05a97c41f4e65148652db078a3a509039e64d3467b9e7ef97",
"sha256:4bd4cd07944443f5a265608cc6aab442e4f74dff8088b0dfc8238647b8f6ae9a",
"sha256:4e885a3d1efa2eadc93c894a21770e4bc67899e3543680313b09f139e149ab19",
"sha256:4faffd047e07c38848ce017e8725090413cd80cbc23d86e55c587bf979e579c9",
"sha256:509fa21c6deb7a7a273d629cf5ec029bc209d1a51178615ddf718f5918992ab9",
"sha256:5678211cb9333a6468fb8d8be0305520aa073f50d17f089b5b4b477ea6e67fdc",
"sha256:591ae9f2a647529ca990bc681daebdd52c8791ff06c2bfa05b65163e28102ef2",
"sha256:5a7d5dc5140555cf21a6fefbdbf8723f06fcd2f63ef108f2854de715e4422cb4",
"sha256:69c0b73548bc525c8cb9a251cddf1931d1db4d2258e9599c28c07ef3580ef354",
"sha256:6b5420a1d9450023228968e7e6a9ce57f65d148ab56d2313fcd589eee96a7a50",
"sha256:722695808f4b6457b320fdc131280796bdceb04ab50fe1795cd540799ebe1698",
"sha256:729586769a26dbceff69f7a7dbbf59ab6572b99d94576a5592625d5b411576b9",
"sha256:77f0643abe7495da77fb436f50f8dab76dbc6e5fd25d39589a0f1fe6548bfa2b",
"sha256:795e7751525cae078558e679d646ae45574b47ed6e7771863fcc079a6171a0fc",
"sha256:7be7b61bb172e1ed687f1754f8e7484f1c8019780f6f6b0786e76bb01c2ae115",
"sha256:7c3fb7d25180895632e5d3148dbdc29ea38ccb7fd210aa27acbd1201a1902c6e",
"sha256:7e68f88e5b8799aa49c85cd116c932a1ac15caaa3f5db09087854d218359e485",
"sha256:83891d0e9fb81a825d9a6d61e3f07550ca70a076484292a70fde82c4b807286f",
"sha256:8485f406a96febb5140bfeca44a73e3ce5116b2501ac54fe953e488fb1d03b12",
"sha256:8709b08f4a89aa7586de0aadc8da56180242ee0ada3999749b183aa23df95025",
"sha256:8f71bc33915be5186016f675cd83a1e08523649b0e33efdb898db577ef5bb009",
"sha256:915c04ba3851909ce68ccc2b8e2cd691618c4dc4c4232fb7982bca3f41fd8c3d",
"sha256:949b8d66bc381ee8b007cd945914c721d9aba8e27f71959d750a46f7c282b20b",
"sha256:94c6f0bb423f739146aec64595853541634bde58b2135f27f61c1ffd1cd4d16a",
"sha256:9a1abfdc021a164803f4d485104931fb8f8c1efd55bc6b748d2f5774e78b62c5",
"sha256:9b79b7a16f7fedff2495d684f2b59b0457c3b493778c9eed31111be64d58279f",
"sha256:a320721ab5a1aba0a233739394eb907f8c8da5c98c9181d1161e77a0c8e36f2d",
"sha256:a4afe79fb3de0b7097d81da19090f4df4f8d3a2b3adaa8764138aac2e44f3af1",
"sha256:ad2cf8aa28b8c020ab2fc8287b0f823d0a7d8630784c31e9ee5edea20f406287",
"sha256:b8512a91625c9b3da6f127803b166b629725e68af71f8184ae7e7d54686a56d6",
"sha256:bc51efed119bc9cfdf792cdeaa4d67e8f6fcccab66ed4bfdd6bde3e59bfcbb2f",
"sha256:bdc919ead48f234740ad807933cdf545180bfbe9342c2bb451556db2ed958581",
"sha256:bdd37121970bfd8be76c5fb069c7751683bdf373db1ed6c010162b2a130248ed",
"sha256:be8813b57049a7dc738189df53d69395eba14fb99345e0a5994914a3864c8a4b",
"sha256:c0c0b3ade1c0b13b936d7970b1d37a57acde9199dc2aecc4c336773e1d86049c",
"sha256:c47a551199eb8eb2121d4f0f15ae0f923d31350ab9280078d1e5f12b249e0026",
"sha256:c4ffb7ebf07cfe8931028e3e4c85f0357459a3f9f9490886198848f4fa002ec8",
"sha256:ccfcd093f13f0f0b7fdd0f198b90053bf7b2f02a3927a30e63f3ccc9df56b676",
"sha256:d2ee202e79d8ed691ceebae8e0486bd9a2cd4794cec4824e1c99b6f5009502f6",
"sha256:d53197da72cc091b024dd97249dfc7794d6a56530370992a5e1a08983ad9230e",
"sha256:d6dd0be5b5b189d31db7cda48b91d7e0a9795f31430b7f271219ab30f1d3ac9d",
"sha256:d88b440e37a16e651bda4c7c2b930eb586fd15ca7406cb39e211fcff3bf3017d",
"sha256:de8a88e63464af587c950061a5e6a67d3632e36df62b986892331d4620a35c01",
"sha256:df2449253ef108a379b8b5d6b43f4b1a8e81a061d6537becd5582fba5f9196d7",
"sha256:e1c1493fb6e50ab01d20a22826e57520f1284df32f2d8601fdd90b6304601419",
"sha256:e1cf1972137e83c5d4c136c43ced9ac51d0e124706ee1c8aa8532c1287fa8795",
"sha256:e2103a929dfa2fcaf9bb4e7c091983a49c9ac3b19c9061b6d5427dd7d14d81a1",
"sha256:e56b7d45a839a697b5eb268c82a71bd8c7f6c94d6fd50c3d577fa39a9f1409f5",
"sha256:e8afc3f2ccfa24215f8cb28dcf43f0113ac3c37c2f0f0806d8c70e4228c5cf4d",
"sha256:e8fc20152abba6b83724d7ff268c249fa196d8259ff481f3b1476383f8f24e42",
"sha256:eaa9599de571d72e2daf60164784109f19978b327a3910d3e9de8c97b5b70cfe",
"sha256:ec15a59cf5af7be74194f7ab02d0f59a62bdcf1a537677ce67a2537c9b87fcda",
"sha256:f190daf01f13c72eac4efd5c430a8de82489d9cff23c364c3ea822545032993e",
"sha256:f34c41761022dd093b4b6896d4810782ffbabe30f2d443ff5f083e0cbbb8c737",
"sha256:f3e98bb3798ead92273dc0e5fd0f31ade220f59a266ffd8a4f6065e0a3ce0523",
"sha256:f42d0984e947b8adf7dd6dde396e720934d12c506ce84eea8476409563607591",
"sha256:f71a396b3bf33ecaa1626c255855702aca4d3d9fea5e051b41ac59a9c1c41edc",
"sha256:f9e130248f4462aaa8e2552d547f36ddadbeaa573879158d721bbd33dfe4743a",
"sha256:fed51ac40f757d41b7c48425901843666a6677e3e8eb0abcff09e4ba6e664f50"
],
"markers": "python_version >= '3.9'",
"version": "==3.0.2"
"version": "==3.0.3"
},
"mergedeep": {
"hashes": [
@@ -288,12 +316,12 @@
},
"mkdocs-include-markdown-plugin": {
"hashes": [
"sha256:677637e04c2d3497c50340be522e2a7f614124f592c7982d88b859f88d527a4c",
"sha256:a0c13efe4f6b05a419c022e201055bf43145eed90de65f2353c33fb4005b6aa5"
"sha256:4a67a91ade680dc0e15f608e5b6343bec03372ffa112c40a4254c1bfb10f42f3",
"sha256:d56cdaeb2d113fb66ed0fe4fb7af1da889926b0b9872032be24e19bbb09c9f5b"
],
"index": "pypi",
"markers": "python_version >= '3.9'",
"version": "==7.1.7"
"version": "==7.2.0"
},
"mkdocs-markdownextradata-plugin": {
"hashes": [
@@ -345,11 +373,11 @@
},
"platformdirs": {
"hashes": [
"sha256:abd01743f24e5287cd7a5db3752faf1a2d65353f38ec26d98e25a6db65958c85",
"sha256:ca753cf4d81dc309bc67b0ea38fd15dc97bc30ce419a7f58d13eb3bf14c4febf"
"sha256:70ddccdd7c99fc5942e9fc25636a8b34d04c24b335100223152c2803e4063312",
"sha256:e578a81bb873cbb89a41fcc904c7ef523cc18284b7e3b3ccf06aca1403b7ebd3"
],
"markers": "python_version >= '3.9'",
"version": "==4.4.0"
"markers": "python_version >= '3.10'",
"version": "==4.5.0"
},
"pygments": {
"hashes": [
@@ -385,62 +413,82 @@
},
"pyyaml": {
"hashes": [
"sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff",
"sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48",
"sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086",
"sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e",
"sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133",
"sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5",
"sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484",
"sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee",
"sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5",
"sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68",
"sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a",
"sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf",
"sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99",
"sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8",
"sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85",
"sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19",
"sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc",
"sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a",
"sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1",
"sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317",
"sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c",
"sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631",
"sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d",
"sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652",
"sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5",
"sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e",
"sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b",
"sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8",
"sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476",
"sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706",
"sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563",
"sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237",
"sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b",
"sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083",
"sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180",
"sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425",
"sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e",
"sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f",
"sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725",
"sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183",
"sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab",
"sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774",
"sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725",
"sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e",
"sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5",
"sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d",
"sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290",
"sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44",
"sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed",
"sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4",
"sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba",
"sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12",
"sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"
"sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c",
"sha256:0150219816b6a1fa26fb4699fb7daa9caf09eb1999f3b70fb6e786805e80375a",
"sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3",
"sha256:02ea2dfa234451bbb8772601d7b8e426c2bfa197136796224e50e35a78777956",
"sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6",
"sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c",
"sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65",
"sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a",
"sha256:1ebe39cb5fc479422b83de611d14e2c0d3bb2a18bbcb01f229ab3cfbd8fee7a0",
"sha256:214ed4befebe12df36bcc8bc2b64b396ca31be9304b8f59e25c11cf94a4c033b",
"sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1",
"sha256:22ba7cfcad58ef3ecddc7ed1db3409af68d023b7f940da23c6c2a1890976eda6",
"sha256:27c0abcb4a5dac13684a37f76e701e054692a9b2d3064b70f5e4eb54810553d7",
"sha256:28c8d926f98f432f88adc23edf2e6d4921ac26fb084b028c733d01868d19007e",
"sha256:2e71d11abed7344e42a8849600193d15b6def118602c4c176f748e4583246007",
"sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310",
"sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4",
"sha256:3c5677e12444c15717b902a5798264fa7909e41153cdf9ef7ad571b704a63dd9",
"sha256:3ff07ec89bae51176c0549bc4c63aa6202991da2d9a6129d7aef7f1407d3f295",
"sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea",
"sha256:418cf3f2111bc80e0933b2cd8cd04f286338bb88bdc7bc8e6dd775ebde60b5e0",
"sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e",
"sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac",
"sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9",
"sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7",
"sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35",
"sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb",
"sha256:5cf4e27da7e3fbed4d6c3d8e797387aaad68102272f8f9752883bc32d61cb87b",
"sha256:5e0b74767e5f8c593e8c9b5912019159ed0533c70051e9cce3e8b6aa699fcd69",
"sha256:5ed875a24292240029e4483f9d4a4b8a1ae08843b9c54f43fcc11e404532a8a5",
"sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b",
"sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c",
"sha256:6344df0d5755a2c9a276d4473ae6b90647e216ab4757f8426893b5dd2ac3f369",
"sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd",
"sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824",
"sha256:66291b10affd76d76f54fad28e22e51719ef9ba22b29e1d7d03d6777a9174198",
"sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065",
"sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c",
"sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c",
"sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764",
"sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196",
"sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b",
"sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00",
"sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac",
"sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8",
"sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e",
"sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28",
"sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3",
"sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5",
"sha256:9c57bb8c96f6d1808c030b1687b9b5fb476abaa47f0db9c0101f5e9f394e97f4",
"sha256:9c7708761fccb9397fe64bbc0395abcae8c4bf7b0eac081e12b809bf47700d0b",
"sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf",
"sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5",
"sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702",
"sha256:b30236e45cf30d2b8e7b3e85881719e98507abed1011bf463a8fa23e9c3e98a8",
"sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788",
"sha256:b865addae83924361678b652338317d1bd7e79b1f4596f96b96c77a5a34b34da",
"sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d",
"sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc",
"sha256:bdb2c67c6c1390b63c6ff89f210c8fd09d9a1217a465701eac7316313c915e4c",
"sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba",
"sha256:c2514fceb77bc5e7a2f7adfaa1feb2fb311607c9cb518dbc378688ec73d8292f",
"sha256:c3355370a2c156cffb25e876646f149d5d68f5e0a3ce86a5084dd0b64a994917",
"sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5",
"sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26",
"sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f",
"sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b",
"sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be",
"sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c",
"sha256:efd7b85f94a6f21e4932043973a7ba2613b059c4a000551892ac9f1d11f5baf3",
"sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6",
"sha256:fa160448684b4e94d80416c0fa4aac48967a969efe22931448d853ada8baf926",
"sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0"
],
"markers": "python_version >= '3.8'",
"version": "==6.0.2"
"version": "==6.0.3"
},
"pyyaml-env-tag": {
"hashes": [

View File

@@ -311,6 +311,10 @@ func (c *DockerContainer) Stop(ctx context.Context, timeout *time.Duration) error
//
// Default: timeout is 10 seconds.
func (c *DockerContainer) Terminate(ctx context.Context, opts ...TerminateOption) error {
if c == nil {
return nil
}
options := NewTerminateOptions(ctx, opts...)
err := c.Stop(options.Context(), options.StopTimeout())
if err != nil && !isCleanupSafe(err) {
@@ -493,6 +497,7 @@ func (c *DockerContainer) ContainerIP(ctx context.Context) (string, error) {
return "", err
}
//nolint:staticcheck // SA1019: IPAddress is deprecated, but we need it for compatibility until v29
ip := inspect.NetworkSettings.IPAddress
if ip == "" {
// use IP from "Networks" if only single network defined
@@ -509,14 +514,13 @@ func (c *DockerContainer) ContainerIP(ctx context.Context) (string, error) {
// ContainerIPs gets the IP addresses of all the networks within the container.
func (c *DockerContainer) ContainerIPs(ctx context.Context) ([]string, error) {
ips := make([]string, 0)
inspect, err := c.Inspect(ctx)
if err != nil {
return nil, err
}
networks := inspect.NetworkSettings.Networks
ips := make([]string, 0, len(networks))
for _, nw := range networks {
ips = append(ips, nw.IPAddress)
}

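The nil-receiver guard added to Terminate makes it safe to defer cleanup before checking the error from container creation. A minimal sketch of that pattern, assuming the testcontainers-go Run helper and an arbitrary nginx:alpine image (both illustrative, not part of this diff):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/testcontainers/testcontainers-go"
)

func run(ctx context.Context) error {
	// Run may fail and hand back a nil container; the nil-receiver guard in
	// Terminate makes this unconditional deferred cleanup safe.
	ctr, err := testcontainers.Run(ctx, "nginx:alpine",
		testcontainers.WithExposedPorts("80/tcp"),
	)
	defer func() {
		if terr := ctr.Terminate(ctx); terr != nil {
			log.Printf("terminate: %v", terr)
		}
	}()
	if err != nil {
		return fmt.Errorf("run container: %w", err)
	}
	log.Printf("container %s is running", ctr.GetContainerID())
	return nil
}

func main() {
	if err := run(context.Background()); err != nil {
		log.Fatal(err)
	}
}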
View File

@@ -39,7 +39,6 @@ func ExtractImagesFromDockerfile(dockerfile string, buildArgs map[string]*string
// ExtractImagesFromReader extracts images from the Dockerfile sourced from r.
func ExtractImagesFromReader(r io.Reader, buildArgs map[string]*string) ([]string, error) {
var images []string
var lines []string
scanner := bufio.NewScanner(r)
for scanner.Scan() {
@@ -49,6 +48,8 @@ func ExtractImagesFromReader(r io.Reader, buildArgs map[string]*string) ([]strin
return nil, scanner.Err()
}
images := make([]string, 0, len(lines))
// extract images from dockerfile
for _, line := range lines {
line = strings.TrimSpace(line)

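The refactor above defers allocating images until the line count is known, so the slice can be created with the right capacity. A simplified standalone sketch of the same scan-then-extract flow (it ignores build args and multi-stage names, which the real ExtractImagesFromReader handles):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	r := strings.NewReader("FROM golang:1.24 AS build\nRUN go build ./...\nFROM alpine:3.20\n")

	// Collect the lines first, then size the result slice once instead of
	// growing it while appending.
	var lines []string
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	images := make([]string, 0, len(lines))
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if rest, ok := strings.CutPrefix(line, "FROM "); ok {
			images = append(images, strings.Fields(rest)[0])
		}
	}
	fmt.Println(images) // [golang:1.24 alpine:3.20]
}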
View File

@@ -1,4 +1,4 @@
package internal
// Version is the next development version of the application
const Version = "0.39.0"
const Version = "0.40.0"

View File

@@ -219,11 +219,16 @@ var defaultReadinessHook = func() ContainerLifecycleHooks {
// if a Wait Strategy has been specified, wait before returning
if dockerContainer.WaitingFor != nil {
strategy := dockerContainer.WaitingFor
strategyDesc := "unknown strategy"
if s, ok := strategy.(fmt.Stringer); ok {
strategyDesc = s.String()
}
dockerContainer.logger.Printf(
"⏳ Waiting for container id %s image: %s. Waiting for: %+v",
dockerContainer.ID[:12], dockerContainer.Image, dockerContainer.WaitingFor,
dockerContainer.ID[:12], dockerContainer.Image, strategyDesc,
)
if err := dockerContainer.WaitingFor.WaitUntilReady(ctx, c); err != nil {
if err := strategy.WaitUntilReady(ctx, dockerContainer); err != nil {
return fmt.Errorf("wait until ready: %w", err)
}
}
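The readiness hook above now logs a strategy description instead of dumping the struct with %+v; the mechanism is a plain fmt.Stringer type assertion with a fallback. A standalone sketch with two made-up strategy types:

package main

import "fmt"

// portWait implements fmt.Stringer, legacyWait does not.
type portWait struct{ port string }

func (w portWait) String() string { return "port " + w.port + " to be listening" }

type legacyWait struct{}

// describe prefers the strategy's own description and falls back to a
// placeholder for types that do not implement fmt.Stringer.
func describe(strategy any) string {
	if s, ok := strategy.(fmt.Stringer); ok {
		return s.String()
	}
	return "unknown strategy"
}

func main() {
	fmt.Println(describe(portWait{port: "80/tcp"})) // port 80/tcp to be listening
	fmt.Println(describe(legacyWait{}))             // unknown strategy
}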
@@ -608,6 +613,18 @@ func mergePortBindings(configPortMap, exposedPortMap nat.PortMap, exposedPorts [
exposedPortMap[k] = v
}
}
// Fix: Ensure that ports with empty HostPort get "0" for automatic allocation
// This fixes the UDP port binding issue where ports were getting HostPort:0 instead of being allocated
for k, v := range exposedPortMap {
for i := range v {
if v[i].HostPort == "" {
v[i].HostPort = "0" // Tell Docker to allocate a random port
}
}
exposedPortMap[k] = v
}
return exposedPortMap
}

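The new loop at the end of mergePortBindings turns an empty HostPort into "0" so the Docker daemon allocates a free host port. The same normalization in isolation, using the nat types from github.com/docker/go-connections (the ports are illustrative):

package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	exposed := nat.PortMap{
		"8080/tcp": {{HostIP: "0.0.0.0", HostPort: ""}},
		"5060/udp": {{HostIP: "", HostPort: ""}},
	}

	// An empty HostPort becomes "0", which tells Docker to pick a random
	// free port instead of leaving the binding unallocated.
	for port, bindings := range exposed {
		for i := range bindings {
			if bindings[i].HostPort == "" {
				bindings[i].HostPort = "0"
			}
		}
		exposed[port] = bindings
	}

	fmt.Println(exposed["8080/tcp"][0].HostPort, exposed["5060/udp"][0].HostPort) // 0 0
}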
View File

@@ -153,4 +153,4 @@ nav:
- Getting help: getting_help.md
edit_uri: edit/main/docs/
extra:
latest_version: v0.39.0
latest_version: v0.40.0

View File

@@ -5,7 +5,7 @@ import (
"errors"
"fmt"
"maps"
"net/url"
"path"
"time"
"dario.cat/mergo"
@@ -196,12 +196,7 @@ func (c CustomHubSubstitutor) Substitute(image string) (string, error) {
}
}
result, err := url.JoinPath(c.hub, image)
if err != nil {
return "", err
}
return result, nil
return path.Join(c.hub, image), nil
}
// prependHubRegistry represents a way to prepend a custom Hub registry to the image name,
@@ -244,12 +239,7 @@ func (p prependHubRegistry) Substitute(image string) (string, error) {
}
}
result, err := url.JoinPath(p.prefix, image)
if err != nil {
return "", err
}
return result, nil
return path.Join(p.prefix, image), nil
}
// WithImageSubstitutors sets the image substitutors for a container

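Both substitutors now use path.Join instead of url.JoinPath: joining a registry prefix to an image reference is simple slash handling with no error case. A quick illustration with made-up registry names:

package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join cleans duplicate slashes and never returns an error, which
	// is all a hub prefix substitution needs.
	fmt.Println(path.Join("registry.example.com/mirror/", "nginx:alpine"))
	// registry.example.com/mirror/nginx:alpine
	fmt.Println(path.Join("my-hub.local:5000", "library/redis:7"))
	// my-hub.local:5000/library/redis:7
}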
View File

@@ -108,6 +108,7 @@ func exposeHostPorts(ctx context.Context, req *ContainerRequest, ports ...int) (
}
// TODO: remove once we have docker context support via #2810
//nolint:staticcheck // SA1019: IPAddress is deprecated, but we need it for compatibility until v29
sshdIP := inspect.NetworkSettings.IPAddress
if sshdIP == "" {
single := len(inspect.NetworkSettings.Networks) == 1
@@ -176,30 +177,22 @@ func exposeHostPorts(ctx context.Context, req *ContainerRequest, ports ...int) (
// newSshdContainer creates a new SSHD container with the provided options.
func newSshdContainer(ctx context.Context, opts ...ContainerCustomizer) (*sshdContainer, error) {
req := GenericContainerRequest{
ContainerRequest: ContainerRequest{
Image: sshdImage,
ExposedPorts: []string{sshPort},
Env: map[string]string{"PASSWORD": sshPassword},
WaitingFor: wait.ForListeningPort(sshPort),
},
Started: true,
moduleOpts := []ContainerCustomizer{
WithExposedPorts(sshPort),
WithEnv(map[string]string{"PASSWORD": sshPassword}),
WithWaitStrategy(wait.ForListeningPort(sshPort)),
}
for _, opt := range opts {
if err := opt.Customize(&req); err != nil {
return nil, err
}
}
moduleOpts = append(moduleOpts, opts...)
c, err := GenericContainer(ctx, req)
c, err := Run(ctx, sshdImage, moduleOpts...)
var sshd *sshdContainer
if c != nil {
sshd = &sshdContainer{Container: c}
}
if err != nil {
return sshd, fmt.Errorf("generic container: %w", err)
return sshd, fmt.Errorf("run sshd container: %w", err)
}
if err = sshd.clientConfig(ctx); err != nil {

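newSshdContainer now assembles module defaults as ContainerCustomizer options and delegates to Run instead of filling a GenericContainerRequest by hand. A sketch of the same pattern for an arbitrary Redis image (image, port, env var and wait condition are illustrative):

package main

import (
	"context"
	"log"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	ctx := context.Background()

	// Module-style defaults first, caller-supplied customizers appended after,
	// mirroring how newSshdContainer builds moduleOpts.
	opts := []testcontainers.ContainerCustomizer{
		testcontainers.WithExposedPorts("6379/tcp"),
		testcontainers.WithEnv(map[string]string{"SOME_FLAG": "1"}),
		testcontainers.WithWaitStrategy(wait.ForListeningPort("6379/tcp")),
	}

	ctr, err := testcontainers.Run(ctx, "redis:7-alpine", opts...)
	if err != nil {
		log.Fatalf("run container: %v", err)
	}
	defer func() {
		if terr := ctr.Terminate(ctx); terr != nil {
			log.Printf("terminate: %v", terr)
		}
	}()

	log.Printf("started %s", ctr.GetContainerID())
}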
View File

@@ -1,5 +1,5 @@
mkdocs==1.5.3
mkdocs-codeinclude-plugin==0.2.1
mkdocs-include-markdown-plugin==7.1.6
mkdocs-include-markdown-plugin==7.2.0
mkdocs-material==9.5.18
mkdocs-markdownextradata-plugin==0.2.6

View File

@@ -3,7 +3,9 @@ package wait
import (
"context"
"errors"
"fmt"
"reflect"
"strings"
"time"
)
@@ -51,6 +53,29 @@ func (ms *MultiStrategy) Timeout() *time.Duration {
return ms.timeout
}
// String returns a human-readable description of the wait strategy.
func (ms *MultiStrategy) String() string {
if len(ms.Strategies) == 0 {
return "all of: (none)"
}
var strategies []string
for _, strategy := range ms.Strategies {
if strategy == nil || reflect.ValueOf(strategy).IsNil() {
continue
}
if s, ok := strategy.(fmt.Stringer); ok {
strategies = append(strategies, s.String())
} else {
strategies = append(strategies, fmt.Sprintf("%T", strategy))
}
}
// Always include "all of:" prefix to make it clear this is a MultiStrategy
// even when there's only one strategy after filtering out nils
return "all of: [" + strings.Join(strategies, ", ") + "]"
}
func (ms *MultiStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
var cancel context.CancelFunc
if ms.deadline != nil {

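With the new String methods, a composed wait strategy can describe itself in the readiness log. A sketch of what a ForAll composition reports, with the expected output inferred from the String implementations in this diff (port and log line are illustrative):

package main

import (
	"fmt"

	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	strategy := wait.ForAll(
		wait.ForListeningPort("6379/tcp"),
		wait.ForLog("Ready to accept connections"),
	)

	// Should print something like:
	// all of: [port 6379/tcp to be listening, log message "Ready to accept connections"]
	fmt.Println(strategy.String())
}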
View File

@@ -2,6 +2,7 @@ package wait
import (
"context"
"fmt"
"io"
"time"
@@ -76,6 +77,22 @@ func (ws *ExecStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *ExecStrategy) String() string {
if len(ws.cmd) == 0 {
return "exec command"
}
// Only show the command name and argument count to avoid exposing sensitive data
argCount := len(ws.cmd) - 1
if argCount == 0 {
return fmt.Sprintf("exec command %q", ws.cmd[0])
}
if argCount == 1 {
return fmt.Sprintf("exec command %q with 1 argument", ws.cmd[0])
}
return fmt.Sprintf("exec command %q with %d arguments", ws.cmd[0], argCount)
}
func (ws *ExecStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
timeout := defaultStartupTimeout()
if ws.timeout != nil {

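ExecStrategy's description deliberately reports only the command name and the argument count, so secrets passed as arguments never end up in the wait log. A sketch (command and password are made up, expected output inferred from the code above):

package main

import (
	"fmt"

	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	// The password argument is never echoed; only the argument count is.
	strategy := wait.ForExec([]string{"redis-cli", "-a", "s3cr3t", "ping"})

	// Should print something like: exec command "redis-cli" with 3 arguments
	fmt.Println(strategy)
}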
View File

@@ -59,6 +59,11 @@ func (ws *ExitStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *ExitStrategy) String() string {
return "container to exit"
}
// WaitUntilReady implements Strategy.WaitUntilReady
func (ws *ExitStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
if ws.timeout != nil {

View File

@@ -61,6 +61,14 @@ func (ws *FileStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *FileStrategy) String() string {
if ws.matcher != nil {
return fmt.Sprintf("file %q to exist and match condition", ws.file)
}
return fmt.Sprintf("file %q to exist", ws.file)
}
// WaitUntilReady waits until the file exists in the container and copies it to the target.
func (ws *FileStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
timeout := defaultStartupTimeout()

View File

@@ -60,6 +60,11 @@ func (ws *HealthStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *HealthStrategy) String() string {
return "container to become healthy"
}
// WaitUntilReady implements Strategy.WaitUntilReady
func (ws *HealthStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
timeout := defaultStartupTimeout()

View File

@@ -114,6 +114,28 @@ func (hp *HostPortStrategy) Timeout() *time.Duration {
return hp.timeout
}
// String returns a human-readable description of the wait strategy.
func (hp *HostPortStrategy) String() string {
port := "first exposed port"
if hp.Port != "" {
port = fmt.Sprintf("port %s", hp.Port)
}
var checks string
switch {
case hp.skipInternalCheck && hp.skipExternalCheck:
checks = " to be mapped"
case hp.skipInternalCheck:
checks = " to be accessible externally"
case hp.skipExternalCheck:
checks = " to be listening internally"
default:
checks = " to be listening"
}
return fmt.Sprintf("%s%s", port, checks)
}
// detectInternalPort returns the lowest internal port that is currently bound.
// If no internal port is found, it returns the zero nat.Port value which
// can be checked against an empty string.

View File

@@ -154,6 +154,21 @@ func (ws *HTTPStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *HTTPStrategy) String() string {
proto := "HTTP"
if ws.UseTLS {
proto = "HTTPS"
}
port := "default"
if ws.Port != "" {
port = ws.Port.Port()
}
return fmt.Sprintf("%s %s request on port %s path %q", proto, ws.Method, port, ws.Path)
}
// WaitUntilReady implements Strategy.WaitUntilReady
func (ws *HTTPStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
timeout := defaultStartupTimeout()

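The HTTP strategy description includes scheme, method, port and path, which is enough to recognize the probe in logs without dumping the whole struct. A sketch (port and path are illustrative, expected output inferred from the code above):

package main

import (
	"fmt"
	"net/http"

	"github.com/testcontainers/testcontainers-go/wait"
)

func main() {
	strategy := wait.ForHTTP("/healthz").
		WithPort("8080/tcp").
		WithMethod(http.MethodGet)

	// Should print something like: HTTP GET request on port 8080 path "/healthz"
	fmt.Println(strategy)
}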
View File

@@ -123,6 +123,21 @@ func (ws *LogStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *LogStrategy) String() string {
logType := "log message"
if ws.IsRegexp {
logType = "log pattern"
}
occurrence := ""
if ws.Occurrence > 1 {
occurrence = fmt.Sprintf(" (occurrence: %d)", ws.Occurrence)
}
return fmt.Sprintf("%s %q%s", logType, ws.Log, occurrence)
}
// WaitUntilReady implements Strategy.WaitUntilReady
func (ws *LogStrategy) WaitUntilReady(ctx context.Context, target StrategyTarget) error {
timeout := defaultStartupTimeout()

View File

@@ -33,6 +33,11 @@ func (ws *NopStrategy) Timeout() *time.Duration {
return ws.timeout
}
// String returns a human-readable description of the wait strategy.
func (ws *NopStrategy) String() string {
return "custom wait condition"
}
func (ws *NopStrategy) WithStartupTimeout(timeout time.Duration) *NopStrategy {
ws.timeout = &timeout
return ws

View File

@@ -61,6 +61,21 @@ func (w *waitForSQL) Timeout() *time.Duration {
return w.timeout
}
// String returns a human-readable description of the wait strategy.
func (w *waitForSQL) String() string {
port := "default"
if w.Port != "" {
port = w.Port.Port()
}
query := ""
if w.query != defaultForSQLQuery {
query = fmt.Sprintf(" with query %q", w.query)
}
return fmt.Sprintf("SQL database on port %s using driver %q%s", port, w.Driver, query)
}
// WaitUntilReady repeatedly tries to run "SELECT 1" or user defined query on the given port using sql and driver.
//
// If it doesn't succeed until the timeout value which defaults to 60 seconds, it will return an error.

View File

@@ -6,6 +6,7 @@ import (
"crypto/x509"
"fmt"
"io"
"strings"
"time"
)
@@ -99,6 +100,25 @@ func (ws *TLSStrategy) TLSConfig() *tls.Config {
return ws.tlsConfig
}
// String returns a human-readable description of the wait strategy.
func (ws *TLSStrategy) String() string {
var parts []string
if len(ws.rootFiles) > 0 {
parts = append(parts, fmt.Sprintf("root CAs %v", ws.rootFiles))
}
if ws.certFiles != nil {
parts = append(parts, fmt.Sprintf("cert %q and key %q", ws.certFiles.certPEMFile, ws.certFiles.keyPEMFile))
}
if len(parts) == 0 {
return "TLS certificates"
}
return strings.Join(parts, " and ")
}
// WaitUntilReady implements the [Strategy] interface.
// It waits for the CA, client cert and key files to be available in the container and
// uses them to setup the TLS config.

vendor/modules.txt (vendored)
View File

@@ -379,7 +379,7 @@ github.com/distribution/reference
## explicit
github.com/dlclark/regexp2
github.com/dlclark/regexp2/syntax
# github.com/docker/docker v28.3.3+incompatible
# github.com/docker/docker v28.5.1+incompatible
## explicit
github.com/docker/docker/api
github.com/docker/docker/api/types
@@ -403,8 +403,6 @@ github.com/docker/docker/api/types/time
github.com/docker/docker/api/types/versions
github.com/docker/docker/api/types/volume
github.com/docker/docker/client
github.com/docker/docker/internal/lazyregexp
github.com/docker/docker/internal/multierror
github.com/docker/docker/pkg/jsonmessage
github.com/docker/docker/pkg/stdcopy
# github.com/docker/go-connections v0.6.0
@@ -1370,7 +1368,7 @@ github.com/opencloud-eu/icap-client
# github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76
## explicit; go 1.18
github.com/opencloud-eu/libre-graph-api-go
# github.com/opencloud-eu/reva/v2 v2.40.1
# github.com/opencloud-eu/reva/v2 v2.41.0
## explicit; go 1.24.1
github.com/opencloud-eu/reva/v2/cmd/revad/internal/grace
github.com/opencloud-eu/reva/v2/cmd/revad/runtime
@@ -1684,6 +1682,8 @@ github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/options
github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/timemanager
github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/trashbin
github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/tree
github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher
github.com/opencloud-eu/reva/v2/pkg/storage/fs/posix/watcher/natswatcher
github.com/opencloud-eu/reva/v2/pkg/storage/fs/registry
github.com/opencloud-eu/reva/v2/pkg/storage/fs/s3ng
github.com/opencloud-eu/reva/v2/pkg/storage/fs/s3ng/blobstore
@@ -2059,7 +2059,7 @@ github.com/tchap/go-patricia/v2/patricia
## explicit
github.com/test-go/testify/mock
github.com/test-go/testify/require
# github.com/testcontainers/testcontainers-go v0.39.0
# github.com/testcontainers/testcontainers-go v0.40.0
## explicit; go 1.24.0
github.com/testcontainers/testcontainers-go
github.com/testcontainers/testcontainers-go/exec