Compare commits

...

39 Commits

Author SHA1 Message Date
Viktor Scharf
a487621b1d update opencloud 4.0.0-rc.2 (#1917) 2025-11-26 10:16:44 +01:00
Jörn Friedrich Dreyer
fe83b6b0af bump alpine to v3.22 (#1913)
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-26 08:07:44 +01:00
Jörn Friedrich Dreyer
52d31ca8ef log missing name or id attributes (#1914)
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-26 08:06:05 +01:00
opencloudeu
63b5723883 [tx] updated from transifex 2025-11-26 00:03:14 +00:00
Jörn Friedrich Dreyer
aa48ecefe0 Merge pull request #1905 from opencloud-eu/bump-reva-4acb0bf96c4
bump reva to 2.39.3
2025-11-25 10:08:33 +01:00
Artur Neumann
12cebc705e Merge pull request #1902 from opencloud-eu/disableRunningTestWithWatchFS
[full-ci] disable running ci with watch fs when full-ci
2025-11-25 14:48:34 +05:45
Jörn Friedrich Dreyer
2bcf66394f bump reva v2.39.3
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-25 09:23:08 +01:00
Jörn Friedrich Dreyer
b7308d661e Merge pull request #1901 from opencloud-eu/handle-objectguid-endianness
handle objectguid endianness
2025-11-25 08:26:12 +01:00
Viktor Scharf
5f7f096d89 disable running ci with watch fs when full-ci 2025-11-24 16:56:56 +01:00
Benedikt Kulmann
ac7ee2216e Merge pull request #1900 from opencloud-eu/bump-web-v4.2.1-rc.1
chore: bump web to v4.2.1-rc.1
2025-11-24 16:40:36 +01:00
Viktor Scharf
7330c7b0b4 trigger ci 2025-11-24 16:14:27 +01:00
Jörn Friedrich Dreyer
4340cdc9e6 handle objectguid endianness
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-24 15:53:41 +01:00
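The objectGUID commit above concerns Active Directory's mixed-endian GUID layout: the first three fields of the 16-byte LDAP `objectGUID` value are stored little-endian. A minimal Python sketch of that conversion (the helper name is illustrative, not code from the PR):

```python
import uuid

def objectguid_to_str(raw: bytes) -> str:
    # AD/LDAP objectGUID stores the first three GUID fields little-endian
    # ("mixed-endian"); uuid.UUID(bytes_le=...) performs the byte swap
    # and yields the canonical string form.
    return str(uuid.UUID(bytes_le=raw))
```

Reading the raw bytes as if they were big-endian would produce a different, wrong GUID string, which is the class of bug such a change addresses.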
Benedikt Kulmann
de4ca7c473 chore: bump web to v4.2.1-rc.1 2025-11-24 15:07:34 +01:00
opencloudeu
a181049508 [tx] updated from transifex 2025-11-24 00:02:52 +00:00
opencloudeu
718e86b06e [tx] updated from transifex 2025-11-23 00:03:05 +00:00
Benedikt Kulmann
487a2a0aa6 fix: add update server to default csp rules (#1875)
* fix: add update server to default csp rules

* adapt tests

---------

Co-authored-by: Viktor Scharf <v.scharf@opencloud.eu>
2025-11-21 17:13:22 +01:00
Alex
a496b6f46b fix: add missing capability flag support-radicale (#1891) 2025-11-21 15:39:46 +01:00
Jörn Friedrich Dreyer
c1fcf71d42 Merge pull request #1890 from opencloud-eu/fix-opensearch-client-cert
fix opensearch client certificate

Well ... technically it is not a fix. We expected the certificate passed on the CLI to be in PEM format, so it would have worked if you had used something like:
```console
export SEARCH_ENGINE_OPEN_SEARCH_CLIENT_CA_CERT="-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAKJ...
...
-----END CERTIFICATE-----"
```

which was different from all our other cert env vars, which take a path.
2025-11-21 15:15:48 +01:00
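The distinction described above (inline PEM content in the variable vs. a path to a PEM file) can be sketched as a small helper. The function name and the fallback behavior are assumptions for illustration, not the project's actual code:

```python
def load_ca_cert(value: str) -> bytes:
    # Hypothetical helper: if the variable already holds PEM data, use it
    # verbatim; otherwise treat it as a file path, like the other
    # *_CA_CERT environment variables do.
    if value.lstrip().startswith("-----BEGIN"):
        return value.encode()
    with open(value, "rb") as f:
        return f.read()
```

Accepting both forms keeps existing setups that exported inline PEM content working while aligning the variable with its path-taking siblings.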
Viktor Scharf
d1fa52b603 revaBump-getting#428 (#1887) 2025-11-21 13:11:10 +01:00
Jörn Friedrich Dreyer
538e8141b2 fix opensearch client certificate
Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-21 12:23:07 +01:00
opencloudeu
ba71d2978a [tx] updated from transifex 2025-11-21 00:03:13 +00:00
dependabot[bot]
4ae0951f5f build(deps): bump github.com/blevesearch/bleve/v2 from 2.5.4 to 2.5.5
Bumps [github.com/blevesearch/bleve/v2](https://github.com/blevesearch/bleve) from 2.5.4 to 2.5.5.
- [Release notes](https://github.com/blevesearch/bleve/releases)
- [Commits](https://github.com/blevesearch/bleve/compare/v2.5.4...v2.5.5)

---
updated-dependencies:
- dependency-name: github.com/blevesearch/bleve/v2
  dependency-version: 2.5.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 17:41:01 +01:00
Ralf Haferkamp
e85d8effc1 Bump reva
Fixes: #1871, #1867
2025-11-20 14:58:30 +01:00
Michael Flemming
90e4127227 Merge pull request #1881 from opencloud-eu/fix_rc_latest_overwrite 2025-11-20 14:14:41 +01:00
dependabot[bot]
28148d02bd build(deps): bump github.com/olekukonko/tablewriter from 1.1.0 to 1.1.1
Bumps [github.com/olekukonko/tablewriter](https://github.com/olekukonko/tablewriter) from 1.1.0 to 1.1.1.
- [Commits](https://github.com/olekukonko/tablewriter/compare/v1.1.0...v1.1.1)

---
updated-dependencies:
- dependency-name: github.com/olekukonko/tablewriter
  dependency-version: 1.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 11:50:20 +01:00
Michael 'Flimmy' Flemming
205ffbbe83 add logic to skip latest tag for pre-releases 2025-11-20 11:27:57 +01:00
Viktor Scharf
9d173f0ea6 api-tests: delete spaces before users (#1877)
* delete spaces before users

* delete users after deleting spaces

* fix
2025-11-20 09:12:48 +01:00
Thomas Schweiger
6161e40d43 Merge pull request #1859 from opencloud-eu/fix/create-missing-readme-files
fix: add missing service README.md files with basic description
2025-11-19 19:16:16 +01:00
Christian Richter
95b19c7e33 Merge pull request #1617 from dragonchaser/merge-csp-configs
load two yaml configs
2025-11-19 14:20:43 +01:00
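The "load two yaml configs" / "deepmerge" commits above describe layering an override CSP config on top of the default one. A minimal sketch of such a deep merge, assuming both YAML files have already been parsed into dicts (the function is illustrative, not the merged PR's implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    # Recursively merge two parsed YAML mappings: nested dicts are
    # merged key by key, any other override value replaces the base one.
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

The companion "completely override" commit suggests a mode that skips the merge and takes the override file wholesale.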
Christian Richter
97ee9b36a5 incorporate requested changes
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
f9807f9f3a actually load overrideyaml
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
8007e8a269 add ability to completely override csp config
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
63603679a5 remove obsolete comment
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
16f9667fe8 adapt tests & deepmerge
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
d16524510a adapt tests
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Christian Richter
20b903b32d load two yaml configs
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2025-11-19 09:38:16 +01:00
Michael Barz
54a38e37c6 Apply suggestions from code review
Co-authored-by: Jörn Friedrich Dreyer <jfd@butonic.de>
2025-11-17 10:43:44 +01:00
Jörn Friedrich Dreyer
4de25fdb5e Apply suggestions from code review 2025-11-17 10:41:17 +01:00
Thomas Schweiger
aa2da8372b fix: add missing service README.md files with basic description 2025-11-14 12:24:01 +01:00
219 changed files with 20029 additions and 26260 deletions


@@ -1,4 +1,4 @@
# The test runner source for UI tests
-WEB_COMMITID=6abffcc9cff31c46a341105eb6030fec56338126
-WEB_BRANCH=main
+WEB_COMMITID=3d7367cfb1abe288d0fc0b0b1cc494a7747bcaf6
+WEB_BRANCH=stable-4.2


@@ -608,7 +608,7 @@ def testPipelines(ctx):
pipelines += apiTests(ctx)
enable_watch_fs = [False]
-if ctx.build.event == "cron" or "full-ci" in ctx.build.title.lower():
+if ctx.build.event == "cron":
enable_watch_fs.append(True)
for run_with_watch_fs_enabled in enable_watch_fs:
@@ -994,7 +994,7 @@ def localApiTestPipeline(ctx):
with_remote_php = [True]
enable_watch_fs = [False]
-if ctx.build.event == "cron" or "full-ci" in ctx.build.title.lower():
+if ctx.build.event == "cron":
with_remote_php.append(False)
enable_watch_fs.append(True)
@@ -1310,7 +1310,7 @@ def apiTests(ctx):
with_remote_php = [True]
enable_watch_fs = [False]
-if ctx.build.event == "cron" or "full-ci" in ctx.build.title.lower():
+if ctx.build.event == "cron":
with_remote_php.append(False)
enable_watch_fs.append(True)
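The three hunks above apply the same narrowing in each pipeline builder. In isolation the gating reads as follows (a Python sketch mirroring the Starlark, with the build event and title passed explicitly):

```python
def watch_fs_variants(event: str, title: str) -> list:
    # After this change, only nightly cron builds add the watch-fs
    # pipeline variant; a "full-ci" build title no longer triggers it.
    enable_watch_fs = [False]
    if event == "cron":
        enable_watch_fs.append(True)
    return enable_watch_fs
```

The `title` parameter is kept only to show what the removed condition used to consult.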
@@ -1654,6 +1654,16 @@ def dockerRelease(ctx, repo, build_type):
"VERSION": "%s" % (ctx.build.ref.replace("refs/tags/", "") if ctx.build.event == "tag" else "daily"),
}
+# if no additional tag is given, the build-plugin adds latest
+hard_tag = "daily"
+if ctx.build.event == "tag":
+    tag_version = ctx.build.ref.replace("refs/tags/", "")
+    tag_parts = tag_version.split("-")
+    # if a tag has something appended with "-" i.e. alpha, beta, rc1...
+    # set the entire string as tag, else leave empty to autotag with latest
+    hard_tag = tag_version if len(tag_parts) > 1 else ""
depends_on = getPipelineNames(getGoBinForTesting(ctx))
if ctx.build.event == "tag":
@@ -1672,7 +1682,7 @@ def dockerRelease(ctx, repo, build_type):
"platforms": "linux/amd64", # do dry run only on the native platform
"repo": "%s,quay.io/%s" % (repo, repo),
"auto_tag": False if build_type == "daily" else True,
-"tag": "daily" if build_type == "daily" else "",
+"tag": hard_tag,
"default_tag": "daily",
"dockerfile": "opencloud/docker/Dockerfile.multiarch",
"build_args": build_args,
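The effect of the new `hard_tag` computation can be condensed into one function (a sketch of the logic in the hunks above, not the pipeline code itself):

```python
def docker_tag(event: str, ref: str) -> str:
    # Daily builds get a hard "daily" tag; pre-release tags (anything
    # after a "-", e.g. rc.2) are pinned verbatim so auto-tagging does
    # not also move "latest"; stable tags return "" so the build plugin
    # auto-tags and adds "latest".
    if event != "tag":
        return "daily"
    tag_version = ref.replace("refs/tags/", "")
    return tag_version if len(tag_version.split("-")) > 1 else ""
```

This matches the "skip latest tag for pre-releases" commit in the log above: an rc tag no longer overwrites `latest`.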

go.mod

@@ -11,9 +11,9 @@ require (
github.com/Nerzal/gocloak/v13 v13.9.0
github.com/bbalet/stopwords v1.0.0
github.com/beevik/etree v1.6.0
-github.com/blevesearch/bleve/v2 v2.5.4
+github.com/blevesearch/bleve/v2 v2.5.5
github.com/cenkalti/backoff v2.2.1+incompatible
-github.com/coreos/go-oidc/v3 v3.16.0
+github.com/coreos/go-oidc/v3 v3.17.0
github.com/cs3org/go-cs3apis v0.0.0-20250908152307-4ca807afe54e
github.com/davidbyttow/govips/v2 v2.16.0
github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8
@@ -57,14 +57,14 @@ require (
github.com/nats-io/nats-server/v2 v2.12.1
github.com/nats-io/nats.go v1.47.0
github.com/oklog/run v1.2.0
-github.com/olekukonko/tablewriter v1.1.0
+github.com/olekukonko/tablewriter v1.1.1
github.com/onsi/ginkgo v1.16.5
github.com/onsi/ginkgo/v2 v2.27.2
github.com/onsi/gomega v1.38.2
github.com/open-policy-agent/opa v1.10.1
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76
-github.com/opencloud-eu/reva/v2 v2.39.3-0.20251113164418-9fd6b6864c10
+github.com/opencloud-eu/reva/v2 v2.39.3
github.com/opensearch-project/opensearch-go/v4 v4.5.0
github.com/orcaman/concurrent-map v1.0.0
github.com/pkg/errors v0.9.1
@@ -101,18 +101,19 @@ require (
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0
go.opentelemetry.io/otel/sdk v1.38.0
go.opentelemetry.io/otel/trace v1.38.0
-golang.org/x/crypto v0.43.0
+golang.org/x/crypto v0.44.0
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac
golang.org/x/image v0.32.0
golang.org/x/net v0.46.0
golang.org/x/oauth2 v0.33.0
golang.org/x/sync v0.18.0
golang.org/x/term v0.37.0
-golang.org/x/text v0.30.0
+golang.org/x/text v0.31.0
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4
google.golang.org/grpc v1.76.0
google.golang.org/protobuf v1.36.10
gopkg.in/yaml.v2 v2.4.0
+gopkg.in/yaml.v3 v3.0.1
gotest.tools/v3 v3.5.2
stash.kopano.io/kgol/rndm v1.1.2
)
@@ -139,13 +140,13 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/bitly/go-simplejson v0.5.0 // indirect
github.com/bits-and-blooms/bitset v1.22.0 // indirect
-github.com/blevesearch/bleve_index_api v1.2.10 // indirect
+github.com/blevesearch/bleve_index_api v1.2.11 // indirect
github.com/blevesearch/geo v0.2.4 // indirect
-github.com/blevesearch/go-faiss v1.0.25 // indirect
+github.com/blevesearch/go-faiss v1.0.26 // indirect
github.com/blevesearch/go-porterstemmer v1.0.3 // indirect
github.com/blevesearch/gtreap v0.1.1 // indirect
github.com/blevesearch/mmap-go v1.0.4 // indirect
-github.com/blevesearch/scorch_segment_api/v2 v2.3.12 // indirect
+github.com/blevesearch/scorch_segment_api/v2 v2.3.13 // indirect
github.com/blevesearch/segment v0.9.1 // indirect
github.com/blevesearch/snowballstem v0.9.0 // indirect
github.com/blevesearch/upsidedown_store_api v1.0.2 // indirect
@@ -155,7 +156,7 @@ require (
github.com/blevesearch/zapx/v13 v13.4.2 // indirect
github.com/blevesearch/zapx/v14 v14.4.2 // indirect
github.com/blevesearch/zapx/v15 v15.4.2 // indirect
-github.com/blevesearch/zapx/v16 v16.2.6 // indirect
+github.com/blevesearch/zapx/v16 v16.2.7 // indirect
github.com/bluele/gcache v0.0.2 // indirect
github.com/bombsimon/logrusr/v3 v3.1.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
@@ -163,6 +164,9 @@ require (
github.com/ceph/go-ceph v0.36.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cevaris/ordered_map v0.0.0-20190319150403-3adeae072e73 // indirect
+github.com/clipperhouse/displaywidth v0.3.1 // indirect
+github.com/clipperhouse/stringish v0.1.1 // indirect
+github.com/clipperhouse/uax29/v2 v2.2.0 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
@@ -238,7 +242,7 @@ require (
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/go-tpm v0.9.6 // indirect
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 // indirect
-github.com/google/renameio/v2 v2.0.0 // indirect
+github.com/google/renameio/v2 v2.0.1 // indirect
github.com/gookit/goutil v0.7.1 // indirect
github.com/gorilla/handlers v1.5.1 // indirect
github.com/gorilla/schema v1.4.1 // indirect
@@ -276,7 +280,7 @@ require (
github.com/mattermost/xml-roundtrip-validator v0.1.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
-github.com/mattn/go-runewidth v0.0.16 // indirect
+github.com/mattn/go-runewidth v0.0.19 // indirect
github.com/mattn/go-sqlite3 v1.14.32 // indirect
github.com/maxymania/go-system v0.0.0-20170110133659-647cc364bf0b // indirect
github.com/mendsley/gojwk v0.0.0-20141217222730-4d5ec6e58103 // indirect
@@ -304,8 +308,9 @@ require (
github.com/nats-io/nkeys v0.4.11 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/nxadm/tail v1.4.8 // indirect
+github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6 // indirect
github.com/olekukonko/errors v1.1.0 // indirect
-github.com/olekukonko/ll v0.0.9 // indirect
+github.com/olekukonko/ll v0.1.2 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
@@ -325,11 +330,13 @@ require (
github.com/prometheus/procfs v0.17.0 // indirect
github.com/prometheus/statsd_exporter v0.22.8 // indirect
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 // indirect
-github.com/rivo/uniseg v0.4.7 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/russellhaering/goxmldsig v1.5.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd // indirect
+github.com/samber/lo v1.51.0 // indirect
+github.com/samber/slog-common v0.19.0 // indirect
+github.com/samber/slog-zerolog/v2 v2.9.0 // indirect
github.com/segmentio/asm v1.2.0 // indirect
github.com/segmentio/kafka-go v0.4.49 // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
@@ -385,7 +392,6 @@ require (
gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
-gopkg.in/yaml.v3 v3.0.1 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)
@@ -400,3 +406,6 @@ replace go-micro.dev/v4 => github.com/butonic/go-micro/v4 v4.11.1-0.202411151126
exclude github.com/mattn/go-sqlite3 v2.0.3+incompatible
replace github.com/go-micro/plugins/v4/store/nats-js-kv => github.com/opencloud-eu/go-micro-plugins/v4/store/nats-js-kv v0.0.0-20250512152754-23325793059a
+// to get the logger injection (https://github.com/pablodz/inotifywaitgo/pull/11)
+replace github.com/pablodz/inotifywaitgo v0.0.9 => github.com/opencloud-eu/inotifywaitgo v0.0.0-20251111171128-a390bae3c5e9

go.sum

@@ -151,22 +151,22 @@ github.com/bits-and-blooms/bitset v1.12.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6
github.com/bits-and-blooms/bitset v1.22.0 h1:Tquv9S8+SGaS3EhyA+up3FXzmkhxPGjQQCkcs2uw7w4=
github.com/bits-and-blooms/bitset v1.22.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6a/7QIWpPxHddWR8=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
-github.com/blevesearch/bleve/v2 v2.5.4 h1:1iur8e+PHsxtncV2xIVuqlQme/V8guEDO2uV6Wll3lQ=
-github.com/blevesearch/bleve/v2 v2.5.4/go.mod h1:yB4PnV4N2q5rTEpB2ndG8N2ISexBQEFIYgwx4ztfvoo=
-github.com/blevesearch/bleve_index_api v1.2.10 h1:FMFmZCmTX6PdoLLvwUnKF2RsmILFFwO3h0WPevXY9fE=
-github.com/blevesearch/bleve_index_api v1.2.10/go.mod h1:rKQDl4u51uwafZxFrPD1R7xFOwKnzZW7s/LSeK4lgo0=
+github.com/blevesearch/bleve/v2 v2.5.5 h1:lzC89QUCco+y1qBnJxGqm4AbtsdsnlUvq0kXok8n3C8=
+github.com/blevesearch/bleve/v2 v2.5.5/go.mod h1:t5WoESS5TDteTdnjhhvpA1BpLYErOBX2IQViTMLK7wo=
+github.com/blevesearch/bleve_index_api v1.2.11 h1:bXQ54kVuwP8hdrXUSOnvTQfgK0KI1+f9A0ITJT8tX1s=
+github.com/blevesearch/bleve_index_api v1.2.11/go.mod h1:rKQDl4u51uwafZxFrPD1R7xFOwKnzZW7s/LSeK4lgo0=
github.com/blevesearch/geo v0.2.4 h1:ECIGQhw+QALCZaDcogRTNSJYQXRtC8/m8IKiA706cqk=
github.com/blevesearch/geo v0.2.4/go.mod h1:K56Q33AzXt2YExVHGObtmRSFYZKYGv0JEN5mdacJJR8=
-github.com/blevesearch/go-faiss v1.0.25 h1:lel1rkOUGbT1CJ0YgzKwC7k+XH0XVBHnCVWahdCXk4U=
-github.com/blevesearch/go-faiss v1.0.25/go.mod h1:OMGQwOaRRYxrmeNdMrXJPvVx8gBnvE5RYrr0BahNnkk=
+github.com/blevesearch/go-faiss v1.0.26 h1:4dRLolFgjPyjkaXwff4NfbZFdE/dfywbzDqporeQvXI=
+github.com/blevesearch/go-faiss v1.0.26/go.mod h1:OMGQwOaRRYxrmeNdMrXJPvVx8gBnvE5RYrr0BahNnkk=
github.com/blevesearch/go-porterstemmer v1.0.3 h1:GtmsqID0aZdCSNiY8SkuPJ12pD4jI+DdXTAn4YRcHCo=
github.com/blevesearch/go-porterstemmer v1.0.3/go.mod h1:angGc5Ht+k2xhJdZi511LtmxuEf0OVpvUUNrwmM1P7M=
github.com/blevesearch/gtreap v0.1.1 h1:2JWigFrzDMR+42WGIN/V2p0cUvn4UP3C4Q5nmaZGW8Y=
github.com/blevesearch/gtreap v0.1.1/go.mod h1:QaQyDRAT51sotthUWAH4Sj08awFSSWzgYICSZ3w0tYk=
github.com/blevesearch/mmap-go v1.0.4 h1:OVhDhT5B/M1HNPpYPBKIEJaD0F3Si+CrEKULGCDPWmc=
github.com/blevesearch/mmap-go v1.0.4/go.mod h1:EWmEAOmdAS9z/pi/+Toxu99DnsbhG1TIxUoRmJw/pSs=
-github.com/blevesearch/scorch_segment_api/v2 v2.3.12 h1:GGZc2qwbyRBwtckPPkHkLyXw64mmsLJxdturBI1cM+c=
-github.com/blevesearch/scorch_segment_api/v2 v2.3.12/go.mod h1:JBRGAneqgLSI2+jCNjtwMqp2B7EBF3/VUzgDPIU33MM=
+github.com/blevesearch/scorch_segment_api/v2 v2.3.13 h1:ZPjv/4VwWvHJZKeMSgScCapOy8+DdmsmRyLmSB88UoY=
+github.com/blevesearch/scorch_segment_api/v2 v2.3.13/go.mod h1:ENk2LClTehOuMS8XzN3UxBEErYmtwkE7MAArFTXs9Vc=
github.com/blevesearch/segment v0.9.1 h1:+dThDy+Lvgj5JMxhmOVlgFfkUtZV2kw49xax4+jTfSU=
github.com/blevesearch/segment v0.9.1/go.mod h1:zN21iLm7+GnBHWTao9I+Au/7MBiL8pPFtJBJTsk6kQw=
github.com/blevesearch/snowballstem v0.9.0 h1:lMQ189YspGP6sXvZQ4WZ+MLawfV8wOmPoD/iWeNXm8s=
@@ -185,8 +185,8 @@ github.com/blevesearch/zapx/v14 v14.4.2 h1:2SGHakVKd+TrtEqpfeq8X+So5PShQ5nW6GNxT
github.com/blevesearch/zapx/v14 v14.4.2/go.mod h1:rz0XNb/OZSMjNorufDGSpFpjoFKhXmppH9Hi7a877D8=
github.com/blevesearch/zapx/v15 v15.4.2 h1:sWxpDE0QQOTjyxYbAVjt3+0ieu8NCE0fDRaFxEsp31k=
github.com/blevesearch/zapx/v15 v15.4.2/go.mod h1:1pssev/59FsuWcgSnTa0OeEpOzmhtmr/0/11H0Z8+Nw=
-github.com/blevesearch/zapx/v16 v16.2.6 h1:OHuUl2GhM+FpBq9RwNsJ4k/QodqbMMHoQEgn/IHYpu8=
-github.com/blevesearch/zapx/v16 v16.2.6/go.mod h1:cuAPB+YoIyRngNhno1S1GPr9SfMk+x/SgAHBLXSIq3k=
+github.com/blevesearch/zapx/v16 v16.2.7 h1:xcgFRa7f/tQXOwApVq7JWgPYSlzyUMmkuYa54tMDuR0=
+github.com/blevesearch/zapx/v16 v16.2.7/go.mod h1:murSoCJPCk25MqURrcJaBQ1RekuqSCSfMjXH4rHyA14=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
@@ -223,6 +223,12 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/clipperhouse/displaywidth v0.3.1 h1:k07iN9gD32177o1y4O1jQMzbLdCrsGJh+blirVYybsk=
+github.com/clipperhouse/displaywidth v0.3.1/go.mod h1:tgLJKKyaDOCadywag3agw4snxS5kYEuYR6Y9+qWDDYM=
+github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
+github.com/clipperhouse/stringish v0.1.1/go.mod h1:v/WhFtE1q0ovMta2+m+UbpZ+2/HEXNWYXQgCt4hdOzA=
+github.com/clipperhouse/uax29/v2 v2.2.0 h1:ChwIKnQN3kcZteTXMgb1wztSgaU+ZemkgWdohwgs8tY=
+github.com/clipperhouse/uax29/v2 v2.2.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/cloudflare/cloudflare-go v0.14.0/go.mod h1:EnwdgGMaFOruiPZRFSgn+TsQ3hQ7C/YWzIGLeu5c304=
@@ -237,8 +243,8 @@ github.com/containerd/platforms v1.0.0-rc.1 h1:83KIq4yy1erSRgOVHNk1HYdPvzdJ5CnsW
github.com/containerd/platforms v1.0.0-rc.1/go.mod h1:J71L7B+aiM5SdIEqmd9wp6THLVRzJGXfNuWCZCllLA4=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
-github.com/coreos/go-oidc/v3 v3.16.0 h1:qRQUCFstKpXwmEjDQTIbyY/5jF00+asXzSkmkoa/mow=
-github.com/coreos/go-oidc/v3 v3.16.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
+github.com/coreos/go-oidc/v3 v3.17.0 h1:hWBGaQfbi0iVviX4ibC7bk8OKT5qNr4klBaCHVNvehc=
+github.com/coreos/go-oidc/v3 v3.17.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
@@ -581,8 +587,8 @@ github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
-github.com/google/renameio/v2 v2.0.0 h1:UifI23ZTGY8Tt29JbYFiuyIU3eX+RNFtUwefq9qAhxg=
-github.com/google/renameio/v2 v2.0.0/go.mod h1:BtmJXm5YlszgC+TD4HOEEUFgkJP3nLxehU6hfe7jRt4=
+github.com/google/renameio/v2 v2.0.1 h1:HyOM6qd9gF9sf15AvhbptGHUnaLTpEI9akAFFU3VyW0=
+github.com/google/renameio/v2 v2.0.1/go.mod h1:BtmJXm5YlszgC+TD4HOEEUFgkJP3nLxehU6hfe7jRt4=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -821,8 +827,8 @@ github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.6/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
-github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
-github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
+github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-tty v0.0.0-20180219170247-931426f7535a/go.mod h1:XPvLUNfbS4fJH25nqRHfWLMa1ONC8Amw+mIA639KxkE=
@@ -924,13 +930,15 @@ github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+
github.com/oklog/run v1.2.0 h1:O8x3yXwah4A73hJdlrwo/2X6J62gE5qTMusH0dvz60E=
github.com/oklog/run v1.2.0/go.mod h1:mgDbKRSwPhJfesJ4PntqFUbKQRZ50NgmZTSPlFA0YFk=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
+github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6 h1:zrbMGy9YXpIeTnGj4EljqMiZsIcE09mmF8XsD5AYOJc=
+github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6/go.mod h1:rEKTHC9roVVicUIfZK7DYrdIoM0EOr8mK1Hj5s3JjH0=
github.com/olekukonko/errors v1.1.0 h1:RNuGIh15QdDenh+hNvKrJkmxxjV4hcS50Db478Ou5sM=
github.com/olekukonko/errors v1.1.0/go.mod h1:ppzxA5jBKcO1vIpCXQ9ZqgDh8iwODz6OXIGKU8r5m4Y=
-github.com/olekukonko/ll v0.0.9 h1:Y+1YqDfVkqMWuEQMclsF9HUR5+a82+dxJuL1HHSRpxI=
-github.com/olekukonko/ll v0.0.9/go.mod h1:En+sEW0JNETl26+K8eZ6/W4UQ7CYSrrgg/EdIYT2H8g=
+github.com/olekukonko/ll v0.1.2 h1:lkg/k/9mlsy0SxO5aC+WEpbdT5K83ddnNhAepz7TQc0=
+github.com/olekukonko/ll v0.1.2/go.mod h1:b52bVQRRPObe+yyBl0TxNfhesL0nedD4Cht0/zx55Ew=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
-github.com/olekukonko/tablewriter v1.1.0 h1:N0LHrshF4T39KvI96fn6GT8HEjXRXYNDrDjKFDB7RIY=
-github.com/olekukonko/tablewriter v1.1.0/go.mod h1:5c+EBPeSqvXnLLgkm9isDdzR3wjfBkHR9Nhfp3NWrzo=
+github.com/olekukonko/tablewriter v1.1.1 h1:b3reP6GCfrHwmKkYwNRFh2rxidGHcT6cgxj/sHiDDx0=
+github.com/olekukonko/tablewriter v1.1.1/go.mod h1:De/bIcTF+gpBDB3Alv3fEsZA+9unTsSzAg/ZGADCtn4=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
@@ -949,10 +957,12 @@ github.com/opencloud-eu/go-micro-plugins/v4/store/nats-js-kv v0.0.0-202505121527
github.com/opencloud-eu/go-micro-plugins/v4/store/nats-js-kv v0.0.0-20250512152754-23325793059a/go.mod h1:pjcozWijkNPbEtX5SIQaxEW/h8VAVZYTLx+70bmB3LY=
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89 h1:W1ms+lP5lUUIzjRGDg93WrQfZJZCaV1ZP3KeyXi8bzY=
github.com/opencloud-eu/icap-client v0.0.0-20250930132611-28a2afe62d89/go.mod h1:vigJkNss1N2QEceCuNw/ullDehncuJNFB6mEnzfq9UI=
+github.com/opencloud-eu/inotifywaitgo v0.0.0-20251111171128-a390bae3c5e9 h1:dIftlX03Bzfbujhp9B54FbgER0VBDWJi/w8RBxJlzxU=
+github.com/opencloud-eu/inotifywaitgo v0.0.0-20251111171128-a390bae3c5e9/go.mod h1:JWyDC6H+5oZRdUJUgKuaye+8Ph5hEs6HVzVoPKzWSGI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76 h1:vD/EdfDUrv4omSFjrinT8Mvf+8D7f9g4vgQ2oiDrVUI=
github.com/opencloud-eu/libre-graph-api-go v1.0.8-0.20250724122329-41ba6b191e76/go.mod h1:pzatilMEHZFT3qV7C/X3MqOa3NlRQuYhlRhZTL+hN6Q=
-github.com/opencloud-eu/reva/v2 v2.39.3-0.20251113164418-9fd6b6864c10 h1:9b5O3lzYHmR+aDNo81UYMcDGfUARrHw5Suk4YmqNgJA=
-github.com/opencloud-eu/reva/v2 v2.39.3-0.20251113164418-9fd6b6864c10/go.mod h1:YxP7b+8olAhgbQBUUnsRQokgf1RkwpEBLq614XXXXHA=
+github.com/opencloud-eu/reva/v2 v2.39.3 h1:/9NW08Bpy1GaNAPo8HrlyT21Flj8uNnOUyWLud1ehGc=
+github.com/opencloud-eu/reva/v2 v2.39.3/go.mod h1:kkGiMeEVR59VjDsmWIczWqRcwK8cy9ogTd/u802U3NI=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
@@ -969,8 +979,6 @@ github.com/orcaman/concurrent-map v1.0.0/go.mod h1:Lu3tH6HLW3feq74c2GC+jIMS/K2CF
github.com/ovh/go-ovh v1.1.0/go.mod h1:AxitLZ5HBRPyUd+Zl60Ajaag+rNTdVXWIkzfrVuTXWA=
github.com/oxtoacart/bpool v0.0.0-20190530202638-03653db5a59c h1:rp5dCmg/yLR3mgFuSOe4oEnDDmGLROTvMragMUXpTQw=
github.com/oxtoacart/bpool v0.0.0-20190530202638-03653db5a59c/go.mod h1:X07ZCGwUbLaax7L0S3Tw4hpejzu63ZrrQiUe6W0hcy0=
-github.com/pablodz/inotifywaitgo v0.0.9 h1:njquRbBU7fuwIe5rEvtaniVBjwWzcpdUVptSgzFqZsw=
-github.com/pablodz/inotifywaitgo v0.0.9/go.mod h1:hAfx2oN+WKg8miwUKPs52trySpPignlRBRxWcXVHku0=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
@@ -1064,9 +1072,6 @@ github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 h1:bsUq1dX0N8A
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/riandyrn/otelchi v0.12.2 h1:6QhGv0LVw/dwjtPd12mnNrl0oEQF4ZAlmHcnlTYbeAg=
github.com/riandyrn/otelchi v0.12.2/go.mod h1:weZZeUJURvtCcbWsdb7Y6F8KFZGedJlSrgUjq9VirV8=
-github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
-github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
-github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
@@ -1086,6 +1091,12 @@ github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd h1:CmH9+J6ZSsIjUK
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd/go.mod h1:hPqNNc0+uJM6H+SuU8sEs5K5IQeKccPqeSjfgcKGgPk=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sacloud/libsacloud v1.36.2/go.mod h1:P7YAOVmnIn3DKHqCZcUKYUXmSwGBm3yS7IBEjKVSrjg=
+github.com/samber/lo v1.51.0 h1:kysRYLbHy/MB7kQZf5DSN50JHmMsNEdeY24VzJFu7wI=
+github.com/samber/lo v1.51.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
+github.com/samber/slog-common v0.19.0 h1:fNcZb8B2uOLooeYwFpAlKjkQTUafdjfqKcwcC89G9YI=
+github.com/samber/slog-common v0.19.0/go.mod h1:dTz+YOU76aH007YUU0DffsXNsGFQRQllPQh9XyNoA3M=
+github.com/samber/slog-zerolog/v2 v2.9.0 h1:6LkOabJmZdNLaUWkTC3IVVA+dq7b/V0FM6lz6/7+THI=
+github.com/samber/slog-zerolog/v2 v2.9.0/go.mod h1:gnQW9VnCfM34v2pRMUIGMsZOVbYLqY/v0Wxu6atSVGc=
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.7.0.20210127161313-bd30bebeac4f/go.mod h1:CJJ5VAbozOl0yEw7nHB9+7BXTJbIn6h7W+f6Gau5IP8=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
@@ -1346,8 +1357,8 @@ golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
-golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
-golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
+golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
+golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1586,8 +1597,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
-golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
-golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=

View File

@@ -1,4 +1,4 @@
FROM golang:alpine3.21 AS build
FROM golang:alpine3.22 AS build
ARG TARGETOS
ARG TARGETARCH
ARG VERSION
@@ -13,7 +13,7 @@ RUN --mount=type=bind,target=/build,rw \
GOOS="${TARGETOS:-linux}" GOARCH="${TARGETARCH:-amd64}" ; \
make -C opencloud/opencloud release-linux-docker-${TARGETARCH} ENABLE_VIPS=true DIST=/dist
FROM alpine:3.21
FROM alpine:3.22
ARG VERSION
ARG REVISION
ARG TARGETOS

View File

@@ -16,7 +16,7 @@ var (
// LatestTag is the latest released version plus the dev meta version.
// Will be overwritten by the release pipeline
// Needs a manual change for every tagged release
LatestTag = "4.0.0-rc.1+dev"
LatestTag = "4.0.0-rc.2+dev"
// Date indicates the build date.
// This has been removed; it looks like you can only replace static strings with recent Go versions

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"POT-Creation-Date: 2025-11-26 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"

View File

@@ -0,0 +1,20 @@
# App Provider
The `app-provider` service provides the CS3 App Provider API for OpenCloud. It is responsible for managing and serving applications that can open files based on their MIME types.
The service works in conjunction with the `app-registry` service, which maintains the registry of available applications and their supported MIME types. When a client requests to open a file with a specific application, the `app-provider` service handles the request and coordinates with the application to provide the appropriate interface.
## Integration
The `app-provider` service integrates with:
- `app-registry` - For discovering which applications are available for specific MIME types
- `frontend` - The frontend service forwards app provider requests (default endpoint `/app`) to this service
## Configuration
The service can be configured via environment variables. Key configuration options include:
- `APP_PROVIDER_EXTERNAL_ADDR` - External address where the gateway service can reach the app provider
## Scalability
The app-provider service can be scaled horizontally as it primarily acts as a coordinator between applications and the OpenCloud backend services.

View File

@@ -36,6 +36,7 @@ type Config struct {
SearchMinLength int `yaml:"search_min_length" env:"FRONTEND_SEARCH_MIN_LENGTH" desc:"Minimum number of characters to enter before a client should start a search for Share receivers. This setting can be used to customize the user experience if e.g. too many results are displayed." introductionVersion:"1.0.0"`
Edition string `yaml:"edition" env:"OC_EDITION;FRONTEND_EDITION" desc:"Edition of OpenCloud. Used for branding purposes." introductionVersion:"1.0.0"`
DisableSSE bool `yaml:"disable_sse" env:"OC_DISABLE_SSE;FRONTEND_DISABLE_SSE" desc:"When set to true, clients are informed that the Server-Sent Events endpoint is not accessible." introductionVersion:"1.0.0"`
DisableRadicale bool `yaml:"disable_radicale" env:"FRONTEND_DISABLE_RADICALE" desc:"When set to true, clients are informed that the Radicale (CalDAV/CardDAV) service is not accessible." introductionVersion:"4.0.0"`
DefaultLinkPermissions int `yaml:"default_link_permissions" env:"FRONTEND_DEFAULT_LINK_PERMISSIONS" desc:"Defines the default permissions a link is being created with. Possible values are 0 (= internal link, for instance members only) and 1 (= public link with viewer permissions). Defaults to 1." introductionVersion:"1.0.0"`
PublicURL string `yaml:"public_url" env:"OC_URL;FRONTEND_PUBLIC_URL" desc:"The public facing URL of the OpenCloud frontend." introductionVersion:"1.0.0"`

View File

@@ -218,6 +218,7 @@ func FrontendConfigFromStruct(cfg *config.Config, logger log.Logger) (map[string
"check_for_updates": cfg.CheckForUpdates,
"support_url_signing": true,
"support_sse": !cfg.DisableSSE,
"support_radicale": !cfg.DisableRadicale,
},
"graph": map[string]interface{}{
"personal_data_export": true,

View File

@@ -497,7 +497,7 @@ func (i *LDAP) searchLDAPEntryByFilter(basedn string, attrs []string, filter str
return res.Entries[0], nil
}
func filterEscapeUUID(binary bool, id string) (string, error) {
func filterEscapeAttribute(attribute string, binary bool, id string) (string, error) {
var escaped string
if binary {
pid, err := uuid.Parse(id)
@@ -505,17 +505,44 @@ func filterEscapeUUID(binary bool, id string) (string, error) {
err := fmt.Errorf("error parsing id '%s' as UUID: %w", id, err)
return "", err
}
for _, b := range pid {
escaped = fmt.Sprintf("%s\\%02x", escaped, b)
}
escaped = filterEscapeBinaryUUID(attribute, pid)
} else {
escaped = ldap.EscapeFilter(id)
}
return escaped, nil
}
// swapObjectGUIDBytes converts between AD's mixed-endian objectGUID format and standard UUID byte order
func swapObjectGUIDBytes(value []byte) []byte {
if len(value) != 16 {
return value
}
return []byte{
value[3], value[2], value[1], value[0], // First component (4 bytes) - reverse
value[5], value[4], // Second component (2 bytes) - reverse
value[7], value[6], // Third component (2 bytes) - reverse
value[8], value[9], value[10], value[11], value[12], value[13], value[14], value[15], // Last 8 bytes - keep as-is
}
}
func filterEscapeBinaryUUID(attribute string, value uuid.UUID) string {
bytes := value[:]
// AD stores objectGUID with mixed endianness 🤪 - swap first 3 components
if strings.EqualFold(attribute, "objectguid") {
bytes = swapObjectGUIDBytes(bytes)
}
var filtered strings.Builder
filtered.Grow(len(bytes) * 3) // Pre-allocate: each byte becomes "\xx"
for _, b := range bytes {
fmt.Fprintf(&filtered, "\\%02x", b)
}
return filtered.String()
}
func (i *LDAP) getLDAPUserByID(id string) (*ldap.Entry, error) {
idString, err := filterEscapeUUID(i.userIDisOctetString, id)
idString, err := filterEscapeAttribute(i.userAttributeMap.id, i.userIDisOctetString, id)
if err != nil {
return nil, fmt.Errorf("invalid User id: %w", err)
}
@@ -524,7 +551,7 @@ func (i *LDAP) getLDAPUserByID(id string) (*ldap.Entry, error) {
}
func (i *LDAP) getLDAPUserByNameOrID(nameOrID string) (*ldap.Entry, error) {
idString, err := filterEscapeUUID(i.userIDisOctetString, nameOrID)
idString, err := filterEscapeAttribute(i.userAttributeMap.id, i.userIDisOctetString, nameOrID)
// err != nil just means that this is not a UUID, so we can skip the UUID filter part
// and just filter by name
var filter string
@@ -812,16 +839,25 @@ func (i *LDAP) updateUserPassword(ctx context.Context, dn, password string) erro
return err
}
func (i *LDAP) ldapUUIDtoString(e *ldap.Entry, attrType string, binary bool) (string, error) {
func (i *LDAP) ldapUUIDtoString(e *ldap.Entry, attribute string, binary bool) (string, error) {
if binary {
rawValue := e.GetEqualFoldRawAttributeValue(attrType)
value, err := uuid.FromBytes(rawValue)
if err == nil {
return value.String(), nil
value := e.GetEqualFoldRawAttributeValue(attribute)
if len(value) != 16 {
return "", fmt.Errorf("invalid UUID in '%s' attribute (got %d bytes)", attribute, len(value))
}
return "", err
// AD stores objectGUID with mixed endianness 🤪 - swap first 3 components
if strings.EqualFold(attribute, "objectguid") {
value = swapObjectGUIDBytes(value)
}
id, err := uuid.FromBytes(value)
if err != nil {
return "", fmt.Errorf("error parsing UUID from '%s' attribute bytes: %w", attribute, err)
}
return id.String(), nil
}
return e.GetEqualFoldAttributeValue(attrType), nil
return e.GetEqualFoldAttributeValue(attribute), nil
}
func (i *LDAP) createUserModelFromLDAP(e *ldap.Entry) *libregraph.User {
@@ -876,7 +912,7 @@ func (i *LDAP) createUserModelFromLDAP(e *ldap.Entry) *libregraph.User {
}
return user
}
i.logger.Warn().Str("dn", e.DN).Msg("Invalid User. Missing username or id attribute")
i.logger.Warn().Str("dn", e.DN).Str("id", id).Str("username", opsan).Msg("Invalid User. Missing username or id attribute")
return nil
}

View File

@@ -455,7 +455,7 @@ func (i *LDAP) groupToLDAPAttrValues(group libregraph.Group) (map[string][]strin
}
func (i *LDAP) getLDAPGroupByID(id string, requestMembers bool) (*ldap.Entry, error) {
idString, err := filterEscapeUUID(i.groupIDisOctetString, id)
idString, err := filterEscapeAttribute(i.groupAttributeMap.id, i.groupIDisOctetString, id)
if err != nil {
return nil, fmt.Errorf("invalid group id: %w", err)
}
@@ -464,7 +464,7 @@ func (i *LDAP) getLDAPGroupByID(id string, requestMembers bool) (*ldap.Entry, er
}
func (i *LDAP) getLDAPGroupByNameOrID(nameOrID string, requestMembers bool) (*ldap.Entry, error) {
idString, err := filterEscapeUUID(i.groupIDisOctetString, nameOrID)
idString, err := filterEscapeAttribute(i.groupAttributeMap.id, i.groupIDisOctetString, nameOrID)
// err != nil just means that this is not a UUID, so we can skip the UUID filter part
// and just filter by name
filter := ""

View File

@@ -2,6 +2,7 @@ package identity
import (
"context"
"encoding/base64"
"errors"
"fmt"
"net/url"
@@ -9,10 +10,10 @@ import (
"github.com/CiscoM31/godata"
"github.com/go-ldap/ldap/v3"
libregraph "github.com/opencloud-eu/libre-graph-api-go"
"github.com/opencloud-eu/opencloud/pkg/log"
"github.com/opencloud-eu/opencloud/services/graph/pkg/config"
"github.com/opencloud-eu/opencloud/services/graph/pkg/identity/mocks"
libregraph "github.com/opencloud-eu/libre-graph-api-go"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
)
@@ -63,6 +64,33 @@ var userEntry = ldap.NewEntry("uid=user",
"usertypeattribute": {"Member"},
})
var lconfigAD = config.LDAP{
UserBaseDN: "ou=users,dc=test",
UserObjectClass: "user",
UserSearchScope: "sub",
UserFilter: "",
UserDisplayNameAttribute: "displayname",
UserIDAttribute: "objectGUID",
UserIDIsOctetString: true,
UserEmailAttribute: "mail",
UserNameAttribute: "uid",
UserEnabledAttribute: "userEnabledAttribute",
UserTypeAttribute: "userTypeAttribute",
LdapDisabledUsersGroupDN: disableUsersGroup,
DisableUserMechanism: "attribute",
GroupBaseDN: "ou=groups,dc=test",
GroupObjectClass: "group",
GroupSearchScope: "sub",
GroupFilter: "",
GroupNameAttribute: "cn",
GroupMemberAttribute: "member",
GroupIDAttribute: "objectGUID",
GroupIDIsOctetString: true,
WriteEnabled: true,
}
var invalidUserEntry = ldap.NewEntry("uid=user",
map[string][]string{
"uid": {"invalid"},
@@ -260,6 +288,50 @@ func TestGetUser(t *testing.T) {
assert.ErrorContains(t, err, "itemNotFound:")
}
func TestGetUserAD(t *testing.T) {
// we have to simulate ldap / AD returning a binary encoded objectguid
byteID, err := base64.StdEncoding.DecodeString("js8n0m6YBUqIYK8ZMFYnig==")
if err != nil {
t.Error(err)
}
userEntryAD := ldap.NewEntry("uid=user",
map[string][]string{
"uid": {"user"},
"displayname": {"DisplayName"},
"mail": {"user@example"},
"objectguid": {string(byteID)}, // ugly but works
"sn": {"surname"},
"givenname": {"givenName"},
"userenabledattribute": {"TRUE"},
"usertypeattribute": {"Member"},
})
// Mock a valid Search Result
lm := &mocks.Client{}
lm.On("Search", mock.Anything).
Return(
&ldap.SearchResult{
Entries: []*ldap.Entry{userEntryAD},
},
nil)
odataReqDefault, err := godata.ParseRequest(context.Background(), "",
url.Values{})
if err != nil {
t.Errorf("Expected success got '%s'", err.Error())
}
b, _ := getMockedBackend(lm, lconfigAD, &logger)
u, err := b.GetUser(context.Background(), "user", odataReqDefault)
if err != nil {
t.Errorf("Expected GetUser to succeed. Got %s", err.Error())
} else if *u.Id != "d227cf8e-986e-4a05-8860-af193056278a" { // this checks if we decoded the objectguid correctly
t.Errorf("Expected GetUser to return a valid user")
}
}
func TestGetUsers(t *testing.T) {
// Mock a Sizelimit Error
lm := &mocks.Client{}

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"POT-Creation-Date: 2025-11-26 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-01 00:02+0000\n"
"POT-Creation-Date: 2025-11-21 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Lulufox, 2025\n"
"Language-Team: Russian (https://app.transifex.com/opencloud-eu/teams/204053/ru/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"POT-Creation-Date: 2025-11-26 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: LinkinWires <darkinsonic13@gmail.com>, 2025\n"
"Language-Team: Ukrainian (https://app.transifex.com/opencloud-eu/teams/204053/uk/)\n"

services/groups/README.md Normal file
View File

@@ -0,0 +1,29 @@
# Groups
The `groups` service provides the CS3 Groups API for OpenCloud. It is responsible for managing group information and memberships within the OpenCloud instance.
This service implements the CS3 identity group provider interface, allowing other services to query and manage groups. It works as a backend provider for the `graph` service when using the CS3 backend mode.
## Backend Integration
The groups service can work with different storage backends:
- LDAP integration through the graph service
- Direct CS3 API implementation
When using the `graph` service with the CS3 backend (`GRAPH_IDENTITY_BACKEND=cs3`), the graph service queries group information through this service.
## API
The service provides CS3 gRPC APIs for:
- Listing groups
- Getting group information
- Finding groups by name or ID
- Managing group memberships
## Usage
The groups service is used internally by other OpenCloud services only; it is not accessed directly by clients. The `frontend` and `ocs` services translate HTTP API requests into CS3 API calls to this service.
## Scalability
Since the groups service queries backend systems (like LDAP through the configured identity backend), it can be scaled horizontally without additional configuration when using stateless backends.

View File

@@ -12,7 +12,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-02 00:02+0000\n"
"POT-Creation-Date: 2025-11-23 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: miguel tapias, 2025\n"
"Language-Team: Spanish (https://app.transifex.com/opencloud-eu/teams/204053/es/)\n"

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-07 00:02+0000\n"
"POT-Creation-Date: 2025-11-24 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
@@ -84,7 +84,7 @@ msgstr "ScienceMesh: {InitiatorName} haluaa tehdä yhteistyötä kanssasi"
#. ShareExpired email template, Subject field (resolves directly)
#: pkg/email/templates.go:30
msgid "Share to '{ShareFolder}' expired at {ExpiredAt}"
msgstr ""
msgstr "Jako kansioon '{ShareFolder}' vanheni {ExpiredAt}"
#. MembershipExpired email template, resolves via {{ .MessageBody }}
#: pkg/email/templates.go:76

View File

@@ -24,27 +24,28 @@ type Config struct {
GRPCClientTLS *shared.GRPCClientTLS `yaml:"grpc_client_tls"`
GrpcClient client.Client `yaml:"-"`
RoleQuotas map[string]uint64 `yaml:"role_quotas"`
Policies []Policy `yaml:"policies"`
AdditionalPolicies []Policy `yaml:"additional_policies"`
OIDC OIDC `yaml:"oidc"`
ServiceAccount ServiceAccount `yaml:"service_account"`
RoleAssignment RoleAssignment `yaml:"role_assignment"`
PolicySelector *PolicySelector `yaml:"policy_selector"`
PreSignedURL PreSignedURL `yaml:"pre_signed_url"`
AccountBackend string `yaml:"account_backend" env:"PROXY_ACCOUNT_BACKEND_TYPE" desc:"Account backend the PROXY service should use. Currently only 'cs3' is possible here." introductionVersion:"1.0.0"`
UserOIDCClaim string `yaml:"user_oidc_claim" env:"PROXY_USER_OIDC_CLAIM" desc:"The name of an OpenID Connect claim that is used for resolving users with the account backend. The value of the claim must hold a per user unique, stable and non re-assignable identifier. The availability of claims depends on your Identity Provider. There are common claims available for most Identity providers like 'email' or 'preferred_username' but you can also add your own claim." introductionVersion:"1.0.0"`
UserCS3Claim string `yaml:"user_cs3_claim" env:"PROXY_USER_CS3_CLAIM" desc:"The name of a CS3 user attribute (claim) that should be mapped to the 'user_oidc_claim'. Supported values are 'username', 'mail' and 'userid'." introductionVersion:"1.0.0"`
MachineAuthAPIKey string `yaml:"machine_auth_api_key" env:"OC_MACHINE_AUTH_API_KEY;PROXY_MACHINE_AUTH_API_KEY" desc:"Machine auth API key used to validate internal requests necessary to access resources from other services." introductionVersion:"1.0.0" mask:"password"`
AutoprovisionAccounts bool `yaml:"auto_provision_accounts" env:"PROXY_AUTOPROVISION_ACCOUNTS" desc:"Set this to 'true' to automatically provision users that do not yet exist in the users service on-demand upon first sign-in. To use this, a write-enabled libregraph user backend needs to be set up and running." introductionVersion:"1.0.0"`
AutoProvisionClaims AutoProvisionClaims `yaml:"auto_provision_claims"`
EnableBasicAuth bool `yaml:"enable_basic_auth" env:"PROXY_ENABLE_BASIC_AUTH" desc:"Set this to true to enable 'basic authentication' (username/password)." introductionVersion:"1.0.0"`
InsecureBackends bool `yaml:"insecure_backends" env:"PROXY_INSECURE_BACKENDS" desc:"Disable TLS certificate validation for all HTTP backend connections." introductionVersion:"1.0.0"`
BackendHTTPSCACert string `yaml:"backend_https_cacert" env:"PROXY_HTTPS_CACERT" desc:"Path/File for the root CA certificate used to validate the servers TLS certificate for https enabled backend services." introductionVersion:"1.0.0"`
AuthMiddleware AuthMiddleware `yaml:"auth_middleware"`
PoliciesMiddleware PoliciesMiddleware `yaml:"policies_middleware"`
CSPConfigFileLocation string `yaml:"csp_config_file_location" env:"PROXY_CSP_CONFIG_FILE_LOCATION" desc:"The location of the CSP configuration file." introductionVersion:"1.0.0"`
Events Events `yaml:"events"`
RoleQuotas map[string]uint64 `yaml:"role_quotas"`
Policies []Policy `yaml:"policies"`
AdditionalPolicies []Policy `yaml:"additional_policies"`
OIDC OIDC `yaml:"oidc"`
ServiceAccount ServiceAccount `yaml:"service_account"`
RoleAssignment RoleAssignment `yaml:"role_assignment"`
PolicySelector *PolicySelector `yaml:"policy_selector"`
PreSignedURL PreSignedURL `yaml:"pre_signed_url"`
AccountBackend string `yaml:"account_backend" env:"PROXY_ACCOUNT_BACKEND_TYPE" desc:"Account backend the PROXY service should use. Currently only 'cs3' is possible here." introductionVersion:"1.0.0"`
UserOIDCClaim string `yaml:"user_oidc_claim" env:"PROXY_USER_OIDC_CLAIM" desc:"The name of an OpenID Connect claim that is used for resolving users with the account backend. The value of the claim must hold a per user unique, stable and non re-assignable identifier. The availability of claims depends on your Identity Provider. There are common claims available for most Identity providers like 'email' or 'preferred_username' but you can also add your own claim." introductionVersion:"1.0.0"`
UserCS3Claim string `yaml:"user_cs3_claim" env:"PROXY_USER_CS3_CLAIM" desc:"The name of a CS3 user attribute (claim) that should be mapped to the 'user_oidc_claim'. Supported values are 'username', 'mail' and 'userid'." introductionVersion:"1.0.0"`
MachineAuthAPIKey string `yaml:"machine_auth_api_key" env:"OC_MACHINE_AUTH_API_KEY;PROXY_MACHINE_AUTH_API_KEY" desc:"Machine auth API key used to validate internal requests necessary to access resources from other services." introductionVersion:"1.0.0" mask:"password"`
AutoprovisionAccounts bool `yaml:"auto_provision_accounts" env:"PROXY_AUTOPROVISION_ACCOUNTS" desc:"Set this to 'true' to automatically provision users that do not yet exist in the users service on-demand upon first sign-in. To use this, a write-enabled libregraph user backend needs to be set up and running." introductionVersion:"1.0.0"`
AutoProvisionClaims AutoProvisionClaims `yaml:"auto_provision_claims"`
EnableBasicAuth bool `yaml:"enable_basic_auth" env:"PROXY_ENABLE_BASIC_AUTH" desc:"Set this to true to enable 'basic authentication' (username/password)." introductionVersion:"1.0.0"`
InsecureBackends bool `yaml:"insecure_backends" env:"PROXY_INSECURE_BACKENDS" desc:"Disable TLS certificate validation for all HTTP backend connections." introductionVersion:"1.0.0"`
BackendHTTPSCACert string `yaml:"backend_https_cacert" env:"PROXY_HTTPS_CACERT" desc:"Path/File for the root CA certificate used to validate the servers TLS certificate for https enabled backend services." introductionVersion:"1.0.0"`
AuthMiddleware AuthMiddleware `yaml:"auth_middleware"`
PoliciesMiddleware PoliciesMiddleware `yaml:"policies_middleware"`
CSPConfigFileLocation string `yaml:"csp_config_file_location" env:"PROXY_CSP_CONFIG_FILE_LOCATION" desc:"The location of the CSP configuration file." introductionVersion:"1.0.0"`
CSPConfigFileOverrideLocation string `yaml:"csp_config_file_override_location" env:"PROXY_CSP_CONFIG_FILE_OVERRIDE_LOCATION" desc:"The location of the CSP configuration file override." introductionVersion:"%%NEXT%%"`
Events Events `yaml:"events"`
Context context.Context `json:"-" yaml:"-"`
}

View File

@@ -5,6 +5,7 @@ directives:
- '''self'''
- 'blob:'
- 'https://raw.githubusercontent.com/opencloud-eu/awesome-apps/'
- 'https://update.opencloud.eu/'
default-src:
- '''none'''
font-src:

View File
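
With the new `PROXY_CSP_CONFIG_FILE_LOCATION` merge behavior, a custom CSP file is deep-merged into the preset above (lists are concatenated without duplicates), while `PROXY_CSP_CONFIG_FILE_OVERRIDE_LOCATION` replaces the preset entirely. A hypothetical custom file, using the same values as the middleware test below:

```yaml
# Hypothetical custom CSP file, passed via PROXY_CSP_CONFIG_FILE_LOCATION.
# These sources are added on top of the preset directives, not replacing them.
directives:
  frame-src:
    - 'https://some.custom.domain/'
  img-src:
    - '''self'''
    - 'data:'
```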

@@ -92,9 +92,10 @@ func DefaultConfig() *config.Config {
DisplayName: "name",
Groups: "groups",
},
EnableBasicAuth: false,
InsecureBackends: false,
CSPConfigFileLocation: "",
EnableBasicAuth: false,
InsecureBackends: false,
CSPConfigFileLocation: "",
CSPConfigFileOverrideLocation: "",
Events: config.Events{
Endpoint: "127.0.0.1:9233",
Cluster: "opencloud-cluster",

View File

@@ -3,30 +3,48 @@ package middleware
import (
"net/http"
"os"
"reflect"
gofig "github.com/gookit/config/v2"
"github.com/gookit/config/v2/yaml"
"github.com/opencloud-eu/opencloud/services/proxy/pkg/config"
"github.com/unrolled/secure"
"github.com/unrolled/secure/cspbuilder"
yamlv3 "gopkg.in/yaml.v3"
)
// LoadCSPConfig loads CSP header configuration from a yaml file.
func LoadCSPConfig(proxyCfg *config.Config) (*config.CSP, error) {
yamlContent, err := loadCSPYaml(proxyCfg)
yamlContent, customYamlContent, err := loadCSPYaml(proxyCfg)
if err != nil {
return nil, err
}
return loadCSPConfig(yamlContent)
return loadCSPConfig(yamlContent, customYamlContent)
}
// LoadCSPConfig loads CSP header configuration from a yaml file.
func loadCSPConfig(yamlContent []byte) (*config.CSP, error) {
func loadCSPConfig(presetYamlContent, customYamlContent []byte) (*config.CSP, error) {
// substitute env vars and load to struct
gofig.WithOptions(gofig.ParseEnv)
gofig.AddDriver(yaml.Driver)
err := gofig.LoadSources("yaml", yamlContent)
presetMap := map[string]interface{}{}
err := yamlv3.Unmarshal(presetYamlContent, &presetMap)
if err != nil {
return nil, err
}
customMap := map[string]interface{}{}
err = yamlv3.Unmarshal(customYamlContent, &customMap)
if err != nil {
return nil, err
}
mergedMap := deepMerge(presetMap, customMap)
mergedYamlContent, err := yamlv3.Marshal(mergedMap)
if err != nil {
return nil, err
}
err = gofig.LoadSources("yaml", mergedYamlContent)
if err != nil {
return nil, err
}
@@ -41,11 +59,78 @@ func loadCSPConfig(yamlContent []byte) (*config.CSP, error) {
return &cspConfig, nil
}
func loadCSPYaml(proxyCfg *config.Config) ([]byte, error) {
if proxyCfg.CSPConfigFileLocation == "" {
return []byte(config.DefaultCSPConfig), nil
// deepMerge recursively merges map2 into map1.
// - nested maps are merged recursively
// - slices are concatenated, preserving order and avoiding duplicates
// - scalar or type-mismatched values from map2 overwrite map1
func deepMerge(map1, map2 map[string]interface{}) map[string]interface{} {
if map1 == nil {
out := make(map[string]interface{}, len(map2))
for k, v := range map2 {
out[k] = v
}
return out
}
return os.ReadFile(proxyCfg.CSPConfigFileLocation)
for k, v2 := range map2 {
if v1, ok := map1[k]; ok {
// both maps -> recurse
if m1, ok1 := v1.(map[string]interface{}); ok1 {
if m2, ok2 := v2.(map[string]interface{}); ok2 {
map1[k] = deepMerge(m1, m2)
continue
}
}
// both slices -> merge unique
if s1, ok1 := v1.([]interface{}); ok1 {
if s2, ok2 := v2.([]interface{}); ok2 {
merged := append([]interface{}{}, s1...)
for _, item := range s2 {
if !sliceContains(merged, item) {
merged = append(merged, item)
}
}
map1[k] = merged
continue
}
// s1 is slice, v2 single -> append if missing
if !sliceContains(s1, v2) {
map1[k] = append(s1, v2)
}
continue
}
// default: overwrite
map1[k] = v2
} else {
// new key -> just set
map1[k] = v2
}
}
return map1
}
func sliceContains(slice []interface{}, val interface{}) bool {
for _, v := range slice {
if reflect.DeepEqual(v, val) {
return true
}
}
return false
}
func loadCSPYaml(proxyCfg *config.Config) ([]byte, []byte, error) {
if proxyCfg.CSPConfigFileOverrideLocation != "" {
overrideCSPYaml, err := os.ReadFile(proxyCfg.CSPConfigFileOverrideLocation)
return overrideCSPYaml, []byte{}, err
}
if proxyCfg.CSPConfigFileLocation == "" {
return []byte(config.DefaultCSPConfig), nil, nil
}
customCSPYaml, err := os.ReadFile(proxyCfg.CSPConfigFileLocation)
return []byte(config.DefaultCSPConfig), customCSPYaml, err
}
// Security is a middleware to apply security relevant http headers like CSP.

View File

@@ -4,11 +4,12 @@ import (
"testing"
"gotest.tools/v3/assert"
"gotest.tools/v3/assert/cmp"
)
func TestLoadCSPConfig(t *testing.T) {
// setup test env
yaml := `
presetYaml := `
directives:
frame-src:
- '''self'''
@@ -17,12 +18,23 @@ directives:
- 'https://${COLLABORA_DOMAIN|collabora.opencloud.test}/'
`
config, err := loadCSPConfig([]byte(yaml))
customYaml := `
directives:
img-src:
- '''self'''
- 'data:'
frame-src:
- 'https://some.custom.domain/'
`
config, err := loadCSPConfig([]byte(presetYaml), []byte(customYaml))
if err != nil {
t.Error(err)
}
assert.Equal(t, config.Directives["frame-src"][0], "'self'")
assert.Equal(t, config.Directives["frame-src"][1], "https://embed.diagrams.net/")
assert.Equal(t, config.Directives["frame-src"][2], "https://onlyoffice.opencloud.test/")
assert.Equal(t, config.Directives["frame-src"][3], "https://collabora.opencloud.test/")
assert.Assert(t, cmp.Contains(config.Directives["frame-src"], "'self'"))
assert.Assert(t, cmp.Contains(config.Directives["frame-src"], "https://embed.diagrams.net/"))
assert.Assert(t, cmp.Contains(config.Directives["frame-src"], "https://onlyoffice.opencloud.test/"))
assert.Assert(t, cmp.Contains(config.Directives["frame-src"], "https://collabora.opencloud.test/"))
assert.Assert(t, cmp.Contains(config.Directives["img-src"], "'self'"))
assert.Assert(t, cmp.Contains(config.Directives["img-src"], "data:"))
}

View File

@@ -5,6 +5,7 @@ import (
"crypto/tls"
"fmt"
"net/http"
"os"
"os/signal"
"github.com/opencloud-eu/reva/v2/pkg/events/raw"
@@ -84,30 +85,37 @@ func Server(cfg *config.Config) *cli.Command {
eng = bleve.NewBackend(idx, bleveQuery.DefaultCreator, logger)
case "open-search":
client, err := opensearchgoAPI.NewClient(opensearchgoAPI.Config{
Client: opensearchgo.Config{
Addresses: cfg.Engine.OpenSearch.Client.Addresses,
Username: cfg.Engine.OpenSearch.Client.Username,
Password: cfg.Engine.OpenSearch.Client.Password,
Header: cfg.Engine.OpenSearch.Client.Header,
CACert: cfg.Engine.OpenSearch.Client.CACert,
RetryOnStatus: cfg.Engine.OpenSearch.Client.RetryOnStatus,
DisableRetry: cfg.Engine.OpenSearch.Client.DisableRetry,
EnableRetryOnTimeout: cfg.Engine.OpenSearch.Client.EnableRetryOnTimeout,
MaxRetries: cfg.Engine.OpenSearch.Client.MaxRetries,
CompressRequestBody: cfg.Engine.OpenSearch.Client.CompressRequestBody,
DiscoverNodesOnStart: cfg.Engine.OpenSearch.Client.DiscoverNodesOnStart,
DiscoverNodesInterval: cfg.Engine.OpenSearch.Client.DiscoverNodesInterval,
EnableMetrics: cfg.Engine.OpenSearch.Client.EnableMetrics,
EnableDebugLogger: cfg.Engine.OpenSearch.Client.EnableDebugLogger,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: cfg.Engine.OpenSearch.Client.Insecure,
},
clientConfig := opensearchgo.Config{
Addresses: cfg.Engine.OpenSearch.Client.Addresses,
Username: cfg.Engine.OpenSearch.Client.Username,
Password: cfg.Engine.OpenSearch.Client.Password,
Header: cfg.Engine.OpenSearch.Client.Header,
RetryOnStatus: cfg.Engine.OpenSearch.Client.RetryOnStatus,
DisableRetry: cfg.Engine.OpenSearch.Client.DisableRetry,
EnableRetryOnTimeout: cfg.Engine.OpenSearch.Client.EnableRetryOnTimeout,
MaxRetries: cfg.Engine.OpenSearch.Client.MaxRetries,
CompressRequestBody: cfg.Engine.OpenSearch.Client.CompressRequestBody,
DiscoverNodesOnStart: cfg.Engine.OpenSearch.Client.DiscoverNodesOnStart,
DiscoverNodesInterval: cfg.Engine.OpenSearch.Client.DiscoverNodesInterval,
EnableMetrics: cfg.Engine.OpenSearch.Client.EnableMetrics,
EnableDebugLogger: cfg.Engine.OpenSearch.Client.EnableDebugLogger,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: cfg.Engine.OpenSearch.Client.Insecure,
},
},
})
}
if cfg.Engine.OpenSearch.Client.CACert != "" {
certBytes, err := os.ReadFile(cfg.Engine.OpenSearch.Client.CACert)
if err != nil {
return fmt.Errorf("failed to read CA cert: %w", err)
}
clientConfig.CACert = certBytes
}
client, err := opensearchgoAPI.NewClient(opensearchgoAPI.Config{Client: clientConfig})
if err != nil {
return fmt.Errorf("failed to create OpenSearch client: %w", err)
}

View File

@@ -34,7 +34,7 @@ type EngineOpenSearchClient struct {
Username string `yaml:"username" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_USERNAME" desc:"Username for HTTP Basic Authentication." introductionVersion:"%%NEXT%%"`
Password string `yaml:"password" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_PASSWORD" desc:"Password for HTTP Basic Authentication." introductionVersion:"%%NEXT%%"`
Header http.Header `yaml:"header" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_HEADER" desc:"HTTP headers to include in requests." introductionVersion:"%%NEXT%%"`
CACert []byte `yaml:"ca_cert" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_CA_CERT" desc:"CA certificate for TLS connections." introductionVersion:"%%NEXT%%"`
CACert string `yaml:"ca_cert" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_CA_CERT" desc:"Path/File name for the root CA certificate (in PEM format) used to validate TLS server certificates of the opensearch server." introductionVersion:"%%NEXT%%"`
RetryOnStatus []int `yaml:"retry_on_status" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_RETRY_ON_STATUS" desc:"HTTP status codes that trigger a retry." introductionVersion:"%%NEXT%%"`
DisableRetry bool `yaml:"disable_retry" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_DISABLE_RETRY" desc:"Disable retries on errors." introductionVersion:"%%NEXT%%"`
EnableRetryOnTimeout bool `yaml:"enable_retry_on_timeout" env:"SEARCH_ENGINE_OPEN_SEARCH_CLIENT_ENABLE_RETRY_ON_TIMEOUT" desc:"Enable retries on timeout." introductionVersion:"%%NEXT%%"`

View File

@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-07 00:02+0000\n"
"POT-Creation-Date: 2025-11-24 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"
@@ -103,7 +103,7 @@ msgstr "Ilmoita, kun avaruuden jäsenyys on vanhentunut"
#. name of the notification option 'Space Unshared'
#: pkg/store/defaults/templates.go:24
msgid "Removed as space member"
msgstr ""
msgstr "Poistettu avaruuden jäsenyydestä"
#. description of the notification option 'Email Interval'
#: pkg/store/defaults/templates.go:46


@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"POT-Creation-Date: 2025-11-26 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: LinkinWires <darkinsonic13@gmail.com>, 2025\n"
"Language-Team: Ukrainian (https://app.transifex.com/opencloud-eu/teams/204053/uk/)\n"


@@ -0,0 +1,42 @@
# Sharing
The `sharing` service provides the CS3 Sharing API for OpenCloud. It manages user shares and public link shares, implementing the core sharing functionality.
## Overview
The sharing service handles:
- User-to-user shares (share a file or folder with another user)
- Public link shares (share via a public URL)
- Share permissions and roles
- Share lifecycle management (create, update, delete)
This service works in conjunction with the storage providers (`storage-shares` and `storage-publiclink`) to persist and manage share information.
## Integration
The sharing service integrates with:
- `frontend` and `ocs` - Provide HTTP APIs that translate to CS3 sharing calls
- `storage-shares` - Stores and manages received shares
- `storage-publiclink` - Manages public link shares
- `graph` - Provides LibreGraph API for sharing with roles
## Share Types
The service supports different types of shares:
- **User shares** - Share resources with specific users
- **Group shares** - Share resources with groups
- **Public link shares** - Create public URLs for sharing
- **Federated shares** - Share with users on other OpenCloud instances (via `ocm` service)
## Configuration
Share behavior can be configured via environment variables:
- Password enforcement for public shares
- Auto-acceptance of shares
- Share permissions and restrictions
See the `frontend` service README for more details on share-related configuration options.
## Scalability
The sharing service depends on the configured storage backends for share metadata. Scalability characteristics depend on the chosen storage backend configuration.


@@ -0,0 +1,37 @@
# Storage PublicLink
The `storage-publiclink` service provides storage backend functionality for public link shares in OpenCloud. It implements the CS3 storage provider interface specifically for working with public link shared resources.
## Overview
This service is part of the storage services family and is responsible for:
- Providing access to publicly shared resources
- Handling anonymous access to shared content
## Integration
The storage-publiclink service integrates with:
- `sharing` service - Manages and persists public link shares
- `frontend` and `ocdav` - Provide HTTP/WebDAV access to public links
- Storage drivers - Accesses the actual file content
## Storage Registry
The service is registered in the gateway's storage registry with:
- Provider ID: `7993447f-687f-490d-875c-ac95e89a62a4`
- Mount point: `/public`
- Space types: `grant` and `mountpoint`
See the `gateway` README for more details on storage registry configuration.
## Access Control
Public link shares can be configured with:
- Password protection
- Expiration dates
- Read-only or read-write permissions
- Download limits
## Scalability
The storage-publiclink service can be scaled horizontally.


@@ -0,0 +1,33 @@
# Storage Shares
The `storage-shares` service provides storage backend functionality for user and group shares in OpenCloud. It implements the CS3 storage provider interface specifically for working with shared resources.
## Overview
This service is part of the storage services family and is responsible for:
- Providing a virtual view of received shares
- Handling access to resources shared by other users
## Integration
The storage-shares service integrates with:
- `sharing` service - Manages and persists shares
- `storage-users` service - Accesses the underlying file content
- `frontend` and `ocdav` - Provide HTTP/WebDAV access to shares
## Virtual Shares Folder
The service provides a virtual "Shares" folder for each user where all received shares are mounted. This allows users to access all files and folders that have been shared with them in a centralized location.
## Storage Registry
The service is registered in the gateway's storage registry with:
- Provider ID: `a0ca6a90-a365-4782-871e-d44447bbc668`
- Mount point: `/users/{{.CurrentUser.Id.OpaqueId}}/Shares`
- Space types: `virtual`, `grant`, and `mountpoint`
See the `gateway` README for more details on storage registry configuration.
## Scalability
The storage-shares service can be scaled horizontally.
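The mount point above is a Go template that is expanded per user. A minimal standalone sketch of how such a template resolves — the `User`/`UserID` structs and the `renderMount` helper here are hypothetical stand-ins for the CS3 user identity types, not the actual registry code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Hypothetical stand-ins for the CS3 user identity types.
type UserID struct{ OpaqueId string }
type User struct{ Id UserID }

// renderMount expands the storage registry mount point template for a user.
func renderMount(opaqueID string) (string, error) {
	tmpl, err := template.New("mount").
		Parse("/users/{{.CurrentUser.Id.OpaqueId}}/Shares")
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	data := struct{ CurrentUser User }{CurrentUser: User{Id: UserID{OpaqueId: opaqueID}}}
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	mount, _ := renderMount("einstein-uuid")
	fmt.Println(mount) // /users/einstein-uuid/Shares
}
```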


@@ -11,7 +11,7 @@ msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2025-11-06 00:02+0000\n"
"POT-Creation-Date: 2025-11-26 00:02+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: Jiri Grönroos <jiri.gronroos@iki.fi>, 2025\n"
"Language-Team: Finnish (https://app.transifex.com/opencloud-eu/teams/204053/fi/)\n"

services/users/README.md

@@ -0,0 +1,28 @@
# Users
The `users` service provides the CS3 Users API for OpenCloud. It is responsible for managing user information and authentication within the OpenCloud instance.
This service implements the CS3 identity user provider interface, allowing other services to query and manage user accounts. It works as a backend provider for the `graph` service when using the CS3 backend mode.
## Backend Integration
The users service can work with different storage backends:
- LDAP integration through the graph service
- Direct CS3 API implementation
When using the `graph` service with the CS3 backend (`GRAPH_IDENTITY_BACKEND=cs3`), the graph service queries user information through this service.
## API
The service provides CS3 gRPC APIs for:
- Listing users
- Getting user information
- Finding users by username, email, or ID
## Usage
The users service is only used internally by other OpenCloud services and is not accessed directly by clients. The `frontend`, `ocs`, and `graph` services translate HTTP API requests into CS3 API calls to this service.
## Scalability
Since the users service queries backend systems (like LDAP through the configured identity backend), it can be scaled horizontally without additional configuration when using stateless backends.


@@ -1,6 +1,6 @@
SHELL := bash
NAME := web
WEB_ASSETS_VERSION = v4.2.1-alpha.1
WEB_ASSETS_VERSION = v4.2.1-rc.1
WEB_ASSETS_BRANCH = main
ifneq (, $(shell command -v go 2> /dev/null)) # suppress `command not found warnings` for non go targets in CI


@@ -2701,6 +2701,28 @@ class FeatureContext extends BehatVariablesContext {
}
}
/**
* @AfterScenario
*
* @return void
*
* @throws Exception|GuzzleException
*/
public function cleanDataAfterTests(): void {
if (!OcHelper::isTestingOnReva() && !OcHelper::isUsingPreparedLdapUsers()) {
$this->spacesContext->deleteAllProjectSpaces();
}
if (OcHelper::isTestingOnReva()) {
OcHelper::deleteRevaUserData($this->getCreatedUsers());
}
if ($this->isTestingWithLdap()) {
$this->deleteLdapUsersAndGroups();
}
$this->cleanupDatabaseUsers();
$this->cleanupDatabaseGroups();
}
/**
* @BeforeScenario @temporary_storage_on_server
*


@@ -1893,24 +1893,6 @@ trait Provisioning {
);
}
/**
* @AfterScenario
*
* @return void
* @throws Exception
*/
public function afterScenario(): void {
if (OcHelper::isTestingOnReva()) {
OcHelper::deleteRevaUserData($this->getCreatedUsers());
}
if ($this->isTestingWithLdap()) {
$this->deleteLdapUsersAndGroups();
}
$this->cleanupDatabaseUsers();
$this->cleanupDatabaseGroups();
}
/**
*
* @return void


@@ -478,20 +478,6 @@ class SpacesContext implements Context {
$this->archiverContext = BehatHelper::getContext($scope, $environment, 'ArchiverContext');
}
/**
* @AfterScenario
*
* @return void
*
* @throws Exception|GuzzleException
*/
public function cleanDataAfterTests(): void {
if (OcHelper::isTestingOnReva() || OcHelper::isUsingPreparedLdapUsers()) {
return;
}
$this->deleteAllProjectSpaces();
}
/**
* the admin user first disables and then deletes spaces
*


@@ -219,7 +219,7 @@ Feature: download file
And the following headers should be set
| header | value |
| Content-Disposition | attachment; filename*=UTF-8''<encoded-file-name>; filename="<file-name>" |
| Content-Security-Policy | child-src 'self'; connect-src 'self' blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; default-src 'none'; font-src 'self'; frame-ancestors 'self'; frame-src 'self' blob: https://embed.diagrams.net/; img-src 'self' data: blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; manifest-src 'self'; media-src 'self'; object-src 'self' blob:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' |
| Content-Security-Policy | child-src 'self'; connect-src 'self' blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/ https://update.opencloud.eu/; default-src 'none'; font-src 'self'; frame-ancestors 'self'; frame-src 'self' blob: https://embed.diagrams.net/; img-src 'self' data: blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; manifest-src 'self'; media-src 'self'; object-src 'self' blob:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' |
| X-Content-Type-Options | nosniff |
| X-Frame-Options | SAMEORIGIN |
| X-Permitted-Cross-Domain-Policies | none |
@@ -247,7 +247,7 @@ Feature: download file
And the following headers should be set
| header | value |
| Content-Disposition | attachment; filename*=UTF-8''%22quote%22double%22.txt; filename=""quote"double".txt" |
| Content-Security-Policy | child-src 'self'; connect-src 'self' blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; default-src 'none'; font-src 'self'; frame-ancestors 'self'; frame-src 'self' blob: https://embed.diagrams.net/; img-src 'self' data: blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; manifest-src 'self'; media-src 'self'; object-src 'self' blob:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' |
| Content-Security-Policy | child-src 'self'; connect-src 'self' blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/ https://update.opencloud.eu/; default-src 'none'; font-src 'self'; frame-ancestors 'self'; frame-src 'self' blob: https://embed.diagrams.net/; img-src 'self' data: blob: https://raw.githubusercontent.com/opencloud-eu/awesome-apps/; manifest-src 'self'; media-src 'self'; object-src 'self' blob:; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' |
| X-Content-Type-Options | nosniff |
| X-Frame-Options | SAMEORIGIN |
| X-Permitted-Cross-Domain-Policies | none |


@@ -82,8 +82,8 @@ func scoreSortFunc() func(i, j *search.DocumentMatch) int {
func getFusionExplAt(hit *search.DocumentMatch, i int, value float64, message string) *search.Explanation {
return &search.Explanation{
Value: value,
Message: message,
Value: value,
Message: message,
Children: []*search.Explanation{hit.Expl.Children[i]},
}
}


@@ -388,3 +388,11 @@ type SynonymIndex interface {
// IndexSynonym indexes a synonym definition, with the specified id and belonging to the specified collection.
IndexSynonym(id string, collection string, definition *SynonymDefinition) error
}
type InsightsIndex interface {
Index
// TermFrequencies returns the tokens ordered by frequencies for the field index.
TermFrequencies(field string, limit int, descending bool) ([]index.TermFreq, error)
// CentroidCardinalities returns the centroids (clusters) from IVF indexes ordered by data density.
CentroidCardinalities(field string, limit int, descending bool) ([]index.CentroidCardinality, error)
}


@@ -23,6 +23,7 @@ import (
"path/filepath"
"reflect"
"sort"
"strings"
"sync"
"sync/atomic"
@@ -1234,3 +1235,61 @@ func (is *IndexSnapshot) MergeUpdateFieldsInfo(updatedFields map[string]*index.U
}
}
}
// TermFrequencies returns the top N terms ordered by the frequencies
// for a given field across all segments in the index snapshot.
func (is *IndexSnapshot) TermFrequencies(field string, limit int, descending bool) (
termFreqs []index.TermFreq, err error) {
if len(is.segment) == 0 {
return nil, nil
}
if limit <= 0 {
return nil, fmt.Errorf("limit must be positive")
}
// Use FieldDict which aggregates term frequencies across all segments
fieldDict, err := is.FieldDict(field)
if err != nil {
return nil, fmt.Errorf("failed to get field dictionary for field %s: %v", field, err)
}
defer fieldDict.Close()
// Preallocate slice with capacity equal to the number of unique terms
// in the field dictionary
termFreqs = make([]index.TermFreq, 0, fieldDict.Cardinality())
// Iterate through all terms using FieldDict
for {
dictEntry, err := fieldDict.Next()
if err != nil {
return nil, fmt.Errorf("error iterating field dictionary: %v", err)
}
if dictEntry == nil {
break // End of terms
}
termFreqs = append(termFreqs, index.TermFreq{
Term: dictEntry.Term,
Frequency: dictEntry.Count,
})
}
// Sort by frequency (descending or ascending)
sort.Slice(termFreqs, func(i, j int) bool {
if termFreqs[i].Frequency == termFreqs[j].Frequency {
// If frequencies are equal, sort by term lexicographically
return strings.Compare(termFreqs[i].Term, termFreqs[j].Term) < 0
}
if descending {
return termFreqs[i].Frequency > termFreqs[j].Frequency
}
return termFreqs[i].Frequency < termFreqs[j].Frequency
})
if limit >= len(termFreqs) {
return termFreqs, nil
}
return termFreqs[:limit], nil
}
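The frequency ordering used above, with a lexicographic tie-break so equal-frequency terms come out in a deterministic order, can be sketched standalone (a simplified illustration, not the library code; `TermFreq` here mirrors the shape of `index.TermFreq`):

```go
package main

import (
	"fmt"
	"sort"
)

// TermFreq mirrors the shape of the index.TermFreq entries sorted above.
type TermFreq struct {
	Term      string
	Frequency uint64
}

// sortTermFreqs orders entries by frequency (ascending or descending),
// breaks ties lexicographically by term, then truncates to limit.
func sortTermFreqs(tf []TermFreq, limit int, descending bool) []TermFreq {
	sort.Slice(tf, func(i, j int) bool {
		if tf[i].Frequency == tf[j].Frequency {
			return tf[i].Term < tf[j].Term
		}
		if descending {
			return tf[i].Frequency > tf[j].Frequency
		}
		return tf[i].Frequency < tf[j].Frequency
	})
	if limit < len(tf) {
		tf = tf[:limit]
	}
	return tf
}

func main() {
	tf := []TermFreq{{"beta", 3}, {"alpha", 3}, {"gamma", 7}, {"delta", 1}}
	fmt.Println(sortTermFreqs(tf, 2, true)) // [{gamma 7} {alpha 3}]
}
```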


@@ -23,6 +23,7 @@ import (
"encoding/json"
"fmt"
"reflect"
"sort"
"github.com/blevesearch/bleve/v2/size"
index "github.com/blevesearch/bleve_index_api"
@@ -167,3 +168,52 @@ func (i *IndexSnapshotVectorReader) Close() error {
// TODO Consider if any scope of recycling here.
return nil
}
func (i *IndexSnapshot) CentroidCardinalities(field string, limit int, descending bool) (
[]index.CentroidCardinality, error) {
if len(i.segment) == 0 {
return nil, nil
}
if limit <= 0 {
return nil, fmt.Errorf("limit must be positive")
}
centroids := make([]index.CentroidCardinality, 0, limit*len(i.segment))
for _, segment := range i.segment {
if sv, ok := segment.segment.(segment_api.VectorSegment); ok {
vecIndex, err := sv.InterpretVectorIndex(field,
false /* does not require filtering */, segment.deleted)
if err != nil {
return nil, fmt.Errorf("failed to interpret vector index for field %s in segment: %v", field, err)
}
centroidCardinalities, err := vecIndex.ObtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
if err != nil {
return nil, fmt.Errorf("failed to obtain top k centroid cardinalities for field %s in segment: %v", field, err)
}
if len(centroidCardinalities) > 0 {
centroids = append(centroids, centroidCardinalities...)
}
}
}
if len(centroids) == 0 {
return nil, nil
}
sort.Slice(centroids, func(i, j int) bool {
if descending {
return centroids[i].Cardinality > centroids[j].Cardinality
}
return centroids[i].Cardinality < centroids[j].Cardinality
})
if limit >= len(centroids) {
return centroids, nil
}
return centroids[:limit], nil
}


@@ -17,6 +17,8 @@ package bleve
import (
"context"
"fmt"
"sort"
"strings"
"sync"
"time"
@@ -1136,3 +1138,159 @@ func (f *indexAliasImplFieldDict) Close() error {
func (f *indexAliasImplFieldDict) Cardinality() int {
return f.fieldDict.Cardinality()
}
// -----------------------------------------------------------------------------
func (i *indexAliasImpl) TermFrequencies(field string, limit int, descending bool) (
[]index.TermFreq, error) {
i.mutex.RLock()
defer i.mutex.RUnlock()
if !i.open {
return nil, ErrorIndexClosed
}
if len(i.indexes) < 1 {
return nil, ErrorAliasEmpty
}
// short circuit the simple case
if len(i.indexes) == 1 {
if idx, ok := i.indexes[0].(InsightsIndex); ok {
return idx.TermFrequencies(field, limit, descending)
}
return nil, nil
}
// run search on each index in separate go routine
var waitGroup sync.WaitGroup
asyncResults := make(chan []index.TermFreq, len(i.indexes))
searchChildIndex := func(in Index, field string, limit int, descending bool) {
var rv []index.TermFreq
if idx, ok := in.(InsightsIndex); ok {
// over sample for higher accuracy
rv, _ = idx.TermFrequencies(field, limit*5, descending)
}
asyncResults <- rv
waitGroup.Done()
}
waitGroup.Add(len(i.indexes))
for _, in := range i.indexes {
go searchChildIndex(in, field, limit, descending)
}
// on another go routine, close after finished
go func() {
waitGroup.Wait()
close(asyncResults)
}()
rvTermFreqsMap := make(map[string]uint64)
for asr := range asyncResults {
for _, entry := range asr {
rvTermFreqsMap[entry.Term] += entry.Frequency
}
}
rvTermFreqs := make([]index.TermFreq, 0, len(rvTermFreqsMap))
for term, freq := range rvTermFreqsMap {
rvTermFreqs = append(rvTermFreqs, index.TermFreq{
Term: term,
Frequency: freq,
})
}
if descending {
sort.Slice(rvTermFreqs, func(i, j int) bool {
if rvTermFreqs[i].Frequency == rvTermFreqs[j].Frequency {
// If frequencies are equal, sort by term lexicographically
return strings.Compare(rvTermFreqs[i].Term, rvTermFreqs[j].Term) < 0
}
return rvTermFreqs[i].Frequency > rvTermFreqs[j].Frequency
})
} else {
sort.Slice(rvTermFreqs, func(i, j int) bool {
if rvTermFreqs[i].Frequency == rvTermFreqs[j].Frequency {
// If frequencies are equal, sort by term lexicographically
return strings.Compare(rvTermFreqs[i].Term, rvTermFreqs[j].Term) < 0
}
return rvTermFreqs[i].Frequency < rvTermFreqs[j].Frequency
})
}
if limit > len(rvTermFreqs) {
limit = len(rvTermFreqs)
}
return rvTermFreqs[:limit], nil
}
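The per-index results above are merged by summing frequencies into a map before re-sorting. The merge step in isolation, with a hypothetical `mergeFrequencies` helper name for illustration:

```go
package main

import "fmt"

// mergeFrequencies sums term frequencies collected from several child
// indexes, so a term seen in multiple shards gets one combined count.
func mergeFrequencies(results ...map[string]uint64) map[string]uint64 {
	merged := make(map[string]uint64)
	for _, r := range results {
		for term, freq := range r {
			merged[term] += freq
		}
	}
	return merged
}

func main() {
	a := map[string]uint64{"cloud": 4, "share": 2}
	b := map[string]uint64{"cloud": 1, "space": 5}
	fmt.Println(mergeFrequencies(a, b)["cloud"]) // 5
}
```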
func (i *indexAliasImpl) CentroidCardinalities(field string, limit int, descending bool) (
[]index.CentroidCardinality, error) {
i.mutex.RLock()
defer i.mutex.RUnlock()
if !i.open {
return nil, ErrorIndexClosed
}
if len(i.indexes) < 1 {
return nil, ErrorAliasEmpty
}
// short circuit the simple case
if len(i.indexes) == 1 {
if idx, ok := i.indexes[0].(InsightsIndex); ok {
return idx.CentroidCardinalities(field, limit, descending)
}
return nil, nil
}
// run search on each index in separate go routine
var waitGroup sync.WaitGroup
asyncResults := make(chan []index.CentroidCardinality, len(i.indexes))
searchChildIndex := func(in Index, field string, limit int, descending bool) {
var rv []index.CentroidCardinality
if idx, ok := in.(InsightsIndex); ok {
rv, _ = idx.CentroidCardinalities(field, limit, descending)
}
asyncResults <- rv
waitGroup.Done()
}
waitGroup.Add(len(i.indexes))
for _, in := range i.indexes {
go searchChildIndex(in, field, limit, descending)
}
// on another go routine, close after finished
go func() {
waitGroup.Wait()
close(asyncResults)
}()
rvCentroidCardinalitiesResult := make([]index.CentroidCardinality, 0, limit)
for asr := range asyncResults {
asr = append(asr, rvCentroidCardinalitiesResult...)
if descending {
sort.Slice(asr, func(i, j int) bool {
return asr[i].Cardinality > asr[j].Cardinality
})
} else {
sort.Slice(asr, func(i, j int) bool {
return asr[i].Cardinality < asr[j].Cardinality
})
}
if limit > len(asr) {
limit = len(asr)
}
rvCentroidCardinalitiesResult = asr[:limit]
}
return rvCentroidCardinalitiesResult, nil
}


@@ -57,8 +57,6 @@ type indexImpl struct {
const storePath = "store"
var mappingInternalKey = []byte("_mapping")
const (
SearchQueryStartCallbackKey search.ContextKey = "_search_query_start_callback_key"
SearchQueryEndCallbackKey search.ContextKey = "_search_query_end_callback_key"
@@ -641,8 +639,57 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
}
}
// ------------------------------------------------------------------------------------------
// set up additional contexts for any search operation that will proceed from
// here, such as presearch, collectors etc.
// Scoring model callback to be used to get scoring model
scoringModelCallback := func() string {
if isBM25Enabled(i.m) {
return index.BM25Scoring
}
return index.DefaultScoringModel
}
ctx = context.WithValue(ctx, search.GetScoringModelCallbackKey,
search.GetScoringModelCallbackFn(scoringModelCallback))
// This callback and variable handles the tracking of bytes read
// 1. as part of creation of tfr and its Next() calls which is
// accounted by invoking this callback when the TFR is closed.
// 2. the docvalues portion (accounted in collector) and the retrieval
// of stored fields bytes (by LoadAndHighlightFields)
var totalSearchCost uint64
sendBytesRead := func(bytesRead uint64) {
totalSearchCost += bytesRead
}
// Ensure IO cost accounting and result cost assignment happen on all return paths
defer func() {
if sr != nil {
sr.Cost = totalSearchCost
}
if is, ok := indexReader.(*scorch.IndexSnapshot); ok {
is.UpdateIOStats(totalSearchCost)
}
search.RecordSearchCost(ctx, search.DoneM, 0)
}()
ctx = context.WithValue(ctx, search.SearchIOStatsCallbackKey, search.SearchIOStatsCallbackFunc(sendBytesRead))
// Geo buffer pool callback to be used for getting geo buffer pool
var bufPool *s2.GeoBufferPool
getBufferPool := func() *s2.GeoBufferPool {
if bufPool == nil {
bufPool = s2.NewGeoBufferPool(search.MaxGeoBufPoolSize, search.MinGeoBufPoolSize)
}
return bufPool
}
ctx = context.WithValue(ctx, search.GeoBufferPoolCallbackKey, search.GeoBufferPoolCallbackFunc(getBufferPool))
// ------------------------------------------------------------------------------------------
if _, ok := ctx.Value(search.PreSearchKey).(bool); ok {
preSearchResult, err := i.preSearch(ctx, req, indexReader)
sr, err = i.preSearch(ctx, req, indexReader)
if err != nil {
return nil, err
}
@@ -656,7 +703,8 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
// time stat
searchDuration := time.Since(searchStart)
atomic.AddUint64(&i.stats.searchTime, uint64(searchDuration))
return preSearchResult, nil
return sr, nil
}
var reverseQueryExecution bool
@@ -726,6 +774,9 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
// if score fusion, run collect if rescorer is defined
if rescorer != nil && requestHasKNN(req) {
knnHits, err = i.runKnnCollector(ctx, req, indexReader, false)
if err != nil {
return nil, err
}
}
}
@@ -745,7 +796,6 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
if !contextScoreFusionKeyExists {
setKnnHitsInCollector(knnHits, req, coll)
}
if fts != nil {
if is, ok := indexReader.(*scorch.IndexSnapshot); ok {
@@ -754,44 +804,12 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
ctx = context.WithValue(ctx, search.FieldTermSynonymMapKey, fts)
}
scoringModelCallback := func() string {
if isBM25Enabled(i.m) {
return index.BM25Scoring
}
return index.DefaultScoringModel
}
ctx = context.WithValue(ctx, search.GetScoringModelCallbackKey,
search.GetScoringModelCallbackFn(scoringModelCallback))
// set the bm25Stats (stats important for consistent scoring) in
// the context object
if bm25Stats != nil {
ctx = context.WithValue(ctx, search.BM25StatsKey, bm25Stats)
}
// This callback and variable handles the tracking of bytes read
// 1. as part of creation of tfr and its Next() calls which is
// accounted by invoking this callback when the TFR is closed.
// 2. the docvalues portion (accounted in collector) and the retrieval
// of stored fields bytes (by LoadAndHighlightFields)
var totalSearchCost uint64
sendBytesRead := func(bytesRead uint64) {
totalSearchCost += bytesRead
}
ctx = context.WithValue(ctx, search.SearchIOStatsCallbackKey, search.SearchIOStatsCallbackFunc(sendBytesRead))
var bufPool *s2.GeoBufferPool
getBufferPool := func() *s2.GeoBufferPool {
if bufPool == nil {
bufPool = s2.NewGeoBufferPool(search.MaxGeoBufPoolSize, search.MinGeoBufPoolSize)
}
return bufPool
}
ctx = context.WithValue(ctx, search.GeoBufferPoolCallbackKey, search.GeoBufferPoolCallbackFunc(getBufferPool))
searcher, err := req.Query.Searcher(ctx, indexReader, i.m, search.SearcherOptions{
Explain: req.Explain,
IncludeTermVectors: req.IncludeLocations || req.Highlight != nil,
@@ -804,14 +822,6 @@ func (i *indexImpl) SearchInContext(ctx context.Context, req *SearchRequest) (sr
if serr := searcher.Close(); err == nil && serr != nil {
err = serr
}
if sr != nil {
sr.Cost = totalSearchCost
}
if sr, ok := indexReader.(*scorch.IndexSnapshot); ok {
sr.UpdateIOStats(totalSearchCost)
}
search.RecordSearchCost(ctx, search.DoneM, 0)
}()
if req.Facets != nil {
@@ -1388,3 +1398,68 @@ func (i *indexImpl) FireIndexEvent() {
internalEventIndex.FireIndexEvent()
}
}
// -----------------------------------------------------------------------------
func (i *indexImpl) TermFrequencies(field string, limit int, descending bool) (
[]index.TermFreq, error) {
i.mutex.RLock()
defer i.mutex.RUnlock()
if !i.open {
return nil, ErrorIndexClosed
}
reader, err := i.i.Reader()
if err != nil {
return nil, err
}
defer func() {
if cerr := reader.Close(); err == nil && cerr != nil {
err = cerr
}
}()
insightsReader, ok := reader.(index.IndexInsightsReader)
if !ok {
return nil, fmt.Errorf("index reader does not support TermFrequencies")
}
return insightsReader.TermFrequencies(field, limit, descending)
}
func (i *indexImpl) CentroidCardinalities(field string, limit int, descending bool) (
[]index.CentroidCardinality, error) {
i.mutex.RLock()
defer i.mutex.RUnlock()
if !i.open {
return nil, ErrorIndexClosed
}
reader, err := i.i.Reader()
if err != nil {
return nil, err
}
defer func() {
if cerr := reader.Close(); err == nil && cerr != nil {
err = cerr
}
}()
insightsReader, ok := reader.(index.IndexInsightsReader)
if !ok {
return nil, fmt.Errorf("index reader does not support CentroidCardinalities")
}
centroidCardinalities, err := insightsReader.CentroidCardinalities(field, limit, descending)
if err != nil {
return nil, err
}
for j := 0; j < len(centroidCardinalities); j++ {
centroidCardinalities[j].Index = i.name
}
return centroidCardinalities, nil
}


@@ -755,4 +755,3 @@ func ParseParams(r *SearchRequest, input []byte) (*RequestParams, error) {
return params, nil
}


@@ -185,17 +185,36 @@ func (q *BooleanQuery) Searcher(ctx context.Context, i index.IndexReader, m mapp
if err != nil {
return nil, err
}
var init bool
var refDoc *search.DocumentMatch
filterFunc = func(sctx *search.SearchContext, d *search.DocumentMatch) bool {
// Attempt to advance the filter searcher to the document identified by
// the base searcher's (unfiltered boolean) current result (d.IndexInternalID).
//
// If the filter searcher successfully finds a document with the same
// internal ID, it means the document satisfies the filter and should be kept.
//
// If the filter searcher returns an error, does not find a matching document,
// or finds a document with a different internal ID, the document should be discarded.
dm, err := filterSearcher.Advance(sctx, d.IndexInternalID)
return err == nil && dm != nil && bytes.Equal(dm.IndexInternalID, d.IndexInternalID)
// Initialize the reference document to point
// to the first document in the filterSearcher
var err error
if !init {
refDoc, err = filterSearcher.Next(sctx)
if err != nil {
return false
}
init = true
}
if refDoc == nil {
// filterSearcher is exhausted, d is not in filter
return false
}
// Compare document IDs
cmp := bytes.Compare(refDoc.IndexInternalID, d.IndexInternalID)
if cmp < 0 {
// filterSearcher is behind the current document, Advance() it
refDoc, err = filterSearcher.Advance(sctx, d.IndexInternalID)
if err != nil || refDoc == nil {
return false
}
// After advance, check if they're now equal
return bytes.Equal(refDoc.IndexInternalID, d.IndexInternalID)
}
// cmp >= 0: either equal (match) or filterSearcher is ahead (no match)
return cmp == 0
}
}
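The stateful filter above is a single-pass merge: the filter cursor only ever moves forward, so each candidate document is checked in amortized constant time. The same idea over two sorted ID lists, as a simplified standalone sketch (ints stand in for internal doc IDs, and `filterCursor`/`keep` are illustrative names, not the searcher API):

```go
package main

import "fmt"

// filterCursor walks a sorted list of filter-matching doc IDs exactly once,
// mirroring the Next/Advance pattern used on the filter searcher.
type filterCursor struct {
	ids []int
	pos int
}

// keep reports whether doc id is present in the filter, moving the cursor
// only forward (as Advance does on the filter searcher).
func (c *filterCursor) keep(id int) bool {
	for c.pos < len(c.ids) && c.ids[c.pos] < id {
		c.pos++ // filter is behind the current document: advance it
	}
	// cursor exhausted, or pointing at an ID >= id
	return c.pos < len(c.ids) && c.ids[c.pos] == id
}

func main() {
	base := []int{1, 2, 4, 7, 9}                // unfiltered boolean results
	c := &filterCursor{ids: []int{2, 3, 7, 10}} // docs matching the filter
	var kept []int
	for _, id := range base {
		if c.keep(id) {
			kept = append(kept, id)
		}
	}
	fmt.Println(kept) // [2 7]
}
```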


@@ -431,6 +431,10 @@ func expandQuery(m mapping.IndexMapping, query Query) (Query, error) {
if err != nil {
return nil, err
}
q.Filter, err = expand(q.Filter)
if err != nil {
return nil, err
}
return q, nil
default:
return query, nil
@@ -481,7 +485,7 @@ func ExtractFields(q Query, m mapping.IndexMapping, fs FieldSet) (FieldSet, erro
fs, err = ExtractFields(expandedQuery, m, fs)
}
case *BooleanQuery:
for _, subq := range []Query{q.Must, q.Should, q.MustNot} {
for _, subq := range []Query{q.Must, q.Should, q.MustNot, q.Filter} {
fs, err = ExtractFields(subq, m, fs)
if err != nil {
break
@@ -553,6 +557,10 @@ func ExtractSynonyms(ctx context.Context, m mapping.SynonymMapping, r index.Thes
if err != nil {
return nil, err
}
rv, err = ExtractSynonyms(ctx, m, r, q.Filter, rv)
if err != nil {
return nil, err
}
case *ConjunctionQuery:
for _, child := range q.Conjuncts {
rv, err = ExtractSynonyms(ctx, m, r, child, rv)


@@ -365,3 +365,29 @@ type EligibleDocumentSelector interface {
// This must be called after all eligible documents have been added.
SegmentEligibleDocs(segmentID int) []uint64
}
// -----------------------------------------------------------------------------
type TermFreq struct {
Term string `json:"term"`
Frequency uint64 `json:"frequency"`
}
type CentroidCardinality struct {
Index string `json:"index"`
Centroid []float32 `json:"centroid"`
Cardinality uint64 `json:"cardinality"`
}
// IndexInsightsReader is an extended index reader that supports APIs which can advertise
// details about content held within the index.
type IndexInsightsReader interface {
IndexReader
// Obtains at most limit indexed tokens for the field, sorted by frequency.
TermFrequencies(field string, limit int, descending bool) (termFreqs []TermFreq, err error)
// Obtains at most limit centroid vectors from IVF indexes, sorted by
// cluster density (cardinality).
CentroidCardinalities(field string, limit int, descending bool) (cenCards []CentroidCardinality, err error)
}


@@ -14,6 +14,7 @@ import "C"
import (
"encoding/json"
"fmt"
"sort"
"unsafe"
)
@@ -64,6 +65,10 @@ type Index interface {
ObtainClustersWithDistancesFromIVFIndex(x []float32, centroidIDs []int64) (
[]int64, []float32, error)
// Applicable only to IVF indexes: Returns the top k centroid cardinalities and
// their vectors in chosen order (descending or ascending)
ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) ([]uint64, [][]float32, error)
// Search queries the index with the vectors in x.
// Returns the IDs of the k nearest neighbors for each query vector and the
// corresponding distances.
@@ -214,6 +219,72 @@ func (idx *faissIndex) ObtainClustersWithDistancesFromIVFIndex(x []float32, cent
return centroids, centroidDistances, nil
}
func (idx *faissIndex) ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) (
[]uint64, [][]float32, error) {
if limit <= 0 {
return nil, nil, nil
}
nlist := int(C.faiss_IndexIVF_nlist(idx.idx))
if nlist == 0 {
return nil, nil, nil
}
centroidCardinalities := make([]C.size_t, nlist)
// Allocate a flat buffer for all centroids, then slice it per centroid
d := idx.D()
flatCentroids := make([]float32, nlist*d)
// Call the C function to fill centroid vectors and cardinalities
c := C.faiss_IndexIVF_get_centroids_and_cardinality(
idx.idx,
(*C.float)(&flatCentroids[0]),
(*C.size_t)(&centroidCardinalities[0]),
nil,
)
if c != 0 {
return nil, nil, getLastError()
}
topIndices := getIndicesOfKCentroidCardinalities(
centroidCardinalities,
min(limit, nlist),
descending)
rvCardinalities := make([]uint64, len(topIndices))
rvCentroids := make([][]float32, len(topIndices))
for i, idx := range topIndices {
rvCardinalities[i] = uint64(centroidCardinalities[idx])
rvCentroids[i] = flatCentroids[idx*d : (idx+1)*d]
}
return rvCardinalities, rvCentroids, nil
}
func getIndicesOfKCentroidCardinalities(cardinalities []C.size_t, k int, descending bool) []int {
n := len(cardinalities)
indices := make([]int, n)
for i := range indices {
indices[i] = i
}
// Sort only the indices based on cardinality values
sort.Slice(indices, func(i, j int) bool {
if descending {
return cardinalities[indices[i]] > cardinalities[indices[j]]
}
return cardinalities[indices[i]] < cardinalities[indices[j]]
})
if k >= n {
return indices
}
return indices[:k]
}
func (idx *faissIndex) SearchClustersFromIVFIndex(selector Selector,
eligibleCentroidIDs []int64, minEligibleCentroids int, k int64, x,
centroidDis []float32, params json.RawMessage) ([]float32, []int64, error) {


@@ -20,6 +20,7 @@ package segment
import (
"encoding/json"
index "github.com/blevesearch/bleve_index_api"
"github.com/RoaringBitmap/roaring/v2"
)
@@ -64,6 +65,8 @@ type VectorIndex interface {
params json.RawMessage) (VecPostingsList, error)
Close()
Size() uint64
ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) ([]index.CentroidCardinality, error)
}
type VectorSegment interface {


@@ -26,6 +26,7 @@ import (
"github.com/RoaringBitmap/roaring/v2"
"github.com/RoaringBitmap/roaring/v2/roaring64"
"github.com/bits-and-blooms/bitset"
index "github.com/blevesearch/bleve_index_api"
faiss "github.com/blevesearch/go-faiss"
segment "github.com/blevesearch/scorch_segment_api/v2"
)
@@ -279,6 +280,9 @@ type vectorIndexWrapper struct {
params json.RawMessage) (segment.VecPostingsList, error)
close func()
size func() uint64
obtainKCentroidCardinalitiesFromIVFIndex func(limit int, descending bool) (
[]index.CentroidCardinality, error)
}
func (i *vectorIndexWrapper) Search(qVector []float32, k int64,
@@ -301,6 +305,11 @@ func (i *vectorIndexWrapper) Size() uint64 {
return i.size()
}
func (i *vectorIndexWrapper) ObtainKCentroidCardinalitiesFromIVFIndex(limit int, descending bool) (
[]index.CentroidCardinality, error) {
return i.obtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
}
// InterpretVectorIndex returns a construct of closures (vectorIndexWrapper)
// that will allow the caller to -
// (1) search within an attached vector index
@@ -520,6 +529,24 @@ func (sb *SegmentBase) InterpretVectorIndex(field string, requiresFiltering bool
size: func() uint64 {
return vecIndexSize
},
obtainKCentroidCardinalitiesFromIVFIndex: func(limit int, descending bool) ([]index.CentroidCardinality, error) {
if vecIndex == nil || !vecIndex.IsIVFIndex() {
return nil, nil
}
cardinalities, centroids, err := vecIndex.ObtainKCentroidCardinalitiesFromIVFIndex(limit, descending)
if err != nil {
return nil, err
}
centroidCardinalities := make([]index.CentroidCardinality, len(cardinalities))
for i, cardinality := range cardinalities {
centroidCardinalities[i] = index.CentroidCardinality{
Centroid: centroids[i],
Cardinality: cardinality,
}
}
return centroidCardinalities, nil
},
}
err error


@@ -0,0 +1 @@
.DS_Store

37
vendor/github.com/clipperhouse/displaywidth/AGENTS.md generated vendored Normal file

@@ -0,0 +1,37 @@
The goals and overview of this package can be found in the README.md file;
start by reading that.
The goal of this package is to determine the display (column) width of a
string, UTF-8 bytes, or runes, as would happen in a monospace font, especially
in a terminal.
When troubleshooting, write Go unit tests instead of executing debug scripts.
The tests can return whatever logs or output you need. If those tests are
only for temporary troubleshooting, clean up the tests after the debugging is
done.
(Separate executable debugging scripts are messy, tend to have conflicting
dependencies, and are hard to clean up.)
If you make changes to the trie generation in internal/gen, regenerate the trie
by running `go generate` from the top package directory.
## Pull Requests and branches
For PRs (pull requests), you can use the gh CLI tool to retrieve details,
or post comments. Then, compare the current branch with main. Reviewing a PR
and reviewing a branch are about the same, but the PR may add context.
Look for bugs. Think like GitHub Copilot or Cursor BugBot.
Offer to post a brief summary of the review to the PR, via the gh CLI tool.
## Comparisons to go-runewidth
We originally attempted to make this package compatible with go-runewidth.
However, we found that there were too many differences in the handling of
certain characters and properties.
We believe, preliminarily, that our choices are more correct and complete,
by using more complete categories such as Unicode Cf (format) for zero-width
and Mn (Nonspacing_Mark) for combining marks.


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2019 Oliver Kuederle
Copyright (c) 2025 Matt Sherman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

115
vendor/github.com/clipperhouse/displaywidth/README.md generated vendored Normal file

@@ -0,0 +1,115 @@
# displaywidth
A high-performance Go package for measuring the monospace display width of strings, UTF-8 bytes, and runes.
[![Documentation](https://pkg.go.dev/badge/github.com/clipperhouse/displaywidth.svg)](https://pkg.go.dev/github.com/clipperhouse/displaywidth)
[![Test](https://github.com/clipperhouse/displaywidth/actions/workflows/gotest.yml/badge.svg)](https://github.com/clipperhouse/displaywidth/actions/workflows/gotest.yml)
[![Fuzz](https://github.com/clipperhouse/displaywidth/actions/workflows/gofuzz.yml/badge.svg)](https://github.com/clipperhouse/displaywidth/actions/workflows/gofuzz.yml)
## Install
```bash
go get github.com/clipperhouse/displaywidth
```
## Usage
```go
package main
import (
"fmt"
"github.com/clipperhouse/displaywidth"
)
func main() {
width := displaywidth.String("Hello, 世界!")
fmt.Println(width)
width = displaywidth.Bytes([]byte("🌍"))
fmt.Println(width)
width = displaywidth.Rune('🌍')
fmt.Println(width)
}
```
### Options
You can specify East Asian Width and Strict Emoji Neutral settings. If
unspecified, the default is `EastAsianWidth: false, StrictEmojiNeutral: true`.
```go
options := displaywidth.Options{
EastAsianWidth: true,
StrictEmojiNeutral: false,
}
width := options.String("Hello, 世界!")
fmt.Println(width)
```
## Details
This package implements the Unicode East Asian Width standard (UAX #11) and is
largely compatible with `go-runewidth`; see Compatibility below for the known
differences. It operates on bytes without decoding runes for better performance.
## Prior Art
[mattn/go-runewidth](https://github.com/mattn/go-runewidth)
[x/text/width](https://pkg.go.dev/golang.org/x/text/width)
[x/text/internal/triegen](https://pkg.go.dev/golang.org/x/text/internal/triegen)
## Benchmarks
Part of my motivation is the insight that we can avoid decoding runes for better performance.
```bash
go test -bench=. -benchmem
```
```
goos: darwin
goarch: arm64
pkg: github.com/clipperhouse/displaywidth
cpu: Apple M2
BenchmarkStringDefault/displaywidth-8 10537 ns/op 160.10 MB/s 0 B/op 0 allocs/op
BenchmarkStringDefault/go-runewidth-8 14162 ns/op 119.12 MB/s 0 B/op 0 allocs/op
BenchmarkString_EAW/displaywidth-8 10776 ns/op 156.55 MB/s 0 B/op 0 allocs/op
BenchmarkString_EAW/go-runewidth-8 23987 ns/op 70.33 MB/s 0 B/op 0 allocs/op
BenchmarkString_StrictEmoji/displaywidth-8 10892 ns/op 154.88 MB/s 0 B/op 0 allocs/op
BenchmarkString_StrictEmoji/go-runewidth-8 14552 ns/op 115.93 MB/s 0 B/op 0 allocs/op
BenchmarkString_ASCII/displaywidth-8 1116 ns/op 114.72 MB/s 0 B/op 0 allocs/op
BenchmarkString_ASCII/go-runewidth-8 1178 ns/op 108.67 MB/s 0 B/op 0 allocs/op
BenchmarkString_Unicode/displaywidth-8 896.9 ns/op 148.29 MB/s 0 B/op 0 allocs/op
BenchmarkString_Unicode/go-runewidth-8 1434 ns/op 92.72 MB/s 0 B/op 0 allocs/op
BenchmarkStringWidth_Emoji/displaywidth-8 3033 ns/op 238.74 MB/s 0 B/op 0 allocs/op
BenchmarkStringWidth_Emoji/go-runewidth-8 4841 ns/op 149.56 MB/s 0 B/op 0 allocs/op
BenchmarkString_Mixed/displaywidth-8 4064 ns/op 124.74 MB/s 0 B/op 0 allocs/op
BenchmarkString_Mixed/go-runewidth-8 4696 ns/op 107.97 MB/s 0 B/op 0 allocs/op
BenchmarkString_ControlChars/displaywidth-8 320.6 ns/op 102.93 MB/s 0 B/op 0 allocs/op
BenchmarkString_ControlChars/go-runewidth-8 373.8 ns/op 88.28 MB/s 0 B/op 0 allocs/op
BenchmarkRuneDefault/displaywidth-8 335.5 ns/op 411.35 MB/s 0 B/op 0 allocs/op
BenchmarkRuneDefault/go-runewidth-8 681.2 ns/op 202.58 MB/s 0 B/op 0 allocs/op
BenchmarkRuneWidth_EAW/displaywidth-8 146.7 ns/op 374.80 MB/s 0 B/op 0 allocs/op
BenchmarkRuneWidth_EAW/go-runewidth-8 495.6 ns/op 110.98 MB/s 0 B/op 0 allocs/op
BenchmarkRuneWidth_ASCII/displaywidth-8 63.00 ns/op 460.33 MB/s 0 B/op 0 allocs/op
BenchmarkRuneWidth_ASCII/go-runewidth-8 68.90 ns/op 420.91 MB/s 0 B/op 0 allocs/op
```
I use a similar technique in [this grapheme cluster library](https://github.com/clipperhouse/uax29).
## Compatibility
`displaywidth` will mostly give the same outputs as `go-runewidth`, but there are some differences:
- Unicode category Mn (Nonspacing Mark): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes.
- Unicode category Cf (Format): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes.
- Unicode category Mc (Spacing Mark): `displaywidth` will return width 1, `go-runewidth` may return width 0 for some runes.
- Unicode category Cs (Surrogate): `displaywidth` will return width 0, `go-runewidth` may return width 1 for some runes. Surrogates are not valid UTF-8; some packages may turn them into the replacement character (U+FFFD).
- Unicode category Zl (Line separator): `displaywidth` will return width 0, `go-runewidth` may return width 1.
- Unicode category Zp (Paragraph separator): `displaywidth` will return width 0, `go-runewidth` may return width 1.
- Unicode Noncharacters (U+FFFE and U+FFFF): `displaywidth` will return width 0, `go-runewidth` may return width 1.
See `TestCompatibility` for more details.

3
vendor/github.com/clipperhouse/displaywidth/gen.go generated vendored Normal file

@@ -0,0 +1,3 @@
package displaywidth
//go:generate go run -C internal/gen .

1897
vendor/github.com/clipperhouse/displaywidth/trie.go generated vendored Normal file

File diff suppressed because it is too large.

159
vendor/github.com/clipperhouse/displaywidth/width.go generated vendored Normal file

@@ -0,0 +1,159 @@
package displaywidth
import (
"unicode/utf8"
"github.com/clipperhouse/stringish"
"github.com/clipperhouse/uax29/v2/graphemes"
)
// String calculates the display width of a string
// using the [DefaultOptions]
func String(s string) int {
return DefaultOptions.String(s)
}
// Bytes calculates the display width of a []byte
// using the [DefaultOptions]
func Bytes(s []byte) int {
return DefaultOptions.Bytes(s)
}
// Rune calculates the display width of a single rune
// using the [DefaultOptions]
func Rune(r rune) int {
return DefaultOptions.Rune(r)
}
type Options struct {
EastAsianWidth bool
StrictEmojiNeutral bool
}
var DefaultOptions = Options{
EastAsianWidth: false,
StrictEmojiNeutral: true,
}
// String calculates the display width of a string
// for the given options
func (options Options) String(s string) int {
if len(s) == 0 {
return 0
}
total := 0
g := graphemes.FromString(s)
for g.Next() {
// The first character in the grapheme cluster determines the width;
// modifiers and joiners do not contribute to the width.
props, _ := lookupProperties(g.Value())
total += props.width(options)
}
return total
}
// Bytes calculates the display width of a []byte
// for the given options
func (options Options) Bytes(s []byte) int {
if len(s) == 0 {
return 0
}
total := 0
g := graphemes.FromBytes(s)
for g.Next() {
// The first character in the grapheme cluster determines the width;
// modifiers and joiners do not contribute to the width.
props, _ := lookupProperties(g.Value())
total += props.width(options)
}
return total
}
func (options Options) Rune(r rune) int {
// Fast path for ASCII
if r < utf8.RuneSelf {
if isASCIIControl(byte(r)) {
// Control (0x00-0x1F) and DEL (0x7F)
return 0
}
// ASCII printable (0x20-0x7E)
return 1
}
// Surrogates (U+D800-U+DFFF) are invalid UTF-8 and have zero width
// Other packages might turn them into the replacement character (U+FFFD)
// in which case, we won't see it.
if r >= 0xD800 && r <= 0xDFFF {
return 0
}
// Stack-allocated to avoid heap allocation
var buf [4]byte // UTF-8 is at most 4 bytes
n := utf8.EncodeRune(buf[:], r)
// Skip the grapheme iterator and directly lookup properties
props, _ := lookupProperties(buf[:n])
return props.width(options)
}
func isASCIIControl(b byte) bool {
return b < 0x20 || b == 0x7F
}
const defaultWidth = 1
// is returns true if the property flag is set
func (p property) is(flag property) bool {
return p&flag != 0
}
// lookupProperties returns the properties for the first character in a string
func lookupProperties[T stringish.Interface](s T) (property, int) {
if len(s) == 0 {
return 0, 0
}
// Fast path for ASCII characters (single byte)
b := s[0]
if b < utf8.RuneSelf { // Single-byte ASCII
if isASCIIControl(b) {
// Control characters (0x00-0x1F) and DEL (0x7F) - width 0
return _ZeroWidth, 1
}
// ASCII printable characters (0x20-0x7E) - width 1
// Return 0 properties, width calculation will default to 1
return 0, 1
}
// Use the generated trie for lookup
props, size := lookup(s)
return property(props), size
}
// width determines the display width of a character based on its properties
// and configuration options
func (p property) width(options Options) int {
if p == 0 {
// Character not in trie, use default behavior
return defaultWidth
}
if p.is(_ZeroWidth) {
return 0
}
if options.EastAsianWidth {
if p.is(_East_Asian_Ambiguous) {
return 2
}
if p.is(_East_Asian_Ambiguous|_Emoji) && !options.StrictEmojiNeutral {
return 2
}
}
if p.is(_East_Asian_Full_Wide) {
return 2
}
// Default width for all other characters
return defaultWidth
}

2
vendor/github.com/clipperhouse/stringish/.gitignore generated vendored Normal file

@@ -0,0 +1,2 @@
.DS_Store
*.test

21
vendor/github.com/clipperhouse/stringish/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Matt Sherman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

64
vendor/github.com/clipperhouse/stringish/README.md generated vendored Normal file

@@ -0,0 +1,64 @@
# stringish
A small Go module that provides a generic type constraint for “string-like”
data, and a utf8 package that works with both strings and byte slices
without conversions.
```go
type Interface interface {
~[]byte | ~string
}
```
[![Go Reference](https://pkg.go.dev/badge/github.com/clipperhouse/stringish/utf8.svg)](https://pkg.go.dev/github.com/clipperhouse/stringish/utf8)
[![Test Status](https://github.com/clipperhouse/stringish/actions/workflows/gotest.yml/badge.svg)](https://github.com/clipperhouse/stringish/actions/workflows/gotest.yml)
## Install
```
go get github.com/clipperhouse/stringish
```
## Examples
```go
import (
"github.com/clipperhouse/stringish"
"github.com/clipperhouse/stringish/utf8"
)
s := "Hello, 世界"
r, size := utf8.DecodeRune(s) // not DecodeRuneInString 🎉
b := []byte("Hello, 世界")
r, size = utf8.DecodeRune(b) // same API!
func MyFoo[T stringish.Interface](s T) T {
// pass a string or a []byte
// iterate, slice, transform, whatever
}
```
## Motivation
Sometimes we want APIs to accept `string` or `[]byte` without having to convert
between those types. That conversion usually allocates!
By implementing with `stringish.Interface`, we can have a single API, and
single implementation for both types: one `Foo` instead of `Foo` and
`FooString`.
We have converted the
[`unicode/utf8` package](https://github.com/clipperhouse/stringish/blob/main/utf8/utf8.go)
as an example -- note the absence of `*InString` funcs. We might look at `x/text`
next.
## Used by
- clipperhouse/uax29: [stringish trie](https://github.com/clipperhouse/uax29/blob/master/graphemes/trie.go#L27), [stringish iterator](https://github.com/clipperhouse/uax29/blob/master/internal/iterators/iterator.go#L9), [stringish SplitFunc](https://github.com/clipperhouse/uax29/blob/master/graphemes/splitfunc.go#L21)
- [clipperhouse/displaywidth](https://github.com/clipperhouse/displaywidth)
## Prior discussion
- [Consideration of similar by the Go team](https://github.com/golang/go/issues/48643)


@@ -0,0 +1,5 @@
package stringish
type Interface interface {
~[]byte | ~string
}

21
vendor/github.com/clipperhouse/uax29/v2/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2020 Matt Sherman
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,82 @@
An implementation of grapheme cluster boundaries from [Unicode text segmentation](https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries) (UAX 29), for Unicode version 15.0.0.
## Quick start
```
go get "github.com/clipperhouse/uax29/v2/graphemes"
```
```go
import "github.com/clipperhouse/uax29/v2/graphemes"
text := "Hello, 世界. Nice dog! 👍🐶"
tokens := graphemes.FromString(text)
for tokens.Next() { // Next() returns true until end of data
fmt.Println(tokens.Value()) // Do something with the current grapheme
}
```
[![Documentation](https://pkg.go.dev/badge/github.com/clipperhouse/uax29/v2/graphemes.svg)](https://pkg.go.dev/github.com/clipperhouse/uax29/v2/graphemes)
_A grapheme is a “single visible character”, which might be as simple as a single letter, or a complex emoji that consists of several Unicode code points._
## Conformance
We use the Unicode [test suite](https://unicode.org/reports/tr41/tr41-26.html#Tests29). Status:
![Go](https://github.com/clipperhouse/uax29/actions/workflows/gotest.yml/badge.svg)
## APIs
### If you have a `string`
```go
text := "Hello, 世界. Nice dog! 👍🐶"
tokens := graphemes.FromString(text)
for tokens.Next() { // Next() returns true until end of data
fmt.Println(tokens.Value()) // Do something with the current grapheme
}
```
### If you have an `io.Reader`
`FromReader` embeds a [`bufio.Scanner`](https://pkg.go.dev/bufio#Scanner), so just use those methods.
```go
r := getYourReader() // from a file or network maybe
tokens := graphemes.FromReader(r)
for tokens.Scan() { // Scan() returns true until error or EOF
fmt.Println(tokens.Text()) // Do something with the current grapheme
}
if tokens.Err() != nil { // Check the error
log.Fatal(tokens.Err())
}
```
### If you have a `[]byte`
```go
b := []byte("Hello, 世界. Nice dog! 👍🐶")
tokens := graphemes.FromBytes(b)
for tokens.Next() { // Next() returns true until end of data
fmt.Println(tokens.Value()) // Do something with the current grapheme
}
```
### Performance
On a Mac M2 laptop, we see around 200MB/s, or around 100 million graphemes per second. You should see ~constant memory, and no allocations.
### Invalid inputs
Invalid UTF-8 input is considered undefined behavior. We test to ensure that bad inputs will not cause pathological outcomes, such as a panic or infinite loop. Callers should expect “garbage-in, garbage-out”.
Your pipeline should probably include a call to [`utf8.Valid()`](https://pkg.go.dev/unicode/utf8#Valid).


@@ -0,0 +1,28 @@
package graphemes
import "github.com/clipperhouse/uax29/v2/internal/iterators"
type Iterator[T iterators.Stringish] struct {
*iterators.Iterator[T]
}
var (
splitFuncString = splitFunc[string]
splitFuncBytes = splitFunc[[]byte]
)
// FromString returns an iterator for the grapheme clusters in the input string.
// Iterate while Next() is true, and access the grapheme via Value().
func FromString(s string) Iterator[string] {
return Iterator[string]{
iterators.New(splitFuncString, s),
}
}
// FromBytes returns an iterator for the grapheme clusters in the input bytes.
// Iterate while Next() is true, and access the grapheme via Value().
func FromBytes(b []byte) Iterator[[]byte] {
return Iterator[[]byte]{
iterators.New(splitFuncBytes, b),
}
}


@@ -0,0 +1,25 @@
// Package graphemes implements Unicode grapheme cluster boundaries: https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries
package graphemes
import (
"bufio"
"io"
)
type Scanner struct {
*bufio.Scanner
}
// FromReader returns a Scanner, to split graphemes per
// https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries.
//
// It embeds a [bufio.Scanner], so you can use its methods.
//
// Iterate through graphemes by calling Scan() until false, then check Err().
func FromReader(r io.Reader) *Scanner {
sc := bufio.NewScanner(r)
sc.Split(SplitFunc)
return &Scanner{
Scanner: sc,
}
}


@@ -0,0 +1,174 @@
package graphemes
import (
"bufio"
"github.com/clipperhouse/uax29/v2/internal/iterators"
)
// is reports whether lookup intersects the given propert(ies)
func (lookup property) is(properties property) bool {
return (lookup & properties) != 0
}
const _Ignore = _Extend
// SplitFunc is a bufio.SplitFunc implementation of Unicode grapheme cluster segmentation, for use with bufio.Scanner.
//
// See https://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries.
var SplitFunc bufio.SplitFunc = splitFunc[[]byte]
func splitFunc[T iterators.Stringish](data T, atEOF bool) (advance int, token T, err error) {
var empty T
if len(data) == 0 {
return 0, empty, nil
}
// These vars are stateful across loop iterations
var pos int
var lastExIgnore property = 0 // "last excluding ignored categories"
var lastLastExIgnore property = 0 // "last one before that"
var regionalIndicatorCount int
// Rules are usually of the form Cat1 × Cat2; "current" refers to the first property
// to the right of the ×, from which we look back or forward
current, w := lookup(data[pos:])
if w == 0 {
if !atEOF {
// Rune extends past current data, request more
return 0, empty, nil
}
pos = len(data)
return pos, data[:pos], nil
}
// https://unicode.org/reports/tr29/#GB1
// Start of text always advances
pos += w
for {
eot := pos == len(data) // "end of text"
if eot {
if !atEOF {
// Token extends past current data, request more
return 0, empty, nil
}
// https://unicode.org/reports/tr29/#GB2
break
}
/*
We've switched the evaluation order of GB1↓ and GB2↑. That's ok:
because we've checked for len(data) at the top of this function,
sot and eot are mutually exclusive, so the order doesn't matter.
*/
// Rules are usually of the form Cat1 × Cat2; "current" refers to the first property
// to the right of the ×, from which we look back or forward
// Remember previous properties to avoid lookups/lookbacks
last := current
if !last.is(_Ignore) {
lastLastExIgnore = lastExIgnore
lastExIgnore = last
}
current, w = lookup(data[pos:])
if w == 0 {
if atEOF {
// Just return the bytes, we can't do anything with them
pos = len(data)
break
}
// Rune extends past current data, request more
return 0, empty, nil
}
// Optimization: no rule can possibly apply
if current|last == 0 { // i.e. both are zero
break
}
// https://unicode.org/reports/tr29/#GB3
if current.is(_LF) && last.is(_CR) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB4
// https://unicode.org/reports/tr29/#GB5
if (current | last).is(_Control | _CR | _LF) {
break
}
// https://unicode.org/reports/tr29/#GB6
if current.is(_L|_V|_LV|_LVT) && last.is(_L) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB7
if current.is(_V|_T) && last.is(_LV|_V) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB8
if current.is(_T) && last.is(_LVT|_T) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB9
if current.is(_Extend | _ZWJ) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB9a
if current.is(_SpacingMark) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB9b
if last.is(_Prepend) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB9c
// TODO(clipperhouse):
// It appears to be added in Unicode 15.1.0:
// https://unicode.org/versions/Unicode15.1.0/#Migration
// This package currently supports Unicode 15.0.0, so
// out of scope for now
// https://unicode.org/reports/tr29/#GB11
if current.is(_ExtendedPictographic) && last.is(_ZWJ) && lastLastExIgnore.is(_ExtendedPictographic) {
pos += w
continue
}
// https://unicode.org/reports/tr29/#GB12
// https://unicode.org/reports/tr29/#GB13
if (current & last).is(_RegionalIndicator) {
regionalIndicatorCount++
odd := regionalIndicatorCount%2 == 1
if odd {
pos += w
continue
}
}
// If we fall through all the above rules, it's a grapheme cluster break
break
}
// Return token
return pos, data[:pos], nil
}

1409
vendor/github.com/clipperhouse/uax29/v2/graphemes/trie.go generated vendored Normal file

File diff suppressed because it is too large.


@@ -0,0 +1,85 @@
package iterators
type Stringish interface {
[]byte | string
}
// SplitFunc is given data and an at-EOF flag, and returns how far to advance,
// the token, and any error.
type SplitFunc[T Stringish] func(T, bool) (int, T, error)
// Iterator is a generic iterator over tokens that are either []byte or string.
// Iterate while Next() is true, and access the token via Value().
type Iterator[T Stringish] struct {
split SplitFunc[T]
data T
start int
pos int
}
// New creates a new Iterator for the given data and SplitFunc.
func New[T Stringish](split SplitFunc[T], data T) *Iterator[T] {
return &Iterator[T]{
split: split,
data: data,
}
}
// SetText sets the text for the iterator to operate on, and resets all state.
func (iter *Iterator[T]) SetText(data T) {
iter.data = data
iter.start = 0
iter.pos = 0
}
// Split sets the SplitFunc for the Iterator.
func (iter *Iterator[T]) Split(split SplitFunc[T]) {
iter.split = split
}
// Next advances the iterator to the next token. It returns false when there
// are no remaining tokens or an error occurred.
func (iter *Iterator[T]) Next() bool {
if iter.pos == len(iter.data) {
return false
}
if iter.pos > len(iter.data) {
panic("SplitFunc advanced beyond the end of the data")
}
iter.start = iter.pos
advance, _, err := iter.split(iter.data[iter.pos:], true)
if err != nil {
panic(err)
}
if advance <= 0 {
panic("SplitFunc returned a zero or negative advance")
}
iter.pos += advance
if iter.pos > len(iter.data) {
panic("SplitFunc advanced beyond the end of the data")
}
return true
}
// Value returns the current token.
func (iter *Iterator[T]) Value() T {
return iter.data[iter.start:iter.pos]
}
// Start returns the byte position of the current token in the original data.
func (iter *Iterator[T]) Start() int {
return iter.start
}
// End returns the byte position after the current token in the original data.
func (iter *Iterator[T]) End() int {
return iter.pos
}
// Reset resets the iterator to the beginning of the data.
func (iter *Iterator[T]) Reset() {
iter.start = 0
iter.pos = 0
}


@@ -162,7 +162,7 @@ var supportedAlgorithms = map[string]bool{
// parsing.
//
// // Directly fetch the metadata document.
// resp, err := http.Get("https://login.example.com/custom-metadata-path")
// if err != nil {
// // ...
// }
@@ -267,7 +267,7 @@ func NewProvider(ctx context.Context, issuer string) (*Provider, error) {
issuerURL = issuer
}
if p.Issuer != issuerURL && !skipIssuerValidation {
return nil, fmt.Errorf("oidc: issuer did not match the issuer returned by provider, expected %q got %q", issuer, p.Issuer)
return nil, fmt.Errorf("oidc: issuer URL provided to client (%q) did not match the issuer URL returned by provider (%q)", issuer, p.Issuer)
}
var algs []string
for _, a := range p.Algorithms {


@@ -1,5 +1,24 @@
version: "2"
linters:
disable:
- errcheck
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
- gofmt
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$


@@ -1,6 +1,6 @@
[![Build Status](https://github.com/google/renameio/workflows/Test/badge.svg)](https://github.com/google/renameio/actions?query=workflow%3ATest)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/google/renameio)](https://pkg.go.dev/github.com/google/renameio)
[![Go Report Card](https://goreportcard.com/badge/github.com/google/renameio)](https://goreportcard.com/report/github.com/google/renameio)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/google/renameio/v2)](https://pkg.go.dev/github.com/google/renameio/v2)
[![Go Report Card](https://goreportcard.com/badge/github.com/google/renameio/v2)](https://goreportcard.com/report/github.com/google/renameio/v2)
The `renameio` Go package provides a way to atomically create or replace a file or
symbolic link.


@@ -77,3 +77,12 @@ func WithExistingPermissions() Option {
c.attemptPermCopy = true
})
}
// WithReplaceOnClose causes PendingFile.Close() to actually call
// CloseAtomicallyReplace(). This means PendingFile implements io.Closer while
// maintaining atomicity by default.
func WithReplaceOnClose() Option {
return optionFunc(func(c *config) {
c.renameOnClose = true
})
}


@@ -114,9 +114,10 @@ func tempDir(dir, dest string) string {
type PendingFile struct {
*os.File
path string
done bool
closed bool
replaceOnClose bool
}
// Cleanup is a no-op if CloseAtomicallyReplace succeeded, and otherwise closes
@@ -131,7 +132,7 @@ func (t *PendingFile) Cleanup() error {
// reporting, there is nothing the caller can recover here.
var closeErr error
if !t.closed {
closeErr = t.Close()
closeErr = t.File.Close()
}
if err := os.Remove(t.Name()); err != nil {
return err
@@ -159,7 +160,7 @@ func (t *PendingFile) CloseAtomicallyReplace() error {
return err
}
t.closed = true
if err := t.Close(); err != nil {
if err := t.File.Close(); err != nil {
return err
}
if err := os.Rename(t.Name(), t.path); err != nil {
@@ -169,6 +170,15 @@ func (t *PendingFile) CloseAtomicallyReplace() error {
return nil
}
// Close closes the file. By default it just calls Close() on the underlying file. For PendingFiles created with
// WithReplaceOnClose it calls CloseAtomicallyReplace() instead.
func (t *PendingFile) Close() error {
if t.replaceOnClose {
return t.CloseAtomicallyReplace()
}
return t.File.Close()
}
// TempFile creates a temporary file destined to atomically creating or
// replacing the destination file at path.
//
@@ -189,6 +199,7 @@ type config struct {
attemptPermCopy bool
ignoreUmask bool
chmod *os.FileMode
renameOnClose bool
}
// NewPendingFile creates a temporary file destined to atomically creating or
@@ -244,7 +255,7 @@ func NewPendingFile(path string, opts ...Option) (*PendingFile, error) {
}
}
return &PendingFile{File: f, path: cfg.path}, nil
return &PendingFile{File: f, path: cfg.path, replaceOnClose: cfg.renameOnClose}, nil
}
// Symlink wraps os.Symlink, replacing an existing symlink with the same name

43
vendor/github.com/mattn/go-runewidth/benchstat.txt generated vendored Normal file

@@ -0,0 +1,43 @@
goos: darwin
goarch: arm64
pkg: github.com/mattn/go-runewidth
cpu: Apple M2
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
String1WidthAll/regular-8 108.92m ± 0% 35.09m ± 3% -67.78% (p=0.002 n=6)
String1WidthAll/lut-8 93.97m ± 0% 18.70m ± 0% -80.10% (p=0.002 n=6)
String1Width768/regular-8 60.62µ ± 1% 11.54µ ± 0% -80.97% (p=0.002 n=6)
String1Width768/lut-8 60.66µ ± 1% 11.43µ ± 0% -81.16% (p=0.002 n=6)
String1WidthAllEastAsian/regular-8 115.13m ± 1% 40.79m ± 8% -64.57% (p=0.002 n=6)
String1WidthAllEastAsian/lut-8 93.65m ± 0% 18.70m ± 2% -80.03% (p=0.002 n=6)
String1Width768EastAsian/regular-8 75.32µ ± 0% 23.49µ ± 0% -68.82% (p=0.002 n=6)
String1Width768EastAsian/lut-8 60.76µ ± 0% 11.50µ ± 0% -81.07% (p=0.002 n=6)
geomean 2.562m 604.5µ -76.41%
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
String1WidthAll/regular-8 106.3Mi ± 0% 0.0Mi ± 0% -100.00% (p=0.002 n=6)
String1WidthAll/lut-8 106.3Mi ± 0% 0.0Mi ± 0% -100.00% (p=0.002 n=6)
String1Width768/regular-8 75.00Ki ± 0% 0.00Ki ± 0% -100.00% (p=0.002 n=6)
String1Width768/lut-8 75.00Ki ± 0% 0.00Ki ± 0% -100.00% (p=0.002 n=6)
String1WidthAllEastAsian/regular-8 106.3Mi ± 0% 0.0Mi ± 0% -100.00% (p=0.002 n=6)
String1WidthAllEastAsian/lut-8 106.3Mi ± 0% 0.0Mi ± 0% -100.00% (p=0.002 n=6)
String1Width768EastAsian/regular-8 75.00Ki ± 0% 0.00Ki ± 0% -100.00% (p=0.002 n=6)
String1Width768EastAsian/lut-8 75.00Ki ± 0% 0.00Ki ± 0% -100.00% (p=0.002 n=6)
geomean 2.790Mi ? ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
String1WidthAll/regular-8 3.342M ± 0% 0.000M ± 0% -100.00% (p=0.002 n=6)
String1WidthAll/lut-8 3.342M ± 0% 0.000M ± 0% -100.00% (p=0.002 n=6)
String1Width768/regular-8 2.304k ± 0% 0.000k ± 0% -100.00% (p=0.002 n=6)
String1Width768/lut-8 2.304k ± 0% 0.000k ± 0% -100.00% (p=0.002 n=6)
String1WidthAllEastAsian/regular-8 3.342M ± 0% 0.000M ± 0% -100.00% (p=0.002 n=6)
String1WidthAllEastAsian/lut-8 3.342M ± 0% 0.000M ± 0% -100.00% (p=0.002 n=6)
String1Width768EastAsian/regular-8 2.304k ± 0% 0.000k ± 0% -100.00% (p=0.002 n=6)
String1Width768EastAsian/lut-8 2.304k ± 0% 0.000k ± 0% -100.00% (p=0.002 n=6)
geomean 87.75k ? ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean

54
vendor/github.com/mattn/go-runewidth/new.txt generated vendored Normal file

@@ -0,0 +1,54 @@
goos: darwin
goarch: arm64
pkg: github.com/mattn/go-runewidth
cpu: Apple M2
BenchmarkString1WidthAll/regular-8 33 35033923 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/regular-8 33 34965112 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/regular-8 33 36307234 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/regular-8 33 35007705 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/regular-8 33 35154182 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/regular-8 34 35155400 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 63 18688500 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 63 18712474 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 63 18700211 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 62 18694179 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 62 18708392 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAll/lut-8 63 18770608 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 104137 11526 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 103986 11540 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 104079 11552 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 103963 11530 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 103714 11538 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/regular-8 104181 11537 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 105150 11420 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 104778 11423 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 105069 11422 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 105127 11475 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 104742 11433 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768/lut-8 105163 11432 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 40723347 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 40790299 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 40801338 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 40798216 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 44135253 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 28 40779546 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 62 18694165 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 62 18685047 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 62 18689273 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 62 19150346 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 63 19126154 ns/op 0 B/op 0 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 62 18712619 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 50775 23595 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 51061 23563 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 51057 23492 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 51138 23445 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 51195 23469 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/regular-8 51087 23482 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104559 11549 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104508 11483 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104296 11503 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104606 11485 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104588 11495 ns/op 0 B/op 0 allocs/op
BenchmarkString1Width768EastAsian/lut-8 104602 11518 ns/op 0 B/op 0 allocs/op
PASS
ok github.com/mattn/go-runewidth 64.455s

54
vendor/github.com/mattn/go-runewidth/old.txt generated vendored Normal file

@@ -0,0 +1,54 @@
goos: darwin
goarch: arm64
pkg: github.com/mattn/go-runewidth
cpu: Apple M2
BenchmarkString1WidthAll/regular-8 10 108559258 ns/op 111412145 B/op 3342342 allocs/op
BenchmarkString1WidthAll/regular-8 10 108968079 ns/op 111412364 B/op 3342343 allocs/op
BenchmarkString1WidthAll/regular-8 10 108890338 ns/op 111412388 B/op 3342344 allocs/op
BenchmarkString1WidthAll/regular-8 10 108940704 ns/op 111412584 B/op 3342346 allocs/op
BenchmarkString1WidthAll/regular-8 10 108632796 ns/op 111412348 B/op 3342343 allocs/op
BenchmarkString1WidthAll/regular-8 10 109354546 ns/op 111412777 B/op 3342343 allocs/op
BenchmarkString1WidthAll/lut-8 12 93844406 ns/op 111412569 B/op 3342345 allocs/op
BenchmarkString1WidthAll/lut-8 12 93991080 ns/op 111412512 B/op 3342344 allocs/op
BenchmarkString1WidthAll/lut-8 12 93980632 ns/op 111412413 B/op 3342343 allocs/op
BenchmarkString1WidthAll/lut-8 12 94004083 ns/op 111412396 B/op 3342343 allocs/op
BenchmarkString1WidthAll/lut-8 12 93959795 ns/op 111412445 B/op 3342343 allocs/op
BenchmarkString1WidthAll/lut-8 12 93846198 ns/op 111412556 B/op 3342345 allocs/op
BenchmarkString1Width768/regular-8 19785 60696 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/regular-8 19824 60520 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/regular-8 19832 60547 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/regular-8 19778 60543 ns/op 76800 B/op 2304 allocs/op
BenchmarkString1Width768/regular-8 19842 61142 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/regular-8 19780 60696 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19598 61161 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19731 60707 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19738 60626 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19764 60670 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19797 60642 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768/lut-8 19738 60608 ns/op 76800 B/op 2304 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 115080431 ns/op 111412458 B/op 3342345 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 114908880 ns/op 111412476 B/op 3342345 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 115077134 ns/op 111412540 B/op 3342345 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 115175292 ns/op 111412467 B/op 3342345 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 115792653 ns/op 111412362 B/op 3342344 allocs/op
BenchmarkString1WidthAllEastAsian/regular-8 9 115255417 ns/op 111412572 B/op 3342346 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 93761542 ns/op 111412538 B/op 3342345 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 94089990 ns/op 111412440 B/op 3342343 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 93721410 ns/op 111412514 B/op 3342344 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 93572951 ns/op 111412329 B/op 3342342 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 93536052 ns/op 111412206 B/op 3342341 allocs/op
BenchmarkString1WidthAllEastAsian/lut-8 12 93532365 ns/op 111412412 B/op 3342343 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15904 75401 ns/op 76800 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15932 75449 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15944 75181 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15963 75311 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15879 75292 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/regular-8 15955 75334 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19692 60692 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19712 60699 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19741 60819 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19771 60653 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19737 61027 ns/op 76801 B/op 2304 allocs/op
BenchmarkString1Width768EastAsian/lut-8 19657 60820 ns/op 76801 B/op 2304 allocs/op
PASS
ok github.com/mattn/go-runewidth 76.165s


@@ -4,7 +4,7 @@ import (
"os"
"strings"
"github.com/rivo/uniseg"
"github.com/clipperhouse/uax29/v2/graphemes"
)
//go:generate go run script/generate.go
@@ -64,6 +64,9 @@ func inTable(r rune, t table) bool {
if r < t[0].first {
return false
}
if r > t[len(t)-1].last {
return false
}
bot := 0
top := len(t) - 1
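The hunk above adds an upper-bound early exit before the binary search, mirroring the existing lower-bound check. A self-contained sketch of the same lookup (the `interval` type and `inTable` signature below are illustrative, not the vendored definitions):

```go
package main

import "fmt"

type interval struct{ first, last rune }

// inTable reports whether r falls inside any interval of the sorted,
// non-overlapping table t. Checking both overall bounds first skips
// the binary search entirely for runes outside the table's range.
func inTable(r rune, t []interval) bool {
	if len(t) == 0 || r < t[0].first || r > t[len(t)-1].last {
		return false
	}
	bot, top := 0, len(t)-1
	for bot <= top {
		mid := (bot + top) / 2
		switch {
		case r < t[mid].first:
			top = mid - 1
		case r > t[mid].last:
			bot = mid + 1
		default:
			return true
		}
	}
	return false
}

func main() {
	tbl := []interval{{'0', '9'}, {'A', 'Z'}, {'a', 'z'}}
	fmt.Println(inTable('k', tbl), inTable('!', tbl))
}
```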
@@ -175,10 +178,10 @@ func (c *Condition) CreateLUT() {
// StringWidth returns the display width of the string as seen on screen.
func (c *Condition) StringWidth(s string) (width int) {
g := uniseg.NewGraphemes(s)
g := graphemes.FromString(s)
for g.Next() {
var chWidth int
for _, r := range g.Runes() {
for _, r := range g.Value() {
chWidth = c.RuneWidth(r)
if chWidth > 0 {
break // Our best guess at this point is to use the width of the first non-zero-width rune.
@@ -197,17 +200,17 @@ func (c *Condition) Truncate(s string, w int, tail string) string {
w -= c.StringWidth(tail)
var width int
pos := len(s)
g := uniseg.NewGraphemes(s)
g := graphemes.FromString(s)
for g.Next() {
var chWidth int
for _, r := range g.Runes() {
for _, r := range g.Value() {
chWidth = c.RuneWidth(r)
if chWidth > 0 {
break // See StringWidth() for details.
}
}
if width+chWidth > w {
pos, _ = g.Positions()
pos = g.Start()
break
}
width += chWidth
@@ -224,10 +227,10 @@ func (c *Condition) TruncateLeft(s string, w int, prefix string) string {
var width int
pos := len(s)
g := uniseg.NewGraphemes(s)
g := graphemes.FromString(s)
for g.Next() {
var chWidth int
for _, r := range g.Runes() {
for _, r := range g.Value() {
chWidth = c.RuneWidth(r)
if chWidth > 0 {
break // See StringWidth() for details.
@@ -236,10 +239,10 @@ func (c *Condition) TruncateLeft(s string, w int, prefix string) string {
if width+chWidth > w {
if width < w {
_, pos = g.Positions()
pos = g.End()
prefix += strings.Repeat(" ", width+chWidth-w)
} else {
pos, _ = g.Positions()
pos = g.Start()
}
break


@@ -4,6 +4,7 @@
package runewidth
import (
"os"
"syscall"
)
@@ -14,6 +15,11 @@ var (
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
if os.Getenv("WT_SESSION") != "" {
// Windows Terminal never uses East Asian Ambiguous Widths.
return false
}
r1, _, _ := procGetConsoleOutputCP.Call()
if r1 == 0 {
return false

3
vendor/github.com/olekukonko/cat/.gitignore generated vendored Normal file

@@ -0,0 +1,3 @@
.idea
.github
lab

21
vendor/github.com/olekukonko/cat/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Oleku Konko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

168
vendor/github.com/olekukonko/cat/README.md generated vendored Normal file

@@ -0,0 +1,168 @@
# 🐱 `cat` - The Fast & Fluent String Concatenation Library for Go
> **"Because building strings shouldn't feel like herding cats"** 😼
## Why `cat`?
Go's `strings.Builder` is great, but building complex strings often feels clunky. `cat` makes string concatenation:
- **Faster** - Optimized paths for common types, zero-allocation conversions
- **Fluent** - Chainable methods for beautiful, readable code
- **Flexible** - Handles any type, nested structures, and custom formatting
- **Smart** - Automatic pooling, size estimation, and separator handling
```go
// Without cat
var b strings.Builder
b.WriteString("Hello, ")
b.WriteString(user.Name)
b.WriteString("! You have ")
b.WriteString(strconv.Itoa(count))
b.WriteString(" new messages.")
result := b.String()
// With cat
result := cat.Concat("Hello, ", user.Name, "! You have ", count, " new messages.")
```
## 🔥 Hot Features
### 1. Fluent Builder API
Build strings like a boss with method chaining:
```go
s := cat.New(", ").
Add("apple").
If(user.IsVIP, "golden kiwi").
Add("orange").
Sep(" | "). // Change separator mid-way
Add("banana").
String()
// "apple, golden kiwi, orange | banana"
```
### 2. Zero-Allocation Magic
- **Pooled builders** (optional) reduce GC pressure
- **Unsafe byte conversions** (opt-in) avoid `[]byte` → `string` copies
- **Stack buffers** for numbers instead of heap allocations
```go
// Enable performance features
cat.Pool(true) // Builder pooling
cat.SetUnsafeBytes(true) // Zero-copy []byte conversion
```
### 3. Handles Any Type - Even Nested Ones!
No more manual type conversions:
```go
data := map[string]any{
"id": 12345,
"tags": []string{"go", "fast", "efficient"},
}
fmt.Println(cat.JSONPretty(data))
// {
// "id": 12345,
// "tags": ["go", "fast", "efficient"]
// }
```
### 4. Concatenation for Every Use Case
```go
// Simple joins
cat.With(", ", "apple", "banana", "cherry") // "apple, banana, cherry"
// File paths
cat.Path("dir", "sub", "file.txt") // "dir/sub/file.txt"
// CSV
cat.CSV(1, 2, 3) // "1,2,3"
// Conditional elements
cat.Start("Hello").If(user != nil, " ", user.Name) // "Hello" or "Hello Alice"
// Repeated patterns
cat.RepeatWith("-+", "X", 3) // "X-+X-+X"
```
### 5. Smarter Than Your Average String Lib
```go
// Automatic nesting handling
nested := []any{"a", []any{"b", "c"}, "d"}
cat.FlattenWith(",", nested) // "a,b,c,d"
// Precise size estimation (minimizes allocations)
b := cat.New(", ").Grow(estimatedSize) // Preallocate exactly what you need
// Reflection support for any type
cat.Reflect(anyComplexStruct) // "{Field1:value Field2:[1 2 3]}"
```
## 🚀 Getting Started
```bash
go get github.com/olekukonko/cat
```
```go
import "github.com/olekukonko/cat"
func main() {
// Simple concatenation
msg := cat.Concat("User ", userID, " has ", count, " items")
// Pooled builder (for high-performance loops)
builder := cat.New(", ")
defer builder.Release() // Return to pool
result := builder.Add(items...).String()
}
```
## 🤔 Why Not Just Use...?
- `fmt.Sprintf` - Slow, many allocations
- `strings.Join` - Only works with strings
- `bytes.Buffer` - No separator support, manual type handling
- `string +` - Even worse performance, especially in loops
## 💡 Pro Tips
1. **Enable pooling** in high-throughput scenarios
2. **Preallocate** with `.Grow()` when you know the final size
3. Use **`If()`** for conditional elements in fluent chains
4. Try **`SetUnsafeBytes(true)`** if you can guarantee byte slices won't mutate
5. **Release builders** when pooling is enabled
## 🐱‍👤 Advanced Usage
```go
// Custom value formatting
type User struct {
Name string
Age int
}
func (u User) String() string {
return cat.With(" ", u.Name, cat.Wrap("(", u.Age, ")"))
}
// JSON-like output
func JSONPretty(v any) string {
return cat.WrapWith(",\n ", "{\n ", "\n}", prettyFields(v))
}
```
```text
/\_/\
( o.o ) > Concatenate with purr-fection!
> ^ <
```
**`cat`** - Because life's too short for ugly string building code. 😻

124
vendor/github.com/olekukonko/cat/builder.go generated vendored Normal file

@@ -0,0 +1,124 @@
package cat
import (
"strings"
)
// Builder is a fluent concatenation helper. It is safe for concurrent use by
// multiple goroutines only if each goroutine uses a distinct *Builder.
// If pooling is enabled via Pool(true), call Release() when done.
// The Builder uses an internal strings.Builder for efficient string concatenation
// and manages a separator that is inserted between added values.
// It supports chaining methods for a fluent API style.
type Builder struct {
buf strings.Builder
sep string
needsSep bool
}
// New begins a new Builder with a separator. If pooling is enabled,
// the Builder is reused and MUST be released with b.Release() when done.
// If sep is empty, uses DefaultSep().
// Optional initial arguments x are added immediately after creation.
// Pooling is controlled globally via Pool(true/false); when enabled, Builders
// are recycled to reduce allocations in high-throughput scenarios.
func New(sep string, x ...any) *Builder {
var b *Builder
if poolEnabled.Load() {
b = builderPool.Get().(*Builder)
b.buf.Reset()
b.sep = sep
b.needsSep = false
} else {
b = &Builder{sep: sep}
}
// Process initial arguments *after* the builder is prepared.
if len(x) > 0 {
b.Add(x...)
}
return b
}
// Start begins a new Builder with no separator (using an empty string as sep).
// It is a convenience function that wraps New(empty, x...), where empty is a constant empty string.
// This allows starting a concatenation without any separator between initial or subsequent additions.
// If pooling is enabled via Pool(true), the returned Builder MUST be released with b.Release() when done.
// Optional variadic arguments x are passed directly to New and added immediately after creation.
// Useful for fluent chains where no default separator is desired from the start.
func Start(x ...any) *Builder {
return New(empty, x...)
}
// Grow pre-sizes the internal buffer.
// This can be used to preallocate capacity based on an estimated total size,
// reducing reallocations during subsequent Add calls.
// It chains, returning the Builder for fluent use.
func (b *Builder) Grow(n int) *Builder { b.buf.Grow(n); return b }
// Add appends values to the builder.
// It inserts the current separator before each new value if needed (i.e., after the first addition).
// Values are converted to strings using the optimized write function, which handles
// common types efficiently without allocations where possible.
// Supports any number of arguments of any type.
// Chains, returning the Builder for fluent use.
func (b *Builder) Add(args ...any) *Builder {
for _, arg := range args {
if b.needsSep && b.sep != empty {
b.buf.WriteString(b.sep)
}
write(&b.buf, arg)
b.needsSep = true
}
return b
}
// If appends values to the builder only if the condition is true.
// Behaves like Add when condition is true; does nothing otherwise.
// Useful for conditional concatenation in chains.
// Chains, returning the Builder for fluent use.
func (b *Builder) If(condition bool, args ...any) *Builder {
if condition {
b.Add(args...)
}
return b
}
// Sep changes the separator for subsequent additions.
// Future Add calls will use this new separator.
// Does not affect already added content.
// If sep is empty, no separator will be added between future values.
// Chains, returning the Builder for fluent use.
func (b *Builder) Sep(sep string) *Builder { b.sep = sep; return b }
// String returns the concatenated result.
// This does not release the Builder; if pooling is enabled, call Release separately
// if you are done with the Builder.
// Can be called multiple times; the internal buffer remains unchanged.
func (b *Builder) String() string { return b.buf.String() }
// Output returns the concatenated result and releases the Builder if pooling is enabled.
// This is a convenience method to get the string and clean up in one call.
// After Output, the Builder should not be used further if pooled, as it may be recycled.
// If pooling is disabled, it behaves like String without release.
func (b *Builder) Output() string {
out := b.buf.String()
b.Release() // Release takes care of the poolEnabled check
return out
}
// Release returns the Builder to the pool if pooling is enabled.
// You should call this exactly once per New() when Pool(true) is active.
// Resets the internal state (buffer, separator, needsSep) before pooling to avoid
// retaining data or large allocations.
// If pooling is disabled, this is a no-op.
// Safe to call multiple times, but typically called once at the end of use.
func (b *Builder) Release() {
if poolEnabled.Load() {
// Avoid retaining large buffers.
b.buf.Reset()
b.sep = empty
b.needsSep = false
builderPool.Put(b)
}
}

117
vendor/github.com/olekukonko/cat/cat.go generated vendored Normal file

@@ -0,0 +1,117 @@
// Package cat provides efficient and flexible string concatenation utilities.
// It includes optimized functions for concatenating various types, builders for fluent chaining,
// and configuration options for defaults, pooling, and unsafe optimizations.
// The package aims to minimize allocations and improve performance in string building scenarios.
package cat
import (
"sync"
"sync/atomic"
)
// Constants used throughout the package for separators, defaults, and configuration.
// These include common string literals for separators, empty strings, and special representations,
// as well as limits like recursion depth. Defining them as constants allows for compile-time
// optimizations, readability, and consistent usage in functions like Space, Path, CSV, and reflection handlers.
// cat.go (updated constants section)
const (
empty = "" // Empty string constant, used for checks and defaults.
space = " " // Single space, default separator.
slash = "/" // Forward slash, for paths.
dot = "." // Period, for extensions or decimals.
comma = "," // Comma, for CSV or lists.
equal = "=" // Equals, for comparisons.
newline = "\n" // Newline, for multi-line strings.
// SQL-specific constants
and = "AND" // AND operator, for SQL conditions.
inOpen = " IN (" // Opening for SQL IN clause
inClose = ")" // Closing for SQL IN clause
asSQL = " AS " // SQL AS for aliasing
count = "COUNT(" // SQL COUNT function prefix
sum = "SUM(" // SQL SUM function prefix
avg = "AVG(" // SQL AVG function prefix
maxOpen = "MAX(" // SQL MAX function prefix
minOpen = "MIN(" // SQL MIN function prefix
caseSQL = "CASE " // SQL CASE keyword
when = "WHEN " // SQL WHEN clause
then = " THEN " // SQL THEN clause
elseSQL = " ELSE " // SQL ELSE clause
end = " END" // SQL END for CASE
countAll = "COUNT(*)" // SQL COUNT(*) for all rows
parenOpen = "(" // Opening parenthesis
parenClose = ")" // Closing parenthesis
maxRecursionDepth = 32 // Maximum recursion depth for nested structure handling.
nilString = "<nil>" // String representation for nil values.
unexportedString = "<?>" // Placeholder for unexported fields.
)
// Numeric is a generic constraint interface for numeric types.
// It includes all signed/unsigned integers and floats.
// Used in generic functions like Number and NumberWith to constrain to numbers.
type Numeric interface {
~int | ~int8 | ~int16 | ~int32 | ~int64 | ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~float32 | ~float64
}
// poolEnabled controls whether New() reuses Builder instances from a pool.
// Atomic.Bool for thread-safe toggle.
// When true, Builders from New must be Released to avoid leaks.
var poolEnabled atomic.Bool
// builderPool stores reusable *Builder to reduce GC pressure on hot paths.
// Uses sync.Pool for efficient allocation/reuse.
// New func creates a fresh &Builder when pool is empty.
var builderPool = sync.Pool{
New: func() any { return &Builder{} },
}
// Pool enables or disables Builder pooling for New()/Release().
// When enabled, you MUST call b.Release() after b.String() to return it.
// Thread-safe via atomic.Store.
// Enable for high-throughput scenarios to reduce allocations.
func Pool(enable bool) { poolEnabled.Store(enable) }
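The pooling toggle above pairs an atomic.Bool gate with a sync.Pool, as New and Release show. A standalone sketch of that mechanism (the `get`/`release` names below are illustrative, not the package's API):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
	"sync/atomic"
)

var enabled atomic.Bool

var pool = sync.Pool{New: func() any { return &strings.Builder{} }}

// get returns a recycled builder when pooling is on, a fresh one otherwise.
// Pooled builders are reset before handing out, so no old content leaks.
func get() *strings.Builder {
	if enabled.Load() {
		b := pool.Get().(*strings.Builder)
		b.Reset()
		return b
	}
	return &strings.Builder{}
}

// release resets the builder and returns it to the pool when pooling is on;
// otherwise it is a no-op and the builder is left for the GC.
func release(b *strings.Builder) {
	if enabled.Load() {
		b.Reset()
		pool.Put(b)
	}
}

func main() {
	enabled.Store(true)
	b := get()
	b.WriteString("hello")
	out := b.String()
	release(b) // b must not be used after this
	fmt.Println(out)
}
```

Note the ordering: the result is captured with String() before Release, since a pooled builder may be recycled immediately.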
// unsafeBytesFlag controls zero-copy []byte -> string behavior via atomics.
// Int32 used for atomic operations: 1 = enabled, 0 = disabled.
// Affects bytesToString function for zero-copy conversions using unsafe.
var unsafeBytesFlag atomic.Int32 // 1 = true, 0 = false
// SetUnsafeBytes toggles zero-copy []byte -> string conversions globally.
// When enabled, bytesToString uses unsafe.String for zero-allocation conversion.
// Thread-safe via atomic.Store.
// Use with caution: assumes the byte slice is not modified after conversion.
// Compatible with Go 1.20+; fallback to string(bts) if disabled.
func SetUnsafeBytes(enable bool) {
if enable {
unsafeBytesFlag.Store(1)
} else {
unsafeBytesFlag.Store(0)
}
}
// IsUnsafeBytes reports whether zero-copy []byte -> string is enabled.
// Thread-safe via atomic.Load.
// Returns true if flag is 1, false otherwise.
// Useful for checking current configuration.
func IsUnsafeBytes() bool { return unsafeBytesFlag.Load() == 1 }
// deterministicMaps controls whether map keys are sorted for deterministic output in string conversions.
// It uses atomic.Bool for thread-safe access.
var deterministicMaps atomic.Bool
// SetDeterministicMaps controls whether map keys are sorted for deterministic output
// in reflection-based handling (e.g., in writeReflect for maps).
// When enabled, keys are sorted using a string-based comparison for consistent string representations.
// Thread-safe via atomic.Store.
// Useful for reproducible outputs in testing or logging.
func SetDeterministicMaps(enable bool) {
deterministicMaps.Store(enable)
}
// IsDeterministicMaps returns current map sorting setting.
// Thread-safe via atomic.Load.
// Returns true if deterministic sorting is enabled, false otherwise.
func IsDeterministicMaps() bool {
return deterministicMaps.Load()
}
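Deterministic map output, as described for SetDeterministicMaps, comes down to sorting keys before rendering. A minimal sketch of that idea (the `renderMap` helper below is illustrative, not the package's writeReflect):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderMap formats a map with its keys sorted, so repeated calls always
// produce identical output - the behavior SetDeterministicMaps(true) enables.
// Without sorting, Go's map iteration order is deliberately randomized.
func renderMap(m map[string]int) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%d", k, m[k]))
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(renderMap(map[string]int{"b": 2, "a": 1}))
}
```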

590
vendor/github.com/olekukonko/cat/concat.go generated vendored Normal file

@@ -0,0 +1,590 @@
package cat
import (
"reflect"
"strings"
)
// Append appends args to dst and returns the grown slice.
// Callers can reuse dst across calls to amortize allocs.
// It uses an internal Builder for efficient concatenation of the args (no separators),
// then appends the result to the dst byte slice.
// Preallocates based on a size estimate to minimize reallocations.
// Benefits from Builder pooling if enabled.
// Useful for building byte slices incrementally without separators.
func Append(dst []byte, args ...any) []byte {
return AppendWith(empty, dst, args...)
}
// AppendWith appends args to dst and returns the grown slice.
// Callers can reuse dst across calls to amortize allocs.
// Similar to Append, but inserts the specified sep between each arg.
// Preallocates based on a size estimate including separators.
// Benefits from Builder pooling if enabled.
// Useful for building byte slices incrementally with custom separators.
func AppendWith(sep string, dst []byte, args ...any) []byte {
if len(args) == 0 {
return dst
}
b := New(sep)
b.Grow(estimateWith(sep, args))
b.Add(args...)
out := b.Output()
return append(dst, out...)
}
// AppendBytes joins byte slices without separators.
// Only for compatibility with low-level byte processing.
// Directly appends each []byte arg to dst without any conversion or separators.
// Efficient for pure byte concatenation; no allocations if dst has capacity.
// Returns the extended dst slice.
// Does not use Builder, as it's simple append operations.
func AppendBytes(dst []byte, args ...[]byte) []byte {
if len(args) == 0 {
return dst
}
for _, b := range args {
dst = append(dst, b...)
}
return dst
}
// AppendTo writes arguments to an existing strings.Builder.
// More efficient than creating new builders.
// Appends each arg to the provided strings.Builder using the optimized write function.
// No separators are added; for direct concatenation.
// Useful when you already have a strings.Builder and want to add more values efficiently.
// Does not use cat.Builder, as it appends to an existing strings.Builder.
func AppendTo(b *strings.Builder, args ...any) {
for _, arg := range args {
write(b, arg)
}
}
// AppendStrings writes strings to an existing strings.Builder.
// Directly writes each string arg to the provided strings.Builder.
// No type checks or conversions; assumes all args are strings.
// Efficient for appending known strings without separators.
// Does not use cat.Builder, as it appends to an existing strings.Builder.
func AppendStrings(b *strings.Builder, ss ...string) {
for _, s := range ss {
b.WriteString(s)
}
}
// Between concatenates values wrapped between x and y (no separator between args).
// Equivalent to BetweenWith with an empty separator.
func Between(x, y any, args ...any) string {
return BetweenWith(empty, x, y, args...)
}
// BetweenWith concatenates values wrapped between x and y, using sep between x, args, and y.
// Uses a pooled Builder if enabled; releases it after use.
// Equivalent to With(sep, x, args..., y).
func BetweenWith(sep string, x, y any, args ...any) string {
b := New(sep)
// Estimate size for all parts to avoid re-allocation.
b.Grow(estimate([]any{x, y}) + estimateWith(sep, args))
b.Add(x)
b.Add(args...)
b.Add(y)
return b.Output()
}
// CSV joins arguments with "," separators (no space).
// Convenience wrapper for With using a comma as separator.
// Useful for simple CSV string generation without spaces.
func CSV(args ...any) string { return With(comma, args...) }
// Comma joins arguments with ", " separators.
// Convenience wrapper for With using ", " as separator.
// Useful for human-readable lists with comma and space.
func Comma(args ...any) string { return With(comma+space, args...) }
// Concat concatenates any values (no separators).
// Usage: cat.Concat("a", 1, true) → "a1true"
// Equivalent to With with an empty separator.
func Concat(args ...any) string {
return With(empty, args...)
}
// ConcatWith concatenates any values with separator.
// Alias for With; joins args with the provided sep.
func ConcatWith(sep string, args ...any) string {
return With(sep, args...)
}
// Flatten joins nested values into a single concatenation using empty.
// Convenience for FlattenWith using empty.
func Flatten(args ...any) string {
return FlattenWith(empty, args...)
}
// FlattenWith joins nested values into a single concatenation with sep, avoiding
// intermediate slice allocations where possible.
// It recursively flattens any nested []any arguments, concatenating all leaf items
// with sep between them. Skips empty nested slices to avoid extra separators.
// Leaf items (non-slices) are converted using the optimized write function.
// Uses a pooled Builder if enabled; releases it after use.
// Preallocates based on a recursive estimate for efficiency.
// Example: FlattenWith(",", 1, []any{2, []any{3,4}}, 5) → "1,2,3,4,5"
func FlattenWith(sep string, args ...any) string {
if len(args) == 0 {
return empty
}
// Recursive estimate for preallocation.
totalSize := recursiveEstimate(sep, args)
b := New(sep)
b.Grow(totalSize)
recursiveAdd(b, args)
return b.Output()
}
// Group joins multiple groups with empty between groups (no intra-group separators).
// Convenience for GroupWith using empty.
func Group(groups ...[]any) string {
return GroupWith(empty, groups...)
}
// GroupWith joins multiple groups with a separator between groups (no intra-group separators).
// Concatenates each group internally without separators, then joins non-empty groups with sep.
// Preestimates total size for allocation; uses pooled Builder if enabled.
// Optimized for single group: direct Concat.
// Useful for grouping related items with inter-group separation.
func GroupWith(sep string, groups ...[]any) string {
if len(groups) == 0 {
return empty
}
if len(groups) == 1 {
return Concat(groups[0]...)
}
total := 0
nonEmpty := 0
for _, g := range groups {
if len(g) == 0 {
continue
}
if nonEmpty > 0 {
total += len(sep)
}
total += estimate(g)
nonEmpty++
}
b := New(empty)
b.Grow(total)
first := true
for _, g := range groups {
if len(g) == 0 {
continue
}
if !first && sep != empty {
b.buf.WriteString(sep)
}
first = false
for _, a := range g {
write(&b.buf, a)
}
}
return b.Output()
}
// Indent prefixes the concatenation of args with depth levels of two spaces per level.
// Example: Indent(2, "hello") => "    hello"
// If depth <= 0, equivalent to Concat(args...).
// Uses "  " (two spaces) repeated depth times as prefix, followed by concatenated args (no separators).
// Benefits from pooling via Concat.
func Indent(depth int, args ...any) string {
if depth <= 0 {
return Concat(args...)
}
prefix := strings.Repeat("  ", depth)
return Prefix(prefix, args...)
}
// Join joins strings without a separator.
// Usage: cat.Join("a", "b") → "ab"
// Wraps strings.Join with the package's empty separator.
// Provided for stdlib-style ergonomics; use JoinWith for a custom separator.
func Join(elems ...string) string {
return strings.Join(elems, empty)
}
// JoinWith joins strings with separator (variadic version).
// Directly uses strings.Join on the variadic string args with sep.
// Efficient for known strings; no conversions needed.
func JoinWith(sep string, elems ...string) string {
return strings.Join(elems, sep)
}
// Lines joins arguments with newline separators.
// Convenience for With using "\n" as separator.
// Useful for building multi-line strings.
func Lines(args ...any) string { return With(newline, args...) }
// Number concatenates numeric values without separators.
// Generic over Numeric types.
// Equivalent to NumberWith with empty sep.
func Number[T Numeric](a ...T) string {
return NumberWith(empty, a...)
}
// NumberWith concatenates numeric values with the provided separator.
// Generic over Numeric types.
// If no args, returns empty string.
// Uses pooled Builder if enabled, with rough growth estimate (8 bytes per item).
// Relies on valueToString for numeric conversion.
func NumberWith[T Numeric](sep string, a ...T) string {
if len(a) == 0 {
return empty
}
b := New(sep)
b.Grow(len(a) * 8)
for _, v := range a {
b.Add(v)
}
return b.Output()
}
// Path joins arguments with "/" separators.
// Convenience for With using "/" as separator.
// Useful for building file paths or URLs.
func Path(args ...any) string { return With(slash, args...) }
// Prefix concatenates with a prefix (no separator).
// Equivalent to PrefixWith with empty sep.
func Prefix(p any, args ...any) string {
return PrefixWith(empty, p, args...)
}
// PrefixWith concatenates with a prefix and separator.
// Adds p, then sep (if args present and sep not empty), then joins args with sep.
// Uses pooled Builder if enabled.
func PrefixWith(sep string, p any, args ...any) string {
b := New(sep)
b.Grow(estimateWith(sep, args) + estimate([]any{p}))
b.Add(p)
b.Add(args...)
return b.Output()
}
// PrefixEach applies the same prefix to each argument and joins the pairs with sep.
// Example: PrefixEach("pre-", ",", "a","b") => "pre-a,pre-b"
// Preestimates size including prefixes and seps.
// Uses pooled Builder if enabled; manually adds sep between pairs, no sep between p and a.
// Returns empty if no args.
func PrefixEach(p any, sep string, args ...any) string {
if len(args) == 0 {
return empty
}
pSize := estimate([]any{p})
total := len(sep)*(len(args)-1) + estimate(args) + pSize*len(args)
b := New(empty)
b.Grow(total)
for i, a := range args {
if i > 0 && sep != empty {
b.buf.WriteString(sep)
}
write(&b.buf, p)
write(&b.buf, a)
}
return b.Output()
}
// Pair joins exactly two values (no separator).
// Equivalent to PairWith with empty sep.
func Pair(a, b any) string {
return PairWith(empty, a, b)
}
// PairWith joins exactly two values with a separator.
// Optimized for two args: uses With(sep, a, b).
func PairWith(sep string, a, b any) string {
return With(sep, a, b)
}
// Quote wraps each argument in double quotes, separated by spaces.
// Equivalent to QuoteWith with '"' as quote.
func Quote(args ...any) string {
return QuoteWith('"', args...)
}
// QuoteWith wraps each argument with the specified quote byte, separated by spaces.
// Wraps each arg with quote, writes arg, closes with quote; joins with space.
// Preestimates with quotes and spaces.
// Uses pooled Builder if enabled.
func QuoteWith(quote byte, args ...any) string {
if len(args) == 0 {
return empty
}
total := estimate(args) + 2*len(args) + len(space)*(len(args)-1)
b := New(empty)
b.Grow(total)
need := false
for _, a := range args {
if need {
b.buf.WriteString(space)
}
b.buf.WriteByte(quote)
write(&b.buf, a)
b.buf.WriteByte(quote)
need = true
}
return b.Output()
}
// Repeat concatenates val n times (no sep between instances).
// Equivalent to RepeatWith with empty sep.
func Repeat(val any, n int) string {
return RepeatWith(empty, val, n)
}
// RepeatWith concatenates val n times with sep between each instance.
// If n <= 0, returns an empty string.
// Optimized to make exactly one allocation; converts val once.
// Uses pooled Builder if enabled.
func RepeatWith(sep string, val any, n int) string {
if n <= 0 {
return empty
}
if n == 1 {
return valueToString(val)
}
b := New(sep)
b.Grow(n*estimate([]any{val}) + (n-1)*len(sep))
for i := 0; i < n; i++ {
b.Add(val)
}
return b.Output()
}
// Reflect converts a reflect.Value to its string representation.
// It handles all kinds of reflected values including primitives, structs, slices, maps, etc.
// For nil values, it returns the nilString constant ("<nil>").
// Unexported or inaccessible struct fields are skipped in the output.
// The output follows Go's syntax conventions where applicable (e.g., slices as [a, b], maps as {k:v}).
func Reflect(r reflect.Value) string {
if !r.IsValid() {
return nilString
}
var b strings.Builder
writeReflect(&b, r.Interface(), 0)
return b.String()
}
// Space concatenates arguments with space separators.
// Convenience for With using " " as separator.
func Space(args ...any) string { return With(space, args...) }
// Dot concatenates arguments with dot separators.
// Convenience for With using "." as separator.
func Dot(args ...any) string { return With(dot, args...) }
// Suffix concatenates with a suffix (no separator).
// Equivalent to SuffixWith with empty sep.
func Suffix(s any, args ...any) string {
return SuffixWith(empty, s, args...)
}
// SuffixWith concatenates with a suffix and separator.
// Joins args with sep, then adds sep (if args present and sep not empty), then s.
// Uses pooled Builder if enabled.
func SuffixWith(sep string, s any, args ...any) string {
b := New(sep)
b.Grow(estimateWith(sep, args) + estimate([]any{s}))
b.Add(args...)
b.Add(s)
return b.Output()
}
// SuffixEach applies the same suffix to each argument and joins the pairs with sep.
// Example: SuffixEach("-suf", " | ", "a","b") => "a-suf | b-suf"
// Preestimates size including suffixes and seps.
// Uses pooled Builder if enabled; manually adds sep between pairs, no sep between a and s.
// Returns empty if no args.
func SuffixEach(s any, sep string, args ...any) string {
if len(args) == 0 {
return empty
}
sSize := estimate([]any{s})
total := len(sep)*(len(args)-1) + estimate(args) + sSize*len(args)
b := New(empty)
b.Grow(total)
for i, a := range args {
if i > 0 && sep != empty {
b.buf.WriteString(sep)
}
write(&b.buf, a)
write(&b.buf, s)
}
return b.Output()
}
// Sprint concatenates any values (no separators).
// Usage: Sprint("a", 1, true) → "a1true"
// Equivalent to Concat or With with an empty separator.
func Sprint(args ...any) string {
if len(args) == 0 {
return empty
}
if len(args) == 1 {
return valueToString(args[0])
}
// For multiple args, use the existing Concat functionality
return Concat(args...)
}
// Trio joins exactly three values (no separator).
// Equivalent to TrioWith with empty sep.
func Trio(a, b, c any) string {
return TrioWith(empty, a, b, c)
}
// TrioWith joins exactly three values with a separator.
// Optimized for three args: uses With(sep, a, b, c).
func TrioWith(sep string, a, b, c any) string {
return With(sep, a, b, c)
}
// With concatenates arguments with the specified separator.
// Core concatenation function with sep.
// Optimized for zero or one arg: empty or direct valueToString.
// Fast path for all strings: exact preallocation, direct writes via raw strings.Builder (minimal branches/allocs).
// Fallback: pooled Builder with estimateWith, adds args with sep.
// Benefits from pooling if enabled for mixed types.
func With(sep string, args ...any) string {
switch len(args) {
case 0:
return empty
case 1:
return valueToString(args[0])
}
// Fast path for all strings: use raw strings.Builder for speed, no pooling needed.
allStrings := true
totalLen := len(sep) * (len(args) - 1)
for _, a := range args {
if s, ok := a.(string); ok {
totalLen += len(s)
} else {
allStrings = false
break
}
}
if allStrings {
var b strings.Builder
b.Grow(totalLen)
b.WriteString(args[0].(string))
for i := 1; i < len(args); i++ {
if sep != empty {
b.WriteString(sep)
}
b.WriteString(args[i].(string))
}
return b.String()
}
// Fallback for mixed types: use pooled Builder.
b := New(sep)
b.Grow(estimateWith(sep, args))
b.Add(args...)
return b.Output()
}
// Wrap encloses concatenated args between before and after strings (no inner separator).
// Equivalent to Concat(before, args..., after).
func Wrap(before, after string, args ...any) string {
b := Start()
b.Grow(len(before) + len(after) + estimate(args))
b.Add(before)
b.Add(args...)
b.Add(after)
return b.Output()
}
// WrapEach wraps each argument individually with before/after, concatenated without separators.
// Applies before + arg + after to each arg.
// Preestimates size; uses pooled Builder if enabled.
// Returns empty if no args.
// Useful for wrapping multiple items identically without joins.
func WrapEach(before, after string, args ...any) string {
if len(args) == 0 {
return empty
}
total := (len(before)+len(after))*len(args) + estimate(args)
b := Start() // Use pooled builder, but we will write manually.
b.Grow(total)
for _, a := range args {
write(&b.buf, before)
write(&b.buf, a)
write(&b.buf, after)
}
// No separators were ever added, so this is safe.
b.needsSep = true // Correctly set state in case of reuse.
return b.Output()
}
// WrapWith encloses concatenated args between before and after strings,
// joining the arguments with the provided separator.
// If no args, returns before + after.
// Builds inner with With(sep, args...), then Concat(before, inner, after).
// Benefits from pooling via With and Concat.
func WrapWith(sep, before, after string, args ...any) string {
if len(args) == 0 {
return before + after
}
// First, efficiently build the inner part.
inner := With(sep, args...)
// Then, wrap it without allocating another slice.
b := Start()
b.Grow(len(before) + len(inner) + len(after))
b.Add(before)
b.Add(inner)
b.Add(after)
return b.Output()
}
// Pad surrounds a string with spaces on both sides.
// Ensures proper spacing for SQL operators like "=", "AND", etc.
// Example: Pad("=") returns " = " for cleaner formatting.
func Pad(s string) string {
return Concat(space, s, space)
}
// PadWith adds a separator before the string and a space after it.
// Useful for formatting SQL parts with custom leading separators.
// Example: PadWith(",", "column") returns ",column ".
func PadWith(sep, s string) string {
return Concat(sep, s, space)
}
// Parens wraps content in parentheses
// Useful for grouping SQL conditions or expressions
// Example: Parens("a = b AND c = d") → "(a = b AND c = d)"
func Parens(content string) string {
return Concat(parenOpen, content, parenClose)
}
// ParensWith wraps multiple arguments in parentheses with a separator
// Example: ParensWith(" AND ", "a = b", "c = d") → "(a = b AND c = d)"
func ParensWith(sep string, args ...any) string {
return Concat(parenOpen, With(sep, args...), parenClose)
}

vendor/github.com/olekukonko/cat/fn.go generated vendored Normal file

@@ -0,0 +1,376 @@
package cat
import (
"fmt"
"reflect"
"sort"
"strconv"
"strings"
"unsafe"
)
// write writes a value to the given strings.Builder using fast paths to avoid temporary allocations.
// It handles common types like strings, byte slices, integers, floats, and booleans directly for efficiency.
// For other types, it falls back to fmt.Fprint, which may involve allocations.
// This function is optimized for performance in string concatenation scenarios, prioritizing
// common cases like strings and numbers at the top of the type switch for compiler optimization.
// Note: For integers and floats, it uses stack-allocated buffers and strconv.Append* functions to
// convert numbers to strings without heap allocations.
func write(b *strings.Builder, arg any) {
writeValue(b, arg, 0)
}
// writeValue appends the string representation of arg to b, handling recursion with a depth limit.
// It serves as a recursive helper for write, directly handling primitives and delegating complex
// types to writeReflect. The depth parameter prevents excessive recursion in deeply nested structures.
func writeValue(b *strings.Builder, arg any, depth int) {
// Handle recursion depth limit
if depth > maxRecursionDepth {
b.WriteString("...")
return
}
// Handle nil values
if arg == nil {
b.WriteString(nilString)
return
}
// Fast path type switch for all primitive types
switch v := arg.(type) {
case string:
b.WriteString(v)
case []byte:
b.WriteString(bytesToString(v))
case int:
var buf [20]byte
b.Write(strconv.AppendInt(buf[:0], int64(v), 10))
case int64:
var buf [20]byte
b.Write(strconv.AppendInt(buf[:0], v, 10))
case int32:
var buf [11]byte
b.Write(strconv.AppendInt(buf[:0], int64(v), 10))
case int16:
var buf [6]byte
b.Write(strconv.AppendInt(buf[:0], int64(v), 10))
case int8:
var buf [4]byte
b.Write(strconv.AppendInt(buf[:0], int64(v), 10))
case uint:
var buf [20]byte
b.Write(strconv.AppendUint(buf[:0], uint64(v), 10))
case uint64:
var buf [20]byte
b.Write(strconv.AppendUint(buf[:0], v, 10))
case uint32:
var buf [10]byte
b.Write(strconv.AppendUint(buf[:0], uint64(v), 10))
case uint16:
var buf [5]byte
b.Write(strconv.AppendUint(buf[:0], uint64(v), 10))
case uint8:
var buf [3]byte
b.Write(strconv.AppendUint(buf[:0], uint64(v), 10))
case float64:
var buf [24]byte
b.Write(strconv.AppendFloat(buf[:0], v, 'f', -1, 64))
case float32:
var buf [24]byte
b.Write(strconv.AppendFloat(buf[:0], float64(v), 'f', -1, 32))
case bool:
if v {
b.WriteString("true")
} else {
b.WriteString("false")
}
case fmt.Stringer:
b.WriteString(v.String())
case error:
b.WriteString(v.Error())
default:
// Fallback to reflection-based handling
writeReflect(b, arg, depth)
}
}
// writeReflect handles all complex types safely.
func writeReflect(b *strings.Builder, arg any, depth int) {
defer func() {
if r := recover(); r != nil {
b.WriteString("[!reflect panic!]")
}
}()
val := reflect.ValueOf(arg)
if val.Kind() == reflect.Ptr {
if val.IsNil() {
b.WriteString(nilString)
return
}
val = val.Elem()
}
switch val.Kind() {
case reflect.Slice, reflect.Array:
b.WriteByte('[')
for i := 0; i < val.Len(); i++ {
if i > 0 {
b.WriteString(", ") // Use comma-space for readability
}
writeValue(b, val.Index(i).Interface(), depth+1)
}
b.WriteByte(']')
case reflect.Struct:
typ := val.Type()
b.WriteByte('{') // Use {} for structs to follow Go convention
first := true
for i := 0; i < val.NumField(); i++ {
fieldValue := val.Field(i)
if !fieldValue.CanInterface() {
continue // Skip unexported fields
}
if !first {
b.WriteByte(' ') // Use space as separator
}
first = false
b.WriteString(typ.Field(i).Name)
b.WriteByte(':')
writeValue(b, fieldValue.Interface(), depth+1)
}
b.WriteByte('}')
case reflect.Map:
b.WriteByte('{')
keys := val.MapKeys()
sort.Slice(keys, func(i, j int) bool {
// A simple string-based sort for keys
return fmt.Sprint(keys[i].Interface()) < fmt.Sprint(keys[j].Interface())
})
for i, key := range keys {
if i > 0 {
b.WriteByte(' ') // Use space as separator
}
writeValue(b, key.Interface(), depth+1)
b.WriteByte(':')
writeValue(b, val.MapIndex(key).Interface(), depth+1)
}
b.WriteByte('}')
case reflect.Interface:
if val.IsNil() {
b.WriteString(nilString)
return
}
writeValue(b, val.Elem().Interface(), depth+1)
default:
fmt.Fprint(b, arg)
}
}
// valueToString converts any value to a string representation.
// It uses optimized paths for common types to avoid unnecessary allocations.
// For types like integers and floats, it directly uses strconv functions.
// This function is useful for single-argument conversions or as a helper in other parts of the package.
// Unlike write, it returns a string instead of appending to a builder.
func valueToString(arg any) string {
switch v := arg.(type) {
case string:
return v
case []byte:
return bytesToString(v)
case int:
return strconv.Itoa(v)
case int64:
return strconv.FormatInt(v, 10)
case int32:
return strconv.FormatInt(int64(v), 10)
case uint:
return strconv.FormatUint(uint64(v), 10)
case uint64:
return strconv.FormatUint(v, 10)
case float64:
return strconv.FormatFloat(v, 'f', -1, 64)
case bool:
if v {
return "true"
}
return "false"
case fmt.Stringer:
return v.String()
case error:
return v.Error()
default:
return fmt.Sprint(v)
}
}
// estimateWith calculates a conservative estimate of the total string length when concatenating
// the given arguments with a separator. This is used for preallocating capacity in strings.Builder
// to minimize reallocations during building.
// It accounts for the length of separators and estimates the length of each argument based on its type.
// If no arguments are provided, it returns 0.
func estimateWith(sep string, args []any) int {
if len(args) == 0 {
return 0
}
size := len(sep) * (len(args) - 1)
size += estimate(args)
return size
}
// estimate calculates a conservative estimate of the combined string length of the given arguments.
// It iterates over each argument and adds an estimated length based on its type:
// - Strings and byte slices: exact length.
// - Numbers: calculated digit count using numLen or uNumLen.
// - Floats and others: fixed conservative estimates (e.g., 16 or 24 bytes).
// This helper is used internally by estimateWith and focuses solely on the arguments without separators.
func estimate(args []any) int {
var size int
for _, a := range args {
switch v := a.(type) {
case string:
size += len(v)
case []byte:
size += len(v)
case int:
size += numLen(int64(v))
case int8:
size += numLen(int64(v))
case int16:
size += numLen(int64(v))
case int32:
size += numLen(int64(v))
case int64:
size += numLen(v)
case uint:
size += uNumLen(uint64(v))
case uint8:
size += uNumLen(uint64(v))
case uint16:
size += uNumLen(uint64(v))
case uint32:
size += uNumLen(uint64(v))
case uint64:
size += uNumLen(v)
case float32:
size += 16
case float64:
size += 24
case bool:
size += 5 // "false"
case fmt.Stringer, error:
size += 16 // conservative
default:
size += 16 // conservative
}
}
return size
}
// numLen returns the number of characters required to represent the signed integer n as a string.
// It handles negative numbers by adding 1 for the '-' sign and uses a loop to count digits.
// Special handling for math.MinInt64 to avoid overflow when negating.
// Returns 1 for 0, and up to 20 for the largest values.
func numLen(n int64) int {
if n == 0 {
return 1
}
c := 0
if n < 0 {
c = 1 // for '-'
// NOTE: math.MinInt64 cannot be negated without overflow; its string form
// ("-9223372036854775808") is exactly 20 characters, so return 20 directly.
if n == -1<<63 {
return 20
}
n = -n
}
for n > 0 {
n /= 10
c++
}
return c
}
// uNumLen returns the number of characters required to represent the unsigned integer n as a string.
// It uses a loop to count digits.
// Returns 1 for 0, and up to 20 for the largest uint64 values.
func uNumLen(n uint64) int {
if n == 0 {
return 1
}
c := 0
for n > 0 {
n /= 10
c++
}
return c
}
// bytesToString converts a byte slice to a string efficiently.
// If the package's UnsafeBytes flag is set (via IsUnsafeBytes()), it uses unsafe operations
// to create a string backed by the same memory as the byte slice, avoiding a copy.
// This is zero-allocation when unsafe is enabled.
// Falls back to standard string(bts) conversion otherwise.
// For empty slices, it returns a constant empty string.
// Compatible with Go 1.20+ unsafe functions like unsafe.String and unsafe.SliceData.
func bytesToString(bts []byte) string {
if len(bts) == 0 {
return empty
}
if IsUnsafeBytes() {
// Go 1.20+: unsafe.String and unsafe.SliceData were both introduced in Go 1.20.
return unsafe.String(unsafe.SliceData(bts), len(bts))
}
return string(bts)
}
// recursiveEstimate calculates the estimated string length for potentially nested arguments,
// including the lengths of separators between elements. It recurses on nested []any slices,
// flattening the structure while accounting for separators only between non-empty subparts.
// This function is useful for preallocating capacity in builders for nested concatenation operations.
func recursiveEstimate(sep string, args []any) int {
if len(args) == 0 {
return 0
}
size := 0
needsSep := false
for _, a := range args {
switch v := a.(type) {
case []any:
subSize := recursiveEstimate(sep, v)
if subSize > 0 {
if needsSep {
size += len(sep)
}
size += subSize
needsSep = true
}
default:
if needsSep {
size += len(sep)
}
size += estimate([]any{a})
needsSep = true
}
}
return size
}
// recursiveAdd appends the string representations of potentially nested arguments to the builder.
// It recurses on nested []any slices, effectively flattening the structure by adding leaf values
// directly via b.Add without inserting separators (separators are handled externally if needed).
// This function is designed for efficient concatenation of nested argument lists.
func recursiveAdd(b *Builder, args []any) {
for _, a := range args {
switch v := a.(type) {
case []any:
recursiveAdd(b, v)
default:
b.Add(a)
}
}
}

vendor/github.com/olekukonko/cat/sql.go generated vendored Normal file

@@ -0,0 +1,161 @@
package cat
// On builds a SQL ON clause comparing two columns across tables.
// Formats as: "table1.column1 = table2.column2" with proper spacing.
// Useful in JOIN conditions to match keys between tables.
func On(table1, column1, table2, column2 string) string {
return With(space,
With(dot, table1, column1),
Pad(equal),
With(dot, table2, column2),
)
}
// Using builds a SQL condition comparing two aliased columns.
// Formats as: "alias1.column1 = alias2.column2" for JOINs or filters.
// Helps when working with table aliases in complex queries.
func Using(alias1, column1, alias2, column2 string) string {
return With(space,
With(dot, alias1, column1),
Pad(equal),
With(dot, alias2, column2),
)
}
// And joins multiple SQL conditions with the AND operator.
// Adds spacing to ensure clean SQL output (e.g., "cond1 AND cond2").
// Accepts variadic arguments for flexible condition chaining.
func And(conditions ...any) string {
return With(Pad(and), conditions...)
}
// In creates a SQL IN clause with properly quoted values
// Example: In("status", "active", "pending") → "status IN ('active', 'pending')"
// Handles value quoting and comma separation automatically
func In(column string, values ...string) string {
if len(values) == 0 {
return Concat(column, inOpen, inClose)
}
quotedValues := make([]string, len(values))
for i, v := range values {
quotedValues[i] = "'" + v + "'"
}
return Concat(column, inOpen, JoinWith(comma+space, quotedValues...), inClose)
}
// As creates an aliased SQL expression
// Example: As("COUNT(*)", "total_count") → "COUNT(*) AS total_count"
func As(expression, alias string) string {
return Concat(expression, asSQL, alias)
}
// Count creates a COUNT expression with optional alias
// Example: Count("id") → "COUNT(id)"
// Example: Count("id", "total") → "COUNT(id) AS total"
// Example: Count("DISTINCT user_id", "unique_users") → "COUNT(DISTINCT user_id) AS unique_users"
func Count(column string, alias ...string) string {
expression := Concat(count, column, parenClose)
if len(alias) == 0 {
return expression
}
return As(expression, alias[0])
}
// CountAll creates COUNT(*) with optional alias
// Example: CountAll() → "COUNT(*)"
// Example: CountAll("total") → "COUNT(*) AS total"
func CountAll(alias ...string) string {
if len(alias) == 0 {
return countAll
}
return As(countAll, alias[0])
}
// Sum creates a SUM expression with optional alias
// Example: Sum("amount") → "SUM(amount)"
// Example: Sum("amount", "total") → "SUM(amount) AS total"
func Sum(column string, alias ...string) string {
expression := Concat(sum, column, parenClose)
if len(alias) == 0 {
return expression
}
return As(expression, alias[0])
}
// Avg creates an AVG expression with optional alias
// Example: Avg("score") → "AVG(score)"
// Example: Avg("score", "average") → "AVG(score) AS average"
func Avg(column string, alias ...string) string {
expression := Concat(avg, column, parenClose)
if len(alias) == 0 {
return expression
}
return As(expression, alias[0])
}
// Max creates a MAX expression with optional alias
// Example: Max("price") → "MAX(price)"
// Example: Max("price", "max_price") → "MAX(price) AS max_price"
func Max(column string, alias ...string) string {
expression := Concat(maxOpen, column, parenClose)
if len(alias) == 0 {
return expression
}
return As(expression, alias[0])
}
// Min creates a MIN expression with optional alias
// Example: Min("price") → "MIN(price)"
// Example: Min("price", "min_price") → "MIN(price) AS min_price"
func Min(column string, alias ...string) string {
expression := Concat(minOpen, column, parenClose)
if len(alias) == 0 {
return expression
}
return As(expression, alias[0])
}
// Case creates a SQL CASE expression with optional alias
// Example: Case("WHEN status = 'active' THEN 1 ELSE 0 END", "is_active") → "CASE WHEN status = 'active' THEN 1 ELSE 0 END AS is_active"
func Case(expression string, alias ...string) string {
caseExpr := Concat(caseSQL, expression)
if len(alias) == 0 {
return caseExpr
}
return As(caseExpr, alias[0])
}
// CaseWhen creates a complete SQL CASE expression from individual parts with proper value handling
// Example: CaseWhen("status =", "'active'", "1", "0", "is_active") → "CASE WHEN status = 'active' THEN 1 ELSE 0 END AS is_active"
// Example: CaseWhen("age >", "18", "'adult'", "'minor'", "age_group") → "CASE WHEN age > 18 THEN 'adult' ELSE 'minor' END AS age_group"
func CaseWhen(conditionPart string, conditionValue, thenValue, elseValue any, alias ...string) string {
condition := Concat(conditionPart, valueToString(conditionValue))
expression := Concat(
when, condition, then, valueToString(thenValue), elseSQL, valueToString(elseValue), end,
)
return Case(expression, alias...)
}
// CaseWhenMulti creates a SQL CASE expression with multiple WHEN clauses
// Example: CaseWhenMulti([]string{"status =", "age >"}, []any{"'active'", 18}, []any{1, "'adult'"}, 0, "result") → "CASE WHEN status = 'active' THEN 1 WHEN age > 18 THEN 'adult' ELSE 0 END AS result"
func CaseWhenMulti(conditionParts []string, conditionValues, thenValues []any, elseValue any, alias ...string) string {
if len(conditionParts) != len(conditionValues) || len(conditionParts) != len(thenValues) {
return "" // or handle error
}
var whenClauses []string
for i := 0; i < len(conditionParts); i++ {
condition := Concat(conditionParts[i], valueToString(conditionValues[i]))
whenClause := Concat(when, condition, then, valueToString(thenValues[i]))
whenClauses = append(whenClauses, whenClause)
}
expression := Concat(
JoinWith(space, whenClauses...),
elseSQL,
valueToString(elseValue),
end,
)
return Case(expression, alias...)
}


@@ -222,6 +222,13 @@ logger.Info("Slog log") // Output: level=INFO msg="Slog log" namespace=app class
ll.Stack("Critical error") // Output: [app] ERROR: Critical error [stack=...] (see example/stack.png)
```
4. **General Output**
Logs output in a structured way for inspection of public & private values.
```go
ll.Handler(lh.NewColorizedHandler(os.Stdout))
ll.Output(&SomeStructWithPrivateValues{})
```
#### Performance Tracking
Measure execution time for performance analysis.
```go


@@ -1,421 +0,0 @@
package ll
import (
"fmt"
"github.com/olekukonko/ll/lx"
"reflect"
"strconv"
"strings"
"unsafe"
)
const (
maxRecursionDepth = 20 // Maximum depth for recursive type handling to prevent stack overflow
nilString = "<nil>" // String representation for nil values
unexportedString = "<?>" // String representation for unexported fields
)
// Concat efficiently concatenates values without a separator using the default logger.
// It converts each argument to a string and joins them directly, optimizing for performance
// in logging scenarios. Thread-safe as it does not modify shared state.
// Example:
//
// msg := ll.Concat("Hello", 42, true) // Returns "Hello42true"
func Concat(args ...any) string {
return concat(args...)
}
// ConcatSpaced concatenates values with a space separator using the default logger.
// It converts each argument to a string and joins them with spaces, suitable for log message
// formatting. Thread-safe as it does not modify shared state.
// Example:
//
// msg := ll.ConcatSpaced("Hello", 42, true) // Returns "Hello 42 true"
func ConcatSpaced(args ...any) string {
return concatSpaced(args...)
}
// ConcatAll concatenates elements with a separator, prefix, and suffix using the default logger.
// It combines before, main, and after arguments with the specified separator, optimizing memory
// allocation for logging. Thread-safe as it does not modify shared state.
// Example:
//
// msg := ll.ConcatAll(",", []any{"prefix"}, []any{"suffix"}, "main")
// // Returns "prefix,main,suffix"
func ConcatAll(sep string, before, after []any, args ...any) string {
return concatenate(sep, before, after, args...)
}
// concat efficiently concatenates values without a separator.
// It converts each argument to a string and joins them directly, optimizing for performance
// in logging scenarios. Used internally by Concat and other logging functions.
// Example:
//
// msg := concat("Hello", 42, true) // Returns "Hello42true"
func concat(args ...any) string {
return concatWith("", args...)
}
// concatSpaced concatenates values with a space separator.
// It converts each argument to a string and joins them with spaces, suitable for formatting
// log messages. Used internally by ConcatSpaced.
// Example:
//
// msg := concatSpaced("Hello", 42, true) // Returns "Hello 42 true"
func concatSpaced(args ...any) string {
return concatWith(lx.Space, args...)
}
// concatWith concatenates values with a specified separator using optimized type handling.
// It builds a string from arguments, handling various types efficiently (strings, numbers,
// structs, etc.), and is used by concat and concatSpaced for log message construction.
// Thread-safe as it does not modify shared state.
// Example:
//
// msg := concatWith(",", "Hello", 42, true) // Returns "Hello,42,true"
func concatWith(sep string, args ...any) string {
switch len(args) {
case 0:
return ""
case 1:
return concatToString(args[0])
}
var b strings.Builder
b.Grow(concatEstimateArgs(sep, args))
for i, arg := range args {
if i > 0 {
b.WriteString(sep)
}
concatWriteValue(&b, arg, 0)
}
return b.String()
}
// concatenate concatenates elements with separators, prefixes, and suffixes efficiently.
// It combines before, main, and after arguments with the specified separator, optimizing
// memory allocation for complex log message formatting. Used internally by ConcatAll.
// Example:
//
// msg := concatenate(",", []any{"prefix"}, []any{"suffix"}, "main")
// // Returns "prefix,main,suffix"
func concatenate(sep string, before []any, after []any, args ...any) string {
totalLen := len(before) + len(after) + len(args)
switch totalLen {
case 0:
return ""
case 1:
switch {
case len(before) > 0:
return concatToString(before[0])
case len(args) > 0:
return concatToString(args[0])
default:
return concatToString(after[0])
}
}
var b strings.Builder
b.Grow(concatEstimateTotal(sep, before, after, args))
// Write before elements
concatWriteGroup(&b, sep, before)
// Write main arguments
if len(before) > 0 && len(args) > 0 {
b.WriteString(sep)
}
concatWriteGroup(&b, sep, args)
// Write after elements
if len(after) > 0 && (len(before) > 0 || len(args) > 0) {
b.WriteString(sep)
}
concatWriteGroup(&b, sep, after)
return b.String()
}
// concatWriteGroup writes a group of arguments to a strings.Builder with a separator.
// It handles each argument by converting it to a string, used internally by concatenate
// to process before, main, or after groups in log message construction.
// Example:
//
// var b strings.Builder
// concatWriteGroup(&b, ",", []any{"a", 42}) // Writes "a,42" to b
func concatWriteGroup(b *strings.Builder, sep string, group []any) {
for i, arg := range group {
if i > 0 {
b.WriteString(sep)
}
concatWriteValue(b, arg, 0)
}
}
// concatToString converts a single argument to a string efficiently.
// It handles common types (string, []byte, fmt.Stringer) with minimal overhead and falls
// back to fmt.Sprint for other types. Used internally by concat and concatenate.
// Example:
//
// s := concatToString("Hello") // Returns "Hello"
// s := concatToString([]byte{65, 66}) // Returns "AB"
func concatToString(arg any) string {
switch v := arg.(type) {
case string:
return v
case []byte:
return *(*string)(unsafe.Pointer(&v))
case fmt.Stringer:
return v.String()
case error:
return v.Error()
default:
return fmt.Sprint(v)
}
}
// concatEstimateTotal estimates the total string length for concatenate.
// It calculates the expected size of the concatenated string, including before, main, and
// after arguments with separators, to preallocate the strings.Builder capacity.
// Example:
//
// size := concatEstimateTotal(",", []any{"prefix"}, []any{"suffix"}, "main")
// // Returns estimated length for "prefix,main,suffix"
func concatEstimateTotal(sep string, before, after, args []any) int {
size := 0
if len(before) > 0 {
size += concatEstimateArgs(sep, before)
}
if len(args) > 0 {
if size > 0 {
size += len(sep)
}
size += concatEstimateArgs(sep, args)
}
if len(after) > 0 {
if size > 0 {
size += len(sep)
}
size += concatEstimateArgs(sep, after)
}
return size
}
// concatEstimateArgs estimates the string length for a group of arguments.
// It sums the estimated sizes of each argument plus separators, used by concatEstimateTotal
// and concatWith to optimize memory allocation for log message construction.
// Example:
//
// size := concatEstimateArgs(",", []any{"hello", 42}) // Returns estimated length for "hello,42"
func concatEstimateArgs(sep string, args []any) int {
if len(args) == 0 {
return 0
}
size := len(sep) * (len(args) - 1)
for _, arg := range args {
size += concatEstimateSize(arg)
}
return size
}
// concatEstimateSize estimates the string length for a single argument.
// It provides size estimates for various types (strings, numbers, booleans, etc.) to
// optimize strings.Builder capacity allocation in logging functions.
// Example:
//
// size := concatEstimateSize("hello") // Returns 5
// size := concatEstimateSize(42) // Returns ~2
func concatEstimateSize(arg any) int {
switch v := arg.(type) {
case string:
return len(v)
case []byte:
return len(v)
case int:
return concatNumLen(int64(v))
case int64:
return concatNumLen(v)
case int32:
return concatNumLen(int64(v))
case int16:
return concatNumLen(int64(v))
case int8:
return concatNumLen(int64(v))
case uint:
return concatNumLen(uint64(v))
case uint64:
return concatNumLen(v)
case uint32:
return concatNumLen(uint64(v))
case uint16:
return concatNumLen(uint64(v))
case uint8:
return concatNumLen(uint64(v))
case float64:
return 24 // Max digits for float64
case float32:
return 16 // Max digits for float32
case bool:
if v {
return 4 // "true"
}
return 5 // "false"
case fmt.Stringer:
return 16 // Conservative estimate
default:
return 16 // Default estimate
}
}
// concatNumLen estimates the string length for a signed or unsigned integer.
// It returns a conservative estimate (20 digits) for int64 or uint64 values, including
// a sign for negative numbers, used by concatEstimateSize for memory allocation.
// Example:
//
// size := concatNumLen(int64(-123)) // Returns 20
// size := concatNumLen(uint64(123)) // Returns 20
func concatNumLen[T int64 | uint64](v T) int {
if v < 0 {
return 20 // Max digits for int64 + sign
}
return 20 // Max digits for uint64
}
// concatWriteValue writes a formatted value to a strings.Builder with recursion depth tracking.
// It handles various types (strings, numbers, structs, slices, etc.) and prevents infinite
// recursion by limiting depth. Used internally by concatWith and concatWriteGroup for log
// message formatting.
// Example:
//
// var b strings.Builder
// concatWriteValue(&b, "hello", 0) // Writes "hello" to b
// concatWriteValue(&b, []int{1, 2}, 0) // Writes "[1,2]" to b
func concatWriteValue(b *strings.Builder, arg any, depth int) {
if depth > maxRecursionDepth {
b.WriteString("...")
return
}
if arg == nil {
b.WriteString(nilString)
return
}
if s, ok := arg.(fmt.Stringer); ok {
b.WriteString(s.String())
return
}
switch v := arg.(type) {
case string:
b.WriteString(v)
case []byte:
b.Write(v)
case int:
b.WriteString(strconv.FormatInt(int64(v), 10))
case int64:
b.WriteString(strconv.FormatInt(v, 10))
case int32:
b.WriteString(strconv.FormatInt(int64(v), 10))
case int16:
b.WriteString(strconv.FormatInt(int64(v), 10))
case int8:
b.WriteString(strconv.FormatInt(int64(v), 10))
case uint:
b.WriteString(strconv.FormatUint(uint64(v), 10))
case uint64:
b.WriteString(strconv.FormatUint(v, 10))
case uint32:
b.WriteString(strconv.FormatUint(uint64(v), 10))
case uint16:
b.WriteString(strconv.FormatUint(uint64(v), 10))
case uint8:
b.WriteString(strconv.FormatUint(uint64(v), 10))
case float64:
b.WriteString(strconv.FormatFloat(v, 'f', -1, 64))
case float32:
b.WriteString(strconv.FormatFloat(float64(v), 'f', -1, 32))
case bool:
if v {
b.WriteString("true")
} else {
b.WriteString("false")
}
default:
val := reflect.ValueOf(arg)
if val.Kind() == reflect.Ptr {
if val.IsNil() {
b.WriteString(nilString)
return
}
val = val.Elem()
}
switch val.Kind() {
case reflect.Slice, reflect.Array:
concatFormatSlice(b, val, depth)
case reflect.Struct:
concatFormatStruct(b, val, depth)
default:
fmt.Fprint(b, v)
}
}
}
// concatFormatSlice formats a slice or array for logging.
// It writes the elements in a bracketed, comma-separated format, handling nested types
// recursively with depth tracking. Used internally by concatWriteValue for log message formatting.
// Example:
//
// var b strings.Builder
// val := reflect.ValueOf([]int{1, 2})
// concatFormatSlice(&b, val, 0) // Writes "[1,2]" to b
func concatFormatSlice(b *strings.Builder, val reflect.Value, depth int) {
b.WriteByte('[')
for i := 0; i < val.Len(); i++ {
if i > 0 {
b.WriteByte(',')
}
concatWriteValue(b, val.Index(i).Interface(), depth+1)
}
b.WriteByte(']')
}
// concatFormatStruct formats a struct for logging.
// It writes the structs exported fields in a bracketed, name:value format, handling nested
// types recursively with depth tracking. Unexported fields are represented as "<?>".
// Used internally by concatWriteValue for log message formatting.
// Example:
//
// var b strings.Builder
// val := reflect.ValueOf(struct{ Name string }{Name: "test"})
// concatFormatStruct(&b, val, 0) // Writes "[Name:test]" to b
func concatFormatStruct(b *strings.Builder, val reflect.Value, depth int) {
typ := val.Type()
b.WriteByte('[')
first := true
for i := 0; i < val.NumField(); i++ {
field := typ.Field(i)
fieldValue := val.Field(i)
if !first {
b.WriteString("; ")
}
first = false
b.WriteString(field.Name)
b.WriteByte(':')
if !fieldValue.CanInterface() {
b.WriteString(unexportedString)
continue
}
concatWriteValue(b, fieldValue.Interface(), depth+1)
}
b.WriteByte(']')
}
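The deleted file above centers on one pattern: estimate the final string length per type, pre-grow a single `strings.Builder`, then write each value exactly once. A minimal standalone sketch of that pattern, with illustrative function names that are not part of the `ll` package:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// estimateSize returns a rough byte estimate for one value, mirroring the
// conservative per-type estimates used by the deleted concat helpers.
func estimateSize(arg any) int {
	switch v := arg.(type) {
	case string:
		return len(v)
	case int:
		return 20 // max digits for int64 plus sign
	case bool:
		return 5 // "false" is the longer literal
	default:
		return 16 // conservative default
	}
}

// joinWith concatenates values with sep, growing the builder once up front
// so the common case needs a single allocation.
func joinWith(sep string, args ...any) string {
	if len(args) == 0 {
		return ""
	}
	size := len(sep) * (len(args) - 1)
	for _, a := range args {
		size += estimateSize(a)
	}
	var b strings.Builder
	b.Grow(size)
	for i, a := range args {
		if i > 0 {
			b.WriteString(sep)
		}
		switch v := a.(type) {
		case string:
			b.WriteString(v)
		case int:
			b.WriteString(strconv.Itoa(v))
		case bool:
			b.WriteString(strconv.FormatBool(v))
		default:
			fmt.Fprint(&b, v)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(joinWith(" ", "Hello", 42, true)) // prints "Hello 42 true"
}
```

Overestimating is deliberate: a too-small `Grow` just means one extra allocation, while an exact count would cost a second pass over the arguments.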


@@ -2,6 +2,7 @@ package ll
import (
"fmt"
"github.com/olekukonko/cat"
"github.com/olekukonko/ll/lx"
"os"
"strings"
@@ -50,7 +51,7 @@ func (fb *FieldBuilder) Info(args ...any) {
return
}
// Log at Info level with the builders fields, no stack trace
- fb.logger.log(lx.LevelInfo, lx.ClassText, concatSpaced(args...), fb.fields, false)
+ fb.logger.log(lx.LevelInfo, lx.ClassText, cat.Space(args...), fb.fields, false)
}
// Infof logs a message at Info level with the builders fields.
@@ -85,7 +86,7 @@ func (fb *FieldBuilder) Debug(args ...any) {
return
}
// Log at Debug level with the builders fields, no stack trace
- fb.logger.log(lx.LevelDebug, lx.ClassText, concatSpaced(args...), fb.fields, false)
+ fb.logger.log(lx.LevelDebug, lx.ClassText, cat.Space(args...), fb.fields, false)
}
// Debugf logs a message at Debug level with the builders fields.
@@ -120,7 +121,7 @@ func (fb *FieldBuilder) Warn(args ...any) {
return
}
// Log at Warn level with the builders fields, no stack trace
- fb.logger.log(lx.LevelWarn, lx.ClassText, concatSpaced(args...), fb.fields, false)
+ fb.logger.log(lx.LevelWarn, lx.ClassText, cat.Space(args...), fb.fields, false)
}
// Warnf logs a message at Warn level with the builders fields.
@@ -154,7 +155,7 @@ func (fb *FieldBuilder) Error(args ...any) {
return
}
// Log at Error level with the builders fields, no stack trace
- fb.logger.log(lx.LevelError, lx.ClassText, concatSpaced(args...), fb.fields, false)
+ fb.logger.log(lx.LevelError, lx.ClassText, cat.Space(args...), fb.fields, false)
}
// Errorf logs a message at Error level with the builders fields.
@@ -188,7 +189,7 @@ func (fb *FieldBuilder) Stack(args ...any) {
return
}
// Log at Error level with the builders fields and a stack trace
- fb.logger.log(lx.LevelError, lx.ClassText, concatSpaced(args...), fb.fields, true)
+ fb.logger.log(lx.LevelError, lx.ClassText, cat.Space(args...), fb.fields, true)
}
// Stackf logs a message at Error level with a stack trace and the builders fields.


@@ -1,11 +1,12 @@
package ll
import (
"github.com/olekukonko/ll/lh"
"github.com/olekukonko/ll/lx"
"os"
"sync/atomic"
"time"
"github.com/olekukonko/ll/lh"
"github.com/olekukonko/ll/lx"
)
// defaultLogger is the global logger instance for package-level logging functions.
@@ -468,13 +469,7 @@ func Len() int64 {
// duration := ll.Measure(func() { time.Sleep(time.Millisecond) })
// // Output: [] INFO: function executed [duration=~1ms]
func Measure(fns ...func()) time.Duration {
- start := time.Now()
- for _, fn := range fns {
- fn()
- }
- duration := time.Since(start)
- defaultLogger.Fields("duration", duration).Infof("function executed")
- return duration
+ return defaultLogger.Measure(fns...)
}
// Benchmark logs the duration since a start time at Info level using the default logger.
@@ -486,7 +481,7 @@ func Measure(fns ...func()) time.Duration {
// time.Sleep(time.Millisecond)
// ll.Benchmark(start) // Output: [] INFO: benchmark [start=... end=... duration=...]
func Benchmark(start time.Time) {
defaultLogger.Fields("start", start, "end", time.Now(), "duration", time.Now().Sub(start)).Infof("benchmark")
defaultLogger.Benchmark(start)
}
// Clone returns a new logger with the same configuration as the default logger.
@@ -657,3 +652,11 @@ func Mark(names ...string) {
defaultLogger.mark(2, names...)
}
+ // Output logs data in a human-readable JSON format at Info level, including caller file and line information.
+ // It is similar to Dbg but formats the output as JSON for better readability. It is thread-safe and respects
+ // the loggers configuration (e.g., enabled, level, suspend, handler, middleware).
+ func Output(values ...interface{}) {
+ o := NewInspector(defaultLogger)
+ o.Log(2, values...)
+ }

vendor/github.com/olekukonko/ll/inspector.go (generated, vendored, new file, 239 lines)

@@ -0,0 +1,239 @@
package ll
import (
"encoding/json"
"fmt"
"reflect"
"runtime"
"strings"
"unsafe"
"github.com/olekukonko/ll/lx"
)
// Inspector is a utility for Logger that provides advanced inspection and logging of data
// in human-readable JSON format. It uses reflection to access and represent unexported fields,
// nested structs, embedded structs, and pointers, making it useful for debugging complex data structures.
type Inspector struct {
logger *Logger
}
// NewInspector returns a new Inspector instance associated with the provided logger.
func NewInspector(logger *Logger) *Inspector {
return &Inspector{logger: logger}
}
// Log outputs the given values as indented JSON at the Info level, prefixed with the caller's
// file name and line number. It handles structs (including unexported fields, nested, and embedded),
// pointers, errors, and other types. The skip parameter determines how many stack frames to skip
// when identifying the caller; typically set to 2 to account for the call to Log and its wrapper.
//
// Example usage within a Logger method:
//
// o := NewInspector(l)
// o.Log(2, someStruct) // Logs JSON representation with caller info
func (o *Inspector) Log(skip int, values ...interface{}) {
// Skip if logger is suspended or Info level is disabled
if o.logger.suspend.Load() || !o.logger.shouldLog(lx.LevelInfo) {
return
}
// Retrieve caller information for logging context
_, file, line, ok := runtime.Caller(skip)
if !ok {
o.logger.log(lx.LevelError, lx.ClassText, "Inspector: Unable to parse runtime caller", nil, false)
return
}
// Extract short filename for concise output
shortFile := file
if idx := strings.LastIndex(file, "/"); idx >= 0 {
shortFile = file[idx+1:]
}
// Process each value individually
for _, value := range values {
var jsonData []byte
var err error
// Use reflection for struct types to handle unexported and nested fields
val := reflect.ValueOf(value)
if val.Kind() == reflect.Ptr {
val = val.Elem()
}
if val.Kind() == reflect.Struct {
valueMap := o.structToMap(val)
jsonData, err = json.MarshalIndent(valueMap, "", " ")
} else if errVal, ok := value.(error); ok {
// Special handling for errors to represent them as a simple map
value = map[string]string{"error": errVal.Error()}
jsonData, err = json.MarshalIndent(value, "", " ")
} else {
// Fall back to standard JSON marshaling for non-struct types
jsonData, err = json.MarshalIndent(value, "", " ")
}
if err != nil {
o.logger.log(lx.LevelError, lx.ClassText, fmt.Sprintf("Inspector: JSON encoding error: %v", err), nil, false)
continue
}
// Construct log message with file, line, and JSON data
msg := fmt.Sprintf("[%s:%d] DUMP: %s", shortFile, line, string(jsonData))
o.logger.log(lx.LevelInfo, lx.ClassText, msg, nil, false)
}
}
// structToMap recursively converts a struct's reflect.Value to a map[string]interface{}.
// It includes unexported fields (named with parentheses), prefixes pointers with '*',
// flattens anonymous embedded structs without json tags, and uses unsafe pointers to access
// unexported primitive fields when reflect.CanInterface() returns false.
func (o *Inspector) structToMap(val reflect.Value) map[string]interface{} {
result := make(map[string]interface{})
if !val.IsValid() {
return result
}
typ := val.Type()
for i := 0; i < val.NumField(); i++ {
field := val.Field(i)
fieldType := typ.Field(i)
// Determine field name: prefer json tag if present and not "-", else use struct field name
baseName := fieldType.Name
jsonTag := fieldType.Tag.Get("json")
hasJsonTag := false
if jsonTag != "" {
if idx := strings.Index(jsonTag, ","); idx != -1 {
jsonTag = jsonTag[:idx]
}
if jsonTag != "-" {
baseName = jsonTag
hasJsonTag = true
}
}
// Enclose unexported field names in parentheses
fieldName := baseName
if !fieldType.IsExported() {
fieldName = "(" + baseName + ")"
}
// Handle pointer fields
isPtr := fieldType.Type.Kind() == reflect.Ptr
if isPtr {
fieldName = "*" + fieldName
if field.IsNil() {
result[fieldName] = nil
continue
}
field = field.Elem()
}
// Recurse for struct fields
if field.Kind() == reflect.Struct {
subMap := o.structToMap(field)
isNested := !fieldType.Anonymous || hasJsonTag
if isNested {
result[fieldName] = subMap
} else {
// Flatten embedded struct fields into the parent map, avoiding overwrites
for k, v := range subMap {
if _, exists := result[k]; !exists {
result[k] = v
}
}
}
} else {
// Handle primitive fields
if field.CanInterface() {
result[fieldName] = field.Interface()
} else {
// Use unsafe access for unexported primitives
ptr := getDataPtr(field)
switch field.Kind() {
case reflect.String:
result[fieldName] = *(*string)(ptr)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
result[fieldName] = o.getIntFromUnexportedField(field)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
result[fieldName] = o.getUintFromUnexportedField(field)
case reflect.Float32, reflect.Float64:
result[fieldName] = o.getFloatFromUnexportedField(field)
case reflect.Bool:
result[fieldName] = *(*bool)(ptr)
default:
result[fieldName] = fmt.Sprintf("*unexported %s*", field.Type().String())
}
}
}
}
return result
}
// emptyInterface represents the internal structure of an empty interface{}.
// This is used for unsafe pointer manipulation to access unexported field data.
type emptyInterface struct {
typ unsafe.Pointer
word unsafe.Pointer
}
// getDataPtr returns an unsafe.Pointer to the underlying data of a reflect.Value.
// This enables direct access to unexported fields via unsafe operations.
func getDataPtr(v reflect.Value) unsafe.Pointer {
return (*emptyInterface)(unsafe.Pointer(&v)).word
}
// getIntFromUnexportedField extracts a signed integer value from an unexported field
// using unsafe pointer access. It supports int, int8, int16, int32, and int64 kinds,
// returning the value as int64. Returns 0 for unsupported kinds.
func (o *Inspector) getIntFromUnexportedField(field reflect.Value) int64 {
ptr := getDataPtr(field)
switch field.Kind() {
case reflect.Int:
return int64(*(*int)(ptr))
case reflect.Int8:
return int64(*(*int8)(ptr))
case reflect.Int16:
return int64(*(*int16)(ptr))
case reflect.Int32:
return int64(*(*int32)(ptr))
case reflect.Int64:
return *(*int64)(ptr)
}
return 0
}
// getUintFromUnexportedField extracts an unsigned integer value from an unexported field
// using unsafe pointer access. It supports uint, uint8, uint16, uint32, and uint64 kinds,
// returning the value as uint64. Returns 0 for unsupported kinds.
func (o *Inspector) getUintFromUnexportedField(field reflect.Value) uint64 {
ptr := getDataPtr(field)
switch field.Kind() {
case reflect.Uint:
return uint64(*(*uint)(ptr))
case reflect.Uint8:
return uint64(*(*uint8)(ptr))
case reflect.Uint16:
return uint64(*(*uint16)(ptr))
case reflect.Uint32:
return uint64(*(*uint32)(ptr))
case reflect.Uint64:
return *(*uint64)(ptr)
}
return 0
}
// getFloatFromUnexportedField extracts a floating-point value from an unexported field
// using unsafe pointer access. It supports float32 and float64 kinds, returning the value
// as float64. Returns 0 for unsupported kinds.
func (o *Inspector) getFloatFromUnexportedField(field reflect.Value) float64 {
ptr := getDataPtr(field)
switch field.Kind() {
case reflect.Float32:
return float64(*(*float32)(ptr))
case reflect.Float64:
return *(*float64)(ptr)
}
return 0
}
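inspector.go reads unexported fields by casting a `reflect.Value` to a hand-rolled `emptyInterface` and dereferencing its data word. A useful comparison point is the standard-library route to the same result, `reflect.NewAt` over the field's address; this sketch is independent of the `ll` package and its type names are illustrative:

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

type config struct {
	Name  string
	limit int // unexported: calling Interface() on it directly would panic
}

// readField returns any field of an addressable struct, exported or not,
// by re-creating a readable Value at the field's memory address.
func readField(structPtr any, name string) any {
	v := reflect.ValueOf(structPtr).Elem() // addressable because we pass a pointer
	f := v.FieldByName(name)
	if f.CanInterface() {
		return f.Interface() // exported field: normal path
	}
	// Unexported: build a new Value at the same address without the
	// read-only restriction, then read it normally.
	return reflect.NewAt(f.Type(), unsafe.Pointer(f.UnsafeAddr())).Elem().Interface()
}

func main() {
	c := config{Name: "app", limit: 42}
	fmt.Println(readField(&c, "Name"), readField(&c, "limit")) // prints "app 42"
}
```

Both approaches require an addressable struct; the vendored `getDataPtr` variant additionally depends on the internal layout of `reflect.Value`, which is why it is confined to primitive kinds with explicit fallbacks.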


@@ -2,12 +2,14 @@ package lh
import (
"fmt"
"github.com/olekukonko/ll/lx"
"io"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/olekukonko/ll/lx"
)
// Palette defines ANSI color codes for various log components.
@@ -81,6 +83,7 @@ type ColorizedHandler struct {
palette Palette // Color scheme for formatting
showTime bool // Whether to display timestamps
timeFormat string // Format for timestamps (defaults to time.RFC3339)
+ mu sync.Mutex
}
// ColorOption defines a configuration function for ColorizedHandler.
@@ -130,6 +133,10 @@ func NewColorizedHandler(w io.Writer, opts ...ColorOption) *ColorizedHandler {
//
// handler.Handle(&lx.Entry{Message: "test", Level: lx.LevelInfo}) // Writes colored output
func (h *ColorizedHandler) Handle(e *lx.Entry) error {
+ h.mu.Lock()
+ defer h.mu.Unlock()
switch e.Class {
case lx.ClassDump:
// Handle hex dump entries


@@ -2,11 +2,13 @@ package lh
import (
"fmt"
"github.com/olekukonko/ll/lx"
"io"
"sort"
"strings"
"sync"
"time"
"github.com/olekukonko/ll/lx"
)
// TextHandler is a handler that outputs log entries as plain text.
@@ -17,6 +19,7 @@ type TextHandler struct {
w io.Writer // Destination for formatted log output
showTime bool // Whether to display timestamps
timeFormat string // Format for timestamps (defaults to time.RFC3339)
+ mu sync.Mutex
}
// NewTextHandler creates a new TextHandler writing to the specified writer.
@@ -55,6 +58,9 @@ func (h *TextHandler) Timestamped(enable bool, format ...string) {
//
// handler.Handle(&lx.Entry{Message: "test", Level: lx.LevelInfo}) // Writes "INFO: test"
func (h *TextHandler) Handle(e *lx.Entry) error {
+ h.mu.Lock()
+ defer h.mu.Unlock()
// Special handling for dump output
if e.Class == lx.ClassDump {
return h.handleDumpOutput(e)

vendor/github.com/olekukonko/ll/ll.go (generated, vendored, 113 lines changed)

@@ -5,8 +5,6 @@ import (
"encoding/binary"
"encoding/json"
"fmt"
"github.com/olekukonko/ll/lh"
"github.com/olekukonko/ll/lx"
"io"
"math"
"os"
@@ -16,6 +14,10 @@ import (
"sync"
"sync/atomic"
"time"
"github.com/olekukonko/cat"
"github.com/olekukonko/ll/lh"
"github.com/olekukonko/ll/lx"
)
// Logger manages logging configuration and behavior, encapsulating state such as enablement,
@@ -24,7 +26,7 @@ import (
type Logger struct {
mu sync.RWMutex // Guards concurrent access to fields
enabled bool // Determines if logging is enabled
- suspend bool // uses suspend path for most actions eg. skipping namespace checks
+ suspend atomic.Bool // uses suspend path for most actions eg. skipping namespace checks
level lx.LevelType // Minimum log level (e.g., Debug, Info, Warn, Error)
namespaces *lx.Namespace // Manages namespace enable/disable states
currentPath string // Current namespace path (e.g., "parent/child")
@@ -97,7 +99,11 @@ func (l *Logger) AddContext(key string, value interface{}) *Logger {
// logger.Benchmark(start) // Output: [app] INFO: benchmark [start=... end=... duration=...]
func (l *Logger) Benchmark(start time.Time) time.Duration {
duration := time.Since(start)
l.Fields("start", start, "end", time.Now(), "duration", duration).Infof("benchmark")
l.Fields(
"duration_ms", duration.Milliseconds(),
"duration", duration.String(),
).Infof("benchmark completed")
return duration
}
@@ -220,7 +226,7 @@ func (l *Logger) Dbg(values ...interface{}) {
// logger.Debug("Debugging") // Output: [app] DEBUG: Debugging
func (l *Logger) Debug(args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -229,7 +235,7 @@ func (l *Logger) Debug(args ...any) {
return
}
- l.log(lx.LevelDebug, lx.ClassText, concatSpaced(args...), nil, false)
+ l.log(lx.LevelDebug, lx.ClassText, cat.Space(args...), nil, false)
}
// Debugf logs a formatted message at Debug level, delegating to Debug. It is thread-safe.
@@ -239,7 +245,7 @@ func (l *Logger) Debug(args ...any) {
// logger.Debugf("Debug %s", "message") // Output: [app] DEBUG: Debug message
func (l *Logger) Debugf(format string, args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -344,6 +350,21 @@ func (l *Logger) Dump(values ...interface{}) {
}
}
+ // Output logs data in a human-readable JSON format at Info level, including caller file and line information.
+ // It is similar to Dbg but formats the output as JSON for better readability. It is thread-safe and respects
+ // the logger's configuration (e.g., enabled, level, suspend, handler, middleware).
+ // Example:
+ //
+ //	logger := New("app").Enable()
+ //	x := map[string]int{"key": 42}
+ //	logger.Output(x) // Output: [app] INFO: [file.go:123] JSON: {"key": 42}
+ //
+ // Logger method to provide access to Output functionality
+ func (l *Logger) Output(values ...interface{}) {
+ o := NewInspector(l)
+ o.Log(2, values...)
+ }
// Enable activates logging, allowing logs to be emitted if other conditions (e.g., level,
// namespace) are met. It is thread-safe using a write lock and returns the logger for chaining.
// Example:
@@ -432,7 +453,7 @@ func (l *Logger) Err(errs ...error) {
// logger.Error("Error occurred") // Output: [app] ERROR: Error occurred
func (l *Logger) Error(args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -440,7 +461,7 @@ func (l *Logger) Error(args ...any) {
if !l.shouldLog(lx.LevelError) {
return
}
- l.log(lx.LevelError, lx.ClassText, concatSpaced(args...), nil, false)
+ l.log(lx.LevelError, lx.ClassText, cat.Space(args...), nil, false)
}
// Errorf logs a formatted message at Error level, delegating to Error. It is thread-safe.
@@ -450,7 +471,7 @@ func (l *Logger) Error(args ...any) {
// logger.Errorf("Error %s", "occurred") // Output: [app] ERROR: Error occurred
func (l *Logger) Errorf(format string, args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -465,7 +486,7 @@ func (l *Logger) Errorf(format string, args ...any) {
// logger.Fatal("Fatal error") // Output: [app] ERROR: Fatal error [stack=...], then exits
func (l *Logger) Fatal(args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -474,7 +495,7 @@ func (l *Logger) Fatal(args ...any) {
os.Exit(1)
}
- l.log(lx.LevelError, lx.ClassText, concatSpaced(args...), nil, true)
+ l.log(lx.LevelError, lx.ClassText, cat.Space(args...), nil, false)
os.Exit(1)
}
@@ -486,7 +507,7 @@ func (l *Logger) Fatal(args ...any) {
// logger.Fatalf("Fatal %s", "error") // Output: [app] ERROR: Fatal error [stack=...], then exits
func (l *Logger) Fatalf(format string, args ...any) {
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -503,7 +524,7 @@ func (l *Logger) Field(fields map[string]interface{}) *FieldBuilder {
fb := &FieldBuilder{logger: l, fields: make(map[string]interface{})}
// check if suspended
- if l.suspend {
+ if l.suspend.Load() {
return fb
}
@@ -524,7 +545,7 @@ func (l *Logger) Field(fields map[string]interface{}) *FieldBuilder {
func (l *Logger) Fields(pairs ...any) *FieldBuilder {
fb := &FieldBuilder{logger: l, fields: make(map[string]interface{})}
- if l.suspend {
+ if l.suspend.Load() {
return fb
}
@@ -650,7 +671,7 @@ func (l *Logger) Indent(depth int) *Logger {
// logger := New("app").Enable().Style(lx.NestedPath)
// logger.Info("Started") // Output: [app]: INFO: Started
func (l *Logger) Info(args ...any) {
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -658,7 +679,7 @@ func (l *Logger) Info(args ...any) {
return
}
- l.log(lx.LevelInfo, lx.ClassText, concatSpaced(args...), nil, false)
+ l.log(lx.LevelInfo, lx.ClassText, cat.Space(args...), nil, false)
}
// Infof logs a formatted message at Info level, delegating to Info. It is thread-safe.
@@ -667,7 +688,7 @@ func (l *Logger) Info(args ...any) {
// logger := New("app").Enable().Style(lx.NestedPath)
// logger.Infof("Started %s", "now") // Output: [app]: INFO: Started now
func (l *Logger) Infof(format string, args ...any) {
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -771,12 +792,20 @@ func (l *Logger) mark(skip int, names ...string) {
// // Output: [app] INFO: function executed [duration=~1ms]
func (l *Logger) Measure(fns ...func()) time.Duration {
start := time.Now()
+ // Execute all provided functions
for _, fn := range fns {
- fn()
+ if fn != nil {
+ fn()
+ }
}
duration := time.Since(start)
- l.Fields("duration", duration).Infof("function executed")
+ l.Fields(
+ "duration_ns", duration.Nanoseconds(),
+ "duration", duration.String(),
+ "duration_ms", fmt.Sprintf("%.3fms", float64(duration.Nanoseconds())/1e6),
+ ).Infof("execution completed")
return duration
}
@@ -789,7 +818,7 @@ func (l *Logger) Measure(fns ...func()) time.Duration {
// child := parent.Namespace("child")
// child.Info("Child log") // Output: [parent/child] INFO: Child log
func (l *Logger) Namespace(name string) *Logger {
- if l.suspend {
+ if l.suspend.Load() {
return l
}
@@ -897,9 +926,9 @@ func (l *Logger) NamespaceEnabled(relativePath string) bool {
// logger.Panic("Panic error") // Output: [app] ERROR: Panic error [stack=...], then panics
func (l *Logger) Panic(args ...any) {
// Build message by concatenating arguments with spaces
- msg := concatSpaced(args...)
+ msg := cat.Space(args...)
- if l.suspend {
+ if l.suspend.Load() {
panic(msg)
}
@@ -942,7 +971,7 @@ func (l *Logger) Prefix(prefix string) *Logger {
// logger := New("app").Enable()
// logger.Print("message", "value") // Output: [app] INFO: message value
func (l *Logger) Print(args ...any) {
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -950,7 +979,7 @@ func (l *Logger) Print(args ...any) {
if !l.shouldLog(lx.LevelInfo) {
return
}
- l.log(lx.LevelNone, lx.ClassRaw, concatSpaced(args...), nil, false)
+ l.log(lx.LevelNone, lx.ClassRaw, cat.Space(args...), nil, false)
}
// Println logs a message at Info level without format specifiers, minimizing allocations
@@ -960,7 +989,7 @@ func (l *Logger) Print(args ...any) {
// logger := New("app").Enable()
// logger.Println("message", "value") // Output: [app] INFO: message value
func (l *Logger) Println(args ...any) {
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -968,7 +997,7 @@ func (l *Logger) Println(args ...any) {
if !l.shouldLog(lx.LevelInfo) {
return
}
- l.log(lx.LevelNone, lx.ClassRaw, concatenate(lx.Space, nil, []any{lx.Newline}, args...), nil, false)
+ l.log(lx.LevelNone, lx.ClassRaw, cat.SuffixWith(lx.Space, lx.Newline, args...), nil, false)
}
// Printf logs a formatted message at Info level, delegating to Print. It is thread-safe.
@@ -977,7 +1006,7 @@ func (l *Logger) Println(args ...any) {
// logger := New("app").Enable()
// logger.Printf("Message %s", "value") // Output: [app] INFO: Message value
func (l *Logger) Printf(format string, args ...any) {
- if l.suspend {
+ if l.suspend.Load() {
return
}
@@ -1004,9 +1033,7 @@ func (l *Logger) Remove(m *Middleware) {
// logger.Resume()
// logger.Info("Resumed") // Output: [app] INFO: Resumed
func (l *Logger) Resume() *Logger {
- l.mu.Lock()
- defer l.mu.Unlock()
- l.suspend = false // Clear suspend flag to resume logging
+ l.suspend.Store(false)
return l
}
@@ -1032,9 +1059,7 @@ func (l *Logger) Separator(separator string) *Logger {
// logger.Suspend()
// logger.Info("Ignored") // No output
func (l *Logger) Suspend() *Logger {
- l.mu.Lock()
- defer l.mu.Unlock()
- l.suspend = true // Set suspend flag to pause logging
+ l.suspend.Store(true)
return l
}
@@ -1047,9 +1072,7 @@ func (l *Logger) Suspend() *Logger {
// fmt.Println("Logging is suspended") // Prints message
// }
func (l *Logger) Suspended() bool {
l.mu.Lock()
defer l.mu.Unlock()
return l.suspend // Return current suspend state
return l.suspend.Load()
}
// Stack logs messages at Error level with a stack trace for each provided argument.
@@ -1059,7 +1082,7 @@ func (l *Logger) Suspended() bool {
// logger := New("app").Enable()
// logger.Stack("Critical error") // Output: [app] ERROR: Critical error [stack=...]
func (l *Logger) Stack(args ...any) {
if l.suspend {
if l.suspend.Load() {
return
}
@@ -1069,7 +1092,7 @@ func (l *Logger) Stack(args ...any) {
}
for _, arg := range args {
l.log(lx.LevelError, lx.ClassText, concat(arg), nil, true)
l.log(lx.LevelError, lx.ClassText, cat.Concat(arg), nil, true)
}
}
@@ -1080,7 +1103,7 @@ func (l *Logger) Stack(args ...any) {
// logger := New("app").Enable()
// logger.Stackf("Critical %s", "error") // Output: [app] ERROR: Critical error [stack=...]
func (l *Logger) Stackf(format string, args ...any) {
if l.suspend {
if l.suspend.Load() {
return
}
@@ -1171,7 +1194,7 @@ func (l *Logger) Use(fn lx.Handler) *Middleware {
// logger := New("app").Enable()
// logger.Warn("Warning") // Output: [app] WARN: Warning
func (l *Logger) Warn(args ...any) {
if l.suspend {
if l.suspend.Load() {
return
}
@@ -1180,7 +1203,7 @@ func (l *Logger) Warn(args ...any) {
return
}
l.log(lx.LevelWarn, lx.ClassText, concatSpaced(args...), nil, false)
l.log(lx.LevelWarn, lx.ClassText, cat.Space(args...), nil, false)
}
// Warnf logs a formatted message at Warn level, delegating to Warn. It is thread-safe.
@@ -1189,7 +1212,7 @@ func (l *Logger) Warn(args ...any) {
// logger := New("app").Enable()
// logger.Warnf("Warning %s", "issued") // Output: [app] WARN: Warning issued
func (l *Logger) Warnf(format string, args ...any) {
if l.suspend {
if l.suspend.Load() {
return
}
@@ -1363,7 +1386,7 @@ func (l *Logger) shouldLog(level lx.LevelType) bool {
}
// check for suspend mode
if l.suspend {
if l.suspend.Load() {
return false
}


@@ -1,6 +1,7 @@
package lx
import (
"strings"
"time"
)
@@ -31,11 +32,28 @@ const (
// These constants define the severity levels for log messages, used to filter logs based
// on the logger's minimum level. They are ordered to allow comparison (e.g., LevelDebug < LevelWarn).
const (
LevelNone LevelType = iota // Debug level for detailed diagnostic information
LevelInfo // Info level for general operational messages
LevelWarn // Warn level for warning conditions
LevelError // Error level for error conditions requiring attention
LevelDebug // None level for logs without a specific severity (e.g., raw output)
LevelNone LevelType = iota // None level for logs without a specific severity (e.g., raw output)
LevelInfo // Info level for general operational messages
LevelWarn // Warn level for warning conditions
LevelError // Error level for error conditions requiring attention
LevelDebug // Debug level for detailed diagnostic information
LevelUnknown // Unknown level for unrecognized level strings
)
// String constants for each level
const (
DebugString = "DEBUG"
InfoString = "INFO"
WarnString = "WARN"
ErrorString = "ERROR"
NoneString = "NONE"
UnknownString = "UNKNOWN"
TextString = "TEXT"
JSONString = "JSON"
DumpString = "DUMP"
SpecialString = "SPECIAL"
RawString = "RAW"
)
// Log class constants, defining the type of log entry.
@@ -47,6 +65,7 @@ const (
ClassDump // Dump entries for hex/ASCII dumps
ClassSpecial // Special entries for custom or non-standard logs
ClassRaw // Raw entries for unformatted output
ClassUnknown // Unknown entries for unrecognized class strings
)
// Namespace style constants.
@@ -72,17 +91,37 @@ type LevelType int
func (l LevelType) String() string {
switch l {
case LevelDebug:
return "DEBUG"
return DebugString
case LevelInfo:
return "INFO"
return InfoString
case LevelWarn:
return "WARN"
return WarnString
case LevelError:
return "ERROR"
return ErrorString
case LevelNone:
return "NONE"
return NoneString
default:
return "UNKNOWN"
return UnknownString
}
}
// LevelParse converts a string to its corresponding LevelType.
// It parses a string (case-insensitive) and returns the corresponding LevelType, defaulting to
// LevelUnknown for unrecognized strings. Supports "WARNING" as an alias for "WARN".
func LevelParse(s string) LevelType {
switch strings.ToUpper(s) {
case DebugString:
return LevelDebug
case InfoString:
return LevelInfo
case WarnString, "WARNING": // Allow both "WARN" and "WARNING"
return LevelWarn
case ErrorString:
return LevelError
case NoneString:
return LevelNone
default:
return LevelUnknown
}
}
@@ -149,16 +188,36 @@ type ClassType int
func (t ClassType) String() string {
switch t {
case ClassText:
return "TEST" // Note: Likely a typo, should be "TEXT"
return TextString
case ClassJSON:
return "JSON"
return JSONString
case ClassDump:
return "DUMP"
return DumpString
case ClassSpecial:
return "SPECIAL"
return SpecialString
case ClassRaw:
return "RAW"
return RawString
default:
return "UNKNOWN"
return UnknownString
}
}
// ParseClass converts a string to its corresponding ClassType.
// It parses a string (case-insensitive) and returns the corresponding ClassType, defaulting to
// ClassUnknown for unrecognized strings.
func ParseClass(s string) ClassType {
switch strings.ToUpper(s) {
case TextString:
return ClassText
case JSONString:
return ClassJSON
case DumpString:
return ClassDump
case SpecialString:
return ClassSpecial
case RawString:
return ClassRaw
default:
return ClassUnknown
}
}


@@ -8,3 +8,4 @@
dev.sh
*csv2table
_test/
*.test


@@ -206,51 +206,51 @@ func main() {
The `defaultConfig()` function (`config.go:defaultConfig`) establishes baseline settings for new tables, ensuring predictable behavior unless overridden. Below is a detailed table of default parameters, organized by configuration section, to help you understand the starting point for table behavior and appearance.
| Section | Parameter | Default Value | Description |
|---------------|-------------------------------|-----------------------------------|-----------------------------------------------------------------------------|
| **Header** | `Alignment.Global` | `tw.AlignCenter` | Centers header text globally unless overridden by `PerColumn`. |
| Header | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global` unless specified. |
| Header | `Formatting.AutoFormat` | `tw.On` | Applies title case (e.g., "col_one" → "COL ONE") to header content. |
| Header | `Formatting.AutoWrap` | `tw.WrapTruncate` | Truncates long header text with "…" based on width constraints. |
| Header | `Formatting.MergeMode` | `tw.MergeNone` | Disables cell merging in headers by default. |
| Header | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of header cells. |
| Header | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global` unless specified. |
| Header | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for header cells unless set. |
| Header | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits unless specified. |
| Header | `Filter.Global` | `nil` | No global content transformation for header cells. |
| Header | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations unless specified. |
| **Row** | `Alignment.Global` | `tw.AlignLeft` | Left-aligns row text globally unless overridden by `PerColumn`. |
| Row | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global`. |
| Row | `Formatting.AutoFormat` | `tw.Off` | Disables auto-formatting (e.g., title case) for row content. |
| Row | `Formatting.AutoWrap` | `tw.WrapNormal` | Wraps long row text naturally at word boundaries based on width constraints.|
| Row | `Formatting.MergeMode` | `tw.MergeNone` | Disables cell merging in rows by default. |
| Row | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of row cells. |
| Row | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global`. |
| Row | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for row cells. |
| Row | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits. |
| Row | `Filter.Global` | `nil` | No global content transformation for row cells. |
| Row | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations. |
| **Footer** | `Alignment.Global` | `tw.AlignRight` | Right-aligns footer text globally unless overridden by `PerColumn`. |
| Footer | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global`. |
| Footer | `Formatting.AutoFormat` | `tw.Off` | Disables auto-formatting for footer content. |
| Footer | `Formatting.AutoWrap` | `tw.WrapNormal` | Wraps long footer text naturally. |
| Footer | `Formatting.MergeMode` | `tw.MergeNone` | Disables cell merging in footers. |
| Footer | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of footer cells. |
| Footer | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global`. |
| Footer | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for footer cells. |
| Footer | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits. |
| Footer | `Filter.Global` | `nil` | No global content transformation for footer cells. |
| Footer | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations. |
| **Global** | `MaxWidth` | `0` (unlimited) | No overall table width limit. |
| Global | `Behavior.AutoHide` | `tw.Off` | Displays empty columns (ignored in streaming). |
| Global | `Behavior.TrimSpace` | `tw.On` | Trims leading/trailing spaces from cell content. |
| Global | `Behavior.Header` | `tw.Control{Hide: tw.Off}` | Shows header if content is provided. |
| Global | `Behavior.Footer` | `tw.Control{Hide: tw.Off}` | Shows footer if content is provided. |
| Global | `Behavior.Compact` | `tw.Compact{Merge: tw.Off}` | No compact width optimization for merged cells. |
| Global | `Debug` | `false` | Disables debug logging. |
| Global | `Stream.Enable` | `false` | Disables streaming mode by default. |
| Global | `Widths.Global` | `0` (unlimited) | No fixed column width unless specified. |
| Global | `Widths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column fixed widths unless specified. |
| Section | Parameter | Default Value | Description |
|---------------|--------------------------|-----------------------------------|-----------------------------------------------------------------------------|
| **Header** | `Alignment.Global` | `tw.AlignCenter` | Centers header text globally unless overridden by `PerColumn`. |
| Header | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global` unless specified. |
| Header | `Formatting.AutoFormat` | `tw.On` | Applies title case (e.g., "col_one" → "COL ONE") to header content. |
| Header | `Formatting.AutoWrap` | `tw.WrapTruncate` | Truncates long header text with "…" based on width constraints. |
| Header | `Merging.Mode` | `tw.MergeNone` | Disables cell merging in headers by default. |
| Header | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of header cells. |
| Header | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global` unless specified. |
| Header | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for header cells unless set. |
| Header | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits unless specified. |
| Header | `Filter.Global` | `nil` | No global content transformation for header cells. |
| Header | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations unless specified. |
| **Row** | `Alignment.Global` | `tw.AlignLeft` | Left-aligns row text globally unless overridden by `PerColumn`. |
| Row | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global`. |
| Row | `Formatting.AutoFormat` | `tw.Off` | Disables auto-formatting (e.g., title case) for row content. |
| Row | `Formatting.AutoWrap` | `tw.WrapNormal` | Wraps long row text naturally at word boundaries based on width constraints.|
| Row | `Merging.Mode` | `tw.MergeNone` | Disables cell merging in rows by default. |
| Row | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of row cells. |
| Row | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global`. |
| Row | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for row cells. |
| Row | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits. |
| Row | `Filter.Global` | `nil` | No global content transformation for row cells. |
| Row | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations. |
| **Footer** | `Alignment.Global` | `tw.AlignRight` | Right-aligns footer text globally unless overridden by `PerColumn`. |
| Footer | `Alignment.PerColumn` | `[]tw.Align{}` | Empty; falls back to `Global`. |
| Footer | `Formatting.AutoFormat` | `tw.Off` | Disables auto-formatting for footer content. |
| Footer | `Formatting.AutoWrap` | `tw.WrapNormal` | Wraps long footer text naturally. |
| Footer | `Formatting.MergeMode` | `tw.MergeNone` | Disables cell merging in footers. |
| Footer | `Padding.Global` | `tw.PaddingDefault` (`" "`) | Adds one space on left and right of footer cells. |
| Footer | `Padding.PerColumn` | `[]tw.Padding{}` | Empty; falls back to `Global`. |
| Footer | `ColMaxWidths.Global` | `0` (unlimited) | No maximum content width for footer cells. |
| Footer | `ColMaxWidths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column content width limits. |
| Footer | `Filter.Global` | `nil` | No global content transformation for footer cells. |
| Footer | `Filter.PerColumn` | `[]func(string) string{}` | No per-column content transformations. |
| **Global** | `MaxWidth` | `0` (unlimited) | No overall table width limit. |
| Global | `Behavior.AutoHide` | `tw.Off` | Displays empty columns (ignored in streaming). |
| Global | `Behavior.TrimSpace` | `tw.On` | Trims leading/trailing spaces from cell content. |
| Global | `Behavior.Header` | `tw.Control{Hide: tw.Off}` | Shows header if content is provided. |
| Global | `Behavior.Footer` | `tw.Control{Hide: tw.Off}` | Shows footer if content is provided. |
| Global | `Behavior.Compact` | `tw.Compact{Merge: tw.Off}` | No compact width optimization for merged cells. |
| Global | `Debug` | `false` | Disables debug logging. |
| Global | `Stream.Enable` | `false` | Disables streaming mode by default. |
| Global | `Widths.Global` | `0` (unlimited) | No fixed column width unless specified. |
| Global | `Widths.PerColumn` | `tw.NewMapper[int, int]()` | Empty map; no per-column fixed widths unless specified. |
**Notes**:
- Defaults can be overridden using any configuration method.
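For instance, the alignment and width defaults from the table can be overridden per table via the `WithConfig` option shown elsewhere in this document. A hedged sketch (field names follow the defaults table above; assuming tablewriter v1.x):

```go
package main

import (
	"os"

	"github.com/olekukonko/tablewriter"
	"github.com/olekukonko/tablewriter/tw"
)

func main() {
	// Override two defaults: right-align row content (default is
	// tw.AlignLeft) and cap the overall table width (default 0 = unlimited).
	table := tablewriter.NewTable(os.Stdout,
		tablewriter.WithConfig(tablewriter.Config{
			MaxWidth: 80,
			Row:      tw.CellConfig{Alignment: tw.CellAlignment{Global: tw.AlignRight}},
		}),
	)
	table.Header("Package", "Status")
	table.Append("tablewriter", "latest")
	table.Render()
}
```

Unset fields keep their defaults, so a `Config` literal only needs the values being changed.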
@@ -2210,7 +2210,7 @@ import (
func main() {
// Horizontal Merging (Similar to v0.0.5)
tableH := tablewriter.NewTable(os.Stdout,
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Formatting: tw.CellFormatting{MergeMode: tw.MergeHorizontal}}}),
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Merging: tw.CellMerging{Mode: tw.MergeHorizontal}}}),
tablewriter.WithRenderer(renderer.NewBlueprint(tw.Rendition{Symbols: tw.NewSymbols(tw.StyleASCII)})), // Specify renderer for symbols
)
tableH.Header("Category", "Item", "Item", "Notes") // Note: Two "Item" headers for demo
@@ -2219,7 +2219,7 @@ func main() {
// Vertical Merging
tableV := tablewriter.NewTable(os.Stdout,
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Formatting: tw.CellFormatting{MergeMode: tw.MergeVertical}}}),
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Merging: tw.CellMerging{Mode: tw.MergeVertical}}}),
tablewriter.WithRenderer(renderer.NewBlueprint(tw.Rendition{Symbols: tw.NewSymbols(tw.StyleASCII)})),
)
tableV.Header("User", "Permission")
@@ -2230,7 +2230,7 @@ func main() {
// Hierarchical Merging
tableHier := tablewriter.NewTable(os.Stdout,
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Formatting: tw.CellFormatting{MergeMode: tw.MergeHierarchical}}}),
tablewriter.WithConfig(tablewriter.Config{Row: tw.CellConfig{Merging: tw.CellMerging{Mode: tw.MergeHierarchical}}}),
tablewriter.WithRenderer(renderer.NewBlueprint(tw.Rendition{Symbols: tw.NewSymbols(tw.StyleASCII)})),
)
tableHier.Header("Group", "SubGroup", "Item")


@@ -28,7 +28,7 @@ go get github.com/olekukonko/tablewriter@v0.0.5
#### Latest Version
The latest stable version
```bash
go get github.com/olekukonko/tablewriter@v1.1.0
go get github.com/olekukonko/tablewriter@v1.1.1
```
**Warning:** Version `v1.0.0` is missing functionality and should not be used.
@@ -62,7 +62,7 @@ func main() {
data := [][]string{
{"Package", "Version", "Status"},
{"tablewriter", "v0.0.5", "legacy"},
{"tablewriter", "v1.1.0", "latest"},
{"tablewriter", "v1.1.1", "latest"},
}
table := tablewriter.NewWriter(os.Stdout)
@@ -77,7 +77,7 @@ func main() {
│ PACKAGE │ VERSION │ STATUS │
├─────────────┼─────────┼────────┤
│ tablewriter │ v0.0.5 │ legacy │
│ tablewriter │ v1.1.0 │ latest │
│ tablewriter │ v1.1.1 │ latest │
└─────────────┴─────────┴────────┘
```
@@ -520,7 +520,7 @@ func main() {
tablewriter.WithConfig(tablewriter.Config{
Header: tw.CellConfig{Alignment: tw.CellAlignment{Global: tw.AlignCenter}},
Row: tw.CellConfig{
Formatting: tw.CellFormatting{MergeMode: tw.MergeHierarchical},
Merging: tw.CellMerging{Mode: tw.MergeHierarchical},
Alignment: tw.CellAlignment{Global: tw.AlignLeft},
},
}),
@@ -579,8 +579,8 @@ func main() {
})),
tablewriter.WithConfig(tablewriter.Config{
Row: tw.CellConfig{
Formatting: tw.CellFormatting{MergeMode: tw.MergeBoth},
Alignment: tw.CellAlignment{PerColumn: []tw.Align{tw.Skip, tw.Skip, tw.AlignRight, tw.AlignLeft}},
Merging: tw.CellMerging{Mode: tw.MergeBoth},
Alignment: tw.CellAlignment{PerColumn: []tw.Align{tw.Skip, tw.Skip, tw.AlignRight, tw.AlignLeft}},
},
Footer: tw.CellConfig{
@@ -806,12 +806,12 @@ func main() {
tablewriter.WithRenderer(renderer.NewHTML(htmlCfg)),
tablewriter.WithConfig(tablewriter.Config{
Header: tw.CellConfig{
Formatting: tw.CellFormatting{MergeMode: tw.MergeHorizontal}, // Merge identical header cells
Alignment: tw.CellAlignment{Global: tw.AlignCenter},
Merging: tw.CellMerging{Mode: tw.MergeHorizontal}, // Merge identical header cells
Alignment: tw.CellAlignment{Global: tw.AlignCenter},
},
Row: tw.CellConfig{
Formatting: tw.CellFormatting{MergeMode: tw.MergeHorizontal}, // Merge identical row cells
Alignment: tw.CellAlignment{Global: tw.AlignLeft},
Merging: tw.CellMerging{Mode: tw.MergeHorizontal}, // Merge identical row cells
Alignment: tw.CellAlignment{Global: tw.AlignLeft},
},
Footer: tw.CellConfig{Alignment: tw.CellAlignment{Global: tw.AlignRight}},
}),

vendor/github.com/olekukonko/tablewriter/benchstat.txt (generated, vendored)

@@ -0,0 +1,194 @@
goos: darwin
goarch: arm64
pkg: github.com/olekukonko/tablewriter/pkg/twwarp
cpu: Apple M2
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
WrapString-8 112.8µ ± 1% 112.9µ ± 2% ~ (p=0.589 n=6)
WrapStringWithSpaces-8 113.4µ ± 1% 113.7µ ± 1% ~ (p=0.310 n=6)
geomean 113.1µ 113.3µ +0.15%
│ old.txt │ new.txt │
│ B/s │ B/s vs base │
WrapString-8 84.92Mi ± 1% 84.82Mi ± 2% ~ (p=0.589 n=6)
WrapStringWithSpaces-8 84.43Mi ± 1% 84.27Mi ± 1% ~ (p=0.310 n=6)
geomean 84.68Mi 84.55Mi -0.15%
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
WrapString-8 47.35Ki ± 0% 47.35Ki ± 0% ~ (p=1.000 n=6) ¹
WrapStringWithSpaces-8 52.76Ki ± 0% 52.76Ki ± 0% ~ (p=1.000 n=6) ¹
geomean 49.98Ki 49.98Ki +0.00%
¹ all samples are equal
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
WrapString-8 33.00 ± 0% 33.00 ± 0% ~ (p=1.000 n=6) ¹
WrapStringWithSpaces-8 51.00 ± 0% 51.00 ± 0% ~ (p=1.000 n=6) ¹
geomean 41.02 41.02 +0.00%
¹ all samples are equal
pkg: github.com/olekukonko/tablewriter/pkg/twwidth
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
WidthFunction/SimpleASCII_EAfalse_NoCache-8 387.6n ± 1% 368.4n ± 2% -4.97% (p=0.002 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheMiss-8 219.0n ± 127% 217.5n ± 119% ~ (p=0.372 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheHit-8 14.78n ± 1% 14.54n ± 3% ~ (p=0.061 n=6)
WidthFunction/SimpleASCII_EAtrue_NoCache-8 676.4n ± 1% 366.8n ± 2% -45.77% (p=0.002 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheMiss-8 216.1n ± 375% 216.0n ± 128% ~ (p=0.937 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheHit-8 14.71n ± 0% 14.49n ± 0% -1.53% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_NoCache-8 1.027µ ± 3% 1.007µ ± 1% -2.00% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheMiss-8 219.5n ± 516% 221.4n ± 502% ~ (p=0.515 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheHit-8 14.81n ± 1% 14.61n ± 1% -1.35% (p=0.009 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_NoCache-8 1.313µ ± 2% 1.009µ ± 2% -23.15% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheMiss-8 653.2n ± 150% 218.2n ± 524% ~ (p=0.331 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheHit-8 14.73n ± 2% 14.50n ± 0% -1.60% (p=0.002 n=6)
WidthFunction/EastAsian_EAfalse_NoCache-8 747.3n ± 1% 336.2n ± 1% -55.02% (p=0.002 n=6)
WidthFunction/EastAsian_EAfalse_CacheMiss-8 226.3n ± 384% 227.4n ± 113% ~ (p=0.937 n=6)
WidthFunction/EastAsian_EAfalse_CacheHit-8 14.74n ± 1% 14.58n ± 1% -1.09% (p=0.011 n=6)
WidthFunction/EastAsian_EAtrue_NoCache-8 965.4n ± 2% 348.7n ± 0% -63.88% (p=0.002 n=6)
WidthFunction/EastAsian_EAtrue_CacheMiss-8 225.4n ± 511% 225.8n ± 111% ~ (p=1.000 n=6)
WidthFunction/EastAsian_EAtrue_CacheHit-8 14.72n ± 1% 14.54n ± 3% ~ (p=0.056 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_NoCache-8 1376.0n ± 2% 983.8n ± 2% -28.50% (p=0.002 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheMiss-8 633.6n ± 170% 222.4n ± 513% ~ (p=0.974 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheHit-8 15.73n ± 1% 15.64n ± 1% ~ (p=0.227 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_NoCache-8 1589.5n ± 1% 996.9n ± 2% -37.29% (p=0.002 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheMiss-8 484.8n ± 309% 221.3n ± 516% ~ (p=0.240 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheHit-8 15.74n ± 1% 15.73n ± 1% ~ (p=0.485 n=6)
WidthFunction/LongSimpleASCII_EAfalse_NoCache-8 4.916µ ± 3% 4.512µ ± 4% -8.22% (p=0.002 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheMiss-8 2.430µ ± 114% 2.182µ ± 123% ~ (p=0.699 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheHit-8 23.75n ± 3% 23.24n ± 3% ~ (p=0.065 n=6)
WidthFunction/LongSimpleASCII_EAtrue_NoCache-8 9.273µ ± 1% 4.519µ ± 1% -51.27% (p=0.002 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheMiss-8 4.021µ ± 131% 2.127µ ± 128% ~ (p=0.240 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheHit-8 23.50n ± 2% 23.48n ± 1% ~ (p=0.589 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_NoCache-8 57.36µ ± 1% 57.33µ ± 2% ~ (p=0.818 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheMiss-8 22.18µ ± 135% 14.55µ ± 299% ~ (p=0.589 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheHit-8 44.21n ± 1% 44.20n ± 2% ~ (p=0.818 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_NoCache-8 60.25µ ± 2% 57.90µ ± 2% -3.90% (p=0.002 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheMiss-8 16.11µ ± 263% 20.02µ ± 183% ~ (p=0.699 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheHit-8 44.57n ± 1% 44.18n ± 2% ~ (p=0.461 n=6)
geomean 358.5n 283.9n -20.82%
│ old.txt │ new.txt │
│ B/s │ B/s vs base │
WidthFunction/SimpleASCII_EAfalse_NoCache-8 86.11Mi ± 1% 90.63Mi ± 2% +5.24% (p=0.002 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheMiss-8 152.4Mi ± 56% 153.5Mi ± 54% ~ (p=0.394 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheHit-8 2.205Gi ± 1% 2.242Gi ± 3% ~ (p=0.065 n=6)
WidthFunction/SimpleASCII_EAtrue_NoCache-8 49.35Mi ± 1% 91.00Mi ± 2% +84.40% (p=0.002 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheMiss-8 154.5Mi ± 79% 154.5Mi ± 56% ~ (p=0.937 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheHit-8 2.215Gi ± 0% 2.250Gi ± 0% +1.58% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_NoCache-8 56.66Mi ± 2% 57.78Mi ± 1% +1.99% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheMiss-8 265.1Mi ± 84% 262.7Mi ± 83% ~ (p=0.485 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheHit-8 3.836Gi ± 1% 3.888Gi ± 1% +1.34% (p=0.009 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_NoCache-8 44.30Mi ± 2% 57.65Mi ± 2% +30.14% (p=0.002 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheMiss-8 147.3Mi ± 81% 266.7Mi ± 84% ~ (p=0.310 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheHit-8 3.856Gi ± 2% 3.919Gi ± 0% +1.63% (p=0.002 n=6)
WidthFunction/EastAsian_EAfalse_NoCache-8 76.58Mi ± 1% 170.21Mi ± 1% +122.28% (p=0.002 n=6)
WidthFunction/EastAsian_EAfalse_CacheMiss-8 252.8Mi ± 79% 251.6Mi ± 53% ~ (p=0.937 n=6)
WidthFunction/EastAsian_EAfalse_CacheHit-8 3.791Gi ± 1% 3.832Gi ± 1% +1.08% (p=0.009 n=6)
WidthFunction/EastAsian_EAtrue_NoCache-8 59.27Mi ± 2% 164.10Mi ± 0% +176.87% (p=0.002 n=6)
WidthFunction/EastAsian_EAtrue_CacheMiss-8 253.9Mi ± 84% 253.4Mi ± 53% ~ (p=1.000 n=6)
WidthFunction/EastAsian_EAtrue_CacheHit-8 3.796Gi ± 1% 3.841Gi ± 3% ~ (p=0.065 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_NoCache-8 60.29Mi ± 1% 84.33Mi ± 2% +39.88% (p=0.002 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheMiss-8 227.1Mi ± 79% 373.2Mi ± 84% ~ (p=1.000 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheHit-8 5.154Gi ± 1% 5.181Gi ± 1% ~ (p=0.240 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_NoCache-8 52.19Mi ± 1% 83.23Mi ± 2% +59.47% (p=0.002 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheMiss-8 230.9Mi ± 82% 374.9Mi ± 84% ~ (p=0.240 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheHit-8 5.147Gi ± 1% 5.152Gi ± 1% ~ (p=0.485 n=6)
WidthFunction/LongSimpleASCII_EAfalse_NoCache-8 104.8Mi ± 3% 114.1Mi ± 4% +8.95% (p=0.002 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheMiss-8 368.0Mi ± 293% 474.3Mi ± 211% ~ (p=0.699 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheHit-8 21.17Gi ± 3% 21.64Gi ± 2% ~ (p=0.065 n=6)
WidthFunction/LongSimpleASCII_EAtrue_NoCache-8 55.54Mi ± 1% 113.97Mi ± 1% +105.21% (p=0.002 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheMiss-8 399.8Mi ± 232% 577.5Mi ± 149% ~ (p=0.240 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheHit-8 21.40Gi ± 2% 21.41Gi ± 1% ~ (p=0.589 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_NoCache-8 34.08Mi ± 1% 34.10Mi ± 2% ~ (p=0.784 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheMiss-8 101.5Mi ± 1396% 643.9Mi ± 320% ~ (p=0.589 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheHit-8 43.18Gi ± 1% 43.20Gi ± 2% ~ (p=0.818 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_NoCache-8 32.45Mi ± 2% 33.76Mi ± 2% +4.06% (p=0.002 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheMiss-8 393.0Mi ± 296% 122.4Mi ± 1610% ~ (p=0.699 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheHit-8 42.83Gi ± 1% 43.21Gi ± 2% ~ (p=0.485 n=6)
geomean 456.4Mi 560.6Mi +22.83%
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
WidthFunction/SimpleASCII_EAfalse_NoCache-8 112.0 ± 1% 113.0 ± 0% ~ (p=0.061 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheMiss-8 55.00 ± 200% 55.00 ± 202% ~ (p=1.000 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/SimpleASCII_EAtrue_NoCache-8 113.0 ± 1% 113.0 ± 0% ~ (p=1.000 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheMiss-8 55.00 ± 505% 55.00 ± 205% ~ (p=0.697 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAfalse_NoCache-8 185.0 ± 0% 185.0 ± 1% ~ (p=0.455 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheMiss-8 87.00 ± 402% 87.00 ± 401% ~ (p=1.000 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAtrue_NoCache-8 185.0 ± 0% 185.0 ± 1% ~ (p=1.000 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheMiss-8 174.00 ± 115% 87.00 ± 401% ~ (p=0.621 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAfalse_NoCache-8 145.0 ± 0% 146.0 ± 0% +0.69% (p=0.002 n=6)
WidthFunction/EastAsian_EAfalse_CacheMiss-8 87.00 ± 392% 87.00 ± 167% ~ (p=0.697 n=6)
WidthFunction/EastAsian_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAtrue_NoCache-8 145.0 ± 1% 146.0 ± 1% +0.69% (p=0.013 n=6)
WidthFunction/EastAsian_EAtrue_CacheMiss-8 87.00 ± 392% 87.00 ± 164% ~ (p=0.697 n=6)
WidthFunction/EastAsian_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAfalse_NoCache-8 193.0 ± 1% 193.0 ± 0% ~ (p=1.000 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheMiss-8 232.0 ± 134% 103.0 ± 485% ~ (p=0.924 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAtrue_NoCache-8 193.0 ± 0% 193.0 ± 1% ~ (p=1.000 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheMiss-8 185.0 ± 203% 103.0 ± 485% ~ (p=0.621 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAfalse_NoCache-8 1.153Ki ± 0% 1.150Ki ± 0% ~ (p=0.126 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheMiss-8 1.050Ki ± 72% 1.047Ki ± 74% ~ (p=0.939 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAtrue_NoCache-8 1.152Ki ± 0% 1.155Ki ± 0% +0.30% (p=0.015 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheMiss-8 1.036Ki ± 71% 1.039Ki ± 76% ~ (p=0.981 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAfalse_NoCache-8 1.355Ki ± 0% 1.358Ki ± 0% ~ (p=0.065 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheMiss-8 2.787Ki ± 31% 2.613Ki ± 43% ~ (p=0.805 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAtrue_NoCache-8 1.358Ki ± 0% 1.361Ki ± 0% ~ (p=0.158 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheMiss-8 2.625Ki ± 43% 2.741Ki ± 37% ~ (p=0.987 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
geomean ² -5.62% ²
¹ all samples are equal
² summaries must be >0 to compute geomean
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
WidthFunction/SimpleASCII_EAfalse_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/SimpleASCII_EAfalse_CacheMiss-8 1.000 ± 200% 1.000 ± 200% ~ (p=1.000 n=6)
WidthFunction/SimpleASCII_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/SimpleASCII_EAtrue_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/SimpleASCII_EAtrue_CacheMiss-8 1.000 ± 300% 1.000 ± 200% ~ (p=0.697 n=6)
WidthFunction/SimpleASCII_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAfalse_NoCache-8 6.000 ± 0% 6.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAfalse_CacheMiss-8 1.000 ± 600% 1.000 ± 600% ~ (p=1.000 n=6)
WidthFunction/ASCIIWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAtrue_NoCache-8 6.000 ± 0% 6.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/ASCIIWithANSI_EAtrue_CacheMiss-8 3.500 ± 100% 1.000 ± 600% ~ (p=0.610 n=6)
WidthFunction/ASCIIWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAfalse_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAfalse_CacheMiss-8 1.000 ± 300% 1.000 ± 200% ~ (p=0.697 n=6)
WidthFunction/EastAsian_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAtrue_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsian_EAtrue_CacheMiss-8 1.000 ± 300% 1.000 ± 200% ~ (p=0.697 n=6)
WidthFunction/EastAsian_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAfalse_NoCache-8 5.000 ± 0% 5.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAfalse_CacheMiss-8 3.000 ± 133% 1.000 ± 600% ~ (p=1.000 n=6)
WidthFunction/EastAsianWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAtrue_NoCache-8 5.000 ± 0% 5.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/EastAsianWithANSI_EAtrue_CacheMiss-8 2.500 ± 180% 1.000 ± 600% ~ (p=0.610 n=6)
WidthFunction/EastAsianWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAfalse_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAfalse_CacheMiss-8 3.000 ± 67% 3.000 ± 67% ~ (p=1.000 n=6)
WidthFunction/LongSimpleASCII_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAtrue_NoCache-8 3.000 ± 0% 3.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongSimpleASCII_EAtrue_CacheMiss-8 3.000 ± 67% 3.000 ± 67% ~ (p=1.000 n=6)
WidthFunction/LongSimpleASCII_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAfalse_NoCache-8 9.000 ± 0% 9.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAfalse_CacheMiss-8 5.000 ± 100% 3.500 ± 186% ~ (p=0.978 n=6)
WidthFunction/LongASCIIWithANSI_EAfalse_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAtrue_NoCache-8 9.000 ± 0% 9.000 ± 0% ~ (p=1.000 n=6) ¹
WidthFunction/LongASCIIWithANSI_EAtrue_CacheMiss-8 4.000 ± 150% 4.500 ± 122% ~ (p=0.952 n=6)
WidthFunction/LongASCIIWithANSI_EAtrue_CacheHit-8 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=6) ¹
geomean ² -9.28% ²
¹ all samples are equal
² summaries must be >0 to compute geomean

Some files were not shown because too many files have changed in this diff.