Compare commits

...

16 Commits

Author SHA1 Message Date
Viktor Scharf
b39dfabba5 update contributing.md file 2026-02-27 12:28:45 +01:00
dependabot[bot]
6cdf229979 build(deps): bump github.com/kovidgoyal/imaging from 1.8.19 to 1.8.20
Bumps [github.com/kovidgoyal/imaging](https://github.com/kovidgoyal/imaging) from 1.8.19 to 1.8.20.
- [Release notes](https://github.com/kovidgoyal/imaging/releases)
- [Commits](https://github.com/kovidgoyal/imaging/compare/v1.8.19...v1.8.20)

---
updated-dependencies:
- dependency-name: github.com/kovidgoyal/imaging
  dependency-version: 1.8.20
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-26 18:38:29 +01:00
Mahdi Baghbani
d7cb432b4d fix(ocm): allow insecure tls for wayf discovery (#2404)
* fix(ocm): allow insecure tls for wayf discovery

Signed-off-by: Mahdi Baghbani <mahdi-baghbani@azadehafzar.io>
2026-02-26 14:44:38 +01:00
Florian Schade
b69b9cd569 fix: simplify subject.session key parsing 2026-02-25 14:02:09 +01:00
Florian Schade
e8ecbd7af1 refactor: make the logout mode private 2026-02-25 14:02:09 +01:00
Florian Schade
fd614eacf1 fix: use base64 record keys to prevent separator clashes with subjects or sessionIds that contain a dot 2026-02-25 14:02:09 +01:00
Florian Schade
910298aa05 chore: change naming 2026-02-25 14:02:09 +01:00
Florian Schade
7350050a05 test: add more backchannellogout tests 2026-02-25 14:02:09 +01:00
Florian Schade
f72e3f1e32 chore: cleanup backchannel logout pr for review 2026-02-25 14:02:09 +01:00
Florian Schade
0c62c45494 enhancement: document idp side-effects 2026-02-25 14:02:09 +01:00
Florian Schade
f6553498f6 enhancement: finalize backchannel logout 2026-02-25 14:02:09 +01:00
Christian Richter
6a0fd89475 refactor deletion
Co-authored-by: Jörn Dreyer <j.dreyer@opencloud.eu>
Co-authored-by: Michael Barz <m.barz@opencloud.eu>
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2026-02-25 14:02:09 +01:00
Christian Richter
cb38aaab16 create mapping in cache for subject => sessionid
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2026-02-25 14:02:09 +01:00
Christian Richter
762062bfa3 add mapping to backchannel logout for subject => sessionid
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
2026-02-25 14:02:09 +01:00
Christian Richter
291265afb0 add additional validation to logout token
Signed-off-by: Christian Richter <c.richter@opencloud.eu>
Co-authored-by: Michael Barz <m.barz@opencloud.eu>
2026-02-25 14:02:09 +01:00
opencloudeu
49a018e973 [tx] updated from transifex 2026-02-24 00:12:39 +00:00
19 changed files with 1733 additions and 284 deletions

View File

@@ -40,7 +40,7 @@ but it should be easily transferable to other (sub)projects.
> **Note:** Please don't file an issue to ask a question. You'll get faster results by using the resources below.
For general questions, please refer to [OpenCloud's FAQs](https://opencloud.eu/faq/) or check the [project page](https://github.com/opencloud-eu) for communication channels.
For general questions, please refer to [OpenCloud's FAQs](https://docs.opencloud.eu/docs/admin/resources/faq/) or check the [project page](https://github.com/opencloud-eu) for communication channels.
## What to know before getting started
@@ -55,7 +55,7 @@ The OpenCloud project follows the strict GitHub workflow of development as brief
### OpenCloud Company, Engineering Partners and Community
OpenCloud is largely created by developers who are employed by the [OpenCloud company](https://opencloud.eu), which is located in Germany.
It is providing support for OpenCloud for customers mainly in the EU. In addition, there are engineering partners who also work full-time on OpenCloud related code, for example, on the component [REVA](https://github.com/cs3org/reva/).
It is providing support for OpenCloud for customers mainly in the EU. In addition, there are engineering partners who also work full-time on OpenCloud related code, for example, on the component [REVA](https://github.com/opencloud-eu/reva/).
Because of that fact, the pace that the development is moving forward is sometimes high for people who are not willing and/or able to spend a comparable amount of time to contribute.
Even though this can be a challenge, it should not scare anybody away. Here is our clear commitment that we feel honored by everybody who is interested in our work and improves it, no matter how big the contribution might be.
@@ -113,7 +113,7 @@ Explain the problem and include additional details to help maintainers reproduce
Provide more context by answering these questions:
* **Did the problem start happening recently** (e.g. after updating to a new version) or was this always a problem?
* If the problem started happening recently, **can you reproduce the problem in an older version?** What's the most recent version in which the problem doesn't happen? You can find more information about how to set up [test environments](https://docs.opencloud.eu/devel/testing) in the [developer documentation](https://docs.opencloud.eu/docs/dev/intro).
* If the problem started happening recently, **can you reproduce the problem in an older version?** What's the most recent version in which the problem doesn't happen? You can find more information about how to set up in the [Getting Started guide](https://docs.opencloud.eu/docs/admin/getting-started).
* **Can you reliably reproduce the issue?** If not, provide details about how often the problem happens and under which conditions it normally happens.
Include details about your configuration and environment as asked for in the template.
@@ -146,9 +146,8 @@ Enhancement suggestions are tracked as [GitHub issues](https://guides.github.com
Unsure where to begin contributing to OpenCloud? You can start by looking through these `Needs-help` issues:
* The [Good first issue](https://github.com/opencloud-eu/opencloud/labels/Topic%3Agood-first-issue) label marks good items to start with.
* [Tests needed](https://github.com/opencloud-eu/opencloud/labels/Interaction%3ANeeds-tests) - issues which would benefit from a test.
* [Help wanted issues](https://github.com/opencloud-eu/opencloud/labels/Interaction%3ANeeds-help) - issues which should be a bit more involved.
* The [Good first issue](https://github.com/opencloud-eu/opencloud/labels/Type%3Agood-first-issue) label marks good items to start with.
* The [Feature Request](https://github.com/opencloud-eu/opencloud/issues?q=state%3Aopen%20label%3AType%3AFeature-Request) label lists features the community would like to see implemented.
It is fine to pick one of the lists following personal preference.
While not perfect, the number of comments is a reasonable proxy for the impact a given change will have.
@@ -221,7 +220,7 @@ To help you find issues and pull requests, each label can be used in search link
The labels are loosely grouped by their purpose, but it's not required that every issue has a label from every group or that an issue can't have more than one label from the same group.
The list here contains all the more general categories of issues which are followed by a colon and a specific value.
For example, severity 1 looks like `Severity:sev1-critical`.
For example, severity 1 looks like `Priority:p1-urgent`.
#### Platform
@@ -257,7 +256,7 @@ Categorizes the issue to also indicate the type of the issue.
#### Status
The status in the ticket life cycle. Keep an eye on that one, especially for the `Waiting-for-Feedback` tag which might indicate that the reporter is asked for feedback.
The status in the ticket life cycle. Keep an eye on that one, especially for the `Status:Needs-Review` tag which might indicate that the reporter is asked for feedback.
#### Interaction

4
go.mod
View File

@@ -48,7 +48,7 @@ require (
github.com/jellydator/ttlcache/v3 v3.4.0
github.com/jinzhu/now v1.1.5
github.com/justinas/alice v1.2.0
github.com/kovidgoyal/imaging v1.8.19
github.com/kovidgoyal/imaging v1.8.20
github.com/leonelquinteros/gotext v1.7.2
github.com/libregraph/idm v0.5.0
github.com/libregraph/lico v0.66.0
@@ -105,7 +105,7 @@ require (
go.opentelemetry.io/otel/trace v1.40.0
golang.org/x/crypto v0.48.0
golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac
golang.org/x/image v0.35.0
golang.org/x/image v0.36.0
golang.org/x/net v0.50.0
golang.org/x/oauth2 v0.35.0
golang.org/x/sync v0.19.0

8
go.sum
View File

@@ -747,8 +747,8 @@ github.com/kovidgoyal/go-parallel v1.1.1 h1:1OzpNjtrUkBPq3UaqrnvOoB2F9RttSt811ui
github.com/kovidgoyal/go-parallel v1.1.1/go.mod h1:BJNIbe6+hxyFWv7n6oEDPj3PA5qSw5OCtf0hcVxWJiw=
github.com/kovidgoyal/go-shm v1.0.0 h1:HJEel9D1F9YhULvClEHJLawoRSj/1u/EDV7MJbBPgQo=
github.com/kovidgoyal/go-shm v1.0.0/go.mod h1:Yzb80Xf9L3kaoB2RGok9hHwMIt7Oif61kT6t3+VnZds=
github.com/kovidgoyal/imaging v1.8.19 h1:zWJdQqF2tfSKjvoB7XpLRhVGbYsze++M0iaqZ4ZkhNk=
github.com/kovidgoyal/imaging v1.8.19/go.mod h1:I0q8RdoEuyc4G8GFOF9CaluTUHQSf68d6TmsqpvfRI8=
github.com/kovidgoyal/imaging v1.8.20 h1:74GZ7C2rIm3rqmGEjK1GvvPOOnJ0SS5iDOa6Flfo0b0=
github.com/kovidgoyal/imaging v1.8.20/go.mod h1:d3phGYkTChGYkY4y++IjpHgUGhWGELDc2NEQAqxwZZg=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -1393,8 +1393,8 @@ golang.org/x/exp v0.0.0-20250210185358-939b2ce775ac/go.mod h1:hH+7mtFmImwwcMvScy
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.18.0/go.mod h1:4yyo5vMFQjVjUcVk4jEQcU9MGy/rulF5WvUILseCM2E=
golang.org/x/image v0.35.0 h1:LKjiHdgMtO8z7Fh18nGY6KDcoEtVfsgLDPeLyguqb7I=
golang.org/x/image v0.35.0/go.mod h1:MwPLTVgvxSASsxdLzKrl8BRFuyqMyGhLwmC+TO1Sybk=
golang.org/x/image v0.36.0 h1:Iknbfm1afbgtwPTmHnS2gTM/6PPZfH+z2EFuOkSbqwc=
golang.org/x/image v0.36.0/go.mod h1:YsWD2TyyGKiIX1kZlu9QfKIsQ4nAAK9bdgdrIsE7xy4=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=

View File

@@ -86,6 +86,7 @@ type ScienceMesh struct {
MeshDirectoryURL string `yaml:"science_mesh_directory_url" env:"OCM_MESH_DIRECTORY_URL" desc:"URL of the mesh directory service." introductionVersion:"1.0.0"`
DirectoryServiceURLs string `yaml:"directory_service_urls" env:"OCM_DIRECTORY_SERVICE_URLS" desc:"Space delimited URLs of the directory services." introductionVersion:"3.5.0"`
InviteAcceptDialog string `yaml:"invite_accept_dialog" env:"OCM_INVITE_ACCEPT_DIALOG" desc:"/open-cloud-mesh/accept-invite;The frontend URL where to land when receiving an invitation" introductionVersion:"3.5.0"`
OCMClientInsecure bool `yaml:"ocm_client_insecure" env:"OC_INSECURE;OCM_CLIENT_INSECURE" desc:"Dev-only. Disable TLS verification for the OCM discovery client (directory fetch and provider discovery). Does not affect OCM invite manager, storage provider, or share provider. Do not set in production." introductionVersion:"%%NEXT%%"`
}
type OCMD struct {
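The new `OCM_CLIENT_INSECURE` option above is configurable via its environment variable; for a local dev setup it could be enabled like this (dev-only, per the option's own description):

```shell
# dev-only: disable TLS verification for the OCM discovery client
# (never set this in production)
export OCM_CLIENT_INSECURE=true
```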

View File

@@ -76,6 +76,7 @@ func OCMConfigFromStruct(cfg *config.Config, logger log.Logger) map[string]inter
"gatewaysvc": cfg.Reva.Address,
"mesh_directory_url": cfg.ScienceMesh.MeshDirectoryURL,
"directory_service_urls": cfg.ScienceMesh.DirectoryServiceURLs,
"ocm_client_insecure": cfg.ScienceMesh.OCMClientInsecure,
"provider_domain": providerDomain,
"events": map[string]interface{}{
"natsaddress": cfg.Events.Endpoint,

View File

@@ -12,3 +12,8 @@ packages:
github.com/opencloud-eu/opencloud/services/proxy/pkg/userroles:
interfaces:
UserRoleAssigner: {}
go-micro.dev/v4/store:
config:
dir: pkg/staticroutes/internal/backchannellogout/mocks
interfaces:
Store: {}

View File

@@ -10,6 +10,7 @@ import (
gateway "github.com/cs3org/go-cs3apis/cs3/gateway/v1beta1"
"github.com/justinas/alice"
"github.com/opencloud-eu/opencloud/pkg/config/configlog"
"github.com/opencloud-eu/opencloud/pkg/generators"
"github.com/opencloud-eu/opencloud/pkg/log"
@@ -72,6 +73,7 @@ func Server(cfg *config.Config) *cobra.Command {
microstore.Nodes(cfg.PreSignedURL.SigningKeys.Nodes...),
microstore.Database("proxy"),
microstore.Table("signing-keys"),
store.DisablePersistence(cfg.PreSignedURL.SigningKeys.DisablePersistence),
store.Authentication(cfg.PreSignedURL.SigningKeys.AuthUsername, cfg.PreSignedURL.SigningKeys.AuthPassword),
)

View File

@@ -8,13 +8,15 @@ import (
"strings"
"time"
"github.com/opencloud-eu/opencloud/pkg/log"
"github.com/opencloud-eu/opencloud/pkg/oidc"
"github.com/pkg/errors"
"github.com/vmihailenco/msgpack/v5"
store "go-micro.dev/v4/store"
"go-micro.dev/v4/store"
"golang.org/x/crypto/sha3"
"golang.org/x/oauth2"
"github.com/opencloud-eu/opencloud/pkg/log"
"github.com/opencloud-eu/opencloud/pkg/oidc"
"github.com/opencloud-eu/opencloud/services/proxy/pkg/staticroutes"
)
const (
@@ -114,16 +116,25 @@ func (m *OIDCAuthenticator) getClaims(token string, req *http.Request) (map[stri
m.Logger.Error().Err(err).Msg("failed to write to userinfo cache")
}
if sid := aClaims.SessionID; sid != "" {
// reuse user cache for session id lookup
err = m.userInfoCache.Write(&store.Record{
Key: sid,
Value: []byte(encodedHash),
Expiry: time.Until(expiration),
})
if err != nil {
m.Logger.Error().Err(err).Msg("failed to write session lookup cache")
}
// fail if creating the storage key fails,
// as that means there is neither a subject nor a session.
//
// ok: {key: ".sessionId"}
// ok: {key: "subject."}
// ok: {key: "subject.sessionId"}
// fail: {key: "."}
subjectSessionKey, err := staticroutes.NewRecordKey(aClaims.Subject, aClaims.SessionID)
if err != nil {
m.Logger.Error().Err(err).Msg("failed to build subject.session")
return
}
if err := m.userInfoCache.Write(&store.Record{
Key: subjectSessionKey,
Value: []byte(encodedHash),
Expiry: time.Until(expiration),
}); err != nil {
m.Logger.Error().Err(err).Msg("failed to write session lookup cache")
}
}
}()

View File

@@ -6,17 +6,40 @@ import (
"net/http"
"github.com/go-chi/render"
"github.com/opencloud-eu/opencloud/pkg/oidc"
"github.com/opencloud-eu/reva/v2/pkg/events"
"github.com/opencloud-eu/reva/v2/pkg/utils"
"github.com/pkg/errors"
"github.com/vmihailenco/msgpack/v5"
microstore "go-micro.dev/v4/store"
bcl "github.com/opencloud-eu/opencloud/services/proxy/pkg/staticroutes/internal/backchannellogout"
"github.com/opencloud-eu/reva/v2/pkg/events"
"github.com/opencloud-eu/reva/v2/pkg/utils"
)
// handle backchannel logout requests as per https://openid.net/specs/openid-connect-backchannel-1_0.html#BCRequest
// NewRecordKey converts the subject and session to a base64 encoded key
var NewRecordKey = bcl.NewKey
// backchannelLogout handles backchannel logout requests from the identity provider and invalidates the related sessions in the cache
// spec: https://openid.net/specs/openid-connect-backchannel-1_0.html#BCRequest
//
// known side effects of backchannel logout in Keycloak:
//
// - Keycloak's "Sign out all active sessions" does not send a backchannel logout request;
// as the devs mention, this may lead to thousands of backchannel logout requests,
// therefore they recommend a short token lifetime.
// https://github.com/keycloak/keycloak/issues/27342#issuecomment-2408461913
//
// - the Keycloak user self-service portal's "Sign out all devices" may not send a backchannel
// logout request for each session; this is not mentioned explicitly,
// but the reason may be the same as for "Sign out all active sessions":
// to prevent a flood of backchannel logout requests.
//
// - if the Keycloak setting "Backchannel logout session required" is disabled (or the token has no session id),
// we resolve the session by the subject, which can yield multiple session records (subject.*);
// we then send a logout event (sse) to each connected client and delete our stored cache record (subject.session & claim).
// all sessions besides the one that triggered the backchannel logout continue to exist in the identity provider,
// so the user will not be fully logged out until all sessions are logged out or expired.
// as a result, web renders the logout view even though the user is not fully logged out yet.
func (s *StaticRouteHandler) backchannelLogout(w http.ResponseWriter, r *http.Request) {
// parse the application/x-www-form-urlencoded POST request
logger := s.Logger.SubloggerWithRequestID(r.Context())
if err := r.ParseForm(); err != nil {
logger.Warn().Err(err).Msg("ParseForm failed")
@@ -27,45 +50,84 @@ func (s *StaticRouteHandler) backchannelLogout(w http.ResponseWriter, r *http.Re
logoutToken, err := s.OidcClient.VerifyLogoutToken(r.Context(), r.PostFormValue("logout_token"))
if err != nil {
logger.Warn().Err(err).Msg("VerifyLogoutToken failed")
msg := "failed to verify logout token"
logger.Warn().Err(err).Msg(msg)
render.Status(r, http.StatusBadRequest)
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: err.Error()})
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: msg})
return
}
records, err := s.UserInfoCache.Read(logoutToken.SessionId)
if errors.Is(err, microstore.ErrNotFound) || len(records) == 0 {
lookupKey, err := bcl.NewKey(logoutToken.Subject, logoutToken.SessionId)
if err != nil {
msg := "failed to build key from logout token"
logger.Warn().Err(err).Msg(msg)
render.Status(r, http.StatusBadRequest)
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: msg})
return
}
requestSubjectAndSession, err := bcl.NewSuSe(lookupKey)
if err != nil {
msg := "failed to build subject.session from lookupKey"
logger.Error().Err(err).Msg(msg)
render.Status(r, http.StatusBadRequest)
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: msg})
return
}
lookupRecords, err := bcl.GetLogoutRecords(requestSubjectAndSession, s.UserInfoCache)
if errors.Is(err, microstore.ErrNotFound) || len(lookupRecords) == 0 {
render.Status(r, http.StatusOK)
render.JSON(w, r, nil)
return
}
if err != nil {
logger.Error().Err(err).Msg("Error reading userinfo cache")
msg := "failed to read userinfo cache"
logger.Error().Err(err).Msg(msg)
render.Status(r, http.StatusBadRequest)
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: err.Error()})
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: msg})
return
}
for _, record := range records {
err := s.publishBackchannelLogoutEvent(r.Context(), record, logoutToken)
for _, record := range lookupRecords {
// the record key is in the format "subject.session" or ".session"
// the record value is the key of the record that contains the claim in its value
key, value := record.Key, string(record.Value)
subjectSession, err := bcl.NewSuSe(key)
if err != nil {
s.Logger.Warn().Err(err).Msg("could not publish backchannel logout event")
// never leak any key-related information
logger.Warn().Err(err).Msgf("failed to parse key: %s", key)
continue
}
err = s.UserInfoCache.Delete(string(record.Value))
session, err := subjectSession.Session()
if err != nil {
logger.Warn().Err(err).Msgf("failed to read session for: %s", key)
continue
}
if err := s.publishBackchannelLogoutEvent(r.Context(), session, value); err != nil {
s.Logger.Warn().Err(err).Msgf("failed to publish backchannel logout event for: %s", key)
continue
}
err = s.UserInfoCache.Delete(value)
if err != nil && !errors.Is(err, microstore.ErrNotFound) {
// Spec requires us to return a 400 BadRequest when the session could not be destroyed
logger.Err(err).Msg("could not delete user info from cache")
// we have to return a 400 BadRequest when we fail to delete the session
// https://openid.net/specs/openid-connect-backchannel-1_0.html#rfc.section.2.8
msg := "failed to delete record"
s.Logger.Warn().Err(err).Msgf("%s for: %s", msg, key)
render.Status(r, http.StatusBadRequest)
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: err.Error()})
render.JSON(w, r, jse{Error: "invalid_request", ErrorDescription: msg})
return
}
logger.Debug().Msg("Deleted userinfo from cache")
}
// we can ignore errors when cleaning up the lookup table
err = s.UserInfoCache.Delete(logoutToken.SessionId)
if err != nil {
logger.Debug().Err(err).Msg("Failed to cleanup sessionid lookup entry")
// we can ignore errors when deleting the lookup record
err = s.UserInfoCache.Delete(key)
if err != nil {
logger.Debug().Err(err).Msgf("failed to delete record for: %s", key)
}
}
render.Status(r, http.StatusOK)
@@ -73,41 +135,42 @@ func (s *StaticRouteHandler) backchannelLogout(w http.ResponseWriter, r *http.Re
}
// publishBackchannelLogoutEvent publishes a backchannel logout event when the callback is received from the identity provider
func (s StaticRouteHandler) publishBackchannelLogoutEvent(ctx context.Context, record *microstore.Record, logoutToken *oidc.LogoutToken) error {
func (s *StaticRouteHandler) publishBackchannelLogoutEvent(ctx context.Context, sessionId, claimKey string) error {
if s.EventsPublisher == nil {
return fmt.Errorf("the events publisher is not set")
return errors.New("events publisher not set")
}
urecords, err := s.UserInfoCache.Read(string(record.Value))
if err != nil {
return fmt.Errorf("reading userinfo cache: %w", err)
}
if len(urecords) == 0 {
return fmt.Errorf("userinfo not found")
claimRecords, err := s.UserInfoCache.Read(claimKey)
switch {
case err != nil:
return fmt.Errorf("failed to read userinfo cache: %w", err)
case len(claimRecords) == 0:
return fmt.Errorf("no claim found for key: %s", claimKey)
}
var claims map[string]interface{}
if err = msgpack.Unmarshal(urecords[0].Value, &claims); err != nil {
return fmt.Errorf("could not unmarshal userinfo: %w", err)
if err = msgpack.Unmarshal(claimRecords[0].Value, &claims); err != nil {
return fmt.Errorf("failed to unmarshal claims: %w", err)
}
oidcClaim, ok := claims[s.Config.UserOIDCClaim].(string)
if !ok {
return fmt.Errorf("could not get claim %w", err)
return fmt.Errorf("failed to get claim %q", s.Config.UserOIDCClaim)
}
user, _, err := s.UserProvider.GetUserByClaims(ctx, s.Config.UserCS3Claim, oidcClaim)
if err != nil || user.GetId() == nil {
return fmt.Errorf("could not get user by claims: %w", err)
return fmt.Errorf("failed to get user by claims: %w", err)
}
e := events.BackchannelLogout{
Executant: user.GetId(),
SessionId: logoutToken.SessionId,
SessionId: sessionId,
Timestamp: utils.TSNow(),
}
if err := events.Publish(ctx, s.EventsPublisher, e); err != nil {
return fmt.Errorf("could not publish user created event %w", err)
return fmt.Errorf("failed to publish user logout event: %w", err)
}
return nil
}

View File

@@ -0,0 +1,185 @@
// Package backchannellogout provides functions to classify and look up
// backchannel logout records in the cache store.
package backchannellogout
import (
"encoding/base64"
"errors"
"strings"
microstore "go-micro.dev/v4/store"
)
// keyEncoding is the base64 encoding used for session and subject keys
var keyEncoding = base64.URLEncoding
// ErrInvalidKey indicates that the provided key does not conform to the expected format.
var ErrInvalidKey = errors.New("invalid key format")
// NewKey converts the subject and session to a base64 encoded key
func NewKey(subject, session string) (string, error) {
subjectSession := strings.Join([]string{
keyEncoding.EncodeToString([]byte(subject)),
keyEncoding.EncodeToString([]byte(session)),
}, ".")
if subjectSession == "." {
return "", ErrInvalidKey
}
return subjectSession, nil
}
// ErrDecoding is returned when decoding fails
var ErrDecoding = errors.New("failed to decode")
// SuSe 🦎 ;) is a struct that groups the subject and session together
// to prevent mix-ups for ('session, subject' || 'subject, session')
// return values.
type SuSe struct {
encodedSubject string
encodedSession string
}
// Subject decodes and returns the subject or an error
func (suse SuSe) Subject() (string, error) {
subject, err := keyEncoding.DecodeString(suse.encodedSubject)
if err != nil {
return "", errors.Join(errors.New("failed to decode subject"), ErrDecoding, err)
}
return string(subject), nil
}
// Session decodes and returns the session or an error
func (suse SuSe) Session() (string, error) {
session, err := keyEncoding.DecodeString(suse.encodedSession)
if err != nil {
return "", errors.Join(errors.New("failed to decode session"), ErrDecoding, err)
}
return string(session), nil
}
// ErrInvalidSubjectOrSession is returned when the provided key does not match the expected key format
var ErrInvalidSubjectOrSession = errors.New("invalid subject or session")
// NewSuSe parses the subject and session id from the given key and returns a SuSe struct
func NewSuSe(key string) (SuSe, error) {
suse := SuSe{}
keys := strings.Split(key, ".")
switch len(keys) {
case 1:
suse.encodedSession = keys[0]
case 2:
suse.encodedSubject = keys[0]
suse.encodedSession = keys[1]
default:
return suse, ErrInvalidSubjectOrSession
}
if suse.encodedSubject == "" && suse.encodedSession == "" {
return suse, ErrInvalidSubjectOrSession
}
if _, err := suse.Subject(); err != nil {
return suse, errors.Join(ErrInvalidSubjectOrSession, err)
}
if _, err := suse.Session(); err != nil {
return suse, errors.Join(ErrInvalidSubjectOrSession, err)
}
return suse, nil
}
// logoutMode defines the mode of backchannel logout, either by session or by subject
type logoutMode int
const (
// logoutModeUndefined is used when the logout mode cannot be determined
logoutModeUndefined logoutMode = iota
// logoutModeSubject is used when the logout mode is determined by the subject
logoutModeSubject
// logoutModeSession is used when the logout mode is determined by the session id
logoutModeSession
)
// getLogoutMode determines the backchannel logout mode based on the presence of subject and session in the SuSe struct
func getLogoutMode(suse SuSe) logoutMode {
switch {
case suse.encodedSession == "" && suse.encodedSubject != "":
return logoutModeSubject
case suse.encodedSession != "":
return logoutModeSession
default:
return logoutModeUndefined
}
}
// ErrSuspiciousCacheResult is returned when the cache result is suspicious
var ErrSuspiciousCacheResult = errors.New("suspicious cache result")
// GetLogoutRecords retrieves the records from the user info cache based on the backchannel
// logout mode and the provided SuSe struct.
// it uses a separator to prevent suffix and prefix exploration in the cache and checks
// whether the retrieved records also match the requested subject and/or session id, to prevent false positives.
func GetLogoutRecords(suse SuSe, store microstore.Store) ([]*microstore.Record, error) {
// get subject.session mode
mode := getLogoutMode(suse)
var key string
var opts []microstore.ReadOption
switch {
case mode == logoutModeSubject && suse.encodedSubject != "":
// the dot at the end prevents prefix exploration in the cache,
// so only keys that start with 'subject.*' will be returned, but not 'sub*'.
key = suse.encodedSubject + "."
opts = append(opts, microstore.ReadPrefix())
case mode == logoutModeSession && suse.encodedSession != "":
// the dot at the beginning prevents suffix exploration in the cache,
// so only keys that end with '*.session' will be returned, but not '*sion'.
key = "." + suse.encodedSession
opts = append(opts, microstore.ReadSuffix())
default:
return nil, errors.Join(errors.New("cannot determine logout mode"), ErrSuspiciousCacheResult)
}
// note: the go-micro memory store requires a read limit to work
records, err := store.Read(key, append(opts, microstore.ReadLimit(1000))...)
if err != nil {
return nil, err
}
if len(records) == 0 {
return nil, microstore.ErrNotFound
}
if mode == logoutModeSession && len(records) > 1 {
return nil, errors.Join(errors.New("multiple session records found"), ErrSuspiciousCacheResult)
}
// double-check that the found records also match the requested subject and/or session id,
// to prevent false positives.
for _, record := range records {
recordSuSe, err := NewSuSe(record.Key)
if err != nil {
// never leak any key-related information
return nil, errors.Join(errors.New("failed to parse key"), ErrSuspiciousCacheResult, err)
}
switch {
// in subject mode, the subject must match, but the session id can be different
case mode == logoutModeSubject && suse.encodedSubject == recordSuSe.encodedSubject:
continue
// in session mode, the session id must match, but the subject can be different
case mode == logoutModeSession && suse.encodedSession == recordSuSe.encodedSession:
continue
}
return nil, errors.Join(errors.New("key does not match the requested subject or session"), ErrSuspiciousCacheResult)
}
return records, nil
}
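The encoding scheme above can be exercised in isolation; this sketch mirrors the behavior of `NewKey` with illustrative helper names (not the package's API):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// newKey mirrors the scheme above: subject and session are each
// base64url-encoded and joined with a dot, so a literal dot inside
// a subject or session id can never clash with the separator.
func newKey(subject, session string) (string, error) {
	key := base64.URLEncoding.EncodeToString([]byte(subject)) +
		"." +
		base64.URLEncoding.EncodeToString([]byte(session))
	if key == "." {
		// neither a subject nor a session was provided
		return "", fmt.Errorf("invalid key format")
	}
	return key, nil
}

func main() {
	for _, pair := range [][2]string{
		{"subject", "session"}, // -> "c3ViamVjdA==.c2Vzc2lvbg=="
		{"subject", ""},        // -> "c3ViamVjdA==."
		{"", "session"},        // -> ".c2Vzc2lvbg=="
	} {
		key, err := newKey(pair[0], pair[1])
		fmt.Println(key, err)
	}
}
```

The expected outputs above match the key variations in the test table for `NewKey`.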

View File

@@ -0,0 +1,331 @@
package backchannellogout
import (
"slices"
"strings"
"testing"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"go-micro.dev/v4/store"
"github.com/opencloud-eu/opencloud/services/proxy/pkg/staticroutes/internal/backchannellogout/mocks"
)
func mustNewKey(t *testing.T, subject, session string) string {
key, err := NewKey(subject, session)
require.NoError(t, err)
return key
}
func mustNewSuSe(t *testing.T, subject, session string) SuSe {
suse, err := NewSuSe(mustNewKey(t, subject, session))
require.NoError(t, err)
return suse
}
func TestNewKey(t *testing.T) {
tests := []struct {
name string
subject string
session string
wantKey string
wantErr error
}{
{
name: "key variation: 'subject.session'",
subject: "subject",
session: "session",
wantKey: "c3ViamVjdA==.c2Vzc2lvbg==",
},
{
name: "key variation: 'subject.'",
subject: "subject",
wantKey: "c3ViamVjdA==.",
},
{
name: "key variation: '.session'",
session: "session",
wantKey: ".c2Vzc2lvbg==",
},
{
name: "key variation: '.'",
wantErr: ErrInvalidKey,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
key, err := NewKey(tt.subject, tt.session)
require.ErrorIs(t, err, tt.wantErr)
require.Equal(t, tt.wantKey, key)
})
}
}
func TestNewSuSe(t *testing.T) {
tests := []struct {
name string
key string
wantSubject string
wantSession string
wantErr error
}{
{
name: "key variation: '.session'",
key: mustNewKey(t, "", "session"),
wantSession: "session",
},
{
name: "key variation: 'session'",
key: mustNewKey(t, "", "session"),
wantSession: "session",
},
{
name: "key variation: 'subject.'",
key: mustNewKey(t, "subject", ""),
wantSubject: "subject",
},
{
name: "key variation: 'subject.session'",
key: mustNewKey(t, "subject", "session"),
wantSubject: "subject",
wantSession: "session",
},
{
name: "key variation: 'dot'",
key: ".",
wantErr: ErrInvalidSubjectOrSession,
},
{
name: "key variation: 'empty'",
key: "",
wantErr: ErrInvalidSubjectOrSession,
},
{
name: "key variation: string('subject.session')",
key: "subject.session",
wantErr: ErrInvalidSubjectOrSession,
},
{
name: "key variation: string('subject.')",
key: "subject.",
wantErr: ErrInvalidSubjectOrSession,
},
{
name: "key variation: string('.session')",
key: ".session",
wantErr: ErrInvalidSubjectOrSession,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
suSe, err := NewSuSe(tt.key)
require.ErrorIs(t, err, tt.wantErr)
subject, _ := suSe.Subject()
require.Equal(t, tt.wantSubject, subject)
session, _ := suSe.Session()
require.Equal(t, tt.wantSession, session)
})
}
}
func TestGetLogoutMode(t *testing.T) {
tests := []struct {
name string
suSe SuSe
want logoutMode
}{
{
name: "key variation: '.session'",
suSe: mustNewSuSe(t, "", "session"),
want: logoutModeSession,
},
{
name: "key variation: 'subject.session'",
suSe: mustNewSuSe(t, "subject", "session"),
want: logoutModeSession,
},
{
name: "key variation: 'subject.'",
suSe: mustNewSuSe(t, "subject", ""),
want: logoutModeSubject,
},
{
name: "key variation: 'empty'",
suSe: SuSe{},
want: logoutModeUndefined,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mode := getLogoutMode(tt.suSe)
require.Equal(t, tt.want, mode)
})
}
}
func TestGetLogoutRecords(t *testing.T) {
sessionStore := store.NewMemoryStore()
recordClaimA := &store.Record{Key: "claim-a", Value: []byte("claim-a-data")}
recordClaimB := &store.Record{Key: "claim-b", Value: []byte("claim-b-data")}
recordClaimC := &store.Record{Key: "claim-c", Value: []byte("claim-c-data")}
recordClaimD := &store.Record{Key: "claim-d", Value: []byte("claim-d-data")}
recordSessionA := &store.Record{Key: mustNewKey(t, "", "session-a"), Value: []byte(recordClaimA.Key)}
recordSessionB := &store.Record{Key: mustNewKey(t, "", "session-b"), Value: []byte(recordClaimB.Key)}
recordSubjectASessionC := &store.Record{Key: mustNewKey(t, "subject-a", "session-c"), Value: []byte(recordSessionA.Key)}
recordSubjectASessionD := &store.Record{Key: mustNewKey(t, "subject-a", "session-d"), Value: []byte(recordSessionA.Key)}
for _, r := range []*store.Record{
recordClaimA,
recordClaimB,
recordClaimC,
recordClaimD,
recordSessionA,
recordSessionB,
recordSubjectASessionC,
recordSubjectASessionD,
} {
require.NoError(t, sessionStore.Write(r))
}
tests := []struct {
name string
suSe SuSe
store func(t *testing.T) store.Store
wantRecords []*store.Record
wantErrs []error
}{
{
name: "fails if multiple session records are found",
suSe: mustNewSuSe(t, "", "session-a"),
store: func(t *testing.T) store.Store {
s := mocks.NewStore(t)
s.EXPECT().Read(mock.Anything, mock.Anything).Return([]*store.Record{
recordSessionA,
recordSessionB,
}, nil)
return s
},
wantRecords: []*store.Record{},
wantErrs: []error{ErrSuspiciousCacheResult},
},
{
name: "fails if the record key is not ok",
suSe: mustNewSuSe(t, "", "session-a"),
store: func(t *testing.T) store.Store {
s := mocks.NewStore(t)
s.EXPECT().Read(mock.Anything, mock.Anything).Return([]*store.Record{
{Key: "invalid.record.key"},
}, nil)
return s
},
wantRecords: []*store.Record{},
wantErrs: []error{ErrInvalidSubjectOrSession, ErrSuspiciousCacheResult},
},
{
name: "fails if the session does not match the retrieved record",
suSe: mustNewSuSe(t, "", "session-a"),
store: func(t *testing.T) store.Store {
s := mocks.NewStore(t)
s.EXPECT().Read(mock.Anything, mock.Anything).Return([]*store.Record{
recordSessionB,
}, nil)
return s
},
wantRecords: []*store.Record{},
wantErrs: []error{ErrSuspiciousCacheResult},
},
{
name: "fails if the subject does not match the retrieved record",
suSe: mustNewSuSe(t, "subject-a", ""),
store: func(t *testing.T) store.Store {
s := mocks.NewStore(t)
s.EXPECT().Read(mock.Anything, mock.Anything).Return([]*store.Record{
recordSessionB,
}, nil)
return s
},
wantRecords: []*store.Record{},
wantErrs: []error{ErrSuspiciousCacheResult},
},
// key variation tests
{
name: "key variation: 'session-a'",
suSe: mustNewSuSe(t, "", "session-a"),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{recordSessionA},
},
{
name: "key variation: 'session-b'",
suSe: mustNewSuSe(t, "", "session-b"),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{recordSessionB},
},
{
name: "key variation: 'session-c'",
suSe: mustNewSuSe(t, "", "session-c"),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{recordSubjectASessionC},
},
{
name: "key variation: 'ession-c'",
suSe: mustNewSuSe(t, "", "ession-c"),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{},
wantErrs: []error{store.ErrNotFound},
},
{
name: "key variation: 'subject-a'",
suSe: mustNewSuSe(t, "subject-a", ""),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{recordSubjectASessionC, recordSubjectASessionD},
},
{
name: "key variation: 'subject-'",
suSe: mustNewSuSe(t, "subject-", ""),
store: func(*testing.T) store.Store {
return sessionStore
},
wantRecords: []*store.Record{},
wantErrs: []error{store.ErrNotFound},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
records, err := GetLogoutRecords(tt.suSe, tt.store(t))
for _, wantErr := range tt.wantErrs {
require.ErrorIs(t, err, wantErr)
}
require.Len(t, records, len(tt.wantRecords))
sortRecords := func(r []*store.Record) []*store.Record {
slices.SortFunc(r, func(a, b *store.Record) int {
return strings.Compare(a.Key, b.Key)
})
return r
}
records = sortRecords(records)
for i, wantRecord := range sortRecords(tt.wantRecords) {
require.True(t, len(records) >= i+1)
require.Equal(t, wantRecord.Key, records[i].Key)
require.Equal(t, wantRecord.Value, records[i].Value)
}
})
}
}


@@ -0,0 +1,509 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
package mocks
import (
mock "github.com/stretchr/testify/mock"
"go-micro.dev/v4/store"
)
// NewStore creates a new instance of Store. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewStore(t interface {
mock.TestingT
Cleanup(func())
}) *Store {
mock := &Store{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// Store is an autogenerated mock type for the Store type
type Store struct {
mock.Mock
}
type Store_Expecter struct {
mock *mock.Mock
}
func (_m *Store) EXPECT() *Store_Expecter {
return &Store_Expecter{mock: &_m.Mock}
}
// Close provides a mock function for the type Store
func (_mock *Store) Close() error {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for Close")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func() error); ok {
r0 = returnFunc()
} else {
r0 = ret.Error(0)
}
return r0
}
// Store_Close_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Close'
type Store_Close_Call struct {
*mock.Call
}
// Close is a helper method to define mock.On call
func (_e *Store_Expecter) Close() *Store_Close_Call {
return &Store_Close_Call{Call: _e.mock.On("Close")}
}
func (_c *Store_Close_Call) Run(run func()) *Store_Close_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Store_Close_Call) Return(err error) *Store_Close_Call {
_c.Call.Return(err)
return _c
}
func (_c *Store_Close_Call) RunAndReturn(run func() error) *Store_Close_Call {
_c.Call.Return(run)
return _c
}
// Delete provides a mock function for the type Store
func (_mock *Store) Delete(key string, opts ...store.DeleteOption) error {
var tmpRet mock.Arguments
if len(opts) > 0 {
tmpRet = _mock.Called(key, opts)
} else {
tmpRet = _mock.Called(key)
}
ret := tmpRet
if len(ret) == 0 {
panic("no return value specified for Delete")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(string, ...store.DeleteOption) error); ok {
r0 = returnFunc(key, opts...)
} else {
r0 = ret.Error(0)
}
return r0
}
// Store_Delete_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Delete'
type Store_Delete_Call struct {
*mock.Call
}
// Delete is a helper method to define mock.On call
// - key string
// - opts ...store.DeleteOption
func (_e *Store_Expecter) Delete(key interface{}, opts ...interface{}) *Store_Delete_Call {
return &Store_Delete_Call{Call: _e.mock.On("Delete",
append([]interface{}{key}, opts...)...)}
}
func (_c *Store_Delete_Call) Run(run func(key string, opts ...store.DeleteOption)) *Store_Delete_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 string
if args[0] != nil {
arg0 = args[0].(string)
}
var arg1 []store.DeleteOption
var variadicArgs []store.DeleteOption
if len(args) > 1 {
variadicArgs = args[1].([]store.DeleteOption)
}
arg1 = variadicArgs
run(
arg0,
arg1...,
)
})
return _c
}
func (_c *Store_Delete_Call) Return(err error) *Store_Delete_Call {
_c.Call.Return(err)
return _c
}
func (_c *Store_Delete_Call) RunAndReturn(run func(key string, opts ...store.DeleteOption) error) *Store_Delete_Call {
_c.Call.Return(run)
return _c
}
// Init provides a mock function for the type Store
func (_mock *Store) Init(options ...store.Option) error {
var tmpRet mock.Arguments
if len(options) > 0 {
tmpRet = _mock.Called(options)
} else {
tmpRet = _mock.Called()
}
ret := tmpRet
if len(ret) == 0 {
panic("no return value specified for Init")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(...store.Option) error); ok {
r0 = returnFunc(options...)
} else {
r0 = ret.Error(0)
}
return r0
}
// Store_Init_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Init'
type Store_Init_Call struct {
*mock.Call
}
// Init is a helper method to define mock.On call
// - options ...store.Option
func (_e *Store_Expecter) Init(options ...interface{}) *Store_Init_Call {
return &Store_Init_Call{Call: _e.mock.On("Init",
append([]interface{}{}, options...)...)}
}
func (_c *Store_Init_Call) Run(run func(options ...store.Option)) *Store_Init_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 []store.Option
var variadicArgs []store.Option
if len(args) > 0 {
variadicArgs = args[0].([]store.Option)
}
arg0 = variadicArgs
run(
arg0...,
)
})
return _c
}
func (_c *Store_Init_Call) Return(err error) *Store_Init_Call {
_c.Call.Return(err)
return _c
}
func (_c *Store_Init_Call) RunAndReturn(run func(options ...store.Option) error) *Store_Init_Call {
_c.Call.Return(run)
return _c
}
// List provides a mock function for the type Store
func (_mock *Store) List(opts ...store.ListOption) ([]string, error) {
var tmpRet mock.Arguments
if len(opts) > 0 {
tmpRet = _mock.Called(opts)
} else {
tmpRet = _mock.Called()
}
ret := tmpRet
if len(ret) == 0 {
panic("no return value specified for List")
}
var r0 []string
var r1 error
if returnFunc, ok := ret.Get(0).(func(...store.ListOption) ([]string, error)); ok {
return returnFunc(opts...)
}
if returnFunc, ok := ret.Get(0).(func(...store.ListOption) []string); ok {
r0 = returnFunc(opts...)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]string)
}
}
if returnFunc, ok := ret.Get(1).(func(...store.ListOption) error); ok {
r1 = returnFunc(opts...)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Store_List_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'List'
type Store_List_Call struct {
*mock.Call
}
// List is a helper method to define mock.On call
// - opts ...store.ListOption
func (_e *Store_Expecter) List(opts ...interface{}) *Store_List_Call {
return &Store_List_Call{Call: _e.mock.On("List",
append([]interface{}{}, opts...)...)}
}
func (_c *Store_List_Call) Run(run func(opts ...store.ListOption)) *Store_List_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 []store.ListOption
var variadicArgs []store.ListOption
if len(args) > 0 {
variadicArgs = args[0].([]store.ListOption)
}
arg0 = variadicArgs
run(
arg0...,
)
})
return _c
}
func (_c *Store_List_Call) Return(strings []string, err error) *Store_List_Call {
_c.Call.Return(strings, err)
return _c
}
func (_c *Store_List_Call) RunAndReturn(run func(opts ...store.ListOption) ([]string, error)) *Store_List_Call {
_c.Call.Return(run)
return _c
}
// Options provides a mock function for the type Store
func (_mock *Store) Options() store.Options {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for Options")
}
var r0 store.Options
if returnFunc, ok := ret.Get(0).(func() store.Options); ok {
r0 = returnFunc()
} else {
r0 = ret.Get(0).(store.Options)
}
return r0
}
// Store_Options_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Options'
type Store_Options_Call struct {
*mock.Call
}
// Options is a helper method to define mock.On call
func (_e *Store_Expecter) Options() *Store_Options_Call {
return &Store_Options_Call{Call: _e.mock.On("Options")}
}
func (_c *Store_Options_Call) Run(run func()) *Store_Options_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Store_Options_Call) Return(options store.Options) *Store_Options_Call {
_c.Call.Return(options)
return _c
}
func (_c *Store_Options_Call) RunAndReturn(run func() store.Options) *Store_Options_Call {
_c.Call.Return(run)
return _c
}
// Read provides a mock function for the type Store
func (_mock *Store) Read(key string, opts ...store.ReadOption) ([]*store.Record, error) {
var tmpRet mock.Arguments
if len(opts) > 0 {
tmpRet = _mock.Called(key, opts)
} else {
tmpRet = _mock.Called(key)
}
ret := tmpRet
if len(ret) == 0 {
panic("no return value specified for Read")
}
var r0 []*store.Record
var r1 error
if returnFunc, ok := ret.Get(0).(func(string, ...store.ReadOption) ([]*store.Record, error)); ok {
return returnFunc(key, opts...)
}
if returnFunc, ok := ret.Get(0).(func(string, ...store.ReadOption) []*store.Record); ok {
r0 = returnFunc(key, opts...)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]*store.Record)
}
}
if returnFunc, ok := ret.Get(1).(func(string, ...store.ReadOption) error); ok {
r1 = returnFunc(key, opts...)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Store_Read_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Read'
type Store_Read_Call struct {
*mock.Call
}
// Read is a helper method to define mock.On call
// - key string
// - opts ...store.ReadOption
func (_e *Store_Expecter) Read(key interface{}, opts ...interface{}) *Store_Read_Call {
return &Store_Read_Call{Call: _e.mock.On("Read",
append([]interface{}{key}, opts...)...)}
}
func (_c *Store_Read_Call) Run(run func(key string, opts ...store.ReadOption)) *Store_Read_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 string
if args[0] != nil {
arg0 = args[0].(string)
}
var arg1 []store.ReadOption
var variadicArgs []store.ReadOption
if len(args) > 1 {
variadicArgs = args[1].([]store.ReadOption)
}
arg1 = variadicArgs
run(
arg0,
arg1...,
)
})
return _c
}
func (_c *Store_Read_Call) Return(records []*store.Record, err error) *Store_Read_Call {
_c.Call.Return(records, err)
return _c
}
func (_c *Store_Read_Call) RunAndReturn(run func(key string, opts ...store.ReadOption) ([]*store.Record, error)) *Store_Read_Call {
_c.Call.Return(run)
return _c
}
// String provides a mock function for the type Store
func (_mock *Store) String() string {
ret := _mock.Called()
if len(ret) == 0 {
panic("no return value specified for String")
}
var r0 string
if returnFunc, ok := ret.Get(0).(func() string); ok {
r0 = returnFunc()
} else {
r0 = ret.Get(0).(string)
}
return r0
}
// Store_String_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'String'
type Store_String_Call struct {
*mock.Call
}
// String is a helper method to define mock.On call
func (_e *Store_Expecter) String() *Store_String_Call {
return &Store_String_Call{Call: _e.mock.On("String")}
}
func (_c *Store_String_Call) Run(run func()) *Store_String_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *Store_String_Call) Return(s string) *Store_String_Call {
_c.Call.Return(s)
return _c
}
func (_c *Store_String_Call) RunAndReturn(run func() string) *Store_String_Call {
_c.Call.Return(run)
return _c
}
// Write provides a mock function for the type Store
func (_mock *Store) Write(r *store.Record, opts ...store.WriteOption) error {
var tmpRet mock.Arguments
if len(opts) > 0 {
tmpRet = _mock.Called(r, opts)
} else {
tmpRet = _mock.Called(r)
}
ret := tmpRet
if len(ret) == 0 {
panic("no return value specified for Write")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(*store.Record, ...store.WriteOption) error); ok {
r0 = returnFunc(r, opts...)
} else {
r0 = ret.Error(0)
}
return r0
}
// Store_Write_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Write'
type Store_Write_Call struct {
*mock.Call
}
// Write is a helper method to define mock.On call
// - r *store.Record
// - opts ...store.WriteOption
func (_e *Store_Expecter) Write(r interface{}, opts ...interface{}) *Store_Write_Call {
return &Store_Write_Call{Call: _e.mock.On("Write",
append([]interface{}{r}, opts...)...)}
}
func (_c *Store_Write_Call) Run(run func(r *store.Record, opts ...store.WriteOption)) *Store_Write_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *store.Record
if args[0] != nil {
arg0 = args[0].(*store.Record)
}
var arg1 []store.WriteOption
var variadicArgs []store.WriteOption
if len(args) > 1 {
variadicArgs = args[1].([]store.WriteOption)
}
arg1 = variadicArgs
run(
arg0,
arg1...,
)
})
return _c
}
func (_c *Store_Write_Call) Return(err error) *Store_Write_Call {
_c.Call.Return(err)
return _c
}
func (_c *Store_Write_Call) RunAndReturn(run func(r *store.Record, opts ...store.WriteOption) error) *Store_Write_Call {
_c.Call.Return(run)
return _c
}


@@ -4,16 +4,16 @@
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# ii kaka, 2025
# iikaka88, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2026-02-03 00:13+0000\n"
"POT-Creation-Date: 2026-02-24 00:11+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: ii kaka, 2025\n"
"Last-Translator: iikaka88, 2025\n"
"Language-Team: Japanese (https://app.transifex.com/opencloud-eu/teams/204053/ja/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"


@@ -4,16 +4,16 @@
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
# Translators:
# ii kaka, 2025
# iikaka88, 2025
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: \n"
"Report-Msgid-Bugs-To: EMAIL\n"
"POT-Creation-Date: 2026-02-03 00:13+0000\n"
"POT-Creation-Date: 2026-02-24 00:11+0000\n"
"PO-Revision-Date: 2025-01-27 10:17+0000\n"
"Last-Translator: ii kaka, 2025\n"
"Last-Translator: iikaka88, 2025\n"
"Language-Team: Japanese (https://app.transifex.com/opencloud-eu/teams/204053/ja/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"

vendor/github.com/kovidgoyal/imaging/jpeg/dct.go generated vendored Normal file

@@ -0,0 +1,523 @@
// Copyright 2025 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package jpeg
// Discrete Cosine Transform (DCT) implementations using the algorithm from
// Christoph Loeffler, Adriaan Ligtenberg, and George S. Moschytz,
// "Practical Fast 1-D DCT Algorithms with 11 Multiplications," ICASSP 1989.
// https://ieeexplore.ieee.org/document/266596
//
// Since the paper is paywalled, the rest of this comment gives a summary.
//
// A 1-dimensional forward DCT (1D FDCT) takes as input 8 values x0..x7
// and transforms them in place into the result values.
//
// The mathematical definition of the N-point 1D FDCT is:
//
// X[k] = α_k Σ_n x[n] * cos (2n+1)*k*π/2N
//
// where α₀ = √2 and α_k = 1 for k > 0.
//
// For our purposes, N=8, so the angles end up being multiples of π/16.
// The most direct implementation of this definition would require 64 multiplications.
//
// Loeffler's paper presents a more efficient computation that requires only
// 11 multiplications and works in terms of three basic operations:
//
// - A "butterfly" x0, x1 = x0+x1, x0-x1.
// The inverse is x0, x1 = (x0+x1)/2, (x0-x1)/2.
//
// - A scaling of x0 by k: x0 *= k. The inverse is scaling by 1/k.
//
// - A rotation of x0, x1 by θ, defined as:
// x0, x1 = x0 cos θ + x1 sin θ, -x0 sin θ + x1 cos θ.
// The inverse is rotation by -θ.
//
// The algorithm proceeds in four stages:
//
// Stage 1:
// - butterfly x0, x7; x1, x6; x2, x5; x3, x4.
//
// Stage 2:
// - butterfly x0, x3; x1, x2
// - rotate x4, x7 by 3π/16
// - rotate x5, x6 by π/16.
//
// Stage 3:
// - butterfly x0, x1; x4, x6; x7, x5
// - rotate x2, x3 by 6π/16 and scale by √2.
//
// Stage 4:
// - butterfly x7, x4
// - scale x5, x6 by √2.
//
// Finally, the values are permuted. The permutation can be read as either:
// - x0, x4, x2, x6, x7, x3, x5, x1 = x0, x1, x2, x3, x4, x5, x6, x7 (paper's form)
// - x0, x1, x2, x3, x4, x5, x6, x7 = x0, x7, x2, x5, x1, x6, x3, x4 (sorted by LHS)
//
// The code below uses the second form to make it easier to merge adjacent stores.
// (Note that unlike in recursive FFT implementations, the permutation here is
// not always mapping indexes to their bit reversals.)
//
// As written above, the rotation requires four multiplications, but it can be
// reduced to three by refactoring (see [dctBox] below), and the scaling in
// stage 3 can be merged into the rotation constants, so the overall cost
// of a 1D FDCT is 11 multiplies.
//
// The 1D inverse DCT (IDCT) is the 1D FDCT run backward
// with all the basic operations inverted.
// dctBox implements a 3-multiply, 3-add rotation+scaling.
// Given x0, x1, k*cos θ, and k*sin θ, dctBox returns the
// rotated and scaled coordinates.
// (It is called dctBox because the rotate+scale operation
// is drawn as a box in Figures 1 and 2 in the paper.)
func dctBox(x0, x1, kcos, ksin int32) (y0, y1 int32) {
// y0 = x0*kcos + x1*ksin
// y1 = -x0*ksin + x1*kcos
ksum := kcos * (x0 + x1)
y0 = ksum + (ksin-kcos)*x1
y1 = ksum - (kcos+ksin)*x0
return y0, y1
}
// A block is an 8x8 input to a 2D DCT (either the FDCT or IDCT).
// The input is actually only 8x8 uint8 values, and the outputs are 8x8 int16,
// but it is convenient to use int32s for intermediate storage,
// so we define only a single block type of [8*8]int32.
//
// A 2D DCT is implemented as 1D DCTs over the rows and columns.
//
// dct_test.go defines a String method for nice printing in tests.
type block [blockSize]int32
const blockSize = 8 * 8
// Note on Numerical Precision
//
// The inputs to both the FDCT and IDCT are uint8 values stored in a block,
// and the outputs are int16s in the same block, but the overall operation
// uses int32 values as fixed-point intermediate values.
// In the code comments below, the notation "QN.M" refers to a
// signed value of 1+N+M significant bits, one of which is the sign bit,
// and M of which hold fractional (sub-integer) precision.
// For example, 255 as a Q8.0 value is stored as int32(255),
// while 255 as a Q8.1 value is stored as int32(510),
// and 255.5 as a Q8.1 value is int32(511).
// The notation UQN.M refers to an unsigned value of N+M significant bits.
// See https://en.wikipedia.org/wiki/Q_(number_format) for more.
//
// In general we only need to keep about 16 significant bits, but it is more
// efficient and somewhat more precise to let unnecessary fractional bits
// accumulate and shift them away in bulk rather than after every operation.
// As such, it is important to keep track of the number of fractional bits
// in each variable at different points in the code, to avoid mistakes like
// adding numbers with different fractional precisions, as well as to keep
// track of the total number of bits, to avoid overflow. A comment like:
//
// // x[123] now Q8.2.
//
// means that x1, x2, and x3 are all Q8.2 (11-bit) values.
// Keeping extra precision bits also reduces the size of the errors introduced
// by using right shift to approximate rounded division.
// Constants needed for the implementation.
// These are all 60-bit precision fixed-point constants.
// The function c(val, b) rounds the constant to b bits.
// c is simple enough that calls to it with constant args
// are inlined and constant-propagated down to an inline constant.
// Each constant is commented with its Ivy definition (see robpike.io/ivy),
// using this scaling helper function:
//
// op fix x = floor 0.5 + x * 2**60
const (
cos1 = 1130768441178740757 // fix cos 1*pi/16
sin1 = 224923827593068887 // fix sin 1*pi/16
cos3 = 958619196450722178 // fix cos 3*pi/16
sin3 = 640528868967736374 // fix sin 3*pi/16
sqrt2 = 1630477228166597777 // fix sqrt 2
sqrt2_cos6 = 623956622067911264 // fix (sqrt 2)*cos 6*pi/16
sqrt2_sin6 = 1506364539328854985 // fix (sqrt 2)*sin 6*pi/16
sqrt2inv = 815238614083298888 // fix 1/sqrt 2
sqrt2inv_cos6 = 311978311033955632 // fix (1/sqrt 2)*cos 6*pi/16
sqrt2inv_sin6 = 753182269664427492 // fix (1/sqrt 2)*sin 6*pi/16
)
func c(x uint64, bits int) int32 {
return int32((x + (1 << (59 - bits))) >> (60 - bits))
}
// fdct implements the forward DCT.
// Inputs are UQ8.0; outputs are Q13.0.
func fdct(b *block) {
fdctCols(b)
fdctRows(b)
}
// fdctCols applies the 1D DCT to the columns of b.
// Inputs are UQ8.0 in [0,255] but interpreted as [-128,127].
// Outputs are Q10.18.
func fdctCols(b *block) {
for i := range 8 {
x0 := b[0*8+i]
x1 := b[1*8+i]
x2 := b[2*8+i]
x3 := b[3*8+i]
x4 := b[4*8+i]
x5 := b[5*8+i]
x6 := b[6*8+i]
x7 := b[7*8+i]
// x[01234567] are UQ8.0 in [0,255].
// Stage 1: four butterflies.
// In general a butterfly of QN.M inputs produces Q(N+1).M outputs.
// A butterfly of UQN.M inputs produces a UQ(N+1).M sum and a QN.M difference.
x0, x7 = x0+x7, x0-x7
x1, x6 = x1+x6, x1-x6
x2, x5 = x2+x5, x2-x5
x3, x4 = x3+x4, x3-x4
// x[0123] now UQ9.0 in [0, 510].
// x[4567] now Q8.0 in [-255,255].
// Stage 2: two boxes and two butterflies.
// A box on QN.M inputs with B-bit constants
// produces Q(N+1).(M+B) outputs.
// (The +1 is from the addition.)
x4, x7 = dctBox(x4, x7, c(cos3, 18), c(sin3, 18))
x5, x6 = dctBox(x5, x6, c(cos1, 18), c(sin1, 18))
// x[47] now Q9.18 in [-354, 354].
// x[56] now Q9.18 in [-300, 300].
x0, x3 = x0+x3, x0-x3
x1, x2 = x1+x2, x1-x2
// x[01] now UQ10.0 in [0, 1020].
// x[23] now Q9.0 in [-510, 510].
// Stage 3: one box and three butterflies.
x2, x3 = dctBox(x2, x3, c(sqrt2_cos6, 18), c(sqrt2_sin6, 18))
// x[23] now Q10.18 in [-943, 943].
x0, x1 = x0+x1, x0-x1
// x0 now UQ11.0 in [0, 2040].
// x1 now Q10.0 in [-1020, 1020].
// Store x0, x1, x2, x3 to their permuted targets.
// The original +128 in every input value
// has cancelled out except in the "DC signal" x0.
// Subtracting 128*8 here is equivalent to subtracting 128
// from every input before we started, but cheaper.
// It also converts x0 from UQ11.18 to Q10.18.
b[0*8+i] = (x0 - 128*8) << 18
b[4*8+i] = x1 << 18
b[2*8+i] = x2
b[6*8+i] = x3
x4, x6 = x4+x6, x4-x6
x7, x5 = x7+x5, x7-x5
// x[4567] now Q10.18 in [-654, 654].
// Stage 4: two √2 scalings and one butterfly.
x5 = (x5 >> 12) * c(sqrt2, 12)
x6 = (x6 >> 12) * c(sqrt2, 12)
// x[56] still Q10.18 in [-925, 925] (= 654√2).
x7, x4 = x7+x4, x7-x4
// x[47] still Q10.18 in [-925, 925] (not Q11.18!).
// This is not obvious at all! See "Note on 925" below.
// Store x4 x5 x6 x7 to their permuted targets.
b[1*8+i] = x7
b[3*8+i] = x5
b[5*8+i] = x6
b[7*8+i] = x4
}
}
// fdctRows applies the 1D DCT to the rows of b.
// Inputs are Q10.18; outputs are Q13.0.
func fdctRows(b *block) {
for i := range 8 {
x := b[8*i : 8*i+8 : 8*i+8]
x0 := x[0]
x1 := x[1]
x2 := x[2]
x3 := x[3]
x4 := x[4]
x5 := x[5]
x6 := x[6]
x7 := x[7]
// x[01234567] are Q10.18 [-1020, 1020].
// Stage 1: four butterflies.
x0, x7 = x0+x7, x0-x7
x1, x6 = x1+x6, x1-x6
x2, x5 = x2+x5, x2-x5
x3, x4 = x3+x4, x3-x4
// x[01234567] now Q11.18 in [-2040, 2040].
// Stage 2: two boxes and two butterflies.
x4, x7 = dctBox(x4>>14, x7>>14, c(cos3, 14), c(sin3, 14))
x5, x6 = dctBox(x5>>14, x6>>14, c(cos1, 14), c(sin1, 14))
// x[47] now Q12.18 in [-2830, 2830].
// x[56] now Q12.18 in [-2400, 2400].
x0, x3 = x0+x3, x0-x3
x1, x2 = x1+x2, x1-x2
// x[01234567] now Q12.18 in [-4080, 4080].
// Stage 3: one box and three butterflies.
x2, x3 = dctBox(x2>>14, x3>>14, c(sqrt2_cos6, 14), c(sqrt2_sin6, 14))
// x[23] now Q13.18 in [-7539, 7539].
x0, x1 = x0+x1, x0-x1
// x[01] now Q13.18 in [-8160, 8160].
x4, x6 = x4+x6, x4-x6
x7, x5 = x7+x5, x7-x5
// x[4567] now Q13.18 in [-5230, 5230].
// Stage 4: two √2 scalings and one butterfly.
x5 = (x5 >> 14) * c(sqrt2, 14)
x6 = (x6 >> 14) * c(sqrt2, 14)
// x[56] still Q13.18 in [-7397, 7397] (= 5230√2).
x7, x4 = x7+x4, x7-x4
// x[47] still Q13.18 in [-7395, 7395] (= 2040*3.6246).
// See "Note on 925" below.
// Cut from Q13.18 to Q13.0.
x0 = (x0 + 1<<17) >> 18
x1 = (x1 + 1<<17) >> 18
x2 = (x2 + 1<<17) >> 18
x3 = (x3 + 1<<17) >> 18
x4 = (x4 + 1<<17) >> 18
x5 = (x5 + 1<<17) >> 18
x6 = (x6 + 1<<17) >> 18
x7 = (x7 + 1<<17) >> 18
// Note: Unlike in fdctCols, all stores are saved for the end,
// because they are adjacent memory locations and some systems
// can use multiword stores.
x[0] = x0
x[1] = x7
x[2] = x2
x[3] = x5
x[4] = x1
x[5] = x6
x[6] = x3
x[7] = x4
}
}
// "Note on 925", deferred from above to avoid interrupting code.
//
// In fdctCols, heading into stage 2, the values x4, x5, x6, x7 are in [-255, 255].
// Let's call those specific values b4, b5, b6, b7, and trace how x[4567] evolve:
//
// Stage 2:
//
// x4 = b4*cos3 + b7*sin3
// x7 = -b4*sin3 + b7*cos3
// x5 = b5*cos1 + b6*sin1
// x6 = -b5*sin1 + b6*cos1
//
// Stage 3:
//
// x4 = x4+x6 = b4*cos3 + b7*sin3 - b5*sin1 + b6*cos1
// x6 = x4-x6 = b4*cos3 + b7*sin3 + b5*sin1 - b6*cos1
// x7 = x7+x5 = -b4*sin3 + b7*cos3 + b5*cos1 + b6*sin1
// x5 = x7-x5 = -b4*sin3 + b7*cos3 - b5*cos1 - b6*sin1
//
// Stage 4:
//
// x7 = x7+x4 = -b4*sin3 + b7*cos3 + b5*cos1 + b6*sin1 + b4*cos3 + b7*sin3 - b5*sin1 + b6*cos1
// = b4*(cos3-sin3) + b5*(cos1-sin1) + b6*(cos1+sin1) + b7*(cos3+sin3)
// < 255*(0.2759 + 0.7857 + 1.1759 + 1.3871) = 255*3.6246 < 925.
//
// x4 = x7-x4 = -b4*sin3 + b7*cos3 + b5*cos1 + b6*sin1 - b4*cos3 - b7*sin3 + b5*sin1 - b6*cos1
// = -b4*(cos3+sin3) + b5*(cos1+sin1) + b6*(sin1-cos1) + b7*(cos3-sin3)
// < same 925.
//
// The fact that x5, x6 are also at most 925 is not a coincidence: we are computing
// the same kinds of numbers for all four, just with different paths to them.
//
// In fdctRows, the same analysis applies, but the initial values are
// in [-2040, 2040] instead of [-255, 255], so the bound is 2040*3.6246 < 7395.
// idct implements the inverse DCT.
// Inputs are UQ8.0; outputs are Q10.3.
func idct(b *block) {
// A 2D IDCT is a 1D IDCT on rows followed by columns.
idctRows(b)
idctCols(b)
}
// idctRows applies the 1D IDCT to the rows of b.
// Inputs are UQ8.0; outputs are Q9.20.
func idctRows(b *block) {
for i := range 8 {
x := b[8*i : 8*i+8 : 8*i+8]
x0 := x[0]
x7 := x[1]
x2 := x[2]
x5 := x[3]
x1 := x[4]
x6 := x[5]
x3 := x[6]
x4 := x[7]
// Run FDCT backward.
// Independent operations have been reordered somewhat
// to make precision tracking easier.
//
// Note that "x0, x1 = x0+x1, x0-x1" is now a reverse butterfly
// and carries with it an implicit divide by two: the extra bit
// is added to the precision, not the value size.
// x[01234567] are UQ8.0 in [0, 255].
// Stages 4, 3, 2: x0, x1, x2, x3.
x0 <<= 17
x1 <<= 17
// x0, x1 now UQ8.17.
x0, x1 = x0+x1, x0-x1
// x0 now UQ8.18 in [0, 255].
// x1 now Q7.18 in [-127½, 127½].
// Note: (1/sqrt 2)*((cos 6*pi/16)+(sin 6*pi/16)) < 0.924, so no new high bit.
x2, x3 = dctBox(x2, x3, c(sqrt2inv_cos6, 18), -c(sqrt2inv_sin6, 18))
// x[23] now Q8.18 in [-236, 236].
x1, x2 = x1+x2, x1-x2
x0, x3 = x0+x3, x0-x3
// x[0123] now Q8.19 in [-246, 246].
// Stages 4, 3, 2: x4, x5, x6, x7.
x4 <<= 7
x7 <<= 7
// x[47] now UQ8.7
x7, x4 = x7+x4, x7-x4
// x7 now UQ8.8 in [0, 255].
// x4 now Q7.8 in [-127½, 127½].
x6 = x6 * c(sqrt2inv, 8)
x5 = x5 * c(sqrt2inv, 8)
// x[56] now UQ8.8 in [0, 181].
// Note that 1/√2 has five 0s in its binary representation after
// the 8th bit, so this multiply is actually producing 12 bits of precision.
x7, x5 = x7+x5, x7-x5
x4, x6 = x4+x6, x4-x6
// x[4567] now Q8.9 in [-218, 218].
x4, x7 = dctBox(x4>>2, x7>>2, c(cos3, 12), -c(sin3, 12))
x5, x6 = dctBox(x5>>2, x6>>2, c(cos1, 12), -c(sin1, 12))
// x[4567] now Q9.19 in [-303, 303].
// Stage 1.
x0, x7 = x0+x7, x0-x7
x1, x6 = x1+x6, x1-x6
x2, x5 = x2+x5, x2-x5
x3, x4 = x3+x4, x3-x4
// x[01234567] now Q9.20 in [-275, 275].
// Note: we don't need all 20 bits of "precision",
// but it is faster to let idctCols shift it away as part
// of other operations rather than downshift here.
x[0] = x0
x[1] = x1
x[2] = x2
x[3] = x3
x[4] = x4
x[5] = x5
x[6] = x6
x[7] = x7
}
}
// idctCols applies the 1D IDCT to the columns of b.
// Inputs are Q9.20.
// Outputs are Q10.3. That is, the result is the IDCT*8.
func idctCols(b *block) {
for i := range 8 {
x0 := b[0*8+i]
x7 := b[1*8+i]
x2 := b[2*8+i]
x5 := b[3*8+i]
x1 := b[4*8+i]
x6 := b[5*8+i]
x3 := b[6*8+i]
x4 := b[7*8+i]
// x[01234567] are Q9.20.
// Start by adding 0.5 to x0 (the incoming DC signal).
// The butterflies will add it to all the other values,
// and then the final shifts will round properly.
x0 += 1 << 19
// Stages 4, 3, 2: x0, x1, x2, x3.
x0, x1 = (x0+x1)>>2, (x0-x1)>>2
// x[01] now Q9.19.
// Note: (1/sqrt 2)*((cos 6*pi/16)+(sin 6*pi/16)) < 1, so no new high bit.
x2, x3 = dctBox(x2>>13, x3>>13, c(sqrt2inv_cos6, 12), -c(sqrt2inv_sin6, 12))
// x[0123] now Q9.19.
x1, x2 = x1+x2, x1-x2
x0, x3 = x0+x3, x0-x3
// x[0123] now Q9.20.
// Stages 4, 3, 2: x4, x5, x6, x7.
x7, x4 = x7+x4, x7-x4
// x[47] now Q9.21.
x5 = (x5 >> 13) * c(sqrt2inv, 14)
x6 = (x6 >> 13) * c(sqrt2inv, 14)
// x[56] now Q9.21.
x7, x5 = x7+x5, x7-x5
x4, x6 = x4+x6, x4-x6
// x[4567] now Q9.22.
x4, x7 = dctBox(x4>>14, x7>>14, c(cos3, 12), -c(sin3, 12))
x5, x6 = dctBox(x5>>14, x6>>14, c(cos1, 12), -c(sin1, 12))
// x[4567] now Q10.20.
x0, x7 = x0+x7, x0-x7
x1, x6 = x1+x6, x1-x6
x2, x5 = x2+x5, x2-x5
x3, x4 = x3+x4, x3-x4
// x[01234567] now Q10.21.
x0 >>= 18
x1 >>= 18
x2 >>= 18
x3 >>= 18
x4 >>= 18
x5 >>= 18
x6 >>= 18
x7 >>= 18
// x[01234567] now Q10.3.
b[0*8+i] = x0
b[1*8+i] = x1
b[2*8+i] = x2
b[3*8+i] = x3
b[4*8+i] = x4
b[5*8+i] = x5
b[6*8+i] = x6
b[7*8+i] = x7
}
}
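The Qm.n comments in idctCols describe a fixed-point layout: m integer bits and n fractional bits packed into an int32. As an illustrative sketch (the helper names here are hypothetical, not part of this package), the `x0 += 1 << 19` trick on the DC term adds half an ulp in Q9.20 so the final right shifts round to nearest rather than truncating:

```go
package main

import "fmt"

// toQ encodes a float as fixed point with n fractional bits (Qm.n).
func toQ(v float64, n uint) int32 {
	return int32(v * float64(int32(1)<<n))
}

// roundShift drops n fractional bits, rounding to nearest by first
// adding half an ulp (1 << (n-1)) -- the same idea as x0 += 1<<19
// for Q9.20 values in idctCols.
func roundShift(x int32, n uint) int32 {
	return (x + 1<<(n-1)) >> n
}

func main() {
	fmt.Println(roundShift(toQ(2.75, 20), 20)) // 2.75 rounds up to 3
	fmt.Println(roundShift(toQ(2.25, 20), 20)) // 2.25 rounds down to 2
}
```

Without the added half, both shifts would floor toward negative infinity, biasing every output sample downward.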


@@ -1,194 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package jpeg
// This is a Go translation of idct.c from
//
// http://standards.iso.org/ittf/PubliclyAvailableStandards/ISO_IEC_13818-4_2004_Conformance_Testing/Video/verifier/mpeg2decode_960109.tar.gz
//
// which carries the following notice:
/* Copyright (C) 1996, MPEG Software Simulation Group. All Rights Reserved. */
/*
* Disclaimer of Warranty
*
* These software programs are available to the user without any license fee or
* royalty on an "as is" basis. The MPEG Software Simulation Group disclaims
* any and all warranties, whether express, implied, or statuary, including any
* implied warranties or merchantability or of fitness for a particular
* purpose. In no event shall the copyright-holder be liable for any
* incidental, punitive, or consequential damages of any kind whatsoever
* arising from the use of these programs.
*
* This disclaimer of warranty extends to the user of these programs and user's
* customers, employees, agents, transferees, successors, and assigns.
*
* The MPEG Software Simulation Group does not represent or warrant that the
* programs furnished hereunder are free of infringement of any third-party
* patents.
*
* Commercial implementations of MPEG-1 and MPEG-2 video, including shareware,
* are subject to royalty fees to patent holders. Many of these patents are
* general enough such that they are unavoidable regardless of implementation
* design.
*
*/
const blockSize = 64 // A DCT block is 8x8.
type block [blockSize]int32
const (
w1 = 2841 // 2048*sqrt(2)*cos(1*pi/16)
w2 = 2676 // 2048*sqrt(2)*cos(2*pi/16)
w3 = 2408 // 2048*sqrt(2)*cos(3*pi/16)
w5 = 1609 // 2048*sqrt(2)*cos(5*pi/16)
w6 = 1108 // 2048*sqrt(2)*cos(6*pi/16)
w7 = 565 // 2048*sqrt(2)*cos(7*pi/16)
w1pw7 = w1 + w7
w1mw7 = w1 - w7
w2pw6 = w2 + w6
w2mw6 = w2 - w6
w3pw5 = w3 + w5
w3mw5 = w3 - w5
r2 = 181 // 256/sqrt(2)
)
// idct performs a 2-D Inverse Discrete Cosine Transformation.
//
// The input coefficients should already have been multiplied by the
// appropriate quantization table. We use fixed-point computation, with the
// number of bits for the fractional component varying over the intermediate
// stages.
//
// For more on the actual algorithm, see Z. Wang, "Fast algorithms for the
// discrete W transform and for the discrete Fourier transform", IEEE Trans. on
// ASSP, Vol. ASSP-32, pp. 803-816, Aug. 1984.
func idct(src *block) {
// Horizontal 1-D IDCT.
for y := range 8 {
y8 := y * 8
s := src[y8 : y8+8 : y8+8] // Small cap improves performance, see https://golang.org/issue/27857
// If all the AC components are zero, then the IDCT is trivial.
if s[1] == 0 && s[2] == 0 && s[3] == 0 &&
s[4] == 0 && s[5] == 0 && s[6] == 0 && s[7] == 0 {
dc := s[0] << 3
s[0] = dc
s[1] = dc
s[2] = dc
s[3] = dc
s[4] = dc
s[5] = dc
s[6] = dc
s[7] = dc
continue
}
// Prescale.
x0 := (s[0] << 11) + 128
x1 := s[4] << 11
x2 := s[6]
x3 := s[2]
x4 := s[1]
x5 := s[7]
x6 := s[5]
x7 := s[3]
// Stage 1.
x8 := w7 * (x4 + x5)
x4 = x8 + w1mw7*x4
x5 = x8 - w1pw7*x5
x8 = w3 * (x6 + x7)
x6 = x8 - w3mw5*x6
x7 = x8 - w3pw5*x7
// Stage 2.
x8 = x0 + x1
x0 -= x1
x1 = w6 * (x3 + x2)
x2 = x1 - w2pw6*x2
x3 = x1 + w2mw6*x3
x1 = x4 + x6
x4 -= x6
x6 = x5 + x7
x5 -= x7
// Stage 3.
x7 = x8 + x3
x8 -= x3
x3 = x0 + x2
x0 -= x2
x2 = (r2*(x4+x5) + 128) >> 8
x4 = (r2*(x4-x5) + 128) >> 8
// Stage 4.
s[0] = (x7 + x1) >> 8
s[1] = (x3 + x2) >> 8
s[2] = (x0 + x4) >> 8
s[3] = (x8 + x6) >> 8
s[4] = (x8 - x6) >> 8
s[5] = (x0 - x4) >> 8
s[6] = (x3 - x2) >> 8
s[7] = (x7 - x1) >> 8
}
// Vertical 1-D IDCT.
for x := range 8 {
// Similar to the horizontal 1-D IDCT case, if all the AC components are zero, then the IDCT is trivial.
// However, after performing the horizontal 1-D IDCT, there are typically non-zero AC components, so
// we do not bother to check for the all-zero case.
s := src[x : x+57 : x+57] // Small cap improves performance, see https://golang.org/issue/27857
// Prescale.
y0 := (s[8*0] << 8) + 8192
y1 := s[8*4] << 8
y2 := s[8*6]
y3 := s[8*2]
y4 := s[8*1]
y5 := s[8*7]
y6 := s[8*5]
y7 := s[8*3]
// Stage 1.
y8 := w7*(y4+y5) + 4
y4 = (y8 + w1mw7*y4) >> 3
y5 = (y8 - w1pw7*y5) >> 3
y8 = w3*(y6+y7) + 4
y6 = (y8 - w3mw5*y6) >> 3
y7 = (y8 - w3pw5*y7) >> 3
// Stage 2.
y8 = y0 + y1
y0 -= y1
y1 = w6*(y3+y2) + 4
y2 = (y1 - w2pw6*y2) >> 3
y3 = (y1 + w2mw6*y3) >> 3
y1 = y4 + y6
y4 -= y6
y6 = y5 + y7
y5 -= y7
// Stage 3.
y7 = y8 + y3
y8 -= y3
y3 = y0 + y2
y0 -= y2
y2 = (r2*(y4+y5) + 128) >> 8
y4 = (r2*(y4-y5) + 128) >> 8
// Stage 4.
s[8*0] = (y7 + y1) >> 14
s[8*1] = (y3 + y2) >> 14
s[8*2] = (y0 + y4) >> 14
s[8*3] = (y8 + y6) >> 14
s[8*4] = (y8 - y6) >> 14
s[8*5] = (y0 - y4) >> 14
s[8*6] = (y3 - y2) >> 14
s[8*7] = (y7 - y1) >> 14
}
}


@@ -303,7 +303,9 @@ func (d *decoder) processSOS(n int) error {
// SOS markers are processed.
continue
}
d.reconstructBlock(&b, bx, by, int(compIndex))
if err := d.reconstructBlock(&b, bx, by, int(compIndex)); err != nil {
return err
}
} // for j
} // for i
mcu++
@@ -455,23 +457,15 @@ func (d *decoder) reconstructProgressiveImage() error {
stride := mxx * d.comp[i].h
for by := 0; by*v < d.height; by++ {
for bx := 0; bx*h < d.width; bx++ {
d.reconstructBlock(&d.progCoeffs[i][by*stride+bx], bx, by, i)
if err := d.reconstructBlock(&d.progCoeffs[i][by*stride+bx], bx, by, i); err != nil {
return err
}
}
}
}
return nil
}
func level_shift(c int32) uint8 {
if c < -128 {
return 0
}
if c > 127 {
return 255
}
return uint8(c + 128)
}
func (d *decoder) storeFlexBlock(b *block, bx, by, compIndex int) {
h, v := d.comp[compIndex].expand.h, d.comp[compIndex].expand.v
dst, stride := []byte(nil), 0
@@ -490,7 +484,15 @@ func (d *decoder) storeFlexBlock(b *block, bx, by, compIndex int) {
y8 := y * 8
yv := y * v
for x := range 8 {
val := level_shift(b[y8+x])
c := b[y8+x]
var val uint8
if c < -128 {
val = 0
} else if c > 127 {
val = 255
} else {
val = uint8(c + 128)
}
xh := x * h
for yy := range v {
for xx := range h {
@@ -503,7 +505,7 @@ func (d *decoder) storeFlexBlock(b *block, bx, by, compIndex int) {
// reconstructBlock dequantizes, performs the inverse DCT and stores the block
// to the image.
func (d *decoder) reconstructBlock(b *block, bx, by, compIndex int) {
func (d *decoder) reconstructBlock(b *block, bx, by, compIndex int) error {
qt := &d.quant[d.comp[compIndex].tq]
for zig := range blockSize {
b[unzig[zig]] *= qt[zig]
@@ -515,7 +517,7 @@ func (d *decoder) reconstructBlock(b *block, bx, by, compIndex int) {
} else {
if d.flex {
d.storeFlexBlock(b, bx, by, compIndex)
return
return nil
}
switch compIndex {
case 0:
@@ -526,6 +528,8 @@ func (d *decoder) reconstructBlock(b *block, bx, by, compIndex int) {
dst, stride = d.img3.Cr[8*(by*d.img3.CStride+bx):], d.img3.CStride
case 3:
dst, stride = d.blackPix[8*(by*d.blackStride+bx):], d.blackStride
default:
return UnsupportedError("too many components")
}
}
// Level shift by +128, clip to [0, 255], and write to dst.
@@ -533,9 +537,18 @@ func (d *decoder) reconstructBlock(b *block, bx, by, compIndex int) {
y8 := y * 8
yStride := y * stride
for x := range 8 {
dst[yStride+x] = level_shift(b[y8+x])
c := b[y8+x]
if c < -128 {
c = 0
} else if c > 127 {
c = 255
} else {
c += 128
}
dst[yStride+x] = uint8(c)
}
}
return nil
}
// findRST advances past the next RST restart marker that matches expectedRST.
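The clamp inlined above replaces the deleted level_shift helper. Extracted back into a standalone function for illustration, the level shift and clip behave like this:

```go
package main

import "fmt"

// levelShift mirrors the inlined clamp in reconstructBlock: shift a
// centered IDCT sample by +128 and clip to the byte range [0, 255].
func levelShift(c int32) uint8 {
	if c < -128 {
		return 0
	}
	if c > 127 {
		return 255
	}
	return uint8(c + 128)
}

func main() {
	for _, c := range []int32{-200, -128, 0, 127, 300} {
		fmt.Println(levelShift(c)) // prints 0, 0, 128, 255, 255
	}
}
```

Inlining it at both call sites keeps the hot per-pixel loop free of a function call while preserving the same saturation behavior.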


@@ -5,7 +5,7 @@ import os
import subprocess
VERSION = "1.8.19"
VERSION = "1.8.20"
def run(*args: str):

vendor/modules.txt

@@ -910,7 +910,7 @@ github.com/kovidgoyal/go-parallel
# github.com/kovidgoyal/go-shm v1.0.0
## explicit; go 1.24.0
github.com/kovidgoyal/go-shm
# github.com/kovidgoyal/imaging v1.8.19
# github.com/kovidgoyal/imaging v1.8.20
## explicit; go 1.24.0
github.com/kovidgoyal/imaging
github.com/kovidgoyal/imaging/apng
@@ -2477,7 +2477,7 @@ golang.org/x/exp/slices
golang.org/x/exp/slog
golang.org/x/exp/slog/internal
golang.org/x/exp/slog/internal/buffer
# golang.org/x/image v0.35.0
# golang.org/x/image v0.36.0
## explicit; go 1.24.0
golang.org/x/image/bmp
golang.org/x/image/ccitt