Mirror of https://github.com/mudler/LocalAI.git, synced 2026-05-17 13:10:23 -04:00
* fix(http): close 0.0.0.0/[::] SSRF bypass in /api/cors-proxy

The CORS proxy carried its own private-network blocklist (RFC 1918 + a handful of IPv6 ranges) instead of using the same classification as pkg/utils/urlfetch.go. The hand-rolled list missed 0.0.0.0/8 and ::/128, both of which Linux routes to localhost — so any user with FeatureMCP (default-on for new users) could reach LocalAI's own listener and any other service bound to 0.0.0.0:port via:

    GET /api/cors-proxy?url=http://0.0.0.0:8080/...
    GET /api/cors-proxy?url=http://[::]:8080/...

Replace the custom check with utils.IsPublicIP (Go stdlib IsLoopback / IsLinkLocalUnicast / IsPrivate / IsUnspecified, plus IPv4-mapped IPv6 unmasking) and add an upfront hostname rejection for localhost, *.local, and the cloud metadata aliases so split-horizon DNS can't paper over the IP check. The IP-pinning DialContext is unchanged: the validated IP from the single resolution is reused for the connection, so DNS rebinding still cannot swap a public answer for a private one between validate and dial.

Regression tests cover 0.0.0.0, 0.0.0.0:PORT, [::], ::ffff:127.0.0.1, ::ffff:10.0.0.1, file://, gopher://, ftp://, localhost, 127.0.0.1, 10.0.0.1, 169.254.169.254, metadata.google.internal.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(downloader): verify SHA before promoting temp file to final path

DownloadFileWithContext renamed the .partial file to its final name *before* checking the streamed SHA, so a hash mismatch returned an error but left the tampered file at filePath. Subsequent code that operated on filePath (a backend launcher, a YAML loader, a re-download that finds the file already present and skips) would consume the attacker-supplied bytes.

Reorder: verify the streamed hash first, remove the .partial on mismatch, then rename. The streamed hash is computed during io.Copy so no second read is needed.

While here, raise the empty-SHA case from a Debug log to a Warn so "this download had no integrity check" is visible at the default log level. Backend installs currently pass through with no digest; the warning makes that footprint observable without changing behaviour.

Regression test asserts os.IsNotExist on the destination after a deliberate SHA mismatch.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(auth): require email_verified for OIDC admin promotion

extractOIDCUserInfo read the ID token's "email" claim but never inspected "email_verified". With LOCALAI_ADMIN_EMAIL set, an attacker who could register on the configured OIDC IdP under that email (some IdPs accept self-supplied unverified emails) inherited admin role:

- first login: AssignRole(tx, email, adminEmail) → RoleAdmin
- re-login: MaybePromote(db, user, adminEmail) → flip to RoleAdmin

Add EmailVerified to oauthUserInfo, parse email_verified from the OIDC claims (default false on absence so an IdP that omits the claim cannot short-circuit the gate), and substitute "" for the role-decision email when verified=false via emailForRoleDecision. The user record still stores the unverified email for display.

GitHub's path defaults EmailVerified=true: GitHub only returns a public profile email after verification, and fetchGitHubPrimaryEmail explicitly filters to Verified=true.

Regression tests cover both the helper contract and integration with AssignRole, including the bootstrap "first user" branch that would otherwise mask the gate.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(cli): refuse public bind when no auth backend is configured

When neither an auth DB nor a static API key is set, the auth middleware passes every request through. That is fine for a developer laptop, a home LAN, or a Tailnet — the network itself is the trust boundary. It is not fine on a public IP, where every model install, settings change, and admin endpoint becomes reachable from the internet.

Refuse to start in that exact configuration. Loopback, RFC 1918, RFC 4193 ULA, link-local, and RFC 6598 CGNAT (Tailscale's default range) all count as trusted; wildcard binds (`:port`, `0.0.0.0`, `[::]`) are accepted only when every host interface is in one of those ranges. Hostnames are resolved and treated as trusted only when every answer is.

A new --allow-insecure-public-bind / LOCALAI_ALLOW_INSECURE_PUBLIC_BIND flag opts out for deployments that gate access externally (a reverse proxy enforcing auth, a mesh ACL, etc.). The error message lists this plus the three constructive alternatives (bind a private interface, enable --auth, set --api-keys).

The interface enumeration goes through a package-level interfaceAddrsFn var so tests can simulate cloud-VM, home-LAN, Tailscale-only, and enumeration-failure topologies without poking at the real network stack.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* test(http): regression-test the localai_assistant admin gate

ChatEndpoint already rejects metadata.localai_assistant=true from a non-admin caller, but the gate was open-coded inline with no direct test coverage. The chat route is FeatureChat-gated (default-on), and the assistant's in-process MCP server can install/delete models and edit configs — the wrong handler change would silently turn the LLM into a confused deputy.

Extract the gate into requireAssistantAccess(c, authEnabled) and pin its behaviour: auth disabled is a no-op, unauthenticated is 403, RoleUser is 403, RoleAdmin and the synthetic legacy-key admin are admitted. No behaviour change in the production path.
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* test(http): assert every API route is auth-classified

The auth middleware classifies path prefixes (/api/, /v1/, /models/, etc.) as protected and treats anything else as a static-asset passthrough. A new endpoint shipped under a brand-new prefix — or a new path that simply isn't on the prefix allowlist — would be reachable anonymously.

Walk every route registered by API() with auth enabled and a fresh in-memory database (no users, no keys), and assert each API-prefixed route returns 401 / 404 / 405 to an anonymous request. Public surfaces (/api/auth/*, /api/branding, /api/node/* token-authenticated routes, /healthz, branding asset server, generated-content server, static assets) are explicit allowlist entries with comments justifying them.

Build-tagged 'auth' so it runs against the SQLite-backed auth DB (matches the existing auth suite).

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* test(http): pin agent endpoint per-user isolation contract

agents.go's getUserID / effectiveUserID / canImpersonateUser / wantsAllUsers helpers are the single trust boundary for cross-user access on agent, agent-jobs, collections, and skills routes. A regression there is the difference between "regular user reads their own data" and "regular user reads anyone's data via ?user_id=victim".

Lock in the contract:

- effectiveUserID ignores ?user_id= for unauthenticated and RoleUser
- effectiveUserID honours it for RoleAdmin and ProviderAgentWorker
- wantsAllUsers requires admin AND the literal "true" string
- canImpersonateUser is admin OR agent-worker, never plain RoleUser

No production change — this commit only adds tests.
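The impersonation contract those tests pin can be illustrated with a minimal sketch. The role names match the commit text, but the function body is an assumption for illustration, not LocalAI's implementation (which also admits ProviderAgentWorker):

```go
package main

import "fmt"

// Illustrative role constants; the real ones live in LocalAI's auth package.
const (
	RoleUser  = "user"
	RoleAdmin = "admin"
)

// effectiveUserID sketches the pinned contract: the ?user_id= override is
// honoured only for admins; unauthenticated callers and RoleUser are locked
// to their own ID. Hypothetical body for illustration only.
func effectiveUserID(authedID, role, requestedID string) string {
	if requestedID != "" && role == RoleAdmin {
		return requestedID
	}
	return authedID
}

func main() {
	fmt.Println(effectiveUserID("alice", RoleUser, "victim")) // RoleUser keeps own ID
	fmt.Println(effectiveUserID("root", RoleAdmin, "victim")) // admin may act for another user
}
```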
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(downloader): drop redundant stat in removePartialFile

The stat-then-remove pattern is a TOCTOU window and a wasted syscall — os.Remove already returns ErrNotExist for the missing-file case, so trust that and treat it as a no-op.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(http): redact secrets from trace buffer and distribution-token logs

The /api/traces buffer captured Authorization, Cookie, Set-Cookie, and API-key headers verbatim from every request when tracing was enabled. The endpoint is admin-only but the buffer is reachable via any heap-style introspection and the captured tokens otherwise outlive the request. Strip those header values at capture time. Body redaction is left to a follow-up — the prompts are usually the operator's own and JSON-walking is invasive.

Distribution tokens were also logged in plaintext from core/explorer/discovery.go; logs forward to syslog/journald and outlive the token. Redact those to a short prefix/suffix instead.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(auth): rate-limit OAuth callbacks separately from password endpoints

The shared 5/min/IP limit on auth endpoints is right for password-style flows but too tight for OAuth callbacks: corporate SSO funnels many real users through one outbound IP and would trip the limit. Add a separate 60/min/IP limiter for /api/auth/{github,oidc}/callback so callbacks are bounded against floods without breaking shared-IP deployments.
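The removePartialFile shape described above can be sketched as follows, assuming only that the helper treats "already gone" as success:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// removePartialFile deletes path, treating a missing file as a no-op.
// os.Remove returns an error wrapping fs.ErrNotExist when the file is
// absent, so no prior stat is needed and the TOCTOU window disappears.
func removePartialFile(path string) error {
	if err := os.Remove(path); err != nil && !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	return nil
}

func main() {
	// Removing a file that was never created is not an error.
	fmt.Println(removePartialFile("/tmp/definitely-missing.partial")) // <nil>
}
```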
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(gallery): verify backend tarball sha256 when set in gallery entry

GalleryBackend gained an optional sha256 field; the install path now threads it through to the existing downloader hash-verify (which already streams, verifies, and rolls back on mismatch). Galleries without sha256 keep working; the empty-SHA path still emits the existing "downloading without integrity check" warning.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* test(http): pin CSRF coverage on multipart endpoints

The CSRF middleware in app.go is global (e.Use) so it covers every multipart upload route — branding assets, fine-tune datasets, audio transforms, agent collections. Pin that contract: cross-site multipart POSTs are rejected; same-origin / same-site / API-key clients are not. Also pins the SameSite=Lax fallback path the skipper relies on when Sec-Fetch-Site is absent.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(http): XSS hardening — CSP headers, safe href, base-href escape, SVG sandbox

Several closely related XSS-prevention changes spanning the SPA shell, the React UI, and the branding asset server:

- New SecurityHeaders middleware sets CSP, X-Content-Type-Options, X-Frame-Options, and Referrer-Policy on every response. The CSP keeps script-src permissive because the Vite bundle relies on inline + eval'd scripts; tightening that requires moving to a nonce-based policy.
- The <base href> injection in the SPA shell interpolated attacker-controllable Host / X-Forwarded-Host headers without escaping — a single quote in the host header broke out of the attribute. Pass through SecureBaseHref (html.EscapeString).
- Three React sinks rendering untrusted content via dangerouslySetInnerHTML switch to text-node rendering with whiteSpace: pre-wrap: user message bodies in Chat.jsx and AgentChat.jsx, and the agent activity log in AgentChat.jsx. The hand-rolled escape on the agent user-message variant is replaced by the same plain-text path.
- New safeHref util collapses non-allowlisted URI schemes (most importantly javascript:) to '#'. Applied to gallery `<a href={url}>` links in Models / Backends / Manage and to canvas artifact links — these come from gallery JSON or assistant tool calls and must be treated as untrusted.
- The branding asset server attaches a sandbox CSP plus same-origin CORP to .svg responses. The React UI loads logos via <img>, but the same URL is also reachable via direct navigation; this prevents script execution if a hostile SVG slipped past upload validation.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(http): bound HTTP server with read-header and idle timeouts

A net/http server with no timeouts is trivially Slowloris-able and leaks idle keep-alive connections. Set ReadHeaderTimeout (30s) to plug the slow-headers attack and IdleTimeout (120s) to cap keep-alive sockets. ReadTimeout and WriteTimeout stay at 0 because request bodies can be multi-GB model uploads and SSE / chat completions stream for many minutes; operators who need tighter per-request bounds should terminate slow clients at a reverse proxy.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* test(auth): pin PUT /api/auth/profile field-tampering contract

The handler uses an explicit local body struct (only name and avatar_url) plus a gorm Updates(map) with a column allowlist, so an attacker posting {"role":"admin","email":"...","password_hash":"..."} can't mass-assign those fields. Lock that down with a regression test so a future "let's just c.Bind(&user)" refactor breaks loudly.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(services): strip directory components from multipart upload filenames

UploadDataset and UploadToCollectionForUser took the raw multipart file.Filename and joined it into a destination path. The fine-tune upload was incidentally safe because of a UUID prefix that fused any leading '..' to a literal segment, but the protection is fragile. UploadToCollectionForUser handed the filename to a vendored backend without sanitising at all.

Strip to filepath.Base at both boundaries and reject the trivial unsafe values ("", ".", "..", "/").

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(react-ui): validate persisted MCP server entries on load

localStorage is shared across same-origin pages; an XSS that lands once can poison persisted MCP server config to attempt header injection or to feed a non-http URL into the fetch path on subsequent loads. Validate every entry: types must match, URL must parse with http(s) scheme, header keys/values must be control-char-free. Drop anything that doesn't fit.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(http): close X-Forwarded-Prefix open redirect

The reverse-proxy support concatenated X-Forwarded-Prefix into the redirect target without validation, so a forged header value of "//evil.com" turned the SPA-shell redirect helper at /, /browse, and /browse/* into a 301 to //evil.com/app. The path-strip middleware had the same shape on its prefix-trailing-slash redirect.

Add SafeForwardedPrefix at the middleware boundary: must start with a single '/', no protocol-relative '//' opener, no scheme, no backslash, no control characters. Apply at both consumers; misconfig trips the validator and the header is dropped.
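The SafeForwardedPrefix rules listed above can be sketched as a predicate. This is a hedged approximation of the described checks (the scheme ban is expressed here by rejecting ':'), not the exact implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// safeForwardedPrefix approximates the commit's rules: must start with a
// single '/', no protocol-relative "//" opener, no scheme, no backslash,
// no control characters.
func safeForwardedPrefix(p string) bool {
	if !strings.HasPrefix(p, "/") || strings.HasPrefix(p, "//") {
		return false
	}
	for _, r := range p {
		if r < 0x20 || r == 0x7f || r == '\\' || r == ':' {
			return false
		}
	}
	return true
}

func main() {
	for _, p := range []string{"/localai", "//evil.com", "/a\\b", "http://evil"} {
		fmt.Printf("%-12q %v\n", p, safeForwardedPrefix(p))
	}
}
```

A forged "//evil.com" fails the protocol-relative check, so the redirect helper never sees it and the header is dropped.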
Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(http): refuse wildcard CORS when LOCALAI_CORS=true with empty allowlist

When LOCALAI_CORS=true but LOCALAI_CORS_ALLOW_ORIGINS was empty, Echo's CORSWithConfig saw an empty allow-list and fell back to its default AllowOrigins=["*"]. An operator who flipped the strict-CORS feature flag without populating the list got the opposite of what they asked for. Echo never sets Allow-Credentials: true so this isn't directly exploitable (cookies aren't sent under wildcard CORS), but the misconfiguration trap is worth closing. Skip the registration and warn.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(auth): zxcvbn password strength check with user-acknowledged override

The previous policy was len < 8, which let through "Password1" and the rest of the credential-stuffing corpus. LocalAI has no second factor yet, so the bar needs to sit higher.

Add ValidatePasswordStrength using github.com/timbutler/zxcvbn (an actively-maintained fork of the trustelem port; v1.0.4, April 2024):

- min 12 chars, max 72 (bcrypt's truncation point)
- reject NUL bytes (some bcrypt callers truncate at the first NUL)
- require zxcvbn score >= 3 ("safely unguessable, ~10^8 guesses to break"); the hint list ["localai", "local-ai", "admin"] penalises passwords built from the app's own branding

zxcvbn produces false positives sometimes (a strong-looking password that happens to match a dictionary word) and operators occasionally need to set a known-weak password (kiosk demos, CI rigs). Add an acknowledgement path: PasswordPolicy{AllowWeak: true} skips the entropy check while still enforcing the hard rules. The structured PasswordErrorResponse marks weak-password rejections as Overridable so the UI can surface a "use this anyway" checkbox.

Wired through register, self-service password change, and admin password reset on both the server and the React UI.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(react-ui): drop HTML5 minLength on new-password inputs

minLength={12} on the new-password input let the browser block the form submit silently before any JS or network call ran. The browser focused the field, showed a brief native tooltip, and that was that — no toast, no fetch, no clue. Reproducible by typing fewer than 12 chars on the second password change of a session.

The JS-level length check in handleSubmit already shows a toast and the server rejects with a structured error, so the HTML5 attribute was redundant defence anyway. Drop it.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(react-ui): bundle Geist fonts locally instead of fetching from Google

The new CSP correctly refused to apply styles from fonts.googleapis.com because style-src is locked to 'self' and 'unsafe-inline'. Loosening the CSP would defeat its purpose; the right fix is to stop reaching out to a third-party CDN for fonts on every page load.

Add @fontsource-variable/geist and @fontsource-variable/geist-mono as npm deps and import them once at boot. Drop the <link rel="preconnect"> and external stylesheet from index.html.

Side benefit: no third-party tracking via Referer / IP on every UI load, no failure mode when offline / behind a captive portal.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(react-ui): refresh i18n strings to reflect 12-char password minimum

The translations still said "at least 8 characters" everywhere — the client-side toast on a too-short password change told the user the wrong floor. Update tooShort and newPasswordPlaceholder / newPasswordDescription across all five locales (en, es, it, de, zh-CN) to match the real ValidatePasswordStrength rule.

Assisted-by: Claude:claude-opus-4-7 [Claude Code]
Signed-off-by: Richard Palethorpe <io@richiejp.com>

* feat(auth): make password length-floor overridable like the entropy check

The 12-char minimum was a policy choice, not a technical invariant — only "non-empty", "<= 72 bytes", and "no NUL bytes" are real bcrypt constraints. Treating length-12 as a hard rule was inconsistent with the entropy check (already overridable) and friction for use cases where the account is just a name on a session, not a security boundary (single-user kiosk, CI rig, lab demo).

Restructure ValidatePasswordStrength:

- Hard rules (always enforced): non-empty, <= MaxPasswordLength, no NUL byte
- Policy rules (skipped when AllowWeak=true): length >= 12, zxcvbn score >= 3

PasswordError now marks password_too_short as Overridable too. The React forms generalised from `error_code === 'password_too_weak'` to `overridable === true`, and the JS-side preflight length checks were removed (server is source of truth, returns the same checkbox flow).

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
469 lines
13 KiB
Go
package auth

import (
	"context"
	"crypto/rand"
	"crypto/subtle"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"

	"github.com/coreos/go-oidc/v3/oidc"
	"github.com/google/uuid"
	"github.com/labstack/echo/v4"
	"github.com/mudler/xlog"
	"golang.org/x/oauth2"
	githubOAuth "golang.org/x/oauth2/github"
	"gorm.io/gorm"
)

// providerEntry holds the OAuth2/OIDC config for a single provider.
type providerEntry struct {
	oauth2Config oauth2.Config
	oidcVerifier *oidc.IDTokenVerifier // nil for GitHub (API-based user info)
	name         string
	userInfoURL  string // only used for GitHub
}

// oauthUserInfo is a provider-agnostic representation of an authenticated user.
// EmailVerified MUST reflect upstream verification: AssignRole compares Email
// against the configured admin email, so an unverified claim of a privileged
// address must not be honoured. Callers that cannot prove verification set
// EmailVerified=false.
type oauthUserInfo struct {
	Subject       string
	Email         string
	EmailVerified bool
	Name          string
	AvatarURL     string
}

// OAuthManager manages multiple OAuth/OIDC providers.
type OAuthManager struct {
	providers map[string]*providerEntry
}

// OAuthParams groups the parameters needed to create an OAuthManager.
type OAuthParams struct {
	GitHubClientID     string
	GitHubClientSecret string
	OIDCIssuer         string
	OIDCClientID       string
	OIDCClientSecret   string
}

// NewOAuthManager creates an OAuthManager from the given params.
func NewOAuthManager(baseURL string, params OAuthParams) (*OAuthManager, error) {
	m := &OAuthManager{providers: make(map[string]*providerEntry)}

	if params.GitHubClientID != "" {
		m.providers[ProviderGitHub] = &providerEntry{
			name: ProviderGitHub,
			oauth2Config: oauth2.Config{
				ClientID:     params.GitHubClientID,
				ClientSecret: params.GitHubClientSecret,
				Endpoint:     githubOAuth.Endpoint,
				RedirectURL:  baseURL + "/api/auth/github/callback",
				Scopes:       []string{"user:email", "read:user"},
			},
			userInfoURL: "https://api.github.com/user",
		}
	}

	if params.OIDCClientID != "" && params.OIDCIssuer != "" {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		provider, err := oidc.NewProvider(ctx, params.OIDCIssuer)
		if err != nil {
			return nil, fmt.Errorf("OIDC discovery failed for %s: %w", params.OIDCIssuer, err)
		}

		verifier := provider.Verifier(&oidc.Config{ClientID: params.OIDCClientID})

		m.providers[ProviderOIDC] = &providerEntry{
			name: ProviderOIDC,
			oauth2Config: oauth2.Config{
				ClientID:     params.OIDCClientID,
				ClientSecret: params.OIDCClientSecret,
				Endpoint:     provider.Endpoint(),
				RedirectURL:  baseURL + "/api/auth/oidc/callback",
				Scopes:       []string{oidc.ScopeOpenID, "profile", "email"},
			},
			oidcVerifier: verifier,
		}
	}

	return m, nil
}

// Providers returns the list of configured provider names.
func (m *OAuthManager) Providers() []string {
	names := make([]string, 0, len(m.providers))
	for name := range m.providers {
		names = append(names, name)
	}
	return names
}

// LoginHandler redirects the user to the OAuth provider's login page.
func (m *OAuthManager) LoginHandler(providerName string) echo.HandlerFunc {
	return func(c echo.Context) error {
		provider, ok := m.providers[providerName]
		if !ok {
			return c.JSON(http.StatusNotFound, map[string]string{"error": "unknown provider"})
		}

		state, err := generateState()
		if err != nil {
			return c.JSON(http.StatusInternalServerError, map[string]string{"error": "failed to generate state"})
		}

		secure := isSecure(c)
		c.SetCookie(&http.Cookie{
			Name:     "oauth_state",
			Value:    state,
			Path:     "/",
			HttpOnly: true,
			Secure:   secure,
			SameSite: http.SameSiteLaxMode,
			MaxAge:   600, // 10 minutes
		})

		// Store invite code in cookie if provided
		if inviteCode := c.QueryParam("invite_code"); inviteCode != "" {
			c.SetCookie(&http.Cookie{
				Name:     "invite_code",
				Value:    inviteCode,
				Path:     "/",
				HttpOnly: true,
				Secure:   secure,
				SameSite: http.SameSiteLaxMode,
				MaxAge:   600,
			})
		}

		url := provider.oauth2Config.AuthCodeURL(state)
		return c.Redirect(http.StatusTemporaryRedirect, url)
	}
}

// CallbackHandler handles the OAuth callback, creates/updates the user, and
// creates a session.
func (m *OAuthManager) CallbackHandler(providerName string, db *gorm.DB, adminEmail, registrationMode, hmacSecret string) echo.HandlerFunc {
	return func(c echo.Context) error {
		provider, ok := m.providers[providerName]
		if !ok {
			return c.JSON(http.StatusNotFound, map[string]string{"error": "unknown provider"})
		}

		// Validate state
		stateCookie, err := c.Cookie("oauth_state")
		if err != nil || stateCookie.Value == "" || subtle.ConstantTimeCompare([]byte(stateCookie.Value), []byte(c.QueryParam("state"))) != 1 {
			return c.JSON(http.StatusBadRequest, map[string]string{"error": "invalid OAuth state"})
		}

		// Clear state cookie
		c.SetCookie(&http.Cookie{
			Name:     "oauth_state",
			Value:    "",
			Path:     "/",
			HttpOnly: true,
			Secure:   isSecure(c),
			MaxAge:   -1,
		})

		// Exchange code for token
		code := c.QueryParam("code")
		if code == "" {
			return c.JSON(http.StatusBadRequest, map[string]string{"error": "missing authorization code"})
		}

		ctx, cancel := context.WithTimeout(c.Request().Context(), 30*time.Second)
		defer cancel()

		token, err := provider.oauth2Config.Exchange(ctx, code)
		if err != nil {
			xlog.Error("OAuth code exchange failed", "provider", providerName, "error", err)
			return c.JSON(http.StatusBadRequest, map[string]string{"error": "OAuth authentication failed"})
		}

		// Fetch user info — branch based on provider type
		var userInfo *oauthUserInfo
		if provider.oidcVerifier != nil {
			userInfo, err = extractOIDCUserInfo(ctx, provider.oidcVerifier, token)
		} else {
			userInfo, err = fetchGitHubUserInfoAsOAuth(ctx, token.AccessToken)
		}
		if err != nil {
			return c.JSON(http.StatusInternalServerError, map[string]string{"error": "failed to fetch user info"})
		}

		// Retrieve invite code from cookie if present
		var inviteCode string
		if ic, err := c.Cookie("invite_code"); err == nil && ic.Value != "" {
			inviteCode = ic.Value
			// Clear the invite code cookie
			c.SetCookie(&http.Cookie{
				Name:     "invite_code",
				Value:    "",
				Path:     "/",
				HttpOnly: true,
				Secure:   isSecure(c),
				MaxAge:   -1,
			})
		}

		// Check if user already exists
		var existingUser User
		userExists := db.Where("provider = ? AND subject = ?", providerName, userInfo.Subject).First(&existingUser).Error == nil

		var user *User
		if userExists {
			// Existing user — update profile fields, no invite gating needed
			existingUser.Name = userInfo.Name
			existingUser.AvatarURL = userInfo.AvatarURL
			if userInfo.Email != "" {
				existingUser.Email = strings.ToLower(strings.TrimSpace(userInfo.Email))
			}
			db.Save(&existingUser)
			user = &existingUser
		} else {
			// New user — validate invite BEFORE creating, inside a transaction
			var validInvite *InviteCode
			txErr := db.Transaction(func(tx *gorm.DB) error {
				email := ""
				if userInfo.Email != "" {
					email = strings.ToLower(strings.TrimSpace(userInfo.Email))
				}

				// roleEmail is what AssignRole and NeedsInviteOrApproval
				// use to short-circuit on admin-email matches. Pass the
				// unverified-email-substituted form so an IdP-supplied
				// copy of LOCALAI_ADMIN_EMAIL doesn't bypass either gate.
				roleEmail := emailForRoleDecision(email, userInfo.EmailVerified)
				role := AssignRole(tx, roleEmail, adminEmail)
				status := StatusActive

				if NeedsInviteOrApproval(tx, roleEmail, adminEmail, registrationMode) {
					if registrationMode == "invite" {
						if inviteCode == "" {
							return fmt.Errorf("invite_required")
						}
						invite, err := ValidateInvite(tx, inviteCode, hmacSecret)
						if err != nil {
							return fmt.Errorf("invalid_invite")
						}
						validInvite = invite
						status = StatusActive
					} else {
						// approval mode — create as pending
						status = StatusPending
					}
				}

				newUser := User{
					ID:        uuid.New().String(),
					Email:     email,
					Name:      userInfo.Name,
					AvatarURL: userInfo.AvatarURL,
					Provider:  providerName,
					Subject:   userInfo.Subject,
					Role:      role,
					Status:    status,
				}
				if err := tx.Create(&newUser).Error; err != nil {
					return fmt.Errorf("failed to create user: %w", err)
				}

				if validInvite != nil {
					ConsumeInvite(tx, validInvite, newUser.ID)
				}

				user = &newUser
				return nil
			})

			if txErr != nil {
				msg := txErr.Error()
				if msg == "invite_required" {
					return c.Redirect(http.StatusTemporaryRedirect, "/login?error=invite_required")
				}
				if msg == "invalid_invite" {
					return c.Redirect(http.StatusTemporaryRedirect, "/login?error=invalid_invite")
				}
				return c.JSON(http.StatusInternalServerError, map[string]string{"error": "failed to create user"})
			}
		}

		if user.Status != StatusActive {
			return c.JSON(http.StatusForbidden, map[string]string{"error": "account pending approval"})
		}

		// Same gate as roleEmail above: only verified emails can flip an
		// existing user to admin via the LOCALAI_ADMIN_EMAIL match.
		if userInfo.EmailVerified {
			MaybePromote(db, user, adminEmail)
		}

		// Create session
		sessionID, err := CreateSession(db, user.ID, hmacSecret)
		if err != nil {
			return c.JSON(http.StatusInternalServerError, map[string]string{"error": "failed to create session"})
		}

		SetSessionCookie(c, sessionID)
		return c.Redirect(http.StatusTemporaryRedirect, "/app")
	}
}

// extractOIDCUserInfo extracts user info from the OIDC ID token.
func extractOIDCUserInfo(ctx context.Context, verifier *oidc.IDTokenVerifier, token *oauth2.Token) (*oauthUserInfo, error) {
	rawIDToken, ok := token.Extra("id_token").(string)
	if !ok || rawIDToken == "" {
		return nil, fmt.Errorf("no id_token in token response")
	}

	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		return nil, fmt.Errorf("failed to verify ID token: %w", err)
	}

	var claims struct {
		Sub           string `json:"sub"`
		Email         string `json:"email"`
		EmailVerified *bool  `json:"email_verified"`
		Name          string `json:"name"`
		Picture       string `json:"picture"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return nil, fmt.Errorf("failed to parse ID token claims: %w", err)
	}

	// Default to false on absence: an IdP that doesn't issue the claim is
	// not asserting verification, and we must not promote on its email.
	verified := claims.EmailVerified != nil && *claims.EmailVerified

	return &oauthUserInfo{
		Subject:       claims.Sub,
		Email:         claims.Email,
		EmailVerified: verified,
		Name:          claims.Name,
		AvatarURL:     claims.Picture,
	}, nil
}

type githubUserInfo struct {
	ID        int    `json:"id"`
	Login     string `json:"login"`
	Name      string `json:"name"`
	Email     string `json:"email"`
	AvatarURL string `json:"avatar_url"`
}

type githubEmail struct {
	Email    string `json:"email"`
	Primary  bool   `json:"primary"`
	Verified bool   `json:"verified"`
}

// fetchGitHubUserInfoAsOAuth fetches GitHub user info and returns it as oauthUserInfo.
// GitHub only surfaces verified emails (public profile email and the
// /user/emails Verified=true filter), so a non-empty email is always verified.
func fetchGitHubUserInfoAsOAuth(ctx context.Context, accessToken string) (*oauthUserInfo, error) {
	info, err := fetchGitHubUserInfo(ctx, accessToken)
	if err != nil {
		return nil, err
	}
	return &oauthUserInfo{
		Subject:       fmt.Sprintf("%d", info.ID),
		Email:         info.Email,
		EmailVerified: info.Email != "",
		Name:          info.Name,
		AvatarURL:     info.AvatarURL,
	}, nil
}

func fetchGitHubUserInfo(ctx context.Context, accessToken string) (*githubUserInfo, error) {
	client := &http.Client{Timeout: 10 * time.Second}

	req, _ := http.NewRequestWithContext(ctx, "GET", "https://api.github.com/user", nil)
	req.Header.Set("Authorization", "Bearer "+accessToken)
	req.Header.Set("Accept", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	var info githubUserInfo
	if err := json.Unmarshal(body, &info); err != nil {
		return nil, err
	}

	// If no public email, fetch from /user/emails
	if info.Email == "" {
		info.Email, _ = fetchGitHubPrimaryEmail(ctx, accessToken)
	}

	return &info, nil
}

func fetchGitHubPrimaryEmail(ctx context.Context, accessToken string) (string, error) {
	client := &http.Client{Timeout: 10 * time.Second}

	req, _ := http.NewRequestWithContext(ctx, "GET", "https://api.github.com/user/emails", nil)
	req.Header.Set("Authorization", "Bearer "+accessToken)
	req.Header.Set("Accept", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	var emails []githubEmail
	if err := json.Unmarshal(body, &emails); err != nil {
		return "", err
	}

	for _, e := range emails {
		if e.Primary && e.Verified {
			return e.Email, nil
		}
	}

	// Fall back to first verified email
	for _, e := range emails {
		if e.Verified {
			return e.Email, nil
		}
	}

	return "", fmt.Errorf("no verified email found")
}

func generateState() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}