LocalAI/core/http/endpoints/mcp/tools.go
Ettore Di Giacinto bcef72b9c1 feat: localai assistant chat modality (#9602)
* fix(tests): inline model_test fixtures after tests/models_fixtures removal

The previous reorg removed tests/models_fixtures/ but core/config/model_test.go
still read CONFIG_FILE/MODELS_PATH env vars pointing into that directory, so
`make test` failed with "open : no such file or directory" on the readConfigFile
spec (the suite ran with --fail-fast and bailed before openresponses_test).

Inline the YAMLs (config/embeddings/grpc/rwkv/whisper) directly into the test
file, materialise them into a per-test tmpdir via BeforeEach, and drop the
env-var lookups. The test no longer depends on Makefile plumbing.
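
A minimal sketch of the new pattern (the fixture and variable names here are
illustrative, not the exact test code):

    var configFile string

    BeforeEach(func() {
        // Materialise one of the inlined YAML fixtures into a per-test tmpdir.
        tmpdir := GinkgoT().TempDir()
        configFile = filepath.Join(tmpdir, "config.yaml")
        Expect(os.WriteFile(configFile, []byte(testConfigYAML), 0o600)).To(Succeed())
    })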

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: claude-code:claude-opus-4-7 [Edit] [Write] [Bash]

* refactor(modeladmin): extract model-admin helpers into a service package

Lift the bodies of EditModelEndpoint, PatchConfigEndpoint,
ToggleStateModelEndpoint, TogglePinnedModelEndpoint and
VRAMEstimateEndpoint into core/services/modeladmin so the same logic can
be called by non-HTTP clients (notably the in-process MCP server that
backs the LocalAI Assistant chat modality, landing in a follow-up commit).

The HTTP handlers shrink to thin shells that parse echo inputs, call the
matching helper, map typed errors (ErrNotFound, ErrConflict,
ErrPathNotTrusted, ErrBadAction, ...) to the existing HTTP status codes,
and render the existing response shapes. No REST-surface behaviour change;
the existing localai endpoint tests serve as the regression net.
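
Roughly, each handler reduces to parse, call, map. One plausible shape of the
error mapping (the sentinel names are the ones listed above; the concrete
status codes shown are assumptions, the handlers keep whatever they returned
before):

    func statusForModelAdminError(err error) int {
        switch {
        case errors.Is(err, modeladmin.ErrNotFound):
            return http.StatusNotFound
        case errors.Is(err, modeladmin.ErrConflict):
            return http.StatusConflict
        case errors.Is(err, modeladmin.ErrPathNotTrusted),
            errors.Is(err, modeladmin.ErrBadAction):
            return http.StatusBadRequest
        default:
            return http.StatusInternalServerError
        }
    }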

Adds focused unit tests for each helper against tmp-dir-backed
ModelConfigLoader fixtures (deep-merge patch, rename + conflict, path
separator guard, toggle/pin enable/disable, sync callback).

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* feat(assistant): LocalAI Assistant chat modality with in-memory MCP server

Adds a chat modality, admin-only, that wires the chat session to an
in-memory MCP server exposing LocalAI's own admin/management surface as
tools. An admin can install models, manage backends, edit configs and
check status by chatting; the LLM calls tools like gallery_search,
install_model, import_model_uri, list_installed_models, edit_model_config
and surfaces the results.

The same Go package powers two modes:

  pkg/mcp/localaitools/

    NewServer(client, opts) builds an MCP server that registers the
    19-tool admin catalog. The LocalAIClient interface (sketched after
    this block) has two impls:

    - inproc.Client — calls services directly (no HTTP loopback,
      no synthetic admin API key). Used in-process by the chat handler.
    - httpapi.Client — calls the LocalAI REST API. Used by the new
      `local-ai mcp-server --target=…` subcommand to control a remote
      LocalAI from a stdio MCP host.

    Tools and their embedded skill prompts are agnostic to which client
    backs them. Skill prompts are markdown files under prompts/, embedded
    via go:embed and assembled into the system prompt at server init.
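
The seam between the two modes, roughly (method list trimmed; exact signatures
and return types live in pkg/mcp/localaitools, ModelStatus is a placeholder
here):

    // Every tool is written against this narrow interface.
    type LocalAIClient interface {
        ListInstalledModels(ctx context.Context, capability string) ([]ModelStatus, error)
        InstallModel(ctx context.Context, name string) (string, error)
        // ...rest of the admin/management surface
    }

    // Both modes build the same server:
    //   srv := localaitools.NewServer(client, opts)
    // where client is an inproc.Client (chat modality) or an httpapi.Client
    // (the `local-ai mcp-server` subcommand).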

Wiring:

  - core/http/endpoints/mcp/localai_assistant.go — process-wide holder
    that spins up the in-memory MCP server once at Application start
    using paired net.Pipe transports, then reuses LocalToolExecutor
    (no fork) for every chat request that opts in.

  - core/http/endpoints/openai/chat.go — small branch ahead of the
    existing MCP block: when metadata.localai_assistant=true,
    defense-in-depth admin check + executor swap + system-prompt
    injection (sketched after this list). All downstream tool dispatch
    is unchanged.

  - core/http/auth/{permissions,features}.go — adds
    FeatureLocalAIAssistant; gating happens at the chat handler entry
    plus admin-only `/api/settings`.

  - core/cli/{run.go,cli.go,mcp_server.go} —
    LOCALAI_DISABLE_ASSISTANT flag (runtime-toggleable via Settings, no
    restart), plus `local-ai mcp-server` stdio subcommand.

  - core/config/runtime_settings.go — `localai_assistant_enabled`
    runtime setting; the chat handler reads `DisableLocalAIAssistant`
    live at request entry.
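
The opt-in branch amounts to the following; only LocalAIAssistantFromMetadata
is the real name, the holder/auth/prompt identifiers are illustrative:

    if mcp.LocalAIAssistantFromMetadata(input.Metadata) {
        // Defense-in-depth: the admin/feature gate already ran at the route.
        if !requestIsAdmin {
            return errForbidden
        }
        toolExecutor = assistantHolder.Executor() // reuse the in-memory MCP server
        systemPrompt = assistantPrompt + "\n\n" + systemPrompt
    }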

UI:

  - Home.jsx — prominent self-explanatory CTA card on first run
    ("Manage LocalAI by chatting"); collapses to a compact
    "Manage by chat" button in the quick-links row once used,
    persisted via localStorage.
  - Chat.jsx — admin-only "Manage" toggle in the chat header,
    "Manage mode" badge, dedicated empty-state copy, starter chips.
  - Settings.jsx — "LocalAI Assistant" section with the runtime
    enable toggle.
  - useChat.js — `localaiAssistant` flag on the chat schema; injects
    `metadata.localai_assistant=true` on requests when active.

Distributed mode: the in-memory MCP server lives only on the head node;
inproc.Client wraps already-distributed-aware services so installs
propagate to workers via the existing GalleryService machinery.

Documentation: `.agents/localai-assistant-mcp.md` is the contributor
contract — when adding an admin REST endpoint, also add a LocalAIClient
method, an inproc + httpapi impl, a tool registration, and a skill
prompt update; the AGENTS.md index links to it.

Out of scope (follow-ups): per-tool RBAC granularity for non-admin
read-only access; streaming mcp_tool_progress for long installs;
React Vitest rig for the UI changes.

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor(assistant): extract tool/capability/MiB/server-name constants

The MCP tool surface, capability tag set, server-name default, and the
chat-handler metadata key were repeated as bare string literals across
seven files. Renaming any one required hand-editing every call site and
risked code/test/prompt drift.

This pulls them into typed constants:

- pkg/mcp/localaitools/tools.go — Tool* constants for the 19 MCP tools,
  plus DefaultServerName.
- pkg/mcp/localaitools/capability.go — typed Capability + constants for
  the capability tag set the LLM passes to list_installed_models. The
  type rides through LocalAIClient.ListInstalledModels and replaces the
  triplet of "embed"/"embedding"/"embeddings" with the single
  CapabilityEmbeddings.
- pkg/mcp/localaitools/inproc/client.go — bytesPerMiB constant for the
  VRAMEstimate byte→MB conversion.
- core/http/endpoints/mcp/tools.go — MetadataKeyLocalAIAssistant for the
  "localai_assistant" request-metadata key consumed by the chat handler.

Tool registrations, the test catalog, the dispatch table, the validation
fixtures, and the fake/stub clients all reference the constants. The
embedded skill prompts under prompts/ keep their bare strings (go:embed
markdown can't import Go constants); the existing
TestPromptsContainSafetyAnchors test guards the alignment.

No behaviour change. All tests pass with -race.

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor(modeladmin): typed Action for ToggleState/TogglePinned

The toggle/pin verbs were bare strings everywhere — handler signatures,
service implementations, MCP tool args, the fake/stub clients, the
inproc and httpapi LocalAIClient impls, plus 4 test files. A typo in
any caller silently fell through to the runtime "must be 'enable' or
'disable'" check.

Introduce core/services/modeladmin.Action (string alias) with
ActionEnable, ActionDisable, ActionPin, ActionUnpin and a small Valid
helper. The compiler now catches mismatches at every boundary; renames
ripple through one source of truth.

LocalAIClient.ToggleModelState/Pinned signatures change to take
modeladmin.Action. The package is brand-new and unreleased so this is
a free public-API tightening.
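
Sketch of the type (the enable/disable values come from the old runtime check;
the pin/unpin values and Valid's exact shape are assumptions):

    type Action string

    const (
        ActionEnable  Action = "enable"
        ActionDisable Action = "disable"
        ActionPin     Action = "pin"
        ActionUnpin   Action = "unpin"
    )

    // Valid reports whether a is one of the known verbs.
    func (a Action) Valid() bool {
        switch a {
        case ActionEnable, ActionDisable, ActionPin, ActionUnpin:
            return true
        }
        return false
    }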

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(assistant): respect ctx cancellation on gallery channel sends

InstallModel, DeleteModel, ImportModelURI, InstallBackend and
UpgradeBackend all pushed onto galleryop channels with bare sends. If the
worker was paused or the buffer full, the chat-handler goroutine blocked
forever — the LLM kept polling and the request leaked.

Wrap the five sends in a sendModelOp/sendBackendOp helper that selects
on ctx.Done() so a cancelled chat completion surfaces context.Canceled
back to the LLM instead of hanging.
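
The helper is essentially a ctx-aware send, shown generically here since the
concrete galleryop channel/op types aren't spelled out in this message:

    func sendModelOp[T any](ctx context.Context, ch chan<- T, op T) error {
        select {
        case ch <- op:
            return nil
        case <-ctx.Done():
            // A cancelled chat completion surfaces context.Canceled instead of
            // blocking forever on a paused worker or a full buffer.
            return ctx.Err()
        }
    }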

Adds inproc/client_test.go with a pre-cancelled-ctx regression test on
InstallModel; the helpers are shared so the same guarantee covers the
other four call sites.

Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(assistant): graceful shutdown for in-memory holder and stdio CLI

Two related leaks:

- Application.start() built the LocalAIAssistantHolder but never wired
  Close() into the graceful-termination chain — the in-memory MCP
  transport pair stayed alive until process exit, and the goroutines
  behind net.Pipe() didn't drain. Hook into the existing
  signals.RegisterGracefulTerminationHandler chain (same pattern as
  core/http/endpoints/mcp/tools.go:770).

- core/cli/mcp_server.go ran srv.Run with context.Background(); a
  Ctrl-C from the host (Claude Desktop, mcphost, npx inspector) or a
  SIGTERM from process supervision left the stdio loop reading from a
  closed pipe. Switch to signal.NotifyContext to surface the signal
  through ctx and let srv.Run drain.
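
The stdio subcommand now uses the standard pattern (srv and transport are
whatever the command already builds):

    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()
    // Ctrl-C / SIGTERM cancels ctx, so srv.Run drains instead of reading a closed pipe.
    return srv.Run(ctx, transport)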

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(assistant): typed HTTPError + propagate prompt walk error

The httpapi client detected "no such job" by substring-matching on the
error string ("404", "could not find") — brittle to status-code
formatting changes and to LocalAI fixing /models/jobs/:uuid to return a
proper 404. Replace with a typed *HTTPError whose Is() method honours
errors.Is(err, ErrHTTPNotFound). The 500-with-"could not find" branch
stays as a transitional fallback documented in Is().
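
Shape of the typed error (field names and the sentinel wiring are illustrative;
the transitional fallback is the part documented in Is()):

    type HTTPError struct {
        StatusCode int
        Body       string
    }

    var ErrHTTPNotFound = &HTTPError{StatusCode: http.StatusNotFound}

    func (e *HTTPError) Error() string {
        return fmt.Sprintf("localai API returned HTTP %d: %s", e.StatusCode, e.Body)
    }

    // Is lets callers write errors.Is(err, ErrHTTPNotFound) instead of
    // substring-matching on the message.
    func (e *HTTPError) Is(target error) bool {
        if target != ErrHTTPNotFound {
            return false
        }
        return e.StatusCode == http.StatusNotFound ||
            // transitional: the jobs endpoint still answers 500 + "could not find"
            (e.StatusCode == http.StatusInternalServerError && strings.Contains(e.Body, "could not find"))
    }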

Same change covers ListNodes' 404 fallback for the /api/nodes endpoint.

Adds httptest tests for both 404 and the legacy 500 path, plus a
direct errors.Is exposure test so external callers (the standalone
stdio CLI host) can match without re-string-parsing.

Also tightens prompts.SystemPrompt: panic when fs.WalkDir on the
embedded FS fails. The only realistic cause is a build-time //go:embed
misconfiguration; serving an empty system prompt to the LLM is much
worse than crashing init. TestSystemPromptIncludesAllEmbeddedFiles
catches regressions in CI.

Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* fix(modeladmin): atomic writes for model config files

The five sites that wrote model YAML used os.WriteFile, which opens
with O_TRUNC|O_WRONLY|O_CREATE. A crash mid-write left the destination
truncated and the model unloadable until manual repair. Pre-existing
behaviour inherited from the original endpoint handlers — fix once now
that there's a single helper.

Adds writeFileAtomic: writes to a sibling temp file, chmods, syncs via
Close(), then os.Rename. Same-directory temp keeps the rename atomic on
the same filesystem; cleanup runs on every error path so stray temps
don't accumulate. No new dependency.
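
In outline (error wrapping trimmed):

    func writeFileAtomic(path string, data []byte, perm os.FileMode) error {
        tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Chmod(perm); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        // Same-directory rename keeps the swap atomic on one filesystem.
        return os.Rename(tmp.Name(), path)
    }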

Applied to:
- ConfigService.PatchConfig
- ConfigService.EditYAML (both rename and in-place branches)
- mutateYAMLBoolFlag (drives ToggleState + TogglePinned)

atomic_test.go covers the happy path plus a read-only-dir failure case
that asserts the original file is preserved (skipped on Windows where
the chmod trick is POSIX-specific).

Assisted-by: Claude:claude-opus-4-7 [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* chore(assistant): prune dead code, mark stub, document conventions

Three small cleanups landing together:

- Drop the unused errNotImplemented sentinel from inproc/client.go.
  All five methods that used to return it are wired to modeladmin
  helpers since the Phase B commit; the package var is dead.

- Annotate httpapi.Client.GetModelConfig as a known stub. LocalAI's
  /models/edit/:name returns rendered HTML, not JSON, so the standalone
  CLI's get_model_config tool surfaces a clear error to the LLM. A
  future JSON-only /api/models/config-yaml/:name endpoint is tracked in
  the agent contract; FIXME points at it.

- Extend `.agents/localai-assistant-mcp.md` with a "Code conventions"
  section that documents the audit-driven rules: tool/Capability/Action
  constants, errors.Is over substring matching, ctx-aware channel
  sends, atomic writes, and graceful shutdown. Refresh the file map so
  it lists tools.go and capability.go and drops the removed
  tools_bootstrap.go.

The tools_models.go diff is a comment-only change explaining why the
ModelName empty-string check stays at the tool layer (consistency
across LocalAIClient implementations, since the SDK schema validator
only enforces presence, not non-empty).

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* test(assistant): convert test files to ginkgo + gomega

The repo convention (per core/http/endpoints/localai/*_test.go,
core/gallery/**, etc.) is Ginkgo v2 with Gomega assertions. The tests I
introduced for the assistant feature used vanilla testing.T, which made
them stand out and stripped the BDD structure the rest of the suite
relies on.

Convert every test file in the assistant scope to Ginkgo:

  pkg/mcp/localaitools/
    dto_test.go            — Describe("DTOs round-trip through JSON")
    prompts_test.go        — Describe("SystemPrompt assembler")
    server_test.go         — Describe("Server tool catalog"),
                              Describe("Tool dispatch"),
                              Describe("Tool error surfacing"),
                              Describe("Argument validation"),
                              Describe("Concurrent tool calls")
    parity_test.go         — Describe("LocalAIClient parity"),
                              hosts the suite's single RunSpecs (the file
                              is package localaitools_test so it can
                              import httpapi without an import cycle;
                              Ginkgo aggregates Describes from both the
                              internal and external test packages into
                              one run).
    httpapi/client_test.go — Describe("httpapi.Client against the
                              LocalAI admin REST surface"),
                              Describe("ErrHTTPNotFound"),
                              Describe("Bearer token")
    inproc/client_test.go  — Describe("inproc.Client cancellation")

  core/services/modeladmin/
    config_test.go         — Describe("ConfigService") with sub-Describes
                              for GetConfig, PatchConfig, EditYAML
    state_test.go          — Describe("ConfigService.ToggleState")
    pinned_test.go         — Describe("ConfigService.TogglePinned")
    atomic_test.go         — Describe("writeFileAtomic")

  core/http/endpoints/mcp/
    localai_assistant_test.go — Describe("LocalAIAssistantHolder")

Each package gets a `*_suite_test.go` with the standard
`RegisterFailHandler(Fail) + RunSpecs(t, "...")` boilerplate. Helpers
that previously took *testing.T (newTestService, writeModelYAML,
readMap, sortedStrings, sortGalleries, etc.) drop the *T receiver and
use Gomega Expectations directly. tmp dirs come from GinkgoT().TempDir().
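
The bootstrap file is the usual few lines, e.g. for core/services/modeladmin
(package and suite names illustrative):

    package modeladmin_test

    import (
        "testing"

        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
    )

    func TestModelAdmin(t *testing.T) {
        RegisterFailHandler(Fail)
        RunSpecs(t, "modeladmin suite")
    }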

No semantic change to test coverage — every original assertion has a
direct Gomega counterpart. All suites pass with -race.

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* test+docs(assistant): drift detector for Tool ↔ REST route mapping

Honest gap from the audit: the parity_test.go suite only checks four
methods, and uses the same httpapi.Client for both sides — it asserts
stability of the DTO shapes, not equivalence between in-process and
HTTP. If a contributor adds an admin REST endpoint without an MCP tool,
or a tool without a matching httpapi route, both surfaces silently
diverge.

Add a coverage test plus stronger docs:

- pkg/mcp/localaitools/coverage_test.go introduces a hand-maintained
  toolToHTTPRoute map (sketched after this list): every Tool* constant
  must list the REST endpoint
  the httpapi.Client hits (or "(none)" with a documented reason). Two
  Ginkgo specs assert the map and the published catalog stay in sync —
  one fails when a Tool is added without a route entry, the other fails
  when a route entry references a tool that no longer exists. Verified
  by removing the ToolDeleteModel entry locally; the test fired with a
  clear message pointing the contributor at the file.

  Deliberate non-test: we don't enumerate live admin REST routes from
  here. Walking the route registry requires booting Application;
  parsing core/http/routes/localai.go is brittle. The "new admin REST
  endpoint → MCP tool" direction stays a PR checklist item — see below.

- AGENTS.md gets a new Quick Reference bullet that calls out the rule
  and points at the test by name.

- .agents/api-endpoints-and-auth.md tightens the existing "Companion:
  MCP admin tool surface" subsection from "if useful, consider..." to
  "MUST be considered, with three concrete outcomes (tool added,
  deliberately skipped with documented reason, or forgot — which
  breaks the contract)". Adds a checklist item at the bottom of the
  file's authoritative checklist.
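
Schematically, the map plus one of the two guards (the route paths and helper
name here are placeholders; only the map's intent is fixed):

    var toolToHTTPRoute = map[string]string{
        ToolGallerySearch: "/models/available", // placeholder path
        ToolDeleteModel:   "/models/delete",    // placeholder path
        // ...one entry per Tool* constant, or "(none)" plus a documented reason
    }

    It("maps every published tool to a REST route or a documented reason", func() {
        for _, tool := range publishedToolNames() { // placeholder helper
            Expect(toolToHTTPRoute).To(HaveKey(tool),
                "add %q to toolToHTTPRoute in coverage_test.go", tool)
        }
    })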

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Write] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor(assistant): drop duplicate DTOs, surface canonical types

Audit feedback: localaitools/dto.go reinvented several types that already
existed in the codebase. Replace the duplicates with the canonical types
so the LLM-visible wire format stays aligned with the rest of LocalAI by
construction (no parallel structs to keep in sync).

Removed (and the canonical type now used by the LocalAIClient interface):

  localaitools.Gallery          → config.Gallery
  localaitools.GalleryModelHit  → gallery.Metadata
  localaitools.VRAMEstimate     → vram.EstimateResult

Tightened scope:

  localaitools.Backend          → kept, but reduced to {Name, Installed}.
                                  ListKnownBackends now returns
                                  []schema.KnownBackend (the canonical
                                  type already used by REST /backends/known).

Kept with documented rationale:

  localaitools.JobStatus       — galleryop.OpStatus has Error error which
                                 marshals to "{}". JobStatus is the
                                 JSON-friendly mirror.
  localaitools.Node            — nodes.BackendNode carries gorm internals
                                 + token hash; we expose only the
                                 LLM-relevant fields.
  ImportModelURIRequest/Response — schema.ImportModelRequest and
                                   GalleryResponse are wire-shaped, mine
                                   are LLM-shaped (BackendPreference flat,
                                   AmbiguousBackend exposed).

Side wins:

  - Drop bytesPerMiB; vram.EstimateResult already carries human-readable
    display strings (size_display, vram_display) the LLM uses directly.
  - Drop the handler-private vramEstimateRequest in
    core/http/endpoints/localai/vram.go and bind directly into
    modeladmin.VRAMRequest (now JSON-tagged).

Both clients pass through these types now where possible (e.g.
ListGalleries in inproc.Client is a one-liner returning
AppConfig.Galleries; httpapi.Client.GallerySearch decodes straight into
[]gallery.Metadata).

All tests green with -race.

Assisted-by: Claude:claude-opus-4-7 [Read] [Edit] [Bash]
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* refactor(assistant): extract REST route paths into named constants

httpapi.Client had 18 bare-string path sites scattered across methods.
Pull them into pkg/mcp/localaitools/httpapi/routes.go: static paths as
package-private constants, dynamic paths as small builders that handle
url.PathEscape on segment values.
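
i.e. something like (the two routes shown are the ones mentioned elsewhere in
this series; the constant/builder names are illustrative):

    // pkg/mcp/localaitools/httpapi/routes.go
    const routeNodes = "/api/nodes"

    func routeModelJob(uuid string) string {
        // Escape the segment so IDs survive as a single path element.
        return "/models/jobs/" + url.PathEscape(uuid)
    }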

No behaviour change. Drops the now-unused net/url import from client.go
since path escaping moved into routes.go alongside the path it applies to.

Local-only by design: the server-side registrations in
core/http/routes/localai.go remain bare strings. Sharing constants across
the pkg/ ↔ core/ boundary would invert the layering today; the existing
Tool↔REST drift-detector in coverage_test.go is the safety net for that
direction.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]

* docs(assistant): align with shipped UI and dropped bootstrap env vars

The LocalAI Assistant doc still described the older iteration:

- The in-chat toggle was renamed from "Admin" to "Manage" (the badge is
  now "Manage mode" and the home page exposes a "Manage by chat" CTA).
- LOCALAI_ASSISTANT_BOOTSTRAP_MODEL / --localai-assistant-bootstrap-model
  and the bootstrap_default_model tool were removed — admins pick a model
  from the existing selector instead, no env-var configuration required.
- The shipped tool catalog includes import_model_uri, which didn't appear
  in the doc; bootstrap_default_model appeared but no longer exists.
- The Settings → LocalAI Assistant runtime toggle wasn't mentioned as the
  preferred way to disable without restart.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Assisted-by: Claude:claude-opus-4-7 [Claude Code]

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-04-28 19:29:27 +02:00

895 lines
26 KiB
Go

package mcp

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
	"sync"
	"time"

	"github.com/mudler/LocalAI/core/config"
	"github.com/mudler/LocalAI/core/schema"
	mcpRemote "github.com/mudler/LocalAI/core/services/mcp"
	"github.com/mudler/LocalAI/core/services/messaging"
	"github.com/mudler/LocalAI/pkg/functions"
	"github.com/mudler/LocalAI/pkg/signals"
	"github.com/modelcontextprotocol/go-sdk/mcp"
	"github.com/mudler/xlog"
)

// NamedSession pairs an MCP session with its server name and type.
type NamedSession struct {
	Name    string
	Type    string // "remote" or "stdio"
	Session *mcp.ClientSession
}

// MCPToolInfo holds a discovered MCP tool along with its origin session.
type MCPToolInfo struct {
	ServerName string
	ToolName   string
	Function   functions.Function
	Session    *mcp.ClientSession
}

// MCPServerInfo describes an MCP server and its available tools, prompts, and resources.
type MCPServerInfo struct {
	Name      string   `json:"name"`
	Type      string   `json:"type"`
	Tools     []string `json:"tools"`
	Prompts   []string `json:"prompts,omitempty"`
	Resources []string `json:"resources,omitempty"`
}

// MCPPromptInfo holds a discovered MCP prompt along with its origin session.
type MCPPromptInfo struct {
	ServerName  string
	PromptName  string
	Description string
	Title       string
	Arguments   []*mcp.PromptArgument
	Session     *mcp.ClientSession
}

// MCPResourceInfo holds a discovered MCP resource along with its origin session.
type MCPResourceInfo struct {
	ServerName  string
	Name        string
	URI         string
	Description string
	MIMEType    string
	Session     *mcp.ClientSession
}

type sessionCache struct {
	mu      sync.Mutex
	cache   map[string][]*mcp.ClientSession
	cancels map[string]context.CancelFunc
}

type namedSessionCache struct {
	mu      sync.Mutex
	cache   map[string][]NamedSession
	cancels map[string]context.CancelFunc
}

var (
	cache = sessionCache{
		cache:   make(map[string][]*mcp.ClientSession),
		cancels: make(map[string]context.CancelFunc),
	}
	namedCache = namedSessionCache{
		cache:   make(map[string][]NamedSession),
		cancels: make(map[string]context.CancelFunc),
	}
	client = mcp.NewClient(&mcp.Implementation{Name: "LocalAI", Version: "v1.0.0"}, nil)
)

// MCPNATSClient is the interface for NATS request-reply operations needed by MCP routing.
type MCPNATSClient interface {
	Request(subject string, data []byte, timeout time.Duration) ([]byte, error)
}

// MetadataKeyLocalAIAssistant is the request-metadata key the chat handler
// inspects to decide whether to wire the in-process admin MCP server. UI
// callers MUST use this constant rather than the raw string.
const MetadataKeyLocalAIAssistant = "localai_assistant"

// LocalAIAssistantFromMetadata reports whether the request opted into the
// "LocalAI Assistant" chat modality (admin in-process MCP tool surface).
// The MetadataKeyLocalAIAssistant key is consumed so it doesn't leak to
// the backend. Truthy values: "1", "true", "yes" (case-insensitive).
func LocalAIAssistantFromMetadata(metadata map[string]string) bool {
	raw, ok := metadata[MetadataKeyLocalAIAssistant]
	if !ok {
		return false
	}
	delete(metadata, MetadataKeyLocalAIAssistant)
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "1", "true", "yes":
		return true
	}
	return false
}

// MCPServersFromMetadata extracts the MCP server list from the metadata map
// and returns the list. The "mcp_servers" key is consumed (deleted from the map)
// so it doesn't leak to the backend.
func MCPServersFromMetadata(metadata map[string]string) []string {
	raw, ok := metadata["mcp_servers"]
	if !ok || raw == "" {
		return nil
	}
	delete(metadata, "mcp_servers")
	servers := strings.Split(raw, ",")
	for i := range servers {
		servers[i] = strings.TrimSpace(servers[i])
	}
	return servers
}

func SessionsFromMCPConfig(
	name string,
	remote config.MCPGenericConfig[config.MCPRemoteServers],
	stdio config.MCPGenericConfig[config.MCPSTDIOServers],
) ([]*mcp.ClientSession, error) {
	cache.mu.Lock()
	defer cache.mu.Unlock()
	sessions, exists := cache.cache[name]
	// Verify cached sessions are still alive.
	if exists {
		pingCtx, pingCancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer pingCancel()
		alive := true
		for _, s := range sessions {
			if err := s.Ping(pingCtx, nil); err != nil {
				xlog.Warn("MCP session dead, evicting cache", "name", name, "error", err)
				alive = false
				break
			}
		}
		if !alive {
			if cancel, ok := cache.cancels[name]; ok {
				cancel()
			}
			delete(cache.cache, name)
			delete(cache.cancels, name)
			exists = false
		}
	}
	if exists {
		return sessions, nil
	}
	allSessions := []*mcp.ClientSession{}
	ctx, cancel := context.WithCancel(context.Background())
	// Get the list of all the tools the Agent will be exposed to
	for _, server := range remote.Servers {
		xlog.Debug("[MCP remote server] Configuration", "server", server)
		// Create HTTP client with custom roundtripper for bearer token injection
		httpClient := &http.Client{
			Timeout:   config.DefaultMCPToolTimeout,
			Transport: newBearerTokenRoundTripper(server.Token, http.DefaultTransport),
		}
		transport := &mcp.StreamableClientTransport{Endpoint: server.URL, HTTPClient: httpClient}
		mcpSession, err := client.Connect(ctx, transport, nil)
		if err != nil {
			xlog.Error("Failed to connect to MCP server", "error", err, "url", server.URL)
			continue
		}
		xlog.Debug("[MCP remote server] Connected to MCP server", "url", server.URL)
		cache.cache[name] = append(cache.cache[name], mcpSession)
		allSessions = append(allSessions, mcpSession)
	}
	for _, server := range stdio.Servers {
		xlog.Debug("[MCP stdio server] Configuration", "server", server)
		command := exec.Command(server.Command, server.Args...)
		command.Env = os.Environ()
		for key, value := range server.Env {
			command.Env = append(command.Env, key+"="+value)
		}
		transport := &mcp.CommandTransport{Command: command}
		mcpSession, err := client.Connect(ctx, transport, nil)
		if err != nil {
			xlog.Error("Failed to start MCP server", "error", err, "command", command)
			continue
		}
		xlog.Debug("[MCP stdio server] Connected to MCP server", "command", command)
		cache.cache[name] = append(cache.cache[name], mcpSession)
		allSessions = append(allSessions, mcpSession)
	}
	cache.cancels[name] = cancel
	return allSessions, nil
}

// NamedSessionsFromMCPConfig returns sessions with their server names preserved.
// If enabledServers is non-empty, only servers with matching names are returned.
func NamedSessionsFromMCPConfig(
	name string,
	remote config.MCPGenericConfig[config.MCPRemoteServers],
	stdio config.MCPGenericConfig[config.MCPSTDIOServers],
	enabledServers []string,
) ([]NamedSession, error) {
	namedCache.mu.Lock()
	defer namedCache.mu.Unlock()
	allSessions, exists := namedCache.cache[name]
	// If cached, verify sessions are still alive via Ping.
	// Dead sessions (e.g. exited stdio containers) are evicted so they get recreated.
	if exists {
		pingCtx, pingCancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer pingCancel()
		alive := true
		for _, ns := range allSessions {
			if err := ns.Session.Ping(pingCtx, nil); err != nil {
				xlog.Warn("MCP session dead, evicting cache", "server", ns.Name, "error", err)
				alive = false
				break
			}
		}
		if !alive {
			// Close dead sessions and recreate
			if cancel, ok := namedCache.cancels[name]; ok {
				cancel()
			}
			delete(namedCache.cache, name)
			delete(namedCache.cancels, name)
			exists = false
			allSessions = nil
		}
	}
	if !exists {
		ctx, cancel := context.WithCancel(context.Background())
		for serverName, server := range remote.Servers {
			xlog.Debug("[MCP remote server] Configuration", "name", serverName, "server", server)
			httpClient := &http.Client{
				Timeout:   config.DefaultMCPToolTimeout,
				Transport: newBearerTokenRoundTripper(server.Token, http.DefaultTransport),
			}
			transport := &mcp.StreamableClientTransport{Endpoint: server.URL, HTTPClient: httpClient}
			mcpSession, err := client.Connect(ctx, transport, nil)
			if err != nil {
				xlog.Error("Failed to connect to MCP server", "error", err, "name", serverName, "url", server.URL)
				continue
			}
			xlog.Debug("[MCP remote server] Connected", "name", serverName, "url", server.URL)
			allSessions = append(allSessions, NamedSession{
				Name:    serverName,
				Type:    "remote",
				Session: mcpSession,
			})
		}
		for serverName, server := range stdio.Servers {
			xlog.Debug("[MCP stdio server] Configuration", "name", serverName, "server", server)
			command := exec.Command(server.Command, server.Args...)
			command.Env = os.Environ()
			for key, value := range server.Env {
				command.Env = append(command.Env, key+"="+value)
			}
			transport := &mcp.CommandTransport{Command: command}
			mcpSession, err := client.Connect(ctx, transport, nil)
			if err != nil {
				xlog.Error("Failed to start MCP server", "error", err, "name", serverName, "command", command)
				continue
			}
			xlog.Debug("[MCP stdio server] Connected", "name", serverName, "command", command)
			allSessions = append(allSessions, NamedSession{
				Name:    serverName,
				Type:    "stdio",
				Session: mcpSession,
			})
		}
		namedCache.cache[name] = allSessions
		namedCache.cancels[name] = cancel
	}
	if len(enabledServers) == 0 {
		return allSessions, nil
	}
	enabled := make(map[string]bool, len(enabledServers))
	for _, s := range enabledServers {
		enabled[s] = true
	}
	var filtered []NamedSession
	for _, ns := range allSessions {
		if enabled[ns.Name] {
			filtered = append(filtered, ns)
		}
	}
	return filtered, nil
}

// DiscoverMCPTools queries each session for its tools and converts them to functions.Function.
// Deduplicates by tool name (first server wins).
func DiscoverMCPTools(ctx context.Context, sessions []NamedSession) ([]MCPToolInfo, error) {
	seen := make(map[string]bool)
	var result []MCPToolInfo
	for _, ns := range sessions {
		toolsResult, err := ns.Session.ListTools(ctx, nil)
		if err != nil {
			xlog.Error("Failed to list tools from MCP server", "error", err, "server", ns.Name)
			continue
		}
		for _, tool := range toolsResult.Tools {
			if seen[tool.Name] {
				continue
			}
			seen[tool.Name] = true
			f := functions.Function{
				Name:        tool.Name,
				Description: tool.Description,
			}
			// Convert InputSchema to map[string]any for functions.Function
			if tool.InputSchema != nil {
				schemaBytes, err := json.Marshal(tool.InputSchema)
				if err == nil {
					var params map[string]any
					if err := json.Unmarshal(schemaBytes, &params); err == nil {
						f.Parameters = params
					} else {
						xlog.Warn("Failed to unmarshal MCP tool input schema", "tool", tool.Name, "error", err)
					}
				}
			}
			if f.Parameters == nil {
				f.Parameters = map[string]any{
					"type":       "object",
					"properties": map[string]any{},
				}
			}
			result = append(result, MCPToolInfo{
				ServerName: ns.Name,
				ToolName:   tool.Name,
				Function:   f,
				Session:    ns.Session,
			})
		}
	}
	return result, nil
}

// ExecuteMCPToolCall finds the matching tool and executes it.
func ExecuteMCPToolCall(ctx context.Context, tools []MCPToolInfo, toolName string, arguments string) (string, error) {
	var toolInfo *MCPToolInfo
	for i := range tools {
		if tools[i].ToolName == toolName {
			toolInfo = &tools[i]
			break
		}
	}
	if toolInfo == nil {
		return "", fmt.Errorf("MCP tool %q not found", toolName)
	}
	var args map[string]any
	if arguments != "" {
		if err := json.Unmarshal([]byte(arguments), &args); err != nil {
			return "", fmt.Errorf("failed to parse arguments for tool %q: %w", toolName, err)
		}
	}
	result, err := toolInfo.Session.CallTool(ctx, &mcp.CallToolParams{
		Name:      toolName,
		Arguments: args,
	})
	if err != nil {
		return "", fmt.Errorf("MCP tool %q call failed: %w", toolName, err)
	}
	// Extract text content from result
	var texts []string
	for _, content := range result.Content {
		if tc, ok := content.(*mcp.TextContent); ok {
			texts = append(texts, tc.Text)
		}
	}
	if len(texts) == 0 {
		// Fallback: marshal the whole result
		data, _ := json.Marshal(result.Content)
		return string(data), nil
	}
	if len(texts) == 1 {
		return texts[0], nil
	}
	combined, _ := json.Marshal(texts)
	return string(combined), nil
}

// ExecuteMCPToolCallRemote routes an MCP tool execution request to an agent worker via NATS.
// Used in distributed mode when the frontend doesn't hold MCP sessions locally.
func ExecuteMCPToolCallRemote(
	ctx context.Context,
	natsClient MCPNATSClient,
	modelName string,
	remote config.MCPGenericConfig[config.MCPRemoteServers],
	stdio config.MCPGenericConfig[config.MCPSTDIOServers],
	toolName, arguments string,
) (string, error) {
	if natsClient == nil {
		return "", fmt.Errorf("NATS client not configured for distributed MCP")
	}
	var args map[string]any
	if arguments != "" {
		if err := json.Unmarshal([]byte(arguments), &args); err != nil {
			return "", fmt.Errorf("invalid tool arguments JSON: %w", err)
		}
	}
	req := mcpRemote.MCPToolRequest{
		ModelName:     modelName,
		ToolName:      toolName,
		Arguments:     args,
		RemoteServers: remote,
		StdioServers:  stdio,
	}
	reqData, _ := json.Marshal(req)
	replyData, err := natsClient.Request(messaging.SubjectMCPToolExecute, reqData, config.DefaultMCPToolTimeout)
	if err != nil {
		return "", fmt.Errorf("NATS MCP tool request failed: %w", err)
	}
	var resp mcpRemote.MCPToolResponse
	if err := json.Unmarshal(replyData, &resp); err != nil {
		return "", fmt.Errorf("unmarshal MCP reply: %w", err)
	}
	if resp.Error != "" {
		return "", fmt.Errorf("remote MCP tool error: %s", resp.Error)
	}
	return resp.Result, nil
}

// DiscoverMCPToolsRemote routes an MCP discovery request to an agent worker via NATS.
// Returns server info and tool function schemas from the remote worker.
func DiscoverMCPToolsRemote(
	ctx context.Context,
	natsClient MCPNATSClient,
	modelName string,
	remote config.MCPGenericConfig[config.MCPRemoteServers],
	stdio config.MCPGenericConfig[config.MCPSTDIOServers],
) (*mcpRemote.MCPDiscoveryResponse, error) {
	if natsClient == nil {
		return nil, fmt.Errorf("NATS client not configured for distributed MCP")
	}
	req := mcpRemote.MCPDiscoveryRequest{
		ModelName:     modelName,
		RemoteServers: remote,
		StdioServers:  stdio,
	}
	reqData, _ := json.Marshal(req)
	replyData, err := natsClient.Request(messaging.SubjectMCPDiscovery, reqData, config.DefaultMCPDiscoveryTimeout)
	if err != nil {
		return nil, fmt.Errorf("NATS MCP discovery request failed: %w", err)
	}
	var resp mcpRemote.MCPDiscoveryResponse
	if err := json.Unmarshal(replyData, &resp); err != nil {
		return nil, fmt.Errorf("unmarshal MCP discovery reply: %w", err)
	}
	if resp.Error != "" {
		return nil, fmt.Errorf("remote MCP discovery error: %s", resp.Error)
	}
	return &resp, nil
}

// ListMCPServers returns server info with tool, prompt, and resource names for each session.
func ListMCPServers(ctx context.Context, sessions []NamedSession) ([]MCPServerInfo, error) {
	var result []MCPServerInfo
	for _, ns := range sessions {
		info := MCPServerInfo{
			Name: ns.Name,
			Type: ns.Type,
		}
		toolsResult, err := ns.Session.ListTools(ctx, nil)
		if err != nil {
			xlog.Error("Failed to list tools from MCP server", "error", err, "server", ns.Name)
		} else {
			for _, tool := range toolsResult.Tools {
				info.Tools = append(info.Tools, tool.Name)
			}
		}
		promptsResult, err := ns.Session.ListPrompts(ctx, nil)
		if err != nil {
			xlog.Debug("Failed to list prompts from MCP server", "error", err, "server", ns.Name)
		} else {
			for _, p := range promptsResult.Prompts {
				info.Prompts = append(info.Prompts, p.Name)
			}
		}
		resourcesResult, err := ns.Session.ListResources(ctx, nil)
		if err != nil {
			xlog.Debug("Failed to list resources from MCP server", "error", err, "server", ns.Name)
		} else {
			for _, r := range resourcesResult.Resources {
				info.Resources = append(info.Resources, r.URI)
			}
		}
		result = append(result, info)
	}
	return result, nil
}

// IsMCPTool checks if a tool name is in the MCP tool list.
func IsMCPTool(tools []MCPToolInfo, name string) bool {
	for _, t := range tools {
		if t.ToolName == name {
			return true
		}
	}
	return false
}

// DiscoverMCPPrompts queries each session for its prompts.
// Deduplicates by prompt name (first server wins).
func DiscoverMCPPrompts(ctx context.Context, sessions []NamedSession) ([]MCPPromptInfo, error) {
	seen := make(map[string]bool)
	var result []MCPPromptInfo
	for _, ns := range sessions {
		promptsResult, err := ns.Session.ListPrompts(ctx, nil)
		if err != nil {
			xlog.Error("Failed to list prompts from MCP server", "error", err, "server", ns.Name)
			continue
		}
		for _, p := range promptsResult.Prompts {
			if seen[p.Name] {
				continue
			}
			seen[p.Name] = true
			result = append(result, MCPPromptInfo{
				ServerName:  ns.Name,
				PromptName:  p.Name,
				Description: p.Description,
				Title:       p.Title,
				Arguments:   p.Arguments,
				Session:     ns.Session,
			})
		}
	}
	return result, nil
}

// GetMCPPrompt finds and expands a prompt by name using the discovered prompts list.
func GetMCPPrompt(ctx context.Context, prompts []MCPPromptInfo, name string, args map[string]string) ([]*mcp.PromptMessage, error) {
	var info *MCPPromptInfo
	for i := range prompts {
		if prompts[i].PromptName == name {
			info = &prompts[i]
			break
		}
	}
	if info == nil {
		return nil, fmt.Errorf("MCP prompt %q not found", name)
	}
	result, err := info.Session.GetPrompt(ctx, &mcp.GetPromptParams{
		Name:      name,
		Arguments: args,
	})
	if err != nil {
		return nil, fmt.Errorf("MCP prompt %q get failed: %w", name, err)
	}
	return result.Messages, nil
}

// DiscoverMCPResources queries each session for its resources.
// Deduplicates by URI (first server wins).
func DiscoverMCPResources(ctx context.Context, sessions []NamedSession) ([]MCPResourceInfo, error) {
	seen := make(map[string]bool)
	var result []MCPResourceInfo
	for _, ns := range sessions {
		resourcesResult, err := ns.Session.ListResources(ctx, nil)
		if err != nil {
			xlog.Error("Failed to list resources from MCP server", "error", err, "server", ns.Name)
			continue
		}
		for _, r := range resourcesResult.Resources {
			if seen[r.URI] {
				continue
			}
			seen[r.URI] = true
			result = append(result, MCPResourceInfo{
				ServerName:  ns.Name,
				Name:        r.Name,
				URI:         r.URI,
				Description: r.Description,
				MIMEType:    r.MIMEType,
				Session:     ns.Session,
			})
		}
	}
	return result, nil
}

// ReadMCPResource reads a resource by URI from the matching session.
func ReadMCPResource(ctx context.Context, resources []MCPResourceInfo, uri string) (string, error) {
	var info *MCPResourceInfo
	for i := range resources {
		if resources[i].URI == uri {
			info = &resources[i]
			break
		}
	}
	if info == nil {
		return "", fmt.Errorf("MCP resource %q not found", uri)
	}
	result, err := info.Session.ReadResource(ctx, &mcp.ReadResourceParams{URI: uri})
	if err != nil {
		return "", fmt.Errorf("MCP resource %q read failed: %w", uri, err)
	}
	var texts []string
	for _, c := range result.Contents {
		if c.Text != "" {
			texts = append(texts, c.Text)
		}
	}
	return strings.Join(texts, "\n"), nil
}

// MCPPromptFromMetadata extracts the prompt name and arguments from metadata.
// The "mcp_prompt" and "mcp_prompt_args" keys are consumed (deleted from the map).
func MCPPromptFromMetadata(metadata map[string]string) (string, map[string]string) {
	name, ok := metadata["mcp_prompt"]
	if !ok || name == "" {
		return "", nil
	}
	delete(metadata, "mcp_prompt")
	var args map[string]string
	if raw, ok := metadata["mcp_prompt_args"]; ok && raw != "" {
		// Best-effort parse: malformed prompt args are ignored.
		json.Unmarshal([]byte(raw), &args)
		delete(metadata, "mcp_prompt_args")
	}
	return name, args
}

// MCPResourcesFromMetadata extracts resource URIs from metadata.
// The "mcp_resources" key is consumed (deleted from the map).
func MCPResourcesFromMetadata(metadata map[string]string) []string {
	raw, ok := metadata["mcp_resources"]
	if !ok || raw == "" {
		return nil
	}
	delete(metadata, "mcp_resources")
	uris := strings.Split(raw, ",")
	for i := range uris {
		uris[i] = strings.TrimSpace(uris[i])
	}
	return uris
}

// PromptMessageToText extracts text from a PromptMessage's Content.
func PromptMessageToText(msg *mcp.PromptMessage) string {
	if tc, ok := msg.Content.(*mcp.TextContent); ok {
		return tc.Text
	}
	// Fallback: marshal content
	data, _ := json.Marshal(msg.Content)
	return string(data)
}

// CloseMCPSessions closes all MCP sessions for a given model and removes them from the cache.
// This should be called when a model is unloaded or shut down.
func CloseMCPSessions(modelName string) {
	// Close sessions in the unnamed cache
	cache.mu.Lock()
	if sessions, ok := cache.cache[modelName]; ok {
		for _, s := range sessions {
			s.Close()
		}
		delete(cache.cache, modelName)
	}
	if cancel, ok := cache.cancels[modelName]; ok {
		cancel()
		delete(cache.cancels, modelName)
	}
	cache.mu.Unlock()
	// Close sessions in the named cache
	namedCache.mu.Lock()
	if sessions, ok := namedCache.cache[modelName]; ok {
		for _, ns := range sessions {
			ns.Session.Close()
		}
		delete(namedCache.cache, modelName)
	}
	if cancel, ok := namedCache.cancels[modelName]; ok {
		cancel()
		delete(namedCache.cancels, modelName)
	}
	namedCache.mu.Unlock()
	xlog.Debug("Closed MCP sessions for model", "model", modelName)
}

// CloseAllMCPSessions closes all cached MCP sessions across all models.
// This should be called during graceful shutdown.
func CloseAllMCPSessions() {
	cache.mu.Lock()
	for name, sessions := range cache.cache {
		for _, s := range sessions {
			s.Close()
		}
		if cancel, ok := cache.cancels[name]; ok {
			cancel()
		}
	}
	cache.cache = make(map[string][]*mcp.ClientSession)
	cache.cancels = make(map[string]context.CancelFunc)
	cache.mu.Unlock()
	namedCache.mu.Lock()
	for name, sessions := range namedCache.cache {
		for _, ns := range sessions {
			ns.Session.Close()
		}
		if cancel, ok := namedCache.cancels[name]; ok {
			cancel()
		}
	}
	namedCache.cache = make(map[string][]NamedSession)
	namedCache.cancels = make(map[string]context.CancelFunc)
	namedCache.mu.Unlock()
	xlog.Debug("Closed all MCP sessions")
}

func init() {
	signals.RegisterGracefulTerminationHandler(func() {
		CloseAllMCPSessions()
	})
}

// bearerTokenRoundTripper is a custom roundtripper that injects a bearer token
// into HTTP requests
type bearerTokenRoundTripper struct {
	token string
	base  http.RoundTripper
}

// RoundTrip implements the http.RoundTripper interface
func (rt *bearerTokenRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	if rt.token != "" {
		req.Header.Set("Authorization", "Bearer "+rt.token)
	}
	return rt.base.RoundTrip(req)
}

// newBearerTokenRoundTripper creates a new roundtripper that injects the given token
func newBearerTokenRoundTripper(token string, base http.RoundTripper) http.RoundTripper {
	if base == nil {
		base = http.DefaultTransport
	}
	return &bearerTokenRoundTripper{
		token: token,
		base:  base,
	}
}

// MCPContextResult holds the results of MCP prompt and resource discovery
// so callers can inject them into their message slices.
type MCPContextResult struct {
	// PromptMessages are schema.Message values converted from MCP prompts,
	// intended to be prepended to the conversation.
	PromptMessages []schema.Message
	// ResourceSuffix is the formatted text of all discovered MCP resources,
	// intended to be appended to the last user message's content.
	// Empty string when no resources were requested or found.
	ResourceSuffix string
}

// InjectMCPContext discovers MCP prompts and resources from the given named sessions
// and returns them in a form ready for injection into any endpoint's message list.
func InjectMCPContext(
	ctx context.Context,
	namedSessions []NamedSession,
	mcpPromptName string,
	mcpPromptArgs map[string]string,
	mcpResourceURIs []string,
) (*MCPContextResult, error) {
	result := &MCPContextResult{}
	if mcpPromptName != "" {
		prompts, discErr := DiscoverMCPPrompts(ctx, namedSessions)
		if discErr != nil {
			xlog.Error("Failed to discover MCP prompts", "error", discErr)
		} else {
			promptMsgs, getErr := GetMCPPrompt(ctx, prompts, mcpPromptName, mcpPromptArgs)
			if getErr != nil {
				xlog.Error("Failed to get MCP prompt", "error", getErr)
			} else {
				for _, pm := range promptMsgs {
					result.PromptMessages = append(result.PromptMessages, schema.Message{
						Role:    string(pm.Role),
						Content: PromptMessageToText(pm),
					})
				}
				xlog.Debug("MCP prompt discovered", "prompt", mcpPromptName, "messages", len(result.PromptMessages))
			}
		}
	}
	if len(mcpResourceURIs) > 0 {
		resources, discErr := DiscoverMCPResources(ctx, namedSessions)
		if discErr != nil {
			xlog.Error("Failed to discover MCP resources", "error", discErr)
		} else {
			var resourceTexts []string
			for _, uri := range mcpResourceURIs {
				content, readErr := ReadMCPResource(ctx, resources, uri)
				if readErr != nil {
					xlog.Error("Failed to read MCP resource", "error", readErr, "uri", uri)
					continue
				}
				name := uri
				for _, r := range resources {
					if r.URI == uri {
						name = r.Name
						break
					}
				}
				resourceTexts = append(resourceTexts, fmt.Sprintf("--- MCP Resource: %s ---\n%s", name, content))
			}
			if len(resourceTexts) > 0 {
				result.ResourceSuffix = "\n\n" + strings.Join(resourceTexts, "\n\n")
				xlog.Debug("MCP resources discovered", "count", len(resourceTexts))
			}
		}
	}
	return result, nil
}

// AppendResourceSuffix appends the resource suffix from an MCPContextResult
// to the last message's content in the given message slice.
func AppendResourceSuffix(messages []schema.Message, suffix string) {
	if suffix == "" || len(messages) == 0 {
		return
	}
	lastIdx := len(messages) - 1
	switch ct := messages[lastIdx].Content.(type) {
	case string:
		messages[lastIdx].Content = ct + suffix
	default:
		messages[lastIdx].Content = fmt.Sprintf("%v%s", ct, suffix)
	}
}