mirror of https://github.com/mudler/LocalAI.git (synced 2026-04-16 21:08:16 -04:00)

add evals, reorder menu

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

.agents/debugging-backends.md (new file, 141 lines)
@@ -0,0 +1,141 @@
# Debugging and Rebuilding Backends

When a backend fails at runtime (e.g. a gRPC method error, a Python import error, or a dependency conflict), use this guide to diagnose, fix, and rebuild.

## Architecture Overview

- **Source directory**: `backend/python/<name>/` (or `backend/go/<name>/`, `backend/cpp/<name>/`)
- **Installed directory**: `backends/<name>/` — this is what LocalAI actually runs. It is populated by `make backends/<name>`, which builds a Docker image, exports it, and installs it via `local-ai backends install`.
- **Virtual environment**: `backends/<name>/venv/` — the installed Python venv (for Python backends). The Python binary is at `backends/<name>/venv/bin/python`.

Editing files in `backend/python/<name>/` does **not** affect the running backend until you rebuild with `make backends/<name>`.
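A recursive `diff` between the two trees is a quick way to spot a stale installed copy. The sketch below simulates the layout with scratch `/tmp` directories (stand-ins, not the real LocalAI paths):

```shell
# Simulate a source tree and an installed tree that have drifted apart.
mkdir -p /tmp/drift-demo/backend/python/demo /tmp/drift-demo/backends/demo
echo 'print("v2")' > /tmp/drift-demo/backend/python/demo/backend.py
echo 'print("v1")' > /tmp/drift-demo/backends/demo/backend.py

# diff -rq exits non-zero when the trees differ, so it doubles as a staleness check.
if ! diff -rq /tmp/drift-demo/backend/python/demo /tmp/drift-demo/backends/demo > /dev/null; then
  echo "installed copy is stale, rebuild with: make backends/demo"
fi
```

The same `diff -rq` invocation against the real `backend/python/<name>/` and `backends/<name>/` paths tells you whether a rebuild is pending.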
## Diagnosing Failures

### 1. Check the logs

Backend gRPC processes log to LocalAI's stdout/stderr. Look for lines tagged with the backend's model ID:

```
GRPC stderr id="trl-finetune-127.0.0.1:37335" line="..."
```

Common error patterns:

- **"Method not implemented"** — the backend is missing a gRPC method that the Go side calls. The model loader (`pkg/model/initializers.go`) always calls `LoadModel` after `Health`; fine-tuning backends must implement it even as a no-op stub.
- **Python import errors / `AttributeError`** — usually a dependency version mismatch (e.g. `pyarrow` removing `PyExtensionType`).
- **"failed to load backend"** — the gRPC process crashed or never started. Check stderr lines for the traceback.
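When a run produces a lot of interleaved output, it can help to filter these lines programmatically. A small sketch (the regex follows the log line shape shown above; `parse_grpc_log` is our own helper, not a LocalAI API):

```python
import re

# Matches the id="..." line="..." shape of LocalAI's GRPC stderr lines.
LOG_RE = re.compile(r'GRPC stderr id="(?P<id>[^"]+)" line="(?P<line>[^"]*)"')

def parse_grpc_log(raw: str):
    """Return (backend_id, message) for a GRPC stderr line, or None."""
    m = LOG_RE.search(raw)
    return (m.group("id"), m.group("line")) if m else None

sample = 'GRPC stderr id="trl-finetune-127.0.0.1:37335" line="ModuleNotFoundError: No module named datasets"'
print(parse_grpc_log(sample))
# ('trl-finetune-127.0.0.1:37335', 'ModuleNotFoundError: No module named datasets')
```

Piping the server log through a filter like this makes it easy to isolate one backend's traceback from everything else.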
### 2. Test the Python environment directly

You can run the installed venv's Python to check imports without starting the full server:

```bash
backends/<name>/venv/bin/python -c "import datasets; print(datasets.__version__)"
```

If `pip` is missing from the venv, bootstrap it:

```bash
backends/<name>/venv/bin/python -m ensurepip
```

Then use `backends/<name>/venv/bin/python -m pip install ...` to test fixes in the installed venv before committing them to the source requirements.

### 3. Check upstream dependency constraints

When you hit a dependency conflict, check what the main library expects. For example, TRL's upstream `requirements.txt`:

```
https://github.com/huggingface/trl/blob/main/requirements.txt
```

Pin minimum versions in the backend's requirements files to match upstream.
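As a sketch, matching an upstream floor in a backend requirements file looks like this (the package names are real, but these exact version numbers are illustrative; take the actual minimums from the upstream file):

```
datasets>=2.21.0
pyarrow>=15.0.0
```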
## Common Fixes

### Missing gRPC methods

If the Go side calls a method the backend doesn't implement (e.g. `LoadModel`), add a no-op stub in `backend.py`:

```python
def LoadModel(self, request, context):
    """No-op — actual loading happens elsewhere."""
    return backend_pb2.Result(success=True, message="OK")
```

The gRPC contract requires `LoadModel` to succeed for the model loader to return a usable client, even if the backend doesn't need upfront model loading.

### Dependency version conflicts

Python backends often break when a transitive dependency releases a breaking change (e.g. `pyarrow` removing `PyExtensionType`). Steps:

1. Identify the broken import in the logs
2. Test in the installed venv: `backends/<name>/venv/bin/python -c "import <module>"`
3. Check upstream requirements for version constraints
4. Update **all** requirements files in `backend/python/<name>/`:
   - `requirements.txt` — base deps (grpcio, protobuf)
   - `requirements-cpu.txt` — CPU-specific (includes PyTorch CPU index)
   - `requirements-cublas12.txt` — CUDA 12
   - `requirements-cublas13.txt` — CUDA 13
5. Rebuild: `make backends/<name>`
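After updating pins, it can help to sanity-check that an environment actually satisfies them. A minimal sketch (`at_least` and `check_min_version` are our own names, not LocalAI utilities, and the simple tuple comparison ignores pre-release and epoch syntax that real resolvers handle; run it with the venv's Python so `importlib.metadata` sees the venv's packages):

```python
from importlib import metadata

def at_least(installed: str, minimum: str) -> bool:
    """Compare dotted numeric versions, e.g. '15.0.0' >= '14.0.1'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return to_tuple(installed) >= to_tuple(minimum)

def check_min_version(package: str, minimum: str) -> bool:
    """True if `package` is installed and meets the minimum pin."""
    try:
        return at_least(metadata.version(package), minimum)
    except metadata.PackageNotFoundError:
        return False

print(at_least("15.0.0", "14.0.1"))  # True
print(at_least("2.9", "2.21.0"))     # False
```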
### PyTorch index conflicts (uv resolver)

The Docker build uses `uv` for pip installs. When `--extra-index-url` points to the PyTorch wheel index, `uv` may refuse to fetch packages like `requests` from PyPI if it finds a different version on the PyTorch index first. Fix this by adding `--index-strategy=unsafe-first-match` to `install.sh`:

```bash
EXTRA_PIP_INSTALL_FLAGS+=" --upgrade --index-strategy=unsafe-first-match"
installRequirements
```

Most Python backends already do this — check `backend/python/transformers/install.sh` or similar for reference.

## Rebuilding

### Rebuild a single backend

```bash
make backends/<name>
```

This runs the Docker build (`Dockerfile.python`), exports the image to `backend-images/<name>.tar`, and installs it into `backends/<name>/`. It also rebuilds the `local-ai` Go binary (without extra tags).

**Important**: If you were previously running with `GO_TAGS=auth`, the `make backends/<name>` step will overwrite your binary without that tag. Rebuild the Go binary afterward:

```bash
GO_TAGS=auth make build
```

### Rebuild and restart

After rebuilding a backend, you must restart LocalAI for it to pick up the new backend files. The backend gRPC process is spawned on demand when the model is first loaded.

```bash
# Kill existing process
kill <pid>

# Restart
./local-ai run --debug [your flags]
```

### Quick iteration (skip Docker rebuild)

For fast iteration on a Python backend's `backend.py` without a full Docker rebuild, you can edit the installed copy directly:

```bash
# Edit the installed copy
vim backends/<name>/backend.py

# Restart LocalAI to respawn the gRPC process
```

This is useful for testing but **does not persist** — the next `make backends/<name>` will overwrite it. Always commit fixes to the source in `backend/python/<name>/`.
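If the quick edit works, copy it back into the source tree before the next rebuild so it isn't lost. The toy run below uses throwaway `/tmp` paths as stand-ins for the installed (`backends/<name>/`) and source (`backend/python/<name>/`) copies:

```shell
# Stand-ins for the installed copy (hand-edited) and the source copy.
mkdir -p /tmp/persist-demo/installed /tmp/persist-demo/source
echo "working fix" > /tmp/persist-demo/installed/backend.py
echo "old code"    > /tmp/persist-demo/source/backend.py

# Persist the tested fix: installed copy -> source tree, then rebuild.
cp /tmp/persist-demo/installed/backend.py /tmp/persist-demo/source/backend.py
cat /tmp/persist-demo/source/backend.py   # prints: working fix
```

With the real paths, that last `cp` is `cp backends/<name>/backend.py backend/python/<name>/backend.py`, followed by a normal `make backends/<name>`.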
## Verification

After fixing and rebuilding:

1. Start LocalAI and confirm the backend registers: look for `Registering backend name="<name>"` in the logs
2. Trigger the operation that failed (e.g. start a fine-tuning job)
3. Watch the GRPC stderr/stdout lines for the backend's model ID
4. Confirm the logs show no new errors or tracebacks
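Step 1 lends itself to a quick scripted check: grep the captured logs for the registration line quoted above. In this sketch the log text is an inline sample and `trl` is an illustrative backend name:

```shell
# Sample of captured LocalAI output (illustrative).
LOG='Registering backend name="trl"
GRPC stderr id="trl-finetune-127.0.0.1:37335" line="training started"'

# Fail fast if the backend never registered.
if echo "$LOG" | grep -q 'Registering backend name="trl"'; then
  echo "backend registered"
else
  echo "backend missing" >&2
  exit 1
fi
```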

@@ -21,7 +21,7 @@ import backend_pb2
 import backend_pb2_grpc
 
 _ONE_DAY_IN_SECONDS = 60 * 60 * 24
-MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '1'))
+MAX_WORKERS = int(os.environ.get('PYTHON_GRPC_MAX_WORKERS', '4'))
 
 
 class ProgressCallback:
@@ -38,16 +38,22 @@ class ProgressCallback:
         parent = self
 
         class _Callback(TrainerCallback):
+            def __init__(self):
+                self._train_start_time = None
+
+            def on_train_begin(self, args, state, control, **kwargs):
+                self._train_start_time = time.time()
+
             def on_log(self, args, state, control, logs=None, **kwargs):
                 if logs is None:
                     return
                 total_steps = state.max_steps if state.max_steps > 0 else 0
                 progress = (state.global_step / total_steps * 100) if total_steps > 0 else 0
                 eta = 0.0
-                if state.global_step > 0 and total_steps > 0:
-                    elapsed = time.time() - state.logging_steps  # approximate
+                if state.global_step > 0 and total_steps > 0 and self._train_start_time:
+                    elapsed = time.time() - self._train_start_time
                     remaining_steps = total_steps - state.global_step
-                    if state.global_step > 1:
+                    if state.global_step > 0:
                         eta = remaining_steps * (elapsed / state.global_step)
 
                 extra_metrics = {}
@@ -72,6 +78,58 @@ class ProgressCallback:
                 )
                 parent.progress_queue.put(update)
 
+            def on_prediction_step(self, args, state, control, **kwargs):
+                """Send periodic updates during evaluation so the UI doesn't freeze."""
+                if not hasattr(self, '_eval_update_counter'):
+                    self._eval_update_counter = 0
+                self._eval_update_counter += 1
+                # Throttle: send an update every 10 prediction steps
+                if self._eval_update_counter % 10 != 0:
+                    return
+                total_steps = state.max_steps if state.max_steps > 0 else 0
+                progress = (state.global_step / total_steps * 100) if total_steps > 0 else 0
+                update = backend_pb2.FineTuneProgressUpdate(
+                    job_id=parent.job_id,
+                    current_step=state.global_step,
+                    total_steps=total_steps,
+                    current_epoch=float(state.epoch or 0),
+                    total_epochs=float(parent.total_epochs),
+                    progress_percent=float(progress),
+                    status="training",
+                    message=f"Evaluating... (batch {self._eval_update_counter})",
+                )
+                parent.progress_queue.put(update)
+
+            def on_evaluate(self, args, state, control, metrics=None, **kwargs):
+                """Report eval results once evaluation is done."""
+                # Reset prediction counter for next eval round
+                self._eval_update_counter = 0
+
+                total_steps = state.max_steps if state.max_steps > 0 else 0
+                progress = (state.global_step / total_steps * 100) if total_steps > 0 else 0
+
+                eval_loss = 0.0
+                extra_metrics = {}
+                if metrics:
+                    eval_loss = float(metrics.get('eval_loss', 0))
+                    for k, v in metrics.items():
+                        if isinstance(v, (int, float)) and k not in ('eval_loss', 'epoch'):
+                            extra_metrics[k] = float(v)
+
+                update = backend_pb2.FineTuneProgressUpdate(
+                    job_id=parent.job_id,
+                    current_step=state.global_step,
+                    total_steps=total_steps,
+                    current_epoch=float(state.epoch or 0),
+                    total_epochs=float(parent.total_epochs),
+                    eval_loss=eval_loss,
+                    progress_percent=float(progress),
+                    status="training",
+                    message=f"Evaluation complete at step {state.global_step}",
+                    extra_metrics=extra_metrics,
+                )
+                parent.progress_queue.put(update)
+
             def on_save(self, args, state, control, **kwargs):
                 checkpoint_path = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
                 update = backend_pb2.FineTuneProgressUpdate(
@@ -256,6 +314,38 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             else:
                 dataset = load_dataset(request.dataset_source, split=dataset_split)
 
+            # Eval dataset setup
+            eval_dataset = None
+            eval_strategy = extra.get("eval_strategy", "steps")
+            eval_steps = int(extra.get("eval_steps", str(request.save_steps if request.save_steps > 0 else 500)))
+
+            if eval_strategy != "no":
+                eval_split = extra.get("eval_split")
+                eval_dataset_source = extra.get("eval_dataset_source")
+                if eval_split:
+                    # Load a specific split as eval dataset
+                    if os.path.exists(request.dataset_source):
+                        if request.dataset_source.endswith('.json') or request.dataset_source.endswith('.jsonl'):
+                            eval_dataset = load_dataset("json", data_files=request.dataset_source, split=eval_split)
+                        elif request.dataset_source.endswith('.csv'):
+                            eval_dataset = load_dataset("csv", data_files=request.dataset_source, split=eval_split)
+                        else:
+                            eval_dataset = load_dataset(request.dataset_source, split=eval_split)
+                    else:
+                        eval_dataset = load_dataset(request.dataset_source, split=eval_split)
+                elif eval_dataset_source:
+                    # Load eval dataset from a separate source
+                    eval_dataset = load_dataset(eval_dataset_source, split="train")
+                else:
+                    # Auto-split from training set
+                    eval_split_ratio = float(extra.get("eval_split_ratio", "0.1"))
+                    split = dataset.train_test_split(test_size=eval_split_ratio)
+                    dataset = split["train"]
+                    eval_dataset = split["test"]
+
+            if eval_strategy == "no":
+                eval_dataset = None
+
             # Training config
             output_dir = request.output_dir or f"./output-{job.job_id}"
             num_epochs = request.num_epochs if request.num_epochs > 0 else 3
@@ -308,6 +398,12 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
             if save_total_limit:
                 _save_kwargs["save_total_limit"] = save_total_limit
 
+            # Eval kwargs
+            _eval_kwargs = {}
+            if eval_dataset is not None:
+                _eval_kwargs["eval_strategy"] = eval_strategy
+                _eval_kwargs["eval_steps"] = eval_steps
+
             # Common training arguments shared by all methods
             _common_args = dict(
                 output_dir=output_dir,
@@ -324,6 +420,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                 report_to="none",
                 **_save_kwargs,
                 **common_train_kwargs,
+                **_eval_kwargs,
             )
 
             # Select trainer based on training method
@@ -343,6 +440,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                     model=model,
                     args=training_args,
                     train_dataset=dataset,
+                    eval_dataset=eval_dataset,
                     processing_class=tokenizer,
                     callbacks=[progress_cb.get_callback()],
                 )
@@ -365,6 +463,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                     model=model,
                     args=training_args,
                     train_dataset=dataset,
+                    eval_dataset=eval_dataset,
                     processing_class=tokenizer,
                     callbacks=[progress_cb.get_callback()],
                 )
@@ -420,6 +519,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                     model=model,
                     args=training_args,
                     train_dataset=dataset,
+                    eval_dataset=eval_dataset,
                     processing_class=tokenizer,
                     callbacks=[progress_cb.get_callback()],
                 )
@@ -440,6 +540,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                     model=model,
                     args=training_args,
                     train_dataset=dataset,
+                    eval_dataset=eval_dataset,
                     processing_class=tokenizer,
                     callbacks=[progress_cb.get_callback()],
                 )
@@ -478,6 +579,7 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                     model=model,
                     args=training_args,
                     train_dataset=dataset,
+                    eval_dataset=eval_dataset,
                     processing_class=tokenizer,
                     callbacks=[progress_cb.get_callback()],
                 )
@@ -528,9 +630,8 @@ class BackendServicer(backend_pb2_grpc.BackendServicer):
                 continue
 
     def StopFineTune(self, request, context):
-        # No-op: stopping is handled by killing the backend process from Go.
-        # This stub remains to satisfy the proto-generated gRPC interface.
-        return backend_pb2.Result(success=True, message="No-op (process kill used instead)")
+        # Stopping is handled by killing the process from Go via ShutdownModel.
+        return backend_pb2.Result(success=True, message="OK")
 
     def ListCheckpoints(self, request, context):
         output_dir = request.output_dir
@@ -80,13 +80,14 @@ func UploadToCollectionEndpoint(app *application.Application) echo.HandlerFunc {
 			return c.JSON(http.StatusBadRequest, map[string]string{"error": err.Error()})
 		}
 		defer src.Close()
-		if err := svc.UploadToCollectionForUser(userID, name, file.Filename, src); err != nil {
+		key, err := svc.UploadToCollectionForUser(userID, name, file.Filename, src)
+		if err != nil {
 			if strings.Contains(err.Error(), "not found") {
 				return c.JSON(http.StatusNotFound, map[string]string{"error": err.Error()})
 			}
 			return c.JSON(http.StatusInternalServerError, map[string]string{"error": err.Error()})
 		}
-		return c.JSON(http.StatusOK, map[string]string{"status": "ok", "filename": file.Filename})
+		return c.JSON(http.StatusOK, map[string]string{"status": "ok", "filename": file.Filename, "key": key})
 	}
 }
@@ -208,6 +208,32 @@
   overflow: hidden;
 }
 
+.sidebar-section-toggle {
+  display: flex;
+  align-items: center;
+  justify-content: space-between;
+  width: 100%;
+  background: none;
+  border: none;
+  cursor: pointer;
+  font-family: inherit;
+  transition: color var(--duration-fast);
+}
+
+.sidebar-section-toggle:hover {
+  color: var(--color-text-secondary);
+}
+
+.sidebar-section-chevron {
+  font-size: 0.5rem;
+  transition: transform var(--duration-fast);
+  flex-shrink: 0;
+}
+
+.sidebar-section-toggle.open .sidebar-section-chevron {
+  transform: rotate(90deg);
+}
+
 .nav-item {
   display: flex;
   align-items: center;
@@ -392,6 +418,10 @@
   display: none;
 }
 
+.sidebar.collapsed .sidebar-section-chevron {
+  display: none;
+}
+
 .sidebar.collapsed .nav-item {
   justify-content: center;
   padding: 8px 0;
@@ -2614,6 +2644,43 @@
   font-size: 0.625rem;
 }
 
+/* Studio tabs */
+.studio-tabs {
+  display: flex;
+  gap: 0;
+  border-bottom: 1px solid var(--color-border-subtle);
+  padding: 0 var(--spacing-xl);
+  background: var(--color-bg-primary);
+  position: sticky;
+  top: 0;
+  z-index: 10;
+}
+
+.studio-tab {
+  display: flex;
+  align-items: center;
+  gap: 6px;
+  background: none;
+  border: none;
+  padding: var(--spacing-sm) var(--spacing-md);
+  font-size: 0.8125rem;
+  font-family: inherit;
+  color: var(--color-text-secondary);
+  cursor: pointer;
+  border-bottom: 2px solid transparent;
+  transition: color var(--duration-fast), border-color var(--duration-fast);
+}
+
+.studio-tab:hover {
+  color: var(--color-text-primary);
+}
+
+.studio-tab-active {
+  color: var(--color-primary);
+  border-bottom-color: var(--color-primary);
+  font-weight: 500;
+}
+
 /* Two-column layout for media generation pages */
 .media-layout {
   display: grid;
@@ -1,38 +1,57 @@
 import { useState, useEffect } from 'react'
-import { NavLink, useNavigate } from 'react-router-dom'
+import { NavLink, useNavigate, useLocation } from 'react-router-dom'
 import ThemeToggle from './ThemeToggle'
 import { useAuth } from '../context/AuthContext'
 import { apiUrl } from '../utils/basePath'
 
 const COLLAPSED_KEY = 'localai_sidebar_collapsed'
+const SECTIONS_KEY = 'localai_sidebar_sections'
 
-const mainItems = [
+const topItems = [
   { path: '/app', icon: 'fas fa-home', label: 'Home' },
   { path: '/app/models', icon: 'fas fa-download', label: 'Install Models', adminOnly: true },
   { path: '/app/chat', icon: 'fas fa-comments', label: 'Chat' },
   { path: '/app/image', icon: 'fas fa-image', label: 'Images' },
   { path: '/app/video', icon: 'fas fa-video', label: 'Video' },
   { path: '/app/tts', icon: 'fas fa-music', label: 'TTS' },
   { path: '/app/sound', icon: 'fas fa-volume-high', label: 'Sound' },
   { path: '/app/studio', icon: 'fas fa-palette', label: 'Studio' },
   { path: '/app/talk', icon: 'fas fa-phone', label: 'Talk' },
-  { path: '/app/fine-tune', icon: 'fas fa-graduation-cap', label: 'Fine-Tune', feature: 'fine_tuning' },
-  { path: '/app/usage', icon: 'fas fa-chart-bar', label: 'Usage', authOnly: true },
 ]
 
-const agentItems = [
-  { path: '/app/agents', icon: 'fas fa-robot', label: 'Agents' },
-  { path: '/app/skills', icon: 'fas fa-wand-magic-sparkles', label: 'Skills' },
-  { path: '/app/collections', icon: 'fas fa-database', label: 'Memory' },
-  { path: '/app/agent-jobs', icon: 'fas fa-tasks', label: 'MCP CI Jobs', feature: 'mcp' },
-]
-
-const systemItems = [
-  { path: '/app/users', icon: 'fas fa-users', label: 'Users', adminOnly: true, authOnly: true },
-  { path: '/app/backends', icon: 'fas fa-server', label: 'Backends', adminOnly: true },
-  { path: '/app/traces', icon: 'fas fa-chart-line', label: 'Traces', adminOnly: true },
-  { path: '/app/p2p', icon: 'fas fa-circle-nodes', label: 'Swarm', adminOnly: true },
-  { path: '/app/manage', icon: 'fas fa-desktop', label: 'System', adminOnly: true },
-  { path: '/app/settings', icon: 'fas fa-cog', label: 'Settings', adminOnly: true },
+const sections = [
+  {
+    id: 'tools',
+    title: 'Tools',
+    items: [
+      { path: '/app/fine-tune', icon: 'fas fa-graduation-cap', label: 'Fine-Tune', feature: 'fine_tuning' },
+    ],
+  },
+  {
+    id: 'agents',
+    title: 'Agents',
+    featureMap: {
+      '/app/agents': 'agents',
+      '/app/skills': 'skills',
+      '/app/collections': 'collections',
+      '/app/agent-jobs': 'mcp_jobs',
+    },
+    items: [
+      { path: '/app/agents', icon: 'fas fa-robot', label: 'Agents' },
+      { path: '/app/skills', icon: 'fas fa-wand-magic-sparkles', label: 'Skills' },
+      { path: '/app/collections', icon: 'fas fa-database', label: 'Memory' },
+      { path: '/app/agent-jobs', icon: 'fas fa-tasks', label: 'MCP CI Jobs', feature: 'mcp' },
+    ],
+  },
+  {
+    id: 'system',
+    title: 'System',
+    items: [
+      { path: '/app/usage', icon: 'fas fa-chart-bar', label: 'Usage', authOnly: true },
+      { path: '/app/users', icon: 'fas fa-users', label: 'Users', adminOnly: true, authOnly: true },
+      { path: '/app/backends', icon: 'fas fa-server', label: 'Backends', adminOnly: true },
+      { path: '/app/traces', icon: 'fas fa-chart-line', label: 'Traces', adminOnly: true },
+      { path: '/app/p2p', icon: 'fas fa-circle-nodes', label: 'Swarm', adminOnly: true },
+      { path: '/app/manage', icon: 'fas fa-desktop', label: 'System', adminOnly: true },
+      { path: '/app/settings', icon: 'fas fa-cog', label: 'Settings', adminOnly: true },
+    ],
+  },
 ]
 
 function NavItem({ item, onClose, collapsed }) {
@@ -52,18 +71,47 @@ function NavItem({ item, onClose, collapsed }) {
   )
 }
 
+function loadSectionState() {
+  try {
+    const stored = localStorage.getItem(SECTIONS_KEY)
+    return stored ? JSON.parse(stored) : {}
+  } catch (_) {
+    return {}
+  }
+}
+
+function saveSectionState(state) {
+  try { localStorage.setItem(SECTIONS_KEY, JSON.stringify(state)) } catch (_) { /* ignore */ }
+}
+
 export default function Sidebar({ isOpen, onClose }) {
   const [features, setFeatures] = useState({})
   const [collapsed, setCollapsed] = useState(() => {
     try { return localStorage.getItem(COLLAPSED_KEY) === 'true' } catch (_) { return false }
   })
+  const [openSections, setOpenSections] = useState(loadSectionState)
   const { isAdmin, authEnabled, user, logout, hasFeature } = useAuth()
   const navigate = useNavigate()
+  const location = useLocation()
 
   useEffect(() => {
     fetch(apiUrl('/api/features')).then(r => r.json()).then(setFeatures).catch(() => {})
   }, [])
 
+  // Auto-expand section containing the active route
+  useEffect(() => {
+    for (const section of sections) {
+      const match = section.items.some(item => location.pathname.startsWith(item.path))
+      if (match && !openSections[section.id]) {
+        setOpenSections(prev => {
+          const next = { ...prev, [section.id]: true }
+          saveSectionState(next)
+          return next
+        })
+      }
+    }
+  }, [location.pathname])
+
   const toggleCollapse = () => {
     setCollapsed(prev => {
       const next = !prev
@@ -73,19 +121,34 @@ export default function Sidebar({ isOpen, onClose }) {
     })
   }
 
-  const visibleMainItems = mainItems.filter(item => {
+  const toggleSection = (id) => {
+    setOpenSections(prev => {
+      const next = { ...prev, [id]: !prev[id] }
+      saveSectionState(next)
+      return next
+    })
+  }
+
+  const filterItem = (item) => {
     if (item.adminOnly && !isAdmin) return false
     if (item.authOnly && !authEnabled) return false
-    if (item.feature && features[item.feature] === false) return false
+    if (item.feature && !hasFeature(item.feature)) return false
     return true
-  })
+  }
 
-  const visibleSystemItems = systemItems.filter(item => {
-    if (item.adminOnly && !isAdmin) return false
-    if (item.authOnly && !authEnabled) return false
-    return true
-  })
+  const visibleTopItems = topItems.filter(filterItem)
+
+  const getVisibleSectionItems = (section) => {
+    return section.items.filter(item => {
+      if (!filterItem(item)) return false
+      if (section.featureMap) {
+        const featureName = section.featureMap[item.path]
+        return featureName ? hasFeature(featureName) : isAdmin
+      }
+      return true
+    })
+  }
 
   return (
     <>
@@ -107,57 +170,57 @@ export default function Sidebar({ isOpen, onClose }) {
 
         {/* Navigation */}
         <nav className="sidebar-nav">
-          {/* Main section */}
+          {/* Top-level items */}
           <div className="sidebar-section">
-            {visibleMainItems.map(item => (
+            {visibleTopItems.map(item => (
               <NavItem key={item.path} item={item} onClose={onClose} collapsed={collapsed} />
             ))}
           </div>
 
-          {/* Agents section (per-feature permissions) */}
-          {features.agents !== false && (() => {
-            const featureMap = {
-              '/app/agents': 'agents',
-              '/app/skills': 'skills',
-              '/app/collections': 'collections',
-              '/app/agent-jobs': 'mcp_jobs',
-            }
-            const visibleAgentItems = agentItems.filter(item => {
-              if (item.feature && features[item.feature] === false) return false
-              const featureName = featureMap[item.path]
-              return featureName ? hasFeature(featureName) : isAdmin
-            })
-            if (visibleAgentItems.length === 0) return null
+          {/* Collapsible sections */}
+          {sections.map(section => {
+            // For agents section, check global feature flag
+            if (section.id === 'agents' && features.agents === false) return null
+
+            const visibleItems = getVisibleSectionItems(section)
+            if (visibleItems.length === 0) return null
+
+            const isSectionOpen = openSections[section.id]
+            const showItems = isSectionOpen || collapsed
 
             return (
-              <div className="sidebar-section">
-                <div className="sidebar-section-title">Agents</div>
-                {visibleAgentItems.map(item => (
-                  <NavItem key={item.path} item={item} onClose={onClose} collapsed={collapsed} />
-                ))}
+              <div key={section.id} className="sidebar-section">
+                <button
+                  className={`sidebar-section-title sidebar-section-toggle ${isSectionOpen ? 'open' : ''}`}
+                  onClick={() => toggleSection(section.id)}
+                  title={collapsed ? section.title : undefined}
+                >
+                  <span>{section.title}</span>
+                  <i className="fas fa-chevron-right sidebar-section-chevron" />
+                </button>
+                {showItems && (
+                  <div className="sidebar-section-items">
+                    {section.id === 'system' && (
+                      <a
+                        href={apiUrl('/swagger/index.html')}
+                        target="_blank"
+                        rel="noopener noreferrer"
+                        className="nav-item"
+                        title={collapsed ? 'API' : undefined}
+                      >
+                        <i className="fas fa-code nav-icon" />
+                        <span className="nav-label">API</span>
+                        <i className="fas fa-external-link-alt nav-external" />
+                      </a>
+                    )}
+                    {visibleItems.map(item => (
+                      <NavItem key={item.path} item={item} onClose={onClose} collapsed={collapsed} />
+                    ))}
+                  </div>
+                )}
               </div>
             )
-          })()}
-
-          {/* System section */}
-          <div className="sidebar-section">
-            {visibleSystemItems.length > 0 && (
-              <div className="sidebar-section-title">System</div>
-            )}
-            <a
-              href={apiUrl('/swagger/index.html')}
-              target="_blank"
-              rel="noopener noreferrer"
-              className="nav-item"
-              title={collapsed ? 'API' : undefined}
-            >
-              <i className="fas fa-code nav-icon" />
-              <span className="nav-label">API</span>
-              <i className="fas fa-external-link-alt nav-external" />
-            </a>
-            {visibleSystemItems.map(item => (
-              <NavItem key={item.path} item={item} onClose={onClose} collapsed={collapsed} />
-            ))}
-          </div>
+          })}
         </nav>
 
         {/* Footer */}
@@ -180,223 +180,115 @@ function formatAxisValue(val, decimals) {
   return val.toExponential(1)
 }
 
-function TrainingChart({ events }) {
+function SingleMetricChart({ data, valueKey, label, color, formatValue, events }) {
   const [tooltip, setTooltip] = useState(null)
   const svgRef = useRef(null)
 
-  if (!events || events.length < 2) return null
+  if (!data || data.length < 1) return null
 
-  const pad = { top: 20, right: 60, bottom: 40, left: 60 }
-  const W = 600, H = 300
+  const pad = { top: 16, right: 12, bottom: 32, left: 52 }
+  const W = 400, H = 220
   const cw = W - pad.left - pad.right
   const ch = H - pad.top - pad.bottom
 
-  const steps = events.map(e => e.current_step)
-  const losses = events.map(e => e.loss)
-  const lrs = events.map(e => e.learning_rate).filter(v => v != null && v > 0)
-  const hasLr = lrs.length > 1
+  const steps = data.map(e => e.current_step)
+  const values = data.map(e => e[valueKey])
 
   const minStep = Math.min(...steps), maxStep = Math.max(...steps)
   const stepRange = maxStep - minStep || 1
-  const minLoss = Math.min(...losses), maxLoss = Math.max(...losses)
-  const lossRange = maxLoss - minLoss || 1
-  const lossPad = lossRange * 0.05
-  const yMin = Math.max(0, minLoss - lossPad), yMax = maxLoss + lossPad
+  const minVal = Math.min(...values), maxVal = Math.max(...values)
+  const valRange = maxVal - minVal || 1
+  const valPad = valRange * 0.05
+  const yMin = Math.max(0, minVal - valPad), yMax = maxVal + valPad
   const yRange = yMax - yMin || 1
 
   const x = (step) => pad.left + ((step - minStep) / stepRange) * cw
-  const yLoss = (loss) => pad.top + (1 - (loss - yMin) / yRange) * ch
+  const y = (val) => pad.top + (1 - (val - yMin) / yRange) * ch
 
-  // Loss polyline
-  const lossPoints = events.map(e => `${x(e.current_step)},${yLoss(e.loss)}`).join(' ')
+  const points = data.map(e => `${x(e.current_step)},${y(e[valueKey])}`).join(' ')
 
-  // Learning rate polyline (scaled to right axis)
-  let lrPoints = ''
-  let lrMin = 0, lrMax = 1, lrRange = 1
-  if (hasLr) {
-    lrMin = Math.min(...lrs)
-    lrMax = Math.max(...lrs)
-    lrRange = lrMax - lrMin || 1
-    const lrPad = lrRange * 0.05
-    lrMin = Math.max(0, lrMin - lrPad)
-    lrMax = lrMax + lrPad
-    lrRange = lrMax - lrMin || 1
-    const yLr = (lr) => pad.top + (1 - (lr - lrMin) / lrRange) * ch
-    lrPoints = events
-      .filter(e => e.learning_rate != null && e.learning_rate > 0)
-      .map(e => `${x(e.current_step)},${yLr(e.learning_rate)}`)
-      .join(' ')
-  }
+  const xTickCount = Math.min(5, data.length)
+  const xTicks = Array.from({ length: xTickCount }, (_, i) => Math.round(minStep + (stepRange * i) / (xTickCount - 1)))
+  const yTickCount = 4
+  const yTicks = Array.from({ length: yTickCount }, (_, i) => yMin + (yRange * i) / (yTickCount - 1))
 
-  // Axis ticks
-  const xTickCount = Math.min(6, events.length)
-  const xTicks = Array.from({ length: xTickCount }, (_, i) => {
-    const step = minStep + (stepRange * i) / (xTickCount - 1)
-    return Math.round(step)
-  })
-
-  const yTickCount = 5
-  const yTicks = Array.from({ length: yTickCount }, (_, i) => {
-    return yMin + (yRange * i) / (yTickCount - 1)
-  })
-
-  // LR axis ticks (right)
-  const lrTicks = hasLr ? Array.from({ length: yTickCount }, (_, i) => {
-    return lrMin + (lrRange * i) / (yTickCount - 1)
-  }) : []
-  const yLrTick = (lr) => pad.top + (1 - (lr - lrMin) / lrRange) * ch
-
-  // Epoch boundary markers
+  // Epoch boundaries from the full events list if provided
   const epochBoundaries = []
-  for (let i = 1; i < events.length; i++) {
-    const prevEpoch = Math.floor(events[i - 1].current_epoch || 0)
-    const curEpoch = Math.floor(events[i].current_epoch || 0)
+  const evts = events || data
+  for (let i = 1; i < evts.length; i++) {
+    const prevEpoch = Math.floor(evts[i - 1].current_epoch || 0)
+    const curEpoch = Math.floor(evts[i].current_epoch || 0)
     if (curEpoch > prevEpoch && curEpoch > 0) {
-      epochBoundaries.push({ step: events[i].current_step, epoch: curEpoch })
+      epochBoundaries.push({ step: evts[i].current_step, epoch: curEpoch })
     }
   }
 
+  const fmtVal = formatValue || ((v) => formatAxisValue(v, 3))
+
   const handleMouseMove = (e) => {
     if (!svgRef.current) return
     const rect = svgRef.current.getBoundingClientRect()
     const mx = ((e.clientX - rect.left) / rect.width) * W
     const step = minStep + ((mx - pad.left) / cw) * stepRange
     // Find nearest event
     let nearest = events[0], bestDist = Infinity
|
||||
for (const ev of events) {
|
||||
const d = Math.abs(ev.current_step - step)
|
||||
if (d < bestDist) { bestDist = d; nearest = ev }
|
||||
let nearest = data[0], bestDist = Infinity
|
||||
for (const d of data) {
|
||||
const dist = Math.abs(d.current_step - step)
|
||||
if (dist < bestDist) { bestDist = dist; nearest = d }
|
||||
}
|
||||
setTooltip({ x: x(nearest.current_step), y: yLoss(nearest.loss), data: nearest })
|
||||
setTooltip({ x: x(nearest.current_step), y: y(nearest[valueKey]), data: nearest })
|
||||
}
|
||||
|
||||
return (
|
||||
<div style={{ marginBottom: 'var(--spacing-md)' }}>
|
||||
<div style={{ fontSize: '0.875rem', fontWeight: 'bold', marginBottom: 'var(--spacing-xs)', display: 'flex', alignItems: 'center', gap: 'var(--spacing-md)' }}>
|
||||
<span>Training Curves</span>
|
||||
<span style={{ fontSize: '0.75rem', fontWeight: 'normal', color: 'var(--color-primary)' }}>
|
||||
<span style={{ display: 'inline-block', width: 16, height: 2, background: 'var(--color-primary)', verticalAlign: 'middle', marginRight: 4 }} /> Loss
|
||||
</span>
|
||||
{hasLr && (
|
||||
<span style={{ fontSize: '0.75rem', fontWeight: 'normal', color: 'var(--color-text-muted)' }}>
|
||||
<span style={{ display: 'inline-block', width: 16, height: 0, borderTop: '2px dashed var(--color-text-muted)', verticalAlign: 'middle', marginRight: 4 }} /> Learning Rate
|
||||
</span>
|
||||
)}
|
||||
<div>
|
||||
<div style={{ fontSize: '0.8125rem', fontWeight: 600, marginBottom: 4, display: 'flex', alignItems: 'center', gap: 6 }}>
|
||||
<span style={{ display: 'inline-block', width: 12, height: 3, background: color, borderRadius: 2 }} />
|
||||
{label}
|
||||
</div>
|
||||
<svg
|
||||
ref={svgRef}
|
||||
viewBox={`0 0 ${W} ${H}`}
|
||||
style={{ width: '100%', height: 'auto', maxHeight: 400, background: 'var(--color-bg-secondary)', borderRadius: 'var(--radius-sm)' }}
|
||||
style={{ width: '100%', height: 'auto', maxHeight: 220, background: 'var(--color-bg-secondary)', borderRadius: 'var(--radius-sm)' }}
|
||||
onMouseMove={handleMouseMove}
|
||||
onMouseLeave={() => setTooltip(null)}
|
||||
>
|
||||
{/* Grid lines */}
|
||||
{yTicks.map((val, i) => (
|
||||
<line key={i} x1={pad.left} x2={W - pad.right} y1={yLoss(val)} y2={yLoss(val)}
|
||||
stroke="currentColor" strokeOpacity={0.1} strokeDasharray="4 4" />
|
||||
<line key={i} x1={pad.left} x2={W - pad.right} y1={y(val)} y2={y(val)}
|
||||
stroke="currentColor" strokeOpacity={0.08} strokeDasharray="3 3" />
|
||||
))}
|
||||
|
||||
{/* Epoch boundary markers */}
|
||||
{epochBoundaries.map((eb, i) => (
|
||||
<g key={i}>
|
||||
<line x1={x(eb.step)} x2={x(eb.step)} y1={pad.top} y2={H - pad.bottom}
|
||||
stroke="currentColor" strokeOpacity={0.2} strokeDasharray="6 3" />
|
||||
<text x={x(eb.step)} y={pad.top - 4} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.4} fontSize={9}>
|
||||
Epoch {eb.epoch}
|
||||
</text>
|
||||
stroke="currentColor" strokeOpacity={0.15} strokeDasharray="4 3" />
|
||||
</g>
|
||||
))}
|
||||
|
||||
{/* Loss curve */}
|
||||
<polyline points={lossPoints} fill="none" stroke="var(--color-primary)" strokeWidth={2} strokeLinejoin="round" />
|
||||
|
||||
{/* Learning rate curve */}
|
||||
{hasLr && lrPoints && (
|
||||
<polyline points={lrPoints} fill="none" stroke="currentColor" strokeOpacity={0.35}
|
||||
strokeWidth={1.5} strokeDasharray="4 3" strokeLinejoin="round" />
|
||||
)}
|
||||
|
||||
{/* X axis */}
|
||||
<polyline points={points} fill="none" stroke={color} strokeWidth={1.5} strokeLinejoin="round" />
|
||||
<line x1={pad.left} x2={W - pad.right} y1={H - pad.bottom} y2={H - pad.bottom}
|
||||
stroke="currentColor" strokeOpacity={0.3} />
|
||||
stroke="currentColor" strokeOpacity={0.2} />
|
||||
{xTicks.map((step, i) => (
|
||||
<g key={i}>
|
||||
<line x1={x(step)} x2={x(step)} y1={H - pad.bottom} y2={H - pad.bottom + 4}
|
||||
stroke="currentColor" strokeOpacity={0.3} />
|
||||
<text x={x(step)} y={H - pad.bottom + 16} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.6} fontSize={10}>
|
||||
{step}
|
||||
</text>
|
||||
</g>
|
||||
<text key={i} x={x(step)} y={H - pad.bottom + 14} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.5} fontSize={9}>{step}</text>
|
||||
))}
|
||||
<text x={pad.left + cw / 2} y={H - 4} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.5} fontSize={10}>
|
||||
Step
|
||||
</text>
|
||||
|
||||
{/* Y axis (left - Loss) */}
|
||||
<line x1={pad.left} x2={pad.left} y1={pad.top} y2={H - pad.bottom}
|
||||
stroke="currentColor" strokeOpacity={0.3} />
|
||||
stroke="currentColor" strokeOpacity={0.2} />
|
||||
{yTicks.map((val, i) => (
|
||||
<g key={i}>
|
||||
<line x1={pad.left - 4} x2={pad.left} y1={yLoss(val)} y2={yLoss(val)}
|
||||
stroke="currentColor" strokeOpacity={0.3} />
|
||||
<text x={pad.left - 8} y={yLoss(val) + 3} textAnchor="end"
|
||||
fill="currentColor" fillOpacity={0.6} fontSize={10}>
|
||||
{formatAxisValue(val, 3)}
|
||||
</text>
|
||||
</g>
|
||||
<text key={i} x={pad.left - 6} y={y(val) + 3} textAnchor="end"
|
||||
fill="currentColor" fillOpacity={0.5} fontSize={9}>{fmtVal(val)}</text>
|
||||
))}
|
||||
<text x={14} y={pad.top + ch / 2} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.5} fontSize={10}
|
||||
transform={`rotate(-90, 14, ${pad.top + ch / 2})`}>
|
||||
Loss
|
||||
</text>
|
||||
|
||||
{/* Y axis (right - Learning Rate) */}
|
||||
{hasLr && (
|
||||
<>
|
||||
<line x1={W - pad.right} x2={W - pad.right} y1={pad.top} y2={H - pad.bottom}
|
||||
stroke="currentColor" strokeOpacity={0.15} />
|
||||
{lrTicks.map((val, i) => (
|
||||
<g key={i}>
|
||||
<line x1={W - pad.right} x2={W - pad.right + 4} y1={yLrTick(val)} y2={yLrTick(val)}
|
||||
stroke="currentColor" strokeOpacity={0.2} />
|
||||
<text x={W - pad.right + 8} y={yLrTick(val) + 3} textAnchor="start"
|
||||
fill="currentColor" fillOpacity={0.4} fontSize={9}>
|
||||
{val.toExponential(0)}
|
||||
</text>
|
||||
</g>
|
||||
))}
|
||||
<text x={W - 8} y={pad.top + ch / 2} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.4} fontSize={9}
|
||||
transform={`rotate(90, ${W - 8}, ${pad.top + ch / 2})`}>
|
||||
LR
|
||||
</text>
|
||||
</>
|
||||
)}
|
||||
|
||||
{/* Tooltip */}
|
||||
<text x={pad.left + cw / 2} y={H - 2} textAnchor="middle"
|
||||
fill="currentColor" fillOpacity={0.4} fontSize={8}>Step</text>
|
||||
{tooltip && (
|
||||
<g>
|
||||
<line x1={tooltip.x} x2={tooltip.x} y1={pad.top} y2={H - pad.bottom}
|
||||
stroke="var(--color-primary)" strokeOpacity={0.4} strokeDasharray="2 2" />
|
||||
<circle cx={tooltip.x} cy={tooltip.y} r={4} fill="var(--color-primary)" />
|
||||
<rect x={tooltip.x + 8} y={tooltip.y - 36} width={140} height={48} rx={4}
|
||||
fill="var(--color-bg)" stroke="var(--color-border)" strokeWidth={1}
|
||||
style={{ filter: 'drop-shadow(0 2px 4px rgba(0,0,0,0.15))' }} />
|
||||
<text x={tooltip.x + 16} y={tooltip.y - 20} fill="currentColor" fontSize={10}>
|
||||
Step: {tooltip.data.current_step} | Epoch: {(tooltip.data.current_epoch || 0).toFixed(1)}
|
||||
stroke={color} strokeOpacity={0.4} strokeDasharray="2 2" />
|
||||
<circle cx={tooltip.x} cy={tooltip.y} r={3} fill={color} />
|
||||
<rect x={Math.min(tooltip.x + 8, W - 120)} y={tooltip.y - 24} width={110} height={30} rx={3}
|
||||
fill="var(--color-bg)" stroke="var(--color-border)" strokeWidth={1} />
|
||||
<text x={Math.min(tooltip.x + 14, W - 114)} y={tooltip.y - 10} fill="currentColor" fontSize={9}>
|
||||
Step {tooltip.data.current_step}
|
||||
</text>
|
||||
<text x={tooltip.x + 16} y={tooltip.y - 6} fill="var(--color-primary)" fontSize={10} fontWeight="bold">
|
||||
Loss: {tooltip.data.loss?.toFixed(4)}
|
||||
<text x={Math.min(tooltip.x + 14, W - 114)} y={tooltip.y + 2} fill={color} fontSize={9} fontWeight="bold">
|
||||
{fmtVal(tooltip.data[valueKey])}
|
||||
</text>
|
||||
{tooltip.data.learning_rate > 0 && (
|
||||
<text x={tooltip.x + 16} y={tooltip.y + 8} fill="currentColor" fillOpacity={0.6} fontSize={9}>
|
||||
LR: {tooltip.data.learning_rate?.toExponential(2)}
|
||||
</text>
|
||||
)}
|
||||
</g>
|
||||
)}
|
||||
</svg>
|
||||
@@ -404,6 +296,35 @@ function TrainingChart({ events }) {
)
}

function ChartsGrid({ events }) {
const lossData = events.filter(e => e.loss > 0)
const evalData = events.filter(e => e.eval_loss > 0)
const lrData = events.filter(e => e.learning_rate != null && e.learning_rate > 0)
const gradNormData = events.filter(e => e.grad_norm != null && e.grad_norm > 0)

const fmtExp = (v) => v.toExponential(1)

if (lossData.length < 2 && lrData.length < 2 && gradNormData.length < 2) return null

return (
<div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr', gap: 'var(--spacing-md)', marginBottom: 'var(--spacing-md)' }}>
<SingleMetricChart data={lossData} valueKey="loss" label="Training Loss" color="#3b82f6" events={events} />
{evalData.length >= 1 ? (
<SingleMetricChart data={evalData} valueKey="eval_loss" label="Eval Loss" color="#ef4444" events={events} />
) : (
<div style={{ display: 'flex', alignItems: 'center', justifyContent: 'center', background: 'var(--color-bg-secondary)', borderRadius: 'var(--radius-sm)', minHeight: 120 }}>
<span style={{ fontSize: '0.8125rem', color: 'var(--color-text-muted)' }}>
<i className="fas fa-chart-area" style={{ marginRight: 6 }} />
Eval Loss — waiting for eval data
</span>
</div>
)}
<SingleMetricChart data={lrData} valueKey="learning_rate" label="Learning Rate" color="#8b5cf6" formatValue={fmtExp} events={events} />
<SingleMetricChart data={gradNormData} valueKey="grad_norm" label="Gradient Norm" color="#f97316" events={events} />
</div>
)
}

function TrainingMonitor({ job, onStop }) {
const [events, setEvents] = useState([])
const [latest, setLatest] = useState(null)
@@ -512,8 +433,8 @@ function TrainingMonitor({ job, onStop }) {
</div>
)}

{/* Training chart */}
<TrainingChart events={events} />
{/* Training charts (2x2 grid) */}
<ChartsGrid events={events} />

{latest?.message && (
<div style={{ fontSize: '0.875rem', color: 'var(--color-text-muted)' }}>
@@ -815,6 +736,12 @@ export default function FineTune() {
const [showAdvanced, setShowAdvanced] = useState(false)
const [resumeFromCheckpoint, setResumeFromCheckpoint] = useState('')
const [saveTotalLimit, setSaveTotalLimit] = useState(0)
const [evalEnabled, setEvalEnabled] = useState(false)
const [evalStrategy, setEvalStrategy] = useState('steps')
const [evalSteps, setEvalSteps] = useState(0)
const [evalSplit, setEvalSplit] = useState('')
const [evalDatasetSource, setEvalDatasetSource] = useState('')
const [evalSplitRatio, setEvalSplitRatio] = useState(0.1)
const [rewardFunctions, setRewardFunctions] = useState([]) // [{type, name, code?, params?}]
const [showAddCustomReward, setShowAddCustomReward] = useState(false)
const [customRewardName, setCustomRewardName] = useState('')
@@ -862,6 +789,15 @@ export default function FineTune() {
if (maxSeqLength) extra.max_seq_length = String(maxSeqLength)
if (hfToken.trim()) extra.hf_token = hfToken.trim()
if (saveTotalLimit > 0) extra.save_total_limit = String(saveTotalLimit)
if (evalEnabled) {
extra.eval_strategy = evalStrategy || 'steps'
if (evalSteps > 0) extra.eval_steps = String(evalSteps)
if (evalSplit.trim()) extra.eval_split = evalSplit.trim()
if (evalDatasetSource.trim()) extra.eval_dataset_source = evalDatasetSource.trim()
if (evalSplitRatio > 0 && evalSplitRatio !== 0.1) extra.eval_split_ratio = String(evalSplitRatio)
} else {
extra.eval_strategy = 'no'
}
for (const { key, value } of extraOptions) {
if (key.trim()) extra[key.trim()] = value
}
@@ -960,6 +896,11 @@ export default function FineTune() {
seed,
mixed_precision: mixedPrecision,
max_seq_length: maxSeqLength,
eval_strategy: evalEnabled ? (evalStrategy || 'steps') : 'no',
eval_steps: evalSteps,
eval_split: evalSplit,
eval_dataset_source: evalDatasetSource,
eval_split_ratio: evalSplitRatio,
extra_options: Object.keys(extra).length > 0 ? extra : {},
reward_functions: rewardFunctions.length > 0 ? rewardFunctions : undefined,
}
@@ -1001,6 +942,24 @@ export default function FineTune() {
setMaxSeqLength(Number(config.extra_options.max_seq_length))
}

// Eval options — detect enabled state from strategy
const restoreEval = (strategy, steps, split, src, ratio) => {
if (strategy != null && strategy !== 'no') {
setEvalEnabled(true)
setEvalStrategy(strategy)
} else if (strategy === 'no') {
setEvalEnabled(false)
}
if (steps != null) setEvalSteps(Number(steps))
if (split != null) setEvalSplit(split)
if (src != null) setEvalDatasetSource(src)
if (ratio != null) setEvalSplitRatio(Number(ratio))
}
restoreEval(config.eval_strategy, config.eval_steps, config.eval_split, config.eval_dataset_source, config.eval_split_ratio)
// Also restore from extra_options if present (overrides top-level)
const eo = config.extra_options
if (eo) restoreEval(eo.eval_strategy, eo.eval_steps, eo.eval_split, eo.eval_dataset_source, eo.eval_split_ratio)

// Handle save_total_limit from extra_options
if (config.extra_options?.save_total_limit != null) {
setSaveTotalLimit(Number(config.extra_options.save_total_limit))
@@ -1009,7 +968,7 @@ export default function FineTune() {
// Convert extra_options object to [{key, value}] entries, filtering out handled keys
if (config.extra_options && typeof config.extra_options === 'object') {
const entries = Object.entries(config.extra_options)
.filter(([k]) => !['max_seq_length', 'save_total_limit', 'hf_token'].includes(k))
.filter(([k]) => !['max_seq_length', 'save_total_limit', 'hf_token', 'eval_strategy', 'eval_steps', 'eval_split', 'eval_dataset_source', 'eval_split_ratio'].includes(k))
.map(([key, value]) => ({ key, value: String(value) }))
setExtraOptions(entries)
}
@@ -1440,6 +1399,53 @@ export default function FineTune() {
</div>
</div>

<div style={{ marginBottom: 'var(--spacing-md)' }}>
<label style={{ display: 'flex', alignItems: 'center', gap: 'var(--spacing-sm)', cursor: 'pointer', marginBottom: 'var(--spacing-sm)' }}>
<div
onClick={() => setEvalEnabled(!evalEnabled)}
style={{
width: 36, height: 20, borderRadius: 10, position: 'relative',
background: evalEnabled ? 'var(--color-primary)' : 'var(--color-border)',
transition: 'background 0.2s', cursor: 'pointer', flexShrink: 0,
}}
>
<div style={{
width: 16, height: 16, borderRadius: '50%', background: '#fff',
position: 'absolute', top: 2, left: evalEnabled ? 18 : 2,
transition: 'left 0.2s', boxShadow: '0 1px 2px rgba(0,0,0,0.2)',
}} />
</div>
<span style={{ fontSize: '0.875rem', fontWeight: 600 }}>Enable Evaluation</span>
</label>
{evalEnabled && (
<div style={{ display: 'grid', gridTemplateColumns: 'repeat(auto-fill, minmax(160px, 1fr))', gap: 'var(--spacing-md)', paddingLeft: 'var(--spacing-sm)' }}>
<div>
<label className="form-label">Eval Strategy</label>
<select value={evalStrategy} onChange={e => setEvalStrategy(e.target.value)} className="input">
<option value="steps">Steps</option>
<option value="epoch">Epoch</option>
</select>
</div>
<div>
<label className="form-label">Eval Steps (0 = same as save)</label>
<input type="number" value={evalSteps} onChange={e => setEvalSteps(Number(e.target.value))} className="input" min={0} />
</div>
<div>
<label className="form-label">Eval Split</label>
<input type="text" value={evalSplit} onChange={e => setEvalSplit(e.target.value)} placeholder="e.g. validation" className="input" />
</div>
<div>
<label className="form-label">Eval Dataset Source</label>
<input type="text" value={evalDatasetSource} onChange={e => setEvalDatasetSource(e.target.value)} placeholder="Separate HF dataset" className="input" />
</div>
<div>
<label className="form-label">Auto-split Ratio</label>
<input type="number" value={evalSplitRatio} onChange={e => setEvalSplitRatio(Number(e.target.value))} className="input" min={0.01} max={0.5} step={0.01} />
</div>
</div>
)}
</div>

{resumeFromCheckpoint && (
<div style={{ marginBottom: 'var(--spacing-md)' }}>
<label className="form-label">Resume from Checkpoint</label>
@@ -1474,8 +1480,31 @@ export default function FineTune() {
</form>
)}

{/* Jobs list */}
<div style={{ display: 'grid', gridTemplateColumns: selectedJob ? '1fr 2fr' : '1fr', gap: 'var(--spacing-md)' }}>
{/* Either show job detail OR job list — not side-by-side */}
{selectedJob ? (
<div>
<button className="btn" onClick={() => setSelectedJob(null)} style={{ marginBottom: 'var(--spacing-md)' }}>
<i className="fas fa-arrow-left" style={{ marginRight: 'var(--spacing-xs)' }} />
Back to Jobs
</button>
<div className="card" style={{ marginBottom: 'var(--spacing-md)', padding: 'var(--spacing-md)' }}>
<div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
<div>
<h3 style={{ margin: 0 }}>{selectedJob.model}</h3>
<div style={{ fontSize: '0.8125rem', color: 'var(--color-text-muted)', marginTop: 'var(--spacing-xs)' }}>
{selectedJob.backend} / {selectedJob.training_method || 'sft'} | ID: {selectedJob.id?.slice(0, 8)}... | {selectedJob.created_at}
</div>
</div>
<span className={`badge ${statusBadgeClass[selectedJob.status] || ''}`}>
{selectedJob.status}
</span>
</div>
</div>
<TrainingMonitor job={selectedJob} onStop={handleStop} />
<CheckpointsPanel job={selectedJob} onResume={handleResumeFromCheckpoint} onExportCheckpoint={handleExportCheckpoint} />
<ExportPanel job={selectedJob} prefilledCheckpoint={exportCheckpoint} />
</div>
) : (
<div>
<h3 style={{ margin: '0 0 var(--spacing-sm) 0' }}>Jobs</h3>
{jobs.length === 0 ? (
@@ -1486,19 +1515,11 @@ export default function FineTune() {
</div>
) : (
jobs.map(job => (
<JobCard key={job.id} job={job} isSelected={selectedJob?.id === job.id} onSelect={setSelectedJob} onUseConfig={handleUseConfig} onDelete={handleDelete} />
<JobCard key={job.id} job={job} isSelected={false} onSelect={setSelectedJob} onUseConfig={handleUseConfig} onDelete={handleDelete} />
))
)}
</div>

{selectedJob && (
<div>
<TrainingMonitor job={selectedJob} onStop={handleStop} />
<CheckpointsPanel job={selectedJob} onResume={handleResumeFromCheckpoint} onExportCheckpoint={handleExportCheckpoint} />
<ExportPanel job={selectedJob} prefilledCheckpoint={exportCheckpoint} />
</div>
)}
</div>
)}
</div>
)
}
48
core/http/react-ui/src/pages/Studio.jsx
Normal file
@@ -0,0 +1,48 @@
import { useSearchParams } from 'react-router-dom'
import ImageGen from './ImageGen'
import VideoGen from './VideoGen'
import TTS from './TTS'
import Sound from './Sound'

const TABS = [
{ key: 'images', label: 'Images', icon: 'fas fa-image' },
{ key: 'video', label: 'Video', icon: 'fas fa-video' },
{ key: 'tts', label: 'TTS', icon: 'fas fa-headphones' },
{ key: 'sound', label: 'Sound', icon: 'fas fa-music' },
]

const TAB_COMPONENTS = {
images: ImageGen,
video: VideoGen,
tts: TTS,
sound: Sound,
}

export default function Studio() {
const [searchParams, setSearchParams] = useSearchParams()
const activeTab = searchParams.get('tab') || 'images'

const setTab = (key) => {
setSearchParams({ tab: key }, { replace: true })
}

const ActiveComponent = TAB_COMPONENTS[activeTab] || ImageGen

return (
<div>
<div className="studio-tabs">
{TABS.map(tab => (
<button
key={tab.key}
className={`studio-tab${activeTab === tab.key ? ' studio-tab-active' : ''}`}
onClick={() => setTab(tab.key)}
>
<i className={tab.icon} />
<span>{tab.label}</span>
</button>
))}
</div>
<ActiveComponent />
</div>
)
}
@@ -31,6 +31,7 @@ import BackendLogs from './pages/BackendLogs'
import Explorer from './pages/Explorer'
import Login from './pages/Login'
import FineTune from './pages/FineTune'
import Studio from './pages/Studio'
import NotFound from './pages/NotFound'
import Usage from './pages/Usage'
import Users from './pages/Users'
@@ -44,6 +45,7 @@ function BrowseRedirect() {
return <Navigate to={`/app/${splat || ''}`} replace />
}


function Admin({ children }) {
return <RequireAdmin>{children}</RequireAdmin>
}
@@ -65,6 +67,7 @@ const appChildren = [
{ path: 'tts/:model', element: <TTS /> },
{ path: 'sound', element: <Sound /> },
{ path: 'sound/:model', element: <Sound /> },
{ path: 'studio', element: <Studio /> },
{ path: 'talk', element: <Talk /> },
{ path: 'usage', element: <Usage /> },
{ path: 'account', element: <Account /> },

@@ -55,6 +55,7 @@ type FineTuneJob struct {
UserID string `json:"user_id,omitempty"`
Model string `json:"model"`
Backend string `json:"backend"`
ModelID string `json:"model_id,omitempty"` // backend model loader ID
TrainingType string `json:"training_type"`
TrainingMethod string `json:"training_method"`
Status string `json:"status"` // queued, loading_model, loading_dataset, training, saving, completed, failed, stopped

@@ -1042,7 +1042,7 @@ func (s *AgentPoolService) CreateCollection(name string) error {
return s.collectionsBackend.CreateCollection(name)
}

func (s *AgentPoolService) UploadToCollection(collection, filename string, fileBody io.Reader) error {
func (s *AgentPoolService) UploadToCollection(collection, filename string, fileBody io.Reader) (string, error) {
return s.collectionsBackend.Upload(collection, filename, fileBody)
}

@@ -1554,10 +1554,10 @@ func (s *AgentPoolService) CreateCollectionForUser(userID, name string) error {
}

// UploadToCollectionForUser uploads to a collection for a specific user.
func (s *AgentPoolService) UploadToCollectionForUser(userID, collection, filename string, fileBody io.Reader) error {
func (s *AgentPoolService) UploadToCollectionForUser(userID, collection, filename string, fileBody io.Reader) (string, error) {
backend, err := s.CollectionsBackendForUser(userID)
if err != nil {
return err
return "", err
}
return backend.Upload(collection, filename, fileBody)
}

@@ -7,6 +7,7 @@ import (
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"sync"
"time"
@@ -179,11 +180,12 @@ func (s *FineTuneService) StartJob(ctx context.Context, userID string, req schem
grpcReq.ExtraOptions["reward_funcs"] = string(rfJSON)
}

// Load the fine-tuning backend
// Load the fine-tuning backend (per-job model ID so multiple jobs can run concurrently)
modelID := backendName + "-finetune-" + jobID
backendModel, err := s.modelLoader.Load(
model.WithBackendString(backendName),
model.WithModel(backendName),
model.WithModelID(backendName+"-finetune"),
model.WithModelID(modelID),
)
if err != nil {
return nil, fmt.Errorf("failed to load backend %s: %w", backendName, err)
@@ -204,6 +206,7 @@ func (s *FineTuneService) StartJob(ctx context.Context, userID string, req schem
UserID: userID,
Model: req.Model,
Backend: backendName,
ModelID: modelID,
TrainingType: req.TrainingType,
TrainingMethod: req.TrainingMethod,
Status: "queued",
@@ -237,7 +240,7 @@ func (s *FineTuneService) GetJob(userID, jobID string) (*schema.FineTuneJob, err
return job, nil
}

// ListJobs returns all jobs for a user.
// ListJobs returns all jobs for a user, sorted by creation time (newest first).
func (s *FineTuneService) ListJobs(userID string) []*schema.FineTuneJob {
s.mu.Lock()
defer s.mu.Unlock()
@@ -248,6 +251,11 @@ func (s *FineTuneService) ListJobs(userID string) []*schema.FineTuneJob {
result = append(result, job)
}
}

sort.Slice(result, func(i, j int) bool {
return result[i].CreatedAt > result[j].CreatedAt
})

return result
}

@@ -265,12 +273,12 @@ func (s *FineTuneService) StopJob(ctx context.Context, userID, jobID string, sav
}
s.mu.Unlock()

// Kill the backend process directly — gRPC stop deadlocks on single-threaded Python backends
modelID := job.Backend + "-finetune"
err := s.modelLoader.ShutdownModel(modelID)
if err != nil {
return fmt.Errorf("failed to stop backend: %w", err)
// Kill the backend process directly
stopModelID := job.ModelID
if stopModelID == "" {
stopModelID = job.Backend + "-finetune"
}
s.modelLoader.ShutdownModel(stopModelID)

s.mu.Lock()
job.Status = "stopped"
@@ -355,10 +363,14 @@ func (s *FineTuneService) StreamProgress(ctx context.Context, userID, jobID stri
}
s.mu.Unlock()

streamModelID := job.ModelID
if streamModelID == "" {
streamModelID = job.Backend + "-finetune"
}
backendModel, err := s.modelLoader.Load(
model.WithBackendString(job.Backend),
model.WithModel(job.Backend),
model.WithModelID(job.Backend+"-finetune"),
model.WithModelID(streamModelID),
)
if err != nil {
return fmt.Errorf("failed to load backend: %w", err)
@@ -424,10 +436,14 @@ func (s *FineTuneService) ListCheckpoints(ctx context.Context, userID, jobID str
}
s.mu.Unlock()

ckptModelID := job.ModelID
if ckptModelID == "" {
ckptModelID = job.Backend + "-finetune"
}
backendModel, err := s.modelLoader.Load(
model.WithBackendString(job.Backend),
model.WithModel(job.Backend),
model.WithModelID(job.Backend+"-finetune"),
model.WithModelID(ckptModelID),
)
if err != nil {
return nil, fmt.Errorf("failed to load backend: %w", err)
@@ -514,10 +530,14 @@ func (s *FineTuneService) ExportModel(ctx context.Context, userID, jobID string,
go func() {
s.setExportMessage(job, "Loading export backend...")

exportModelID := job.ModelID
if exportModelID == "" {
exportModelID = job.Backend + "-finetune"
}
backendModel, err := s.modelLoader.Load(
model.WithBackendString(job.Backend),
model.WithModel(job.Backend),
model.WithModelID(job.Backend+"-finetune"),
model.WithModelID(exportModelID),
)
if err != nil {
s.setExportFailed(job, fmt.Sprintf("failed to load backend: %v", err))

12
go.mod
@@ -67,10 +67,18 @@ require (
)

require (
github.com/chasefleming/elem-go v0.30.0 // indirect
github.com/dave-gray101/v2keyauth v0.0.0-20240624150259-c45d584d25e2 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/gofiber/template v1.8.3 // indirect
github.com/gofiber/template/html/v2 v2.1.3 // indirect
github.com/gofiber/utils v1.1.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/mattn/go-sqlite3 v1.14.22 // indirect
github.com/spf13/cobra v1.10.2 // indirect
github.com/spf13/pflag v1.0.9 // indirect
github.com/stretchr/testify v1.11.1 // indirect
github.com/tmc/langchaingo v0.1.14 // indirect
)
@@ -136,8 +144,8 @@ require (
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/labstack/gommon v0.4.2 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/mudler/LocalAGI v0.0.0-20260319174513-43c65ec7e88a
github.com/mudler/localrecall v0.5.9-0.20260319170742-933f68603f62 // indirect
github.com/mudler/LocalAGI v0.0.0-20260321004723-b485b77037c4
github.com/mudler/localrecall v0.5.9-0.20260321005011-810084e9369b // indirect
github.com/mudler/skillserver v0.0.5
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/oxffaa/gopher-parse-sitemap v0.0.0-20191021113419-005d2eb1def4 // indirect

24
go.sum
```diff
@@ -148,6 +148,8 @@ github.com/charmbracelet/x/exp/slice v0.0.0-20250327172914-2fdc97757edf h1:rLG0Y
 github.com/charmbracelet/x/exp/slice v0.0.0-20250327172914-2fdc97757edf/go.mod h1:B3UgsnsBZS/eX42BlaNiJkD1pPOUa+oF1IYC6Yd2CEU=
 github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
 github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
+github.com/chasefleming/elem-go v0.30.0 h1:BlhV1ekv1RbFiM8XZUQeln1Ikb4D+bu2eDO4agREvok=
+github.com/chasefleming/elem-go v0.30.0/go.mod h1:hz73qILBIKnTgOujnSMtEj20/epI+f6vg71RUilJAA4=
 github.com/chengxilo/virtualterm v1.0.4 h1:Z6IpERbRVlfB8WkOmtbHiDbBANU7cimRIof7mk9/PwM=
 github.com/chengxilo/virtualterm v1.0.4/go.mod h1:DyxxBZz/x1iqJjFxTFcr6/x+jSpqN0iwWCOK1q10rlY=
 github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
```
```diff
@@ -177,6 +179,7 @@ github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSV
 github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
 github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
 github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
 github.com/creachadair/mds v0.21.3 h1:RRgEAPIb52cU0q7UxGyN+13QlCVTZIL4slRr0cYYQfA=
 github.com/creachadair/mds v0.21.3/go.mod h1:1ltMWZd9yXhaHEoZwBialMaviWVUpRPvMwVP7saFAzM=
 github.com/creachadair/otp v0.5.0 h1:q3Th7CXm2zlmCdBjw5tEPFOj4oWJMnVL5HXlq0sNKS0=
```
```diff
@@ -185,6 +188,8 @@ github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
 github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
 github.com/cyphar/filepath-securejoin v0.5.1 h1:eYgfMq5yryL4fbWfkLpFFy2ukSELzaJOTaUTuh+oF48=
 github.com/cyphar/filepath-securejoin v0.5.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
+github.com/dave-gray101/v2keyauth v0.0.0-20240624150259-c45d584d25e2 h1:flLYmnQFZNo04x2NPehMbf30m7Pli57xwZ0NFqR/hb0=
+github.com/dave-gray101/v2keyauth v0.0.0-20240624150259-c45d584d25e2/go.mod h1:NtWqRzAp/1tw+twkW8uuBenEVVYndEAZACWU3F3xdoQ=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
```
```diff
@@ -341,6 +346,12 @@ github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
 github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/gofiber/fiber/v2 v2.52.9 h1:YjKl5DOiyP3j0mO61u3NTmK7or8GzzWzCFzkboyP5cw=
 github.com/gofiber/fiber/v2 v2.52.9/go.mod h1:YEcBbO/FB+5M1IZNBP9FO3J9281zgPAreiI1oqg8nDw=
+github.com/gofiber/template v1.8.3 h1:hzHdvMwMo/T2kouz2pPCA0zGiLCeMnoGsQZBTSYgZxc=
+github.com/gofiber/template v1.8.3/go.mod h1:bs/2n0pSNPOkRa5VJ8zTIvedcI/lEYxzV3+YPXdBvq8=
+github.com/gofiber/template/html/v2 v2.1.3 h1:n1LYBtmr9C0V/k/3qBblXyMxV5B0o/gpb6dFLp8ea+o=
+github.com/gofiber/template/html/v2 v2.1.3/go.mod h1:U5Fxgc5KpyujU9OqKzy6Kn6Qup6Tm7zdsISR+VpnHRE=
+github.com/gofiber/utils v1.1.0 h1:vdEBpn7AzIUJRhe+CiTOJdUcTg4Q9RK+pEa0KPbLdrM=
+github.com/gofiber/utils v1.1.0/go.mod h1:poZpsnhBykfnY1Mc0KeEa6mSHrS3dV0+oBWyeQmb2e0=
 github.com/gofrs/flock v0.13.0 h1:95JolYOvGMqeH31+FC7D2+uULf6mG61mEZ/A8dRYMzw=
 github.com/gofrs/flock v0.13.0/go.mod h1:jxeyy9R1auM5S6JYDBhDt+E2TCo7DkratH4Pgi8P+Z0=
 github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
```
```diff
@@ -445,6 +456,8 @@ github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI
 github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
 github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
 github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/ipfs/boxo v0.30.0 h1:7afsoxPGGqfoH7Dum/wOTGUB9M5fb8HyKPMlLfBvIEQ=
 github.com/ipfs/boxo v0.30.0/go.mod h1:BPqgGGyHB9rZZcPSzah2Dc9C+5Or3U1aQe7EH1H7370=
 github.com/ipfs/go-block-format v0.2.0 h1:ZqrkxBA2ICbDRbK8KJs/u0O3dlp6gmAuuXUJNiW1Ycs=
```
```diff
@@ -666,6 +679,8 @@ github.com/mschoch/smat v0.2.0 h1:8imxQsjDm8yFEAVBe7azKmKSgzSkZXDuKkSq9374khM=
 github.com/mschoch/smat v0.2.0/go.mod h1:kc9mz7DoBKqDyiRL7VZN8KvXQMWeTaVnttLRXOlotKw=
 github.com/mudler/LocalAGI v0.0.0-20260319174513-43c65ec7e88a h1:combrnE/eLPnUhqrYmtFmqEfR6x9xS+HoTFdnMozvik=
 github.com/mudler/LocalAGI v0.0.0-20260319174513-43c65ec7e88a/go.mod h1:AbBcAE9JqkexN4aG8rYQn5LzmzffWqcMvQ+Nlvin3WI=
+github.com/mudler/LocalAGI v0.0.0-20260321004723-b485b77037c4 h1:zWrAdAI/gwAPwXQAJuFLF8vvJdsxpxjKiBiC0EzhLOo=
+github.com/mudler/LocalAGI v0.0.0-20260321004723-b485b77037c4/go.mod h1:g+6CD5tP4a+rRW20CrMpE/JDazq5N4n4YDxIT7tT1mY=
 github.com/mudler/cogito v0.9.5-0.20260315222927-63abdec7189b h1:A74T2Lauvg61KodYqsjTYDY05kPLcW+efVZjd23dghU=
 github.com/mudler/cogito v0.9.5-0.20260315222927-63abdec7189b/go.mod h1:6sfja3lcu2nWRzEc0wwqGNu/eCG3EWgij+8s7xyUeQ4=
 github.com/mudler/edgevpn v0.31.1 h1:7qegiDWd0kAg6ljhNHxqvp8hbo/6BbzSdbb7/2WZfiY=
```
```diff
@@ -676,6 +691,10 @@ github.com/mudler/go-processmanager v0.1.0 h1:fcSKgF9U/a1Z7KofAFeZnke5YseadCI5Gq
 github.com/mudler/go-processmanager v0.1.0/go.mod h1:h6kmHUZeafr+k5hRYpGLMzJFH4hItHffgpRo2QIkP+o=
 github.com/mudler/localrecall v0.5.9-0.20260319170742-933f68603f62 h1:KVTEukvLlQXKZx1C1ZLru+ahaiECLF+7v2caK8vauJ0=
 github.com/mudler/localrecall v0.5.9-0.20260319170742-933f68603f62/go.mod h1:/d2bG9H8G/HzsnXTTQl2bOD+ui74XwpeiSDJ+2gdkGc=
+github.com/mudler/localrecall v0.5.9-0.20260321003356-422f3b1fff45 h1:+zTrbYk70wHrtvpsO2k7gMPvHYnWYCnXNxAtMex+7yg=
+github.com/mudler/localrecall v0.5.9-0.20260321003356-422f3b1fff45/go.mod h1:/d2bG9H8G/HzsnXTTQl2bOD+ui74XwpeiSDJ+2gdkGc=
+github.com/mudler/localrecall v0.5.9-0.20260321005011-810084e9369b h1:XeAnOEOOSKMfS5XNGpRTltQgjKCinho0V4uAhrgxN7Q=
+github.com/mudler/localrecall v0.5.9-0.20260321005011-810084e9369b/go.mod h1:xuPtgL9zUyiQLmspYzO3kaboYrGbWmwi8BQPt1aCAcs=
 github.com/mudler/memory v0.0.0-20251216220809-d1256471a6c2 h1:+WHsL/j6EWOMUiMVIOJNKOwSKiQt/qDPc9fePCf87fA=
 github.com/mudler/memory v0.0.0-20251216220809-d1256471a6c2/go.mod h1:EA8Ashhd56o32qN7ouPKFSRUs/Z+LrRCF4v6R2Oarm8=
 github.com/mudler/skillserver v0.0.5 h1:t6HPpeSX8kEP7B8F5GXoQUam5VEYNmJuG6oy2/vdTu8=
```
```diff
@@ -855,6 +874,7 @@ github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR
 github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
 github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
 github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
 github.com/ruudk/golang-pdf417 v0.0.0-20181029194003-1af4ab5afa58/go.mod h1:6lfFZQK844Gfx8o5WFuvpxWRwnSoipWe/p622j1v06w=
 github.com/rymdport/portal v0.4.2 h1:7jKRSemwlTyVHHrTGgQg7gmNPJs88xkbKcIL3NlcmSU=
 github.com/rymdport/portal v0.4.2/go.mod h1:kFF4jslnJ8pD5uCi17brj/ODlfIidOxlgUDTO5ncnC4=
```
```diff
@@ -928,6 +948,10 @@ github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0b
 github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
 github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
 github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
+github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
+github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
+github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
+github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
 github.com/srwiley/oksvg v0.0.0-20221011165216-be6e8873101c h1:km8GpoQut05eY3GiYWEedbTT0qnSxrCjsVbb7yKY1KE=
 github.com/srwiley/oksvg v0.0.0-20221011165216-be6e8873101c/go.mod h1:cNQ3dwVJtS5Hmnjxy6AgTPd0Inb3pW05ftPSX7NZO7Q=
 github.com/srwiley/rasterx v0.0.0-20220730225603-2ab79fcdd4ef h1:Ch6Q+AZUxDBCVqdkI8FSpFyZDtCVBc2VmejdNrm5rRQ=
```