Compare commits

...

39 Commits

Author SHA1 Message Date
Deluan
28ebd754a1 refactor(plugins): simplify goroutine management in task queue service
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 20:47:44 -05:00
Deluan
7a4147c489 feat(plugins): increase maxConcurrency for task queue and handle budget exhaustion
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 20:40:37 -05:00
Deluan
c40bf3540a refactor(plugins): streamline task queue configuration and error handling
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 19:24:48 -05:00
Deluan
c492ae19f3 fix(plugins): use context-aware database execution in TaskQueue host service
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 18:49:14 -05:00
Deluan
f9beb3c2d7 refactor(plugins): remove capability check for TaskWorker in TaskQueue host service
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 18:33:29 -05:00
Deluan
2ea20f2511 fix(plugins): harden TaskQueue host service with validation and safety improvements
Add input validation (queue name length, payload size limits), extract
status string constants to eliminate raw SQL literals, make CreateQueue
idempotent via upsert for crash recovery, fix RetentionMs default check
for negative values, cap exponential backoff at 1 hour to prevent
overflow, and replace manual mutex-based delay enforcement with
rate.Limiter from golang.org/x/time/rate for correct concurrent worker
serialization.
2026-02-26 18:00:08 -05:00
Deluan
6e8b826022 docs: document TaskQueue module for persistent task queues
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 16:31:20 -05:00
Deluan
57b39685bc feat(plugins): add integration tests for TaskQueue host service 2026-02-26 16:31:20 -05:00
Deluan
525aa0e861 feat(plugins): add test-taskqueue plugin for integration testing 2026-02-26 16:31:20 -05:00
Deluan
11461d5f2c feat(plugins): register TaskQueue host service in manager 2026-02-26 16:31:20 -05:00
Deluan
1cfc2d9741 feat(plugins): require TaskWorker capability for taskqueue permission 2026-02-26 16:31:20 -05:00
Deluan
f5dca3a2db feat(plugins): implement TaskQueue service with SQLite persistence and workers
Per-plugin SQLite database with queues and tasks tables. Worker goroutines
dequeue tasks and invoke nd_task_execute callback. Exponential backoff
retries, rate limiting via delayMs, automatic cleanup of terminal tasks.
2026-02-26 16:30:58 -05:00
Deluan
7180952103 feat(plugins): add taskqueue permission to manifest schema
Add TaskQueuePermission with maxConcurrency option.
2026-02-26 16:30:58 -05:00
Deluan
8238ed6a2c feat(plugins): define TaskWorker capability for task execution callbacks 2026-02-26 16:30:58 -05:00
Deluan
516e229b27 feat(plugins): define TaskQueue host service interface
Add the TaskQueueService interface with CreateQueue, Enqueue,
GetTaskStatus, and CancelTask methods plus QueueConfig struct.
2026-02-26 16:30:58 -05:00
Deluan
582d1b3cd9 refactor(plugins): validate scheduler capability at load time
Move scheduler capability check from runtime (when callback fires) to
load-time validation in ValidateWithCapabilities. This ensures plugins
declaring the scheduler permission must export the nd_scheduler_callback
function, failing fast with a clear error instead of silently skipping
callbacks at runtime.
2026-02-26 16:30:50 -05:00
Deluan
cdd3432788 refactor(http): rename HTTP client files and update struct names for consistency
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 16:19:37 -05:00
Deluan Quintão
5bc2bbb70e feat(subsonic): append album version to names in Subsonic API (#5111)
* feat(subsonic): append album version to album names in Subsonic API responses

Add AppendAlbumVersion config option (default: true) that appends the
album version tag to album names in Subsonic API responses, similar to
how AppendSubtitle works for track titles. This affects album names in
childFromAlbum and buildAlbumID3 responses.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(subsonic): append album version to media file album names in Subsonic API

Add FullAlbumName() to MediaFile that appends the album version tag,
mirroring the Album.FullName() behavior. Use it in childFromMediaFile
and fakePath to ensure media file responses also show the album version.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(subsonic): use len() check for album version tag to prevent panic on empty slice

Use len(tags) > 0 instead of != nil to safely guard against empty
slices when accessing the first element of the album version tag.

Signed-off-by: Deluan <deluan@navidrome.org>
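The nil-vs-empty distinction behind this fix is easy to demonstrate; firstTag below is a made-up helper, not Navidrome's code:

```go
package main

import "fmt"

// A non-nil but empty slice passes a `tags != nil` check, so indexing
// tags[0] would panic. len(tags) > 0 guards both nil and empty slices.
func firstTag(tags []string) (string, bool) {
	if len(tags) > 0 { // safe: covers nil AND empty
		return tags[0], true
	}
	return "", false
}

func main() {
	empty := []string{}       // non-nil, length 0
	fmt.Println(empty != nil) // true — a nil check would not protect tags[0]
	v, ok := firstTag(empty)
	fmt.Printf("%q %v\n", v, ok)
}
```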

* fix(subsonic): use FullName in buildAlbumDirectory and deduplicate FullName calls

Apply album.FullName() in buildAlbumDirectory (getMusicDirectory) so
album names are consistent across all Subsonic endpoints. Also compute
al.FullName() once in childFromAlbum to avoid redundant calls.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: use len() check in MediaFile.FullTitle() to prevent panic on empty slice

Apply the same safety improvement as FullAlbumName() and Album.FullName()
for consistency.

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add tests for Album.FullName, MediaFile.FullTitle, and MediaFile.FullAlbumName

Cover all cases: config enabled/disabled, tag present, tag absent, and
empty tag slice.

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-26 10:50:12 -05:00
Deluan
14343d91b0 chore(deps): update goose to 3.27.0
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-24 21:44:04 -05:00
Deluan
fc36f1daa6 chore(deps): update go-taglib dependency to latest version (mka fix)
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-24 21:19:11 -05:00
Deluan Quintão
652c27690b feat(plugins): add HTTP host service (#5095)
* feat(httpclient): implement HttpClient service for outbound HTTP requests in plugins

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(httpclient): enhance SSRF protection by validating host requests against private IPs

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(httpclient): support DELETE requests with body in HttpClient service

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(httpclient): refactor HTTP client initialization and enhance redirect handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(http): standardize naming conventions for HTTP types and methods

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor example plugin to use host.HTTPSend for improved error management

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): fix IPv6 SSRF bypass and wildcard host matching

Fix two bugs in the plugin HTTP/WebSocket host validation:

1. extractHostname now strips IPv6 brackets when no port is present
(e.g. "[::1]" → "::1"). Previously, net.SplitHostPort failed for
bracketed IPv6 without a port, leaving brackets intact. This caused
net.ParseIP to return nil, bypassing the private/loopback SSRF guard.

2. matchHostPattern now treats "*" as an allow-all pattern. Previously,
a bare "*" only matched via exact equality, so plugins declaring
requiredHosts: ["*"] (like webhook-rs) had all requests rejected.
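The two fixes can be sketched as follows (simplified stand-ins for the extractHostname and matchHostPattern functions the commit names, not the actual implementations):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// extractHostname strips an optional port and IPv6 brackets so the
// result can be fed to net.ParseIP for the private/loopback SSRF guard.
func extractHostname(hostport string) string {
	if host, _, err := net.SplitHostPort(hostport); err == nil {
		return host
	}
	// No port: SplitHostPort fails on "[::1]", so strip brackets manually.
	return strings.TrimSuffix(strings.TrimPrefix(hostport, "["), "]")
}

// matchHostPattern treats a bare "*" as allow-all; everything else is
// matched exactly in this simplified sketch.
func matchHostPattern(pattern, host string) bool {
	if pattern == "*" {
		return true
	}
	return pattern == host
}

func main() {
	fmt.Println(extractHostname("[::1]")) // "::1" — now parseable by net.ParseIP
	fmt.Println(net.ParseIP(extractHostname("[::1]")) != nil)
	fmt.Println(matchHostPattern("*", "webhook.example.com"))
}
```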

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-24 14:28:36 -05:00
Deluan Quintão
2bb13e5ff1 feat(server): add ExtAuth logout URL configuration (#5074)
* feat(server): add ExtAuth logout URL configuration (#4467)

When external authentication (reverse proxy auth) is active, the Logout
button is hidden because authentication is managed externally. Many
external auth services (Authelia, Authentik, Keycloak) provide a logout
URL that can terminate the session.

Add `ExtAuth.LogoutURL` config option that, when set, shows the Logout
button in the UI and redirects the user to the external auth provider's
logout endpoint instead of the Navidrome login page.

* feat(server): add validation for ExtAuth logout URL configuration

* feat(server): refactor ExtAuth logout URL validation to a reusable function

* fix(configuration): rename URL validation functions for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(configuration): rename URL validation functions for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-23 20:28:38 -05:00
dependabot[bot]
d1c5e6a2f2 chore(deps): bump goreleaser/goreleaser-action in /.github/workflows (#5089)
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 6 to 7.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v6...v7)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-23 19:06:45 -05:00
Deluan
0c3cc86535 fix(subsonic): restore public attribute for playlists in XML responses
This was causing issues with DSub and DSub2000

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-23 18:17:44 -05:00
Deluan Quintão
b59eb32961 feat(subsonic): sort search3 results by relevance (#5086)
* fix(subsonic): optimize search3 for high-cardinality FTS queries

Use a two-phase query strategy for FTS5 searches to avoid the
performance penalty of expensive LEFT JOINs (annotation, bookmark,
library) on high-cardinality results like "the".

Phase 1 runs a lightweight query (main table + FTS index only) to get
sorted, paginated rowids. Phase 2 hydrates only those few rowids with
the full JOINs, making them nearly free.

For queries with complex ORDER BY expressions that reference joined
tables (e.g. artist search sorted by play count), the optimization is
skipped and the original single-query approach is used.
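The two-phase idea can be modeled in memory (the real implementation is SQL; hit, phase1, and phase2 are made-up names for illustration):

```go
package main

import "fmt"

// Phase 1 operates on cheap rows only (id + rank from the FTS index).
type hit struct {
	id   int
	rank float64 // assumed already sorted by relevance in this sketch
}

// phase1 sorts/paginates the lightweight matches and returns only IDs.
func phase1(hits []hit, offset, limit int) []int {
	ids := []int{}
	for i := offset; i < len(hits) && len(ids) < limit; i++ {
		ids = append(ids, hits[i].id)
	}
	return ids
}

// phase2 stands in for the expensive LEFT JOINs, now executed for a
// single page of rows instead of every match of a query like "the".
func phase2(ids []int, joined map[int]string) []string {
	out := []string{}
	for _, id := range ids {
		out = append(out, joined[id])
	}
	return out
}

func main() {
	hits := []hit{{id: 3, rank: 0.9}, {id: 1, rank: 0.7}, {id: 2, rank: 0.1}}
	joined := map[int]string{1: "song-1", 2: "song-2", 3: "song-3"}
	fmt.Println(phase2(phase1(hits, 0, 2), joined)) // first page, hydrated
}
```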

* fix(search): update order by clauses to include 'rank' for FTS queries

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): reintroduce 'rank' in Phase 2 ORDER BY for FTS queries

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): remove 'rank' from ORDER BY in non-FTS queries and adjust two-phase query handling

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): update FTS ranking to use bm25 weights and simplify ORDER BY qualification

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): refine FTS query handling and improve comments for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): refactor full-text search handling to streamline query strategy selection and improve LIKE fallback logic.

Increase e2e coverage for search3

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance FTS column definitions and relevance weights

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): refactor Search method signatures to remove offset and size parameters, streamline query handling

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): allow single-character queries in search strategies and update related tests

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): make FTS Phase 1 treat Max=0 as no limit, reorganize tests

FTS Phase 1 unconditionally called Limit(uint64(options.Max)), which
produced LIMIT 0 when Max was zero. This diverged from applyOptions
where Max=0 means no limit. Now Phase 1 mirrors applyOptions: only add
LIMIT/OFFSET when the value is positive. Also moved legacy backend
integration tests from sql_search_fts_test.go to sql_search_like_test.go
and added regression tests for the Max=0 behavior on both backends.

* refactor: simplify callSearch function by removing variadic options and directly using QueryOptions

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): implement ftsQueryDegraded function to detect significant content loss in FTS queries

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-23 08:51:54 -05:00
Valeri Sokolov
23bf256a66 feat: make album and artist annotations available to smart playlists (#4927)
* feat(criteria): make album ratings available to smart playlist queries

Expose an "albumrating" field mapping to album annotations.

Signed-off-by: Valeri Sokolov <ulfurinn@ulfurinn.net>

* fix(criteria): use query parameters

Signed-off-by: Valeri Sokolov <ulfurinn@ulfurinn.net>

* feat: add album and artist annotation fields to smart playlists

Extend smart playlists to filter songs by album or artist annotations
(rating, loved, play count, last played, date loved, date rated). This
adds 12 new fields (6 album, 6 artist) with conditional JOINs that are
only added when the criteria or sort references them, avoiding
unnecessary query overhead. The album table JOIN is also removed since
media_file.album_id can be used directly.

---------

Signed-off-by: Valeri Sokolov <ulfurinn@ulfurinn.net>
Co-authored-by: Deluan <deluan@navidrome.org>
2026-02-22 22:05:59 -05:00
Deluan
d02bf9a53d test(e2e): add MusicBrainz ID tests for song and album searches
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-22 00:32:14 -05:00
Deluan
ec75808153 fix(subsonic): handle empty quoted phrases in FTS5 query and search expression
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 22:00:00 -05:00
Deluan Quintão
7ad2907719 refactor: move playlist business logic from repositories to service layer (#5027)
* refactor: move playlist business logic from repositories to core.Playlists service

Move authorization, permission checks, and orchestration logic from
playlist repositories to the core.Playlists service, following the
existing pattern used by core.Share and core.Library.

Changes:
- Expand core.Playlists interface with read, mutation, track management,
  and REST adapter methods
- Add playlistRepositoryWrapper for REST Save/Update/Delete with
  permission checks (follows Share/Library pattern)
- Simplify persistence/playlist_repository.go: remove isWritable(),
  auth checks from Delete()/Put()/updatePlaylist()
- Simplify persistence/playlist_track_repository.go: remove
  isTracksEditable() and permission checks from Add/Delete/Reorder
- Update Subsonic API handlers to route through service
- Update Native API handlers to accept core.Playlists instead of
  model.DataStore

* test: add coverage for playlist service methods and REST wrapper

Add 30 new tests covering the service methods added during the playlist
refactoring:

- Delete: owner, admin, denied, not found
- Create: new playlist, replace tracks, admin bypass, denied, not found
- AddTracks: owner, admin, denied, smart playlist, not found
- RemoveTracks: owner, smart playlist denied, non-owner denied
- ReorderTrack: owner, smart playlist denied
- NewRepository wrapper: Save (owner assignment, ID clearing),
  Update (owner, admin, denied, ownership change, not found),
  Delete (delegation with permission checks)

Expand mockedPlaylistRepo with Get, Delete, Tracks, GetWithTracks, and
rest.Persistable methods. Add mockedPlaylistTrackRepo for track
operation verification.

* fix: add authorization check to playlist Update method

Added ownership verification to the Subsonic Update endpoint in the
playlist service layer. The authorization check was present in the old
repository code but was not carried over during the refactoring to the
service layer, allowing any authenticated user to modify playlists they
don't own via the Subsonic API. Also added corresponding tests for the
Update method's permission logic.

* refactor: improve playlist permission checks and error handling, add e2e tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename core.Playlists to playlists package and update references

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename playlists_internal_test.go to parse_m3u_test.go and update tests; add new parse_nsp.go and rest_adapter.go files

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: block track mutations on smart playlists in Create and Update

Create now rejects replacing tracks on smart playlists (pre-existing
gap). Update now uses checkTracksEditable instead of checkWritable
when track changes are requested, restoring the protection that was
removed from the repository layer during the refactoring. Metadata-only
updates on smart playlists remain allowed.

* test: add smart playlist protection tests to ensure readonly behavior and mutation restrictions

* refactor: optimize track removal and renumbering in playlists

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement track reordering in playlists with SQL updates

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: wrap track deletion and reordering in transactions for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: remove unused getTracks method from playlistTrackRepository

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: optimize playlist track renumbering with CTE-based UPDATE

Replace the DELETE + re-INSERT renumbering strategy with a two-step
UPDATE approach using a materialized CTE and ROW_NUMBER() window
function. The previous approach (SELECT all IDs, DELETE all tracks,
re-INSERT in chunks of 200) required 13 SQL operations for a 2000-track
playlist. The new approach uses just 2 UPDATEs: first negating all IDs
to clear the positive space, then assigning sequential positions via
UPDATE...FROM with a CTE. This avoids the UNIQUE constraint violations
that affected the original correlated subquery while reducing per-delete
request time from ~110ms to ~12ms on a 2000-track playlist.

Signed-off-by: Deluan <deluan@navidrome.org>
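A toy model of the two-step renumbering (the real version is two SQL UPDATEs with a materialized CTE and ROW_NUMBER(); this in-memory sketch only shows why negating first avoids collisions):

```go
package main

import "fmt"

// renumber mirrors the two UPDATEs: negating every ID first frees the
// entire positive space, so assigning 1..n afterwards can never collide
// with a not-yet-renumbered row under a UNIQUE constraint.
func renumber(ids []int) []int {
	// Step 1: UPDATE ... SET id = -id (clear the positive space)
	for i := range ids {
		ids[i] = -ids[i]
	}
	// Step 2: assign sequential positions, as ROW_NUMBER() would
	for i := range ids {
		ids[i] = i + 1
	}
	return ids
}

func main() {
	fmt.Println(renumber([]int{2, 5, 9})) // gaps closed sequentially
}
```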

* refactor: rename New function to NewPlaylists for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: update mock playlist repository and tests for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 19:57:13 -05:00
Deluan
76c01566a9 test(ui): change datagrid from table to div to fix warning
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 18:57:12 -05:00
Deluan
1cf3fd9161 fix(scanner): prevent ScanOnStartup when scanner is disabled
Gate the ScanOnStartup config on Scanner.Enabled so that setting
Scanner.Enabled=false prevents automatic startup scans. Other automatic
scan triggers (interrupted scan resume, PID change, post-migration) are
preserved regardless of the Enabled flag to maintain data integrity.
2026-02-21 18:51:16 -05:00
Deluan Quintão
54de0dbc52 feat(server): implement FTS5-based full-text search (#5079)
* build: add sqlite_fts5 build tag to enable FTS5 support

* feat: add SearchBackend config option (default: fts)

* feat: add buildFTS5Query for safe FTS5 query preprocessing

* feat: add FTS5 search backend with config toggle, refactor legacy search

- Add searchExprFunc type and getSearchExpr() for backend selection
- Rename fullTextExpr to legacySearchExpr
- Add ftsSearchExpr using FTS5 MATCH subquery
- Update fullTextFilter in sql_restful.go to use configured backend

* feat: add FTS5 migration with virtual tables, triggers, and search_participants

Creates FTS5 virtual tables for media_file, album, and artist with
unicode61 tokenizer and diacritic folding. Adds search_participants
column, populates from JSON, and sets up INSERT/UPDATE/DELETE triggers.

* feat: populate search_participants in PostMapArgs for FTS5 indexing

* test: add FTS5 search integration tests

* fix: exclude FTS5 virtual tables from e2e DB restore

The restoreDB function iterates all tables in sqlite_master and
runs DELETE + INSERT to reset state. FTS5 contentless virtual tables
cannot be directly deleted from. Since triggers handle FTS5 sync
automatically, simply skip tables matching *_fts and *_fts_* patterns.

* build: add compile-time guard for sqlite_fts5 build tag

Same pattern as netgo: compilation fails with a clear error if
the sqlite_fts5 build tag is missing.

* build: add sqlite_fts5 tag to reflex dev server config

* build: extract GO_BUILD_TAGS variable in Makefile to avoid duplication

* fix: strip leading * from FTS5 queries to prevent "unknown special query" error

* feat: auto-append prefix wildcard to FTS5 search tokens for broader matching

Every plain search token now gets a trailing * appended (e.g., "love" becomes
"love*"), so searching for "love" also matches "lovelace", "lovely", etc.
Quoted phrases are preserved as exact matches without wildcards. Results are
ordered alphabetically by name/title, so shorter exact matches naturally
appear first.
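The wildcard rule can be sketched per token (ftsToken is a made-up name; the real tokenizer handles more cases):

```go
package main

import (
	"fmt"
	"strings"
)

// ftsToken appends a prefix wildcard to plain tokens so "love" also
// matches "lovelace"; quoted phrases are left exact, as the commit says.
func ftsToken(tok string) string {
	if len(tok) >= 2 && strings.HasPrefix(tok, `"`) && strings.HasSuffix(tok, `"`) {
		return tok // quoted phrase: exact match, no wildcard
	}
	return tok + "*"
}

func main() {
	fmt.Println(ftsToken("love"))         // love*
	fmt.Println(ftsToken(`"love me do"`)) // kept exact
}
```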

* fix: clarify comments about FTS5 operator neutralization

The comments said "strip" but the code lowercases operators to
neutralize them (FTS5 operators are case-sensitive). Updated comments
to accurately describe the behavior.

* fix: use fmt.Sprintf for FTS5 phrase placeholders

The previous encoding used rune('0'+index) which silently breaks with
10+ quoted phrases. Use fmt.Sprintf for arbitrary index support.

* fix: validate and normalize SearchBackend config option

Normalize the value to lowercase and fall back to "fts" with a log
warning for unrecognized values. This prevents silent misconfiguration
from typos like "FTS", "Legacy", or "fts5".

* refactor: improve documentation for build tags and FTS5 requirements

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: convert FTS5 query and search backend normalization tests to DescribeTable format

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: add sqlite_fts5 build tag to golangci configuration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add UISearchDebounceMs configuration option and update related components

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: fall back to legacy search when SearchFullString is enabled

FTS5 is token-based and cannot match substrings within words, so
getSearchExpr now returns legacySearchExpr when SearchFullString
is true, regardless of SearchBackend setting.

* fix: add sqlite_fts5 build tag to CI pipeline and Dockerfile

* fix: add WHEN clauses to FTS5 AFTER UPDATE triggers

Added WHEN clauses to the media_file_fts_au, album_fts_au, and
artist_fts_au triggers so they only fire when FTS-indexed columns
actually change. Previously, every row update (e.g., play count, rating,
starred status) triggered an unnecessary delete+insert cycle in the FTS
shadow tables. The WHEN clauses use IS NOT for NULL-safe comparison of
each indexed column, avoiding FTS index churn for non-indexed updates.

* feat: add SearchBackend configuration option to data and insights components

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: enhance input sanitization for FTS5 by stripping additional punctuation and special characters

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add search_normalized column for punctuated name search (R.E.M., AC/DC)

Add index-time normalization and query-time single-letter collapsing to
fix FTS5 search for punctuated names. A new search_normalized column
stores concatenated forms of punctuated words (e.g., "R.E.M." → "REM",
"AC/DC" → "ACDC") and is indexed in FTS5 tables. At query time, runs of
consecutive single letters (from dot-stripping) are collapsed into OR
expressions like ("R E M" OR REM*) to match both the original tokens and
the normalized form. This enables searching by "R.E.M.", "REM", "AC/DC",
"ACDC", "A-ha", or "Aha" and finding the correct results.

* refactor: simplify isSingleUnicodeLetter to avoid []rune allocation

Use utf8.DecodeRuneInString to check for a single Unicode letter
instead of converting the entire string to a []rune slice.
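The single-letter collapsing from the two commits above can be sketched together (isSingleLetter uses utf8.DecodeRuneInString as described; collapseRun and the exact OR syntax are simplified illustrations):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
	"unicode/utf8"
)

// isSingleLetter reports whether s is exactly one Unicode letter,
// without allocating a []rune slice.
func isSingleLetter(s string) bool {
	r, size := utf8.DecodeRuneInString(s)
	return size == len(s) && unicode.IsLetter(r)
}

// collapseRun turns a run of single letters (e.g. from "R.E.M." after
// dot-stripping) into an OR of the original tokens and the
// concatenated form, so both "R.E.M." and "REM" find the artist.
func collapseRun(tokens []string) string {
	for _, t := range tokens {
		if !isSingleLetter(t) {
			return strings.Join(tokens, " ") // not a single-letter run
		}
	}
	return fmt.Sprintf(`("%s" OR %s*)`, strings.Join(tokens, " "), strings.Join(tokens, ""))
}

func main() {
	fmt.Println(collapseRun([]string{"R", "E", "M"}))
	fmt.Println(collapseRun([]string{"Aha"}))
}
```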

* feat: define ftsSearchColumns for flexible FTS5 search column inclusion

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: update collapseSingleLetterRuns to return quoted phrases for abbreviations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement extractPunctuatedWords to handle artist/album names with embedded punctuation

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement extractPunctuatedWords to handle artist/album names with embedded punctuation

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: punctuated word handling to improve processing of artist/album names

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add CJK support for search queries with LIKE filters

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance FTS5 search by adding album version support and CJK handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: search configuration to use structured options

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance search functionality to support punctuation-only queries and update related tests

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 17:52:42 -05:00
Deluan
6f5f58ae9d chore(deps): update go-taglib to v0.0.0-20260221220301-2fab4903f48e
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 17:04:59 -05:00
Deluan Quintão
821f22a86f feat(scanner): upgrade TagLib to 2.2, with MKA/Matroska support (#5071)
* chore(deps): update go-taglib fork with MKA/Matroska support

Bump deluan/go-taglib to cf75207bfff8, which upgrades the underlying
taglib to v2.2 and adds Matroska container format detection and
metadata handling (MKA audio files).

* chore(deps): update cross-taglib version to 2.2.0-1

Signed-off-by: Deluan <deluan@navidrome.org>

* chore(make): rename run-docker target to docker-run for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* chore(go-taglib): update version to 2.2 WASM and add debug logging

Signed-off-by: Deluan <deluan@navidrome.org>

* chore(deps): update go-taglib to v0.0.0-20260220032326 for MKA fixes

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-21 16:52:48 -05:00
Boris Rorsvort
74aa4d6fa5 fix(ui): Search focus after clear (#4932)
* wip

* refactor implem

* fixes
2026-02-21 14:39:38 -05:00
dependabot[bot]
dc4607c657 chore(deps): bump ajv from 6.12.6 to 6.14.0 in /ui (#5080)
Bumps [ajv](https://github.com/ajv-validator/ajv) from 6.12.6 to 6.14.0.
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](https://github.com/ajv-validator/ajv/compare/v6.12.6...v6.14.0)

---
updated-dependencies:
- dependency-name: ajv
  dependency-version: 6.14.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-21 12:44:32 -05:00
Deluan
ddab0da207 docs: update commit message format in CONTRIBUTING.md
Signed-off-by: Deluan <deluan@navidrome.org>
2026-02-20 11:00:34 -05:00
Deluan Quintão
08a71320ea fix(ui): make toggle switches visible in Gruvbox Dark theme (#5063) (#5064)
The secondary color (#3c3836) matches the panel/table cell background,
making checked MuiSwitch thumbs invisible. Add MuiSwitch override using
Gruvbox cyan (#458588), consistent with existing interactive elements.
2026-02-18 15:38:20 -05:00
Raphael Catolino
44a5482493 fix(ui): activity Indicator switching constantly between online/offline (#5054)
When using HTTP/2, setting the writeTimeout too low causes the channel to
close before the keepAlive event has a chance of being sent.

Signed-off-by: rca <raphael.catolino@gmail.com>
Co-authored-by: Deluan Quintão <deluan@navidrome.org>
2026-02-17 14:47:20 -05:00
151 changed files with 9314 additions and 1030 deletions

View File

@@ -14,7 +14,7 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
 && apt-get -y install --no-install-recommends ffmpeg
 # Install TagLib from cross-taglib releases
-ARG CROSS_TAGLIB_VERSION="2.1.1-1"
+ARG CROSS_TAGLIB_VERSION="2.2.0-1"
 ARG TARGETARCH
 RUN DOWNLOAD_ARCH="linux-${TARGETARCH}" \
 && wget -q "https://github.com/navidrome/cross-taglib/releases/download/v${CROSS_TAGLIB_VERSION}/taglib-${DOWNLOAD_ARCH}.tar.gz" -O /tmp/cross-taglib.tar.gz \

View File

@@ -8,7 +8,7 @@
 // Options
 "INSTALL_NODE": "true",
 "NODE_VERSION": "v24",
-"CROSS_TAGLIB_VERSION": "2.1.1-1"
+"CROSS_TAGLIB_VERSION": "2.2.0-1"
 }
 },
 "workspaceMount": "",

View File

@@ -14,7 +14,7 @@ concurrency:
 cancel-in-progress: true
 env:
-CROSS_TAGLIB_VERSION: "2.1.1-2"
+CROSS_TAGLIB_VERSION: "2.2.0-1"
 CGO_CFLAGS_ALLOW: "--define-prefix"
 IS_RELEASE: ${{ startsWith(github.ref, 'refs/tags/') && 'true' || 'false' }}
@@ -117,7 +117,7 @@ jobs:
 - name: Test
 run: |
 pkg-config --define-prefix --cflags --libs taglib # for debugging
-go test -shuffle=on -tags netgo -race ./... -v
+go test -shuffle=on -tags netgo,sqlite_fts5 -race ./... -v
 - name: Test ndpgen
 run: |
@@ -424,7 +424,7 @@ jobs:
 run: echo 'RELEASE_FLAGS=--skip=publish --snapshot' >> $GITHUB_ENV
 - name: Run GoReleaser
-uses: goreleaser/goreleaser-action@v6
+uses: goreleaser/goreleaser-action@v7
 with:
 version: '~> v2'
 args: "release --clean -f release/goreleaser.yml ${{ env.RELEASE_FLAGS }}"

View File

@@ -2,6 +2,7 @@ version: "2"
 run:
 build-tags:
 - netgo
+- sqlite_fts5
 linters:
 enable:
 - asasalint

View File

@@ -38,7 +38,7 @@ Before submitting a pull request, ensure that you go through the following:
 ### Commit Conventions
 Each commit message must adhere to the following format:
 ```
-<type>(scope): <description> - <issue number>
+<type>(scope): <description>
 [optional body]
 ```

View File

@@ -28,7 +28,7 @@ COPY --from=xx-build /out/ /usr/bin/
 ### Get TagLib
 FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/alpine:3.20 AS taglib-build
 ARG TARGETPLATFORM
-ARG CROSS_TAGLIB_VERSION=2.1.1-2
+ARG CROSS_TAGLIB_VERSION=2.2.0-1
 ENV CROSS_TAGLIB_RELEASES_URL=https://github.com/navidrome/cross-taglib/releases/download/v${CROSS_TAGLIB_VERSION}/
 # wget in busybox can't follow redirects
@@ -109,7 +109,7 @@ RUN --mount=type=bind,source=. \
 export EXT=".exe"
 fi
-go build -tags=netgo -ldflags="${LD_EXTRA} -w -s \
+go build -tags=netgo,sqlite_fts5 -ldflags="${LD_EXTRA} -w -s \
 -X github.com/navidrome/navidrome/consts.gitSha=${GIT_SHA} \
 -X github.com/navidrome/navidrome/consts.gitTag=${GIT_TAG}" \
 -o /out/navidrome${EXT} .

View File

@@ -1,5 +1,6 @@
 GO_VERSION=$(shell grep "^go " go.mod | cut -f 2 -d ' ')
 NODE_VERSION=$(shell cat .nvmrc)
+GO_BUILD_TAGS=netgo,sqlite_fts5
 # Set global environment variables, required for most targets
 export CGO_CFLAGS_ALLOW=--define-prefix
@@ -19,7 +20,7 @@ PLATFORMS ?= $(SUPPORTED_PLATFORMS)
 DOCKER_TAG ?= deluan/navidrome:develop
 # Taglib version to use in cross-compilation, from https://github.com/navidrome/cross-taglib
-CROSS_TAGLIB_VERSION ?= 2.1.1-2
+CROSS_TAGLIB_VERSION ?= 2.2.0-1
 GOLANGCI_LINT_VERSION ?= v2.10.0
 UI_SRC_FILES := $(shell find ui -type f -not -path "ui/build/*" -not -path "ui/node_modules/*")
@@ -46,12 +47,12 @@ stop: ##@Development Stop development servers (UI and backend)
 .PHONY: stop
 watch: ##@Development Start Go tests in watch mode (re-run when code changes)
-	go tool ginkgo watch -tags=netgo -notify ./...
+	go tool ginkgo watch -tags=$(GO_BUILD_TAGS) -notify ./...
 .PHONY: watch
 PKG ?= ./...
 test: ##@Development Run Go tests. Use PKG variable to specify packages to test, e.g. make test PKG=./server
-	go test -tags netgo $(PKG)
+	go test -tags $(GO_BUILD_TAGS) $(PKG)
 .PHONY: test
 test-ndpgen: ##@Development Run tests for ndpgen plugin
@@ -62,7 +63,7 @@ testall: test test-ndpgen test-i18n test-js ##@Development Run Go and JS tests
 .PHONY: testall
 test-race: ##@Development Run Go tests with race detector
-	go test -tags netgo -race -shuffle=on $(PKG)
+	go test -tags $(GO_BUILD_TAGS) -race -shuffle=on $(PKG)
 .PHONY: test-race
 test-js: ##@Development Run JS tests
@@ -108,7 +109,7 @@ format: ##@Development Format code
 .PHONY: format
 wire: check_go_env ##@Development Update Dependency Injection
-	go tool wire gen -tags=netgo ./...
+	go tool wire gen -tags=$(GO_BUILD_TAGS) ./...
 .PHONY: wire
 gen: check_go_env ##@Development Run go generate for code generation
@@ -144,14 +145,14 @@ setup-git: ##@Development Setup Git hooks (pre-commit and pre-push)
 .PHONY: setup-git
 build: check_go_env buildjs ##@Build Build the project
-	go build -ldflags="-X github.com/navidrome/navidrome/consts.gitSha=$(GIT_SHA) -X github.com/navidrome/navidrome/consts.gitTag=$(GIT_TAG)" -tags=netgo
+	go build -ldflags="-X github.com/navidrome/navidrome/consts.gitSha=$(GIT_SHA) -X github.com/navidrome/navidrome/consts.gitTag=$(GIT_TAG)" -tags=$(GO_BUILD_TAGS)
 .PHONY: build
 buildall: deprecated build
 .PHONY: buildall
 debug-build: check_go_env buildjs ##@Build Build the project (with remote debug on)
-	go build -gcflags="all=-N -l" -ldflags="-X github.com/navidrome/navidrome/consts.gitSha=$(GIT_SHA) -X github.com/navidrome/navidrome/consts.gitTag=$(GIT_TAG)" -tags=netgo
+	go build -gcflags="all=-N -l" -ldflags="-X github.com/navidrome/navidrome/consts.gitSha=$(GIT_SHA) -X github.com/navidrome/navidrome/consts.gitTag=$(GIT_TAG)" -tags=$(GO_BUILD_TAGS)
 .PHONY: debug-build
 buildjs: check_node_env ui/build/index.html ##@Build Build only frontend
@@ -201,8 +202,8 @@ docker-msi: ##@Cross_Compilation Build MSI installer for Windows
 	@du -h binaries/msi/*.msi
 .PHONY: docker-msi
-run-docker: ##@Development Run a Navidrome Docker image. Usage: make run-docker tag=<tag>
-	@if [ -z "$(tag)" ]; then echo "Usage: make run-docker tag=<tag>"; exit 1; fi
+docker-run: ##@Development Run a Navidrome Docker image. Usage: make docker-run tag=<tag>
+	@if [ -z "$(tag)" ]; then echo "Usage: make docker-run tag=<tag>"; exit 1; fi
 	@TAG_DIR="tmp/$$(echo '$(tag)' | tr '/:' '_')"; mkdir -p "$$TAG_DIR"; \
 	VOLUMES="-v $(PWD)/$$TAG_DIR:/data"; \
 	if [ -f navidrome.toml ]; then \
@@ -213,7 +214,7 @@ docker-run: ##@Development Run a Navidrome Docker image. Usage: make docker-run
 	fi; \
 	fi; \
 	echo "Running: docker run --rm -p 4533:4533 $$VOLUMES $(tag)"; docker run --rm -p 4533:4533 $$VOLUMES $(tag)
-.PHONY: run-docker
+.PHONY: docker-run
 package: docker-build ##@Cross_Compilation Create binaries and packages for ALL supported platforms
 	@if [ -z `which goreleaser` ]; then echo "Please install goreleaser first: https://goreleaser.com/install/"; exit 1; fi


@@ -20,6 +20,7 @@ import (
 "strings"
 "time"
+"github.com/navidrome/navidrome/conf"
 "github.com/navidrome/navidrome/core/storage/local"
 "github.com/navidrome/navidrome/log"
 "github.com/navidrome/navidrome/model/metadata"
@@ -43,7 +44,7 @@ func (e extractor) Parse(files ...string) (map[string]metadata.Info, error) {
 }
 func (e extractor) Version() string {
-return "go-taglib (TagLib 2.1.1 WASM)"
+return "2.2 WASM"
 }
 func (e extractor) extractMetadata(filePath string) (*metadata.Info, error) {
@@ -279,4 +280,7 @@ func init() {
 local.RegisterExtractor("taglib", func(fsys fs.FS, baseDir string) local.Extractor {
 return &extractor{fsys}
 })
+conf.AddHook(func() {
+log.Debug("go-taglib version", "version", extractor{}.Version())
+})
 }


@@ -196,7 +196,8 @@ func runInitialScan(ctx context.Context) func() error {
 if err != nil {
 return err
 }
-scanNeeded := conf.Server.Scanner.ScanOnStartup || inProgress || fullScanRequired == "1" || pidHasChanged
+scanOnStartup := conf.Server.Scanner.Enabled && conf.Server.Scanner.ScanOnStartup
+scanNeeded := scanOnStartup || inProgress || fullScanRequired == "1" || pidHasChanged
 time.Sleep(2 * time.Second) // Wait 2 seconds before the initial scan
 if scanNeeded {
 s := CreateScanner(ctx)
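The change above is subtle: previously, `ScanOnStartup` alone could trigger a startup scan even when the scanner itself was disabled. A minimal sketch of the new gating (the `Config` struct below is an illustration, not Navidrome's actual type):

```go
package main

import "fmt"

// Config mirrors, in miniature, the two scanner options involved;
// the field names here are assumptions for illustration only.
type Config struct {
	Enabled       bool
	ScanOnStartup bool
}

// scanOnStartup reproduces the new rule: a startup scan requires the
// scanner to be enabled AND ScanOnStartup to be set.
func scanOnStartup(c Config) bool {
	return c.Enabled && c.ScanOnStartup
}

func main() {
	// Disabled scanner now wins, even with ScanOnStartup set:
	fmt.Println(scanOnStartup(Config{Enabled: false, ScanOnStartup: true})) // false
	fmt.Println(scanOnStartup(Config{Enabled: true, ScanOnStartup: true}))  // true
}
```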

View File

@@ -8,7 +8,7 @@ import (
 "os"
 "strings"
-"github.com/navidrome/navidrome/core"
+"github.com/navidrome/navidrome/core/playlists"
 "github.com/navidrome/navidrome/db"
 "github.com/navidrome/navidrome/log"
 "github.com/navidrome/navidrome/model"
@@ -74,7 +74,7 @@ func runScanner(ctx context.Context) {
 sqlDB := db.Db()
 defer db.Db().Close()
 ds := persistence.New(sqlDB)
-pls := core.NewPlaylists(ds)
+pls := playlists.NewPlaylists(ds)
 // Parse targets from command line or file
 var scanTargets []model.ScanTarget


@@ -18,6 +18,7 @@ import (
 "github.com/navidrome/navidrome/core/ffmpeg"
 "github.com/navidrome/navidrome/core/metrics"
 "github.com/navidrome/navidrome/core/playback"
+"github.com/navidrome/navidrome/core/playlists"
 "github.com/navidrome/navidrome/core/scrobbler"
 "github.com/navidrome/navidrome/db"
 "github.com/navidrome/navidrome/model"
@@ -61,7 +62,7 @@ func CreateNativeAPIRouter(ctx context.Context) *nativeapi.Router {
 sqlDB := db.Db()
 dataStore := persistence.New(sqlDB)
 share := core.NewShare(dataStore)
-playlists := core.NewPlaylists(dataStore)
+playlistsPlaylists := playlists.NewPlaylists(dataStore)
 insights := metrics.GetInstance(dataStore)
 fileCache := artwork.GetImageCache()
 fFmpeg := ffmpeg.New()
@@ -72,12 +73,12 @@ func CreateNativeAPIRouter(ctx context.Context) *nativeapi.Router {
 provider := external.NewProvider(dataStore, agentsAgents)
 artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
 cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
-modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
+modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlistsPlaylists, metricsMetrics)
 watcher := scanner.GetWatcher(dataStore, modelScanner)
 library := core.NewLibrary(dataStore, modelScanner, watcher, broker, manager)
 user := core.NewUser(dataStore, manager)
 maintenance := core.NewMaintenance(dataStore)
-router := nativeapi.New(dataStore, share, playlists, insights, library, user, maintenance, manager)
+router := nativeapi.New(dataStore, share, playlistsPlaylists, insights, library, user, maintenance, manager)
 return router
 }
@@ -98,11 +99,11 @@ func CreateSubsonicAPIRouter(ctx context.Context) *subsonic.Router {
 archiver := core.NewArchiver(mediaStreamer, dataStore, share)
 players := core.NewPlayers(dataStore)
 cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
-playlists := core.NewPlaylists(dataStore)
-modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
+playlistsPlaylists := playlists.NewPlaylists(dataStore)
+modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlistsPlaylists, metricsMetrics)
 playTracker := scrobbler.GetPlayTracker(dataStore, broker, manager)
 playbackServer := playback.GetInstance(dataStore)
-router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, modelScanner, broker, playlists, playTracker, share, playbackServer, metricsMetrics)
+router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, modelScanner, broker, playlistsPlaylists, playTracker, share, playbackServer, metricsMetrics)
 return router
 }
@@ -165,8 +166,8 @@ func CreateScanner(ctx context.Context) model.Scanner {
 provider := external.NewProvider(dataStore, agentsAgents)
 artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
 cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
-playlists := core.NewPlaylists(dataStore)
-modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
+playlistsPlaylists := playlists.NewPlaylists(dataStore)
+modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlistsPlaylists, metricsMetrics)
 return modelScanner
 }
@@ -182,8 +183,8 @@ func CreateScanWatcher(ctx context.Context) scanner.Watcher {
 provider := external.NewProvider(dataStore, agentsAgents)
 artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
 cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
-playlists := core.NewPlaylists(dataStore)
-modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
+playlistsPlaylists := playlists.NewPlaylists(dataStore)
+modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlistsPlaylists, metricsMetrics)
 watcher := scanner.GetWatcher(dataStore, modelScanner)
 return watcher
 }


@@ -1,4 +0,0 @@
-package buildtags
-// This file is left intentionally empty. It is used to make sure the package is not empty, in the case all
-// required build tags are disabled.

conf/buildtags/doc.go (new file)

@@ -0,0 +1,6 @@
+// Package buildtags provides compile-time enforcement of required build tags.
+//
+// Each file in this package is guarded by a build constraint and exports a variable
+// that main.go references. If a required tag is missing during compilation, the build
+// fails with an "undefined" error, directing the developer to use `make build`.
+package buildtags
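The package comment describes a general Go pattern. A minimal two-file sketch of the mechanism (the `foo` tag and file names below are illustrative, not Navidrome's actual files):

```go
// file: buildtags/foo.go
//go:build foo

package buildtags

// FOO only exists when the binary is built with -tags=foo.
var FOO = true

// file: main.go
package main

import "example.com/app/buildtags"

// Compiling without `-tags=foo` fails with "undefined: buildtags.FOO",
// pointing the developer at the required build command.
var _ = buildtags.FOO
```

Because the failure happens at compile time, a developer who runs a bare `go build` gets an immediate, searchable error instead of a subtly broken binary.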


@@ -2,10 +2,6 @@
 package buildtags
-// NOTICE: This file was created to force the inclusion of the `netgo` tag when compiling the project.
-// If the tag is not included, the compilation will fail because this variable won't be defined, and the `main.go`
-// file requires it.
-// Why this tag is required? See https://github.com/navidrome/navidrome/issues/700
+// The `netgo` tag is required when compiling the project. See https://github.com/navidrome/navidrome/issues/700
 var NETGO = true


@@ -0,0 +1,8 @@
+//go:build sqlite_fts5
+
+package buildtags
+
+// FTS5 is required for full-text search. Without this tag, the SQLite driver
+// won't include FTS5 support, causing runtime failures on migrations and search queries.
+var SQLITE_FTS5 = true


@@ -58,7 +58,7 @@ type configOptions struct {
 SmartPlaylistRefreshDelay time.Duration
 AutoTranscodeDownload bool
 DefaultDownsamplingFormat string
-SearchFullString bool
+Search searchOptions `json:",omitzero"`
 SimilarSongsMatchThreshold int
 RecentlyAddedByModTime bool
 PreferSortTags bool
@@ -82,6 +82,7 @@ type configOptions struct {
 DefaultTheme string
 DefaultLanguage string
 DefaultUIVolume int
+UISearchDebounceMs int
 EnableReplayGain bool
 EnableCoverAnimation bool
 EnableNowPlaying bool
@@ -154,6 +155,7 @@ type scannerOptions struct {
 type subsonicOptions struct {
 AppendSubtitle bool
+AppendAlbumVersion bool
 ArtistParticipations bool
 DefaultReportRealPath bool
 EnableAverageRating bool
@@ -249,6 +251,12 @@ type pluginsOptions struct {
 type extAuthOptions struct {
 TrustedSources string
 UserHeader string
+LogoutURL string
+}
+
+type searchOptions struct {
+Backend string
+FullString bool
 }
 var (
@@ -339,11 +347,14 @@ func Load(noConfigDump bool) {
 validateBackupSchedule,
 validatePlaylistsPath,
 validatePurgeMissingOption,
+validateURL("ExtAuth.LogoutURL", Server.ExtAuth.LogoutURL),
 )
 if err != nil {
 os.Exit(1)
 }
+Server.Search.Backend = normalizeSearchBackend(Server.Search.Backend)
 if Server.BaseURL != "" {
 u, err := url.Parse(Server.BaseURL)
 if err != nil {
@@ -392,6 +403,7 @@ func Load(noConfigDump bool) {
 logDeprecatedOptions("Scanner.GenreSeparators", "")
 logDeprecatedOptions("Scanner.GroupAlbumReleases", "")
 logDeprecatedOptions("DevEnableBufferedScrobble", "") // Deprecated: Buffered scrobbling is now always enabled and this option is ignored
+logDeprecatedOptions("SearchFullString", "Search.FullString")
 logDeprecatedOptions("ReverseProxyWhitelist", "ExtAuth.TrustedSources")
 logDeprecatedOptions("ReverseProxyUserHeader", "ExtAuth.UserHeader")
 logDeprecatedOptions("HTTPSecurityHeaders.CustomFrameOptionsValue", "HTTPHeaders.FrameOptions")
@@ -539,6 +551,44 @@ func validateSchedule(schedule, field string) (string, error) {
 return schedule, err
 }
+
+// validateURL checks if the provided URL is valid and has either http or https scheme.
+// It returns a function that can be used as a hook to validate URLs in the config.
+func validateURL(optionName, optionURL string) func() error {
+return func() error {
+if optionURL == "" {
+return nil
+}
+u, err := url.Parse(optionURL)
+if err != nil {
+log.Error(fmt.Sprintf("Invalid %s: it could not be parsed", optionName), "url", optionURL, "err", err)
+return err
+}
+if u.Scheme != "http" && u.Scheme != "https" {
+err := fmt.Errorf("invalid scheme for %s: '%s'. Only 'http' and 'https' are allowed", optionName, u.Scheme)
+log.Error(err.Error())
+return err
+}
+// Require an absolute URL with a non-empty host and no opaque component.
+if u.Host == "" || u.Opaque != "" {
+err := fmt.Errorf("invalid %s: '%s'. A full http(s) URL with a non-empty host is required", optionName, optionURL)
+log.Error(err.Error())
+return err
+}
+return nil
+}
+}
+
+func normalizeSearchBackend(value string) string {
+v := strings.ToLower(strings.TrimSpace(value))
+switch v {
+case "fts", "legacy":
+return v
+default:
+log.Error("Invalid Search.Backend value, falling back to 'fts'", "value", value)
+return "fts"
+}
+}
 // AddHook is used to register initialization code that should run as soon as the config is loaded
 func AddHook(hook func()) {
 hooks = append(hooks, hook)
@@ -585,7 +635,8 @@ func setViperDefaults() {
 viper.SetDefault("enablemediafilecoverart", true)
 viper.SetDefault("autotranscodedownload", false)
 viper.SetDefault("defaultdownsamplingformat", consts.DefaultDownsamplingFormat)
-viper.SetDefault("searchfullstring", false)
+viper.SetDefault("search.fullstring", false)
+viper.SetDefault("search.backend", "fts")
 viper.SetDefault("similarsongsmatchthreshold", 85)
 viper.SetDefault("recentlyaddedbymodtime", false)
 viper.SetDefault("prefersorttags", false)
@@ -604,6 +655,7 @@ func setViperDefaults() {
 viper.SetDefault("defaulttheme", "Dark")
 viper.SetDefault("defaultlanguage", "")
 viper.SetDefault("defaultuivolume", consts.DefaultUIVolume)
+viper.SetDefault("uisearchdebouncems", consts.DefaultUISearchDebounceMs)
 viper.SetDefault("enablereplaygain", true)
 viper.SetDefault("enablecoveranimation", true)
 viper.SetDefault("enablenowplaying", true)
@@ -619,6 +671,7 @@ func setViperDefaults() {
 viper.SetDefault("passwordencryptionkey", "")
 viper.SetDefault("extauth.userheader", "Remote-User")
 viper.SetDefault("extauth.trustedsources", "")
+viper.SetDefault("extauth.logouturl", "")
 viper.SetDefault("prometheus.enabled", false)
 viper.SetDefault("prometheus.metricspath", consts.PrometheusDefaultPath)
 viper.SetDefault("prometheus.password", "")
@@ -637,6 +690,7 @@ func setViperDefaults() {
 viper.SetDefault("scanner.followsymlinks", true)
 viper.SetDefault("scanner.purgemissing", consts.PurgeMissingNever)
 viper.SetDefault("subsonic.appendsubtitle", true)
+viper.SetDefault("subsonic.appendalbumversion", true)
 viper.SetDefault("subsonic.artistparticipations", false)
 viper.SetDefault("subsonic.defaultreportrealpath", false)
 viper.SetDefault("subsonic.enableaveragerating", true)


@@ -52,6 +52,62 @@ var _ = Describe("Configuration", func() {
 })
 })
+
+Describe("ValidateURL", func() {
+It("accepts a valid http URL", func() {
+fn := conf.ValidateURL("TestOption", "http://example.com/path")
+Expect(fn()).To(Succeed())
+})
+It("accepts a valid https URL", func() {
+fn := conf.ValidateURL("TestOption", "https://example.com/path")
+Expect(fn()).To(Succeed())
+})
+It("rejects a URL with no scheme", func() {
+fn := conf.ValidateURL("TestOption", "example.com/path")
+Expect(fn()).To(MatchError(ContainSubstring("invalid scheme")))
+})
+It("rejects a URL with an unsupported scheme", func() {
+fn := conf.ValidateURL("TestOption", "javascript://example.com/path")
+Expect(fn()).To(MatchError(ContainSubstring("invalid scheme")))
+})
+It("accepts an empty URL (optional config)", func() {
+fn := conf.ValidateURL("TestOption", "")
+Expect(fn()).To(Succeed())
+})
+It("includes the option name in the error message", func() {
+fn := conf.ValidateURL("MyOption", "ftp://example.com")
+Expect(fn()).To(MatchError(ContainSubstring("MyOption")))
+})
+It("rejects a URL that cannot be parsed", func() {
+fn := conf.ValidateURL("TestOption", "://invalid")
+Expect(fn()).To(HaveOccurred())
+})
+It("rejects a URL without a host", func() {
+fn := conf.ValidateURL("TestOption", "http:///path")
+Expect(fn()).To(MatchError(ContainSubstring("non-empty host is required")))
+})
+})
+
+DescribeTable("NormalizeSearchBackend",
+func(input, expected string) {
+Expect(conf.NormalizeSearchBackend(input)).To(Equal(expected))
+},
+Entry("accepts 'fts'", "fts", "fts"),
+Entry("accepts 'legacy'", "legacy", "legacy"),
+Entry("normalizes 'FTS' to lowercase", "FTS", "fts"),
+Entry("normalizes 'Legacy' to lowercase", "Legacy", "legacy"),
+Entry("trims whitespace", " fts ", "fts"),
+Entry("falls back to 'fts' for 'fts5'", "fts5", "fts"),
+Entry("falls back to 'fts' for unrecognized values", "invalid", "fts"),
+Entry("falls back to 'fts' for empty string", "", "fts"),
+)
 DescribeTable("should load configuration from",
 func(format string) {
 filename := filepath.Join("testdata", "cfg."+format)
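The validation rules exercised by the `ValidateURL` tests can be condensed into a standalone sketch. The `checkURL` function below is an illustrative distillation of the `validateURL` helper (logging and the option-name parameter elided); note that `url.Parse("example.com/path")` succeeds but yields an empty scheme, which is why scheme-less URLs are rejected:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// checkURL condenses the validateURL rules: empty is allowed (optional
// config); otherwise a full http(s) URL with a non-empty host is required.
func checkURL(raw string) error {
	if raw == "" {
		return nil
	}
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return errors.New("invalid scheme: '" + u.Scheme + "'")
	}
	if u.Host == "" || u.Opaque != "" {
		return errors.New("a non-empty host is required")
	}
	return nil
}

func main() {
	fmt.Println(checkURL("https://example.com/logout")) // <nil>
	fmt.Println(checkURL("example.com/path"))           // scheme error: parses with an empty scheme
	fmt.Println(checkURL("http:///path"))               // host error: scheme but no host
}
```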


@@ -7,3 +7,7 @@ func ResetConf() {
 var SetViperDefaults = setViperDefaults
 var ParseLanguages = parseLanguages
+
+var ValidateURL = validateURL
+var NormalizeSearchBackend = normalizeSearchBackend


@@ -71,6 +71,7 @@ const (
 PlaceholderAvatar = "logo-192x192.png"
 UICoverArtSize = 300
 DefaultUIVolume = 100
+DefaultUISearchDebounceMs = 200
 DefaultHttpClientTimeOut = 10 * time.Second


@@ -208,7 +208,8 @@ var staticData = sync.OnceValue(func() insights.Data {
 data.Config.TranscodingCacheSize = conf.Server.TranscodingCacheSize
 data.Config.ImageCacheSize = conf.Server.ImageCacheSize
 data.Config.SessionTimeout = uint64(math.Trunc(conf.Server.SessionTimeout.Seconds()))
-data.Config.SearchFullString = conf.Server.SearchFullString
+data.Config.SearchFullString = conf.Server.Search.FullString
+data.Config.SearchBackend = conf.Server.Search.Backend
 data.Config.RecentlyAddedByModTime = conf.Server.RecentlyAddedByModTime
 data.Config.PreferSortTags = conf.Server.PreferSortTags
 data.Config.BackupSchedule = conf.Server.Backup.Schedule


@@ -68,6 +68,7 @@ type Data struct {
 EnableNowPlaying bool `json:"enableNowPlaying,omitempty"`
 SessionTimeout uint64 `json:"sessionTimeout,omitempty"`
 SearchFullString bool `json:"searchFullString,omitempty"`
+SearchBackend string `json:"searchBackend,omitempty"`
 RecentlyAddedByModTime bool `json:"recentlyAddedByModTime,omitempty"`
 PreferSortTags bool `json:"preferSortTags,omitempty"`
 BackupSchedule string `json:"backupSchedule,omitempty"`

core/playlists/import.go (new file)

@@ -0,0 +1,119 @@
+package playlists
+
+import (
+"context"
+"errors"
+"io"
+"os"
+"path/filepath"
+"strings"
+"time"
+
+"github.com/navidrome/navidrome/conf"
+"github.com/navidrome/navidrome/log"
+"github.com/navidrome/navidrome/model"
+"github.com/navidrome/navidrome/model/request"
+"github.com/navidrome/navidrome/utils/ioutils"
+"golang.org/x/text/unicode/norm"
+)
+
+func (s *playlists) ImportFile(ctx context.Context, folder *model.Folder, filename string) (*model.Playlist, error) {
+pls, err := s.parsePlaylist(ctx, filename, folder)
+if err != nil {
+log.Error(ctx, "Error parsing playlist", "path", filepath.Join(folder.AbsolutePath(), filename), err)
+return nil, err
+}
+log.Debug(ctx, "Found playlist", "name", pls.Name, "lastUpdated", pls.UpdatedAt, "path", pls.Path, "numTracks", len(pls.Tracks))
+err = s.updatePlaylist(ctx, pls)
+if err != nil {
+log.Error(ctx, "Error updating playlist", "path", filepath.Join(folder.AbsolutePath(), filename), err)
+}
+return pls, err
+}
+
+func (s *playlists) ImportM3U(ctx context.Context, reader io.Reader) (*model.Playlist, error) {
+owner, _ := request.UserFrom(ctx)
+pls := &model.Playlist{
+OwnerID: owner.ID,
+Public: false,
+Sync: false,
+}
+err := s.parseM3U(ctx, pls, nil, reader)
+if err != nil {
+log.Error(ctx, "Error parsing playlist", err)
+return nil, err
+}
+err = s.ds.Playlist(ctx).Put(pls)
+if err != nil {
+log.Error(ctx, "Error saving playlist", err)
+return nil, err
+}
+return pls, nil
+}
+
+func (s *playlists) parsePlaylist(ctx context.Context, playlistFile string, folder *model.Folder) (*model.Playlist, error) {
+pls, err := s.newSyncedPlaylist(folder.AbsolutePath(), playlistFile)
+if err != nil {
+return nil, err
+}
+file, err := os.Open(pls.Path)
+if err != nil {
+return nil, err
+}
+defer file.Close()
+reader := ioutils.UTF8Reader(file)
+extension := strings.ToLower(filepath.Ext(playlistFile))
+switch extension {
+case ".nsp":
+err = s.parseNSP(ctx, pls, reader)
+default:
+err = s.parseM3U(ctx, pls, folder, reader)
+}
+return pls, err
+}
+
+func (s *playlists) updatePlaylist(ctx context.Context, newPls *model.Playlist) error {
+owner, _ := request.UserFrom(ctx)
+// Try to find existing playlist by path. Since filesystem normalization differs across
+// platforms (macOS uses NFD, Linux/Windows use NFC), we try both forms to match
+// playlists that may have been imported on a different platform.
+pls, err := s.ds.Playlist(ctx).FindByPath(newPls.Path)
+if errors.Is(err, model.ErrNotFound) {
+// Try alternate normalization form
+altPath := norm.NFD.String(newPls.Path)
+if altPath == newPls.Path {
+altPath = norm.NFC.String(newPls.Path)
+}
+if altPath != newPls.Path {
+pls, err = s.ds.Playlist(ctx).FindByPath(altPath)
+}
+}
+if err != nil && !errors.Is(err, model.ErrNotFound) {
+return err
+}
+if err == nil && !pls.Sync {
+log.Debug(ctx, "Playlist already imported and not synced", "playlist", pls.Name, "path", pls.Path)
+return nil
+}
+if err == nil {
+log.Info(ctx, "Updating synced playlist", "playlist", pls.Name, "path", newPls.Path)
+newPls.ID = pls.ID
+newPls.Name = pls.Name
+newPls.Comment = pls.Comment
+newPls.OwnerID = pls.OwnerID
+newPls.Public = pls.Public
+newPls.EvaluatedAt = &time.Time{}
+} else {
+log.Info(ctx, "Adding synced playlist", "playlist", newPls.Name, "path", newPls.Path, "owner", owner.UserName)
+newPls.OwnerID = owner.ID
+// For NSP files, Public may already be set from the file; for M3U, use server default
+if !newPls.IsSmartPlaylist() {
+newPls.Public = conf.Server.DefaultPlaylistPublicVisibility
+}
+}
+return s.ds.Playlist(ctx).Put(newPls)
+}


@@ -1,4 +1,4 @@
-package core_test
+package playlists_test

 import (
 	"context"
@@ -9,7 +9,7 @@ import (
 	"github.com/navidrome/navidrome/conf"
 	"github.com/navidrome/navidrome/conf/configtest"
-	"github.com/navidrome/navidrome/core"
+	"github.com/navidrome/navidrome/core/playlists"
 	"github.com/navidrome/navidrome/model"
 	"github.com/navidrome/navidrome/model/criteria"
 	"github.com/navidrome/navidrome/model/request"
@@ -19,18 +19,18 @@ import (
 	"golang.org/x/text/unicode/norm"
 )

-var _ = Describe("Playlists", func() {
+var _ = Describe("Playlists - Import", func() {
 	var ds *tests.MockDataStore
-	var ps core.Playlists
-	var mockPlsRepo mockedPlaylistRepo
+	var ps playlists.Playlists
+	var mockPlsRepo *tests.MockPlaylistRepo
 	var mockLibRepo *tests.MockLibraryRepo
 	ctx := context.Background()

 	BeforeEach(func() {
-		mockPlsRepo = mockedPlaylistRepo{}
+		mockPlsRepo = tests.CreateMockPlaylistRepo()
 		mockLibRepo = &tests.MockLibraryRepo{}
 		ds = &tests.MockDataStore{
-			MockedPlaylist: &mockPlsRepo,
+			MockedPlaylist: mockPlsRepo,
 			MockedLibrary:  mockLibRepo,
 		}
 		ctx = request.WithUser(ctx, model.User{ID: "123"})
@@ -39,7 +39,7 @@ var _ = Describe("Playlists", func() {
 	Describe("ImportFile", func() {
 		var folder *model.Folder
 		BeforeEach(func() {
-			ps = core.NewPlaylists(ds)
+			ps = playlists.NewPlaylists(ds)
 			ds.MockedMediaFile = &mockedMediaFileRepo{}
 			libPath, _ := os.Getwd()
 			// Set up library with the actual library path that matches the folder
@@ -61,7 +61,7 @@ var _ = Describe("Playlists", func() {
 			Expect(pls.Tracks).To(HaveLen(2))
 			Expect(pls.Tracks[0].Path).To(Equal("tests/fixtures/playlists/test.mp3"))
 			Expect(pls.Tracks[1].Path).To(Equal("tests/fixtures/playlists/test.ogg"))
-			Expect(mockPlsRepo.last).To(Equal(pls))
+			Expect(mockPlsRepo.Last).To(Equal(pls))
 		})

 		It("parses playlists using LF ending", func() {
@@ -99,7 +99,7 @@ var _ = Describe("Playlists", func() {
 		It("parses well-formed playlists", func() {
 			pls, err := ps.ImportFile(ctx, folder, "recently_played.nsp")
 			Expect(err).ToNot(HaveOccurred())
-			Expect(mockPlsRepo.last).To(Equal(pls))
+			Expect(mockPlsRepo.Last).To(Equal(pls))
 			Expect(pls.OwnerID).To(Equal("123"))
 			Expect(pls.Name).To(Equal("Recently Played"))
 			Expect(pls.Comment).To(Equal("Recently played tracks"))
@@ -149,7 +149,7 @@ var _ = Describe("Playlists", func() {
 			tmpDir := GinkgoT().TempDir()
 			mockLibRepo.SetData([]model.Library{{ID: 1, Path: tmpDir}})
 			ds.MockedMediaFile = &mockedMediaFileFromListRepo{data: []string{}}
-			ps = core.NewPlaylists(ds)
+			ps = playlists.NewPlaylists(ds)

 			// Create the playlist file on disk with the filesystem's normalization form
 			plsFile := tmpDir + "/" + filesystemName + ".m3u"
@@ -163,7 +163,7 @@ var _ = Describe("Playlists", func() {
 				Path: storedPath,
 				Sync: true,
 			}
-			mockPlsRepo.data = map[string]*model.Playlist{storedPath: existingPls}
+			mockPlsRepo.PathMap = map[string]*model.Playlist{storedPath: existingPls}

 			// Import using the filesystem's normalization form
 			plsFolder := &model.Folder{
@@ -209,7 +209,7 @@ var _ = Describe("Playlists", func() {
 					"def.mp3", // This is playlists/def.mp3 relative to plsDir
 				},
 			}
-			ps = core.NewPlaylists(ds)
+			ps = playlists.NewPlaylists(ds)
 		})

 		It("handles relative paths that reference files in other libraries", func() {
@@ -365,7 +365,7 @@ var _ = Describe("Playlists", func() {
 				},
 			}
 			// Recreate playlists service to pick up new mock
-			ps = core.NewPlaylists(ds)
+			ps = playlists.NewPlaylists(ds)

 			// Create playlist in music library that references both tracks
 			plsContent := "#PLAYLIST:Same Path Test\nalbum/track.mp3\n../classical/album/track.mp3"
@@ -408,7 +408,7 @@ var _ = Describe("Playlists", func() {
 		BeforeEach(func() {
 			repo = &mockedMediaFileFromListRepo{}
 			ds.MockedMediaFile = repo
-			ps = core.NewPlaylists(ds)
+			ps = playlists.NewPlaylists(ds)
 			mockLibRepo.SetData([]model.Library{{ID: 1, Path: "/music"}, {ID: 2, Path: "/new"}})
 			ctx = request.WithUser(ctx, model.User{ID: "123"})
 		})
@@ -439,7 +439,7 @@ var _ = Describe("Playlists", func() {
 			Expect(pls.Tracks[1].Path).To(Equal("tests/test.ogg"))
 			Expect(pls.Tracks[2].Path).To(Equal("downloads/newfile.flac"))
 			Expect(pls.Tracks[3].Path).To(Equal("tests/01 Invisible (RED) Edit Version.mp3"))
-			Expect(mockPlsRepo.last).To(Equal(pls))
+			Expect(mockPlsRepo.Last).To(Equal(pls))
 		})

 		It("sets the playlist name as a timestamp if the #PLAYLIST directive is not present", func() {
@@ -460,7 +460,7 @@ var _ = Describe("Playlists", func() {
 			Expect(pls.Tracks).To(HaveLen(2))
 		})

-		It("returns only tracks that exist in the database and in the same other as the m3u", func() {
+		It("returns only tracks that exist in the database and in the same order as the m3u", func() {
 			repo.data = []string{
 				"album1/test1.mp3",
 				"album2/test2.mp3",
@@ -570,7 +570,7 @@ var _ = Describe("Playlists", func() {
 	})

-	Describe("InPlaylistsPath", func() {
+	Describe("InPath", func() {
 		var folder model.Folder
 		BeforeEach(func() {
@@ -584,27 +584,27 @@ var _ = Describe("Playlists", func() {
 		It("returns true if PlaylistsPath is empty", func() {
 			conf.Server.PlaylistsPath = ""
-			Expect(core.InPlaylistsPath(folder)).To(BeTrue())
+			Expect(playlists.InPath(folder)).To(BeTrue())
 		})

 		It("returns true if PlaylistsPath is any (**/**)", func() {
 			conf.Server.PlaylistsPath = "**/**"
-			Expect(core.InPlaylistsPath(folder)).To(BeTrue())
+			Expect(playlists.InPath(folder)).To(BeTrue())
 		})

 		It("returns true if folder is in PlaylistsPath", func() {
 			conf.Server.PlaylistsPath = "other/**:playlists/**"
-			Expect(core.InPlaylistsPath(folder)).To(BeTrue())
+			Expect(playlists.InPath(folder)).To(BeTrue())
 		})

 		It("returns false if folder is not in PlaylistsPath", func() {
 			conf.Server.PlaylistsPath = "other"
-			Expect(core.InPlaylistsPath(folder)).To(BeFalse())
+			Expect(playlists.InPath(folder)).To(BeFalse())
 		})

 		It("returns true if for a playlist in root of MusicFolder if PlaylistsPath is '.'", func() {
 			conf.Server.PlaylistsPath = "."
-			Expect(core.InPlaylistsPath(folder)).To(BeFalse())
+			Expect(playlists.InPath(folder)).To(BeFalse())

 			folder2 := model.Folder{
 				LibraryPath: "/music",
@@ -612,7 +612,7 @@ var _ = Describe("Playlists", func() {
 				Name:        ".",
 			}
-			Expect(core.InPlaylistsPath(folder2)).To(BeTrue())
+			Expect(playlists.InPath(folder2)).To(BeTrue())
 		})
 	})
 })
@@ -693,23 +693,3 @@ func (r *mockedMediaFileFromListRepo) FindByPaths(paths []string) (model.MediaFi
 	}
 	return mfs, nil
 }
-
-type mockedPlaylistRepo struct {
-	last *model.Playlist
-	data map[string]*model.Playlist // keyed by path
-	model.PlaylistRepository
-}
-
-func (r *mockedPlaylistRepo) FindByPath(path string) (*model.Playlist, error) {
-	if r.data != nil {
-		if pls, ok := r.data[path]; ok {
-			return pls, nil
-		}
-	}
-	return nil, model.ErrNotFound
-}
-
-func (r *mockedPlaylistRepo) Put(pls *model.Playlist) error {
-	r.last = pls
-	return nil
-}


@@ -1,183 +1,28 @@
@@ -1,183 +1,28 @@
-package core
+package playlists

 import (
 	"cmp"
 	"context"
-	"encoding/json"
-	"errors"
 	"fmt"
 	"io"
 	"net/url"
-	"os"
 	"path/filepath"
 	"slices"
 	"strings"
 	"time"

-	"github.com/RaveNoX/go-jsoncommentstrip"
-	"github.com/bmatcuk/doublestar/v4"
-	"github.com/navidrome/navidrome/conf"
 	"github.com/navidrome/navidrome/log"
 	"github.com/navidrome/navidrome/model"
-	"github.com/navidrome/navidrome/model/criteria"
-	"github.com/navidrome/navidrome/model/request"
-	"github.com/navidrome/navidrome/utils/ioutils"
 	"github.com/navidrome/navidrome/utils/slice"
 	"golang.org/x/text/unicode/norm"
 )

-type Playlists interface {
-	ImportFile(ctx context.Context, folder *model.Folder, filename string) (*model.Playlist, error)
-	Update(ctx context.Context, playlistID string, name *string, comment *string, public *bool, idsToAdd []string, idxToRemove []int) error
-	ImportM3U(ctx context.Context, reader io.Reader) (*model.Playlist, error)
-}
-
-type playlists struct {
-	ds model.DataStore
-}
-
-func NewPlaylists(ds model.DataStore) Playlists {
-	return &playlists{ds: ds}
-}
-
-func InPlaylistsPath(folder model.Folder) bool {
-	if conf.Server.PlaylistsPath == "" {
-		return true
-	}
-	rel, _ := filepath.Rel(folder.LibraryPath, folder.AbsolutePath())
-	for path := range strings.SplitSeq(conf.Server.PlaylistsPath, string(filepath.ListSeparator)) {
-		if match, _ := doublestar.Match(path, rel); match {
-			return true
-		}
-	}
-	return false
-}
-
-func (s *playlists) ImportFile(ctx context.Context, folder *model.Folder, filename string) (*model.Playlist, error) {
-	pls, err := s.parsePlaylist(ctx, filename, folder)
-	if err != nil {
-		log.Error(ctx, "Error parsing playlist", "path", filepath.Join(folder.AbsolutePath(), filename), err)
-		return nil, err
-	}
-	log.Debug("Found playlist", "name", pls.Name, "lastUpdated", pls.UpdatedAt, "path", pls.Path, "numTracks", len(pls.Tracks))
-	err = s.updatePlaylist(ctx, pls)
-	if err != nil {
-		log.Error(ctx, "Error updating playlist", "path", filepath.Join(folder.AbsolutePath(), filename), err)
-	}
-	return pls, err
-}
-
-func (s *playlists) ImportM3U(ctx context.Context, reader io.Reader) (*model.Playlist, error) {
-	owner, _ := request.UserFrom(ctx)
-	pls := &model.Playlist{
-		OwnerID: owner.ID,
-		Public:  false,
-		Sync:    false,
-	}
-	err := s.parseM3U(ctx, pls, nil, reader)
-	if err != nil {
-		log.Error(ctx, "Error parsing playlist", err)
-		return nil, err
-	}
-	err = s.ds.Playlist(ctx).Put(pls)
-	if err != nil {
-		log.Error(ctx, "Error saving playlist", err)
-		return nil, err
-	}
-	return pls, nil
-}
-
-func (s *playlists) parsePlaylist(ctx context.Context, playlistFile string, folder *model.Folder) (*model.Playlist, error) {
-	pls, err := s.newSyncedPlaylist(folder.AbsolutePath(), playlistFile)
-	if err != nil {
-		return nil, err
-	}
-	file, err := os.Open(pls.Path)
-	if err != nil {
-		return nil, err
-	}
-	defer file.Close()
-	reader := ioutils.UTF8Reader(file)
-	extension := strings.ToLower(filepath.Ext(playlistFile))
-	switch extension {
-	case ".nsp":
-		err = s.parseNSP(ctx, pls, reader)
-	default:
-		err = s.parseM3U(ctx, pls, folder, reader)
-	}
-	return pls, err
-}
-
-func (s *playlists) newSyncedPlaylist(baseDir string, playlistFile string) (*model.Playlist, error) {
-	playlistPath := filepath.Join(baseDir, playlistFile)
-	info, err := os.Stat(playlistPath)
-	if err != nil {
-		return nil, err
-	}
-	var extension = filepath.Ext(playlistFile)
-	var name = playlistFile[0 : len(playlistFile)-len(extension)]
-	pls := &model.Playlist{
-		Name:      name,
-		Comment:   fmt.Sprintf("Auto-imported from '%s'", playlistFile),
-		Public:    false,
-		Path:      playlistPath,
-		Sync:      true,
-		UpdatedAt: info.ModTime(),
-	}
-	return pls, nil
-}
-
-func getPositionFromOffset(data []byte, offset int64) (line, column int) {
-	line = 1
-	for _, b := range data[:offset] {
-		if b == '\n' {
-			line++
-			column = 1
-		} else {
-			column++
-		}
-	}
-	return
-}
-
-func (s *playlists) parseNSP(_ context.Context, pls *model.Playlist, reader io.Reader) error {
-	nsp := &nspFile{}
-	reader = io.LimitReader(reader, 100*1024) // Limit to 100KB
-	reader = jsoncommentstrip.NewReader(reader)
-	input, err := io.ReadAll(reader)
-	if err != nil {
-		return fmt.Errorf("reading SmartPlaylist: %w", err)
-	}
-	err = json.Unmarshal(input, nsp)
-	if err != nil {
-		var syntaxErr *json.SyntaxError
-		if errors.As(err, &syntaxErr) {
-			line, col := getPositionFromOffset(input, syntaxErr.Offset)
-			return fmt.Errorf("JSON syntax error in SmartPlaylist at line %d, column %d: %w", line, col, err)
-		}
-		return fmt.Errorf("JSON parsing error in SmartPlaylist: %w", err)
-	}
-	pls.Rules = &nsp.Criteria
-	if nsp.Name != "" {
-		pls.Name = nsp.Name
-	}
-	if nsp.Comment != "" {
-		pls.Comment = nsp.Comment
-	}
-	if nsp.Public != nil {
-		pls.Public = *nsp.Public
-	} else {
-		pls.Public = conf.Server.DefaultPlaylistPublicVisibility
-	}
-	return nil
-}
-
 func (s *playlists) parseM3U(ctx context.Context, pls *model.Playlist, folder *model.Folder, reader io.Reader) error {
 	mediaFileRepository := s.ds.MediaFile(ctx)
+	resolver, err := newPathResolver(ctx, s.ds)
+	if err != nil {
+		return err
+	}
 	var mfs model.MediaFiles
 	// Chunk size of 100 lines, as each line can generate up to 4 lookup candidates
 	// (NFC/NFD × raw/lowercase), and SQLite has a max expression tree depth of 1000.
@@ -202,7 +47,7 @@ func (s *playlists) parseM3U(ctx context.Context, pls *model.Playlist, folder *m
 		}
 		filteredLines = append(filteredLines, line)
 	}
-	resolvedPaths, err := s.resolvePaths(ctx, folder, filteredLines)
+	resolvedPaths, err := resolver.resolvePaths(ctx, folder, filteredLines)
 	if err != nil {
 		log.Warn(ctx, "Error resolving paths in playlist", "playlist", pls.Name, err)
 		continue
@@ -258,7 +103,9 @@ func (s *playlists) parseM3U(ctx context.Context, pls *model.Playlist, folder *m
 		existing[key] = idx
 	}

-	// Find media files in the order of the resolved paths, to keep playlist order
+	// Find media files in the order of the resolved paths, to keep playlist order.
+	// Both `existing` keys and `resolvedPaths` use the library-qualified format "libraryID:relativePath",
+	// so normalizing the full string produces matching keys (digits and ':' are ASCII-invariant).
 	for _, path := range resolvedPaths {
 		key := strings.ToLower(norm.NFC.String(path))
 		idx, ok := existing[key]
@@ -398,15 +245,10 @@ func (r *pathResolver) findInLibraries(absolutePath string) pathResolution {
 // resolvePaths converts playlist file paths to library-qualified paths (format: "libraryID:relativePath").
 // For relative paths, it resolves them to absolute paths first, then determines which
 // library they belong to. This allows playlists to reference files across library boundaries.
-func (s *playlists) resolvePaths(ctx context.Context, folder *model.Folder, lines []string) ([]string, error) {
-	resolver, err := newPathResolver(ctx, s.ds)
-	if err != nil {
-		return nil, err
-	}
+func (r *pathResolver) resolvePaths(ctx context.Context, folder *model.Folder, lines []string) ([]string, error) {
 	results := make([]string, 0, len(lines))
 	for idx, line := range lines {
-		resolution := resolver.resolvePath(line, folder)
+		resolution := r.resolvePath(line, folder)
 		if !resolution.valid {
 			log.Warn(ctx, "Path in playlist not found in any library", "path", line, "line", idx)
@@ -425,123 +267,3 @@ func (s *playlists) resolvePaths(ctx context.Context, folder *model.Folder, line
 	return results, nil
 }
-
-func (s *playlists) updatePlaylist(ctx context.Context, newPls *model.Playlist) error {
-	owner, _ := request.UserFrom(ctx)
-	// Try to find existing playlist by path. Since filesystem normalization differs across
-	// platforms (macOS uses NFD, Linux/Windows use NFC), we try both forms to match
-	// playlists that may have been imported on a different platform.
-	pls, err := s.ds.Playlist(ctx).FindByPath(newPls.Path)
-	if errors.Is(err, model.ErrNotFound) {
-		// Try alternate normalization form
-		altPath := norm.NFD.String(newPls.Path)
-		if altPath == newPls.Path {
-			altPath = norm.NFC.String(newPls.Path)
-		}
-		if altPath != newPls.Path {
-			pls, err = s.ds.Playlist(ctx).FindByPath(altPath)
-		}
-	}
-	if err != nil && !errors.Is(err, model.ErrNotFound) {
-		return err
-	}
-	if err == nil && !pls.Sync {
-		log.Debug(ctx, "Playlist already imported and not synced", "playlist", pls.Name, "path", pls.Path)
-		return nil
-	}
-	if err == nil {
-		log.Info(ctx, "Updating synced playlist", "playlist", pls.Name, "path", newPls.Path)
-		newPls.ID = pls.ID
-		newPls.Name = pls.Name
-		newPls.Comment = pls.Comment
-		newPls.OwnerID = pls.OwnerID
-		newPls.Public = pls.Public
-		newPls.EvaluatedAt = &time.Time{}
-	} else {
-		log.Info(ctx, "Adding synced playlist", "playlist", newPls.Name, "path", newPls.Path, "owner", owner.UserName)
-		newPls.OwnerID = owner.ID
-		// For NSP files, Public may already be set from the file; for M3U, use server default
-		if !newPls.IsSmartPlaylist() {
-			newPls.Public = conf.Server.DefaultPlaylistPublicVisibility
-		}
-	}
-	return s.ds.Playlist(ctx).Put(newPls)
-}
-
-func (s *playlists) Update(ctx context.Context, playlistID string,
-	name *string, comment *string, public *bool,
-	idsToAdd []string, idxToRemove []int) error {
-	needsInfoUpdate := name != nil || comment != nil || public != nil
-	needsTrackRefresh := len(idxToRemove) > 0
-
-	return s.ds.WithTxImmediate(func(tx model.DataStore) error {
-		var pls *model.Playlist
-		var err error
-		repo := tx.Playlist(ctx)
-		tracks := repo.Tracks(playlistID, true)
-		if tracks == nil {
-			return fmt.Errorf("%w: playlist '%s'", model.ErrNotFound, playlistID)
-		}
-		if needsTrackRefresh {
-			pls, err = repo.GetWithTracks(playlistID, true, false)
-			pls.RemoveTracks(idxToRemove)
-			pls.AddMediaFilesByID(idsToAdd)
-		} else {
-			if len(idsToAdd) > 0 {
-				_, err = tracks.Add(idsToAdd)
-				if err != nil {
-					return err
-				}
-			}
-			if needsInfoUpdate {
-				pls, err = repo.Get(playlistID)
-			}
-		}
-		if err != nil {
-			return err
-		}
-		if !needsTrackRefresh && !needsInfoUpdate {
-			return nil
-		}
-
-		if name != nil {
-			pls.Name = *name
-		}
-		if comment != nil {
-			pls.Comment = *comment
-		}
-		if public != nil {
-			pls.Public = *public
-		}
-		// Special case: The playlist is now empty
-		if len(idxToRemove) > 0 && len(pls.Tracks) == 0 {
-			if err = tracks.DeleteAll(); err != nil {
-				return err
-			}
-		}
-		return repo.Put(pls)
-	})
-}
-
-type nspFile struct {
-	criteria.Criteria
-	Name    string `json:"name"`
-	Comment string `json:"comment"`
-	Public  *bool  `json:"public"`
-}
-
-func (i *nspFile) UnmarshalJSON(data []byte) error {
-	m := map[string]any{}
-	err := json.Unmarshal(data, &m)
-	if err != nil {
-		return err
-	}
-	i.Name, _ = m["name"].(string)
-	i.Comment, _ = m["comment"].(string)
-	if public, ok := m["public"].(bool); ok {
-		i.Public = &public
-	}
-	return json.Unmarshal(data, &i.Criteria)
-}


@@ -1,4 +1,4 @@
-package core
+package playlists

 import (
 	"context"
@@ -214,6 +214,7 @@ var _ = Describe("pathResolver", func() {
 	})

 	Describe("resolvePath", func() {
+		Context("basic", func() {
 			It("resolves absolute paths", func() {
 				resolution := resolver.resolvePath("/music/artist/album/track.mp3", nil)
@@ -244,8 +245,7 @@ var _ = Describe("pathResolver", func() {
 		})
 	})

-	Describe("resolvePath", func() {
-		Context("With absolute paths", func() {
+	Context("cross-library", func() {
 		It("resolves path within a library", func() {
 			resolution := resolver.resolvePath("/music/track.mp3", nil)

core/playlists/parse_nsp.go Normal file

@@ -0,0 +1,103 @@
package playlists

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/RaveNoX/go-jsoncommentstrip"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/criteria"
)

func (s *playlists) newSyncedPlaylist(baseDir string, playlistFile string) (*model.Playlist, error) {
	playlistPath := filepath.Join(baseDir, playlistFile)
	info, err := os.Stat(playlistPath)
	if err != nil {
		return nil, err
	}
	var extension = filepath.Ext(playlistFile)
	var name = playlistFile[0 : len(playlistFile)-len(extension)]
	pls := &model.Playlist{
		Name:      name,
		Comment:   fmt.Sprintf("Auto-imported from '%s'", playlistFile),
		Public:    false,
		Path:      playlistPath,
		Sync:      true,
		UpdatedAt: info.ModTime(),
	}
	return pls, nil
}

func getPositionFromOffset(data []byte, offset int64) (line, column int) {
	line = 1
	for _, b := range data[:offset] {
		if b == '\n' {
			line++
			column = 1
		} else {
			column++
		}
	}
	return
}

func (s *playlists) parseNSP(_ context.Context, pls *model.Playlist, reader io.Reader) error {
	nsp := &nspFile{}
	reader = io.LimitReader(reader, 100*1024) // Limit to 100KB
	reader = jsoncommentstrip.NewReader(reader)
	input, err := io.ReadAll(reader)
	if err != nil {
		return fmt.Errorf("reading SmartPlaylist: %w", err)
	}
	err = json.Unmarshal(input, nsp)
	if err != nil {
		var syntaxErr *json.SyntaxError
		if errors.As(err, &syntaxErr) {
			line, col := getPositionFromOffset(input, syntaxErr.Offset)
			return fmt.Errorf("JSON syntax error in SmartPlaylist at line %d, column %d: %w", line, col, err)
		}
		return fmt.Errorf("JSON parsing error in SmartPlaylist: %w", err)
	}
	pls.Rules = &nsp.Criteria
	if nsp.Name != "" {
		pls.Name = nsp.Name
	}
	if nsp.Comment != "" {
		pls.Comment = nsp.Comment
	}
	if nsp.Public != nil {
		pls.Public = *nsp.Public
	} else {
		pls.Public = conf.Server.DefaultPlaylistPublicVisibility
	}
	return nil
}

type nspFile struct {
	criteria.Criteria
	Name    string `json:"name"`
	Comment string `json:"comment"`
	Public  *bool  `json:"public"`
}

func (i *nspFile) UnmarshalJSON(data []byte) error {
	m := map[string]any{}
	err := json.Unmarshal(data, &m)
	if err != nil {
		return err
	}
	i.Name, _ = m["name"].(string)
	i.Comment, _ = m["comment"].(string)
	if public, ok := m["public"].(bool); ok {
		i.Public = &public
	}
	return json.Unmarshal(data, &i.Criteria)
}
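getPositionFromOffset converts the byte offset carried by a json.SyntaxError into a line and column for a friendlier error message. A runnable sketch of that flow, reusing the helper exactly as it appears in parse_nsp.go:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// getPositionFromOffset mirrors the helper in parse_nsp.go: it walks the
// input up to the byte offset, counting newlines to derive line and column.
func getPositionFromOffset(data []byte, offset int64) (line, column int) {
	line = 1
	for _, b := range data[:offset] {
		if b == '\n' {
			line++
			column = 1
		} else {
			column++
		}
	}
	return
}

func main() {
	input := []byte("{\n  \"name\": \"Bad\",\n  \"all\": [INVALID]\n}")
	var v map[string]any
	err := json.Unmarshal(input, &v)

	// json.SyntaxError exposes the byte offset where decoding failed.
	var syntaxErr *json.SyntaxError
	if errors.As(err, &syntaxErr) {
		line, col := getPositionFromOffset(input, syntaxErr.Offset)
		fmt.Printf("syntax error at line %d, column %d\n", line, col)
	}
}
```

For the input above the error lands on the third line, which is what the "JSON syntax error in SmartPlaylist at line %d, column %d" message surfaces to the user.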


@@ -0,0 +1,213 @@
package playlists

import (
	"context"
	"os"
	"path/filepath"
	"strings"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/criteria"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("parseNSP", func() {
	var s *playlists
	ctx := context.Background()

	BeforeEach(func() {
		s = &playlists{}
	})

	It("parses a well-formed NSP with all fields", func() {
		nsp := `{
			"name": "My Smart Playlist",
			"comment": "A test playlist",
			"public": true,
			"all": [{"is": {"loved": true}}],
			"sort": "title",
			"order": "asc",
			"limit": 50
		}`
		pls := &model.Playlist{Name: "default-name"}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Name).To(Equal("My Smart Playlist"))
		Expect(pls.Comment).To(Equal("A test playlist"))
		Expect(pls.Public).To(BeTrue())
		Expect(pls.Rules).ToNot(BeNil())
		Expect(pls.Rules.Sort).To(Equal("title"))
		Expect(pls.Rules.Order).To(Equal("asc"))
		Expect(pls.Rules.Limit).To(Equal(50))
		Expect(pls.Rules.Expression).To(BeAssignableToTypeOf(criteria.All{}))
	})

	It("keeps existing name when NSP has no name field", func() {
		nsp := `{"all": [{"is": {"loved": true}}]}`
		pls := &model.Playlist{Name: "Original Name"}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Name).To(Equal("Original Name"))
	})

	It("keeps existing comment when NSP has no comment field", func() {
		nsp := `{"all": [{"is": {"loved": true}}]}`
		pls := &model.Playlist{Comment: "Original Comment"}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Comment).To(Equal("Original Comment"))
	})

	It("strips JSON comments before parsing", func() {
		nsp := `{
			// Line comment
			"name": "Commented Playlist",
			/* Block comment */
			"all": [{"is": {"loved": true}}]
		}`
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Name).To(Equal("Commented Playlist"))
	})

	It("uses server default when public field is absent", func() {
		DeferCleanup(configtest.SetupConfig())
		conf.Server.DefaultPlaylistPublicVisibility = true
		nsp := `{"all": [{"is": {"loved": true}}]}`
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Public).To(BeTrue())
	})

	It("honors explicit public: false over server default", func() {
		DeferCleanup(configtest.SetupConfig())
		conf.Server.DefaultPlaylistPublicVisibility = true
		nsp := `{"public": false, "all": [{"is": {"loved": true}}]}`
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Public).To(BeFalse())
	})

	It("returns a syntax error with line and column info", func() {
		nsp := "{\n  \"name\": \"Bad\",\n  \"all\": [INVALID]\n}"
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).To(HaveOccurred())
		Expect(err.Error()).To(ContainSubstring("JSON syntax error in SmartPlaylist"))
		Expect(err.Error()).To(MatchRegexp(`line \d+, column \d+`))
	})

	It("returns a parsing error for completely invalid JSON", func() {
		nsp := `not json at all`
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).To(HaveOccurred())
		Expect(err.Error()).To(ContainSubstring("SmartPlaylist"))
	})

	It("gracefully handles non-string name field", func() {
		nsp := `{"name": 123, "all": [{"is": {"loved": true}}]}`
		pls := &model.Playlist{Name: "Original"}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		// Type assertion in UnmarshalJSON fails silently; name stays as original
		Expect(pls.Name).To(Equal("Original"))
	})

	It("parses criteria with multiple rules", func() {
		nsp := `{
			"all": [
				{"is": {"loved": true}},
				{"contains": {"title": "rock"}}
			],
			"sort": "lastPlayed",
			"order": "desc",
			"limit": 100
		}`
		pls := &model.Playlist{}
		err := s.parseNSP(ctx, pls, strings.NewReader(nsp))
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Rules).ToNot(BeNil())
		Expect(pls.Rules.Sort).To(Equal("lastPlayed"))
		Expect(pls.Rules.Order).To(Equal("desc"))
		Expect(pls.Rules.Limit).To(Equal(100))
	})
})

var _ = Describe("getPositionFromOffset", func() {
	It("returns correct position on first line", func() {
		data := []byte("hello world")
		line, col := getPositionFromOffset(data, 5)
		Expect(line).To(Equal(1))
		Expect(col).To(Equal(5))
	})

	It("returns correct position after newlines", func() {
		data := []byte("line1\nline2\nline3")
		// Offsets: l(0) i(1) n(2) e(3) 1(4) \n(5) l(6) i(7) n(8)
		line, col := getPositionFromOffset(data, 8)
		Expect(line).To(Equal(2))
		Expect(col).To(Equal(3))
	})

	It("returns correct position at start of new line", func() {
		data := []byte("line1\nline2")
		// After \n at offset 5, col resets to 1; offset 6 is 'l' -> col=1
		line, col := getPositionFromOffset(data, 6)
		Expect(line).To(Equal(2))
		Expect(col).To(Equal(1))
	})

	It("handles multiple newlines", func() {
		data := []byte("a\nb\nc\nd")
		// a(0) \n(1) b(2) \n(3) c(4) \n(5) d(6)
		line, col := getPositionFromOffset(data, 6)
		Expect(line).To(Equal(4))
		Expect(col).To(Equal(1))
	})
})

var _ = Describe("newSyncedPlaylist", func() {
	var s *playlists

	BeforeEach(func() {
		s = &playlists{}
	})

	It("creates a synced playlist with correct attributes", func() {
		tmpDir := GinkgoT().TempDir()
		Expect(os.WriteFile(filepath.Join(tmpDir, "test.m3u"), []byte("content"), 0600)).To(Succeed())
		pls, err := s.newSyncedPlaylist(tmpDir, "test.m3u")
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Name).To(Equal("test"))
		Expect(pls.Comment).To(Equal("Auto-imported from 'test.m3u'"))
		Expect(pls.Public).To(BeFalse())
		Expect(pls.Path).To(Equal(filepath.Join(tmpDir, "test.m3u")))
		Expect(pls.Sync).To(BeTrue())
		Expect(pls.UpdatedAt).ToNot(BeZero())
	})

	It("strips extension from filename to derive name", func() {
		tmpDir := GinkgoT().TempDir()
		Expect(os.WriteFile(filepath.Join(tmpDir, "My Favorites.nsp"), []byte("{}"), 0600)).To(Succeed())
		pls, err := s.newSyncedPlaylist(tmpDir, "My Favorites.nsp")
		Expect(err).ToNot(HaveOccurred())
		Expect(pls.Name).To(Equal("My Favorites"))
	})

	It("returns error for non-existent file", func() {
		tmpDir := GinkgoT().TempDir()
		_, err := s.newSyncedPlaylist(tmpDir, "nonexistent.m3u")
		Expect(err).To(HaveOccurred())
	})
})
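The InPath helper in the new core/playlists/playlists.go splits conf.Server.PlaylistsPath on the OS list separator and matches each glob against the folder path relative to the library root using doublestar.Match. A simplified stdlib-only sketch of that loop; path.Match stands in for doublestar (so "**" patterns are not supported here) and the ':' separator is hardcoded instead of filepath.ListSeparator:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// inPath sketches InPath's matching loop: split the configured pattern list
// and report whether any glob matches the relative folder path.
// NOTE: this is a simplified stand-in; the real code uses doublestar.Match
// (which supports "**") and filepath.ListSeparator (';' on Windows).
func inPath(playlistsPath, rel string) bool {
	if playlistsPath == "" {
		return true // empty config means playlists are accepted anywhere
	}
	for _, pattern := range strings.Split(playlistsPath, ":") {
		if match, _ := path.Match(pattern, rel); match {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(inPath("", "any/folder"))               // empty config matches
	fmt.Println(inPath("other:playlists", "playlists")) // second pattern matches
	fmt.Println(inPath("other", "playlists"))           // no pattern matches
}
```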

core/playlists/playlists.go Normal file

@@ -0,0 +1,265 @@
package playlists
import (
"context"
"io"
"path/filepath"
"strconv"
"strings"
"github.com/bmatcuk/doublestar/v4"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
)
type Playlists interface {
// Reads
GetAll(ctx context.Context, options ...model.QueryOptions) (model.Playlists, error)
Get(ctx context.Context, id string) (*model.Playlist, error)
GetWithTracks(ctx context.Context, id string) (*model.Playlist, error)
GetPlaylists(ctx context.Context, mediaFileId string) (model.Playlists, error)
// Mutations
Create(ctx context.Context, playlistId string, name string, ids []string) (string, error)
Delete(ctx context.Context, id string) error
Update(ctx context.Context, playlistID string, name *string, comment *string, public *bool, idsToAdd []string, idxToRemove []int) error
// Track management
AddTracks(ctx context.Context, playlistID string, ids []string) (int, error)
AddAlbums(ctx context.Context, playlistID string, albumIds []string) (int, error)
AddArtists(ctx context.Context, playlistID string, artistIds []string) (int, error)
AddDiscs(ctx context.Context, playlistID string, discs []model.DiscID) (int, error)
RemoveTracks(ctx context.Context, playlistID string, trackIds []string) error
ReorderTrack(ctx context.Context, playlistID string, pos int, newPos int) error
// Import
ImportFile(ctx context.Context, folder *model.Folder, filename string) (*model.Playlist, error)
ImportM3U(ctx context.Context, reader io.Reader) (*model.Playlist, error)
// REST adapters (follows Share/Library pattern)
NewRepository(ctx context.Context) rest.Repository
TracksRepository(ctx context.Context, playlistId string, refreshSmartPlaylist bool) rest.Repository
}
type playlists struct {
ds model.DataStore
}
func NewPlaylists(ds model.DataStore) Playlists {
return &playlists{ds: ds}
}
func InPath(folder model.Folder) bool {
if conf.Server.PlaylistsPath == "" {
return true
}
rel, _ := filepath.Rel(folder.LibraryPath, folder.AbsolutePath())
for path := range strings.SplitSeq(conf.Server.PlaylistsPath, string(filepath.ListSeparator)) {
if match, _ := doublestar.Match(path, rel); match {
return true
}
}
return false
}
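InPath treats `conf.Server.PlaylistsPath` as a list of glob patterns (separated by `filepath.ListSeparator`) matched against the folder's path relative to its library root, with an empty setting meaning "everywhere". A minimal sketch of that loop, using the standard library's `path.Match` as a stand-in for `doublestar.Match` (note: `path.Match` does not support doublestar's `**` wildcard, and `":"` is assumed as the Unix list separator):

```go
package main

import (
	"path"
	"strings"
)

// inPlaylistsPath approximates InPath's core loop: an empty
// configuration matches every folder; otherwise each ':'-separated
// pattern is matched against the library-relative path.
func inPlaylistsPath(playlistsPath, rel string) bool {
	if playlistsPath == "" {
		return true // empty config: every folder is eligible
	}
	for _, pattern := range strings.Split(playlistsPath, ":") {
		if ok, _ := path.Match(pattern, rel); ok {
			return true
		}
	}
	return false
}

func main() {
	println(inPlaylistsPath("playlists/*:imports", "playlists/rock")) // true
	println(inPlaylistsPath("playlists/*", "music/rock"))             // false
}
```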
// --- Read operations ---
func (s *playlists) GetAll(ctx context.Context, options ...model.QueryOptions) (model.Playlists, error) {
return s.ds.Playlist(ctx).GetAll(options...)
}
func (s *playlists) Get(ctx context.Context, id string) (*model.Playlist, error) {
return s.ds.Playlist(ctx).Get(id)
}
func (s *playlists) GetWithTracks(ctx context.Context, id string) (*model.Playlist, error) {
return s.ds.Playlist(ctx).GetWithTracks(id, true, false)
}
func (s *playlists) GetPlaylists(ctx context.Context, mediaFileId string) (model.Playlists, error) {
return s.ds.Playlist(ctx).GetPlaylists(mediaFileId)
}
// --- Mutation operations ---
// Create creates a new playlist (when name is provided) or replaces tracks on an existing
// playlist (when playlistId is provided). This matches the Subsonic createPlaylist semantics.
func (s *playlists) Create(ctx context.Context, playlistId string, name string, ids []string) (string, error) {
usr, _ := request.UserFrom(ctx)
err := s.ds.WithTxImmediate(func(tx model.DataStore) error {
var pls *model.Playlist
var err error
if playlistId != "" {
pls, err = tx.Playlist(ctx).Get(playlistId)
if err != nil {
return err
}
if pls.IsSmartPlaylist() {
return model.ErrNotAuthorized
}
if !usr.IsAdmin && pls.OwnerID != usr.ID {
return model.ErrNotAuthorized
}
} else {
pls = &model.Playlist{Name: name}
pls.OwnerID = usr.ID
}
pls.Tracks = nil
pls.AddMediaFilesByID(ids)
err = tx.Playlist(ctx).Put(pls)
playlistId = pls.ID
return err
})
return playlistId, err
}
func (s *playlists) Delete(ctx context.Context, id string) error {
if _, err := s.checkWritable(ctx, id); err != nil {
return err
}
return s.ds.Playlist(ctx).Delete(id)
}
func (s *playlists) Update(ctx context.Context, playlistID string,
name *string, comment *string, public *bool,
idsToAdd []string, idxToRemove []int) error {
var pls *model.Playlist
var err error
hasTrackChanges := len(idsToAdd) > 0 || len(idxToRemove) > 0
if hasTrackChanges {
pls, err = s.checkTracksEditable(ctx, playlistID)
} else {
pls, err = s.checkWritable(ctx, playlistID)
}
if err != nil {
return err
}
return s.ds.WithTxImmediate(func(tx model.DataStore) error {
repo := tx.Playlist(ctx)
if len(idxToRemove) > 0 {
tracksRepo := repo.Tracks(playlistID, false)
// Convert 0-based indices to 1-based position IDs and delete them directly,
// avoiding the need to load all tracks into memory.
positions := make([]string, len(idxToRemove))
for i, idx := range idxToRemove {
positions[i] = strconv.Itoa(idx + 1)
}
if err := tracksRepo.Delete(positions...); err != nil {
return err
}
if len(idsToAdd) > 0 {
if _, err := tracksRepo.Add(idsToAdd); err != nil {
return err
}
}
return s.updateMetadata(ctx, tx, pls, name, comment, public)
}
if len(idsToAdd) > 0 {
if _, err := repo.Tracks(playlistID, false).Add(idsToAdd); err != nil {
return err
}
}
if name == nil && comment == nil && public == nil {
return nil
}
// Reuse the playlist from checkWritable (no tracks loaded, so Put only refreshes counters)
return s.updateMetadata(ctx, tx, pls, name, comment, public)
})
}
// --- Permission helpers ---
// checkWritable fetches the playlist and verifies the current user can modify it.
func (s *playlists) checkWritable(ctx context.Context, id string) (*model.Playlist, error) {
pls, err := s.ds.Playlist(ctx).Get(id)
if err != nil {
return nil, err
}
usr, _ := request.UserFrom(ctx)
if !usr.IsAdmin && pls.OwnerID != usr.ID {
return nil, model.ErrNotAuthorized
}
return pls, nil
}
// checkTracksEditable verifies the user can modify tracks (ownership + not smart playlist).
func (s *playlists) checkTracksEditable(ctx context.Context, playlistID string) (*model.Playlist, error) {
pls, err := s.checkWritable(ctx, playlistID)
if err != nil {
return nil, err
}
if pls.IsSmartPlaylist() {
return nil, model.ErrNotAuthorized
}
return pls, nil
}
// updateMetadata applies optional metadata changes to a playlist and persists it.
// Accepts a DataStore parameter so it can be used inside transactions.
// The caller is responsible for permission checks.
func (s *playlists) updateMetadata(ctx context.Context, ds model.DataStore, pls *model.Playlist, name *string, comment *string, public *bool) error {
if name != nil {
pls.Name = *name
}
if comment != nil {
pls.Comment = *comment
}
if public != nil {
pls.Public = *public
}
return ds.Playlist(ctx).Put(pls)
}
// --- Track management operations ---
func (s *playlists) AddTracks(ctx context.Context, playlistID string, ids []string) (int, error) {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return 0, err
}
return s.ds.Playlist(ctx).Tracks(playlistID, false).Add(ids)
}
func (s *playlists) AddAlbums(ctx context.Context, playlistID string, albumIds []string) (int, error) {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return 0, err
}
return s.ds.Playlist(ctx).Tracks(playlistID, false).AddAlbums(albumIds)
}
func (s *playlists) AddArtists(ctx context.Context, playlistID string, artistIds []string) (int, error) {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return 0, err
}
return s.ds.Playlist(ctx).Tracks(playlistID, false).AddArtists(artistIds)
}
func (s *playlists) AddDiscs(ctx context.Context, playlistID string, discs []model.DiscID) (int, error) {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return 0, err
}
return s.ds.Playlist(ctx).Tracks(playlistID, false).AddDiscs(discs)
}
func (s *playlists) RemoveTracks(ctx context.Context, playlistID string, trackIds []string) error {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return err
}
return s.ds.WithTx(func(tx model.DataStore) error {
return tx.Playlist(ctx).Tracks(playlistID, false).Delete(trackIds...)
})
}
func (s *playlists) ReorderTrack(ctx context.Context, playlistID string, pos int, newPos int) error {
if _, err := s.checkTracksEditable(ctx, playlistID); err != nil {
return err
}
return s.ds.WithTx(func(tx model.DataStore) error {
return tx.Playlist(ctx).Tracks(playlistID, false).Reorder(pos, newPos)
})
}

View File

@@ -0,0 +1,17 @@
package playlists_test
import (
"testing"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestPlaylists(t *testing.T) {
tests.Init(t, false)
log.SetLevel(log.LevelFatal)
RegisterFailHandler(Fail)
RunSpecs(t, "Playlists Suite")
}

View File

@@ -0,0 +1,297 @@
package playlists_test
import (
"context"
"github.com/navidrome/navidrome/core/playlists"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/criteria"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Playlists", func() {
var ds *tests.MockDataStore
var ps playlists.Playlists
var mockPlsRepo *tests.MockPlaylistRepo
ctx := context.Background()
BeforeEach(func() {
mockPlsRepo = tests.CreateMockPlaylistRepo()
ds = &tests.MockDataStore{
MockedPlaylist: mockPlsRepo,
MockedLibrary: &tests.MockLibraryRepo{},
}
ctx = request.WithUser(ctx, model.User{ID: "123"})
})
Describe("Delete", func() {
var mockTracks *tests.MockPlaylistTrackRepo
BeforeEach(func() {
mockTracks = &tests.MockPlaylistTrackRepo{AddCount: 3}
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
}
mockPlsRepo.TracksRepo = mockTracks
ps = playlists.NewPlaylists(ds)
})
It("allows owner to delete their playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.Delete(ctx, "pls-1")
Expect(err).ToNot(HaveOccurred())
Expect(mockPlsRepo.Deleted).To(ContainElement("pls-1"))
})
It("allows admin to delete any playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "admin-1", IsAdmin: true})
err := ps.Delete(ctx, "pls-1")
Expect(err).ToNot(HaveOccurred())
Expect(mockPlsRepo.Deleted).To(ContainElement("pls-1"))
})
It("denies non-owner, non-admin from deleting", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
err := ps.Delete(ctx, "pls-1")
Expect(err).To(MatchError(model.ErrNotAuthorized))
Expect(mockPlsRepo.Deleted).To(BeEmpty())
})
It("returns error when playlist not found", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.Delete(ctx, "nonexistent")
Expect(err).To(Equal(model.ErrNotFound))
})
})
Describe("Create", func() {
BeforeEach(func() {
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "Existing", OwnerID: "user-1"},
"pls-2": {ID: "pls-2", Name: "Other's", OwnerID: "other-user"},
"pls-smart": {ID: "pls-smart", Name: "Smart", OwnerID: "user-1",
Rules: &criteria.Criteria{Expression: criteria.Contains{"title": "test"}}},
}
ps = playlists.NewPlaylists(ds)
})
It("creates a new playlist with owner set from context", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
id, err := ps.Create(ctx, "", "New Playlist", []string{"song-1", "song-2"})
Expect(err).ToNot(HaveOccurred())
Expect(id).ToNot(BeEmpty())
Expect(mockPlsRepo.Last.Name).To(Equal("New Playlist"))
Expect(mockPlsRepo.Last.OwnerID).To(Equal("user-1"))
})
It("replaces tracks on existing playlist when owner matches", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
id, err := ps.Create(ctx, "pls-1", "", []string{"song-3"})
Expect(err).ToNot(HaveOccurred())
Expect(id).To(Equal("pls-1"))
Expect(mockPlsRepo.Last.Tracks).To(HaveLen(1))
})
It("allows admin to replace tracks on any playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "admin-1", IsAdmin: true})
id, err := ps.Create(ctx, "pls-2", "", []string{"song-3"})
Expect(err).ToNot(HaveOccurred())
Expect(id).To(Equal("pls-2"))
})
It("denies non-owner, non-admin from replacing tracks on existing playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
_, err := ps.Create(ctx, "pls-2", "", []string{"song-3"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("returns error when existing playlistId not found", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
_, err := ps.Create(ctx, "nonexistent", "", []string{"song-1"})
Expect(err).To(Equal(model.ErrNotFound))
})
It("denies replacing tracks on a smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
_, err := ps.Create(ctx, "pls-smart", "", []string{"song-1"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
})
Describe("Update", func() {
var mockTracks *tests.MockPlaylistTrackRepo
BeforeEach(func() {
mockTracks = &tests.MockPlaylistTrackRepo{AddCount: 2}
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
"pls-other": {ID: "pls-other", Name: "Other's", OwnerID: "other-user"},
"pls-smart": {ID: "pls-smart", Name: "Smart", OwnerID: "user-1",
Rules: &criteria.Criteria{Expression: criteria.Contains{"title": "test"}}},
}
mockPlsRepo.TracksRepo = mockTracks
ps = playlists.NewPlaylists(ds)
})
It("allows owner to update their playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
newName := "Updated Name"
err := ps.Update(ctx, "pls-1", &newName, nil, nil, nil, nil)
Expect(err).ToNot(HaveOccurred())
})
It("allows admin to update any playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "admin-1", IsAdmin: true})
newName := "Updated Name"
err := ps.Update(ctx, "pls-other", &newName, nil, nil, nil, nil)
Expect(err).ToNot(HaveOccurred())
})
It("denies non-owner, non-admin from updating", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
newName := "Updated Name"
err := ps.Update(ctx, "pls-1", &newName, nil, nil, nil, nil)
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("returns error when playlist not found", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
newName := "Updated Name"
err := ps.Update(ctx, "nonexistent", &newName, nil, nil, nil, nil)
Expect(err).To(Equal(model.ErrNotFound))
})
It("denies adding tracks to a smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.Update(ctx, "pls-smart", nil, nil, nil, []string{"song-1"}, nil)
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("denies removing tracks from a smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.Update(ctx, "pls-smart", nil, nil, nil, nil, []int{0})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("allows metadata updates on a smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
newName := "Updated Smart"
err := ps.Update(ctx, "pls-smart", &newName, nil, nil, nil, nil)
Expect(err).ToNot(HaveOccurred())
})
})
Describe("AddTracks", func() {
var mockTracks *tests.MockPlaylistTrackRepo
BeforeEach(func() {
mockTracks = &tests.MockPlaylistTrackRepo{AddCount: 2}
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
"pls-smart": {ID: "pls-smart", Name: "Smart", OwnerID: "user-1",
Rules: &criteria.Criteria{Expression: criteria.Contains{"title": "test"}}},
"pls-other": {ID: "pls-other", Name: "Other's", OwnerID: "other-user"},
}
mockPlsRepo.TracksRepo = mockTracks
ps = playlists.NewPlaylists(ds)
})
It("allows owner to add tracks", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
count, err := ps.AddTracks(ctx, "pls-1", []string{"song-1", "song-2"})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(2))
Expect(mockTracks.AddedIds).To(ConsistOf("song-1", "song-2"))
})
It("allows admin to add tracks to any playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "admin-1", IsAdmin: true})
count, err := ps.AddTracks(ctx, "pls-other", []string{"song-1"})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(2))
})
It("denies non-owner, non-admin", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
_, err := ps.AddTracks(ctx, "pls-1", []string{"song-1"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("denies editing smart playlists", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
_, err := ps.AddTracks(ctx, "pls-smart", []string{"song-1"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("returns error when playlist not found", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
_, err := ps.AddTracks(ctx, "nonexistent", []string{"song-1"})
Expect(err).To(Equal(model.ErrNotFound))
})
})
Describe("RemoveTracks", func() {
var mockTracks *tests.MockPlaylistTrackRepo
BeforeEach(func() {
mockTracks = &tests.MockPlaylistTrackRepo{}
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
"pls-smart": {ID: "pls-smart", Name: "Smart", OwnerID: "user-1",
Rules: &criteria.Criteria{Expression: criteria.Contains{"title": "test"}}},
}
mockPlsRepo.TracksRepo = mockTracks
ps = playlists.NewPlaylists(ds)
})
It("allows owner to remove tracks", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.RemoveTracks(ctx, "pls-1", []string{"track-1", "track-2"})
Expect(err).ToNot(HaveOccurred())
Expect(mockTracks.DeletedIds).To(ConsistOf("track-1", "track-2"))
})
It("denies on smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.RemoveTracks(ctx, "pls-smart", []string{"track-1"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
It("denies non-owner", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
err := ps.RemoveTracks(ctx, "pls-1", []string{"track-1"})
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
})
Describe("ReorderTrack", func() {
var mockTracks *tests.MockPlaylistTrackRepo
BeforeEach(func() {
mockTracks = &tests.MockPlaylistTrackRepo{}
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
"pls-smart": {ID: "pls-smart", Name: "Smart", OwnerID: "user-1",
Rules: &criteria.Criteria{Expression: criteria.Contains{"title": "test"}}},
}
mockPlsRepo.TracksRepo = mockTracks
ps = playlists.NewPlaylists(ds)
})
It("allows owner to reorder", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.ReorderTrack(ctx, "pls-1", 1, 3)
Expect(err).ToNot(HaveOccurred())
Expect(mockTracks.Reordered).To(BeTrue())
})
It("denies on smart playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
err := ps.ReorderTrack(ctx, "pls-smart", 1, 3)
Expect(err).To(MatchError(model.ErrNotAuthorized))
})
})
})

View File

@@ -0,0 +1,95 @@
package playlists
import (
"context"
"errors"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
)
// --- REST adapter (follows Share/Library pattern) ---
func (s *playlists) NewRepository(ctx context.Context) rest.Repository {
return &playlistRepositoryWrapper{
ctx: ctx,
PlaylistRepository: s.ds.Playlist(ctx),
service: s,
}
}
// playlistRepositoryWrapper wraps the playlist repository as a thin REST-to-service adapter.
// It satisfies rest.Repository through the embedded PlaylistRepository (via ResourceRepository),
// and rest.Persistable by delegating to service methods for all mutations.
type playlistRepositoryWrapper struct {
model.PlaylistRepository
ctx context.Context
service *playlists
}
func (r *playlistRepositoryWrapper) Save(entity any) (string, error) {
return r.service.savePlaylist(r.ctx, entity.(*model.Playlist))
}
func (r *playlistRepositoryWrapper) Update(id string, entity any, cols ...string) error {
return r.service.updatePlaylistEntity(r.ctx, id, entity.(*model.Playlist), cols...)
}
func (r *playlistRepositoryWrapper) Delete(id string) error {
err := r.service.Delete(r.ctx, id)
switch {
case errors.Is(err, model.ErrNotFound):
return rest.ErrNotFound
case errors.Is(err, model.ErrNotAuthorized):
return rest.ErrPermissionDenied
default:
return err
}
}
func (s *playlists) TracksRepository(ctx context.Context, playlistId string, refreshSmartPlaylist bool) rest.Repository {
repo := s.ds.Playlist(ctx)
tracks := repo.Tracks(playlistId, refreshSmartPlaylist)
if tracks == nil {
return nil
}
return tracks.(rest.Repository)
}
// savePlaylist creates a new playlist, assigning the owner from context.
func (s *playlists) savePlaylist(ctx context.Context, pls *model.Playlist) (string, error) {
usr, _ := request.UserFrom(ctx)
pls.OwnerID = usr.ID
pls.ID = "" // Force new creation
err := s.ds.Playlist(ctx).Put(pls)
if err != nil {
return "", err
}
return pls.ID, nil
}
// updatePlaylistEntity updates playlist metadata with permission checks.
// Used by the REST API wrapper.
func (s *playlists) updatePlaylistEntity(ctx context.Context, id string, entity *model.Playlist, cols ...string) error {
current, err := s.checkWritable(ctx, id)
if err != nil {
switch {
case errors.Is(err, model.ErrNotFound):
return rest.ErrNotFound
case errors.Is(err, model.ErrNotAuthorized):
return rest.ErrPermissionDenied
default:
return err
}
}
usr, _ := request.UserFrom(ctx)
if !usr.IsAdmin && entity.OwnerID != "" && entity.OwnerID != current.OwnerID {
return rest.ErrPermissionDenied
}
// Apply ownership change (admin only)
if entity.OwnerID != "" {
current.OwnerID = entity.OwnerID
}
return s.updateMetadata(ctx, s.ds, current, &entity.Name, &entity.Comment, &entity.Public)
}

View File

@@ -0,0 +1,120 @@
package playlists_test
import (
"context"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/core/playlists"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("REST Adapter", func() {
var ds *tests.MockDataStore
var ps playlists.Playlists
var mockPlsRepo *tests.MockPlaylistRepo
ctx := context.Background()
BeforeEach(func() {
mockPlsRepo = tests.CreateMockPlaylistRepo()
ds = &tests.MockDataStore{
MockedPlaylist: mockPlsRepo,
MockedLibrary: &tests.MockLibraryRepo{},
}
ctx = request.WithUser(ctx, model.User{ID: "123"})
})
Describe("NewRepository", func() {
var repo rest.Persistable
BeforeEach(func() {
mockPlsRepo.Data = map[string]*model.Playlist{
"pls-1": {ID: "pls-1", Name: "My Playlist", OwnerID: "user-1"},
}
ps = playlists.NewPlaylists(ds)
})
Describe("Save", func() {
It("sets the owner from the context user", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "New Playlist"}
id, err := repo.Save(pls)
Expect(err).ToNot(HaveOccurred())
Expect(id).ToNot(BeEmpty())
Expect(pls.OwnerID).To(Equal("user-1"))
})
It("forces a new creation by clearing ID", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{ID: "should-be-cleared", Name: "New"}
_, err := repo.Save(pls)
Expect(err).ToNot(HaveOccurred())
Expect(pls.ID).ToNot(Equal("should-be-cleared"))
})
})
Describe("Update", func() {
It("allows owner to update their playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "Updated"}
err := repo.Update("pls-1", pls)
Expect(err).ToNot(HaveOccurred())
})
It("allows admin to update any playlist", func() {
ctx = request.WithUser(ctx, model.User{ID: "admin-1", IsAdmin: true})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "Updated"}
err := repo.Update("pls-1", pls)
Expect(err).ToNot(HaveOccurred())
})
It("denies non-owner, non-admin", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "Updated"}
err := repo.Update("pls-1", pls)
Expect(err).To(Equal(rest.ErrPermissionDenied))
})
It("denies regular user from changing ownership", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "Updated", OwnerID: "other-user"}
err := repo.Update("pls-1", pls)
Expect(err).To(Equal(rest.ErrPermissionDenied))
})
It("returns rest.ErrNotFound when playlist doesn't exist", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
pls := &model.Playlist{Name: "Updated"}
err := repo.Update("nonexistent", pls)
Expect(err).To(Equal(rest.ErrNotFound))
})
})
Describe("Delete", func() {
It("delegates to service Delete with permission checks", func() {
ctx = request.WithUser(ctx, model.User{ID: "user-1", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
err := repo.Delete("pls-1")
Expect(err).ToNot(HaveOccurred())
Expect(mockPlsRepo.Deleted).To(ContainElement("pls-1"))
})
It("denies non-owner", func() {
ctx = request.WithUser(ctx, model.User{ID: "other-user", IsAdmin: false})
repo = ps.NewRepository(ctx).(rest.Persistable)
err := repo.Delete("pls-1")
Expect(err).To(Equal(rest.ErrPermissionDenied))
})
})
})
})

View File

@@ -7,6 +7,7 @@ import (
 	"github.com/navidrome/navidrome/core/ffmpeg"
 	"github.com/navidrome/navidrome/core/metrics"
 	"github.com/navidrome/navidrome/core/playback"
+	"github.com/navidrome/navidrome/core/playlists"
 	"github.com/navidrome/navidrome/core/scrobbler"
 )
@@ -16,7 +17,7 @@ var Set = wire.NewSet(
 	NewArchiver,
 	NewPlayers,
 	NewShare,
-	NewPlaylists,
+	playlists.NewPlaylists,
 	NewLibrary,
 	NewUser,
 	NewMaintenance,

View File

@@ -0,0 +1,391 @@
package migrations
import (
"context"
"database/sql"
"fmt"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upAddFts5Search, downAddFts5Search)
}
// stripPunct generates a SQL expression that strips common punctuation from a column or expression.
// Used during migration to approximate the Go normalizeForFTS function for bulk-populating search_normalized.
func stripPunct(col string) string {
return fmt.Sprintf(
`REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(%s, '.', ''), '/', ''), '-', ''), '''', ''), '&', ''), ',', '')`,
col,
)
}
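The nested REPLACE chain generated by stripPunct is equivalent to deleting six punctuation characters from the column value. A small Go sketch of the same transformation (an illustration of what the SQL computes, not the scanner's actual normalizeForFTS function, which may normalize further):

```go
package main

import (
	"fmt"
	"strings"
)

// stripPunctGo deletes the same six characters as the SQL REPLACE
// chain built by stripPunct: '.', '/', '-', '\'', '&', ','.
// None of the targets overlap, so a single-pass Replacer is
// equivalent to the nested REPLACE calls.
func stripPunctGo(s string) string {
	r := strings.NewReplacer(".", "", "/", "", "-", "", "'", "", "&", "", ",", "")
	return r.Replace(s)
}

func main() {
	fmt.Println(stripPunctGo("A.C./D-C's & Co.")) // punctuation removed, spaces kept
}
```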
func upAddFts5Search(ctx context.Context, tx *sql.Tx) error {
notice(tx, "Adding FTS5 full-text search indexes. This may take a moment on large libraries.")
// Step 1: Add search_participants and search_normalized columns to media_file, album, and artist
_, err := tx.ExecContext(ctx, `ALTER TABLE media_file ADD COLUMN search_participants TEXT NOT NULL DEFAULT ''`)
if err != nil {
return fmt.Errorf("adding search_participants to media_file: %w", err)
}
_, err = tx.ExecContext(ctx, `ALTER TABLE media_file ADD COLUMN search_normalized TEXT NOT NULL DEFAULT ''`)
if err != nil {
return fmt.Errorf("adding search_normalized to media_file: %w", err)
}
_, err = tx.ExecContext(ctx, `ALTER TABLE album ADD COLUMN search_participants TEXT NOT NULL DEFAULT ''`)
if err != nil {
return fmt.Errorf("adding search_participants to album: %w", err)
}
_, err = tx.ExecContext(ctx, `ALTER TABLE album ADD COLUMN search_normalized TEXT NOT NULL DEFAULT ''`)
if err != nil {
return fmt.Errorf("adding search_normalized to album: %w", err)
}
_, err = tx.ExecContext(ctx, `ALTER TABLE artist ADD COLUMN search_normalized TEXT NOT NULL DEFAULT ''`)
if err != nil {
return fmt.Errorf("adding search_normalized to artist: %w", err)
}
// Step 2: Populate search_participants from participants JSON.
// Extract all "name" values from the participants JSON structure.
// participants is a JSON object like: {"artist":[{"name":"...","id":"..."}],"albumartist":[...]}
// We use json_each + json_extract to flatten all names into a space-separated string.
_, err = tx.ExecContext(ctx, `
UPDATE media_file SET search_participants = COALESCE(
(SELECT group_concat(json_extract(je2.value, '$.name'), ' ')
FROM json_each(media_file.participants) AS je1,
json_each(je1.value) AS je2
WHERE json_extract(je2.value, '$.name') IS NOT NULL),
''
)
WHERE participants IS NOT NULL AND participants != '' AND participants != '{}'
`)
if err != nil {
return fmt.Errorf("populating media_file search_participants: %w", err)
}
_, err = tx.ExecContext(ctx, `
UPDATE album SET search_participants = COALESCE(
(SELECT group_concat(json_extract(je2.value, '$.name'), ' ')
FROM json_each(album.participants) AS je1,
json_each(je1.value) AS je2
WHERE json_extract(je2.value, '$.name') IS NOT NULL),
''
)
WHERE participants IS NOT NULL AND participants != '' AND participants != '{}'
`)
if err != nil {
return fmt.Errorf("populating album search_participants: %w", err)
}
// Step 2b: Populate search_normalized using SQL REPLACE chains for common punctuation.
// The Go code will compute the precise value on next scan; this is a best-effort approximation.
_, err = tx.ExecContext(ctx, fmt.Sprintf(`
UPDATE artist SET search_normalized = %s
WHERE name != %s`,
stripPunct("name"), stripPunct("name")))
if err != nil {
return fmt.Errorf("populating artist search_normalized: %w", err)
}
_, err = tx.ExecContext(ctx, fmt.Sprintf(`
UPDATE album SET search_normalized = TRIM(%s || ' ' || %s)
WHERE name != %s OR COALESCE(album_artist, '') != %s`,
stripPunct("name"), stripPunct("COALESCE(album_artist, '')"),
stripPunct("name"), stripPunct("COALESCE(album_artist, '')")))
if err != nil {
return fmt.Errorf("populating album search_normalized: %w", err)
}
_, err = tx.ExecContext(ctx, fmt.Sprintf(`
UPDATE media_file SET search_normalized =
TRIM(%s || ' ' || %s || ' ' || %s || ' ' || %s)
WHERE title != %s
OR COALESCE(album, '') != %s
OR COALESCE(artist, '') != %s
OR COALESCE(album_artist, '') != %s`,
stripPunct("title"), stripPunct("COALESCE(album, '')"),
stripPunct("COALESCE(artist, '')"), stripPunct("COALESCE(album_artist, '')"),
stripPunct("title"), stripPunct("COALESCE(album, '')"),
stripPunct("COALESCE(artist, '')"), stripPunct("COALESCE(album_artist, '')")))
if err != nil {
return fmt.Errorf("populating media_file search_normalized: %w", err)
}
// Step 3: Create FTS5 virtual tables
_, err = tx.ExecContext(ctx, `
CREATE VIRTUAL TABLE IF NOT EXISTS media_file_fts USING fts5(
title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized,
content='', content_rowid='rowid',
tokenize='unicode61 remove_diacritics 2'
)
`)
if err != nil {
return fmt.Errorf("creating media_file_fts: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE VIRTUAL TABLE IF NOT EXISTS album_fts USING fts5(
name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized,
content='', content_rowid='rowid',
tokenize='unicode61 remove_diacritics 2'
)
`)
if err != nil {
return fmt.Errorf("creating album_fts: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE VIRTUAL TABLE IF NOT EXISTS artist_fts USING fts5(
name, sort_artist_name, search_normalized,
content='', content_rowid='rowid',
tokenize='unicode61 remove_diacritics 2'
)
`)
if err != nil {
return fmt.Errorf("creating artist_fts: %w", err)
}
// Step 4: Bulk-populate FTS5 indexes from existing data
_, err = tx.ExecContext(ctx, `
INSERT INTO media_file_fts(rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized)
SELECT rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
COALESCE(disc_subtitle, ''), COALESCE(search_participants, ''),
COALESCE(search_normalized, '')
FROM media_file
`)
if err != nil {
return fmt.Errorf("populating media_file_fts: %w", err)
}
_, err = tx.ExecContext(ctx, `
INSERT INTO album_fts(rowid, name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized)
SELECT rowid, name, COALESCE(sort_album_name, ''), COALESCE(album_artist, ''),
COALESCE(search_participants, ''), COALESCE(discs, ''),
COALESCE(catalog_num, ''),
COALESCE((SELECT group_concat(json_extract(je.value, '$.value'), ' ')
FROM json_each(album.tags, '$.albumversion') AS je), ''),
COALESCE(search_normalized, '')
FROM album
`)
if err != nil {
return fmt.Errorf("populating album_fts: %w", err)
}
_, err = tx.ExecContext(ctx, `
INSERT INTO artist_fts(rowid, name, sort_artist_name, search_normalized)
SELECT rowid, name, COALESCE(sort_artist_name, ''), COALESCE(search_normalized, '')
FROM artist
`)
if err != nil {
return fmt.Errorf("populating artist_fts: %w", err)
}
// Step 5: Create triggers for media_file
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER media_file_fts_ai AFTER INSERT ON media_file BEGIN
INSERT INTO media_file_fts(rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized)
VALUES (NEW.rowid, NEW.title, NEW.album, NEW.artist, NEW.album_artist,
NEW.sort_title, NEW.sort_album_name, NEW.sort_artist_name, NEW.sort_album_artist_name,
COALESCE(NEW.disc_subtitle, ''), COALESCE(NEW.search_participants, ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating media_file_fts insert trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER media_file_fts_ad AFTER DELETE ON media_file BEGIN
INSERT INTO media_file_fts(media_file_fts, rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized)
VALUES ('delete', OLD.rowid, OLD.title, OLD.album, OLD.artist, OLD.album_artist,
OLD.sort_title, OLD.sort_album_name, OLD.sort_artist_name, OLD.sort_album_artist_name,
COALESCE(OLD.disc_subtitle, ''), COALESCE(OLD.search_participants, ''),
COALESCE(OLD.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating media_file_fts delete trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER media_file_fts_au AFTER UPDATE ON media_file
WHEN
OLD.title IS NOT NEW.title OR
OLD.album IS NOT NEW.album OR
OLD.artist IS NOT NEW.artist OR
OLD.album_artist IS NOT NEW.album_artist OR
OLD.sort_title IS NOT NEW.sort_title OR
OLD.sort_album_name IS NOT NEW.sort_album_name OR
OLD.sort_artist_name IS NOT NEW.sort_artist_name OR
OLD.sort_album_artist_name IS NOT NEW.sort_album_artist_name OR
OLD.disc_subtitle IS NOT NEW.disc_subtitle OR
OLD.search_participants IS NOT NEW.search_participants OR
OLD.search_normalized IS NOT NEW.search_normalized
BEGIN
INSERT INTO media_file_fts(media_file_fts, rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized)
VALUES ('delete', OLD.rowid, OLD.title, OLD.album, OLD.artist, OLD.album_artist,
OLD.sort_title, OLD.sort_album_name, OLD.sort_artist_name, OLD.sort_album_artist_name,
COALESCE(OLD.disc_subtitle, ''), COALESCE(OLD.search_participants, ''),
COALESCE(OLD.search_normalized, ''));
INSERT INTO media_file_fts(rowid, title, album, artist, album_artist,
sort_title, sort_album_name, sort_artist_name, sort_album_artist_name,
disc_subtitle, search_participants, search_normalized)
VALUES (NEW.rowid, NEW.title, NEW.album, NEW.artist, NEW.album_artist,
NEW.sort_title, NEW.sort_album_name, NEW.sort_artist_name, NEW.sort_album_artist_name,
COALESCE(NEW.disc_subtitle, ''), COALESCE(NEW.search_participants, ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating media_file_fts update trigger: %w", err)
}
// Step 6: Create triggers for album
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER album_fts_ai AFTER INSERT ON album BEGIN
INSERT INTO album_fts(rowid, name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized)
VALUES (NEW.rowid, NEW.name, COALESCE(NEW.sort_album_name, ''), COALESCE(NEW.album_artist, ''),
COALESCE(NEW.search_participants, ''), COALESCE(NEW.discs, ''),
COALESCE(NEW.catalog_num, ''),
COALESCE((SELECT group_concat(json_extract(je.value, '$.value'), ' ')
FROM json_each(NEW.tags, '$.albumversion') AS je), ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating album_fts insert trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER album_fts_ad AFTER DELETE ON album BEGIN
INSERT INTO album_fts(album_fts, rowid, name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized)
VALUES ('delete', OLD.rowid, OLD.name, COALESCE(OLD.sort_album_name, ''), COALESCE(OLD.album_artist, ''),
COALESCE(OLD.search_participants, ''), COALESCE(OLD.discs, ''),
COALESCE(OLD.catalog_num, ''),
COALESCE((SELECT group_concat(json_extract(je.value, '$.value'), ' ')
FROM json_each(OLD.tags, '$.albumversion') AS je), ''),
COALESCE(OLD.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating album_fts delete trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER album_fts_au AFTER UPDATE ON album
WHEN
OLD.name IS NOT NEW.name OR
OLD.sort_album_name IS NOT NEW.sort_album_name OR
OLD.album_artist IS NOT NEW.album_artist OR
OLD.search_participants IS NOT NEW.search_participants OR
OLD.discs IS NOT NEW.discs OR
OLD.catalog_num IS NOT NEW.catalog_num OR
OLD.tags IS NOT NEW.tags OR
OLD.search_normalized IS NOT NEW.search_normalized
BEGIN
INSERT INTO album_fts(album_fts, rowid, name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized)
VALUES ('delete', OLD.rowid, OLD.name, COALESCE(OLD.sort_album_name, ''), COALESCE(OLD.album_artist, ''),
COALESCE(OLD.search_participants, ''), COALESCE(OLD.discs, ''),
COALESCE(OLD.catalog_num, ''),
COALESCE((SELECT group_concat(json_extract(je.value, '$.value'), ' ')
FROM json_each(OLD.tags, '$.albumversion') AS je), ''),
COALESCE(OLD.search_normalized, ''));
INSERT INTO album_fts(rowid, name, sort_album_name, album_artist,
search_participants, discs, catalog_num, album_version, search_normalized)
VALUES (NEW.rowid, NEW.name, COALESCE(NEW.sort_album_name, ''), COALESCE(NEW.album_artist, ''),
COALESCE(NEW.search_participants, ''), COALESCE(NEW.discs, ''),
COALESCE(NEW.catalog_num, ''),
COALESCE((SELECT group_concat(json_extract(je.value, '$.value'), ' ')
FROM json_each(NEW.tags, '$.albumversion') AS je), ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating album_fts update trigger: %w", err)
}
// Step 7: Create triggers for artist
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER artist_fts_ai AFTER INSERT ON artist BEGIN
INSERT INTO artist_fts(rowid, name, sort_artist_name, search_normalized)
VALUES (NEW.rowid, NEW.name, COALESCE(NEW.sort_artist_name, ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating artist_fts insert trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER artist_fts_ad AFTER DELETE ON artist BEGIN
INSERT INTO artist_fts(artist_fts, rowid, name, sort_artist_name, search_normalized)
VALUES ('delete', OLD.rowid, OLD.name, COALESCE(OLD.sort_artist_name, ''),
COALESCE(OLD.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating artist_fts delete trigger: %w", err)
}
_, err = tx.ExecContext(ctx, `
CREATE TRIGGER artist_fts_au AFTER UPDATE ON artist
WHEN
OLD.name IS NOT NEW.name OR
OLD.sort_artist_name IS NOT NEW.sort_artist_name OR
OLD.search_normalized IS NOT NEW.search_normalized
BEGIN
INSERT INTO artist_fts(artist_fts, rowid, name, sort_artist_name, search_normalized)
VALUES ('delete', OLD.rowid, OLD.name, COALESCE(OLD.sort_artist_name, ''),
COALESCE(OLD.search_normalized, ''));
INSERT INTO artist_fts(rowid, name, sort_artist_name, search_normalized)
VALUES (NEW.rowid, NEW.name, COALESCE(NEW.sort_artist_name, ''),
COALESCE(NEW.search_normalized, ''));
END
`)
if err != nil {
return fmt.Errorf("creating artist_fts update trigger: %w", err)
}
return nil
}
func downAddFts5Search(ctx context.Context, tx *sql.Tx) error {
for _, trigger := range []string{
"media_file_fts_ai", "media_file_fts_ad", "media_file_fts_au",
"album_fts_ai", "album_fts_ad", "album_fts_au",
"artist_fts_ai", "artist_fts_ad", "artist_fts_au",
} {
_, err := tx.ExecContext(ctx, "DROP TRIGGER IF EXISTS "+trigger)
if err != nil {
return fmt.Errorf("dropping trigger %s: %w", trigger, err)
}
}
for _, table := range []string{"media_file_fts", "album_fts", "artist_fts"} {
_, err := tx.ExecContext(ctx, "DROP TABLE IF EXISTS "+table)
if err != nil {
return fmt.Errorf("dropping table %s: %w", table, err)
}
}
// Note: We don't drop search_participants columns because SQLite doesn't support DROP COLUMN
// on older versions, and the column is harmless if left in place.
return nil
}
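The trigger pattern this migration relies on — an FTS5 external-content table kept in sync by issuing a special `'delete'` insert for the OLD row before indexing the NEW one — can be exercised in isolation. Below is a minimal sketch using Python's stdlib `sqlite3` (the table, column, and trigger names are illustrative, not Navidrome's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE media_file(id INTEGER PRIMARY KEY, title TEXT);

-- External-content index: FTS5 stores only the tokens; row text is read from media_file
CREATE VIRTUAL TABLE media_file_fts USING fts5(title, content='media_file', content_rowid='id');

CREATE TRIGGER media_file_ai AFTER INSERT ON media_file BEGIN
    INSERT INTO media_file_fts(rowid, title) VALUES (NEW.id, NEW.title);
END;

-- Updates must first remove the OLD tokens via the special 'delete' command,
-- then index the NEW values: the same delete+insert pattern as the migration's triggers
CREATE TRIGGER media_file_au AFTER UPDATE ON media_file BEGIN
    INSERT INTO media_file_fts(media_file_fts, rowid, title) VALUES ('delete', OLD.id, OLD.title);
    INSERT INTO media_file_fts(rowid, title) VALUES (NEW.id, NEW.title);
END;
""")
con.execute("INSERT INTO media_file(title) VALUES ('Abbey Road (Remastered)')")
con.execute("UPDATE media_file SET title = 'Abbey Road' WHERE id = 1")
hits = con.execute(
    "SELECT title FROM media_file_fts WHERE media_file_fts MATCH 'abbey'"
).fetchall()
stale = con.execute(
    "SELECT title FROM media_file_fts WHERE media_file_fts MATCH 'remastered'"
).fetchall()
print(hits, stale)
```

Because an external-content table never stores the original text, it cannot look up the old tokens itself; the `'delete'` insert must hand it the exact OLD values, which is why the update triggers fire a delete-insert pair rather than a plain UPDATE.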

go.mod

@@ -1,13 +1,13 @@
 module github.com/navidrome/navidrome
-go 1.25
+go 1.25.0
 replace (
 	// Fork to fix https://github.com/navidrome/navidrome/issues/3254
 	github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8 => github.com/deluan/tag v0.0.0-20241002021117-dfe5e6ea396d
 	// Fork to implement raw tags support
-	go.senan.xyz/taglib => github.com/deluan/go-taglib v0.0.0-20260212150743-3f1b97cb0d1e
+	go.senan.xyz/taglib => github.com/deluan/go-taglib v0.0.0-20260225021432-1699562530f1
 )
 require (
@@ -53,7 +53,7 @@ require (
 	github.com/onsi/gomega v1.39.1
 	github.com/pelletier/go-toml/v2 v2.2.4
 	github.com/pocketbase/dbx v1.12.0
-	github.com/pressly/goose/v3 v3.26.0
+	github.com/pressly/goose/v3 v3.27.0
 	github.com/prometheus/client_golang v1.23.2
 	github.com/rjeczalik/notify v0.9.3
 	github.com/robfig/cron/v3 v3.0.1
@@ -88,7 +88,7 @@ require (
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
 	github.com/creack/pty v1.1.24 // indirect
 	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
-	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
+	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1 // indirect
 	github.com/dylibso/observe-sdk/go v0.0.0-20240828172851-9145d8ad07e1 // indirect
 	github.com/fsnotify/fsnotify v1.9.0 // indirect
 	github.com/go-logr/logr v1.4.3 // indirect
@@ -140,7 +140,6 @@ require (
 	go.yaml.in/yaml/v2 v2.4.3 // indirect
 	go.yaml.in/yaml/v3 v3.0.4 // indirect
 	golang.org/x/crypto v0.48.0 // indirect
-	golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
 	golang.org/x/mod v0.33.0 // indirect
 	golang.org/x/telemetry v0.0.0-20260209163413-e7419c687ee4 // indirect
 	golang.org/x/tools v0.42.0 // indirect

go.sum

@@ -1,7 +1,7 @@
 dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
 dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
-filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
-filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+filippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=
+filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
 github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
 github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
 github.com/Masterminds/squirrel v1.5.4 h1:uUcX/aBc8O7Fg9kaISIUsHXdKuqehiXAMQTYX8afzqM=
@@ -34,10 +34,10 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
-github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
-github.com/deluan/go-taglib v0.0.0-20260212150743-3f1b97cb0d1e h1:pwx3kmHzl1N28coJV2C1zfm2ZF0qkQcGX+Z6BvXteB4=
-github.com/deluan/go-taglib v0.0.0-20260212150743-3f1b97cb0d1e/go.mod h1:sKDN0U4qXDlq6LFK+aOAkDH4Me5nDV1V/A4B+B69xBA=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1 h1:5RVFMOWjMyRy8cARdy79nAmgYw3hK/4HUq48LQ6Wwqo=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.1/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
+github.com/deluan/go-taglib v0.0.0-20260225021432-1699562530f1 h1:seWJmkPAb+M1ysRNGzTGS7FfdrUe9wQTHhB9p2fxDWg=
+github.com/deluan/go-taglib v0.0.0-20260225021432-1699562530f1/go.mod h1:sKDN0U4qXDlq6LFK+aOAkDH4Me5nDV1V/A4B+B69xBA=
 github.com/deluan/rest v0.0.0-20211102003136-6260bc399cbf h1:tb246l2Zmpt/GpF9EcHCKTtwzrd0HGfEmoODFA/qnk4=
 github.com/deluan/rest v0.0.0-20211102003136-6260bc399cbf/go.mod h1:tSgDythFsl0QgS/PFWfIZqcJKnkADWneY80jaVRlqK8=
 github.com/deluan/sanitize v0.0.0-20241120162836-fdfd8fdfaa55 h1:wSCnggTs2f2ji6nFwQmfwgINcmSMj0xF0oHnoyRSPe4=
@@ -143,8 +143,8 @@ github.com/kardianos/service v1.2.4 h1:XNlGtZOYNx2u91urOdg/Kfmc+gfmuIo1Dd3rEi2Og
 github.com/kardianos/service v1.2.4/go.mod h1:E4V9ufUuY82F7Ztlu1eN9VXWIQxg8NoLQlmFe0MtrXc=
 github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
 github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
-github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
-github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
+github.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=
+github.com/klauspost/compress v1.18.4/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
 github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
 github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -193,8 +193,8 @@ github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQ
 github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
-github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
-github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
+github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
+github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
 github.com/ogier/pflag v0.0.1 h1:RW6JSWSu/RkSatfcLtogGfFgpim5p7ARQ10ECk5O750=
 github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
 github.com/onsi/ginkgo/v2 v2.28.1 h1:S4hj+HbZp40fNKuLUQOYLDgZLwNUVn19N3Atb98NCyI=
@@ -212,8 +212,8 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pocketbase/dbx v1.12.0 h1:/oLErM+A0b4xI0PWTGPqSDVjzix48PqI/bng2l0PzoA=
 github.com/pocketbase/dbx v1.12.0/go.mod h1:xXRCIAKTHMgUCyCKZm55pUOdvFziJjQfXaWKhu2vhMs=
-github.com/pressly/goose/v3 v3.26.0 h1:KJakav68jdH0WDvoAcj8+n61WqOIaPGgH0bJWS6jpmM=
-github.com/pressly/goose/v3 v3.26.0/go.mod h1:4hC1KrritdCxtuFsqgs1R4AU5bWtTAf+cnWvfhf2DNY=
+github.com/pressly/goose/v3 v3.27.0 h1:/D30gVTuQhu0WsNZYbJi4DMOsx1lNq+6SkLe+Wp59BM=
+github.com/pressly/goose/v3 v3.27.0/go.mod h1:3ZBeCXqzkgIRvrEMDkYh1guvtoJTU5oMMuDdkutoM78=
 github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
 github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
 github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
@@ -321,8 +321,8 @@ golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v
 golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
 golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
 golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
-golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
-golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
+golang.org/x/exp v0.0.0-20260218203240-3dfff04db8fa h1:Zt3DZoOFFYkKhDT3v7Lm9FDMEV06GpzjG2jrqW+QTE0=
+golang.org/x/exp v0.0.0-20260218203240-3dfff04db8fa/go.mod h1:K79w1Vqn7PoiZn+TkNpx3BUWUQksGO3JcVX6qIjytmA=
 golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
 golang.org/x/image v0.36.0 h1:Iknbfm1afbgtwPTmHnS2gTM/6PPZfH+z2EFuOkSbqwc=
 golang.org/x/image v0.36.0/go.mod h1:YsWD2TyyGKiIX1kZlu9QfKIsQ4nAAK9bdgdrIsE7xy4=
@@ -423,11 +423,11 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-modernc.org/libc v1.66.3 h1:cfCbjTUcdsKyyZZfEUKfoHcP3S0Wkvz3jgSzByEWVCQ=
-modernc.org/libc v1.66.3/go.mod h1:XD9zO8kt59cANKvHPXpx7yS2ELPheAey0vjIuZOhOU8=
+modernc.org/libc v1.68.0 h1:PJ5ikFOV5pwpW+VqCK1hKJuEWsonkIJhhIXyuF/91pQ=
+modernc.org/libc v1.68.0/go.mod h1:NnKCYeoYgsEqnY3PgvNgAeaJnso968ygU8Z0DxjoEc0=
 modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
 modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
 modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
 modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
-modernc.org/sqlite v1.38.2 h1:Aclu7+tgjgcQVShZqim41Bbw9Cho0y/7WzYptXqkEek=
-modernc.org/sqlite v1.38.2/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
+modernc.org/sqlite v1.46.1 h1:eFJ2ShBLIEnUWlLy12raN0Z1plqmFX9Qe3rjQTKt6sU=
+modernc.org/sqlite v1.46.1/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=


@@ -9,11 +9,12 @@ import (
 //goland:noinspection GoBoolExpressions
 func main() {
-	// This import is used to force the inclusion of the `netgo` tag when compiling the project.
+	// These references force the inclusion of build tags when compiling the project.
 	// If you get compilation errors like "undefined: buildtags.NETGO", this means you forgot to specify
-	// the `netgo` build tag when compiling the project.
+	// the required build tags when compiling the project.
 	// To avoid these kind of errors, you should use `make build` to compile the project.
 	_ = buildtags.NETGO
+	_ = buildtags.SQLITE_FTS5
 	cmd.Execute()
 }


@@ -1,11 +1,14 @@
 package model
 import (
+	"fmt"
 	"iter"
 	"math"
 	"sync"
 	"time"
+	"github.com/navidrome/navidrome/conf"
 	"github.com/gohugoio/hashstructure"
 )
@@ -70,6 +73,13 @@ func (a Album) CoverArtID() ArtworkID {
 	return artworkIDFromAlbum(a)
 }
+func (a Album) FullName() string {
+	if conf.Server.Subsonic.AppendAlbumVersion && len(a.Tags[TagAlbumVersion]) > 0 {
+		return fmt.Sprintf("%s (%s)", a.Name, a.Tags[TagAlbumVersion][0])
+	}
+	return a.Name
+}
 // Equals compares two Album structs, ignoring calculated fields
 func (a Album) Equals(other Album) bool {
 	// Normalize float32 values to avoid false negatives


@@ -3,11 +3,30 @@ package model_test
 import (
 	"encoding/json"
+	"github.com/navidrome/navidrome/conf"
+	"github.com/navidrome/navidrome/conf/configtest"
 	. "github.com/navidrome/navidrome/model"
 	. "github.com/onsi/ginkgo/v2"
 	. "github.com/onsi/gomega"
 )
+var _ = Describe("Album", func() {
+	BeforeEach(func() {
+		DeferCleanup(configtest.SetupConfig())
+	})
+	DescribeTable("FullName",
+		func(enabled bool, tags Tags, expected string) {
+			conf.Server.Subsonic.AppendAlbumVersion = enabled
+			a := Album{Name: "Album", Tags: tags}
+			Expect(a.FullName()).To(Equal(expected))
+		},
+		Entry("appends version when enabled and tag is present", true, Tags{TagAlbumVersion: []string{"Remastered"}}, "Album (Remastered)"),
+		Entry("returns just name when disabled", false, Tags{TagAlbumVersion: []string{"Remastered"}}, "Album"),
+		Entry("returns just name when tag is absent", true, Tags{}, "Album"),
+		Entry("returns just name when tag is an empty slice", true, Tags{TagAlbumVersion: []string{}}, "Album"),
+	)
+})
 var _ = Describe("Albums", func() {
 	var albums Albums


@@ -95,6 +95,25 @@ func (c Criteria) ToSql() (sql string, args []any, err error) {
 	return c.Expression.ToSql()
 }
+// RequiredJoins inspects the expression tree and Sort field to determine which
+// additional JOINs are needed when evaluating this criteria.
+func (c Criteria) RequiredJoins() JoinType {
+	result := JoinNone
+	if c.Expression != nil {
+		result |= extractJoinTypes(c.Expression)
+	}
+	// Also check Sort fields
+	if c.Sort != "" {
+		for _, p := range strings.Split(c.Sort, ",") {
+			p = strings.TrimSpace(p)
+			p = strings.TrimLeft(p, "+-")
+			p = strings.TrimSpace(p)
+			result |= fieldJoinType(p)
+		}
+	}
+	return result
+}
 func (c Criteria) ChildPlaylistIds() []string {
 	if c.Expression == nil {
 		return nil


@@ -27,6 +27,7 @@ var _ = Describe("Criteria", func() {
 				StartsWith{"comment": "this"},
 				InTheRange{"year": []int{1980, 1990}},
 				IsNot{"genre": "Rock"},
+				Gt{"albumrating": 3},
 			},
 		},
 		Sort: "title",
@@ -48,7 +49,8 @@
 		{ "all": [
 			{ "startsWith": {"comment": "this"} },
 			{ "inTheRange": {"year":[1980,1990]} },
-			{ "isNot": { "genre": "Rock" }}
+			{ "isNot": { "genre": "Rock" }},
+			{ "gt": { "albumrating": 3 } }
 		]
 		}
 	],
@@ -68,10 +70,10 @@
 		gomega.Expect(err).ToNot(gomega.HaveOccurred())
 		gomega.Expect(sql).To(gomega.Equal(
 			`(media_file.title LIKE ? AND media_file.title NOT LIKE ? ` +
-				`AND (not exists (select 1 from json_tree(participants, '$.artist') where key='name' and value = ?) ` +
+				`AND (not exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value = ?) ` +
 				`OR media_file.album = ?) AND (media_file.comment LIKE ? AND (media_file.year >= ? AND media_file.year <= ?) ` +
-				`AND not exists (select 1 from json_tree(tags, '$.genre') where key='value' and value = ?)))`))
+				`AND not exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value = ?) AND COALESCE(album_annotation.rating, 0) > ?))`))
-		gomega.Expect(args).To(gomega.HaveExactElements("%love%", "%hate%", "u2", "best of", "this%", 1980, 1990, "Rock"))
+		gomega.Expect(args).To(gomega.HaveExactElements("%love%", "%hate%", "u2", "best of", "this%", 1980, 1990, "Rock", 3))
 	})
 	It("marshals to JSON", func() {
 		j, err := json.Marshal(goObj)
@@ -172,13 +174,95 @@
 			sql, args, err := goObj.ToSql()
 			gomega.Expect(err).ToNot(gomega.HaveOccurred())
 			gomega.Expect(sql).To(gomega.Equal(
-				`(exists (select 1 from json_tree(participants, '$.artist') where key='name' and value = ?) AND ` +
-					`exists (select 1 from json_tree(participants, '$.composer') where key='name' and value LIKE ?))`,
+				`(exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value = ?) AND ` +
+					`exists (select 1 from json_tree(media_file.participants, '$.composer') where key='name' and value LIKE ?))`,
 			))
 			gomega.Expect(args).To(gomega.HaveExactElements("The Beatles", "%Lennon%"))
 		})
 	})
+	Describe("RequiredJoins", func() {
+		It("returns JoinNone when no annotation fields are used", func() {
+			c := Criteria{
+				Expression: All{
+					Contains{"title": "love"},
+				},
+			}
+			gomega.Expect(c.RequiredJoins()).To(gomega.Equal(JoinNone))
+		})
+		It("returns JoinNone for media_file annotation fields", func() {
+			c := Criteria{
+				Expression: All{
+					Is{"loved": true},
+					Gt{"playCount": 5},
+				},
+			}
+			gomega.Expect(c.RequiredJoins()).To(gomega.Equal(JoinNone))
+		})
+		It("returns JoinAlbumAnnotation for album annotation fields", func() {
+			c := Criteria{
+				Expression: All{
+					Gt{"albumRating": 3},
+				},
+			}
+			gomega.Expect(c.RequiredJoins()).To(gomega.Equal(JoinAlbumAnnotation))
+		})
+		It("returns JoinArtistAnnotation for artist annotation fields", func() {
+			c := Criteria{
+				Expression: All{
+					Is{"artistLoved": true},
+				},
+			}
+			gomega.Expect(c.RequiredJoins()).To(gomega.Equal(JoinArtistAnnotation))
+		})
+		It("returns both join types when both are used", func() {
+			c := Criteria{
+				Expression: All{
+					Gt{"albumRating": 3},
+					Is{"artistLoved": true},
+				},
+			}
+			j := c.RequiredJoins()
+			gomega.Expect(j.Has(JoinAlbumAnnotation)).To(gomega.BeTrue())
+			gomega.Expect(j.Has(JoinArtistAnnotation)).To(gomega.BeTrue())
+		})
+		It("detects join types in nested expressions", func() {
+			c := Criteria{
+				Expression: All{
+					Any{
+						All{
+							Is{"albumLoved": true},
+						},
+					},
+					Any{
+						Gt{"artistPlayCount": 10},
+					},
+				},
+			}
+			j := c.RequiredJoins()
+			gomega.Expect(j.Has(JoinAlbumAnnotation)).To(gomega.BeTrue())
+			gomega.Expect(j.Has(JoinArtistAnnotation)).To(gomega.BeTrue())
+		})
+		It("detects join types from Sort field", func() {
+			c := Criteria{
+				Expression: All{
+					Contains{"title": "love"},
+				},
+				Sort: "albumRating",
+			}
+			gomega.Expect(c.RequiredJoins().Has(JoinAlbumAnnotation)).To(gomega.BeTrue())
+		})
+		It("detects join types from Sort field with direction prefix", func() {
+			c := Criteria{
+				Expression: All{
+					Contains{"title": "love"},
+				},
+				Sort: "-artistRating",
+			}
+			gomega.Expect(c.RequiredJoins().Has(JoinArtistAnnotation)).To(gomega.BeTrue())
+		})
+	})
 	Context("with child playlists", func() {
 		var (
 			topLevelInPlaylistID string


@@ -9,6 +9,18 @@ import (
"github.com/navidrome/navidrome/log" "github.com/navidrome/navidrome/log"
) )
// JoinType is a bitmask indicating which additional JOINs are needed by a smart playlist expression.
type JoinType int
const (
JoinNone JoinType = 0
JoinAlbumAnnotation JoinType = 1 << iota
JoinArtistAnnotation
)
// Has returns true if j contains all bits in other.
func (j JoinType) Has(other JoinType) bool { return j&other != 0 }
var fieldMap = map[string]*mappedField{ var fieldMap = map[string]*mappedField{
"title": {field: "media_file.title"}, "title": {field: "media_file.title"},
"album": {field: "media_file.album"}, "album": {field: "media_file.album"},
@@ -48,6 +60,20 @@ var fieldMap = map[string]*mappedField{
"daterated": {field: "annotation.rated_at"}, "daterated": {field: "annotation.rated_at"},
"playcount": {field: "COALESCE(annotation.play_count, 0)"}, "playcount": {field: "COALESCE(annotation.play_count, 0)"},
"rating": {field: "COALESCE(annotation.rating, 0)"}, "rating": {field: "COALESCE(annotation.rating, 0)"},
"albumrating": {field: "COALESCE(album_annotation.rating, 0)", joinType: JoinAlbumAnnotation},
"albumloved": {field: "COALESCE(album_annotation.starred, false)", joinType: JoinAlbumAnnotation},
"albumplaycount": {field: "COALESCE(album_annotation.play_count, 0)", joinType: JoinAlbumAnnotation},
"albumlastplayed": {field: "album_annotation.play_date", joinType: JoinAlbumAnnotation},
"albumdateloved": {field: "album_annotation.starred_at", joinType: JoinAlbumAnnotation},
"albumdaterated": {field: "album_annotation.rated_at", joinType: JoinAlbumAnnotation},
"artistrating": {field: "COALESCE(artist_annotation.rating, 0)", joinType: JoinArtistAnnotation},
"artistloved": {field: "COALESCE(artist_annotation.starred, false)", joinType: JoinArtistAnnotation},
"artistplaycount": {field: "COALESCE(artist_annotation.play_count, 0)", joinType: JoinArtistAnnotation},
"artistlastplayed": {field: "artist_annotation.play_date", joinType: JoinArtistAnnotation},
"artistdateloved": {field: "artist_annotation.starred_at", joinType: JoinArtistAnnotation},
"artistdaterated": {field: "artist_annotation.rated_at", joinType: JoinArtistAnnotation},
"mbz_album_id": {field: "media_file.mbz_album_id"},
"mbz_album_artist_id": {field: "media_file.mbz_album_artist_id"},
"mbz_artist_id": {field: "media_file.mbz_artist_id"},
@@ -71,6 +97,7 @@ type mappedField struct {
isTag bool // true if the field is a tag imported from the file metadata
alias string // name from `mappings.yml` that may differ from the name used in the smart playlist
numeric bool // true if the field/tag should be treated as numeric
joinType JoinType // which additional JOINs this field requires
}
func mapFields(expr map[string]any) map[string]any {
@@ -169,7 +196,7 @@ func (e tagCond) ToSql() (string, []any, error) {
}
}
cond = fmt.Sprintf("exists (select 1 from json_tree(media_file.tags, '$.%s') where key='value' and %s)",
tagName, cond)
if e.not {
cond = "not " + cond
@@ -189,7 +216,7 @@ type roleCond struct {
func (e roleCond) ToSql() (string, []any, error) {
cond, args, err := e.cond.ToSql()
cond = fmt.Sprintf(`exists (select 1 from json_tree(media_file.participants, '$.%s') where key='name' and %s)`,
e.role, cond)
if e.not {
cond = "not " + cond
@@ -197,6 +224,38 @@ func (e roleCond) ToSql() (string, []any, error) {
return cond, args, err
}
// fieldJoinType returns the JoinType for a given field name (case-insensitive).
func fieldJoinType(name string) JoinType {
if f, ok := fieldMap[strings.ToLower(name)]; ok {
return f.joinType
}
return JoinNone
}
// extractJoinTypes walks an expression tree and collects all required JoinType flags.
func extractJoinTypes(expr any) JoinType {
result := JoinNone
switch e := expr.(type) {
case All:
for _, sub := range e {
result |= extractJoinTypes(sub)
}
case Any:
for _, sub := range e {
result |= extractJoinTypes(sub)
}
default:
// Leaf expression: use reflection to check if it's a map with field names
rv := reflect.ValueOf(expr)
if rv.Kind() == reflect.Map && rv.Type().Key().Kind() == reflect.String {
for _, key := range rv.MapKeys() {
result |= fieldJoinType(key.String())
}
}
}
return result
}
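The interesting part of `extractJoinTypes` is the leaf case: every operator (`Is`, `Gt`, `Contains`, ...) is ultimately a map keyed by field name, so reflection can pull the keys out without knowing the concrete type. A minimal standalone sketch of that trick, using hypothetical simplified operator types in place of the real ones:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Simplified stand-ins for the real operator types: each leaf operator is a
// map keyed by field name, which is exactly what the reflection check relies on.
type Gt map[string]any
type Contains map[string]any

// fieldKeys extracts the (lowercased) field names from any operator whose
// underlying type is a map with string keys, without a type switch per operator.
func fieldKeys(expr any) []string {
	var keys []string
	rv := reflect.ValueOf(expr)
	if rv.Kind() == reflect.Map && rv.Type().Key().Kind() == reflect.String {
		for _, k := range rv.MapKeys() {
			keys = append(keys, strings.ToLower(k.String()))
		}
	}
	return keys
}

func main() {
	fmt.Println(fieldKeys(Gt{"albumRating": 3}))    // [albumrating]
	fmt.Println(fieldKeys(Contains{"title": "love"})) // [title]
	fmt.Println(fieldKeys(42))                      // not a map: []
}
```

In the real function these keys are then looked up in `fieldMap` to OR together the `joinType` of every field the expression touches.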
// AddRoles adds roles to the field map. This is used to add all artist roles to the field map, so they can be used in
// smart playlists. If a role already exists in the field map, it is ignored, so calls to this function are idempotent.
func AddRoles(roles []string) {

View File

@@ -54,23 +54,43 @@ var _ = Describe("Operators", func() {
Entry("inTheLast", InTheLast{"lastPlayed": 30}, "annotation.play_date > ?", StartOfPeriod(30, time.Now())),
Entry("notInTheLast", NotInTheLast{"lastPlayed": 30}, "(annotation.play_date < ? OR annotation.play_date IS NULL)", StartOfPeriod(30, time.Now())),
// Album annotation fields
Entry("albumRating", Gt{"albumRating": 3}, "COALESCE(album_annotation.rating, 0) > ?", 3),
Entry("albumLoved", Is{"albumLoved": true}, "COALESCE(album_annotation.starred, false) = ?", true),
Entry("albumPlayCount", Gt{"albumPlayCount": 5}, "COALESCE(album_annotation.play_count, 0) > ?", 5),
Entry("albumLastPlayed", After{"albumLastPlayed": rangeStart}, "album_annotation.play_date > ?", rangeStart),
Entry("albumDateLoved", Before{"albumDateLoved": rangeStart}, "album_annotation.starred_at < ?", rangeStart),
Entry("albumDateRated", After{"albumDateRated": rangeStart}, "album_annotation.rated_at > ?", rangeStart),
Entry("albumLastPlayed inTheLast", InTheLast{"albumLastPlayed": 30}, "album_annotation.play_date > ?", StartOfPeriod(30, time.Now())),
Entry("albumLastPlayed notInTheLast", NotInTheLast{"albumLastPlayed": 30}, "(album_annotation.play_date < ? OR album_annotation.play_date IS NULL)", StartOfPeriod(30, time.Now())),
// Artist annotation fields
Entry("artistRating", Gt{"artistRating": 3}, "COALESCE(artist_annotation.rating, 0) > ?", 3),
Entry("artistLoved", Is{"artistLoved": true}, "COALESCE(artist_annotation.starred, false) = ?", true),
Entry("artistPlayCount", Gt{"artistPlayCount": 5}, "COALESCE(artist_annotation.play_count, 0) > ?", 5),
Entry("artistLastPlayed", After{"artistLastPlayed": rangeStart}, "artist_annotation.play_date > ?", rangeStart),
Entry("artistDateLoved", Before{"artistDateLoved": rangeStart}, "artist_annotation.starred_at < ?", rangeStart),
Entry("artistDateRated", After{"artistDateRated": rangeStart}, "artist_annotation.rated_at > ?", rangeStart),
Entry("artistLastPlayed inTheLast", InTheLast{"artistLastPlayed": 30}, "artist_annotation.play_date > ?", StartOfPeriod(30, time.Now())),
Entry("artistLastPlayed notInTheLast", NotInTheLast{"artistLastPlayed": 30}, "(artist_annotation.play_date < ? OR artist_annotation.play_date IS NULL)", StartOfPeriod(30, time.Now())),
// Tag tests
Entry("tag is [string]", Is{"genre": "Rock"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value = ?)", "Rock"),
Entry("tag isNot [string]", IsNot{"genre": "Rock"}, "not exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value = ?)", "Rock"),
Entry("tag gt", Gt{"genre": "A"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value > ?)", "A"),
Entry("tag lt", Lt{"genre": "Z"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value < ?)", "Z"),
Entry("tag contains", Contains{"genre": "Rock"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value LIKE ?)", "%Rock%"),
Entry("tag not contains", NotContains{"genre": "Rock"}, "not exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value LIKE ?)", "%Rock%"),
Entry("tag startsWith", StartsWith{"genre": "Soft"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value LIKE ?)", "Soft%"),
Entry("tag endsWith", EndsWith{"genre": "Rock"}, "exists (select 1 from json_tree(media_file.tags, '$.genre') where key='value' and value LIKE ?)", "%Rock"),
// Artist roles tests
Entry("role is [string]", Is{"artist": "u2"}, "exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value = ?)", "u2"),
Entry("role isNot [string]", IsNot{"artist": "u2"}, "not exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value = ?)", "u2"),
Entry("role contains [string]", Contains{"artist": "u2"}, "exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value LIKE ?)", "%u2%"),
Entry("role not contains [string]", NotContains{"artist": "u2"}, "not exists (select 1 from json_tree(media_file.participants, '$.artist') where key='name' and value LIKE ?)", "%u2%"),
Entry("role startsWith [string]", StartsWith{"composer": "John"}, "exists (select 1 from json_tree(media_file.participants, '$.composer') where key='name' and value LIKE ?)", "John%"),
Entry("role endsWith [string]", EndsWith{"composer": "Lennon"}, "exists (select 1 from json_tree(media_file.participants, '$.composer') where key='name' and value LIKE ?)", "%Lennon"),
)
// TODO Validate operators that are not valid for each field type.
@@ -88,7 +108,7 @@ var _ = Describe("Operators", func() {
op := EndsWith{"mood": "Soft"}
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.tags, '$.mood') where key='value' and value LIKE ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("%Soft"))
})
It("casts numeric comparisons", func() {
@@ -96,7 +116,7 @@ var _ = Describe("Operators", func() {
op := Lt{"rate": 6}
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.tags, '$.rate') where key='value' and CAST(value AS REAL) < ?)"))
gomega.Expect(args).To(gomega.HaveExactElements(6))
})
It("skips unknown tag names", func() {
@@ -110,7 +130,7 @@ var _ = Describe("Operators", func() {
op := Contains{"releasetype": "soundtrack"}
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.tags, '$.releasetype') where key='value' and value LIKE ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("%soundtrack%"))
})
It("supports albumtype as alias for releasetype", func() {
@@ -118,7 +138,7 @@ var _ = Describe("Operators", func() {
op := Contains{"albumtype": "live"}
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.tags, '$.releasetype') where key='value' and value LIKE ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("%live%"))
})
It("supports albumtype alias with Is operator", func() {
@@ -127,7 +147,7 @@ var _ = Describe("Operators", func() {
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
// Should query $.releasetype, not $.albumtype
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.tags, '$.releasetype') where key='value' and value = ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("album"))
})
It("supports albumtype alias with IsNot operator", func() {
@@ -136,7 +156,7 @@ var _ = Describe("Operators", func() {
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
// Should query $.releasetype, not $.albumtype
gomega.Expect(sql).To(gomega.Equal("not exists (select 1 from json_tree(media_file.tags, '$.releasetype') where key='value' and value = ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("compilation"))
})
})
@@ -147,7 +167,7 @@ var _ = Describe("Operators", func() {
op := EndsWith{"producer": "Eno"}
sql, args, err := op.ToSql()
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Expect(sql).To(gomega.Equal("exists (select 1 from json_tree(media_file.participants, '$.producer') where key='name' and value LIKE ?)"))
gomega.Expect(args).To(gomega.HaveExactElements("%Eno"))
})
It("skips unknown roles", func() {

View File

@@ -95,12 +95,19 @@ type MediaFile struct {
}
func (mf MediaFile) FullTitle() string {
if conf.Server.Subsonic.AppendSubtitle && len(mf.Tags[TagSubtitle]) > 0 {
return fmt.Sprintf("%s (%s)", mf.Title, mf.Tags[TagSubtitle][0])
}
return mf.Title
}
func (mf MediaFile) FullAlbumName() string {
if conf.Server.Subsonic.AppendAlbumVersion && len(mf.Tags[TagAlbumVersion]) > 0 {
return fmt.Sprintf("%s (%s)", mf.Album, mf.Tags[TagAlbumVersion][0])
}
return mf.Album
}
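The `len(...) > 0` guard used by `FullAlbumName` (and applied to `FullTitle` in the same diff) matters because indexing `Tags[...][0]` would panic on an empty slice, while `len` safely handles both a missing key and an empty slice. A minimal standalone sketch with trimmed-down stand-ins (the real struct is `model.MediaFile` and the flag lives in `conf.Server.Subsonic`):

```go
package main

import "fmt"

// Trimmed-down stand-in for model.MediaFile; the real struct has many more fields.
type MediaFile struct {
	Album string
	Tags  map[string][]string
}

// Stand-in for conf.Server.Subsonic.AppendAlbumVersion.
var appendAlbumVersion = true

func (mf MediaFile) FullAlbumName() string {
	// len(...) > 0 guards against both a missing key and an empty slice,
	// so the [0] index below can never panic.
	if appendAlbumVersion && len(mf.Tags["albumversion"]) > 0 {
		return fmt.Sprintf("%s (%s)", mf.Album, mf.Tags["albumversion"][0])
	}
	return mf.Album
}

func main() {
	withTag := MediaFile{Album: "Album", Tags: map[string][]string{"albumversion": {"Deluxe Edition"}}}
	fmt.Println(withTag.FullAlbumName()) // Album (Deluxe Edition)

	emptyTag := MediaFile{Album: "Album", Tags: map[string][]string{"albumversion": {}}}
	fmt.Println(emptyTag.FullAlbumName()) // Album
}
```

The test table added further down in this diff covers exactly these cases, including the empty-slice one that a plain `!= nil` check would miss.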
func (mf MediaFile) ContentType() string {
return mime.TypeByExtension("." + mf.Suffix)
}

View File

@@ -475,7 +475,29 @@ var _ = Describe("MediaFile", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.EnableMediaFileCoverArt = true
})
DescribeTable("FullTitle",
func(enabled bool, tags Tags, expected string) {
conf.Server.Subsonic.AppendSubtitle = enabled
mf := MediaFile{Title: "Song", Tags: tags}
Expect(mf.FullTitle()).To(Equal(expected))
},
Entry("appends subtitle when enabled and tag is present", true, Tags{TagSubtitle: []string{"Live"}}, "Song (Live)"),
Entry("returns just title when disabled", false, Tags{TagSubtitle: []string{"Live"}}, "Song"),
Entry("returns just title when tag is absent", true, Tags{}, "Song"),
Entry("returns just title when tag is an empty slice", true, Tags{TagSubtitle: []string{}}, "Song"),
)
DescribeTable("FullAlbumName",
func(enabled bool, tags Tags, expected string) {
conf.Server.Subsonic.AppendAlbumVersion = enabled
mf := MediaFile{Album: "Album", Tags: tags}
Expect(mf.FullAlbumName()).To(Equal(expected))
},
Entry("appends version when enabled and tag is present", true, Tags{TagAlbumVersion: []string{"Deluxe Edition"}}, "Album (Deluxe Edition)"),
Entry("returns just album name when disabled", false, Tags{TagAlbumVersion: []string{"Deluxe Edition"}}, "Album"),
Entry("returns just album name when tag is absent", true, Tags{}, "Album"),
Entry("returns just album name when tag is an empty slice", true, Tags{TagAlbumVersion: []string{}}, "Album"),
)
Describe("CoverArtId()", func() {
It("returns its own id if it HasCoverArt", func() {
mf := MediaFile{ID: "111", AlbumID: "1", HasCoverArt: true}
id := mf.CoverArtID()

View File

@@ -1,5 +1,5 @@
package model
type SearchableRepository[T any] interface {
Search(q string, options ...QueryOptions) (T, error)
}
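This signature change moves paging out of positional `offset, size` parameters and into the variadic `QueryOptions`, so callers that don't page simply omit the argument. A minimal sketch with a hypothetical in-memory repository (`fakeRepo` and the `QueryOptions` fields shown are assumptions trimmed from the real `model.QueryOptions`):

```go
package main

import "fmt"

// Stand-in for model.QueryOptions; the real type has more fields.
type QueryOptions struct {
	Offset, Max int
}

// After this change, paging travels inside the optional QueryOptions value
// instead of as positional offset/size arguments.
type SearchableRepository[T any] interface {
	Search(q string, options ...QueryOptions) (T, error)
}

// Hypothetical in-memory implementation, for illustration only.
type fakeRepo struct{ data []string }

func (r fakeRepo) Search(q string, options ...QueryOptions) ([]string, error) {
	opts := QueryOptions{Max: len(r.data)} // default: everything
	if len(options) > 0 {
		opts = options[0]
	}
	if opts.Offset >= len(r.data) {
		return nil, nil
	}
	end := opts.Offset + opts.Max
	if end > len(r.data) {
		end = len(r.data)
	}
	return r.data[opts.Offset:end], nil
}

func main() {
	var repo SearchableRepository[[]string] = fakeRepo{data: []string{"a", "b", "c"}}
	page, _ := repo.Search("q", QueryOptions{Offset: 1, Max: 1})
	fmt.Println(page) // [b]
	all, _ := repo.Search("q")
	fmt.Println(all) // [a b c]
}
```

This mirrors the `var opts model.QueryOptions; if len(options) > 0 { opts = options[0] }` pattern the album and artist repositories adopt later in this diff.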

View File

@@ -12,7 +12,6 @@ import (
. "github.com/Masterminds/squirrel"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
@@ -62,11 +61,14 @@ func (a *dbAlbum) PostScan() error {
func (a *dbAlbum) PostMapArgs(args map[string]any) error {
fullText := []string{a.Name, a.SortAlbumName, a.AlbumArtist}
participantNames := a.Album.Participants.AllNames()
fullText = append(fullText, participantNames...)
fullText = append(fullText, slices.Collect(maps.Values(a.Album.Discs))...)
fullText = append(fullText, a.Album.Tags[model.TagAlbumVersion]...)
fullText = append(fullText, a.Album.Tags[model.TagCatalogNumber]...)
args["full_text"] = formatFullText(fullText...)
args["search_participants"] = strings.Join(participantNames, " ")
args["search_normalized"] = normalizeForFTS(a.Name, a.AlbumArtist)
args["tags"] = marshalTags(a.Album.Tags)
args["participants"] = marshalParticipants(a.Album.Participants)
@@ -350,18 +352,21 @@ func (r *albumRepository) purgeEmpty(libraryIDs ...int) error {
return nil
}
var albumSearchConfig = searchConfig{
NaturalOrder: "album.rowid",
OrderBy: []string{"name"},
MBIDFields: []string{"mbz_album_id", "mbz_release_group_id"},
}
func (r *albumRepository) Search(q string, options ...model.QueryOptions) (model.Albums, error) {
var opts model.QueryOptions
if len(options) > 0 {
opts = options[0]
}
var res dbAlbums
err := r.doSearch(r.selectAlbum(options...), q, &res, albumSearchConfig, opts)
if err != nil {
return nil, fmt.Errorf("searching album %q: %w", q, err)
}
return res.toModels(), nil
}
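Before this refactor, each repository branched on `uuid.Validate(q)` itself to decide between an MBID lookup and a full-text search; after it, the shared `doSearch` presumably makes the same decision using the `MBIDFields` listed in `searchConfig`. A minimal standalone sketch of that dispatch, using a regexp as a stand-in for `uuid.Validate` so no external module is needed:

```go
package main

import (
	"fmt"
	"regexp"
)

// Approximation of uuid.Validate: a canonical 8-4-4-4-12 hex UUID, which is
// the format MusicBrainz IDs (MBIDs) use.
var uuidRe = regexp.MustCompile(
	`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// isMBIDQuery reports whether a search query should be routed to the MBID
// columns (mbz_album_id, mbz_release_group_id, ...) instead of full-text search.
func isMBIDQuery(q string) bool { return uuidRe.MatchString(q) }

func main() {
	fmt.Println(isMBIDQuery("550e8400-e29b-41d4-a716-446655440010")) // true
	fmt.Println(isMBIDQuery("Beatles"))                              // false
}
```

Centralizing this check in `doSearch` is why the `github.com/google/uuid` import disappears from both repository files in this diff.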

View File

@@ -56,17 +56,23 @@ var _ = Describe("AlbumRepository", func() {
It("returns all records sorted", func() {
Expect(GetAll(model.QueryOptions{Sort: "name"})).To(Equal(model.Albums{
albumAbbeyRoad,
albumWithVersion,
albumCJK,
albumMultiDisc,
albumRadioactivity,
albumSgtPeppers,
albumPunctuation,
}))
})
It("returns all records sorted desc", func() {
Expect(GetAll(model.QueryOptions{Sort: "name", Order: "desc"})).To(Equal(model.Albums{
albumPunctuation,
albumSgtPeppers,
albumRadioactivity,
albumMultiDisc,
albumCJK,
albumWithVersion,
albumAbbeyRoad,
}))
})

View File

@@ -11,7 +11,6 @@ import (
. "github.com/Masterminds/squirrel"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
@@ -102,6 +101,7 @@ func (a *dbArtist) PostMapArgs(m map[string]any) error {
similarArtists, _ := json.Marshal(sa)
m["similar_artists"] = string(similarArtists)
m["full_text"] = formatFullText(a.Name, a.SortArtistName)
m["search_normalized"] = normalizeForFTS(a.Name)
// Do not override the sort_artist_name and mbz_artist_id fields if they are empty
// TODO: Better way to handle this?
@@ -512,21 +512,26 @@ func (r *artistRepository) RefreshStats(allArtists bool) (int64, error) {
return totalRowsAffected, nil
}
func (r *artistRepository) searchCfg() searchConfig {
return searchConfig{
// Natural order for artists is more performant by ID, due to GROUP BY clause in selectArtist
NaturalOrder: "artist.id",
OrderBy: []string{"sum(json_extract(stats, '$.total.m')) desc", "name"},
MBIDFields: []string{"mbz_artist_id"},
LibraryFilter: r.applyLibraryFilterToArtistQuery,
}
}
func (r *artistRepository) Search(q string, options ...model.QueryOptions) (model.Artists, error) {
var opts model.QueryOptions
if len(options) > 0 {
opts = options[0]
}
var res dbArtists
err := r.doSearch(r.selectArtist(options...), q, &res, r.searchCfg(), opts)
if err != nil {
return nil, fmt.Errorf("searching artist %q: %w", q, err)
}
return res.toModels(), nil
}

View File

@@ -193,7 +193,7 @@ var _ = Describe("ArtistRepository", func() {
Describe("Basic Operations", func() {
Describe("Count", func() {
It("returns the number of artists in the DB", func() {
Expect(repo.CountAll()).To(Equal(int64(4)))
})
})
@@ -228,13 +228,19 @@ var _ = Describe("ArtistRepository", func() {
idx, err := repo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(4))
Expect(idx[0].ID).To(Equal("F"))
Expect(idx[0].Artists).To(HaveLen(1))
Expect(idx[0].Artists[0].Name).To(Equal(artistBeatles.Name))
Expect(idx[1].ID).To(Equal("K"))
Expect(idx[1].Artists).To(HaveLen(1))
Expect(idx[1].Artists[0].Name).To(Equal(artistKraftwerk.Name))
Expect(idx[2].ID).To(Equal("R"))
Expect(idx[2].Artists).To(HaveLen(1))
Expect(idx[2].Artists[0].Name).To(Equal(artistPunctuation.Name))
Expect(idx[3].ID).To(Equal("S"))
Expect(idx[3].Artists).To(HaveLen(1))
Expect(idx[3].Artists[0].Name).To(Equal(artistCJK.Name))
// Restore the original value
artistBeatles.SortArtistName = ""
@@ -246,13 +252,19 @@ var _ = Describe("ArtistRepository", func() {
XIt("returns the index when PreferSortTags is true and SortArtistName is empty", func() {
idx, err := repo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(4))
Expect(idx[0].ID).To(Equal("B"))
Expect(idx[0].Artists).To(HaveLen(1))
Expect(idx[0].Artists[0].Name).To(Equal(artistBeatles.Name))
Expect(idx[1].ID).To(Equal("K"))
Expect(idx[1].Artists).To(HaveLen(1))
Expect(idx[1].Artists[0].Name).To(Equal(artistKraftwerk.Name))
Expect(idx[2].ID).To(Equal("R"))
Expect(idx[2].Artists).To(HaveLen(1))
Expect(idx[2].Artists[0].Name).To(Equal(artistPunctuation.Name))
Expect(idx[3].ID).To(Equal("S"))
Expect(idx[3].Artists).To(HaveLen(1))
Expect(idx[3].Artists[0].Name).To(Equal(artistCJK.Name))
})
})
@@ -268,13 +280,19 @@ var _ = Describe("ArtistRepository", func() {
idx, err := repo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(4))
Expect(idx[0].ID).To(Equal("B"))
Expect(idx[0].Artists).To(HaveLen(1))
Expect(idx[0].Artists[0].Name).To(Equal(artistBeatles.Name))
Expect(idx[1].ID).To(Equal("K"))
Expect(idx[1].Artists).To(HaveLen(1))
Expect(idx[1].Artists[0].Name).To(Equal(artistKraftwerk.Name))
Expect(idx[2].ID).To(Equal("R"))
Expect(idx[2].Artists).To(HaveLen(1))
Expect(idx[2].Artists[0].Name).To(Equal(artistPunctuation.Name))
Expect(idx[3].ID).To(Equal("S"))
Expect(idx[3].Artists).To(HaveLen(1))
Expect(idx[3].Artists[0].Name).To(Equal(artistCJK.Name))
// Restore the original value
artistBeatles.SortArtistName = ""
@@ -285,13 +303,19 @@ var _ = Describe("ArtistRepository", func() {
It("returns the index when SortArtistName is empty", func() {
idx, err := repo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(4))
Expect(idx[0].ID).To(Equal("B"))
Expect(idx[0].Artists).To(HaveLen(1))
Expect(idx[0].Artists[0].Name).To(Equal(artistBeatles.Name))
Expect(idx[1].ID).To(Equal("K"))
Expect(idx[1].Artists).To(HaveLen(1))
Expect(idx[1].Artists[0].Name).To(Equal(artistKraftwerk.Name))
Expect(idx[2].ID).To(Equal("R"))
Expect(idx[2].Artists).To(HaveLen(1))
Expect(idx[2].Artists[0].Name).To(Equal(artistPunctuation.Name))
Expect(idx[3].ID).To(Equal("S"))
Expect(idx[3].Artists).To(HaveLen(1))
Expect(idx[3].Artists[0].Name).To(Equal(artistCJK.Name))
})
})
@@ -377,7 +401,7 @@ var _ = Describe("ArtistRepository", func() {
// Admin users can see all content when valid library IDs are provided
idx, err := repo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(4))
// With non-existent library ID, admin users see no content because no artists are associated with that library
idx, err = repo.GetIndex(false, []int{999})
@@ -488,7 +512,7 @@ var _ = Describe("ArtistRepository", func() {
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
// Test the search // Test the search
results, err := (*testRepo).Search("550e8400-e29b-41d4-a716-446655440010", 0, 10) results, err := (*testRepo).Search("550e8400-e29b-41d4-a716-446655440010", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
if shouldFind { if shouldFind {
@@ -519,12 +543,12 @@ var _ = Describe("ArtistRepository", func() {
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
// Restricted user should not find this artist // Restricted user should not find this artist
results, err := restrictedRepo.Search("a74b1b7f-71a5-4011-9441-d0b5e4122711", 0, 10) results, err := restrictedRepo.Search("a74b1b7f-71a5-4011-9441-d0b5e4122711", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty()) Expect(results).To(BeEmpty())
// But admin should find it // But admin should find it
results, err = repo.Search("a74b1b7f-71a5-4011-9441-d0b5e4122711", 0, 10) results, err = repo.Search("a74b1b7f-71a5-4011-9441-d0b5e4122711", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1)) Expect(results).To(HaveLen(1))
@@ -536,7 +560,7 @@ var _ = Describe("ArtistRepository", func() {
Context("Text Search", func() { Context("Text Search", func() {
It("allows admin to find artists by name regardless of library", func() { It("allows admin to find artists by name regardless of library", func() {
results, err := repo.Search("Beatles", 0, 10) results, err := repo.Search("Beatles", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1)) Expect(results).To(HaveLen(1))
Expect(results[0].Name).To(Equal("The Beatles")) Expect(results[0].Name).To(Equal("The Beatles"))
@@ -556,7 +580,7 @@ var _ = Describe("ArtistRepository", func() {
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
// Restricted user should not find this artist // Restricted user should not find this artist
results, err := restrictedRepo.Search("Unique Search Name", 0, 10) results, err := restrictedRepo.Search("Unique Search Name", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty(), "Text search should respect library filtering") Expect(results).To(BeEmpty(), "Text search should respect library filtering")
@@ -625,11 +649,11 @@ var _ = Describe("ArtistRepository", func() {
It("sees all artists regardless of library permissions", func() { It("sees all artists regardless of library permissions", func() {
count, err := repo.CountAll() count, err := repo.CountAll()
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(int64(2))) Expect(count).To(Equal(int64(4)))
artists, err := repo.GetAll() artists, err := repo.GetAll()
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(artists).To(HaveLen(2)) Expect(artists).To(HaveLen(4))
exists, err := repo.Exists(artistBeatles.ID) exists, err := repo.Exists(artistBeatles.ID)
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
@@ -661,10 +685,10 @@ var _ = Describe("ArtistRepository", func() {
// Should see missing artist in GetAll by default for admin users // Should see missing artist in GetAll by default for admin users
artists, err := repo.GetAll() artists, err := repo.GetAll()
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(artists).To(HaveLen(3)) // Including the missing artist Expect(artists).To(HaveLen(5)) // Including the missing artist
// Search never returns missing artists (hardcoded behavior) // Search never returns missing artists (hardcoded behavior)
results, err := repo.Search("Missing Artist", 0, 10) results, err := repo.Search("Missing Artist", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty()) Expect(results).To(BeEmpty())
}) })
@@ -718,11 +742,11 @@ var _ = Describe("ArtistRepository", func() {
}) })
It("Search returns empty results for users without library access", func() { It("Search returns empty results for users without library access", func() {
results, err := restrictedRepo.Search("Beatles", 0, 10) results, err := restrictedRepo.Search("Beatles", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty()) Expect(results).To(BeEmpty())
results, err = restrictedRepo.Search("Kraftwerk", 0, 10) results, err = restrictedRepo.Search("Kraftwerk", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty()) Expect(results).To(BeEmpty())
}) })
@@ -767,19 +791,19 @@ var _ = Describe("ArtistRepository", func() {
It("CountAll returns correct count after gaining access", func() { It("CountAll returns correct count after gaining access", func() {
count, err := restrictedRepo.CountAll() count, err := restrictedRepo.CountAll()
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(int64(2))) // Beatles and Kraftwerk Expect(count).To(Equal(int64(4))) // Beatles, Kraftwerk, Seatbelts, and The Roots
}) })
It("GetAll returns artists after gaining access", func() { It("GetAll returns artists after gaining access", func() {
artists, err := restrictedRepo.GetAll() artists, err := restrictedRepo.GetAll()
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(artists).To(HaveLen(2)) Expect(artists).To(HaveLen(4))
var names []string var names []string
for _, artist := range artists { for _, artist := range artists {
names = append(names, artist.Name) names = append(names, artist.Name)
} }
Expect(names).To(ContainElements("The Beatles", "Kraftwerk")) Expect(names).To(ContainElements("The Beatles", "Kraftwerk", "シートベルツ", "The Roots"))
}) })
It("Exists returns true for accessible artists", func() { It("Exists returns true for accessible artists", func() {
@@ -796,7 +820,7 @@ var _ = Describe("ArtistRepository", func() {
// With valid library access, should see artists // With valid library access, should see artists
idx, err := restrictedRepo.GetIndex(false, []int{1}) idx, err := restrictedRepo.GetIndex(false, []int{1})
Expect(err).ToNot(HaveOccurred()) Expect(err).ToNot(HaveOccurred())
Expect(idx).To(HaveLen(2)) Expect(idx).To(HaveLen(4))
// With non-existent library ID, should see nothing (non-admin user) // With non-existent library ID, should see nothing (non-admin user)
idx, err = restrictedRepo.GetIndex(false, []int{999}) idx, err = restrictedRepo.GetIndex(false, []int{999})


@@ -11,7 +11,6 @@ import (
. "github.com/Masterminds/squirrel"
"github.com/deluan/rest"
-"github.com/google/uuid"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
@@ -58,8 +57,11 @@ func (m *dbMediaFile) PostScan() error {
func (m *dbMediaFile) PostMapArgs(args map[string]any) error {
fullText := []string{m.FullTitle(), m.Album, m.Artist, m.AlbumArtist,
m.SortTitle, m.SortAlbumName, m.SortArtistName, m.SortAlbumArtistName, m.DiscSubtitle}
-fullText = append(fullText, m.MediaFile.Participants.AllNames()...)
+participantNames := m.MediaFile.Participants.AllNames()
+fullText = append(fullText, participantNames...)
args["full_text"] = formatFullText(fullText...)
+args["search_participants"] = strings.Join(participantNames, " ")
+args["search_normalized"] = normalizeForFTS(m.FullTitle(), m.Album, m.Artist, m.AlbumArtist)
args["tags"] = marshalTags(m.MediaFile.Tags)
args["participants"] = marshalParticipants(m.MediaFile.Participants)
return nil
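The `normalizeForFTS` helper referenced above is not shown in this hunk. As a hypothetical illustration only (not Navidrome's actual implementation), a normalizer of this kind could lowercase its inputs and strip punctuation so that searches like "The Roots!" and "the roots" hit the same index entry:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalizeForFTS is a hypothetical sketch: lowercase the inputs and drop
// punctuation, then join the non-empty results. The real helper in the diff
// may behave differently; this only illustrates the idea.
func normalizeForFTS(values ...string) string {
	var out []string
	for _, v := range values {
		v = strings.Map(func(r rune) rune {
			if unicode.IsPunct(r) {
				return -1 // drop punctuation
			}
			return unicode.ToLower(r)
		}, v)
		if v = strings.TrimSpace(v); v != "" {
			out = append(out, v)
		}
	}
	return strings.Join(out, " ")
}

func main() {
	fmt.Println(normalizeForFTS("Things Fall Apart", "The Roots!"))
	// prints "things fall apart the roots"
}
```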
@@ -425,18 +427,21 @@ func (r *mediaFileRepository) FindRecentFilesByProperties(missing model.MediaFil
return res.toModels(), nil
}
-func (r *mediaFileRepository) Search(q string, offset int, size int, options ...model.QueryOptions) (model.MediaFiles, error) {
+var mediaFileSearchConfig = searchConfig{
+NaturalOrder: "media_file.rowid",
+OrderBy: []string{"title"},
+MBIDFields: []string{"mbz_recording_id", "mbz_release_track_id"},
+}
+func (r *mediaFileRepository) Search(q string, options ...model.QueryOptions) (model.MediaFiles, error) {
+var opts model.QueryOptions
+if len(options) > 0 {
+opts = options[0]
+}
var res dbMediaFiles
-if uuid.Validate(q) == nil {
-err := r.searchByMBID(r.selectMediaFile(options...), q, []string{"mbz_recording_id", "mbz_release_track_id"}, &res)
+err := r.doSearch(r.selectMediaFile(options...), q, &res, mediaFileSearchConfig, opts)
if err != nil {
-return nil, fmt.Errorf("searching media_file by MBID %q: %w", q, err)
-}
-} else {
-err := r.doSearch(r.selectMediaFile(options...), q, offset, size, &res, "media_file.rowid", "title")
-if err != nil {
-return nil, fmt.Errorf("searching media_file by query %q: %w", q, err)
-}
+return nil, fmt.Errorf("searching media_file %q: %w", q, err)
}
return res.toModels(), nil
}
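The new `Search` signature drops the explicit `offset`/`size` parameters in favor of an optional trailing `model.QueryOptions`. A minimal sketch of that variadic-defaulting pattern, with a reduced `QueryOptions` type standing in for the real one:

```go
package main

import "fmt"

// QueryOptions is a reduced stand-in for model.QueryOptions, used here only
// to illustrate the calling convention.
type QueryOptions struct {
	Max    int
	Offset int
}

// firstOr mirrors the defaulting logic in the new Search: use the first
// option if the caller supplied one, otherwise fall back to the zero value.
func firstOr(options ...QueryOptions) QueryOptions {
	if len(options) > 0 {
		return options[0]
	}
	return QueryOptions{}
}

func main() {
	fmt.Println(firstOr().Max)                      // no options: zero value, prints 0
	fmt.Println(firstOr(QueryOptions{Max: 10}).Max) // explicit option, prints 10
}
```

This keeps existing call sites like `repo.Search(q)` valid while letting tests pass `model.QueryOptions{Max: 10}` where they previously passed `0, 10`.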


@@ -39,7 +39,7 @@ var _ = Describe("MediaRepository", func() {
})
It("counts the number of mediafiles in the DB", func() {
-Expect(mr.CountAll()).To(Equal(int64(10)))
+Expect(mr.CountAll()).To(Equal(int64(13)))
})
Describe("CountBySuffix", func() {
@@ -527,7 +527,7 @@ var _ = Describe("MediaRepository", func() {
Describe("Search", func() {
Context("text search", func() {
It("finds media files by title", func() {
-results, err := mr.Search("Antenna", 0, 10)
+results, err := mr.Search("Antenna", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(3)) // songAntenna, songAntennaWithLyrics, songAntenna2
for _, result := range results {
@@ -536,7 +536,7 @@ var _ = Describe("MediaRepository", func() {
})
It("finds media files case insensitively", func() {
-results, err := mr.Search("antenna", 0, 10)
+results, err := mr.Search("antenna", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(3))
for _, result := range results {
@@ -545,7 +545,7 @@ var _ = Describe("MediaRepository", func() {
})
It("returns empty result when no matches found", func() {
-results, err := mr.Search("nonexistent", 0, 10)
+results, err := mr.Search("nonexistent", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty())
})
@@ -578,7 +578,7 @@ var _ = Describe("MediaRepository", func() {
})
It("finds media file by mbz_recording_id", func() {
-results, err := mr.Search("550e8400-e29b-41d4-a716-446655440020", 0, 10)
+results, err := mr.Search("550e8400-e29b-41d4-a716-446655440020", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].ID).To(Equal("test-mbid-mediafile"))
@@ -586,7 +586,7 @@ var _ = Describe("MediaRepository", func() {
})
It("finds media file by mbz_release_track_id", func() {
-results, err := mr.Search("550e8400-e29b-41d4-a716-446655440021", 0, 10)
+results, err := mr.Search("550e8400-e29b-41d4-a716-446655440021", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].ID).To(Equal("test-mbid-mediafile"))
@@ -594,7 +594,7 @@ var _ = Describe("MediaRepository", func() {
})
It("returns empty result when MBID is not found", func() {
-results, err := mr.Search("550e8400-e29b-41d4-a716-446655440099", 0, 10)
+results, err := mr.Search("550e8400-e29b-41d4-a716-446655440099", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty())
})
@@ -614,7 +614,7 @@ var _ = Describe("MediaRepository", func() {
Expect(err).ToNot(HaveOccurred())
// Search never returns missing media files (hardcoded behavior)
-results, err := mr.Search("550e8400-e29b-41d4-a716-446655440022", 0, 10)
+results, err := mr.Search("550e8400-e29b-41d4-a716-446655440022", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty())


@@ -56,12 +56,22 @@ func al(al model.Album) model.Album {
return al
}
+func alWithTags(a model.Album, tags model.Tags) model.Album {
+a = al(a)
+a.Tags = tags
+return a
+}
var (
artistKraftwerk = model.Artist{ID: "2", Name: "Kraftwerk", OrderArtistName: "kraftwerk"}
artistBeatles = model.Artist{ID: "3", Name: "The Beatles", OrderArtistName: "beatles"}
+artistCJK = model.Artist{ID: "4", Name: "シートベルツ", SortArtistName: "Seatbelts", OrderArtistName: "seatbelts"}
+artistPunctuation = model.Artist{ID: "5", Name: "The Roots", OrderArtistName: "roots"}
testArtists = model.Artists{
artistKraftwerk,
artistBeatles,
+artistCJK,
+artistPunctuation,
}
)
@@ -70,11 +80,18 @@ var (
albumAbbeyRoad = al(model.Album{ID: "102", Name: "Abbey Road", AlbumArtist: "The Beatles", OrderAlbumName: "abbey road", AlbumArtistID: "3", EmbedArtPath: p("/beatles/1/come together.mp3"), SongCount: 1, MaxYear: 1969})
albumRadioactivity = al(model.Album{ID: "103", Name: "Radioactivity", AlbumArtist: "Kraftwerk", OrderAlbumName: "radioactivity", AlbumArtistID: "2", EmbedArtPath: p("/kraft/radio/radio.mp3"), SongCount: 2})
albumMultiDisc = al(model.Album{ID: "104", Name: "Multi Disc Album", AlbumArtist: "Test Artist", OrderAlbumName: "multi disc album", AlbumArtistID: "1", EmbedArtPath: p("/test/multi/disc1/track1.mp3"), SongCount: 4})
+albumCJK = al(model.Album{ID: "105", Name: "COWBOY BEBOP", AlbumArtist: "シートベルツ", OrderAlbumName: "cowboy bebop", AlbumArtistID: "4", EmbedArtPath: p("/seatbelts/cowboy-bebop/track1.mp3"), SongCount: 1})
+albumWithVersion = alWithTags(model.Album{ID: "106", Name: "Abbey Road", AlbumArtist: "The Beatles", OrderAlbumName: "abbey road", AlbumArtistID: "3", EmbedArtPath: p("/beatles/2/come together.mp3"), SongCount: 1, MaxYear: 2019},
+model.Tags{model.TagAlbumVersion: {"Deluxe Edition"}})
+albumPunctuation = al(model.Album{ID: "107", Name: "Things Fall Apart", AlbumArtist: "The Roots", OrderAlbumName: "things fall apart", AlbumArtistID: "5", EmbedArtPath: p("/roots/things/track1.mp3"), SongCount: 1})
testAlbums = model.Albums{
albumSgtPeppers,
albumAbbeyRoad,
albumRadioactivity,
albumMultiDisc,
+albumCJK,
+albumWithVersion,
+albumPunctuation,
}
)
@@ -101,6 +118,9 @@ var (
songDisc1Track01 = mf(model.MediaFile{ID: "2002", Title: "Disc 1 Track 1", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 1, TrackNumber: 1, Path: p("/test/multi/disc1/track1.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
songDisc2Track01 = mf(model.MediaFile{ID: "2003", Title: "Disc 2 Track 1", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 2, TrackNumber: 1, Path: p("/test/multi/disc2/track1.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
songDisc1Track02 = mf(model.MediaFile{ID: "2004", Title: "Disc 1 Track 2", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 1, TrackNumber: 2, Path: p("/test/multi/disc1/track2.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
+songCJK = mf(model.MediaFile{ID: "3001", Title: "プラチナ・ジェット", ArtistID: "4", Artist: "シートベルツ", AlbumID: "105", Album: "COWBOY BEBOP", Path: p("/seatbelts/cowboy-bebop/track1.mp3")})
+songVersioned = mf(model.MediaFile{ID: "3002", Title: "Come Together", ArtistID: "3", Artist: "The Beatles", AlbumID: "106", Album: "Abbey Road", Path: p("/beatles/2/come together.mp3")})
+songPunctuation = mf(model.MediaFile{ID: "3003", Title: "!!!!!!!", ArtistID: "5", Artist: "The Roots", AlbumID: "107", Album: "Things Fall Apart", Path: p("/roots/things/track1.mp3")})
testSongs = model.MediaFiles{
songDayInALife,
songComeTogether,
@@ -112,6 +132,9 @@ var (
songDisc1Track01,
songDisc2Track01,
songDisc1Track02,
+songCJK,
+songVersioned,
+songPunctuation,
}
)


@@ -96,16 +96,6 @@ func (r *playlistRepository) Exists(id string) (bool, error) {
}
func (r *playlistRepository) Delete(id string) error {
-usr := loggedUser(r.ctx)
-if !usr.IsAdmin {
-pls, err := r.Get(id)
-if err != nil {
-return err
-}
-if pls.OwnerID != usr.ID {
-return rest.ErrPermissionDenied
-}
-}
return r.delete(And{Eq{"id": id}, r.userFilter()})
}
@@ -113,14 +103,6 @@ func (r *playlistRepository) Put(p *model.Playlist) error {
pls := dbPlaylist{Playlist: *p}
if pls.ID == "" {
pls.CreatedAt = time.Now()
-} else {
-ok, err := r.Exists(pls.ID)
-if err != nil {
-return err
-}
-if !ok {
-return model.ErrNotAuthorized
-}
}
pls.UpdatedAt = time.Now()
@@ -132,7 +114,6 @@ func (r *playlistRepository) Put(p *model.Playlist) error {
if p.IsSmartPlaylist() {
// Do not update tracks at this point, as it may take a long time and lock the DB, breaking the scan process
-//r.refreshSmartPlaylist(p)
return nil
}
// Only update tracks if they were specified
@@ -263,7 +244,22 @@ func (r *playlistRepository) refreshSmartPlaylist(pls *model.Playlist) bool {
From("media_file").LeftJoin("annotation on ("+
"annotation.item_id = media_file.id"+
" AND annotation.item_type = 'media_file'"+
-" AND annotation.user_id = '" + usr.ID + "')")
+" AND annotation.user_id = ?)", usr.ID)
+// Conditionally join album/artist annotation tables only when referenced by criteria or sort
+requiredJoins := rules.RequiredJoins()
+if requiredJoins.Has(criteria.JoinAlbumAnnotation) {
+sq = sq.LeftJoin("annotation AS album_annotation ON ("+
+"album_annotation.item_id = media_file.album_id"+
+" AND album_annotation.item_type = 'album'"+
+" AND album_annotation.user_id = ?)", usr.ID)
+}
+if requiredJoins.Has(criteria.JoinArtistAnnotation) {
+sq = sq.LeftJoin("annotation AS artist_annotation ON ("+
+"artist_annotation.item_id = media_file.artist_id"+
+" AND artist_annotation.item_type = 'artist'"+
+" AND artist_annotation.user_id = ?)", usr.ID)
+}
// Only include media files from libraries the user has access to
sq = r.applyLibraryFilter(sq, "media_file")
@@ -320,10 +316,6 @@ func (r *playlistRepository) updateTracks(id string, tracks model.MediaFiles) er
}
func (r *playlistRepository) updatePlaylist(playlistId string, mediaFileIds []string) error {
-if !r.isWritable(playlistId) {
-return rest.ErrPermissionDenied
-}
// Remove old tracks
del := Delete("playlist_tracks").Where(Eq{"playlist_id": playlistId})
_, err := r.executeSQL(del)
@@ -439,8 +431,7 @@ func (r *playlistRepository) NewInstance() any {
func (r *playlistRepository) Save(entity any) (string, error) {
pls := entity.(*model.Playlist)
-pls.OwnerID = loggedUser(r.ctx).ID
-pls.ID = "" // Make sure we don't override an existing playlist
+pls.ID = "" // Force new creation
err := r.Put(pls)
if err != nil {
return "", err
@@ -450,24 +441,9 @@ func (r *playlistRepository) Save(entity any) (string, error) {
func (r *playlistRepository) Update(id string, entity any, cols ...string) error {
pls := dbPlaylist{Playlist: *entity.(*model.Playlist)}
-current, err := r.Get(id)
-if err != nil {
-return err
-}
-usr := loggedUser(r.ctx)
-if !usr.IsAdmin {
-// Only the owner can update the playlist
-if current.OwnerID != usr.ID {
-return rest.ErrPermissionDenied
-}
-// Regular users can't change the ownership of a playlist
-if pls.OwnerID != "" && pls.OwnerID != usr.ID {
-return rest.ErrPermissionDenied
-}
-}
pls.ID = id
pls.UpdatedAt = time.Now()
-_, err = r.put(id, pls, append(cols, "updatedAt")...)
+_, err := r.put(id, pls, append(cols, "updatedAt")...)
if errors.Is(err, model.ErrNotFound) {
return rest.ErrNotFound
}
@@ -507,23 +483,31 @@ func (r *playlistRepository) removeOrphans() error {
return nil
}
+// renumber updates the position of all tracks in the playlist to be sequential starting from 1, ordered by their
+// current position. This is needed after removing orphan tracks, to ensure there are no gaps in the track numbering.
+// The two-step approach (negate then reassign via CTE) avoids UNIQUE constraint violations on (playlist_id, id).
func (r *playlistRepository) renumber(id string) error {
-var ids []string
-sq := Select("media_file_id").From("playlist_tracks").Where(Eq{"playlist_id": id}).OrderBy("id")
-err := r.queryAllSlice(sq, &ids)
+// Step 1: Negate all IDs to clear the positive ID space
+_, err := r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = -id WHERE playlist_id = ? AND id > 0`, id))
if err != nil {
return err
}
-return r.updatePlaylist(id, ids)
+// Step 2: Assign new sequential positive IDs using UPDATE...FROM with a CTE.
+// The CTE is fully materialized before the UPDATE begins, avoiding self-referencing issues.
+// ORDER BY id DESC restores original order since IDs are now negative.
+_, err = r.executeSQL(Expr(
+`WITH new_ids AS (
+SELECT rowid as rid, ROW_NUMBER() OVER (ORDER BY id DESC) as new_id
+FROM playlist_tracks WHERE playlist_id = ?
+)
+UPDATE playlist_tracks SET id = new_ids.new_id
+FROM new_ids
+WHERE playlist_tracks.rowid = new_ids.rid AND playlist_tracks.playlist_id = ?`, id, id))
+if err != nil {
+return err
+}
+return r.refreshCounters(&model.Playlist{ID: id})
}
-func (r *playlistRepository) isWritable(playlistId string) bool {
-usr := loggedUser(r.ctx)
-if usr.IsAdmin {
-return true
-}
-pls, err := r.Get(playlistId)
-return err == nil && pls.OwnerID == usr.ID
-}
var _ model.PlaylistRepository = (*playlistRepository)(nil)
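The two-step renumbering can be illustrated in-memory: negating first frees the positive ID space, so the subsequent `ROW_NUMBER()` assignment never collides with a still-existing `(playlist_id, id)` pair, and ordering by the (now negative) IDs descending restores the original track order. A slice-based sketch of that arithmetic (not the actual SQL):

```go
package main

import (
	"fmt"
	"sort"
)

// renumber simulates the two SQL steps on a slice of track positions:
// step 1 negates every position; step 2 ranks rows by negated ID descending
// (which is the original ascending order) and assigns 1..N.
func renumber(positions []int) []int {
	// Step 1: negate (UPDATE ... SET id = -id WHERE id > 0)
	for i := range positions {
		positions[i] = -positions[i]
	}
	// Step 2: ROW_NUMBER() OVER (ORDER BY id DESC)
	idx := make([]int, len(positions))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool { return positions[idx[a]] > positions[idx[b]] })
	for rank, i := range idx {
		positions[i] = rank + 1
	}
	return positions
}

func main() {
	// Positions left after deleting track 2 from a 4-track playlist: 1, 3, 4.
	fmt.Println(renumber([]int{1, 3, 4})) // prints "[1 2 3]"
}
```

A direct single-pass `UPDATE` assigning 1..N would risk transiently setting a row to an ID another row still holds, violating the UNIQUE constraint; the negate-first trick sidesteps that without dropping the constraint.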


@@ -287,6 +287,106 @@ var _ = Describe("PlaylistRepository", func() {
}) })
}) })
Describe("Smart Playlists with Album/Artist Annotation Criteria", func() {
var testPlaylistID string
AfterEach(func() {
if testPlaylistID != "" {
_ = repo.Delete(testPlaylistID)
testPlaylistID = ""
}
})
It("matches tracks from starred albums using albumLoved", func() {
// albumRadioactivity (ID "103") is starred in test fixtures
// Songs in album 103: 1003, 1004, 1005, 1006
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.Is{"albumLoved": true},
},
}
newPls := model.Playlist{Name: "Starred Album Songs", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
trackIDs := make([]string, len(pls.Tracks))
for i, t := range pls.Tracks {
trackIDs[i] = t.MediaFileID
}
Expect(trackIDs).To(ConsistOf("1003", "1004", "1005", "1006"))
})
It("matches tracks from starred artists using artistLoved", func() {
// artistBeatles (ID "3") is starred in test fixtures
// Songs with ArtistID "3": 1001, 1002, 3002
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.Is{"artistLoved": true},
},
}
newPls := model.Playlist{Name: "Starred Artist Songs", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
trackIDs := make([]string, len(pls.Tracks))
for i, t := range pls.Tracks {
trackIDs[i] = t.MediaFileID
}
Expect(trackIDs).To(ConsistOf("1001", "1002", "3002"))
})
It("matches tracks with combined album and artist criteria", func() {
// albumLoved=true → songs from album 103 (1003, 1004, 1005, 1006)
// artistLoved=true → songs with artist 3 (1001, 1002)
// Using Any: union of both sets
rules := &criteria.Criteria{
Expression: criteria.Any{
criteria.Is{"albumLoved": true},
criteria.Is{"artistLoved": true},
},
}
newPls := model.Playlist{Name: "Combined Album+Artist", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
trackIDs := make([]string, len(pls.Tracks))
for i, t := range pls.Tracks {
trackIDs[i] = t.MediaFileID
}
Expect(trackIDs).To(ConsistOf("1001", "1002", "1003", "1004", "1005", "1006", "3002"))
})
It("returns no tracks when no albums/artists match", func() {
// No album has rating 5 in fixtures
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.Is{"albumRating": 5},
},
}
newPls := model.Playlist{Name: "No Match", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
Expect(pls.Tracks).To(BeEmpty())
})
})
Describe("Smart Playlists with Tag Criteria", func() { Describe("Smart Playlists with Tag Criteria", func() {
var mfRepo model.MediaFileRepository var mfRepo model.MediaFileRepository
var testPlaylistID string var testPlaylistID string
@@ -401,6 +501,79 @@ var _ = Describe("PlaylistRepository", func() {
}) })
}) })
Describe("Track Deletion and Renumbering", func() {
var testPlaylistID string
AfterEach(func() {
if testPlaylistID != "" {
Expect(repo.Delete(testPlaylistID)).To(BeNil())
testPlaylistID = ""
}
})
// helper to get track positions and media file IDs
getTrackInfo := func(playlistID string) (ids []string, mediaFileIDs []string) {
pls, err := repo.GetWithTracks(playlistID, false, false)
Expect(err).ToNot(HaveOccurred())
for _, t := range pls.Tracks {
ids = append(ids, t.ID)
mediaFileIDs = append(mediaFileIDs, t.MediaFileID)
}
return
}
It("renumbers correctly after deleting a track from the middle", func() {
By("creating a playlist with 4 tracks")
newPls := model.Playlist{Name: "Renumber Test Middle", OwnerID: "userid"}
newPls.AddMediaFilesByID([]string{"1001", "1002", "1003", "1004"})
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("deleting the second track (position 2)")
tracksRepo := repo.Tracks(newPls.ID, false)
Expect(tracksRepo.Delete("2")).To(Succeed())
By("verifying remaining tracks are renumbered sequentially")
ids, mediaFileIDs := getTrackInfo(newPls.ID)
Expect(ids).To(Equal([]string{"1", "2", "3"}))
Expect(mediaFileIDs).To(Equal([]string{"1001", "1003", "1004"}))
})
It("renumbers correctly after deleting the first track", func() {
By("creating a playlist with 3 tracks")
newPls := model.Playlist{Name: "Renumber Test First", OwnerID: "userid"}
newPls.AddMediaFilesByID([]string{"1001", "1002", "1003"})
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("deleting the first track (position 1)")
tracksRepo := repo.Tracks(newPls.ID, false)
Expect(tracksRepo.Delete("1")).To(Succeed())
By("verifying remaining tracks are renumbered sequentially")
ids, mediaFileIDs := getTrackInfo(newPls.ID)
Expect(ids).To(Equal([]string{"1", "2"}))
Expect(mediaFileIDs).To(Equal([]string{"1002", "1003"}))
})
It("renumbers correctly after deleting the last track", func() {
By("creating a playlist with 3 tracks")
newPls := model.Playlist{Name: "Renumber Test Last", OwnerID: "userid"}
newPls.AddMediaFilesByID([]string{"1001", "1002", "1003"})
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("deleting the last track (position 3)")
tracksRepo := repo.Tracks(newPls.ID, false)
Expect(tracksRepo.Delete("3")).To(Succeed())
By("verifying remaining tracks are renumbered sequentially")
ids, mediaFileIDs := getTrackInfo(newPls.ID)
Expect(ids).To(Equal([]string{"1", "2"}))
Expect(mediaFileIDs).To(Equal([]string{"1001", "1002"}))
})
})
Describe("Smart Playlists Library Filtering", func() {
var mfRepo model.MediaFileRepository
var testPlaylistID string


@@ -140,15 +140,7 @@ func (r *playlistTrackRepository) NewInstance() any {
 return &model.PlaylistTrack{}
 }
-func (r *playlistTrackRepository) isTracksEditable() bool {
-return r.playlistRepo.isWritable(r.playlistId) && !r.playlist.IsSmartPlaylist()
-}
 func (r *playlistTrackRepository) Add(mediaFileIds []string) (int, error) {
-if !r.isTracksEditable() {
-return 0, rest.ErrPermissionDenied
-}
 if len(mediaFileIds) > 0 {
 log.Debug(r.ctx, "Adding songs to playlist", "playlistId", r.playlistId, "mediaFileIds", mediaFileIds)
 } else {
@@ -196,22 +188,7 @@ func (r *playlistTrackRepository) AddDiscs(discs []model.DiscID) (int, error) {
 return r.addMediaFileIds(clauses)
 }
-// Get ids from all current tracks
-func (r *playlistTrackRepository) getTracks() ([]string, error) {
-all := r.newSelect().Columns("media_file_id").Where(Eq{"playlist_id": r.playlistId}).OrderBy("id")
-var ids []string
-err := r.queryAllSlice(all, &ids)
-if err != nil {
-log.Error(r.ctx, "Error querying current tracks from playlist", "playlistId", r.playlistId, err)
-return nil, err
-}
-return ids, nil
-}
 func (r *playlistTrackRepository) Delete(ids ...string) error {
-if !r.isTracksEditable() {
-return rest.ErrPermissionDenied
-}
 err := r.delete(And{Eq{"playlist_id": r.playlistId}, Eq{"id": ids}})
 if err != nil {
 return err
@@ -221,9 +198,6 @@ func (r *playlistTrackRepository) Delete(ids ...string) error {
 }
 func (r *playlistTrackRepository) DeleteAll() error {
-if !r.isTracksEditable() {
-return rest.ErrPermissionDenied
-}
 err := r.delete(Eq{"playlist_id": r.playlistId})
 if err != nil {
 return err
@@ -232,16 +206,45 @@ func (r *playlistTrackRepository) DeleteAll() error {
 return r.playlistRepo.renumber(r.playlistId)
 }
+// Reorder moves a track from pos to newPos, shifting other tracks accordingly.
 func (r *playlistTrackRepository) Reorder(pos int, newPos int) error {
-if !r.isTracksEditable() {
-return rest.ErrPermissionDenied
+if pos == newPos {
+return nil
 }
-ids, err := r.getTracks()
+pid := r.playlistId
+// Step 1: Move the source track out of the way (temporary sentinel value)
+_, err := r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = -999999 WHERE playlist_id = ? AND id = ?`, pid, pos))
 if err != nil {
 return err
 }
-newOrder := slice.Move(ids, pos-1, newPos-1)
-return r.playlistRepo.updatePlaylist(r.playlistId, newOrder)
+// Step 2: Shift the affected range using negative values to avoid unique constraint violations
+if pos < newPos {
+_, err = r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = -(id - 1) WHERE playlist_id = ? AND id > ? AND id <= ?`,
+pid, pos, newPos))
+} else {
+_, err = r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = -(id + 1) WHERE playlist_id = ? AND id >= ? AND id < ?`,
+pid, newPos, pos))
+}
+if err != nil {
+return err
+}
+// Step 3: Flip the shifted range back to positive
+_, err = r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = -id WHERE playlist_id = ? AND id < 0 AND id != -999999`, pid))
+if err != nil {
+return err
+}
+// Step 4: Place the source track at its new position
+_, err = r.executeSQL(Expr(
+`UPDATE playlist_tracks SET id = ? WHERE playlist_id = ? AND id = -999999`, newPos, pid))
+return err
 }
 var _ model.PlaylistTrackRepository = (*playlistTrackRepository)(nil)


@@ -109,11 +109,10 @@ func booleanFilter(field string, value any) Sqlizer {
 func fullTextFilter(tableName string, mbidFields ...string) func(string, any) Sqlizer {
 return func(field string, value any) Sqlizer {
 v := strings.ToLower(value.(string))
-cond := cmp.Or(
+return cmp.Or[Sqlizer](
 mbidExpr(tableName, v, mbidFields...),
-fullTextExpr(tableName, v),
+getSearchStrategy(tableName, v),
 )
-return cond
 }
 }

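The change above leans on `cmp.Or` returning its first non-zero argument, with the explicit `[Sqlizer]` type parameter needed because the arguments are different concrete implementations of the interface. A standalone sketch of that fallback pattern (the `expr`/`likeExpr` types are illustrative stand-ins for Sqlizer):

```go
package main

import (
	"cmp"
	"fmt"
)

// expr stands in for squirrel's Sqlizer interface in this sketch.
type expr interface{ SQL() string }

type likeExpr string

func (l likeExpr) SQL() string { return string(l) }

// firstNonNil mirrors the filter construction: cmp.Or returns the first
// non-zero argument, so a nil primary expression falls back to the secondary.
func firstNonNil(primary, fallback expr) expr {
	return cmp.Or(primary, fallback)
}

func main() {
	var mbid expr // nil: the query was not a valid UUID
	fmt.Println(firstNonNil(mbid, likeExpr("full_text LIKE ?")).SQL())
}
```

Because a nil interface is the zero value of the interface type, `cmp.Or` skips it and returns the fallback, which is exactly the MBID-then-search dispatch the filter needs.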

@@ -26,7 +26,9 @@ var _ = Describe("sqlRestful", func() {
 Expect(r.parseRestFilters(context.Background(), options)).To(BeNil())
 })
-It(`returns nil if tries a filter with fullTextExpr("'")`, func() {
+It(`returns nil if tries a filter with legacySearchExpr("'")`, func() {
+DeferCleanup(configtest.SetupConfig())
+conf.Server.Search.Backend = "legacy"
 r.filterMappings = map[string]filterFunc{
 "name": fullTextFilter("table"),
 }
@@ -77,6 +79,7 @@ var _ = Describe("sqlRestful", func() {
 BeforeEach(func() {
 DeferCleanup(configtest.SetupConfig())
+conf.Server.Search.Backend = "legacy"
 tableName = "test_table"
 mbidFields = []string{"mbid", "artist_mbid"}
 filter = fullTextFilter(tableName, mbidFields...)
@@ -99,11 +102,11 @@ var _ = Describe("sqlRestful", func() {
 uuid := "550e8400-e29b-41d4-a716-446655440000"
 result := noMbidFilter("search", uuid)
-// mbidExpr with no fields returns nil, so cmp.Or falls back to fullTextExpr
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% 550e8400-e29b-41d4-a716-446655440000%"},
-}
-Expect(result).To(Equal(expected))
+// mbidExpr with no fields returns nil, so cmp.Or falls back to search strategy
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% 550e8400-e29b-41d4-a716-446655440000%"))
 })
 })
@@ -111,54 +114,75 @@ var _ = Describe("sqlRestful", func() {
 It("returns full text search condition only", func() {
 result := filter("search", "beatles")
-// mbidExpr returns nil for non-UUIDs, so fullTextExpr result is returned directly
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% beatles%"},
-}
-Expect(result).To(Equal(expected))
+// mbidExpr returns nil for non-UUIDs, so search strategy result is returned directly
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% beatles%"))
 })
 It("handles multi-word search terms", func() {
 result := filter("search", "the beatles abbey road")
-// Should return And condition directly
-andCondition, ok := result.(squirrel.And)
-Expect(ok).To(BeTrue())
-Expect(andCondition).To(HaveLen(4))
-// Check that all words are present (order may vary)
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% the%"}))
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% beatles%"}))
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% abbey%"}))
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% road%"}))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+// All words should be present as LIKE conditions
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(HaveLen(4))
+Expect(args).To(ContainElement("% the%"))
+Expect(args).To(ContainElement("% beatles%"))
+Expect(args).To(ContainElement("% abbey%"))
+Expect(args).To(ContainElement("% road%"))
 })
 })
 Context("when SearchFullString config changes behavior", func() {
 It("uses different separator with SearchFullString=false", func() {
-conf.Server.SearchFullString = false
+conf.Server.Search.FullString = false
 result := filter("search", "test query")
-andCondition, ok := result.(squirrel.And)
-Expect(ok).To(BeTrue())
-Expect(andCondition).To(HaveLen(2))
-// Check that all words are present with leading space (order may vary)
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% test%"}))
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "% query%"}))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(HaveLen(2))
+Expect(args).To(ContainElement("% test%"))
+Expect(args).To(ContainElement("% query%"))
 })
 It("uses no separator with SearchFullString=true", func() {
-conf.Server.SearchFullString = true
+conf.Server.Search.FullString = true
 result := filter("search", "test query")
-andCondition, ok := result.(squirrel.And)
-Expect(ok).To(BeTrue())
-Expect(andCondition).To(HaveLen(2))
-// Check that all words are present without leading space (order may vary)
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "%test%"}))
-Expect(andCondition).To(ContainElement(squirrel.Like{"test_table.full_text": "%query%"}))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(HaveLen(2))
+Expect(args).To(ContainElement("%test%"))
+Expect(args).To(ContainElement("%query%"))
+})
+})
+Context("single-character queries (regression: must not be rejected)", func() {
+It("returns valid filter for single-char query with legacy backend", func() {
+conf.Server.Search.Backend = "legacy"
+result := filter("search", "a")
+Expect(result).ToNot(BeNil(), "single-char REST filter must not be dropped")
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("LIKE"))
+Expect(args).ToNot(BeEmpty())
+})
+It("returns valid filter for single-char query with FTS backend", func() {
+conf.Server.Search.Backend = "fts"
+conf.Server.Search.FullString = false
+ftsFilter := fullTextFilter(tableName, mbidFields...)
+result := ftsFilter("search", "a")
+Expect(result).ToNot(BeNil(), "single-char REST filter must not be dropped")
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("MATCH"))
+Expect(args).ToNot(BeEmpty())
 })
 })
@@ -176,10 +200,10 @@ var _ = Describe("sqlRestful", func() {
 It("handles special characters that are sanitized", func() {
 result := filter("search", "don't")
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% dont%"}, // str.SanitizeStrings removes quotes
-}
-Expect(result).To(Equal(expected))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% dont%"))
 })
 It("returns nil for single quote (SQL injection protection)", func() {
@@ -203,31 +227,30 @@ var _ = Describe("sqlRestful", func() {
 result := filter("search", "550e8400-invalid-uuid")
 // Should return full text filter since UUID is invalid
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% 550e8400-invalid-uuid%"},
-}
-Expect(result).To(Equal(expected))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% 550e8400-invalid-uuid%"))
 })
 It("handles empty mbid fields array", func() {
 emptyMbidFilter := fullTextFilter(tableName, []string{}...)
 result := emptyMbidFilter("search", "test")
-// mbidExpr with empty fields returns nil, so cmp.Or falls back to fullTextExpr
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% test%"},
-}
-Expect(result).To(Equal(expected))
+// mbidExpr with empty fields returns nil, so search strategy result is returned directly
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% test%"))
 })
 It("converts value to lowercase before processing", func() {
 result := filter("search", "TEST")
-// The function converts to lowercase internally
-expected := squirrel.And{
-squirrel.Like{"test_table.full_text": "% test%"},
-}
-Expect(result).To(Equal(expected))
+sql, args, err := result.ToSql()
+Expect(err).ToNot(HaveOccurred())
+Expect(sql).To(ContainSubstring("test_table.full_text LIKE"))
+Expect(args).To(ContainElement("% test%"))
 })
 })
 })


@@ -15,36 +15,71 @@ func formatFullText(text ...string) string {
 return " " + fullText
 }
-// doSearch performs a full-text search with the specified parameters.
-// The naturalOrder is used to sort results when no full-text filter is applied. It is useful for cases like
-// OpenSubsonic, where an empty search query should return all results in a natural order. Normally the parameter
-// should be `tableName + ".rowid"`, but some repositories (ex: artist) may use a different natural order.
-func (r sqlRepository) doSearch(sq SelectBuilder, q string, offset, size int, results any, naturalOrder string, orderBys ...string) error {
+// searchConfig holds per-repository constants for doSearch.
+type searchConfig struct {
+NaturalOrder string // ORDER BY for empty-query results (e.g. "album.rowid")
+OrderBy []string // ORDER BY for text search results (e.g. ["name"])
+MBIDFields []string // columns to match when query is a UUID
+// LibraryFilter overrides the default applyLibraryFilter for FTS Phase 1.
+// Needed when library access requires a junction table (e.g. artist → library_artist).
+LibraryFilter func(sq SelectBuilder) SelectBuilder
+}
+// searchStrategy defines how to execute a text search against a repository table.
+// options carries filters and pagination that must reach all query phases,
+// including FTS Phase 1 which builds its own query outside sq.
+type searchStrategy interface {
+Sqlizer
+execute(r sqlRepository, sq SelectBuilder, dest any, cfg searchConfig, options model.QueryOptions) error
+}
+// getSearchStrategy returns the appropriate search strategy based on config and query content.
+// Returns nil when the query produces no searchable tokens.
+func getSearchStrategy(tableName, query string) searchStrategy {
+if conf.Server.Search.Backend == "legacy" || conf.Server.Search.FullString {
+return newLegacySearch(tableName, query)
+}
+if containsCJK(query) {
+return newLikeSearch(tableName, query)
+}
+return newFTSSearch(tableName, query)
+}
+// doSearch dispatches a search query: empty → natural order, UUID → MBID match,
+// otherwise delegates to getSearchStrategy. sq must already have LIMIT/OFFSET set
+// via newSelect(options...). options is forwarded so FTS Phase 1 can apply the same
+// filters and pagination independently.
+func (r sqlRepository) doSearch(sq SelectBuilder, q string, results any, cfg searchConfig, options model.QueryOptions) error {
 q = strings.TrimSpace(q)
 q = strings.TrimSuffix(q, "*")
+sq = sq.Where(Eq{r.tableName + ".missing": false})
+// Empty query (OpenSubsonic `search3?query=""`) — return all in natural order.
+if q == "" || q == `""` {
+sq = sq.OrderBy(cfg.NaturalOrder)
+return r.queryAll(sq, results, options)
+}
+// MBID search: if query is a valid UUID, search by MBID fields instead
+if uuid.Validate(q) == nil && len(cfg.MBIDFields) > 0 {
+sq = sq.Where(mbidExpr(r.tableName, q, cfg.MBIDFields...))
+return r.queryAll(sq, results)
+}
+// Min-length guard: single-character queries are too broad for search3.
+// This check lives here (not in the strategies) so that fullTextFilter
+// (REST filter path) can still use single-character queries.
 if len(q) < 2 {
 return nil
 }
-filter := fullTextExpr(r.tableName, q)
-if filter != nil {
-sq = sq.Where(filter)
-sq = sq.OrderBy(orderBys...)
-} else {
-// This is to speed up the results of `search3?query=""`, for OpenSubsonic
-// If the filter is empty, we sort by the specified natural order.
-sq = sq.OrderBy(naturalOrder)
-}
-sq = sq.Where(Eq{r.tableName + ".missing": false})
-sq = sq.Limit(uint64(size)).Offset(uint64(offset))
-return r.queryAll(sq, results, model.QueryOptions{Offset: offset})
-}
-func (r sqlRepository) searchByMBID(sq SelectBuilder, mbid string, mbidFields []string, results any) error {
-sq = sq.Where(mbidExpr(r.tableName, mbid, mbidFields...))
-sq = sq.Where(Eq{r.tableName + ".missing": false})
-return r.queryAll(sq, results)
+strategy := getSearchStrategy(r.tableName, q)
+if strategy == nil {
+return nil
+}
+return strategy.execute(r, sq, results, cfg, options)
 }
 func mbidExpr(tableName, mbid string, mbidFields ...string) Sqlizer {
@@ -58,20 +93,3 @@ func mbidExpr(tableName, mbid string, mbidFields ...string) Sqlizer {
 }
 return Or(cond)
 }
-func fullTextExpr(tableName string, s string) Sqlizer {
-q := str.SanitizeStrings(s)
-if q == "" {
-return nil
-}
-var sep string
-if !conf.Server.SearchFullString {
-sep = " "
-}
-parts := strings.Split(q, " ")
-filters := And{}
-for _, part := range parts {
-filters = append(filters, Like{tableName + ".full_text": "%" + sep + part + "%"})
-}
-return filters
-}


@@ -0,0 +1,422 @@
package persistence
import (
"fmt"
"regexp"
"strings"
"unicode"
"unicode/utf8"
. "github.com/Masterminds/squirrel"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
)
// containsCJK returns true if the string contains any CJK (Chinese/Japanese/Korean) characters.
// CJK text doesn't use spaces between words, so FTS5's unicode61 tokenizer treats entire
// CJK phrases as single tokens, making token-based search ineffective for CJK content.
func containsCJK(s string) bool {
for _, r := range s {
if unicode.Is(unicode.Han, r) ||
unicode.Is(unicode.Hiragana, r) ||
unicode.Is(unicode.Katakana, r) ||
unicode.Is(unicode.Hangul, r) {
return true
}
}
return false
}
// fts5SpecialChars matches characters that should be stripped from user input.
// We keep only Unicode letters, numbers, whitespace, * (prefix wildcard), " (phrase quotes),
// and \x00 (internal placeholder marker). All punctuation is removed because the unicode61
// tokenizer treats it as token separators, and characters like ' can cause FTS5 parse errors
// as unbalanced string delimiters.
var fts5SpecialChars = regexp.MustCompile(`[^\p{L}\p{N}\s*"\x00]`)
// fts5PunctStrip strips everything except letters and numbers (no whitespace, wildcards, or quotes).
// Used for normalizing words at index time to create concatenated forms (e.g., "R.E.M." → "REM").
var fts5PunctStrip = regexp.MustCompile(`[^\p{L}\p{N}]`)
// fts5Operators matches FTS5 boolean operators as whole words (case-insensitive).
var fts5Operators = regexp.MustCompile(`(?i)\b(AND|OR|NOT|NEAR)\b`)
// fts5LeadingStar matches a * at the start of a token. FTS5 only supports * at the end (prefix queries).
var fts5LeadingStar = regexp.MustCompile(`(^|[\s])\*+`)
// normalizeForFTS takes multiple strings, strips non-letter/non-number characters from each word,
// and returns a space-separated string of words that changed after stripping (deduplicated).
// This is used at index time to create concatenated forms: "R.E.M." → "REM", "AC/DC" → "ACDC".
func normalizeForFTS(values ...string) string {
seen := make(map[string]struct{})
var result []string
for _, v := range values {
for _, word := range strings.Fields(v) {
stripped := fts5PunctStrip.ReplaceAllString(word, "")
if stripped == "" || stripped == word {
continue
}
lower := strings.ToLower(stripped)
if _, ok := seen[lower]; ok {
continue
}
seen[lower] = struct{}{}
result = append(result, stripped)
}
}
return strings.Join(result, " ")
}
// isSingleUnicodeLetter returns true if token is exactly one Unicode letter.
func isSingleUnicodeLetter(token string) bool {
r, size := utf8.DecodeRuneInString(token)
return size == len(token) && size > 0 && unicode.IsLetter(r)
}
// namePunctuation is the set of characters commonly used as separators in artist/album
// names (hyphens, slashes, dots, apostrophes). Only words containing these are candidates
// for punctuated-word processing; other special characters (^, :, &) are just stripped.
const namePunctuation = `-/.''`
// processPunctuatedWords handles words with embedded name punctuation before the general
// special-character stripping. For each punctuated word it produces either:
// - A quoted phrase for dotted abbreviations: R.E.M. → "R E M"
// - A phrase+concat OR for other patterns: a-ha → ("a ha" OR aha*)
func processPunctuatedWords(input string, phrases []string) (string, []string) {
words := strings.Fields(input)
var result []string
for _, w := range words {
if strings.HasPrefix(w, "\x00") || strings.ContainsAny(w, `*"`) || !strings.ContainsAny(w, namePunctuation) {
result = append(result, w)
continue
}
concat := fts5PunctStrip.ReplaceAllString(w, "")
if concat == "" || concat == w {
result = append(result, w)
continue
}
subTokens := strings.Fields(fts5SpecialChars.ReplaceAllString(w, " "))
if len(subTokens) < 2 {
// Single sub-token after splitting (e.g., N' → N): just use the stripped form
result = append(result, concat)
continue
}
// Dotted abbreviations (R.E.M., U.K.) — all single letters separated by dots only
if isDottedAbbreviation(w, subTokens) {
phrases = append(phrases, fmt.Sprintf(`"%s"`, strings.Join(subTokens, " ")))
} else {
// Punctuated names (a-ha, AC/DC, Jay-Z) — phrase for adjacency + concat for search_normalized
phrases = append(phrases, fmt.Sprintf(`("%s" OR %s*)`, strings.Join(subTokens, " "), concat))
}
result = append(result, fmt.Sprintf("\x00PHRASE%d\x00", len(phrases)-1))
}
return strings.Join(result, " "), phrases
}
// isDottedAbbreviation returns true if w uses only dots as punctuation and all sub-tokens
// are single letters (e.g., "R.E.M.", "U.K." but not "a-ha" or "AC/DC").
func isDottedAbbreviation(w string, subTokens []string) bool {
for _, r := range w {
if !unicode.IsLetter(r) && !unicode.IsNumber(r) && r != '.' {
return false
}
}
for _, st := range subTokens {
if !isSingleUnicodeLetter(st) {
return false
}
}
return true
}
// buildFTS5Query preprocesses user input into a safe FTS5 MATCH expression.
// It preserves quoted phrases and * prefix wildcards, neutralizes FTS5 operators
// (by lowercasing them, since FTS5 operators are case-sensitive) and strips
// special characters to prevent query injection.
func buildFTS5Query(userInput string) string {
q := strings.TrimSpace(userInput)
if q == "" || q == `""` {
return ""
}
var phrases []string
result := q
for {
start := strings.Index(result, `"`)
if start == -1 {
break
}
end := strings.Index(result[start+1:], `"`)
if end == -1 {
// Unmatched quote — remove it
result = result[:start] + result[start+1:]
break
}
end += start + 1
phrase := result[start : end+1] // includes quotes
phrases = append(phrases, phrase)
result = result[:start] + fmt.Sprintf("\x00PHRASE%d\x00", len(phrases)-1) + result[end+1:]
}
// Neutralize FTS5 operators by lowercasing them (FTS5 operators are case-sensitive:
// AND, OR, NOT, NEAR are operators, but and, or, not, near are plain tokens)
result = fts5Operators.ReplaceAllStringFunc(result, strings.ToLower)
// Handle words with embedded punctuation (a-ha, AC/DC, R.E.M.) before stripping
result, phrases = processPunctuatedWords(result, phrases)
result = fts5SpecialChars.ReplaceAllString(result, " ")
result = fts5LeadingStar.ReplaceAllString(result, "$1")
tokens := strings.Fields(result)
// Append * to plain tokens for prefix matching (e.g., "love" → "love*").
// Skip tokens that are already wildcarded or are quoted phrase placeholders.
for i, t := range tokens {
if strings.HasPrefix(t, "\x00") || strings.HasSuffix(t, "*") {
continue
}
tokens[i] = t + "*"
}
result = strings.Join(tokens, " ")
for i, phrase := range phrases {
placeholder := fmt.Sprintf("\x00PHRASE%d\x00", i)
result = strings.ReplaceAll(result, placeholder, phrase)
}
return result
}
// ftsColumn pairs an FTS5 column name with its BM25 relevance weight.
type ftsColumn struct {
Name string
Weight float64
}
// ftsColumnDefs defines FTS5 columns and their BM25 relevance weights.
// The order MUST match the column order in the FTS5 table definition (see migrations).
// All columns are both searched and ranked. When adding indexed-but-not-searched
// columns in the future, use Weight: 0 to exclude from the search column filter.
var ftsColumnDefs = map[string][]ftsColumn{
"media_file": {
{"title", 10.0},
{"album", 5.0},
{"artist", 3.0},
{"album_artist", 3.0},
{"sort_title", 1.0},
{"sort_album_name", 1.0},
{"sort_artist_name", 1.0},
{"sort_album_artist_name", 1.0},
{"disc_subtitle", 1.0},
{"search_participants", 2.0},
{"search_normalized", 1.0},
},
"album": {
{"name", 10.0},
{"sort_album_name", 1.0},
{"album_artist", 3.0},
{"search_participants", 2.0},
{"discs", 1.0},
{"catalog_num", 1.0},
{"album_version", 1.0},
{"search_normalized", 1.0},
},
"artist": {
{"name", 10.0},
{"sort_artist_name", 1.0},
{"search_normalized", 1.0},
},
}
// ftsColumnFilters and ftsBM25Weights are precomputed from ftsColumnDefs at init time
// to avoid per-query allocations.
var (
ftsColumnFilters = map[string]string{}
ftsBM25Weights = map[string]string{}
)
func init() {
for table, cols := range ftsColumnDefs {
var names []string
weights := make([]string, len(cols))
for i, c := range cols {
if c.Weight > 0 {
names = append(names, c.Name)
}
weights[i] = fmt.Sprintf("%.1f", c.Weight)
}
ftsColumnFilters[table] = "{" + strings.Join(names, " ") + "}"
ftsBM25Weights[table] = strings.Join(weights, ", ")
}
}
// ftsSearch implements searchStrategy using FTS5 full-text search with BM25 ranking.
type ftsSearch struct {
tableName string
ftsTable string
matchExpr string
rankExpr string
}
// ToSql returns a single-query fallback for the REST filter path (no two-phase split).
func (s *ftsSearch) ToSql() (string, []interface{}, error) {
sql := s.tableName + ".rowid IN (SELECT rowid FROM " + s.ftsTable + " WHERE " + s.ftsTable + " MATCH ?)"
return sql, []interface{}{s.matchExpr}, nil
}
// execute runs a two-phase FTS5 search:
// - Phase 1: lightweight rowid query (main table + FTS + library filter) for ranking and pagination.
// - Phase 2: full SELECT with all JOINs, scoped to Phase 1's rowid set.
//
// Complex ORDER BY (function calls, aggregations) are dropped from Phase 1.
func (s *ftsSearch) execute(r sqlRepository, sq SelectBuilder, dest any, cfg searchConfig, options model.QueryOptions) error {
qualifiedOrderBys := []string{s.rankExpr}
for _, ob := range cfg.OrderBy {
if qualified := qualifyOrderBy(s.tableName, ob); qualified != "" {
qualifiedOrderBys = append(qualifiedOrderBys, qualified)
}
}
// Phase 1: fresh query — must set LIMIT/OFFSET from options explicitly.
// Mirror applyOptions behavior: Max=0 means no limit, not LIMIT 0.
rowidQuery := Select(s.tableName+".rowid").
From(s.tableName).
Join(s.ftsTable+" ON "+s.ftsTable+".rowid = "+s.tableName+".rowid AND "+s.ftsTable+" MATCH ?", s.matchExpr).
Where(Eq{s.tableName + ".missing": false}).
OrderBy(qualifiedOrderBys...)
if options.Max > 0 {
rowidQuery = rowidQuery.Limit(uint64(options.Max))
}
if options.Offset > 0 {
rowidQuery = rowidQuery.Offset(uint64(options.Offset))
}
// Library filter + musicFolderId must be applied here, before pagination.
if cfg.LibraryFilter != nil {
rowidQuery = cfg.LibraryFilter(rowidQuery)
} else {
rowidQuery = r.applyLibraryFilter(rowidQuery)
}
if options.Filters != nil {
rowidQuery = rowidQuery.Where(options.Filters)
}
rowidSQL, rowidArgs, err := rowidQuery.ToSql()
if err != nil {
return fmt.Errorf("building FTS rowid query: %w", err)
}
// Phase 2: strip LIMIT/OFFSET from sq (Phase 1 handled pagination),
// join on the ranked rowid set to hydrate with full columns.
sq = sq.RemoveLimit().RemoveOffset()
rankedSubquery := fmt.Sprintf(
"(SELECT rowid as _rid, row_number() OVER () AS _rn FROM (%s)) AS _ranked",
rowidSQL,
)
sq = sq.Join(rankedSubquery+" ON "+s.tableName+".rowid = _ranked._rid", rowidArgs...)
sq = sq.OrderBy("_ranked._rn")
return r.queryAll(sq, dest)
}
// qualifyOrderBy prepends tableName to a simple column name. Returns empty string for
// complex expressions (function calls, aggregations) that can't be used in Phase 1.
func qualifyOrderBy(tableName, orderBy string) string {
orderBy = strings.TrimSpace(orderBy)
if orderBy == "" || strings.ContainsAny(orderBy, "(,") {
return ""
}
parts := strings.Fields(orderBy)
if !strings.Contains(parts[0], ".") {
parts[0] = tableName + "." + parts[0]
}
return strings.Join(parts, " ")
}
// ftsQueryDegraded returns true when the FTS query lost significant discriminating
// content compared to the original input. This happens when special characters that
// are part of the entity name (e.g., "1+", "C++", "!!!", "C#") get stripped by FTS
// tokenization, leaving only very short/broad tokens. Also detects quoted phrases
// that would be degraded by FTS5's unicode61 tokenizer (e.g., "1+" → token "1").
func ftsQueryDegraded(original, ftsQuery string) bool {
original = strings.TrimSpace(original)
if original == "" || ftsQuery == "" {
return false
}
// Strip quotes from original for comparison — we want the raw content
stripped := strings.ReplaceAll(original, `"`, "")
// Extract the alphanumeric content from the original query
alphaNum := fts5PunctStrip.ReplaceAllString(stripped, "")
// If the original is entirely alphanumeric, nothing was stripped — not degraded
if len(alphaNum) == len(stripped) {
return false
}
// Check if all effective FTS tokens are very short (≤2 chars).
// Short tokens with prefix matching are too broad when special chars were stripped.
// For quoted phrases, extract the content and check the tokens inside.
tokens := strings.Fields(ftsQuery)
for _, t := range tokens {
t = strings.TrimSuffix(t, "*")
// Skip internal phrase placeholders
if strings.HasPrefix(t, "\x00") {
return false
}
// For OR groups from processPunctuatedWords (e.g., ("a ha" OR aha*)),
// the punctuated word was already handled meaningfully — not degraded.
if strings.HasPrefix(t, "(") {
return false
}
// For quoted phrases, check the tokens inside as FTS5 will tokenize them
if strings.HasPrefix(t, `"`) {
// Extract content between quotes
inner := strings.Trim(t, `"`)
innerAlpha := fts5PunctStrip.ReplaceAllString(inner, " ")
for _, it := range strings.Fields(innerAlpha) {
if len(it) > 2 {
return false
}
}
continue
}
if len(t) > 2 {
return false
}
}
return true
}
// newFTSSearch creates an FTS5 search strategy. Falls back to LIKE search if the
// query produces no FTS tokens (e.g., punctuation-only like "!!!!!!!") or if FTS
// tokenization stripped significant content from the query (e.g., "1+" → "1*").
// Returns nil when the query produces no searchable tokens at all.
func newFTSSearch(tableName, query string) searchStrategy {
q := buildFTS5Query(query)
if q == "" || ftsQueryDegraded(query, q) {
// Fallback: try LIKE search with the raw query
cleaned := strings.TrimSpace(strings.ReplaceAll(query, `"`, ""))
if cleaned != "" {
log.Trace("Search using LIKE fallback for non-tokenizable query", "table", tableName, "query", cleaned)
return newLikeSearch(tableName, cleaned)
}
return nil
}
ftsTable := tableName + "_fts"
matchExpr := q
if cols, ok := ftsColumnFilters[tableName]; ok {
matchExpr = cols + " : (" + q + ")"
}
rankExpr := ftsTable + ".rank"
if weights, ok := ftsBM25Weights[tableName]; ok {
rankExpr = "bm25(" + ftsTable + ", " + weights + ")"
}
s := &ftsSearch{
tableName: tableName,
ftsTable: ftsTable,
matchExpr: matchExpr,
rankExpr: rankExpr,
}
log.Trace("Search using FTS5 backend", "table", tableName, "query", q, "filter", s)
return s
}


@@ -0,0 +1,435 @@
package persistence
import (
"context"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = DescribeTable("buildFTS5Query",
func(input, expected string) {
Expect(buildFTS5Query(input)).To(Equal(expected))
},
Entry("returns empty string for empty input", "", ""),
Entry("returns empty string for whitespace-only input", " ", ""),
Entry("appends * to a single word for prefix matching", "beatles", "beatles*"),
Entry("appends * to each word for prefix matching", "abbey road", "abbey* road*"),
Entry("preserves quoted phrases without appending *", `"the beatles"`, `"the beatles"`),
Entry("does not double-append * to existing prefix wildcard", "beat*", "beat*"),
Entry("strips FTS5 operators and appends * to lowercased words", "AND OR NOT NEAR", "and* or* not* near*"),
Entry("strips special FTS5 syntax characters and appends *", "test^col:val", "test* col* val*"),
Entry("handles mixed phrases and words", `"the beatles" abbey`, `"the beatles" abbey*`),
Entry("handles prefix with multiple words", "beat* abbey", "beat* abbey*"),
Entry("collapses multiple spaces", "abbey road", "abbey* road*"),
Entry("strips leading * from tokens and appends trailing *", "*livia", "livia*"),
Entry("strips leading * and preserves existing trailing *", "*livia oliv*", "livia* oliv*"),
Entry("strips standalone *", "*", ""),
Entry("strips apostrophe from input", "Guns N' Roses", "Guns* N* Roses*"),
Entry("converts slashed word to phrase+concat OR", "AC/DC", `("AC DC" OR ACDC*)`),
Entry("converts hyphenated word to phrase+concat OR", "a-ha", `("a ha" OR aha*)`),
Entry("converts partial hyphenated word to phrase+concat OR", "a-h", `("a h" OR ah*)`),
Entry("converts hyphenated name to phrase+concat OR", "Jay-Z", `("Jay Z" OR JayZ*)`),
Entry("converts contraction to phrase+concat OR", "it's", `("it s" OR its*)`),
Entry("handles punctuated word mixed with plain words", "best of a-ha", `best* of* ("a ha" OR aha*)`),
Entry("strips miscellaneous punctuation", "rock & roll, vol. 2", "rock* roll* vol* 2*"),
Entry("preserves unicode characters with diacritics", "Björk début", "Björk* début*"),
Entry("collapses dotted abbreviation into phrase", "R.E.M.", `"R E M"`),
Entry("collapses abbreviation without trailing dot", "R.E.M", `"R E M"`),
Entry("collapses abbreviation mixed with words", "best of R.E.M.", `best* of* "R E M"`),
Entry("collapses two-letter abbreviation", "U.K.", `"U K"`),
Entry("does not collapse single letter surrounded by words", "I am fine", "I* am* fine*"),
Entry("does not collapse single standalone letter", "A test", "A* test*"),
Entry("preserves quoted phrase with punctuation verbatim", `"ac/dc"`, `"ac/dc"`),
Entry("preserves quoted abbreviation verbatim", `"R.E.M."`, `"R.E.M."`),
Entry("returns empty string for punctuation-only input", "!!!!!!!", ""),
Entry("returns empty string for mixed punctuation", "!@#$%^&", ""),
Entry("returns empty string for empty quoted phrase", `""`, ""),
)
var _ = DescribeTable("ftsQueryDegraded",
func(original, ftsQuery string, expected bool) {
Expect(ftsQueryDegraded(original, ftsQuery)).To(Equal(expected))
},
Entry("not degraded for empty original", "", "1*", false),
Entry("not degraded for empty ftsQuery", "1+", "", false),
Entry("not degraded for purely alphanumeric query", "beatles", "beatles*", false),
Entry("not degraded when long tokens remain", "test^val", "test* val*", false),
Entry("not degraded for quoted phrase with long tokens", `"the beatles"`, `"the beatles"`, false),
Entry("degraded for quoted phrase with only short tokens after tokenizer strips special chars", `"1+"`, `"1+"`, true),
Entry("not degraded for quoted phrase with meaningful content", `"C++ programming"`, `"C++ programming"`, false),
Entry("degraded when special chars stripped leaving short token", "1+", "1*", true),
Entry("degraded when special chars stripped leaving two short tokens", "C# 1", "C* 1*", true),
Entry("not degraded when at least one long token remains", "1+ beatles", "1* beatles*", false),
Entry("not degraded for OR groups from processPunctuatedWords", "AC/DC", `("AC DC" OR ACDC*)`, false),
)
var _ = DescribeTable("normalizeForFTS",
func(expected string, values ...string) {
Expect(normalizeForFTS(values...)).To(Equal(expected))
},
Entry("strips dots and concatenates", "REM", "R.E.M."),
Entry("strips slash", "ACDC", "AC/DC"),
Entry("strips hyphen", "Aha", "A-ha"),
Entry("skips unchanged words", "", "The Beatles"),
Entry("handles mixed input", "REM", "R.E.M.", "Automatic for the People"),
Entry("deduplicates", "REM", "R.E.M.", "R.E.M."),
Entry("strips apostrophe from word", "N", "Guns N' Roses"),
Entry("handles multiple values with punctuation", "REM ACDC", "R.E.M.", "AC/DC"),
)
var _ = DescribeTable("containsCJK",
func(input string, expected bool) {
Expect(containsCJK(input)).To(Equal(expected))
},
Entry("returns false for empty string", "", false),
Entry("returns false for ASCII text", "hello world", false),
Entry("returns false for Latin with diacritics", "Björk début", false),
Entry("detects Chinese characters (Han)", "周杰伦", true),
Entry("detects Japanese Hiragana", "こんにちは", true),
Entry("detects Japanese Katakana", "カタカナ", true),
Entry("detects Korean Hangul", "한국어", true),
Entry("detects CJK mixed with Latin", "best of 周杰伦", true),
Entry("detects single CJK character", "a曲b", true),
)
var _ = DescribeTable("qualifyOrderBy",
func(tableName, orderBy, expected string) {
Expect(qualifyOrderBy(tableName, orderBy)).To(Equal(expected))
},
Entry("returns empty string for empty input", "artist", "", ""),
Entry("qualifies simple column with table name", "artist", "name", "artist.name"),
Entry("qualifies column with direction", "artist", "name desc", "artist.name desc"),
Entry("preserves already-qualified column", "artist", "artist.name", "artist.name"),
Entry("preserves already-qualified column with direction", "artist", "artist.name desc", "artist.name desc"),
Entry("returns empty for function call expression", "artist", "sum(json_extract(stats, '$.total.m')) desc", ""),
Entry("returns empty for expression with comma", "artist", "a, b", ""),
Entry("qualifies media_file column", "media_file", "title", "media_file.title"),
)
var _ = Describe("ftsColumnDefs helpers", func() {
Describe("ftsColumnFilters", func() {
It("returns column filter for media_file", func() {
Expect(ftsColumnFilters).To(HaveKeyWithValue("media_file",
"{title album artist album_artist sort_title sort_album_name sort_artist_name sort_album_artist_name disc_subtitle search_participants search_normalized}",
))
})
It("returns column filter for album", func() {
Expect(ftsColumnFilters).To(HaveKeyWithValue("album",
"{name sort_album_name album_artist search_participants discs catalog_num album_version search_normalized}",
))
})
It("returns column filter for artist", func() {
Expect(ftsColumnFilters).To(HaveKeyWithValue("artist",
"{name sort_artist_name search_normalized}",
))
})
It("has no entry for unknown table", func() {
Expect(ftsColumnFilters).ToNot(HaveKey("unknown"))
})
})
Describe("ftsBM25Weights", func() {
It("returns weight CSV for media_file", func() {
Expect(ftsBM25Weights).To(HaveKeyWithValue("media_file",
"10.0, 5.0, 3.0, 3.0, 1.0, 1.0, 1.0, 1.0, 1.0, 2.0, 1.0",
))
})
It("returns weight CSV for album", func() {
Expect(ftsBM25Weights).To(HaveKeyWithValue("album",
"10.0, 1.0, 3.0, 2.0, 1.0, 1.0, 1.0, 1.0",
))
})
It("returns weight CSV for artist", func() {
Expect(ftsBM25Weights).To(HaveKeyWithValue("artist",
"10.0, 1.0, 1.0",
))
})
It("has no entry for unknown table", func() {
Expect(ftsBM25Weights).ToNot(HaveKey("unknown"))
})
})
It("has definitions for all known tables", func() {
for _, table := range []string{"media_file", "album", "artist"} {
Expect(ftsColumnDefs).To(HaveKey(table))
Expect(ftsColumnDefs[table]).ToNot(BeEmpty())
}
})
It("has matching column count between filter and weights", func() {
for table, cols := range ftsColumnDefs {
// Column filter only includes Weight > 0 columns
filterCount := 0
for _, c := range cols {
if c.Weight > 0 {
filterCount++
}
}
// For now, all columns have Weight > 0, so filter count == total count
Expect(filterCount).To(Equal(len(cols)), "table %s: all columns should have positive weights", table)
}
})
})
var _ = Describe("newFTSSearch", func() {
It("returns nil for empty query", func() {
Expect(newFTSSearch("media_file", "")).To(BeNil())
})
It("returns non-nil for single-character query", func() {
strategy := newFTSSearch("media_file", "a")
Expect(strategy).ToNot(BeNil(), "single-char queries must not be rejected; min-length is enforced in doSearch, not here")
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("MATCH"))
})
It("returns ftsSearch with correct table names and MATCH expression", func() {
strategy := newFTSSearch("media_file", "beatles")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.tableName).To(Equal("media_file"))
Expect(fts.ftsTable).To(Equal("media_file_fts"))
Expect(fts.matchExpr).To(HavePrefix("{title album artist album_artist"))
Expect(fts.matchExpr).To(ContainSubstring("beatles*"))
})
It("ToSql generates rowid IN subquery with MATCH (fallback path)", func() {
strategy := newFTSSearch("media_file", "beatles")
sql, args, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("media_file.rowid IN"))
Expect(sql).To(ContainSubstring("media_file_fts"))
Expect(sql).To(ContainSubstring("MATCH"))
Expect(args).To(HaveLen(1))
})
It("generates correct FTS table name per entity", func() {
for _, table := range []string{"media_file", "album", "artist"} {
strategy := newFTSSearch(table, "test")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.tableName).To(Equal(table))
Expect(fts.ftsTable).To(Equal(table + "_fts"))
}
})
It("builds bm25() rank expression with column weights", func() {
strategy := newFTSSearch("media_file", "beatles")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.rankExpr).To(HavePrefix("bm25(media_file_fts,"))
Expect(fts.rankExpr).To(ContainSubstring("10.0"))
strategy = newFTSSearch("artist", "beatles")
fts, ok = strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.rankExpr).To(HavePrefix("bm25(artist_fts,"))
})
It("falls back to ftsTable.rank for unknown tables", func() {
strategy := newFTSSearch("unknown_table", "test")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.rankExpr).To(Equal("unknown_table_fts.rank"))
})
It("wraps query with column filter for known tables", func() {
strategy := newFTSSearch("artist", "Beatles")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.matchExpr).To(Equal("{name sort_artist_name search_normalized} : (Beatles*)"))
})
It("passes query without column filter for unknown tables", func() {
strategy := newFTSSearch("unknown_table", "test")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.matchExpr).To(Equal("test*"))
})
It("preserves phrase queries inside column filter", func() {
strategy := newFTSSearch("media_file", `"the beatles"`)
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.matchExpr).To(ContainSubstring(`"the beatles"`))
})
It("preserves prefix queries inside column filter", func() {
strategy := newFTSSearch("media_file", "beat*")
fts, ok := strategy.(*ftsSearch)
Expect(ok).To(BeTrue())
Expect(fts.matchExpr).To(ContainSubstring("beat*"))
})
It("falls back to LIKE search for punctuation-only query", func() {
strategy := newFTSSearch("media_file", "!!!!!!!")
Expect(strategy).ToNot(BeNil())
_, ok := strategy.(*ftsSearch)
Expect(ok).To(BeFalse(), "punctuation-only should fall back to LIKE, not FTS")
sql, args, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("LIKE"))
Expect(args).To(ContainElement("%!!!!!!!%"))
})
It("falls back to LIKE search for degraded query (special chars stripped leaving short tokens)", func() {
strategy := newFTSSearch("album", "1+")
Expect(strategy).ToNot(BeNil())
_, ok := strategy.(*ftsSearch)
Expect(ok).To(BeFalse(), "degraded query should fall back to LIKE, not FTS")
sql, args, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("LIKE"))
Expect(args).To(ContainElement("%1+%"))
})
It("returns nil for empty string even with LIKE fallback", func() {
Expect(newFTSSearch("media_file", "")).To(BeNil())
Expect(newFTSSearch("media_file", " ")).To(BeNil())
})
It("returns nil for empty quoted phrase", func() {
Expect(newFTSSearch("media_file", `""`)).To(BeNil())
})
})
var _ = Describe("FTS5 Integration Search", func() {
var (
mr model.MediaFileRepository
alr model.AlbumRepository
arr model.ArtistRepository
)
BeforeEach(func() {
ctx := log.NewContext(context.TODO())
ctx = request.WithUser(ctx, adminUser)
conn := GetDBXBuilder()
mr = NewMediaFileRepository(ctx, conn)
alr = NewAlbumRepository(ctx, conn)
arr = NewArtistRepository(ctx, conn)
})
Describe("MediaFile search", func() {
It("finds media files by title", func() {
results, err := mr.Search("Radioactivity", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Title).To(Equal("Radioactivity"))
Expect(results[0].ID).To(Equal(songRadioactivity.ID))
})
It("finds media files by artist name", func() {
results, err := mr.Search("Beatles", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(3))
for _, r := range results {
Expect(r.Artist).To(Equal("The Beatles"))
}
})
})
Describe("Album search", func() {
It("finds albums by name", func() {
results, err := alr.Search("Sgt Peppers", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Name).To(Equal("Sgt Peppers"))
Expect(results[0].ID).To(Equal(albumSgtPeppers.ID))
})
It("finds albums with multi-word search", func() {
results, err := alr.Search("Abbey Road", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
})
})
Describe("Artist search", func() {
It("finds artists by name", func() {
results, err := arr.Search("Kraftwerk", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Name).To(Equal("Kraftwerk"))
Expect(results[0].ID).To(Equal(artistKraftwerk.ID))
})
})
Describe("CJK search", func() {
It("finds media files by CJK title", func() {
results, err := mr.Search("プラチナ", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Title).To(Equal("プラチナ・ジェット"))
Expect(results[0].ID).To(Equal(songCJK.ID))
})
It("finds media files by CJK artist name", func() {
results, err := mr.Search("シートベルツ", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Artist).To(Equal("シートベルツ"))
})
It("finds albums by CJK artist name", func() {
results, err := alr.Search("シートベルツ", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Name).To(Equal("COWBOY BEBOP"))
Expect(results[0].ID).To(Equal(albumCJK.ID))
})
It("finds artists by CJK name", func() {
results, err := arr.Search("シートベルツ", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Name).To(Equal("シートベルツ"))
Expect(results[0].ID).To(Equal(artistCJK.ID))
})
})
Describe("Album version search", func() {
It("finds albums by version tag via FTS", func() {
results, err := alr.Search("Deluxe", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].ID).To(Equal(albumWithVersion.ID))
})
})
Describe("Punctuation-only search", func() {
It("finds media files with punctuation-only title", func() {
results, err := mr.Search("!!!!!!!", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Title).To(Equal("!!!!!!!"))
Expect(results[0].ID).To(Equal(songPunctuation.ID))
})
})
Describe("Single-character search (doSearch min-length guard)", func() {
It("returns empty results for single-char query via Search", func() {
results, err := mr.Search("a", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty(), "doSearch should reject single-char queries")
})
})
Describe("Max=0 means no limit (regression: must not produce LIMIT 0)", func() {
It("returns results with Max=0", func() {
results, err := mr.Search("Beatles", model.QueryOptions{Max: 0})
Expect(err).ToNot(HaveOccurred())
Expect(results).ToNot(BeEmpty(), "Max=0 should mean no limit, not LIMIT 0")
})
})
})


@@ -0,0 +1,106 @@
package persistence
import (
"strings"
. "github.com/Masterminds/squirrel"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/str"
)
// likeSearch implements searchStrategy using LIKE-based SQL filters.
// Used for legacy full_text searches, CJK fallback, and punctuation-only fallback.
type likeSearch struct {
filter Sqlizer
}
func (s *likeSearch) ToSql() (string, []interface{}, error) {
return s.filter.ToSql()
}
func (s *likeSearch) execute(r sqlRepository, sq SelectBuilder, dest any, cfg searchConfig, options model.QueryOptions) error {
sq = sq.Where(s.filter)
sq = sq.OrderBy(cfg.OrderBy...)
return r.queryAll(sq, dest, options)
}
// newLegacySearch creates a LIKE search against the full_text column.
// Returns nil when the query produces no searchable tokens.
func newLegacySearch(tableName, query string) searchStrategy {
filter := legacySearchExpr(tableName, query)
if filter == nil {
return nil
}
return &likeSearch{filter: filter}
}
// newLikeSearch creates a LIKE search against core entity columns (CJK, punctuation fallback).
// No minimum length is enforced, since single CJK characters are meaningful words.
// Returns nil when the query produces no searchable tokens.
func newLikeSearch(tableName, query string) searchStrategy {
filter := likeSearchExpr(tableName, query)
if filter == nil {
return nil
}
return &likeSearch{filter: filter}
}
// legacySearchExpr generates LIKE-based search filters against the full_text column.
// This is the original search implementation, used when Search.Backend="legacy".
func legacySearchExpr(tableName string, s string) Sqlizer {
q := str.SanitizeStrings(s)
if q == "" {
log.Trace("Search using legacy backend, query is empty", "table", tableName)
return nil
}
var sep string
if !conf.Server.Search.FullString {
sep = " "
}
parts := strings.Split(q, " ")
filters := And{}
for _, part := range parts {
filters = append(filters, Like{tableName + ".full_text": "%" + sep + part + "%"})
}
log.Trace("Search using legacy backend", "query", filters, "table", tableName)
return filters
}
// likeSearchColumns defines the core columns to search with LIKE queries.
// These are the primary user-visible fields for each entity type.
// Used as a fallback when FTS5 cannot handle the query (e.g., CJK text, punctuation-only input).
var likeSearchColumns = map[string][]string{
"media_file": {"title", "album", "artist", "album_artist"},
"album": {"name", "album_artist"},
"artist": {"name"},
}
// likeSearchExpr generates LIKE-based search filters against core columns.
// Each word in the query must match at least one column (AND between words),
// and each word can match any column (OR within a word).
// Used as a fallback when FTS5 cannot handle the query (e.g., CJK text, punctuation-only input).
func likeSearchExpr(tableName string, s string) Sqlizer {
s = strings.TrimSpace(s)
if s == "" {
log.Trace("Search using LIKE backend, query is empty", "table", tableName)
return nil
}
columns, ok := likeSearchColumns[tableName]
if !ok {
log.Trace("Search using LIKE backend, couldn't find columns for this table", "table", tableName)
return nil
}
words := strings.Fields(s)
wordFilters := And{}
for _, word := range words {
colFilters := Or{}
for _, col := range columns {
colFilters = append(colFilters, Like{tableName + "." + col: "%" + word + "%"})
}
wordFilters = append(wordFilters, colFilters)
}
log.Trace("Search using LIKE backend", "query", wordFilters, "table", tableName)
return wordFilters
}
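To make the AND-of-ORs shape concrete, here is a minimal stdlib-only sketch of the filter that likeSearchExpr builds. It is illustrative, not the real implementation: the actual code returns a squirrel Sqlizer, and the likeShape helper and its string output are assumptions made for this example.

```go
package main

import (
	"fmt"
	"strings"
)

// likeShape mimics likeSearchExpr's structure: every word must match (AND
// between groups), and each word may match any column (OR within a group).
func likeShape(table string, columns []string, query string) (string, []string) {
	var groups []string
	var args []string
	for _, word := range strings.Fields(query) {
		var ors []string
		for _, col := range columns {
			ors = append(ors, table+"."+col+" LIKE ?")
			args = append(args, "%"+word+"%")
		}
		groups = append(groups, "("+strings.Join(ors, " OR ")+")")
	}
	return strings.Join(groups, " AND "), args
}

func main() {
	sql, args := likeShape("artist", []string{"name"}, "周杰伦 greatest")
	fmt.Println(sql)  // (artist.name LIKE ?) AND (artist.name LIKE ?)
	fmt.Println(args) // [%周杰伦% %greatest%]
}
```

For a two-word query against media_file's four columns this yields two OR groups of four LIKE clauses each, matching the 8-argument expectation in the tests above.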


@@ -0,0 +1,134 @@
package persistence
import (
"context"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("newLegacySearch", func() {
It("returns non-nil for single-character query", func() {
strategy := newLegacySearch("media_file", "a")
Expect(strategy).ToNot(BeNil(), "single-char queries must not be rejected; min-length is enforced in doSearch, not here")
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("LIKE"))
})
})
var _ = Describe("legacySearchExpr", func() {
It("returns nil for empty query", func() {
Expect(legacySearchExpr("media_file", "")).To(BeNil())
})
It("generates LIKE filter for single word", func() {
expr := legacySearchExpr("media_file", "beatles")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("media_file.full_text LIKE"))
Expect(args).To(ContainElement("% beatles%"))
})
It("generates AND of LIKE filters for multiple words", func() {
expr := legacySearchExpr("media_file", "abbey road")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("AND"))
Expect(args).To(HaveLen(2))
})
})
var _ = Describe("likeSearchExpr", func() {
It("returns nil for empty query", func() {
Expect(likeSearchExpr("media_file", "")).To(BeNil())
})
It("returns nil for whitespace-only query", func() {
Expect(likeSearchExpr("media_file", " ")).To(BeNil())
})
It("generates LIKE filters against core columns for single CJK word", func() {
expr := likeSearchExpr("media_file", "周杰伦")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
// Should have OR between columns for the single word
Expect(sql).To(ContainSubstring("OR"))
Expect(sql).To(ContainSubstring("media_file.title LIKE"))
Expect(sql).To(ContainSubstring("media_file.album LIKE"))
Expect(sql).To(ContainSubstring("media_file.artist LIKE"))
Expect(sql).To(ContainSubstring("media_file.album_artist LIKE"))
Expect(args).To(HaveLen(4))
for _, arg := range args {
Expect(arg).To(Equal("%周杰伦%"))
}
})
It("generates AND of OR groups for multi-word query", func() {
expr := likeSearchExpr("media_file", "周杰伦 greatest")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
// Two groups AND'd together, each with 4 columns OR'd
Expect(sql).To(ContainSubstring("AND"))
Expect(args).To(HaveLen(8))
})
It("uses correct columns for album table", func() {
expr := likeSearchExpr("album", "周杰伦")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("album.name LIKE"))
Expect(sql).To(ContainSubstring("album.album_artist LIKE"))
Expect(args).To(HaveLen(2))
})
It("uses correct columns for artist table", func() {
expr := likeSearchExpr("artist", "周杰伦")
sql, args, err := expr.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("artist.name LIKE"))
Expect(args).To(HaveLen(1))
})
It("returns nil for unknown table", func() {
Expect(likeSearchExpr("unknown_table", "周杰伦")).To(BeNil())
})
})
var _ = Describe("Legacy Integration Search", func() {
var mr model.MediaFileRepository
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "legacy"
ctx := log.NewContext(context.TODO())
ctx = request.WithUser(ctx, adminUser)
conn := GetDBXBuilder()
mr = NewMediaFileRepository(ctx, conn)
})
It("returns results using legacy LIKE-based search", func() {
results, err := mr.Search("Radioactivity", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(1))
Expect(results[0].Title).To(Equal("Radioactivity"))
})
It("returns empty results for single-char query (doSearch min-length guard)", func() {
results, err := mr.Search("a", model.QueryOptions{Max: 10})
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty(), "doSearch should reject single-char queries")
})
It("returns results with Max=0 (regression: must not produce LIMIT 0)", func() {
results, err := mr.Search("Beatles", model.QueryOptions{Max: 0})
Expect(err).ToNot(HaveOccurred())
Expect(results).ToNot(BeEmpty(), "Max=0 should mean no limit, not LIMIT 0")
})
})


@@ -1,6 +1,8 @@
package persistence
import (
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
@@ -11,4 +13,100 @@ var _ = Describe("sqlRepository", func() {
Expect(formatFullText("legiao urbana")).To(Equal(" legiao urbana"))
})
})
Describe("getSearchStrategy", func() {
It("returns FTS strategy by default", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "fts"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "test")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("MATCH"))
})
It("returns legacy LIKE strategy when SearchBackend is legacy", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "legacy"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "test")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("LIKE"))
})
It("falls back to legacy LIKE strategy when SearchFullString is enabled", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "fts"
conf.Server.Search.FullString = true
strategy := getSearchStrategy("media_file", "test")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("LIKE"))
})
It("routes CJK queries to LIKE strategy instead of FTS", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "fts"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "周杰伦")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
// CJK should use LIKE, not MATCH
Expect(sql).To(ContainSubstring("LIKE"))
Expect(sql).NotTo(ContainSubstring("MATCH"))
})
It("routes non-CJK queries to FTS strategy", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "fts"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "beatles")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
Expect(sql).To(ContainSubstring("MATCH"))
})
It("returns non-nil for single-character query (no min-length in strategy)", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "fts"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "a")
Expect(strategy).ToNot(BeNil(), "single-char queries must be accepted by strategies (min-length is enforced in doSearch)")
})
It("returns non-nil for single-character query with legacy backend", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "legacy"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "a")
Expect(strategy).ToNot(BeNil(), "single-char queries must be accepted by legacy strategy (min-length is enforced in doSearch)")
})
It("uses legacy for CJK when SearchBackend is legacy", func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Search.Backend = "legacy"
conf.Server.Search.FullString = false
strategy := getSearchStrategy("media_file", "周杰伦")
Expect(strategy).ToNot(BeNil())
sql, _, err := strategy.ToSql()
Expect(err).ToNot(HaveOccurred())
// Legacy should still use full_text column LIKE
Expect(sql).To(ContainSubstring("LIKE"))
Expect(sql).To(ContainSubstring("full_text"))
})
})
})


@@ -0,0 +1,33 @@
package capabilities
// TaskWorker provides task execution handling.
// This capability allows plugins to receive callbacks when their queued tasks
// are ready to execute. Plugins that use the taskqueue host service must
// implement this capability.
//
//nd:capability name=taskworker
type TaskWorker interface {
// OnTaskExecute is called when a queued task is ready to run.
// Return an error to trigger retry (if retries are configured).
//nd:export name=nd_task_execute
OnTaskExecute(TaskExecuteRequest) (TaskExecuteResponse, error)
}
// TaskExecuteRequest is the request provided when a task is ready to execute.
type TaskExecuteRequest struct {
// QueueName is the name of the queue this task belongs to.
QueueName string `json:"queueName"`
// TaskID is the unique identifier for this task.
TaskID string `json:"taskId"`
// Payload is the opaque data provided when the task was enqueued.
Payload []byte `json:"payload"`
// Attempt is the current attempt number (1-based: first attempt = 1).
Attempt int32 `json:"attempt"`
}
// TaskExecuteResponse is the response from task execution.
type TaskExecuteResponse struct {
// Error, if non-empty, indicates the task failed. The task will be retried
// if retries are configured and attempts remain.
Error string `json:"error,omitempty"`
}
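As a rough sketch of the plugin side of this contract: the handler below mirrors the request/response types locally so it compiles standalone, and onTaskExecute together with its succeed-on-second-attempt policy are hypothetical examples, not Navidrome code. A non-empty Error in the response signals failure, which triggers a retry when retries are configured and attempts remain.

```go
package main

import "fmt"

// Local mirrors of the capability types, so this sketch is self-contained.
// In a real plugin these come from the generated capabilities package.
type TaskExecuteRequest struct {
	QueueName string
	TaskID    string
	Payload   []byte
	Attempt   int32 // 1-based attempt counter
}

type TaskExecuteResponse struct {
	Error string // non-empty = failed; host retries if attempts remain
}

// onTaskExecute is a hypothetical handler: it fails permanently on an empty
// payload and simulates a transient failure that clears on the second attempt.
func onTaskExecute(req TaskExecuteRequest) (TaskExecuteResponse, error) {
	if len(req.Payload) == 0 {
		return TaskExecuteResponse{Error: "empty payload"}, nil
	}
	if req.Attempt < 2 {
		return TaskExecuteResponse{Error: "transient failure"}, nil
	}
	return TaskExecuteResponse{}, nil
}

func main() {
	for attempt := int32(1); attempt <= 3; attempt++ {
		resp, _ := onTaskExecute(TaskExecuteRequest{
			QueueName: "demo", TaskID: "t1", Payload: []byte("x"), Attempt: attempt,
		})
		fmt.Printf("attempt %d: error=%q\n", attempt, resp.Error)
	}
}
```

Because Attempt is 1-based, the handler sees attempt 1 fail and attempts 2+ succeed; the host would stop retrying after the first empty response Error.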


@@ -0,0 +1,45 @@
version: v1-draft
exports:
nd_task_execute:
description: |-
OnTaskExecute is called when a queued task is ready to run.
Return an error to trigger retry (if retries are configured).
input:
$ref: '#/components/schemas/TaskExecuteRequest'
contentType: application/json
output:
$ref: '#/components/schemas/TaskExecuteResponse'
contentType: application/json
components:
schemas:
TaskExecuteRequest:
description: TaskExecuteRequest is the request provided when a task is ready to execute.
properties:
queueName:
type: string
description: QueueName is the name of the queue this task belongs to.
taskId:
type: string
description: TaskID is the unique identifier for this task.
payload:
type: array
description: Payload is the opaque data provided when the task was enqueued.
items:
type: object
attempt:
type: integer
format: int32
description: 'Attempt is the current attempt number (1-based: first attempt = 1).'
required:
- queueName
- taskId
- payload
- attempt
TaskExecuteResponse:
description: TaskExecuteResponse is the response from task execution.
properties:
error:
type: string
description: |-
Error, if non-empty, indicates the task failed. The task will be retried
if retries are configured and attempts remain.


@@ -14,6 +14,7 @@ import (
 	"net/url"
 	"strings"

+	"github.com/navidrome/navidrome/plugins/pdk/go/host"
 	"github.com/navidrome/navidrome/plugins/pdk/go/metadata"
 	"github.com/navidrome/navidrome/plugins/pdk/go/pdk"
 )
@@ -77,21 +78,28 @@ func sparqlQuery(endpoint, query string) (*SPARQLResult, error) {
 	form := url.Values{}
 	form.Set("query", query)
-	req := pdk.NewHTTPRequest(pdk.MethodPost, endpoint)
-	req.SetHeader("Accept", "application/sparql-results+json")
-	req.SetHeader("Content-Type", "application/x-www-form-urlencoded")
-	req.SetHeader("User-Agent", "NavidromeWikimediaPlugin/1.0")
-	req.SetBody([]byte(form.Encode()))

 	pdk.Log(pdk.LogDebug, fmt.Sprintf("SPARQL query to %s: %s", endpoint, query))
-	resp := req.Send()
-	if resp.Status() != 200 {
-		return nil, fmt.Errorf("SPARQL HTTP error: status %d", resp.Status())
+	resp, err := host.HTTPSend(host.HTTPRequest{
+		Method: "POST",
+		URL:    endpoint,
+		Headers: map[string]string{
+			"Accept":       "application/sparql-results+json",
+			"Content-Type": "application/x-www-form-urlencoded",
+			"User-Agent":   "NavidromeWikimediaPlugin/1.0",
+		},
+		Body:      []byte(form.Encode()),
+		TimeoutMs: 10000,
+	})
+	if err != nil {
+		return nil, fmt.Errorf("SPARQL HTTP error: %w", err)
+	}
+	if resp.StatusCode != 200 {
+		return nil, fmt.Errorf("SPARQL HTTP error: status %d", resp.StatusCode)
 	}

 	var result SPARQLResult
-	if err := json.Unmarshal(resp.Body(), &result); err != nil {
+	if err := json.Unmarshal(resp.Body, &result); err != nil {
 		return nil, fmt.Errorf("failed to parse SPARQL response: %w", err)
 	}
 	if len(result.Results.Bindings) == 0 {
@@ -104,15 +112,22 @@ func sparqlQuery(endpoint, query string) (*SPARQLResult, error) {
 func mediawikiQuery(params url.Values) ([]byte, error) {
 	apiURL := fmt.Sprintf("%s?%s", mediawikiAPIEndpoint, params.Encode())
-	req := pdk.NewHTTPRequest(pdk.MethodGet, apiURL)
-	req.SetHeader("Accept", "application/json")
-	req.SetHeader("User-Agent", "NavidromeWikimediaPlugin/1.0")
-
-	resp := req.Send()
-	if resp.Status() != 200 {
-		return nil, fmt.Errorf("MediaWiki HTTP error: status %d", resp.Status())
+	resp, err := host.HTTPSend(host.HTTPRequest{
+		Method: "GET",
+		URL:    apiURL,
+		Headers: map[string]string{
+			"Accept":     "application/json",
+			"User-Agent": "NavidromeWikimediaPlugin/1.0",
+		},
+		TimeoutMs: 10000,
+	})
+	if err != nil {
+		return nil, fmt.Errorf("MediaWiki HTTP error: %w", err)
+	}
+	if resp.StatusCode != 200 {
+		return nil, fmt.Errorf("MediaWiki HTTP error: status %d", resp.StatusCode)
 	}
-	return resp.Body(), nil
+	return resp.Body, nil
 }

 // getWikidataWikipediaURL fetches the Wikipedia URL from Wikidata using MBID or name

plugins/host/http.go

@@ -0,0 +1,40 @@
package host
import "context"
// HTTPRequest represents an outbound HTTP request from a plugin.
type HTTPRequest struct {
Method string `json:"method"`
URL string `json:"url"`
Headers map[string]string `json:"headers,omitempty"`
Body []byte `json:"body,omitempty"`
TimeoutMs int32 `json:"timeoutMs,omitempty"`
}
// HTTPResponse represents the response from an outbound HTTP request.
type HTTPResponse struct {
StatusCode int32 `json:"statusCode"`
Headers map[string]string `json:"headers,omitempty"`
Body []byte `json:"body,omitempty"`
}
// HTTPService provides outbound HTTP request capabilities for plugins.
//
// This service allows plugins to make HTTP requests to external services.
// Requests are validated against the plugin's declared requiredHosts patterns
// from the http permission in the manifest. Redirects are followed but each
// redirect destination is also validated against the allowed hosts.
//
//nd:hostservice name=HTTP permission=http
type HTTPService interface {
// Send executes an HTTP request and returns the response.
//
// Parameters:
// - request: The HTTP request to execute, including method, URL, headers, body, and timeout
//
// Returns the HTTP response with status code, headers, and body.
// Network errors, timeouts, and permission failures are returned as Go errors.
// Successful HTTP calls (including 4xx/5xx status codes) return a non-nil response with nil error.
//nd:hostfunc
Send(ctx context.Context, request HTTPRequest) (*HTTPResponse, error)
}

plugins/host/http_gen.go

@@ -0,0 +1,88 @@
// Code generated by ndpgen. DO NOT EDIT.
package host
import (
"context"
"encoding/json"
extism "github.com/extism/go-sdk"
)
// HTTPSendRequest is the request type for HTTP.Send.
type HTTPSendRequest struct {
Request HTTPRequest `json:"request"`
}
// HTTPSendResponse is the response type for HTTP.Send.
type HTTPSendResponse struct {
Result *HTTPResponse `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
// RegisterHTTPHostFunctions registers HTTP service host functions.
// The returned host functions should be added to the plugin's configuration.
func RegisterHTTPHostFunctions(service HTTPService) []extism.HostFunction {
return []extism.HostFunction{
newHTTPSendHostFunction(service),
}
}
func newHTTPSendHostFunction(service HTTPService) extism.HostFunction {
return extism.NewHostFunctionWithStack(
"http_send",
func(ctx context.Context, p *extism.CurrentPlugin, stack []uint64) {
// Read JSON request from plugin memory
reqBytes, err := p.ReadBytes(stack[0])
if err != nil {
httpWriteError(p, stack, err)
return
}
var req HTTPSendRequest
if err := json.Unmarshal(reqBytes, &req); err != nil {
httpWriteError(p, stack, err)
return
}
// Call the service method
result, svcErr := service.Send(ctx, req.Request)
if svcErr != nil {
httpWriteError(p, stack, svcErr)
return
}
// Write JSON response to plugin memory
resp := HTTPSendResponse{
Result: result,
}
httpWriteResponse(p, stack, resp)
},
[]extism.ValueType{extism.ValueTypePTR},
[]extism.ValueType{extism.ValueTypePTR},
)
}
// httpWriteResponse writes a JSON response to plugin memory.
func httpWriteResponse(p *extism.CurrentPlugin, stack []uint64, resp any) {
respBytes, err := json.Marshal(resp)
if err != nil {
httpWriteError(p, stack, err)
return
}
respPtr, err := p.WriteBytes(respBytes)
if err != nil {
stack[0] = 0
return
}
stack[0] = respPtr
}
// httpWriteError writes an error response to plugin memory.
func httpWriteError(p *extism.CurrentPlugin, stack []uint64, err error) {
errResp := struct {
Error string `json:"error"`
}{Error: err.Error()}
respBytes, _ := json.Marshal(errResp)
respPtr, _ := p.WriteBytes(respBytes)
stack[0] = respPtr
}

plugins/host/taskqueue.go

@@ -0,0 +1,57 @@
package host
import "context"
// QueueConfig holds configuration for a task queue.
type QueueConfig struct {
// Concurrency is the max number of parallel workers. Default: 1.
// Capped by the plugin's manifest maxConcurrency.
Concurrency int32 `json:"concurrency"`
// MaxRetries is the number of times to retry a failed task. Default: 0.
MaxRetries int32 `json:"maxRetries"`
// BackoffMs is the initial backoff between retries in milliseconds.
// Doubles each retry (exponential: backoffMs * 2^(attempt-1)). Default: 1000.
BackoffMs int64 `json:"backoffMs"`
// DelayMs is the minimum delay between starting consecutive tasks
// in milliseconds. Useful for rate limiting. Default: 0.
DelayMs int64 `json:"delayMs"`
// RetentionMs is how long completed/failed/cancelled tasks are kept
// in milliseconds. Default: 3600000 (1h). Min: 60000 (1m). Max: 604800000 (1w).
RetentionMs int64 `json:"retentionMs"`
}
// TaskQueueService provides persistent task queues for plugins.
//
// This service allows plugins to create named queues with configurable concurrency,
// retry policies, and rate limiting. Tasks are persisted to SQLite and survive
// server restarts. When a task is ready to execute, the host calls the plugin's
// nd_task_execute callback function.
//
//nd:hostservice name=TaskQueue permission=taskqueue
type TaskQueueService interface {
// CreateQueue creates a named task queue with the given configuration.
// Zero-value fields in config use sensible defaults.
// Calling it again for an existing queue upserts the configuration (idempotent).
// On startup, this also recovers any stale "running" tasks from a previous crash.
//nd:hostfunc
CreateQueue(ctx context.Context, name string, config QueueConfig) error
// Enqueue adds a task to the named queue. Returns the task ID.
// payload is opaque bytes passed back to the plugin on execution.
//nd:hostfunc
Enqueue(ctx context.Context, queueName string, payload []byte) (string, error)
// GetTaskStatus returns the status of a task: "pending", "running",
// "completed", "failed", or "cancelled".
//nd:hostfunc
GetTaskStatus(ctx context.Context, taskID string) (string, error)
// CancelTask cancels a pending task. Returns error if already
// running, completed, or failed.
//nd:hostfunc
CancelTask(ctx context.Context, taskID string) error
}


@@ -0,0 +1,220 @@
// Code generated by ndpgen. DO NOT EDIT.
package host
import (
"context"
"encoding/json"
extism "github.com/extism/go-sdk"
)
// TaskQueueCreateQueueRequest is the request type for TaskQueue.CreateQueue.
type TaskQueueCreateQueueRequest struct {
Name string `json:"name"`
Config QueueConfig `json:"config"`
}
// TaskQueueCreateQueueResponse is the response type for TaskQueue.CreateQueue.
type TaskQueueCreateQueueResponse struct {
Error string `json:"error,omitempty"`
}
// TaskQueueEnqueueRequest is the request type for TaskQueue.Enqueue.
type TaskQueueEnqueueRequest struct {
QueueName string `json:"queueName"`
Payload []byte `json:"payload"`
}
// TaskQueueEnqueueResponse is the response type for TaskQueue.Enqueue.
type TaskQueueEnqueueResponse struct {
Result string `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
// TaskQueueGetTaskStatusRequest is the request type for TaskQueue.GetTaskStatus.
type TaskQueueGetTaskStatusRequest struct {
TaskID string `json:"taskId"`
}
// TaskQueueGetTaskStatusResponse is the response type for TaskQueue.GetTaskStatus.
type TaskQueueGetTaskStatusResponse struct {
Result string `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
// TaskQueueCancelTaskRequest is the request type for TaskQueue.CancelTask.
type TaskQueueCancelTaskRequest struct {
TaskID string `json:"taskId"`
}
// TaskQueueCancelTaskResponse is the response type for TaskQueue.CancelTask.
type TaskQueueCancelTaskResponse struct {
Error string `json:"error,omitempty"`
}
// RegisterTaskQueueHostFunctions registers TaskQueue service host functions.
// The returned host functions should be added to the plugin's configuration.
func RegisterTaskQueueHostFunctions(service TaskQueueService) []extism.HostFunction {
return []extism.HostFunction{
newTaskQueueCreateQueueHostFunction(service),
newTaskQueueEnqueueHostFunction(service),
newTaskQueueGetTaskStatusHostFunction(service),
newTaskQueueCancelTaskHostFunction(service),
}
}
func newTaskQueueCreateQueueHostFunction(service TaskQueueService) extism.HostFunction {
return extism.NewHostFunctionWithStack(
"taskqueue_createqueue",
func(ctx context.Context, p *extism.CurrentPlugin, stack []uint64) {
// Read JSON request from plugin memory
reqBytes, err := p.ReadBytes(stack[0])
if err != nil {
taskqueueWriteError(p, stack, err)
return
}
var req TaskQueueCreateQueueRequest
if err := json.Unmarshal(reqBytes, &req); err != nil {
taskqueueWriteError(p, stack, err)
return
}
// Call the service method
if svcErr := service.CreateQueue(ctx, req.Name, req.Config); svcErr != nil {
taskqueueWriteError(p, stack, svcErr)
return
}
// Write JSON response to plugin memory
resp := TaskQueueCreateQueueResponse{}
taskqueueWriteResponse(p, stack, resp)
},
[]extism.ValueType{extism.ValueTypePTR},
[]extism.ValueType{extism.ValueTypePTR},
)
}
func newTaskQueueEnqueueHostFunction(service TaskQueueService) extism.HostFunction {
return extism.NewHostFunctionWithStack(
"taskqueue_enqueue",
func(ctx context.Context, p *extism.CurrentPlugin, stack []uint64) {
// Read JSON request from plugin memory
reqBytes, err := p.ReadBytes(stack[0])
if err != nil {
taskqueueWriteError(p, stack, err)
return
}
var req TaskQueueEnqueueRequest
if err := json.Unmarshal(reqBytes, &req); err != nil {
taskqueueWriteError(p, stack, err)
return
}
// Call the service method
result, svcErr := service.Enqueue(ctx, req.QueueName, req.Payload)
if svcErr != nil {
taskqueueWriteError(p, stack, svcErr)
return
}
// Write JSON response to plugin memory
resp := TaskQueueEnqueueResponse{
Result: result,
}
taskqueueWriteResponse(p, stack, resp)
},
[]extism.ValueType{extism.ValueTypePTR},
[]extism.ValueType{extism.ValueTypePTR},
)
}
func newTaskQueueGetTaskStatusHostFunction(service TaskQueueService) extism.HostFunction {
return extism.NewHostFunctionWithStack(
"taskqueue_gettaskstatus",
func(ctx context.Context, p *extism.CurrentPlugin, stack []uint64) {
// Read JSON request from plugin memory
reqBytes, err := p.ReadBytes(stack[0])
if err != nil {
taskqueueWriteError(p, stack, err)
return
}
var req TaskQueueGetTaskStatusRequest
if err := json.Unmarshal(reqBytes, &req); err != nil {
taskqueueWriteError(p, stack, err)
return
}
// Call the service method
result, svcErr := service.GetTaskStatus(ctx, req.TaskID)
if svcErr != nil {
taskqueueWriteError(p, stack, svcErr)
return
}
// Write JSON response to plugin memory
resp := TaskQueueGetTaskStatusResponse{
Result: result,
}
taskqueueWriteResponse(p, stack, resp)
},
[]extism.ValueType{extism.ValueTypePTR},
[]extism.ValueType{extism.ValueTypePTR},
)
}
func newTaskQueueCancelTaskHostFunction(service TaskQueueService) extism.HostFunction {
return extism.NewHostFunctionWithStack(
"taskqueue_canceltask",
func(ctx context.Context, p *extism.CurrentPlugin, stack []uint64) {
// Read JSON request from plugin memory
reqBytes, err := p.ReadBytes(stack[0])
if err != nil {
taskqueueWriteError(p, stack, err)
return
}
var req TaskQueueCancelTaskRequest
if err := json.Unmarshal(reqBytes, &req); err != nil {
taskqueueWriteError(p, stack, err)
return
}
// Call the service method
if svcErr := service.CancelTask(ctx, req.TaskID); svcErr != nil {
taskqueueWriteError(p, stack, svcErr)
return
}
// Write JSON response to plugin memory
resp := TaskQueueCancelTaskResponse{}
taskqueueWriteResponse(p, stack, resp)
},
[]extism.ValueType{extism.ValueTypePTR},
[]extism.ValueType{extism.ValueTypePTR},
)
}
// taskqueueWriteResponse writes a JSON response to plugin memory.
func taskqueueWriteResponse(p *extism.CurrentPlugin, stack []uint64, resp any) {
respBytes, err := json.Marshal(resp)
if err != nil {
taskqueueWriteError(p, stack, err)
return
}
respPtr, err := p.WriteBytes(respBytes)
if err != nil {
stack[0] = 0
return
}
stack[0] = respPtr
}
// taskqueueWriteError writes an error response to plugin memory.
func taskqueueWriteError(p *extism.CurrentPlugin, stack []uint64, err error) {
errResp := struct {
Error string `json:"error"`
}{Error: err.Error()}
respBytes, _ := json.Marshal(errResp)
respPtr, _ := p.WriteBytes(respBytes)
stack[0] = respPtr
}

plugins/host_httpclient.go

@@ -0,0 +1,190 @@
package plugins
import (
"bytes"
"cmp"
"context"
"fmt"
"io"
"net"
"net/http"
"net/url"
"strings"
"time"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/plugins/host"
)
const (
httpClientDefaultTimeout = 10 * time.Second
httpClientMaxRedirects = 5
httpClientMaxResponseBodyLen = 10 * 1024 * 1024 // 10 MB
)
// httpServiceImpl implements host.HTTPService.
type httpServiceImpl struct {
pluginName string
requiredHosts []string
client *http.Client
}
// newHTTPService creates a new HTTPService for a plugin.
func newHTTPService(pluginName string, permission *HTTPPermission) *httpServiceImpl {
var requiredHosts []string
if permission != nil {
requiredHosts = permission.RequiredHosts
}
svc := &httpServiceImpl{
pluginName: pluginName,
requiredHosts: requiredHosts,
}
svc.client = &http.Client{
Transport: http.DefaultTransport,
// Timeout is set per-request via context deadline, not here.
// CheckRedirect validates hosts and enforces redirect limits.
CheckRedirect: func(req *http.Request, via []*http.Request) error {
if len(via) >= httpClientMaxRedirects {
log.Warn(req.Context(), "HTTP redirect limit exceeded", "plugin", svc.pluginName, "url", req.URL.String(), "redirectCount", len(via))
return http.ErrUseLastResponse
}
if err := svc.validateHost(req.Context(), req.URL.Host); err != nil {
log.Warn(req.Context(), "HTTP redirect blocked", "plugin", svc.pluginName, "url", req.URL.String(), "err", err)
return err
}
return nil
},
}
return svc
}
func (s *httpServiceImpl) Send(ctx context.Context, request host.HTTPRequest) (*host.HTTPResponse, error) {
// Parse and validate URL
parsedURL, err := url.Parse(request.URL)
if err != nil {
return nil, fmt.Errorf("invalid URL: %w", err)
}
// Validate URL scheme
if parsedURL.Scheme != "http" && parsedURL.Scheme != "https" {
return nil, fmt.Errorf("invalid URL scheme %q: must be http or https", parsedURL.Scheme)
}
// Validate host against allowed hosts and private IP restrictions
if err := s.validateHost(ctx, parsedURL.Host); err != nil {
return nil, err
}
// Apply per-request timeout via context deadline
timeout := cmp.Or(time.Duration(request.TimeoutMs)*time.Millisecond, httpClientDefaultTimeout)
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
// Build request body
method := strings.ToUpper(request.Method)
var body io.Reader
if len(request.Body) > 0 {
body = bytes.NewReader(request.Body)
}
// Create HTTP request
httpReq, err := http.NewRequestWithContext(ctx, method, request.URL, body)
if err != nil {
return nil, fmt.Errorf("creating request: %w", err)
}
for k, v := range request.Headers {
httpReq.Header.Set(k, v)
}
// Execute request
resp, err := s.client.Do(httpReq) //nolint:gosec // URL is validated against requiredHosts
if err != nil {
return nil, err
}
defer resp.Body.Close()
log.Trace(ctx, "HTTP request", "plugin", s.pluginName, "method", method, "url", request.URL, "status", resp.StatusCode)
// Read response body (with size limit to prevent memory exhaustion)
respBody, err := io.ReadAll(io.LimitReader(resp.Body, httpClientMaxResponseBodyLen))
if err != nil {
return nil, fmt.Errorf("reading response body: %w", err)
}
// Flatten response headers (first value only)
headers := make(map[string]string, len(resp.Header))
for k, v := range resp.Header {
if len(v) > 0 {
headers[k] = v[0]
}
}
return &host.HTTPResponse{
StatusCode: int32(resp.StatusCode),
Headers: headers,
Body: respBody,
}, nil
}
// validateHost checks whether a request to the given host is permitted.
// When requiredHosts is set, it checks against the allowlist.
// When requiredHosts is empty, it blocks private/loopback IPs to prevent SSRF.
func (s *httpServiceImpl) validateHost(ctx context.Context, hostStr string) error {
hostname := extractHostname(hostStr)
if len(s.requiredHosts) > 0 {
if !s.isHostAllowed(hostname) {
return fmt.Errorf("host %q is not allowed", hostStr)
}
return nil
}
// No explicit allowlist: block private/loopback IPs
if isPrivateOrLoopback(hostname) {
log.Warn(ctx, "HTTP request to private/loopback address blocked", "plugin", s.pluginName, "host", hostStr)
return fmt.Errorf("host %q is not allowed: private/loopback addresses require explicit requiredHosts in manifest", hostStr)
}
return nil
}
func (s *httpServiceImpl) isHostAllowed(hostname string) bool {
for _, pattern := range s.requiredHosts {
if matchHostPattern(pattern, hostname) {
return true
}
}
return false
}
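`matchHostPattern` is defined elsewhere in the package; a plausible sketch of the wildcard semantics exercised by the tests (`*.allowed.org` matching subdomains, exact patterns matching case-insensitively) would be the following. The real implementation may differ in edge cases, e.g. whether `*.example.com` also matches the bare `example.com`.

```go
package main

import (
	"fmt"
	"strings"
)

// matchHostPattern reports whether hostname matches pattern.
// "*.example.com" matches any subdomain of example.com; any other
// pattern must equal the hostname (case-insensitively).
func matchHostPattern(pattern, hostname string) bool {
	pattern = strings.ToLower(pattern)
	hostname = strings.ToLower(hostname)
	if suffix, ok := strings.CutPrefix(pattern, "*."); ok {
		return strings.HasSuffix(hostname, "."+suffix)
	}
	return pattern == hostname
}

func main() {
	fmt.Println(matchHostPattern("allowed.example.com", "Allowed.Example.COM")) // true
	fmt.Println(matchHostPattern("*.allowed.org", "api.allowed.org"))           // true
	fmt.Println(matchHostPattern("*.example.com", "evil.other.com"))            // false
}
```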
// extractHostname returns the hostname portion of a host string, stripping
// any port number and IPv6 brackets. It handles IPv6 addresses correctly
// (e.g. "[::1]:8080" → "::1", "[::1]" → "::1").
func extractHostname(hostStr string) string {
if h, _, err := net.SplitHostPort(hostStr); err == nil {
return h
}
// Strip IPv6 brackets when no port is present (e.g. "[::1]" → "::1")
if strings.HasPrefix(hostStr, "[") && strings.HasSuffix(hostStr, "]") {
return hostStr[1 : len(hostStr)-1]
}
return hostStr
}
// isPrivateOrLoopback returns true if the given hostname resolves to or is
// a private, loopback, or link-local IP address. This includes:
// IPv4: 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16
// IPv6: ::1, fc00::/7, fe80::/10
// It also blocks "localhost" by name.
func isPrivateOrLoopback(hostname string) bool {
if strings.EqualFold(hostname, "localhost") {
return true
}
ip := net.ParseIP(hostname)
if ip == nil {
return false
}
return ip.IsLoopback() || ip.IsPrivate() || ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast()
}
// Verify interface implementation
var _ host.HTTPService = (*httpServiceImpl)(nil)


@@ -0,0 +1,565 @@
//go:build !windows
package plugins
import (
"context"
"io"
"net/http"
"net/http/httptest"
"strings"
"time"
"github.com/navidrome/navidrome/plugins/host"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("httpServiceImpl", func() {
var (
svc *httpServiceImpl
ts *httptest.Server
)
AfterEach(func() {
if ts != nil {
ts.Close()
}
})
Context("without host restrictions (default SSRF protection)", func() {
BeforeEach(func() {
svc = newHTTPService("test-plugin", nil)
})
It("should block requests to loopback IPs", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
}))
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to localhost by name", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://localhost:12345/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to private IPs (10.x)", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://10.0.0.1/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to private IPs (192.168.x)", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://192.168.1.1/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to private IPs (172.16.x)", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://172.16.0.1/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to link-local IPs (169.254.x)", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://169.254.169.254/latest/meta-data/",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to IPv6 loopback with port", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://[::1]:8080/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should block requests to IPv6 loopback without port", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://[::1]/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("private/loopback"))
})
It("should allow requests to public hostnames", func() {
// This will fail at the network level (connection refused or DNS),
// but it should NOT fail with a "private/loopback" error
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://203.0.113.1:1/test", // TEST-NET-3, non-routable but not private
TimeoutMs: 100,
})
// Should get a network error, not a permission error
if err != nil {
Expect(err.Error()).ToNot(ContainSubstring("private/loopback"))
}
})
It("should return error for invalid URL", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "://bad-url",
})
Expect(err).To(HaveOccurred())
})
It("should reject non-http/https URL schemes", func() {
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "ftp://example.com/file",
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("must be http or https"))
})
})
Context("with explicit requiredHosts allowing loopback", func() {
BeforeEach(func() {
svc = newHTTPService("test-plugin", &HTTPPermission{
RequiredHosts: []string{"127.0.0.1"},
})
})
It("should handle GET requests", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("GET"))
w.Header().Set("X-Test", "ok")
w.WriteHeader(201)
_, _ = w.Write([]byte("hello"))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
Headers: map[string]string{"Accept": "text/plain"},
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(resp.StatusCode).To(Equal(int32(201)))
Expect(string(resp.Body)).To(Equal("hello"))
Expect(resp.Headers["X-Test"]).To(Equal("ok"))
})
It("should handle POST requests with body", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("POST"))
b, _ := io.ReadAll(r.Body)
_, _ = w.Write([]byte("got:" + string(b)))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "POST",
URL: ts.URL,
Body: []byte("abc"),
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("got:abc"))
})
It("should handle PUT requests with body", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("PUT"))
b, _ := io.ReadAll(r.Body)
_, _ = w.Write([]byte("put:" + string(b)))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "PUT",
URL: ts.URL,
Body: []byte("xyz"),
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("put:xyz"))
})
It("should handle DELETE requests", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("DELETE"))
w.WriteHeader(204)
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "DELETE",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(resp.StatusCode).To(Equal(int32(204)))
})
It("should handle DELETE requests with body", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("DELETE"))
b, _ := io.ReadAll(r.Body)
_, _ = w.Write([]byte("del:" + string(b)))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "DELETE",
URL: ts.URL,
Body: []byte(`{"id":"123"}`),
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal(`del:{"id":"123"}`))
})
It("should handle PATCH requests with body", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("PATCH"))
b, _ := io.ReadAll(r.Body)
_, _ = w.Write([]byte("patch:" + string(b)))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "PATCH",
URL: ts.URL,
Body: []byte("data"),
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("patch:data"))
})
It("should handle HEAD requests", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
Expect(r.Method).To(Equal("HEAD"))
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "HEAD",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(resp.StatusCode).To(Equal(int32(200)))
Expect(resp.Headers["Content-Type"]).To(Equal("application/json"))
Expect(resp.Body).To(BeEmpty())
})
It("should use default timeout when TimeoutMs is 0", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
})
Expect(err).ToNot(HaveOccurred())
Expect(resp.StatusCode).To(Equal(int32(200)))
})
It("should return error on timeout", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(50 * time.Millisecond)
}))
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("deadline exceeded"))
})
It("should return error on context cancellation", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(50 * time.Millisecond)
}))
ctx, cancel := context.WithCancel(context.Background())
go func() {
time.Sleep(1 * time.Millisecond)
cancel()
}()
_, err := svc.Send(ctx, host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 5000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("context canceled"))
})
It("should send request headers", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte(r.Header.Get("X-Custom")))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
Headers: map[string]string{"X-Custom": "myvalue"},
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("myvalue"))
})
})
Context("with host restrictions", func() {
BeforeEach(func() {
svc = newHTTPService("test-plugin", &HTTPPermission{
RequiredHosts: []string{"allowed.example.com", "*.allowed.org"},
})
})
It("should block requests to non-allowed hosts", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
}))
// httptest server is on 127.0.0.1 which is not in requiredHosts
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not allowed"))
})
It("should follow redirects to allowed hosts", func() {
// Create a destination server
dest := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("final"))
}))
defer dest.Close()
// Create a redirect server
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, dest.URL, http.StatusFound)
}))
// Allow both servers (both on 127.0.0.1)
svc.requiredHosts = []string{"127.0.0.1"}
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(resp.StatusCode).To(Equal(int32(200)))
Expect(string(resp.Body)).To(Equal("final"))
})
It("should block redirects to non-allowed hosts", func() {
// Server that redirects to a disallowed host
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "http://evil.example.com/steal", http.StatusFound)
}))
// Override requiredHosts to allow the test server
svc.requiredHosts = []string{"127.0.0.1"}
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not allowed"))
})
It("should block redirects to private IPs when allowlist is set", func() {
// Server that redirects to a private IP
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "http://10.0.0.1/internal", http.StatusFound)
}))
// Allow the test server; redirect to 10.0.0.1 is blocked by allowlist
svc.requiredHosts = []string{"127.0.0.1"}
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(resp).To(BeNil())
})
It("should allow wildcard host patterns", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("wildcard"))
}))
// *.allowed.org is in the requiredHosts from BeforeEach, but test server is 127.0.0.1
// Override with a wildcard that matches the test server
svc.requiredHosts = []string{"*.0.0.1"}
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("wildcard"))
})
It("should reject hosts not matching wildcard patterns", func() {
svc.requiredHosts = []string{"*.example.com"}
_, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: "http://evil.other.com/test",
TimeoutMs: 1000,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not allowed"))
})
})
Context("response body size limit", func() {
BeforeEach(func() {
svc = newHTTPService("test-plugin", &HTTPPermission{
RequiredHosts: []string{"127.0.0.1"},
})
})
It("should truncate response body at the size limit", func() {
// Serve a body larger than the limit
oversizedBody := strings.Repeat("x", httpClientMaxResponseBodyLen+1024)
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte(oversizedBody))
}))
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "GET",
URL: ts.URL,
TimeoutMs: 5000,
})
Expect(err).ToNot(HaveOccurred())
Expect(len(resp.Body)).To(Equal(httpClientMaxResponseBodyLen))
})
})
Context("edge cases", func() {
BeforeEach(func() {
svc = newHTTPService("test-plugin", &HTTPPermission{
RequiredHosts: []string{"127.0.0.1"},
})
})
It("should default empty method to GET", func() {
ts = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("method:" + r.Method))
}))
// Empty method — Go's http.NewRequestWithContext normalizes "" to "GET"
resp, err := svc.Send(context.Background(), host.HTTPRequest{
Method: "",
URL: ts.URL,
TimeoutMs: 1000,
})
Expect(err).ToNot(HaveOccurred())
Expect(string(resp.Body)).To(Equal("method:GET"))
})
})
})
var _ = Describe("extractHostname", func() {
It("should extract hostname from host:port", func() {
Expect(extractHostname("example.com:8080")).To(Equal("example.com"))
})
It("should return hostname when no port", func() {
Expect(extractHostname("example.com")).To(Equal("example.com"))
})
It("should handle IPv6 with port", func() {
Expect(extractHostname("[::1]:8080")).To(Equal("::1"))
})
It("should handle IPv6 without port", func() {
Expect(extractHostname("::1")).To(Equal("::1"))
})
It("should strip brackets from IPv6 without port", func() {
Expect(extractHostname("[::1]")).To(Equal("::1"))
})
It("should handle IPv4 with port", func() {
Expect(extractHostname("127.0.0.1:9090")).To(Equal("127.0.0.1"))
})
It("should handle IPv4 without port", func() {
Expect(extractHostname("127.0.0.1")).To(Equal("127.0.0.1"))
})
})
var _ = Describe("isPrivateOrLoopback", func() {
It("should detect IPv4 loopback", func() {
Expect(isPrivateOrLoopback("127.0.0.1")).To(BeTrue())
Expect(isPrivateOrLoopback("127.0.0.2")).To(BeTrue())
})
It("should detect IPv6 loopback", func() {
Expect(isPrivateOrLoopback("::1")).To(BeTrue())
})
It("should detect localhost by name", func() {
Expect(isPrivateOrLoopback("localhost")).To(BeTrue())
Expect(isPrivateOrLoopback("LOCALHOST")).To(BeTrue())
})
It("should detect 10.x.x.x private range", func() {
Expect(isPrivateOrLoopback("10.0.0.1")).To(BeTrue())
Expect(isPrivateOrLoopback("10.255.255.255")).To(BeTrue())
})
It("should detect 172.16.x.x private range", func() {
Expect(isPrivateOrLoopback("172.16.0.1")).To(BeTrue())
Expect(isPrivateOrLoopback("172.31.255.255")).To(BeTrue())
})
It("should detect 192.168.x.x private range", func() {
Expect(isPrivateOrLoopback("192.168.0.1")).To(BeTrue())
Expect(isPrivateOrLoopback("192.168.255.255")).To(BeTrue())
})
It("should detect link-local addresses", func() {
Expect(isPrivateOrLoopback("169.254.169.254")).To(BeTrue())
Expect(isPrivateOrLoopback("169.254.0.1")).To(BeTrue())
})
It("should detect IPv6 private (fc00::/7)", func() {
Expect(isPrivateOrLoopback("fd00::1")).To(BeTrue())
})
It("should detect IPv6 link-local (fe80::/10)", func() {
Expect(isPrivateOrLoopback("fe80::1")).To(BeTrue())
})
It("should allow public IPs", func() {
Expect(isPrivateOrLoopback("8.8.8.8")).To(BeFalse())
Expect(isPrivateOrLoopback("203.0.113.1")).To(BeFalse())
Expect(isPrivateOrLoopback("2001:db8::1")).To(BeFalse())
})
It("should allow non-IP hostnames (DNS names)", func() {
Expect(isPrivateOrLoopback("example.com")).To(BeFalse())
Expect(isPrivateOrLoopback("api.example.com")).To(BeFalse())
})
It("should not treat 172.32.x.x as private", func() {
Expect(isPrivateOrLoopback("172.32.0.1")).To(BeFalse())
})
})
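The helper exercised above is small enough to sketch in isolation. The following is an illustrative re-implementation of the `extractHostname` behavior (a sketch consistent with what the tests assert, not the actual code under test), built on the standard library's `net.SplitHostPort`:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// extractHost strips an optional :port suffix and any IPv6 brackets,
// returning just the hostname or IP literal.
func extractHost(hostport string) string {
	// SplitHostPort handles "host:port", "ip:port", and "[ipv6]:port".
	if h, _, err := net.SplitHostPort(hostport); err == nil {
		return h
	}
	// No port present: trim IPv6 brackets if any ("[::1]" -> "::1").
	return strings.Trim(hostport, "[]")
}

func main() {
	fmt.Println(extractHost("example.com:8080")) // example.com
	fmt.Println(extractHost("[::1]:8080"))       // ::1
	fmt.Println(extractHost("[::1]"))            // ::1
	fmt.Println(extractHost("127.0.0.1"))        // 127.0.0.1
}
```

Note that `net.SplitHostPort` rejects a bare IPv6 literal like `::1` (too many colons), which is why the bracket-trimming fallback is needed for the no-port cases.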


@@ -188,12 +188,6 @@ func (s *schedulerServiceImpl) invokeCallback(ctx context.Context, scheduleID st
 		return
 	}
-	// Check if plugin has the scheduler capability
-	if !hasCapability(instance.capabilities, CapabilityScheduler) {
-		log.Warn(ctx, "Plugin does not have scheduler capability", "plugin", s.pluginName, "scheduleID", scheduleID)
-		return
-	}
 	// Prepare callback input
 	input := capabilities.SchedulerCallbackRequest{
 		ScheduleID: scheduleID,

plugins/host_taskqueue.go Normal file

@@ -0,0 +1,562 @@
package plugins
import (
"context"
"database/sql"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"sync"
"time"
_ "github.com/mattn/go-sqlite3"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model/id"
"github.com/navidrome/navidrome/plugins/capabilities"
"github.com/navidrome/navidrome/plugins/host"
"golang.org/x/time/rate"
)
const (
defaultConcurrency int32 = 1
defaultBackoffMs int64 = 1000
defaultRetentionMs int64 = 3_600_000 // 1 hour
minRetentionMs int64 = 60_000 // 1 minute
maxRetentionMs int64 = 604_800_000 // 1 week
maxQueueNameLength = 128
maxPayloadSize = 1 * 1024 * 1024 // 1MB
maxBackoffMs int64 = 3_600_000 // 1 hour
cleanupInterval = 5 * time.Minute
pollInterval = 5 * time.Second
shutdownTimeout = 10 * time.Second
taskStatusPending = "pending"
taskStatusRunning = "running"
taskStatusCompleted = "completed"
taskStatusFailed = "failed"
taskStatusCancelled = "cancelled"
)
// CapabilityTaskWorker indicates the plugin can receive task execution callbacks.
const CapabilityTaskWorker Capability = "TaskWorker"
const FuncTaskWorkerCallback = "nd_task_execute"
func init() {
registerCapability(CapabilityTaskWorker, FuncTaskWorkerCallback)
}
type queueState struct {
config host.QueueConfig
signal chan struct{}
limiter *rate.Limiter
}
// notifyWorkers sends a non-blocking signal to wake up queue workers.
func (qs *queueState) notifyWorkers() {
select {
case qs.signal <- struct{}{}:
default:
}
}
// taskQueueServiceImpl implements host.TaskQueueService with SQLite persistence
// and background worker goroutines for task execution.
type taskQueueServiceImpl struct {
pluginName string
manager *Manager
maxConcurrency int32
db *sql.DB
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
mu sync.Mutex
queues map[string]*queueState
// For testing: override how callbacks are invoked
invokeCallbackFn func(ctx context.Context, queueName, taskID string, payload []byte, attempt int32) error
}
// newTaskQueueService creates a new taskQueueServiceImpl with its own SQLite database.
func newTaskQueueService(pluginName string, manager *Manager, maxConcurrency int32) (*taskQueueServiceImpl, error) {
dataDir := filepath.Join(conf.Server.DataFolder, "plugins", pluginName)
if err := os.MkdirAll(dataDir, 0700); err != nil {
return nil, fmt.Errorf("creating plugin data directory: %w", err)
}
dbPath := filepath.Join(dataDir, "taskqueue.db")
db, err := sql.Open("sqlite3", dbPath+"?_busy_timeout=5000&_journal_mode=WAL&_foreign_keys=off")
if err != nil {
return nil, fmt.Errorf("opening taskqueue database: %w", err)
}
db.SetMaxOpenConns(3)
db.SetMaxIdleConns(1)
if err := createTaskQueueSchema(db); err != nil {
db.Close()
return nil, fmt.Errorf("creating taskqueue schema: %w", err)
}
ctx, cancel := context.WithCancel(manager.ctx)
s := &taskQueueServiceImpl{
pluginName: pluginName,
manager: manager,
maxConcurrency: maxConcurrency,
db: db,
ctx: ctx,
cancel: cancel,
queues: make(map[string]*queueState),
}
s.invokeCallbackFn = s.defaultInvokeCallback
s.wg.Go(s.cleanupLoop)
log.Debug("Initialized plugin taskqueue", "plugin", pluginName, "path", dbPath, "maxConcurrency", maxConcurrency)
return s, nil
}
func createTaskQueueSchema(db *sql.DB) error {
_, err := db.Exec(`
CREATE TABLE IF NOT EXISTS queues (
name TEXT PRIMARY KEY,
concurrency INTEGER NOT NULL DEFAULT 1,
max_retries INTEGER NOT NULL DEFAULT 0,
backoff_ms INTEGER NOT NULL DEFAULT 1000,
delay_ms INTEGER NOT NULL DEFAULT 0,
retention_ms INTEGER NOT NULL DEFAULT 3600000
);
CREATE TABLE IF NOT EXISTS tasks (
id TEXT PRIMARY KEY,
queue_name TEXT NOT NULL REFERENCES queues(name),
payload BLOB NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
attempt INTEGER NOT NULL DEFAULT 0,
max_retries INTEGER NOT NULL,
next_run_at INTEGER NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_tasks_dequeue ON tasks(queue_name, status, next_run_at);
`)
return err
}
// applyConfigDefaults fills zero-value config fields with sensible defaults
// and clamps values to valid ranges, logging warnings for clamped values.
func (s *taskQueueServiceImpl) applyConfigDefaults(ctx context.Context, name string, config *host.QueueConfig) {
if config.Concurrency <= 0 {
config.Concurrency = defaultConcurrency
}
if config.BackoffMs <= 0 {
config.BackoffMs = defaultBackoffMs
}
if config.RetentionMs <= 0 {
config.RetentionMs = defaultRetentionMs
}
if config.RetentionMs < minRetentionMs {
log.Warn(ctx, "TaskQueue retention clamped to minimum", "plugin", s.pluginName, "queue", name,
"requested", config.RetentionMs, "min", minRetentionMs)
config.RetentionMs = minRetentionMs
}
if config.RetentionMs > maxRetentionMs {
log.Warn(ctx, "TaskQueue retention clamped to maximum", "plugin", s.pluginName, "queue", name,
"requested", config.RetentionMs, "max", maxRetentionMs)
config.RetentionMs = maxRetentionMs
}
}
// clampConcurrency reduces config.Concurrency if it exceeds the remaining budget.
// Returns an error when the concurrency budget is fully exhausted.
// Must be called with s.mu held.
func (s *taskQueueServiceImpl) clampConcurrency(ctx context.Context, name string, config *host.QueueConfig) error {
var allocated int32
for _, qs := range s.queues {
allocated += qs.config.Concurrency
}
available := s.maxConcurrency - allocated
if available <= 0 {
log.Warn(ctx, "TaskQueue concurrency budget exhausted", "plugin", s.pluginName, "queue", name,
"allocated", allocated, "maxConcurrency", s.maxConcurrency)
return fmt.Errorf("concurrency budget exhausted (%d/%d allocated)", allocated, s.maxConcurrency)
}
if config.Concurrency > available {
log.Warn(ctx, "TaskQueue concurrency clamped", "plugin", s.pluginName, "queue", name,
"requested", config.Concurrency, "available", available, "maxConcurrency", s.maxConcurrency)
config.Concurrency = available
}
return nil
}
func (s *taskQueueServiceImpl) CreateQueue(ctx context.Context, name string, config host.QueueConfig) error {
if len(name) == 0 {
return fmt.Errorf("queue name cannot be empty")
}
if len(name) > maxQueueNameLength {
return fmt.Errorf("queue name exceeds maximum length of %d bytes", maxQueueNameLength)
}
s.applyConfigDefaults(ctx, name, &config)
s.mu.Lock()
defer s.mu.Unlock()
if err := s.clampConcurrency(ctx, name, &config); err != nil {
return err
}
if _, exists := s.queues[name]; exists {
return fmt.Errorf("queue %q already exists", name)
}
// Upsert into queues table (idempotent across restarts)
_, err := s.db.ExecContext(ctx, `
INSERT INTO queues (name, concurrency, max_retries, backoff_ms, delay_ms, retention_ms)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(name) DO UPDATE SET
concurrency = excluded.concurrency,
max_retries = excluded.max_retries,
backoff_ms = excluded.backoff_ms,
delay_ms = excluded.delay_ms,
retention_ms = excluded.retention_ms
`, name, config.Concurrency, config.MaxRetries, config.BackoffMs, config.DelayMs, config.RetentionMs)
if err != nil {
return fmt.Errorf("creating queue: %w", err)
}
// Reset stale running tasks from previous crash
now := time.Now().UnixMilli()
_, err = s.db.ExecContext(ctx, `
UPDATE tasks SET status = ?, updated_at = ? WHERE queue_name = ? AND status = ?
`, taskStatusPending, now, name, taskStatusRunning)
if err != nil {
return fmt.Errorf("resetting stale tasks: %w", err)
}
qs := &queueState{
config: config,
signal: make(chan struct{}, 1),
}
if config.DelayMs > 0 {
// Rate limit dispatches to enforce delay between tasks.
// Burst of 1 allows one immediate dispatch, then enforces the delay interval.
qs.limiter = rate.NewLimiter(rate.Every(time.Duration(config.DelayMs)*time.Millisecond), 1)
}
s.queues[name] = qs
for i := int32(0); i < config.Concurrency; i++ {
s.wg.Go(func() { s.worker(name, qs) })
}
log.Debug(ctx, "Created task queue", "plugin", s.pluginName, "queue", name,
"concurrency", config.Concurrency, "maxRetries", config.MaxRetries,
"backoffMs", config.BackoffMs, "delayMs", config.DelayMs, "retentionMs", config.RetentionMs)
return nil
}
func (s *taskQueueServiceImpl) Enqueue(ctx context.Context, queueName string, payload []byte) (string, error) {
s.mu.Lock()
qs, exists := s.queues[queueName]
s.mu.Unlock()
if !exists {
return "", fmt.Errorf("queue %q does not exist", queueName)
}
if len(payload) > maxPayloadSize {
return "", fmt.Errorf("payload size %d exceeds maximum of %d bytes", len(payload), maxPayloadSize)
}
taskID := id.NewRandom()
now := time.Now().UnixMilli()
_, err := s.db.ExecContext(ctx, `
INSERT INTO tasks (id, queue_name, payload, status, attempt, max_retries, next_run_at, created_at, updated_at)
VALUES (?, ?, ?, ?, 0, ?, ?, ?, ?)
`, taskID, queueName, payload, taskStatusPending, qs.config.MaxRetries, now, now, now)
if err != nil {
return "", fmt.Errorf("enqueuing task: %w", err)
}
qs.notifyWorkers()
log.Trace(ctx, "Enqueued task", "plugin", s.pluginName, "queue", queueName, "taskID", taskID)
return taskID, nil
}
// GetTaskStatus returns the status of a task.
func (s *taskQueueServiceImpl) GetTaskStatus(ctx context.Context, taskID string) (string, error) {
var status string
err := s.db.QueryRowContext(ctx, `SELECT status FROM tasks WHERE id = ?`, taskID).Scan(&status)
if errors.Is(err, sql.ErrNoRows) {
return "", fmt.Errorf("task %q not found", taskID)
}
if err != nil {
return "", fmt.Errorf("getting task status: %w", err)
}
return status, nil
}
// CancelTask cancels a pending task.
func (s *taskQueueServiceImpl) CancelTask(ctx context.Context, taskID string) error {
now := time.Now().UnixMilli()
result, err := s.db.ExecContext(ctx, `
UPDATE tasks SET status = ?, updated_at = ? WHERE id = ? AND status = ?
`, taskStatusCancelled, now, taskID, taskStatusPending)
if err != nil {
return fmt.Errorf("cancelling task: %w", err)
}
rowsAffected, err := result.RowsAffected()
if err != nil {
return fmt.Errorf("checking cancel result: %w", err)
}
if rowsAffected == 0 {
// Check if task exists at all
var status string
err := s.db.QueryRowContext(ctx, `SELECT status FROM tasks WHERE id = ?`, taskID).Scan(&status)
if errors.Is(err, sql.ErrNoRows) {
return fmt.Errorf("task %q not found", taskID)
}
if err != nil {
return fmt.Errorf("checking task existence: %w", err)
}
return fmt.Errorf("task %q cannot be cancelled (status: %s)", taskID, status)
}
log.Trace(ctx, "Cancelled task", "plugin", s.pluginName, "taskID", taskID)
return nil
}
// worker is the main loop for a single worker goroutine.
func (s *taskQueueServiceImpl) worker(queueName string, qs *queueState) {
// Process any existing pending tasks immediately on startup
s.drainQueue(queueName, qs)
ticker := time.NewTicker(pollInterval)
defer ticker.Stop()
for {
select {
case <-s.ctx.Done():
return
case <-qs.signal:
s.drainQueue(queueName, qs)
case <-ticker.C:
s.drainQueue(queueName, qs)
}
}
}
func (s *taskQueueServiceImpl) drainQueue(queueName string, qs *queueState) {
for s.ctx.Err() == nil && s.processTask(queueName, qs) {
}
}
// processTask dequeues and processes a single task. Returns true if a task was processed.
func (s *taskQueueServiceImpl) processTask(queueName string, qs *queueState) bool {
now := time.Now().UnixMilli()
// Atomically dequeue a task
var taskID string
var payload []byte
var attempt, maxRetries int32
err := s.db.QueryRowContext(s.ctx, `
UPDATE tasks SET status = ?, attempt = attempt + 1, updated_at = ?
WHERE id = (
SELECT id FROM tasks
WHERE queue_name = ? AND status = ? AND next_run_at <= ?
ORDER BY next_run_at, created_at LIMIT 1
)
RETURNING id, payload, attempt, max_retries
`, taskStatusRunning, now, queueName, taskStatusPending, now).Scan(&taskID, &payload, &attempt, &maxRetries)
if errors.Is(err, sql.ErrNoRows) {
return false
}
if err != nil {
log.Error(s.ctx, "Failed to dequeue task", "plugin", s.pluginName, "queue", queueName, err)
return false
}
// Enforce delay between task dispatches using a rate limiter.
// This is done after dequeue so that empty polls don't consume rate tokens.
if qs.limiter != nil {
if err := qs.limiter.Wait(s.ctx); err != nil {
// Context cancelled during wait — revert task to pending for recovery
s.revertTaskToPending(taskID)
return false
}
}
// Invoke callback
log.Debug(s.ctx, "Executing task", "plugin", s.pluginName, "queue", queueName, "taskID", taskID, "attempt", attempt)
callbackErr := s.invokeCallbackFn(s.ctx, queueName, taskID, payload, attempt)
// If context was cancelled (shutdown), revert task to pending for recovery
if s.ctx.Err() != nil {
s.revertTaskToPending(taskID)
return false
}
if callbackErr == nil {
s.completeTask(queueName, taskID)
} else {
s.handleTaskFailure(queueName, taskID, attempt, maxRetries, qs, callbackErr)
}
return true
}
func (s *taskQueueServiceImpl) completeTask(queueName, taskID string) {
now := time.Now().UnixMilli()
if _, err := s.db.ExecContext(s.ctx, `UPDATE tasks SET status = ?, updated_at = ? WHERE id = ?`, taskStatusCompleted, now, taskID); err != nil {
log.Error(s.ctx, "Failed to mark task as completed", "plugin", s.pluginName, "taskID", taskID, err)
}
log.Debug(s.ctx, "Task completed", "plugin", s.pluginName, "queue", queueName, "taskID", taskID)
}
func (s *taskQueueServiceImpl) handleTaskFailure(queueName, taskID string, attempt, maxRetries int32, qs *queueState, callbackErr error) {
log.Warn(s.ctx, "Task execution failed", "plugin", s.pluginName, "queue", queueName,
"taskID", taskID, "attempt", attempt, "maxRetries", maxRetries, "err", callbackErr)
now := time.Now().UnixMilli()
if attempt > maxRetries {
if _, err := s.db.ExecContext(s.ctx, `UPDATE tasks SET status = ?, updated_at = ? WHERE id = ?`, taskStatusFailed, now, taskID); err != nil {
log.Error(s.ctx, "Failed to mark task as failed", "plugin", s.pluginName, "taskID", taskID, err)
}
log.Warn(s.ctx, "Task failed after all retries", "plugin", s.pluginName, "queue", queueName, "taskID", taskID)
return
}
// Exponential backoff: backoffMs * 2^(attempt-1)
backoff := qs.config.BackoffMs << (attempt - 1)
if backoff <= 0 || backoff > maxBackoffMs {
backoff = maxBackoffMs
}
nextRunAt := now + backoff
if _, err := s.db.ExecContext(s.ctx, `
UPDATE tasks SET status = ?, next_run_at = ?, updated_at = ? WHERE id = ?
`, taskStatusPending, nextRunAt, now, taskID); err != nil {
log.Error(s.ctx, "Failed to reschedule task for retry", "plugin", s.pluginName, "taskID", taskID, err)
}
// Wake worker after backoff expires
time.AfterFunc(time.Duration(backoff)*time.Millisecond, func() {
qs.notifyWorkers()
})
}
// revertTaskToPending puts a running task back to pending status and decrements the attempt
// counter (used during shutdown to ensure the interrupted attempt doesn't count).
func (s *taskQueueServiceImpl) revertTaskToPending(taskID string) {
now := time.Now().UnixMilli()
_, err := s.db.Exec(`UPDATE tasks SET status = ?, attempt = MAX(attempt - 1, 0), updated_at = ? WHERE id = ? AND status = ?`, taskStatusPending, now, taskID, taskStatusRunning)
if err != nil {
log.Error("Failed to revert task to pending", "plugin", s.pluginName, "taskID", taskID, err)
}
}
// defaultInvokeCallback calls the plugin's nd_task_execute function.
func (s *taskQueueServiceImpl) defaultInvokeCallback(ctx context.Context, queueName, taskID string, payload []byte, attempt int32) error {
s.manager.mu.RLock()
p, ok := s.manager.plugins[s.pluginName]
s.manager.mu.RUnlock()
if !ok {
return fmt.Errorf("plugin %s not loaded", s.pluginName)
}
input := capabilities.TaskExecuteRequest{
QueueName: queueName,
TaskID: taskID,
Payload: payload,
Attempt: attempt,
}
result, err := callPluginFunction[capabilities.TaskExecuteRequest, capabilities.TaskExecuteResponse](ctx, p, FuncTaskWorkerCallback, input)
if err != nil {
return err
}
if result.Error != "" {
return fmt.Errorf("%s", result.Error)
}
return nil
}
// cleanupLoop periodically removes terminal tasks past their retention period.
func (s *taskQueueServiceImpl) cleanupLoop() {
ticker := time.NewTicker(cleanupInterval)
defer ticker.Stop()
for {
select {
case <-s.ctx.Done():
return
case <-ticker.C:
s.runCleanup()
}
}
}
// runCleanup deletes terminal tasks past their retention period.
func (s *taskQueueServiceImpl) runCleanup() {
s.mu.Lock()
queues := make(map[string]*queueState, len(s.queues))
for k, v := range s.queues {
queues[k] = v
}
s.mu.Unlock()
now := time.Now().UnixMilli()
for name, qs := range queues {
result, err := s.db.ExecContext(s.ctx, `
DELETE FROM tasks WHERE queue_name = ? AND status IN (?, ?, ?) AND updated_at + ? < ?
`, name, taskStatusCompleted, taskStatusFailed, taskStatusCancelled, qs.config.RetentionMs, now)
if err != nil {
log.Error(s.ctx, "Failed to cleanup tasks", "plugin", s.pluginName, "queue", name, err)
continue
}
if deleted, _ := result.RowsAffected(); deleted > 0 {
log.Debug(s.ctx, "Cleaned up terminal tasks", "plugin", s.pluginName, "queue", name, "deleted", deleted)
}
}
}
// Close shuts down the task queue service, stopping all workers and closing the database.
func (s *taskQueueServiceImpl) Close() error {
// Cancel context to signal all goroutines
s.cancel()
// Wait for goroutines with timeout
done := make(chan struct{})
go func() {
s.wg.Wait()
close(done)
}()
select {
case <-done:
case <-time.After(shutdownTimeout):
log.Warn("TaskQueue shutdown timed out", "plugin", s.pluginName)
}
// Mark running tasks as pending for recovery on next startup
if s.db != nil {
now := time.Now().UnixMilli()
if _, err := s.db.Exec(`UPDATE tasks SET status = ?, updated_at = ? WHERE status = ?`, taskStatusPending, now, taskStatusRunning); err != nil {
log.Error("Failed to reset running tasks on shutdown", "plugin", s.pluginName, err)
}
log.Debug("Closing plugin taskqueue", "plugin", s.pluginName)
return s.db.Close()
}
return nil
}
// Compile-time verification
var _ host.TaskQueueService = (*taskQueueServiceImpl)(nil)
var _ io.Closer = (*taskQueueServiceImpl)(nil)
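The capped exponential backoff used in `handleTaskFailure` above can be sketched standalone. This is an illustrative extraction (the `nextBackoff` helper name is not in the source); it mirrors the `backoffMs << (attempt-1)` computation, where a shift overflow produces a non-positive value that falls back to the cap:

```go
package main

import "fmt"

// maxBackoffMs mirrors the 1-hour cap defined in host_taskqueue.go.
const maxBackoffMs int64 = 3_600_000

// nextBackoff computes the retry delay: backoffMs * 2^(attempt-1),
// capped at maxBackoffMs. A left shift that overflows (yielding a
// zero or negative value) also resolves to the cap.
func nextBackoff(backoffMs int64, attempt int32) int64 {
	b := backoffMs << (attempt - 1)
	if b <= 0 || b > maxBackoffMs {
		return maxBackoffMs
	}
	return b
}

func main() {
	fmt.Println(nextBackoff(1000, 1))  // 1000
	fmt.Println(nextBackoff(1000, 3))  // 4000
	fmt.Println(nextBackoff(1000, 60)) // 3600000 (capped)
}
```

The cap serves two purposes: it bounds how long a task can be deferred, and it neutralizes integer overflow from large attempt counts, since Go defines shifts by amounts at or beyond the operand width rather than leaving them undefined.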


@@ -0,0 +1,968 @@
//go:build !windows
package plugins
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"path/filepath"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/plugins/host"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("TaskQueueService", func() {
var tmpDir string
var service *taskQueueServiceImpl
var ctx context.Context
var manager *Manager
BeforeEach(func() {
ctx = GinkgoT().Context()
var err error
tmpDir, err = os.MkdirTemp("", "taskqueue-test-*")
Expect(err).ToNot(HaveOccurred())
DeferCleanup(configtest.SetupConfig())
conf.Server.DataFolder = tmpDir
// Create a mock manager with context
managerCtx, cancel := context.WithCancel(ctx)
manager = &Manager{
plugins: make(map[string]*plugin),
ctx: managerCtx,
}
DeferCleanup(cancel)
service, err = newTaskQueueService("test_plugin", manager, 5)
Expect(err).ToNot(HaveOccurred())
})
AfterEach(func() {
if service != nil {
service.Close()
}
os.RemoveAll(tmpDir)
})
Describe("CreateQueue", func() {
It("creates a queue successfully", func() {
err := service.CreateQueue(ctx, "my-queue", host.QueueConfig{
Concurrency: 2,
MaxRetries: 3,
BackoffMs: 2000,
RetentionMs: 7200000,
})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs, exists := service.queues["my-queue"]
service.mu.Unlock()
Expect(exists).To(BeTrue())
Expect(qs.config.Concurrency).To(Equal(int32(2)))
Expect(qs.config.MaxRetries).To(Equal(int32(3)))
Expect(qs.config.BackoffMs).To(Equal(int64(2000)))
Expect(qs.config.RetentionMs).To(Equal(int64(7200000)))
})
It("returns error for duplicate queue name", func() {
err := service.CreateQueue(ctx, "dup-queue", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
err = service.CreateQueue(ctx, "dup-queue", host.QueueConfig{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("already exists"))
})
})
Describe("CreateQueue name validation", func() {
It("rejects empty queue name", func() {
err := service.CreateQueue(ctx, "", host.QueueConfig{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("queue name cannot be empty"))
})
It("rejects over-length queue name", func() {
longName := strings.Repeat("a", maxQueueNameLength+1)
err := service.CreateQueue(ctx, longName, host.QueueConfig{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("exceeds maximum length"))
})
It("accepts queue name at maximum length", func() {
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
return nil
}
exactName := strings.Repeat("a", maxQueueNameLength)
err := service.CreateQueue(ctx, exactName, host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
})
})
Describe("CreateQueue defaults", func() {
It("applies defaults for zero-value config", func() {
err := service.CreateQueue(ctx, "defaults-queue", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs := service.queues["defaults-queue"]
service.mu.Unlock()
Expect(qs.config.Concurrency).To(Equal(defaultConcurrency))
Expect(qs.config.BackoffMs).To(Equal(defaultBackoffMs))
Expect(qs.config.RetentionMs).To(Equal(defaultRetentionMs))
})
})
Describe("CreateQueue defaults with negative values", func() {
It("applies default RetentionMs for negative value", func() {
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
return nil
}
err := service.CreateQueue(ctx, "neg-retention", host.QueueConfig{
RetentionMs: -500,
})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs := service.queues["neg-retention"]
service.mu.Unlock()
Expect(qs.config.RetentionMs).To(Equal(defaultRetentionMs))
})
})
Describe("CreateQueue clamping", func() {
It("clamps concurrency exceeding maxConcurrency", func() {
// maxConcurrency is 5; request 10
err := service.CreateQueue(ctx, "clamped-queue", host.QueueConfig{
Concurrency: 10,
})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs := service.queues["clamped-queue"]
service.mu.Unlock()
Expect(qs.config.Concurrency).To(Equal(int32(5)))
})
It("returns error when concurrency budget is exhausted", func() {
// maxConcurrency is 5; create a queue that uses all 5
err := service.CreateQueue(ctx, "full-budget", host.QueueConfig{
Concurrency: 5,
})
Expect(err).ToNot(HaveOccurred())
// Next queue should fail — no budget remaining
err = service.CreateQueue(ctx, "over-budget", host.QueueConfig{
Concurrency: 1,
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("concurrency budget exhausted"))
})
It("clamps retention below minimum", func() {
err := service.CreateQueue(ctx, "low-retention", host.QueueConfig{
RetentionMs: 100, // below minRetentionMs
})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs := service.queues["low-retention"]
service.mu.Unlock()
Expect(qs.config.RetentionMs).To(Equal(minRetentionMs))
})
It("clamps retention above maximum", func() {
err := service.CreateQueue(ctx, "high-retention", host.QueueConfig{
RetentionMs: 999_999_999_999, // above maxRetentionMs
})
Expect(err).ToNot(HaveOccurred())
service.mu.Lock()
qs := service.queues["high-retention"]
service.mu.Unlock()
Expect(qs.config.RetentionMs).To(Equal(maxRetentionMs))
})
})
Describe("Enqueue", func() {
BeforeEach(func() {
// Use a no-op callback to prevent actual execution attempts
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
return nil
}
err := service.CreateQueue(ctx, "enqueue-test", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
})
It("enqueues a task and returns task ID", func() {
taskID, err := service.Enqueue(ctx, "enqueue-test", []byte("payload"))
Expect(err).ToNot(HaveOccurred())
Expect(taskID).ToNot(BeEmpty())
})
It("returns error for non-existent queue", func() {
_, err := service.Enqueue(ctx, "no-such-queue", []byte("payload"))
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("does not exist"))
})
It("rejects payload exceeding maximum size", func() {
bigPayload := make([]byte, maxPayloadSize+1)
_, err := service.Enqueue(ctx, "enqueue-test", bigPayload)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("exceeds maximum"))
})
It("accepts payload at maximum size", func() {
exactPayload := make([]byte, maxPayloadSize)
taskID, err := service.Enqueue(ctx, "enqueue-test", exactPayload)
Expect(err).ToNot(HaveOccurred())
Expect(taskID).ToNot(BeEmpty())
})
})
Describe("GetTaskStatus", func() {
BeforeEach(func() {
// Use a callback that blocks until context is cancelled so tasks stay pending
service.invokeCallbackFn = func(ctx context.Context, _, _ string, _ []byte, _ int32) error {
<-ctx.Done()
return ctx.Err()
}
})
It("returns pending for a new task", func() {
err := service.CreateQueue(ctx, "status-test", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "status-test", []byte("data"))
Expect(err).ToNot(HaveOccurred())
// The task may get picked up quickly; check initial status
// Since the callback blocks, it should be either pending or running
status, err := service.GetTaskStatus(ctx, taskID)
Expect(err).ToNot(HaveOccurred())
Expect(status).To(BeElementOf("pending", "running"))
})
It("returns error for unknown task ID", func() {
_, err := service.GetTaskStatus(ctx, "nonexistent-id")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not found"))
})
})
Describe("CancelTask", func() {
BeforeEach(func() {
// Block callback so tasks stay in pending/running
service.invokeCallbackFn = func(ctx context.Context, _, _ string, _ []byte, _ int32) error {
<-ctx.Done()
return ctx.Err()
}
})
It("cancels a pending task", func() {
// Block the callback so the first task occupies the worker
started := make(chan struct{})
service.invokeCallbackFn = func(ctx context.Context, _, _ string, _ []byte, _ int32) error {
close(started)
<-ctx.Done()
return ctx.Err()
}
err := service.CreateQueue(ctx, "cancel-test", host.QueueConfig{
Concurrency: 1,
})
Expect(err).ToNot(HaveOccurred())
// Enqueue a blocker task to occupy the single worker
_, err = service.Enqueue(ctx, "cancel-test", []byte("blocker"))
Expect(err).ToNot(HaveOccurred())
// Wait for the blocker task to start running
Eventually(started).WithTimeout(5 * time.Second).Should(BeClosed())
// Enqueue a second task — it stays pending since the worker is busy
taskID, err := service.Enqueue(ctx, "cancel-test", []byte("cancel-me"))
Expect(err).ToNot(HaveOccurred())
err = service.CancelTask(ctx, taskID)
Expect(err).ToNot(HaveOccurred())
status, err := service.GetTaskStatus(ctx, taskID)
Expect(err).ToNot(HaveOccurred())
Expect(status).To(Equal("cancelled"))
})
It("returns error for unknown task ID", func() {
err := service.CancelTask(ctx, "nonexistent-id")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("not found"))
})
It("returns error for non-pending task", func() {
// Create a queue where tasks complete immediately
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
return nil
}
err := service.CreateQueue(ctx, "completed-test", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "completed-test", []byte("data"))
Expect(err).ToNot(HaveOccurred())
// Wait for task to complete
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
// Try to cancel completed task
err = service.CancelTask(ctx, taskID)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("cannot be cancelled"))
})
})
Describe("Worker execution", func() {
It("invokes callback and completes task", func() {
var callCount atomic.Int32
var receivedQueueName, receivedTaskID string
var receivedPayload []byte
var receivedAttempt int32
service.invokeCallbackFn = func(_ context.Context, queueName, taskID string, payload []byte, attempt int32) error {
callCount.Add(1)
receivedQueueName = queueName
receivedTaskID = taskID
receivedPayload = payload
receivedAttempt = attempt
return nil
}
err := service.CreateQueue(ctx, "worker-test", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "worker-test", []byte("test-payload"))
Expect(err).ToNot(HaveOccurred())
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
Expect(callCount.Load()).To(Equal(int32(1)))
Expect(receivedQueueName).To(Equal("worker-test"))
Expect(receivedTaskID).To(Equal(taskID))
Expect(receivedPayload).To(Equal([]byte("test-payload")))
Expect(receivedAttempt).To(Equal(int32(1)))
})
})
Describe("Retry on failure", func() {
It("retries and eventually fails after exhausting retries", func() {
var callCount atomic.Int32
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
callCount.Add(1)
return fmt.Errorf("task failed")
}
err := service.CreateQueue(ctx, "retry-test", host.QueueConfig{
MaxRetries: 2,
BackoffMs: 10, // Very short for testing
})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "retry-test", []byte("retry-payload"))
Expect(err).ToNot(HaveOccurred())
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(10 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("failed"))
// 1 initial attempt + 2 retries = 3 total calls
Expect(callCount.Load()).To(Equal(int32(3)))
})
})
Describe("Retry then succeed", func() {
It("retries and succeeds on second attempt", func() {
var callCount atomic.Int32
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, attempt int32) error {
callCount.Add(1)
if attempt == 1 {
return fmt.Errorf("temporary error")
}
return nil
}
err := service.CreateQueue(ctx, "retry-succeed", host.QueueConfig{
MaxRetries: 1,
BackoffMs: 10, // Very short for testing
})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "retry-succeed", []byte("data"))
Expect(err).ToNot(HaveOccurred())
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(10 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
Expect(callCount.Load()).To(Equal(int32(2)))
})
})
Describe("Backoff overflow cap", func() {
It("caps backoff at maxBackoffMs to prevent overflow", func() {
var callCount atomic.Int32
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
callCount.Add(1)
return fmt.Errorf("always fail")
}
err := service.CreateQueue(ctx, "backoff-overflow", host.QueueConfig{
MaxRetries: 3,
BackoffMs: 1_000_000_000, // Very large backoff to trigger overflow on exponentiation
})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "backoff-overflow", []byte("overflow-test"))
Expect(err).ToNot(HaveOccurred())
// Wait for first attempt to fail
Eventually(func() int32 {
return callCount.Load()
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(BeNumerically(">=", int32(1)))
// Check next_run_at is positive and reasonable (capped at maxBackoffMs from now)
var nextRunAt int64
err = service.db.QueryRow(`SELECT next_run_at FROM tasks WHERE id = ?`, taskID).Scan(&nextRunAt)
Expect(err).ToNot(HaveOccurred())
now := time.Now().UnixMilli()
Expect(nextRunAt).To(BeNumerically(">", int64(0)), "next_run_at should be positive")
Expect(nextRunAt).To(BeNumerically("<=", now+maxBackoffMs+1000), "next_run_at should be at most maxBackoffMs from now")
})
})
Describe("Delay enforcement with concurrent workers", func() {
It("enforces delay between dispatches even with multiple workers", func() {
var mu sync.Mutex
var dispatchTimes []time.Time
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
mu.Lock()
dispatchTimes = append(dispatchTimes, time.Now())
mu.Unlock()
return nil
}
err := service.CreateQueue(ctx, "delay-concurrent", host.QueueConfig{
Concurrency: 3,
DelayMs: 200,
})
Expect(err).ToNot(HaveOccurred())
// Enqueue 5 tasks
for i := 0; i < 5; i++ {
_, err := service.Enqueue(ctx, "delay-concurrent", []byte(fmt.Sprintf("task-%d", i)))
Expect(err).ToNot(HaveOccurred())
}
// Wait for all tasks to complete
Eventually(func() int {
mu.Lock()
defer mu.Unlock()
return len(dispatchTimes)
}).WithTimeout(10 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal(5))
// Sort dispatch times and verify gaps
mu.Lock()
sort.Slice(dispatchTimes, func(i, j int) bool {
return dispatchTimes[i].Before(dispatchTimes[j])
})
times := make([]time.Time, len(dispatchTimes))
copy(times, dispatchTimes)
mu.Unlock()
// Consecutive dispatches should have at least ~160ms gap (80% of 200ms)
for i := 1; i < len(times); i++ {
gap := times[i].Sub(times[i-1])
Expect(gap).To(BeNumerically(">=", 160*time.Millisecond),
fmt.Sprintf("gap between dispatch %d and %d was %v, expected >= 160ms", i-1, i, gap))
}
})
})
Describe("Shutdown recovery", func() {
It("resets stale running tasks on CreateQueue", func() {
// Create a first service and queue, enqueue a task
service.invokeCallbackFn = func(ctx context.Context, _, _ string, _ []byte, _ int32) error {
<-ctx.Done()
return ctx.Err()
}
err := service.CreateQueue(ctx, "recovery-queue", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
taskID, err := service.Enqueue(ctx, "recovery-queue", []byte("stale-task"))
Expect(err).ToNot(HaveOccurred())
// Wait for the task to start running
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("running"))
// Close the service (simulates crash - tasks left in running state)
service.Close()
// Create a new service pointing to the same DB
managerCtx2, cancel2 := context.WithCancel(ctx)
DeferCleanup(cancel2)
manager2 := &Manager{
plugins: make(map[string]*plugin),
ctx: managerCtx2,
}
service, err = newTaskQueueService("test_plugin", manager2, 5)
Expect(err).ToNot(HaveOccurred())
// Override callback to succeed
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error {
return nil
}
// Re-create the queue - the upsert handles the existing row from the old service
err = service.CreateQueue(ctx, "recovery-queue", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
// The stale running task should now be reset to pending and eventually completed
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID)
return status
}).WithTimeout(10 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
})
})
Describe("Close", func() {
It("prevents subsequent operations after close", func() {
err := service.CreateQueue(ctx, "close-test", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
service.Close()
// After close, operations should fail
_, err = service.Enqueue(ctx, "close-test", []byte("data"))
Expect(err).To(HaveOccurred())
})
})
Describe("Plugin isolation", func() {
It("uses separate databases for different plugins", func() {
managerCtx2, cancel2 := context.WithCancel(ctx)
DeferCleanup(cancel2)
manager2 := &Manager{
plugins: make(map[string]*plugin),
ctx: managerCtx2,
}
service2, err := newTaskQueueService("other_plugin", manager2, 5)
Expect(err).ToNot(HaveOccurred())
defer service2.Close()
// Check that separate database files exist
_, err = os.Stat(filepath.Join(tmpDir, "plugins", "test_plugin", "taskqueue.db"))
Expect(err).ToNot(HaveOccurred())
_, err = os.Stat(filepath.Join(tmpDir, "plugins", "other_plugin", "taskqueue.db"))
Expect(err).ToNot(HaveOccurred())
// Both services should be able to create queues with the same name independently
service.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error { return nil }
service2.invokeCallbackFn = func(_ context.Context, _, _ string, _ []byte, _ int32) error { return nil }
err = service.CreateQueue(ctx, "shared-name", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
err = service2.CreateQueue(ctx, "shared-name", host.QueueConfig{})
Expect(err).ToNot(HaveOccurred())
// Enqueue to each and verify they work independently
taskID1, err := service.Enqueue(ctx, "shared-name", []byte("plugin1"))
Expect(err).ToNot(HaveOccurred())
taskID2, err := service2.Enqueue(ctx, "shared-name", []byte("plugin2"))
Expect(err).ToNot(HaveOccurred())
Expect(taskID1).ToNot(Equal(taskID2))
// Both should complete
Eventually(func() string {
status, _ := service.GetTaskStatus(ctx, taskID1)
return status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
Eventually(func() string {
status, _ := service2.GetTaskStatus(ctx, taskID2)
return status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
})
})
})
var _ = Describe("TaskQueueService Integration", Ordered, func() {
var manager *Manager
var tmpDir string
BeforeAll(func() {
var err error
tmpDir, err = os.MkdirTemp("", "taskqueue-integration-test-*")
Expect(err).ToNot(HaveOccurred())
// Copy the test-taskqueue plugin
srcPath := filepath.Join(testdataDir, "test-taskqueue"+PackageExtension)
destPath := filepath.Join(tmpDir, "test-taskqueue"+PackageExtension)
data, err := os.ReadFile(srcPath)
Expect(err).ToNot(HaveOccurred())
err = os.WriteFile(destPath, data, 0600)
Expect(err).ToNot(HaveOccurred())
// Compute SHA256 for the plugin
hash := sha256.Sum256(data)
hashHex := hex.EncodeToString(hash[:])
// Setup config
DeferCleanup(configtest.SetupConfig())
conf.Server.Plugins.Enabled = true
conf.Server.Plugins.Folder = tmpDir
conf.Server.Plugins.AutoReload = false
conf.Server.CacheFolder = filepath.Join(tmpDir, "cache")
conf.Server.DataFolder = tmpDir
// Setup mock DataStore with pre-enabled plugin
mockPluginRepo := tests.CreateMockPluginRepo()
mockPluginRepo.Permitted = true
mockPluginRepo.SetData(model.Plugins{{
ID: "test-taskqueue",
Path: destPath,
SHA256: hashHex,
Enabled: true,
}})
dataStore := &tests.MockDataStore{MockedPlugin: mockPluginRepo}
// Create and start manager
manager = &Manager{
plugins: make(map[string]*plugin),
ds: dataStore,
metrics: noopMetricsRecorder{},
subsonicRouter: http.NotFoundHandler(),
}
err = manager.Start(GinkgoT().Context())
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() {
_ = manager.Stop()
_ = os.RemoveAll(tmpDir)
})
})
// Helper types for calling the test plugin
type testQueueConfig struct {
Concurrency int32 `json:"concurrency,omitempty"`
MaxRetries int32 `json:"maxRetries,omitempty"`
BackoffMs int64 `json:"backoffMs,omitempty"`
DelayMs int64 `json:"delayMs,omitempty"`
RetentionMs int64 `json:"retentionMs,omitempty"`
}
type testTaskQueueInput struct {
Operation string `json:"operation"`
QueueName string `json:"queueName,omitempty"`
Config *testQueueConfig `json:"config,omitempty"`
Payload []byte `json:"payload,omitempty"`
TaskID string `json:"taskId,omitempty"`
}
type testTaskQueueOutput struct {
TaskID string `json:"taskId,omitempty"`
Status string `json:"status,omitempty"`
Error *string `json:"error,omitempty"`
}
callTestTaskQueue := func(ctx context.Context, input testTaskQueueInput) (*testTaskQueueOutput, error) {
manager.mu.RLock()
p := manager.plugins["test-taskqueue"]
manager.mu.RUnlock()
instance, err := p.instance(ctx)
if err != nil {
return nil, err
}
defer instance.Close(ctx)
inputBytes, _ := json.Marshal(input)
_, outputBytes, err := instance.Call("nd_test_taskqueue", inputBytes)
if err != nil {
return nil, err
}
var output testTaskQueueOutput
if err := json.Unmarshal(outputBytes, &output); err != nil {
return nil, err
}
if output.Error != nil {
return nil, errors.New(*output.Error)
}
return &output, nil
}
Describe("Plugin Loading", func() {
It("should load plugin with taskqueue permission and TaskWorker capability", func() {
manager.mu.RLock()
p, ok := manager.plugins["test-taskqueue"]
manager.mu.RUnlock()
Expect(ok).To(BeTrue())
Expect(p.manifest.Permissions).ToNot(BeNil())
Expect(p.manifest.Permissions.Taskqueue).ToNot(BeNil())
Expect(p.manifest.Permissions.Taskqueue.MaxConcurrency).To(Equal(10))
Expect(p.capabilities).To(ContainElement(CapabilityTaskWorker))
})
})
Describe("Create Queue", func() {
It("should create a queue without error", func() {
ctx := GinkgoT().Context()
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-create",
})
Expect(err).ToNot(HaveOccurred())
})
It("should return error for duplicate queue name", func() {
ctx := GinkgoT().Context()
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-dup",
})
Expect(err).ToNot(HaveOccurred())
_, err = callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-dup",
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("already exists"))
})
})
Describe("Enqueue and Task Completion", func() {
It("should enqueue a task and complete successfully", func() {
ctx := GinkgoT().Context()
// Create queue
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-complete",
})
Expect(err).ToNot(HaveOccurred())
// Enqueue task with payload "hello"
output, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "enqueue",
QueueName: "test-complete",
Payload: []byte("hello"),
})
Expect(err).ToNot(HaveOccurred())
Expect(output.TaskID).ToNot(BeEmpty())
taskID := output.TaskID
// Poll until completed
Eventually(func() string {
out, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "get_task_status",
TaskID: taskID,
})
if err != nil {
return "error"
}
return out.Status
}).WithTimeout(5 * time.Second).WithPolling(100 * time.Millisecond).Should(Equal("completed"))
})
})
Describe("Enqueue with Failure, No Retries", func() {
It("should fail when payload is 'fail' and maxRetries is 0", func() {
ctx := GinkgoT().Context()
// Create queue with no retries
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-fail-no-retry",
Config: &testQueueConfig{
MaxRetries: 0,
},
})
Expect(err).ToNot(HaveOccurred())
// Enqueue task that will fail
output, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "enqueue",
QueueName: "test-fail-no-retry",
Payload: []byte("fail"),
})
Expect(err).ToNot(HaveOccurred())
taskID := output.TaskID
// Poll until failed
Eventually(func() string {
out, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "get_task_status",
TaskID: taskID,
})
if err != nil {
return "error"
}
return out.Status
}).WithTimeout(5 * time.Second).WithPolling(100 * time.Millisecond).Should(Equal("failed"))
})
})
Describe("Enqueue with Retry Then Success", func() {
It("should retry and eventually succeed with 'fail-then-succeed' payload", func() {
ctx := GinkgoT().Context()
// Create queue with retries and short backoff
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-retry-succeed",
Config: &testQueueConfig{
MaxRetries: 2,
BackoffMs: 100,
},
})
Expect(err).ToNot(HaveOccurred())
// Enqueue task that fails on attempt < 2, then succeeds
output, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "enqueue",
QueueName: "test-retry-succeed",
Payload: []byte("fail-then-succeed"),
})
Expect(err).ToNot(HaveOccurred())
taskID := output.TaskID
// Poll until completed
Eventually(func() string {
out, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "get_task_status",
TaskID: taskID,
})
if err != nil {
return "error"
}
return out.Status
}).WithTimeout(5 * time.Second).WithPolling(100 * time.Millisecond).Should(Equal("completed"))
})
})
Describe("Cancel Pending Task", func() {
It("should cancel a pending task", func() {
ctx := GinkgoT().Context()
// Create queue with concurrency=1 and a large delay between dispatches.
// The first task completes immediately (burst token), the second is dequeued
// but blocks on the rate limiter. Tasks 3+ remain in 'pending' and can be cancelled.
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "create_queue",
QueueName: "test-cancel",
Config: &testQueueConfig{
Concurrency: 1,
DelayMs: 60000,
},
})
Expect(err).ToNot(HaveOccurred())
// Enqueue several tasks - the first will complete immediately,
// the second will be dequeued but block on the rate limiter (status=running),
// the rest will stay pending.
var taskIDs []string
for i := 0; i < 5; i++ {
output, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "enqueue",
QueueName: "test-cancel",
Payload: []byte("hello"),
})
Expect(err).ToNot(HaveOccurred())
taskIDs = append(taskIDs, output.TaskID)
}
// Wait for the first task to complete (it has no delay)
Eventually(func() string {
out, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "get_task_status",
TaskID: taskIDs[0],
})
if err != nil {
return "error"
}
return out.Status
}).WithTimeout(5 * time.Second).WithPolling(50 * time.Millisecond).Should(Equal("completed"))
// Give the worker a moment to dequeue the second task (which will
// block on the delay) so tasks 3+ stay in 'pending'
time.Sleep(100 * time.Millisecond)
// Cancel the last task - it should still be pending
lastTaskID := taskIDs[len(taskIDs)-1]
_, err = callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "cancel_task",
TaskID: lastTaskID,
})
Expect(err).ToNot(HaveOccurred())
// Verify status is cancelled
statusOut, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "get_task_status",
TaskID: lastTaskID,
})
Expect(err).ToNot(HaveOccurred())
Expect(statusOut.Status).To(Equal("cancelled"))
})
})
Describe("Enqueue to Non-Existent Queue", func() {
It("should return error when enqueueing to a queue that does not exist", func() {
ctx := GinkgoT().Context()
_, err := callTestTaskQueue(ctx, testTaskQueueInput{
Operation: "enqueue",
QueueName: "nonexistent-queue",
Payload: []byte("payload"),
})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("does not exist"))
})
})
})
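The "Backoff overflow cap" spec above asserts that a huge `BackoffMs` never pushes `next_run_at` past `maxBackoffMs` (one hour, per the hardening commit). The capped exponential backoff it exercises can be sketched in isolation; this is a hypothetical reimplementation for illustration only — the helper name and exact doubling logic are assumptions, not the service's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// maxBackoffMs mirrors the one-hour cap described in the commit message.
const maxBackoffMs = int64(time.Hour / time.Millisecond) // 3_600_000

// nextBackoffMs doubles the base delay per prior attempt, clamping at the
// cap and treating multiplication overflow (delay <= 0) as "past the cap".
func nextBackoffMs(baseMs int64, attempt int32) int64 {
	delay := baseMs
	for i := int32(1); i < attempt; i++ {
		delay *= 2
		if delay <= 0 || delay > maxBackoffMs {
			return maxBackoffMs
		}
	}
	if delay > maxBackoffMs {
		return maxBackoffMs
	}
	return delay
}

func main() {
	fmt.Println(nextBackoffMs(10, 1))            // 10
	fmt.Println(nextBackoffMs(10, 3))            // 40
	fmt.Println(nextBackoffMs(1_000_000_000, 2)) // 3600000 (capped)
}
```

With this shape, the test's assertion `next_run_at <= now + maxBackoffMs + 1000` holds for any base delay, including ones that would overflow int64 under naive exponentiation.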

View File

@@ -256,8 +256,11 @@ func (s *webSocketServiceImpl) isHostAllowed(host string) bool {
 }

 // matchHostPattern matches a host against a pattern.
-// Supports wildcards like *.example.com
+// Supports "*" (allow all) and wildcards like "*.example.com".
 func matchHostPattern(pattern, host string) bool {
+	if pattern == "*" {
+		return true
+	}
 	if pattern == host {
 		return true
 	}

View File

@@ -575,6 +575,12 @@ var _ = Describe("WebSocketService", Ordered, func() {
 	Expect(matchHostPattern("*.example.com", "deep.api.example.com")).To(BeTrue())
 })
+It("should match bare '*' as allow-all", func() {
+	Expect(matchHostPattern("*", "anything.example.com")).To(BeTrue())
+	Expect(matchHostPattern("*", "127.0.0.1")).To(BeTrue())
+	Expect(matchHostPattern("*", "::1")).To(BeTrue())
+})
 It("should not match partial patterns", func() {
 	Expect(matchHostPattern("*.example.com", "example.com.evil.org")).To(BeFalse())
 })

View File

@@ -119,6 +119,32 @@ var hostServices = []hostServiceEntry{
 			return host.RegisterUsersHostFunctions(service), nil
 		},
 	},
+	{
+		name:          "HTTP",
+		hasPermission: func(p *Permissions) bool { return p != nil && p.Http != nil },
+		create: func(ctx *serviceContext) ([]extism.HostFunction, io.Closer) {
+			perm := ctx.permissions.Http
+			service := newHTTPService(ctx.pluginName, perm)
+			return host.RegisterHTTPHostFunctions(service), nil
+		},
+	},
+	{
+		name:          "TaskQueue",
+		hasPermission: func(p *Permissions) bool { return p != nil && p.Taskqueue != nil },
+		create: func(ctx *serviceContext) ([]extism.HostFunction, io.Closer) {
+			perm := ctx.permissions.Taskqueue
+			maxConcurrency := int32(1)
+			if perm.MaxConcurrency > 0 {
+				maxConcurrency = int32(perm.MaxConcurrency)
+			}
+			service, err := newTaskQueueService(ctx.pluginName, ctx.manager, maxConcurrency)
+			if err != nil {
+				log.Error("Failed to create TaskQueue service", "plugin", ctx.pluginName, err)
+				return nil, nil
+			}
+			return host.RegisterTaskQueueHostFunctions(service), service
+		},
+	},
 }

 // extractManifest reads manifest from an .ndp package and computes its SHA-256 hash.

View File

@@ -110,6 +110,9 @@
         },
         "users": {
           "$ref": "#/$defs/UsersPermission"
+        },
+        "taskqueue": {
+          "$ref": "#/$defs/TaskQueuePermission"
         }
       }
     },
@@ -224,6 +227,23 @@
         }
       }
     },
+    "TaskQueuePermission": {
+      "type": "object",
+      "description": "Task queue permissions for background task processing",
+      "additionalProperties": false,
+      "properties": {
+        "reason": {
+          "type": "string",
+          "description": "Explanation for why task queue access is needed"
+        },
+        "maxConcurrency": {
+          "type": "integer",
+          "description": "Maximum total concurrent workers across all queues. Default: 1",
+          "minimum": 1,
+          "default": 1
+        }
+      }
+    },
     "UsersPermission": {
       "type": "object",
       "description": "Users service permissions for accessing user information",

View File

@@ -64,6 +64,21 @@ func ValidateWithCapabilities(m *Manifest, capabilities []Capability) error {
 			return fmt.Errorf("scrobbler capability requires 'users' permission to be declared in manifest")
 		}
 	}
+
+	// Scheduler permission requires SchedulerCallback capability
+	if m.Permissions != nil && m.Permissions.Scheduler != nil {
+		if !hasCapability(capabilities, CapabilityScheduler) {
+			return fmt.Errorf("'scheduler' permission requires plugin to export '%s' function", FuncSchedulerCallback)
+		}
+	}
+
+	// TaskQueue permission requires TaskWorker capability
+	if m.Permissions != nil && m.Permissions.Taskqueue != nil {
+		if !hasCapability(capabilities, CapabilityTaskWorker) {
+			return fmt.Errorf("'taskqueue' permission requires plugin to export '%s' function", FuncTaskWorkerCallback)
+		}
+	}
+
 	return nil
 }

View File

@@ -181,6 +181,9 @@ type Permissions struct {
 	// Subsonicapi corresponds to the JSON schema field "subsonicapi".
 	Subsonicapi *SubsonicAPIPermission `json:"subsonicapi,omitempty" yaml:"subsonicapi,omitempty" mapstructure:"subsonicapi,omitempty"`

+	// Taskqueue corresponds to the JSON schema field "taskqueue".
+	Taskqueue *TaskQueuePermission `json:"taskqueue,omitempty" yaml:"taskqueue,omitempty" mapstructure:"taskqueue,omitempty"`
+
 	// Users corresponds to the JSON schema field "users".
 	Users *UsersPermission `json:"users,omitempty" yaml:"users,omitempty" mapstructure:"users,omitempty"`
@@ -200,6 +203,36 @@ type SubsonicAPIPermission struct {
 	Reason *string `json:"reason,omitempty" yaml:"reason,omitempty" mapstructure:"reason,omitempty"`
 }

+// Task queue permissions for background task processing
+type TaskQueuePermission struct {
+	// Maximum total concurrent workers across all queues. Default: 1
+	MaxConcurrency int `json:"maxConcurrency,omitempty" yaml:"maxConcurrency,omitempty" mapstructure:"maxConcurrency,omitempty"`
+
+	// Explanation for why task queue access is needed
+	Reason *string `json:"reason,omitempty" yaml:"reason,omitempty" mapstructure:"reason,omitempty"`
+}
+
+// UnmarshalJSON implements json.Unmarshaler.
+func (j *TaskQueuePermission) UnmarshalJSON(value []byte) error {
+	var raw map[string]interface{}
+	if err := json.Unmarshal(value, &raw); err != nil {
+		return err
+	}
+	type Plain TaskQueuePermission
+	var plain Plain
+	if err := json.Unmarshal(value, &plain); err != nil {
+		return err
+	}
+	if v, ok := raw["maxConcurrency"]; !ok || v == nil {
+		plain.MaxConcurrency = 1
+	}
+	if 1 > plain.MaxConcurrency {
+		return fmt.Errorf("field %s: must be >= %v", "maxConcurrency", 1)
+	}
+	*j = TaskQueuePermission(plain)
+	return nil
+}
+
 // Enable experimental WebAssembly threads support
 type ThreadsFeature struct {
 	// Explanation for why threads support is needed

View File

@@ -6,3 +6,10 @@ require (
 	github.com/extism/go-pdk v1.1.3
 	github.com/stretchr/testify v1.11.1
 )
+
+require (
+	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/pmezard/go-difflib v1.0.0 // indirect
+	github.com/stretchr/objx v0.5.2 // indirect
+	gopkg.in/yaml.v3 v3.0.1 // indirect
+)

View File

@@ -38,10 +38,12 @@ The following host services are available:
 - Artwork: provides artwork public URL generation capabilities for plugins.
 - Cache: provides in-memory TTL-based caching capabilities for plugins.
 - Config: provides access to plugin configuration values.
+- HTTP: provides outbound HTTP request capabilities for plugins.
 - KVStore: provides persistent key-value storage for plugins.
 - Library: provides access to music library metadata for plugins.
 - Scheduler: provides task scheduling capabilities for plugins.
 - SubsonicAPI: provides access to Navidrome's Subsonic API from plugins.
+- TaskQueue: provides persistent task queues for plugins.
 - Users: provides access to user information for plugins.
 - WebSocket: provides WebSocket communication capabilities for plugins.
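The TaskQueue service listed above moves tasks through a small state machine — pending, running, completed, failed, cancelled — and the unit tests assert that only pending tasks can be cancelled. That lifecycle rule can be sketched with an in-memory model; this is illustrative only (the real service persists state in a per-plugin SQLite database, and these type and method names are invented):

```go
package main

import (
	"errors"
	"fmt"
)

// store is a toy stand-in for the task table: task ID -> status string.
type store struct{ status map[string]string }

// enqueue registers a new task in the "pending" state.
func (s *store) enqueue(id string) { s.status[id] = "pending" }

// cancel mirrors the tested rules: unknown IDs are "not found",
// and only pending tasks may transition to "cancelled".
func (s *store) cancel(id string) error {
	st, ok := s.status[id]
	if !ok {
		return errors.New("task not found")
	}
	if st != "pending" {
		return fmt.Errorf("task in state %q cannot be cancelled", st)
	}
	s.status[id] = "cancelled"
	return nil
}

func main() {
	s := &store{status: map[string]string{}}
	s.enqueue("t1")
	fmt.Println(s.cancel("t1"), s.status["t1"]) // <nil> cancelled
	fmt.Println(s.cancel("missing"))            // task not found
	fmt.Println(s.cancel("t1"))                 // already cancelled -> error
}
```

This matches the specs above: cancelling an unknown ID yields "not found", and cancelling a completed (or already cancelled) task yields a "cannot be cancelled" error.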

View File

@@ -0,0 +1,89 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains client wrappers for the HTTP host service.
// It is intended for use in Navidrome plugins built with TinyGo.
//
//go:build wasip1
package host
import (
"encoding/json"
"errors"
"github.com/navidrome/navidrome/plugins/pdk/go/pdk"
)
// HTTPRequest represents the HTTPRequest data structure.
// HTTPRequest represents an outbound HTTP request from a plugin.
type HTTPRequest struct {
Method string `json:"method"`
URL string `json:"url"`
Headers map[string]string `json:"headers"`
Body []byte `json:"body"`
TimeoutMs int32 `json:"timeoutMs"`
}
// HTTPResponse represents the HTTPResponse data structure.
// HTTPResponse represents the response from an outbound HTTP request.
type HTTPResponse struct {
StatusCode int32 `json:"statusCode"`
Headers map[string]string `json:"headers"`
Body []byte `json:"body"`
}
// http_send is the host function provided by Navidrome.
//
//go:wasmimport extism:host/user http_send
func http_send(uint64) uint64
type hTTPSendRequest struct {
Request HTTPRequest `json:"request"`
}
type hTTPSendResponse struct {
Result *HTTPResponse `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
// HTTPSend calls the http_send host function.
// Send executes an HTTP request and returns the response.
//
// Parameters:
// - request: The HTTP request to execute, including method, URL, headers, body, and timeout
//
// Returns the HTTP response with status code, headers, and body.
// Network errors, timeouts, and permission failures are returned as Go errors.
// Successful HTTP calls (including 4xx/5xx status codes) return a non-nil response with nil error.
func HTTPSend(request HTTPRequest) (*HTTPResponse, error) {
// Marshal request to JSON
req := hTTPSendRequest{
Request: request,
}
reqBytes, err := json.Marshal(req)
if err != nil {
return nil, err
}
reqMem := pdk.AllocateBytes(reqBytes)
defer reqMem.Free()
// Call the host function
responsePtr := http_send(reqMem.Offset())
// Read the response from memory
responseMem := pdk.FindMemory(responsePtr)
responseBytes := responseMem.ReadBytes()
// Parse the response
var response hTTPSendResponse
if err := json.Unmarshal(responseBytes, &response); err != nil {
return nil, err
}
// Convert Error field to Go error
if response.Error != "" {
return nil, errors.New(response.Error)
}
return response.Result, nil
}
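The generated wrapper above follows a result/error envelope convention: host-level failures (network errors, timeouts, denied permissions) come back as Go errors, while HTTP 4xx/5xx responses are ordinary results. A minimal stdlib sketch of that decode step — type names simplified for illustration, not the generated code itself:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// httpResponse mirrors the shape of the generated HTTPResponse type.
type httpResponse struct {
	StatusCode int32             `json:"statusCode"`
	Headers    map[string]string `json:"headers"`
	Body       []byte            `json:"body"`
}

// sendEnvelope mirrors the {result, error} envelope the host returns.
type sendEnvelope struct {
	Result *httpResponse `json:"result,omitempty"`
	Error  string        `json:"error,omitempty"`
}

// decode converts the host's JSON envelope into (result, error),
// the same convention HTTPSend applies after reading plugin memory.
func decode(raw []byte) (*httpResponse, error) {
	var env sendEnvelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return nil, err
	}
	if env.Error != "" {
		return nil, errors.New(env.Error)
	}
	return env.Result, nil
}

func main() {
	r, err := decode([]byte(`{"result":{"statusCode":404}}`))
	fmt.Println(r.StatusCode, err) // 404 <nil>: HTTP errors are data, not Go errors
	_, err = decode([]byte(`{"error":"host rejected: domain not allowed"}`))
	fmt.Println(err != nil) // true: host-level failures become Go errors
}
```

Splitting the two failure modes this way lets plugin code branch on `StatusCode` for application errors while reserving `err != nil` for transport and permission problems.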

View File

@@ -0,0 +1,57 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains mock implementations for non-WASM builds.
// These mocks allow IDE support, compilation, and unit testing on non-WASM platforms.
// Plugin authors can use the exported mock instances to set expectations in tests.
//
//go:build !wasip1
package host
import "github.com/stretchr/testify/mock"
// HTTPRequest represents the HTTPRequest data structure.
// HTTPRequest represents an outbound HTTP request from a plugin.
type HTTPRequest struct {
Method string `json:"method"`
URL string `json:"url"`
Headers map[string]string `json:"headers"`
Body []byte `json:"body"`
TimeoutMs int32 `json:"timeoutMs"`
}
// HTTPResponse represents the HTTPResponse data structure.
// HTTPResponse represents the response from an outbound HTTP request.
type HTTPResponse struct {
StatusCode int32 `json:"statusCode"`
Headers map[string]string `json:"headers"`
Body []byte `json:"body"`
}
// mockHTTPService is the mock implementation for testing.
type mockHTTPService struct {
mock.Mock
}
// HTTPMock is the auto-instantiated mock instance for testing.
// Use this to set expectations: host.HTTPMock.On("MethodName", args...).Return(values...)
var HTTPMock = &mockHTTPService{}
// Send is the mock method for HTTPSend.
func (m *mockHTTPService) Send(request HTTPRequest) (*HTTPResponse, error) {
args := m.Called(request)
return args.Get(0).(*HTTPResponse), args.Error(1)
}
// HTTPSend delegates to the mock instance.
// Send executes an HTTP request and returns the response.
//
// Parameters:
// - request: The HTTP request to execute, including method, URL, headers, body, and timeout
//
// Returns the HTTP response with status code, headers, and body.
// Network errors, timeouts, and permission failures are returned as Go errors.
// Successful HTTP calls (including 4xx/5xx status codes) return a non-nil response with nil error.
func HTTPSend(request HTTPRequest) (*HTTPResponse, error) {
return HTTPMock.Send(request)
}


@@ -0,0 +1,219 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains client wrappers for the TaskQueue host service.
// It is intended for use in Navidrome plugins built with TinyGo.
//
//go:build wasip1
package host
import (
"encoding/json"
"errors"
"github.com/navidrome/navidrome/plugins/pdk/go/pdk"
)
// QueueConfig represents the QueueConfig data structure.
// QueueConfig holds configuration for a task queue.
type QueueConfig struct {
Concurrency int32 `json:"concurrency"`
MaxRetries int32 `json:"maxRetries"`
BackoffMs int64 `json:"backoffMs"`
DelayMs int64 `json:"delayMs"`
RetentionMs int64 `json:"retentionMs"`
}
// taskqueue_createqueue is the host function provided by Navidrome.
//
//go:wasmimport extism:host/user taskqueue_createqueue
func taskqueue_createqueue(uint64) uint64
// taskqueue_enqueue is the host function provided by Navidrome.
//
//go:wasmimport extism:host/user taskqueue_enqueue
func taskqueue_enqueue(uint64) uint64
// taskqueue_gettaskstatus is the host function provided by Navidrome.
//
//go:wasmimport extism:host/user taskqueue_gettaskstatus
func taskqueue_gettaskstatus(uint64) uint64
// taskqueue_canceltask is the host function provided by Navidrome.
//
//go:wasmimport extism:host/user taskqueue_canceltask
func taskqueue_canceltask(uint64) uint64
type taskQueueCreateQueueRequest struct {
Name string `json:"name"`
Config QueueConfig `json:"config"`
}
type taskQueueEnqueueRequest struct {
QueueName string `json:"queueName"`
Payload []byte `json:"payload"`
}
type taskQueueEnqueueResponse struct {
Result string `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
type taskQueueGetTaskStatusRequest struct {
TaskID string `json:"taskId"`
}
type taskQueueGetTaskStatusResponse struct {
Result string `json:"result,omitempty"`
Error string `json:"error,omitempty"`
}
type taskQueueCancelTaskRequest struct {
TaskID string `json:"taskId"`
}
// TaskQueueCreateQueue calls the taskqueue_createqueue host function.
// CreateQueue creates a named task queue with the given configuration.
// Zero-value fields in config use sensible defaults.
// If a queue with the same name already exists, its configuration is updated in place (the call is idempotent).
// On startup, this also recovers any stale "running" tasks from a previous crash.
func TaskQueueCreateQueue(name string, config QueueConfig) error {
// Marshal request to JSON
req := taskQueueCreateQueueRequest{
Name: name,
Config: config,
}
reqBytes, err := json.Marshal(req)
if err != nil {
return err
}
reqMem := pdk.AllocateBytes(reqBytes)
defer reqMem.Free()
// Call the host function
responsePtr := taskqueue_createqueue(reqMem.Offset())
// Read the response from memory
responseMem := pdk.FindMemory(responsePtr)
responseBytes := responseMem.ReadBytes()
// Parse error-only response
var response struct {
Error string `json:"error,omitempty"`
}
if err := json.Unmarshal(responseBytes, &response); err != nil {
return err
}
if response.Error != "" {
return errors.New(response.Error)
}
return nil
}
// TaskQueueEnqueue calls the taskqueue_enqueue host function.
// Enqueue adds a task to the named queue. Returns the task ID.
// payload is opaque bytes passed back to the plugin on execution.
func TaskQueueEnqueue(queueName string, payload []byte) (string, error) {
// Marshal request to JSON
req := taskQueueEnqueueRequest{
QueueName: queueName,
Payload: payload,
}
reqBytes, err := json.Marshal(req)
if err != nil {
return "", err
}
reqMem := pdk.AllocateBytes(reqBytes)
defer reqMem.Free()
// Call the host function
responsePtr := taskqueue_enqueue(reqMem.Offset())
// Read the response from memory
responseMem := pdk.FindMemory(responsePtr)
responseBytes := responseMem.ReadBytes()
// Parse the response
var response taskQueueEnqueueResponse
if err := json.Unmarshal(responseBytes, &response); err != nil {
return "", err
}
// Convert Error field to Go error
if response.Error != "" {
return "", errors.New(response.Error)
}
return response.Result, nil
}
// TaskQueueGetTaskStatus calls the taskqueue_gettaskstatus host function.
// GetTaskStatus returns the status of a task: "pending", "running",
// "completed", "failed", or "cancelled".
func TaskQueueGetTaskStatus(taskID string) (string, error) {
// Marshal request to JSON
req := taskQueueGetTaskStatusRequest{
TaskID: taskID,
}
reqBytes, err := json.Marshal(req)
if err != nil {
return "", err
}
reqMem := pdk.AllocateBytes(reqBytes)
defer reqMem.Free()
// Call the host function
responsePtr := taskqueue_gettaskstatus(reqMem.Offset())
// Read the response from memory
responseMem := pdk.FindMemory(responsePtr)
responseBytes := responseMem.ReadBytes()
// Parse the response
var response taskQueueGetTaskStatusResponse
if err := json.Unmarshal(responseBytes, &response); err != nil {
return "", err
}
// Convert Error field to Go error
if response.Error != "" {
return "", errors.New(response.Error)
}
return response.Result, nil
}
// TaskQueueCancelTask calls the taskqueue_canceltask host function.
// CancelTask cancels a pending task. Returns error if already
// running, completed, or failed.
func TaskQueueCancelTask(taskID string) error {
// Marshal request to JSON
req := taskQueueCancelTaskRequest{
TaskID: taskID,
}
reqBytes, err := json.Marshal(req)
if err != nil {
return err
}
reqMem := pdk.AllocateBytes(reqBytes)
defer reqMem.Free()
// Call the host function
responsePtr := taskqueue_canceltask(reqMem.Offset())
// Read the response from memory
responseMem := pdk.FindMemory(responsePtr)
responseBytes := responseMem.ReadBytes()
// Parse error-only response
var response struct {
Error string `json:"error,omitempty"`
}
if err := json.Unmarshal(responseBytes, &response); err != nil {
return err
}
if response.Error != "" {
return errors.New(response.Error)
}
return nil
}


@@ -0,0 +1,84 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains mock implementations for non-WASM builds.
// These mocks allow IDE support, compilation, and unit testing on non-WASM platforms.
// Plugin authors can use the exported mock instances to set expectations in tests.
//
//go:build !wasip1
package host
import "github.com/stretchr/testify/mock"
// QueueConfig represents the QueueConfig data structure.
// QueueConfig holds configuration for a task queue.
type QueueConfig struct {
Concurrency int32 `json:"concurrency"`
MaxRetries int32 `json:"maxRetries"`
BackoffMs int64 `json:"backoffMs"`
DelayMs int64 `json:"delayMs"`
RetentionMs int64 `json:"retentionMs"`
}
// mockTaskQueueService is the mock implementation for testing.
type mockTaskQueueService struct {
mock.Mock
}
// TaskQueueMock is the auto-instantiated mock instance for testing.
// Use this to set expectations: host.TaskQueueMock.On("MethodName", args...).Return(values...)
var TaskQueueMock = &mockTaskQueueService{}
// CreateQueue is the mock method for TaskQueueCreateQueue.
func (m *mockTaskQueueService) CreateQueue(name string, config QueueConfig) error {
args := m.Called(name, config)
return args.Error(0)
}
// TaskQueueCreateQueue delegates to the mock instance.
// CreateQueue creates a named task queue with the given configuration.
// Zero-value fields in config use sensible defaults.
// If a queue with the same name already exists, its configuration is updated in place (the call is idempotent).
// On startup, this also recovers any stale "running" tasks from a previous crash.
func TaskQueueCreateQueue(name string, config QueueConfig) error {
return TaskQueueMock.CreateQueue(name, config)
}
// Enqueue is the mock method for TaskQueueEnqueue.
func (m *mockTaskQueueService) Enqueue(queueName string, payload []byte) (string, error) {
args := m.Called(queueName, payload)
return args.String(0), args.Error(1)
}
// TaskQueueEnqueue delegates to the mock instance.
// Enqueue adds a task to the named queue. Returns the task ID.
// payload is opaque bytes passed back to the plugin on execution.
func TaskQueueEnqueue(queueName string, payload []byte) (string, error) {
return TaskQueueMock.Enqueue(queueName, payload)
}
// GetTaskStatus is the mock method for TaskQueueGetTaskStatus.
func (m *mockTaskQueueService) GetTaskStatus(taskID string) (string, error) {
args := m.Called(taskID)
return args.String(0), args.Error(1)
}
// TaskQueueGetTaskStatus delegates to the mock instance.
// GetTaskStatus returns the status of a task: "pending", "running",
// "completed", "failed", or "cancelled".
func TaskQueueGetTaskStatus(taskID string) (string, error) {
return TaskQueueMock.GetTaskStatus(taskID)
}
// CancelTask is the mock method for TaskQueueCancelTask.
func (m *mockTaskQueueService) CancelTask(taskID string) error {
args := m.Called(taskID)
return args.Error(0)
}
// TaskQueueCancelTask delegates to the mock instance.
// CancelTask cancels a pending task. Returns error if already
// running, completed, or failed.
func TaskQueueCancelTask(taskID string) error {
return TaskQueueMock.CancelTask(taskID)
}


@@ -0,0 +1,86 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains export wrappers for the TaskWorker capability.
// It is intended for use in Navidrome plugins built with TinyGo.
//
//go:build wasip1
package taskworker
import (
"github.com/navidrome/navidrome/plugins/pdk/go/pdk"
)
// TaskExecuteRequest is the request provided when a task is ready to execute.
type TaskExecuteRequest struct {
// QueueName is the name of the queue this task belongs to.
QueueName string `json:"queueName"`
// TaskID is the unique identifier for this task.
TaskID string `json:"taskId"`
// Payload is the opaque data provided when the task was enqueued.
Payload []byte `json:"payload"`
// Attempt is the current attempt number (1-based: first attempt = 1).
Attempt int32 `json:"attempt"`
}
// TaskExecuteResponse is the response from task execution.
type TaskExecuteResponse struct {
// Error, if non-empty, indicates the task failed. The task will be retried
// if retries are configured and attempts remain.
Error string `json:"error,omitempty"`
}
// TaskWorker is the marker interface for taskworker plugins.
// Implement one or more of the provider interfaces below.
// TaskWorker provides task execution handling.
// This capability allows plugins to receive callbacks when their queued tasks
// are ready to execute. Plugins that use the taskqueue host service must
// implement this capability.
type TaskWorker interface{}
// TaskExecuteProvider provides the OnTaskExecute function.
type TaskExecuteProvider interface {
OnTaskExecute(TaskExecuteRequest) (TaskExecuteResponse, error)
}
// Internal implementation holders
var (
taskExecuteImpl func(TaskExecuteRequest) (TaskExecuteResponse, error)
)
// Register registers a taskworker implementation.
// The implementation is checked for optional provider interfaces.
func Register(impl TaskWorker) {
if p, ok := impl.(TaskExecuteProvider); ok {
taskExecuteImpl = p.OnTaskExecute
}
}
// NotImplementedCode is the standard return code for unimplemented functions.
// The host recognizes this and skips the plugin gracefully.
const NotImplementedCode int32 = -2
//go:wasmexport nd_task_execute
func _NdTaskExecute() int32 {
if taskExecuteImpl == nil {
// Return standard code - host will skip this plugin gracefully
return NotImplementedCode
}
var input TaskExecuteRequest
if err := pdk.InputJSON(&input); err != nil {
pdk.SetError(err)
return -1
}
output, err := taskExecuteImpl(input)
if err != nil {
pdk.SetError(err)
return -1
}
if err := pdk.OutputJSON(output); err != nil {
pdk.SetError(err)
return -1
}
return 0
}
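Register above discovers optional capabilities through interface type assertions rather than reflection: only the provider interfaces the implementation actually satisfies get wired up. The pattern in isolation, with the generated types redefined locally:

```go
package main

import "fmt"

// Minimal local stand-ins for the generated types.
type TaskExecuteRequest struct{ TaskID string }
type TaskExecuteResponse struct{ Error string }

// Marker interface plus an optional provider, mirroring the wrappers above.
type TaskWorker interface{}
type TaskExecuteProvider interface {
	OnTaskExecute(TaskExecuteRequest) (TaskExecuteResponse, error)
}

var taskExecuteImpl func(TaskExecuteRequest) (TaskExecuteResponse, error)

// Register wires up only the provider interfaces the implementation satisfies;
// unimplemented exports later return NotImplementedCode to the host.
func Register(impl TaskWorker) {
	if p, ok := impl.(TaskExecuteProvider); ok {
		taskExecuteImpl = p.OnTaskExecute
	}
}

type myWorker struct{}

func (myWorker) OnTaskExecute(req TaskExecuteRequest) (TaskExecuteResponse, error) {
	// A real worker would decode req.Payload and do the work here.
	return TaskExecuteResponse{}, nil
}

func main() {
	Register(myWorker{})
	fmt.Println("provider detected:", taskExecuteImpl != nil)
}
```

This keeps the exported WASM functions decoupled from any concrete plugin type: the host calls `nd_task_execute`, which dispatches through the registered function pointer.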


@@ -0,0 +1,48 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file provides stub implementations for non-WASM platforms.
// It allows Go plugins to compile and run tests outside of WASM,
// but the actual functionality is only available in WASM builds.
//
//go:build !wasip1
package taskworker
// TaskExecuteRequest is the request provided when a task is ready to execute.
type TaskExecuteRequest struct {
// QueueName is the name of the queue this task belongs to.
QueueName string `json:"queueName"`
// TaskID is the unique identifier for this task.
TaskID string `json:"taskId"`
// Payload is the opaque data provided when the task was enqueued.
Payload []byte `json:"payload"`
// Attempt is the current attempt number (1-based: first attempt = 1).
Attempt int32 `json:"attempt"`
}
// TaskExecuteResponse is the response from task execution.
type TaskExecuteResponse struct {
// Error, if non-empty, indicates the task failed. The task will be retried
// if retries are configured and attempts remain.
Error string `json:"error,omitempty"`
}
// TaskWorker is the marker interface for taskworker plugins.
// Implement one or more of the provider interfaces below.
// TaskWorker provides task execution handling.
// This capability allows plugins to receive callbacks when their queued tasks
// are ready to execute. Plugins that use the taskqueue host service must
// implement this capability.
type TaskWorker interface{}
// TaskExecuteProvider provides the OnTaskExecute function.
type TaskExecuteProvider interface {
OnTaskExecute(TaskExecuteRequest) (TaskExecuteResponse, error)
}
// NotImplementedCode is the standard return code for unimplemented functions.
const NotImplementedCode int32 = -2
// Register is a no-op on non-WASM platforms.
// This stub allows code to compile outside of WASM.
func Register(_ TaskWorker) {}


@@ -0,0 +1,59 @@
# Code generated by ndpgen. DO NOT EDIT.
#
# This file contains client wrappers for the HTTP host service.
# It is intended for use in Navidrome plugins built with extism-py.
#
# IMPORTANT: Due to a limitation in extism-py, you cannot import this file directly.
# The @extism.import_fn decorators are only detected when defined in the plugin's
# main __init__.py file. Copy the needed functions from this file into your plugin.
from dataclasses import dataclass
from typing import Any
import extism
import json
class HostFunctionError(Exception):
"""Raised when a host function returns an error."""
pass
@extism.import_fn("extism:host/user", "http_send")
def _http_send(offset: int) -> int:
"""Raw host function - do not call directly."""
...
def http_send(request: Any) -> Any:
"""Send executes an HTTP request and returns the response.
Parameters:
- request: The HTTP request to execute, including method, URL, headers, body, and timeout
Returns the HTTP response with status code, headers, and body.
Network errors, timeouts, and permission failures are raised as HostFunctionError.
Successful HTTP calls (including 4xx/5xx status codes) return a response without raising.
Args:
request: Any parameter.
Returns:
Any: The result value.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"request": request,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _http_send(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])
return response.get("result", None)


@@ -0,0 +1,59 @@
# Code generated by ndpgen. DO NOT EDIT.
#
# This file contains client wrappers for the HTTP host service.
# It is intended for use in Navidrome plugins built with extism-py.
#
# IMPORTANT: Due to a limitation in extism-py, you cannot import this file directly.
# The @extism.import_fn decorators are only detected when defined in the plugin's
# main __init__.py file. Copy the needed functions from this file into your plugin.
from dataclasses import dataclass
from typing import Any
import extism
import json
class HostFunctionError(Exception):
"""Raised when a host function returns an error."""
pass
@extism.import_fn("extism:host/user", "http_send")
def _http_send(offset: int) -> int:
"""Raw host function - do not call directly."""
...
def http_send(request: Any) -> Any:
"""Send executes an HTTP request and returns the response.
Parameters:
- request: The HTTP request to execute, including method, URL, headers, body, and timeout
Returns the HTTP response with status code, headers, and body.
Network errors, timeouts, and permission failures are raised as HostFunctionError.
Successful HTTP calls (including 4xx/5xx status codes) return a response without raising.
Args:
request: Any parameter.
Returns:
Any: The result value.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"request": request,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _http_send(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])
return response.get("result", None)


@@ -0,0 +1,153 @@
# Code generated by ndpgen. DO NOT EDIT.
#
# This file contains client wrappers for the TaskQueue host service.
# It is intended for use in Navidrome plugins built with extism-py.
#
# IMPORTANT: Due to a limitation in extism-py, you cannot import this file directly.
# The @extism.import_fn decorators are only detected when defined in the plugin's
# main __init__.py file. Copy the needed functions from this file into your plugin.
from dataclasses import dataclass
from typing import Any
import extism
import json
class HostFunctionError(Exception):
"""Raised when a host function returns an error."""
pass
@extism.import_fn("extism:host/user", "taskqueue_createqueue")
def _taskqueue_createqueue(offset: int) -> int:
"""Raw host function - do not call directly."""
...
@extism.import_fn("extism:host/user", "taskqueue_enqueue")
def _taskqueue_enqueue(offset: int) -> int:
"""Raw host function - do not call directly."""
...
@extism.import_fn("extism:host/user", "taskqueue_gettaskstatus")
def _taskqueue_gettaskstatus(offset: int) -> int:
"""Raw host function - do not call directly."""
...
@extism.import_fn("extism:host/user", "taskqueue_canceltask")
def _taskqueue_canceltask(offset: int) -> int:
"""Raw host function - do not call directly."""
...
def taskqueue_create_queue(name: str, config: Any) -> None:
"""CreateQueue creates a named task queue with the given configuration.
Zero-value fields in config use sensible defaults.
If a queue with the same name already exists, its configuration is updated in place (the call is idempotent).
On startup, this also recovers any stale "running" tasks from a previous crash.
Args:
name: str parameter.
config: Any parameter.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"name": name,
"config": config,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _taskqueue_createqueue(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])
def taskqueue_enqueue(queue_name: str, payload: bytes) -> str:
"""Enqueue adds a task to the named queue. Returns the task ID.
payload is opaque bytes passed back to the plugin on execution.
Args:
queue_name: str parameter.
payload: bytes parameter.
Returns:
str: The result value.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"queueName": queue_name,
"payload": payload,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _taskqueue_enqueue(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])
return response.get("result", "")
def taskqueue_get_task_status(task_id: str) -> str:
"""GetTaskStatus returns the status of a task: "pending", "running",
"completed", "failed", or "cancelled".
Args:
task_id: str parameter.
Returns:
str: The result value.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"taskId": task_id,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _taskqueue_gettaskstatus(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])
return response.get("result", "")
def taskqueue_cancel_task(task_id: str) -> None:
"""CancelTask cancels a pending task. Returns error if already
running, completed, or failed.
Args:
task_id: str parameter.
Raises:
HostFunctionError: If the host function returns an error.
"""
request = {
"taskId": task_id,
}
request_bytes = json.dumps(request).encode("utf-8")
request_mem = extism.memory.alloc(request_bytes)
response_offset = _taskqueue_canceltask(request_mem.offset)
response_mem = extism.memory.find(response_offset)
response = json.loads(extism.memory.string(response_mem))
if response.get("error"):
raise HostFunctionError(response["error"])


@@ -9,4 +9,5 @@ pub mod lifecycle;
pub mod metadata;
pub mod scheduler;
pub mod scrobbler;
pub mod taskworker;
pub mod websocket;


@@ -0,0 +1,87 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains export wrappers for the TaskWorker capability.
// It is intended for use in Navidrome plugins built with extism-pdk.
use serde::{Deserialize, Serialize};
// Helper functions for skip_serializing_if with numeric types
#[allow(dead_code)]
fn is_zero_i32(value: &i32) -> bool { *value == 0 }
#[allow(dead_code)]
fn is_zero_u32(value: &u32) -> bool { *value == 0 }
#[allow(dead_code)]
fn is_zero_i64(value: &i64) -> bool { *value == 0 }
#[allow(dead_code)]
fn is_zero_u64(value: &u64) -> bool { *value == 0 }
#[allow(dead_code)]
fn is_zero_f32(value: &f32) -> bool { *value == 0.0 }
#[allow(dead_code)]
fn is_zero_f64(value: &f64) -> bool { *value == 0.0 }
/// TaskExecuteRequest is the request provided when a task is ready to execute.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskExecuteRequest {
/// QueueName is the name of the queue this task belongs to.
#[serde(default)]
pub queue_name: String,
/// TaskID is the unique identifier for this task.
#[serde(default)]
pub task_id: String,
/// Payload is the opaque data provided when the task was enqueued.
#[serde(default)]
pub payload: Vec<u8>,
/// Attempt is the current attempt number (1-based: first attempt = 1).
#[serde(default)]
pub attempt: i32,
}
/// TaskExecuteResponse is the response from task execution.
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct TaskExecuteResponse {
/// Error, if non-empty, indicates the task failed. The task will be retried
/// if retries are configured and attempts remain.
#[serde(default, skip_serializing_if = "String::is_empty")]
pub error: String,
}
/// Error represents an error from a capability method.
#[derive(Debug)]
pub struct Error {
pub message: String,
}
impl std::fmt::Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.message)
}
}
impl std::error::Error for Error {}
impl Error {
pub fn new(message: impl Into<String>) -> Self {
Self { message: message.into() }
}
}
/// TaskExecuteProvider provides the OnTaskExecute function.
pub trait TaskExecuteProvider {
fn on_task_execute(&self, req: TaskExecuteRequest) -> Result<TaskExecuteResponse, Error>;
}
/// Register the on_task_execute export.
/// This macro generates the WASM export function for this method.
#[macro_export]
macro_rules! register_taskworker_task_execute {
($plugin_type:ty) => {
#[extism_pdk::plugin_fn]
pub fn nd_task_execute(
req: extism_pdk::Json<$crate::taskworker::TaskExecuteRequest>
) -> extism_pdk::FnResult<extism_pdk::Json<$crate::taskworker::TaskExecuteResponse>> {
let plugin = <$plugin_type>::default();
let result = $crate::taskworker::TaskExecuteProvider::on_task_execute(&plugin, req.into_inner())?;
Ok(extism_pdk::Json(result))
}
};
}


@@ -35,10 +35,12 @@
//! - [`artwork`] - provides artwork public URL generation capabilities for plugins.
//! - [`cache`] - provides in-memory TTL-based caching capabilities for plugins.
//! - [`config`] - provides access to plugin configuration values.
//! - [`http`] - provides outbound HTTP request capabilities for plugins.
//! - [`kvstore`] - provides persistent key-value storage for plugins.
//! - [`library`] - provides access to music library metadata for plugins.
//! - [`scheduler`] - provides task scheduling capabilities for plugins.
//! - [`subsonicapi`] - provides access to Navidrome's Subsonic API from plugins.
//! - [`taskqueue`] - provides persistent task queues for plugins.
//! - [`users`] - provides access to user information for plugins.
//! - [`websocket`] - provides WebSocket communication capabilities for plugins.
@@ -63,6 +65,13 @@ pub mod config {
pub use super::nd_host_config::*;
}
#[doc(hidden)]
mod nd_host_http;
/// provides outbound HTTP request capabilities for plugins.
pub mod http {
pub use super::nd_host_http::*;
}
#[doc(hidden)]
mod nd_host_kvstore;
/// provides persistent key-value storage for plugins.
@@ -91,6 +100,13 @@ pub mod subsonicapi {
pub use super::nd_host_subsonicapi::*;
}
#[doc(hidden)]
mod nd_host_taskqueue;
/// provides persistent task queues for plugins.
pub mod taskqueue {
pub use super::nd_host_taskqueue::*;
}
#[doc(hidden)]
mod nd_host_users;
/// provides access to user information for plugins.


@@ -0,0 +1,83 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains client wrappers for the HTTP host service.
// It is intended for use in Navidrome plugins built with extism-pdk.
use extism_pdk::*;
use serde::{Deserialize, Serialize};
/// HTTPRequest represents an outbound HTTP request from a plugin.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct HTTPRequest {
pub method: String,
pub url: String,
#[serde(default)]
pub headers: std::collections::HashMap<String, String>,
#[serde(default)]
pub body: Vec<u8>,
#[serde(default)]
pub timeout_ms: i32,
}
/// HTTPResponse represents the response from an outbound HTTP request.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct HTTPResponse {
pub status_code: i32,
#[serde(default)]
pub headers: std::collections::HashMap<String, String>,
#[serde(default)]
pub body: Vec<u8>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct HTTPSendRequest {
request: HTTPRequest,
}
#[derive(Debug, Clone, Deserialize)]
#[serde(rename_all = "camelCase")]
struct HTTPSendResponse {
#[serde(default)]
result: Option<HTTPResponse>,
#[serde(default)]
error: Option<String>,
}
#[host_fn]
extern "ExtismHost" {
fn http_send(input: Json<HTTPSendRequest>) -> Json<HTTPSendResponse>;
}
/// Send executes an HTTP request and returns the response.
///
/// Parameters:
/// - request: The HTTP request to execute, including method, URL, headers, body, and timeout
///
/// Returns the HTTP response with status code, headers, and body.
/// Network errors, timeouts, and permission failures are returned as errors.
/// Successful HTTP calls (including 4xx/5xx status codes) return `Ok` with a response.
///
/// # Arguments
/// * `request` - HTTPRequest parameter.
///
/// # Returns
/// The result value.
///
/// # Errors
/// Returns an error if the host function call fails.
pub fn send(request: HTTPRequest) -> Result<Option<HTTPResponse>, Error> {
let response = unsafe {
http_send(Json(HTTPSendRequest {
request,
}))?
};
if let Some(err) = response.0.error {
return Err(Error::msg(err));
}
Ok(response.0.result)
}


@@ -0,0 +1,184 @@
// Code generated by ndpgen. DO NOT EDIT.
//
// This file contains client wrappers for the TaskQueue host service.
// It is intended for use in Navidrome plugins built with extism-pdk.
use extism_pdk::*;
use serde::{Deserialize, Serialize};
/// QueueConfig holds configuration for a task queue.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct QueueConfig {
pub concurrency: i32,
pub max_retries: i32,
pub backoff_ms: i64,
pub delay_ms: i64,
pub retention_ms: i64,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueCreateQueueRequest {
name: String,
config: QueueConfig,
}
#[derive(Debug, Clone, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueCreateQueueResponse {
#[serde(default)]
error: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueEnqueueRequest {
queue_name: String,
payload: Vec<u8>,
}
#[derive(Debug, Clone, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueEnqueueResponse {
#[serde(default)]
result: String,
#[serde(default)]
error: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueGetTaskStatusRequest {
task_id: String,
}
#[derive(Debug, Clone, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueGetTaskStatusResponse {
#[serde(default)]
result: String,
#[serde(default)]
error: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueCancelTaskRequest {
task_id: String,
}
#[derive(Debug, Clone, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TaskQueueCancelTaskResponse {
#[serde(default)]
error: Option<String>,
}
#[host_fn]
extern "ExtismHost" {
fn taskqueue_createqueue(input: Json<TaskQueueCreateQueueRequest>) -> Json<TaskQueueCreateQueueResponse>;
fn taskqueue_enqueue(input: Json<TaskQueueEnqueueRequest>) -> Json<TaskQueueEnqueueResponse>;
fn taskqueue_gettaskstatus(input: Json<TaskQueueGetTaskStatusRequest>) -> Json<TaskQueueGetTaskStatusResponse>;
fn taskqueue_canceltask(input: Json<TaskQueueCancelTaskRequest>) -> Json<TaskQueueCancelTaskResponse>;
}
/// CreateQueue creates a named task queue with the given configuration.
/// Zero-value fields in config use sensible defaults.
/// If a queue with the same name already exists, the call is idempotent and
/// updates the queue's configuration via upsert.
/// On startup, this also recovers any stale "running" tasks from a previous crash.
///
/// # Arguments
/// * `name` - Name of the queue to create.
/// * `config` - Queue configuration; zero-value fields use defaults.
///
/// # Errors
/// Returns an error if the host function call fails.
pub fn create_queue(name: &str, config: QueueConfig) -> Result<(), Error> {
let response = unsafe {
taskqueue_createqueue(Json(TaskQueueCreateQueueRequest {
name: name.to_owned(),
config,
}))?
};
if let Some(err) = response.0.error {
return Err(Error::msg(err));
}
Ok(())
}
/// Enqueue adds a task to the named queue. Returns the task ID.
/// payload is opaque bytes passed back to the plugin on execution.
///
/// # Arguments
/// * `queue_name` - Name of the queue to add the task to.
/// * `payload` - Opaque bytes passed back to the plugin on execution.
///
/// # Returns
/// The ID of the newly enqueued task.
///
/// # Errors
/// Returns an error if the host function call fails.
pub fn enqueue(queue_name: &str, payload: Vec<u8>) -> Result<String, Error> {
let response = unsafe {
taskqueue_enqueue(Json(TaskQueueEnqueueRequest {
queue_name: queue_name.to_owned(),
payload,
}))?
};
if let Some(err) = response.0.error {
return Err(Error::msg(err));
}
Ok(response.0.result)
}
/// GetTaskStatus returns the status of a task: "pending", "running",
/// "completed", "failed", or "cancelled".
///
/// # Arguments
/// * `task_id` - ID of the task, as returned by `enqueue`.
///
/// # Returns
/// The task's status string: "pending", "running", "completed", "failed", or "cancelled".
///
/// # Errors
/// Returns an error if the host function call fails.
pub fn get_task_status(task_id: &str) -> Result<String, Error> {
let response = unsafe {
taskqueue_gettaskstatus(Json(TaskQueueGetTaskStatusRequest {
task_id: task_id.to_owned(),
}))?
};
if let Some(err) = response.0.error {
return Err(Error::msg(err));
}
Ok(response.0.result)
}
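// For illustration, the status strings returned by `get_task_status` can be
// mapped onto a typed enum on the plugin side. The string values are the ones
// documented above; the enum and parse helper are not part of the PDK.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum TaskStatus {
    Pending,
    Running,
    Completed,
    Failed,
    Cancelled,
}

impl std::str::FromStr for TaskStatus {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "pending" => Ok(TaskStatus::Pending),
            "running" => Ok(TaskStatus::Running),
            "completed" => Ok(TaskStatus::Completed),
            "failed" => Ok(TaskStatus::Failed),
            "cancelled" => Ok(TaskStatus::Cancelled),
            other => Err(format!("unknown task status: {other}")),
        }
    }
}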
/// CancelTask cancels a pending task. Returns error if already
/// running, completed, or failed.
///
/// # Arguments
/// * `task_id` - ID of the task to cancel.
///
/// # Errors
/// Returns an error if the host function call fails.
pub fn cancel_task(task_id: &str) -> Result<(), Error> {
let response = unsafe {
taskqueue_canceltask(Json(TaskQueueCancelTaskRequest {
task_id: task_id.to_owned(),
}))?
};
if let Some(err) = response.0.error {
return Err(Error::msg(err));
}
Ok(())
}

plugins/testdata/test-taskqueue/go.mod (vendored, new file)

@@ -0,0 +1,16 @@
module test-taskqueue

go 1.25

require github.com/navidrome/navidrome/plugins/pdk/go v0.0.0

require (
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/extism/go-pdk v1.1.3 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/stretchr/objx v0.5.2 // indirect
	github.com/stretchr/testify v1.11.1 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)

replace github.com/navidrome/navidrome/plugins/pdk/go => ../../pdk/go

plugins/testdata/test-taskqueue/go.sum (vendored, new file)

@@ -0,0 +1,14 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/extism/go-pdk v1.1.3 h1:hfViMPWrqjN6u67cIYRALZTZLk/enSPpNKa+rZ9X2SQ=
github.com/extism/go-pdk v1.1.3/go.mod h1:Gz+LIU/YCKnKXhgge8yo5Yu1F/lbv7KtKFkiCSzW/P4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
