Compare commits

...

152 Commits

Author SHA1 Message Date
Deluan
4994ae0aed fix artist refreshstats query
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-02 12:07:08 -05:00
Deluan
4f1732f186 wip - use unix socket
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-02 12:06:21 -05:00
deluan
f0270dc48c WIP
# Conflicts:
#	persistence/artist_repository.go
2025-11-02 12:06:18 -05:00
Deluan
8d4feb242b wip
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-02 12:05:28 -05:00
Deluan
dd635c4e30 convert schema to postgres
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-02 12:05:28 -05:00
Deluan Quintão
775626e037 refactor(scanner): optimize updating artists' statistics using the normalized media_file_artists table (#4641)
Optimized to use the normalized media_file_artists table instead of parsing JSONB

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-01 20:25:33 -04:00
Deluan Quintão
91fab68578 fix: handle UTF BOM in lyrics and playlist files (#4637)
* fix: handle UTF-8 BOM in lyrics and playlist files

Added UTF-8 BOM (Byte Order Mark) detection and stripping for external lyrics files and playlist files. This ensures that files with BOM markers are correctly parsed and recognized as synced lyrics or valid playlists.

The fix introduces a new ioutils package with UTF8Reader and UTF8ReadFile functions that automatically detect and remove UTF-8, UTF-16 LE, and UTF-16 BE BOMs. These utilities are now used when reading external lyrics and playlist files to ensure consistent parsing regardless of BOM presence.

Added comprehensive tests for BOM handling in both lyrics and playlists, including test fixtures with actual BOM markers to verify correct behavior.

* test: add test for UTF-16 LE encoded LRC files

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-10-31 09:07:23 -04:00
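The ioutils helpers are only named in the message above, so their real signatures aren't shown; the snippet below is a minimal Go sketch of the UTF-8 half of the idea (the UTF-16 LE/BE cases the commit also covers would additionally need a transcoding layer, e.g. golang.org/x/text/encoding/unicode):

```go
package ioutils

import (
	"bufio"
	"bytes"
	"io"
)

// utf8BOM is the three-byte UTF-8 byte order mark.
var utf8BOM = []byte{0xEF, 0xBB, 0xBF}

// UTF8Reader wraps r and silently drops a leading UTF-8 BOM, so callers that
// parse lyrics or playlists never see it. (Signature assumed for illustration.)
func UTF8Reader(r io.Reader) (io.Reader, error) {
	br := bufio.NewReader(r)
	head, err := br.Peek(len(utf8BOM))
	if err != nil && err != io.EOF {
		return nil, err
	}
	if bytes.HasPrefix(head, utf8BOM) {
		if _, err := br.Discard(len(utf8BOM)); err != nil {
			return nil, err
		}
	}
	return br, nil
}
```

Lyrics and playlist readers would then wrap their file handles with something like this before scanning lines.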
deluan
0bdd3e6f8b fix(ui): fix Ligera theme's RaPaginationActions contrast 2025-10-30 16:34:31 -04:00
Konstantin Morenko
465846c1bc fix(ui): fix color of MuiIconButton in Gruvbox Dark theme (#4585)
* Fixed color of MuiIconButton in gruvboxDark.js

* Update ui/src/themes/gruvboxDark.js

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-29 09:14:40 -04:00
Deluan Quintão
cce11c5416 fix(scanner): restore basic tag extraction fallback mechanism for improved metadata parsing (#4401)
* feat: add basic tag extraction fallback mechanism

Added basic tag extraction from TagLib's generic Tag interface as a fallback
when PropertyMap doesn't contain standard metadata fields. This ensures that
essential tags like title, artist, album, comment, genre, year, and track
are always available even when they're not present in format-specific
property maps.

Changes include:
- Extract basic tags (__title, __artist, etc.) in C++ wrapper
- Add parseBasicTag function to process basic tags in Go extractor
- Refactor parseProp function to be reusable across property parsing
- Ensure basic tags are preferred over PropertyMap when available

* feat(taglib): update tag parsing to use double underscores for properties

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-10-26 19:38:34 -04:00
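parseBasicTag itself isn't shown in the log, so the following is only a sketch of the fallback reading of the change, under the assumption that the Go extractor works on a map of property names to values; only the double-underscore key convention comes from the commit text:

```go
package taglib

// basicToStandard maps the "__"-prefixed keys exported by the C++ wrapper
// (taken from TagLib's generic Tag interface) to standard property names.
var basicToStandard = map[string]string{
	"__title":   "title",
	"__artist":  "artist",
	"__album":   "album",
	"__genre":   "genre",
	"__comment": "comment",
}

// mergeBasicTags folds the basic tags into the property map, filling in a
// standard field only when the format-specific PropertyMap entry is missing.
func mergeBasicTags(props map[string][]string) {
	for basic, std := range basicToStandard {
		vals, ok := props[basic]
		if !ok {
			continue
		}
		if len(props[std]) == 0 {
			props[std] = vals // fall back to the basic tag
		}
		delete(props, basic)
	}
}
```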
Deluan Quintão
d021289279 fix: enable multi-valued releasetype in smart playlists (#4621)
* fix: prevent infinite loop in Type filter autocomplete

Fixed an infinite loop issue in the album Type filter caused by an inline
arrow function in the optionText prop. The inline function created a new
reference on every render, causing React-Admin's AutocompleteInput to
continuously re-fetch data from the /api/tag endpoint.

The solution extracts the formatting function outside the component scope
as formatReleaseType, ensuring a stable function reference across renders.
This prevents unnecessary re-renders and API calls while maintaining the
humanized display format for release type values.

* fix: enable multi-valued releasetype in smart playlists

Smart playlists can now match all values in multi-valued releasetype tags.
Previously, the albumtype field was mapped to the single-valued mbz_album_type
database field, which only stored the first value from tags like album; soundtrack.
This prevented smart playlists from matching albums with secondary release types
like soundtrack, live, or compilation when tagged by MusicBrainz Picard.

The fix removes the direct database field mapping and allows both albumtype and
releasetype to use the multi-valued tag system. The albumtype field is now an
alias that points to the releasetype tag field, ensuring both query the same
JSON path in the tags column. This maintains backward compatibility with the
documented albumtype field while enabling proper multi-value tag matching.

Added tests to verify both releasetype and albumtype correctly generate
multi-valued tag queries.

Fixes #4616

* fix: resolve albumtype alias for all operators and sorting

Codex correctly identified that the initial fix only worked for Contains/StartsWith/EndsWith operators. The alias resolution was happening too late in the code path.

Fixed by resolving the alias in two places:
1. tagCond.ToSql() - now uses the actual field name (releasetype) in the JSON path
2. Criteria.OrderBy() - now uses the actual field name when building sort expressions

Added tests for Is/IsNot operators and sorting to ensure complete coverage.
2025-10-26 19:36:44 -04:00
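The alias fix described in the last two bullets boils down to resolving the documented field name to the real tag before any SQL is built, so conditions and sorting agree. A rough sketch; only the albumtype-to-releasetype mapping comes from the commit, the JSON path shape and types are assumptions:

```go
package criteria

import (
	"fmt"
	"strings"
)

// fieldAliases lists documented smart-playlist fields that are really another tag.
var fieldAliases = map[string]string{
	"albumtype": "releasetype",
}

// resolveTagField returns the tag name that should appear in the JSON path,
// so Contains/Is/IsNot conditions and ORDER BY all target the same data.
func resolveTagField(field string) string {
	f := strings.ToLower(field)
	if target, ok := fieldAliases[f]; ok {
		return target
	}
	return f
}

// tagJSONPath builds an illustrative JSON path into the tags column.
func tagJSONPath(field string) string {
	return fmt.Sprintf("$.%s[*].value", resolveTagField(field))
}
```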
Daniele Ricci
aa7f55646d build(docker): use standalone wget instead of the busybox one, fix #4473
wget in busybox doesn't support redirects (required for downloading
artifacts from GitHub)
2025-10-25 17:47:09 -04:00
Deluan
925bfafc1f build: enhance golangci-lint installation process to check version and reinstall if necessary 2025-10-25 17:42:33 -04:00
Deluan
e24f7984cc chore(deps-dev): update happy-dom to version 20.0.8
Signed-off-by: Deluan <deluan@navidrome.org>
2025-10-25 17:25:52 -04:00
dependabot[bot]
ac3e6ae6a5 chore(deps-dev): bump brace-expansion from 1.1.11 to 1.1.12 in /ui (#4217)
Bumps [brace-expansion](https://github.com/juliangruber/brace-expansion) from 1.1.11 to 1.1.12.
- [Release notes](https://github.com/juliangruber/brace-expansion/releases)
- [Commits](https://github.com/juliangruber/brace-expansion/compare/1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: brace-expansion
  dependency-version: 1.1.12
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Deluan Quintão <deluan@navidrome.org>
2025-10-25 17:24:31 -04:00
Deluan Quintão
b2019da999 chore(deps): update all dependencies (#4618)
* chore: update to Go 1.25.3

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: update to golangci-lint

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: update go dependencies

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: update vite dependencies in package.json and improve EventSource mock in tests

- Upgraded @vitejs/plugin-react to version 5.1.0 and @vitest/coverage-v8 to version 4.0.3.
- Updated vite to version 7.1.12 and vite-plugin-pwa to version 1.1.0.
- Enhanced the EventSource mock implementation in eventStream.test.js for better test isolation.

* ci: remove coverage flag from Go test command in pipeline

* chore: update Node.js version to v24 in devcontainer, pipeline, and .nvmrc

* chore: prettier

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: update actions/checkout from v4 to v5 in pipeline and update-translations workflows

* chore: update JS dependencies and remove unused jest-dom import in Linkify.test.jsx

* chore: update actions/download-artifact from v4 to v5 in pipeline

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-10-25 17:05:16 -04:00
yanggqi
871ee730cd fix(ui): update Chinese simplified translation (#4403)
* Update zh-Hans.json

Updated Chinese translation

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update zh-Hans.json

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update resources/i18n/zh-Hans.json

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-31 12:18:06 -04:00
Deluan
c2657e0adb chore: add make stop target to terminate development servers
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-30 17:49:41 -04:00
Deluan
aff9c7120b feat(ui): add Genre column as optional field in playlist table view
Added genre as a toggleable column in the playlist songs table. The Genre column
displays genre information for each song in playlists and is available through
the column toggle menu but disabled by default.

Implements feature request from GitHub discussion #4400.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-29 20:54:04 -04:00
Deluan
94d2696c84 feat(subsonic): populate Folder field with user's accessible library IDs
Added functionality to populate the Folder field in GetUser and GetUsers API responses
with the library IDs that the user has access to. This allows Subsonic API clients
to understand which music folders (libraries) a user can access for proper
content filtering and UI presentation.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-29 18:00:33 -04:00
Michael Brückner
949bff993e fix(ui): update Deutsch, Galego, Italiano translations (#4394) 2025-07-29 12:06:29 -04:00
Muhammed Šehić
b2ee5b5156 feat(ui): add new Bosnian translation (#4399)
Update translations for Bosnian language
2025-07-29 12:06:09 -04:00
Deluan Quintão
9dbe0c183e feat(insights): add plugin and multi-library information (#4391)
* feat(plugins): add PluginList method

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance insights collection with plugin awareness and expanded metrics

Enhanced the insights collection system to provide more comprehensive telemetry data about Navidrome installations. This update adds plugin awareness through dependency injection integration, expands configuration detection capabilities, and includes additional library metrics.

Key improvements include:
- Added PluginLoader interface integration to collect plugin information when enabled
- Enhanced configuration detection with proper credential validation for LastFM, Spotify, and Deezer
- Added new library metrics including Libraries count and smart playlist detection
- Expanded configuration insights with reverse proxy, custom PID, and custom tags detection
- Updated Wire dependency injection to support the new plugin loader requirement
- Added corresponding data structures for plugin information collection

This enhancement provides valuable insights into feature usage patterns and plugin adoption while maintaining privacy and following existing telemetry practices.

* fix: correct type assertion in plugin manager test

Fixed type mismatch in test where PluginManifestCapabilitiesElem was being
compared with string literal. The test now properly casts the string to the
correct enum type for comparison.

* refactor: move static config checks to staticData function

Moved HasCustomTags, ReverseProxyConfigured, and HasCustomPID configuration checks from the dynamic collect() function to the static staticData() function where they belong. This eliminates redundant computation on every insights collection cycle and implements the actual logic for HasCustomTags instead of the hardcoded false value.

The HasCustomTags field now properly detects if custom tags are configured by checking the length of conf.Server.Tags. This change improves performance by computing static configuration values only once rather than on every insights collection.

* feat: add granular control for insights collection

Added DevEnablePluginsInsights configuration option to allow fine-grained control over whether plugin information is collected as part of the insights data. This change enhances privacy controls by allowing users to opt-out of plugin reporting while still participating in general insights collection.

The implementation includes:
- New configuration option DevEnablePluginsInsights with default value true
- Gated plugin collection in insights.go based on both plugin enablement and permission flag
- Enhanced plugin information to include version data alongside name
- Improved code organization with clearer conditional logic for data collection

* refactor: rename PluginNames parameter from serviceName to capability

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-28 13:21:10 -04:00
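The gating described in the fourth bullet is just a conjunction of the plugin switch and the new flag; a tiny sketch with illustrative names (only DevEnablePluginsInsights and the name/version pair come from the commit text):

```go
package insights

type pluginInfo struct {
	Name    string
	Version string
}

// collectPlugins returns plugin details for the insights payload only when
// plugins are enabled AND DevEnablePluginsInsights (default true) allows it.
func collectPlugins(pluginsEnabled, devEnablePluginsInsights bool, list func() []pluginInfo) []pluginInfo {
	if !pluginsEnabled || !devEnablePluginsInsights {
		return nil
	}
	return list()
}
```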
Deluan Quintão
d9aa3529d7 fix(ui): update Polish translations from POEditor (#4384)
Co-authored-by: navidrome-bot <navidrome-bot@navidrome.org>
2025-07-28 11:23:50 -04:00
Akshat Mehta
77e47f1ea2 feat(ui): add Hindi language translation (#4390)
* Hindi Language Support for "Navidrome"

Added Hindi Language Support

* Small changes to the language file, now better structured
2025-07-28 11:21:27 -04:00
Kendall Garner
d75ebc5efd fix(plugins): don't log "no proxy IP found" when using Subsonic API in plugins with reverse proxy auth (#4388)
* fix(auth): Do not try reverse proxy auth if internal auth succeeds

* cmp.Or will still require function results to be evaluated...

* move to a function
2025-07-28 10:18:49 -04:00
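The second bullet is the interesting one: cmp.Or cannot short-circuit, because its arguments are ordinary function calls evaluated before Or runs, so the reverse-proxy lookup (and its "no proxy IP found" log line) would fire even when internal auth already succeeded. A toy illustration with made-up auth functions:

```go
package main

import (
	"cmp"
	"fmt"
)

func internalAuth() string { fmt.Println("internal auth"); return "admin" }
func proxyAuth() string    { fmt.Println("proxy auth: no proxy IP found"); return "" }

func main() {
	// Both calls run before cmp.Or is even invoked, so proxyAuth always logs:
	_ = cmp.Or(internalAuth(), proxyAuth())

	// "Move to a function" style: the fallback only runs when it is needed.
	user := internalAuth()
	if user == "" {
		user = proxyAuth()
	}
	_ = user
}
```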
Cristiandis
5ea14ba520 docs(plugins): fix README.md for Discord Rich Presence (#4387) 2025-07-28 10:04:33 -04:00
Deluan
3e61b0426b fix(scanner): custom tags working again
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-26 21:40:41 -04:00
Deluan Quintão
d28a282de4 fix(scanner): Apple Music playlists import for songs with accented characters (#4385)
* fix: resolve playlist import issues with Unicode character paths

Fixes #3332 where songs with accented characters failed to import from Apple Music M3U playlists. The issue occurred because Apple Music exports use NFC Unicode normalization while macOS filesystem stores paths in NFD normalization.

Added normalizePathForComparison() function that normalizes both filesystem and M3U playlist paths to NFC form before comparison. This ensures consistent path matching regardless of the Unicode normalization form used.

Changes include comprehensive test coverage for Unicode normalization scenarios with both NFC and NFD character representations.

* address comments

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(tests): add check for unequal original Unicode paths in playlist normalization tests

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-26 11:27:35 -04:00
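normalizePathForComparison is named in the commit but not shown; a minimal sketch of the NFC-normalizing comparison, using golang.org/x/text/unicode/norm:

```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

// normalizePathForComparison normalizes a path to NFC so an NFD path coming
// from the macOS filesystem matches the NFC path written by Apple Music.
func normalizePathForComparison(p string) string {
	return norm.NFC.String(p)
}

func main() {
	fromPlaylist := "Beyonc\u00e9.mp3"    // NFC: é as a single code point
	fromFilesystem := "Beyonce\u0301.mp3" // NFD: e + combining acute accent

	fmt.Println(fromPlaylist == fromFilesystem) // false
	fmt.Println(normalizePathForComparison(fromPlaylist) ==
		normalizePathForComparison(fromFilesystem)) // true
}
```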
Deluan Quintão
1eef2e554c fix(ui): update Danish, German, Greek, Spanish, Finnish, French, Indonesian, Russian, Slovenian, Swedish, Turkish, Ukrainian translations from POEditor (#4326)
Co-authored-by: navidrome-bot <navidrome-bot@navidrome.org>
2025-07-25 18:58:57 -04:00
Deluan
6722af50e2 chore(deps): update Go dependencies to latest versions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-25 18:56:52 -04:00
Deluan Quintão
eeef98e2ca fix(server): optimize search3 performance with multi-library (#4382)
* fix(server): remove includeMissing from search (always false)

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(search): optimize search order by using natural order for improved performance

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-25 18:53:40 -04:00
Deluan
be83d68956 fix(scanner): fix misleading custom tag split config message.
See https://github.com/navidrome/navidrome/discussions/3901#discussioncomment-13883185

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-25 17:54:51 -04:00
Deluan
c8915ecd88 fix(server): change sorting from rowid to id for improved sync performance for artists
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-24 17:23:32 -04:00
Deluan Quintão
0da2352907 fix: improve URL path handling in local storage for special characters (#4378)
* refactor: improve URL path handling in local storage system

Enhanced the local storage implementation to properly handle URL-decoded paths
and fix issues with file paths containing special characters. Added decodedPath
field to localStorage struct to separate URL parsing concerns from file system
operations.

Key changes:
- Added decodedPath field to localStorage struct for proper URL decoding
- Modified newLocalStorage to use decoded path instead of modifying original URL
- Fixed Windows path handling to work with decoded paths
- Improved URL escaping in storage.For() to handle special characters
- Added comprehensive test suite covering URL decoding, symlink resolution,
  Windows paths, and edge cases
- Refactored test extractor to use mockTestExtractor for better isolation

This ensures that file paths with spaces, hash symbols, and other special
characters are handled correctly throughout the storage system.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(tests): fix test file permissions and add missing tests.Init call

* refactor(tests): remove redundant test

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: URL building for Windows and remove redundant variable

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: simplify URL path escaping in local storage

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-23 20:46:47 -04:00
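decodedPath is the field name from the commit; how it is derived there isn't shown, but the escaped-vs-decoded distinction it works around can be demonstrated with the standard library alone:

```go
package main

import (
	"fmt"
	"net/url"
)

// Keep the original URL untouched, but use a separately decoded path for
// filesystem operations, so directories with spaces or '#' survive the trip.
func main() {
	raw := "file:///music/My%20Library/%23hits"
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	// u.Path is the decoded form; u.EscapedPath() preserves the encoding.
	fmt.Println(u.Path)          // /music/My Library/#hits
	fmt.Println(u.EscapedPath()) // /music/My%20Library/%23hits
}
```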
Deluan Quintão
a30fa478ac feat(ui): reset activity panel error icon to normal state when clicked (#4379)
* ui: reset activity icon after viewing error

* refactor: improve ActivityPanel error acknowledgment logic

Replaced boolean errorAcknowledged state with acknowledgedError string state to track which specific error was acknowledged. This prevents icon flickering when error messages change and simplifies the logic by removing the need for useEffect.

Key changes:
- Changed from errorAcknowledged boolean to acknowledgedError string state
- Added derived isErrorVisible computed value for cleaner logic
- Removed useEffect dependency on scanStatus.error changes
- Updated handleMenuOpen to store actual error string instead of boolean flag
- Fixed test mock to return proper error state matching test expectations

This change addresses code review feedback and follows React best practices by using derived state instead of imperative effects.
2025-07-23 19:43:42 -04:00
Deluan
9f0059e13f refactor(tests): clean up tests
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-23 11:41:00 -04:00
ChekeredList71
159aa28ec8 fix(ui): update Hungarian translations (#4375)
* Hungarian: new strings and some old ones updated

* misplaced keys fixed

---------

Co-authored-by: ChekeredList71 <asd@asd.com>
2025-07-23 09:00:17 -04:00
Deluan Quintão
39febfac28 fix(scanner): prevent foreign key constraint errors in album participant insertion (#4373)
* fix: prevent foreign key constraint error in album participants

Prevent foreign key constraint errors when album participants contain
artist IDs that don't exist in the artist table. The updateParticipants
method now filters out non-existent artist IDs before attempting to
insert album_artists relationships.

- Add defensive filtering in updateParticipants() to query existing artist IDs
- Only insert relationships for artist IDs that exist in the artist table
- Add comprehensive regression test for both albums and media files
- Fixes scanner errors when JSON participant data contains stale artist references

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: optimize foreign key handling in album artists insertion

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: improve participants foreign key tests

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: clarify comments in album artists insertion query

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add cleanup to album repository tests

Added individual test cleanup to 4 album repository tests that create temporary
artists and albums. This ensures proper test isolation by removing test data
after each test completes, preventing test interference when running with
shuffle mode. Each test now cleans up its own temporary data from the artist,
library_artist, album, and album_artists tables using direct SQL deletion.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: refactor participant JSON handling for simpler and improved SQL processing

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: update test command description in Makefile for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: refactor album repository tests to use albumRepository type directly

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-22 14:35:12 -04:00
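The defensive filtering in the first bullet amounts to "only insert relationships whose artist id really exists". A simplified sketch with database/sql; the real repository presumably batches this into a single query rather than probing per id:

```go
package persistence

import (
	"context"
	"database/sql"
	"errors"
)

// filterExistingArtistIDs keeps only the participant artist IDs present in the
// artist table, so the subsequent album_artists insert cannot violate the
// foreign key when the participants JSON holds stale references.
func filterExistingArtistIDs(ctx context.Context, db *sql.DB, ids []string) ([]string, error) {
	existing := make([]string, 0, len(ids))
	for _, id := range ids {
		var found string
		err := db.QueryRowContext(ctx, "SELECT id FROM artist WHERE id = ?", id).Scan(&found)
		if errors.Is(err, sql.ErrNoRows) {
			continue // stale reference: skip it instead of failing the insert
		}
		if err != nil {
			return nil, err
		}
		existing = append(existing, found)
	}
	return existing, nil
}
```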
Deluan Quintão
36d73eec0d fix(scanner): prevent foreign key constraint error in tag UpdateCounts (#4370)
* fix: prevent foreign key constraint error in tag UpdateCounts

Added JOIN clause with tag table in UpdateCounts SQL query to filter out
tag IDs from JSON that don't exist in the tag table. This prevents
'FOREIGN KEY constraint failed' errors when the library_tag table
tries to reference non-existent tag IDs during scanner operations.

The fix ensures only valid tag references are counted while maintaining
data integrity and preventing scanner failures during library updates.

* test(tag): add regression tests for foreign key constraint fix

Add comprehensive regression tests to prevent the foreign key constraint
error when tag IDs in JSON data don't exist in the tag table. Tests cover
both album and media file scenarios with non-existent tag IDs.

- Test UpdateCounts() with albums containing non-existent tag IDs
- Test UpdateCounts() with media files containing non-existent tag IDs
- Verify operations complete without foreign key errors

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-21 22:55:28 -04:00
Deluan
e9a8d7ed66 fix: update stats format comment in selectArtist method
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-21 16:33:17 -04:00
Deluan Quintão
c193bb2a09 fix(server): headless library access improvements (#4362)
* fix: enable library access for headless processes

Fixed multi-library filtering to allow headless processes (shares, external providers) to access data by skipping library restrictions when no user context is present.

Previously, the library filtering system returned empty results (WHERE 1=0) for processes without user authentication, breaking functionality like public shares and external service integrations.

Key changes:
- Modified applyLibraryFilter methods to skip filtering when user.ID == invalidUserId
- Refactored tag repository to use helper method for library filtering logic
- Fixed SQL aggregation bug in tag statistics calculation across multiple libraries
- Added comprehensive test coverage for headless process scenarios
- Updated genre repository to support proper column mappings for aggregated data

This preserves the secure "safe by default" approach for authenticated users while restoring backward compatibility for background processes that need unrestricted data access.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: resolve SQL ambiguity errors in share queries

Fixed SQL ambiguity errors that were breaking share links after the Multi-library PR.
The Multi-library changes introduced JOINs between album and library tables,
both of which have 'id' columns, causing 'ambiguous column name: id' errors
when unqualified column references were used in WHERE clauses.

Changes made:
- Updated core/share.go to use 'album.id' instead of 'id' in contentsLabelFromAlbums
- Updated persistence/share_repository.go to use 'album.id' in album share loading
- Updated persistence/sql_participations.go to use 'artist.id' for consistency
- Added regression tests to prevent future SQL ambiguity issues

This resolves HTTP 500 errors that users experienced when accessing existing
share URLs after the Multi-library feature was merged.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: improve headless library access handling

Added proper user context validation and reordered joins in applyLibraryFilterToArtistQuery to ensure library filtering works correctly for both authenticated and headless operations. The user_library join is now only applied when a valid user context exists, while the library_artist join is always applied to maintain proper data relationships. (+1 squashed commit)
Squashed commits:
[a28c6965b] fix: remove headless library access guard

Removed the invalidUserId guard condition in applyLibraryFilterToArtistQuery that was preventing proper library filtering for headless operations. This fix ensures that library filtering joins are always applied consistently, allowing headless library access to work correctly with the library_artist junction table filtering.

The previous guard was skipping all library filtering when no user context was present, which could cause issues with headless operations that still need to respect library boundaries through the library_artist relationship.

* fix: simplify genre selection query in genre repository

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: enhance tag library filtering tests for headless access

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add comprehensive test coverage for headless library access

Added extensive test coverage for headless library access improvements including:

- Added 17 new tests across 4 test files covering headless access scenarios
- artist_repository_test.go: Added headless process tests for GetAll, Count,
  Get operations and explicit library_id filtering functionality
- genre_repository_test.go: Added library filtering tests for headless processes
  including GetAll, Count, ReadAll, and Read operations
- sql_base_repository_test.go: Added applyLibraryFilter method tests covering
  admin users, regular users, and headless processes with/without custom table names
- share_repository_test.go: Added headless access tests and SQL ambiguity
  verification for the album.id vs id fix in loadMedia function
- Cleaned up test setup by replacing log.NewContext usage with GinkgoT().Context()
  and removing unnecessary configtest.SetupConfig() calls for better test isolation

These tests ensure that headless processes (background operations without user context)
can access all libraries while respecting explicit filters, and verify that the SQL
ambiguity fixes work correctly without breaking existing functionality.

* revert: remove user context handling from scrobble buffer getParticipants

Reverts commit 5b8ef74f05.

The artist repository no longer requires user context for proper library
filtering, so the workaround of temporarily injecting user context into
the scrobbleBufferRepository.Next method is no longer needed.

This simplifies the code and removes the dependency on fetching user
information during background scrobbling operations.

* fix: improve library access filtering for artists

Enhanced artist repository filtering to properly handle library access restrictions
and prevent artists with no accessible content from appearing in results.

Backend changes:
- Modified roleFilter to use direct JSON_EXTRACT instead of EXISTS subquery for better performance
- Enhanced applyLibraryFilterToArtistQuery to filter out artists with empty stats (no content)
- Changed from LEFT JOIN to INNER JOIN with library_artist table for stricter filtering
- Added condition to exclude artists where library_artist.stats = '{}' (empty content)

Frontend changes:
- Added null-checking in getCounter function to prevent TypeError when accessing undefined records
- Improved optional chaining for safer property access in role-based statistics display

These changes ensure that users only see artists that have actual accessible content
in their permitted libraries, fixing issues where artists appeared in the list
despite having no albums or songs available to the user.

* fix: update library access logic for non-admin users and enhance test coverage

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: refine library artist query and implement cleanup for empty entries

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: consolidate artist repository tests to eliminate duplication

Significantly refactored artist_repository_test.go to reduce code duplication and
improve maintainability by ~27% (930 to 680 lines). Key improvements include:

- Added test helper functions createTestArtistWithMBID() and createUserWithLibraries()
  to eliminate repetitive test data creation
- Consolidated duplicate MBID search tests using DescribeTable for parameterized testing
- Removed entire 'Permission-Based Behavior Comparison' section (~150 lines) that
  duplicated functionality already covered in other test contexts
- Reorganized search tests into cohesive 'MBID and Text Search' section with proper
  setup/teardown and shared test infrastructure
- Streamlined missing artist tests and moved them to dedicated section
- Maintained 100% test coverage while eliminating redundant test patterns

All tests continue to pass with identical functionality and coverage.

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-20 15:58:21 -04:00
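A compressed sketch of the rule described in the first bullet: skip the library restriction entirely for headless (no-user) contexts and keep it for everyone else. invalidUserId and the WHERE-fragment representation are stand-ins; the actual repositories build this with their query builder:

```go
package persistence

const invalidUserId = "" // stand-in sentinel used when no user is in the context

type contextUser struct {
	ID      string
	IsAdmin bool
}

// applyLibraryFilter sketches the "safe by default" rule: headless processes
// (shares, external agents) skip the filter, admins see everything, and
// regular users are limited to the libraries they are associated with.
func applyLibraryFilter(where []string, u contextUser) []string {
	switch {
	case u.ID == invalidUserId:
		return where // no user context: background job, don't restrict
	case u.IsAdmin:
		return where // admins have access to every library
	default:
		return append(where,
			"library_id IN (SELECT library_id FROM user_library WHERE user_id = ?)")
	}
}
```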
emmmm
72031d99ed fix: typo in Dockerfile (#4363) 2025-07-20 13:36:46 -04:00
Deluan
9fcc996336 fix(plugins): prevent race condition in plugin tests
Add EnsureCompiled calls in plugin test BeforeEach blocks to wait for
WebAssembly compilation before loading plugins. This prevents race conditions
where tests would attempt to load plugins before compilation completed,
causing flaky test failures in CI environments.

The race condition occurred because ScanPlugins() registers plugins
synchronously but compiles them asynchronously in background goroutines
with a concurrency limit of 2. Tests that immediately called LoadPlugin()
or LoadMediaAgent() after ScanPlugins() could fail if compilation wasn't
finished yet.

Fixed in both adapter_media_agent_test.go and manager_test.go which had
multiple tests vulnerable to this timing issue.
2025-07-20 10:43:04 -04:00
Kendall Garner
d5fa46e948 fix(subsonic): only use genre tag when searching by genre (#4361) 2025-07-19 21:52:29 -04:00
Deluan
9f46204b63 fix(subsonic): artist search in search3 endpoint
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-19 16:44:07 -04:00
Deluan
a60bea70c9 fix(ui): replace NumberInput with TextInput for read-only fields in LibraryEdit
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-18 21:43:52 -04:00
Deluan
a569f6788e fix(ui): update Portuguese translation and remove unused terms
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-18 18:59:52 -04:00
Deluan Quintão
00c83af170 feat: Multi-library support (#4181)
* feat(database): add user_library table and library access methods

Signed-off-by: Deluan <deluan@navidrome.org>

# Conflicts:
#	tests/mock_library_repo.go

* feat(database): enhance user retrieval with library associations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): implement library management and user-library association endpoints

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): restrict access to library and config endpoints to admin users

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(library): implement library management service and update API routes

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(database): add library filtering to album, folder, and media file queries

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor library service to use REST repository pattern and remove CRUD operations

Signed-off-by: Deluan <deluan@navidrome.org>

* add total_duration column to library and update user_library table

Signed-off-by: Deluan <deluan@navidrome.org>

* fix migration file name

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add library management features including create, edit, delete, and list functionalities - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): enhance library validation and management with path checks and normalization - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): improve library path validation and error handling - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* use utils/formatBytes

Signed-off-by: Deluan <deluan@navidrome.org>

* simplify DeleteLibraryButton.jsx

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): enhance validation messages and error handling for library paths

Signed-off-by: Deluan <deluan@navidrome.org>

* lint

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): add tests for multi-library scanning and validation

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): improve handling of filesystem errors and ensure warnings are returned

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(controller): add function to retrieve the most recent scan time across all libraries

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add additional fields and restructure LibraryEdit component for enhanced statistics display

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): enhance LibraryCreate and LibraryEdit components with additional props and styling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(mediafile): add LibraryName field and update queries to include library name

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(missingfiles): add library filter and display in MissingFilesList component

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): implement scanner interface for triggering library scans on create/update

Signed-off-by: Deluan <deluan@navidrome.org>

# Conflicts:
#	cmd/wire_gen.go
#	cmd/wire_injectors.go

# Conflicts:
#	cmd/wire_gen.go

# Conflicts:
#	cmd/wire_gen.go
#	cmd/wire_injectors.go

* feat(library): trigger scan after successful library deletion to clean up orphaned data

Signed-off-by: Deluan <deluan@navidrome.org>

* rename migration file for user library table to maintain versioning order

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: move scan triggering logic into a helper method for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add library path and name fields to album and mediafile models

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add/remove watchers on demand, not only when server starts

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): streamline library handling by using state-libraries for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: track processed libraries by updating state with scan timestamps

Signed-off-by: Deluan <deluan@navidrome.org>

* prepend libraryID for track and album PIDs

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(repository): apply library filtering in CountAll methods for albums, folders, and media files

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(user): add library selection for user creation and editing

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): implement library selection functionality with reducer and UI component

Signed-off-by: Deluan <deluan@navidrome.org>

# Conflicts:
#	.github/copilot-instructions.md

# Conflicts:
#	.gitignore

* feat(library): add tests for LibrarySelector and library selection hooks

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add unit tests for file utility functions

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add library ID filtering for album resources

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): streamline library ID filtering in repositories and update resource filtering logic

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(repository): add table name handling in filter functions for SQL queries

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): add refresh functionality on LibrarySelector close

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(artist): add library ID filtering for artists in repository and update resource filtering logic

Signed-off-by: Deluan <deluan@navidrome.org>

# Conflicts:
#	persistence/artist_repository.go

* Add library_id field support for smart playlists

- Add library_id field to smart playlist criteria system
- Supports Is and IsNot operators for filtering by library ID
- Includes comprehensive test coverage for single values and lists
- Enables creation of library-specific smart playlists

* feat(subsonic): implement user-specific library access in GetMusicFolders

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(library): enhance LibrarySelectionField to extract library IDs from record

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(subsonic): update GetIndexes and GetArtists method to support library ID filtering

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: ensure LibrarySelector dropdown refreshes on button close

Added refresh() call when closing the dropdown via button click to maintain
consistency with the ClickAwayListener behavior. This ensures the UI
updates properly regardless of how the dropdown is closed, fixing an
inconsistent refresh behavior between different closing methods.

The fix tracks the previous open state and calls refresh() only when
the dropdown was open and is being closed by the button click.

* refactor: simplify getUserAccessibleLibraries function and update related tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance selectedMusicFolderIds function to handle valid music folder IDs and improve fallback logic

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: change ArtistRepository.GetIndex to accept multiple library IDs

Updated the GetIndex method signature to accept a slice of library IDs instead of a single ID, enabling support for filtering artists across multiple libraries simultaneously.

Changes include:
- Modified ArtistRepository interface in model/artist.go
- Updated implementation in persistence/artist_repository.go with improved library filtering logic
- Refactored Subsonic API browsing.go to use new selectedMusicFolderIds helper
- Added comprehensive test coverage for multiple library scenarios
- Updated mock repository implementation for testing

This change improves flexibility for multi-library operations while maintaining backward compatibility through the selectedMusicFolderIds helper function.

* feat: add library access validation to selectedMusicFolderIds

Enhanced the selectedMusicFolderIds function to validate musicFolderId parameters
against the user's accessible libraries. Invalid library IDs (those the user
doesn't have access to) are now silently filtered out, improving security by
preventing users from accessing libraries they don't have permission for.

Changes include:
- Added validation logic to check musicFolderId parameters against user's accessible libraries
- Added slices package import for efficient validation
- Enhanced function documentation to clarify validation behavior
- Added comprehensive test cases covering validation scenarios
- Maintains backward compatibility with existing behavior

* feat: implement multi-library support for GetAlbumList and GetAlbumList2 endpoints

- Enhanced selectedMusicFolderIds helper to validate and filter library IDs
- Added ApplyLibraryFilter function in filter/filters.go for library filtering
- Updated getAlbumList to support musicFolderId parameter filtering
- Added comprehensive tests for multi-library functionality
- Supports single and multiple musicFolderId values
- Falls back to all accessible libraries when no musicFolderId provided
- Validates library access permissions for user security

* feat: implement multi-library support for GetRandomSongs, GetSongsByGenre, GetStarred, and GetStarred2

- Added multi-library filtering to GetRandomSongs endpoint using musicFolderId parameter
- Added multi-library filtering to GetSongsByGenre endpoint using musicFolderId parameter
- Enhanced GetStarred and GetStarred2 to filter artists, albums, and songs by library
- Added Options field to MockMediaFileRepo and MockArtistRepo for test compatibility
- Added comprehensive Ginkgo/Gomega tests for all new multi-library functionality
- All tests verify proper SQL filter generation and library access validation
- Supports single/multiple musicFolderId values with fallback to all accessible libraries

* refactor: optimize starred items queries with parallel execution and fix test isolation

Refactored starred items functionality by extracting common logic into getStarredItems()
method that executes artist, album, and media file queries in parallel for better performance.
This eliminates code duplication between GetStarred and GetStarred2 methods while improving
response times through concurrent database queries using run.Parallel().

Also fixed test isolation issues by adding missing auth.Init(ds) call in album lists test setup.
This resolves nil pointer dereference errors in GetStarred and GetStarred2 tests when run independently.

* fix: add ApplyArtistLibraryFilter to filter artists by associated music folders

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add library access methods to User model

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement library access filtering for artist queries based on user permissions

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance artist library filtering based on user permissions and optimize library ID retrieval

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: return error when any musicFolderId is invalid or inaccessible

Changed behavior from silently filtering invalid library IDs to returning
ErrorDataNotFound (code 70) when any provided musicFolderId parameter
is invalid or the user doesn't have access to it.

The error message includes the specific library number for better debugging.
This affects album/song list endpoints (getAlbumList, getRandomSongs,
getSongsByGenre, getStarred) to provide consistent error handling
across all Subsonic API endpoints.

Updated corresponding tests to expect errors instead of silent filtering.

* feat: add musicFolderId parameter support to Search2 and Search3 endpoints

Implemented musicFolderId parameter support for Subsonic API Search2 and Search3 endpoints, completing multi-library functionality across all Subsonic endpoints.

Key changes:
- Added musicFolderId parameter handling to Search2 and Search3 endpoints
- Updated search logic to filter results by specified library or all accessible libraries when parameter not provided
- Added proper error handling for invalid/inaccessible musicFolderId values
- Refactored SearchableRepository interface to support library filtering with variadic QueryOptions
- Updated repository implementations (Album, Artist, MediaFile) to handle library filtering in search operations
- Added comprehensive test coverage with robust assertions verifying library filtering works correctly
- Enhanced mock repositories to capture QueryOptions for test validation

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: refresh LibraryList on scan end

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: allow editing name of main library

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement SendBroadcastMessage method for event broadcasting

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add event broadcasting for library creation, update, and deletion

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add useRefreshOnEvents hook for custom refresh logic on event changes

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance library management with refresh event broadcasting

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: replace AddUserLibrary and RemoveUserLibrary with SetUserLibraries for better library management

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: remove commented-out genre repository code from persistence tests

* feat: enhance library selection with master checkbox functionality

Added a master checkbox to the SelectLibraryInput component, allowing users to select or deselect all libraries at once. This improves user experience by simplifying the selection process when multiple libraries are available. Additionally, updated translations in the en.json file to include a new message for selecting all libraries, ensuring consistency in user interface messaging.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add default library assignment for new users

Introduced a new column `default_new_users` in the library table to
facilitate automatic assignment of default libraries to new regular users.
When a new user is created, they will now be assigned to libraries marked
as default, enhancing user experience by ensuring they have immediate access
to essential resources. Additionally, updated the user repository logic
to handle this new functionality and modified the user creation validation
to reflect that library selection is optional for non-admin users.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: correct updated_at assignment in library repository

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: improve cache buffering logic

Refactored the cache buffering logic to ensure thread safety when checking
the buffer length

Signed-off-by: Deluan <deluan@navidrome.org>

* fix formatting

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement per-library artist statistics with automatic aggregation

Implemented comprehensive multi-library support for artist statistics that
automatically aggregates stats from user-accessible libraries. This fundamental
change moves artist statistics from global scope to per-library granularity
while maintaining backward compatibility and transparent operation.

Key changes include:
- Migrated artist statistics from global artist.stats to per-library library_artist.stats
- Added automatic library filtering and aggregation in existing Get/GetAll methods
- Updated role-based filtering to work with per-library statistics storage
- Enhanced statistics calculation to process and store stats per library
- Implemented user permission-aware aggregation that respects library access control
- Added comprehensive test coverage for library filtering and restricted user access
- Created helper functions to ensure proper library associations in tests

This enables users to see statistics that accurately reflect only the content
from libraries they have access to, providing proper multi-tenant behavior
while maintaining the existing API surface and UI functionality.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add multi-library support with per-library tag statistics - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: genre and tag repositories. add comprehensive tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add multi-library support to tag repository system

Implemented comprehensive library filtering for tag repositories to support the multi-library feature. This change ensures that users only see tags from libraries they have access to, while admin users can see all tags.

Key changes:
- Enhanced TagRepository.Add() method to accept libraryID parameter for proper library association
- Updated baseTagRepository to implement library-aware queries with proper joins
- Added library_tag table integration for per-library tag statistics
- Implemented user permission-based filtering through user_library associations
- Added comprehensive test coverage for library filtering scenarios
- Updated UI data provider to include tag filtering by selected libraries
- Modified scanner to pass library ID when adding tags during folder processing

The implementation maintains backward compatibility while providing proper isolation between libraries for tag-based operations like genres and other metadata tags.

* refactor: simplify artist repository library filtering

Removed conditional admin logic from applyLibraryFilterToArtistQuery method
and unified the library filtering approach to match the tag repository pattern.
The method now always uses the same SQL join structure regardless of user role,
with admin access handled automatically through user_library associations.

Added artistLibraryIdFilter function to properly qualify library_id column
references and prevent SQL ambiguity errors when multiple tables contain
library_id columns. This ensures the filter targets library_artist.library_id
specifically rather than causing ambiguous column name conflicts.

* fix: resolve LibrarySelectionField validation error for non-admin users

Fixed validation error 'At least one library must be selected for non-admin users' that appeared even when libraries were selected. The issue was caused by a data format mismatch between backend and frontend.

The backend sends user data with libraries as an array of objects, but the LibrarySelectionField component expects libraryIds as an array of IDs. Added data transformation in the data provider's getOne method to automatically convert libraries array to libraryIds format when fetching user records.

Also extracted validation logic into a separate userValidation module for better code organization and added comprehensive test coverage to prevent similar issues.

* refactor: remove unused library access functions and related tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename search_test.go to searching_test.go for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: add user context to scrobble buffer getParticipants call

Added user context handling to scrobbleBufferRepository.Next method to resolve
SQL error 'no such column: library_artist.library_id' when processing scrobble
entries in multi-library environments. The artist repository now requires user
context for proper library filtering, so we fetch the user and temporarily
inject it into the context before calling getParticipants. This ensures
background scrobbling operations work correctly with multi-library support.

* feat: add cross-library move detection for scanner

Implemented cross-library move detection for the scanner phase 2 to properly handle files moved between libraries. This prevents users from losing play counts, ratings, and other metadata when moving files across library boundaries.

Changes include:
- Added MediaFileRepository methods for two-tier matching: FindRecentFilesByMBZTrackID (primary) and FindRecentFilesByProperties (fallback)
- Extended scanner phase 2 pipeline with processCrossLibraryMoves stage that processes files unmatched within their library
- Implemented findCrossLibraryMatch with MusicBrainz Release Track ID priority and intrinsic properties fallback
- Updated producer logic to handle missing tracks without matches, ensuring cross-library processing
- Updated tests to reflect new producer behavior and cross-library functionality

The implementation uses existing moveMatched function for unified move operations, automatically preserving all user data through database foreign key relationships. Cross-library moves are detected using the same Equals() and IsEquivalent() matching logic as within-library moves for consistency.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add album annotation reassignment for cross-library moves

Implemented album annotation reassignment functionality for the scanner's missing tracks phase. When tracks move between libraries and change album IDs, the system now properly reassigns album annotations (starred status, ratings) from the old album to the new album. This prevents loss of user annotations when tracks are moved across library boundaries.

The implementation includes:
- Thread-safe annotation reassignment using mutex protection
- Duplicate reassignment prevention through processed album tracking
- Graceful error handling that doesn't fail the entire move operation
- Comprehensive test coverage for various scenarios including error conditions

This enhancement ensures data integrity and user experience continuity during cross-library media file movements.

* fix: address PR review comments for multi-library support

Fixed several issues identified in PR review:

- Removed unnecessary artist stats initialization check since the map is already initialized in PostScan()
- Improved code clarity in user repository by extracting isNewUser variable to avoid checking count == 0 twice
- Fixed library selection logic to properly handle initial library state and prevent overriding user selections

These changes address code quality and logic issues identified during the multi-library support PR review.

* feat: add automatic playlist statistics refreshing

Implemented automatic playlist statistics (duration, size, song count) refreshing
when tracks are modified. Added new refreshStats() method to recalculate
statistics from playlist tracks, and SetTracks() method to update tracks
and refresh statistics atomically. Modified all track manipulation methods
(RemoveTracks, AddTracks, AddMediaFiles) to automatically refresh statistics.
Updated playlist repository to use the new SetTracks method for consistent
statistics handling.

* refactor: rename AddTracks to AddMediaFilesByID for clarity

Renamed the AddTracks method to AddMediaFilesByID throughout the codebase
to better reflect its purpose of adding media files to a playlist by their IDs.
This change improves code readability and makes the method name more descriptive
of its actual functionality. Updated all references in playlist model, tests,
core playlist logic, and Subsonic API handlers to use the new method name.

* refactor: consolidate user context access in persistence layer

Removed duplicate helper functions userId() and isAdmin() from sql_base_repository.go and consolidated all user context access to use loggedUser(r.ctx).ID and loggedUser(r.ctx).IsAdmin consistently across the persistence layer.

This change eliminates code duplication and provides a single, consistent pattern for accessing user context information in repository methods. All functionality remains unchanged - this is purely a code cleanup refactoring.

* refactor: eliminate MockLibraryService duplication using embedded struct

- Replace 235-line MockLibraryService with 40-line embedded struct pattern
- Enhance MockLibraryRepo with service-layer methods (192→310 lines)
- Maintain full compatibility with existing tests
- All 72 nativeapi specs pass with proper error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: cleanup

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-18 18:41:12 -04:00
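One representative detail from this large change is the musicFolderId validation described for the Subsonic endpoints: unknown or inaccessible ids produce an error instead of being silently dropped, and omitting the parameter falls back to every accessible library. A sketch; selectedMusicFolderIds is the helper named above, but the concrete types are assumptions:

```go
package subsonic

import (
	"fmt"
	"slices"
)

// selectedMusicFolderIds validates the musicFolderId query parameters against
// the libraries the user can access. No parameter means "all accessible".
func selectedMusicFolderIds(requested, accessible []int) ([]int, error) {
	if len(requested) == 0 {
		return accessible, nil
	}
	for _, id := range requested {
		if !slices.Contains(accessible, id) {
			// maps to ErrorDataNotFound (Subsonic code 70) in the real handler
			return nil, fmt.Errorf("library %d not found or not accessible", id)
		}
	}
	return requested, nil
}
```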
Deluan
089dbe9499 refactor: remove unused CSS class in SongContextMenu
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-17 12:14:05 -04:00
Deluan Quintão
445880c006 fix(ui): prevent disabled Show in Playlist menu item from triggering actions (#4356)
* fix: prevent disabled Show in Playlist menu item from triggering actions

Fixed bug where clicking on the disabled 'Show in Playlist' menu item would unintentionally trigger music playback and replace the queue. The menu item now properly prevents event propagation when disabled and takes no action.

This resolves the issue where users would accidentally start playing music when clicking on the greyed-out menu option. The fix includes:
- Custom onClick handler that stops event propagation for disabled state
- Proper styling to maintain visual disabled state while allowing event handling
- Comprehensive test coverage for the disabled behavior

* style: clean up disabled menu item styling code

Simplified the arrow function for disabled onClick handler and changed inline style from empty object to undefined when not needed. Also added a CSS class for disabled menu items for potential future use.

These changes improve code readability and follow React best practices by using undefined instead of empty objects for conditional styles.
2025-07-17 11:00:12 -04:00
Deluan
3c1e5603d0 fix(ui): don't show year "0"
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-15 19:12:25 -04:00
Deluan
adef0ea1e7 fix(plugins): resolve race condition in plugin manager registration
Fixed a race condition in the plugin manager where goroutines started during
plugin registration could concurrently access shared plugin maps while the
main registration loop was still running. The fix separates plugin registration
from background processing by collecting all plugins first, then starting
background goroutines after registration is complete.

This prevents concurrent read/write access to the plugins and adapters maps
that was causing data races detected by the Go race detector. The solution
maintains the same functionality while ensuring thread safety during the
plugin scanning and registration process.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-15 12:58:16 -04:00
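The race-condition fix above follows a common two-phase pattern: fill the shared maps while still single-threaded, then start the goroutines. A hedged sketch (type and field names are invented for the example, not the plugin manager's real API):

```go
package plugins

type plugin struct {
	ID         string
	precompile func() // background work, e.g. WASM pre-compilation
}

type manager struct {
	plugins map[string]plugin
}

// scanAndRegister illustrates the fix: register every discovered plugin first,
// then start background goroutines only after the loop has stopped writing to
// the shared map, so the goroutines can no longer race with registration.
func (m *manager) scanAndRegister(discovered []plugin) {
	registered := make([]plugin, 0, len(discovered))
	for _, p := range discovered {
		m.plugins[p.ID] = p // exclusive access: no goroutines started yet
		registered = append(registered, p)
	}
	for _, p := range registered {
		go p.precompile() // safe: registration is complete
	}
}
```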
bytesingsong
b69a7652b9 chore: fix some typos in comment and logs (#4333)
Signed-off-by: bytesingsong <bytesing@icloud.com>
2025-07-13 14:31:15 -04:00
bytetigers
d8e829ad18 chore: fix function name/description in comment (#4325)
* chore: fix function in comment

Signed-off-by: bytetigers <bytetiger@icloud.com>

* Update model/metadata/persistent_ids.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Signed-off-by: bytetigers <bytetiger@icloud.com>
Co-authored-by: Deluan Quintão <github@deluan.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-13 14:30:58 -04:00
Deluan Quintão
5b73a4d5b7 feat(plugins): add TimeNow function to SchedulerService (#4337)
* feat: add TimeNow function to SchedulerService plugin

Added new TimeNow RPC method to the SchedulerService host service that returns
the current time in two formats: RFC3339Nano string and Unix milliseconds int64.
This provides plugins with a standardized way to get current time information
from the host system.

The implementation includes:
- TimeNowRequest/TimeNowResponse protobuf message definitions
- Go host service implementation using time.Now()
- Complete test coverage with format validation
- Generated WASM interface code for plugin communication

* feat: add LocalTimeZone field to TimeNow response

Added LocalTimeZone field to TimeNowResponse message in the SchedulerService
plugin host service. This field contains the server's local timezone name
(e.g., 'America/New_York', 'UTC') providing plugins with timezone context
alongside the existing RFC3339Nano and Unix milliseconds timestamps.

The implementation includes:
- New local_time_zone protobuf field definition
- Go implementation using time.Now().Location().String()
- Updated test coverage with timezone validation
- Generated protobuf serialization/deserialization code

* docs: update plugin README with TimeNow function documentation

Updated the plugins README.md to document the new TimeNow function in the
SchedulerService. The documentation includes detailed descriptions of the
three return formats (RFC3339Nano, UnixMilli, LocalTimeZone), practical
use cases, and a comprehensive Go code example showing how plugins can
access current time information for logging, calculations, and timezone-aware
operations.

* docs: remove wrong comment from InitRequest

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: add missing TimeNow method to namedSchedulerService

Added TimeNow method implementation to namedSchedulerService struct to satisfy the scheduler.SchedulerService interface contract. This method was recently added to the interface but the namedSchedulerService wrapper was not updated, causing compilation failures in plugin tests. The implementation is a simple pass-through to the underlying scheduler service since TimeNow doesn't require any special handling for named callbacks.

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-13 14:23:58 -04:00
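For reference, the TimeNow host call described above maps onto a very small piece of Go; this sketch uses plain structs instead of the generated protobuf types, so the field names are assumptions:

```go
package scheduler

import "time"

// TimeNowResponse mirrors the three values mentioned in the commit message.
type TimeNowResponse struct {
	RFC3339Nano   string // e.g. 2025-07-13T14:23:58.123456789-04:00
	UnixMilli     int64  // milliseconds since the Unix epoch
	LocalTimeZone string // e.g. "America/New_York" or "UTC"
}

// TimeNow returns the host's current time in the three formats above.
func TimeNow() TimeNowResponse {
	now := time.Now()
	return TimeNowResponse{
		RFC3339Nano:   now.Format(time.RFC3339Nano),
		UnixMilli:     now.UnixMilli(),
		LocalTimeZone: now.Location().String(),
	}
}
```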
Deluan
1de84dbd0c refactor(ui): replace translation key with direct character for remove action
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-12 16:55:21 -04:00
Deluan
e8a3495c70 test: suppress console.log output in eventStream test
Added console.log mock in eventStream.test.js to suppress the 'EventStream error' message that was appearing during test execution. This improves test output cleanliness by preventing the expected error logging from the eventStream error handling code from cluttering the test console output.

The mock follows the existing pattern used in the codebase for suppressing console output during tests and only affects the test environment, preserving the original logging functionality in production code.
2025-07-10 18:00:37 -03:00
Deluan
1166a0fabf fix(plugins): enhance error handling in checkErr function
Improved the error handling logic in the checkErr function to map specific error strings to their corresponding API error constants. This change ensures that errors from plugins are correctly identified and returned, enhancing the robustness of error reporting.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-09 14:32:43 -03:00
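Errors returned from WASM plugins cross the guest/host boundary as plain strings, so the checkErr improvement above amounts to matching those strings back to sentinel errors. A sketch under that assumption (the sentinel names here are illustrative, not the exact api package constants):

```go
package plugins

import "errors"

// Illustrative sentinels; the real constants live in the plugins/api package.
var (
	errNotFound       = errors.New("plugin:not_found")
	errNotImplemented = errors.New("plugin:not_implemented")
)

// checkErr maps known error strings coming out of a plugin call back to the
// host-side sentinel values, so callers can keep using errors.Is on them.
func checkErr(err error) error {
	if err == nil {
		return nil
	}
	switch err.Error() {
	case errNotFound.Error():
		return errNotFound
	case errNotImplemented.Error():
		return errNotImplemented
	default:
		return err
	}
}
```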
Xabi
9e97d0a9d9 fix(ui): update Basque translation (#4309)
* Update eu.json

Added the most recent strings and tried to improve some of the older ones.

* Update eu.json - typo

just a typo
2025-07-09 00:28:38 -03:00
Kendall Garner
6730716d26 fix(scanner): lyrics tag parsing to properly handle both ID3 and aliased tags
* fix(taglib): parse both id3 and aliased tags, as lyrics appears to be mapped to lyrics-xxx

* address feedback, make confusing test more stable
2025-07-09 00:27:40 -03:00
Deluan Quintão
65961cce4b fix(ui): replaygain for Artist Radio and Top Songs (#4328)
* Map replaygain info from getSimilarSongs2

* refactor: rename mapping function

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: Applied code review improvements

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-08 17:41:14 -03:00
Deluan Quintão
d041cb3249 fix(plugins): correct error handling in plugin initialization (#4311)
Updated the error handling logic in the plugin lifecycle manager to accurately record the success of the OnInit method. The change ensures that the metrics reflect whether the initialization was successful, improving the reliability of plugin metrics tracking. Additionally, removed the unused errorMapper interface from base_capability.go to clean up the codebase.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-07 16:24:10 -03:00
Deluan Quintão
f1f1fd2007 refactor: streamline agents logic and remove unnecessary caching (#4298)
* refactor: enhance agent loading with structured data

Introduced a new struct, EnabledAgent, to encapsulate agent name and type
information (plugin or built-in). Updated the getEnabledAgentNames function
to return a slice of EnabledAgent instead of a slice of strings, allowing
for more detailed agent management. This change improves the clarity and
maintainability of the code by providing a structured approach to handling
enabled agents and their types.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: remove agent caching logic

Eliminated the caching mechanism for agents, including the associated
data structures and methods. This change simplifies the agent loading
process by directly retrieving agents without caching, which is no longer
necessary for the current implementation. The removal of this logic helps
reduce complexity and improve maintainability of the codebase.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: replace range with slice.Contains

Signed-off-by: Deluan <deluan@navidrome.org>

* test: simplify agent name extraction in tests

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-05 10:11:35 -03:00
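The EnabledAgent refactor above replaces bare strings with a small struct; a hedged sketch of what that could look like (names and the config format are assumptions):

```go
package agents

import "strings"

type agentType int

const (
	builtIn agentType = iota
	plugin
)

// EnabledAgent pairs an agent name with whether it is a built-in or a plugin.
type EnabledAgent struct {
	Name string
	Type agentType
}

// getEnabledAgents parses a comma-separated agents setting and classifies each
// entry, instead of returning a plain []string as before.
func getEnabledAgents(conf string, isPlugin func(string) bool) []EnabledAgent {
	var result []EnabledAgent
	for _, name := range strings.Split(conf, ",") {
		name = strings.TrimSpace(name)
		if name == "" {
			continue
		}
		t := builtIn
		if isPlugin(name) {
			t = plugin
		}
		result = append(result, EnabledAgent{Name: name, Type: t})
	}
	return result
}
```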
Deluan Quintão
66eaac2762 fix(plugins): add metrics on callbacks and improve plugin method calling (#4304)
* refactor: implement OnSchedulerCallback method in wasmSchedulerCallback

Added the OnSchedulerCallback method to the wasmSchedulerCallback struct, enabling it to handle scheduler callback events. This method constructs a SchedulerCallbackRequest and invokes the corresponding plugin method, facilitating better integration with the scheduling system. The changes improve the plugin's ability to respond to scheduled events, enhancing overall functionality.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): update executeCallback method to use callMethod

Modified the executeCallback method to accept an additional parameter,
methodName, which specifies the callback method to be executed. This change
ensures that the correct method is called for each WebSocket event,
improving the accuracy of callback execution for plugins.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): capture OnInit metrics

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): improve logging for metrics in callMethod

Updated the logging statement in the callMethod function to include the
elapsed time as a separate key in the log output. This change enhances
the clarity of the logged metrics, making it easier to analyze the
performance of plugin requests and troubleshoot any issues that may arise.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): enhance logging for schedule callback execution

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(server): streamline scrobbler stopping logic

Refactored the logic for stopping scrobbler instances when they are removed.
The new implementation introduces a `stoppableScrobbler` interface to
simplify the type assertion process, allowing for a more concise and
readable code structure. This change ensures that any scrobbler
implementing the `Stop` method is properly stopped before removal,
improving the overall reliability of the plugin management system.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): improve plugin lifecycle management and error handling

Enhanced the plugin lifecycle management by implementing error handling in the OnInit method. The changes include the addition of specific error conditions that can be returned during plugin initialization, allowing for better management of plugin states. Additionally, the unregisterPlugin method was updated to ensure proper cleanup of plugins that fail to initialize, improving overall stability and reliability of the plugin system.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): remove unused LoadAllPlugins and related methods

Eliminated the LoadAllPlugins, LoadAllMediaAgents, and LoadAllScrobblers
methods from the manager implementation as they were not utilized in the codebase.
This cleanup reduces complexity and improves maintainability by removing
redundant code, allowing for a more streamlined plugin management process.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): update logging configuration for plugins

Configured logging for multiple plugins to remove timestamps and source file/line information, while adding specific prefixes for better identification.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): clear initialization state when unregistering a plugin

Added functionality to clear the initialization state of a plugin in the
lifecycle manager when it is unregistered. This change ensures that the
lifecycle state is accurately maintained, preventing potential issues with
plugins that may be re-registered after being unregistered. The new method
`clearInitialized` was implemented to handle this state management.

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add unit tests for convertError function, rename to checkErr

Added comprehensive unit tests for the convertError function to ensure
correct behavior across various scenarios, including handling nil responses,
typed nils, and responses implementing errorResponse. These tests validate
that the function returns the expected results without panicking and
correctly wraps original errors when necessary.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): update plugin base implementation and method calls

Refactored the plugin base implementation by renaming `wasmBasePlugin` to `baseCapability` across multiple files. Updated method calls in the `wasmMediaAgent`, `wasmSchedulerCallback`, and `wasmScrobblerPlugin` to align with the new base structure. These changes improve code clarity and maintainability by standardizing the plugin architecture, ensuring consistent usage of the base capabilities across different plugin types.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(discord): handle failed connections and improve heartbeat checks

Added a new method to clean up failed connections, which cancels the heartbeat schedule, closes the WebSocket connection, and removes cache entries. Enhanced the heartbeat check to log failures and trigger the cleanup process on the first failure. These changes ensure better management of user connections and improve the overall reliability of the RPC system.

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-05 09:03:49 -03:00
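The stoppableScrobbler refactor mentioned in the commit above is a classic optional-interface pattern; a minimal sketch, assuming a simplified Scrobbler interface:

```go
package scrobbler

// Scrobbler is a trimmed-down stand-in for the real interface.
type Scrobbler interface {
	IsAuthorized(userID string) bool
}

// stoppableScrobbler marks scrobblers that need an explicit shutdown, such as
// buffered or plugin-backed ones.
type stoppableScrobbler interface {
	Scrobbler
	Stop()
}

// removeScrobbler stops the instance first, if it supports it, then removes it.
func removeScrobbler(active map[string]Scrobbler, name string) {
	if s, ok := active[name].(stoppableScrobbler); ok {
		s.Stop() // let it flush pending work and close resources
	}
	delete(active, name)
}
```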
Deluan Quintão
c583ff57a3 test: add translation validation system with CI integration (#4306)
* feat: add translation validation script and update JSON files

Introduced a new script `validate-translations.sh` to validate the structure
of JSON translation files against an English reference. This script checks
for missing and extra translation keys, ensuring consistency across language
files. Additionally, several JSON files were updated to include new keys
and improve existing translations, enhancing the overall localization
efforts for the application.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance translation validation script

Updated the translation validation script to improve its functionality and usability. The script now validates JSON translation files against a reference English file, checking for JSON syntax, structural integrity, and reporting missing or extra keys. It also integrates with GitHub Actions for CI/CD, providing annotations for errors and warnings. Additionally, the usage instructions have been clarified, and verbose output options have been added for better debugging.

Signed-off-by: Deluan <deluan@navidrome.org>

* revert translations

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: Hungarian translation JSON structure

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: update testall target in Makefile

Modified the 'testall' target in the Makefile to include 'test-i18n' in the test sequence. This change ensures that internationalization tests are run alongside other tests, improving the overall testing process and ensuring that translation-related issues are caught early in the development cycle.

Signed-off-by: Deluan <deluan@navidrome.org>

* run validation with verbose output

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-03 09:59:39 -04:00
Deluan Quintão
9b3d3d15a1 fix(plugins): report metrics for all plugin types, not only MetadataAgents (#4303)
- Add ErrNotImplemented error to plugins/api package with proper documentation
- Refactor callMethod in wasm_base_plugin to use api.ErrNotImplemented
- Improve metrics recording logic to exclude not-implemented methods
- Add better tracing and context handling for plugin calls
- Reorganize error definitions with clear documentation
2025-07-02 22:05:28 -04:00
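The "exclude not-implemented methods" rule above can be expressed in a few lines; this sketch assumes a sentinel error and a generic record callback rather than the actual metrics API:

```go
package plugins

import (
	"errors"
	"time"
)

// errNotImplemented stands in for the api.ErrNotImplemented described above.
var errNotImplemented = errors.New("plugin: method not implemented")

// recordCall times a plugin method call but skips recording when the plugin
// simply does not implement the method, so metrics are not skewed by no-ops.
func recordCall(record func(method string, ok bool, elapsed time.Duration),
	method string, start time.Time, err error) {
	if errors.Is(err, errNotImplemented) {
		return // not a real call; do not count it
	}
	record(method, err == nil, time.Since(start))
}
```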
Kendall Garner
d4f869152b fix(scanner): read cover art from dsf, wavpack, fix wma test (#4296)
* fix(taglib): read cover art from dsf

* address feedback and also realize wma/wavpack are missing

* feedback

* more const char and remove unused import
2025-07-02 22:04:27 -04:00
Chris M
ee34433cc5 test: fix mpv tests on systems without /bin/bash installed - 4301 (#4302)
Not all systems have bash at `/bin/bash`. `/bin/sh` is POSIX and should
be present on all systems, making this much more portable. No bash
features are currently used in the script, so this change should be safe.
2025-07-02 21:55:55 -04:00
Deluan Quintão
a3d1a9dbe5 fix(plugins): silence plugin warnings and folder creation when plugins disabled (#4297)
* fix(plugins): silence repeated “Plugin not found” spam for inactive Spotify/Last.fm plugins

Navidrome was emitting a warning when the optional Spotify or
Last.fm agents weren’t enabled, filling the journal with entries like:

    level=warning msg="Plugin not found" capability=MetadataAgent name=spotify

Fixed by completely disabling the plugin system when Plugins.Enabled = false.

Signed-off-by: Deluan <deluan@navidrome.org>

* style: update test description for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: ensure plugin folder is created only if plugins are enabled

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-02 13:17:59 -04:00
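Behaviourally, the fix above is just an early return before any plugin machinery is touched; a minimal sketch, assuming a simplified config struct:

```go
package plugins

import "os"

// config is a stand-in for the relevant part of Navidrome's configuration
// (Plugins.Enabled and Plugins.Folder).
type config struct {
	Enabled bool
	Folder  string
}

// setup skips the whole plugin subsystem when plugins are disabled, so no
// "Plugin not found" warnings are logged and the plugins folder is not created.
func setup(cfg config) error {
	if !cfg.Enabled {
		return nil // plugin system fully disabled
	}
	return os.MkdirAll(cfg.Folder, 0o755)
}
```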
ChekeredList71
82f490d066 fix(ui): update Hungarian translation (#4291)
* Hungarian: added new strings

new strings from the comparison of d903d3f1 and 4909232e

* Hungarian: fixed my mistakes

---------

Co-authored-by: ChekeredList71 <asd@asd.com>
2025-07-02 09:49:44 -04:00
Deluan Quintão
4909232e8f fix(ui): update German, Greek, French, Indonesian, Russian, Swedish, Turkish translations from POEditor (#4157)
Co-authored-by: navidrome-bot <navidrome-bot@navidrome.org>
2025-07-01 12:30:13 -04:00
Deluan
4096760b67 feat: support MBIDs in smart playlists
Signed-off-by: Deluan <deluan@navidrome.org>
2025-07-01 10:38:36 -04:00
Deluan
f92c807c0f chore: add pull request template
Introduced a new pull request template to standardize contributions and improve clarity in the review process. This template includes sections for description, related issues, type of change, checklist, testing instructions, and additional notes. By providing a structured format, contributors can better communicate their changes and maintainers can more easily review submissions.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-30 17:15:42 -04:00
Deluan Quintão
bfa5b29913 feat: MBID search functionality for albums, artists and songs (#4286)
* feat(subsonic): search by MBID functionality

Updated the search methods in the mediaFileRepository, albumRepository, and artistRepository to support searching by MBID in addition to the existing query methods. This change improves the efficiency of media file, album, and artist searches, allowing for faster retrieval of records based on MBID.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(subsonic): enhance MBID search functionality for albums and artists

Updated the search functionality to support searching by MBID for both
albums and artists. The fullTextFilter function was modified to accept
additional MBID fields, allowing for more comprehensive searches. New
tests were added to ensure that the search functionality correctly
handles MBID queries, including cases for missing entries and the
includeMissing parameter. This enhancement improves the overall search
capabilities of the application, making it easier for users to find
specific media items by their unique identifiers.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(subsonic): normalize MBID to lowercase for consistent querying

Updated the MBID handling in the SQL search logic to convert the input
to lowercase before executing the query. This change ensures that
searches are case-insensitive, improving the accuracy and reliability
of the search results when querying by MBID.

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-30 17:11:54 -04:00
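Since MBIDs are UUIDs, detecting and normalizing them before the search query is straightforward; a hedged sketch (the function name and the use of github.com/google/uuid are assumptions):

```go
package persistence

import (
	"strings"

	"github.com/google/uuid"
)

// asMBID reports whether the search term looks like a MusicBrainz ID and, if
// so, returns it lowercased so the lookup is case-insensitive.
func asMBID(query string) (string, bool) {
	query = strings.TrimSpace(query)
	if _, err := uuid.Parse(query); err != nil {
		return "", false
	}
	return strings.ToLower(query), true
}
```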
Kendall Garner
f9c7cc5348 fix(prometheus): report subsonic error code (#4282)
* fix(prometheus): report subsonic error code

* address feedback
2025-06-30 11:54:02 -04:00
Deluan Quintão
a559414ffa chore(deps): update TagLib to 2.1.1 (#4281)
* chore: update CROSS_TAGLIB_VERSION to 2.1.1-1

* feat: add run-docker target

Introduced a new Makefile target `run-docker` that allows users to run a Navidrome Docker image with specified tags. This addition simplifies the process of launching the Docker container by handling volume mappings for configuration and music folders. The change enhances the development workflow by making it easier to test and run PR images.

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-30 11:40:20 -04:00
Deluan Quintão
e3aec6d2a9 feat(ui): implement RecentlyAddedByModTime support for tracks (#4046) (#4279)
* fix: implement RecentlyAddedByModTime support for mediafiles

Fixes #4046 by adding recently_added sort mapping to MediaFileRepository that respects the RecentlyAddedByModTime configuration setting. Previously, this feature only worked for albums, causing inconsistent behavior when clients requested tracks sorted by 'recently added'.

Changes include:
- Add mediaFileRecentlyAddedSort() function that returns 'updated_at' when RecentlyAddedByModTime=true, 'created_at' otherwise
- Add 'recently_added' sort mapping to mediafile repository
- Add comprehensive tests to verify both configuration scenarios

This ensures consistent sorting behavior between albums and tracks when using the RecentlyAddedByModTime feature.

* fix: update createdAt field to sort by recently added

Modified the createdAt field in the SongList component to include a sortBy
attribute set to "recently_added". This change ensures that the media files
are displayed in the order they were added, improving the user experience
when browsing through recently added items.

Signed-off-by: Deluan <deluan@navidrome.org>

* better testing

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-30 09:14:35 -04:00
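The mapping described in the first bullet above is a one-line choice between two columns; a sketch, with the configuration read through a plain variable instead of Navidrome's real config package:

```go
package persistence

// recentlyAddedByModTime stands in for the RecentlyAddedByModTime setting.
var recentlyAddedByModTime bool

// mediaFileRecentlyAddedSort resolves the "recently_added" sort order: file
// modification time when the option is enabled, import time otherwise.
func mediaFileRecentlyAddedSort() string {
	if recentlyAddedByModTime {
		return "updated_at"
	}
	return "created_at"
}
```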
Kendall Garner
91e7f7b5c9 fix(server): ensure that similar artists retrieved from provider are no more than limit (#4267)
* fix(provider): ensure that similar artists retrieved from provider are no more than limit

* add overlimit multiplier
2025-06-29 12:19:29 -04:00
Deluan
4f83987840 fix(ui): keep the NowPlayingPanel badge in sync.
Introduced a new event, EVENT_STREAM_RECONNECTED, to track the last
timestamp of stream reconnections. This change updates the activity
reducer to handle the new event and modifies the NowPlayingPanel to
refresh data based on server and stream status.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-29 11:35:10 -04:00
Deluan
dce7705999 feat(ui): implement new event stream connection logic
Added a new event stream connection method to enhance the handling of
server events. This includes a reconnect mechanism for improved reliability
in case of connection errors. The configuration now allows toggling the
new event stream feature via `devNewEventStream`. Additionally, tests
were added to ensure the new functionality works as expected, including
reconnection behavior after an error.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-29 10:18:05 -04:00
Deluan
411b32ebb8 test: improve serve_index_test code
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-28 20:01:47 -04:00
Deluan
b4aaa7f3a6 fix(ui): update Portuguese translations
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-28 19:40:25 -04:00
Kendall Garner
2741b1a5c5 feat(server): expose main credit stat to reflect only album artist | artist credit (#4268)
* attempt using artist | albumartist

* add primary stats, expose to ND and Subsonic

* response to feedback (1)

* address feedback part 1

* fix docs and artist show

* fix migration order

---------

Co-authored-by: Deluan Quintão <deluan@navidrome.org>
2025-06-28 19:00:13 -04:00
Deluan Quintão
d4f8419d83 fix(db): clear dangling music from BFR upgrade (#4262)
* fix(db): remove dangling items from BFR upgrade.

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: .gitignore any navidrome binary

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-28 18:43:11 -04:00
Bastiaan van der Plaat
93040b3f85 feat(agents): Add Deezer API artist image provider agent (#4180)
* feat(agents): Add Deezer API artist image provider agent

* fix(agents): Use proper naming convention of consts

* fix(agents): Check if json test data can be read

* fix(agents): Use underscores for unused function arguments

* fix(agents): Move int literal to deezerArtistSearchLimit const

* feat: add Deezer configuration option to disable it.

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan Quintão <deluan@navidrome.org>
2025-06-28 17:50:06 -04:00
Kendall Garner
0cd15c1ddc feat(prometheus): add metrics to Subsonic API and Plugins (#4266)
* Add prometheus metrics to subsonic and plugins

* address feedback, do not log error if operation is not supported

* add missing timestamp and client to stats

* remove .view from subsonic route

* directly inject DataStore in Prometheus, to avoid having to pass it in every call

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-27 22:13:57 -04:00
Deluan
709714cfc0 chore(deps): update Go dependencies to latest versions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-27 21:24:47 -04:00
Deluan Quintão
b63630fa6e fix(scanner): artist stats not refreshing during quick scan and after missing file deletion (#4269)
* Fix artist not being marked as touched during quick scans

When a new album is added during quick scans, artists were not being
marked as 'touched' due to media files having older modification times
than the scan completion time.

Changes:
- Add 'updated_at' to artist Put() columns in scanner to ensure
  timestamp is set when artists are processed
- Simplify RefreshStats query to check artist.updated_at directly
  instead of complex media file joins
- Artists from new albums now properly get refreshed in later phases

This fixes the issue where newly added albums would have incomplete
artist information after quick scans.

* fix(missing): refresh artist stats in background after deleting missing files

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(request): add InternalAuth to user context

Signed-off-by: Deluan <deluan@navidrome.org>

* Add comprehensive test for artist stats update during quick scans

- Add test that verifies artist statistics are correctly updated when new files are added during incremental scans
- Test ensures both overall stats (AlbumCount, SongCount) and role-specific stats are properly refreshed
- Validates fix for artist stats not being refreshed during quick scans when new albums are added
- Uses real artist repository instead of mock to verify actual stats calculation behavior

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-26 15:50:56 -04:00
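The simplified RefreshStats selection described above compares the artist's own updated_at against the start of the scan instead of joining through media files; a rough sketch of that condition (column names taken from the commit text, everything else assumed):

```go
package persistence

import "time"

// touchedArtistsFilter returns the condition used to pick artists that need a
// stats refresh after a quick scan: anything the scanner touched since the
// scan began, based on the artist row's own updated_at timestamp.
func touchedArtistsFilter(scanStart time.Time) (clause string, args []any) {
	return "artist.updated_at >= ?", []any{scanStart}
}
```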
Deluan Quintão
28bbd00dcc refactor: rename SimilarSongs to ArtistRadio (#4248) 2025-06-25 18:21:14 -04:00
Deluan Quintão
45c408a674 feat(plugins): allow Plugins to call the Subsonic API (#4260)
* chore: .gitignore any navidrome binary

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement internal authentication handling in middleware

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manager): add SubsonicRouter to Manager for API routing

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add SubsonicAPI Host service for plugins and an example plugin

Signed-off-by: Deluan <deluan@navidrome.org>

* fix lint

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): refactor path handling in SubsonicAPI to extract endpoint correctly

Signed-off-by: Deluan <deluan@navidrome.org>

* docs(plugins): add SubsonicAPI service documentation to README

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement permission checks for SubsonicAPI service

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): enhance SubsonicAPI service initialization with atomic router handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): better encapsulated dependency injection

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename parameter in WithInternalAuth for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* docs(plugins): update SubsonicAPI permissions section in README for clarity and detail

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): enhance SubsonicAPI permissions output with allowed usernames and admin flag

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add schema reference to example plugins

Signed-off-by: Deluan <deluan@navidrome.org>

* remove import alias

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-25 14:18:32 -04:00
Deluan
024b50dc2b chore: .gitignore any navidrome binary
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-25 09:44:22 -04:00
Deluan Quintão
aab3223e00 fix(subsonic): clearing playlist comment and public in Subsonic API (#4258)
* fix(subsonic): allow clearing playlist comment

* fix(playlists): simplify comment and public parameter handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(playlists): streamline fakePlaylists implementation in tests

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-24 08:50:06 -04:00
Deluan Quintão
e5e2d860ef fix(scanner): ensure full scans update the DB (#4252)
* fix: ensure full scan refreshes all artist stats

After PR #4059, full scans were not forcing a refresh of all artists.
This change ensures that during full scans, all artist stats are refreshed
instead of only those with recently updated media files.

Changes:
- Set changesDetected=true at start of full scans to ensure maintenance operations run
- Add allArtists parameter to RefreshStats() method
- Pass fullScan state to RefreshStats to control refresh scope
- Update mock repository to match new interface

Fixes #4246
Related to PR #4059

* fix: add tests for full and incremental scans

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-23 13:26:48 -04:00
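Putting the bullets above together: full scans force both the maintenance phase and a refresh of every artist. A minimal sketch, assuming a narrowed repository interface and invented function names:

```go
package scanner

import "context"

// artistRepository is narrowed to the changed method; the allArtists flag is
// the new parameter mentioned above. The return types are assumptions.
type artistRepository interface {
	RefreshStats(ctx context.Context, allArtists bool) (int64, error)
}

// runStatsRefresh shows the control flow: full scans always count as "changes
// detected" and refresh all artists; quick scans only refresh touched ones.
func runStatsRefresh(ctx context.Context, repo artistRepository, fullScan, changesDetected bool) error {
	if fullScan {
		changesDetected = true
	}
	if !changesDetected {
		return nil // nothing changed, skip maintenance
	}
	_, err := repo.RefreshStats(ctx, fullScan)
	return err
}
```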
Deluan Quintão
1bec99a2f8 fix(plugins): prevent concurrent WASM compilation race condition (#4253)
* fix: eliminate race condition in plugin system

Added compilation waiting mechanism to prevent WASM plugins from being instantiated
before their background compilation completes. This fixes the intermittent error
'source module must be compiled before instantiation' that occurred when tests
or plugin usage happened before asynchronous compilation finished.

Changes include:
- Added manager reference to wasmBasePlugin for compilation synchronization
- Modified all plugin adapter constructors to accept manager parameter
- Updated getInstance() to wait for compilation before loading instances
- Fixed runtime test to handle manually created plugins appropriately

The race condition was caused by plugins trying to compile WASM modules
synchronously during Load() calls while background compilation was still
in progress. This change ensures proper coordination between the compilation
and instantiation phases.

* fix: add plugin-clean target to Makefile for easier plugin cleanup

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: reorder plugin constructor parameters and add nil safety

Moved manager parameter to third position in pluginConstructor signature for
better parameter ordering consistency.

Also added nil check for adapter creation to prevent registration of failed
plugin adapters, which could lead to nil-pointer dereferences. Plugin
creation failures are now logged with context and gracefully skipped.

Changes:
- Reordered pluginConstructor parameters: manager moved before runtime
- Updated all 4 adapter constructor signatures to match new order
- Added nil safety check in registerPlugin to skip failed adapters
- Updated runtime test to use new parameter order

This improves both code consistency and runtime safety by preventing
nil adapters from being registered in the plugin manager.

* fix: prevent concurrent WASM compilation race condition

* refactor: remove unnecessary manager parameter from plugin constructors

* fix: update parameter name in newWasmSchedulerCallback for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-23 11:51:30 -04:00
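The coordination added above boils down to instantiation waiting on a per-plugin "compiled" signal; a hedged sketch using a closed channel as that signal (the real manager's fields and locking differ):

```go
package plugins

import (
	"context"
	"fmt"
)

// compilationDone holds one channel per plugin, closed when its background
// WASM compilation finishes. The real code guards this map with a lock.
var compilationDone = map[string]chan struct{}{}

// waitForCompilation blocks getInstance-style callers until the plugin's
// module is compiled, or until the compilation timeout in ctx expires.
func waitForCompilation(ctx context.Context, pluginID string) error {
	done, ok := compilationDone[pluginID]
	if !ok {
		return fmt.Errorf("unknown plugin %q", pluginID)
	}
	select {
	case <-done:
		return nil // compiled; safe to instantiate the module now
	case <-ctx.Done():
		return fmt.Errorf("waiting for %q to compile: %w", pluginID, ctx.Err())
	}
}
```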
Deluan
cfa1d7fa81 fix(scanner): filter folders by num_audio_files to ensure accurate statistics
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-23 10:26:26 -04:00
Deluan
177de7269b fix(scanner): always check for needed initial scan.
Relates to #4246

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-23 10:09:07 -04:00
Deluan Quintão
f1fc2cd9b9 feat(plugins): experimental support for plugins (#3998)
* feat(plugins): add minimal test agent plugin with API definitions

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add plugin manager with auto-registration and unique agent names

Introduced a plugin manager that scans the plugins folder for subdirectories containing plugin.wasm files and auto-registers them as agents using the directory name as the unique agent name. Updated the configuration to support plugins with enabled/folder options, and ensured the plugin manager is started as a concurrent task during server startup. The wasmAgent now returns the plugin directory name for AgentName, ensuring each plugin agent is uniquely identifiable. This enables dynamic plugin discovery and integration with the agents orchestrator.

* test: add Ginkgo suite and test for plugin manager auto-registration

Added a Ginkgo v2 suite bootstrap (plugins_suite_test.go) for the plugins package and a test (manager_test.go) to verify that plugins in the testdata folder are auto-registered and can be loaded as agents. The test uses a mock DataStore and asserts that the agent is registered and its AgentName matches the plugin directory. Updated go.mod and go.sum for wazero dependency required by plugin WASM support.

* test(plugins): ensure test WASM plugin is always freshly built before running suite; add real-plugin Ginkgo tests

Add BeforeSuite to plugins suite to build plugins/testdata/agent/plugin.wasm using Go WASI build command, matching README instructions. Remove plugin.wasm before build to guarantee a clean build. Add full real-plugin Ginkgo/Gomega tests for wasmAgent, covering all methods and error cases. Fix manager_test.go to use pointer to Manager. This ensures plugin tests are always run against a freshly compiled WASM binary, increasing reliability and reproducibility.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement persistent compilation cache for WASM agent plugins

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement instance pooling for wasmAgent to improve resource management

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): enhance logging for wasmAgent and plugin manager operations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement HttpService for handling HTTP requests in WASM plugins

Also add a sample Wikimedia plugin

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): standardize error handling in wasmAgent and MinimalAgent

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: clean up wikimedia plugin code

Standardized error creation using 'errors.New' where formatting was not needed. Introduced a constant for HTTP request timeouts. Removed commented-out log statement. Improved code comments for clarity and accuracy.

* refactor: use unified SPARQLResult struct and parser for SPARQL responses

Introduced a single SPARQLResult struct to represent all possible SPARQL response fields (sitelink, wiki, comment, img). Added a parseSPARQLResult helper to unmarshal and check for empty results, simplifying all fetch functions and improving type safety and maintainability.

* feat(plugins): improve error handling in HTTP request processing

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: background plugin compilation, logging, and race safety

Implemented background WASM plugin compilation with concurrency limits, proper closure capture, and global compilation cache to avoid data races. Added debug and warning logs for plugin compilation results, including elapsed time. Ensured plugin registration is correct and all tests pass.

* perf: implement true lazy loading for agents

Changed agent instantiation to be fully lazy. The Agents struct now stores agent names in order and only instantiates each agent on first use, caching the result. This preserves agent call order, improves server startup time, and ensures thread safety. Updated all agent methods and tests to use the new pattern. No changes to agent registration or interface. All tests pass.

* fix: ensure wasm plugin instances are closed via runtime.AddCleanup

Introduced runtime.AddCleanup to guarantee that the Close method of WASM plugin instances is called, even if they are garbage collected from the sync.Pool. Modified the sync.Pool.New function in manager.go to register a cleanup function for each loaded instance that implements Close. Updated agent.go to handle the pooledInstance wrapper containing the instance and its cleanup handle. Ensured cleanup.Stop() is called before explicitly closing an instance (on error or agent shutdown) to prevent double closing. This fixes a potential resource leak where instances could be GC'd from the pool without proper cleanup.

* refactor: break down long functions in plugin manager and agent

Refactored plugins/manager.go and plugins/agent.go to improve readability and reduce function length. Extracted pool initialization logic into newPluginPool and background compilation/agent factory logic into precompilePlugin/createAgentFactory in manager.go. Extracted pool retrieval/validation and cleanup function creation into getValidPooledInstance/createPoolCleanupFunc in agent.go.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename wasmAgent to wasmArtistAgent

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): add AlbumMetadataService with AlbumInfo and AlbumImages requests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugin): rename MinimalAgent for artist metadata service

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): implement wasmAlbumAgent for album metadata service with GetAlbumInfo and GetAlbumImages methods

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): simplify wasmAlbumAgent and wasmArtistAgent by using wasmBasePlugin

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add support for ArtistMetadataService and AlbumMetadataService in plugin manager

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): enhance plugin pool creation with custom runtime and precompilation support

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): implement generic plugin pool and agent factory for improved service handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): reorganize plugin management

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): improve function signatures for clarity and consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement background precompilation for plugins and agent factory creation

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): include instanceID in logging for better traceability

Signed-off-by: Deluan <deluan@navidrome.org>

* test(plugins): add tests for plugin pre-compilation and agent factory synchronization

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add minimal album test agent plugin for AlbumMetadataService

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): rename fake artist and album test agent plugins for metadata services

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(makefile): add Makefile for building plugin WASM binaries

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add FakeMultiAgent plugin implementing Artist and Album metadata services

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): remove log statements from FakeArtistAgent and FakeMultiAgent methods

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: split AlbumInfoRetriever and AlbumImageRetriever, update all usages

Split the AlbumInfoRetriever interface into two: AlbumInfoRetriever (for album metadata) and AlbumImageRetriever (for album images), to better separate concerns and simplify implementations. Updated all agents, providers, plugins, and tests to use the new interfaces and methods. Removed the now-unnecessary mockAlbumAgents in favor of the shared mockAgents. Fixed a missing images slice declaration in lastfm agent. All tests pass except for known ignored persistence tests. This change reduces code duplication, improves clarity, and keeps the codebase clean and organized.

* feat(plugins): add Cover Art Archive AlbumMetadataService plugin for album cover images

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: remove wasm module pooling

it was causing issues with the GC and the Close methods

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename metadata service files to adapter naming convention

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: unify album and artist method calls by introducing callMethod function

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: unify album and artist method calls by introducing callMethod function

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: handle nil values in data redaction process

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: add timeout for plugin compilation to prevent indefinite blocking

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement ScrobblerService plugin with authorization and scrobbling capabilities

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: simplify generalization

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance plugin management by improving scanning and loading mechanisms

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: update plugin creation functions to return specific interfaces for better type safety

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance wasmBasePlugin to support specific plugin types for improved type safety

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement MediaMetadataService with combined artist and album methods

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: improve MediaMetadataService plugin implementation and testing structure

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: add tests for Adapter Media Agent and improve plugin documentation

Signed-off-by: Deluan <deluan@navidrome.org>

* docs: add README for Navidrome Plugin System with detailed architecture and usage guidelines

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance agent management with plugin loading and caching

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: update agent discovery logic to include only local agent when no config is specified

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: encapsulate agent caching logic in agentCache struct

Replaced direct map/mutex usage for agent caching in Agents with a dedicated agentCache struct. This improves readability, maintainability, and testability by centralizing TTL and concurrency logic. Cleaned up comments and ensured all linter and test requirements are met.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: correct file extension filter in goimports command

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: use defer to unlock the mutex

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: move Cover Art Archive AlbumMetadataService plugins to an example folder

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: handle errors when creating media metadata and scrobbler service plugins

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: increase compilation timeout to one minute

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add configurable plugin compilation timeout

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement plugin scrobbler support in PlayTracker

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add context management and Stop method to buffered scrobbler

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add username field to scrobbler requests and update logging

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: data race in test

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename http proto files to host and update references

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: remove unused plugin registration methods from manager

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: extend plugin manifests and implement plugin management commands

Signed-off-by: Deluan <deluan@navidrome.org>

* Update utils/files.go

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* fix for code scanning alert no. 43: Arbitrary file access during archive extraction ("Zip Slip")

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* feat: add plugin dev workflow support

Added new CLI commands to improve plugin development workflow: 'plugin dev' to create symlinks from development directories to plugins folder, 'plugin refresh' to reload plugins without restarting Navidrome, enhanced 'plugin remove' to handle symlinked development plugins correctly, and updated 'plugin list' to display development plugins with '(dev)' indicator. These changes make the plugin development workflow more efficient by allowing developers to work on plugins in their own directories, link them to Navidrome without copying files, refresh plugins after changes without restart, and clean up safely.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement timer service with register and cancel functionality - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement timer service with register and cancel functionality - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement timer service with register and cancel functionality - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement timer service with register and cancel functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: lint errors

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(README): update documentation to include TimerCallbackService and its functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add InitService with OnInit method and initialization tracking - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add tests for InitService and plugin initialization tracking

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): expand documentation on plugin system implementation and architecture

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: panic

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): redirect plugins' stderr to logs

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add safe accessor methods for TimerService

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add plugin-specific configuration support in InitRequest and documentation

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add TimerCallbackService plugin adapter and integration

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename services for consistency and clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add mutex for configuration access and clone plugin config

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(tests): remove configtest dependency to prevent data races in integration tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): remove PluginName method from WASM plugin implementations and update LoadPlugin to accept service type

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement instance pooling for wasmBasePlugin to improve performance - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add wasmInstancePool for managing WASM plugin instances with TTL and max size

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(plugins): correctly pass error to done function in wasmBasePlugin

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename service types to capabilities for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): simplify instance management in wasmBasePlugin by removing error handling in closure

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): update wasmBasePlugin and wasmInstancePool to return errors for better error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename InitService to LifecycleManagement for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): fix instance ID logging in wasmBasePlugin

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): extract instance ID logging to a separate function in wasmBasePlugin, to avoid vet error

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): make timers be isolated per plugin

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): make timers be isolated per plugin

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugins): rename HttpServiceImpl to httpServiceImpl for consistency and improve logging

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add config service for plugin-specific configuration management

Signed-off-by: Deluan <deluan@navidrome.org>

* Update plugins/manager.go

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update plugins/manager.go

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat(crontab): implement crontab service for scheduling and canceling jobs

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(singleton): fix deadlock issue when a constructor calls GetSingleton again

Signed-off-by: Deluan <deluan@navidrome.org> (+1 squashed commit)
Squashed commits:
[325a96ea2] fix(singleton): fix deadlock issue when a constructor calls GetSingleton again

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scheduler): implement Scheduler for one-time and recurring job scheduling, merging CrontabService and TimerService

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scheduler): race condition in the scheduleOneTime and scheduleRecurring methods when replacing jobs with the same ID

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scheduler): consolidate job scheduling logic into a single helper function

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(plugin): rename GetInstance method to Instantiate for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add WebSocket service for handling connections and messages

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(crypto-ticker): add WebSocket plugin for real-time cryptocurrency price tracking

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(websocket): enhance connection management and callback handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manager): only create one adapter instance for each adapter/capability pair

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(websocket): ensure proper resource management by closing response body and use defer to unlocking mutexes

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: flaky test

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugin): refactor WebSocket service integration and improve error logging

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugin): add SchedulerCallback support and improve reconnection logic

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: test panic

Signed-off-by: Deluan <deluan@navidrome.org>

* docs: add crypto-ticker plugin example to README

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manager): add LoadAllPlugins and LoadAllMediaAgents methods with slice.Map integration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): add Timestamp field to ScrobblerNowPlayingRequest and update related methods

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(websocket): add error field to response messages for better error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(cache): implement CacheService with string, int, float, and byte operations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(tests): update buffered scrobbler tests for improved scrobble verification and use RWMutex in mock repo

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(cache): simplify cache service implementation and remove unnecessary synchronization

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(tests): add build step for test plugins in the test suite

Signed-off-by: Deluan <deluan@navidrome.org>

* wip

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scheduler): implement named scheduler callbacks and enhance Discord plugin integration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(rpc): enhance activity image processing and improve error handling in Discord integration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(discord): enhance activity state with artist list and add large text asset

Signed-off-by: Deluan <deluan@navidrome.org>

* fix tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(artwork): implement ArtworkService for retrieving artwork URLs

Signed-off-by: Deluan <deluan@navidrome.org>

* Add playback position to scrobble NowPlaying (#4089)

* test(playtracker): cover playback position

* address review comment

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* fix merge

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: remove unnecessary check for empty slice in Map function

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: update reflex.conf to include .wasm file extension

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): normalize attribute strings and add edge case tests for PID calculation

Relates to https://github.com/navidrome/navidrome/issues/4183#issuecomment-2952729458

Signed-off-by: Deluan <deluan@navidrome.org>

* test(ui): fix warnings (#4187)

* fix(ui): address test warnings

* ignore lint error in test

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(server): optimize top songs lookup (#4189)

* optimize top songs lookup

* Optimize title matching queries

* refactor: simplify top songs matching

* improve error handling and logging in track loading functions

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add cases for fallback to title matching and combined MBID/title matching

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): playlist details overflow in spotify-based themes (#4184)

* test: ensure playlist details width

* fix(test): simplify expectation for minWidth in NDPlaylistDetails

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(test): test all themes

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* chore(deps): update TagLib to version 2.1 (#4185)

* chore: update cross-taglib

* fix(taglib): add logging for TagLib version

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* test: verify agents fallback (#4191)

* build(docker): downgrade Alpine version from 3.21 to 3.19, oldest supported version.

This is to reduce the image size, as we don't really need the latest.

Signed-off-by: Deluan <deluan@navidrome.org>

* fix tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(runtime): implement pooled WASM runtime and module for better instance management

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(discord-plugin): adjust timer delay calculation for track completion

Signed-off-by: Deluan <deluan@navidrome.org>

* resolve PR comments

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): implement cache cleanup by size functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(manager): return error from getCompilationCache and handle it in ScanPlugins

Signed-off-by: Deluan <deluan@navidrome.org>

* fix possible rce condition

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(docs): update README to include Cache and Artwork services

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manager): add permissions support for host services in custom runtime - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manifest): add permissions field to plugin manifests - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* test(permissions): implement permission validation and testing for plugins - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(plugins): add unauthorized_plugin to test permission enforcement - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(docs): add Plugin Permission System section to README - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(manifest): add detailed reasons for permissions in plugin manifests - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(permissions): implement granular HTTP permissions for plugins - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(permissions): implement HTTP and WebSocket permissions for plugins - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: unexport all plugins package private symbols

Signed-off-by: Deluan <deluan@navidrome.org>

* update docs

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename plugin_lifecycle_manager

Signed-off-by: Deluan <deluan@navidrome.org>

* docs: add discord-rich-presence plugin example to README

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add support for PATCH, HEAD, and OPTIONS HTTP methods

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: use folder names as unique identifiers for plugins

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: read config just once, to avoid data race in tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename pluginName to pluginID for consistency across services

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: use symlink name instead of folder name for plugin registration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: update plugin output format to include ID and enhance README with symlink usage

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement shared plugin discovery function to streamline plugin scanning and error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: show plugin permissions in `plugin info`

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add JSON schema for Navidrome Plugin manifest and generate corresponding Go types - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement typed permissions for plugins to enhance permission handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: refactor plugin permissions to use typed schema and improve validation - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: update HTTP permissions handling to use typed schema for allowed URLs - WIP

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: remove unused JSON schema validation for plugin manifests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: remove unused fields from PluginPackage struct in package.go

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: update file permissions in tests and remove unused permission parsing function

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: refactor test plugin creation to use typed permissions and remove legacy helper

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: add website field to plugin manifests and update test cases

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: permission schema to use basePermission structure for consistency

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance host service management by adding permission checks for each service

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: reorganize code files

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: simplify custom runtime creation by removing compilation cache parameter

Signed-off-by: Deluan <deluan@navidrome.org>

* doc: add WebSocketService and update ConfigService for plugin-specific configuration

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: implement WASM loading optimization to enhance plugin instance creation speed

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename custom runtime functions and update related tests for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance plugin structure with compilation handling and error reporting

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: improve logging and context tracing in runtime and wasm base plugin

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance runtime management with scoped runtime and caching improvements

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement EnsureCompiled method for improved plugin compilation handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: implement cached module management with TTL for improved performance

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: replace map with sync.Map

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: adjust time tolerance in scrobble buffer repository tests to avoid flakiness

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: enhance image processing with fallback mechanism for improved error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* docs: review test plugins readme

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: set default timeout for HTTP client to 10 seconds

Signed-off-by: Deluan <deluan@navidrome.org>

* feat: enhance wasm instance pool with concurrency limits and timeout settings

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(discordrp): implement caching for processed image URLs with configurable TTL

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-06-22 20:45:38 -04:00
Kendall Garner
7640c474cf fix: Allow nullable ReplayGain and support 0.0 (#4239)
* fix(ui,scanner,subsonic): Allow nullable replaygain and support 0.0

Resolves #4236.

Makes the replaygain columns (track/album gain/peak) nullable.
Converts the type to a pointer, allowing for 0.0 (a valid value) to be returned from Subsonic.
Updates tests for this behavior.

* small refactor

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-17 12:02:25 -04:00
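The commit above replaces plain float fields with pointers so that "tag absent" and a literal 0.0 dB can be told apart. A minimal sketch of that idea, reusing the RGTrackGain/RGTrackPeak field names that appear in the extractor tests later in this diff; the struct tags and the helper are assumptions, not the actual model code:

package model

// A nil pointer means the file carries no ReplayGain tag; a non-nil 0.0 is a
// perfectly valid gain/peak value and must survive all the way to the
// Subsonic response instead of being dropped as "empty".
type ReplayGainInfo struct {
	RGTrackGain *float64 `json:"rgTrackGain,omitempty"`
	RGTrackPeak *float64 `json:"rgTrackPeak,omitempty"`
	RGAlbumGain *float64 `json:"rgAlbumGain,omitempty"`
	RGAlbumPeak *float64 `json:"rgAlbumPeak,omitempty"`
}

// gainOrDefault unwraps the nullable value where a plain float is required.
func gainOrDefault(v *float64, def float64) float64 {
	if v == nil {
		return def
	}
	return *v
}
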
Deluan
4359adc042 test: add coverage for missing id parameter in GetCoverArt
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-16 13:02:00 -04:00
Deluan
8a4936dbc6 test: enhance GetCoverArt tests with context cancellation handling
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-16 12:58:20 -04:00
Kendall Garner
8d594671c4 fix(subsonic): Sort songs by presence of lyrics for getLyrics (#4237)
* fix(subsonic): Sort songs by presence of lyrics for `getLyrics`

The current implementation of `getLyrics` fetches any songs matching the artist and title.
However, this misses a case where there may be multiple matches for the same artist/song, and one has lyrics while the other doesn't.
Resolve this by adding a custom SQL dynamic column that checks for the presence of lyrics.

* add options to selectMediaFile, update test

* more robust testing of GetAllByLyrics

* fix(subsonic): refactor GetAllByLyrics to GetAll with lyrics sorting

Signed-off-by: Deluan <deluan@navidrome.org>

* use has_lyrics, and properly support multiple sort parts

* better handle complicated internal sorts

* just use a simpler filter

* add note to setSortMappings

* remove custom sort mapping, improve test with different updatedat

* refactor tests and mock

Signed-off-by: Deluan <deluan@navidrome.org>

* default order when not specified is `asc`

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-16 12:04:41 -04:00
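A hedged sketch of the query shape the commit above describes: a computed has_lyrics column is added purely for ordering, so that when several files match the same artist/title, the one that actually carries lyrics wins. The column names and the '[]' empty-JSON check are assumptions, not the actual repository code:

package persistence

// Rows with lyrics sort first; any remaining ordering follows the
// repository's normal sort options.
const lyricsLookupSQL = `
SELECT media_file.*,
       (lyrics IS NOT NULL AND lyrics <> '' AND lyrics <> '[]') AS has_lyrics
FROM media_file
WHERE artist = ? AND title = ?
ORDER BY has_lyrics DESC
`
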
Emmanuel Ferdman
873905bdf6 fix(ci): update GoReleaser deprecated configuration (#4234)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-06-15 12:42:37 -04:00
wilywyrm
9249659773 fix(subsonic): getLyrics does not try to retrieve lyrics from external files (#4232) 2025-06-15 12:40:40 -04:00
Deluan
65029968ab refactor: rename chain package to run and update references
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-14 17:19:06 -04:00
Deluan Quintão
5667f6ab75 feat(scanner): add library stats to DB (#4229)
* Combine library stats migrations

* test: verify full library stats

* Fix total_songs calculation

* Fix library stats migration

* fix(scanner): log elapsed time and number of libraries updated during scan

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): refresh library stats conditionally, only if changes were detected

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): refresh library stats conditionally, only if changes were detected

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): update queries to exclude missing entries in library stats

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-14 15:58:33 -04:00
Deluan
44834204de fix(scanner): improve folderEntry methods and hashing logic for better change detection
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-14 12:35:28 -04:00
Deluan
6f749b387b fix(ui): update AboutDialog styles and improve layout
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 17:55:15 -04:00
Deluan
6e84236c1d chore(deps): go mod tidy
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 17:43:06 -04:00
Deluan
5bbde9d9e9 fix(ui): update title attribute for info icon in AppBar component
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 17:36:38 -04:00
Deluan
464a5e7bc4 chore(deps): update Go dependencies to latest versions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 17:30:58 -04:00
Kendall Garner
6fe3e3b6ad fix(db): add user foreign key constraint to annotation table (#4211)
* fix(db): add user foreign key constraint to annotation table

Associates user_id with user.id, with cascade for delete (drop annotation) and update (update annotation).
Migration script will only copy/insert annotations for user IDs that exist

* remove default for user_id

* refactor(db): rename migration correct sequencing

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-13 17:27:57 -04:00
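A hedged sketch of what the migration above does conceptually: SQLite cannot add a foreign key to an existing table, so the usual pattern is to recreate the table with the constraint and copy over only the rows whose user still exists. The column list and names here are illustrative, not the actual migration:

package migrations

const addAnnotationUserFKSQL = `
CREATE TABLE annotation_new (
	user_id   VARCHAR NOT NULL REFERENCES user(id) ON DELETE CASCADE ON UPDATE CASCADE,
	item_id   VARCHAR NOT NULL,
	item_type VARCHAR NOT NULL,
	rating    INTEGER,
	starred   BOOLEAN
	-- remaining annotation columns omitted from this sketch
);

-- copy only annotations whose user still exists, as described above
INSERT INTO annotation_new (user_id, item_id, item_type, rating, starred)
SELECT user_id, item_id, item_type, rating, starred
FROM annotation
WHERE user_id IN (SELECT id FROM user);

DROP TABLE annotation;
ALTER TABLE annotation_new RENAME TO annotation;
`
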
Deluan Quintão
043f79d746 feat(ui): add EnableNowPlaying configuration (default true) (#4219)
* Add EnableNowPlaying config option

* Return 501 for disabled NowPlaying

* chore(tests): remove get_now_playing_route test

* Disable now playing events when disabled

* fix(tests): add mutex for thread-safe access to scrobble buffer

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 00:06:08 -04:00
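A small sketch of the 501 guard described above, assuming a plain net/http handler on the native API; the handler name is hypothetical, while the EnableNowPlaying option and the conf import path are the ones used in this repository:

package nativeapi

import (
	"net/http"

	"github.com/navidrome/navidrome/conf"
)

// nowPlayingHandler bails out with 501 Not Implemented before doing any work
// when the feature is turned off; now-playing events are skipped under the
// same flag.
func nowPlayingHandler(w http.ResponseWriter, r *http.Request) {
	if !conf.Server.EnableNowPlaying {
		http.Error(w, "Now Playing endpoint is disabled", http.StatusNotImplemented)
		return
	}
	// ... fetch the current now-playing entries and write them as JSON
}
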
Deluan
fcba2ba902 fix(ui): always define config resource.
fixes #4224

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-13 00:04:37 -04:00
Deluan Quintão
0d74d36cec feat(scanner): add folder hash for smarter quick scan change detection (#4220)
* Simplify folder hash migration

* fix hashing lint

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

* Update scanner/folder_entry.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-12 13:17:34 -04:00
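A minimal sketch of the quick-scan idea behind the commit above, assuming the hash is derived from each child entry's name, size, and modification time; the real folderEntry hashing may use different inputs:

package scanner

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io/fs"
)

// folderHash fingerprints a folder's direct children so a quick scan can skip
// folders whose hash matches the one stored from the previous scan. Entries
// from os.ReadDir are already sorted by name, keeping the hash stable.
func folderHash(entries []fs.DirEntry) (string, error) {
	h := sha256.New()
	for _, e := range entries {
		info, err := e.Info()
		if err != nil {
			return "", err
		}
		fmt.Fprintf(h, "%s|%d|%d\n", e.Name(), info.Size(), info.ModTime().UnixMilli())
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
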
Deluan
050aa173cc fix(scanner): add 'album_artist' alias for albumartist
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-12 12:53:43 -04:00
Deluan
f7e005a991 fix(server): ensure single record per user by reusing existing playqueue ID
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-11 17:26:13 -04:00
Deluan Quintão
410e457e5a feat(server): add update and clear play queue endpoints to native API (#4215)
* Refactor queue payload handling

* Refine queue update validation

* refactor(queue): avoid loading tracks for validation

* refactor/rename repository methods

Signed-off-by: Deluan <deluan@navidrome.org>

* more tests

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-11 12:02:31 -04:00
Deluan Quintão
356caa93c7 feat(server): allow multiple sort fields in smart playlists (#4214)
* allow multiple sort fields

* Handle invalid sort fields

* Update model/criteria/criteria.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-11 11:34:17 -04:00
Deluan
e350e0ab49 chore(deps): update Go version to 1.24.4
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-11 11:04:58 -04:00
Deluan Quintão
8fcd8ba61a feat(server): add index-based play queue endpoints to native API (#4210)
* Add migration converting playqueue current to index

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(queue): ensure valid current index and improve test coverage

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-10 23:00:44 -04:00
Deluan Quintão
76042ba173 feat(ui): add Now Playing panel for admins (#4209)
* feat(ui): add Now Playing panel and integrate now playing count updates

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: check return value in test to satisfy linter

* fix: format React code with prettier

* fix: resolve race condition in play tracker test

* fix: log error when fetching now playing data fails

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): refactor Now Playing panel with new components and error handling

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): adjust padding and height in Now Playing panel for improved layout

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(cache): add automatic cleanup to prevent goroutine leak on cache garbage collection

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-10 17:22:13 -04:00
Deluan Quintão
a65140b965 feat(ui): add Play Artist's Top Songs button (#4204)
* ui: add Play button to artist toolbar

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

* test(ui): add tests for Play button functionality in ArtistActions

Signed-off-by: Deluan <deluan@navidrome.org>

* ui: update Play button label to Top Songs in ArtistActions

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-09 19:09:04 -04:00
Deluan
aee2a1f8be fix(ui): artist buttons in spotify-ish
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-09 17:56:59 -04:00
Deluan Quintão
5882889a80 feat(ui): Add Artist Radio and Shuffle options (#4186)
* Add Play Similar option

* Add pt-br translation for Play Similar

* Refactor playSimilar and add helper

* Improve Play Similar feedback

* Add artist actions bar with shuffle and radio

* Add Play Similar menu and align artist actions

* Refine artist actions and revert menu option

* fix(ui): enhance layout of ArtistActions and ArtistShow components

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(i18n): revert unused changes

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): improve layout for mobile

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): improve error handling for fetching similar songs

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): enhance error logging for fetching songs in shuffle

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(ui): shuffle handling to use async/await for better readability

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(ui): simplify button label handling in ArtistActions component

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-09 17:06:10 -04:00
Deluan
7928adb3d1 build(docker): downgrade Alpine version from 3.21 to 3.19, oldest supported version.
This is to reduce the image size, as we don't really need the latest.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-09 14:30:48 -04:00
Deluan Quintão
19008ad70e test: verify agents fallback (#4191) 2025-06-08 18:45:06 -04:00
Deluan Quintão
e3f740cafb chore(deps): update TagLib to version 2.1 (#4185)
* chore: update cross-taglib

* fix(taglib): add logging for TagLib version

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-08 15:47:56 -04:00
Deluan Quintão
7d1f5ddf06 fix(ui): playlist details overflow in spotify-based themes (#4184)
* test: ensure playlist details width

* fix(test): simplify expectation for minWidth in NDPlaylistDetails

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(test): test all themes

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-08 14:21:40 -04:00
Deluan Quintão
bc733540f9 refactor(server): optimize top songs lookup (#4189)
* optimize top songs lookup

* Optimize title matching queries

* refactor: simplify top songs matching

* improve error handling and logging in track loading functions

Signed-off-by: Deluan <deluan@navidrome.org>

* test: add cases for fallback to title matching and combined MBID/title matching

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-08 11:44:44 -04:00
Deluan Quintão
844966df89 test(ui): fix warnings (#4187)
* fix(ui): address test warnings

* ignore lint error in test

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-07 23:11:13 -04:00
Deluan
2867cebd55 fix(scanner): normalize attribute strings and add edge case tests for PID calculation
Relates to https://github.com/navidrome/navidrome/issues/4183#issuecomment-2952729458

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-07 12:45:53 -04:00
Deluan Quintão
4172d2332a feat(ui): add song Love and Rating functionality to playlist view (#4134)
* feat(ui): add playlist track love button

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add star rating feature for playlist tracks

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): handle loading state and error logging in toggle love and rating components

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-04 20:38:28 -04:00
Deluan Quintão
ee8ef661c3 fix(ui): update audio title link to include playlist support (#4175)
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-04 18:52:18 -04:00
Deluan Quintão
e3527f9c00 fix(subsonic): fix JukeboxRole logic in GetUser and eliminate code duplication (#4170)
- Fix GetUser JukeboxRole to properly respect AdminOnly setting

- Extract buildUserResponse helper to eliminate duplication between GetUser and GetUsers

- Fix username field inconsistency (GetUsers was using loggedUser.Name instead of UserName)

- Add comprehensive tests covering jukebox role permissions and consistency between methods

Fixes #4160
2025-06-02 21:34:43 -04:00
Patrick O'Shea
a79e05b648 fix(jukebox): jukebox mode doesn't include MusicFolder (#4067)
* fix(configuration.go, mpv.go): Jukebox mode doesn't include MusicFolder in mpv command - #4066

The call to createMPVCommand is not including the MusicFolder path in
mpv command causing it to fail with file not found errors.

Updated default command template and createMPVCommand to use additional
substitution with conf.server.MusicFolder

Signed-off-by: Pat <patso.oshea@gmail.com>

* Revert config.go change, use filepath.Join for cross platform

* Update track.go with mf.AbsolutePath()

---------

Signed-off-by: Pat <patso.oshea@gmail.com>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-02 21:02:26 -04:00
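A hedged sketch of the failure and fix discussed above: the track path stored by the scanner is relative to the library, so it has to be joined with the music folder (cross-platform, via filepath.Join) before being substituted into the mpv command template. Placeholder and function names are illustrative only:

package mpv

import (
	"path/filepath"
	"strings"
)

// expandCommandTemplate substitutes the file placeholder with the absolute
// track path. Before the fix the relative path was passed through untouched,
// so mpv reported "file not found" unless it ran from the music root.
func expandCommandTemplate(template, musicFolder, trackPath string) []string {
	abs := filepath.Join(musicFolder, trackPath)
	args := strings.Fields(template) // naive split; quoted arguments are handled later, in #4162
	for i, a := range args {
		if a == "%f" {
			args[i] = abs
		}
	}
	return args
}
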
Deluan Quintão
011f5891c3 fix(jukebox): fix mpv command and template parsing (#4162)
* test(mpv): add unit tests for MPV command generation and execution

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(mpv): improve command template parsing

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(mpv): update mock script to output arguments to stdout instead of a file

Signed-off-by: Deluan <deluan@navidrome.org>

* test(mpv): add test suite for MPV command functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(mpv): improve MPV command template parsing to handle quoted arguments

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(mpv): simplify MPV command check by removing unnecessary string containment

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(mpv): add error handling for empty command arguments and malformed templates

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-02 20:52:05 -04:00
Kendall Garner
b79e84a535 fix(scanner): update prometheus at the end of the scan (#4163)
* fix(scanner): use prometheus instance over noop if configured properly

* Real Fix: move `WriteAfterScanMetrics` outside gofunc

* refactor: remove unused artwork.CacheWarmer param from CallScan function

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-06-02 20:13:54 -04:00
Deluan
ac966d98a9 fix(ui): improve layout and responsiveness of SelectPlaylistInput component
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-02 12:28:04 -04:00
Deluan
9c4af3c6d0 fix(server): don't override /song routes
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-01 14:41:50 -04:00
Deluan
f5aac7af0d fix(ui): make the height of the AddToPlaylistDialog static.
Signed-off-by: Deluan <deluan@navidrome.org>
2025-06-01 12:00:23 -04:00
Deluan Quintão
36ed2f2f58 refactor: simplify configuration endpoint with JSON serialization (#4159)
* refactor(config): reorganize configuration handling

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(aboutUtils): improve array formatting and handling in TOML conversion

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(aboutUtils): add escapeTomlKey function to handle special characters in TOML keys

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(test): remove unused getNestedValue function

* fix(ui): apply prettier formatting

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-31 19:37:23 -04:00
Deluan Quintão
8e32eeae93 fix(ui): add button is covered when adding to a playlist (#4156)
* refactor: fix SelectPlaylistInput layout and improve readability
- Replace dropdown with fixed list to prevent button overlay
- Break down into smaller focused components
- Add comprehensive test coverage
- Reduce spacing for compact layout

* refactor: update playlist input translations

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: format code with prettier - Fix formatting issues in AddToPlaylistDialog.test.jsx

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 23:15:02 -04:00
Kendall Garner
7bb1fcdd4b fix(ui): DevFlags order in TOML export (#4155)
* fix(ui): update artist link rendering and improve button styles

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): Move Dev* flags before sections in export

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-05-30 23:12:44 -04:00
Deluan Quintão
ded8cf236e feat(ui): add 'Show in Playlist' context menu (#4139)
* Update song playlist menu and endpoint

* feat(ui): show submenu on click, not on hover

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): integrate dataProvider for fetching playlists in song context menu

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): update song context menu to use dataProvider for fetching playlists and inspecting songs

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): stop event propagation when closing playlist submenu

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add 'show in playlist' option to options object

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 21:26:35 -04:00
Deluan Quintão
6dd98e0bed feat(ui): add configuration tab in About dialog (#4142)
* Flatten config endpoint and improve About dialog

* add config resource

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): replace `==` with `===`

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add environment variables

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add sensitive value redaction

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): more translations

Signed-off-by: Deluan <deluan@navidrome.org>

* address PR comments

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add configuration export feature in About dialog

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): translate development flags section header

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(api): refactor routes for keepalive and insights endpoints

Signed-off-by: Deluan <deluan@navidrome.org>

* lint

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(ui): enhance string escaping in formatTomlValue function

Updated the formatTomlValue function to properly escape backslashes in addition to quotes. Added new test cases to ensure correct handling of strings containing both backslashes and quotes.

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): adjust dialog size

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 21:07:08 -04:00
Deluan Quintão
22c3486e38 fix(server): enhance artist folder detection with directory traversal (#4151)
* fix: enhance artist folder detection with directory traversal

Enhanced fromArtistFolder function to implement directory traversal fallback for finding artist images. The original implementation only searched in the calculated artist folder, which failed for single album artists where artist.jpg files were not detected.

Changes:
- Modified fromArtistFolder to search up to 3 directory levels (artist folder + 2 parent levels)
- Extracted findImageInFolder helper function for cleaner code organization
- Added proper boundary checks to prevent infinite traversal
- Maintained backward compatibility with existing functionality

This fix ensures artist.jpg files are properly detected for single album artists while preserving all existing behavior for multi-album artists.

* refactor: address PR review suggestions

Applied review suggestions from gemini-code-assist bot:

- Added maxArtistFolderTraversalDepth constant instead of hardcoded value 3

- Updated error message to mention that parent directories were also searched

- Enhanced test assertion to verify the improved error message

* fix: improve artist folder traversal logic and enhance error logging

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: remove test for special glob characters in artist folder detection

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: add logging for artist image search in folder

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 18:06:14 -04:00
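A compact sketch of the traversal described above, using the maxArtistFolderTraversalDepth constant and the findImageInFolder helper named in the commit message; signatures and the image-name matching are assumptions, not the actual artwork reader:

package artwork

import (
	"os"
	"path/filepath"
)

const maxArtistFolderTraversalDepth = 3 // artist folder + 2 parent levels

// findImageInFolder returns the first existing candidate image in dir.
func findImageInFolder(dir string, names ...string) (string, bool) {
	for _, name := range names {
		candidate := filepath.Join(dir, name)
		if _, err := os.Stat(candidate); err == nil {
			return candidate, true
		}
	}
	return "", false
}

// fromArtistFolder walks up from the calculated artist folder so that
// single-album artists whose artist.jpg lives one or two levels above are
// still found, stopping at the filesystem root to avoid endless traversal.
func fromArtistFolder(start string, names ...string) (string, bool) {
	dir := start
	for i := 0; i < maxArtistFolderTraversalDepth; i++ {
		if img, ok := findImageInFolder(dir, names...); ok {
			return img, true
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			break
		}
		dir = parent
	}
	return "", false
}
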
Michael Tighe
11c9dd4bd9 fix(ui): reset page to 1 on playlist change - #1676 (#4154)
Signed-off-by: Michael Tighe <strideriidx@gmail.com>
2025-05-30 17:28:39 -04:00
Kevian
623919f53e fix(ui): update Spanish translation (#4146)
Changed translation of "Top Rated" from "Los Mejores Calificados" to "Mejor Calificados" for consistency purposes with other list entries. While the previous version was correct, this version is shorter and aligns better with the rest of the terms.
2025-05-30 17:19:04 -04:00
Deluan
920800e909 fix(ui): restructure AboutDialog's version notification layout
Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 16:18:07 -04:00
Deluan Quintão
c12472bd19 fix(ui): update song fetching logic to disable for radio (#4149)
Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-30 08:29:36 -04:00
Deluan
a2d764d5bc test: add tests for filtering artists by role
Signed-off-by: Deluan <deluan@navidrome.org>
2025-05-29 15:44:27 -04:00
560 changed files with 68271 additions and 10077 deletions

View File

@@ -4,10 +4,10 @@
"dockerfile": "Dockerfile",
"args": {
// Update the VARIANT arg to pick a version of Go: 1, 1.15, 1.14
"VARIANT": "1.24",
"VARIANT": "1.25",
// Options
"INSTALL_NODE": "true",
"NODE_VERSION": "v20"
"NODE_VERSION": "v24"
}
},
"workspaceMount": "",

View File

@@ -1,53 +0,0 @@
# Navidrome Code Guidelines
This is a music streaming server written in Go with a React frontend. The application manages music libraries, provides streaming capabilities, and offers various features like artist information, artwork handling, and external service integrations.
## Code Standards
### Backend (Go)
- Follow standard Go conventions and idioms
- Use context propagation for cancellation signals
- Write unit tests for new functionality using Ginkgo/Gomega
- Use mutex appropriately for concurrent operations
- Implement interfaces for dependencies to facilitate testing
### Frontend (React)
- Use functional components with hooks
- Follow React best practices for state management
- Implement PropTypes for component properties
- Prefer using React-Admin and Material-UI components
- Icons should be imported from `react-icons` only
- Follow existing patterns for API interaction
## Repository Structure
- `core/`: Server-side business logic (artwork handling, playback, etc.)
- `ui/`: React frontend components
- `model/`: Data models and repository interfaces
- `server/`: API endpoints and server implementation
- `utils/`: Shared utility functions
- `persistence/`: Database access layer
- `scanner/`: Music library scanning functionality
## Key Guidelines
1. Maintain cache management patterns for performance
2. Follow the existing concurrency patterns (mutex, atomic)
3. Use the testing framework appropriately (Ginkgo/Gomega for Go)
4. Keep UI components focused and reusable
5. Document configuration options in code
6. Consider performance implications when working with music libraries
7. Follow existing error handling patterns
8. Ensure compatibility with external services (LastFM, Spotify)
## Development Workflow
- Test changes thoroughly, especially around concurrent operations
- Validate both backend and frontend interactions
- Consider how changes will affect user experience and performance
- Test with different music library sizes and configurations
- Before committing, ALWAYS run `make format lint test`, and make sure there are no issues
## Important commands
- `make build`: Build the application
- `make test`: Run Go tests
- To run tests for a specific package, use `make test PKG=./pkgname/...`
- `make lintall`: Run linters
- `make format`: Format code

38
.github/pull_request_template.md vendored Normal file
View File

@@ -0,0 +1,38 @@
### Description
<!-- Please provide a clear and concise description of what this PR does and why it is needed. -->
### Related Issues
<!-- List any related issues, e.g., "Fixes #123" or "Related to #456". -->
### Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Refactor
- [ ] Other (please describe):
### Checklist
Please review and check all that apply:
- [ ] My code follows the project's coding style
- [ ] I have tested the changes locally
- [ ] I have added or updated documentation as needed
- [ ] I have added tests that prove my fix/feature works (or explain why not)
- [ ] All existing and new tests pass
### How to Test
<!-- Describe the steps to test your changes. Include setup, commands, and expected results. -->
### Screenshots / Demos (if applicable)
<!-- Add screenshots, GIFs, or links to demos if your change includes UI updates or visual changes. -->
### Additional Notes
<!-- Anything else the maintainer should know? Potential side effects, breaking changes, or areas of concern? -->
<!--
**Tips for Contributors:**
- Be concise but thorough.
- If your PR is large, consider breaking it into smaller PRs.
- Tag the maintainer if you need a prompt review.
- Avoid force pushing to the branch after opening the PR, as it can complicate the review process.
-->

View File

@@ -14,7 +14,7 @@ concurrency:
cancel-in-progress: true
env:
CROSS_TAGLIB_VERSION: "2.0.2-1"
CROSS_TAGLIB_VERSION: "2.1.1-1"
IS_RELEASE: ${{ startsWith(github.ref, 'refs/tags/') && 'true' || 'false' }}
jobs:
@@ -25,7 +25,7 @@ jobs:
git_tag: ${{ steps.git-version.outputs.GIT_TAG }}
git_sha: ${{ steps.git-version.outputs.GIT_SHA }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
fetch-tags: true
@@ -63,7 +63,7 @@ jobs:
name: Lint Go code
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Download TagLib
uses: ./.github/actions/download-taglib
@@ -78,7 +78,7 @@ jobs:
args: --timeout 2m
- name: Run go goimports
run: go run golang.org/x/tools/cmd/goimports@latest -w `find . -name '*.go' | grep -v '_gen.go$'`
run: go run golang.org/x/tools/cmd/goimports@latest -w `find . -name '*.go' | grep -v '_gen.go$' | grep -v '.pb.go$'`
- run: go mod tidy
- name: Verify no changes from goimports and go mod tidy
run: |
@@ -93,7 +93,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Download TagLib
uses: ./.github/actions/download-taglib
@@ -106,7 +106,7 @@ jobs:
- name: Test
run: |
pkg-config --define-prefix --cflags --libs taglib # for debugging
go test -shuffle=on -tags netgo -race -cover ./... -v
go test -shuffle=on -tags netgo -race ./... -v
js:
name: Test JS code
@@ -114,10 +114,10 @@ jobs:
env:
NODE_OPTIONS: "--max_old_space_size=4096"
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
- uses: actions/checkout@v5
- uses: actions/setup-node@v6
with:
node-version: 20
node-version: 24
cache: "npm"
cache-dependency-path: "**/package-lock.json"
@@ -145,7 +145,7 @@ jobs:
name: Lint i18n files
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- run: |
set -e
for file in resources/i18n/*.json; do
@@ -157,6 +157,8 @@ jobs:
exit 1
fi
done
- run: ./.github/workflows/validate-translations.sh -v
check-push-enabled:
name: Check Docker configuration
@@ -189,7 +191,7 @@ jobs:
PLATFORM=$(echo ${{ matrix.platform }} | tr '/' '_')
echo "PLATFORM=$PLATFORM" >> $GITHUB_ENV
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Prepare Docker Buildx
uses: ./.github/actions/prepare-docker
@@ -262,10 +264,10 @@ jobs:
env:
REGISTRY_IMAGE: ghcr.io/${{ github.repository }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Download digests
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
path: /tmp/digests
pattern: digests-*
@@ -316,9 +318,9 @@ jobs:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v5
with:
path: ./binaries
pattern: navidrome-windows*
@@ -350,12 +352,12 @@ jobs:
outputs:
package_list: ${{ steps.set-package-list.outputs.package_list }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/download-artifact@v4
- uses: actions/download-artifact@v5
with:
path: ./binaries
pattern: navidrome-*
@@ -404,7 +406,7 @@ jobs:
item: ${{ fromJson(needs.release.outputs.package_list) }}
steps:
- name: Download all-packages artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
name: packages
path: ./dist

View File

@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'navidrome' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Get updated translations
id: poeditor
env:

236
.github/workflows/validate-translations.sh vendored Executable file
View File

@@ -0,0 +1,236 @@
#!/bin/bash
# validate-translations.sh
#
# This script validates the structure of JSON translation files by comparing them
# against the reference English translation file (ui/src/i18n/en.json).
#
# The script performs the following validations:
# 1. JSON syntax validation using jq
# 2. Structural validation - ensures all keys from English file are present
# 3. Reports missing keys (translation incomplete)
# 4. Reports extra keys (keys not in English reference, possibly deprecated)
# 5. Emits GitHub Actions annotations for CI/CD integration
#
# Usage:
# ./validate-translations.sh
#
# Environment Variables:
# EN_FILE - Path to reference English file (default: ui/src/i18n/en.json)
# TRANSLATION_DIR - Directory containing translation files (default: resources/i18n)
#
# Exit codes:
# 0 - All translations are valid
# 1 - One or more translations have structural issues
#
# GitHub Actions Integration:
# The script outputs GitHub Actions annotations using ::error and ::warning
# format that will be displayed in PR checks and workflow summaries.
# Script to validate JSON translation files structure against en.json
set -e
# Path to the reference English translation file
EN_FILE="${EN_FILE:-ui/src/i18n/en.json}"
TRANSLATION_DIR="${TRANSLATION_DIR:-resources/i18n}"
VERBOSE=false
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case "$1" in
-v|--verbose)
VERBOSE=true
shift
;;
-h|--help)
echo "Usage: $0 [options]"
echo ""
echo "Validates JSON translation files structure against English reference file."
echo ""
echo "Options:"
echo " -h, --help Show this help message"
echo " -v, --verbose Show detailed output (default: only show errors)"
echo ""
echo "Environment Variables:"
echo " EN_FILE Path to reference English file (default: ui/src/i18n/en.json)"
echo " TRANSLATION_DIR Directory with translation files (default: resources/i18n)"
echo ""
echo "Examples:"
echo " $0 # Validate all translation files (quiet mode)"
echo " $0 -v # Validate with detailed output"
echo " EN_FILE=custom/en.json $0 # Use custom reference file"
echo " TRANSLATION_DIR=custom/i18n $0 # Use custom translations directory"
exit 0
;;
*)
echo "Unknown option: $1" >&2
echo "Use --help for usage information" >&2
exit 1
;;
esac
done
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
if [[ "$VERBOSE" == "true" ]]; then
echo "Validating translation files structure against ${EN_FILE}..."
fi
# Check if English reference file exists
if [[ ! -f "$EN_FILE" ]]; then
echo "::error::Reference file $EN_FILE not found"
exit 1
fi
# Function to extract all JSON keys from a file, creating a flat list of dot-separated paths
extract_keys() {
local file="$1"
jq -r 'paths(scalars) as $p | $p | join(".")' "$file" 2>/dev/null | sort
}
# Function to extract all non-empty string keys (to identify structural issues)
extract_structure_keys() {
local file="$1"
# Get only keys where values are not empty strings
jq -r 'paths(scalars) as $p | select(getpath($p) != "") | $p | join(".")' "$file" 2>/dev/null | sort
}
# Function to validate a single translation file
validate_translation() {
local translation_file="$1"
local filename=$(basename "$translation_file")
local has_errors=false
local verbose=${2:-false}
if [[ "$verbose" == "true" ]]; then
echo "Validating $filename..."
fi
# First validate JSON syntax
if ! jq empty "$translation_file" 2>/dev/null; then
echo "::error file=$translation_file::Invalid JSON syntax"
echo -e "${RED}$filename has invalid JSON syntax${NC}"
return 1
fi
# Extract all keys from both files (for statistics)
local en_keys_file=$(mktemp)
local translation_keys_file=$(mktemp)
extract_keys "$EN_FILE" > "$en_keys_file"
extract_keys "$translation_file" > "$translation_keys_file"
# Extract only non-empty structure keys (to validate structural issues)
local en_structure_file=$(mktemp)
local translation_structure_file=$(mktemp)
extract_structure_keys "$EN_FILE" > "$en_structure_file"
extract_structure_keys "$translation_file" > "$translation_structure_file"
# Find structural issues: keys in translation not in English (misplaced)
local extra_keys=$(comm -13 "$en_keys_file" "$translation_keys_file")
# Find missing keys (for statistics only)
local missing_keys=$(comm -23 "$en_keys_file" "$translation_keys_file")
# Count keys for statistics
local total_en_keys=$(wc -l < "$en_keys_file")
local total_translation_keys=$(wc -l < "$translation_keys_file")
local missing_count=0
local extra_count=0
if [[ -n "$missing_keys" ]]; then
missing_count=$(echo "$missing_keys" | grep -c '^' || echo 0)
fi
if [[ -n "$extra_keys" ]]; then
extra_count=$(echo "$extra_keys" | grep -c '^' || echo 0)
has_errors=true
fi
# Report extra/misplaced keys (these are structural issues)
if [[ -n "$extra_keys" ]]; then
if [[ "$verbose" == "true" ]]; then
echo -e "${YELLOW}Misplaced keys in $filename ($extra_count):${NC}"
fi
while IFS= read -r key; do
# Try to find the line number
line=$(grep -n "\"$(echo "$key" | sed 's/.*\.//')" "$translation_file" | head -1 | cut -d: -f1)
line=${line:-1} # Default to line 1 if not found
echo "::error file=$translation_file,line=$line::Misplaced key: $key"
if [[ "$verbose" == "true" ]]; then
echo " + $key (line ~$line)"
fi
done <<< "$extra_keys"
fi
# Clean up temp files
rm -f "$en_keys_file" "$translation_keys_file" "$en_structure_file" "$translation_structure_file"
# Print statistics
if [[ "$verbose" == "true" ]]; then
echo " Keys: $total_translation_keys/$total_en_keys (Missing: $missing_count, Extra/Misplaced: $extra_count)"
if [[ "$has_errors" == "true" ]]; then
echo -e "${RED}$filename has structural issues${NC}"
else
echo -e "${GREEN}$filename structure is valid${NC}"
fi
elif [[ "$has_errors" == "true" ]]; then
echo -e "${RED}$filename has structural issues (Extra/Misplaced: $extra_count)${NC}"
fi
return $([[ "$has_errors" == "true" ]] && echo 1 || echo 0)
}
# Main validation loop
validation_failed=false
total_files=0
failed_files=0
valid_files=0
for translation_file in "$TRANSLATION_DIR"/*.json; do
if [[ -f "$translation_file" ]]; then
total_files=$((total_files + 1))
if ! validate_translation "$translation_file" "$VERBOSE"; then
validation_failed=true
failed_files=$((failed_files + 1))
else
valid_files=$((valid_files + 1))
fi
if [[ "$VERBOSE" == "true" ]]; then
echo "" # Add spacing between files
fi
fi
done
# Summary
if [[ "$VERBOSE" == "true" ]]; then
echo "========================================="
echo "Translation Validation Summary:"
echo " Total files: $total_files"
echo " Valid files: $valid_files"
echo " Files with structural issues: $failed_files"
echo "========================================="
fi
if [[ "$validation_failed" == "true" ]]; then
if [[ "$VERBOSE" == "true" ]]; then
echo -e "${RED}Translation validation failed - $failed_files file(s) have structural issues${NC}"
else
echo -e "${RED}Translation validation failed - $failed_files/$total_files file(s) have structural issues${NC}"
fi
exit 1
elif [[ "$VERBOSE" == "true" ]]; then
echo -e "${GREEN}All translation files are structurally valid${NC}"
fi
exit 0

9
.gitignore vendored
View File

@@ -5,6 +5,7 @@
/navidrome
/iTunes*.xml
/tmp
/bin
data/*
vendor/*/
wiki
@@ -23,7 +24,11 @@ music
docker-compose.yml
!contrib/docker-compose.yml
binaries
navidrome-master
navidrome-*
AGENTS.md
.github/prompts
.github/instructions
.github/git-commit-instructions.md
*.exe
bin/
*.test
*.wasm

2
.nvmrc
View File

@@ -1 +1 @@
v20
v24

View File

@@ -1,8 +1,8 @@
FROM --platform=$BUILDPLATFORM ghcr.io/crazy-max/osxcross:14.5-debian AS osxcross
########################################################################################################################
### Build xx (orignal image: tonistiigi/xx)
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/alpine:3.21 AS xx-build
### Build xx (original image: tonistiigi/xx)
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/alpine:3.19 AS xx-build
# v1.5.0
ENV XX_VERSION=b4e4c451c778822e6742bfc9d9a91d7c7d885c8a
@@ -26,12 +26,14 @@ COPY --from=xx-build /out/ /usr/bin/
########################################################################################################################
### Get TagLib
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/alpine:3.21 AS taglib-build
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/alpine:3.19 AS taglib-build
ARG TARGETPLATFORM
ARG CROSS_TAGLIB_VERSION=2.0.2-1
ARG CROSS_TAGLIB_VERSION=2.1.1-1
ENV CROSS_TAGLIB_RELEASES_URL=https://github.com/navidrome/cross-taglib/releases/download/v${CROSS_TAGLIB_VERSION}/
# wget in busybox can't follow redirects
RUN <<EOT
apk add --no-cache wget
PLATFORM=$(echo ${TARGETPLATFORM} | tr '/' '-')
FILE=taglib-${PLATFORM}.tar.gz
@@ -61,7 +63,7 @@ COPY --from=ui /build /build
########################################################################################################################
### Build Navidrome binary
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/golang:1.24-bookworm AS base
FROM --platform=$BUILDPLATFORM public.ecr.aws/docker/library/golang:1.25-bookworm AS base
RUN apt-get update && apt-get install -y clang lld
COPY --from=xx / /
WORKDIR /workspace
@@ -120,7 +122,7 @@ COPY --from=build /out /
########################################################################################################################
### Build Final Image
FROM public.ecr.aws/docker/library/alpine:3.21 AS final
FROM public.ecr.aws/docker/library/alpine:3.19 AS final
LABEL maintainer="deluan@navidrome.org"
LABEL org.opencontainers.image.source="https://github.com/navidrome/navidrome"

View File

@@ -15,7 +15,8 @@ PLATFORMS ?= $(SUPPORTED_PLATFORMS)
DOCKER_TAG ?= deluan/navidrome:develop
# Taglib version to use in cross-compilation, from https://github.com/navidrome/cross-taglib
CROSS_TAGLIB_VERSION ?= 2.0.2-1
CROSS_TAGLIB_VERSION ?= 2.1.1-1
GOLANGCI_LINT_VERSION ?= v2.5.0
UI_SRC_FILES := $(shell find ui -type f -not -path "ui/build/*" -not -path "ui/node_modules/*")
@@ -32,25 +33,55 @@ server: check_go_env buildjs ##@Development Start the backend in development mod
@ND_ENABLEINSIGHTSCOLLECTOR="false" go tool reflex -d none -c reflex.conf
.PHONY: server
stop: ##@Development Stop development servers (UI and backend)
@echo "Stopping development servers..."
@-pkill -f "vite"
@-pkill -f "go tool reflex.*reflex.conf"
@-pkill -f "go run.*netgo"
@echo "Development servers stopped."
.PHONY: stop
watch: ##@Development Start Go tests in watch mode (re-run when code changes)
go tool ginkgo watch -tags=netgo -notify ./...
.PHONY: watch
PKG ?= ./...
test: ##@Development Run Go tests
test: ##@Development Run Go tests. Use PKG variable to specify packages to test, e.g. make test PKG=./server
go test -tags netgo $(PKG)
.PHONY: test
testrace: ##@Development Run Go tests with race detector
go test -tags netgo -race -shuffle=on ./...
.PHONY: test
testall: testrace ##@Development Run Go and JS tests
@(cd ./ui && npm run test)
testall: test-race test-i18n test-js ##@Development Run Go and JS tests
.PHONY: testall
test-race: ##@Development Run Go tests with race detector
go test -tags netgo -race -shuffle=on ./...
.PHONY: test-race
test-js: ##@Development Run JS tests
@(cd ./ui && npm run test)
.PHONY: test-js
test-i18n: ##@Development Validate all translations files
./.github/workflows/validate-translations.sh
.PHONY: test-i18n
install-golangci-lint: ##@Development Install golangci-lint if not present
@PATH=$$PATH:./bin which golangci-lint > /dev/null || (echo "Installing golangci-lint..." && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s v2.1.6)
@INSTALL=false; \
if PATH=$$PATH:./bin which golangci-lint > /dev/null 2>&1; then \
CURRENT_VERSION=$$(PATH=$$PATH:./bin golangci-lint version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1); \
REQUIRED_VERSION=$$(echo "$(GOLANGCI_LINT_VERSION)" | sed 's/^v//'); \
if [ "$$CURRENT_VERSION" != "$$REQUIRED_VERSION" ]; then \
echo "Found golangci-lint $$CURRENT_VERSION, but $$REQUIRED_VERSION is required. Reinstalling..."; \
rm -f ./bin/golangci-lint; \
INSTALL=true; \
fi; \
else \
INSTALL=true; \
fi; \
if [ "$$INSTALL" = "true" ]; then \
echo "Installing golangci-lint $(GOLANGCI_LINT_VERSION)..."; \
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s $(GOLANGCI_LINT_VERSION); \
fi
.PHONY: install-golangci-lint
lint: install-golangci-lint ##@Development Lint Go code
@@ -64,7 +95,7 @@ lintall: lint ##@Development Lint Go and JS code
format: ##@Development Format code
@(cd ./ui && npm run prettier)
@go tool goimports -w `find . -name '*.go' | grep -v _gen.go$$`
@go tool goimports -w `find . -name '*.go' | grep -v _gen.go$$ | grep -v .pb.go$$`
@go mod tidy
.PHONY: format
@@ -153,6 +184,20 @@ docker-msi: ##@Cross_Compilation Build MSI installer for Windows
@du -h binaries/msi/*.msi
.PHONY: docker-msi
run-docker: ##@Development Run a Navidrome Docker image. Usage: make run-docker tag=<tag>
@if [ -z "$(tag)" ]; then echo "Usage: make run-docker tag=<tag>"; exit 1; fi
@TAG_DIR="tmp/$$(echo '$(tag)' | tr '/:' '_')"; mkdir -p "$$TAG_DIR"; \
VOLUMES="-v $(PWD)/$$TAG_DIR:/data"; \
if [ -f navidrome.toml ]; then \
VOLUMES="$$VOLUMES -v $(PWD)/navidrome.toml:/data/navidrome.toml:ro"; \
MUSIC_FOLDER=$$(grep '^MusicFolder' navidrome.toml | head -n1 | sed 's/.*= *"//' | sed 's/".*//'); \
if [ -n "$$MUSIC_FOLDER" ] && [ -d "$$MUSIC_FOLDER" ]; then \
VOLUMES="$$VOLUMES -v $$MUSIC_FOLDER:/music:ro"; \
fi; \
fi; \
echo "Running: docker run --rm -p 4533:4533 $$VOLUMES $(tag)"; docker run --rm -p 4533:4533 $$VOLUMES $(tag)
.PHONY: run-docker
package: docker-build ##@Cross_Compilation Create binaries and packages for ALL supported platforms
@if [ -z `which goreleaser` ]; then echo "Please install goreleaser first: https://goreleaser.com/install/"; exit 1; fi
goreleaser release -f release/goreleaser.yml --clean --skip=publish --snapshot
@@ -221,6 +266,24 @@ deprecated:
@echo "WARNING: This target is deprecated and will be removed in future releases. Use 'make build' instead."
.PHONY: deprecated
# Generate Go code from plugins/api/api.proto
plugin-gen: check_go_env ##@Development Generate Go code from plugins protobuf files
go generate ./plugins/...
.PHONY: plugin-gen
plugin-examples: check_go_env ##@Development Build all example plugins
$(MAKE) -C plugins/examples clean all
.PHONY: plugin-examples
plugin-clean: check_go_env ##@Development Clean all plugins
$(MAKE) -C plugins/examples clean
$(MAKE) -C plugins/testdata clean
.PHONY: plugin-clean
plugin-tests: check_go_env ##@Development Build all test plugins
$(MAKE) -C plugins/testdata clean all
.PHONY: plugin-tests
.DEFAULT_GOAL := help
HELP_FUN = \

View File

@@ -8,6 +8,7 @@ import (
"github.com/djherbis/times"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/metadata"
"github.com/navidrome/navidrome/utils/gg"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
@@ -78,22 +79,116 @@ var _ = Describe("Extractor", func() {
var e *extractor
parseTestFile := func(path string) *model.MediaFile {
mds, err := e.Parse(path)
Expect(err).ToNot(HaveOccurred())
info, ok := mds[path]
Expect(ok).To(BeTrue())
fileInfo, err := os.Stat(path)
Expect(err).ToNot(HaveOccurred())
info.FileInfo = testFileInfo{FileInfo: fileInfo}
metadata := metadata.New(path, info)
mf := metadata.ToMediaFile(1, "folderID")
return &mf
}
BeforeEach(func() {
e = &extractor{}
})
Describe("ReplayGain", func() {
DescribeTable("test replaygain end-to-end", func(file string, trackGain, trackPeak, albumGain, albumPeak *float64) {
mf := parseTestFile("tests/fixtures/" + file)
Expect(mf.RGTrackGain).To(Equal(trackGain))
Expect(mf.RGTrackPeak).To(Equal(trackPeak))
Expect(mf.RGAlbumGain).To(Equal(albumGain))
Expect(mf.RGAlbumPeak).To(Equal(albumPeak))
},
Entry("mp3 with no replaygain", "no_replaygain.mp3", nil, nil, nil, nil),
Entry("mp3 with no zero replaygain", "zero_replaygain.mp3", gg.P(0.0), gg.P(1.0), gg.P(0.0), gg.P(1.0)),
)
})
Describe("lyrics", func() {
makeLyrics := func(code, secondLine string) model.Lyrics {
return model.Lyrics{
DisplayArtist: "",
DisplayTitle: "",
Lang: code,
Line: []model.Line{
{Start: gg.P(int64(0)), Value: "This is"},
{Start: gg.P(int64(2500)), Value: secondLine},
},
Offset: nil,
Synced: true,
}
}
It("should fetch both synced and unsynced lyrics in mixed flac", func() {
mf := parseTestFile("tests/fixtures/mixed-lyrics.flac")
lyrics, err := mf.StructuredLyrics()
Expect(err).ToNot(HaveOccurred())
Expect(lyrics).To(HaveLen(2))
Expect(lyrics[0].Synced).To(BeTrue())
Expect(lyrics[1].Synced).To(BeFalse())
})
It("should handle mp3 with uslt and sylt", func() {
mf := parseTestFile("tests/fixtures/test.mp3")
lyrics, err := mf.StructuredLyrics()
Expect(err).ToNot(HaveOccurred())
Expect(lyrics).To(HaveLen(4))
engSylt := makeLyrics("eng", "English SYLT")
engUslt := makeLyrics("eng", "English")
unsSylt := makeLyrics("xxx", "unspecified SYLT")
unsUslt := makeLyrics("xxx", "unspecified")
// Why is the order inconsistent between runs? Nobody knows
Expect(lyrics).To(Or(
Equal(model.LyricList{engSylt, engUslt, unsSylt, unsUslt}),
Equal(model.LyricList{unsSylt, unsUslt, engSylt, engUslt}),
))
})
DescribeTable("format-specific lyrics", func(file string, isId3 bool) {
mf := parseTestFile("tests/fixtures/" + file)
lyrics, err := mf.StructuredLyrics()
Expect(err).To(Not(HaveOccurred()))
Expect(lyrics).To(HaveLen(2))
unspec := makeLyrics("xxx", "unspecified")
eng := makeLyrics("xxx", "English")
if isId3 {
eng.Lang = "eng"
}
Expect(lyrics).To(Or(
Equal(model.LyricList{unspec, eng}),
Equal(model.LyricList{eng, unspec})))
},
Entry("flac", "test.flac", false),
Entry("m4a", "test.m4a", false),
Entry("ogg", "test.ogg", false),
Entry("wma", "test.wma", false),
Entry("wv", "test.wv", false),
Entry("wav", "test.wav", true),
Entry("aiff", "test.aiff", true),
)
})
Describe("Participants", func() {
DescribeTable("test tags consistent across formats", func(format string) {
path := "tests/fixtures/test." + format
mds, err := e.Parse(path)
Expect(err).ToNot(HaveOccurred())
info := mds[path]
fileInfo, _ := os.Stat(path)
info.FileInfo = testFileInfo{FileInfo: fileInfo}
metadata := metadata.New(path, info)
mf := metadata.ToMediaFile(1, "folderID")
mf := parseTestFile("tests/fixtures/test." + format)
for _, data := range roles {
role := data.Role
@@ -144,11 +239,40 @@ var _ = Describe("Extractor", func() {
Entry("FLAC format", "flac"),
Entry("M4a format", "m4a"),
Entry("OGG format", "ogg"),
Entry("WMA format", "wv"),
Entry("WV format", "wv"),
Entry("MP3 format", "mp3"),
Entry("WAV format", "wav"),
Entry("AIFF format", "aiff"),
)
It("should parse wma", func() {
mf := parseTestFile("tests/fixtures/test.wma")
for _, data := range roles {
role := data.Role
artists := data.ParticipantList
actual := mf.Participants[role]
// WMA has no Arranger role
if role == model.RoleArranger {
Expect(actual).To(HaveLen(0))
continue
}
Expect(actual).To(HaveLen(len(artists)), role.String())
// For some bizarre reason, the order is inverted. We also don't get
// sort names or MBIDs
for i := range artists {
idx := len(artists) - 1 - i
actualArtist := actual[i]
expectedArtist := artists[idx]
Expect(actualArtist.Name).To(Equal(expectedArtist.Name))
}
}
})
})
})

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/core/storage/local"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model/metadata"
@@ -42,23 +43,21 @@ func (e extractor) extractMetadata(filePath string) (*metadata.Info, error) {
// Parse audio properties
ap := metadata.AudioProperties{}
if length, ok := tags["_lengthinmilliseconds"]; ok && len(length) > 0 {
millis, _ := strconv.Atoi(length[0])
if millis > 0 {
ap.Duration = (time.Millisecond * time.Duration(millis)).Round(time.Millisecond * 10)
}
delete(tags, "_lengthinmilliseconds")
}
parseProp := func(prop string, target *int) {
if value, ok := tags[prop]; ok && len(value) > 0 {
*target, _ = strconv.Atoi(value[0])
delete(tags, prop)
}
}
parseProp("_bitrate", &ap.BitRate)
parseProp("_channels", &ap.Channels)
parseProp("_samplerate", &ap.SampleRate)
parseProp("_bitspersample", &ap.BitDepth)
ap.BitRate = parseProp(tags, "__bitrate")
ap.Channels = parseProp(tags, "__channels")
ap.SampleRate = parseProp(tags, "__samplerate")
ap.BitDepth = parseProp(tags, "__bitspersample")
length := parseProp(tags, "__lengthinmilliseconds")
ap.Duration = (time.Millisecond * time.Duration(length)).Round(time.Millisecond * 10)
// Extract basic tags
parseBasicTag(tags, "__title", "title")
parseBasicTag(tags, "__artist", "artist")
parseBasicTag(tags, "__album", "album")
parseBasicTag(tags, "__comment", "comment")
parseBasicTag(tags, "__genre", "genre")
parseBasicTag(tags, "__year", "year")
parseBasicTag(tags, "__track", "tracknumber")
// Parse track/disc totals
parseTuple := func(prop string) {
@@ -106,6 +105,31 @@ var tiplMapping = map[string]string{
"DJ-mix": "djmixer",
}
// parseProp parses an integer property from the tags map and returns its value,
// or 0 if the property is missing. It also deletes the property from the tags map after parsing.
func parseProp(tags map[string][]string, prop string) int {
if value, ok := tags[prop]; ok && len(value) > 0 {
v, _ := strconv.Atoi(value[0])
delete(tags, prop)
return v
}
return 0
}
// parseBasicTag checks if a basic tag (like __title, __artist, etc.) exists in the tags map.
// If it does, it moves the value to a more appropriate tag name (like title, artist, etc.),
// and deletes the basic tag from the map. If the target tag already exists, it ignores the basic tag.
func parseBasicTag(tags map[string][]string, basicName string, tagName string) {
basicValue := tags[basicName]
if len(basicValue) == 0 {
return
}
delete(tags, basicName)
if len(tags[tagName]) == 0 {
tags[tagName] = basicValue
}
}
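// exampleBasicTagFallback is an illustrative sketch, not part of the actual extractor:
// it shows how parseProp and parseBasicTag are expected to behave on a hypothetical tags
// map. The keys follow the "__" convention used by the C++ wrapper; the values are made-up sample data.
func exampleBasicTagFallback() (int, string) {
	tags := map[string][]string{
		"__bitrate": {"320"},
		"__title":   {"Fallback Title"},
	}
	bitrate := parseProp(tags, "__bitrate") // returns 320 and removes "__bitrate" from the map
	parseBasicTag(tags, "__title", "title") // "title" is absent, so it receives the basic tag value
	return bitrate, tags["title"][0]        // 320, "Fallback Title"
}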
// parseTIPL parses the ID3v2.4 TIPL frame string, which is received from TagLib in the format:
//
// "arranger Andrew Powell engineer Chris Blair engineer Pat Stapley producer Eric Woolfson".
@@ -148,4 +172,7 @@ func init() {
// ignores fs, as taglib extractor only works with local files
return &extractor{baseDir}
})
conf.AddHook(func() {
log.Debug("TagLib version", "version", Version())
})
}

View File

@@ -179,7 +179,7 @@ var _ = Describe("Extractor", func() {
Entry("correctly parses wma/asf tags", "test.wma", "1.02s", 1, 44100, 16, "3.27 dB", "0.132914", "3.27 dB", "0.132914", false, true),
// ffmpeg -f lavfi -i "sine=frequency=800:duration=1" test.wv
Entry("correctly parses wv (wavpak) tags", "test.wv", "1s", 1, 44100, 16, "3.43 dB", "0.125061", "3.43 dB", "0.125061", false, false),
Entry("correctly parses wv (wavpak) tags", "test.wv", "1s", 1, 44100, 16, "3.43 dB", "0.125061", "3.43 dB", "0.125061", false, true),
// ffmpeg -f lavfi -i "sine=frequency=1000:duration=1" test.wav
Entry("correctly parses wav tags", "test.wav", "1s", 1, 44100, 16, "3.06 dB", "0.125056", "3.06 dB", "0.125056", true, true),

View File

@@ -1,6 +1,5 @@
#include <stdlib.h>
#include <string.h>
#include <typeinfo>
#define TAGLIB_STATIC
#include <apeproperties.h>
@@ -46,31 +45,63 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
// Add audio properties to the tags
const TagLib::AudioProperties *props(f.audioProperties());
goPutInt(id, (char *)"_lengthinmilliseconds", props->lengthInMilliseconds());
goPutInt(id, (char *)"_bitrate", props->bitrate());
goPutInt(id, (char *)"_channels", props->channels());
goPutInt(id, (char *)"_samplerate", props->sampleRate());
goPutInt(id, (char *)"__lengthinmilliseconds", props->lengthInMilliseconds());
goPutInt(id, (char *)"__bitrate", props->bitrate());
goPutInt(id, (char *)"__channels", props->channels());
goPutInt(id, (char *)"__samplerate", props->sampleRate());
// Extract bits per sample for supported formats
int bitsPerSample = 0;
if (const auto* apeProperties{ dynamic_cast<const TagLib::APE::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", apeProperties->bitsPerSample());
if (const auto* asfProperties{ dynamic_cast<const TagLib::ASF::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", asfProperties->bitsPerSample());
bitsPerSample = apeProperties->bitsPerSample();
else if (const auto* asfProperties{ dynamic_cast<const TagLib::ASF::Properties*>(props) })
bitsPerSample = asfProperties->bitsPerSample();
else if (const auto* flacProperties{ dynamic_cast<const TagLib::FLAC::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", flacProperties->bitsPerSample());
bitsPerSample = flacProperties->bitsPerSample();
else if (const auto* mp4Properties{ dynamic_cast<const TagLib::MP4::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", mp4Properties->bitsPerSample());
bitsPerSample = mp4Properties->bitsPerSample();
else if (const auto* wavePackProperties{ dynamic_cast<const TagLib::WavPack::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", wavePackProperties->bitsPerSample());
bitsPerSample = wavePackProperties->bitsPerSample();
else if (const auto* aiffProperties{ dynamic_cast<const TagLib::RIFF::AIFF::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", aiffProperties->bitsPerSample());
bitsPerSample = aiffProperties->bitsPerSample();
else if (const auto* wavProperties{ dynamic_cast<const TagLib::RIFF::WAV::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", wavProperties->bitsPerSample());
bitsPerSample = wavProperties->bitsPerSample();
else if (const auto* dsfProperties{ dynamic_cast<const TagLib::DSF::Properties*>(props) })
goPutInt(id, (char *)"_bitspersample", dsfProperties->bitsPerSample());
bitsPerSample = dsfProperties->bitsPerSample();
if (bitsPerSample > 0) {
goPutInt(id, (char *)"__bitspersample", bitsPerSample);
}
// Send all properties to the Go map
TagLib::PropertyMap tags = f.file()->properties();
// Make sure at least the basic properties are extracted
TagLib::Tag *basic = f.file()->tag();
if (!basic->isEmpty()) {
if (!basic->title().isEmpty()) {
tags.insert("__title", basic->title());
}
if (!basic->artist().isEmpty()) {
tags.insert("__artist", basic->artist());
}
if (!basic->album().isEmpty()) {
tags.insert("__album", basic->album());
}
if (!basic->comment().isEmpty()) {
tags.insert("__comment", basic->comment());
}
if (!basic->genre().isEmpty()) {
tags.insert("__genre", basic->genre());
}
if (basic->year() > 0) {
tags.insert("__year", TagLib::String::number(basic->year()));
}
if (basic->track() > 0) {
tags.insert("__track", TagLib::String::number(basic->track()));
}
}
TagLib::ID3v2::Tag *id3Tags = NULL;
// Get some extended/non-standard ID3-only tags (ex: iTunes extended frames)
@@ -113,7 +144,7 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
strncpy(language, bv.data(), 3);
}
char *val = (char *)frame->text().toCString(true);
char *val = const_cast<char*>(frame->text().toCString(true));
goPutLyrics(id, language, val);
}
@@ -132,7 +163,7 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
if (format == TagLib::ID3v2::SynchronizedLyricsFrame::AbsoluteMilliseconds) {
for (const auto &line: frame->synchedText()) {
char *text = (char *)line.text.toCString(true);
char *text = const_cast<char*>(line.text.toCString(true));
goPutLyricLine(id, language, text, line.time);
}
} else if (format == TagLib::ID3v2::SynchronizedLyricsFrame::AbsoluteMpegFrames) {
@@ -141,7 +172,7 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
if (sampleRate != 0) {
for (const auto &line: frame->synchedText()) {
const int timeInMs = (line.time * 1000) / sampleRate;
char *text = (char *)line.text.toCString(true);
char *text = const_cast<char*>(line.text.toCString(true));
goPutLyricLine(id, language, text, timeInMs);
}
}
@@ -160,9 +191,9 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
if (m4afile != NULL) {
const auto itemListMap = m4afile->tag()->itemMap();
for (const auto item: itemListMap) {
char *key = (char *)item.first.toCString(true);
char *key = const_cast<char*>(item.first.toCString(true));
for (const auto value: item.second.toStringList()) {
char *val = (char *)value.toCString(true);
char *val = const_cast<char*>(value.toCString(true));
goPutM4AStr(id, key, val);
}
}
@@ -174,17 +205,24 @@ int taglib_read(const FILENAME_CHAR_T *filename, unsigned long id) {
const TagLib::ASF::Tag *asfTags{asfFile->tag()};
const auto itemListMap = asfTags->attributeListMap();
for (const auto item : itemListMap) {
tags.insert(item.first, item.second.front().toString());
char *key = const_cast<char*>(item.first.toCString(true));
for (auto j = item.second.begin();
j != item.second.end(); ++j) {
char *val = const_cast<char*>(j->toString().toCString(true));
goPutStr(id, key, val);
}
}
}
// Send all collected tags to the Go map
for (TagLib::PropertyMap::ConstIterator i = tags.begin(); i != tags.end();
++i) {
char *key = (char *)i->first.toCString(true);
char *key = const_cast<char*>(i->first.toCString(true));
for (TagLib::StringList::ConstIterator j = i->second.begin();
j != i->second.end(); ++j) {
char *val = (char *)(*j).toCString(true);
char *val = const_cast<char*>((*j).toCString(true));
goPutStr(id, key, val);
}
}
@@ -242,7 +280,19 @@ char has_cover(const TagLib::FileRef f) {
// ----- WMA
else if (TagLib::ASF::File * asfFile{dynamic_cast<TagLib::ASF::File *>(f.file())}) {
const TagLib::ASF::Tag *tag{ asfFile->tag() };
hasCover = tag && asfFile->tag()->attributeListMap().contains("WM/Picture");
hasCover = tag && tag->attributeListMap().contains("WM/Picture");
}
// ----- DSF
else if (TagLib::DSF::File * dsffile{ dynamic_cast<TagLib::DSF::File *>(f.file())}) {
const TagLib::ID3v2::Tag *tag { dsffile->tag() };
hasCover = tag && !tag->frameListMap()["APIC"].isEmpty();
}
// ----- WavPack (APE tag)
else if (TagLib::WavPack::File * wvFile{dynamic_cast<TagLib::WavPack::File *>(f.file())}) {
if (wvFile->hasAPETag()) {
// This is the particular string that Picard uses
hasCover = !wvFile->APETag()->itemListMap()["COVER ART (FRONT)"].isEmpty();
}
}
return hasCover;

View File

@@ -1,186 +1,187 @@
package cmd
import (
"context"
"fmt"
"os"
"strings"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/log"
"github.com/spf13/cobra"
)
var (
backupCount int
backupDir string
force bool
restorePath string
)
func init() {
rootCmd.AddCommand(backupRoot)
backupCmd.Flags().StringVarP(&backupDir, "backup-dir", "d", "", "directory to manually make backup")
backupRoot.AddCommand(backupCmd)
pruneCmd.Flags().StringVarP(&backupDir, "backup-dir", "d", "", "directory holding Navidrome backups")
pruneCmd.Flags().IntVarP(&backupCount, "keep-count", "k", -1, "specify the number of backups to keep. 0 removes ALL backups; negative values use the default from configuration")
pruneCmd.Flags().BoolVarP(&force, "force", "f", false, "bypass warning when backup count is zero")
backupRoot.AddCommand(pruneCmd)
restoreCommand.Flags().StringVarP(&restorePath, "backup-file", "b", "", "path of backup database to restore")
restoreCommand.Flags().BoolVarP(&force, "force", "f", false, "bypass restore warning")
_ = restoreCommand.MarkFlagRequired("backup-file")
backupRoot.AddCommand(restoreCommand)
}
var (
backupRoot = &cobra.Command{
Use: "backup",
Aliases: []string{"bkp"},
Short: "Create, restore and prune database backups",
Long: "Create, restore and prune database backups",
}
backupCmd = &cobra.Command{
Use: "create",
Short: "Create a backup database",
Long: "Manually backup Navidrome database. This will ignore BackupCount",
Run: func(cmd *cobra.Command, _ []string) {
runBackup(cmd.Context())
},
}
pruneCmd = &cobra.Command{
Use: "prune",
Short: "Prune database backups",
Long: "Manually prune database backups according to backup rules",
Run: func(cmd *cobra.Command, _ []string) {
runPrune(cmd.Context())
},
}
restoreCommand = &cobra.Command{
Use: "restore",
Short: "Restore Navidrome database",
Long: "Restore Navidrome database from a backup. This must be done offline",
Run: func(cmd *cobra.Command, _ []string) {
runRestore(cmd.Context())
},
}
)
func runBackup(ctx context.Context) {
if backupDir != "" {
conf.Server.Backup.Path = backupDir
}
idx := strings.LastIndex(conf.Server.DbPath, "?")
var path string
if idx == -1 {
path = conf.Server.DbPath
} else {
path = conf.Server.DbPath[:idx]
}
if _, err := os.Stat(path); os.IsNotExist(err) {
log.Fatal("No existing database", "path", path)
return
}
start := time.Now()
path, err := db.Backup(ctx)
if err != nil {
log.Fatal("Error backing up database", "backup path", conf.Server.BasePath, err)
}
elapsed := time.Since(start)
log.Info("Backup complete", "elapsed", elapsed, "path", path)
}
func runPrune(ctx context.Context) {
if backupDir != "" {
conf.Server.Backup.Path = backupDir
}
if backupCount != -1 {
conf.Server.Backup.Count = backupCount
}
if conf.Server.Backup.Count == 0 && !force {
fmt.Println("Warning: pruning ALL backups")
fmt.Printf("Please enter YES (all caps) to continue: ")
var input string
_, err := fmt.Scanln(&input)
if input != "YES" || err != nil {
log.Warn("Prune cancelled")
return
}
}
idx := strings.LastIndex(conf.Server.DbPath, "?")
var path string
if idx == -1 {
path = conf.Server.DbPath
} else {
path = conf.Server.DbPath[:idx]
}
if _, err := os.Stat(path); os.IsNotExist(err) {
log.Fatal("No existing database", "path", path)
return
}
start := time.Now()
count, err := db.Prune(ctx)
if err != nil {
log.Fatal("Error pruning up database", "backup path", conf.Server.BasePath, err)
}
elapsed := time.Since(start)
log.Info("Prune complete", "elapsed", elapsed, "successfully pruned", count)
}
func runRestore(ctx context.Context) {
idx := strings.LastIndex(conf.Server.DbPath, "?")
var path string
if idx == -1 {
path = conf.Server.DbPath
} else {
path = conf.Server.DbPath[:idx]
}
if _, err := os.Stat(path); os.IsNotExist(err) {
log.Fatal("No existing database", "path", path)
return
}
if !force {
fmt.Println("Warning: restoring the Navidrome database should only be done offline, especially if your backup is very old.")
fmt.Printf("Please enter YES (all caps) to continue: ")
var input string
_, err := fmt.Scanln(&input)
if input != "YES" || err != nil {
log.Warn("Restore cancelled")
return
}
}
start := time.Now()
err := db.Restore(ctx, restorePath)
if err != nil {
log.Fatal("Error restoring database", "backup path", conf.Server.BasePath, err)
}
elapsed := time.Since(start)
log.Info("Restore complete", "elapsed", elapsed)
}
//
//import (
// "context"
// "fmt"
// "os"
// "strings"
// "time"
//
// "github.com/navidrome/navidrome/conf"
// "github.com/navidrome/navidrome/db"
// "github.com/navidrome/navidrome/log"
// "github.com/spf13/cobra"
//)
//
//var (
// backupCount int
// backupDir string
// force bool
// restorePath string
//)
//
//func init() {
// rootCmd.AddCommand(backupRoot)
//
// backupCmd.Flags().StringVarP(&backupDir, "backup-dir", "d", "", "directory to manually make backup")
// backupRoot.AddCommand(backupCmd)
//
// pruneCmd.Flags().StringVarP(&backupDir, "backup-dir", "d", "", "directory holding Navidrome backups")
// pruneCmd.Flags().IntVarP(&backupCount, "keep-count", "k", -1, "specify the number of backups to keep. 0 remove ALL backups, and negative values mean to use the default from configuration")
// pruneCmd.Flags().BoolVarP(&force, "force", "f", false, "bypass warning when backup count is zero")
// backupRoot.AddCommand(pruneCmd)
//
// restoreCommand.Flags().StringVarP(&restorePath, "backup-file", "b", "", "path of backup database to restore")
// restoreCommand.Flags().BoolVarP(&force, "force", "f", false, "bypass restore warning")
// _ = restoreCommand.MarkFlagRequired("backup-file")
// backupRoot.AddCommand(restoreCommand)
//}
//
//var (
// backupRoot = &cobra.Command{
// Use: "backup",
// Aliases: []string{"bkp"},
// Short: "Create, restore and prune database backups",
// Long: "Create, restore and prune database backups",
// }
//
// backupCmd = &cobra.Command{
// Use: "create",
// Short: "Create a backup database",
// Long: "Manually backup Navidrome database. This will ignore BackupCount",
// Run: func(cmd *cobra.Command, _ []string) {
// runBackup(cmd.Context())
// },
// }
//
// pruneCmd = &cobra.Command{
// Use: "prune",
// Short: "Prune database backups",
// Long: "Manually prune database backups according to backup rules",
// Run: func(cmd *cobra.Command, _ []string) {
// runPrune(cmd.Context())
// },
// }
//
// restoreCommand = &cobra.Command{
// Use: "restore",
// Short: "Restore Navidrome database",
// Long: "Restore Navidrome database from a backup. This must be done offline",
// Run: func(cmd *cobra.Command, _ []string) {
// runRestore(cmd.Context())
// },
// }
//)
//
//func runBackup(ctx context.Context) {
// if backupDir != "" {
// conf.Server.Backup.Path = backupDir
// }
//
// idx := strings.LastIndex(conf.Server.DbPath, "?")
// var path string
//
// if idx == -1 {
// path = conf.Server.DbPath
// } else {
// path = conf.Server.DbPath[:idx]
// }
//
// if _, err := os.Stat(path); os.IsNotExist(err) {
// log.Fatal("No existing database", "path", path)
// return
// }
//
// start := time.Now()
// path, err := db.Backup(ctx)
// if err != nil {
// log.Fatal("Error backing up database", "backup path", conf.Server.BasePath, err)
// }
//
// elapsed := time.Since(start)
// log.Info("Backup complete", "elapsed", elapsed, "path", path)
//}
//
//func runPrune(ctx context.Context) {
// if backupDir != "" {
// conf.Server.Backup.Path = backupDir
// }
//
// if backupCount != -1 {
// conf.Server.Backup.Count = backupCount
// }
//
// if conf.Server.Backup.Count == 0 && !force {
// fmt.Println("Warning: pruning ALL backups")
// fmt.Printf("Please enter YES (all caps) to continue: ")
// var input string
// _, err := fmt.Scanln(&input)
//
// if input != "YES" || err != nil {
// log.Warn("Prune cancelled")
// return
// }
// }
//
// idx := strings.LastIndex(conf.Server.DbPath, "?")
// var path string
//
// if idx == -1 {
// path = conf.Server.DbPath
// } else {
// path = conf.Server.DbPath[:idx]
// }
//
// if _, err := os.Stat(path); os.IsNotExist(err) {
// log.Fatal("No existing database", "path", path)
// return
// }
//
// start := time.Now()
// count, err := db.Prune(ctx)
// if err != nil {
// log.Fatal("Error pruning up database", "backup path", conf.Server.BasePath, err)
// }
//
// elapsed := time.Since(start)
//
// log.Info("Prune complete", "elapsed", elapsed, "successfully pruned", count)
//}
//
//func runRestore(ctx context.Context) {
// idx := strings.LastIndex(conf.Server.DbPath, "?")
// var path string
//
// if idx == -1 {
// path = conf.Server.DbPath
// } else {
// path = conf.Server.DbPath[:idx]
// }
//
// if _, err := os.Stat(path); os.IsNotExist(err) {
// log.Fatal("No existing database", "path", path)
// return
// }
//
// if !force {
// fmt.Println("Warning: restoring the Navidrome database should only be done offline, especially if your backup is very old.")
// fmt.Printf("Please enter YES (all caps) to continue: ")
// var input string
// _, err := fmt.Scanln(&input)
//
// if input != "YES" || err != nil {
// log.Warn("Restore cancelled")
// return
// }
// }
//
// start := time.Now()
// err := db.Restore(ctx, restorePath)
// if err != nil {
// log.Fatal("Error restoring database", "backup path", conf.Server.BasePath, err)
// }
//
// elapsed := time.Since(start)
// log.Info("Restore complete", "elapsed", elapsed)
//}

17
cmd/cmd_suite_test.go Normal file
View File

@@ -0,0 +1,17 @@
package cmd
import (
"testing"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestCmd(t *testing.T) {
tests.Init(t, false)
log.SetLevel(log.LevelFatal)
RegisterFailHandler(Fail)
RunSpecs(t, "Cmd Suite")
}

716
cmd/plugin.go Normal file
View File

@@ -0,0 +1,716 @@
package cmd
import (
"cmp"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"text/tabwriter"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/plugins"
"github.com/navidrome/navidrome/plugins/schema"
"github.com/navidrome/navidrome/utils"
"github.com/navidrome/navidrome/utils/slice"
"github.com/spf13/cobra"
)
const (
pluginPackageExtension = ".ndp"
pluginDirPermissions = 0700
pluginFilePermissions = 0600
)
func init() {
pluginCmd := &cobra.Command{
Use: "plugin",
Short: "Manage Navidrome plugins",
Long: "Commands for managing Navidrome plugins",
}
listCmd := &cobra.Command{
Use: "list",
Short: "List installed plugins",
Long: "List all installed plugins with their metadata",
Run: pluginList,
}
infoCmd := &cobra.Command{
Use: "info [pluginPackage|pluginName]",
Short: "Show details of a plugin",
Long: "Show detailed information about a plugin package (.ndp file) or an installed plugin",
Args: cobra.ExactArgs(1),
Run: pluginInfo,
}
installCmd := &cobra.Command{
Use: "install [pluginPackage]",
Short: "Install a plugin from a .ndp file",
Long: "Install a Navidrome Plugin Package (.ndp) file",
Args: cobra.ExactArgs(1),
Run: pluginInstall,
}
removeCmd := &cobra.Command{
Use: "remove [pluginName]",
Short: "Remove an installed plugin",
Long: "Remove a plugin by name",
Args: cobra.ExactArgs(1),
Run: pluginRemove,
}
updateCmd := &cobra.Command{
Use: "update [pluginPackage]",
Short: "Update an existing plugin",
Long: "Update an installed plugin with a new version from a .ndp file",
Args: cobra.ExactArgs(1),
Run: pluginUpdate,
}
refreshCmd := &cobra.Command{
Use: "refresh [pluginName]",
Short: "Reload a plugin without restarting Navidrome",
Long: "Reload and recompile a plugin without needing to restart Navidrome",
Args: cobra.ExactArgs(1),
Run: pluginRefresh,
}
devCmd := &cobra.Command{
Use: "dev [folder_path]",
Short: "Create symlink to development folder",
Long: "Create a symlink from a plugin development folder to the plugins directory for easier development",
Args: cobra.ExactArgs(1),
Run: pluginDev,
}
pluginCmd.AddCommand(listCmd, infoCmd, installCmd, removeCmd, updateCmd, refreshCmd, devCmd)
rootCmd.AddCommand(pluginCmd)
}
// Validation helpers
func validatePluginPackageFile(path string) error {
if !utils.FileExists(path) {
return fmt.Errorf("plugin package not found: %s", path)
}
if filepath.Ext(path) != pluginPackageExtension {
return fmt.Errorf("not a valid plugin package: %s (expected %s extension)", path, pluginPackageExtension)
}
return nil
}
func validatePluginDirectory(pluginsDir, pluginName string) (string, error) {
pluginDir := filepath.Join(pluginsDir, pluginName)
if !utils.FileExists(pluginDir) {
return "", fmt.Errorf("plugin not found: %s (path: %s)", pluginName, pluginDir)
}
return pluginDir, nil
}
func resolvePluginPath(pluginDir string) (resolvedPath string, isSymlink bool, err error) {
// Check if it's a directory or a symlink
lstat, err := os.Lstat(pluginDir)
if err != nil {
return "", false, fmt.Errorf("failed to stat plugin: %w", err)
}
isSymlink = lstat.Mode()&os.ModeSymlink != 0
if isSymlink {
// Resolve the symlink target
targetDir, err := os.Readlink(pluginDir)
if err != nil {
return "", true, fmt.Errorf("failed to resolve symlink: %w", err)
}
// If target is a relative path, make it absolute
if !filepath.IsAbs(targetDir) {
targetDir = filepath.Join(filepath.Dir(pluginDir), targetDir)
}
// Verify the target exists and is a directory
targetInfo, err := os.Stat(targetDir)
if err != nil {
return "", true, fmt.Errorf("failed to access symlink target %s: %w", targetDir, err)
}
if !targetInfo.IsDir() {
return "", true, fmt.Errorf("symlink target is not a directory: %s", targetDir)
}
return targetDir, true, nil
} else if !lstat.IsDir() {
return "", false, fmt.Errorf("not a valid plugin directory: %s", pluginDir)
}
return pluginDir, false, nil
}
// Package handling helpers
func loadAndValidatePackage(ndpPath string) (*plugins.PluginPackage, error) {
if err := validatePluginPackageFile(ndpPath); err != nil {
return nil, err
}
pkg, err := plugins.LoadPackage(ndpPath)
if err != nil {
return nil, fmt.Errorf("failed to load plugin package: %w", err)
}
return pkg, nil
}
func extractAndSetupPlugin(ndpPath, targetDir string) error {
if err := plugins.ExtractPackage(ndpPath, targetDir); err != nil {
return fmt.Errorf("failed to extract plugin package: %w", err)
}
ensurePluginDirPermissions(targetDir)
return nil
}
// Display helpers
func displayPluginTableRow(w *tabwriter.Writer, discovery plugins.PluginDiscoveryEntry) {
if discovery.Error != nil {
// Handle global errors (like directory read failure)
if discovery.ID == "" {
log.Error("Failed to read plugins directory", "folder", conf.Server.Plugins.Folder, discovery.Error)
return
}
// Handle individual plugin errors - show them in the table
fmt.Fprintf(w, "%s\tERROR\tERROR\tERROR\tERROR\t%v\n", discovery.ID, discovery.Error)
return
}
// Mark symlinks with an indicator
nameDisplay := discovery.Manifest.Name
if discovery.IsSymlink {
nameDisplay = nameDisplay + " (dev)"
}
// Convert capabilities to strings
capabilities := slice.Map(discovery.Manifest.Capabilities, func(cap schema.PluginManifestCapabilitiesElem) string {
return string(cap)
})
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\n",
discovery.ID,
nameDisplay,
cmp.Or(discovery.Manifest.Author, "-"),
cmp.Or(discovery.Manifest.Version, "-"),
strings.Join(capabilities, ", "),
cmp.Or(discovery.Manifest.Description, "-"))
}
func displayTypedPermissions(permissions schema.PluginManifestPermissions, indent string) {
if permissions.Http != nil {
fmt.Printf("%shttp:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Http.Reason)
fmt.Printf("%s Allow Local Network: %t\n", indent, permissions.Http.AllowLocalNetwork)
fmt.Printf("%s Allowed URLs:\n", indent)
for urlPattern, methodEnums := range permissions.Http.AllowedUrls {
methods := make([]string, len(methodEnums))
for i, methodEnum := range methodEnums {
methods[i] = string(methodEnum)
}
fmt.Printf("%s %s: [%s]\n", indent, urlPattern, strings.Join(methods, ", "))
}
fmt.Println()
}
if permissions.Config != nil {
fmt.Printf("%sconfig:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Config.Reason)
fmt.Println()
}
if permissions.Scheduler != nil {
fmt.Printf("%sscheduler:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Scheduler.Reason)
fmt.Println()
}
if permissions.Websocket != nil {
fmt.Printf("%swebsocket:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Websocket.Reason)
fmt.Printf("%s Allow Local Network: %t\n", indent, permissions.Websocket.AllowLocalNetwork)
fmt.Printf("%s Allowed URLs: [%s]\n", indent, strings.Join(permissions.Websocket.AllowedUrls, ", "))
fmt.Println()
}
if permissions.Cache != nil {
fmt.Printf("%scache:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Cache.Reason)
fmt.Println()
}
if permissions.Artwork != nil {
fmt.Printf("%sartwork:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Artwork.Reason)
fmt.Println()
}
if permissions.Subsonicapi != nil {
allowedUsers := "All Users"
if len(permissions.Subsonicapi.AllowedUsernames) > 0 {
allowedUsers = strings.Join(permissions.Subsonicapi.AllowedUsernames, ", ")
}
fmt.Printf("%ssubsonicapi:\n", indent)
fmt.Printf("%s Reason: %s\n", indent, permissions.Subsonicapi.Reason)
fmt.Printf("%s Allow Admins: %t\n", indent, permissions.Subsonicapi.AllowAdmins)
fmt.Printf("%s Allowed Usernames: [%s]\n", indent, allowedUsers)
fmt.Println()
}
}
func displayPluginDetails(manifest *schema.PluginManifest, fileInfo *pluginFileInfo, permInfo *pluginPermissionInfo) {
fmt.Println("\nPlugin Information:")
fmt.Printf(" Name: %s\n", manifest.Name)
fmt.Printf(" Author: %s\n", manifest.Author)
fmt.Printf(" Version: %s\n", manifest.Version)
fmt.Printf(" Description: %s\n", manifest.Description)
fmt.Print(" Capabilities: ")
capabilities := make([]string, len(manifest.Capabilities))
for i, cap := range manifest.Capabilities {
capabilities[i] = string(cap)
}
fmt.Print(strings.Join(capabilities, ", "))
fmt.Println()
// Display manifest permissions using the typed permissions
fmt.Println(" Required Permissions:")
displayTypedPermissions(manifest.Permissions, " ")
// Print file information if available
if fileInfo != nil {
fmt.Println("Package Information:")
fmt.Printf(" File: %s\n", fileInfo.path)
fmt.Printf(" Size: %d bytes (%.2f KB)\n", fileInfo.size, float64(fileInfo.size)/1024)
fmt.Printf(" SHA-256: %s\n", fileInfo.hash)
fmt.Printf(" Modified: %s\n", fileInfo.modTime.Format(time.RFC3339))
}
// Print file permissions information if available
if permInfo != nil {
fmt.Println("File Permissions:")
fmt.Printf(" Plugin Directory: %s (%s)\n", permInfo.dirPath, permInfo.dirMode)
if permInfo.isSymlink {
fmt.Printf(" Symlink Target: %s (%s)\n", permInfo.targetPath, permInfo.targetMode)
}
fmt.Printf(" Manifest File: %s\n", permInfo.manifestMode)
if permInfo.wasmMode != "" {
fmt.Printf(" WASM File: %s\n", permInfo.wasmMode)
}
}
}
type pluginFileInfo struct {
path string
size int64
hash string
modTime time.Time
}
type pluginPermissionInfo struct {
dirPath string
dirMode string
isSymlink bool
targetPath string
targetMode string
manifestMode string
wasmMode string
}
func getFileInfo(path string) *pluginFileInfo {
fileInfo, err := os.Stat(path)
if err != nil {
log.Error("Failed to get file information", err)
return nil
}
return &pluginFileInfo{
path: path,
size: fileInfo.Size(),
hash: calculateSHA256(path),
modTime: fileInfo.ModTime(),
}
}
func getPermissionInfo(pluginDir string) *pluginPermissionInfo {
// Get plugin directory permissions
dirInfo, err := os.Lstat(pluginDir)
if err != nil {
log.Error("Failed to get plugin directory permissions", err)
return nil
}
permInfo := &pluginPermissionInfo{
dirPath: pluginDir,
dirMode: dirInfo.Mode().String(),
}
// Check if it's a symlink
if dirInfo.Mode()&os.ModeSymlink != 0 {
permInfo.isSymlink = true
// Get target path and permissions
targetPath, err := os.Readlink(pluginDir)
if err == nil {
if !filepath.IsAbs(targetPath) {
targetPath = filepath.Join(filepath.Dir(pluginDir), targetPath)
}
permInfo.targetPath = targetPath
if targetInfo, err := os.Stat(targetPath); err == nil {
permInfo.targetMode = targetInfo.Mode().String()
}
}
}
// Get manifest file permissions
manifestPath := filepath.Join(pluginDir, "manifest.json")
if manifestInfo, err := os.Stat(manifestPath); err == nil {
permInfo.manifestMode = manifestInfo.Mode().String()
}
// Get WASM file permissions (look for .wasm files)
entries, err := os.ReadDir(pluginDir)
if err == nil {
for _, entry := range entries {
if filepath.Ext(entry.Name()) == ".wasm" {
wasmPath := filepath.Join(pluginDir, entry.Name())
if wasmInfo, err := os.Stat(wasmPath); err == nil {
permInfo.wasmMode = wasmInfo.Mode().String()
break // Just show the first WASM file found
}
}
}
}
return permInfo
}
// Command implementations
func pluginList(cmd *cobra.Command, args []string) {
discoveries := plugins.DiscoverPlugins(conf.Server.Plugins.Folder)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintln(w, "ID\tNAME\tAUTHOR\tVERSION\tCAPABILITIES\tDESCRIPTION")
for _, discovery := range discoveries {
displayPluginTableRow(w, discovery)
}
w.Flush()
}
func pluginInfo(cmd *cobra.Command, args []string) {
path := args[0]
pluginsDir := conf.Server.Plugins.Folder
var manifest *schema.PluginManifest
var fileInfo *pluginFileInfo
var permInfo *pluginPermissionInfo
if filepath.Ext(path) == pluginPackageExtension {
// It's a package file
pkg, err := loadAndValidatePackage(path)
if err != nil {
log.Fatal("Failed to load plugin package", err)
}
manifest = pkg.Manifest
fileInfo = getFileInfo(path)
// No permission info for package files
} else {
// It's a plugin name
pluginDir, err := validatePluginDirectory(pluginsDir, path)
if err != nil {
log.Fatal("Plugin validation failed", err)
}
manifest, err = plugins.LoadManifest(pluginDir)
if err != nil {
log.Fatal("Failed to load plugin manifest", err)
}
// Get permission info for installed plugins
permInfo = getPermissionInfo(pluginDir)
}
displayPluginDetails(manifest, fileInfo, permInfo)
}
func pluginInstall(cmd *cobra.Command, args []string) {
ndpPath := args[0]
pluginsDir := conf.Server.Plugins.Folder
pkg, err := loadAndValidatePackage(ndpPath)
if err != nil {
log.Fatal("Package validation failed", err)
}
// Create target directory based on plugin name
targetDir := filepath.Join(pluginsDir, pkg.Manifest.Name)
// Check if plugin already exists
if utils.FileExists(targetDir) {
log.Fatal("Plugin already installed", "name", pkg.Manifest.Name, "path", targetDir,
"use", "navidrome plugin update")
}
if err := extractAndSetupPlugin(ndpPath, targetDir); err != nil {
log.Fatal("Plugin installation failed", err)
}
fmt.Printf("Plugin '%s' v%s installed successfully\n", pkg.Manifest.Name, pkg.Manifest.Version)
}
func pluginRemove(cmd *cobra.Command, args []string) {
pluginName := args[0]
pluginsDir := conf.Server.Plugins.Folder
pluginDir, err := validatePluginDirectory(pluginsDir, pluginName)
if err != nil {
log.Fatal("Plugin validation failed", err)
}
_, isSymlink, err := resolvePluginPath(pluginDir)
if err != nil {
log.Fatal("Failed to resolve plugin path", err)
}
if isSymlink {
// For symlinked plugins (dev mode), just remove the symlink
if err := os.Remove(pluginDir); err != nil {
log.Fatal("Failed to remove plugin symlink", "name", pluginName, err)
}
fmt.Printf("Development plugin symlink '%s' removed successfully (target directory preserved)\n", pluginName)
} else {
// For regular plugins, remove the entire directory
if err := os.RemoveAll(pluginDir); err != nil {
log.Fatal("Failed to remove plugin directory", "name", pluginName, err)
}
fmt.Printf("Plugin '%s' removed successfully\n", pluginName)
}
}
func pluginUpdate(cmd *cobra.Command, args []string) {
ndpPath := args[0]
pluginsDir := conf.Server.Plugins.Folder
pkg, err := loadAndValidatePackage(ndpPath)
if err != nil {
log.Fatal("Package validation failed", err)
}
// Check if plugin exists
targetDir := filepath.Join(pluginsDir, pkg.Manifest.Name)
if !utils.FileExists(targetDir) {
log.Fatal("Plugin not found", "name", pkg.Manifest.Name, "path", targetDir,
"use", "navidrome plugin install")
}
// Create a backup of the existing plugin
backupDir := targetDir + ".bak." + time.Now().Format("20060102150405")
if err := os.Rename(targetDir, backupDir); err != nil {
log.Fatal("Failed to backup existing plugin", err)
}
// Extract the new package
if err := extractAndSetupPlugin(ndpPath, targetDir); err != nil {
// Restore backup if extraction failed
os.RemoveAll(targetDir)
_ = os.Rename(backupDir, targetDir) // Ignore error as we're already in a fatal path
log.Fatal("Plugin update failed", err)
}
// Remove the backup
os.RemoveAll(backupDir)
fmt.Printf("Plugin '%s' updated to v%s successfully\n", pkg.Manifest.Name, pkg.Manifest.Version)
}
func pluginRefresh(cmd *cobra.Command, args []string) {
pluginName := args[0]
pluginsDir := conf.Server.Plugins.Folder
pluginDir, err := validatePluginDirectory(pluginsDir, pluginName)
if err != nil {
log.Fatal("Plugin validation failed", err)
}
resolvedPath, isSymlink, err := resolvePluginPath(pluginDir)
if err != nil {
log.Fatal("Failed to resolve plugin path", err)
}
if isSymlink {
log.Debug("Processing symlinked plugin", "name", pluginName, "link", pluginDir, "target", resolvedPath)
}
fmt.Printf("Refreshing plugin '%s'...\n", pluginName)
// Get the plugin manager and refresh
mgr := GetPluginManager(cmd.Context())
log.Debug("Scanning plugins directory", "path", pluginsDir)
mgr.ScanPlugins()
log.Info("Waiting for plugin compilation to complete", "name", pluginName)
// Wait for compilation to complete
if err := mgr.EnsureCompiled(pluginName); err != nil {
log.Fatal("Failed to compile refreshed plugin", "name", pluginName, err)
}
log.Info("Plugin compilation completed successfully", "name", pluginName)
fmt.Printf("Plugin '%s' refreshed successfully\n", pluginName)
}
func pluginDev(cmd *cobra.Command, args []string) {
sourcePath, err := filepath.Abs(args[0])
if err != nil {
log.Fatal("Invalid path", "path", args[0], err)
}
pluginsDir := conf.Server.Plugins.Folder
// Validate source directory and manifest
if err := validateDevSource(sourcePath); err != nil {
log.Fatal("Source validation failed", err)
}
// Load manifest to get plugin name
manifest, err := plugins.LoadManifest(sourcePath)
if err != nil {
log.Fatal("Failed to load plugin manifest", "path", filepath.Join(sourcePath, "manifest.json"), err)
}
pluginName := cmp.Or(manifest.Name, filepath.Base(sourcePath))
targetPath := filepath.Join(pluginsDir, pluginName)
// Handle existing target
if err := handleExistingTarget(targetPath, sourcePath); err != nil {
log.Fatal("Failed to handle existing target", err)
}
// Create target directory if needed
if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
log.Fatal("Failed to create plugins directory", "path", filepath.Dir(targetPath), err)
}
// Create the symlink
if err := os.Symlink(sourcePath, targetPath); err != nil {
log.Fatal("Failed to create symlink", "source", sourcePath, "target", targetPath, err)
}
fmt.Printf("Development symlink created: '%s' -> '%s'\n", targetPath, sourcePath)
fmt.Println("Plugin can be refreshed with: navidrome plugin refresh", pluginName)
}
// Utility functions
func validateDevSource(sourcePath string) error {
sourceInfo, err := os.Stat(sourcePath)
if err != nil {
return fmt.Errorf("source folder not found: %s (%w)", sourcePath, err)
}
if !sourceInfo.IsDir() {
return fmt.Errorf("source path is not a directory: %s", sourcePath)
}
manifestPath := filepath.Join(sourcePath, "manifest.json")
if !utils.FileExists(manifestPath) {
return fmt.Errorf("source folder missing manifest.json: %s", sourcePath)
}
return nil
}
func handleExistingTarget(targetPath, sourcePath string) error {
if !utils.FileExists(targetPath) {
return nil // Nothing to handle
}
// Check if it's already a symlink to our source
existingLink, err := os.Readlink(targetPath)
if err == nil && existingLink == sourcePath {
fmt.Printf("Symlink already exists and points to the correct source\n")
return fmt.Errorf("symlink already exists") // This will cause early return in caller
}
// Handle case where target exists but is not a symlink to our source
fmt.Printf("Target path '%s' already exists.\n", targetPath)
fmt.Print("Do you want to replace it? (y/N): ")
var response string
_, err = fmt.Scanln(&response)
if err != nil || strings.ToLower(response) != "y" {
if err != nil {
log.Debug("Error reading input, assuming 'no'", err)
}
return fmt.Errorf("operation canceled")
}
// Remove existing target
if err := os.RemoveAll(targetPath); err != nil {
return fmt.Errorf("failed to remove existing target %s: %w", targetPath, err)
}
return nil
}
func ensurePluginDirPermissions(dir string) {
if err := os.Chmod(dir, pluginDirPermissions); err != nil {
log.Error("Failed to set plugin directory permissions", "dir", dir, err)
}
// Apply permissions to all files in the directory
entries, err := os.ReadDir(dir)
if err != nil {
log.Error("Failed to read plugin directory", "dir", dir, err)
return
}
for _, entry := range entries {
path := filepath.Join(dir, entry.Name())
info, err := os.Stat(path)
if err != nil {
log.Error("Failed to stat file", "path", path, err)
continue
}
mode := os.FileMode(pluginFilePermissions) // Files
if info.IsDir() {
mode = os.FileMode(pluginDirPermissions) // Directories
ensurePluginDirPermissions(path) // Recursive
}
if err := os.Chmod(path, mode); err != nil {
log.Error("Failed to set file permissions", "path", path, err)
}
}
}
func calculateSHA256(filePath string) string {
file, err := os.Open(filePath)
if err != nil {
log.Error("Failed to open file for hashing", err)
return "N/A"
}
defer file.Close()
hasher := sha256.New()
if _, err := io.Copy(hasher, file); err != nil {
log.Error("Failed to calculate hash", err)
return "N/A"
}
return hex.EncodeToString(hasher.Sum(nil))
}

193
cmd/plugin_test.go Normal file
View File

@@ -0,0 +1,193 @@
package cmd
import (
"io"
"os"
"path/filepath"
"strings"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/spf13/cobra"
)
var _ = Describe("Plugin CLI Commands", func() {
var tempDir string
var cmd *cobra.Command
var stdOut *os.File
var origStdout *os.File
var outReader *os.File
// Helper to create a test plugin with the given name and details
createTestPlugin := func(name, author, version string, capabilities []string) string {
pluginDir := filepath.Join(tempDir, name)
Expect(os.MkdirAll(pluginDir, 0755)).To(Succeed())
// Create a properly formatted capabilities JSON array
capabilitiesJSON := `"` + strings.Join(capabilities, `", "`) + `"`
manifest := `{
"name": "` + name + `",
"author": "` + author + `",
"version": "` + version + `",
"description": "Plugin for testing",
"website": "https://test.navidrome.org/` + name + `",
"capabilities": [` + capabilitiesJSON + `],
"permissions": {}
}`
Expect(os.WriteFile(filepath.Join(pluginDir, "manifest.json"), []byte(manifest), 0600)).To(Succeed())
// Create a dummy WASM file
wasmContent := []byte("dummy wasm content for testing")
Expect(os.WriteFile(filepath.Join(pluginDir, "plugin.wasm"), wasmContent, 0600)).To(Succeed())
return pluginDir
}
// Helper to execute a command and return captured output
captureOutput := func(reader io.Reader) string {
stdOut.Close()
outputBytes, err := io.ReadAll(reader)
Expect(err).NotTo(HaveOccurred())
return string(outputBytes)
}
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
tempDir = GinkgoT().TempDir()
// Setup config
conf.Server.Plugins.Enabled = true
conf.Server.Plugins.Folder = tempDir
// Create a command for testing
cmd = &cobra.Command{Use: "test"}
// Setup stdout capture
origStdout = os.Stdout
var err error
outReader, stdOut, err = os.Pipe()
Expect(err).NotTo(HaveOccurred())
os.Stdout = stdOut
DeferCleanup(func() {
os.Stdout = origStdout
})
})
AfterEach(func() {
os.Stdout = origStdout
if stdOut != nil {
stdOut.Close()
}
if outReader != nil {
outReader.Close()
}
})
Describe("Plugin list command", func() {
It("should list installed plugins", func() {
// Create test plugins
createTestPlugin("plugin1", "Test Author", "1.0.0", []string{"MetadataAgent"})
createTestPlugin("plugin2", "Another Author", "2.1.0", []string{"Scrobbler"})
// Execute command
pluginList(cmd, []string{})
// Verify output
output := captureOutput(outReader)
Expect(output).To(ContainSubstring("plugin1"))
Expect(output).To(ContainSubstring("Test Author"))
Expect(output).To(ContainSubstring("1.0.0"))
Expect(output).To(ContainSubstring("MetadataAgent"))
Expect(output).To(ContainSubstring("plugin2"))
Expect(output).To(ContainSubstring("Another Author"))
Expect(output).To(ContainSubstring("2.1.0"))
Expect(output).To(ContainSubstring("Scrobbler"))
})
})
Describe("Plugin info command", func() {
It("should display information about an installed plugin", func() {
// Create test plugin with multiple capabilities
createTestPlugin("test-plugin", "Test Author", "1.0.0",
[]string{"MetadataAgent", "Scrobbler"})
// Execute command
pluginInfo(cmd, []string{"test-plugin"})
// Verify output
output := captureOutput(outReader)
Expect(output).To(ContainSubstring("Name: test-plugin"))
Expect(output).To(ContainSubstring("Author: Test Author"))
Expect(output).To(ContainSubstring("Version: 1.0.0"))
Expect(output).To(ContainSubstring("Description: Plugin for testing"))
Expect(output).To(ContainSubstring("Capabilities: MetadataAgent, Scrobbler"))
})
})
Describe("Plugin remove command", func() {
It("should remove a regular plugin directory", func() {
// Create test plugin
pluginDir := createTestPlugin("regular-plugin", "Test Author", "1.0.0",
[]string{"MetadataAgent"})
// Execute command
pluginRemove(cmd, []string{"regular-plugin"})
// Verify output
output := captureOutput(outReader)
Expect(output).To(ContainSubstring("Plugin 'regular-plugin' removed successfully"))
// Verify directory is actually removed
_, err := os.Stat(pluginDir)
Expect(os.IsNotExist(err)).To(BeTrue())
})
It("should remove only the symlink for a development plugin", func() {
// Create a real source directory
sourceDir := filepath.Join(GinkgoT().TempDir(), "dev-plugin-source")
Expect(os.MkdirAll(sourceDir, 0755)).To(Succeed())
manifest := `{
"name": "dev-plugin",
"author": "Dev Author",
"version": "0.1.0",
"description": "Development plugin for testing",
"website": "https://test.navidrome.org/dev-plugin",
"capabilities": ["Scrobbler"],
"permissions": {}
}`
Expect(os.WriteFile(filepath.Join(sourceDir, "manifest.json"), []byte(manifest), 0600)).To(Succeed())
// Create a dummy WASM file
wasmContent := []byte("dummy wasm content for testing")
Expect(os.WriteFile(filepath.Join(sourceDir, "plugin.wasm"), wasmContent, 0600)).To(Succeed())
// Create a symlink in the plugins directory
symlinkPath := filepath.Join(tempDir, "dev-plugin")
Expect(os.Symlink(sourceDir, symlinkPath)).To(Succeed())
// Execute command
pluginRemove(cmd, []string{"dev-plugin"})
// Verify output
output := captureOutput(outReader)
Expect(output).To(ContainSubstring("Development plugin symlink 'dev-plugin' removed successfully"))
Expect(output).To(ContainSubstring("target directory preserved"))
// Verify the symlink is removed but source directory exists
_, err := os.Lstat(symlinkPath)
Expect(os.IsNotExist(err)).To(BeTrue())
_, err = os.Stat(sourceDir)
Expect(err).NotTo(HaveOccurred())
})
})
})

View File

@@ -16,7 +16,6 @@ import (
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/resources"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/scheduler"
"github.com/navidrome/navidrome/server/backgrounds"
"github.com/spf13/cobra"
@@ -81,9 +80,9 @@ func runNavidrome(ctx context.Context) {
g.Go(startPlaybackServer(ctx))
g.Go(schedulePeriodicBackup(ctx))
g.Go(startInsightsCollector(ctx))
g.Go(scheduleDBOptimizer(ctx))
g.Go(startPluginManager(ctx))
g.Go(runInitialScan(ctx))
if conf.Server.Scanner.Enabled {
g.Go(runInitialScan(ctx))
g.Go(startScanWatcher(ctx))
g.Go(schedulePeriodicScan(ctx))
} else {
@@ -109,7 +108,7 @@ func mainContext(ctx context.Context) (context.Context, context.CancelFunc) {
func startServer(ctx context.Context) func() error {
return func() error {
a := CreateServer()
a.MountRouter("Native API", consts.URLPathNativeAPI, CreateNativeAPIRouter())
a.MountRouter("Native API", consts.URLPathNativeAPI, CreateNativeAPIRouter(ctx))
a.MountRouter("Subsonic API", consts.URLPathSubsonicAPI, CreateSubsonicAPIRouter(ctx))
a.MountRouter("Public Endpoints", consts.URLPathPublic, CreatePublicRouter())
if conf.Server.LastFM.Enabled {
@@ -147,7 +146,7 @@ func schedulePeriodicScan(ctx context.Context) func() error {
schedulerInstance := scheduler.GetInstance()
log.Info("Scheduling periodic scan", "schedule", schedule)
err := schedulerInstance.Add(schedule, func() {
_, err := schedulerInstance.Add(schedule, func() {
_, err := s.ScanAll(ctx, false)
if err != nil {
log.Error(ctx, "Error executing periodic scan", err)
@@ -172,6 +171,7 @@ func pidHashChanged(ds model.DataStore) (bool, error) {
return !strings.EqualFold(pidAlbum, conf.Server.PID.Album) || !strings.EqualFold(pidTrack, conf.Server.PID.Track), nil
}
// runInitialScan runs an initial scan of the music library if needed.
func runInitialScan(ctx context.Context) func() error {
return func() error {
ds := CreateDataStore()
@@ -190,7 +190,7 @@ func runInitialScan(ctx context.Context) func() error {
scanNeeded := conf.Server.Scanner.ScanOnStartup || inProgress || fullScanRequired == "1" || pidHasChanged
time.Sleep(2 * time.Second) // Wait 2 seconds before the initial scan
if scanNeeded {
scanner := CreateScanner(ctx)
s := CreateScanner(ctx)
switch {
case fullScanRequired == "1":
log.Warn(ctx, "Full scan required after migration")
@@ -204,7 +204,7 @@ func runInitialScan(ctx context.Context) func() error {
log.Info("Executing initial scan")
}
_, err = scanner.ScanAll(ctx, fullScanRequired == "1")
_, err = s.ScanAll(ctx, fullScanRequired == "1")
if err != nil {
log.Error(ctx, "Scan failed", err)
} else {
@@ -234,51 +234,37 @@ func startScanWatcher(ctx context.Context) func() error {
func schedulePeriodicBackup(ctx context.Context) func() error {
return func() error {
schedule := conf.Server.Backup.Schedule
if schedule == "" {
log.Info(ctx, "Periodic backup is DISABLED")
return nil
}
schedulerInstance := scheduler.GetInstance()
log.Info("Scheduling periodic backup", "schedule", schedule)
err := schedulerInstance.Add(schedule, func() {
start := time.Now()
path, err := db.Backup(ctx)
elapsed := time.Since(start)
if err != nil {
log.Error(ctx, "Error backing up database", "elapsed", elapsed, err)
return
}
log.Info(ctx, "Backup complete", "elapsed", elapsed, "path", path)
count, err := db.Prune(ctx)
if err != nil {
log.Error(ctx, "Error pruning database", "error", err)
} else if count > 0 {
log.Info(ctx, "Successfully pruned old files", "count", count)
} else {
log.Info(ctx, "No backups pruned")
}
})
return err
}
}
func scheduleDBOptimizer(ctx context.Context) func() error {
return func() error {
log.Info(ctx, "Scheduling DB optimizer", "schedule", consts.OptimizeDBSchedule)
schedulerInstance := scheduler.GetInstance()
err := schedulerInstance.Add(consts.OptimizeDBSchedule, func() {
if scanner.IsScanning() {
log.Debug(ctx, "Skipping DB optimization because a scan is in progress")
return
}
db.Optimize(ctx)
})
return err
//schedule := conf.Server.Backup.Schedule
//if schedule == "" {
// log.Info(ctx, "Periodic backup is DISABLED")
// return nil
//}
//
//schedulerInstance := scheduler.GetInstance()
//
//log.Info("Scheduling periodic backup", "schedule", schedule)
//_, err := schedulerInstance.Add(schedule, func() {
// start := time.Now()
// path, err := db.Backup(ctx)
// elapsed := time.Since(start)
// if err != nil {
// log.Error(ctx, "Error backing up database", "elapsed", elapsed, err)
// return
// }
// log.Info(ctx, "Backup complete", "elapsed", elapsed, "path", path)
//
// count, err := db.Prune(ctx)
// if err != nil {
// log.Error(ctx, "Error pruning database", "error", err)
// } else if count > 0 {
// log.Info(ctx, "Successfully pruned old files", "count", count)
// } else {
// log.Info(ctx, "No backups pruned")
// }
//})
//
//return err
return nil
}
}
@@ -325,6 +311,22 @@ func startPlaybackServer(ctx context.Context) func() error {
}
}
// startPluginManager starts the plugin manager, if configured.
func startPluginManager(ctx context.Context) func() error {
return func() error {
if !conf.Server.Plugins.Enabled {
log.Debug("Plugins are DISABLED")
return nil
}
log.Info(ctx, "Starting plugin manager")
// Get the manager instance and scan for plugins
manager := GetPluginManager(ctx)
manager.ScanPlugins()
return nil
}
}
// TODO: Implement some struct tags to map flags to viper
func init() {
cobra.OnInitialize(func() {

View File

@@ -6,8 +6,6 @@ import (
"os"
"github.com/navidrome/navidrome/core"
"github.com/navidrome/navidrome/core/artwork"
"github.com/navidrome/navidrome/core/metrics"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/persistence"
@@ -65,12 +63,14 @@ func trackScanAsSubprocess(ctx context.Context, progress <-chan *scanner.Progres
}
func runScanner(ctx context.Context) {
defer db.Init(ctx)()
sqlDB := db.Db()
defer db.Db().Close()
ds := persistence.New(sqlDB)
pls := core.NewPlaylists(ds)
progress, err := scanner.CallScan(ctx, ds, artwork.NoopCacheWarmer(), pls, metrics.NewNoopInstance(), fullScan)
progress, err := scanner.CallScan(ctx, ds, pls, fullScan)
if err != nil {
log.Fatal(ctx, "Failed to scan", err)
}

View File

@@ -22,6 +22,7 @@ import (
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/plugins"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/server"
"github.com/navidrome/navidrome/server/events"
@@ -46,18 +47,32 @@ func CreateServer() *server.Server {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
broker := events.GetBroker()
insights := metrics.GetInstance(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
insights := metrics.GetInstance(dataStore, manager)
serverServer := server.New(dataStore, broker, insights)
return serverServer
}
func CreateNativeAPIRouter() *nativeapi.Router {
func CreateNativeAPIRouter(ctx context.Context) *nativeapi.Router {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
share := core.NewShare(dataStore)
playlists := core.NewPlaylists(dataStore)
insights := metrics.GetInstance(dataStore)
router := nativeapi.New(dataStore, share, playlists, insights)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
insights := metrics.GetInstance(dataStore, manager)
fileCache := artwork.GetImageCache()
fFmpeg := ffmpeg.New()
agentsAgents := agents.GetAgents(dataStore, manager)
provider := external.NewProvider(dataStore, agentsAgents)
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.GetWatcher(dataStore, scannerScanner)
library := core.NewLibrary(dataStore, scannerScanner, watcher, broker)
router := nativeapi.New(dataStore, share, playlists, insights, library)
return router
}
@@ -66,7 +81,9 @@ func CreateSubsonicAPIRouter(ctx context.Context) *subsonic.Router {
dataStore := persistence.New(sqlDB)
fileCache := artwork.GetImageCache()
fFmpeg := ffmpeg.New()
agentsAgents := agents.GetAgents(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
agentsAgents := agents.GetAgents(dataStore, manager)
provider := external.NewProvider(dataStore, agentsAgents)
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
transcodingCache := core.GetTranscodingCache()
@@ -77,11 +94,10 @@ func CreateSubsonicAPIRouter(ctx context.Context) *subsonic.Router {
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
metricsMetrics := metrics.NewPrometheusInstance(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
playTracker := scrobbler.GetPlayTracker(dataStore, broker)
playTracker := scrobbler.GetPlayTracker(dataStore, broker, manager)
playbackServer := playback.GetInstance(dataStore)
router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, scannerScanner, broker, playlists, playTracker, share, playbackServer)
router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, scannerScanner, broker, playlists, playTracker, share, playbackServer, metricsMetrics)
return router
}
@@ -90,7 +106,9 @@ func CreatePublicRouter() *public.Router {
dataStore := persistence.New(sqlDB)
fileCache := artwork.GetImageCache()
fFmpeg := ffmpeg.New()
agentsAgents := agents.GetAgents(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
agentsAgents := agents.GetAgents(dataStore, manager)
provider := external.NewProvider(dataStore, agentsAgents)
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
transcodingCache := core.GetTranscodingCache()
@@ -118,14 +136,16 @@ func CreateListenBrainzRouter() *listenbrainz.Router {
func CreateInsights() metrics.Insights {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
insights := metrics.GetInstance(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
insights := metrics.GetInstance(dataStore, manager)
return insights
}
func CreatePrometheus() metrics.Metrics {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
metricsMetrics := metrics.NewPrometheusInstance(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
return metricsMetrics
}
@@ -134,13 +154,14 @@ func CreateScanner(ctx context.Context) scanner.Scanner {
dataStore := persistence.New(sqlDB)
fileCache := artwork.GetImageCache()
fFmpeg := ffmpeg.New()
agentsAgents := agents.GetAgents(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
agentsAgents := agents.GetAgents(dataStore, manager)
provider := external.NewProvider(dataStore, agentsAgents)
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
metricsMetrics := metrics.NewPrometheusInstance(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
return scannerScanner
}
@@ -150,15 +171,16 @@ func CreateScanWatcher(ctx context.Context) scanner.Watcher {
dataStore := persistence.New(sqlDB)
fileCache := artwork.GetImageCache()
fFmpeg := ffmpeg.New()
agentsAgents := agents.GetAgents(dataStore)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
agentsAgents := agents.GetAgents(dataStore, manager)
provider := external.NewProvider(dataStore, agentsAgents)
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
metricsMetrics := metrics.NewPrometheusInstance(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.NewWatcher(dataStore, scannerScanner)
watcher := scanner.GetWatcher(dataStore, scannerScanner)
return watcher
}
@@ -169,6 +191,20 @@ func GetPlaybackServer() playback.PlaybackServer {
return playbackServer
}
func getPluginManager() plugins.Manager {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
metricsMetrics := metrics.GetPrometheusInstance(dataStore)
manager := plugins.GetManager(dataStore, metricsMetrics)
return manager
}
// wire_injectors.go:
var allProviders = wire.NewSet(core.Set, artwork.Set, server.New, subsonic.New, nativeapi.New, public.New, persistence.New, lastfm.NewRouter, listenbrainz.NewRouter, events.GetBroker, scanner.New, scanner.NewWatcher, metrics.NewPrometheusInstance, db.Db)
var allProviders = wire.NewSet(core.Set, artwork.Set, server.New, subsonic.New, nativeapi.New, public.New, persistence.New, lastfm.NewRouter, listenbrainz.NewRouter, events.GetBroker, scanner.New, scanner.GetWatcher, plugins.GetManager, metrics.GetPrometheusInstance, db.Db, wire.Bind(new(agents.PluginLoader), new(plugins.Manager)), wire.Bind(new(scrobbler.PluginLoader), new(plugins.Manager)), wire.Bind(new(metrics.PluginLoader), new(plugins.Manager)), wire.Bind(new(core.Scanner), new(scanner.Scanner)), wire.Bind(new(core.Watcher), new(scanner.Watcher)))
func GetPluginManager(ctx context.Context) plugins.Manager {
manager := getPluginManager()
manager.SetSubsonicRouter(CreateSubsonicAPIRouter(ctx))
return manager
}
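Note on the wiring above: this is the Wire-generated injector code, and the new wire.Bind entries declare that the single plugins.Manager instance satisfies agents.PluginLoader, scrobbler.PluginLoader and metrics.PluginLoader. A hand-written sketch of what the generated path effectively does (illustrative only, not the generated code; constructor signatures are taken from the diff above):

package example // hypothetical package, for illustration only

import (
	"github.com/navidrome/navidrome/core/agents"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/persistence"
	"github.com/navidrome/navidrome/plugins"
)

// buildAgentsByHand mirrors the generated code path: one metrics instance, one plugin
// manager, and that manager reused wherever a PluginLoader interface is expected.
func buildAgentsByHand() *agents.Agents {
	ds := persistence.New(db.Db())
	m := metrics.GetPrometheusInstance(ds) // shared Prometheus metrics singleton
	mgr := plugins.GetManager(ds, m)       // plugin manager...
	return agents.GetAgents(ds, mgr)       // ...passed here as the agents.PluginLoader
}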

View File

@@ -7,14 +7,17 @@ import (
"github.com/google/wire"
"github.com/navidrome/navidrome/core"
"github.com/navidrome/navidrome/core/agents"
"github.com/navidrome/navidrome/core/agents/lastfm"
"github.com/navidrome/navidrome/core/agents/listenbrainz"
"github.com/navidrome/navidrome/core/artwork"
"github.com/navidrome/navidrome/core/metrics"
"github.com/navidrome/navidrome/core/playback"
"github.com/navidrome/navidrome/core/scrobbler"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/plugins"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/server"
"github.com/navidrome/navidrome/server/events"
@@ -35,9 +38,15 @@ var allProviders = wire.NewSet(
listenbrainz.NewRouter,
events.GetBroker,
scanner.New,
scanner.NewWatcher,
metrics.NewPrometheusInstance,
scanner.GetWatcher,
plugins.GetManager,
metrics.GetPrometheusInstance,
db.Db,
wire.Bind(new(agents.PluginLoader), new(plugins.Manager)),
wire.Bind(new(scrobbler.PluginLoader), new(plugins.Manager)),
wire.Bind(new(metrics.PluginLoader), new(plugins.Manager)),
wire.Bind(new(core.Scanner), new(scanner.Scanner)),
wire.Bind(new(core.Watcher), new(scanner.Watcher)),
)
func CreateDataStore() model.DataStore {
@@ -52,7 +61,7 @@ func CreateServer() *server.Server {
))
}
func CreateNativeAPIRouter() *nativeapi.Router {
func CreateNativeAPIRouter(ctx context.Context) *nativeapi.Router {
panic(wire.Build(
allProviders,
))
@@ -111,3 +120,15 @@ func GetPlaybackServer() playback.PlaybackServer {
allProviders,
))
}
func getPluginManager() plugins.Manager {
panic(wire.Build(
allProviders,
))
}
func GetPluginManager(ctx context.Context) plugins.Manager {
manager := getPluginManager()
manager.SetSubsonicRouter(CreateSubsonicAPIRouter(ctx))
return manager
}
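For readers unfamiliar with Google Wire: the functions above are build-tagged stubs, and running the wire tool regenerates the injector file shown earlier from the panic(wire.Build(...)) declarations. A minimal stand-alone illustration of the pattern, using hypothetical types rather than Navidrome's:

//go:build wireinject

package example

import "github.com/google/wire"

type Repo struct{}
type Service struct{ Repo *Repo }

// In a real project these providers live in a separate, untagged file; they are
// inlined here only to keep the sketch self-contained.
func NewRepo() *Repo              { return &Repo{} }
func NewService(r *Repo) *Service { return &Service{Repo: r} }

// InitService is an injector stub: the wire tool replaces the panic with ordinary
// constructor calls (NewRepo, then NewService) in a generated file.
func InitService() *Service {
	panic(wire.Build(NewRepo, NewService))
}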

View File

@@ -14,7 +14,7 @@ import (
"github.com/kr/pretty"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/utils/chain"
"github.com/navidrome/navidrome/utils/run"
"github.com/robfig/cron/v3"
"github.com/spf13/viper"
)
@@ -66,6 +66,7 @@ type configOptions struct {
CoverArtPriority string
CoverJpegQuality int
ArtistArtPriority string
LyricsPriority string
EnableGravatar bool
EnableFavourites bool
EnableStarRating bool
@@ -79,6 +80,7 @@ type configOptions struct {
DefaultUIVolume int
EnableReplayGain bool
EnableCoverAnimation bool
EnableNowPlaying bool
GATrackingID string
EnableLogRedacting bool
AuthRequestLimit int
@@ -86,25 +88,26 @@ type configOptions struct {
PasswordEncryptionKey string
ReverseProxyUserHeader string
ReverseProxyWhitelist string
HTTPSecurityHeaders secureOptions
Prometheus prometheusOptions
Scanner scannerOptions
Jukebox jukeboxOptions
Backup backupOptions
PID pidOptions
Inspect inspectOptions
Subsonic subsonicOptions
LyricsPriority string
Agents string
LastFM lastfmOptions
Spotify spotifyOptions
ListenBrainz listenBrainzOptions
Tags map[string]TagConf
Plugins pluginsOptions
PluginConfig map[string]map[string]string
HTTPSecurityHeaders secureOptions `json:",omitzero"`
Prometheus prometheusOptions `json:",omitzero"`
Scanner scannerOptions `json:",omitzero"`
Jukebox jukeboxOptions `json:",omitzero"`
Backup backupOptions `json:",omitzero"`
PID pidOptions `json:",omitzero"`
Inspect inspectOptions `json:",omitzero"`
Subsonic subsonicOptions `json:",omitzero"`
LastFM lastfmOptions `json:",omitzero"`
Spotify spotifyOptions `json:",omitzero"`
Deezer deezerOptions `json:",omitzero"`
ListenBrainz listenBrainzOptions `json:",omitzero"`
Tags map[string]TagConf `json:",omitempty"`
Agents string
// DevFlags. These are used to enable/disable debugging and incomplete features
DevLogLevels map[string]string `json:",omitempty"`
DevLogSourceLine bool
DevLogLevels map[string]string
DevEnableProfiler bool
DevAutoCreateAdminPassword string
DevAutoLoginUsername string
@@ -112,6 +115,8 @@ type configOptions struct {
DevActivityPanelUpdateRate time.Duration
DevSidebarPlaylists bool
DevShowArtistPage bool
DevUIShowConfig bool
DevNewEventStream bool
DevOffsetOptimize int
DevArtworkMaxRequests int
DevArtworkThrottleBacklogLimit int
@@ -122,6 +127,9 @@ type configOptions struct {
DevScannerThreads uint
DevInsightsInitialDelay time.Duration
DevEnablePlayerInsights bool
DevEnablePluginsInsights bool
DevPluginCompilationTimeout time.Duration
DevExternalArtistFetchMultiplier float64
}
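A side note on the new struct tags: omitzero (encoding/json, Go 1.24+) drops a field whose value is the zero value, so zero-valued option blocks such as Prometheus or Jukebox disappear from a marshalled config dump, presumably to keep it compact. A small sketch of the effect, with hypothetical types rather than the real configOptions:

package main

import (
	"encoding/json"
	"fmt"
)

type prometheusOpts struct{ Enabled bool }

type opts struct {
	Prometheus prometheusOpts `json:",omitzero"` // requires Go 1.24+ encoding/json
	Agents     string
}

func main() {
	b, _ := json.Marshal(opts{Agents: "lastfm"})
	fmt.Println(string(b)) // prints {"Agents":"lastfm"}; the zero-valued Prometheus block is omitted
}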
type scannerOptions struct {
@@ -145,12 +153,12 @@ type subsonicOptions struct {
}
type TagConf struct {
Ignore bool `yaml:"ignore"`
Aliases []string `yaml:"aliases"`
Type string `yaml:"type"`
MaxLength int `yaml:"maxLength"`
Split []string `yaml:"split"`
Album bool `yaml:"album"`
Ignore bool `yaml:"ignore" json:",omitempty"`
Aliases []string `yaml:"aliases" json:",omitempty"`
Type string `yaml:"type" json:",omitempty"`
MaxLength int `yaml:"maxLength" json:",omitempty"`
Split []string `yaml:"split" json:",omitempty"`
Album bool `yaml:"album" json:",omitempty"`
}
type lastfmOptions struct {
@@ -166,6 +174,10 @@ type spotifyOptions struct {
Secret string
}
type deezerOptions struct {
Enabled bool
}
type listenBrainzOptions struct {
Enabled bool
BaseURL string
@@ -208,6 +220,12 @@ type inspectOptions struct {
BacklogTimeout int
}
type pluginsOptions struct {
Enabled bool
Folder string
CacheSize string
}
var (
Server = &configOptions{}
hooks []func()
@@ -247,6 +265,17 @@ func Load(noConfigDump bool) {
os.Exit(1)
}
if Server.Plugins.Enabled {
if Server.Plugins.Folder == "" {
Server.Plugins.Folder = filepath.Join(Server.DataFolder, "plugins")
}
err = os.MkdirAll(Server.Plugins.Folder, 0700)
if err != nil {
_, _ = fmt.Fprintln(os.Stderr, "FATAL: Error creating plugins path:", err)
os.Exit(1)
}
}
Server.ConfigFile = viper.GetViper().ConfigFileUsed()
if Server.DbPath == "" {
Server.DbPath = filepath.Join(Server.DataFolder, consts.DefaultDbPath)
@@ -275,7 +304,7 @@ func Load(noConfigDump bool) {
log.SetLogSourceLine(Server.DevLogSourceLine)
log.SetRedacting(Server.EnableLogRedacting)
err = chain.RunSequentially(
err = run.Sequentially(
validateScanSchedule,
validateBackupSchedule,
validatePlaylistsPath,
@@ -367,6 +396,7 @@ func disableExternalServices() {
Server.EnableInsightsCollector = false
Server.LastFM.Enabled = false
Server.Spotify.ID = ""
Server.Deezer.Enabled = false
Server.ListenBrainz.Enabled = false
Server.Agents = ""
if Server.UILoginBackgroundURL == consts.DefaultUILoginBackgroundURL {
@@ -478,10 +508,11 @@ func setViperDefaults() {
viper.SetDefault("ignoredarticles", "The El La Los Las Le Les Os As O A")
viper.SetDefault("indexgroups", "A B C D E F G H I J K L M N O P Q R S T U V W X-Z(XYZ) [Unknown]([)")
viper.SetDefault("ffmpegpath", "")
viper.SetDefault("mpvcmdtemplate", "mpv --audio-device=%d --no-audio-display --pause %f --input-ipc-server=%s")
viper.SetDefault("mpvcmdtemplate", "mpv --audio-device=%d --no-audio-display %f --input-ipc-server=%s")
viper.SetDefault("coverartpriority", "cover.*, folder.*, front.*, embedded, external")
viper.SetDefault("coverjpegquality", 75)
viper.SetDefault("artistartpriority", "artist.*, album/artist.*, external")
viper.SetDefault("lyricspriority", ".lrc,.txt,embedded")
viper.SetDefault("enablegravatar", false)
viper.SetDefault("enablefavourites", true)
viper.SetDefault("enablestarrating", true)
@@ -491,6 +522,7 @@ func setViperDefaults() {
viper.SetDefault("defaultuivolume", consts.DefaultUIVolume)
viper.SetDefault("enablereplaygain", true)
viper.SetDefault("enablecoveranimation", true)
viper.SetDefault("enablenowplaying", true)
viper.SetDefault("enablesharing", false)
viper.SetDefault("shareurl", "")
viper.SetDefault("defaultshareexpiration", 8760*time.Hour)
@@ -519,12 +551,12 @@ func setViperDefaults() {
viper.SetDefault("scanner.genreseparators", "")
viper.SetDefault("scanner.groupalbumreleases", false)
viper.SetDefault("scanner.followsymlinks", true)
viper.SetDefault("scanner.purgemissing", "never")
viper.SetDefault("scanner.purgemissing", consts.PurgeMissingNever)
viper.SetDefault("subsonic.appendsubtitle", true)
viper.SetDefault("subsonic.artistparticipations", false)
viper.SetDefault("subsonic.defaultreportrealpath", false)
viper.SetDefault("subsonic.legacyclients", "DSub")
viper.SetDefault("agents", "lastfm,spotify")
viper.SetDefault("agents", "lastfm,spotify,deezer")
viper.SetDefault("lastfm.enabled", true)
viper.SetDefault("lastfm.language", "en")
viper.SetDefault("lastfm.apikey", "")
@@ -532,6 +564,7 @@ func setViperDefaults() {
viper.SetDefault("lastfm.scrobblefirstartistonly", false)
viper.SetDefault("spotify.id", "")
viper.SetDefault("spotify.secret", "")
viper.SetDefault("deezer.enabled", true)
viper.SetDefault("listenbrainz.enabled", true)
viper.SetDefault("listenbrainz.baseurl", "https://api.listenbrainz.org/1/")
viper.SetDefault("httpsecurityheaders.customframeoptionsvalue", "DENY")
@@ -544,7 +577,11 @@ func setViperDefaults() {
viper.SetDefault("inspect.maxrequests", 1)
viper.SetDefault("inspect.backloglimit", consts.RequestThrottleBacklogLimit)
viper.SetDefault("inspect.backlogtimeout", consts.RequestThrottleBacklogTimeout)
viper.SetDefault("lyricspriority", ".lrc,.txt,embedded")
viper.SetDefault("plugins.folder", "")
viper.SetDefault("plugins.enabled", false)
viper.SetDefault("plugins.cachesize", "100MB")
// DevFlags. These are used to enable/disable debugging and incomplete features
viper.SetDefault("devlogsourceline", false)
viper.SetDefault("devenableprofiler", false)
viper.SetDefault("devautocreateadminpassword", "")
@@ -553,6 +590,8 @@ func setViperDefaults() {
viper.SetDefault("devactivitypanelupdaterate", 300*time.Millisecond)
viper.SetDefault("devsidebarplaylists", true)
viper.SetDefault("devshowartistpage", true)
viper.SetDefault("devuishowconfig", true)
viper.SetDefault("devneweventstream", true)
viper.SetDefault("devoffsetoptimize", 50000)
viper.SetDefault("devartworkmaxrequests", max(2, runtime.NumCPU()/3))
viper.SetDefault("devartworkthrottlebackloglimit", consts.RequestThrottleBacklogLimit)
@@ -563,6 +602,9 @@ func setViperDefaults() {
viper.SetDefault("devscannerthreads", 5)
viper.SetDefault("devinsightsinitialdelay", consts.InsightsInitialDelay)
viper.SetDefault("devenableplayerinsights", true)
viper.SetDefault("devenablepluginsinsights", true)
viper.SetDefault("devplugincompilationtimeout", time.Minute)
viper.SetDefault("devexternalartistfetchmultiplier", 1.5)
}
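The new plugin settings follow the same pattern as the other options: viper defaults above, mapped onto conf.Server.Plugins, with an empty Folder resolved to <DataFolder>/plugins at load time. They should be reachable through the usual config keys (Plugins.Enabled and friends, or ND_PLUGINS_ENABLED etc. via Navidrome's standard ND_ env-var prefix). A minimal sketch of reading them after configuration has been loaded; this is not a snippet from the codebase:

package example // hypothetical helper; assumes conf.Load has already run during startup

import (
	"fmt"

	"github.com/navidrome/navidrome/conf"
)

// reportPluginSettings prints the effective plugin options. With the defaults above:
// Enabled=false, Folder resolved to <DataFolder>/plugins, CacheSize="100MB".
func reportPluginSettings() {
	fmt.Println("plugins enabled:", conf.Server.Plugins.Enabled)
	fmt.Println("plugins folder:", conf.Server.Plugins.Folder)
	fmt.Println("plugin cache size:", conf.Server.Plugins.CacheSize)
}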
func init() {

View File

@@ -2,6 +2,7 @@ package agents
import (
"context"
"slices"
"strings"
"time"
@@ -13,43 +14,107 @@ import (
"github.com/navidrome/navidrome/utils/singleton"
)
type Agents struct {
ds model.DataStore
agents []Interface
// PluginLoader defines an interface for loading plugins
type PluginLoader interface {
// PluginNames returns the names of all plugins that implement the given capability
PluginNames(capability string) []string
// LoadMediaAgent loads and returns a media agent plugin
LoadMediaAgent(name string) (Interface, bool)
}
func GetAgents(ds model.DataStore) *Agents {
type Agents struct {
ds model.DataStore
pluginLoader PluginLoader
}
// GetAgents returns the singleton instance of Agents
func GetAgents(ds model.DataStore, pluginLoader PluginLoader) *Agents {
return singleton.GetInstance(func() *Agents {
return createAgents(ds)
return createAgents(ds, pluginLoader)
})
}
func createAgents(ds model.DataStore) *Agents {
var order []string
if conf.Server.Agents != "" {
order = strings.Split(conf.Server.Agents, ",")
// createAgents creates a new Agents instance. Used in tests
func createAgents(ds model.DataStore, pluginLoader PluginLoader) *Agents {
return &Agents{
ds: ds,
pluginLoader: pluginLoader,
}
order = append(order, LocalAgentName)
var res []Interface
var enabled []string
for _, name := range order {
init, ok := Map[name]
if !ok {
log.Error("Invalid agent. Check `Agents` configuration", "name", name, "conf", conf.Server.Agents)
continue
}
}
agent := init(ds)
if agent == nil {
log.Debug("Agent not available. Missing configuration?", "name", name)
continue
}
enabled = append(enabled, name)
res = append(res, init(ds))
// enabledAgent represents an enabled agent with its type information
type enabledAgent struct {
name string
isPlugin bool
}
// getEnabledAgentNames returns the current list of enabled agents:
// 1. Built-in agents and plugins from config, in the configured order
// 2. LocalAgentName is always appended if not already present
// 3. If the config is empty, ONLY LocalAgentName is returned
// Each enabledAgent carries the agent name and whether it is a plugin (true) or built-in (false)
func (a *Agents) getEnabledAgentNames() []enabledAgent {
// If no agents configured, ONLY use the local agent
if conf.Server.Agents == "" {
return []enabledAgent{{name: LocalAgentName, isPlugin: false}}
}
log.Debug("List of agents enabled", "names", enabled)
return &Agents{ds: ds, agents: res}
// Get all available plugin names
var availablePlugins []string
if a.pluginLoader != nil {
availablePlugins = a.pluginLoader.PluginNames("MetadataAgent")
}
configuredAgents := strings.Split(conf.Server.Agents, ",")
// Always add LocalAgentName if not already included
hasLocalAgent := slices.Contains(configuredAgents, LocalAgentName)
if !hasLocalAgent {
configuredAgents = append(configuredAgents, LocalAgentName)
}
// Filter to only include valid agents (built-in or plugins)
var validAgents []enabledAgent
for _, name := range configuredAgents {
// Check if it's a built-in agent
isBuiltIn := Map[name] != nil
// Check if it's a plugin
isPlugin := slices.Contains(availablePlugins, name)
if isBuiltIn {
validAgents = append(validAgents, enabledAgent{name: name, isPlugin: false})
} else if isPlugin {
validAgents = append(validAgents, enabledAgent{name: name, isPlugin: true})
} else {
log.Warn("Unknown agent ignored", "name", name)
}
}
return validAgents
}
func (a *Agents) getAgent(ea enabledAgent) Interface {
if ea.isPlugin {
// Try to load WASM plugin agent (if plugin loader is available)
if a.pluginLoader != nil {
agent, ok := a.pluginLoader.LoadMediaAgent(ea.name)
if ok && agent != nil {
return agent
}
}
} else {
// Try to get built-in agent
constructor, ok := Map[ea.name]
if ok {
agent := constructor(a.ds)
if agent != nil {
return agent
}
log.Debug("Built-in agent not available. Missing configuration?", "name", ea.name)
}
}
return nil
}
func (a *Agents) AgentName() string {
@@ -64,15 +129,19 @@ func (a *Agents) GetArtistMBID(ctx context.Context, id string, name string) (str
return "", nil
}
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistMBIDRetriever)
retriever, ok := ag.(ArtistMBIDRetriever)
if !ok {
continue
}
mbid, err := agent.GetArtistMBID(ctx, id, name)
mbid, err := retriever.GetArtistMBID(ctx, id, name)
if mbid != "" && err == nil {
log.Debug(ctx, "Got MBID", "agent", ag.AgentName(), "artist", name, "mbid", mbid, "elapsed", time.Since(start))
return mbid, nil
@@ -89,15 +158,19 @@ func (a *Agents) GetArtistURL(ctx context.Context, id, name, mbid string) (strin
return "", nil
}
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistURLRetriever)
retriever, ok := ag.(ArtistURLRetriever)
if !ok {
continue
}
url, err := agent.GetArtistURL(ctx, id, name, mbid)
url, err := retriever.GetArtistURL(ctx, id, name, mbid)
if url != "" && err == nil {
log.Debug(ctx, "Got External Url", "agent", ag.AgentName(), "artist", name, "url", url, "elapsed", time.Since(start))
return url, nil
@@ -114,15 +187,19 @@ func (a *Agents) GetArtistBiography(ctx context.Context, id, name, mbid string)
return "", nil
}
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistBiographyRetriever)
retriever, ok := ag.(ArtistBiographyRetriever)
if !ok {
continue
}
bio, err := agent.GetArtistBiography(ctx, id, name, mbid)
bio, err := retriever.GetArtistBiography(ctx, id, name, mbid)
if err == nil {
log.Debug(ctx, "Got Biography", "agent", ag.AgentName(), "artist", name, "len", len(bio), "elapsed", time.Since(start))
return bio, nil
@@ -131,6 +208,8 @@ func (a *Agents) GetArtistBiography(ctx context.Context, id, name, mbid string)
return "", ErrNotFound
}
// GetSimilarArtists returns similar artists by id, name, and/or mbid. Because some artists returned from an enabled
// agent may not exist in the database, return at most limit * conf.Server.DevExternalArtistFetchMultiplier items.
func (a *Agents) GetSimilarArtists(ctx context.Context, id, name, mbid string, limit int) ([]Artist, error) {
switch id {
case consts.UnknownArtistID:
@@ -138,16 +217,23 @@ func (a *Agents) GetSimilarArtists(ctx context.Context, id, name, mbid string, l
case consts.VariousArtistsID:
return nil, nil
}
overLimit := int(float64(limit) * conf.Server.DevExternalArtistFetchMultiplier)
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistSimilarRetriever)
retriever, ok := ag.(ArtistSimilarRetriever)
if !ok {
continue
}
similar, err := agent.GetSimilarArtists(ctx, id, name, mbid, limit)
similar, err := retriever.GetSimilarArtists(ctx, id, name, mbid, overLimit)
if len(similar) > 0 && err == nil {
if log.IsGreaterOrEqualTo(log.LevelTrace) {
log.Debug(ctx, "Got Similar Artists", "agent", ag.AgentName(), "artist", name, "similar", similar, "elapsed", time.Since(start))
@@ -168,15 +254,19 @@ func (a *Agents) GetArtistImages(ctx context.Context, id, name, mbid string) ([]
return nil, nil
}
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistImageRetriever)
retriever, ok := ag.(ArtistImageRetriever)
if !ok {
continue
}
images, err := agent.GetArtistImages(ctx, id, name, mbid)
images, err := retriever.GetArtistImages(ctx, id, name, mbid)
if len(images) > 0 && err == nil {
log.Debug(ctx, "Got Images", "agent", ag.AgentName(), "artist", name, "images", images, "elapsed", time.Since(start))
return images, nil
@@ -185,6 +275,8 @@ func (a *Agents) GetArtistImages(ctx context.Context, id, name, mbid string) ([]
return nil, ErrNotFound
}
// GetArtistTopSongs returns top songs by id, name, and/or mbid. Because some songs returned from an enabled
// agent may not exist in the database, return at most limit * conf.Server.DevExternalArtistFetchMultiplier items.
func (a *Agents) GetArtistTopSongs(ctx context.Context, id, artistName, mbid string, count int) ([]Song, error) {
switch id {
case consts.UnknownArtistID:
@@ -192,16 +284,23 @@ func (a *Agents) GetArtistTopSongs(ctx context.Context, id, artistName, mbid str
case consts.VariousArtistsID:
return nil, nil
}
overLimit := int(float64(count) * conf.Server.DevExternalArtistFetchMultiplier)
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(ArtistTopSongsRetriever)
retriever, ok := ag.(ArtistTopSongsRetriever)
if !ok {
continue
}
songs, err := agent.GetArtistTopSongs(ctx, id, artistName, mbid, count)
songs, err := retriever.GetArtistTopSongs(ctx, id, artistName, mbid, overLimit)
if len(songs) > 0 && err == nil {
log.Debug(ctx, "Got Top Songs", "agent", ag.AgentName(), "artist", artistName, "songs", songs, "elapsed", time.Since(start))
return songs, nil
@@ -215,15 +314,19 @@ func (a *Agents) GetAlbumInfo(ctx context.Context, name, artist, mbid string) (*
return nil, ErrNotFound
}
start := time.Now()
for _, ag := range a.agents {
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
agent, ok := ag.(AlbumInfoRetriever)
retriever, ok := ag.(AlbumInfoRetriever)
if !ok {
continue
}
album, err := agent.GetAlbumInfo(ctx, name, artist, mbid)
album, err := retriever.GetAlbumInfo(ctx, name, artist, mbid)
if err == nil {
log.Debug(ctx, "Got Album Info", "agent", ag.AgentName(), "album", name, "artist", artist,
"mbid", mbid, "elapsed", time.Since(start))
@@ -233,6 +336,33 @@ func (a *Agents) GetAlbumInfo(ctx context.Context, name, artist, mbid string) (*
return nil, ErrNotFound
}
func (a *Agents) GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]ExternalImage, error) {
if name == consts.UnknownAlbum {
return nil, ErrNotFound
}
start := time.Now()
for _, enabledAgent := range a.getEnabledAgentNames() {
ag := a.getAgent(enabledAgent)
if ag == nil {
continue
}
if utils.IsCtxDone(ctx) {
break
}
retriever, ok := ag.(AlbumImageRetriever)
if !ok {
continue
}
images, err := retriever.GetAlbumImages(ctx, name, artist, mbid)
if len(images) > 0 && err == nil {
log.Debug(ctx, "Got Album Images", "agent", ag.AgentName(), "album", name, "artist", artist,
"mbid", mbid, "elapsed", time.Since(start))
return images, nil
}
}
return nil, ErrNotFound
}
var _ Interface = (*Agents)(nil)
var _ ArtistMBIDRetriever = (*Agents)(nil)
var _ ArtistURLRetriever = (*Agents)(nil)
@@ -241,3 +371,4 @@ var _ ArtistSimilarRetriever = (*Agents)(nil)
var _ ArtistImageRetriever = (*Agents)(nil)
var _ ArtistTopSongsRetriever = (*Agents)(nil)
var _ AlbumInfoRetriever = (*Agents)(nil)
var _ AlbumImageRetriever = (*Agents)(nil)
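The key extension point here is the PluginLoader interface: the Agents aggregator no longer holds a fixed slice of agents, but resolves them on every call from the config plus whatever the loader reports. A toy implementation of that contract, purely for illustration (the real loader is the plugins.Manager wired in above):

package exampleplugins // hypothetical package, for illustration only

import "github.com/navidrome/navidrome/core/agents"

// staticLoader exposes a single pre-built agent under the "MetadataAgent" capability,
// which is the capability name Agents asks for (see the PluginNames call above).
// Real plugin discovery and WASM loading live in plugins.Manager.
type staticLoader struct{ agent agents.Interface }

func (l staticLoader) PluginNames(capability string) []string {
	if capability == "MetadataAgent" {
		return []string{l.agent.AgentName()}
	}
	return nil
}

func (l staticLoader) LoadMediaAgent(name string) (agents.Interface, bool) {
	if name == l.agent.AgentName() {
		return l.agent, true
	}
	return nil, false
}

// Compile-time check that the sketch satisfies the new interface.
var _ agents.PluginLoader = staticLoader{}

Anything listed in the Agents config that is neither a built-in (in Map) nor reported by PluginNames is logged and skipped, as getEnabledAgentNames shows.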

View File

@@ -0,0 +1,281 @@
package agents
import (
"context"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/slice"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
// MockPluginLoader implements PluginLoader for testing
type MockPluginLoader struct {
pluginNames []string
loadedAgents map[string]*MockAgent
pluginCallCount map[string]int
}
func NewMockPluginLoader() *MockPluginLoader {
return &MockPluginLoader{
pluginNames: []string{},
loadedAgents: make(map[string]*MockAgent),
pluginCallCount: make(map[string]int),
}
}
func (m *MockPluginLoader) PluginNames(serviceName string) []string {
return m.pluginNames
}
func (m *MockPluginLoader) LoadMediaAgent(name string) (Interface, bool) {
m.pluginCallCount[name]++
agent, exists := m.loadedAgents[name]
return agent, exists
}
// MockAgent is a mock agent implementation for testing
type MockAgent struct {
name string
mbid string
}
func (m *MockAgent) AgentName() string {
return m.name
}
func (m *MockAgent) GetArtistMBID(ctx context.Context, id string, name string) (string, error) {
return m.mbid, nil
}
var _ Interface = (*MockAgent)(nil)
var _ ArtistMBIDRetriever = (*MockAgent)(nil)
var _ PluginLoader = (*MockPluginLoader)(nil)
var _ = Describe("Agents with Plugin Loading", func() {
var mockLoader *MockPluginLoader
var agents *Agents
BeforeEach(func() {
mockLoader = NewMockPluginLoader()
// Create the agents instance with our mock loader
agents = createAgents(nil, mockLoader)
})
Context("Dynamic agent discovery", func() {
It("should include ONLY local agent when no config is specified", func() {
// Ensure no specific agents are configured
conf.Server.Agents = ""
// Add some plugin agents that should be ignored
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_agent", "another_plugin")
// Should only include the local agent
enabledAgents := agents.getEnabledAgentNames()
Expect(enabledAgents).To(HaveLen(1))
Expect(enabledAgents[0].name).To(Equal(LocalAgentName))
Expect(enabledAgents[0].isPlugin).To(BeFalse()) // LocalAgent is built-in, not plugin
})
It("should NOT include plugin agents when no config is specified", func() {
// Ensure no specific agents are configured
conf.Server.Agents = ""
// Add a plugin agent
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_agent")
// Should only include the local agent
enabledAgents := agents.getEnabledAgentNames()
Expect(enabledAgents).To(HaveLen(1))
Expect(enabledAgents[0].name).To(Equal(LocalAgentName))
Expect(enabledAgents[0].isPlugin).To(BeFalse()) // LocalAgent is built-in, not plugin
})
It("should include plugin agents in the enabled agents list ONLY when explicitly configured", func() {
// Add a plugin agent
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_agent")
// With no config, should not include plugin
conf.Server.Agents = ""
enabledAgents := agents.getEnabledAgentNames()
Expect(enabledAgents).To(HaveLen(1))
Expect(enabledAgents[0].name).To(Equal(LocalAgentName))
// When explicitly configured, should include plugin
conf.Server.Agents = "plugin_agent"
enabledAgents = agents.getEnabledAgentNames()
var agentNames []string
var pluginAgentFound bool
for _, agent := range enabledAgents {
agentNames = append(agentNames, agent.name)
if agent.name == "plugin_agent" {
pluginAgentFound = true
Expect(agent.isPlugin).To(BeTrue()) // plugin_agent is a plugin
}
}
Expect(agentNames).To(ContainElements(LocalAgentName, "plugin_agent"))
Expect(pluginAgentFound).To(BeTrue())
})
It("should only include configured plugin agents when config is specified", func() {
// Add two plugin agents
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_one", "plugin_two")
// Configure only one of them
conf.Server.Agents = "plugin_one"
// Verify only the configured one is included
enabledAgents := agents.getEnabledAgentNames()
var agentNames []string
var pluginOneFound bool
for _, agent := range enabledAgents {
agentNames = append(agentNames, agent.name)
if agent.name == "plugin_one" {
pluginOneFound = true
Expect(agent.isPlugin).To(BeTrue()) // plugin_one is a plugin
}
}
Expect(agentNames).To(ContainElements(LocalAgentName, "plugin_one"))
Expect(agentNames).NotTo(ContainElement("plugin_two"))
Expect(pluginOneFound).To(BeTrue())
})
It("should load plugin agents on demand", func() {
ctx := context.Background()
// Configure to use our plugin
conf.Server.Agents = "plugin_agent"
// Add a plugin agent
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_agent")
mockLoader.loadedAgents["plugin_agent"] = &MockAgent{
name: "plugin_agent",
mbid: "plugin-mbid",
}
// Try to get data from it
mbid, err := agents.GetArtistMBID(ctx, "123", "Artist")
Expect(err).ToNot(HaveOccurred())
Expect(mbid).To(Equal("plugin-mbid"))
Expect(mockLoader.pluginCallCount["plugin_agent"]).To(Equal(1))
})
It("should try both built-in and plugin agents", func() {
// Create a mock built-in agent
Register("built_in", func(ds model.DataStore) Interface {
return &MockAgent{
name: "built_in",
mbid: "built-in-mbid",
}
})
defer func() {
delete(Map, "built_in")
}()
// Configure to use both built-in and plugin
conf.Server.Agents = "built_in,plugin_agent"
// Add a plugin agent
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_agent")
mockLoader.loadedAgents["plugin_agent"] = &MockAgent{
name: "plugin_agent",
mbid: "plugin-mbid",
}
// Verify that both are in the enabled list
enabledAgents := agents.getEnabledAgentNames()
var agentNames []string
var builtInFound, pluginFound bool
for _, agent := range enabledAgents {
agentNames = append(agentNames, agent.name)
if agent.name == "built_in" {
builtInFound = true
Expect(agent.isPlugin).To(BeFalse()) // built-in agent
}
if agent.name == "plugin_agent" {
pluginFound = true
Expect(agent.isPlugin).To(BeTrue()) // plugin agent
}
}
Expect(agentNames).To(ContainElements("built_in", "plugin_agent", LocalAgentName))
Expect(builtInFound).To(BeTrue())
Expect(pluginFound).To(BeTrue())
})
It("should respect the order specified in configuration", func() {
// Create mock built-in agents
Register("agent_a", func(ds model.DataStore) Interface {
return &MockAgent{name: "agent_a"}
})
Register("agent_b", func(ds model.DataStore) Interface {
return &MockAgent{name: "agent_b"}
})
defer func() {
delete(Map, "agent_a")
delete(Map, "agent_b")
}()
// Add plugin agents
mockLoader.pluginNames = append(mockLoader.pluginNames, "plugin_x", "plugin_y")
// Configure specific order - plugin first, then built-ins
conf.Server.Agents = "plugin_y,agent_b,plugin_x,agent_a"
// Get the agent names
enabledAgents := agents.getEnabledAgentNames()
// Extract just the names to verify the order
agentNames := slice.Map(enabledAgents, func(a enabledAgent) string { return a.name })
// Verify the order matches configuration, with LocalAgentName at the end
Expect(agentNames).To(HaveExactElements("plugin_y", "agent_b", "plugin_x", "agent_a", LocalAgentName))
})
It("should NOT call LoadMediaAgent for built-in agents", func() {
ctx := context.Background()
// Create a mock built-in agent
Register("builtin_agent", func(ds model.DataStore) Interface {
return &MockAgent{
name: "builtin_agent",
mbid: "builtin-mbid",
}
})
defer func() {
delete(Map, "builtin_agent")
}()
// Configure to use only built-in agents
conf.Server.Agents = "builtin_agent"
// Call GetArtistMBID which should only use the built-in agent
mbid, err := agents.GetArtistMBID(ctx, "123", "Artist")
Expect(err).ToNot(HaveOccurred())
Expect(mbid).To(Equal("builtin-mbid"))
// Verify LoadMediaAgent was NEVER called (no plugin loading for built-in agents)
Expect(mockLoader.pluginCallCount).To(BeEmpty())
})
It("should NOT call LoadMediaAgent for invalid agent names", func() {
ctx := context.Background()
// Configure with an invalid agent name (not built-in, not a plugin)
conf.Server.Agents = "invalid_agent"
// This should only result in using the local agent (as the invalid one is ignored)
_, err := agents.GetArtistMBID(ctx, "123", "Artist")
// Should get ErrNotFound since only local agent is available and it returns not found for this operation
Expect(err).To(MatchError(ErrNotFound))
// Verify LoadMediaAgent was NEVER called for the invalid agent
Expect(mockLoader.pluginCallCount).To(BeEmpty())
})
})
})

View File

@@ -4,10 +4,10 @@ import (
"context"
"errors"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/tests"
"github.com/navidrome/navidrome/utils/slice"
"github.com/navidrome/navidrome/conf"
. "github.com/onsi/ginkgo/v2"
@@ -20,6 +20,7 @@ var _ = Describe("Agents", func() {
var ds model.DataStore
var mfRepo *tests.MockMediaFileRepo
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
ctx, cancel = context.WithCancel(context.Background())
mfRepo = tests.CreateMockMediaFileRepo()
ds = &tests.MockDataStore{MockedMediaFile: mfRepo}
@@ -29,7 +30,7 @@ var _ = Describe("Agents", func() {
var ag *Agents
BeforeEach(func() {
conf.Server.Agents = ""
ag = createAgents(ds)
ag = createAgents(ds, nil)
})
It("calls the placeholder GetArtistImages", func() {
@@ -49,12 +50,18 @@ var _ = Describe("Agents", func() {
Register("disabled", func(model.DataStore) Interface { return nil })
Register("empty", func(model.DataStore) Interface { return &emptyAgent{} })
conf.Server.Agents = "empty,fake,disabled"
ag = createAgents(ds)
ag = createAgents(ds, nil)
Expect(ag.AgentName()).To(Equal("agents"))
})
It("does not register disabled agents", func() {
ags := slice.Map(ag.agents, func(a Interface) string { return a.AgentName() })
var ags []string
for _, enabledAgent := range ag.getEnabledAgentNames() {
agent := ag.getAgent(enabledAgent)
if agent != nil {
ags = append(ags, agent.AgentName())
}
}
// local agent is always appended to the end of the agents list
Expect(ags).To(HaveExactElements("empty", "fake", "local"))
Expect(ags).ToNot(ContainElement("disabled"))
@@ -173,6 +180,42 @@ var _ = Describe("Agents", func() {
Expect(err).To(MatchError(ErrNotFound))
Expect(mock.Args).To(BeEmpty())
})
Context("with multiple image agents", func() {
var first *testImageAgent
var second *testImageAgent
BeforeEach(func() {
first = &testImageAgent{Name: "imgFail", Err: errors.New("fail")}
second = &testImageAgent{Name: "imgOk", Images: []ExternalImage{{URL: "ok", Size: 1}}}
Register("imgFail", func(model.DataStore) Interface { return first })
Register("imgOk", func(model.DataStore) Interface { return second })
})
It("falls back to the next agent on error", func() {
conf.Server.Agents = "imgFail,imgOk"
ag = createAgents(ds, nil)
images, err := ag.GetArtistImages(ctx, "id", "artist", "mbid")
Expect(err).ToNot(HaveOccurred())
Expect(images).To(Equal([]ExternalImage{{URL: "ok", Size: 1}}))
Expect(first.Args).To(HaveExactElements("id", "artist", "mbid"))
Expect(second.Args).To(HaveExactElements("id", "artist", "mbid"))
})
It("falls back if the first agent returns no images", func() {
first.Err = nil
first.Images = []ExternalImage{}
conf.Server.Agents = "imgFail,imgOk"
ag = createAgents(ds, nil)
images, err := ag.GetArtistImages(ctx, "id", "artist", "mbid")
Expect(err).ToNot(HaveOccurred())
Expect(images).To(Equal([]ExternalImage{{URL: "ok", Size: 1}}))
Expect(first.Args).To(HaveExactElements("id", "artist", "mbid"))
Expect(second.Args).To(HaveExactElements("id", "artist", "mbid"))
})
})
})
Describe("GetSimilarArtists", func() {
@@ -199,6 +242,7 @@ var _ = Describe("Agents", func() {
Describe("GetArtistTopSongs", func() {
It("returns on first match", func() {
conf.Server.DevExternalArtistFetchMultiplier = 1
Expect(ag.GetArtistTopSongs(ctx, "123", "test", "mb123", 2)).To(Equal([]Song{{
Name: "A Song",
MBID: "mbid444",
@@ -206,6 +250,7 @@ var _ = Describe("Agents", func() {
Expect(mock.Args).To(HaveExactElements("123", "test", "mb123", 2))
})
It("skips the agent if it returns an error", func() {
conf.Server.DevExternalArtistFetchMultiplier = 1
mock.Err = errors.New("error")
_, err := ag.GetArtistTopSongs(ctx, "123", "test", "mb123", 2)
Expect(err).To(MatchError(ErrNotFound))
@@ -217,6 +262,14 @@ var _ = Describe("Agents", func() {
Expect(err).To(MatchError(ErrNotFound))
Expect(mock.Args).To(BeEmpty())
})
It("fetches with multiplier", func() {
conf.Server.DevExternalArtistFetchMultiplier = 2
Expect(ag.GetArtistTopSongs(ctx, "123", "test", "mb123", 2)).To(Equal([]Song{{
Name: "A Song",
MBID: "mbid444",
}}))
Expect(mock.Args).To(HaveExactElements("123", "test", "mb123", 4))
})
})
Describe("GetAlbumInfo", func() {
@@ -226,18 +279,6 @@ var _ = Describe("Agents", func() {
MBID: "mbid444",
Description: "A Description",
URL: "External URL",
Images: []ExternalImage{
{
Size: 174,
URL: "https://lastfm.freetls.fastly.net/i/u/174s/00000000000000000000000000000000.png",
}, {
Size: 64,
URL: "https://lastfm.freetls.fastly.net/i/u/64s/00000000000000000000000000000000.png",
}, {
Size: 34,
URL: "https://lastfm.freetls.fastly.net/i/u/34s/00000000000000000000000000000000.png",
},
},
}))
Expect(mock.Args).To(HaveExactElements("album", "artist", "mbid"))
})
@@ -333,18 +374,6 @@ func (a *mockAgent) GetAlbumInfo(ctx context.Context, name, artist, mbid string)
MBID: "mbid444",
Description: "A Description",
URL: "External URL",
Images: []ExternalImage{
{
Size: 174,
URL: "https://lastfm.freetls.fastly.net/i/u/174s/00000000000000000000000000000000.png",
}, {
Size: 64,
URL: "https://lastfm.freetls.fastly.net/i/u/64s/00000000000000000000000000000000.png",
}, {
Size: 34,
URL: "https://lastfm.freetls.fastly.net/i/u/34s/00000000000000000000000000000000.png",
},
},
}, nil
}
@@ -355,3 +384,17 @@ type emptyAgent struct {
func (e *emptyAgent) AgentName() string {
return "empty"
}
type testImageAgent struct {
Name string
Images []ExternalImage
Err error
Args []interface{}
}
func (t *testImageAgent) AgentName() string { return t.Name }
func (t *testImageAgent) GetArtistImages(_ context.Context, id, name, mbid string) ([]ExternalImage, error) {
t.Args = []interface{}{id, name, mbid}
return t.Images, t.Err
}

View File

@@ -0,0 +1,83 @@
package deezer
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"github.com/navidrome/navidrome/log"
)
const apiBaseURL = "https://api.deezer.com"
var (
ErrNotFound = errors.New("deezer: not found")
)
type httpDoer interface {
Do(req *http.Request) (*http.Response, error)
}
type client struct {
httpDoer httpDoer
}
func newClient(hc httpDoer) *client {
return &client{hc}
}
func (c *client) searchArtists(ctx context.Context, name string, limit int) ([]Artist, error) {
params := url.Values{}
params.Add("q", name)
params.Add("limit", strconv.Itoa(limit))
req, err := http.NewRequestWithContext(ctx, "GET", apiBaseURL+"/search/artist", nil)
if err != nil {
return nil, err
}
req.URL.RawQuery = params.Encode()
var results SearchArtistResults
err = c.makeRequest(req, &results)
if err != nil {
return nil, err
}
if len(results.Data) == 0 {
return nil, ErrNotFound
}
return results.Data, nil
}
func (c *client) makeRequest(req *http.Request, response interface{}) error {
log.Trace(req.Context(), fmt.Sprintf("Sending Deezer %s request", req.Method), "url", req.URL)
resp, err := c.httpDoer.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
data, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
if resp.StatusCode != 200 {
return c.parseError(data)
}
return json.Unmarshal(data, response)
}
func (c *client) parseError(data []byte) error {
var deezerError Error
err := json.Unmarshal(data, &deezerError)
if err != nil {
return err
}
return fmt.Errorf("deezer error(%d): %s", deezerError.Error.Code, deezerError.Error.Message)
}
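Since newClient and searchArtists are unexported, usage lives inside the deezer package (the test below does exactly this with a mocked httpDoer). A rough same-package sketch against the live API, with hypothetical error handling:

// Same-package sketch; assumes the standard imports context, fmt and net/http.
func exampleSearch() {
	c := newClient(http.DefaultClient) // any type with a Do(*http.Request) method works
	artists, err := c.searchArtists(context.Background(), "Michael Jackson", 5)
	if err != nil {
		fmt.Println("search failed:", err) // ErrNotFound when Deezer returns an empty data array
		return
	}
	fmt.Println(artists[0].Name, artists[0].PictureXl)
}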

View File

@@ -0,0 +1,68 @@
package deezer
import (
"bytes"
"context"
"io"
"net/http"
"os"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("client", func() {
var httpClient *fakeHttpClient
var client *client
BeforeEach(func() {
httpClient = &fakeHttpClient{}
client = newClient(httpClient)
})
Describe("ArtistImages", func() {
It("returns artist images from a successful request", func() {
f, err := os.Open("tests/fixtures/deezer.search.artist.json")
Expect(err).To(BeNil())
httpClient.mock("https://api.deezer.com/search/artist", http.Response{Body: f, StatusCode: 200})
artists, err := client.searchArtists(context.TODO(), "Michael Jackson", 20)
Expect(err).To(BeNil())
Expect(artists).To(HaveLen(17))
Expect(artists[0].Name).To(Equal("Michael Jackson"))
Expect(artists[0].PictureXl).To(Equal("https://cdn-images.dzcdn.net/images/artist/97fae13b2b30e4aec2e8c9e0c7839d92/1000x1000-000000-80-0-0.jpg"))
})
It("fails if artist was not found", func() {
httpClient.mock("https://api.deezer.com/search/artist", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(`{"data":[],"total":0}`)),
})
_, err := client.searchArtists(context.TODO(), "Michael Jackson", 20)
Expect(err).To(MatchError(ErrNotFound))
})
})
})
type fakeHttpClient struct {
responses map[string]*http.Response
lastRequest *http.Request
}
func (c *fakeHttpClient) mock(url string, response http.Response) {
if c.responses == nil {
c.responses = make(map[string]*http.Response)
}
c.responses[url] = &response
}
func (c *fakeHttpClient) Do(req *http.Request) (*http.Response, error) {
c.lastRequest = req
u := req.URL
u.RawQuery = ""
if resp, ok := c.responses[u.String()]; ok {
return resp, nil
}
panic("URL not mocked: " + u.String())
}

View File

@@ -0,0 +1,97 @@
package deezer
import (
"context"
"errors"
"net/http"
"strings"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/core/agents"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/cache"
)
const deezerAgentName = "deezer"
const deezerApiPictureXlSize = 1000
const deezerApiPictureBigSize = 500
const deezerApiPictureMediumSize = 250
const deezerApiPictureSmallSize = 56
const deezerArtistSearchLimit = 50
type deezerAgent struct {
dataStore model.DataStore
client *client
}
func deezerConstructor(dataStore model.DataStore) agents.Interface {
agent := &deezerAgent{dataStore: dataStore}
httpClient := &http.Client{
Timeout: consts.DefaultHttpClientTimeOut,
}
cachedHttpClient := cache.NewHTTPClient(httpClient, consts.DefaultHttpClientTimeOut)
agent.client = newClient(cachedHttpClient)
return agent
}
func (s *deezerAgent) AgentName() string {
return deezerAgentName
}
func (s *deezerAgent) GetArtistImages(ctx context.Context, _, name, _ string) ([]agents.ExternalImage, error) {
artist, err := s.searchArtist(ctx, name)
if err != nil {
if errors.Is(err, agents.ErrNotFound) {
log.Warn(ctx, "Artist not found in deezer", "artist", name)
} else {
log.Error(ctx, "Error calling deezer", "artist", name, err)
}
return nil, err
}
var res []agents.ExternalImage
possibleImages := []struct {
URL string
Size int
}{
{artist.PictureXl, deezerApiPictureXlSize},
{artist.PictureBig, deezerApiPictureBigSize},
{artist.PictureMedium, deezerApiPictureMediumSize},
{artist.PictureSmall, deezerApiPictureSmallSize},
}
for _, imgData := range possibleImages {
if imgData.URL != "" {
res = append(res, agents.ExternalImage{
URL: imgData.URL,
Size: imgData.Size,
})
}
}
return res, nil
}
func (s *deezerAgent) searchArtist(ctx context.Context, name string) (*Artist, error) {
artists, err := s.client.searchArtists(ctx, name, deezerArtistSearchLimit)
if errors.Is(err, ErrNotFound) || len(artists) == 0 {
return nil, agents.ErrNotFound
}
if err != nil {
return nil, err
}
// Only accept the first result, and only if its name matches the query (case-insensitive)
if !strings.EqualFold(artists[0].Name, name) {
return nil, agents.ErrNotFound
}
return &artists[0], err
}
func init() {
conf.AddHook(func() {
if conf.Server.Deezer.Enabled {
agents.Register(deezerAgentName, deezerConstructor)
}
})
}

View File

@@ -0,0 +1,17 @@
package deezer
import (
"testing"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestDeezer(t *testing.T) {
tests.Init(t, false)
log.SetLevel(log.LevelFatal)
RegisterFailHandler(Fail)
RunSpecs(t, "Deezer Test Suite")
}

View File

@@ -0,0 +1,31 @@
package deezer
type SearchArtistResults struct {
Data []Artist `json:"data"`
Total int `json:"total"`
Next string `json:"next"`
}
type Artist struct {
ID int `json:"id"`
Name string `json:"name"`
Link string `json:"link"`
Picture string `json:"picture"`
PictureSmall string `json:"picture_small"`
PictureMedium string `json:"picture_medium"`
PictureBig string `json:"picture_big"`
PictureXl string `json:"picture_xl"`
NbAlbum int `json:"nb_album"`
NbFan int `json:"nb_fan"`
Radio bool `json:"radio"`
Tracklist string `json:"tracklist"`
Type string `json:"type"`
}
type Error struct {
Error struct {
Type string `json:"type"`
Message string `json:"message"`
Code int `json:"code"`
} `json:"error"`
}
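For a concrete sense of the payload these structs map, here is a hand-written round trip through SearchArtistResults (not taken from the fixture; field names come from the json tags above, the values are invented):

// Same-package sketch; assumes encoding/json and fmt are imported.
func exampleDecode() {
	raw := `{"data":[{"id":1,"name":"Michael Jackson","picture_xl":"https://example.invalid/xl.jpg"}],"total":1,"next":""}`
	var res SearchArtistResults
	if err := json.Unmarshal([]byte(raw), &res); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(res.Data[0].Name, res.Data[0].PictureXl) // Michael Jackson https://example.invalid/xl.jpg
}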

View File

@@ -0,0 +1,38 @@
package deezer
import (
"encoding/json"
"os"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Responses", func() {
Describe("Search type=artist", func() {
It("parses the artist search result correctly ", func() {
var resp SearchArtistResults
body, err := os.ReadFile("tests/fixtures/deezer.search.artist.json")
Expect(err).To(BeNil())
err = json.Unmarshal(body, &resp)
Expect(err).To(BeNil())
Expect(resp.Data).To(HaveLen(17))
michael := resp.Data[0]
Expect(michael.Name).To(Equal("Michael Jackson"))
Expect(michael.PictureXl).To(Equal("https://cdn-images.dzcdn.net/images/artist/97fae13b2b30e4aec2e8c9e0c7839d92/1000x1000-000000-80-0-0.jpg"))
})
})
Describe("Error", func() {
It("parses the error response correctly", func() {
var errorResp Error
body := []byte(`{"error":{"type":"MissingParameterException","message":"Missing parameters: q","code":501}}`)
err := json.Unmarshal(body, &errorResp)
Expect(err).To(BeNil())
Expect(errorResp.Error.Code).To(Equal(501))
Expect(errorResp.Error.Message).To(Equal("Missing parameters: q"))
})
})
})

View File

@@ -13,12 +13,12 @@ type Interface interface {
AgentName() string
}
// AlbumInfo contains album metadata (no images)
type AlbumInfo struct {
Name string
MBID string
Description string
URL string
Images []ExternalImage
}
type Artist struct {
@@ -40,11 +40,16 @@ var (
ErrNotFound = errors.New("not found")
)
// TODO Break up this interface in more specific methods, like artists
// AlbumInfoRetriever provides album info (no images)
type AlbumInfoRetriever interface {
GetAlbumInfo(ctx context.Context, name, artist, mbid string) (*AlbumInfo, error)
}
// AlbumImageRetriever provides album images
type AlbumImageRetriever interface {
GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]ExternalImage, error)
}
type ArtistMBIDRetriever interface {
GetArtistMBID(ctx context.Context, id string, name string) (string, error)
}
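Splitting AlbumInfoRetriever and AlbumImageRetriever lets an agent provide album artwork without also producing textual album info (or the other way around). A hypothetical in-package example of an images-only agent:

// Hypothetical agent implementing only the image side of the split interfaces.
type coversOnlyAgent struct{}

func (coversOnlyAgent) AgentName() string { return "coversOnly" }

func (coversOnlyAgent) GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]ExternalImage, error) {
	// Values are made up; a real agent would call an external service here.
	return []ExternalImage{{URL: "https://example.invalid/cover.jpg", Size: 300}}, nil
}

var _ Interface = coversOnlyAgent{}           // still a valid agent (AgentName is enough)
var _ AlbumImageRetriever = coversOnlyAgent{} // provides images, but deliberately not AlbumInfoRetriever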

View File

@@ -72,16 +72,23 @@ func (l *lastfmAgent) GetAlbumInfo(ctx context.Context, name, artist, mbid strin
return nil, err
}
response := agents.AlbumInfo{
return &agents.AlbumInfo{
Name: a.Name,
MBID: a.MBID,
Description: a.Description.Summary,
URL: a.URL,
Images: make([]agents.ExternalImage, 0),
}, nil
}
func (l *lastfmAgent) GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]agents.ExternalImage, error) {
a, err := l.callAlbumGetInfo(ctx, name, artist, mbid)
if err != nil {
return nil, err
}
// Last.fm can return duplicate sizes.
seenSizes := map[int]bool{}
images := make([]agents.ExternalImage, 0)
// This assumes that Last.fm returns images with size small, medium, and large.
// This is true as of December 29, 2022
@@ -92,23 +99,20 @@ func (l *lastfmAgent) GetAlbumInfo(ctx context.Context, name, artist, mbid strin
log.Trace(ctx, "LastFM/albuminfo image URL does not match expected regex or is empty", "url", img.URL, "size", img.Size)
continue
}
numericSize, err := strconv.Atoi(size[0][2:])
if err != nil {
log.Error(ctx, "LastFM/albuminfo image URL does not match expected regex", "url", img.URL, "size", img.Size, err)
return nil, err
} else {
if _, exists := seenSizes[numericSize]; !exists {
response.Images = append(response.Images, agents.ExternalImage{
Size: numericSize,
URL: img.URL,
})
seenSizes[numericSize] = true
}
}
if _, exists := seenSizes[numericSize]; !exists {
images = append(images, agents.ExternalImage{
Size: numericSize,
URL: img.URL,
})
seenSizes[numericSize] = true
}
}
return &response, nil
return images, nil
}
func (l *lastfmAgent) GetArtistMBID(ctx context.Context, id string, name string) (string, error) {
@@ -286,7 +290,7 @@ func (l *lastfmAgent) getArtistForScrobble(track *model.MediaFile) string {
return track.Artist
}
func (l *lastfmAgent) NowPlaying(ctx context.Context, userId string, track *model.MediaFile) error {
func (l *lastfmAgent) NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error {
sk, err := l.sessionKeys.Get(ctx, userId)
if err != nil || sk == "" {
return scrobbler.ErrNotAuthorized

View File

@@ -209,7 +209,7 @@ var _ = Describe("lastfmAgent", func() {
It("calls Last.fm with correct params", func() {
httpClient.Res = http.Response{Body: io.NopCloser(bytes.NewBufferString("{}")), StatusCode: 200}
err := agent.NowPlaying(ctx, "user-1", track)
err := agent.NowPlaying(ctx, "user-1", track, 0)
Expect(err).ToNot(HaveOccurred())
Expect(httpClient.SavedRequest.Method).To(Equal(http.MethodPost))
@@ -226,7 +226,7 @@ var _ = Describe("lastfmAgent", func() {
})
It("returns ErrNotAuthorized if user is not linked", func() {
err := agent.NowPlaying(ctx, "user-2", track)
err := agent.NowPlaying(ctx, "user-2", track, 0)
Expect(err).To(MatchError(scrobbler.ErrNotAuthorized))
})
})
@@ -345,24 +345,6 @@ var _ = Describe("lastfmAgent", func() {
MBID: "03c91c40-49a6-44a7-90e7-a700edf97a62",
Description: "Believe is the twenty-third studio album by American singer-actress Cher, released on November 10, 1998 by Warner Bros. Records. The RIAA certified it Quadruple Platinum on December 23, 1999, recognizing four million shipments in the United States; Worldwide, the album has sold more than 20 million copies, making it the biggest-selling album of her career. In 1999 the album received three Grammy Awards nominations including \"Record of the Year\", \"Best Pop Album\" and winning \"Best Dance Recording\" for the single \"Believe\". It was released by Warner Bros. Records at the end of 1998. The album was executive produced by Rob <a href=\"https://www.last.fm/music/Cher/Believe\">Read more on Last.fm</a>.",
URL: "https://www.last.fm/music/Cher/Believe",
Images: []agents.ExternalImage{
{
URL: "https://lastfm.freetls.fastly.net/i/u/34s/3b54885952161aaea4ce2965b2db1638.png",
Size: 34,
},
{
URL: "https://lastfm.freetls.fastly.net/i/u/64s/3b54885952161aaea4ce2965b2db1638.png",
Size: 64,
},
{
URL: "https://lastfm.freetls.fastly.net/i/u/174s/3b54885952161aaea4ce2965b2db1638.png",
Size: 174,
},
{
URL: "https://lastfm.freetls.fastly.net/i/u/300x300/3b54885952161aaea4ce2965b2db1638.png",
Size: 300,
},
},
}))
Expect(httpClient.RequestCount).To(Equal(1))
Expect(httpClient.SavedRequest.URL.Query().Get("mbid")).To(Equal("03c91c40-49a6-44a7-90e7-a700edf97a62"))
@@ -372,9 +354,8 @@ var _ = Describe("lastfmAgent", func() {
f, _ := os.Open("tests/fixtures/lastfm.album.getinfo.empty_urls.json")
httpClient.Res = http.Response{Body: f, StatusCode: 200}
Expect(agent.GetAlbumInfo(ctx, "The Definitive Less Damage And More Joy", "The Jesus and Mary Chain", "")).To(Equal(&agents.AlbumInfo{
Name: "The Definitive Less Damage And More Joy",
URL: "https://www.last.fm/music/The+Jesus+and+Mary+Chain/The+Definitive+Less+Damage+And+More+Joy",
Images: []agents.ExternalImage{},
Name: "The Definitive Less Damage And More Joy",
URL: "https://www.last.fm/music/The+Jesus+and+Mary+Chain/The+Definitive+Less+Damage+And+More+Joy",
}))
Expect(httpClient.RequestCount).To(Equal(1))
Expect(httpClient.SavedRequest.URL.Query().Get("album")).To(Equal("The Definitive Less Damage And More Joy"))

View File

@@ -73,7 +73,7 @@ func (l *listenBrainzAgent) formatListen(track *model.MediaFile) listenInfo {
return li
}
func (l *listenBrainzAgent) NowPlaying(ctx context.Context, userId string, track *model.MediaFile) error {
func (l *listenBrainzAgent) NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error {
sk, err := l.sessionKeys.Get(ctx, userId)
if err != nil || sk == "" {
return errors.Join(err, scrobbler.ErrNotAuthorized)

View File

@@ -79,12 +79,12 @@ var _ = Describe("listenBrainzAgent", func() {
It("updates NowPlaying successfully", func() {
httpClient.Res = http.Response{Body: io.NopCloser(bytes.NewBufferString(`{"status": "ok"}`)), StatusCode: 200}
err := agent.NowPlaying(ctx, "user-1", track)
err := agent.NowPlaying(ctx, "user-1", track, 0)
Expect(err).ToNot(HaveOccurred())
})
It("returns ErrNotAuthorized if user is not linked", func() {
err := agent.NowPlaying(ctx, "user-2", track)
err := agent.NowPlaying(ctx, "user-2", track, 0)
Expect(err).To(MatchError(scrobbler.ErrNotAuthorized))
})
})

View File

@@ -96,8 +96,11 @@ func (a *cacheWarmer) run(ctx context.Context) {
// If cache not available, keep waiting
if !a.cache.Available(ctx) {
if len(a.buffer) > 0 {
log.Trace(ctx, "Cache not available, buffering precache request", "bufferLen", len(a.buffer))
a.mutex.Lock()
bufferLen := len(a.buffer)
a.mutex.Unlock()
if bufferLen > 0 {
log.Trace(ctx, "Cache not available, buffering precache request", "bufferLen", bufferLen)
}
continue
}
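The change above exists because reading len(a.buffer) without the lock races with other goroutines that append to the buffer under the same mutex (PreCache, in the test below); the fix copies the length inside the critical section and logs using the copy. A generic sketch of the copy-under-lock pattern, with illustrative names not taken from the codebase:

package example // illustrative only

import "sync"

// Copy-under-lock: hold the mutex only long enough to read or write shared state,
// then work with the local copy outside the critical section.
type queue struct {
	mu    sync.Mutex
	items []string
}

func (q *queue) add(item string) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
}

func (q *queue) pendingCount() int {
	q.mu.Lock()
	n := len(q.items)
	q.mu.Unlock()
	return n
}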

View File

@@ -80,6 +80,7 @@ var _ = Describe("CacheWarmer", func() {
})
It("adds multiple items to buffer", func() {
fc.SetReady(false) // Make cache unavailable so items stay in buffer
cw := NewCacheWarmer(aw, fc).(*cacheWarmer)
cw.PreCache(model.MustParseArtworkID("al-1"))
cw.PreCache(model.MustParseArtworkID("al-2"))
@@ -214,3 +215,7 @@ func (f *mockFileCache) SetDisabled(v bool) {
f.disabled.Store(v)
f.ready.Store(true)
}
func (f *mockFileCache) SetReady(v bool) {
f.ready.Store(v)
}

View File

@@ -20,6 +20,12 @@ import (
"github.com/navidrome/navidrome/utils/str"
)
const (
// maxArtistFolderTraversalDepth defines how many directory levels to search
// when looking for artist images (artist folder + parent directories)
maxArtistFolderTraversalDepth = 3
)
type artistReader struct {
cacheKey
a *artwork
@@ -38,7 +44,7 @@ func newArtistArtworkReader(ctx context.Context, artwork *artwork, artID model.A
als, err := artwork.ds.Album(ctx).GetAll(model.QueryOptions{
Filters: squirrel.And{
squirrel.Eq{"album_artist_id": artID.ID},
squirrel.Eq{"json_array_length(participants, '$.albumartist')": 1},
squirrel.Eq{"jsonb_array_length(participants->'albumartist')": 1},
},
})
if err != nil {
@@ -108,36 +114,52 @@ func (a *artistReader) fromArtistArtPriority(ctx context.Context, priority strin
func fromArtistFolder(ctx context.Context, artistFolder string, pattern string) sourceFunc {
return func() (io.ReadCloser, string, error) {
fsys := os.DirFS(artistFolder)
matches, err := fs.Glob(fsys, pattern)
if err != nil {
log.Warn(ctx, "Error matching artist image pattern", "pattern", pattern, "folder", artistFolder)
return nil, "", err
}
if len(matches) == 0 {
return nil, "", fmt.Errorf(`no matches for '%s' in '%s'`, pattern, artistFolder)
}
for _, m := range matches {
filePath := filepath.Join(artistFolder, m)
if !model.IsImageFile(m) {
continue
current := artistFolder
for i := 0; i < maxArtistFolderTraversalDepth; i++ {
if reader, path, err := findImageInFolder(ctx, current, pattern); err == nil {
return reader, path, nil
}
f, err := os.Open(filePath)
if err != nil {
log.Warn(ctx, "Could not open cover art file", "file", filePath, err)
return nil, "", err
parent := filepath.Dir(current)
if parent == current {
break
}
return f, filePath, nil
current = parent
}
return nil, "", nil
return nil, "", fmt.Errorf(`no matches for '%s' in '%s' or its parent directories`, pattern, artistFolder)
}
}
func findImageInFolder(ctx context.Context, folder, pattern string) (io.ReadCloser, string, error) {
log.Trace(ctx, "looking for artist image", "pattern", pattern, "folder", folder)
fsys := os.DirFS(folder)
matches, err := fs.Glob(fsys, pattern)
if err != nil {
log.Warn(ctx, "Error matching artist image pattern", "pattern", pattern, "folder", folder, err)
return nil, "", err
}
for _, m := range matches {
if !model.IsImageFile(m) {
continue
}
filePath := filepath.Join(folder, m)
f, err := os.Open(filePath)
if err != nil {
log.Warn(ctx, "Could not open cover art file", "file", filePath, err)
continue
}
return f, filePath, nil
}
return nil, "", fmt.Errorf(`no matches for '%s' in '%s'`, pattern, folder)
}
func loadArtistFolder(ctx context.Context, ds model.DataStore, albums model.Albums, paths []string) (string, time.Time, error) {
if len(albums) == 0 {
return "", time.Time{}, nil
}
libID := albums[0].LibraryID // Just need one of the albums, as they should all be in the same Library
libID := albums[0].LibraryID // Just need one of the albums, as they should all be in the same Library - for now! TODO: Support multiple libraries
folderPath := str.LongestCommonPrefix(paths)
if !strings.HasSuffix(folderPath, string(filepath.Separator)) {

View File

@@ -3,6 +3,8 @@ package artwork
import (
"context"
"errors"
"io"
"os"
"path/filepath"
"time"
@@ -108,6 +110,254 @@ var _ = Describe("artistArtworkReader", func() {
})
})
})
var _ = Describe("fromArtistFolder", func() {
var (
ctx context.Context
tempDir string
testFunc sourceFunc
)
BeforeEach(func() {
ctx = context.Background()
tempDir = GinkgoT().TempDir()
})
When("artist folder contains matching image", func() {
BeforeEach(func() {
// Create test structure: /temp/artist/artist.jpg
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
artistImagePath := filepath.Join(artistDir, "artist.jpg")
Expect(os.WriteFile(artistImagePath, []byte("fake image data"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("finds and returns the image", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("artist.jpg"))
// Verify we can read the content
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("fake image data"))
reader.Close()
})
})
When("artist folder is empty but parent contains image", func() {
BeforeEach(func() {
// Create test structure: /temp/parent/artist.jpg and /temp/parent/artist/album/
parentDir := filepath.Join(tempDir, "parent")
artistDir := filepath.Join(parentDir, "artist")
albumDir := filepath.Join(artistDir, "album")
Expect(os.MkdirAll(albumDir, 0755)).To(Succeed())
// Put artist image in parent directory
artistImagePath := filepath.Join(parentDir, "artist.jpg")
Expect(os.WriteFile(artistImagePath, []byte("parent image"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("finds image in parent directory", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("parent" + string(filepath.Separator) + "artist.jpg"))
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("parent image"))
reader.Close()
})
})
When("image is two levels up", func() {
BeforeEach(func() {
// Create test structure: /temp/grandparent/artist.jpg and /temp/grandparent/parent/artist/
grandparentDir := filepath.Join(tempDir, "grandparent")
parentDir := filepath.Join(grandparentDir, "parent")
artistDir := filepath.Join(parentDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Put artist image in grandparent directory
artistImagePath := filepath.Join(grandparentDir, "artist.jpg")
Expect(os.WriteFile(artistImagePath, []byte("grandparent image"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("finds image in grandparent directory", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("grandparent" + string(filepath.Separator) + "artist.jpg"))
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("grandparent image"))
reader.Close()
})
})
When("images exist at multiple levels", func() {
BeforeEach(func() {
// Create test structure with images at multiple levels
grandparentDir := filepath.Join(tempDir, "grandparent")
parentDir := filepath.Join(grandparentDir, "parent")
artistDir := filepath.Join(parentDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Put artist images at all levels
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("artist level"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(parentDir, "artist.jpg"), []byte("parent level"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(grandparentDir, "artist.jpg"), []byte("grandparent level"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("prioritizes the closest (artist folder) image", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("artist" + string(filepath.Separator) + "artist.jpg"))
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("artist level"))
reader.Close()
})
})
When("pattern matches multiple files", func() {
BeforeEach(func() {
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create multiple matching files
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("jpg image"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.png"), []byte("png image"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.txt"), []byte("text file"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("returns the first valid image file", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
// Should return an image file, not the text file
Expect(path).To(SatisfyAny(
ContainSubstring("artist.jpg"),
ContainSubstring("artist.png"),
))
Expect(path).ToNot(ContainSubstring("artist.txt"))
reader.Close()
})
})
When("no matching files exist anywhere", func() {
BeforeEach(func() {
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create non-matching files
Expect(os.WriteFile(filepath.Join(artistDir, "cover.jpg"), []byte("cover image"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("returns an error", func() {
reader, path, err := testFunc()
Expect(err).To(HaveOccurred())
Expect(reader).To(BeNil())
Expect(path).To(BeEmpty())
Expect(err.Error()).To(ContainSubstring("no matches for 'artist.*'"))
Expect(err.Error()).To(ContainSubstring("parent directories"))
})
})
When("directory traversal reaches filesystem root", func() {
BeforeEach(func() {
// Start from a shallow directory to test root boundary
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("handles root boundary gracefully", func() {
reader, path, err := testFunc()
Expect(err).To(HaveOccurred())
Expect(reader).To(BeNil())
Expect(path).To(BeEmpty())
// Should not panic or cause infinite loop
})
})
When("file exists but cannot be opened", func() {
BeforeEach(func() {
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create a file that cannot be opened (permission denied)
restrictedFile := filepath.Join(artistDir, "artist.jpg")
Expect(os.WriteFile(restrictedFile, []byte("restricted"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("logs warning and continues searching", func() {
// This test depends on the ability to restrict file permissions
// For now, we'll just ensure it doesn't panic and returns an appropriate error
reader, _, err := testFunc()
// The file should be readable in test environment, so this will succeed
// In a real scenario with permission issues, it would continue searching
if err == nil {
Expect(reader).ToNot(BeNil())
reader.Close()
}
})
})
When("single album artist scenario (original issue)", func() {
BeforeEach(func() {
// Simulate the exact folder structure from the issue:
// /music/artist/album1/ (single album)
// /music/artist/artist.jpg (artist image that should be found)
artistDir := filepath.Join(tempDir, "music", "artist")
albumDir := filepath.Join(artistDir, "album1")
Expect(os.MkdirAll(albumDir, 0755)).To(Succeed())
// Create artist.jpg in the artist folder (this was not being found before)
artistImagePath := filepath.Join(artistDir, "artist.jpg")
Expect(os.WriteFile(artistImagePath, []byte("single album artist image"), 0600)).To(Succeed())
// The fromArtistFolder is called with the artist folder path
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("finds artist.jpg in artist folder for single album artist", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("artist.jpg"))
Expect(path).To(ContainSubstring("artist"))
// Verify the content
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("single album artist image"))
reader.Close()
})
})
})
})
type fakeFolderRepo struct {

View File

@@ -188,7 +188,7 @@ func fromURL(ctx context.Context, imageUrl *url.URL) (io.ReadCloser, string, err
}
if resp.StatusCode != http.StatusOK {
resp.Body.Close()
return nil, "", fmt.Errorf("error retrieveing artwork from %s: %s", imageUrl, resp.Status)
return nil, "", fmt.Errorf("error retrieving artwork from %s: %s", imageUrl, resp.Status)
}
return resp.Body, imageUrl.String(), nil
}

View File

@@ -190,10 +190,13 @@ type mockAgents struct {
topSongsAgent agents.ArtistTopSongsRetriever
similarAgent agents.ArtistSimilarRetriever
imageAgent agents.ArtistImageRetriever
albumInfoAgent agents.AlbumInfoRetriever
bioAgent agents.ArtistBiographyRetriever
mbidAgent agents.ArtistMBIDRetriever
urlAgent agents.ArtistURLRetriever
albumInfoAgent interface {
agents.AlbumInfoRetriever
agents.AlbumImageRetriever
}
bioAgent agents.ArtistBiographyRetriever
mbidAgent agents.ArtistMBIDRetriever
urlAgent agents.ArtistURLRetriever
agents.Interface
}
@@ -268,3 +271,14 @@ func (m *mockAgents) GetArtistImages(ctx context.Context, id, name, mbid string)
}
return nil, args.Error(1)
}
func (m *mockAgents) GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]agents.ExternalImage, error) {
if m.albumInfoAgent != nil {
return m.albumInfoAgent.GetAlbumImages(ctx, name, artist, mbid)
}
args := m.Called(ctx, name, artist, mbid)
if args.Get(0) != nil {
return args.Get(0).([]agents.ExternalImage), args.Error(1)
}
return nil, args.Error(1)
}

View File

@@ -3,6 +3,7 @@ package external
import (
"context"
"errors"
"fmt"
"net/url"
"sort"
"strings"
@@ -11,6 +12,7 @@ import (
"github.com/Masterminds/squirrel"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/core/agents"
_ "github.com/navidrome/navidrome/core/agents/deezer"
_ "github.com/navidrome/navidrome/core/agents/lastfm"
_ "github.com/navidrome/navidrome/core/agents/listenbrainz"
_ "github.com/navidrome/navidrome/core/agents/spotify"
@@ -34,7 +36,7 @@ const (
type Provider interface {
UpdateAlbumInfo(ctx context.Context, id string) (*model.Album, error)
UpdateArtistInfo(ctx context.Context, id string, count int, includeNotPresent bool) (*model.Artist, error)
SimilarSongs(ctx context.Context, id string, count int) (model.MediaFiles, error)
ArtistRadio(ctx context.Context, id string, count int) (model.MediaFiles, error)
TopSongs(ctx context.Context, artist string, count int) (model.MediaFiles, error)
ArtistImage(ctx context.Context, id string) (*url.URL, error)
AlbumImage(ctx context.Context, id string) (*url.URL, error)
@@ -59,6 +61,7 @@ type auxArtist struct {
type Agents interface {
agents.AlbumInfoRetriever
agents.AlbumImageRetriever
agents.ArtistBiographyRetriever
agents.ArtistMBIDRetriever
agents.ArtistImageRetriever
@@ -139,19 +142,20 @@ func (e *provider) populateAlbumInfo(ctx context.Context, album auxAlbum) (auxAl
album.Description = info.Description
}
if len(info.Images) > 0 {
sort.Slice(info.Images, func(i, j int) bool {
return info.Images[i].Size > info.Images[j].Size
images, err := e.ag.GetAlbumImages(ctx, album.Name, album.AlbumArtist, album.MbzAlbumID)
if err == nil && len(images) > 0 {
sort.Slice(images, func(i, j int) bool {
return images[i].Size > images[j].Size
})
album.LargeImageUrl = info.Images[0].URL
album.LargeImageUrl = images[0].URL
if len(info.Images) >= 2 {
album.MediumImageUrl = info.Images[1].URL
if len(images) >= 2 {
album.MediumImageUrl = images[1].URL
}
if len(info.Images) >= 3 {
album.SmallImageUrl = info.Images[2].URL
if len(images) >= 3 {
album.SmallImageUrl = images[2].URL
}
}
@@ -257,7 +261,7 @@ func (e *provider) populateArtistInfo(ctx context.Context, artist auxArtist) (au
return artist, nil
}
func (e *provider) SimilarSongs(ctx context.Context, id string, count int) (model.MediaFiles, error) {
func (e *provider) ArtistRadio(ctx context.Context, id string, count int) (model.MediaFiles, error) {
artist, err := e.getArtist(ctx, id)
if err != nil {
return nil, err
@@ -265,14 +269,14 @@ func (e *provider) SimilarSongs(ctx context.Context, id string, count int) (mode
e.callGetSimilar(ctx, e.ag, &artist, 15, false)
if utils.IsCtxDone(ctx) {
log.Warn(ctx, "SimilarSongs call canceled", ctx.Err())
log.Warn(ctx, "ArtistRadio call canceled", ctx.Err())
return nil, ctx.Err()
}
weightedSongs := random.NewWeightedChooser[model.MediaFile]()
addArtist := func(a model.Artist, weightedSongs *random.WeightedChooser[model.MediaFile], count, artistWeight int) error {
if utils.IsCtxDone(ctx) {
log.Warn(ctx, "SimilarSongs call canceled", ctx.Err())
log.Warn(ctx, "ArtistRadio call canceled", ctx.Err())
return ctx.Err()
}
@@ -340,29 +344,28 @@ func (e *provider) AlbumImage(ctx context.Context, id string) (*url.URL, error)
return nil, err
}
info, err := e.ag.GetAlbumInfo(ctx, album.Name, album.AlbumArtist, album.MbzAlbumID)
images, err := e.ag.GetAlbumImages(ctx, album.Name, album.AlbumArtist, album.MbzAlbumID)
if err != nil {
switch {
case errors.Is(err, agents.ErrNotFound):
log.Trace(ctx, "Album not found in agent", "albumID", id, "name", album.Name, "artist", album.AlbumArtist)
return nil, model.ErrNotFound
case errors.Is(err, context.Canceled):
log.Debug(ctx, "GetAlbumInfo call canceled", err)
log.Debug(ctx, "GetAlbumImages call canceled", err)
default:
log.Warn(ctx, "Error getting album info from agent", "albumID", id, "name", album.Name, "artist", album.AlbumArtist, err)
log.Warn(ctx, "Error getting album images from agent", "albumID", id, "name", album.Name, "artist", album.AlbumArtist, err)
}
return nil, err
}
if info == nil {
log.Warn(ctx, "Agent returned nil info without error", "albumID", id, "name", album.Name, "artist", album.AlbumArtist)
if len(images) == 0 {
log.Warn(ctx, "Agent returned no images without error", "albumID", id, "name", album.Name, "artist", album.AlbumArtist)
return nil, model.ErrNotFound
}
// Return the biggest image
var img agents.ExternalImage
for _, i := range info.Images {
for _, i := range images {
if img.Size <= i.Size {
img = i
}
@@ -400,20 +403,21 @@ func (e *provider) TopSongs(ctx context.Context, artistName string, count int) (
func (e *provider) getMatchingTopSongs(ctx context.Context, agent agents.ArtistTopSongsRetriever, artist *auxArtist, count int) (model.MediaFiles, error) {
songs, err := agent.GetArtistTopSongs(ctx, artist.ID, artist.Name, artist.MbzArtistID, count)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to get top songs for artist %s: %w", artist.Name, err)
}
var mfs model.MediaFiles
for _, t := range songs {
mf, err := e.findMatchingTrack(ctx, t.MBID, artist.ID, t.Name)
if err != nil {
continue
}
mfs = append(mfs, *mf)
if len(mfs) == count {
break
}
mbidMatches, err := e.loadTracksByMBID(ctx, songs)
if err != nil {
return nil, fmt.Errorf("failed to load tracks by MBID: %w", err)
}
titleMatches, err := e.loadTracksByTitle(ctx, songs, artist, mbidMatches)
if err != nil {
return nil, fmt.Errorf("failed to load tracks by title: %w", err)
}
log.Trace(ctx, "Top Songs loaded", "name", artist.Name, "numSongs", len(songs), "numMBIDMatches", len(mbidMatches), "numTitleMatches", len(titleMatches))
mfs := e.selectTopSongs(songs, mbidMatches, titleMatches, count)
if len(mfs) == 0 {
log.Debug(ctx, "No matching top songs found", "name", artist.Name)
} else {
@@ -423,35 +427,94 @@ func (e *provider) getMatchingTopSongs(ctx context.Context, agent agents.ArtistT
return mfs, nil
}
func (e *provider) findMatchingTrack(ctx context.Context, mbid string, artistID, title string) (*model.MediaFile, error) {
if mbid != "" {
mfs, err := e.ds.MediaFile(ctx).GetAll(model.QueryOptions{
Filters: squirrel.And{
squirrel.Eq{"mbz_recording_id": mbid},
squirrel.Eq{"missing": false},
},
})
if err == nil && len(mfs) > 0 {
return &mfs[0], nil
func (e *provider) loadTracksByMBID(ctx context.Context, songs []agents.Song) (map[string]model.MediaFile, error) {
var mbids []string
for _, s := range songs {
if s.MBID != "" {
mbids = append(mbids, s.MBID)
}
return e.findMatchingTrack(ctx, "", artistID, title)
}
mfs, err := e.ds.MediaFile(ctx).GetAll(model.QueryOptions{
matches := map[string]model.MediaFile{}
if len(mbids) == 0 {
return matches, nil
}
res, err := e.ds.MediaFile(ctx).GetAll(model.QueryOptions{
Filters: squirrel.And{
squirrel.Eq{"mbz_recording_id": mbids},
squirrel.Eq{"missing": false},
},
})
if err != nil {
return matches, err
}
for _, mf := range res {
if id := mf.MbzRecordingID; id != "" {
if _, ok := matches[id]; !ok {
matches[id] = mf
}
}
}
return matches, nil
}
func (e *provider) loadTracksByTitle(ctx context.Context, songs []agents.Song, artist *auxArtist, mbidMatches map[string]model.MediaFile) (map[string]model.MediaFile, error) {
titleMap := map[string]string{}
for _, s := range songs {
if s.MBID != "" && mbidMatches[s.MBID].ID != "" {
continue
}
sanitized := str.SanitizeFieldForSorting(s.Name)
titleMap[sanitized] = s.Name
}
matches := map[string]model.MediaFile{}
if len(titleMap) == 0 {
return matches, nil
}
titleFilters := squirrel.Or{}
for sanitized := range titleMap {
titleFilters = append(titleFilters, squirrel.Like{"order_title": sanitized})
}
res, err := e.ds.MediaFile(ctx).GetAll(model.QueryOptions{
Filters: squirrel.And{
squirrel.Or{
squirrel.Eq{"artist_id": artistID},
squirrel.Eq{"album_artist_id": artistID},
squirrel.Eq{"artist_id": artist.ID},
squirrel.Eq{"album_artist_id": artist.ID},
},
squirrel.Like{"order_title": str.SanitizeFieldForSorting(title)},
titleFilters,
squirrel.Eq{"missing": false},
},
Sort: "starred desc, rating desc, year asc, compilation asc ",
Max: 1,
})
if err != nil || len(mfs) == 0 {
return nil, model.ErrNotFound
if err != nil {
return matches, err
}
return &mfs[0], nil
for _, mf := range res {
sanitized := str.SanitizeFieldForSorting(mf.Title)
if _, ok := matches[sanitized]; !ok {
matches[sanitized] = mf
}
}
return matches, nil
}
func (e *provider) selectTopSongs(songs []agents.Song, byMBID, byTitle map[string]model.MediaFile, count int) model.MediaFiles {
var mfs model.MediaFiles
for _, t := range songs {
if len(mfs) == count {
break
}
if t.MBID != "" {
if mf, ok := byMBID[t.MBID]; ok {
mfs = append(mfs, mf)
continue
}
}
if mf, ok := byTitle[str.SanitizeFieldForSorting(t.Name)]; ok {
mfs = append(mfs, mf)
}
}
return mfs
}
func (e *provider) callGetURL(ctx context.Context, agent agents.ArtistURLRetriever, artist *auxArtist) {
@@ -497,7 +560,7 @@ func (e *provider) callGetSimilar(ctx context.Context, agent agents.ArtistSimila
return
}
start := time.Now()
sa, err := e.mapSimilarArtists(ctx, similar, includeNotPresent)
sa, err := e.mapSimilarArtists(ctx, similar, limit, includeNotPresent)
log.Debug(ctx, "Mapped Similar Artists", "artist", artist.Name, "numSimilar", len(sa), "elapsed", time.Since(start))
if err != nil {
return
@@ -505,7 +568,7 @@ func (e *provider) callGetSimilar(ctx context.Context, agent agents.ArtistSimila
artist.SimilarArtists = sa
}
func (e *provider) mapSimilarArtists(ctx context.Context, similar []agents.Artist, includeNotPresent bool) (model.Artists, error) {
func (e *provider) mapSimilarArtists(ctx context.Context, similar []agents.Artist, limit int, includeNotPresent bool) (model.Artists, error) {
var result model.Artists
var notPresent []string
@@ -528,21 +591,33 @@ func (e *provider) mapSimilarArtists(ctx context.Context, similar []agents.Artis
artistMap[artist.Name] = artist
}
count := 0
// Process the similar artists
for _, s := range similar {
if artist, found := artistMap[s.Name]; found {
result = append(result, artist)
count++
if count >= limit {
break
}
} else {
notPresent = append(notPresent, s.Name)
}
}
// Then fill up with non-present artists
if includeNotPresent {
if includeNotPresent && count < limit {
for _, s := range notPresent {
// Leave the ID empty to indicate that the artist is not present in the DB
sa := model.Artist{Name: s}
result = append(result, sa)
count++
if count >= limit {
break
}
}
}

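The getMatchingTopSongs rewrite above replaces the per-song lookups with two bulk queries (one by recording MBID, one by sanitized title) followed by a single selection pass over the agent's ordered song list. A simplified sketch of that selection pass, with illustrative local types standing in for the actual agents.Song and model.MediaFile:

type song struct{ Name, MBID string }
type track struct{ ID, Title string }

// selectTracks walks the agent's ordered song list, preferring an MBID match
// and falling back to a pre-sanitized title match, until count tracks are collected.
func selectTracks(songs []song, byMBID, byTitle map[string]track, sanitize func(string) string, count int) []track {
    var out []track
    for _, s := range songs {
        if len(out) == count {
            break
        }
        if s.MBID != "" {
            if t, ok := byMBID[s.MBID]; ok {
                out = append(out, t)
                continue
            }
        }
        if t, ok := byTitle[sanitize(s.Name)]; ok {
            out = append(out, t)
        }
    }
    return out
}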
View File

@@ -23,7 +23,6 @@ var _ = Describe("Provider - AlbumImage", func() {
var mockAlbumRepo *mockAlbumRepo
var mockMediaFileRepo *mockMediaFileRepo
var mockAlbumAgent *mockAlbumInfoAgent
var agentsCombined *mockAgents
var ctx context.Context
BeforeEach(func() {
@@ -43,10 +42,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockAlbumAgent = newMockAlbumInfoAgent()
agentsCombined = &mockAgents{
albumInfoAgent: mockAlbumAgent,
}
agentsCombined := &mockAgents{albumInfoAgent: mockAlbumAgent}
provider = NewProvider(ds, agentsCombined)
// Default mocks
@@ -66,13 +62,11 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.On("Get", "album-1").Return(nil, model.ErrNotFound).Once() // Expect GetEntityByID sequence
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").
Return(&agents.AlbumInfo{
Images: []agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
{URL: "http://example.com/small.jpg", Size: 200},
},
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").
Return([]agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
{URL: "http://example.com/small.jpg", Size: 200},
}, nil).Once()
expectedURL, _ := url.Parse("http://example.com/large.jpg")
@@ -82,8 +76,8 @@ var _ = Describe("Provider - AlbumImage", func() {
Expect(imgURL).To(Equal(expectedURL))
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1") // From GetEntityByID
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockArtistRepo.AssertNotCalled(GinkgoT(), "Get", "artist-1") // Artist lookup no longer happens in getAlbum
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "") // Expect empty artist name
mockArtistRepo.AssertNotCalled(GinkgoT(), "Get", "artist-1") // Artist lookup no longer happens in getAlbum
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "") // Expect empty artist name
})
It("returns ErrNotFound if the album is not found in the DB", func() {
@@ -99,7 +93,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockMediaFileRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockAlbumAgent.AssertNotCalled(GinkgoT(), "GetAlbumInfo", mock.Anything, mock.Anything, mock.Anything, mock.Anything)
mockAlbumAgent.AssertNotCalled(GinkgoT(), "GetAlbumImages", mock.Anything, mock.Anything, mock.Anything)
})
It("returns the agent error if the agent fails", func() {
@@ -109,7 +103,7 @@ var _ = Describe("Provider - AlbumImage", func() {
agentErr := errors.New("agent failure")
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").Return(nil, agentErr).Once() // Expect empty artist
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").Return(nil, agentErr).Once() // Expect empty artist
imgURL, err := provider.AlbumImage(ctx, "album-1")
@@ -118,7 +112,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockArtistRepo.AssertNotCalled(GinkgoT(), "Get", "artist-1")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "") // Expect empty artist
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "") // Expect empty artist
})
It("returns ErrNotFound if the agent returns ErrNotFound", func() {
@@ -127,7 +121,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").Return(nil, agents.ErrNotFound).Once() // Expect empty artist
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").Return(nil, agents.ErrNotFound).Once() // Expect empty artist
imgURL, err := provider.AlbumImage(ctx, "album-1")
@@ -135,7 +129,7 @@ var _ = Describe("Provider - AlbumImage", func() {
Expect(imgURL).To(BeNil())
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "") // Expect empty artist
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "") // Expect empty artist
})
It("returns ErrNotFound if the agent returns no images", func() {
@@ -144,8 +138,8 @@ var _ = Describe("Provider - AlbumImage", func() {
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").
Return(&agents.AlbumInfo{Images: []agents.ExternalImage{}}, nil).Once() // Expect empty artist
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").
Return([]agents.ExternalImage{}, nil).Once() // Expect empty artist
imgURL, err := provider.AlbumImage(ctx, "album-1")
@@ -153,7 +147,7 @@ var _ = Describe("Provider - AlbumImage", func() {
Expect(imgURL).To(BeNil())
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "") // Expect empty artist
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "") // Expect empty artist
})
It("returns context error if context is canceled", func() {
@@ -163,7 +157,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.On("Get", "album-1").Return(nil, model.ErrNotFound).Once()
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Expect the agent call even if context is cancelled, returning the context error
mockAlbumAgent.On("GetAlbumInfo", cctx, "Album One", "", "").Return(nil, context.Canceled).Once()
mockAlbumAgent.On("GetAlbumImages", cctx, "Album One", "", "").Return(nil, context.Canceled).Once()
// Cancel the context *before* calling the function under test
cancelCtx()
@@ -174,7 +168,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
// Agent should now be called, verify this expectation
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", cctx, "Album One", "", "")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", cctx, "Album One", "", "")
})
It("derives album ID from MediaFile ID", func() {
@@ -186,13 +180,11 @@ var _ = Describe("Provider - AlbumImage", func() {
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").
Return(&agents.AlbumInfo{
Images: []agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
{URL: "http://example.com/small.jpg", Size: 200},
},
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").
Return([]agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
{URL: "http://example.com/small.jpg", Size: 200},
}, nil).Once()
expectedURL, _ := url.Parse("http://example.com/large.jpg")
@@ -206,7 +198,7 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "album-1")
mockArtistRepo.AssertNotCalled(GinkgoT(), "Get", "artist-1")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "")
})
It("handles different image orders from agent", func() {
@@ -214,13 +206,11 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.On("Get", "album-1").Return(nil, model.ErrNotFound).Once() // Expect GetEntityByID sequence
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").
Return(&agents.AlbumInfo{
Images: []agents.ExternalImage{
{URL: "http://example.com/small.jpg", Size: 200},
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
},
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").
Return([]agents.ExternalImage{
{URL: "http://example.com/small.jpg", Size: 200},
{URL: "http://example.com/large.jpg", Size: 1000},
{URL: "http://example.com/medium.jpg", Size: 500},
}, nil).Once()
expectedURL, _ := url.Parse("http://example.com/large.jpg")
@@ -228,7 +218,7 @@ var _ = Describe("Provider - AlbumImage", func() {
Expect(err).ToNot(HaveOccurred())
Expect(imgURL).To(Equal(expectedURL)) // Should still pick the largest
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "")
})
It("handles agent returning only one image", func() {
@@ -236,11 +226,9 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.On("Get", "album-1").Return(nil, model.ErrNotFound).Once() // Expect GetEntityByID sequence
mockAlbumRepo.On("Get", "album-1").Return(&model.Album{ID: "album-1", Name: "Album One", AlbumArtistID: "artist-1"}, nil).Once()
// Explicitly mock agent call for this test
mockAlbumAgent.On("GetAlbumInfo", ctx, "Album One", "", "").
Return(&agents.AlbumInfo{
Images: []agents.ExternalImage{
{URL: "http://example.com/single.jpg", Size: 700},
},
mockAlbumAgent.On("GetAlbumImages", ctx, "Album One", "", "").
Return([]agents.ExternalImage{
{URL: "http://example.com/single.jpg", Size: 700},
}, nil).Once()
expectedURL, _ := url.Parse("http://example.com/single.jpg")
@@ -248,7 +236,7 @@ var _ = Describe("Provider - AlbumImage", func() {
Expect(err).ToNot(HaveOccurred())
Expect(imgURL).To(Equal(expectedURL))
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumInfo", ctx, "Album One", "", "")
mockAlbumAgent.AssertCalled(GinkgoT(), "GetAlbumImages", ctx, "Album One", "", "")
})
It("returns ErrNotFound if deriving album ID fails", func() {
@@ -270,14 +258,15 @@ var _ = Describe("Provider - AlbumImage", func() {
mockArtistRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockAlbumRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockMediaFileRepo.AssertCalled(GinkgoT(), "Get", "not-found")
mockAlbumAgent.AssertNotCalled(GinkgoT(), "GetAlbumInfo", mock.Anything, mock.Anything, mock.Anything, mock.Anything)
mockAlbumAgent.AssertNotCalled(GinkgoT(), "GetAlbumImages", mock.Anything, mock.Anything, mock.Anything)
})
})
// mockAlbumInfoAgent implementation
type mockAlbumInfoAgent struct {
mock.Mock
agents.AlbumInfoRetriever // Embed interface
agents.AlbumInfoRetriever
agents.AlbumImageRetriever
}
func newMockAlbumInfoAgent() *mockAlbumInfoAgent {
@@ -299,5 +288,14 @@ func (m *mockAlbumInfoAgent) GetAlbumInfo(ctx context.Context, name, artist, mbi
return args.Get(0).(*agents.AlbumInfo), args.Error(1)
}
// Ensure mockAgent implements the interface
func (m *mockAlbumInfoAgent) GetAlbumImages(ctx context.Context, name, artist, mbid string) ([]agents.ExternalImage, error) {
args := m.Called(ctx, name, artist, mbid)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).([]agents.ExternalImage), args.Error(1)
}
// Ensure mockAgent implements the interfaces
var _ agents.AlbumInfoRetriever = (*mockAlbumInfoAgent)(nil)
var _ agents.AlbumImageRetriever = (*mockAlbumInfoAgent)(nil)

View File

@@ -13,7 +13,7 @@ import (
"github.com/stretchr/testify/mock"
)
var _ = Describe("Provider - SimilarSongs", func() {
var _ = Describe("Provider - ArtistRadio", func() {
var ds model.DataStore
var provider Provider
var mockAgent *mockSimilarArtistAgent
@@ -50,9 +50,9 @@ var _ = Describe("Provider - SimilarSongs", func() {
It("returns similar songs from main artist and similar artists", func() {
artist1 := model.Artist{ID: "artist-1", Name: "Artist One"}
similarArtist := model.Artist{ID: "artist-3", Name: "Similar Artist"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1"}
song3 := model.MediaFile{ID: "song-3", Title: "Song Three", ArtistID: "artist-3"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "mbid-2"}
song3 := model.MediaFile{ID: "song-3", Title: "Song Three", ArtistID: "artist-3", MbzRecordingID: "mbid-3"}
artistRepo.On("Get", "artist-1").Return(&artist1, nil).Maybe()
artistRepo.On("Get", "artist-3").Return(&similarArtist, nil).Maybe()
@@ -82,11 +82,10 @@ var _ = Describe("Provider - SimilarSongs", func() {
{Name: "Song Three", MBID: "mbid-3"},
}, nil).Once()
mediaFileRepo.FindByMBID("mbid-1", song1)
mediaFileRepo.FindByMBID("mbid-2", song2)
mediaFileRepo.FindByMBID("mbid-3", song3)
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1, song2}, nil).Once()
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song3}, nil).Once()
songs, err := provider.SimilarSongs(ctx, "artist-1", 3)
songs, err := provider.ArtistRadio(ctx, "artist-1", 3)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(3))
@@ -103,7 +102,7 @@ var _ = Describe("Provider - SimilarSongs", func() {
return opt.Max == 1 && opt.Filters != nil
})).Return(model.Artists{}, nil).Maybe()
songs, err := provider.SimilarSongs(ctx, "artist-unknown-artist", 5)
songs, err := provider.ArtistRadio(ctx, "artist-unknown-artist", 5)
Expect(err).To(Equal(model.ErrNotFound))
Expect(songs).To(BeNil())
@@ -111,7 +110,7 @@ var _ = Describe("Provider - SimilarSongs", func() {
It("returns songs from main artist when GetSimilarArtists returns error", func() {
artist1 := model.Artist{ID: "artist-1", Name: "Artist One"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-1"}
artistRepo.On("Get", "artist-1").Return(&artist1, nil).Maybe()
artistRepo.On("GetAll", mock.MatchedBy(func(opt model.QueryOptions) bool {
@@ -130,9 +129,9 @@ var _ = Describe("Provider - SimilarSongs", func() {
{Name: "Song One", MBID: "mbid-1"},
}, nil).Once()
mediaFileRepo.FindByMBID("mbid-1", song1)
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1}, nil).Once()
songs, err := provider.SimilarSongs(ctx, "artist-1", 5)
songs, err := provider.ArtistRadio(ctx, "artist-1", 5)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(1))
@@ -157,7 +156,7 @@ var _ = Describe("Provider - SimilarSongs", func() {
mockAgent.On("GetArtistTopSongs", mock.Anything, "artist-1", "Artist One", "", mock.Anything).
Return(nil, errors.New("error getting top songs")).Once()
songs, err := provider.SimilarSongs(ctx, "artist-1", 5)
songs, err := provider.ArtistRadio(ctx, "artist-1", 5)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(BeEmpty())
@@ -165,8 +164,8 @@ var _ = Describe("Provider - SimilarSongs", func() {
It("respects count parameter", func() {
artist1 := model.Artist{ID: "artist-1", Name: "Artist One"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1"}
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "mbid-2"}
artistRepo.On("Get", "artist-1").Return(&artist1, nil).Maybe()
artistRepo.On("GetAll", mock.MatchedBy(func(opt model.QueryOptions) bool {
@@ -186,10 +185,9 @@ var _ = Describe("Provider - SimilarSongs", func() {
{Name: "Song Two", MBID: "mbid-2"},
}, nil).Once()
mediaFileRepo.FindByMBID("mbid-1", song1)
mediaFileRepo.FindByMBID("mbid-2", song2)
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1, song2}, nil).Once()
songs, err := provider.SimilarSongs(ctx, "artist-1", 1)
songs, err := provider.ArtistRadio(ctx, "artist-1", 1)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(1))

View File

@@ -42,10 +42,6 @@ var _ = Describe("Provider - TopSongs", func() {
p = NewProvider(ds, ag)
})
BeforeEach(func() {
// Setup expectations in individual tests
})
It("returns top songs for a known artist", func() {
// Mock finding the artist
artist1 := model.Artist{ID: "artist-1", Name: "Artist One", MbzArtistID: "mbid-artist-1"}
@@ -58,11 +54,10 @@ var _ = Describe("Provider - TopSongs", func() {
}
ag.On("GetArtistTopSongs", ctx, "artist-1", "Artist One", "mbid-artist-1", 2).Return(agentSongs, nil).Once()
// Mock finding matching tracks
// Mock finding matching tracks (both returned in a single query)
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-song-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "mbid-song-2"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1}, nil).Once()
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song2}, nil).Once()
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1, song2}, nil).Once()
songs, err := p.TopSongs(ctx, "Artist One", 2)
@@ -155,11 +150,10 @@ var _ = Describe("Provider - TopSongs", func() {
}
ag.On("GetArtistTopSongs", ctx, "artist-1", "Artist One", "mbid-artist-1", 2).Return(agentSongs, nil).Once()
// Mock finding matching tracks (only find song 1)
// Mock finding matching tracks (only find song 1 on bulk query)
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-song-1"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1}, nil).Once()
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{}, nil).Once() // For mbid-song-2 (fails)
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{}, nil).Once() // For title fallback (fails)
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1}, nil).Once() // bulk MBID query
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{}, nil).Once() // title fallback for song2
songs, err := p.TopSongs(ctx, "Artist One", 2)
@@ -190,4 +184,91 @@ var _ = Describe("Provider - TopSongs", func() {
artistRepo.AssertExpectations(GinkgoT())
ag.AssertExpectations(GinkgoT())
})
It("falls back to title matching when MbzRecordingID is missing", func() {
// Mock finding the artist
artist1 := model.Artist{ID: "artist-1", Name: "Artist One", MbzArtistID: "mbid-artist-1"}
artistRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.Artists{artist1}, nil).Once()
// Mock agent response with songs that have NO MBID (empty string)
agentSongs := []agents.Song{
{Name: "Song One", MBID: ""}, // No MBID, should fall back to title matching
{Name: "Song Two", MBID: ""}, // No MBID, should fall back to title matching
}
ag.On("GetArtistTopSongs", ctx, "artist-1", "Artist One", "mbid-artist-1", 2).Return(agentSongs, nil).Once()
// Since there are no MBIDs, loadTracksByMBID should not make any database call
// loadTracksByTitle should make a database call for title matching
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "", OrderTitle: "song one"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "", OrderTitle: "song two"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1, song2}, nil).Once()
songs, err := p.TopSongs(ctx, "Artist One", 2)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(2))
Expect(songs[0].ID).To(Equal("song-1"))
Expect(songs[1].ID).To(Equal("song-2"))
artistRepo.AssertExpectations(GinkgoT())
ag.AssertExpectations(GinkgoT())
mediaFileRepo.AssertExpectations(GinkgoT())
})
It("combines MBID and title matching when some songs have missing MbzRecordingID", func() {
// Mock finding the artist
artist1 := model.Artist{ID: "artist-1", Name: "Artist One", MbzArtistID: "mbid-artist-1"}
artistRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.Artists{artist1}, nil).Once()
// Mock agent response with mixed MBID availability
agentSongs := []agents.Song{
{Name: "Song One", MBID: "mbid-song-1"}, // Has MBID, should match by MBID
{Name: "Song Two", MBID: ""}, // No MBID, should fall back to title matching
}
ag.On("GetArtistTopSongs", ctx, "artist-1", "Artist One", "mbid-artist-1", 2).Return(agentSongs, nil).Once()
// Mock the MBID query (finds song1 by MBID)
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-song-1", OrderTitle: "song one"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1}, nil).Once()
// Mock the title fallback query (finds song2 by title)
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "", OrderTitle: "song two"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song2}, nil).Once()
songs, err := p.TopSongs(ctx, "Artist One", 2)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(2))
Expect(songs[0].ID).To(Equal("song-1")) // Found by MBID
Expect(songs[1].ID).To(Equal("song-2")) // Found by title
artistRepo.AssertExpectations(GinkgoT())
ag.AssertExpectations(GinkgoT())
mediaFileRepo.AssertExpectations(GinkgoT())
})
It("only returns requested count when provider returns additional items", func() {
// Mock finding the artist
artist1 := model.Artist{ID: "artist-1", Name: "Artist One", MbzArtistID: "mbid-artist-1"}
artistRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.Artists{artist1}, nil).Once()
// Mock agent response
agentSongs := []agents.Song{
{Name: "Song One", MBID: "mbid-song-1"},
{Name: "Song Two", MBID: "mbid-song-2"},
}
ag.On("GetArtistTopSongs", ctx, "artist-1", "Artist One", "mbid-artist-1", 1).Return(agentSongs, nil).Once()
// Mock finding matching tracks (both returned in a single query)
song1 := model.MediaFile{ID: "song-1", Title: "Song One", ArtistID: "artist-1", MbzRecordingID: "mbid-song-1"}
song2 := model.MediaFile{ID: "song-2", Title: "Song Two", ArtistID: "artist-1", MbzRecordingID: "mbid-song-2"}
mediaFileRepo.On("GetAll", mock.AnythingOfType("model.QueryOptions")).Return(model.MediaFiles{song1, song2}, nil).Once()
songs, err := p.TopSongs(ctx, "Artist One", 1)
Expect(err).ToNot(HaveOccurred())
Expect(songs).To(HaveLen(1))
Expect(songs[0].ID).To(Equal("song-1"))
artistRepo.AssertExpectations(GinkgoT())
ag.AssertExpectations(GinkgoT())
mediaFileRepo.AssertExpectations(GinkgoT())
})
})

View File

@@ -59,13 +59,13 @@ var _ = Describe("Provider - UpdateAlbumInfo", func() {
expectedInfo := &agents.AlbumInfo{
URL: "http://example.com/album",
Description: "Album Description",
Images: []agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 300},
{URL: "http://example.com/medium.jpg", Size: 200},
{URL: "http://example.com/small.jpg", Size: 100},
},
}
ag.On("GetAlbumInfo", ctx, "Test Album", "Test Artist", "mbid-album").Return(expectedInfo, nil)
ag.On("GetAlbumImages", ctx, "Test Album", "Test Artist", "mbid-album").Return([]agents.ExternalImage{
{URL: "http://example.com/large.jpg", Size: 300},
{URL: "http://example.com/medium.jpg", Size: 200},
{URL: "http://example.com/small.jpg", Size: 100},
}, nil)
updatedAlbum, err := p.UpdateAlbumInfo(ctx, "al-existing")
@@ -74,9 +74,6 @@ var _ = Describe("Provider - UpdateAlbumInfo", func() {
Expect(updatedAlbum.ID).To(Equal("al-existing"))
Expect(updatedAlbum.ExternalUrl).To(Equal("http://example.com/album"))
Expect(updatedAlbum.Description).To(Equal("Album Description"))
Expect(updatedAlbum.LargeImageUrl).To(Equal("http://example.com/large.jpg"))
Expect(updatedAlbum.MediumImageUrl).To(Equal("http://example.com/medium.jpg"))
Expect(updatedAlbum.SmallImageUrl).To(Equal("http://example.com/small.jpg"))
Expect(updatedAlbum.ExternalInfoUpdatedAt).NotTo(BeNil())
Expect(*updatedAlbum.ExternalInfoUpdatedAt).To(BeTemporally("~", time.Now(), time.Second))

412
core/library.go Normal file
View File

@@ -0,0 +1,412 @@
package core
import (
"context"
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/Masterminds/squirrel"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/core/storage"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/server/events"
"github.com/navidrome/navidrome/utils/slice"
)
// Scanner interface for triggering scans
type Scanner interface {
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
}
// Watcher interface for managing file system watchers
type Watcher interface {
Watch(ctx context.Context, lib *model.Library) error
StopWatching(ctx context.Context, libraryID int) error
}
// Library provides business logic for library management and user-library associations
type Library interface {
GetUserLibraries(ctx context.Context, userID string) (model.Libraries, error)
SetUserLibraries(ctx context.Context, userID string, libraryIDs []int) error
ValidateLibraryAccess(ctx context.Context, userID string, libraryID int) error
NewRepository(ctx context.Context) rest.Repository
}
type libraryService struct {
ds model.DataStore
scanner Scanner
watcher Watcher
broker events.Broker
}
// NewLibrary creates a new Library service
func NewLibrary(ds model.DataStore, scanner Scanner, watcher Watcher, broker events.Broker) Library {
return &libraryService{
ds: ds,
scanner: scanner,
watcher: watcher,
broker: broker,
}
}
// User-library association operations
func (s *libraryService) GetUserLibraries(ctx context.Context, userID string) (model.Libraries, error) {
// Verify user exists
if _, err := s.ds.User(ctx).Get(userID); err != nil {
return nil, err
}
return s.ds.User(ctx).GetUserLibraries(userID)
}
func (s *libraryService) SetUserLibraries(ctx context.Context, userID string, libraryIDs []int) error {
// Verify user exists
user, err := s.ds.User(ctx).Get(userID)
if err != nil {
return err
}
// Admin users get all libraries automatically - don't allow manual assignment
if user.IsAdmin {
return fmt.Errorf("%w: cannot manually assign libraries to admin users", model.ErrValidation)
}
// Regular users must have at least one library
if len(libraryIDs) == 0 {
return fmt.Errorf("%w: at least one library must be assigned to non-admin users", model.ErrValidation)
}
// Validate all library IDs exist
if len(libraryIDs) > 0 {
if err := s.validateLibraryIDs(ctx, libraryIDs); err != nil {
return err
}
}
// Set user libraries
err = s.ds.User(ctx).SetUserLibraries(userID, libraryIDs)
if err != nil {
return fmt.Errorf("error setting user libraries: %w", err)
}
// Send refresh event to all clients
event := &events.RefreshResource{}
libIDs := slice.Map(libraryIDs, func(id int) string { return strconv.Itoa(id) })
event = event.With("user", userID).With("library", libIDs...)
s.broker.SendBroadcastMessage(ctx, event)
return nil
}
func (s *libraryService) ValidateLibraryAccess(ctx context.Context, userID string, libraryID int) error {
user, ok := request.UserFrom(ctx)
if !ok {
return fmt.Errorf("user not found in context")
}
// Admin users have access to all libraries
if user.IsAdmin {
return nil
}
// Check if user has explicit access to this library
libraries, err := s.ds.User(ctx).GetUserLibraries(userID)
if err != nil {
log.Error(ctx, "Error checking library access", "userID", userID, "libraryID", libraryID, err)
return fmt.Errorf("error checking library access: %w", err)
}
for _, lib := range libraries {
if lib.ID == libraryID {
return nil
}
}
return fmt.Errorf("%w: user does not have access to library %d", model.ErrNotAuthorized, libraryID)
}
// REST repository wrapper
func (s *libraryService) NewRepository(ctx context.Context) rest.Repository {
repo := s.ds.Library(ctx)
wrapper := &libraryRepositoryWrapper{
ctx: ctx,
LibraryRepository: repo,
Repository: repo.(rest.Repository),
ds: s.ds,
scanner: s.scanner,
watcher: s.watcher,
broker: s.broker,
}
return wrapper
}
type libraryRepositoryWrapper struct {
rest.Repository
model.LibraryRepository
ctx context.Context
ds model.DataStore
scanner Scanner
watcher Watcher
broker events.Broker
}
func (r *libraryRepositoryWrapper) Save(entity interface{}) (string, error) {
lib := entity.(*model.Library)
if err := r.validateLibrary(lib); err != nil {
return "", err
}
err := r.LibraryRepository.Put(lib)
if err != nil {
return "", r.mapError(err)
}
// Start watcher and trigger scan after successful library creation
if r.watcher != nil {
if err := r.watcher.Watch(r.ctx, lib); err != nil {
log.Warn(r.ctx, "Failed to start watcher for new library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, err)
}
}
if r.scanner != nil {
go r.triggerScan(lib, "new")
}
// Send library refresh event to all clients
if r.broker != nil {
event := &events.RefreshResource{}
r.broker.SendBroadcastMessage(r.ctx, event.With("library", strconv.Itoa(lib.ID)))
log.Debug(r.ctx, "Library created - sent refresh event", "libraryID", lib.ID, "name", lib.Name)
}
return strconv.Itoa(lib.ID), nil
}
func (r *libraryRepositoryWrapper) Update(id string, entity interface{}, cols ...string) error {
lib := entity.(*model.Library)
libID, err := strconv.Atoi(id)
if err != nil {
return fmt.Errorf("invalid library ID: %s", id)
}
lib.ID = libID
if err := r.validateLibrary(lib); err != nil {
return err
}
// Get the original library to check if path changed
originalLib, err := r.Get(libID)
if err != nil {
return r.mapError(err)
}
pathChanged := originalLib.Path != lib.Path
err = r.LibraryRepository.Put(lib)
if err != nil {
return r.mapError(err)
}
// Restart watcher and trigger scan if path was updated
if pathChanged {
if r.watcher != nil {
if err := r.watcher.Watch(r.ctx, lib); err != nil {
log.Warn(r.ctx, "Failed to restart watcher for updated library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, err)
}
}
if r.scanner != nil {
go r.triggerScan(lib, "updated")
}
}
// Send library refresh event to all clients
if r.broker != nil {
event := &events.RefreshResource{}
r.broker.SendBroadcastMessage(r.ctx, event.With("library", id))
log.Debug(r.ctx, "Library updated - sent refresh event", "libraryID", libID, "name", lib.Name)
}
return nil
}
func (r *libraryRepositoryWrapper) Delete(id string) error {
libID, err := strconv.Atoi(id)
if err != nil {
return &rest.ValidationError{Errors: map[string]string{
"id": "invalid library ID format",
}}
}
// Get library info before deletion for logging
lib, err := r.Get(libID)
if err != nil {
return r.mapError(err)
}
err = r.LibraryRepository.Delete(libID)
if err != nil {
return r.mapError(err)
}
// Stop watcher and trigger scan after successful library deletion to clean up orphaned data
if r.watcher != nil {
if err := r.watcher.StopWatching(r.ctx, libID); err != nil {
log.Warn(r.ctx, "Failed to stop watcher for deleted library", "libraryID", libID, "name", lib.Name, "path", lib.Path, err)
}
}
if r.scanner != nil {
go r.triggerScan(lib, "deleted")
}
// Send library refresh event to all clients
if r.broker != nil {
event := &events.RefreshResource{}
r.broker.SendBroadcastMessage(r.ctx, event.With("library", id))
log.Debug(r.ctx, "Library deleted - sent refresh event", "libraryID", libID, "name", lib.Name)
}
return nil
}
// Helper methods
func (r *libraryRepositoryWrapper) mapError(err error) error {
if err == nil {
return nil
}
errStr := err.Error()
// Handle database constraint violations.
// TODO: Being tied to react-admin translations is not ideal, but this will probably go away with the new UI/API
if strings.Contains(errStr, "UNIQUE constraint failed") {
if strings.Contains(errStr, "library.name") {
return &rest.ValidationError{Errors: map[string]string{"name": "ra.validation.unique"}}
}
if strings.Contains(errStr, "library.path") {
return &rest.ValidationError{Errors: map[string]string{"path": "ra.validation.unique"}}
}
}
switch {
case errors.Is(err, model.ErrNotFound):
return rest.ErrNotFound
case errors.Is(err, model.ErrNotAuthorized):
return rest.ErrPermissionDenied
default:
return err
}
}
func (r *libraryRepositoryWrapper) validateLibrary(library *model.Library) error {
validationErrors := make(map[string]string)
if library.Name == "" {
validationErrors["name"] = "ra.validation.required"
}
if library.Path == "" {
validationErrors["path"] = "ra.validation.required"
} else {
// Validate path format and accessibility
if err := r.validateLibraryPath(library); err != nil {
validationErrors["path"] = err.Error()
}
}
if len(validationErrors) > 0 {
return &rest.ValidationError{Errors: validationErrors}
}
return nil
}
func (r *libraryRepositoryWrapper) validateLibraryPath(library *model.Library) error {
// Validate path format
if !filepath.IsAbs(library.Path) {
return fmt.Errorf("library path must be absolute")
}
// Clean the path to normalize it
cleanPath := filepath.Clean(library.Path)
library.Path = cleanPath
// Check if path exists and is accessible using storage abstraction
fileStore, err := storage.For(library.Path)
if err != nil {
return fmt.Errorf("invalid storage scheme: %w", err)
}
fsys, err := fileStore.FS()
if err != nil {
log.Warn(r.ctx, "Error validating library.path", "path", library.Path, err)
return fmt.Errorf("resources.library.validation.pathInvalid")
}
// Check if root directory exists
info, err := fs.Stat(fsys, ".")
if err != nil {
// Parse the error message to check for "not a directory"
log.Warn(r.ctx, "Error stating library.path", "path", library.Path, err)
errStr := err.Error()
if strings.Contains(errStr, "not a directory") ||
strings.Contains(errStr, "The directory name is invalid.") {
return fmt.Errorf("resources.library.validation.pathNotDirectory")
} else if os.IsNotExist(err) {
return fmt.Errorf("resources.library.validation.pathNotFound")
} else if os.IsPermission(err) {
return fmt.Errorf("resources.library.validation.pathNotAccessible")
} else {
return fmt.Errorf("resources.library.validation.pathInvalid")
}
}
if !info.IsDir() {
return fmt.Errorf("resources.library.validation.pathNotDirectory")
}
return nil
}
func (s *libraryService) validateLibraryIDs(ctx context.Context, libraryIDs []int) error {
if len(libraryIDs) == 0 {
return nil
}
// Use CountAll to efficiently validate that all library IDs exist
count, err := s.ds.Library(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"id": libraryIDs},
})
if err != nil {
return fmt.Errorf("error validating library IDs: %w", err)
}
if int(count) != len(libraryIDs) {
return fmt.Errorf("%w: one or more library IDs are invalid", model.ErrValidation)
}
return nil
}
func (r *libraryRepositoryWrapper) triggerScan(lib *model.Library, action string) {
log.Info(r.ctx, fmt.Sprintf("Triggering scan for %s library", action), "libraryID", lib.ID, "name", lib.Name, "path", lib.Path)
start := time.Now()
warnings, err := r.scanner.ScanAll(r.ctx, false) // Quick scan for new library
if err != nil {
log.Error(r.ctx, fmt.Sprintf("Error scanning %s library", action), "libraryID", lib.ID, "name", lib.Name, err)
} else {
log.Info(r.ctx, fmt.Sprintf("Scan completed for %s library", action), "libraryID", lib.ID, "name", lib.Name, "warnings", len(warnings), "elapsed", time.Since(start))
}
}

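The new Library service wires together the data store, scanner, watcher and event broker, and its REST wrapper handles validation, watcher lifecycle, scan triggering and refresh events on create, update and delete. A rough usage sketch; the concrete ds, scanner, watcher and broker values, the user ID, and the library path are assumed to come from the surrounding application and are illustrative only:

// Construct the service from existing dependencies (assumed to be available here).
svc := core.NewLibrary(ds, scanner, watcher, broker)

// Assign libraries to a non-admin user; the service checks that the user exists,
// that at least one library ID is given, and that every ID refers to an existing library.
if err := svc.SetUserLibraries(ctx, userID, []int{1, 2}); err != nil {
    log.Error(ctx, "could not set user libraries", err)
}

// The REST repository wrapper validates the library (name, absolute existing path)
// before persisting, then starts a watcher and triggers a scan for the new library.
repo := svc.NewRepository(ctx).(rest.Persistable)
id, err := repo.Save(&model.Library{Name: "Music", Path: "/srv/music"})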
980
core/library_test.go Normal file
View File

@@ -0,0 +1,980 @@
package core_test
import (
"context"
"errors"
"net/http"
"os"
"path/filepath"
"sync"
"github.com/deluan/rest"
_ "github.com/navidrome/navidrome/adapters/taglib" // Register taglib extractor
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/core"
_ "github.com/navidrome/navidrome/core/storage/local" // Register local storage
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/server/events"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
// These tests require the local storage adapter and the taglib extractor to be registered.
var _ = Describe("Library Service", func() {
var service core.Library
var ds *tests.MockDataStore
var libraryRepo *tests.MockLibraryRepo
var userRepo *tests.MockedUserRepo
var ctx context.Context
var tempDir string
var scanner *mockScanner
var watcherManager *mockWatcherManager
var broker *mockEventBroker
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
ds = &tests.MockDataStore{}
libraryRepo = &tests.MockLibraryRepo{}
userRepo = tests.CreateMockUserRepo()
ds.MockedLibrary = libraryRepo
ds.MockedUser = userRepo
// Create a mock scanner that tracks calls
scanner = &mockScanner{}
// Create a mock watcher manager
watcherManager = &mockWatcherManager{
libraryStates: make(map[int]model.Library),
}
// Create a mock event broker
broker = &mockEventBroker{}
service = core.NewLibrary(ds, scanner, watcherManager, broker)
ctx = context.Background()
// Create a temporary directory for testing valid paths
var err error
tempDir, err = os.MkdirTemp("", "navidrome-library-test-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() {
os.RemoveAll(tempDir)
})
})
Describe("Library CRUD Operations", func() {
var repo rest.Persistable
BeforeEach(func() {
r := service.NewRepository(ctx)
repo = r.(rest.Persistable)
})
Describe("Create", func() {
It("creates a new library successfully", func() {
library := &model.Library{ID: 1, Name: "New Library", Path: tempDir}
_, err := repo.Save(library)
Expect(err).NotTo(HaveOccurred())
Expect(libraryRepo.Data[1].Name).To(Equal("New Library"))
Expect(libraryRepo.Data[1].Path).To(Equal(tempDir))
})
It("fails when library name is empty", func() {
library := &model.Library{Path: tempDir}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("ra.validation.required"))
})
It("fails when library path is empty", func() {
library := &model.Library{Name: "Test"}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("ra.validation.required"))
})
It("fails when library path is not absolute", func() {
library := &model.Library{Name: "Test", Path: "relative/path"}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("library path must be absolute"))
})
Context("Database constraint violations", func() {
BeforeEach(func() {
// Set up an existing library that will cause constraint violations
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Existing Library", Path: tempDir},
})
})
AfterEach(func() {
// Reset custom PutFn after each test
libraryRepo.PutFn = nil
})
It("handles name uniqueness constraint violation from database", func() {
// Create the directory that will be used for the test
otherTempDir, err := os.MkdirTemp("", "navidrome-other-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(otherTempDir) })
// Try to create another library with the same name
library := &model.Library{ID: 2, Name: "Existing Library", Path: otherTempDir}
// Mock the repository to return a UNIQUE constraint error
libraryRepo.PutFn = func(library *model.Library) error {
return errors.New("UNIQUE constraint failed: library.name")
}
_, err = repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["name"]).To(Equal("ra.validation.unique"))
})
It("handles path uniqueness constraint violation from database", func() {
// Try to create another library with the same path
library := &model.Library{ID: 2, Name: "Different Library", Path: tempDir}
// Mock the repository to return a UNIQUE constraint error
libraryRepo.PutFn = func(library *model.Library) error {
return errors.New("UNIQUE constraint failed: library.path")
}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("ra.validation.unique"))
})
})
})
Describe("Update", func() {
BeforeEach(func() {
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
})
It("updates an existing library successfully", func() {
newTempDir, err := os.MkdirTemp("", "navidrome-library-update-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(newTempDir) })
library := &model.Library{ID: 1, Name: "Updated Library", Path: newTempDir}
err = repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
Expect(libraryRepo.Data[1].Name).To(Equal("Updated Library"))
Expect(libraryRepo.Data[1].Path).To(Equal(newTempDir))
})
It("fails when library doesn't exist", func() {
// Create a unique temporary directory to avoid path conflicts
uniqueTempDir, err := os.MkdirTemp("", "navidrome-nonexistent-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(uniqueTempDir) })
library := &model.Library{ID: 999, Name: "Non-existent", Path: uniqueTempDir}
err = repo.Update("999", library)
Expect(err).To(HaveOccurred())
Expect(err).To(Equal(model.ErrNotFound))
})
It("fails when library name is empty", func() {
library := &model.Library{ID: 1, Path: tempDir}
err := repo.Update("1", library)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("ra.validation.required"))
})
It("cleans and normalizes the path on update", func() {
unnormalizedPath := tempDir + "//../" + filepath.Base(tempDir)
library := &model.Library{ID: 1, Name: "Updated Library", Path: unnormalizedPath}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
Expect(libraryRepo.Data[1].Path).To(Equal(filepath.Clean(unnormalizedPath)))
})
It("allows updating library with same name (no change)", func() {
// Set up a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
// Update the library keeping the same name (should be allowed)
library := &model.Library{ID: 1, Name: "Test Library", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
})
It("allows updating library with same path (no change)", func() {
// Set up a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
// Update the library keeping the same path (should be allowed)
library := &model.Library{ID: 1, Name: "Test Library", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
})
Context("Database constraint violations during update", func() {
BeforeEach(func() {
// Reset any custom PutFn from previous tests
libraryRepo.PutFn = nil
})
It("handles name uniqueness constraint violation during update", func() {
// Create additional temp directory for the test
otherTempDir, err := os.MkdirTemp("", "navidrome-other-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(otherTempDir) })
// Set up two libraries
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Library One", Path: tempDir},
{ID: 2, Name: "Library Two", Path: otherTempDir},
})
// Mock database constraint violation
libraryRepo.PutFn = func(library *model.Library) error {
return errors.New("UNIQUE constraint failed: library.name")
}
// Try to update library 2 to have the same name as library 1
library := &model.Library{ID: 2, Name: "Library One", Path: otherTempDir}
err = repo.Update("2", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["name"]).To(Equal("ra.validation.unique"))
})
It("handles path uniqueness constraint violation during update", func() {
// Create additional temp directory for the test
otherTempDir, err := os.MkdirTemp("", "navidrome-other-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(otherTempDir) })
// Set up two libraries
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Library One", Path: tempDir},
{ID: 2, Name: "Library Two", Path: otherTempDir},
})
// Mock database constraint violation
libraryRepo.PutFn = func(library *model.Library) error {
return errors.New("UNIQUE constraint failed: library.path")
}
// Try to update library 2 to have the same path as library 1
library := &model.Library{ID: 2, Name: "Library Two", Path: tempDir}
err = repo.Update("2", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("ra.validation.unique"))
})
})
})
Describe("Path Validation", func() {
Context("Create operation", func() {
It("fails when path is not absolute", func() {
library := &model.Library{Name: "Test", Path: "relative/path"}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("library path must be absolute"))
})
It("fails when path does not exist", func() {
nonExistentPath := filepath.Join(tempDir, "nonexistent")
library := &model.Library{Name: "Test", Path: nonExistentPath}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("resources.library.validation.pathInvalid"))
})
It("fails when path is a file instead of directory", func() {
testFile := filepath.Join(tempDir, "testfile.txt")
err := os.WriteFile(testFile, []byte("test"), 0600)
Expect(err).NotTo(HaveOccurred())
library := &model.Library{Name: "Test", Path: testFile}
_, err = repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("resources.library.validation.pathNotDirectory"))
})
It("fails when path is not accessible due to permissions", func() {
Skip("Permission tests are environment-dependent and may fail in CI")
// This test is skipped because creating a directory with no read permissions
// is complex and may not work consistently across different environments
})
It("handles multiple validation errors", func() {
library := &model.Library{Name: "", Path: "relative/path"}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors).To(HaveKey("name"))
Expect(validationErr.Errors).To(HaveKey("path"))
Expect(validationErr.Errors["name"]).To(Equal("ra.validation.required"))
Expect(validationErr.Errors["path"]).To(Equal("library path must be absolute"))
})
})
Context("Update operation", func() {
BeforeEach(func() {
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
})
It("fails when updated path is not absolute", func() {
library := &model.Library{ID: 1, Name: "Test", Path: "relative/path"}
err := repo.Update("1", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("library path must be absolute"))
})
It("allows updating library with same name (no change)", func() {
// Set up a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
// Update the library keeping the same name (should be allowed)
library := &model.Library{ID: 1, Name: "Test Library", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
})
It("fails when updated path does not exist", func() {
nonExistentPath := filepath.Join(tempDir, "nonexistent")
library := &model.Library{ID: 1, Name: "Test", Path: nonExistentPath}
err := repo.Update("1", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("resources.library.validation.pathInvalid"))
})
It("fails when updated path is a file instead of directory", func() {
testFile := filepath.Join(tempDir, "updatefile.txt")
err := os.WriteFile(testFile, []byte("test"), 0600)
Expect(err).NotTo(HaveOccurred())
library := &model.Library{ID: 1, Name: "Test", Path: testFile}
err = repo.Update("1", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors["path"]).To(Equal("resources.library.validation.pathNotDirectory"))
})
It("handles multiple validation errors on update", func() {
// Try to update with empty name and invalid path
library := &model.Library{ID: 1, Name: "", Path: "relative/path"}
err := repo.Update("1", library)
Expect(err).To(HaveOccurred())
var validationErr *rest.ValidationError
Expect(errors.As(err, &validationErr)).To(BeTrue())
Expect(validationErr.Errors).To(HaveKey("name"))
Expect(validationErr.Errors).To(HaveKey("path"))
Expect(validationErr.Errors["name"]).To(Equal("ra.validation.required"))
Expect(validationErr.Errors["path"]).To(Equal("library path must be absolute"))
})
})
})
Describe("Delete", func() {
BeforeEach(func() {
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Library to Delete", Path: tempDir},
})
})
It("deletes an existing library successfully", func() {
err := repo.Delete("1")
Expect(err).NotTo(HaveOccurred())
Expect(libraryRepo.Data).To(HaveLen(0))
})
It("fails when library doesn't exist", func() {
err := repo.Delete("999")
Expect(err).To(HaveOccurred())
Expect(err).To(Equal(model.ErrNotFound))
})
})
})
Describe("User-Library Association Operations", func() {
var regularUser, adminUser *model.User
BeforeEach(func() {
regularUser = &model.User{ID: "user1", UserName: "regular", IsAdmin: false}
adminUser = &model.User{ID: "admin1", UserName: "admin", IsAdmin: true}
userRepo.Data = map[string]*model.User{
"regular": regularUser,
"admin": adminUser,
}
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Library 1", Path: "/music1"},
{ID: 2, Name: "Library 2", Path: "/music2"},
{ID: 3, Name: "Library 3", Path: "/music3"},
})
})
Describe("GetUserLibraries", func() {
It("returns user's libraries", func() {
userRepo.UserLibraries = map[string][]int{
"user1": {1},
}
result, err := service.GetUserLibraries(ctx, "user1")
Expect(err).NotTo(HaveOccurred())
Expect(result).To(HaveLen(1))
Expect(result[0].ID).To(Equal(1))
})
It("fails when user doesn't exist", func() {
_, err := service.GetUserLibraries(ctx, "nonexistent")
Expect(err).To(HaveOccurred())
Expect(err).To(Equal(model.ErrNotFound))
})
})
Describe("SetUserLibraries", func() {
It("sets libraries for regular user successfully", func() {
err := service.SetUserLibraries(ctx, "user1", []int{1, 2})
Expect(err).NotTo(HaveOccurred())
libraries := userRepo.UserLibraries["user1"]
Expect(libraries).To(HaveLen(2))
})
It("fails when user doesn't exist", func() {
err := service.SetUserLibraries(ctx, "nonexistent", []int{1})
Expect(err).To(HaveOccurred())
Expect(err).To(Equal(model.ErrNotFound))
})
It("fails when trying to set libraries for admin user", func() {
err := service.SetUserLibraries(ctx, "admin1", []int{1})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("cannot manually assign libraries to admin users"))
})
It("fails when no libraries provided for regular user", func() {
err := service.SetUserLibraries(ctx, "user1", []int{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("at least one library must be assigned to non-admin users"))
})
It("fails when library doesn't exist", func() {
err := service.SetUserLibraries(ctx, "user1", []int{999})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("one or more library IDs are invalid"))
})
It("fails when some libraries don't exist", func() {
err := service.SetUserLibraries(ctx, "user1", []int{1, 999, 2})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("one or more library IDs are invalid"))
})
})
Describe("ValidateLibraryAccess", func() {
Context("admin user", func() {
BeforeEach(func() {
ctx = request.WithUser(ctx, *adminUser)
})
It("allows access to any library", func() {
err := service.ValidateLibraryAccess(ctx, "admin1", 1)
Expect(err).NotTo(HaveOccurred())
})
})
Context("regular user", func() {
BeforeEach(func() {
ctx = request.WithUser(ctx, *regularUser)
userRepo.UserLibraries = map[string][]int{
"user1": {1},
}
})
It("allows access to user's libraries", func() {
err := service.ValidateLibraryAccess(ctx, "user1", 1)
Expect(err).NotTo(HaveOccurred())
})
It("denies access to libraries user doesn't have", func() {
err := service.ValidateLibraryAccess(ctx, "user1", 2)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("user does not have access to library 2"))
})
})
Context("no user in context", func() {
It("fails with user not found error", func() {
err := service.ValidateLibraryAccess(ctx, "user1", 1)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("user not found in context"))
})
})
})
})
Describe("Scan Triggering", func() {
var repo rest.Persistable
BeforeEach(func() {
r := service.NewRepository(ctx)
repo = r.(rest.Persistable)
})
It("triggers scan when creating a new library", func() {
library := &model.Library{ID: 1, Name: "New Library", Path: tempDir}
_, err := repo.Save(library)
Expect(err).NotTo(HaveOccurred())
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("triggers scan when updating library path", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
// Create a new temporary directory for the update
newTempDir, err := os.MkdirTemp("", "navidrome-library-update-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(newTempDir) })
// Update the library with a new path
library := &model.Library{ID: 1, Name: "Updated Library", Path: newTempDir}
err = repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("does not trigger scan when updating library without path change", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
// Update the library name only (same path)
library := &model.Library{ID: 1, Name: "Updated Name", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
// Wait a bit to ensure no scan was triggered
Consistently(func() int {
return scanner.len()
}, "100ms", "10ms").Should(Equal(0))
})
It("does not trigger scan when library creation fails", func() {
// Try to create library with invalid data (empty name)
library := &model.Library{Path: tempDir}
_, err := repo.Save(library)
Expect(err).To(HaveOccurred())
// Ensure no scan was triggered since creation failed
Consistently(func() int {
return scanner.len()
}, "100ms", "10ms").Should(Equal(0))
})
It("does not trigger scan when library update fails", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
// Try to update with invalid data (empty name)
library := &model.Library{ID: 1, Name: "", Path: tempDir}
err := repo.Update("1", library)
Expect(err).To(HaveOccurred())
// Ensure no scan was triggered since update failed
Consistently(func() int {
return scanner.len()
}, "100ms", "10ms").Should(Equal(0))
})
It("triggers scan when deleting a library", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Library to Delete", Path: tempDir},
})
// Delete the library
err := repo.Delete("1")
Expect(err).NotTo(HaveOccurred())
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("does not trigger scan when library deletion fails", func() {
// Try to delete a non-existent library
err := repo.Delete("999")
Expect(err).To(HaveOccurred())
// Ensure no scan was triggered since deletion failed
Consistently(func() int {
return scanner.len()
}, "100ms", "10ms").Should(Equal(0))
})
Context("Watcher Integration", func() {
It("starts watcher when creating a new library", func() {
library := &model.Library{ID: 1, Name: "New Library", Path: tempDir}
_, err := repo.Save(library)
Expect(err).NotTo(HaveOccurred())
// Verify watcher was started
Eventually(func() int {
return watcherManager.lenStarted()
}, "1s", "10ms").Should(Equal(1))
Expect(watcherManager.StartedWatchers[0].ID).To(Equal(1))
Expect(watcherManager.StartedWatchers[0].Name).To(Equal("New Library"))
Expect(watcherManager.StartedWatchers[0].Path).To(Equal(tempDir))
})
It("restarts watcher when library path is updated", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
// Simulate that this library already has a watcher
watcherManager.simulateExistingLibrary(model.Library{ID: 1, Name: "Original Library", Path: tempDir})
// Create a new temp directory for the update
newTempDir, err := os.MkdirTemp("", "navidrome-library-update-")
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(newTempDir) })
// Update library with new path
library := &model.Library{ID: 1, Name: "Updated Library", Path: newTempDir}
err = repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
// Verify watcher was restarted
Eventually(func() int {
return watcherManager.lenRestarted()
}, "1s", "10ms").Should(Equal(1))
Expect(watcherManager.RestartedWatchers[0].ID).To(Equal(1))
Expect(watcherManager.RestartedWatchers[0].Path).To(Equal(newTempDir))
})
It("does not restart watcher when only library name is updated", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
// Update library with same path but different name
library := &model.Library{ID: 1, Name: "Updated Name", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
// Verify watcher was NOT restarted (since path didn't change)
Consistently(func() int {
return watcherManager.lenRestarted()
}, "100ms", "10ms").Should(Equal(0))
})
It("stops watcher when library is deleted", func() {
// Set up a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
err := repo.Delete("1")
Expect(err).NotTo(HaveOccurred())
// Verify watcher was stopped
Eventually(func() int {
return watcherManager.lenStopped()
}, "1s", "10ms").Should(Equal(1))
Expect(watcherManager.StoppedWatchers[0]).To(Equal(1))
})
It("does not stop watcher when library deletion fails", func() {
// Set up a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Test Library", Path: tempDir},
})
// Mock deletion to fail by trying to delete non-existent library
err := repo.Delete("999")
Expect(err).To(HaveOccurred())
// Verify watcher was NOT stopped since deletion failed
Consistently(func() int {
return watcherManager.lenStopped()
}, "100ms", "10ms").Should(Equal(0))
})
})
})
Describe("Event Broadcasting", func() {
var repo rest.Persistable
BeforeEach(func() {
r := service.NewRepository(ctx)
repo = r.(rest.Persistable)
// Clear any events from broker
broker.Events = []events.Event{}
})
It("sends refresh event when creating a library", func() {
library := &model.Library{ID: 1, Name: "New Library", Path: tempDir}
_, err := repo.Save(library)
Expect(err).NotTo(HaveOccurred())
Expect(broker.Events).To(HaveLen(1))
})
It("sends refresh event when updating a library", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 1, Name: "Original Library", Path: tempDir},
})
library := &model.Library{ID: 1, Name: "Updated Library", Path: tempDir}
err := repo.Update("1", library)
Expect(err).NotTo(HaveOccurred())
Expect(broker.Events).To(HaveLen(1))
})
It("sends refresh event when deleting a library", func() {
// First create a library
libraryRepo.SetData(model.Libraries{
{ID: 2, Name: "Library to Delete", Path: tempDir},
})
err := repo.Delete("2")
Expect(err).NotTo(HaveOccurred())
Expect(broker.Events).To(HaveLen(1))
})
})
})
// mockScanner provides a simple mock implementation of core.Scanner for testing
type mockScanner struct {
ScanCalls []ScanCall
mu sync.RWMutex
}
type ScanCall struct {
FullScan bool
}
func (m *mockScanner) ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.ScanCalls = append(m.ScanCalls, ScanCall{
FullScan: fullScan,
})
return []string{}, nil
}
func (m *mockScanner) len() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.ScanCalls)
}
// mockWatcherManager provides a simple mock implementation of core.Watcher for testing
type mockWatcherManager struct {
StartedWatchers []model.Library
StoppedWatchers []int
RestartedWatchers []model.Library
libraryStates map[int]model.Library // Track which libraries we know about
mu sync.RWMutex
}
func (m *mockWatcherManager) Watch(ctx context.Context, lib *model.Library) error {
m.mu.Lock()
defer m.mu.Unlock()
// Check if we already know about this library ID
if _, exists := m.libraryStates[lib.ID]; exists {
// This is a restart - the library already existed
// Update our tracking and record the restart
for i, startedLib := range m.StartedWatchers {
if startedLib.ID == lib.ID {
m.StartedWatchers[i] = *lib
break
}
}
m.RestartedWatchers = append(m.RestartedWatchers, *lib)
m.libraryStates[lib.ID] = *lib
return nil
}
// This is a new library - first time we're seeing it
m.StartedWatchers = append(m.StartedWatchers, *lib)
m.libraryStates[lib.ID] = *lib
return nil
}
func (m *mockWatcherManager) StopWatching(ctx context.Context, libraryID int) error {
m.mu.Lock()
defer m.mu.Unlock()
m.StoppedWatchers = append(m.StoppedWatchers, libraryID)
return nil
}
func (m *mockWatcherManager) lenStarted() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.StartedWatchers)
}
func (m *mockWatcherManager) lenStopped() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.StoppedWatchers)
}
func (m *mockWatcherManager) lenRestarted() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.RestartedWatchers)
}
// simulateExistingLibrary simulates the scenario where a library already exists
// and has a watcher running (used by tests to set up the initial state)
func (m *mockWatcherManager) simulateExistingLibrary(lib model.Library) {
m.mu.Lock()
defer m.mu.Unlock()
m.libraryStates[lib.ID] = lib
}
// mockEventBroker provides a mock implementation of events.Broker for testing
type mockEventBroker struct {
http.Handler
Events []events.Event
mu sync.RWMutex
}
func (m *mockEventBroker) SendMessage(ctx context.Context, event events.Event) {
m.mu.Lock()
defer m.mu.Unlock()
m.Events = append(m.Events, event)
}
func (m *mockEventBroker) SendBroadcastMessage(ctx context.Context, event events.Event) {
m.mu.Lock()
defer m.mu.Unlock()
m.Events = append(m.Events, event)
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/ioutils"
)
func fromEmbedded(ctx context.Context, mf *model.MediaFile) (model.LyricList, error) {
@@ -27,8 +28,7 @@ func fromExternalFile(ctx context.Context, mf *model.MediaFile, suffix string) (
externalLyric := basePath[0:len(basePath)-len(ext)] + suffix
contents, err := os.ReadFile(externalLyric)
contents, err := ioutils.UTF8ReadFile(externalLyric)
if errors.Is(err, os.ErrNotExist) {
log.Trace(ctx, "no lyrics found at path", "path", externalLyric)
return nil, nil
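// For context, a BOM-aware reader along the lines of ioutils.UTF8Reader can be built on
// golang.org/x/text. This is only a sketch under that assumption; the actual utils/ioutils
// implementation may differ:
//
//	package ioutils
//
//	import (
//		"io"
//		"os"
//
//		"golang.org/x/text/encoding/unicode"
//		"golang.org/x/text/transform"
//	)
//
//	// UTF8Reader decodes UTF-8/UTF-16 input (with or without a BOM) into BOM-less UTF-8.
//	func UTF8Reader(r io.Reader) io.Reader {
//		return transform.NewReader(r, unicode.BOMOverride(unicode.UTF8.NewDecoder()))
//	}
//
//	// UTF8ReadFile reads a whole file through UTF8Reader.
//	func UTF8ReadFile(name string) ([]byte, error) {
//		f, err := os.Open(name)
//		if err != nil {
//			return nil, err
//		}
//		defer f.Close()
//		return io.ReadAll(UTF8Reader(f))
//	}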

View File

@@ -108,5 +108,39 @@ var _ = Describe("sources", func() {
},
}))
})
It("should handle LRC files with UTF-8 BOM marker (issue #4631)", func() {
// The function looks for <basePath-without-ext><suffix>, so we need to pass
// a MediaFile with .mp3 path and look for .lrc suffix
mf := model.MediaFile{Path: "tests/fixtures/bom-test.mp3"}
lyrics, err := fromExternalFile(ctx, &mf, ".lrc")
Expect(err).To(BeNil())
Expect(lyrics).ToNot(BeNil())
Expect(lyrics).To(HaveLen(1))
// The critical assertion: even with BOM, synced should be true
Expect(lyrics[0].Synced).To(BeTrue(), "Lyrics with BOM marker should be recognized as synced")
Expect(lyrics[0].Line).To(HaveLen(1))
Expect(lyrics[0].Line[0].Start).To(Equal(gg.P(int64(0))))
Expect(lyrics[0].Line[0].Value).To(ContainSubstring("作曲"))
})
It("should handle UTF-16 LE encoded LRC files", func() {
mf := model.MediaFile{Path: "tests/fixtures/bom-utf16-test.mp3"}
lyrics, err := fromExternalFile(ctx, &mf, ".lrc")
Expect(err).To(BeNil())
Expect(lyrics).ToNot(BeNil())
Expect(lyrics).To(HaveLen(1))
// UTF-16 should be properly converted to UTF-8
Expect(lyrics[0].Synced).To(BeTrue(), "UTF-16 encoded lyrics should be recognized as synced")
Expect(lyrics[0].Line).To(HaveLen(2))
Expect(lyrics[0].Line[0].Start).To(Equal(gg.P(int64(18800))))
Expect(lyrics[0].Line[0].Value).To(Equal("We're no strangers to love"))
Expect(lyrics[0].Line[1].Start).To(Equal(gg.P(int64(22801))))
Expect(lyrics[0].Line[1].Value).To(Equal("You know the rules and so do I"))
})
})
})

View File

@@ -21,6 +21,7 @@ import (
"github.com/navidrome/navidrome/core/metrics/insights"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/plugins/schema"
"github.com/navidrome/navidrome/utils/singleton"
)
@@ -34,12 +35,18 @@ var (
)
type insightsCollector struct {
ds model.DataStore
lastRun atomic.Int64
lastStatus atomic.Bool
ds model.DataStore
pluginLoader PluginLoader
lastRun atomic.Int64
lastStatus atomic.Bool
}
func GetInstance(ds model.DataStore) Insights {
// PluginLoader defines an interface for loading plugins
type PluginLoader interface {
PluginList() map[string]schema.PluginManifest
}
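// Any type with a PluginList method satisfies PluginLoader; for example, a no-op stub
// (hypothetical, e.g. for tests or when plugins are disabled) could look like:
//
//	type noopPluginLoader struct{}
//
//	func (noopPluginLoader) PluginList() map[string]schema.PluginManifest { return nil }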
func GetInstance(ds model.DataStore, pluginLoader PluginLoader) Insights {
return singleton.GetInstance(func() *insightsCollector {
id, err := ds.Property(context.TODO()).Get(consts.InsightsIDKey)
if err != nil {
@@ -51,7 +58,7 @@ func GetInstance(ds model.DataStore) Insights {
}
}
insightsID = id
return &insightsCollector{ds: ds}
return &insightsCollector{ds: ds, pluginLoader: pluginLoader}
})
}
@@ -176,13 +183,15 @@ var staticData = sync.OnceValue(func() insights.Data {
data.Config.DefaultBackgroundURLSet = conf.Server.UILoginBackgroundURL == consts.DefaultUILoginBackgroundURL
data.Config.EnableArtworkPrecache = conf.Server.EnableArtworkPrecache
data.Config.EnableCoverAnimation = conf.Server.EnableCoverAnimation
data.Config.EnableNowPlaying = conf.Server.EnableNowPlaying
data.Config.EnableDownloads = conf.Server.EnableDownloads
data.Config.EnableSharing = conf.Server.EnableSharing
data.Config.EnableStarRating = conf.Server.EnableStarRating
data.Config.EnableLastFM = conf.Server.LastFM.Enabled
data.Config.EnableLastFM = conf.Server.LastFM.Enabled && conf.Server.LastFM.ApiKey != "" && conf.Server.LastFM.Secret != ""
data.Config.EnableSpotify = conf.Server.Spotify.ID != "" && conf.Server.Spotify.Secret != ""
data.Config.EnableListenBrainz = conf.Server.ListenBrainz.Enabled
data.Config.EnableDeezer = conf.Server.Deezer.Enabled
data.Config.EnableMediaFileCoverArt = conf.Server.EnableMediaFileCoverArt
data.Config.EnableSpotify = conf.Server.Spotify.ID != ""
data.Config.EnableJukebox = conf.Server.Jukebox.Enabled
data.Config.EnablePrometheus = conf.Server.Prometheus.Enabled
data.Config.TranscodingCacheSize = conf.Server.TranscodingCacheSize
@@ -198,6 +207,9 @@ var staticData = sync.OnceValue(func() insights.Data {
data.Config.ScanSchedule = conf.Server.Scanner.Schedule
data.Config.ScanWatcherWait = uint64(math.Trunc(conf.Server.Scanner.WatcherWait.Seconds()))
data.Config.ScanOnStartup = conf.Server.Scanner.ScanOnStartup
data.Config.ReverseProxyConfigured = conf.Server.ReverseProxyWhitelist != ""
data.Config.HasCustomPID = conf.Server.PID.Track != "" || conf.Server.PID.Album != ""
data.Config.HasCustomTags = len(conf.Server.Tags) > 0
return data
})
@@ -232,12 +244,29 @@ func (c *insightsCollector) collect(ctx context.Context) []byte {
if err != nil {
log.Trace(ctx, "Error reading radios count", err)
}
data.Library.Libraries, err = c.ds.Library(ctx).CountAll()
if err != nil {
log.Trace(ctx, "Error reading libraries count", err)
}
data.Library.ActiveUsers, err = c.ds.User(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Gt{"last_access_at": time.Now().Add(-7 * 24 * time.Hour)},
})
if err != nil {
log.Trace(ctx, "Error reading active users count", err)
}
// Check for smart playlists
data.Config.HasSmartPlaylists, err = c.hasSmartPlaylists(ctx)
if err != nil {
log.Trace(ctx, "Error checking for smart playlists", err)
}
// Collect plugins if permitted and enabled
if conf.Server.DevEnablePluginsInsights && conf.Server.Plugins.Enabled {
data.Plugins = c.collectPlugins(ctx)
}
// Collect active players if permitted
if conf.Server.DevEnablePlayerInsights {
data.Library.ActivePlayers, err = c.ds.Player(ctx).CountByClient(model.QueryOptions{
Filters: squirrel.Gt{"last_seen": time.Now().Add(-7 * 24 * time.Hour)},
@@ -263,3 +292,23 @@ func (c *insightsCollector) collect(ctx context.Context) []byte {
}
return resp
}
// hasSmartPlaylists checks if there are any smart playlists (playlists with rules)
func (c *insightsCollector) hasSmartPlaylists(ctx context.Context) (bool, error) {
count, err := c.ds.Playlist(ctx).CountAll(model.QueryOptions{
Filters: squirrel.And{squirrel.NotEq{"rules": ""}, squirrel.NotEq{"rules": nil}},
})
return count > 0, err
}
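// For reference, the combined filter above renders (via squirrel) roughly as
// "rules <> '' AND rules IS NOT NULL", so a playlist only counts as smart when its rules
// column is present and non-empty. Sketch, assuming standard squirrel behavior:
//
//	where, args, _ := squirrel.And{squirrel.NotEq{"rules": ""}, squirrel.NotEq{"rules": nil}}.ToSql()
//	// where == "(rules <> ? AND rules IS NOT NULL)"   args == []interface{}{""}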
// collectPlugins collects information about installed plugins
func (c *insightsCollector) collectPlugins(_ context.Context) map[string]insights.PluginInfo {
plugins := make(map[string]insights.PluginInfo)
for id, manifest := range c.pluginLoader.PluginList() {
plugins[id] = insights.PluginInfo{
Name: manifest.Name,
Version: manifest.Version,
}
}
return plugins
}

View File

@@ -36,6 +36,7 @@ type Data struct {
Playlists int64 `json:"playlists"`
Shares int64 `json:"shares"`
Radios int64 `json:"radios"`
Libraries int64 `json:"libraries"`
ActiveUsers int64 `json:"activeUsers"`
ActivePlayers map[string]int64 `json:"activePlayers,omitempty"`
} `json:"library"`
@@ -55,11 +56,13 @@ type Data struct {
EnableStarRating bool `json:"enableStarRating,omitempty"`
EnableLastFM bool `json:"enableLastFM,omitempty"`
EnableListenBrainz bool `json:"enableListenBrainz,omitempty"`
EnableDeezer bool `json:"enableDeezer,omitempty"`
EnableMediaFileCoverArt bool `json:"enableMediaFileCoverArt,omitempty"`
EnableSpotify bool `json:"enableSpotify,omitempty"`
EnableJukebox bool `json:"enableJukebox,omitempty"`
EnablePrometheus bool `json:"enablePrometheus,omitempty"`
EnableCoverAnimation bool `json:"enableCoverAnimation,omitempty"`
EnableNowPlaying bool `json:"enableNowPlaying,omitempty"`
SessionTimeout uint64 `json:"sessionTimeout,omitempty"`
SearchFullString bool `json:"searchFullString,omitempty"`
RecentlyAddedByModTime bool `json:"recentlyAddedByModTime,omitempty"`
@@ -68,7 +71,17 @@ type Data struct {
BackupCount int `json:"backupCount,omitempty"`
DevActivityPanel bool `json:"devActivityPanel,omitempty"`
DefaultBackgroundURLSet bool `json:"defaultBackgroundURL,omitempty"`
HasSmartPlaylists bool `json:"hasSmartPlaylists,omitempty"`
ReverseProxyConfigured bool `json:"reverseProxyConfigured,omitempty"`
HasCustomPID bool `json:"hasCustomPID,omitempty"`
HasCustomTags bool `json:"hasCustomTags,omitempty"`
} `json:"config"`
Plugins map[string]PluginInfo `json:"plugins,omitempty"`
}
type PluginInfo struct {
Name string `json:"name"`
Version string `json:"version"`
}
type FSInfo struct {

View File

@@ -2,7 +2,6 @@ package metrics
import (
"context"
"fmt"
"net/http"
"strconv"
"sync"
@@ -13,6 +12,7 @@ import (
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/singleton"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
@@ -20,6 +20,8 @@ import (
type Metrics interface {
WriteInitialMetrics(ctx context.Context)
WriteAfterScanMetrics(ctx context.Context, success bool)
RecordRequest(ctx context.Context, endpoint, method, client string, status int32, elapsed int64)
RecordPluginRequest(ctx context.Context, plugin, method string, ok bool, elapsed int64)
GetHandler() http.Handler
}
@@ -27,11 +29,14 @@ type metrics struct {
ds model.DataStore
}
func NewPrometheusInstance(ds model.DataStore) Metrics {
if conf.Server.Prometheus.Enabled {
return &metrics{ds: ds}
func GetPrometheusInstance(ds model.DataStore) Metrics {
if !conf.Server.Prometheus.Enabled {
return noopMetrics{}
}
return noopMetrics{}
return singleton.GetInstance(func() *metrics {
return &metrics{ds: ds}
})
}
func NewNoopInstance() Metrics {
@@ -51,6 +56,38 @@ func (m *metrics) WriteAfterScanMetrics(ctx context.Context, success bool) {
getPrometheusMetrics().mediaScansCounter.With(scanLabels).Inc()
}
func (m *metrics) RecordRequest(_ context.Context, endpoint, method, client string, status int32, elapsed int64) {
httpLabel := prometheus.Labels{
"endpoint": endpoint,
"method": method,
"client": client,
"status": strconv.FormatInt(int64(status), 10),
}
getPrometheusMetrics().httpRequestCounter.With(httpLabel).Inc()
httpLatencyLabel := prometheus.Labels{
"endpoint": endpoint,
"method": method,
"client": client,
}
getPrometheusMetrics().httpRequestDuration.With(httpLatencyLabel).Observe(float64(elapsed))
}
func (m *metrics) RecordPluginRequest(_ context.Context, plugin, method string, ok bool, elapsed int64) {
pluginLabel := prometheus.Labels{
"plugin": plugin,
"method": method,
"ok": strconv.FormatBool(ok),
}
getPrometheusMetrics().pluginRequestCounter.With(pluginLabel).Inc()
pluginLatencyLabel := prometheus.Labels{
"plugin": plugin,
"method": method,
}
getPrometheusMetrics().pluginRequestDuration.With(pluginLatencyLabel).Observe(float64(elapsed))
}
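// How a caller might feed RecordRequest: a hypothetical chi middleware sketch (the names,
// the use of r.URL.Path as "endpoint" and User-Agent as "client", and the chi/v5
// middleware.WrapResponseWriter helper are illustrative assumptions, not navidrome's
// actual wiring):
//
//	func instrument(m Metrics) func(http.Handler) http.Handler {
//		return func(next http.Handler) http.Handler {
//			return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
//				ww := middleware.NewWrapResponseWriter(w, r.ProtoMajor)
//				start := time.Now()
//				next.ServeHTTP(ww, r)
//				m.RecordRequest(r.Context(), r.URL.Path, r.Method,
//					r.Header.Get("User-Agent"), int32(ww.Status()), time.Since(start).Milliseconds())
//			})
//		}
//	}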
func (m *metrics) GetHandler() http.Handler {
r := chi.NewRouter()
@@ -59,20 +96,31 @@ func (m *metrics) GetHandler() http.Handler {
consts.PrometheusAuthUser: conf.Server.Prometheus.Password,
}))
}
r.Handle("/", promhttp.Handler())
// Emit created-at timestamps so counters that start at zero are handled correctly.
// This requires --enable-feature=created-timestamp-zero-ingestion to be passed in Prometheus
r.Handle("/", promhttp.HandlerFor(prometheus.DefaultGatherer, promhttp.HandlerOpts{
EnableOpenMetrics: true,
EnableOpenMetricsTextCreatedSamples: true,
}))
return r
}
type prometheusMetrics struct {
dbTotal *prometheus.GaugeVec
versionInfo *prometheus.GaugeVec
lastMediaScan *prometheus.GaugeVec
mediaScansCounter *prometheus.CounterVec
dbTotal *prometheus.GaugeVec
versionInfo *prometheus.GaugeVec
lastMediaScan *prometheus.GaugeVec
mediaScansCounter *prometheus.CounterVec
httpRequestCounter *prometheus.CounterVec
httpRequestDuration *prometheus.SummaryVec
pluginRequestCounter *prometheus.CounterVec
pluginRequestDuration *prometheus.SummaryVec
}
// Prometheus metrics require initialization, but not more than once
var getPrometheusMetrics = sync.OnceValue(func() *prometheusMetrics {
quartilesToEstimate := map[float64]float64{0.5: 0.05, 0.75: 0.025, 0.9: 0.01, 0.99: 0.001}
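// In SummaryOpts.Objectives (the map above) the key is the target quantile and the value is
// its allowed absolute error: 0.9: 0.01 means the reported value lies somewhere between the
// true 0.89 and 0.91 quantiles.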
instance := &prometheusMetrics{
dbTotal: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
@@ -102,23 +150,49 @@ var getPrometheusMetrics = sync.OnceValue(func() *prometheusMetrics {
},
[]string{"success"},
),
httpRequestCounter: prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_request_count",
Help: "Request types by status",
},
[]string{"endpoint", "method", "client", "status"},
),
httpRequestDuration: prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "http_request_latency",
Help: "Latency (in ms) of HTTP requests",
Objectives: quartilesToEstimate,
},
[]string{"endpoint", "method", "client"},
),
pluginRequestCounter: prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "plugin_request_count",
Help: "Plugin requests by method/status",
},
[]string{"plugin", "method", "ok"},
),
pluginRequestDuration: prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "plugin_request_latency",
Help: "Latency (in ms) of plugin requests",
Objectives: quartilesToEstimate,
},
[]string{"plugin", "method"},
),
}
err := prometheus.DefaultRegisterer.Register(instance.dbTotal)
if err != nil {
log.Fatal("Unable to create Prometheus metric instance", fmt.Errorf("unable to register db_model_totals metrics: %w", err))
}
err = prometheus.DefaultRegisterer.Register(instance.versionInfo)
if err != nil {
log.Fatal("Unable to create Prometheus metric instance", fmt.Errorf("unable to register navidrome_info metrics: %w", err))
}
err = prometheus.DefaultRegisterer.Register(instance.lastMediaScan)
if err != nil {
log.Fatal("Unable to create Prometheus metric instance", fmt.Errorf("unable to register media_scan_last metrics: %w", err))
}
err = prometheus.DefaultRegisterer.Register(instance.mediaScansCounter)
if err != nil {
log.Fatal("Unable to create Prometheus metric instance", fmt.Errorf("unable to register media_scans metrics: %w", err))
}
prometheus.DefaultRegisterer.MustRegister(
instance.dbTotal,
instance.versionInfo,
instance.lastMediaScan,
instance.mediaScansCounter,
instance.httpRequestCounter,
instance.httpRequestDuration,
instance.pluginRequestCounter,
instance.pluginRequestDuration,
)
return instance
})
@@ -159,4 +233,8 @@ func (n noopMetrics) WriteInitialMetrics(context.Context) {}
func (n noopMetrics) WriteAfterScanMetrics(context.Context, bool) {}
func (n noopMetrics) RecordRequest(context.Context, string, string, string, int32, int64) {}
func (n noopMetrics) RecordPluginRequest(context.Context, string, string, bool, int64) {}
func (n noopMetrics) GetHandler() http.Handler { return nil }

View File

@@ -0,0 +1,46 @@
package core
import (
"context"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/tests"
)
// MockLibraryWrapper provides a simple wrapper around MockLibraryRepo
// that implements the core.Library interface for testing
type MockLibraryWrapper struct {
*tests.MockLibraryRepo
}
// MockLibraryRestAdapter adapts MockLibraryRepo to rest.Repository interface
type MockLibraryRestAdapter struct {
*tests.MockLibraryRepo
}
// NewMockLibraryService creates a new mock library service for testing
func NewMockLibraryService() Library {
repo := &tests.MockLibraryRepo{
Data: make(map[int]model.Library),
}
// Set up default test data
repo.SetData(model.Libraries{
{ID: 1, Name: "Test Library 1", Path: "/music/library1"},
{ID: 2, Name: "Test Library 2", Path: "/music/library2"},
})
return &MockLibraryWrapper{MockLibraryRepo: repo}
}
func (m *MockLibraryWrapper) NewRepository(ctx context.Context) rest.Repository {
return &MockLibraryRestAdapter{MockLibraryRepo: m.MockLibraryRepo}
}
// rest.Repository interface implementation
func (a *MockLibraryRestAdapter) Delete(id string) error {
return a.DeleteByStringID(id)
}
var _ Library = (*MockLibraryWrapper)(nil)
var _ rest.Repository = (*MockLibraryRestAdapter)(nil)

View File

@@ -10,11 +10,15 @@ import (
"strings"
"sync"
"github.com/kballard/go-shellquote"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
)
func start(ctx context.Context, args []string) (Executor, error) {
if len(args) == 0 {
return Executor{}, fmt.Errorf("no command arguments provided")
}
log.Debug("Executing mpv command", "cmd", args)
j := Executor{args: args}
j.PipeReader, j.out = io.Pipe()
@@ -71,28 +75,32 @@ func (j *Executor) wait() {
// Path will always be an absolute path
func createMPVCommand(deviceName string, filename string, socketName string) []string {
split := strings.Split(fixCmd(conf.Server.MPVCmdTemplate), " ")
for i, s := range split {
s = strings.ReplaceAll(s, "%d", deviceName)
s = strings.ReplaceAll(s, "%f", filename)
s = strings.ReplaceAll(s, "%s", socketName)
split[i] = s
// Parse the template structure using shell parsing to handle quoted arguments
templateArgs, err := shellquote.Split(conf.Server.MPVCmdTemplate)
if err != nil {
log.Error("Failed to parse MPV command template", "template", conf.Server.MPVCmdTemplate, err)
return nil
}
return split
}
func fixCmd(cmd string) string {
split := strings.Split(cmd, " ")
var result []string
cmdPath, _ := mpvCommand()
for _, s := range split {
if s == "mpv" || s == "mpv.exe" {
result = append(result, cmdPath)
} else {
result = append(result, s)
// Replace placeholders in each parsed argument to preserve spaces in substituted values
for i, arg := range templateArgs {
arg = strings.ReplaceAll(arg, "%d", deviceName)
arg = strings.ReplaceAll(arg, "%f", filename)
arg = strings.ReplaceAll(arg, "%s", socketName)
templateArgs[i] = arg
}
// Replace mpv executable references with the configured path
if len(templateArgs) > 0 {
cmdPath, err := mpvCommand()
if err == nil {
if templateArgs[0] == "mpv" || templateArgs[0] == "mpv.exe" {
templateArgs[0] = cmdPath
}
}
}
return strings.Join(result, " ")
return templateArgs
}
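// What shellquote buys over plain space splitting, in one example (illustrative values):
//
//	args, _ := shellquote.Split(`mpv --ao-pcm-file="/audio/my folder/fifo" %f`)
//	// args == []string{"mpv", "--ao-pcm-file=/audio/my folder/fifo", "%f"}
//
// The quoted path survives as a single argument, which is the case the quoted-path test in
// the new mpv test suite exercises.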
// This is a 1:1 copy of the code in ffmpeg.go; it needs to be unified.

View File

@@ -0,0 +1,17 @@
package mpv
import (
"testing"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestMPV(t *testing.T) {
tests.Init(t, false)
log.SetLevel(log.LevelFatal)
RegisterFailHandler(Fail)
RunSpecs(t, "MPV Suite")
}

View File

@@ -0,0 +1,390 @@
package mpv
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/model"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("MPV", func() {
var (
testScript string
tempDir string
)
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
// Reset MPV cache
mpvOnce = sync.Once{}
mpvPath = ""
mpvErr = nil
// Create temporary directory for test files
var err error
tempDir, err = os.MkdirTemp("", "mpv_test_*")
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() { os.RemoveAll(tempDir) })
// Create mock MPV script that outputs arguments to stdout
testScript = createMockMPVScript(tempDir)
// Configure test MPV path
conf.Server.MPVPath = testScript
})
Describe("createMPVCommand", func() {
Context("with default template", func() {
BeforeEach(func() {
conf.Server.MPVCmdTemplate = "mpv --audio-device=%d --no-audio-display --pause %f --input-ipc-server=%s"
})
It("creates correct command with simple paths", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
testScript,
"--audio-device=auto",
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--input-ipc-server=/tmp/socket",
}))
})
It("handles paths with spaces", func() {
args := createMPVCommand("auto", "/music/My Album/01 - Song.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
testScript,
"--audio-device=auto",
"--no-audio-display",
"--pause",
"/music/My Album/01 - Song.mp3",
"--input-ipc-server=/tmp/socket",
}))
})
It("handles complex device names", func() {
deviceName := "coreaudio/AppleUSBAudioEngine:Cambridge Audio :Cambridge Audio USB Audio 1.0:0000:1"
args := createMPVCommand(deviceName, "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
testScript,
"--audio-device=" + deviceName,
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--input-ipc-server=/tmp/socket",
}))
})
})
Context("with snapcast template (issue #3619)", func() {
BeforeEach(func() {
// This is the template that fails with naive space splitting
conf.Server.MPVCmdTemplate = "mpv --no-audio-display --pause %f --input-ipc-server=%s --audio-channels=stereo --audio-samplerate=48000 --audio-format=s16 --ao=pcm --ao-pcm-file=/audio/snapcast_fifo"
})
It("creates correct command for snapcast integration", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
testScript,
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--input-ipc-server=/tmp/socket",
"--audio-channels=stereo",
"--audio-samplerate=48000",
"--audio-format=s16",
"--ao=pcm",
"--ao-pcm-file=/audio/snapcast_fifo",
}))
})
})
Context("with wrapper script template", func() {
BeforeEach(func() {
// Test case that would break with naive splitting due to quoted arguments
conf.Server.MPVCmdTemplate = `/tmp/mpv.sh --no-audio-display --pause %f --input-ipc-server=%s --audio-channels=stereo`
})
It("handles wrapper script paths", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
"/tmp/mpv.sh",
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--input-ipc-server=/tmp/socket",
"--audio-channels=stereo",
}))
})
})
Context("with extra spaces in template", func() {
BeforeEach(func() {
conf.Server.MPVCmdTemplate = "mpv --audio-device=%d --no-audio-display --pause %f --input-ipc-server=%s"
})
It("handles extra spaces correctly", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{
testScript,
"--audio-device=auto",
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--input-ipc-server=/tmp/socket",
}))
})
})
Context("with paths containing spaces in template arguments", func() {
BeforeEach(func() {
// Template with spaces in the path arguments themselves
conf.Server.MPVCmdTemplate = `mpv --no-audio-display --pause %f --ao-pcm-file="/audio/my folder/snapcast_fifo" --input-ipc-server=%s`
})
It("handles spaces in quoted template argument paths", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
// This guards against the old naive space splitting, which would break the quoted path
// into several arguments; the expected behavior is to keep the path as one argument
Expect(args).To(Equal([]string{
testScript,
"--no-audio-display",
"--pause",
"/music/test.mp3",
"--ao-pcm-file=/audio/my folder/snapcast_fifo", // This should be one argument
"--input-ipc-server=/tmp/socket",
}))
})
})
Context("with malformed template", func() {
BeforeEach(func() {
// Template with unmatched quotes that will cause shell parsing to fail
conf.Server.MPVCmdTemplate = `mpv --no-audio-display --pause %f --input-ipc-server=%s --ao-pcm-file="/unclosed/quote`
})
It("returns nil when shell parsing fails", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(BeNil())
})
})
Context("with empty template", func() {
BeforeEach(func() {
conf.Server.MPVCmdTemplate = ""
})
It("returns empty slice for empty template", func() {
args := createMPVCommand("auto", "/music/test.mp3", "/tmp/socket")
Expect(args).To(Equal([]string{}))
})
})
})
Describe("start", func() {
BeforeEach(func() {
conf.Server.MPVCmdTemplate = "mpv --audio-device=%d --no-audio-display --pause %f --input-ipc-server=%s"
})
It("executes MPV command and captures arguments correctly", func() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
deviceName := "auto"
filename := "/music/test.mp3"
socketName := "/tmp/test_socket"
args := createMPVCommand(deviceName, filename, socketName)
executor, err := start(ctx, args)
Expect(err).ToNot(HaveOccurred())
// Read all the output from stdout (this will block until the process finishes or is canceled)
output, err := io.ReadAll(executor)
Expect(err).ToNot(HaveOccurred())
// Parse the captured arguments
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
Expect(lines).To(HaveLen(6))
Expect(lines[0]).To(Equal(testScript))
Expect(lines[1]).To(Equal("--audio-device=auto"))
Expect(lines[2]).To(Equal("--no-audio-display"))
Expect(lines[3]).To(Equal("--pause"))
Expect(lines[4]).To(Equal("/music/test.mp3"))
Expect(lines[5]).To(Equal("--input-ipc-server=/tmp/test_socket"))
})
It("handles file paths with spaces", func() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
deviceName := "auto"
filename := "/music/My Album/01 - My Song.mp3"
socketName := "/tmp/test socket"
args := createMPVCommand(deviceName, filename, socketName)
executor, err := start(ctx, args)
Expect(err).ToNot(HaveOccurred())
// Read all the output from stdout (this will block until the process finishes or is canceled)
output, err := io.ReadAll(executor)
Expect(err).ToNot(HaveOccurred())
// Parse the captured arguments
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
Expect(lines).To(ContainElement("/music/My Album/01 - My Song.mp3"))
Expect(lines).To(ContainElement("--input-ipc-server=/tmp/test socket"))
})
Context("with complex snapcast configuration", func() {
BeforeEach(func() {
conf.Server.MPVCmdTemplate = "mpv --no-audio-display --pause %f --input-ipc-server=%s --audio-channels=stereo --audio-samplerate=48000 --audio-format=s16 --ao=pcm --ao-pcm-file=/audio/snapcast_fifo"
})
It("passes all snapcast arguments correctly", func() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
deviceName := "auto"
filename := "/music/album/track.flac"
socketName := "/tmp/mpv-ctrl-test.socket"
args := createMPVCommand(deviceName, filename, socketName)
executor, err := start(ctx, args)
Expect(err).ToNot(HaveOccurred())
// Read all the output from stdout (this will block until the process finishes or is canceled)
output, err := io.ReadAll(executor)
Expect(err).ToNot(HaveOccurred())
// Parse the captured arguments
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
// Verify all expected arguments are present
Expect(lines).To(ContainElement("--no-audio-display"))
Expect(lines).To(ContainElement("--pause"))
Expect(lines).To(ContainElement("/music/album/track.flac"))
Expect(lines).To(ContainElement("--input-ipc-server=/tmp/mpv-ctrl-test.socket"))
Expect(lines).To(ContainElement("--audio-channels=stereo"))
Expect(lines).To(ContainElement("--audio-samplerate=48000"))
Expect(lines).To(ContainElement("--audio-format=s16"))
Expect(lines).To(ContainElement("--ao=pcm"))
Expect(lines).To(ContainElement("--ao-pcm-file=/audio/snapcast_fifo"))
})
})
Context("with nil args", func() {
It("returns error when args is nil", func() {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
_, err := start(ctx, nil)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("no command arguments provided"))
})
It("returns error when args is empty", func() {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
_, err := start(ctx, []string{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("no command arguments provided"))
})
})
})
Describe("mpvCommand", func() {
BeforeEach(func() {
// Reset the mpv command cache
mpvOnce = sync.Once{}
mpvPath = ""
mpvErr = nil
})
It("finds the configured MPV path", func() {
conf.Server.MPVPath = testScript
path, err := mpvCommand()
Expect(err).ToNot(HaveOccurred())
Expect(path).To(Equal(testScript))
})
})
Describe("NewTrack integration", func() {
var testMediaFile model.MediaFile
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.MPVPath = testScript
// Create a test media file
testMediaFile = model.MediaFile{
ID: "test-id",
Path: "/music/test.mp3",
}
})
Context("with malformed template", func() {
BeforeEach(func() {
// Template with unmatched quotes that will cause shell parsing to fail
conf.Server.MPVCmdTemplate = `mpv --no-audio-display --pause %f --input-ipc-server=%s --ao-pcm-file="/unclosed/quote`
})
It("returns error when createMPVCommand fails", func() {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
playbackDone := make(chan bool, 1)
_, err := NewTrack(ctx, playbackDone, "auto", testMediaFile)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("no mpv command arguments provided"))
})
})
})
})
// createMockMPVScript creates a mock script that outputs arguments to stdout
func createMockMPVScript(tempDir string) string {
var scriptContent string
var scriptExt string
if runtime.GOOS == "windows" {
scriptExt = ".bat"
scriptContent = `@echo off
echo %0
:loop
if "%~1"=="" goto end
echo %~1
shift
goto loop
:end
`
} else {
scriptExt = ".sh"
scriptContent = `#!/bin/sh
echo "$0"
for arg in "$@"; do
echo "$arg"
done
`
}
scriptPath := filepath.Join(tempDir, "mock_mpv"+scriptExt)
err := os.WriteFile(scriptPath, []byte(scriptContent), 0755) // nolint:gosec
if err != nil {
panic(fmt.Sprintf("Failed to create mock script: %v", err))
}
return scriptPath
}
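For reference, the snapcast flag assertions in the test above would be produced by an MPVCmdTemplate along these lines. This is only an illustrative guess at a matching configuration (the %f and %s placeholders are taken from the malformed-template test further down), not the template shipped with Navidrome:

package demo

import "github.com/navidrome/navidrome/conf"

func configureSnapcastMPV() {
	// Hypothetical template; %f is the file path and %s the IPC socket path.
	// The flag values mirror the assertions in the snapcast test above.
	conf.Server.MPVCmdTemplate = "mpv --no-audio-display --pause %f --input-ipc-server=%s " +
		"--audio-channels=stereo --audio-samplerate=48000 --audio-format=s16 " +
		"--ao=pcm --ao-pcm-file=/audio/snapcast_fifo"
}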

View File

@@ -34,7 +34,10 @@ func NewTrack(ctx context.Context, playbackDoneChannel chan bool, deviceName str
tmpSocketName := socketName("mpv-ctrl-", ".socket")
args := createMPVCommand(deviceName, mf.Path, tmpSocketName)
args := createMPVCommand(deviceName, mf.AbsolutePath(), tmpSocketName)
if len(args) == 0 {
return nil, fmt.Errorf("no mpv command arguments provided")
}
exe, err := start(ctx, args)
if err != nil {
log.Error("Error starting mpv process", err)

View File

@@ -20,7 +20,9 @@ import (
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/criteria"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/utils/ioutils"
"github.com/navidrome/navidrome/utils/slice"
"golang.org/x/text/unicode/norm"
)
type Playlists interface {
@@ -96,12 +98,13 @@ func (s *playlists) parsePlaylist(ctx context.Context, playlistFile string, fold
}
defer file.Close()
reader := ioutils.UTF8Reader(file)
extension := strings.ToLower(filepath.Ext(playlistFile))
switch extension {
case ".nsp":
err = s.parseNSP(ctx, pls, file)
err = s.parseNSP(ctx, pls, reader)
default:
err = s.parseM3U(ctx, pls, folder, file)
err = s.parseM3U(ctx, pls, folder, reader)
}
return pls, err
}
@@ -203,10 +206,10 @@ func (s *playlists) parseM3U(ctx context.Context, pls *model.Playlist, folder *m
}
existing := make(map[string]int, len(found))
for idx := range found {
existing[strings.ToLower(found[idx].Path)] = idx
existing[normalizePathForComparison(found[idx].Path)] = idx
}
for _, path := range paths {
idx, ok := existing[strings.ToLower(path)]
idx, ok := existing[normalizePathForComparison(path)]
if ok {
mfs = append(mfs, found[idx])
} else {
@@ -223,6 +226,13 @@ func (s *playlists) parseM3U(ctx context.Context, pls *model.Playlist, folder *m
return nil
}
// normalizePathForComparison normalizes a file path to NFC form and converts to lowercase
// for consistent comparison. This fixes Unicode normalization issues on macOS where
// Apple Music creates playlists with NFC-encoded paths but the filesystem uses NFD.
func normalizePathForComparison(path string) string {
return strings.ToLower(norm.NFC.String(path))
}
// TODO This won't work for multiple libraries
func (s *playlists) normalizePaths(ctx context.Context, pls *model.Playlist, folder *model.Folder, lines []string) ([]string, error) {
libRegex, err := s.compileLibraryPaths(ctx)
@@ -326,7 +336,7 @@ func (s *playlists) Update(ctx context.Context, playlistID string,
if needsTrackRefresh {
pls, err = repo.GetWithTracks(playlistID, true, false)
pls.RemoveTracks(idxToRemove)
pls.AddTracks(idsToAdd)
pls.AddMediaFilesByID(idsToAdd)
} else {
if len(idsToAdd) > 0 {
_, err = tracks.Add(idsToAdd)
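To make the effect of normalizePathForComparison concrete, here is a minimal, self-contained sketch. The normalize helper below is hypothetical and only mirrors the approach of the change above; it assumes golang.org/x/text is available:

package main

import (
	"fmt"
	"strings"

	"golang.org/x/text/unicode/norm"
)

// normalize mirrors normalizePathForComparison: NFC form plus lowercase.
func normalize(path string) string {
	return strings.ToLower(norm.NFC.String(path))
}

func main() {
	nfc := norm.NFC.String("Michèle/Noël.m4a") // as written by Apple Music playlists
	nfd := norm.NFD.String("Michèle/Noël.m4a") // as stored by the macOS filesystem
	fmt.Println(nfc == nfd)                       // false: different byte sequences
	fmt.Println(normalize(nfc) == normalize(nfd)) // true: equal after normalization
}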

View File

@@ -15,6 +15,7 @@ import (
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"golang.org/x/text/unicode/norm"
)
var _ = Describe("Playlists", func() {
@@ -73,6 +74,24 @@ var _ = Describe("Playlists", func() {
Expect(err).ToNot(HaveOccurred())
Expect(pls.Tracks).To(HaveLen(2))
})
It("parses playlists with UTF-8 BOM marker", func() {
pls, err := ps.ImportFile(ctx, folder, "bom-test.m3u")
Expect(err).ToNot(HaveOccurred())
Expect(pls.OwnerID).To(Equal("123"))
Expect(pls.Name).To(Equal("Test Playlist"))
Expect(pls.Tracks).To(HaveLen(1))
Expect(pls.Tracks[0].Path).To(Equal("tests/fixtures/playlists/test.mp3"))
})
It("parses UTF-16 LE encoded playlists with BOM and converts to UTF-8", func() {
pls, err := ps.ImportFile(ctx, folder, "bom-test-utf16.m3u")
Expect(err).ToNot(HaveOccurred())
Expect(pls.OwnerID).To(Equal("123"))
Expect(pls.Name).To(Equal("UTF-16 Test Playlist"))
Expect(pls.Tracks).To(HaveLen(1))
Expect(pls.Tracks[0].Path).To(Equal("tests/fixtures/playlists/test.mp3"))
})
})
Describe("NSP", func() {
@@ -186,6 +205,54 @@ var _ = Describe("Playlists", func() {
Expect(pls.Tracks).To(HaveLen(1))
Expect(pls.Tracks[0].Path).To(Equal("abc/tEsT1.Mp3"))
})
It("handles Unicode normalization when comparing paths", func() {
// Test case for Apple Music playlists that use NFC encoding vs macOS filesystem NFD
// The character "è" can be represented as NFC (single codepoint) or NFD (e + combining accent)
const pathWithAccents = "artist/Michèle Desrosiers/album/Noël.m4a"
// Simulate a database entry with NFD encoding (as stored by macOS filesystem)
nfdPath := norm.NFD.String(pathWithAccents)
repo.data = []string{nfdPath}
// Simulate an Apple Music M3U playlist entry with NFC encoding
nfcPath := norm.NFC.String("/music/" + pathWithAccents)
m3u := strings.Join([]string{
nfcPath,
}, "\n")
f := strings.NewReader(m3u)
pls, err := ps.ImportM3U(ctx, f)
Expect(err).ToNot(HaveOccurred())
Expect(pls.Tracks).To(HaveLen(1), "Should find the track despite Unicode normalization differences")
Expect(pls.Tracks[0].Path).To(Equal(nfdPath))
})
})
Describe("normalizePathForComparison", func() {
It("normalizes Unicode characters to NFC form and converts to lowercase", func() {
// Test with NFD (decomposed) input - as would come from macOS filesystem
nfdPath := norm.NFD.String("Michèle") // Explicitly convert to NFD form
normalized := normalizePathForComparison(nfdPath)
Expect(normalized).To(Equal("michèle"))
// Test with NFC (composed) input - as would come from Apple Music M3U
nfcPath := "Michèle" // This might be in NFC form
normalizedNfc := normalizePathForComparison(nfcPath)
// Ensure the two paths are not equal in their original forms
Expect(nfdPath).ToNot(Equal(nfcPath))
// Both should normalize to the same result
Expect(normalized).To(Equal(normalizedNfc))
})
It("handles paths with mixed case and Unicode characters", func() {
path := "Artist/Noël Coward/Album/Song.mp3"
normalized := normalizePathForComparison(path)
Expect(normalized).To(Equal("artist/noël coward/album/song.mp3"))
})
})
Describe("InPlaylistsPath", func() {

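The ioutils.UTF8Reader used by parsePlaylist is not shown in this diff. As a rough sketch, a BOM-aware reader can be built with golang.org/x/text, which detects a leading UTF-8 or UTF-16 BOM and transparently converts the stream to UTF-8. This is one possible approach under that assumption, not necessarily how the ioutils package implements it:

package demo

import (
	"io"

	"golang.org/x/text/encoding/unicode"
	"golang.org/x/text/transform"
)

// utf8Reader sketches what a helper like ioutils.UTF8Reader could do: strip a
// leading UTF-8 BOM and convert UTF-16 (LE/BE) input to UTF-8 when a BOM
// indicates one of those encodings.
func utf8Reader(r io.Reader) io.Reader {
	// BOMOverride switches to the encoding indicated by a leading BOM (and
	// consumes it); without a BOM it falls back to the plain UTF-8 decoder.
	return transform.NewReader(r, unicode.BOMOverride(unicode.UTF8.NewDecoder()))
}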
View File

@@ -10,9 +10,16 @@ import (
)
func newBufferedScrobbler(ds model.DataStore, s Scrobbler, service string) *bufferedScrobbler {
b := &bufferedScrobbler{ds: ds, wrapped: s, service: service}
b.wakeSignal = make(chan struct{}, 1)
go b.run(context.TODO())
ctx, cancel := context.WithCancel(context.Background())
b := &bufferedScrobbler{
ds: ds,
wrapped: s,
service: service,
wakeSignal: make(chan struct{}, 1),
ctx: ctx,
cancel: cancel,
}
go b.run(ctx)
return b
}
@@ -21,14 +28,22 @@ type bufferedScrobbler struct {
wrapped Scrobbler
service string
wakeSignal chan struct{}
ctx context.Context
cancel context.CancelFunc
}
func (b *bufferedScrobbler) Stop() {
if b.cancel != nil {
b.cancel()
}
}
func (b *bufferedScrobbler) IsAuthorized(ctx context.Context, userId string) bool {
return b.wrapped.IsAuthorized(ctx, userId)
}
func (b *bufferedScrobbler) NowPlaying(ctx context.Context, userId string, track *model.MediaFile) error {
return b.wrapped.NowPlaying(ctx, userId, track)
func (b *bufferedScrobbler) NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error {
return b.wrapped.NowPlaying(ctx, userId, track, position)
}
func (b *bufferedScrobbler) Scrobble(ctx context.Context, userId string, s Scrobble) error {

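The change above gives the buffered scrobbler an owned context and a Stop method so its background goroutine can be shut down cleanly. As a generic illustration of that pattern (the names here are made up, not the Navidrome types):

package demo

import (
	"context"
	"time"
)

type worker struct {
	ctx    context.Context
	cancel context.CancelFunc
}

func newWorker() *worker {
	ctx, cancel := context.WithCancel(context.Background())
	w := &worker{ctx: ctx, cancel: cancel}
	go w.run(ctx) // background loop exits when ctx is canceled
	return w
}

func (w *worker) run(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return // Stop was called
		case <-time.After(time.Second):
			// periodic work would go here
		}
	}
}

// Stop cancels the worker's context, unblocking run so the goroutine exits.
func (w *worker) Stop() { w.cancel() }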
View File

@@ -0,0 +1,88 @@
package scrobbler
import (
"context"
"time"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("BufferedScrobbler", func() {
var ds model.DataStore
var scr *fakeScrobbler
var bs *bufferedScrobbler
var ctx context.Context
var buffer *tests.MockedScrobbleBufferRepo
BeforeEach(func() {
ctx = context.Background()
buffer = tests.CreateMockedScrobbleBufferRepo()
ds = &tests.MockDataStore{
MockedScrobbleBuffer: buffer,
}
scr = &fakeScrobbler{Authorized: true}
bs = newBufferedScrobbler(ds, scr, "test")
})
It("forwards IsAuthorized calls", func() {
scr.Authorized = true
Expect(bs.IsAuthorized(ctx, "user1")).To(BeTrue())
scr.Authorized = false
Expect(bs.IsAuthorized(ctx, "user1")).To(BeFalse())
})
It("forwards NowPlaying calls", func() {
track := &model.MediaFile{ID: "123", Title: "Test Track"}
Expect(bs.NowPlaying(ctx, "user1", track, 0)).To(Succeed())
Expect(scr.NowPlayingCalled).To(BeTrue())
Expect(scr.UserID).To(Equal("user1"))
Expect(scr.Track).To(Equal(track))
})
It("enqueues scrobbles to buffer", func() {
track := model.MediaFile{ID: "123", Title: "Test Track"}
now := time.Now()
scrobble := Scrobble{MediaFile: track, TimeStamp: now}
Expect(buffer.Length()).To(Equal(int64(0)))
Expect(scr.ScrobbleCalled.Load()).To(BeFalse())
Expect(bs.Scrobble(ctx, "user1", scrobble)).To(Succeed())
Expect(buffer.Length()).To(Equal(int64(1)))
// Wait for the scrobble to be sent
Eventually(scr.ScrobbleCalled.Load).Should(BeTrue())
lastScrobble := scr.LastScrobble.Load()
Expect(lastScrobble.MediaFile.ID).To(Equal("123"))
Expect(lastScrobble.TimeStamp).To(BeTemporally("==", now))
})
It("stops the background goroutine when Stop is called", func() {
// Launch a second run goroutine on the tracker's context, signaling on `done` when it exits
done := make(chan struct{})
go func() {
defer close(done)
bs.run(bs.ctx)
}()
// Wait a bit to ensure the goroutine is running
time.Sleep(10 * time.Millisecond)
// Call the real Stop method
bs.Stop()
// Wait for the goroutine to exit or timeout
select {
case <-done:
// Success, goroutine exited
case <-time.After(100 * time.Millisecond):
Fail("Goroutine did not exit in time after Stop was called")
}
})
})

View File

@@ -21,7 +21,7 @@ var (
type Scrobbler interface {
IsAuthorized(ctx context.Context, userId string) bool
NowPlaying(ctx context.Context, userId string, track *model.MediaFile) error
NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error
Scrobble(ctx context.Context, userId string, s Scrobble) error
}

View File

@@ -2,9 +2,12 @@ package scrobbler
import (
"context"
"maps"
"sort"
"sync"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
@@ -17,6 +20,7 @@ import (
type NowPlayingInfo struct {
MediaFile model.MediaFile
Start time.Time
Position int
Username string
PlayerId string
PlayerName string
@@ -28,30 +32,52 @@ type Submission struct {
}
type PlayTracker interface {
NowPlaying(ctx context.Context, playerId string, playerName string, trackId string) error
NowPlaying(ctx context.Context, playerId string, playerName string, trackId string, position int) error
GetNowPlaying(ctx context.Context) ([]NowPlayingInfo, error)
Submit(ctx context.Context, submissions []Submission) error
}
type playTracker struct {
ds model.DataStore
broker events.Broker
playMap cache.SimpleCache[string, NowPlayingInfo]
scrobblers map[string]Scrobbler
// PluginLoader is a minimal interface for plugin manager usage in PlayTracker
// (avoids import cycles)
type PluginLoader interface {
PluginNames(capability string) []string
LoadScrobbler(name string) (Scrobbler, bool)
}
func GetPlayTracker(ds model.DataStore, broker events.Broker) PlayTracker {
type playTracker struct {
ds model.DataStore
broker events.Broker
playMap cache.SimpleCache[string, NowPlayingInfo]
builtinScrobblers map[string]Scrobbler
pluginScrobblers map[string]Scrobbler
pluginLoader PluginLoader
mu sync.RWMutex
}
func GetPlayTracker(ds model.DataStore, broker events.Broker, pluginManager PluginLoader) PlayTracker {
return singleton.GetInstance(func() *playTracker {
return newPlayTracker(ds, broker)
return newPlayTracker(ds, broker, pluginManager)
})
}
// This constructor only exists for testing. For normal usage, the PlayTracker has to be a singleton, returned by
// the GetPlayTracker function above
func newPlayTracker(ds model.DataStore, broker events.Broker) *playTracker {
func newPlayTracker(ds model.DataStore, broker events.Broker, pluginManager PluginLoader) *playTracker {
m := cache.NewSimpleCache[string, NowPlayingInfo]()
p := &playTracker{ds: ds, playMap: m, broker: broker}
p.scrobblers = make(map[string]Scrobbler)
p := &playTracker{
ds: ds,
playMap: m,
broker: broker,
builtinScrobblers: make(map[string]Scrobbler),
pluginScrobblers: make(map[string]Scrobbler),
pluginLoader: pluginManager,
}
if conf.Server.EnableNowPlaying {
m.OnExpiration(func(_ string, _ NowPlayingInfo) {
broker.SendBroadcastMessage(context.Background(), &events.NowPlayingCount{Count: m.Len()})
})
}
var enabled []string
for name, constructor := range constructors {
s := constructor(ds)
@@ -61,13 +87,87 @@ func newPlayTracker(ds model.DataStore, broker events.Broker) *playTracker {
}
enabled = append(enabled, name)
s = newBufferedScrobbler(ds, s, name)
p.scrobblers[name] = s
p.builtinScrobblers[name] = s
}
log.Debug("List of scrobblers enabled", "names", enabled)
log.Debug("List of builtin scrobblers enabled", "names", enabled)
return p
}
func (p *playTracker) NowPlaying(ctx context.Context, playerId string, playerName string, trackId string) error {
// pluginNamesMatchScrobblers returns true if the set of pluginNames matches the keys in pluginScrobblers
func pluginNamesMatchScrobblers(pluginNames []string, scrobblers map[string]Scrobbler) bool {
if len(pluginNames) != len(scrobblers) {
return false
}
for _, name := range pluginNames {
if _, ok := scrobblers[name]; !ok {
return false
}
}
return true
}
// refreshPluginScrobblers updates the pluginScrobblers map to match the current set of plugin scrobblers
func (p *playTracker) refreshPluginScrobblers() {
p.mu.Lock()
defer p.mu.Unlock()
if p.pluginLoader == nil {
return
}
// Get the list of available plugin names
pluginNames := p.pluginLoader.PluginNames("Scrobbler")
// Early return if plugin names match existing scrobblers (no change)
if pluginNamesMatchScrobblers(pluginNames, p.pluginScrobblers) {
return
}
// Build a set of current plugins for faster lookups
current := make(map[string]struct{}, len(pluginNames))
// Process additions - add new plugins
for _, name := range pluginNames {
current[name] = struct{}{}
// Only create a new scrobbler if it doesn't exist
if _, exists := p.pluginScrobblers[name]; !exists {
s, ok := p.pluginLoader.LoadScrobbler(name)
if ok && s != nil {
p.pluginScrobblers[name] = newBufferedScrobbler(p.ds, s, name)
}
}
}
type stoppableScrobbler interface {
Scrobbler
Stop()
}
// Process removals - remove plugins that no longer exist
for name, scrobbler := range p.pluginScrobblers {
if _, exists := current[name]; !exists {
// If the scrobbler implements stoppableScrobbler, call Stop() before removing it
if stoppable, ok := scrobbler.(stoppableScrobbler); ok {
log.Debug("Stopping scrobbler", "name", name)
stoppable.Stop()
}
delete(p.pluginScrobblers, name)
}
}
}
// getActiveScrobblers refreshes plugin scrobblers, acquires a read lock,
// combines builtin and plugin scrobblers into a new map, releases the lock,
// and returns the combined map.
func (p *playTracker) getActiveScrobblers() map[string]Scrobbler {
p.refreshPluginScrobblers()
p.mu.RLock()
defer p.mu.RUnlock()
combined := maps.Clone(p.builtinScrobblers)
maps.Copy(combined, p.pluginScrobblers)
return combined
}
func (p *playTracker) NowPlaying(ctx context.Context, playerId string, playerName string, trackId string, position int) error {
mf, err := p.ds.MediaFile(ctx).GetWithParticipants(trackId)
if err != nil {
log.Error(ctx, "Error retrieving mediaFile", "id", trackId, err)
@@ -78,31 +178,43 @@ func (p *playTracker) NowPlaying(ctx context.Context, playerId string, playerNam
info := NowPlayingInfo{
MediaFile: *mf,
Start: time.Now(),
Position: position,
Username: user.UserName,
PlayerId: playerId,
PlayerName: playerName,
}
ttl := time.Duration(int(mf.Duration)+5) * time.Second
// Calculate TTL based on remaining track duration. If position exceeds track duration,
// remaining is set to 0 to avoid negative TTL.
remaining := int(mf.Duration) - position
if remaining < 0 {
remaining = 0
}
// Add a 5-second buffer so the NowPlaying info stays available slightly longer than the remaining playback time.
ttl := time.Duration(remaining+5) * time.Second
_ = p.playMap.AddWithTTL(playerId, info, ttl)
if conf.Server.EnableNowPlaying {
p.broker.SendBroadcastMessage(ctx, &events.NowPlayingCount{Count: p.playMap.Len()})
}
player, _ := request.PlayerFrom(ctx)
if player.ScrobbleEnabled {
p.dispatchNowPlaying(ctx, user.ID, mf)
p.dispatchNowPlaying(ctx, user.ID, mf, position)
}
return nil
}
func (p *playTracker) dispatchNowPlaying(ctx context.Context, userId string, t *model.MediaFile) {
func (p *playTracker) dispatchNowPlaying(ctx context.Context, userId string, t *model.MediaFile, position int) {
if t.Artist == consts.UnknownArtist {
log.Debug(ctx, "Ignoring external NowPlaying update for track with unknown artist", "track", t.Title, "artist", t.Artist)
return
}
for name, s := range p.scrobblers {
allScrobblers := p.getActiveScrobblers()
for name, s := range allScrobblers {
if !s.IsAuthorized(ctx, userId) {
continue
}
log.Debug(ctx, "Sending NowPlaying update", "scrobbler", name, "track", t.Title, "artist", t.Artist)
err := s.NowPlaying(ctx, userId, t)
log.Debug(ctx, "Sending NowPlaying update", "scrobbler", name, "track", t.Title, "artist", t.Artist, "position", position)
err := s.NowPlaying(ctx, userId, t, position)
if err != nil {
log.Error(ctx, "Error sending NowPlayingInfo", "scrobbler", name, "track", t.Title, "artist", t.Artist, err)
continue
@@ -174,9 +286,11 @@ func (p *playTracker) dispatchScrobble(ctx context.Context, t *model.MediaFile,
log.Debug(ctx, "Ignoring external Scrobble for track with unknown artist", "track", t.Title, "artist", t.Artist)
return
}
allScrobblers := p.getActiveScrobblers()
u, _ := request.UserFrom(ctx)
scrobble := Scrobble{MediaFile: *t, TimeStamp: playTime}
for name, s := range p.scrobblers {
for name, s := range allScrobblers {
if !s.IsAuthorized(ctx, u.ID) {
continue
}
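getActiveScrobblers combines the built-in and plugin maps with the standard maps package while holding only a read lock, and hands callers a copy they can iterate without further locking. A tiny standalone sketch of that merge (illustrative names and value types only):

package demo

import (
	"maps"
	"sync"
)

type registry struct {
	mu      sync.RWMutex
	builtin map[string]string
	plugins map[string]string
}

func newRegistry() *registry {
	return &registry{
		builtin: map[string]string{},
		plugins: map[string]string{},
	}
}

// combined returns a fresh map with builtin entries plus plugin entries;
// plugin entries win on key collisions. Callers may use the result without
// holding the lock, because it is a copy.
func (r *registry) combined() map[string]string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := maps.Clone(r.builtin)
	maps.Copy(out, r.plugins)
	return out
}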

View File

@@ -3,8 +3,13 @@ package scrobbler
import (
"context"
"errors"
"net/http"
"sync"
"sync/atomic"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/model"
@@ -15,10 +20,28 @@ import (
. "github.com/onsi/gomega"
)
// mockPluginLoader is a test implementation of PluginLoader for plugin scrobbler tests
// Moved to top-level scope to avoid linter issues
type mockPluginLoader struct {
names []string
scrobblers map[string]Scrobbler
}
func (m *mockPluginLoader) PluginNames(service string) []string {
return m.names
}
func (m *mockPluginLoader) LoadScrobbler(name string) (Scrobbler, bool) {
s, ok := m.scrobblers[name]
return s, ok
}
var _ = Describe("PlayTracker", func() {
var ctx context.Context
var ds model.DataStore
var tracker PlayTracker
var eventBroker *fakeEventBroker
var track model.MediaFile
var album model.Album
var artist1 model.Artist
@@ -26,6 +49,7 @@ var _ = Describe("PlayTracker", func() {
var fake fakeScrobbler
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
ctx = context.Background()
ctx = request.WithUser(ctx, model.User{ID: "u-1"})
ctx = request.WithPlayer(ctx, model.Player{ScrobbleEnabled: true})
@@ -37,8 +61,9 @@ var _ = Describe("PlayTracker", func() {
Register("disabled", func(model.DataStore) Scrobbler {
return nil
})
tracker = newPlayTracker(ds, events.GetBroker())
tracker.(*playTracker).scrobblers["fake"] = &fake // Bypass buffering for tests
eventBroker = &fakeEventBroker{}
tracker = newPlayTracker(ds, eventBroker, nil)
tracker.(*playTracker).builtinScrobblers["fake"] = &fake // Bypass buffering for tests
track = model.MediaFile{
ID: "123",
@@ -62,13 +87,13 @@ var _ = Describe("PlayTracker", func() {
})
It("does not register disabled scrobblers", func() {
Expect(tracker.(*playTracker).scrobblers).To(HaveKey("fake"))
Expect(tracker.(*playTracker).scrobblers).ToNot(HaveKey("disabled"))
Expect(tracker.(*playTracker).builtinScrobblers).To(HaveKey("fake"))
Expect(tracker.(*playTracker).builtinScrobblers).ToNot(HaveKey("disabled"))
})
Describe("NowPlaying", func() {
It("sends track to agent", func() {
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123")
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(fake.NowPlayingCalled).To(BeTrue())
Expect(fake.UserID).To(Equal("u-1"))
@@ -78,7 +103,7 @@ var _ = Describe("PlayTracker", func() {
It("does not send track to agent if user has not authorized", func() {
fake.Authorized = false
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123")
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(fake.NowPlayingCalled).To(BeFalse())
@@ -86,7 +111,7 @@ var _ = Describe("PlayTracker", func() {
It("does not send track to agent if player is not enabled to send scrobbles", func() {
ctx = request.WithPlayer(ctx, model.Player{ScrobbleEnabled: false})
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123")
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(fake.NowPlayingCalled).To(BeFalse())
@@ -94,11 +119,40 @@ var _ = Describe("PlayTracker", func() {
It("does not send track to agent if artist is unknown", func() {
track.Artist = consts.UnknownArtist
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123")
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(fake.NowPlayingCalled).To(BeFalse())
})
It("stores position when greater than zero", func() {
pos := 42
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", pos)
Expect(err).ToNot(HaveOccurred())
playing, err := tracker.GetNowPlaying(ctx)
Expect(err).ToNot(HaveOccurred())
Expect(playing).To(HaveLen(1))
Expect(playing[0].Position).To(Equal(pos))
Expect(fake.Position).To(Equal(pos))
})
It("sends event with count", func() {
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
eventList := eventBroker.getEvents()
Expect(eventList).ToNot(BeEmpty())
evt, ok := eventList[0].(*events.NowPlayingCount)
Expect(ok).To(BeTrue())
Expect(evt.Count).To(Equal(1))
})
It("does not send event when disabled", func() {
conf.Server.EnableNowPlaying = false
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(eventBroker.getEvents()).To(BeEmpty())
})
})
Describe("GetNowPlaying", func() {
@@ -107,9 +161,9 @@ var _ = Describe("PlayTracker", func() {
track2.ID = "456"
_ = ds.MediaFile(ctx).Put(&track2)
ctx = request.WithUser(context.Background(), model.User{UserName: "user-1"})
_ = tracker.NowPlaying(ctx, "player-1", "player-one", "123")
_ = tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
ctx = request.WithUser(context.Background(), model.User{UserName: "user-2"})
_ = tracker.NowPlaying(ctx, "player-2", "player-two", "456")
_ = tracker.NowPlaying(ctx, "player-2", "player-two", "456", 0)
playing, err := tracker.GetNowPlaying(ctx)
@@ -127,6 +181,26 @@ var _ = Describe("PlayTracker", func() {
})
})
Describe("Expiration events", func() {
It("sends event when entry expires", func() {
info := NowPlayingInfo{MediaFile: track, Start: time.Now(), Username: "user"}
_ = tracker.(*playTracker).playMap.AddWithTTL("player-1", info, 10*time.Millisecond)
Eventually(func() int { return len(eventBroker.getEvents()) }).Should(BeNumerically(">", 0))
eventList := eventBroker.getEvents()
evt, ok := eventList[len(eventList)-1].(*events.NowPlayingCount)
Expect(ok).To(BeTrue())
Expect(evt.Count).To(Equal(0))
})
It("does not send event when disabled", func() {
conf.Server.EnableNowPlaying = false
tracker = newPlayTracker(ds, eventBroker, nil)
info := NowPlayingInfo{MediaFile: track, Start: time.Now(), Username: "user"}
_ = tracker.(*playTracker).playMap.AddWithTTL("player-2", info, 10*time.Millisecond)
Consistently(func() int { return len(eventBroker.getEvents()) }).Should(Equal(0))
})
})
Describe("Submit", func() {
It("sends track to agent", func() {
ctx = request.WithUser(ctx, model.User{ID: "u-1", UserName: "user-1"})
@@ -135,10 +209,12 @@ var _ = Describe("PlayTracker", func() {
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: ts}})
Expect(err).ToNot(HaveOccurred())
Expect(fake.ScrobbleCalled).To(BeTrue())
Expect(fake.ScrobbleCalled.Load()).To(BeTrue())
Expect(fake.UserID).To(Equal("u-1"))
Expect(fake.LastScrobble.ID).To(Equal("123"))
Expect(fake.LastScrobble.Participants).To(Equal(track.Participants))
lastScrobble := fake.LastScrobble.Load()
Expect(lastScrobble.TimeStamp).To(BeTemporally("~", ts, 1*time.Second))
Expect(lastScrobble.ID).To(Equal("123"))
Expect(lastScrobble.Participants).To(Equal(track.Participants))
})
It("increments play counts in the DB", func() {
@@ -162,7 +238,7 @@ var _ = Describe("PlayTracker", func() {
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: time.Now()}})
Expect(err).ToNot(HaveOccurred())
Expect(fake.ScrobbleCalled).To(BeFalse())
Expect(fake.ScrobbleCalled.Load()).To(BeFalse())
})
It("does not send track to agent if player is not enabled to send scrobbles", func() {
@@ -171,7 +247,7 @@ var _ = Describe("PlayTracker", func() {
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: time.Now()}})
Expect(err).ToNot(HaveOccurred())
Expect(fake.ScrobbleCalled).To(BeFalse())
Expect(fake.ScrobbleCalled.Load()).To(BeFalse())
})
It("does not send track to agent if artist is unknown", func() {
@@ -180,7 +256,7 @@ var _ = Describe("PlayTracker", func() {
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: time.Now()}})
Expect(err).ToNot(HaveOccurred())
Expect(fake.ScrobbleCalled).To(BeFalse())
Expect(fake.ScrobbleCalled.Load()).To(BeFalse())
})
It("increments play counts even if it cannot scrobble", func() {
@@ -189,7 +265,7 @@ var _ = Describe("PlayTracker", func() {
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: time.Now()}})
Expect(err).ToNot(HaveOccurred())
Expect(fake.ScrobbleCalled).To(BeFalse())
Expect(fake.ScrobbleCalled.Load()).To(BeFalse())
Expect(track.PlayCount).To(Equal(int64(1)))
Expect(album.PlayCount).To(Equal(int64(1)))
@@ -200,15 +276,111 @@ var _ = Describe("PlayTracker", func() {
})
})
Describe("Plugin scrobbler logic", func() {
var pluginLoader *mockPluginLoader
var pluginFake fakeScrobbler
BeforeEach(func() {
pluginFake = fakeScrobbler{Authorized: true}
pluginLoader = &mockPluginLoader{
names: []string{"plugin1"},
scrobblers: map[string]Scrobbler{"plugin1": &pluginFake},
}
tracker = newPlayTracker(ds, events.GetBroker(), pluginLoader)
// Bypass buffering for both built-in and plugin scrobblers
tracker.(*playTracker).builtinScrobblers["fake"] = &fake
tracker.(*playTracker).pluginScrobblers["plugin1"] = &pluginFake
})
It("registers and uses plugin scrobbler for NowPlaying", func() {
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(pluginFake.NowPlayingCalled).To(BeTrue())
})
It("removes plugin scrobbler if not present anymore", func() {
// First call: plugin present
_ = tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(pluginFake.NowPlayingCalled).To(BeTrue())
pluginFake.NowPlayingCalled = false
// Remove plugin
pluginLoader.names = []string{}
_ = tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(pluginFake.NowPlayingCalled).To(BeFalse())
})
It("calls both builtin and plugin scrobblers for NowPlaying", func() {
fake.NowPlayingCalled = false
pluginFake.NowPlayingCalled = false
err := tracker.NowPlaying(ctx, "player-1", "player-one", "123", 0)
Expect(err).ToNot(HaveOccurred())
Expect(fake.NowPlayingCalled).To(BeTrue())
Expect(pluginFake.NowPlayingCalled).To(BeTrue())
})
It("calls plugin scrobbler for Submit", func() {
ts := time.Now()
err := tracker.Submit(ctx, []Submission{{TrackID: "123", Timestamp: ts}})
Expect(err).ToNot(HaveOccurred())
Expect(pluginFake.ScrobbleCalled.Load()).To(BeTrue())
})
})
Describe("Plugin Scrobbler Management", func() {
var pluginScr *fakeScrobbler
var mockPlugin *mockPluginLoader
var pTracker *playTracker
var mockedBS *mockBufferedScrobbler
BeforeEach(func() {
ctx = context.Background()
ctx = request.WithUser(ctx, model.User{ID: "u-1"})
ctx = request.WithPlayer(ctx, model.Player{ScrobbleEnabled: true})
ds = &tests.MockDataStore{}
// Setup plugin scrobbler
pluginScr = &fakeScrobbler{Authorized: true}
mockPlugin = &mockPluginLoader{
names: []string{"plugin1"},
scrobblers: map[string]Scrobbler{"plugin1": pluginScr},
}
// Create a tracker with the mock plugin loader
pTracker = newPlayTracker(ds, events.GetBroker(), mockPlugin)
// Create a mock buffered scrobbler that records whether Stop was called
mockedBS = &mockBufferedScrobbler{
wrapped: pluginScr,
}
// Make sure the instance is added with its concrete type preserved
pTracker.pluginScrobblers["plugin1"] = mockedBS
})
It("calls Stop on scrobblers when removing them", func() {
// Change the plugin names to simulate a plugin being removed
mockPlugin.names = []string{}
// Call refreshPluginScrobblers which should detect the removed plugin
pTracker.refreshPluginScrobblers()
// Verify the Stop method was called
Expect(mockedBS.stopCalled).To(BeTrue())
// Verify the scrobbler was removed from the map
Expect(pTracker.pluginScrobblers).NotTo(HaveKey("plugin1"))
})
})
})
type fakeScrobbler struct {
Authorized bool
NowPlayingCalled bool
ScrobbleCalled bool
ScrobbleCalled atomic.Bool
UserID string
Track *model.MediaFile
LastScrobble Scrobble
Position int
LastScrobble atomic.Pointer[Scrobble]
Error error
}
@@ -216,23 +388,24 @@ func (f *fakeScrobbler) IsAuthorized(ctx context.Context, userId string) bool {
return f.Error == nil && f.Authorized
}
func (f *fakeScrobbler) NowPlaying(ctx context.Context, userId string, track *model.MediaFile) error {
func (f *fakeScrobbler) NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error {
f.NowPlayingCalled = true
if f.Error != nil {
return f.Error
}
f.UserID = userId
f.Track = track
f.Position = position
return nil
}
func (f *fakeScrobbler) Scrobble(ctx context.Context, userId string, s Scrobble) error {
f.ScrobbleCalled = true
f.UserID = userId
f.LastScrobble.Store(&s)
f.ScrobbleCalled.Store(true)
if f.Error != nil {
return f.Error
}
f.UserID = userId
f.LastScrobble = s
return nil
}
@@ -243,3 +416,51 @@ func _p(id, name string, sortName ...string) model.Participant {
}
return p
}
type fakeEventBroker struct {
http.Handler
events []events.Event
mu sync.Mutex
}
func (f *fakeEventBroker) SendMessage(_ context.Context, event events.Event) {
f.mu.Lock()
defer f.mu.Unlock()
f.events = append(f.events, event)
}
func (f *fakeEventBroker) SendBroadcastMessage(_ context.Context, event events.Event) {
f.mu.Lock()
defer f.mu.Unlock()
f.events = append(f.events, event)
}
func (f *fakeEventBroker) getEvents() []events.Event {
f.mu.Lock()
defer f.mu.Unlock()
return f.events
}
var _ events.Broker = (*fakeEventBroker)(nil)
// mockBufferedScrobbler is used to verify that Stop is called when a plugin scrobbler is removed
type mockBufferedScrobbler struct {
wrapped Scrobbler
stopCalled bool
}
func (m *mockBufferedScrobbler) Stop() {
m.stopCalled = true
}
func (m *mockBufferedScrobbler) IsAuthorized(ctx context.Context, userId string) bool {
return m.wrapped.IsAuthorized(ctx, userId)
}
func (m *mockBufferedScrobbler) NowPlaying(ctx context.Context, userId string, track *model.MediaFile, position int) error {
return m.wrapped.NowPlaying(ctx, userId, track, position)
}
func (m *mockBufferedScrobbler) Scrobble(ctx context.Context, userId string, s Scrobble) error {
return m.wrapped.Scrobble(ctx, userId, s)
}
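fakeScrobbler's ScrobbleCalled and LastScrobble fields appear to have been switched to atomic types because the buffered scrobbler now writes them from a background goroutine while the test polls them with Eventually. A minimal sketch of that pattern (illustrative types, not the actual test code):

package demo

import "sync/atomic"

type result struct{ id string }

type recorder struct {
	called atomic.Bool
	last   atomic.Pointer[result]
}

// record is safe to call from a background goroutine.
func (r *recorder) record(res result) {
	r.last.Store(&res)
	r.called.Store(true)
}

// snapshot is safe to call concurrently from the test goroutine.
func (r *recorder) snapshot() (bool, *result) {
	return r.called.Load(), r.last.Load()
}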

View File

@@ -149,7 +149,7 @@ func (r *shareRepositoryWrapper) contentsLabelFromArtist(shareID string, ids str
func (r *shareRepositoryWrapper) contentsLabelFromAlbums(shareID string, ids string) string {
idList := strings.Split(ids, ",")
all, err := r.ds.Album(r.ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"id": idList}})
all, err := r.ds.Album(r.ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"album.id": idList}})
if err != nil {
log.Error(r.ctx, "Error retrieving album names for share", "share", shareID, err)
return ""

View File

@@ -3,11 +3,15 @@ package local
import (
"testing"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestLocal(t *testing.T) {
tests.Init(t, false)
log.SetLevel(log.LevelFatal)
RegisterFailHandler(Fail)
RunSpecs(t, "Local Storage Test Suite")
RunSpecs(t, "Local Storage Suite")
}

View File

@@ -0,0 +1,428 @@
package local
import (
"io/fs"
"net/url"
"os"
"path/filepath"
"runtime"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/core/storage"
"github.com/navidrome/navidrome/model/metadata"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("LocalStorage", func() {
var tempDir string
var testExtractor *mockTestExtractor
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
// Create a temporary directory for testing
var err error
tempDir, err = os.MkdirTemp("", "navidrome-local-storage-test-")
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() {
os.RemoveAll(tempDir)
})
// Create and register a test extractor
testExtractor = &mockTestExtractor{
results: make(map[string]metadata.Info),
}
RegisterExtractor("test", func(fs.FS, string) Extractor {
return testExtractor
})
conf.Server.Scanner.Extractor = "test"
})
Describe("newLocalStorage", func() {
Context("with valid path", func() {
It("should create a localStorage instance with correct path", func() {
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
localStorage := storage.(*localStorage)
Expect(localStorage.u.Scheme).To(Equal("file"))
// Check that the path is set correctly (could be resolved to real path on macOS)
Expect(localStorage.u.Path).To(ContainSubstring("navidrome-local-storage-test"))
Expect(localStorage.resolvedPath).To(ContainSubstring("navidrome-local-storage-test"))
Expect(localStorage.extractor).ToNot(BeNil())
})
It("should handle URL-decoded paths correctly", func() {
// Create a directory with spaces to test URL decoding
spacedDir := filepath.Join(tempDir, "test folder")
err := os.MkdirAll(spacedDir, 0755)
Expect(err).ToNot(HaveOccurred())
// Use proper URL construction instead of manual escaping
u := &url.URL{
Scheme: "file",
Path: spacedDir,
}
storage := newLocalStorage(*u)
localStorage, ok := storage.(*localStorage)
Expect(ok).To(BeTrue())
Expect(localStorage.u.Path).To(Equal(spacedDir))
})
It("should resolve symlinks when possible", func() {
// Create a real directory and a symlink to it
realDir := filepath.Join(tempDir, "real")
linkDir := filepath.Join(tempDir, "link")
err := os.MkdirAll(realDir, 0755)
Expect(err).ToNot(HaveOccurred())
err = os.Symlink(realDir, linkDir)
Expect(err).ToNot(HaveOccurred())
u, err := url.Parse("file://" + linkDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
localStorage, ok := storage.(*localStorage)
Expect(ok).To(BeTrue())
Expect(localStorage.u.Path).To(Equal(linkDir))
// Check that the resolved path contains the real directory name
Expect(localStorage.resolvedPath).To(ContainSubstring("real"))
})
It("should use u.Path as resolvedPath when symlink resolution fails", func() {
// Use a non-existent path to trigger symlink resolution failure
nonExistentPath := filepath.Join(tempDir, "non-existent")
u, err := url.Parse("file://" + nonExistentPath)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
localStorage, ok := storage.(*localStorage)
Expect(ok).To(BeTrue())
Expect(localStorage.u.Path).To(Equal(nonExistentPath))
Expect(localStorage.resolvedPath).To(Equal(nonExistentPath))
})
})
Context("with Windows path", func() {
BeforeEach(func() {
if runtime.GOOS != "windows" {
Skip("Windows-specific test")
}
})
It("should handle Windows drive letters correctly", func() {
u, err := url.Parse("file://C:/music")
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
localStorage, ok := storage.(*localStorage)
Expect(ok).To(BeTrue())
Expect(localStorage.u.Path).To(Equal("C:/music"))
})
})
Context("with invalid extractor", func() {
It("should handle extractor validation correctly", func() {
// Note: The actual implementation uses log.Fatal which exits the process,
// so we test the normal path where extractors exist
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
Expect(storage).ToNot(BeNil())
})
})
})
Describe("localStorage.FS", func() {
Context("with existing directory", func() {
It("should return a localFS instance", func() {
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
Expect(musicFS).ToNot(BeNil())
_, ok := musicFS.(*localFS)
Expect(ok).To(BeTrue())
})
})
Context("with non-existent directory", func() {
It("should return an error", func() {
nonExistentPath := filepath.Join(tempDir, "non-existent")
u, err := url.Parse("file://" + nonExistentPath)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
_, err = storage.FS()
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring(nonExistentPath))
})
})
})
Describe("localFS.ReadTags", func() {
var testFile string
BeforeEach(func() {
// Create a test file
testFile = filepath.Join(tempDir, "test.mp3")
err := os.WriteFile(testFile, []byte("test data"), 0600)
Expect(err).ToNot(HaveOccurred())
// Reset extractor state
testExtractor.results = make(map[string]metadata.Info)
testExtractor.err = nil
})
Context("when extractor returns complete metadata", func() {
It("should return the metadata as-is", func() {
expectedInfo := metadata.Info{
Tags: map[string][]string{
"title": {"Test Song"},
"artist": {"Test Artist"},
},
AudioProperties: metadata.AudioProperties{
Duration: 180,
BitRate: 320,
},
FileInfo: &testFileInfo{name: "test.mp3"},
}
testExtractor.results["test.mp3"] = expectedInfo
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
results, err := musicFS.ReadTags("test.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveKey("test.mp3"))
Expect(results["test.mp3"]).To(Equal(expectedInfo))
})
})
Context("when extractor returns metadata without FileInfo", func() {
It("should populate FileInfo from filesystem", func() {
incompleteInfo := metadata.Info{
Tags: map[string][]string{
"title": {"Test Song"},
},
FileInfo: nil, // Missing FileInfo
}
testExtractor.results["test.mp3"] = incompleteInfo
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
results, err := musicFS.ReadTags("test.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveKey("test.mp3"))
result := results["test.mp3"]
Expect(result.FileInfo).ToNot(BeNil())
Expect(result.FileInfo.Name()).To(Equal("test.mp3"))
// Should be wrapped in localFileInfo
_, ok := result.FileInfo.(localFileInfo)
Expect(ok).To(BeTrue())
})
})
Context("when filesystem stat fails", func() {
It("should return an error", func() {
incompleteInfo := metadata.Info{
Tags: map[string][]string{"title": {"Test Song"}},
FileInfo: nil,
}
testExtractor.results["non-existent.mp3"] = incompleteInfo
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
_, err = musicFS.ReadTags("non-existent.mp3")
Expect(err).To(HaveOccurred())
})
})
Context("when extractor fails", func() {
It("should return the extractor error", func() {
testExtractor.err = &extractorError{message: "extractor failed"}
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
_, err = musicFS.ReadTags("test.mp3")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("extractor failed"))
})
})
Context("with multiple files", func() {
It("should process all files correctly", func() {
// Create another test file
testFile2 := filepath.Join(tempDir, "test2.mp3")
err := os.WriteFile(testFile2, []byte("test data 2"), 0600)
Expect(err).ToNot(HaveOccurred())
info1 := metadata.Info{
Tags: map[string][]string{"title": {"Song 1"}},
FileInfo: &testFileInfo{name: "test.mp3"},
}
info2 := metadata.Info{
Tags: map[string][]string{"title": {"Song 2"}},
FileInfo: nil, // This one needs FileInfo populated
}
testExtractor.results["test.mp3"] = info1
testExtractor.results["test2.mp3"] = info2
u, err := url.Parse("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
storage := newLocalStorage(*u)
musicFS, err := storage.FS()
Expect(err).ToNot(HaveOccurred())
results, err := musicFS.ReadTags("test.mp3", "test2.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
Expect(results["test.mp3"].FileInfo).To(Equal(&testFileInfo{name: "test.mp3"}))
Expect(results["test2.mp3"].FileInfo).ToNot(BeNil())
Expect(results["test2.mp3"].FileInfo.Name()).To(Equal("test2.mp3"))
})
})
})
Describe("localFileInfo", func() {
var testFile string
var fileInfo fs.FileInfo
BeforeEach(func() {
testFile = filepath.Join(tempDir, "test.mp3")
err := os.WriteFile(testFile, []byte("test data"), 0600)
Expect(err).ToNot(HaveOccurred())
fileInfo, err = os.Stat(testFile)
Expect(err).ToNot(HaveOccurred())
})
Describe("BirthTime", func() {
It("should return birth time when available", func() {
lfi := localFileInfo{FileInfo: fileInfo}
birthTime := lfi.BirthTime()
// Birth time should be a valid time (not zero value)
Expect(birthTime).ToNot(BeZero())
// Should be around the current time (within last few minutes)
Expect(birthTime).To(BeTemporally("~", time.Now(), 5*time.Minute))
})
})
It("should delegate all other FileInfo methods", func() {
lfi := localFileInfo{FileInfo: fileInfo}
Expect(lfi.Name()).To(Equal(fileInfo.Name()))
Expect(lfi.Size()).To(Equal(fileInfo.Size()))
Expect(lfi.Mode()).To(Equal(fileInfo.Mode()))
Expect(lfi.ModTime()).To(Equal(fileInfo.ModTime()))
Expect(lfi.IsDir()).To(Equal(fileInfo.IsDir()))
Expect(lfi.Sys()).To(Equal(fileInfo.Sys()))
})
})
Describe("Storage registration", func() {
It("should register localStorage for file scheme", func() {
// This tests the init() function indirectly
storage, err := storage.For("file://" + tempDir)
Expect(err).ToNot(HaveOccurred())
Expect(storage).To(BeAssignableToTypeOf(&localStorage{}))
})
})
})
// mockTestExtractor is a configurable Extractor implementation used by these tests
type mockTestExtractor struct {
results map[string]metadata.Info
err error
}
func (m *mockTestExtractor) Parse(files ...string) (map[string]metadata.Info, error) {
if m.err != nil {
return nil, m.err
}
result := make(map[string]metadata.Info)
for _, file := range files {
if info, exists := m.results[file]; exists {
result[file] = info
}
}
return result, nil
}
func (m *mockTestExtractor) Version() string {
return "test-1.0"
}
type extractorError struct {
message string
}
func (e *extractorError) Error() string {
return e.message
}
// Test FileInfo that implements metadata.FileInfo
type testFileInfo struct {
name string
size int64
mode fs.FileMode
modTime time.Time
isDir bool
birthTime time.Time
}
func (t *testFileInfo) Name() string { return t.name }
func (t *testFileInfo) Size() int64 { return t.size }
func (t *testFileInfo) Mode() fs.FileMode { return t.mode }
func (t *testFileInfo) ModTime() time.Time { return t.modTime }
func (t *testFileInfo) IsDir() bool { return t.isDir }
func (t *testFileInfo) Sys() any { return nil }
func (t *testFileInfo) BirthTime() time.Time {
if t.birthTime.IsZero() {
return time.Now()
}
return t.birthTime
}
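The resolvedPath behavior exercised in the tests above (resolve symlinks when possible, otherwise keep the original path) boils down to something like the following hypothetical helper:

package demo

import "path/filepath"

// resolveOrKeep returns the symlink-resolved path when resolution succeeds,
// and the original path otherwise (for example, when the path does not exist).
func resolveOrKeep(path string) string {
	if resolved, err := filepath.EvalSymlinks(path); err == nil {
		return resolved
	}
	return path
}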

View File

@@ -6,6 +6,8 @@ import (
"path/filepath"
"strings"
"sync"
"github.com/navidrome/navidrome/utils/slice"
)
const LocalSchemaID = "file"
@@ -36,7 +38,14 @@ func For(uri string) (Storage, error) {
if len(parts) < 2 {
uri, _ = filepath.Abs(uri)
uri = filepath.ToSlash(uri)
uri = LocalSchemaID + "://" + uri
// Properly escape each path component using URL standards
pathParts := strings.Split(uri, "/")
escapedParts := slice.Map(pathParts, func(s string) string {
return url.PathEscape(s)
})
uri = LocalSchemaID + "://" + strings.Join(escapedParts, "/")
}
u, err := url.Parse(uri)
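The per-segment escaping added to For means a raw path such as /tmp/Song #1 & More?.mp3 survives the round trip through url.Parse, since an unescaped # or ? would otherwise be read as a fragment or query delimiter. A small standalone sketch of the same idea (not the storage package itself):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	path := "/tmp/Song #1 & More?.mp3"

	// Escape each segment individually so '/' separators are preserved.
	parts := strings.Split(path, "/")
	for i, p := range parts {
		parts[i] = url.PathEscape(p)
	}
	u, err := url.Parse("file://" + strings.Join(parts, "/"))
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Path == path) // true: Parse decodes the escapes back
}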

View File

@@ -65,6 +65,21 @@ var _ = Describe("Storage", func() {
_, err := For("webdav:///tmp")
Expect(err).To(HaveOccurred())
})
DescribeTable("should handle paths with special characters correctly",
func(inputPath string) {
s, err := For(inputPath)
Expect(err).ToNot(HaveOccurred())
Expect(s).To(BeAssignableToTypeOf(&fakeLocalStorage{}))
Expect(s.(*fakeLocalStorage).u.Scheme).To(Equal("file"))
// The path should be exactly the same as the input - after URL parsing it gets decoded back
Expect(s.(*fakeLocalStorage).u.Path).To(Equal(inputPath))
},
Entry("hash symbols", "/tmp/test#folder/file.mp3"),
Entry("spaces", "/tmp/test folder/file with spaces.mp3"),
Entry("question marks", "/tmp/test?query/file.mp3"),
Entry("ampersands", "/tmp/test&amp/file.mp3"),
Entry("multiple special chars", "/tmp/Song #1 & More?.mp3"),
)
})
})

View File

@@ -17,6 +17,7 @@ var Set = wire.NewSet(
NewPlayers,
NewShare,
NewPlaylists,
NewLibrary,
agents.GetAgents,
external.NewProvider,
wire.Bind(new(external.Agents), new(*agents.Agents)),

View File

@@ -1,167 +1,168 @@
package db
import (
"context"
"database/sql"
"errors"
"fmt"
"os"
"path/filepath"
"regexp"
"slices"
"time"
"github.com/mattn/go-sqlite3"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
)
const (
backupPrefix = "navidrome_backup"
backupRegexString = backupPrefix + "_(.+)\\.db"
)
var backupRegex = regexp.MustCompile(backupRegexString)
const backupSuffixLayout = "2006.01.02_15.04.05"
func backupPath(t time.Time) string {
return filepath.Join(
conf.Server.Backup.Path,
fmt.Sprintf("%s_%s.db", backupPrefix, t.Format(backupSuffixLayout)),
)
}
func backupOrRestore(ctx context.Context, isBackup bool, path string) error {
// heavily inspired by https://codingrabbits.dev/posts/go_and_sqlite_backup_and_maybe_restore/
existingConn, err := Db().Conn(ctx)
if err != nil {
return fmt.Errorf("getting existing connection: %w", err)
}
defer existingConn.Close()
backupDb, err := sql.Open(Driver, path)
if err != nil {
return fmt.Errorf("opening backup database in '%s': %w", path, err)
}
defer backupDb.Close()
backupConn, err := backupDb.Conn(ctx)
if err != nil {
return fmt.Errorf("getting backup connection: %w", err)
}
defer backupConn.Close()
err = existingConn.Raw(func(existing any) error {
return backupConn.Raw(func(backup any) error {
var sourceOk, destOk bool
var sourceConn, destConn *sqlite3.SQLiteConn
if isBackup {
sourceConn, sourceOk = existing.(*sqlite3.SQLiteConn)
destConn, destOk = backup.(*sqlite3.SQLiteConn)
} else {
sourceConn, sourceOk = backup.(*sqlite3.SQLiteConn)
destConn, destOk = existing.(*sqlite3.SQLiteConn)
}
if !sourceOk {
return fmt.Errorf("error trying to convert source to sqlite connection")
}
if !destOk {
return fmt.Errorf("error trying to convert destination to sqlite connection")
}
backupOp, err := destConn.Backup("main", sourceConn, "main")
if err != nil {
return fmt.Errorf("error starting sqlite backup: %w", err)
}
defer backupOp.Close()
// Caution: -1 means that sqlite will hold a read lock until the operation finishes
// This will lock out other writes that could happen at the same time
done, err := backupOp.Step(-1)
if !done {
return fmt.Errorf("backup not done with step -1")
}
if err != nil {
return fmt.Errorf("error during backup step: %w", err)
}
err = backupOp.Finish()
if err != nil {
return fmt.Errorf("error finishing backup: %w", err)
}
return nil
})
})
return err
}
func Backup(ctx context.Context) (string, error) {
destPath := backupPath(time.Now())
log.Debug(ctx, "Creating backup", "path", destPath)
err := backupOrRestore(ctx, true, destPath)
if err != nil {
return "", err
}
return destPath, nil
}
func Restore(ctx context.Context, path string) error {
log.Debug(ctx, "Restoring backup", "path", path)
return backupOrRestore(ctx, false, path)
}
func Prune(ctx context.Context) (int, error) {
files, err := os.ReadDir(conf.Server.Backup.Path)
if err != nil {
return 0, fmt.Errorf("unable to read database backup entries: %w", err)
}
var backupTimes []time.Time
for _, file := range files {
if !file.IsDir() {
submatch := backupRegex.FindStringSubmatch(file.Name())
if len(submatch) == 2 {
timestamp, err := time.Parse(backupSuffixLayout, submatch[1])
if err == nil {
backupTimes = append(backupTimes, timestamp)
}
}
}
}
if len(backupTimes) <= conf.Server.Backup.Count {
return 0, nil
}
slices.SortFunc(backupTimes, func(a, b time.Time) int {
return b.Compare(a)
})
pruneCount := 0
var errs []error
for _, timeToPrune := range backupTimes[conf.Server.Backup.Count:] {
log.Debug(ctx, "Pruning backup", "time", timeToPrune)
path := backupPath(timeToPrune)
err = os.Remove(path)
if err != nil {
errs = append(errs, err)
} else {
pruneCount++
}
}
if len(errs) > 0 {
err = errors.Join(errs...)
log.Error(ctx, "Failed to delete one or more files", "errors", err)
}
return pruneCount, err
}
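To illustrate the naming scheme used by backupPath and matched by backupRegex: a backup taken at 2025-11-02 12:07:08 would be named navidrome_backup_2025.11.02_12.07.08.db inside the configured backup directory. A tiny sketch of the same formatting, assuming a hypothetical backup directory of /backups:

package main

import (
	"fmt"
	"path/filepath"
	"time"
)

const backupSuffixLayout = "2006.01.02_15.04.05"

func main() {
	t := time.Date(2025, 11, 2, 12, 7, 8, 0, time.UTC)
	name := fmt.Sprintf("navidrome_backup_%s.db", t.Format(backupSuffixLayout))
	fmt.Println(filepath.Join("/backups", name))
	// Output: /backups/navidrome_backup_2025.11.02_12.07.08.db
}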

db/db.go (164 changes)
View File

@@ -5,20 +5,22 @@ import (
"database/sql"
"embed"
"fmt"
"runtime"
"path/filepath"
"strings"
"time"
"github.com/mattn/go-sqlite3"
embeddedpostgres "github.com/fergusstrange/embedded-postgres"
_ "github.com/jackc/pgx/v5/stdlib"
"github.com/navidrome/navidrome/conf"
_ "github.com/navidrome/navidrome/db/migrations"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/utils/hasher"
"github.com/navidrome/navidrome/utils/singleton"
"github.com/pressly/goose/v3"
)
var (
Dialect = "sqlite3"
Driver = Dialect + "_custom"
Dialect = "postgres"
Driver = "pgx"
Path string
)
@@ -27,29 +29,77 @@ var embedMigrations embed.FS
const migrationsFolder = "migrations"
var postgresInstance *embeddedpostgres.EmbeddedPostgres
func Db() *sql.DB {
return singleton.GetInstance(func() *sql.DB {
sql.Register(Driver, &sqlite3.SQLiteDriver{
ConnectHook: func(conn *sqlite3.SQLiteConn) error {
return conn.RegisterFunc("SEEDEDRAND", hasher.HashFunc(), false)
},
})
Path = conf.Server.DbPath
if Path == ":memory:" {
Path = "file::memory:?cache=shared&_foreign_keys=on"
conf.Server.DbPath = Path
start := time.Now()
log.Info("Starting Embedded Postgres...")
postgresInstance = embeddedpostgres.NewDatabase(
embeddedpostgres.
DefaultConfig().
Port(5432).
//Password(password).
Logger(&logAdapter{ctx: context.Background()}).
DataPath(filepath.Join(conf.Server.DataFolder, "postgres")).
StartParameters(map[string]string{
"unix_socket_directories": "/tmp",
"unix_socket_permissions": "0700",
}).
BinariesPath(filepath.Join(conf.Server.CacheFolder, "postgres")),
)
if err := postgresInstance.Start(); err != nil {
if !strings.Contains(err.Error(), "already listening on port") {
_ = postgresInstance.Stop()
log.Fatal("Failed to start embedded Postgres", err)
}
log.Info("Server already running on port 5432, assuming it's our embedded Postgres", "elapsed", time.Since(start))
} else {
log.Info("Embedded Postgres started", "elapsed", time.Since(start))
}
// Create the navidrome database if it doesn't exist
adminPath := "postgresql://postgres:postgres@/postgres?sslmode=disable&host=/tmp"
adminDB, err := sql.Open(Driver, adminPath)
if err != nil {
_ = postgresInstance.Stop()
log.Fatal("Error connecting to admin database", err)
}
defer adminDB.Close()
// Check if navidrome database exists, create if not
var exists bool
err = adminDB.QueryRow("SELECT EXISTS(SELECT 1 FROM pg_database WHERE datname = 'navidrome')").Scan(&exists)
if err != nil {
_ = postgresInstance.Stop()
log.Fatal("Error checking if database exists", err)
}
if !exists {
log.Info("Creating navidrome database...")
_, err = adminDB.Exec("CREATE DATABASE navidrome")
if err != nil {
_ = postgresInstance.Stop()
log.Fatal("Error creating navidrome database", err)
}
}
// TODO: Implement seeded random function
//sql.Register(Driver, &sqlite3.SQLiteDriver{
// ConnectHook: func(conn *sqlite3.SQLiteConn) error {
// return conn.RegisterFunc("SEEDEDRAND", hasher.HashFunc(), false)
// },
//})
//Path = conf.Server.DbPath
// Ensure client does not attempt TLS when connecting to the embedded Postgres
// and avoid shadowing the package-level Path variable.
Path = "postgresql://postgres:postgres@/navidrome?sslmode=disable&host=/tmp"
log.Debug("Opening DataBase", "dbPath", Path, "driver", Driver)
db, err := sql.Open(Driver, Path)
db.SetMaxOpenConns(max(4, runtime.NumCPU()))
//db.SetMaxOpenConns(max(4, runtime.NumCPU()))
if err != nil {
_ = postgresInstance.Stop()
log.Fatal("Error opening database", err)
}
_, err = db.Exec("PRAGMA optimize=0x10002")
if err != nil {
log.Error("Error applying PRAGMA optimize", err)
return nil
}
return db
})
}
@@ -58,33 +108,24 @@ func Close(ctx context.Context) {
// Ignore cancellations when closing the DB
ctx = context.WithoutCancel(ctx)
// Run optimize before closing
Optimize(ctx)
log.Info(ctx, "Closing Database")
err := Db().Close()
if err != nil {
log.Error(ctx, "Error closing Database", err)
}
if postgresInstance != nil {
err = postgresInstance.Stop()
if err != nil {
log.Error(ctx, "Error stopping embedded Postgres", err)
}
}
}
func Init(ctx context.Context) func() {
db := Db()
// Disable foreign_keys to allow re-creating tables in migrations
_, err := db.ExecContext(ctx, "PRAGMA foreign_keys=off")
defer func() {
_, err := db.ExecContext(ctx, "PRAGMA foreign_keys=on")
if err != nil {
log.Error(ctx, "Error re-enabling foreign_keys", err)
}
}()
if err != nil {
log.Error(ctx, "Error disabling foreign_keys", err)
}
goose.SetBaseFS(embedMigrations)
err = goose.SetDialect(Dialect)
err := goose.SetDialect(Dialect)
if err != nil {
log.Fatal(ctx, "Invalid DB driver", "driver", Driver, err)
}
@@ -99,51 +140,17 @@ func Init(ctx context.Context) func() {
log.Fatal(ctx, "Failed to apply new migrations", err)
}
if hasSchemaChanges {
log.Debug(ctx, "Applying PRAGMA optimize after schema changes")
_, err = db.ExecContext(ctx, "PRAGMA optimize")
if err != nil {
log.Error(ctx, "Error applying PRAGMA optimize", err)
}
}
return func() {
Close(ctx)
}
}
// Optimize runs PRAGMA optimize on each connection in the pool
func Optimize(ctx context.Context) {
numConns := Db().Stats().OpenConnections
if numConns == 0 {
log.Debug(ctx, "No open connections to optimize")
return
}
log.Debug(ctx, "Optimizing open connections", "numConns", numConns)
var conns []*sql.Conn
for i := 0; i < numConns; i++ {
conn, err := Db().Conn(ctx)
conns = append(conns, conn)
if err != nil {
log.Error(ctx, "Error getting connection from pool", err)
continue
}
_, err = conn.ExecContext(ctx, "PRAGMA optimize;")
if err != nil {
log.Error(ctx, "Error running PRAGMA optimize", err)
}
}
// Return all connections to the Connection Pool
for _, conn := range conns {
conn.Close()
}
}
type statusLogger struct{ numPending int }
func (*statusLogger) Fatalf(format string, v ...interface{}) { log.Fatal(fmt.Sprintf(format, v...)) }
func (l *statusLogger) Printf(format string, v ...interface{}) {
// format is part of the goose logger signature; reference it to avoid linter warnings
_ = format
if len(v) < 1 {
return
}
@@ -165,11 +172,15 @@ func hasPendingMigrations(ctx context.Context, db *sql.DB, folder string) bool {
}
func isSchemaEmpty(ctx context.Context, db *sql.DB) bool {
rows, err := db.QueryContext(ctx, "SELECT name FROM sqlite_master WHERE type='table' AND name='goose_db_version';") // nolint:rowserrcheck
rows, err := db.QueryContext(ctx, "SELECT tablename FROM pg_tables WHERE schemaname = 'public' AND tablename = 'goose_db_version';") // nolint:rowserrcheck
if err != nil {
log.Fatal(ctx, "Database could not be opened!", err)
}
defer rows.Close()
defer func() {
if cerr := rows.Close(); cerr != nil {
log.Error(ctx, "Error closing rows", cerr)
}
}()
return !rows.Next()
}
@@ -178,6 +189,11 @@ type logAdapter struct {
silent bool
}
func (l *logAdapter) Write(p []byte) (n int, err error) {
log.Debug(l.ctx, string(p))
return len(p), nil
}
func (l *logAdapter) Fatal(v ...interface{}) {
log.Fatal(l.ctx, fmt.Sprint(v...))
}
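
Note on the db.go change above: the rewritten Db() starts an embedded Postgres (fergusstrange/embedded-postgres) on a unix socket under /tmp and opens it through the pgx stdlib driver. Below is a minimal, self-contained sketch of that connection path, assuming the socket and the navidrome database already exist; the DSN is copied from the hunk, everything else is illustrative and not part of the diff.

// Sketch only: connect to the embedded Postgres over its unix socket with pgx.
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func main() {
	// host=/tmp selects the unix socket directory set via unix_socket_directories;
	// sslmode=disable matches the DSN used by Db(), since the local socket has no TLS.
	dsn := "postgresql://postgres:postgres@/navidrome?sslmode=disable&host=/tmp"
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		panic(err)
	}
	fmt.Println("connected to embedded Postgres via unix socket")
}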


@@ -1,184 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/navidrome/navidrome/log"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200130083147, Down20200130083147)
}
func Up20200130083147(_ context.Context, tx *sql.Tx) error {
log.Info("Creating DB Schema")
_, err := tx.Exec(`
create table if not exists album
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
artist_id varchar(255) default '' not null,
cover_art_path varchar(255) default '' not null,
cover_art_id varchar(255) default '' not null,
artist varchar(255) default '' not null,
album_artist varchar(255) default '' not null,
year integer default 0 not null,
compilation bool default FALSE not null,
song_count integer default 0 not null,
duration integer default 0 not null,
genre varchar(255) default '' not null,
created_at datetime,
updated_at datetime
);
create index if not exists album_artist
on album (artist);
create index if not exists album_artist_id
on album (artist_id);
create index if not exists album_genre
on album (genre);
create index if not exists album_name
on album (name);
create index if not exists album_year
on album (year);
create table if not exists annotation
(
ann_id varchar(255) not null
primary key,
user_id varchar(255) default '' not null,
item_id varchar(255) default '' not null,
item_type varchar(255) default '' not null,
play_count integer,
play_date datetime,
rating integer,
starred bool default FALSE not null,
starred_at datetime,
unique (user_id, item_id, item_type)
);
create index if not exists annotation_play_count
on annotation (play_count);
create index if not exists annotation_play_date
on annotation (play_date);
create index if not exists annotation_rating
on annotation (rating);
create index if not exists annotation_starred
on annotation (starred);
create table if not exists artist
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
album_count integer default 0 not null
);
create index if not exists artist_name
on artist (name);
create table if not exists media_file
(
id varchar(255) not null
primary key,
path varchar(255) default '' not null,
title varchar(255) default '' not null,
album varchar(255) default '' not null,
artist varchar(255) default '' not null,
artist_id varchar(255) default '' not null,
album_artist varchar(255) default '' not null,
album_id varchar(255) default '' not null,
has_cover_art bool default FALSE not null,
track_number integer default 0 not null,
disc_number integer default 0 not null,
year integer default 0 not null,
size integer default 0 not null,
suffix varchar(255) default '' not null,
duration integer default 0 not null,
bit_rate integer default 0 not null,
genre varchar(255) default '' not null,
compilation bool default FALSE not null,
created_at datetime,
updated_at datetime
);
create index if not exists media_file_album_id
on media_file (album_id);
create index if not exists media_file_genre
on media_file (genre);
create index if not exists media_file_path
on media_file (path);
create index if not exists media_file_title
on media_file (title);
create table if not exists playlist
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
comment varchar(255) default '' not null,
duration integer default 0 not null,
owner varchar(255) default '' not null,
public bool default FALSE not null,
tracks text not null
);
create index if not exists playlist_name
on playlist (name);
create table if not exists property
(
id varchar(255) not null
primary key,
value varchar(255) default '' not null
);
create table if not exists search
(
id varchar(255) not null
primary key,
"table" varchar(255) default '' not null,
full_text varchar(255) default '' not null
);
create index if not exists search_full_text
on search (full_text);
create index if not exists search_table
on search ("table");
create table if not exists user
(
id varchar(255) not null
primary key,
user_name varchar(255) default '' not null
unique,
name varchar(255) default '' not null,
email varchar(255) default '' not null
unique,
password varchar(255) default '' not null,
is_admin bool default FALSE not null,
last_login_at datetime,
last_access_at datetime,
created_at datetime not null,
updated_at datetime not null
);`)
return err
}
func Down20200130083147(_ context.Context, tx *sql.Tx) error {
return nil
}
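
The Go migrations being deleted here were applied through the goose wiring shown in db.go (SetBaseFS plus SetDialect); only the dialect changes from sqlite3 to postgres. A small illustrative sketch of that wiring, assuming an embedded migrations folder and a database opened as in the sketch above — this is not the project's actual Init():

// Sketch only: run embedded goose migrations against Postgres.
package main

import (
	"database/sql"
	"embed"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib"
	"github.com/pressly/goose/v3"
)

//go:embed migrations
var embedMigrations embed.FS

func main() {
	db, err := sql.Open("pgx", "postgresql://postgres:postgres@/navidrome?sslmode=disable&host=/tmp")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	goose.SetBaseFS(embedMigrations) // serve migration files from the embedded FS
	if err := goose.SetDialect("postgres"); err != nil { // previously "sqlite3"
		log.Fatal(err)
	}
	if err := goose.Up(db, "migrations"); err != nil { // apply pending Up* migrations in order
		log.Fatal(err)
	}
}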


@@ -1,64 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200131183653, Down20200131183653)
}
func Up20200131183653(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table search_dg_tmp
(
id varchar(255) not null
primary key,
item_type varchar(255) default '' not null,
full_text varchar(255) default '' not null
);
insert into search_dg_tmp(id, item_type, full_text) select id, "table", full_text from search;
drop table search;
alter table search_dg_tmp rename to search;
create index search_full_text
on search (full_text);
create index search_table
on search (item_type);
update annotation set item_type = 'media_file' where item_type = 'mediaFile';
`)
return err
}
func Down20200131183653(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table search_dg_tmp
(
id varchar(255) not null
primary key,
"table" varchar(255) default '' not null,
full_text varchar(255) default '' not null
);
insert into search_dg_tmp(id, "table", full_text) select id, item_type, full_text from search;
drop table search;
alter table search_dg_tmp rename to search;
create index search_full_text
on search (full_text);
create index search_table
on search ("table");
update annotation set item_type = 'mediaFile' where item_type = 'media_file';
`)
return err
}


@@ -1,56 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200208222418, Down20200208222418)
}
func Up20200208222418(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
update annotation set play_count = 0 where play_count is null;
update annotation set rating = 0 where rating is null;
create table annotation_dg_tmp
(
ann_id varchar(255) not null
primary key,
user_id varchar(255) default '' not null,
item_id varchar(255) default '' not null,
item_type varchar(255) default '' not null,
play_count integer default 0,
play_date datetime,
rating integer default 0,
starred bool default FALSE not null,
starred_at datetime,
unique (user_id, item_id, item_type)
);
insert into annotation_dg_tmp(ann_id, user_id, item_id, item_type, play_count, play_date, rating, starred, starred_at) select ann_id, user_id, item_id, item_type, play_count, play_date, rating, starred, starred_at from annotation;
drop table annotation;
alter table annotation_dg_tmp rename to annotation;
create index annotation_play_count
on annotation (play_count);
create index annotation_play_date
on annotation (play_date);
create index annotation_rating
on annotation (rating);
create index annotation_starred
on annotation (starred);
`)
return err
}
func Down20200208222418(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,130 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200220143731, Down20200220143731)
}
func Up20200220143731(_ context.Context, tx *sql.Tx) error {
notice(tx, "This migration will force the next scan to be a full rescan!")
_, err := tx.Exec(`
create table media_file_dg_tmp
(
id varchar(255) not null
primary key,
path varchar(255) default '' not null,
title varchar(255) default '' not null,
album varchar(255) default '' not null,
artist varchar(255) default '' not null,
artist_id varchar(255) default '' not null,
album_artist varchar(255) default '' not null,
album_id varchar(255) default '' not null,
has_cover_art bool default FALSE not null,
track_number integer default 0 not null,
disc_number integer default 0 not null,
year integer default 0 not null,
size integer default 0 not null,
suffix varchar(255) default '' not null,
duration real default 0 not null,
bit_rate integer default 0 not null,
genre varchar(255) default '' not null,
compilation bool default FALSE not null,
created_at datetime,
updated_at datetime
);
insert into media_file_dg_tmp(id, path, title, album, artist, artist_id, album_artist, album_id, has_cover_art, track_number, disc_number, year, size, suffix, duration, bit_rate, genre, compilation, created_at, updated_at) select id, path, title, album, artist, artist_id, album_artist, album_id, has_cover_art, track_number, disc_number, year, size, suffix, duration, bit_rate, genre, compilation, created_at, updated_at from media_file;
drop table media_file;
alter table media_file_dg_tmp rename to media_file;
create index media_file_album_id
on media_file (album_id);
create index media_file_genre
on media_file (genre);
create index media_file_path
on media_file (path);
create index media_file_title
on media_file (title);
create table album_dg_tmp
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
artist_id varchar(255) default '' not null,
cover_art_path varchar(255) default '' not null,
cover_art_id varchar(255) default '' not null,
artist varchar(255) default '' not null,
album_artist varchar(255) default '' not null,
year integer default 0 not null,
compilation bool default FALSE not null,
song_count integer default 0 not null,
duration real default 0 not null,
genre varchar(255) default '' not null,
created_at datetime,
updated_at datetime
);
insert into album_dg_tmp(id, name, artist_id, cover_art_path, cover_art_id, artist, album_artist, year, compilation, song_count, duration, genre, created_at, updated_at) select id, name, artist_id, cover_art_path, cover_art_id, artist, album_artist, year, compilation, song_count, duration, genre, created_at, updated_at from album;
drop table album;
alter table album_dg_tmp rename to album;
create index album_artist
on album (artist);
create index album_artist_id
on album (artist_id);
create index album_genre
on album (genre);
create index album_name
on album (name);
create index album_year
on album (year);
create table playlist_dg_tmp
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
comment varchar(255) default '' not null,
duration real default 0 not null,
owner varchar(255) default '' not null,
public bool default FALSE not null,
tracks text not null
);
insert into playlist_dg_tmp(id, name, comment, duration, owner, public, tracks) select id, name, comment, duration, owner, public, tracks from playlist;
drop table playlist;
alter table playlist_dg_tmp rename to playlist;
create index playlist_name
on playlist (name);
-- Force a full rescan
delete from property where id like 'LastScan%';
update media_file set updated_at = '0001-01-01';
`)
return err
}
func Down20200220143731(_ context.Context, tx *sql.Tx) error {
return nil
}
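
Several of the deleted migrations call notice(tx, ...) and forceFullRescan(tx); those helpers live elsewhere in the package and are not shown in this diff. The sketch below is hypothetical, reconstructed only from the inline "-- Force a full rescan" SQL in the migration above, and the real helpers may differ:

// Hypothetical helpers, reconstructed from the SQL shown above; not the actual source.
package migrations

import (
	"database/sql"

	"github.com/navidrome/navidrome/log"
)

// forceFullRescan drops the stored scan state so the next scan starts from scratch.
func forceFullRescan(tx *sql.Tx) error {
	_, err := tx.Exec(`
		delete from property where id like 'LastScan%';
		update media_file set updated_at = '0001-01-01';
	`)
	return err
}

// notice surfaces a migration warning in the server log.
func notice(tx *sql.Tx, msg string) {
	_ = tx // unused in this sketch
	log.Info("MIGRATION NOTICE: " + msg)
}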


@@ -1,21 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200310171621, Down20200310171621)
}
func Up20200310171621(_ context.Context, tx *sql.Tx) error {
notice(tx, "A full rescan will be performed to enable search by Album Artist!")
return forceFullRescan(tx)
}
func Down20200310171621(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,54 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200310181627, Down20200310181627)
}
func Up20200310181627(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table transcoding
(
id varchar(255) not null primary key,
name varchar(255) not null,
target_format varchar(255) not null,
command varchar(255) default '' not null,
default_bit_rate int default 192,
unique (name),
unique (target_format)
);
create table player
(
id varchar(255) not null primary key,
name varchar not null,
type varchar,
user_name varchar not null,
client varchar not null,
ip_address varchar,
last_seen timestamp,
max_bit_rate int default 0,
transcoding_id varchar,
unique (name),
foreign key (transcoding_id)
references transcoding(id)
on update restrict
on delete restrict
);
`)
return err
}
func Down20200310181627(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
drop table transcoding;
drop table player;
`)
return err
}


@@ -1,42 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200319211049, Down20200319211049)
}
func Up20200319211049(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table media_file
add full_text varchar(255) default '';
create index if not exists media_file_full_text
on media_file (full_text);
alter table album
add full_text varchar(255) default '';
create index if not exists album_full_text
on album (full_text);
alter table artist
add full_text varchar(255) default '';
create index if not exists artist_full_text
on artist (full_text);
drop table if exists search;
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed!")
return forceFullRescan(tx)
}
func Down20200319211049(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,35 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200325185135, Down20200325185135)
}
func Up20200325185135(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table album
add album_artist_id varchar(255) default '';
create index album_artist_album_id
on album (album_artist_id);
alter table media_file
add album_artist_id varchar(255) default '';
create index media_file_artist_album_id
on media_file (album_artist_id);
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed!")
return forceFullRescan(tx)
}
func Down20200325185135(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,21 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200326090707, Down20200326090707)
}
func Up20200326090707(_ context.Context, tx *sql.Tx) error {
notice(tx, "A full rescan will be performed!")
return forceFullRescan(tx)
}
func Down20200326090707(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,81 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200327193744, Down20200327193744)
}
func Up20200327193744(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table album_dg_tmp
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
artist_id varchar(255) default '' not null,
cover_art_path varchar(255) default '' not null,
cover_art_id varchar(255) default '' not null,
artist varchar(255) default '' not null,
album_artist varchar(255) default '' not null,
min_year int default 0 not null,
max_year integer default 0 not null,
compilation bool default FALSE not null,
song_count integer default 0 not null,
duration real default 0 not null,
genre varchar(255) default '' not null,
created_at datetime,
updated_at datetime,
full_text varchar(255) default '',
album_artist_id varchar(255) default ''
);
insert into album_dg_tmp(id, name, artist_id, cover_art_path, cover_art_id, artist, album_artist, max_year, compilation, song_count, duration, genre, created_at, updated_at, full_text, album_artist_id) select id, name, artist_id, cover_art_path, cover_art_id, artist, album_artist, year, compilation, song_count, duration, genre, created_at, updated_at, full_text, album_artist_id from album;
drop table album;
alter table album_dg_tmp rename to album;
create index album_artist
on album (artist);
create index album_artist_album
on album (artist);
create index album_artist_album_id
on album (album_artist_id);
create index album_artist_id
on album (artist_id);
create index album_full_text
on album (full_text);
create index album_genre
on album (genre);
create index album_name
on album (name);
create index album_min_year
on album (min_year);
create index album_max_year
on album (max_year);
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed!")
return forceFullRescan(tx)
}
func Down20200327193744(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,30 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200404214704, Down20200404214704)
}
func Up20200404214704(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create index if not exists media_file_year
on media_file (year);
create index if not exists media_file_duration
on media_file (duration);
create index if not exists media_file_track_number
on media_file (disc_number, track_number);
`)
return err
}
func Down20200404214704(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,21 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200409002249, Down20200409002249)
}
func Up20200409002249(_ context.Context, tx *sql.Tx) error {
notice(tx, "A full rescan will be performed to enable search by individual Artist in an Album!")
return forceFullRescan(tx)
}
func Down20200409002249(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,28 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200411164603, Down20200411164603)
}
func Up20200411164603(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table playlist
add created_at datetime;
alter table playlist
add updated_at datetime;
update playlist
set created_at = datetime('now'), updated_at = datetime('now');
`)
return err
}
func Down20200411164603(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,21 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200418110522, Down20200418110522)
}
func Up20200418110522(_ context.Context, tx *sql.Tx) error {
notice(tx, "A full rescan will be performed to fix search Albums by year")
return forceFullRescan(tx)
}
func Down20200418110522(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,21 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200419222708, Down20200419222708)
}
func Up20200419222708(_ context.Context, tx *sql.Tx) error {
notice(tx, "A full rescan will be performed to change the search behaviour")
return forceFullRescan(tx)
}
func Down20200419222708(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,66 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200423204116, Down20200423204116)
}
func Up20200423204116(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table artist
add order_artist_name varchar(255) collate nocase;
alter table artist
add sort_artist_name varchar(255) collate nocase;
create index if not exists artist_order_artist_name
on artist (order_artist_name);
alter table album
add order_album_name varchar(255) collate nocase;
alter table album
add order_album_artist_name varchar(255) collate nocase;
alter table album
add sort_album_name varchar(255) collate nocase;
alter table album
add sort_artist_name varchar(255) collate nocase;
alter table album
add sort_album_artist_name varchar(255) collate nocase;
create index if not exists album_order_album_name
on album (order_album_name);
create index if not exists album_order_album_artist_name
on album (order_album_artist_name);
alter table media_file
add order_album_name varchar(255) collate nocase;
alter table media_file
add order_album_artist_name varchar(255) collate nocase;
alter table media_file
add order_artist_name varchar(255) collate nocase;
alter table media_file
add sort_album_name varchar(255) collate nocase;
alter table media_file
add sort_artist_name varchar(255) collate nocase;
alter table media_file
add sort_album_artist_name varchar(255) collate nocase;
alter table media_file
add sort_title varchar(255) collate nocase;
create index if not exists media_file_order_album_name
on media_file (order_album_name);
create index if not exists media_file_order_artist_name
on media_file (order_artist_name);
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed to change the search behaviour")
return forceFullRescan(tx)
}
func Down20200423204116(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,28 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200508093059, Down20200508093059)
}
func Up20200508093059(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table artist
add song_count integer default 0 not null;
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed to calculate artists' song counts")
return forceFullRescan(tx)
}
func Down20200508093059(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,28 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200512104202, Down20200512104202)
}
func Up20200512104202(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table media_file
add disc_subtitle varchar(255);
`)
if err != nil {
return err
}
notice(tx, "A full rescan will be performed to import disc subtitles")
return forceFullRescan(tx)
}
func Down20200512104202(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,101 +0,0 @@
package migrations
import (
"context"
"database/sql"
"strings"
"github.com/navidrome/navidrome/log"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200516140647, Down20200516140647)
}
func Up20200516140647(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table if not exists playlist_tracks
(
id integer default 0 not null,
playlist_id varchar(255) not null,
media_file_id varchar(255) not null
);
create unique index if not exists playlist_tracks_pos
on playlist_tracks (playlist_id, id);
`)
if err != nil {
return err
}
rows, err := tx.Query("select id, tracks from playlist")
if err != nil {
return err
}
defer rows.Close()
var id, tracks string
for rows.Next() {
err := rows.Scan(&id, &tracks)
if err != nil {
return err
}
err = Up20200516140647UpdatePlaylistTracks(tx, id, tracks)
if err != nil {
return err
}
}
err = rows.Err()
if err != nil {
return err
}
_, err = tx.Exec(`
create table playlist_dg_tmp
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
comment varchar(255) default '' not null,
duration real default 0 not null,
song_count integer default 0 not null,
owner varchar(255) default '' not null,
public bool default FALSE not null,
created_at datetime,
updated_at datetime
);
insert into playlist_dg_tmp(id, name, comment, duration, owner, public, created_at, updated_at)
select id, name, comment, duration, owner, public, created_at, updated_at from playlist;
drop table playlist;
alter table playlist_dg_tmp rename to playlist;
create index playlist_name
on playlist (name);
update playlist set song_count = (select count(*) from playlist_tracks where playlist_id = playlist.id)
where id <> ''
`)
return err
}
func Up20200516140647UpdatePlaylistTracks(tx *sql.Tx, id string, tracks string) error {
trackList := strings.Split(tracks, ",")
stmt, err := tx.Prepare("insert into playlist_tracks (playlist_id, media_file_id, id) values (?, ?, ?)")
if err != nil {
return err
}
for i, trackId := range trackList {
_, err := stmt.Exec(id, trackId, i+1)
if err != nil {
log.Error("Error adding track to playlist", "playlistId", id, "trackId", trackId, err)
}
}
return nil
}
func Down20200516140647(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,138 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(Up20200608153717, Down20200608153717)
}
func Up20200608153717(_ context.Context, tx *sql.Tx) error {
// First delete dangling players
_, err := tx.Exec(`
delete from player where user_name not in (select user_name from user)`)
if err != nil {
return err
}
// Also delete dangling playlists
_, err = tx.Exec(`
delete from playlist where owner not in (select user_name from user)`)
if err != nil {
return err
}
// Also delete dangling playlist tracks
_, err = tx.Exec(`
delete from playlist_tracks where playlist_id not in (select id from playlist)`)
if err != nil {
return err
}
// Add foreign key to player table
err = updatePlayer_20200608153717(tx)
if err != nil {
return err
}
// Add foreign key to playlist table
err = updatePlaylist_20200608153717(tx)
if err != nil {
return err
}
// Add foreign keys to playlist_tracks table
return updatePlaylistTracks_20200608153717(tx)
}
func updatePlayer_20200608153717(tx *sql.Tx) error {
_, err := tx.Exec(`
create table player_dg_tmp
(
id varchar(255) not null
primary key,
name varchar not null
unique,
type varchar,
user_name varchar not null
references user (user_name)
on update cascade on delete cascade,
client varchar not null,
ip_address varchar,
last_seen timestamp,
max_bit_rate int default 0,
transcoding_id varchar null
);
insert into player_dg_tmp(id, name, type, user_name, client, ip_address, last_seen, max_bit_rate, transcoding_id) select id, name, type, user_name, client, ip_address, last_seen, max_bit_rate, transcoding_id from player;
drop table player;
alter table player_dg_tmp rename to player;
`)
return err
}
func updatePlaylist_20200608153717(tx *sql.Tx) error {
_, err := tx.Exec(`
create table playlist_dg_tmp
(
id varchar(255) not null
primary key,
name varchar(255) default '' not null,
comment varchar(255) default '' not null,
duration real default 0 not null,
song_count integer default 0 not null,
owner varchar(255) default '' not null
constraint playlist_user_user_name_fk
references user (user_name)
on update cascade on delete cascade,
public bool default FALSE not null,
created_at datetime,
updated_at datetime
);
insert into playlist_dg_tmp(id, name, comment, duration, song_count, owner, public, created_at, updated_at) select id, name, comment, duration, song_count, owner, public, created_at, updated_at from playlist;
drop table playlist;
alter table playlist_dg_tmp rename to playlist;
create index playlist_name
on playlist (name);
`)
return err
}
func updatePlaylistTracks_20200608153717(tx *sql.Tx) error {
_, err := tx.Exec(`
create table playlist_tracks_dg_tmp
(
id integer default 0 not null,
playlist_id varchar(255) not null
constraint playlist_tracks_playlist_id_fk
references playlist
on update cascade on delete cascade,
media_file_id varchar(255) not null
);
insert into playlist_tracks_dg_tmp(id, playlist_id, media_file_id) select id, playlist_id, media_file_id from playlist_tracks;
drop table playlist_tracks;
alter table playlist_tracks_dg_tmp rename to playlist_tracks;
create unique index playlist_tracks_pos
on playlist_tracks (playlist_id, id);
`)
return err
}
func Down20200608153717(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,43 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/model/id"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upAddDefaultTranscodings, downAddDefaultTranscodings)
}
func upAddDefaultTranscodings(_ context.Context, tx *sql.Tx) error {
row := tx.QueryRow("SELECT COUNT(*) FROM transcoding")
var count int
err := row.Scan(&count)
if err != nil {
return err
}
if count > 0 {
return nil
}
stmt, err := tx.Prepare("insert into transcoding (id, name, target_format, default_bit_rate, command) values (?, ?, ?, ?, ?)")
if err != nil {
return err
}
for _, t := range consts.DefaultTranscodings {
_, err := stmt.Exec(id.NewRandom(), t.Name, t.TargetFormat, t.DefaultBitRate, t.Command)
if err != nil {
return err
}
}
return nil
}
func downAddDefaultTranscodings(_ context.Context, tx *sql.Tx) error {
return nil
}
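
One conversion detail worth flagging in the transcoding migration above: it binds parameters with SQLite-style "?" markers, which the pgx driver rejects. A sketch of the same insert with Postgres-numbered placeholders follows; the function name and signature are illustrative, not from the diff.

// Sketch only: the same insert using the $1..$5 placeholders expected by Postgres/pgx.
package migrations

import (
	"context"
	"database/sql"
)

func insertTranscoding(ctx context.Context, tx *sql.Tx, id, name, format, command string, bitRate int) error {
	_, err := tx.ExecContext(ctx,
		`insert into transcoding (id, name, target_format, default_bit_rate, command)
		 values ($1, $2, $3, $4, $5)`,
		id, name, format, bitRate, command)
	return err
}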


@@ -1,28 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upAddPlaylistPath, downAddPlaylistPath)
}
func upAddPlaylistPath(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
alter table playlist
add path string default '' not null;
alter table playlist
add sync bool default false not null;
`)
return err
}
func downAddPlaylistPath(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,37 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upCreatePlayQueuesTable, downCreatePlayQueuesTable)
}
func upCreatePlayQueuesTable(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table playqueue
(
id varchar(255) not null primary key,
user_id varchar(255) not null
references user (id)
on update cascade on delete cascade,
comment varchar(255),
current varchar(255) not null,
position integer,
changed_by varchar(255),
items varchar(255),
created_at datetime,
updated_at datetime
);
`)
return err
}
func downCreatePlayQueuesTable(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,54 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upCreateBookmarkTable, downCreateBookmarkTable)
}
func upCreateBookmarkTable(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table bookmark
(
user_id varchar(255) not null
references user
on update cascade on delete cascade,
item_id varchar(255) not null,
item_type varchar(255) not null,
comment varchar(255),
position integer,
changed_by varchar(255),
created_at datetime,
updated_at datetime,
constraint bookmark_pk
unique (user_id, item_id, item_type)
);
create table playqueue_dg_tmp
(
id varchar(255) not null,
user_id varchar(255) not null
references user
on update cascade on delete cascade,
current varchar(255),
position real,
changed_by varchar(255),
items varchar(255),
created_at datetime,
updated_at datetime
);
drop table playqueue;
alter table playqueue_dg_tmp rename to playqueue;
`)
return err
}
func downCreateBookmarkTable(_ context.Context, tx *sql.Tx) error {
return nil
}


@@ -1,43 +0,0 @@
package migrations
import (
"context"
"database/sql"
"github.com/pressly/goose/v3"
)
func init() {
goose.AddMigrationContext(upDropEmailUniqueConstraint, downDropEmailUniqueConstraint)
}
func upDropEmailUniqueConstraint(_ context.Context, tx *sql.Tx) error {
_, err := tx.Exec(`
create table user_dg_tmp
(
id varchar(255) not null
primary key,
user_name varchar(255) default '' not null
unique,
name varchar(255) default '' not null,
email varchar(255) default '' not null,
password varchar(255) default '' not null,
is_admin bool default FALSE not null,
last_login_at datetime,
last_access_at datetime,
created_at datetime not null,
updated_at datetime not null
);
insert into user_dg_tmp(id, user_name, name, email, password, is_admin, last_login_at, last_access_at, created_at, updated_at) select id, user_name, name, email, password, is_admin, last_login_at, last_access_at, created_at, updated_at from user;
drop table user;
alter table user_dg_tmp rename to user;
`)
return err
}
func downDropEmailUniqueConstraint(_ context.Context, tx *sql.Tx) error {
return nil
}

Some files were not shown because too many files have changed in this diff.