Compare commits

...

71 Commits

Author SHA1 Message Date
Deluan
342b9eb2f2 chore: install TagLib in devcontainer Dockerfile
Move TagLib installation from postCreateCommand script into the devcontainer Dockerfile to leverage Docker layer caching and simplify setup.

Changes:
- Install cross-taglib v2.1.1-1 directly in Dockerfile using TARGETARCH for multi-arch support (amd64/arm64).
- Remove redundant libtag1-dev apt dependency; keep ffmpeg only.
- Add CROSS_TAGLIB_VERSION as a build arg for consistency with CI/Makefile.
- Remove postCreateCommand from devcontainer.json and delete install-taglib.sh script.

Why:
- Avoid re-downloading TagLib on each container create; benefit from cached image layers.
- Reduce redundancy and potential version mismatch between apt libtag and cross-taglib.
- Keep devcontainer aligned with production build approach and CI settings.
2025-12-01 20:17:12 -05:00
floatlesss
cb38d2a031 Apply Gemini suggested changes
Signed-off-by: floatlesss <117862164+floatlesss@users.noreply.github.com>
2025-11-30 20:57:49 +00:00
floatlesss
382b80ccb4 Merge branch 'navidrome:master' into fix-taglib-issues-in-vscodedevcontainer/4749 2025-11-29 20:15:22 +00:00
floatlesss
41cc4610af fix(vscodedevcontainer): fix-taglib-build-issues - #4749
Signed-off-by: floatlesss <117862164+floatlesss@users.noreply.github.com>
2025-11-29 20:07:04 +00:00
floatlesss
64a9260174 fix(ui): allow scrolling in shareplayer queue by adding delay #4748
fix(shareplayer): allow-scrolling-in-shareplayer - #4747
2025-11-29 12:54:46 -05:00
Deluan
6a7381aa5a test: prevent environment variables from overriding config file values in tests
Added a loadEnvVars parameter to InitConfig to control whether environment
variables should be loaded via viper.AutomaticEnv(). In tests, environment
variables (like ND_MUSICFOLDER) were overriding values from config test files,
causing tests to fail when these variables were set in the developer's
environment. Now tests can pass loadEnvVars=false to isolate from the
environment while production code continues to use loadEnvVars=true.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-29 11:45:07 -05:00
Deluan Quintão
e36fef8692 fix: retry insights collection when no admin user available (#4746)
Previously, the insights collector would only try to get an admin user once
at startup. If no admin user existed (e.g., fresh database before first user
registration), insights collection would silently fail forever.

This change moves the admin context creation inside the collection loop so it
retries on each interval. It also updates log messages in WithAdminUser to
remove the Scanner prefix since this function is now used by other components.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-28 19:38:28 -05:00
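A minimal Go sketch of the retry-per-interval pattern described in the commit above; names and signatures are illustrative, not the actual Navidrome code:

package sketch

import (
	"context"
	"time"
)

// collectLoop asks for an admin context on every tick instead of once at startup,
// so collection starts working as soon as the first admin user is created.
func collectLoop(ctx context.Context, interval time.Duration,
	adminCtx func(context.Context) (context.Context, error), collect func(context.Context)) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			c, err := adminCtx(ctx) // fails while no admin user exists yet
			if err != nil {
				continue // try again on the next interval instead of giving up forever
			}
			collect(c)
		}
	}
}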
Deluan Quintão
9913235542 fix(server): improve error message for encrypted TLS private keys (#4742)
Added TLS certificate validation that detects encrypted (password-protected)
private keys and provides a clear error message with instructions on how to
decrypt them using openssl. This addresses user confusion when Go's standard
library fails with the cryptic 'tls: failed to parse private key' error.

Changes:
- Added validateTLSCertificates function to validate certs before server start
- Added isEncryptedPEM helper to detect both PKCS#8 and legacy encrypted keys
- Added comprehensive tests for TLS validation including encrypted key detection
- Added integration test that starts server with TLS and verifies HTTPS works
- Added test certificates (valid for 100 years) with SAN for localhost

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-28 17:08:34 -05:00
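A sketch of the encrypted-key detection described above, assuming the isEncryptedPEM name from the commit message (the real implementation may differ): PKCS#8 encrypted keys use the "ENCRYPTED PRIVATE KEY" PEM block type, while legacy PEM encryption adds a "Proc-Type: 4,ENCRYPTED" header.

package sketch

import (
	"encoding/pem"
	"strings"
)

// isEncryptedPEM reports whether any PEM block in keyPEM looks password-protected.
func isEncryptedPEM(keyPEM []byte) bool {
	for block, rest := pem.Decode(keyPEM); block != nil; block, rest = pem.Decode(rest) {
		if block.Type == "ENCRYPTED PRIVATE KEY" { // PKCS#8 encrypted key
			return true
		}
		if strings.Contains(block.Headers["Proc-Type"], "ENCRYPTED") { // legacy PEM encryption
			return true
		}
	}
	return false
}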
Deluan
a87b6a50a6 test: use unique library name and path in tests
Avoid UNIQUE constraint conflicts on library.name and library.path when
running tests in parallel. Both playlist_repository_test.go and
tag_library_filtering_test.go now generate timestamp-based unique
suffixes for library names and paths to ensure test isolation.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-28 16:11:13 -05:00
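Illustrative only, a tiny Go example of the timestamp-based suffix approach mentioned above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// a nanosecond-timestamp suffix keeps library names and paths unique
	// when tests create libraries in parallel
	suffix := fmt.Sprintf("%d", time.Now().UnixNano())
	fmt.Println("Test Library "+suffix, "/music/test-"+suffix)
}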
Stephan Wahlen
2b30ed1520 fix(ui): Amusic theme improvements (#4731)
* fix low contrast in "delete missing files" button

* make login screen a bit nicer

* style modal similar to rest of ui

* Add custom styles for Ra Pagination

* Refactor styles in amusic.js

Removed albumSubtitle color and updated styles for albumPlayButton and albumArtistName

* Add NDDeleteLibraryButton and NDDeleteUserButton styles

low contrast

* low contrast text on delete buttons

* playbutton color back to pink without background
2025-11-28 08:52:26 -05:00
Deluan Quintão
1024d61a5e fix: apply library filter to smart playlist track generation (#4739)
Smart playlists were including tracks from all libraries regardless of the
user's library access permissions. This resulted in ghost tracks that users
could not see or play, while the playlist showed incorrect song counts.

Added applyLibraryFilter to the refreshSmartPlaylist function to ensure only
tracks from libraries the user has access to are included when populating
smart playlist tracks. Added regression test to verify the fix.

Closes #4738
2025-11-27 07:58:39 -05:00
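A hypothetical sketch of the kind of library filter described above, using the squirrel query builder; table and column names are illustrative, not necessarily Navidrome's schema:

package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// restrict smart-playlist track selection to the libraries the owner can access
	accessible := []int{1, 3}
	q := sq.Select("media_file.id").From("media_file").
		Where(sq.Eq{"media_file.library_id": accessible})
	sqlStr, args, _ := q.ToSql()
	fmt.Println(sqlStr, args) // ... WHERE media_file.library_id IN (?,?) [1 3]
}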
Deluan
ca83ebbb53 feat: add DevOptimizeDB flag to control SQLite optimization
Added a new DevOptimizeDB configuration flag (default true) that controls
whether SQLite PRAGMA OPTIMIZE and ANALYZE commands are executed. This allows
disabling database optimization operations for debugging or testing purposes.

The flag guards optimization commands in:
- db/db.go: Initial connection, post-migration, and shutdown optimization
- persistence/library_repository.go: Post-scan optimization
- db/migrations/migration.go: ANALYZE during forced full rescans

Set ND_DEVOPTIMIZEDB=false to disable all database optimization commands.
2025-11-25 19:49:03 -05:00
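A sketch of gating the optimization behind the new flag, with a hypothetical helper name; the actual guard sites are the files listed above:

package sketch

import (
	"context"
	"database/sql"
)

// optimizeIfEnabled skips PRAGMA OPTIMIZE entirely when DevOptimizeDB is false
// (e.g. ND_DEVOPTIMIZEDB=false), otherwise runs it on the given connection.
func optimizeIfEnabled(ctx context.Context, db *sql.DB, devOptimizeDB bool) error {
	if !devOptimizeDB {
		return nil
	}
	_, err := db.ExecContext(ctx, "PRAGMA optimize")
	return err
}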
dependabot[bot]
dc07dc413d chore(deps): bump golangci/golangci-lint-action in /.github/workflows (#4673)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 8 to 9.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v8...v9)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '9'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 23:36:19 -05:00
zacaj
3294bcacfc feat: add Rated At field - #4653 (#4660)
* feat(model): add Rated At field - #4653

Signed-off-by: zacaj <zacaj@zacaj.com>

* fix(ui): ignore empty dates in rating/love tooltips - #4653

* refactor(ui): add isDateSet util function

Signed-off-by: zacaj <zacaj@zacaj.com>

* feat: add tests for isDateSet and rated_at sort mappings

Added comprehensive tests for isDateSet and urlValidate functions in
ui/src/utils/validations.test.js covering falsy values, Go zero date handling,
valid date strings, Date objects, and edge cases.

Added rated_at sort mapping to album, artist, and mediafile repositories,
following the same pattern as starred_at (sorting by rating first, then by
timestamp). This enables proper sorting by rating date in the UI.

---------

Signed-off-by: zacaj <zacaj@zacaj.com>
Co-authored-by: zacaj <zacaj@zacaj.com>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-11-24 23:18:05 -05:00
Deluan
228211f925 test: add smart playlist tag criteria tests for issue #4728
Add integration tests verifying the workaround for checking if a tag has any
value in smart playlists. The tests confirm that using 'contains' with an empty
string generates SQL that matches any non-empty tag value (value LIKE '%%'),
which is the recommended workaround for issue #4728.

Tests added:
- Verify contains with empty string matches tracks with tag values
- Verify notContains with empty string excludes tracks with tag values

Also updated test context to use GinkgoT().Context() instead of context.TODO().
2025-11-24 21:16:28 -05:00
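A rough Go illustration of the 'contains empty string' workaround described above; table and column names are made up for the example:

package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	// an empty "contains" term becomes the pattern "%%", so the condition
	// matches any row that has a tag value at all
	cond := sq.Like{"value": "%" + "" + "%"}
	sqlStr, args, _ := sq.Select("item_id").From("tag").Where(cond).ToSql()
	fmt.Println(sqlStr, args) // SELECT item_id FROM tag WHERE value LIKE ? [%%]
}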
dependabot[bot]
a6a682b385 chore(deps): bump actions/checkout from 5 to 6 in /.github/workflows (#4730)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 13:18:34 -05:00
Kendall Garner
c40f12e65b fix(scanner): Use repeated arg instead of comma split (#4727) 2025-11-23 22:16:10 -05:00
Deluan
12d0898585 chore(docker): remove GODEBUG=asyncpreemptoff=1 flag, as it should not be needed on Go 1.15+
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-22 21:36:44 -05:00
Deluan
c21aee7360 fix(config): enables quoted ; as values in ini files
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-22 20:14:44 -05:00
Xavier Araque
ee51bd9281 feat(ui): add SquiddiesGlass Theme (#4632)
* feat: Add SquiddiesGlass Theme

* feat: fix comments by gemini-code-assist in PR

* feat: fix Prettier format

* feat: fix play button, and text mobile

* feat: fix play button, and text mobile, prettier

* feat: fix chip, title artist

* fix: loading album, play button color

* prettier

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Xavier Araque <francisco.araque@toolfactory.net>
Co-authored-by: Deluan <deluan@navidrome.org>
2025-11-22 13:41:59 -05:00
Stephan Wahlen
2451e9e7ae feat(ui): add AMusic (Apple Music inspired) theme (#4723)
* first shot at AMusic Theme

* prettier

* fix Duplicate key 'MuiButton'

* fix file name

* Update amusic.js

* Add styles for NDAlbumGridView in amusic.js

* Fix MuiToolbar background property in amusic.js

* Fix syntax error in amusic.js background property

* run prettier

* fix banded table styling and more

* more styling to player

- fix some appearances of green in queue
- match queue styling to rest of theme
- round albumart in player and prevent rotation

* fix queue panel background and border

to make it stand out more against the background

* fix stray comma

and lint+prettier

* queue hover still green

and player preview image not rounded properly

* Update amusic.css.js

* more mobile color fixes

* artist page

* prettier

* rounded art in albumgridview

* small tweaks to colors and radiuses

* artist and album heading

* external links colors

* unify font colors + albumgrid corner radius

* get rid of queue hover green

* unify colors in player

same red shades as primary

* mobile player floating panel background shade of green

* unify border colors

and attempt to get album cover corner radius working

* final touches

* Update amusic.css.js

* fix invisible button color for muibutton

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fix css syntax on player queue color overrides

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* remove unused MuiTableHead

* sort theme list in index.js alphabetically

* remove unused properties

* Revert "fix css syntax on player queue color overrides"

This reverts commit 503bba321d.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-22 11:23:02 -05:00
Deluan
f6b2ab5726 feat(ui): add loading state to artist action buttons for improved user experience
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-21 22:23:38 -05:00
Deluan
67c4e24957 fix(scanner): defer artwork PreCache calls until after transaction commits
The CacheWarmer was failing with data not found errors because PreCache was being called inside the database transaction before the data was committed. The CacheWarmer runs in a separate goroutine with its own database context and could not access the uncommitted data due to transaction isolation.

Changed the persistChanges method in phase_1_folders.go to collect artwork IDs during the transaction and only call PreCache after the transaction successfully commits. This ensures the artwork data is visible to the CacheWarmer when it attempts to retrieve and cache the images.

The fix eliminates the data not found errors and allows the cache warmer to properly pre-cache album and artist artwork during library scanning.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-21 15:27:25 -05:00
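A generic Go sketch of the ordering fix described above: collect IDs inside the transaction and only pre-cache after Commit succeeds, so a separate connection (the warmer's goroutine) can see the rows. Function and parameter names are illustrative.

package sketch

import (
	"context"
	"database/sql"
)

func persistAndWarm(ctx context.Context, db *sql.DB, preCache func(id string)) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer func() { _ = tx.Rollback() }() // no-op once Commit has succeeded

	var artworkIDs []string
	// ... inserts/updates happen here, appending to artworkIDs instead of
	// calling preCache inside the transaction ...

	if err := tx.Commit(); err != nil {
		return err
	}
	for _, id := range artworkIDs { // data is committed and visible now
		preCache(id)
	}
	return nil
}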
Deluan Quintão
255ed1f8e2 feat(deezer): Add artist bio, top tracks, related artists and language support (#4720)
* feat(deezer): add functions to fetch related artists, biographies, and top tracks for an artist

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(deezer): add language support for Deezer API client

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(deezer): Use GraphQL API for translated biographies

The previous implementation scraped the __DZR_APP_STATE__ from HTML,
which only contained English content. The actual biography displayed
on Deezer's website comes from their GraphQL API at pipe.deezer.com,
which properly respects the Accept-Language header and returns
translated content.

This change:
- Switches from HTML scraping to the GraphQL API
- Uses Accept-Language header instead of URL path for language
- Updates tests to match the new implementation
- Removes unused HTML fixture file

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(deezer): move JWT token handling to a separate file for better organization

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(deezer): enhance JWT token handling with expiration validation

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(deezer): change log level for unknown agent warnings from Warn to Debug

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(deezer): reduce JWT token expiration buffer from 10 minutes to 1 minute

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-21 15:09:24 -05:00
Deluan
152f57e642 chore(deps): update golangci-lint version to v2.6.2
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-20 10:38:54 -05:00
Deluan
5c16622501 chore(makefile): update golangci-lint version to v2.6.2
See comment 0c71842b12 (commitcomment-170969373)

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-20 10:38:40 -05:00
Deluan
36fa869329 feat(scanner): improve error messages for cleanup operations in annotations, bookmarks, and tags
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-20 09:27:42 -05:00
Deluan
0c3012bbbd chore(deps): update Go dependencies to latest versions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-19 22:05:46 -05:00
Deluan
353aff2c88 fix(lastfm): ignore artist placeholder image.
Fix #4702

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-19 20:49:29 -05:00
Deluan
c873466e5b fix(scanner): reset watcher trigger timer for debounce on notification receipt
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-19 20:24:13 -05:00
Deluan Quintão
3d1946e31c fix(plugins): avoid Chi RouteContext pollution by using http.NewRequest (#4713)
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-19 20:17:01 -05:00
Dongeun
6fb228bc10 fix(ui): fix translation display for library list terms (#4712) 2025-11-19 13:42:33 -05:00
Kendall Garner
32e1313fc6 ci: bump plugin compilation timeout for regressions (#4690) 2025-11-16 13:46:32 -05:00
Deluan
489d5c7760 test: update make test-race target to use PKG variable for improved flexibility
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-16 13:41:22 -05:00
Kendall Garner
0f1ede2581 fix(scanner): specify exact table to use for missing mediafile filter (#4689)
In `getAffectedAlbumIDs`, when one or more IDs is added, it adds a filter `"id": ids`.
This filter is ambiguous though, because the `getAll` query joins with library table, which _also_ has an `id` field.
Clarify this by adding the table name to the filter.

Note that this was not caught in testing, as it only uses mock db.
2025-11-16 12:54:28 -05:00
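A small squirrel example of the ambiguity described above; the query shape is illustrative:

package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
)

func main() {
	ids := []string{"mf1", "mf2"}
	// "id" alone is ambiguous once media_file is joined with library (both tables
	// have an id column); qualifying it with the table name removes the ambiguity
	q := sq.Select("media_file.album_id").From("media_file").
		Join("library ON library.id = media_file.library_id").
		Where(sq.Eq{"media_file.id": ids})
	sqlStr, args, _ := q.ToSql()
	fmt.Println(sqlStr, args)
}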
Deluan Quintão
395a36e10f fix(ui): fix library selection state for single-library users (#4686)
* fix: validate library selection state for single-library users

Fixes issues where users with a single library see no content when
selectedLibraries in localStorage contains library IDs they no longer
have access to (e.g., after removing libraries or switching accounts).

Changes:
- libraryReducer: Validate selectedLibraries when SET_USER_LIBRARIES
  is dispatched, filtering out invalid IDs and resetting to empty for
  single-library users (empty means 'all accessible libraries')
- wrapperDataProvider: Add defensive validation in getSelectedLibraries
  to check against current user libraries before applying filters
- Add comprehensive test coverage for reducer validation logic

Fixes #4553, #4508, #4569

* style: format code with prettier
2025-11-15 17:42:28 -05:00
Deluan
0161a0958c fix(ui): add CreateButton back to LibraryListActions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-15 17:31:37 -05:00
Deluan Quintão
28d5299ffc feat(scanner): implement selective folder scanning and file system watcher improvements (#4674)
* feat: Add selective folder scanning capability

Implement targeted scanning of specific library/folder pairs without
full recursion. This enables efficient rescanning of individual folders
when changes are detected, significantly reducing scan time for large
libraries.

Key changes:
- Add ScanTarget struct and ScanFolders API to Scanner interface
- Implement CLI flag --targets for specifying libraryID:folderPath pairs
- Add FolderRepository.GetByPaths() for batch folder info retrieval
- Create loadSpecificFolders() for non-recursive directory loading
- Scope GC operations to affected libraries only (with TODO for full impl)
- Add comprehensive tests for selective scanning behavior

The selective scan:
- Only processes specified folders (no subdirectory recursion)
- Maintains library isolation
- Runs full maintenance pipeline scoped to affected libraries
- Supports both full and quick scan modes

Examples:
  navidrome scan --targets "1:Music/Rock,1:Music/Jazz"
  navidrome scan --full --targets "2:Classical"

* feat(folder): replace GetByPaths with GetFolderUpdateInfo for improved folder updates retrieval

Signed-off-by: Deluan <deluan@navidrome.org>

* test: update parseTargets test to handle folder names with spaces

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder): remove unused LibraryPath struct and update GC logging message

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder): enhance external scanner to support target-specific scanning

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify scanner methods

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): implement folder scanning notifications with deduplication

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(watcher): add resolveFolderPath function for testability

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): implement path ignoring based on .ndignore patterns

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): implement IgnoreChecker for managing .ndignore patterns

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(ignore_checker): rename scanner to lineScanner for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): enhance ScanTarget struct with String method for better target representation

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): validate library ID to prevent negative values

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify GC method by removing library ID parameter

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scanner): update folder scanning to include all descendants of specified folders

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(subsonic): allow selective scan in the /startScan endpoint

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): update CallScan to handle specific library/folder pairs

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): streamline scanning logic by removing scanAll method

Signed-off-by: Deluan <deluan@navidrome.org>

* test: enhance mockScanner for thread safety and improve test reliability

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): move scanner.ScanTarget to model.ScanTarget

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: move scanner types to model, implement MockScanner

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): update scanner interface and implementations to use model.Scanner

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder_repository): normalize target path handling by using filepath.Clean

Signed-off-by: Deluan <deluan@navidrome.org>

* test(folder_repository): add comprehensive tests for folder retrieval and child exclusion

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify selective scan logic using slice.Filter

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): streamline phase folder and album creation by removing unnecessary library parameter

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): move initialization logic from phase_1 to the scanner itself

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(tests): rename selective scan test file to scanner_selective_test.go

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(configuration): add DevSelectiveWatcher configuration option

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): enhance .ndignore handling for folder deletions and file changes

Signed-off-by: Deluan <deluan@navidrome.org>

* docs(scanner): comments

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): enhance walkDirTree to support target folder scanning

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner, watcher): handle errors when pushing ignore patterns for folders

Signed-off-by: Deluan <deluan@navidrome.org>

* Update scanner/phase_1_folders.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* refactor(scanner): replace parseTargets function with direct call to scanner.ParseTargets

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): add tests for ScanBegin and ScanEnd functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(library): update PRAGMA optimize to check table sizes without ANALYZE

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): refactor tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add selective scan options and update translations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add quick and full scan options for individual libraries

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add Scan buttons to the LibraryList

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scan): update scanning parameters from 'path' to 'target' for selective scans.

* refactor(scan): move ParseTargets function to model package

* test(scan): suppress unused return value from SetUserLibraries in tests

* feat(gc): enhance garbage collection to support selective library purging

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): prevent race condition when scanning deleted folders

When the watcher detects changes in a folder that gets deleted before
the scanner runs (due to the 10-second delay), the scanner was
prematurely removing these folders from the tracking map, preventing
them from being marked as missing.

The issue occurred because `newFolderEntry` was calling `popLastUpdate`
before verifying the folder actually exists on the filesystem.

Changes:
- Move fs.Stat check before newFolderEntry creation in loadDir to
  ensure deleted folders remain in lastUpdates for finalize() to handle
- Add early existence check in walkDirTree to skip non-existent target
  folders with a warning log
- Add unit test verifying non-existent folders aren't removed from
  lastUpdates prematurely
- Add integration test for deleted folder scenario with ScanFolders

Fixes the issue where deleting entire folders (e.g., /music/AC_DC)
wouldn't mark tracks as missing when using selective folder scanning.

* refactor(scan): streamline folder entry creation and update handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scan): add '@Recycle' (QNAP) to ignored directories list

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(log): improve thread safety in logging level management

* test(scan): move unit tests for ParseTargets function

Signed-off-by: Deluan <deluan@navidrome.org>

* review

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: deluan <deluan.quintao@mechanical-orchard.com>
2025-11-14 22:15:43 -05:00
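A self-contained Go sketch of parsing the libraryID:folderPath pairs accepted by the new --target flag shown above; the struct and function are illustrative, not the actual model.ScanTarget / model.ParseTargets code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

type scanTarget struct {
	LibraryID  int
	FolderPath string
}

func parseTargets(args []string) ([]scanTarget, error) {
	var out []scanTarget
	for _, a := range args {
		id, path, ok := strings.Cut(a, ":")
		if !ok {
			return nil, fmt.Errorf("invalid target %q, expected libraryID:folderPath", a)
		}
		libID, err := strconv.Atoi(id)
		if err != nil || libID <= 0 {
			return nil, fmt.Errorf("invalid library ID in %q", a)
		}
		out = append(out, scanTarget{LibraryID: libID, FolderPath: path})
	}
	return out, nil
}

func main() {
	targets, _ := parseTargets([]string{"1:Music/Rock", "2:Classical"})
	fmt.Println(targets) // [{1 Music/Rock} {2 Classical}]
}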
Deluan
bca76069c3 fix(server): prioritize artist base image filenames over numeric suffixes and add tests for sorting
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-14 13:15:50 -05:00
Deluan Quintão
a10f839221 fix(server): prefer cover.jpg over cover.1.jpg (#4684)
* fix(reader): prioritize cover art selection by base filename without numeric suffixes

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(reader): update image file comparison to use natural sorting and prioritize files without numeric suffixes

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(reader): simplify comparison, add case-sensitivity test case

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-14 12:19:10 -05:00
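A rough sketch of the suffix preference described above (the real fix also applies natural sorting; this only shows preferring base names without a numeric suffix):

package main

import (
	"fmt"
	"regexp"
	"sort"
)

// numericSuffix matches names like "cover.1.jpg" but not "cover.jpg".
var numericSuffix = regexp.MustCompile(`\.\d+\.[^.]+$`)

func main() {
	files := []string{"cover.1.jpg", "cover.jpg", "cover.2.jpg"}
	sort.Slice(files, func(i, j int) bool {
		si, sj := numericSuffix.MatchString(files[i]), numericSuffix.MatchString(files[j])
		if si != sj {
			return !si // plain "cover.jpg" sorts before numbered variants
		}
		return files[i] < files[j]
	})
	fmt.Println(files) // [cover.jpg cover.1.jpg cover.2.jpg]
}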
Deluan
2385c8a548 test: mock formatFullDate for consistent test results 2025-11-13 18:46:06 -05:00
Deluan
9b3bdc8a8b fix(ui): adjust margins for bulk actions buttons in Spotify-ish and Ligera
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-13 18:30:44 -05:00
Deluan
f939ad84f3 fix(ui): increase contrast of button text in the Dark theme
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-12 16:17:41 -05:00
Deluan
c3e8c67116 feat(ui): update totalSize formatting to display two decimal places
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-12 13:23:18 -05:00
Deluan
d57a8e6d84 refactor(scanner): refactor legacyReleaseDate logic and add tests for date mapping
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-12 13:11:33 -05:00
Deluan
73ec89e1af feat(ui): add SizeField to display total size in LibraryList
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-12 13:01:17 -05:00
Rob Emery
131c0c565c feat(insights): detecting packaging method (#3841)
* Adding environmental variable so that navidrome can detect
if it's running as an MSI install for insights

* Renaming to be ND_PACKAGE_TYPE so we can reuse this for the
.deb/.rpm stats as well

* Packaged implies a bool, this is a description so it should
be packaging or just package imo

* wixl currently doesn't support <Environment> so I'm swapping out
to a file next-door to the configuration file, we should be
able to reuse this for deb/rpm as well

* Using a file we should be able to add support for linux like this
also

* MSI should copy the package into place for us, it's not a KeyPath
as older versions won't have it, so its presence doesn't indicate
the installed status of the package

* OK this doesn't exist, need to find another way to do it

* package to .package and moving to the datadir

* fix(scanner): better log message when AutoImportPlaylists is disabled

Fix #3861

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): support ID3v2 embedded images in WAV files

Fix #3867

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): show bitDepth in song info dialog

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(server): don't break if the ND_CONFIGFILE does not exist

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(docker): automatically loads a navidrome.toml file from /data, if available

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(server): custom ArtistJoiner config (#3873)

* feat(server): custom ArtistJoiner config

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(ui): organize ArtistLinkField, add tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): use display artist

* feat(ui): use display artist

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: remove some BFR-related TODOs that are not valid anymore

Signed-off-by: Deluan <deluan@navidrome.org>

* chore: remove more outdated TODOs

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): elapsed time for folder processing is wrong in the logs

Signed-off-by: Deluan <deluan@navidrome.org>

* Should be able to reuse this mechanism with deb and rpm, I think
it would be nice to know which specific one it is without guessing
based on /etc/debian_version or something; but it doesn't look like
that is exposed by goreleaser into an env or anything :/

* Need to reference the installed file and I think Id's don't require []

* Need to add into the root directory for this to work

* That was not deliberately removed

* feat: add RPM and DEB package configuration files for Navidrome

Signed-off-by: Deluan <deluan@navidrome.org>

* Don't need this as goreleaser will sort it out

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Deluan Quintão <deluan@navidrome.org>
2025-11-09 12:57:55 -05:00
Kendall Garner
53ff33866d feat(subsonic): implement indexBasedQueue extension (#4244)
* redo this whole PR, but cleaner now that better errata is in

* update play queue types
2025-11-09 12:52:05 -05:00
Deluan
508670ecfb Revert "feat(ui): add Vietnamese localization for the application"
This reverts commit 9621a40f29.
2025-11-09 12:41:25 -05:00
Deluan
c369224597 test: fix flaky CacheWarmer deduplication test
Fixed race condition in the 'deduplicates items in buffer' test where the
background worker goroutine could process and clear the buffer before the
test could verify its contents. Added fc.SetReady(false) to keep the cache
unavailable during the test, ensuring buffered items remain in memory for
verification. This matches the pattern already used in the 'adds multiple
items to buffer' test.

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-09 12:19:28 -05:00
Deluan
ff583970f0 chore(deps): update golang.org/x/sync to v0.18.0 and golang.org/x/sys to v0.38.0
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-08 21:05:29 -05:00
Deluan
38ca65726a chore(deps): update wazero to version 1.10.0 and clean up go.mod
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-08 21:04:20 -05:00
Deluan Quintão
5ce6e16d96 fix: album statistics not updating after deleting missing files (#4668)
* feat: add album refresh functionality after deleting missing files

Implemented RefreshAlbums method in AlbumRepository to recalculate album attributes (size, duration, song count) from their constituent media files. This method processes albums in batches to maintain efficiency with large datasets.

Added integration in deleteMissingFiles to automatically refresh affected albums in the background after deleting missing media files, ensuring album statistics remain accurate. Includes comprehensive test coverage for various scenarios including single/multiple albums, empty batches, and large batch processing.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: extract missing files deletion into reusable service layer

Extracted inline deletion logic from server/nativeapi/missing.go into a new core.MissingFiles service interface and implementation. This provides better separation of concerns and testability.

The MissingFiles service handles:
- Deletion of specific or all missing files via transaction
- Garbage collection after deletion
- Extraction of affected album IDs from missing files
- Background refresh of artist and album statistics

The deleteMissingFiles HTTP handler now simply delegates to the service, removing 70+ lines of inline logic. All deletion, transaction, and stat refresh logic is now centralized in core/missing_files.go.

Updated dependency injection to provide MissingFiles service to the native API router. Renamed receiver variable from 'n' to 'api' throughout native_api.go for consistency.

* refactor: consolidate maintenance operations into unified service

Consolidate MissingFiles and RefreshAlbums functionality into a new Maintenance service. This refactoring:
- Creates core.Maintenance interface combining DeleteMissingFiles, DeleteAllMissingFiles, and RefreshAlbums methods
- Moves RefreshAlbums logic from AlbumRepository persistence layer to core Maintenance service
- Removes MissingFiles interface and moves its implementation to maintenanceService
- Updates all references in wire providers, native API router, and handlers
- Removes RefreshAlbums interface method from AlbumRepository model
- Improves separation of concerns by centralizing maintenance operations in the core domain

This change provides a cleaner API and better organization of maintenance-related database operations.

* refactor: remove MissingFiles interface and update references

Remove obsolete MissingFiles interface and its references:
- Delete core/missing_files.go and core/missing_files_test.go
- Remove RefreshAlbums method from AlbumRepository interface and implementation
- Remove RefreshAlbums tests from AlbumRepository test suite
- Update wire providers to use NewMaintenance instead of NewMissingFiles
- Update native API router to use Maintenance service
- Update missing.go handler to use Maintenance interface

All functionality is now consolidated in the core.Maintenance service.

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: rename RefreshAlbums to refreshAlbums and update related calls

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: optimize album refresh logic and improve test coverage

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: simplify logging setup in tests with reusable LogHook function

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: add synchronization to logger and maintenance service for thread safety

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-08 20:11:00 -05:00
Deluan Quintão
69527085db fix(ui): resolve transparent dropdown background in Ligera theme (#4665)
Fixed the multi-library selector dropdown background in the Ligera theme by changing the palette.background.paper value from 'inherit' to bLight['500'] ('#ffffff'). This ensures the dropdown has a solid white background that properly overlays content, making the library selection options clearly readable.

Closes #4502
2025-11-08 12:47:02 -05:00
Nagi
9bb933c0d6 fix(ui): fix Playlist Italian translation(#4642)
In Italian, we usually use "Playlist" rather than "Scalette/a". "Scalette/a" refers to other functions or objects.
2025-11-07 18:41:23 -05:00
Deluan Quintão
6f4fa76772 fix(ui): update Galician, Dutch, Thai translations from POEditor (#4416)
Co-authored-by: navidrome-bot <navidrome-bot@navidrome.org>
2025-11-07 18:20:39 -05:00
Deluan
9621a40f29 feat(ui): add Vietnamese localization for the application 2025-11-07 18:13:46 -05:00
DDinghoya
df95dffa74 fix(ui): update ko.json (#4443)
* Update ko.json

* Update ko.json

Removed one of the entries, as below

"shuffleAll": "모두 셔플"

* Update ko.json

* Update ko.json

* Update ko.json

* Update ko.json

* Update ko.json
2025-11-07 18:10:38 -05:00
York
a59b59192a fix(ui): update zh-Hant.json (#4454)
* Update zh-Hant.json

Updated and optimized Traditional Chinese translation.

* Update zh-Hant.json

Updated and optimized Traditional Chinese translation.

* Update zh-Hant.json

Updated and optimized Traditional Chinese translation.
2025-11-07 18:06:41 -05:00
Deluan Quintão
4f7dc105b0 fix(ui): correct track ordering when sorting playlists by album (#4657)
* fix(deps): update wazero dependencies to resolve issues

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(deps): update wazero dependency to latest version

Signed-off-by: Deluan <deluan@navidrome.org>

* fix: correct track ordering when sorting playlists by album

Fixed issue #3177 where tracks within multi-disc albums were displayed out of order when sorting playlists by album. The playlist track repository was using an incomplete sort mapping that only sorted by album name and artist, missing the critical disc_number and track_number fields.

Changed the album sort mapping in playlist_track_repository from:
  order_album_name, order_album_artist_name
to:
  order_album_name, order_album_artist_name, disc_number, track_number, order_artist_name, title

This now matches the sorting used in the media file repository, ensuring tracks are sorted by:
1. Album name (groups by album)
2. Album artist (handles compilations)
3. Disc number (multi-disc album discs in order)
4. Track number (tracks within disc in order)
5. Artist name and title (edge cases with missing metadata)

Added comprehensive tests with a multi-disc test album to verify correct sorting behavior.

* chore: sync go.mod and go.sum with master

* chore: align playlist album sort order with mediafile_repository (use album_id)

* fix: clean up test playlist to prevent state leakage in randomized test runs

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 16:50:54 -05:00
Deluan Quintão
e918e049e2 fix: update wazero dependency to resolve ARM64 SIGILL crash (#4655)
* fix(deps): update wazero dependencies to resolve issues

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(deps): update wazero dependency to latest version

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(deps): update wazero dependency to latest version for issue resolution

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 15:07:09 -05:00
Deluan Quintão
1e8d28ff46 fix: qualify user id filter to avoid ambiguous column (#4511) 2025-11-06 14:54:01 -05:00
Kendall Garner
a128b3cf98 fix(db): make playqueue position field an integer (#4481) 2025-11-06 14:41:09 -05:00
Deluan Quintão
290a9fdeaa test: fix locale-dependent tests by making formatNumber locale-aware (#4619)
- Add optional locale parameter to formatNumber function
- Update tests to explicitly pass 'en-US' locale for deterministic results
- Maintains backward compatibility: defaults to system locale when no locale specified
- No need for cross-env or environment variable manipulation
- Tests now pass consistently regardless of system locale

Related to #4417
2025-11-06 14:34:00 -05:00
Deluan
58b5ed86df refactor: extract TruncateRunes function for safe string truncation with suffix
Signed-off-by: Deluan <deluan@navidrome.org>

# Conflicts:
#	core/share.go
#	core/share_test.go
2025-11-06 14:27:38 -05:00
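A rough sketch of rune-safe truncation with a suffix, in the spirit of the TruncateRunes helper named above (illustrative, not the actual implementation):

package main

import "fmt"

// truncateRunes trims by runes rather than bytes, so multi-byte (e.g. CJK)
// labels are never cut mid-character.
func truncateRunes(s string, maxLen int, suffix string) string {
	runes := []rune(s)
	if len(runes) <= maxLen {
		return s
	}
	keep := maxLen - len([]rune(suffix))
	if keep < 0 {
		keep = 0
	}
	return string(runes[:keep]) + suffix
}

func main() {
	fmt.Println(truncateRunes("こんにちは世界", 5, "…")) // こんにち…
}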
beerpsi
fe1cee0159 fix(share): slice content label by utf-8 runes (#4634)
* fix(share): slice content label by utf-8 runes

* Apply suggestions about avoiding allocations

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* lint: remove unused import

* test: add test cases for CJK truncation

* test: add tests for ASCII labels too

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-06 14:24:07 -05:00
Deluan
3dfaa8cca1 ci: go mod tidy
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 12:53:41 -05:00
Deluan
0a5abfc1b1 chore: update actions/upload-artifact and actions/download-artifact to latest versions
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 12:43:35 -05:00
Deluan
c501bc6996 chore(deps): update ginkgo to version 2.27.2
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 12:41:16 -05:00
Deluan
0c71842b12 chore: update Go version to 1.25.4
Signed-off-by: Deluan <deluan@navidrome.org>
2025-11-06 12:40:44 -05:00
pca006132
e86dc03619 fix(ui): allow scrolling in play queue by adding delay (#4562) 2025-11-01 20:47:03 -04:00
180 changed files with 8926 additions and 1265 deletions

View File

@@ -9,12 +9,19 @@ ARG INSTALL_NODE="true"
ARG NODE_VERSION="lts/*"
RUN if [ "${INSTALL_NODE}" = "true" ]; then su vscode -c "source /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# [Optional] Uncomment this section to install additional OS packages.
# Install additional OS packages
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends libtag1-dev ffmpeg
&& apt-get -y install --no-install-recommends ffmpeg
# [Optional] Uncomment the next line to use go get to install anything else you need
# RUN go get -x <your-dependency-or-tool>
# Install TagLib from cross-taglib releases
ARG CROSS_TAGLIB_VERSION="2.1.1-1"
ARG TARGETARCH
RUN DOWNLOAD_ARCH="linux-${TARGETARCH}" \
&& wget -q "https://github.com/navidrome/cross-taglib/releases/download/v${CROSS_TAGLIB_VERSION}/taglib-${DOWNLOAD_ARCH}.tar.gz" -O /tmp/cross-taglib.tar.gz \
&& tar -xzf /tmp/cross-taglib.tar.gz -C /usr --strip-components=1 \
&& mv /usr/include/taglib/* /usr/include/ \
&& rmdir /usr/include/taglib \
&& rm /tmp/cross-taglib.tar.gz /usr/provenance.json
# [Optional] Uncomment this line to install global node packages.
# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1

View File

@@ -7,7 +7,8 @@
"VARIANT": "1.25",
// Options
"INSTALL_NODE": "true",
"NODE_VERSION": "v24"
"NODE_VERSION": "v24",
"CROSS_TAGLIB_VERSION": "2.1.1-1"
}
},
"workspaceMount": "",
@@ -54,12 +55,10 @@
4533,
4633
],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "make setup-dev",
// Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode",
"remoteEnv": {
"ND_MUSICFOLDER": "./music",
"ND_DATAFOLDER": "./data"
}
}
}

View File

@@ -25,7 +25,7 @@ jobs:
git_tag: ${{ steps.git-version.outputs.GIT_TAG }}
git_sha: ${{ steps.git-version.outputs.GIT_SHA }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
fetch-depth: 0
fetch-tags: true
@@ -63,7 +63,7 @@ jobs:
name: Lint Go code
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Download TagLib
uses: ./.github/actions/download-taglib
@@ -71,7 +71,7 @@ jobs:
version: ${{ env.CROSS_TAGLIB_VERSION }}
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
uses: golangci/golangci-lint-action@v9
with:
version: latest
problem-matchers: true
@@ -93,7 +93,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Download TagLib
uses: ./.github/actions/download-taglib
@@ -114,7 +114,7 @@ jobs:
env:
NODE_OPTIONS: "--max_old_space_size=4096"
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- uses: actions/setup-node@v6
with:
node-version: 24
@@ -145,7 +145,7 @@ jobs:
name: Lint i18n files
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- run: |
set -e
for file in resources/i18n/*.json; do
@@ -191,7 +191,7 @@ jobs:
PLATFORM=$(echo ${{ matrix.platform }} | tr '/' '_')
echo "PLATFORM=$PLATFORM" >> $GITHUB_ENV
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Prepare Docker Buildx
uses: ./.github/actions/prepare-docker
@@ -217,7 +217,7 @@ jobs:
CROSS_TAGLIB_VERSION=${{ env.CROSS_TAGLIB_VERSION }}
- name: Upload Binaries
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: navidrome-${{ env.PLATFORM }}
path: ./output
@@ -248,7 +248,7 @@ jobs:
touch "/tmp/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
if: env.IS_LINUX == 'true' && env.IS_DOCKER_PUSH_CONFIGURED == 'true' && env.IS_ARMV5 == 'false'
with:
name: digests-${{ env.PLATFORM }}
@@ -264,10 +264,10 @@ jobs:
env:
REGISTRY_IMAGE: ghcr.io/${{ github.repository }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Download digests
uses: actions/download-artifact@v5
uses: actions/download-artifact@v6
with:
path: /tmp/digests
pattern: digests-*
@@ -318,9 +318,9 @@ jobs:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
path: ./binaries
pattern: navidrome-windows*
@@ -339,7 +339,7 @@ jobs:
du -h binaries/msi/*.msi
- name: Upload MSI files
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: navidrome-windows-installers
path: binaries/msi/*.msi
@@ -352,12 +352,12 @@ jobs:
outputs:
package_list: ${{ steps.set-package-list.outputs.package_list }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
path: ./binaries
pattern: navidrome-*
@@ -383,7 +383,7 @@ jobs:
rm ./dist/*.tar.gz ./dist/*.zip
- name: Upload all-packages artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: packages
path: dist/navidrome_0*
@@ -406,13 +406,13 @@ jobs:
item: ${{ fromJson(needs.release.outputs.package_list) }}
steps:
- name: Download all-packages artifact
uses: actions/download-artifact@v5
uses: actions/download-artifact@v6
with:
name: packages
path: ./dist
- name: Upload all-packages artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: navidrome_linux_${{ matrix.item }}
path: dist/navidrome_0*_linux_${{ matrix.item }}

View File

@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository_owner == 'navidrome' }}
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Get updated translations
id: poeditor
env:

View File

@@ -137,7 +137,6 @@ ENV ND_MUSICFOLDER=/music
ENV ND_DATAFOLDER=/data
ENV ND_CONFIGFILE=/data/navidrome.toml
ENV ND_PORT=4533
ENV GODEBUG="asyncpreemptoff=1"
RUN touch /.nddockerenv
EXPOSE ${ND_PORT}

View File

@@ -16,7 +16,7 @@ DOCKER_TAG ?= deluan/navidrome:develop
# Taglib version to use in cross-compilation, from https://github.com/navidrome/cross-taglib
CROSS_TAGLIB_VERSION ?= 2.1.1-1
GOLANGCI_LINT_VERSION ?= v2.5.0
GOLANGCI_LINT_VERSION ?= v2.6.2
UI_SRC_FILES := $(shell find ui -type f -not -path "ui/build/*" -not -path "ui/node_modules/*")
@@ -54,7 +54,7 @@ testall: test-race test-i18n test-js ##@Development Run Go and JS tests
.PHONY: testall
test-race: ##@Development Run Go tests with race detector
go test -tags netgo -race -shuffle=on ./...
go test -tags netgo -race -shuffle=on $(PKG)
.PHONY: test-race
test-js: ##@Development Run JS tests

View File

@@ -346,7 +346,7 @@ func startPluginManager(ctx context.Context) func() error {
// TODO: Implement some struct tags to map flags to viper
func init() {
cobra.OnInitialize(func() {
conf.InitConfig(cfgFile)
conf.InitConfig(cfgFile, true)
})
rootCmd.PersistentFlags().StringVarP(&cfgFile, "configfile", "c", "", `config file (default "./navidrome.toml")`)

View File

@@ -8,6 +8,7 @@ import (
"github.com/navidrome/navidrome/core"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/utils/pl"
@@ -17,11 +18,13 @@ import (
var (
fullScan bool
subprocess bool
targets []string
)
func init() {
scanCmd.Flags().BoolVarP(&fullScan, "full", "f", false, "check all subfolders, ignoring timestamps")
scanCmd.Flags().BoolVarP(&subprocess, "subprocess", "", false, "run as subprocess (internal use)")
scanCmd.Flags().StringArrayVarP(&targets, "target", "t", []string{}, "list of libraryID:folderPath pairs, can be repeated (e.g., \"-t 1:Music/Rock -t 1:Music/Jazz -t 2:Classical\")")
rootCmd.AddCommand(scanCmd)
}
@@ -68,7 +71,18 @@ func runScanner(ctx context.Context) {
ds := persistence.New(sqlDB)
pls := core.NewPlaylists(ds)
progress, err := scanner.CallScan(ctx, ds, pls, fullScan)
// Parse targets if provided
var scanTargets []model.ScanTarget
if len(targets) > 0 {
var err error
scanTargets, err = model.ParseTargets(targets)
if err != nil {
log.Fatal(ctx, "Failed to parse targets", err)
}
log.Info(ctx, "Scanning specific folders", "numTargets", len(scanTargets))
}
progress, err := scanner.CallScan(ctx, ds, pls, fullScan, scanTargets)
if err != nil {
log.Fatal(ctx, "Failed to scan", err)
}

View File

@@ -69,10 +69,11 @@ func CreateNativeAPIRouter(ctx context.Context) *nativeapi.Router {
artworkArtwork := artwork.NewArtwork(dataStore, fileCache, fFmpeg, provider)
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.GetWatcher(dataStore, scannerScanner)
library := core.NewLibrary(dataStore, scannerScanner, watcher, broker)
router := nativeapi.New(dataStore, share, playlists, insights, library)
modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.GetWatcher(dataStore, modelScanner)
library := core.NewLibrary(dataStore, modelScanner, watcher, broker)
maintenance := core.NewMaintenance(dataStore)
router := nativeapi.New(dataStore, share, playlists, insights, library, maintenance)
return router
}
@@ -94,10 +95,10 @@ func CreateSubsonicAPIRouter(ctx context.Context) *subsonic.Router {
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
playTracker := scrobbler.GetPlayTracker(dataStore, broker, manager)
playbackServer := playback.GetInstance(dataStore)
router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, scannerScanner, broker, playlists, playTracker, share, playbackServer, metricsMetrics)
router := subsonic.New(dataStore, artworkArtwork, mediaStreamer, archiver, players, provider, modelScanner, broker, playlists, playTracker, share, playbackServer, metricsMetrics)
return router
}
@@ -149,7 +150,7 @@ func CreatePrometheus() metrics.Metrics {
return metricsMetrics
}
func CreateScanner(ctx context.Context) scanner.Scanner {
func CreateScanner(ctx context.Context) model.Scanner {
sqlDB := db.Db()
dataStore := persistence.New(sqlDB)
fileCache := artwork.GetImageCache()
@@ -162,8 +163,8 @@ func CreateScanner(ctx context.Context) scanner.Scanner {
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
return scannerScanner
modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
return modelScanner
}
func CreateScanWatcher(ctx context.Context) scanner.Watcher {
@@ -179,8 +180,8 @@ func CreateScanWatcher(ctx context.Context) scanner.Watcher {
cacheWarmer := artwork.NewCacheWarmer(artworkArtwork, fileCache)
broker := events.GetBroker()
playlists := core.NewPlaylists(dataStore)
scannerScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.GetWatcher(dataStore, scannerScanner)
modelScanner := scanner.New(ctx, dataStore, cacheWarmer, broker, playlists, metricsMetrics)
watcher := scanner.GetWatcher(dataStore, modelScanner)
return watcher
}
@@ -201,7 +202,7 @@ func getPluginManager() plugins.Manager {
// wire_injectors.go:
var allProviders = wire.NewSet(core.Set, artwork.Set, server.New, subsonic.New, nativeapi.New, public.New, persistence.New, lastfm.NewRouter, listenbrainz.NewRouter, events.GetBroker, scanner.New, scanner.GetWatcher, plugins.GetManager, metrics.GetPrometheusInstance, db.Db, wire.Bind(new(agents.PluginLoader), new(plugins.Manager)), wire.Bind(new(scrobbler.PluginLoader), new(plugins.Manager)), wire.Bind(new(metrics.PluginLoader), new(plugins.Manager)), wire.Bind(new(core.Scanner), new(scanner.Scanner)), wire.Bind(new(core.Watcher), new(scanner.Watcher)))
var allProviders = wire.NewSet(core.Set, artwork.Set, server.New, subsonic.New, nativeapi.New, public.New, persistence.New, lastfm.NewRouter, listenbrainz.NewRouter, events.GetBroker, scanner.New, scanner.GetWatcher, plugins.GetManager, metrics.GetPrometheusInstance, db.Db, wire.Bind(new(agents.PluginLoader), new(plugins.Manager)), wire.Bind(new(scrobbler.PluginLoader), new(plugins.Manager)), wire.Bind(new(metrics.PluginLoader), new(plugins.Manager)), wire.Bind(new(core.Watcher), new(scanner.Watcher)))
func GetPluginManager(ctx context.Context) plugins.Manager {
manager := getPluginManager()

View File

@@ -45,7 +45,6 @@ var allProviders = wire.NewSet(
wire.Bind(new(agents.PluginLoader), new(plugins.Manager)),
wire.Bind(new(scrobbler.PluginLoader), new(plugins.Manager)),
wire.Bind(new(metrics.PluginLoader), new(plugins.Manager)),
wire.Bind(new(core.Scanner), new(scanner.Scanner)),
wire.Bind(new(core.Watcher), new(scanner.Watcher)),
)
@@ -103,7 +102,7 @@ func CreatePrometheus() metrics.Metrics {
))
}
func CreateScanner(ctx context.Context) scanner.Scanner {
func CreateScanner(ctx context.Context) model.Scanner {
panic(wire.Build(
allProviders,
))

View File

@@ -125,11 +125,13 @@ type configOptions struct {
DevAlbumInfoTimeToLive time.Duration
DevExternalScanner bool
DevScannerThreads uint
DevSelectiveWatcher bool
DevInsightsInitialDelay time.Duration
DevEnablePlayerInsights bool
DevEnablePluginsInsights bool
DevPluginCompilationTimeout time.Duration
DevExternalArtistFetchMultiplier float64
DevOptimizeDB bool
}
type scannerOptions struct {
@@ -175,7 +177,8 @@ type spotifyOptions struct {
}
type deezerOptions struct {
Enabled bool
Enabled bool
Language string
}
type listenBrainzOptions struct {
@@ -425,7 +428,7 @@ func validatePurgeMissingOption() error {
}
}
if !valid {
err := fmt.Errorf("Invalid Scanner.PurgeMissing value: '%s'. Must be one of: %v", Server.Scanner.PurgeMissing, allowedValues)
err := fmt.Errorf("invalid Scanner.PurgeMissing value: '%s'. Must be one of: %v", Server.Scanner.PurgeMissing, allowedValues)
log.Error(err.Error())
Server.Scanner.PurgeMissing = consts.PurgeMissingNever
return err
@@ -565,6 +568,7 @@ func setViperDefaults() {
viper.SetDefault("spotify.id", "")
viper.SetDefault("spotify.secret", "")
viper.SetDefault("deezer.enabled", true)
viper.SetDefault("deezer.language", "en")
viper.SetDefault("listenbrainz.enabled", true)
viper.SetDefault("listenbrainz.baseurl", "https://api.listenbrainz.org/1/")
viper.SetDefault("httpsecurityheaders.customframeoptionsvalue", "DENY")
@@ -600,20 +604,27 @@ func setViperDefaults() {
viper.SetDefault("devalbuminfotimetolive", consts.AlbumInfoTimeToLive)
viper.SetDefault("devexternalscanner", true)
viper.SetDefault("devscannerthreads", 5)
viper.SetDefault("devselectivewatcher", true)
viper.SetDefault("devinsightsinitialdelay", consts.InsightsInitialDelay)
viper.SetDefault("devenableplayerinsights", true)
viper.SetDefault("devenablepluginsinsights", true)
viper.SetDefault("devplugincompilationtimeout", time.Minute)
viper.SetDefault("devexternalartistfetchmultiplier", 1.5)
viper.SetDefault("devoptimizedb", true)
}
func init() {
setViperDefaults()
}
func InitConfig(cfgFile string) {
func InitConfig(cfgFile string, loadEnvVars bool) {
codecRegistry := viper.NewCodecRegistry()
_ = codecRegistry.RegisterCodec("ini", ini.Codec{})
_ = codecRegistry.RegisterCodec("ini", ini.Codec{
LoadOptions: ini.LoadOptions{
UnescapeValueDoubleQuotes: true,
UnescapeValueCommentSymbols: true,
},
})
viper.SetOptions(viper.WithCodecRegistry(codecRegistry))
cfgFile = getConfigFile(cfgFile)
@@ -627,10 +638,12 @@ func InitConfig(cfgFile string) {
}
_ = viper.BindEnv("port")
viper.SetEnvPrefix("ND")
replacer := strings.NewReplacer(".", "_")
viper.SetEnvKeyReplacer(replacer)
viper.AutomaticEnv()
if loadEnvVars {
viper.SetEnvPrefix("ND")
replacer := strings.NewReplacer(".", "_")
viper.SetEnvKeyReplacer(replacer)
viper.AutomaticEnv()
}
err := viper.ReadInConfig()
if viper.ConfigFileUsed() != "" && err != nil {

View File

@@ -31,7 +31,7 @@ var _ = Describe("Configuration", func() {
filename := filepath.Join("testdata", "cfg."+format)
// Initialize config with the test file
conf.InitConfig(filename)
conf.InitConfig(filename, false)
// Load the configuration (with noConfigDump=true)
conf.Load(true)
@@ -39,6 +39,7 @@ var _ = Describe("Configuration", func() {
Expect(conf.Server.MusicFolder).To(Equal(fmt.Sprintf("/%s/music", format)))
Expect(conf.Server.UIWelcomeMessage).To(Equal("Welcome " + format))
Expect(conf.Server.Tags["custom"].Aliases).To(Equal([]string{format, "test"}))
Expect(conf.Server.Tags["artist"].Split).To(Equal([]string{";"}))
// The config file used should be the one we created
Expect(conf.Server.ConfigFile).To(Equal(filename))

View File

@@ -1,6 +1,7 @@
[default]
MusicFolder = /ini/music
UIWelcomeMessage = Welcome ini
UIWelcomeMessage = 'Welcome ini' ; Just a comment to test the LoadOptions
[Tags]
Custom.Aliases = ini,test
Custom.Aliases = ini,test
artist.Split = ";" # Should be able to read ; as a separator

View File

@@ -2,6 +2,9 @@
"musicFolder": "/json/music",
"uiWelcomeMessage": "Welcome json",
"Tags": {
"artist": {
"split": ";"
},
"custom": {
"aliases": [
"json",

View File

@@ -1,5 +1,7 @@
musicFolder = "/toml/music"
uiWelcomeMessage = "Welcome toml"
Tags.artist.Split = ';'
[Tags.custom]
aliases = ["toml", "test"]

View File

@@ -1,6 +1,8 @@
musicFolder: "/yaml/music"
uiWelcomeMessage: "Welcome yaml"
Tags:
artist:
split: [";"]
custom:
aliases:
- yaml

View File

@@ -87,7 +87,7 @@ func (a *Agents) getEnabledAgentNames() []enabledAgent {
} else if isPlugin {
validAgents = append(validAgents, enabledAgent{name: name, isPlugin: true})
} else {
log.Warn("Unknown agent ignored", "name", name)
log.Debug("Unknown agent ignored", "name", name)
}
}
return validAgents

View File

@@ -1,6 +1,7 @@
package deezer
import (
bytes "bytes"
"context"
"encoding/json"
"errors"
@@ -9,11 +10,14 @@ import (
"net/http"
"net/url"
"strconv"
"strings"
"github.com/microcosm-cc/bluemonday"
"github.com/navidrome/navidrome/log"
)
const apiBaseURL = "https://api.deezer.com"
const authBaseURL = "https://auth.deezer.com"
var (
ErrNotFound = errors.New("deezer: not found")
@@ -25,10 +29,15 @@ type httpDoer interface {
type client struct {
httpDoer httpDoer
language string
jwt jwtToken
}
func newClient(hc httpDoer) *client {
return &client{hc}
func newClient(hc httpDoer, language string) *client {
return &client{
httpDoer: hc,
language: language,
}
}
func (c *client) searchArtists(ctx context.Context, name string, limit int) ([]Artist, error) {
@@ -53,7 +62,7 @@ func (c *client) searchArtists(ctx context.Context, name string, limit int) ([]A
return results.Data, nil
}
func (c *client) makeRequest(req *http.Request, response interface{}) error {
func (c *client) makeRequest(req *http.Request, response any) error {
log.Trace(req.Context(), fmt.Sprintf("Sending Deezer %s request", req.Method), "url", req.URL)
resp, err := c.httpDoer.Do(req)
if err != nil {
@@ -81,3 +90,129 @@ func (c *client) parseError(data []byte) error {
}
return fmt.Errorf("deezer error(%d): %s", deezerError.Error.Code, deezerError.Error.Message)
}
func (c *client) getRelatedArtists(ctx context.Context, artistID int) ([]Artist, error) {
req, err := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("%s/artist/%d/related", apiBaseURL, artistID), nil)
if err != nil {
return nil, err
}
var results RelatedArtists
err = c.makeRequest(req, &results)
if err != nil {
return nil, err
}
return results.Data, nil
}
func (c *client) getTopTracks(ctx context.Context, artistID int, limit int) ([]Track, error) {
params := url.Values{}
params.Add("limit", strconv.Itoa(limit))
req, err := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("%s/artist/%d/top", apiBaseURL, artistID), nil)
if err != nil {
return nil, err
}
req.URL.RawQuery = params.Encode()
var results TopTracks
err = c.makeRequest(req, &results)
if err != nil {
return nil, err
}
return results.Data, nil
}
const pipeAPIURL = "https://pipe.deezer.com/api"
var strictPolicy = bluemonday.StrictPolicy()
func (c *client) getArtistBio(ctx context.Context, artistID int) (string, error) {
jwt, err := c.getJWT(ctx)
if err != nil {
return "", fmt.Errorf("deezer: failed to get JWT: %w", err)
}
query := map[string]any{
"operationName": "ArtistBio",
"variables": map[string]any{
"artistId": strconv.Itoa(artistID),
},
"query": `query ArtistBio($artistId: String!) {
artist(artistId: $artistId) {
bio {
full
}
}
}`,
}
body, err := json.Marshal(query)
if err != nil {
return "", err
}
req, err := http.NewRequestWithContext(ctx, "POST", pipeAPIURL, bytes.NewReader(body))
if err != nil {
return "", err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Accept-Language", c.language)
req.Header.Set("Authorization", "Bearer "+jwt)
log.Trace(ctx, "Fetching Deezer artist biography via GraphQL", "artistId", artistID, "language", c.language)
resp, err := c.httpDoer.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
return "", fmt.Errorf("deezer: failed to fetch biography: %s", resp.Status)
}
data, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
type graphQLResponse struct {
Data struct {
Artist struct {
Bio struct {
Full string `json:"full"`
} `json:"bio"`
} `json:"artist"`
} `json:"data"`
Errors []struct {
Message string `json:"message"`
}
}
var result graphQLResponse
if err := json.Unmarshal(data, &result); err != nil {
return "", fmt.Errorf("deezer: failed to parse GraphQL response: %w", err)
}
if len(result.Errors) > 0 {
var errs []error
for m := range result.Errors {
errs = append(errs, errors.New(result.Errors[m].Message))
}
err := errors.Join(errs...)
return "", fmt.Errorf("deezer: GraphQL error: %w", err)
}
if result.Data.Artist.Bio.Full == "" {
return "", errors.New("deezer: biography not found")
}
return cleanBio(result.Data.Artist.Bio.Full), nil
}
func cleanBio(bio string) string {
bio = strings.ReplaceAll(bio, "</p>", "\n")
return strictPolicy.Sanitize(bio)
}
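For reference, a minimal sketch of how cleanBio behaves on a made-up bio fragment, assuming bluemonday's StrictPolicy as used above:

package main

import (
    "fmt"
    "strings"

    "github.com/microcosm-cc/bluemonday"
)

func main() {
    // Closing paragraph tags become newlines first, then StrictPolicy strips
    // every remaining HTML tag, leaving plain text.
    bio := "<p>Formed in 1993.</p><p>Split in 2021.</p>"
    bio = strings.ReplaceAll(bio, "</p>", "\n")
    fmt.Printf("%q\n", bluemonday.StrictPolicy().Sanitize(bio))
    // "Formed in 1993.\nSplit in 2021.\n"
}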

View File

@@ -0,0 +1,101 @@
package deezer
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"sync"
"time"
"github.com/lestrrat-go/jwx/v2/jwt"
"github.com/navidrome/navidrome/log"
)
type jwtToken struct {
token string
expiresAt time.Time
mu sync.RWMutex
}
func (j *jwtToken) get() (string, bool) {
j.mu.RLock()
defer j.mu.RUnlock()
if time.Now().Before(j.expiresAt) {
return j.token, true
}
return "", false
}
func (j *jwtToken) set(token string, expiresIn time.Duration) {
j.mu.Lock()
defer j.mu.Unlock()
j.token = token
j.expiresAt = time.Now().Add(expiresIn)
}
func (c *client) getJWT(ctx context.Context) (string, error) {
// Check if we have a valid cached token
if token, valid := c.jwt.get(); valid {
return token, nil
}
// Fetch a new anonymous token
req, err := http.NewRequestWithContext(ctx, "GET", authBaseURL+"/login/anonymous?jo=p&rto=c", nil)
if err != nil {
return "", err
}
req.Header.Set("Accept", "application/json")
resp, err := c.httpDoer.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
return "", fmt.Errorf("deezer: failed to get JWT token: %s", resp.Status)
}
data, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
type authResponse struct {
JWT string `json:"jwt"`
}
var result authResponse
if err := json.Unmarshal(data, &result); err != nil {
return "", fmt.Errorf("deezer: failed to parse auth response: %w", err)
}
if result.JWT == "" {
return "", errors.New("deezer: no JWT token in response")
}
// Parse JWT to get actual expiration time
token, err := jwt.ParseString(result.JWT, jwt.WithVerify(false), jwt.WithValidate(false))
if err != nil {
return "", fmt.Errorf("deezer: failed to parse JWT token: %w", err)
}
// Calculate TTL with a 1-minute buffer for clock skew and network delays
expiresAt := token.Expiration()
if expiresAt.IsZero() {
return "", errors.New("deezer: JWT token has no expiration time")
}
ttl := time.Until(expiresAt) - 1*time.Minute
if ttl <= 0 {
return "", errors.New("deezer: JWT token already expired or expires too soon")
}
c.jwt.set(result.JWT, ttl)
log.Trace(ctx, "Fetched new Deezer JWT token", "expiresAt", expiresAt, "ttl", ttl)
return result.JWT, nil
}
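A small sketch of the TTL math above, using the same jwx calls; the 5-minute token is fabricated for illustration:

package main

import (
    "fmt"
    "time"

    "github.com/lestrrat-go/jwx/v2/jwt"
)

func main() {
    // Build and serialize an unsigned token, then parse it back without
    // verification, much as the client does with Deezer's anonymous JWT.
    tok, err := jwt.NewBuilder().Expiration(time.Now().Add(5 * time.Minute)).Build()
    if err != nil {
        panic(err)
    }
    raw, err := jwt.Sign(tok, jwt.WithInsecureNoSignature())
    if err != nil {
        panic(err)
    }
    parsed, err := jwt.ParseString(string(raw), jwt.WithVerify(false), jwt.WithValidate(false))
    if err != nil {
        panic(err)
    }
    // Cache for the remaining lifetime minus the 1-minute safety buffer.
    ttl := time.Until(parsed.Expiration()) - time.Minute
    fmt.Println("cache for:", ttl.Round(time.Second)) // ~4m0s
}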

View File

@@ -0,0 +1,293 @@
package deezer
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"sync"
"time"
"github.com/lestrrat-go/jwx/v2/jwt"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("JWT Authentication", func() {
var httpClient *fakeHttpClient
var client *client
var ctx context.Context
BeforeEach(func() {
httpClient = &fakeHttpClient{}
client = newClient(httpClient, "en")
ctx = context.Background()
})
Describe("getJWT", func() {
Context("with a valid JWT response", func() {
It("successfully fetches and caches a JWT token", func() {
testJWT := createTestJWT(5 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, testJWT))),
})
token, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token).To(Equal(testJWT))
})
It("returns the cached token on subsequent calls", func() {
testJWT := createTestJWT(5 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, testJWT))),
})
// First call should fetch from API
token1, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token1).To(Equal(testJWT))
Expect(httpClient.lastRequest.URL.Path).To(Equal("/login/anonymous"))
// Second call should return cached token without hitting API
httpClient.lastRequest = nil // Clear last request to verify no new request is made
token2, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token2).To(Equal(testJWT))
Expect(httpClient.lastRequest).To(BeNil()) // No new request made
})
It("parses the JWT expiration time correctly", func() {
expectedExpiration := time.Now().Add(5 * time.Minute)
testToken, err := jwt.NewBuilder().
Expiration(expectedExpiration).
Build()
Expect(err).To(BeNil())
testJWT, err := jwt.Sign(testToken, jwt.WithInsecureNoSignature())
Expect(err).To(BeNil())
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, string(testJWT)))),
})
token, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token).ToNot(BeEmpty())
// Verify the token is cached until close to expiration
// The cache should expire 1 minute before the JWT expires
expectedCacheExpiry := expectedExpiration.Add(-1 * time.Minute)
Expect(client.jwt.expiresAt).To(BeTemporally("~", expectedCacheExpiry, 2*time.Second))
})
})
Context("with JWT tokens that expire soon", func() {
It("rejects tokens that expire in less than 1 minute", func() {
// Create a token that expires in 30 seconds (less than 1-minute buffer)
testJWT := createTestJWT(30 * time.Second)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, testJWT))),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("JWT token already expired or expires too soon"))
})
It("rejects already expired tokens", func() {
// Create a token that expired 1 minute ago
testJWT := createTestJWT(-1 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, testJWT))),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("JWT token already expired or expires too soon"))
})
It("accepts tokens that expire in more than 1 minute", func() {
// Create a token that expires in 2 minutes (just over the 1-minute buffer)
testJWT := createTestJWT(2 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, testJWT))),
})
token, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token).ToNot(BeEmpty())
})
})
Context("with invalid responses", func() {
It("handles HTTP error responses", func() {
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 500,
Body: io.NopCloser(bytes.NewBufferString(`{"error":"Internal server error"}`)),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("failed to get JWT token"))
})
It("handles malformed JSON responses", func() {
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(`{invalid json}`)),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("failed to parse auth response"))
})
It("handles responses with empty JWT field", func() {
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(`{"jwt":""}`)),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("deezer: no JWT token in response"))
})
It("handles invalid JWT tokens", func() {
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(`{"jwt":"not-a-valid-jwt"}`)),
})
_, err := client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("failed to parse JWT token"))
})
It("rejects JWT tokens without expiration", func() {
// Create a JWT without expiration claim
testToken, err := jwt.NewBuilder().
Claim("custom", "value").
Build()
Expect(err).To(BeNil())
// Verify token has no expiration
Expect(testToken.Expiration().IsZero()).To(BeTrue())
testJWT, err := jwt.Sign(testToken, jwt.WithInsecureNoSignature())
Expect(err).To(BeNil())
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, string(testJWT)))),
})
_, err = client.getJWT(ctx)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("deezer: JWT token has no expiration time"))
})
})
Context("token caching behavior", func() {
It("fetches a new token when the cached token expires", func() {
// First token expires in 5 minutes
firstJWT := createTestJWT(5 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, firstJWT))),
})
token1, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token1).To(Equal(firstJWT))
// Manually expire the cached token
client.jwt.expiresAt = time.Now().Add(-1 * time.Second)
// Second token with different expiration (10 minutes)
secondJWT := createTestJWT(10 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s"}`, secondJWT))),
})
token2, err := client.getJWT(ctx)
Expect(err).To(BeNil())
Expect(token2).To(Equal(secondJWT))
Expect(token2).ToNot(Equal(token1))
})
})
})
Describe("jwtToken cache", func() {
var cache *jwtToken
BeforeEach(func() {
cache = &jwtToken{}
})
It("returns false for expired tokens", func() {
cache.set("test-token", -1*time.Second) // Already expired
token, valid := cache.get()
Expect(valid).To(BeFalse())
Expect(token).To(BeEmpty())
})
It("returns true for valid tokens", func() {
cache.set("test-token", 4*time.Minute)
token, valid := cache.get()
Expect(valid).To(BeTrue())
Expect(token).To(Equal("test-token"))
})
It("is thread-safe for concurrent access", func() {
wg := sync.WaitGroup{}
// Writer goroutine
wg.Go(func() {
for i := 0; i < 100; i++ {
cache.set(fmt.Sprintf("token-%d", i), 1*time.Hour)
time.Sleep(1 * time.Millisecond)
}
})
// Reader goroutine
wg.Go(func() {
for i := 0; i < 100; i++ {
cache.get()
time.Sleep(1 * time.Millisecond)
}
})
// Wait for both goroutines to complete
wg.Wait()
// Verify final state is valid
token, valid := cache.get()
Expect(valid).To(BeTrue())
Expect(token).To(HavePrefix("token-"))
})
})
})
// createTestJWT creates a valid JWT token for testing purposes
func createTestJWT(expiresIn time.Duration) string {
token, err := jwt.NewBuilder().
Expiration(time.Now().Add(expiresIn)).
Build()
if err != nil {
panic(fmt.Sprintf("failed to create test JWT: %v", err))
}
signed, err := jwt.Sign(token, jwt.WithInsecureNoSignature())
if err != nil {
panic(fmt.Sprintf("failed to sign test JWT: %v", err))
}
return string(signed)
}

View File

@@ -2,10 +2,11 @@ package deezer
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"os"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
@@ -17,7 +18,7 @@ var _ = Describe("client", func() {
BeforeEach(func() {
httpClient = &fakeHttpClient{}
client = newClient(httpClient)
client = newClient(httpClient, "en")
})
Describe("ArtistImages", func() {
@@ -26,7 +27,7 @@ var _ = Describe("client", func() {
Expect(err).To(BeNil())
httpClient.mock("https://api.deezer.com/search/artist", http.Response{Body: f, StatusCode: 200})
artists, err := client.searchArtists(context.TODO(), "Michael Jackson", 20)
artists, err := client.searchArtists(GinkgoT().Context(), "Michael Jackson", 20)
Expect(err).To(BeNil())
Expect(artists).To(HaveLen(17))
Expect(artists[0].Name).To(Equal("Michael Jackson"))
@@ -39,10 +40,136 @@ var _ = Describe("client", func() {
Body: io.NopCloser(bytes.NewBufferString(`{"data":[],"total":0}`)),
})
_, err := client.searchArtists(context.TODO(), "Michael Jackson", 20)
_, err := client.searchArtists(GinkgoT().Context(), "Michael Jackson", 20)
Expect(err).To(MatchError(ErrNotFound))
})
})
Describe("ArtistBio", func() {
BeforeEach(func() {
// Mock the JWT token endpoint with a valid JWT that expires in 5 minutes
testJWT := createTestJWT(5 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s","refresh_token":""}`, testJWT))),
})
})
It("returns artist bio from a successful request", func() {
f, err := os.Open("tests/fixtures/deezer.artist.bio.json")
Expect(err).To(BeNil())
httpClient.mock("https://pipe.deezer.com/api", http.Response{Body: f, StatusCode: 200})
bio, err := client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(BeNil())
Expect(bio).To(ContainSubstring("Schoolmates Thomas and Guy-Manuel"))
Expect(bio).ToNot(ContainSubstring("<p>"))
Expect(bio).ToNot(ContainSubstring("</p>"))
})
It("uses the configured language", func() {
client = newClient(httpClient, "fr")
// Mock JWT token for the new client instance with a valid JWT
testJWT := createTestJWT(5 * time.Minute)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s","refresh_token":""}`, testJWT))),
})
f, err := os.Open("tests/fixtures/deezer.artist.bio.json")
Expect(err).To(BeNil())
httpClient.mock("https://pipe.deezer.com/api", http.Response{Body: f, StatusCode: 200})
_, err = client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(BeNil())
Expect(httpClient.lastRequest.Header.Get("Accept-Language")).To(Equal("fr"))
})
It("includes the JWT token in the request", func() {
f, err := os.Open("tests/fixtures/deezer.artist.bio.json")
Expect(err).To(BeNil())
httpClient.mock("https://pipe.deezer.com/api", http.Response{Body: f, StatusCode: 200})
_, err = client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(BeNil())
// Verify that the Authorization header has the Bearer token format
authHeader := httpClient.lastRequest.Header.Get("Authorization")
Expect(authHeader).To(HavePrefix("Bearer "))
Expect(len(authHeader)).To(BeNumerically(">", 20)) // JWT tokens are longer than 20 chars
})
It("handles GraphQL errors", func() {
errorResponse := `{
"data": {
"artist": {
"bio": {
"full": ""
}
}
},
"errors": [
{
"message": "Artist not found"
},
{
"message": "Invalid artist ID"
}
]
}`
httpClient.mock("https://pipe.deezer.com/api", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(errorResponse)),
})
_, err := client.getArtistBio(GinkgoT().Context(), 999)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("GraphQL error"))
Expect(err.Error()).To(ContainSubstring("Artist not found"))
Expect(err.Error()).To(ContainSubstring("Invalid artist ID"))
})
It("handles empty biography", func() {
emptyBioResponse := `{
"data": {
"artist": {
"bio": {
"full": ""
}
}
}
}`
httpClient.mock("https://pipe.deezer.com/api", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(emptyBioResponse)),
})
_, err := client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(MatchError("deezer: biography not found"))
})
It("handles JWT token fetch failure", func() {
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 500,
Body: io.NopCloser(bytes.NewBufferString(`{"error":"Internal server error"}`)),
})
_, err := client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("failed to get JWT"))
})
It("handles JWT token that expires too soon", func() {
// Create a JWT that expires in 30 seconds (less than the 1-minute buffer)
expiredJWT := createTestJWT(30 * time.Second)
httpClient.mock("https://auth.deezer.com/login/anonymous", http.Response{
StatusCode: 200,
Body: io.NopCloser(bytes.NewBufferString(fmt.Sprintf(`{"jwt":"%s","refresh_token":""}`, expiredJWT))),
})
_, err := client.getArtistBio(GinkgoT().Context(), 27)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("JWT token already expired or expires too soon"))
})
})
})
type fakeHttpClient struct {

View File

@@ -12,6 +12,7 @@ import (
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/cache"
"github.com/navidrome/navidrome/utils/slice"
)
const deezerAgentName = "deezer"
@@ -32,7 +33,7 @@ func deezerConstructor(dataStore model.DataStore) agents.Interface {
Timeout: consts.DefaultHttpClientTimeOut,
}
cachedHttpClient := cache.NewHTTPClient(httpClient, consts.DefaultHttpClientTimeOut)
agent.client = newClient(cachedHttpClient)
agent.client = newClient(cachedHttpClient, conf.Server.Deezer.Language)
return agent
}
@@ -88,6 +89,56 @@ func (s *deezerAgent) searchArtist(ctx context.Context, name string) (*Artist, e
return &artists[0], err
}
func (s *deezerAgent) GetSimilarArtists(ctx context.Context, _, name, _ string, limit int) ([]agents.Artist, error) {
artist, err := s.searchArtist(ctx, name)
if err != nil {
return nil, err
}
related, err := s.client.getRelatedArtists(ctx, artist.ID)
if err != nil {
return nil, err
}
res := slice.Map(related, func(r Artist) agents.Artist {
return agents.Artist{
Name: r.Name,
}
})
if len(res) > limit {
res = res[:limit]
}
return res, nil
}
func (s *deezerAgent) GetArtistTopSongs(ctx context.Context, _, artistName, _ string, count int) ([]agents.Song, error) {
artist, err := s.searchArtist(ctx, artistName)
if err != nil {
return nil, err
}
tracks, err := s.client.getTopTracks(ctx, artist.ID, count)
if err != nil {
return nil, err
}
res := slice.Map(tracks, func(r Track) agents.Song {
return agents.Song{
Name: r.Title,
}
})
return res, nil
}
func (s *deezerAgent) GetArtistBiography(ctx context.Context, _, name, _ string) (string, error) {
artist, err := s.searchArtist(ctx, name)
if err != nil {
return "", err
}
return s.client.getArtistBio(ctx, artist.ID)
}
func init() {
conf.AddHook(func() {
if conf.Server.Deezer.Enabled {

View File

@@ -29,3 +29,38 @@ type Error struct {
Code int `json:"code"`
} `json:"error"`
}
type RelatedArtists struct {
Data []Artist `json:"data"`
Total int `json:"total"`
}
type TopTracks struct {
Data []Track `json:"data"`
Total int `json:"total"`
Next string `json:"next"`
}
type Track struct {
ID int `json:"id"`
Title string `json:"title"`
Link string `json:"link"`
Duration int `json:"duration"`
Rank int `json:"rank"`
Preview string `json:"preview"`
Artist Artist `json:"artist"`
Album Album `json:"album"`
Contributors []Artist `json:"contributors"`
}
type Album struct {
ID int `json:"id"`
Title string `json:"title"`
Cover string `json:"cover"`
CoverSmall string `json:"cover_small"`
CoverMedium string `json:"cover_medium"`
CoverBig string `json:"cover_big"`
CoverXl string `json:"cover_xl"`
Tracklist string `json:"tracklist"`
Type string `json:"type"`
}

View File

@@ -35,4 +35,35 @@ var _ = Describe("Responses", func() {
Expect(errorResp.Error.Message).To(Equal("Missing parameters: q"))
})
})
Describe("Related Artists", func() {
It("parses the related artists response correctly", func() {
var resp RelatedArtists
body, err := os.ReadFile("tests/fixtures/deezer.artist.related.json")
Expect(err).To(BeNil())
err = json.Unmarshal(body, &resp)
Expect(err).To(BeNil())
Expect(resp.Data).To(HaveLen(20))
justice := resp.Data[0]
Expect(justice.Name).To(Equal("Justice"))
Expect(justice.ID).To(Equal(6404))
})
})
Describe("Top Tracks", func() {
It("parses the top tracks response correctly", func() {
var resp TopTracks
body, err := os.ReadFile("tests/fixtures/deezer.artist.top.json")
Expect(err).To(BeNil())
err = json.Unmarshal(body, &resp)
Expect(err).To(BeNil())
Expect(resp.Data).To(HaveLen(5))
track := resp.Data[0]
Expect(track.Title).To(Equal("Instant Crush (feat. Julian Casablancas)"))
Expect(track.ID).To(Equal(67238732))
Expect(track.Album.Title).To(Equal("Random Access Memories"))
})
})
})

View File

@@ -38,6 +38,7 @@ type lastfmAgent struct {
secret string
lang string
client *client
httpClient httpDoer
getInfoMutex sync.Mutex
}
@@ -56,6 +57,7 @@ func lastFMConstructor(ds model.DataStore) *lastfmAgent {
Timeout: consts.DefaultHttpClientTimeOut,
}
chc := cache.NewHTTPClient(hc, consts.DefaultHttpClientTimeOut)
l.httpClient = chc
l.client = newClient(l.apiKey, l.secret, l.lang, chc)
return l
}
@@ -190,13 +192,13 @@ func (l *lastfmAgent) GetArtistTopSongs(ctx context.Context, id, artistName, mbi
return res, nil
}
var artistOpenGraphQuery = cascadia.MustCompile(`html > head > meta[property="og:image"]`)
var (
artistOpenGraphQuery = cascadia.MustCompile(`html > head > meta[property="og:image"]`)
artistIgnoredImage = "2a96cbd8b46e442fc41c2b86b821562f" // Last.fm artist placeholder image name
)
func (l *lastfmAgent) GetArtistImages(ctx context.Context, _, name, mbid string) ([]agents.ExternalImage, error) {
log.Debug(ctx, "Getting artist images from Last.fm", "name", name)
hc := http.Client{
Timeout: consts.DefaultHttpClientTimeOut,
}
a, err := l.callArtistGetInfo(ctx, name)
if err != nil {
return nil, fmt.Errorf("get artist info: %w", err)
@@ -205,7 +207,7 @@ func (l *lastfmAgent) GetArtistImages(ctx context.Context, _, name, mbid string)
if err != nil {
return nil, fmt.Errorf("create artist image request: %w", err)
}
resp, err := hc.Do(req)
resp, err := l.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("get artist url: %w", err)
}
@@ -222,11 +224,16 @@ func (l *lastfmAgent) GetArtistImages(ctx context.Context, _, name, mbid string)
return res, nil
}
for _, attr := range n.Attr {
if attr.Key == "content" {
res = []agents.ExternalImage{
{URL: attr.Val},
}
break
if attr.Key != "content" {
continue
}
if strings.Contains(attr.Val, artistIgnoredImage) {
log.Debug(ctx, "Artist image is ignored default image", "name", name, "url", attr.Val)
return res, nil
}
res = []agents.ExternalImage{
{URL: attr.Val},
}
}
return res, nil
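The ignored-image check above is a plain substring match on the placeholder hash; a tiny sketch with a fabricated URL:

package main

import (
    "fmt"
    "strings"
)

// Last.fm serves this hash as its generic artist placeholder image.
const artistIgnoredImage = "2a96cbd8b46e442fc41c2b86b821562f"

func main() {
    url := "https://lastfm.freetls.fastly.net/i/u/ar0/2a96cbd8b46e442fc41c2b86b821562f.png"
    if strings.Contains(url, artistIgnoredImage) {
        fmt.Println("placeholder image, skip it")
    }
}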

View File

@@ -393,4 +393,73 @@ var _ = Describe("lastfmAgent", func() {
})
})
})
Describe("GetArtistImages", func() {
var agent *lastfmAgent
var apiClient *tests.FakeHttpClient
var httpClient *tests.FakeHttpClient
BeforeEach(func() {
apiClient = &tests.FakeHttpClient{}
httpClient = &tests.FakeHttpClient{}
client := newClient("API_KEY", "SECRET", "pt", apiClient)
agent = lastFMConstructor(ds)
agent.client = client
agent.httpClient = httpClient
})
It("returns the artist image from the page", func() {
fApi, _ := os.Open("tests/fixtures/lastfm.artist.getinfo.json")
apiClient.Res = http.Response{Body: fApi, StatusCode: 200}
fScraper, _ := os.Open("tests/fixtures/lastfm.artist.page.html")
httpClient.Res = http.Response{Body: fScraper, StatusCode: 200}
images, err := agent.GetArtistImages(ctx, "123", "U2", "")
Expect(err).ToNot(HaveOccurred())
Expect(images).To(HaveLen(1))
Expect(images[0].URL).To(Equal("https://lastfm.freetls.fastly.net/i/u/ar0/818148bf682d429dc21b59a73ef6f68e.png"))
})
It("returns empty list if image is the ignored default image", func() {
fApi, _ := os.Open("tests/fixtures/lastfm.artist.getinfo.json")
apiClient.Res = http.Response{Body: fApi, StatusCode: 200}
fScraper, _ := os.Open("tests/fixtures/lastfm.artist.page.ignored.html")
httpClient.Res = http.Response{Body: fScraper, StatusCode: 200}
images, err := agent.GetArtistImages(ctx, "123", "U2", "")
Expect(err).ToNot(HaveOccurred())
Expect(images).To(BeEmpty())
})
It("returns empty list if page has no meta tags", func() {
fApi, _ := os.Open("tests/fixtures/lastfm.artist.getinfo.json")
apiClient.Res = http.Response{Body: fApi, StatusCode: 200}
fScraper, _ := os.Open("tests/fixtures/lastfm.artist.page.no_meta.html")
httpClient.Res = http.Response{Body: fScraper, StatusCode: 200}
images, err := agent.GetArtistImages(ctx, "123", "U2", "")
Expect(err).ToNot(HaveOccurred())
Expect(images).To(BeEmpty())
})
It("returns error if API call fails", func() {
apiClient.Err = errors.New("api error")
_, err := agent.GetArtistImages(ctx, "123", "U2", "")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("get artist info"))
})
It("returns error if scraper call fails", func() {
fApi, _ := os.Open("tests/fixtures/lastfm.artist.getinfo.json")
apiClient.Res = http.Response{Body: fApi, StatusCode: 200}
httpClient.Err = errors.New("scraper error")
_, err := agent.GetArtistImages(ctx, "123", "U2", "")
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("get artist url"))
})
})
})

View File

@@ -90,6 +90,7 @@ var _ = Describe("CacheWarmer", func() {
})
It("deduplicates items in buffer", func() {
fc.SetReady(false) // Make cache unavailable so items stay in buffer
cw := NewCacheWarmer(aw, fc).(*cacheWarmer)
cw.PreCache(model.MustParseArtworkID("al-1"))
cw.PreCache(model.MustParseArtworkID("al-1"))

View File

@@ -1,6 +1,7 @@
package artwork
import (
"cmp"
"context"
"crypto/md5"
"fmt"
@@ -11,6 +12,7 @@ import (
"time"
"github.com/Masterminds/squirrel"
"github.com/maruel/natural"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/core"
"github.com/navidrome/navidrome/core/external"
@@ -116,8 +118,30 @@ func loadAlbumFoldersPaths(ctx context.Context, ds model.DataStore, albums ...mo
}
// Sort image files to ensure consistent selection of cover art
// This prioritizes files from lower-numbered disc folders by sorting the paths
slices.Sort(imgFiles)
// This prioritizes files without numeric suffixes (e.g., cover.jpg over cover.1.jpg)
// by comparing base filenames without extensions
slices.SortFunc(imgFiles, compareImageFiles)
return paths, imgFiles, &updatedAt, nil
}
// compareImageFiles compares two image file paths for sorting.
// It extracts the base filename (without extension) and compares case-insensitively.
// This ensures that "cover.jpg" sorts before "cover.1.jpg" since "cover" < "cover.1".
// Note: This function is called O(n log n) times during sorting, but in practice albums
// typically have only 1-20 image files, making the repeated string operations negligible.
func compareImageFiles(a, b string) int {
// Case-insensitive comparison
a = strings.ToLower(a)
b = strings.ToLower(b)
// Extract base filenames without extensions
baseA := strings.TrimSuffix(filepath.Base(a), filepath.Ext(a))
baseB := strings.TrimSuffix(filepath.Base(b), filepath.Ext(b))
// Compare base names first, then full paths if equal
return cmp.Or(
natural.Compare(baseA, baseB),
natural.Compare(a, b),
)
}
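A self-contained sketch of the resulting order, using a simplified comparator (plain lexicographic comparison instead of the natural ordering above) and a made-up file list:

package main

import (
    "cmp"
    "fmt"
    "path/filepath"
    "slices"
    "strings"
)

// compareByBaseName is a simplified stand-in for compareImageFiles: it keeps
// the "base name first, then full path" idea but uses plain string comparison.
func compareByBaseName(a, b string) int {
    a, b = strings.ToLower(a), strings.ToLower(b)
    baseA := strings.TrimSuffix(filepath.Base(a), filepath.Ext(a))
    baseB := strings.TrimSuffix(filepath.Base(b), filepath.Ext(b))
    return cmp.Or(strings.Compare(baseA, baseB), strings.Compare(a, b))
}

func main() {
    files := []string{"Artist/Album/cover.1.jpg", "Artist/Album/cover.jpg", "Artist/Album/Back.jpg"}
    slices.SortFunc(files, compareByBaseName)
    fmt.Println(files)
    // [Artist/Album/Back.jpg Artist/Album/cover.jpg Artist/Album/cover.1.jpg]
}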

View File

@@ -27,26 +27,7 @@ var _ = Describe("Album Artwork Reader", func() {
expectedAt = now.Add(5 * time.Minute)
// Set up the test folders with image files
repo = &fakeFolderRepo{
result: []model.Folder{
{
Path: "Artist/Album/Disc1",
ImagesUpdatedAt: expectedAt,
ImageFiles: []string{"cover.jpg", "back.jpg"},
},
{
Path: "Artist/Album/Disc2",
ImagesUpdatedAt: now,
ImageFiles: []string{"cover.jpg"},
},
{
Path: "Artist/Album/Disc10",
ImagesUpdatedAt: now,
ImageFiles: []string{"cover.jpg"},
},
},
err: nil,
}
repo = &fakeFolderRepo{}
ds = &fakeDataStore{
folderRepo: repo,
}
@@ -58,19 +39,82 @@ var _ = Describe("Album Artwork Reader", func() {
})
It("returns sorted image files", func() {
repo.result = []model.Folder{
{
Path: "Artist/Album/Disc1",
ImagesUpdatedAt: expectedAt,
ImageFiles: []string{"cover.jpg", "back.jpg", "cover.1.jpg"},
},
{
Path: "Artist/Album/Disc2",
ImagesUpdatedAt: now,
ImageFiles: []string{"cover.jpg"},
},
{
Path: "Artist/Album/Disc10",
ImagesUpdatedAt: now,
ImageFiles: []string{"cover.jpg"},
},
}
_, imgFiles, imagesUpdatedAt, err := loadAlbumFoldersPaths(ctx, ds, album)
Expect(err).ToNot(HaveOccurred())
Expect(*imagesUpdatedAt).To(Equal(expectedAt))
// Check that image files are sorted alphabetically
Expect(imgFiles).To(HaveLen(4))
// Check that image files are sorted by base name (without extension)
Expect(imgFiles).To(HaveLen(5))
// The files should be sorted by full path
// Files should be sorted by base filename without extension, then by full path
// "back" < "cover", so back.jpg comes first
// Then all cover.jpg files, sorted by path
Expect(imgFiles[0]).To(Equal(filepath.FromSlash("Artist/Album/Disc1/back.jpg")))
Expect(imgFiles[1]).To(Equal(filepath.FromSlash("Artist/Album/Disc1/cover.jpg")))
Expect(imgFiles[2]).To(Equal(filepath.FromSlash("Artist/Album/Disc10/cover.jpg")))
Expect(imgFiles[3]).To(Equal(filepath.FromSlash("Artist/Album/Disc2/cover.jpg")))
Expect(imgFiles[2]).To(Equal(filepath.FromSlash("Artist/Album/Disc2/cover.jpg")))
Expect(imgFiles[3]).To(Equal(filepath.FromSlash("Artist/Album/Disc10/cover.jpg")))
Expect(imgFiles[4]).To(Equal(filepath.FromSlash("Artist/Album/Disc1/cover.1.jpg")))
})
It("prioritizes files without numeric suffixes", func() {
// Test case for issue #4683: cover.jpg should come before cover.1.jpg
repo.result = []model.Folder{
{
Path: "Artist/Album",
ImagesUpdatedAt: now,
ImageFiles: []string{"cover.1.jpg", "cover.jpg", "cover.2.jpg"},
},
}
_, imgFiles, _, err := loadAlbumFoldersPaths(ctx, ds, album)
Expect(err).ToNot(HaveOccurred())
Expect(imgFiles).To(HaveLen(3))
// cover.jpg should come first because "cover" < "cover.1" < "cover.2"
Expect(imgFiles[0]).To(Equal(filepath.FromSlash("Artist/Album/cover.jpg")))
Expect(imgFiles[1]).To(Equal(filepath.FromSlash("Artist/Album/cover.1.jpg")))
Expect(imgFiles[2]).To(Equal(filepath.FromSlash("Artist/Album/cover.2.jpg")))
})
It("handles case-insensitive sorting", func() {
// Test that Cover.jpg and cover.jpg are treated as equivalent
repo.result = []model.Folder{
{
Path: "Artist/Album",
ImagesUpdatedAt: now,
ImageFiles: []string{"Folder.jpg", "cover.jpg", "BACK.jpg"},
},
}
_, imgFiles, _, err := loadAlbumFoldersPaths(ctx, ds, album)
Expect(err).ToNot(HaveOccurred())
Expect(imgFiles).To(HaveLen(3))
// Files should be sorted case-insensitively: BACK, cover, Folder
Expect(imgFiles[0]).To(Equal(filepath.FromSlash("Artist/Album/BACK.jpg")))
Expect(imgFiles[1]).To(Equal(filepath.FromSlash("Artist/Album/cover.jpg")))
Expect(imgFiles[2]).To(Equal(filepath.FromSlash("Artist/Album/Folder.jpg")))
})
})
})

View File

@@ -8,6 +8,7 @@ import (
"io/fs"
"os"
"path/filepath"
"slices"
"strings"
"time"
@@ -139,11 +140,22 @@ func findImageInFolder(ctx context.Context, folder, pattern string) (io.ReadClos
return nil, "", err
}
// Filter to valid image files
var imagePaths []string
for _, m := range matches {
if !model.IsImageFile(m) {
continue
}
filePath := filepath.Join(folder, m)
imagePaths = append(imagePaths, m)
}
// Sort image files by prioritizing base filenames without numeric
// suffixes (e.g., artist.jpg before artist.1.jpg)
slices.SortFunc(imagePaths, compareImageFiles)
// Try to open files in sorted order
for _, p := range imagePaths {
filePath := filepath.Join(folder, p)
f, err := os.Open(filePath)
if err != nil {
log.Warn(ctx, "Could not open cover art file", "file", filePath, err)

View File

@@ -240,24 +240,79 @@ var _ = Describe("artistArtworkReader", func() {
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create multiple matching files
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("jpg image"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.abc"), []byte("text file"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.png"), []byte("png image"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.txt"), []byte("text file"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("jpg image"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("returns the first valid image file", func() {
It("returns the first valid image file in sorted order", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
// Should return an image file, not the text file
Expect(path).To(SatisfyAny(
ContainSubstring("artist.jpg"),
ContainSubstring("artist.png"),
))
Expect(path).ToNot(ContainSubstring("artist.txt"))
// Should return an image file.
// Files are sorted alphabetically: .abc comes first but is not an image,
// so artist.jpg (which sorts before artist.png) is the one returned.
Expect(path).To(ContainSubstring("artist.jpg"))
reader.Close()
})
})
When("prioritizing files without numeric suffixes", func() {
BeforeEach(func() {
// Test case for issue #4683: artist.jpg should come before artist.1.jpg
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create multiple matches with and without numeric suffixes
Expect(os.WriteFile(filepath.Join(artistDir, "artist.1.jpg"), []byte("artist 1"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("artist main"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.2.jpg"), []byte("artist 2"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "artist.*")
})
It("returns artist.jpg before artist.1.jpg and artist.2.jpg", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
Expect(path).To(ContainSubstring("artist.jpg"))
// Verify it's the main file, not a numbered variant
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("artist main"))
reader.Close()
})
})
When("handling case-insensitive sorting", func() {
BeforeEach(func() {
// Test case to ensure case-insensitive natural sorting
artistDir := filepath.Join(tempDir, "artist")
Expect(os.MkdirAll(artistDir, 0755)).To(Succeed())
// Create files with mixed case names
Expect(os.WriteFile(filepath.Join(artistDir, "Folder.jpg"), []byte("folder"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "artist.jpg"), []byte("artist"), 0600)).To(Succeed())
Expect(os.WriteFile(filepath.Join(artistDir, "BACK.jpg"), []byte("back"), 0600)).To(Succeed())
testFunc = fromArtistFolder(ctx, artistDir, "*.*")
})
It("sorts case-insensitively", func() {
reader, path, err := testFunc()
Expect(err).ToNot(HaveOccurred())
Expect(reader).ToNot(BeNil())
// Should return artist.jpg first (case-insensitive: "artist" < "back" < "folder")
Expect(path).To(ContainSubstring("artist.jpg"))
data, err := io.ReadAll(reader)
Expect(err).ToNot(HaveOccurred())
Expect(string(data)).To(Equal("artist"))
reader.Close()
})
})

View File

@@ -113,9 +113,9 @@ func WithAdminUser(ctx context.Context, ds model.DataStore) context.Context {
if err != nil {
c, err := ds.User(ctx).CountAll()
if c == 0 && err == nil {
log.Debug(ctx, "Scanner: No admin user yet!", err)
log.Debug(ctx, "No admin user yet!", err)
} else {
log.Error(ctx, "Scanner: No admin user found!", err)
log.Error(ctx, "No admin user found!", err)
}
u = &model.User{}
}

View File

@@ -21,11 +21,6 @@ import (
"github.com/navidrome/navidrome/utils/slice"
)
// Scanner interface for triggering scans
type Scanner interface {
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
}
// Watcher interface for managing file system watchers
type Watcher interface {
Watch(ctx context.Context, lib *model.Library) error
@@ -43,13 +38,13 @@ type Library interface {
type libraryService struct {
ds model.DataStore
scanner Scanner
scanner model.Scanner
watcher Watcher
broker events.Broker
}
// NewLibrary creates a new Library service
func NewLibrary(ds model.DataStore, scanner Scanner, watcher Watcher, broker events.Broker) Library {
func NewLibrary(ds model.DataStore, scanner model.Scanner, watcher Watcher, broker events.Broker) Library {
return &libraryService{
ds: ds,
scanner: scanner,
@@ -155,7 +150,7 @@ type libraryRepositoryWrapper struct {
model.LibraryRepository
ctx context.Context
ds model.DataStore
scanner Scanner
scanner model.Scanner
watcher Watcher
broker events.Broker
}
@@ -192,7 +187,7 @@ func (r *libraryRepositoryWrapper) Save(entity interface{}) (string, error) {
return strconv.Itoa(lib.ID), nil
}
func (r *libraryRepositoryWrapper) Update(id string, entity interface{}, cols ...string) error {
func (r *libraryRepositoryWrapper) Update(id string, entity interface{}, _ ...string) error {
lib := entity.(*model.Library)
libID, err := strconv.Atoi(id)
if err != nil {

View File

@@ -29,7 +29,7 @@ var _ = Describe("Library Service", func() {
var userRepo *tests.MockedUserRepo
var ctx context.Context
var tempDir string
var scanner *mockScanner
var scanner *tests.MockScanner
var watcherManager *mockWatcherManager
var broker *mockEventBroker
@@ -43,7 +43,7 @@ var _ = Describe("Library Service", func() {
ds.MockedUser = userRepo
// Create a mock scanner that tracks calls
scanner = &mockScanner{}
scanner = tests.NewMockScanner()
// Create a mock watcher manager
watcherManager = &mockWatcherManager{
libraryStates: make(map[int]model.Library),
@@ -616,11 +616,12 @@ var _ = Describe("Library Service", func() {
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
calls := scanner.GetScanAllCalls()
Expect(calls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("triggers scan when updating library path", func() {
@@ -641,11 +642,12 @@ var _ = Describe("Library Service", func() {
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
calls := scanner.GetScanAllCalls()
Expect(calls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("does not trigger scan when updating library without path change", func() {
@@ -661,7 +663,7 @@ var _ = Describe("Library Service", func() {
// Wait a bit to ensure no scan was triggered
Consistently(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "100ms", "10ms").Should(Equal(0))
})
@@ -674,7 +676,7 @@ var _ = Describe("Library Service", func() {
// Ensure no scan was triggered since creation failed
Consistently(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "100ms", "10ms").Should(Equal(0))
})
@@ -691,7 +693,7 @@ var _ = Describe("Library Service", func() {
// Ensure no scan was triggered since update failed
Consistently(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "100ms", "10ms").Should(Equal(0))
})
@@ -707,11 +709,12 @@ var _ = Describe("Library Service", func() {
// Wait briefly for the goroutine to complete
Eventually(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "1s", "10ms").Should(Equal(1))
// Verify scan was called with correct parameters
Expect(scanner.ScanCalls[0].FullScan).To(BeFalse()) // Should be quick scan
calls := scanner.GetScanAllCalls()
Expect(calls[0].FullScan).To(BeFalse()) // Should be quick scan
})
It("does not trigger scan when library deletion fails", func() {
@@ -721,7 +724,7 @@ var _ = Describe("Library Service", func() {
// Ensure no scan was triggered since deletion failed
Consistently(func() int {
return scanner.len()
return scanner.GetScanAllCallCount()
}, "100ms", "10ms").Should(Equal(0))
})
@@ -868,31 +871,6 @@ var _ = Describe("Library Service", func() {
})
})
// mockScanner provides a simple mock implementation of core.Scanner for testing
type mockScanner struct {
ScanCalls []ScanCall
mu sync.RWMutex
}
type ScanCall struct {
FullScan bool
}
func (m *mockScanner) ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.ScanCalls = append(m.ScanCalls, ScanCall{
FullScan: fullScan,
})
return []string{}, nil
}
func (m *mockScanner) len() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.ScanCalls)
}
// mockWatcherManager provides a simple mock implementation of core.Watcher for testing
type mockWatcherManager struct {
StartedWatchers []model.Library

core/maintenance.go Normal file
View File

@@ -0,0 +1,226 @@
package core
import (
"context"
"fmt"
"slices"
"sync"
"time"
"github.com/Masterminds/squirrel"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/utils/slice"
)
type Maintenance interface {
// DeleteMissingFiles deletes specific missing files by their IDs
DeleteMissingFiles(ctx context.Context, ids []string) error
// DeleteAllMissingFiles deletes all files marked as missing
DeleteAllMissingFiles(ctx context.Context) error
}
type maintenanceService struct {
ds model.DataStore
wg sync.WaitGroup
}
func NewMaintenance(ds model.DataStore) Maintenance {
return &maintenanceService{
ds: ds,
}
}
func (s *maintenanceService) DeleteMissingFiles(ctx context.Context, ids []string) error {
return s.deleteMissing(ctx, ids)
}
func (s *maintenanceService) DeleteAllMissingFiles(ctx context.Context) error {
return s.deleteMissing(ctx, nil)
}
// deleteMissing handles the deletion of missing files and triggers necessary cleanup operations
func (s *maintenanceService) deleteMissing(ctx context.Context, ids []string) error {
// Track affected album IDs before deletion for refresh
affectedAlbumIDs, err := s.getAffectedAlbumIDs(ctx, ids)
if err != nil {
log.Warn(ctx, "Error tracking affected albums for refresh", err)
// Don't fail the operation, just log the warning
}
// Delete missing files within a transaction
err = s.ds.WithTx(func(tx model.DataStore) error {
if len(ids) == 0 {
_, err := tx.MediaFile(ctx).DeleteAllMissing()
return err
}
return tx.MediaFile(ctx).DeleteMissing(ids)
})
if err != nil {
log.Error(ctx, "Error deleting missing tracks from DB", "ids", ids, err)
return err
}
// Run garbage collection to clean up orphaned records
if err := s.ds.GC(ctx); err != nil {
log.Error(ctx, "Error running GC after deleting missing tracks", err)
return err
}
// Refresh statistics in background
s.refreshStatsAsync(ctx, affectedAlbumIDs)
return nil
}
// refreshAlbums recalculates album attributes (size, duration, song count, etc.) from media files.
// It uses batch queries to minimize database round-trips for efficiency.
func (s *maintenanceService) refreshAlbums(ctx context.Context, albumIDs []string) error {
if len(albumIDs) == 0 {
return nil
}
log.Debug(ctx, "Refreshing albums", "count", len(albumIDs))
// Process in chunks to avoid query size limits
const chunkSize = 100
for chunk := range slice.CollectChunks(slices.Values(albumIDs), chunkSize) {
if err := s.refreshAlbumChunk(ctx, chunk); err != nil {
return fmt.Errorf("refreshing album chunk: %w", err)
}
}
log.Debug(ctx, "Successfully refreshed albums", "count", len(albumIDs))
return nil
}
// refreshAlbumChunk processes a single chunk of album IDs
func (s *maintenanceService) refreshAlbumChunk(ctx context.Context, albumIDs []string) error {
albumRepo := s.ds.Album(ctx)
mfRepo := s.ds.MediaFile(ctx)
// Batch load existing albums
albums, err := albumRepo.GetAll(model.QueryOptions{
Filters: squirrel.Eq{"album.id": albumIDs},
})
if err != nil {
return fmt.Errorf("loading albums: %w", err)
}
// Create a map for quick lookup
albumMap := make(map[string]*model.Album, len(albums))
for i := range albums {
albumMap[albums[i].ID] = &albums[i]
}
// Batch load all media files for these albums
mediaFiles, err := mfRepo.GetAll(model.QueryOptions{
Filters: squirrel.Eq{"album_id": albumIDs},
Sort: "album_id, path",
})
if err != nil {
return fmt.Errorf("loading media files: %w", err)
}
// Group media files by album ID
filesByAlbum := make(map[string]model.MediaFiles)
for i := range mediaFiles {
albumID := mediaFiles[i].AlbumID
filesByAlbum[albumID] = append(filesByAlbum[albumID], mediaFiles[i])
}
// Recalculate each album from its media files
for albumID, oldAlbum := range albumMap {
mfs, hasTracks := filesByAlbum[albumID]
if !hasTracks {
// Album has no tracks anymore, skip (will be cleaned up by GC)
log.Debug(ctx, "Skipping album with no tracks", "albumID", albumID)
continue
}
// Recalculate album from media files
newAlbum := mfs.ToAlbum()
// Only update if something changed (avoid unnecessary writes)
if !oldAlbum.Equals(newAlbum) {
// Keep the original creation time, but bump UpdatedAt to reflect the refresh
newAlbum.UpdatedAt = time.Now()
newAlbum.CreatedAt = oldAlbum.CreatedAt
if err := albumRepo.Put(&newAlbum); err != nil {
log.Error(ctx, "Error updating album during refresh", "albumID", albumID, err)
// Continue with other albums instead of failing entirely
continue
}
log.Trace(ctx, "Refreshed album", "albumID", albumID, "name", newAlbum.Name)
}
}
return nil
}
// getAffectedAlbumIDs returns distinct album IDs from missing media files
func (s *maintenanceService) getAffectedAlbumIDs(ctx context.Context, ids []string) ([]string, error) {
var filters squirrel.Sqlizer = squirrel.Eq{"missing": true}
if len(ids) > 0 {
filters = squirrel.And{
squirrel.Eq{"missing": true},
squirrel.Eq{"media_file.id": ids},
}
}
mfs, err := s.ds.MediaFile(ctx).GetAll(model.QueryOptions{
Filters: filters,
})
if err != nil {
return nil, err
}
// Extract unique album IDs
albumIDMap := make(map[string]struct{}, len(mfs))
for _, mf := range mfs {
if mf.AlbumID != "" {
albumIDMap[mf.AlbumID] = struct{}{}
}
}
albumIDs := make([]string, 0, len(albumIDMap))
for id := range albumIDMap {
albumIDs = append(albumIDs, id)
}
return albumIDs, nil
}
// refreshStatsAsync refreshes artist and album statistics in a background goroutine
func (s *maintenanceService) refreshStatsAsync(ctx context.Context, affectedAlbumIDs []string) {
// Refresh artist stats in background
s.wg.Add(1)
go func() {
defer s.wg.Done()
bgCtx := request.AddValues(context.Background(), ctx)
if _, err := s.ds.Artist(bgCtx).RefreshStats(true); err != nil {
log.Error(bgCtx, "Error refreshing artist stats after deleting missing files", err)
} else {
log.Debug(bgCtx, "Successfully refreshed artist stats after deleting missing files")
}
// Refresh album stats in background if we have affected albums
if len(affectedAlbumIDs) > 0 {
if err := s.refreshAlbums(bgCtx, affectedAlbumIDs); err != nil {
log.Error(bgCtx, "Error refreshing album stats after deleting missing files", err)
} else {
log.Debug(bgCtx, "Successfully refreshed album stats after deleting missing files", "count", len(affectedAlbumIDs))
}
}
}()
}
// wait waits for all background goroutines to complete.
// WARNING: This method is ONLY for testing. Never call it in production code.
// Calling wait() in production will block until ALL background operations complete
// and may cause race conditions with new operations starting.
func (s *maintenanceService) wait() {
s.wg.Wait()
}
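The chunked refresh in refreshAlbums above relies on a project helper; the same batching idea can be sketched with the standard library (Go 1.23+) and made-up IDs:

package main

import (
    "fmt"
    "slices"
)

func main() {
    // Seven album IDs with a chunk size of 3 yield batches of 3, 3 and 1,
    // keeping each "WHERE album.id IN (...)" query to a bounded size.
    ids := []string{"a1", "a2", "a3", "a4", "a5", "a6", "a7"}
    for chunk := range slices.Chunk(ids, 3) {
        fmt.Println(chunk)
    }
}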

core/maintenance_test.go Normal file
View File

@@ -0,0 +1,364 @@
package core
import (
"context"
"errors"
"sync"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/sirupsen/logrus"
)
var _ = Describe("Maintenance", func() {
var ds *tests.MockDataStore
var mfRepo *extendedMediaFileRepo
var service Maintenance
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
ctx = request.WithUser(ctx, model.User{ID: "user1", IsAdmin: true})
ds = createTestDataStore()
mfRepo = ds.MockedMediaFile.(*extendedMediaFileRepo)
service = NewMaintenance(ds)
})
Describe("DeleteMissingFiles", func() {
Context("with specific IDs", func() {
It("deletes specific missing files and runs GC", func() {
// Setup: mock missing files with album IDs
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
{ID: "mf2", AlbumID: "album2", Missing: true},
})
err := service.DeleteMissingFiles(ctx, []string{"mf1", "mf2"})
Expect(err).ToNot(HaveOccurred())
Expect(mfRepo.deleteMissingCalled).To(BeTrue())
Expect(mfRepo.deletedIDs).To(Equal([]string{"mf1", "mf2"}))
Expect(ds.GCCalled).To(BeTrue(), "GC should be called after deletion")
})
It("triggers artist stats refresh and album refresh after deletion", func() {
artistRepo := ds.MockedArtist.(*extendedArtistRepo)
// Setup: mock missing files with albums
albumRepo := ds.MockedAlbum.(*extendedAlbumRepo)
albumRepo.SetData(model.Albums{
{ID: "album1", Name: "Test Album", SongCount: 5},
})
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
{ID: "mf2", AlbumID: "album1", Missing: false, Size: 1000, Duration: 180},
{ID: "mf3", AlbumID: "album1", Missing: false, Size: 2000, Duration: 200},
})
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
Expect(err).ToNot(HaveOccurred())
// Wait for background goroutines to complete
service.(*maintenanceService).wait()
// RefreshStats should be called
Expect(artistRepo.IsRefreshStatsCalled()).To(BeTrue(), "Artist stats should be refreshed")
// Album should be updated with new calculated values
Expect(albumRepo.GetPutCallCount()).To(BeNumerically(">", 0), "Album.Put() should be called to refresh album data")
})
It("returns error if deletion fails", func() {
mfRepo.deleteMissingError = errors.New("delete failed")
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("delete failed"))
})
It("continues even if album tracking fails", func() {
mfRepo.SetError(true)
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
// Should not fail, just log warning
Expect(err).ToNot(HaveOccurred())
Expect(mfRepo.deleteMissingCalled).To(BeTrue())
})
It("returns error if GC fails", func() {
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
})
// Set GC to return error
ds.GCError = errors.New("gc failed")
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("gc failed"))
})
})
Context("album ID extraction", func() {
It("extracts unique album IDs from missing files", func() {
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
{ID: "mf2", AlbumID: "album1", Missing: true},
{ID: "mf3", AlbumID: "album2", Missing: true},
})
err := service.DeleteMissingFiles(ctx, []string{"mf1", "mf2", "mf3"})
Expect(err).ToNot(HaveOccurred())
})
It("skips files without album IDs", func() {
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "", Missing: true},
{ID: "mf2", AlbumID: "album1", Missing: true},
})
err := service.DeleteMissingFiles(ctx, []string{"mf1", "mf2"})
Expect(err).ToNot(HaveOccurred())
})
})
})
Describe("DeleteAllMissingFiles", func() {
It("deletes all missing files and runs GC", func() {
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
{ID: "mf2", AlbumID: "album2", Missing: true},
{ID: "mf3", AlbumID: "album3", Missing: true},
})
err := service.DeleteAllMissingFiles(ctx)
Expect(err).ToNot(HaveOccurred())
Expect(ds.GCCalled).To(BeTrue(), "GC should be called after deletion")
})
It("returns error if deletion fails", func() {
mfRepo.SetError(true)
err := service.DeleteAllMissingFiles(ctx)
Expect(err).To(HaveOccurred())
})
It("handles empty result gracefully", func() {
mfRepo.SetData(model.MediaFiles{})
err := service.DeleteAllMissingFiles(ctx)
Expect(err).ToNot(HaveOccurred())
})
})
Describe("Album refresh logic", func() {
var albumRepo *extendedAlbumRepo
BeforeEach(func() {
albumRepo = ds.MockedAlbum.(*extendedAlbumRepo)
})
Context("when album has no tracks after deletion", func() {
It("skips the album without updating it", func() {
// Setup album with no remaining tracks
albumRepo.SetData(model.Albums{
{ID: "album1", Name: "Empty Album", SongCount: 1},
})
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
})
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
Expect(err).ToNot(HaveOccurred())
// Wait for background goroutines to complete
service.(*maintenanceService).wait()
// Album should NOT be updated because it has no tracks left
Expect(albumRepo.GetPutCallCount()).To(Equal(0), "Album with no tracks should not be updated")
})
})
Context("when Put fails for one album", func() {
It("continues processing other albums", func() {
albumRepo.SetData(model.Albums{
{ID: "album1", Name: "Album 1"},
{ID: "album2", Name: "Album 2"},
})
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
{ID: "mf2", AlbumID: "album1", Missing: false, Size: 1000, Duration: 180},
{ID: "mf3", AlbumID: "album2", Missing: true},
{ID: "mf4", AlbumID: "album2", Missing: false, Size: 2000, Duration: 200},
})
// Make Put fail on first call but succeed on subsequent calls
albumRepo.putError = errors.New("put failed")
albumRepo.failOnce = true
err := service.DeleteMissingFiles(ctx, []string{"mf1", "mf3"})
// Should not fail even if one album's Put fails
Expect(err).ToNot(HaveOccurred())
// Wait for background goroutines to complete
service.(*maintenanceService).wait()
// Put should have been called multiple times
Expect(albumRepo.GetPutCallCount()).To(BeNumerically(">", 0), "Put should be attempted")
})
})
Context("when media file loading fails", func() {
It("logs warning but continues when tracking affected albums fails", func() {
// Set up log capturing
hook, cleanup := tests.LogHook()
defer cleanup()
albumRepo.SetData(model.Albums{
{ID: "album1", Name: "Album 1"},
})
mfRepo.SetData(model.MediaFiles{
{ID: "mf1", AlbumID: "album1", Missing: true},
})
// Make GetAll fail when loading media files
mfRepo.SetError(true)
err := service.DeleteMissingFiles(ctx, []string{"mf1"})
// Deletion should succeed despite the tracking error
Expect(err).ToNot(HaveOccurred())
Expect(mfRepo.deleteMissingCalled).To(BeTrue())
// Verify the warning was logged
Expect(hook.LastEntry()).ToNot(BeNil())
Expect(hook.LastEntry().Level).To(Equal(logrus.WarnLevel))
Expect(hook.LastEntry().Message).To(Equal("Error tracking affected albums for refresh"))
})
})
})
})
// Test helper to create a mock DataStore with controllable behavior
func createTestDataStore() *tests.MockDataStore {
ds := &tests.MockDataStore{}
// Create extended album repo with Put tracking
albumRepo := &extendedAlbumRepo{
MockAlbumRepo: tests.CreateMockAlbumRepo(),
}
ds.MockedAlbum = albumRepo
// Create extended artist repo with RefreshStats tracking
artistRepo := &extendedArtistRepo{
MockArtistRepo: tests.CreateMockArtistRepo(),
}
ds.MockedArtist = artistRepo
// Create extended media file repo with DeleteMissing support
mfRepo := &extendedMediaFileRepo{
MockMediaFileRepo: tests.CreateMockMediaFileRepo(),
}
ds.MockedMediaFile = mfRepo
return ds
}
// Extension of MockMediaFileRepo to add DeleteMissing method
type extendedMediaFileRepo struct {
*tests.MockMediaFileRepo
deleteMissingCalled bool
deletedIDs []string
deleteMissingError error
}
func (m *extendedMediaFileRepo) DeleteMissing(ids []string) error {
m.deleteMissingCalled = true
m.deletedIDs = ids
if m.deleteMissingError != nil {
return m.deleteMissingError
}
// Actually delete from the mock data
for _, id := range ids {
delete(m.Data, id)
}
return nil
}
// Extension of MockAlbumRepo to track Put calls
type extendedAlbumRepo struct {
*tests.MockAlbumRepo
mu sync.RWMutex
putCallCount int
lastPutData *model.Album
putError error
failOnce bool
}
func (m *extendedAlbumRepo) Put(album *model.Album) error {
m.mu.Lock()
m.putCallCount++
m.lastPutData = album
// Handle failOnce behavior
var err error
if m.putError != nil {
if m.failOnce {
err = m.putError
m.putError = nil // Clear error after first failure
m.mu.Unlock()
return err
}
err = m.putError
m.mu.Unlock()
return err
}
m.mu.Unlock()
return m.MockAlbumRepo.Put(album)
}
func (m *extendedAlbumRepo) GetPutCallCount() int {
m.mu.RLock()
defer m.mu.RUnlock()
return m.putCallCount
}
// Extension of MockArtistRepo to track RefreshStats calls
type extendedArtistRepo struct {
*tests.MockArtistRepo
mu sync.RWMutex
refreshStatsCalled bool
refreshStatsError error
}
func (m *extendedArtistRepo) RefreshStats(allArtists bool) (int64, error) {
m.mu.Lock()
m.refreshStatsCalled = true
err := m.refreshStatsError
m.mu.Unlock()
if err != nil {
return 0, err
}
return m.MockArtistRepo.RefreshStats(allArtists)
}
func (m *extendedArtistRepo) IsRefreshStatsCalled() bool {
m.mu.RLock()
defer m.mu.RUnlock()
return m.refreshStatsCalled
}

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"math"
"net/http"
"os"
"path/filepath"
"runtime"
"runtime/debug"
@@ -21,6 +22,7 @@ import (
"github.com/navidrome/navidrome/core/metrics/insights"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/plugins/schema"
"github.com/navidrome/navidrome/utils/singleton"
)
@@ -63,9 +65,16 @@ func GetInstance(ds model.DataStore, pluginLoader PluginLoader) Insights {
}
func (c *insightsCollector) Run(ctx context.Context) {
ctx = auth.WithAdminUser(ctx, c.ds)
for {
c.sendInsights(ctx)
// Refresh admin context on each iteration to handle cases where
// admin user wasn't available on previous runs
insightsCtx := auth.WithAdminUser(ctx, c.ds)
u, _ := request.UserFrom(insightsCtx)
if !u.IsAdmin {
log.Trace(insightsCtx, "No admin user available, skipping insights collection")
} else {
c.sendInsights(insightsCtx)
}
select {
case <-time.After(consts.InsightsUpdateInterval):
continue
@@ -160,6 +169,13 @@ var staticData = sync.OnceValue(func() insights.Data {
data.Build.Settings, data.Build.GoVersion = buildInfo()
data.OS.Containerized = consts.InContainer
// Install info
packageFilename := filepath.Join(conf.Server.DataFolder, ".package")
packageFileData, err := os.ReadFile(packageFilename)
if err == nil {
data.OS.Package = string(packageFileData)
}
// OS info
data.OS.Type = runtime.GOOS
data.OS.Arch = runtime.GOARCH

View File

@@ -16,6 +16,7 @@ type Data struct {
Containerized bool `json:"containerized"`
Arch string `json:"arch"`
NumCPU int `json:"numCPU"`
Package string `json:"package,omitempty"`
} `json:"os"`
Mem struct {
Alloc uint64 `json:"alloc"`

View File

@@ -13,6 +13,7 @@ import (
"github.com/navidrome/navidrome/model"
. "github.com/navidrome/navidrome/utils/gg"
"github.com/navidrome/navidrome/utils/slice"
"github.com/navidrome/navidrome/utils/str"
)
type Share interface {
@@ -119,9 +120,8 @@ func (r *shareRepositoryWrapper) Save(entity interface{}) (string, error) {
log.Error(r.ctx, "Invalid Resource ID", "id", firstId)
return "", model.ErrNotFound
}
if len(s.Contents) > 30 {
s.Contents = s.Contents[:26] + "..."
}
s.Contents = str.TruncateRunes(s.Contents, 30, "...")
id, err = r.Persistable.Save(s)
return id, err
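The change above replaces byte-index slicing with str.TruncateRunes, so multi-byte (e.g. CJK) titles are cut on rune boundaries instead of mid-character. A minimal sketch of what such a helper does, assuming only the (s, maxLen, suffix) call shape used here and the behavior the tests that follow verify (the result, suffix included, stays within the rune limit):

package strsketch

// truncateRunes is an illustrative stand-in for str.TruncateRunes: it keeps the
// result, including the suffix, within maxLen runes and leaves shorter strings untouched.
func truncateRunes(s string, maxLen int, suffix string) string {
    r := []rune(s)
    if len(r) <= maxLen {
        return s
    }
    // Reserve room for the suffix so the total stays within the rune limit.
    cut := maxLen - len([]rune(suffix))
    if cut < 0 {
        cut = 0
    }
    return string(r[:cut]) + suffix
}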

View File

@@ -38,6 +38,38 @@ var _ = Describe("Share", func() {
Expect(id).ToNot(BeEmpty())
Expect(entity.ID).To(Equal(id))
})
It("does not truncate ASCII labels shorter than 30 characters", func() {
_ = ds.MediaFile(ctx).Put(&model.MediaFile{ID: "456", Title: "Example Media File"})
entity := &model.Share{Description: "test", ResourceIDs: "456"}
_, err := repo.Save(entity)
Expect(err).ToNot(HaveOccurred())
Expect(entity.Contents).To(Equal("Example Media File"))
})
It("truncates ASCII labels longer than 30 characters", func() {
_ = ds.MediaFile(ctx).Put(&model.MediaFile{ID: "789", Title: "Example Media File But The Title Is Really Long For Testing Purposes"})
entity := &model.Share{Description: "test", ResourceIDs: "789"}
_, err := repo.Save(entity)
Expect(err).ToNot(HaveOccurred())
Expect(entity.Contents).To(Equal("Example Media File But The ..."))
})
It("does not truncate CJK labels shorter than 30 runes", func() {
_ = ds.MediaFile(ctx).Put(&model.MediaFile{ID: "456", Title: "青春コンプレックス"})
entity := &model.Share{Description: "test", ResourceIDs: "456"}
_, err := repo.Save(entity)
Expect(err).ToNot(HaveOccurred())
Expect(entity.Contents).To(Equal("青春コンプレックス"))
})
It("truncates CJK labels longer than 30 runes", func() {
_ = ds.MediaFile(ctx).Put(&model.MediaFile{ID: "789", Title: "私の中の幻想的世界観及びその顕現を想起させたある現実での出来事に関する一考察"})
entity := &model.Share{Description: "test", ResourceIDs: "789"}
_, err := repo.Save(entity)
Expect(err).ToNot(HaveOccurred())
Expect(entity.Contents).To(Equal("私の中の幻想的世界観及びその顕現を想起させたある現実で..."))
})
})
Describe("Update", func() {

View File

@@ -18,6 +18,7 @@ var Set = wire.NewSet(
NewShare,
NewPlaylists,
NewLibrary,
NewMaintenance,
agents.GetAgents,
external.NewProvider,
wire.Bind(new(external.Agents), new(*agents.Agents)),

View File

@@ -45,10 +45,12 @@ func Db() *sql.DB {
if err != nil {
log.Fatal("Error opening database", err)
}
_, err = db.Exec("PRAGMA optimize=0x10002")
if err != nil {
log.Error("Error applying PRAGMA optimize", err)
return nil
if conf.Server.DevOptimizeDB {
_, err = db.Exec("PRAGMA optimize=0x10002")
if err != nil {
log.Error("Error applying PRAGMA optimize", err)
return nil
}
}
return db
})
@@ -99,7 +101,7 @@ func Init(ctx context.Context) func() {
log.Fatal(ctx, "Failed to apply new migrations", err)
}
if hasSchemaChanges {
if hasSchemaChanges && conf.Server.DevOptimizeDB {
log.Debug(ctx, "Applying PRAGMA optimize after schema changes")
_, err = db.ExecContext(ctx, "PRAGMA optimize")
if err != nil {
@@ -114,6 +116,9 @@ func Init(ctx context.Context) func() {
// Optimize runs PRAGMA optimize on each connection in the pool
func Optimize(ctx context.Context) {
if !conf.Server.DevOptimizeDB {
return
}
numConns := Db().Stats().OpenConnections
if numConns == 0 {
log.Debug(ctx, "No open connections to optimize")

View File

@@ -0,0 +1,9 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE playqueue ADD COLUMN position_int integer;
UPDATE playqueue SET position_int = CAST(position as INTEGER) ;
ALTER TABLE playqueue DROP COLUMN position;
ALTER TABLE playqueue RENAME COLUMN position_int TO position;
-- +goose StatementEnd
-- +goose Down

View File

@@ -0,0 +1,7 @@
-- +goose Up
-- +goose StatementBegin
ALTER TABLE annotation ADD COLUMN rated_at datetime;
-- +goose StatementEnd
-- +goose Down

View File

@@ -7,6 +7,7 @@ import (
"strings"
"sync"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/consts"
)
@@ -21,11 +22,13 @@ func notice(tx *sql.Tx, msg string) {
// Call this in migrations that require a full rescan
func forceFullRescan(tx *sql.Tx) error {
// If a full scan is required, the query optimizer statistics are most likely outdated, so we run `analyze`.
_, err := tx.Exec(`ANALYZE;`)
if err != nil {
return err
if conf.Server.DevOptimizeDB {
_, err := tx.Exec(`ANALYZE;`)
if err != nil {
return err
}
}
_, err = tx.Exec(fmt.Sprintf(`
_, err := tx.Exec(fmt.Sprintf(`
INSERT OR REPLACE into property (id, value) values ('%s', '1');
`, consts.FullScanAfterMigrationFlagKey))
return err

go.mod (32 changed lines)
View File

@@ -1,8 +1,8 @@
module github.com/navidrome/navidrome
go 1.25.3
go 1.25
// Fork to fix https://github.com/navidrome/navidrome/pull/3254
// Fork to fix https://github.com/navidrome/navidrome/issues/3254
replace github.com/dhowden/tag v0.0.0-20240417053706-3d75831295e8 => github.com/deluan/tag v0.0.0-20241002021117-dfe5e6ea396d
require (
@@ -39,11 +39,12 @@ require (
github.com/knqyf263/go-plugin v0.9.0
github.com/kr/pretty v0.3.1
github.com/lestrrat-go/jwx/v2 v2.1.6
github.com/maruel/natural v1.2.1
github.com/matoous/go-nanoid/v2 v2.1.0
github.com/mattn/go-sqlite3 v1.14.32
github.com/microcosm-cc/bluemonday v1.0.27
github.com/mileusna/useragent v1.3.5
github.com/onsi/ginkgo/v2 v2.27.1
github.com/onsi/ginkgo/v2 v2.27.2
github.com/onsi/gomega v1.38.2
github.com/pelletier/go-toml/v2 v2.2.4
github.com/pocketbase/dbx v1.11.0
@@ -56,16 +57,16 @@ require (
github.com/spf13/cobra v1.10.1
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/tetratelabs/wazero v1.9.0
github.com/tetratelabs/wazero v1.10.1
github.com/unrolled/secure v1.17.0
github.com/xrash/smetrics v0.0.0-20250705151800-55b8f293f342
go.uber.org/goleak v1.3.0
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546
golang.org/x/image v0.32.0
golang.org/x/net v0.46.0
golang.org/x/sync v0.17.0
golang.org/x/sys v0.37.0
golang.org/x/text v0.30.0
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6
golang.org/x/image v0.33.0
golang.org/x/net v0.47.0
golang.org/x/sync v0.18.0
golang.org/x/sys v0.38.0
golang.org/x/text v0.31.0
golang.org/x/time v0.14.0
google.golang.org/protobuf v1.36.10
gopkg.in/yaml.v3 v3.0.1
@@ -89,7 +90,7 @@ require (
github.com/goccy/go-json v0.10.5 // indirect
github.com/goccy/go-yaml v1.18.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect
github.com/google/pprof v0.0.0-20251114195745-4902fdda35c8 // indirect
github.com/google/subcommands v1.2.0 // indirect
github.com/gorilla/css v1.0.1 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
@@ -124,14 +125,13 @@ require (
github.com/stretchr/objx v0.5.2 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/zeebo/xxh3 v1.0.2 // indirect
go.uber.org/automaxprocs v1.6.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/mod v0.29.0 // indirect
golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8 // indirect
golang.org/x/tools v0.38.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/mod v0.30.0 // indirect
golang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54 // indirect
golang.org/x/tools v0.39.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce // indirect
)

go.sum (60 changed lines)
View File

@@ -99,8 +99,8 @@ github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-pipeline v0.0.0-20230411140531-6cbedfc1d3fc h1:hd+uUVsB1vdxohPneMrhGH2YfQuH5hRIK9u4/XCeUtw=
github.com/google/go-pipeline v0.0.0-20230411140531-6cbedfc1d3fc/go.mod h1:SL66SJVysrh7YbDCP9tH30b8a9o/N2HeiQNUm85EKhc=
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d h1:KJIErDwbSHjnp/SGzE5ed8Aol7JsKiI5X7yWKAtzhM0=
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/google/pprof v0.0.0-20251114195745-4902fdda35c8 h1:3DsUAV+VNEQa2CUVLxCY3f87278uWfIDhJnbdvDjvmE=
github.com/google/pprof v0.0.0-20251114195745-4902fdda35c8/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/google/subcommands v1.2.0 h1:vWQspBTo2nEqTUFita5/KeEWlUL8kQObDFbub/EN9oE=
github.com/google/subcommands v1.2.0/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -162,8 +162,8 @@ github.com/lestrrat-go/jwx/v2 v2.1.6 h1:hxM1gfDILk/l5ylers6BX/Eq1m/pnxe9NBwW6lVf
github.com/lestrrat-go/jwx/v2 v2.1.6/go.mod h1:Y722kU5r/8mV7fYDifjug0r8FK8mZdw0K0GpJw/l8pU=
github.com/lestrrat-go/option v1.0.1 h1:oAzP2fvZGQKWkvHa1/SAcFolBEca1oN+mQ7eooNBEYU=
github.com/lestrrat-go/option v1.0.1/go.mod h1:5ZHFbivi4xwXxhxY9XHDe2FHo6/Z7WWmtT7T5nBBp3I=
github.com/maruel/natural v1.1.1 h1:Hja7XhhmvEFhcByqDoHz9QZbkWey+COd9xWfCfn1ioo=
github.com/maruel/natural v1.1.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg=
github.com/maruel/natural v1.2.1 h1:G/y4pwtTA07lbQsMefvsmEO0VN0NfqpxprxXDM4R/4o=
github.com/maruel/natural v1.2.1/go.mod h1:v+Rfd79xlw1AgVBjbO0BEQmptqb5HvL/k9GRHB7ZKEg=
github.com/matoous/go-nanoid/v2 v2.1.0 h1:P64+dmq21hhWdtvZfEAofnvJULaRR1Yib0+PnU669bE=
github.com/matoous/go-nanoid/v2 v2.1.0/go.mod h1:KlbGNQ+FhrUNIHUxZdL63t7tl4LaPkZNpUULS8H4uVM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
@@ -186,8 +186,8 @@ github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdh
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/ogier/pflag v0.0.1 h1:RW6JSWSu/RkSatfcLtogGfFgpim5p7ARQ10ECk5O750=
github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
github.com/onsi/ginkgo/v2 v2.27.1 h1:0LJC8MpUSQnfnp4n/3W3GdlmJP3ENGF0ZPzjQGLPP7s=
github.com/onsi/ginkgo/v2 v2.27.1/go.mod h1:wmy3vCqiBjirARfVhAqFpYt8uvX0yaFe+GudAqqcCqA=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
@@ -201,8 +201,6 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pocketbase/dbx v1.11.0 h1:LpZezioMfT3K4tLrqA55wWFw1EtH1pM4tzSVa7kgszU=
github.com/pocketbase/dbx v1.11.0/go.mod h1:xXRCIAKTHMgUCyCKZm55pUOdvFziJjQfXaWKhu2vhMs=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/pressly/goose/v3 v3.26.0 h1:KJakav68jdH0WDvoAcj8+n61WqOIaPGgH0bJWS6jpmM=
github.com/pressly/goose/v3 v3.26.0/go.mod h1:4hC1KrritdCxtuFsqgs1R4AU5bWtTAf+cnWvfhf2DNY=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
@@ -267,8 +265,8 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
github.com/tetratelabs/wazero v1.10.1 h1:2DugeJf6VVk58KTPszlNfeeN8AhhpwcZqkJj2wwFuH8=
github.com/tetratelabs/wazero v1.10.1/go.mod h1:DRm5twOQ5Gr1AoEdSi0CLjDQF1J9ZAuyqFIjl1KKfQU=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
@@ -286,8 +284,6 @@ github.com/zeebo/assert v1.3.0 h1:g7C04CbJuIDKNPFHmsk4hwZDO5O+kntRxzaUoNXj+IQ=
github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
@@ -302,20 +298,20 @@ golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliY
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 h1:zfMcR1Cs4KNuomFFgGefv5N0czO2XZpUbxGUy8i8ug0=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0=
golang.org/x/image v0.0.0-20191009234506-e7c1f5e7dbb8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.32.0 h1:6lZQWq75h7L5IWNk0r+SCpUJ6tUVd3v4ZHnbRKLkUDQ=
golang.org/x/image v0.32.0/go.mod h1:/R37rrQmKXtO6tYXAjtDLwQgFLHmhW+V6ayXlxzP2Pc=
golang.org/x/image v0.33.0 h1:LXRZRnv1+zGd5XBUVRFmYEphyyKJjQjCRiOuAP3sZfQ=
golang.org/x/image v0.33.0/go.mod h1:DD3OsTYT9chzuzTQt+zMcOlBHgfoKQb1gry8p76Y1sc=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -327,8 +323,8 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -336,8 +332,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -354,11 +350,11 @@ golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8 h1:LvzTn0GQhWuvKH/kVRS3R3bVAsdQWI7hvfLHGgh9+lU=
golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8/go.mod h1:Pi4ztBfryZoJEkyFTI5/Ocsu2jXyDr6iSdgJiYE/uwE=
golang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54 h1:E2/AqCUMZGgd73TQkxUMcMla25GB9i/5HOdLr+uH7Vo=
golang.org/x/telemetry v0.0.0-20251111182119-bc8e575c7b54/go.mod h1:hKdjCMrbv9skySur+Nek8Hd0uJ0GuxJIoIX2payrIdQ=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -377,8 +373,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -388,8 +384,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=

View File

@@ -11,6 +11,7 @@ import (
"runtime"
"sort"
"strings"
"sync"
"time"
"github.com/sirupsen/logrus"
@@ -70,6 +71,7 @@ type levelPath struct {
var (
currentLevel Level
loggerMu sync.RWMutex
defaultLogger = logrus.New()
logSourceLine = false
rootPath string
@@ -78,8 +80,10 @@ var (
// SetLevel sets the global log level used by the simple logger.
func SetLevel(l Level) {
loggerMu.Lock()
currentLevel = l
defaultLogger.Level = logrus.TraceLevel
loggerMu.Unlock()
logrus.SetLevel(logrus.Level(l))
}
@@ -110,6 +114,8 @@ func levelFromString(l string) Level {
// SetLogLevels sets the log levels for specific paths in the codebase.
func SetLogLevels(levels map[string]string) {
loggerMu.Lock()
defer loggerMu.Unlock()
logLevels = nil
for k, v := range levels {
logLevels = append(logLevels, levelPath{path: k, level: levelFromString(v)})
@@ -125,6 +131,8 @@ func SetLogSourceLine(enabled bool) {
func SetRedacting(enabled bool) {
if enabled {
loggerMu.Lock()
defer loggerMu.Unlock()
defaultLogger.AddHook(redacted)
}
}
@@ -133,6 +141,8 @@ func SetOutput(w io.Writer) {
if runtime.GOOS == "windows" {
w = CRLFWriter(w)
}
loggerMu.Lock()
defer loggerMu.Unlock()
defaultLogger.SetOutput(w)
}
@@ -158,10 +168,14 @@ func NewContext(ctx context.Context, keyValuePairs ...interface{}) context.Conte
}
func SetDefaultLogger(l *logrus.Logger) {
loggerMu.Lock()
defer loggerMu.Unlock()
defaultLogger = l
}
func CurrentLevel() Level {
loggerMu.RLock()
defer loggerMu.RUnlock()
return currentLevel
}
@@ -204,14 +218,21 @@ func log(level Level, args ...interface{}) {
}
func Writer() io.Writer {
loggerMu.RLock()
defer loggerMu.RUnlock()
return defaultLogger.Writer()
}
func shouldLog(requiredLevel Level, skip int) bool {
if currentLevel >= requiredLevel {
loggerMu.RLock()
level := currentLevel
levels := logLevels
loggerMu.RUnlock()
if level >= requiredLevel {
return true
}
if len(logLevels) == 0 {
if len(levels) == 0 {
return false
}
@@ -221,7 +242,7 @@ func shouldLog(requiredLevel Level, skip int) bool {
}
file = strings.TrimPrefix(file, rootPath)
for _, lp := range logLevels {
for _, lp := range levels {
if strings.HasPrefix(file, lp.path) {
return lp.level >= requiredLevel
}
@@ -314,6 +335,8 @@ func extractLogger(ctx interface{}) (*logrus.Entry, error) {
func createNewLogger() *logrus.Entry {
//logrus.SetFormatter(&logrus.TextFormatter{ForceColors: true, DisableTimestamp: false, FullTimestamp: true})
//l.Formatter = &logrus.TextFormatter{ForceColors: true, DisableTimestamp: false, FullTimestamp: true}
loggerMu.RLock()
defer loggerMu.RUnlock()
logger := logrus.NewEntry(defaultLogger)
return logger
}
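The shouldLog refactor above takes the read lock only long enough to copy currentLevel and logLevels into locals, then evaluates them with the lock already released. A minimal standalone sketch of that snapshot-under-RLock pattern (package and names below are illustrative, not part of the logger):

package logsketch

import "sync"

var (
    mu        sync.RWMutex
    current   int                  // global verbosity level
    overrides = map[string]int{}   // per-path level overrides
)

// allowed snapshots the shared settings under a read lock and evaluates them
// after releasing it, so the lock is never held during the actual check.
func allowed(path string, required int) bool {
    mu.RLock()
    level := current
    override, ok := overrides[path]
    mu.RUnlock()

    if ok {
        return override >= required
    }
    return level >= required
}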

View File

@@ -6,6 +6,7 @@ type Annotations struct {
PlayCount int64 `structs:"play_count" json:"playCount,omitempty"`
PlayDate *time.Time `structs:"play_date" json:"playDate,omitempty" `
Rating int `structs:"rating" json:"rating,omitempty" `
RatedAt *time.Time `structs:"rated_at" json:"ratedAt,omitempty" `
Starred bool `structs:"starred" json:"starred,omitempty" `
StarredAt *time.Time `structs:"starred_at" json:"starredAt,omitempty"`
}

View File

@@ -44,6 +44,7 @@ var fieldMap = map[string]*mappedField{
"loved": {field: "COALESCE(annotation.starred, false)"},
"dateloved": {field: "annotation.starred_at"},
"lastplayed": {field: "annotation.play_date"},
"daterated": {field: "annotation.rated_at"},
"playcount": {field: "COALESCE(annotation.play_count, 0)"},
"rating": {field: "COALESCE(annotation.rating, 0)"},
"mbz_album_id": {field: "media_file.mbz_album_id"},

View File

@@ -43,5 +43,5 @@ type DataStore interface {
WithTx(block func(tx DataStore) error, scope ...string) error
WithTxImmediate(block func(tx DataStore) error, scope ...string) error
GC(ctx context.Context) error
GC(ctx context.Context, libraryIDs ...int) error
}

View File

@@ -85,7 +85,7 @@ type FolderRepository interface {
GetByPath(lib Library, path string) (*Folder, error)
GetAll(...QueryOptions) ([]Folder, error)
CountAll(...QueryOptions) (int64, error)
GetLastUpdates(lib Library) (map[string]FolderUpdateInfo, error)
GetFolderUpdateInfo(lib Library, targetPaths ...string) (map[string]FolderUpdateInfo, error)
Put(*Folder) error
MarkMissing(missing bool, ids ...string) error
GetTouchedWithPlaylists() (FolderCursor, error)

View File

@@ -23,7 +23,7 @@ func legacyTrackID(mf model.MediaFile, prependLibId bool) string {
}
func legacyAlbumID(mf model.MediaFile, md Metadata, prependLibId bool) string {
releaseDate := legacyReleaseDate(md)
_, _, releaseDate := md.mapDates()
albumPath := strings.ToLower(fmt.Sprintf("%s\\%s", legacyMapAlbumArtistName(md), legacyMapAlbumName(md)))
if !conf.Server.Scanner.GroupAlbumReleases {
if len(releaseDate) != 0 {
@@ -55,9 +55,3 @@ func legacyMapAlbumName(md Metadata) string {
consts.UnknownAlbum,
)
}
// Keep the TaggedLikePicard logic for backwards compatibility
func legacyReleaseDate(md Metadata) string {
_, _, releaseDate := md.mapDates()
return string(releaseDate)
}

View File

@@ -1,30 +0,0 @@
package metadata
import (
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("legacyReleaseDate", func() {
DescribeTable("legacyReleaseDate",
func(recordingDate, originalDate, releaseDate, expected string) {
md := New("", Info{
Tags: map[string][]string{
"DATE": {recordingDate},
"ORIGINALDATE": {originalDate},
"RELEASEDATE": {releaseDate},
},
})
result := legacyReleaseDate(md)
Expect(result).To(Equal(expected))
},
Entry("regular mapping", "2020-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping", "2020-05-15", "2019-02-10", "", "2020-05-15"),
Entry("legacy mapping, originalYear < year", "2018-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, originalYear empty", "2020-05-15", "", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, releaseYear", "2020-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, same dates", "2020-05-15", "2020-05-15", "", "2020-05-15"),
)
})

View File

@@ -75,6 +75,23 @@ var _ = Describe("ToMediaFile", func() {
Expect(mf.OriginalYear).To(Equal(1966))
Expect(mf.ReleaseYear).To(Equal(2014))
})
DescribeTable("legacyReleaseDate (TaggedLikePicard old behavior)",
func(recordingDate, originalDate, releaseDate, expected string) {
mf := toMediaFile(model.RawTags{
"DATE": {recordingDate},
"ORIGINALDATE": {originalDate},
"RELEASEDATE": {releaseDate},
})
Expect(mf.ReleaseDate).To(Equal(expected))
},
Entry("regular mapping", "2020-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping", "2020-05-15", "2019-02-10", "", "2020-05-15"),
Entry("legacy mapping, originalYear < year", "2018-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, originalYear empty", "2020-05-15", "", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, releaseYear", "2020-05-15", "2019-02-10", "2021-01-01", "2021-01-01"),
Entry("legacy mapping, same dates", "2020-05-15", "2020-05-15", "", "2020-05-15"),
)
})
Describe("Lyrics", func() {

model/scanner.go (new file, 81 lines)
View File

@@ -0,0 +1,81 @@
package model
import (
"context"
"fmt"
"strconv"
"strings"
"time"
)
// ScanTarget represents a specific folder within a library to be scanned.
// NOTE: This struct is used as a map key, so it should only contain comparable types.
type ScanTarget struct {
LibraryID int
FolderPath string // Relative path within the library, or "" for entire library
}
func (st ScanTarget) String() string {
return fmt.Sprintf("%d:%s", st.LibraryID, st.FolderPath)
}
// ScannerStatus holds information about the current scan status
type ScannerStatus struct {
Scanning bool
LastScan time.Time
Count uint32
FolderCount uint32
LastError string
ScanType string
ElapsedTime time.Duration
}
type Scanner interface {
// ScanAll starts a scan of all libraries. This is a blocking operation.
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
// ScanFolders scans specific library/folder pairs, recursing into subdirectories.
// If targets is nil, it scans all libraries. This is a blocking operation.
ScanFolders(ctx context.Context, fullScan bool, targets []ScanTarget) (warnings []string, err error)
Status(context.Context) (*ScannerStatus, error)
}
// ParseTargets parses scan target strings into ScanTarget structs.
// Example: []string{"1:Music/Rock", "2:Classical"}
func ParseTargets(libFolders []string) ([]ScanTarget, error) {
targets := make([]ScanTarget, 0, len(libFolders))
for _, part := range libFolders {
part = strings.TrimSpace(part)
if part == "" {
continue
}
// Split by the first colon
colonIdx := strings.Index(part, ":")
if colonIdx == -1 {
return nil, fmt.Errorf("invalid target format: %q (expected libraryID:folderPath)", part)
}
libIDStr := part[:colonIdx]
folderPath := part[colonIdx+1:]
libID, err := strconv.Atoi(libIDStr)
if err != nil {
return nil, fmt.Errorf("invalid library ID %q: %w", libIDStr, err)
}
if libID <= 0 {
return nil, fmt.Errorf("invalid library ID %q", libIDStr)
}
targets = append(targets, ScanTarget{
LibraryID: libID,
FolderPath: folderPath,
})
}
if len(targets) == 0 {
return nil, fmt.Errorf("no valid targets found")
}
return targets, nil
}
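A sketch of how the pieces in this new file are meant to be wired together, based only on the signatures above; how a model.Scanner instance is obtained is outside this diff, so the surrounding function and package names are illustrative:

package scansketch

import (
    "context"

    "github.com/navidrome/navidrome/log"
    "github.com/navidrome/navidrome/model"
)

// rescanFolders parses "libraryID:folderPath" specs and hands them to the scanner.
func rescanFolders(ctx context.Context, scanner model.Scanner, specs []string) error {
    // e.g. specs = []string{"1:Music/Rock", "2:Classical"}
    targets, err := model.ParseTargets(specs)
    if err != nil {
        return err
    }
    // false = quick scan; pass true to force a full rescan of the targets.
    warnings, err := scanner.ScanFolders(ctx, false, targets)
    for _, w := range warnings {
        log.Warn(ctx, w)
    }
    return err
}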

model/scanner_test.go (new file, 89 lines)
View File

@@ -0,0 +1,89 @@
package model_test
import (
"github.com/navidrome/navidrome/model"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("ParseTargets", func() {
It("parses multiple entries in slice", func() {
targets, err := model.ParseTargets([]string{"1:Music/Rock", "1:Music/Jazz", "2:Classical"})
Expect(err).ToNot(HaveOccurred())
Expect(targets).To(HaveLen(3))
Expect(targets[0].LibraryID).To(Equal(1))
Expect(targets[0].FolderPath).To(Equal("Music/Rock"))
Expect(targets[1].LibraryID).To(Equal(1))
Expect(targets[1].FolderPath).To(Equal("Music/Jazz"))
Expect(targets[2].LibraryID).To(Equal(2))
Expect(targets[2].FolderPath).To(Equal("Classical"))
})
It("handles empty folder paths", func() {
targets, err := model.ParseTargets([]string{"1:", "2:"})
Expect(err).ToNot(HaveOccurred())
Expect(targets).To(HaveLen(2))
Expect(targets[0].FolderPath).To(Equal(""))
Expect(targets[1].FolderPath).To(Equal(""))
})
It("trims whitespace from entries", func() {
targets, err := model.ParseTargets([]string{" 1:Music/Rock", " 2:Classical "})
Expect(err).ToNot(HaveOccurred())
Expect(targets).To(HaveLen(2))
Expect(targets[0].LibraryID).To(Equal(1))
Expect(targets[0].FolderPath).To(Equal("Music/Rock"))
Expect(targets[1].LibraryID).To(Equal(2))
Expect(targets[1].FolderPath).To(Equal("Classical"))
})
It("skips empty strings", func() {
targets, err := model.ParseTargets([]string{"1:Music/Rock", "", "2:Classical"})
Expect(err).ToNot(HaveOccurred())
Expect(targets).To(HaveLen(2))
})
It("handles paths with colons", func() {
targets, err := model.ParseTargets([]string{"1:C:/Music/Rock", "2:/path:with:colons"})
Expect(err).ToNot(HaveOccurred())
Expect(targets).To(HaveLen(2))
Expect(targets[0].FolderPath).To(Equal("C:/Music/Rock"))
Expect(targets[1].FolderPath).To(Equal("/path:with:colons"))
})
It("returns error for invalid format without colon", func() {
_, err := model.ParseTargets([]string{"1Music/Rock"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("invalid target format"))
})
It("returns error for non-numeric library ID", func() {
_, err := model.ParseTargets([]string{"abc:Music/Rock"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("invalid library ID"))
})
It("returns error for negative library ID", func() {
_, err := model.ParseTargets([]string{"-1:Music/Rock"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("invalid library ID"))
})
It("returns error for zero library ID", func() {
_, err := model.ParseTargets([]string{"0:Music/Rock"})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("invalid library ID"))
})
It("returns error for empty input", func() {
_, err := model.ParseTargets([]string{})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("no valid targets found"))
})
It("returns error for all empty strings", func() {
_, err := model.ParseTargets([]string{"", " ", ""})
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("no valid targets found"))
})
})

View File

@@ -106,6 +106,7 @@ func NewAlbumRepository(ctx context.Context, db dbx.Builder) model.AlbumReposito
"random": "random",
"recently_added": recentlyAddedSort(),
"starred_at": "starred, starred_at",
"rated_at": "rating, rated_at",
})
return r
}
@@ -337,8 +338,12 @@ on conflict (user_id, item_id, item_type) do update
return r.executeSQL(query)
}
func (r *albumRepository) purgeEmpty() error {
func (r *albumRepository) purgeEmpty(libraryIDs ...int) error {
del := Delete(r.tableName).Where("id not in (select distinct(album_id) from media_file)")
// If libraryIDs are specified, only purge albums from those libraries
if len(libraryIDs) > 0 {
del = del.Where(Eq{"library_id": libraryIDs})
}
c, err := r.executeSQL(del)
if err != nil {
return fmt.Errorf("purging empty albums: %w", err)

View File

@@ -55,6 +55,7 @@ var _ = Describe("AlbumRepository", func() {
It("returns all records sorted", func() {
Expect(GetAll(model.QueryOptions{Sort: "name"})).To(Equal(model.Albums{
albumAbbeyRoad,
albumMultiDisc,
albumRadioactivity,
albumSgtPeppers,
}))
@@ -64,6 +65,7 @@ var _ = Describe("AlbumRepository", func() {
Expect(GetAll(model.QueryOptions{Sort: "name", Order: "desc"})).To(Equal(model.Albums{
albumSgtPeppers,
albumRadioactivity,
albumMultiDisc,
albumAbbeyRoad,
}))
})

View File

@@ -141,6 +141,7 @@ func NewArtistRepository(ctx context.Context, db dbx.Builder) model.ArtistReposi
r.setSortMappings(map[string]string{
"name": "order_artist_name",
"starred_at": "starred, starred_at",
"rated_at": "rating, rated_at",
"song_count": "stats->>'total'->>'m'",
"album_count": "stats->>'total'->>'a'",
"size": "stats->>'total'->>'s'",

View File

@@ -4,7 +4,10 @@ import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"time"
. "github.com/Masterminds/squirrel"
@@ -91,8 +94,47 @@ func (r folderRepository) CountAll(opt ...model.QueryOptions) (int64, error) {
return r.count(query)
}
func (r folderRepository) GetLastUpdates(lib model.Library) (map[string]model.FolderUpdateInfo, error) {
sq := r.newSelect().Columns("id", "updated_at", "hash").Where(Eq{"library_id": lib.ID, "missing": false})
func (r folderRepository) GetFolderUpdateInfo(lib model.Library, targetPaths ...string) (map[string]model.FolderUpdateInfo, error) {
where := And{
Eq{"library_id": lib.ID},
Eq{"missing": false},
}
// If specific paths are requested, include those folders and all their descendants
if len(targetPaths) > 0 {
// Collect folder IDs for exact target folders and path conditions for descendants
folderIDs := make([]string, 0, len(targetPaths))
pathConditions := make(Or, 0, len(targetPaths)*2)
for _, targetPath := range targetPaths {
if targetPath == "" || targetPath == "." {
// Root path - include everything in this library
pathConditions = Or{}
folderIDs = nil
break
}
// Clean the path to normalize it. Paths stored in the folder table do not have leading/trailing slashes.
cleanPath := strings.TrimPrefix(targetPath, string(os.PathSeparator))
cleanPath = filepath.Clean(cleanPath)
// Include the target folder itself by ID
folderIDs = append(folderIDs, model.FolderID(lib, cleanPath))
// Include all descendants: folders whose path field equals or starts with the target path
// Note: Folder.Path is the directory path, so children have path = targetPath
pathConditions = append(pathConditions, Eq{"path": cleanPath})
pathConditions = append(pathConditions, Like{"path": cleanPath + "/%"})
}
// Combine conditions: exact folder IDs OR descendant path patterns
if len(folderIDs) > 0 {
where = append(where, Or{Eq{"id": folderIDs}, pathConditions})
} else if len(pathConditions) > 0 {
where = append(where, pathConditions)
}
}
sq := r.newSelect().Columns("id", "updated_at", "hash").Where(where)
var res []struct {
ID string
UpdatedAt time.Time
@@ -149,7 +191,7 @@ func (r folderRepository) GetTouchedWithPlaylists() (model.FolderCursor, error)
}, nil
}
func (r folderRepository) purgeEmpty() error {
func (r folderRepository) purgeEmpty(libraryIDs ...int) error {
sq := Delete(r.tableName).Where(And{
Eq{"num_audio_files": 0},
Eq{"num_playlists": 0},
@@ -157,6 +199,10 @@ func (r folderRepository) purgeEmpty() error {
ConcatExpr("id not in (select parent_id from folder)"),
ConcatExpr("id not in (select folder_id from media_file)"),
})
// If libraryIDs are specified, only purge folders from those libraries
if len(libraryIDs) > 0 {
sq = sq.Where(Eq{"library_id": libraryIDs})
}
c, err := r.executeSQL(sq)
if err != nil {
return fmt.Errorf("purging empty folders: %w", err)

View File

@@ -0,0 +1,213 @@
package persistence
import (
"context"
"fmt"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/pocketbase/dbx"
)
var _ = Describe("FolderRepository", func() {
var repo model.FolderRepository
var ctx context.Context
var conn *dbx.DB
var testLib, otherLib model.Library
BeforeEach(func() {
ctx = request.WithUser(log.NewContext(context.TODO()), model.User{ID: "userid"})
conn = GetDBXBuilder()
repo = newFolderRepository(ctx, conn)
// Use existing library ID 1 from test fixtures
libRepo := NewLibraryRepository(ctx, conn)
lib, err := libRepo.Get(1)
Expect(err).ToNot(HaveOccurred())
testLib = *lib
// Create a second library with its own folder to verify isolation
otherLib = model.Library{Name: "Other Library", Path: "/other/path"}
Expect(libRepo.Put(&otherLib)).To(Succeed())
})
AfterEach(func() {
// Clean up only test folders created by our tests (paths starting with "Test")
// This prevents interference with fixture data needed by other tests
_, _ = conn.NewQuery("DELETE FROM folder WHERE library_id = 1 AND path LIKE 'Test%'").Execute()
_, _ = conn.NewQuery(fmt.Sprintf("DELETE FROM library WHERE id = %d", otherLib.ID)).Execute()
})
Describe("GetFolderUpdateInfo", func() {
Context("with no target paths", func() {
It("returns all folders in the library", func() {
// Create test folders with unique names to avoid conflicts
folder1 := model.NewFolder(testLib, "TestGetLastUpdates/Folder1")
folder2 := model.NewFolder(testLib, "TestGetLastUpdates/Folder2")
err := repo.Put(folder1)
Expect(err).ToNot(HaveOccurred())
err = repo.Put(folder2)
Expect(err).ToNot(HaveOccurred())
otherFolder := model.NewFolder(otherLib, "TestOtherLib/Folder")
err = repo.Put(otherFolder)
Expect(err).ToNot(HaveOccurred())
// Query all folders (no target paths) - should only return folders from testLib
results, err := repo.GetFolderUpdateInfo(testLib)
Expect(err).ToNot(HaveOccurred())
// Should include folders from testLib
Expect(results).To(HaveKey(folder1.ID))
Expect(results).To(HaveKey(folder2.ID))
// Should NOT include folders from other library
Expect(results).ToNot(HaveKey(otherFolder.ID))
})
})
Context("with specific target paths", func() {
It("returns folder info for existing folders", func() {
// Create test folders with unique names
folder1 := model.NewFolder(testLib, "TestSpecific/Rock")
folder2 := model.NewFolder(testLib, "TestSpecific/Jazz")
folder3 := model.NewFolder(testLib, "TestSpecific/Classical")
err := repo.Put(folder1)
Expect(err).ToNot(HaveOccurred())
err = repo.Put(folder2)
Expect(err).ToNot(HaveOccurred())
err = repo.Put(folder3)
Expect(err).ToNot(HaveOccurred())
// Query specific paths
results, err := repo.GetFolderUpdateInfo(testLib, "TestSpecific/Rock", "TestSpecific/Classical")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
// Verify folder IDs are in results
Expect(results).To(HaveKey(folder1.ID))
Expect(results).To(HaveKey(folder3.ID))
Expect(results).ToNot(HaveKey(folder2.ID))
// Verify update info is populated
Expect(results[folder1.ID].UpdatedAt).ToNot(BeZero())
Expect(results[folder1.ID].Hash).To(Equal(folder1.Hash))
})
It("includes all child folders when querying parent", func() {
// Create a parent folder with multiple children
parent := model.NewFolder(testLib, "TestParent/Music")
child1 := model.NewFolder(testLib, "TestParent/Music/Rock/Queen")
child2 := model.NewFolder(testLib, "TestParent/Music/Jazz")
otherParent := model.NewFolder(testLib, "TestParent2/Music/Jazz")
Expect(repo.Put(parent)).To(Succeed())
Expect(repo.Put(child1)).To(Succeed())
Expect(repo.Put(child2)).To(Succeed())
// Query the parent folder - should return parent and all children
results, err := repo.GetFolderUpdateInfo(testLib, "TestParent/Music")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(3))
Expect(results).To(HaveKey(parent.ID))
Expect(results).To(HaveKey(child1.ID))
Expect(results).To(HaveKey(child2.ID))
Expect(results).ToNot(HaveKey(otherParent.ID))
})
It("excludes children from other libraries", func() {
// Create parent in testLib
parent := model.NewFolder(testLib, "TestIsolation/Parent")
child := model.NewFolder(testLib, "TestIsolation/Parent/Child")
Expect(repo.Put(parent)).To(Succeed())
Expect(repo.Put(child)).To(Succeed())
// Create similar path in other library
otherParent := model.NewFolder(otherLib, "TestIsolation/Parent")
otherChild := model.NewFolder(otherLib, "TestIsolation/Parent/Child")
Expect(repo.Put(otherParent)).To(Succeed())
Expect(repo.Put(otherChild)).To(Succeed())
// Query should only return folders from testLib
results, err := repo.GetFolderUpdateInfo(testLib, "TestIsolation/Parent")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
Expect(results).To(HaveKey(parent.ID))
Expect(results).To(HaveKey(child.ID))
Expect(results).ToNot(HaveKey(otherParent.ID))
Expect(results).ToNot(HaveKey(otherChild.ID))
})
It("excludes missing children when querying parent", func() {
// Create parent and children, mark one as missing
parent := model.NewFolder(testLib, "TestMissingChild/Parent")
child1 := model.NewFolder(testLib, "TestMissingChild/Parent/Child1")
child2 := model.NewFolder(testLib, "TestMissingChild/Parent/Child2")
child2.Missing = true
Expect(repo.Put(parent)).To(Succeed())
Expect(repo.Put(child1)).To(Succeed())
Expect(repo.Put(child2)).To(Succeed())
// Query parent - should only return parent and non-missing child
results, err := repo.GetFolderUpdateInfo(testLib, "TestMissingChild/Parent")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
Expect(results).To(HaveKey(parent.ID))
Expect(results).To(HaveKey(child1.ID))
Expect(results).ToNot(HaveKey(child2.ID))
})
It("handles mix of existing and non-existing target paths", func() {
// Create folders for one path but not the other
existingParent := model.NewFolder(testLib, "TestMixed/Exists")
existingChild := model.NewFolder(testLib, "TestMixed/Exists/Child")
Expect(repo.Put(existingParent)).To(Succeed())
Expect(repo.Put(existingChild)).To(Succeed())
// Query both existing and non-existing paths
results, err := repo.GetFolderUpdateInfo(testLib, "TestMixed/Exists", "TestMixed/DoesNotExist")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(HaveLen(2))
Expect(results).To(HaveKey(existingParent.ID))
Expect(results).To(HaveKey(existingChild.ID))
})
It("handles empty folder path as root", func() {
// Test querying for root folder without creating it (fixtures should have one)
rootFolderID := model.FolderID(testLib, ".")
results, err := repo.GetFolderUpdateInfo(testLib, "")
Expect(err).ToNot(HaveOccurred())
// Should return the root folder if it exists
if len(results) > 0 {
Expect(results).To(HaveKey(rootFolderID))
}
})
It("returns empty map for non-existent folders", func() {
results, err := repo.GetFolderUpdateInfo(testLib, "NonExistent/Path")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty())
})
It("skips missing folders", func() {
// Create a folder and mark it as missing
folder := model.NewFolder(testLib, "TestMissing/Folder")
folder.Missing = true
err := repo.Put(folder)
Expect(err).ToNot(HaveOccurred())
results, err := repo.GetFolderUpdateInfo(testLib, "TestMissing/Folder")
Expect(err).ToNot(HaveOccurred())
Expect(results).To(BeEmpty())
})
})
})
})

View File

@@ -177,7 +177,11 @@ func (r *libraryRepository) ScanEnd(id int) error {
return err
}
// https://www.sqlite.org/pragma.html#pragma_optimize
_, err = r.executeSQL(Expr("PRAGMA optimize=0x10012;"))
// Use mask 0x10000 to check table sizes without running ANALYZE
// Running ANALYZE can cause query planner issues with expression-based collation indexes
if conf.Server.DevOptimizeDB {
_, err = r.executeSQL(Expr("PRAGMA optimize=0x10000;"))
}
return err
}

View File

@@ -142,4 +142,62 @@ var _ = Describe("LibraryRepository", func() {
Expect(libAfter.TotalSize).To(Equal(sizeRes.Sum))
Expect(libAfter.TotalDuration).To(Equal(durationRes.Sum))
})
Describe("ScanBegin and ScanEnd", func() {
var lib *model.Library
BeforeEach(func() {
lib = &model.Library{
ID: 0,
Name: "Test Scan Library",
Path: "/music/test-scan",
}
err := repo.Put(lib)
Expect(err).ToNot(HaveOccurred())
})
DescribeTable("ScanBegin",
func(fullScan bool, expectedFullScanInProgress bool) {
err := repo.ScanBegin(lib.ID, fullScan)
Expect(err).ToNot(HaveOccurred())
updatedLib, err := repo.Get(lib.ID)
Expect(err).ToNot(HaveOccurred())
Expect(updatedLib.LastScanStartedAt).ToNot(BeZero())
Expect(updatedLib.FullScanInProgress).To(Equal(expectedFullScanInProgress))
},
Entry("sets FullScanInProgress to true for full scan", true, true),
Entry("sets FullScanInProgress to false for quick scan", false, false),
)
Context("ScanEnd", func() {
BeforeEach(func() {
err := repo.ScanBegin(lib.ID, true)
Expect(err).ToNot(HaveOccurred())
})
It("sets LastScanAt and clears FullScanInProgress and LastScanStartedAt", func() {
err := repo.ScanEnd(lib.ID)
Expect(err).ToNot(HaveOccurred())
updatedLib, err := repo.Get(lib.ID)
Expect(err).ToNot(HaveOccurred())
Expect(updatedLib.LastScanAt).ToNot(BeZero())
Expect(updatedLib.FullScanInProgress).To(BeFalse())
Expect(updatedLib.LastScanStartedAt).To(BeZero())
})
It("sets LastScanAt to be after LastScanStartedAt", func() {
libBefore, err := repo.Get(lib.ID)
Expect(err).ToNot(HaveOccurred())
err = repo.ScanEnd(lib.ID)
Expect(err).ToNot(HaveOccurred())
libAfter, err := repo.Get(lib.ID)
Expect(err).ToNot(HaveOccurred())
Expect(libAfter.LastScanAt).To(BeTemporally(">=", libBefore.LastScanStartedAt))
})
})
})
})

View File

@@ -84,6 +84,7 @@ func NewMediaFileRepository(ctx context.Context, db dbx.Builder) model.MediaFile
"created_at": "media_file.created_at",
"recently_added": mediaFileRecentlyAddedSort(),
"starred_at": "starred, starred_at",
"rated_at": "rating, rated_at",
})
return r
}

View File

@@ -38,7 +38,7 @@ var _ = Describe("MediaRepository", func() {
})
It("counts the number of mediafiles in the DB", func() {
Expect(mr.CountAll()).To(Equal(int64(6)))
Expect(mr.CountAll()).To(Equal(int64(10)))
})
It("returns songs ordered by lyrics with a specific title/artist", func() {

View File

@@ -157,7 +157,7 @@ func (s *SQLStore) WithTxImmediate(block func(tx model.DataStore) error, scope .
}, scope...)
}
func (s *SQLStore) GC(ctx context.Context) error {
func (s *SQLStore) GC(ctx context.Context, libraryIDs ...int) error {
trace := func(ctx context.Context, msg string, f func() error) func() error {
return func() error {
start := time.Now()
@@ -167,11 +167,17 @@ func (s *SQLStore) GC(ctx context.Context) error {
}
}
// If libraryIDs are provided, scope operations to those libraries where possible
scoped := len(libraryIDs) > 0
if scoped {
log.Debug(ctx, "GC: Running selective garbage collection", "libraryIDs", libraryIDs)
}
err := run.Sequentially(
trace(ctx, "purge empty albums", func() error { return s.Album(ctx).(*albumRepository).purgeEmpty() }),
trace(ctx, "purge empty albums", func() error { return s.Album(ctx).(*albumRepository).purgeEmpty(libraryIDs...) }),
trace(ctx, "purge empty artists", func() error { return s.Artist(ctx).(*artistRepository).purgeEmpty() }),
trace(ctx, "mark missing artists", func() error { return s.Artist(ctx).(*artistRepository).markMissing() }),
trace(ctx, "purge empty folders", func() error { return s.Folder(ctx).(*folderRepository).purgeEmpty() }),
trace(ctx, "purge empty folders", func() error { return s.Folder(ctx).(*folderRepository).purgeEmpty(libraryIDs...) }),
trace(ctx, "clean album annotations", func() error { return s.Album(ctx).(*albumRepository).cleanAnnotations() }),
trace(ctx, "clean artist annotations", func() error { return s.Artist(ctx).(*artistRepository).cleanAnnotations() }),
trace(ctx, "clean media file annotations", func() error { return s.MediaFile(ctx).(*mediaFileRepository).cleanAnnotations() }),

View File

@@ -69,10 +69,12 @@ var (
albumSgtPeppers = al(model.Album{ID: "101", Name: "Sgt Peppers", AlbumArtist: "The Beatles", OrderAlbumName: "sgt peppers", AlbumArtistID: "3", EmbedArtPath: p("/beatles/1/sgt/a day.mp3"), SongCount: 1, MaxYear: 1967})
albumAbbeyRoad = al(model.Album{ID: "102", Name: "Abbey Road", AlbumArtist: "The Beatles", OrderAlbumName: "abbey road", AlbumArtistID: "3", EmbedArtPath: p("/beatles/1/come together.mp3"), SongCount: 1, MaxYear: 1969})
albumRadioactivity = al(model.Album{ID: "103", Name: "Radioactivity", AlbumArtist: "Kraftwerk", OrderAlbumName: "radioactivity", AlbumArtistID: "2", EmbedArtPath: p("/kraft/radio/radio.mp3"), SongCount: 2})
albumMultiDisc = al(model.Album{ID: "104", Name: "Multi Disc Album", AlbumArtist: "Test Artist", OrderAlbumName: "multi disc album", AlbumArtistID: "1", EmbedArtPath: p("/test/multi/disc1/track1.mp3"), SongCount: 4})
testAlbums = model.Albums{
albumSgtPeppers,
albumAbbeyRoad,
albumRadioactivity,
albumMultiDisc,
}
)
@@ -94,13 +96,22 @@ var (
Lyrics: `[{"lang":"xxx","line":[{"value":"This is a set of lyrics"}],"synced":false}]`,
})
songAntenna2 = mf(model.MediaFile{ID: "1006", Title: "Antenna", ArtistID: "2", Artist: "Kraftwerk", AlbumID: "103"})
testSongs = model.MediaFiles{
// Multi-disc album tracks (intentionally out of order to test sorting)
songDisc2Track11 = mf(model.MediaFile{ID: "2001", Title: "Disc 2 Track 11", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 2, TrackNumber: 11, Path: p("/test/multi/disc2/track11.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
songDisc1Track01 = mf(model.MediaFile{ID: "2002", Title: "Disc 1 Track 1", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 1, TrackNumber: 1, Path: p("/test/multi/disc1/track1.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
songDisc2Track01 = mf(model.MediaFile{ID: "2003", Title: "Disc 2 Track 1", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 2, TrackNumber: 1, Path: p("/test/multi/disc2/track1.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
songDisc1Track02 = mf(model.MediaFile{ID: "2004", Title: "Disc 1 Track 2", ArtistID: "1", Artist: "Test Artist", AlbumID: "104", Album: "Multi Disc Album", DiscNumber: 1, TrackNumber: 2, Path: p("/test/multi/disc1/track2.mp3"), OrderAlbumName: "multi disc album", OrderArtistName: "test artist"})
testSongs = model.MediaFiles{
songDayInALife,
songComeTogether,
songRadioactivity,
songAntenna,
songAntennaWithLyrics,
songAntenna2,
songDisc2Track11,
songDisc1Track01,
songDisc2Track01,
songDisc1Track02,
}
)

View File

@@ -264,6 +264,11 @@ func (r *playlistRepository) refreshSmartPlaylist(pls *model.Playlist) bool {
"annotation.item_id = media_file.id" +
" AND annotation.item_type = 'media_file'" +
" AND annotation.user_id = '" + usr.ID + "')")
// Only include media files from libraries the user has access to
sq = r.applyLibraryFilter(sq, "media_file")
// Apply the criteria rules
sq = r.addCriteria(sq, rules)
insSql := Insert("playlist_tracks").Columns("id", "playlist_id", "media_file_id").Select(sq)
_, err = r.executeSQL(insSql)
@@ -388,6 +393,7 @@ func (r *playlistRepository) loadTracks(sel SelectBuilder, id string) (model.Pla
"coalesce(play_count, 0) as play_count",
"play_date",
"coalesce(rating, 0) as rating",
"rated_at",
"f.*",
"playlist_tracks.*",
"library.path as library_path",

View File

@@ -1,7 +1,6 @@
package persistence
import (
"context"
"time"
"github.com/navidrome/navidrome/conf"
@@ -11,13 +10,14 @@ import (
"github.com/navidrome/navidrome/model/request"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"github.com/pocketbase/dbx"
)
var _ = Describe("PlaylistRepository", func() {
var repo model.PlaylistRepository
BeforeEach(func() {
ctx := log.NewContext(context.TODO())
ctx := log.NewContext(GinkgoT().Context())
ctx = request.WithUser(ctx, model.User{ID: "userid", UserName: "userid", IsAdmin: true})
repo = NewPlaylistRepository(ctx, GetDBXBuilder())
})
@@ -219,4 +219,283 @@ var _ = Describe("PlaylistRepository", func() {
})
})
})
Describe("Playlist Track Sorting", func() {
var testPlaylistID string
AfterEach(func() {
if testPlaylistID != "" {
Expect(repo.Delete(testPlaylistID)).To(BeNil())
testPlaylistID = ""
}
})
It("sorts tracks correctly by album (disc and track number)", func() {
By("creating a playlist with multi-disc album tracks in arbitrary order")
newPls := model.Playlist{Name: "Multi-Disc Test", OwnerID: "userid"}
// Add tracks in intentionally scrambled order
newPls.AddMediaFilesByID([]string{"2001", "2002", "2003", "2004"})
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("retrieving tracks sorted by album")
tracksRepo := repo.Tracks(newPls.ID, false)
tracks, err := tracksRepo.GetAll(model.QueryOptions{Sort: "album", Order: "asc"})
Expect(err).ToNot(HaveOccurred())
By("verifying tracks are sorted by disc number then track number")
Expect(tracks).To(HaveLen(4))
// Expected order: Disc 1 Track 1, Disc 1 Track 2, Disc 2 Track 1, Disc 2 Track 11
Expect(tracks[0].MediaFileID).To(Equal("2002")) // Disc 1, Track 1
Expect(tracks[1].MediaFileID).To(Equal("2004")) // Disc 1, Track 2
Expect(tracks[2].MediaFileID).To(Equal("2003")) // Disc 2, Track 1
Expect(tracks[3].MediaFileID).To(Equal("2001")) // Disc 2, Track 11
})
})
Describe("Smart Playlists with Tag Criteria", func() {
var mfRepo model.MediaFileRepository
var testPlaylistID string
var songWithGrouping, songWithoutGrouping model.MediaFile
BeforeEach(func() {
ctx := log.NewContext(GinkgoT().Context())
ctx = request.WithUser(ctx, model.User{ID: "userid", UserName: "userid", IsAdmin: true})
mfRepo = NewMediaFileRepository(ctx, GetDBXBuilder())
// Register 'grouping' as a valid tag for smart playlists
criteria.AddTagNames([]string{"grouping"})
// Create a song with the grouping tag
songWithGrouping = model.MediaFile{
ID: "test-grouping-1",
Title: "Song With Grouping",
Artist: "Test Artist",
ArtistID: "1",
Album: "Test Album",
AlbumID: "101",
Path: "/test/grouping/song1.mp3",
Tags: model.Tags{
"grouping": []string{"My Crate"},
},
Participants: model.Participants{},
LibraryID: 1,
Lyrics: "[]",
}
Expect(mfRepo.Put(&songWithGrouping)).To(Succeed())
// Create a song without the grouping tag
songWithoutGrouping = model.MediaFile{
ID: "test-grouping-2",
Title: "Song Without Grouping",
Artist: "Test Artist",
ArtistID: "1",
Album: "Test Album",
AlbumID: "101",
Path: "/test/grouping/song2.mp3",
Tags: model.Tags{},
Participants: model.Participants{},
LibraryID: 1,
Lyrics: "[]",
}
Expect(mfRepo.Put(&songWithoutGrouping)).To(Succeed())
})
AfterEach(func() {
if testPlaylistID != "" {
_ = repo.Delete(testPlaylistID)
testPlaylistID = ""
}
// Clean up test media files
_, _ = GetDBXBuilder().Delete("media_file", dbx.HashExp{"id": "test-grouping-1"}).Execute()
_, _ = GetDBXBuilder().Delete("media_file", dbx.HashExp{"id": "test-grouping-2"}).Execute()
})
It("matches tracks with a tag value using 'contains' with empty string (issue #4728 workaround)", func() {
By("creating a smart playlist that checks if grouping tag has any value")
// This is the workaround for issue #4728: using 'contains' with empty string
// generates SQL: value LIKE '%%' which matches any non-empty string
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.Contains{"grouping": ""},
},
}
newPls := model.Playlist{Name: "Tracks with Grouping", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("refreshing the smart playlist")
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second // Force refresh
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
By("verifying only the track with grouping tag is matched")
Expect(pls.Tracks).To(HaveLen(1))
Expect(pls.Tracks[0].MediaFileID).To(Equal(songWithGrouping.ID))
})
It("excludes tracks with a tag value using 'notContains' with empty string", func() {
By("creating a smart playlist that checks if grouping tag is NOT set")
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.NotContains{"grouping": ""},
},
}
newPls := model.Playlist{Name: "Tracks without Grouping", OwnerID: "userid", Rules: rules}
Expect(repo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("refreshing the smart playlist")
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second // Force refresh
pls, err := repo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
By("verifying the track with grouping is NOT in the playlist")
for _, track := range pls.Tracks {
Expect(track.MediaFileID).ToNot(Equal(songWithGrouping.ID))
}
By("verifying the track without grouping IS in the playlist")
var foundWithoutGrouping bool
for _, track := range pls.Tracks {
if track.MediaFileID == songWithoutGrouping.ID {
foundWithoutGrouping = true
break
}
}
Expect(foundWithoutGrouping).To(BeTrue())
})
})
Describe("Smart Playlists Library Filtering", func() {
var mfRepo model.MediaFileRepository
var testPlaylistID string
var lib2ID int
var restrictedUserID string
var uniqueLibPath string
BeforeEach(func() {
db := GetDBXBuilder()
// Generate unique IDs for this test run
uniqueSuffix := time.Now().Format("20060102150405.000")
restrictedUserID = "restricted-user-" + uniqueSuffix
uniqueLibPath = "/music/lib2-" + uniqueSuffix
// Create a second library with unique name and path to avoid conflicts with other tests
_, err := db.DB().Exec("INSERT INTO library (name, path, created_at, updated_at) VALUES (?, ?, datetime('now'), datetime('now'))", "Library 2-"+uniqueSuffix, uniqueLibPath)
Expect(err).ToNot(HaveOccurred())
err = db.DB().QueryRow("SELECT last_insert_rowid()").Scan(&lib2ID)
Expect(err).ToNot(HaveOccurred())
// Create a restricted user with access only to library 1
_, err = db.DB().Exec("INSERT INTO user (id, user_name, name, is_admin, password, created_at, updated_at) VALUES (?, ?, 'Restricted User', false, 'pass', datetime('now'), datetime('now'))", restrictedUserID, restrictedUserID)
Expect(err).ToNot(HaveOccurred())
_, err = db.DB().Exec("INSERT INTO user_library (user_id, library_id) VALUES (?, 1)", restrictedUserID)
Expect(err).ToNot(HaveOccurred())
// Create test media files in each library
ctx := log.NewContext(GinkgoT().Context())
ctx = request.WithUser(ctx, model.User{ID: "userid", UserName: "userid", IsAdmin: true})
mfRepo = NewMediaFileRepository(ctx, db)
// Song in library 1 (accessible by restricted user)
songLib1 := model.MediaFile{
ID: "lib1-song",
Title: "Song in Lib1",
Artist: "Test Artist",
ArtistID: "1",
Album: "Test Album",
AlbumID: "101",
Path: "/music/lib1/song.mp3",
LibraryID: 1,
Participants: model.Participants{},
Tags: model.Tags{},
Lyrics: "[]",
}
Expect(mfRepo.Put(&songLib1)).To(Succeed())
// Song in library 2 (NOT accessible by restricted user)
songLib2 := model.MediaFile{
ID: "lib2-song",
Title: "Song in Lib2",
Artist: "Test Artist",
ArtistID: "1",
Album: "Test Album",
AlbumID: "101",
Path: uniqueLibPath + "/song.mp3",
LibraryID: lib2ID,
Participants: model.Participants{},
Tags: model.Tags{},
Lyrics: "[]",
}
Expect(mfRepo.Put(&songLib2)).To(Succeed())
})
AfterEach(func() {
db := GetDBXBuilder()
if testPlaylistID != "" {
_ = repo.Delete(testPlaylistID)
testPlaylistID = ""
}
// Clean up test data
_, _ = db.Delete("media_file", dbx.HashExp{"id": "lib1-song"}).Execute()
_, _ = db.Delete("media_file", dbx.HashExp{"id": "lib2-song"}).Execute()
_, _ = db.Delete("user_library", dbx.HashExp{"user_id": restrictedUserID}).Execute()
_, _ = db.Delete("user", dbx.HashExp{"id": restrictedUserID}).Execute()
_, _ = db.DB().Exec("DELETE FROM library WHERE id = ?", lib2ID)
})
It("should only include tracks from libraries the user has access to (issue #4738)", func() {
db := GetDBXBuilder()
ctx := log.NewContext(GinkgoT().Context())
// Create the smart playlist as the restricted user
restrictedUser := model.User{ID: restrictedUserID, UserName: restrictedUserID, IsAdmin: false}
ctx = request.WithUser(ctx, restrictedUser)
restrictedRepo := NewPlaylistRepository(ctx, db)
// Create a smart playlist that matches all songs
rules := &criteria.Criteria{
Expression: criteria.All{
criteria.Gt{"playCount": -1}, // Matches everything
},
}
newPls := model.Playlist{Name: "All Songs", OwnerID: restrictedUserID, Rules: rules}
Expect(restrictedRepo.Put(&newPls)).To(Succeed())
testPlaylistID = newPls.ID
By("refreshing the smart playlist")
conf.Server.SmartPlaylistRefreshDelay = -1 * time.Second // Force refresh
pls, err := restrictedRepo.GetWithTracks(newPls.ID, true, false)
Expect(err).ToNot(HaveOccurred())
By("verifying only the track from library 1 is in the playlist")
var foundLib1Song, foundLib2Song bool
for _, track := range pls.Tracks {
if track.MediaFileID == "lib1-song" {
foundLib1Song = true
}
if track.MediaFileID == "lib2-song" {
foundLib2Song = true
}
}
Expect(foundLib1Song).To(BeTrue(), "Song from library 1 should be in the playlist")
Expect(foundLib2Song).To(BeFalse(), "Song from library 2 should NOT be in the playlist")
By("verifying playlist_tracks table only contains the accessible track")
var playlistTracksCount int
err = db.DB().QueryRow("SELECT count(*) FROM playlist_tracks WHERE playlist_id = ?", newPls.ID).Scan(&playlistTracksCount)
Expect(err).ToNot(HaveOccurred())
// Count should only include tracks visible to the user (lib1-song)
// The count may include other test songs from library 1, but NOT lib2-song
var lib2TrackCount int
err = db.DB().QueryRow("SELECT count(*) FROM playlist_tracks WHERE playlist_id = ? AND media_file_id = 'lib2-song'", newPls.ID).Scan(&lib2TrackCount)
Expect(err).ToNot(HaveOccurred())
Expect(lib2TrackCount).To(Equal(0), "lib2-song should not be in playlist_tracks")
By("verifying SongCount matches visible tracks")
Expect(pls.SongCount).To(Equal(len(pls.Tracks)), "SongCount should match the number of visible tracks")
})
})
})

View File

@@ -55,7 +55,7 @@ func (r *playlistRepository) Tracks(playlistId string, refreshSmartPlaylist bool
"id": "playlist_tracks.id",
"artist": "order_artist_name",
"album_artist": "order_album_artist_name",
"album": "order_album_name, order_album_artist_name",
"album": "order_album_name, album_id, disc_number, track_number, order_artist_name, title",
"title": "order_title",
// To make sure these fields will be whitelisted
"duration": "duration",
@@ -97,6 +97,7 @@ func (r *playlistTrackRepository) Read(id string) (interface{}, error) {
"coalesce(rating, 0) as rating",
"starred_at",
"play_date",
"rated_at",
"f.*",
"playlist_tracks.*",
).

View File

@@ -28,6 +28,7 @@ func (r sqlRepository) withAnnotation(query SelectBuilder, idField string) Selec
"coalesce(rating, 0) as rating",
"starred_at",
"play_date",
"rated_at",
)
if conf.Server.AlbumPlayCountMode == consts.AlbumPlayCountModeNormalized && r.tableName == "album" {
query = query.Columns(
@@ -77,7 +78,8 @@ func (r sqlRepository) SetStar(starred bool, ids ...string) error {
}
func (r sqlRepository) SetRating(rating int, itemID string) error {
return r.annUpsert(map[string]interface{}{"rating": rating}, itemID)
ratedAt := time.Now()
return r.annUpsert(map[string]interface{}{"rating": rating, "rated_at": ratedAt}, itemID)
}
func (r sqlRepository) IncPlayCount(itemID string, ts time.Time) error {
@@ -119,7 +121,7 @@ func (r sqlRepository) cleanAnnotations() error {
del := Delete(annotationTable).Where(Eq{"item_type": r.tableName}).Where("item_id not in (select id from " + r.tableName + ")")
c, err := r.executeSQL(del)
if err != nil {
return fmt.Errorf("error cleaning up annotations: %w", err)
return fmt.Errorf("error cleaning up %s annotations: %w", r.tableName, err)
}
if c > 0 {
log.Debug(r.ctx, "Clean-up annotations", "table", r.tableName, "totalDeleted", c)

View File

@@ -148,10 +148,10 @@ func (r sqlRepository) cleanBookmarks() error {
del := Delete(bookmarkTable).Where(Eq{"item_type": r.tableName}).Where("item_id not in (select id from " + r.tableName + ")")
c, err := r.executeSQL(del)
if err != nil {
return fmt.Errorf("error cleaning up bookmarks: %w", err)
return fmt.Errorf("error cleaning up %s bookmarks: %w", r.tableName, err)
}
if c > 0 {
log.Debug(r.ctx, "Clean-up bookmarks", "totalDeleted", c)
log.Debug(r.ctx, "Clean-up bookmarks", "totalDeleted", c, "itemType", r.tableName)
}
return nil
}

View File

@@ -2,6 +2,7 @@ package persistence
import (
"context"
"time"
"github.com/deluan/rest"
"github.com/navidrome/navidrome/conf/configtest"
@@ -45,6 +46,9 @@ var _ = Describe("Tag Library Filtering", func() {
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
// Generate unique path suffix to avoid conflicts with other tests
uniqueSuffix := time.Now().Format("20060102150405.000")
// Clean up database
db := GetDBXBuilder()
_, err := db.NewQuery("DELETE FROM library_tag").Execute()
@@ -57,12 +61,12 @@ var _ = Describe("Tag Library Filtering", func() {
_, err = db.NewQuery("DELETE FROM library WHERE id > 1").Execute()
Expect(err).ToNot(HaveOccurred())
// Create test libraries
// Create test libraries with unique names and paths to avoid conflicts with other tests
_, err = db.NewQuery("INSERT INTO library (id, name, path) VALUES ({:id}, {:name}, {:path})").
Bind(dbx.Params{"id": libraryID2, "name": "Library 2", "path": "/music/lib2"}).Execute()
Bind(dbx.Params{"id": libraryID2, "name": "Library 2-" + uniqueSuffix, "path": "/music/lib2-" + uniqueSuffix}).Execute()
Expect(err).ToNot(HaveOccurred())
_, err = db.NewQuery("INSERT INTO library (id, name, path) VALUES ({:id}, {:name}, {:path})").
Bind(dbx.Params{"id": libraryID3, "name": "Library 3", "path": "/music/lib3"}).Execute()
Bind(dbx.Params{"id": libraryID3, "name": "Library 3-" + uniqueSuffix, "path": "/music/lib3-" + uniqueSuffix}).Execute()
Expect(err).ToNot(HaveOccurred())
// Give admin access to all libraries

View File

@@ -88,10 +88,10 @@ func (r *tagRepository) purgeUnused() error {
`)
c, err := r.executeSQL(del)
if err != nil {
return fmt.Errorf("error purging unused tags: %w", err)
return fmt.Errorf("error purging %s unused tags: %w", r.tableName, err)
}
if c > 0 {
log.Debug(r.ctx, "Purged unused tags", "totalDeleted", c)
log.Debug(r.ctx, "Purged unused tags", "totalDeleted", c, "table", r.tableName)
}
return err
}

View File

@@ -57,6 +57,7 @@ func NewUserRepository(ctx context.Context, db dbx.Builder) model.UserRepository
r.db = db
r.tableName = "user"
r.registerModel(&model.User{}, map[string]filterFunc{
"id": idFilter(r.tableName),
"password": invalidFilter(ctx),
"name": r.withTableName(startsWithFilter),
})

View File

@@ -559,4 +559,15 @@ var _ = Describe("UserRepository", func() {
Expect(user.Libraries[0].ID).To(Equal(1))
})
})
Describe("filters", func() {
It("qualifies id filter with table name", func() {
r := repo.(*userRepository)
qo := r.parseRestOptions(r.ctx, rest.QueryOptions{Filters: map[string]any{"id": "123"}})
sel := r.selectUserWithLibraries(qo)
query, _, err := r.toSQL(sel)
Expect(err).NotTo(HaveOccurred())
Expect(query).To(ContainSubstring("user.id = {:p0}"))
})
})
})

View File

@@ -93,8 +93,12 @@ func (s *subsonicAPIServiceImpl) Call(ctx context.Context, req *subsonicapi.Call
RawQuery: query.Encode(),
}
// Create HTTP request with internal authentication
httpReq, err := http.NewRequestWithContext(ctx, "GET", finalURL.String(), nil)
// Create HTTP request with a fresh context to avoid Chi RouteContext pollution.
// Using http.NewRequest (instead of http.NewRequestWithContext) ensures the internal
// SubsonicAPI call doesn't inherit routing information from the parent handler,
// which would cause Chi to invoke the wrong handler. Authentication context is
// explicitly added in the next step via request.WithInternalAuth.
httpReq, err := http.NewRequest("GET", finalURL.String(), nil)
if err != nil {
return &subsonicapi.CallResponse{
Error: fmt.Sprintf("failed to create HTTP request: %v", err),

View File

@@ -4,6 +4,7 @@ import (
"context"
"os"
"path/filepath"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/core/agents"
@@ -22,8 +23,11 @@ var _ = Describe("Plugin Manager", func() {
// but, as this is an integration test, we can't use configtest.SetupConfig() as it causes
// data races.
originalPluginsFolder := conf.Server.Plugins.Folder
originalTimeout := conf.Server.DevPluginCompilationTimeout
conf.Server.DevPluginCompilationTimeout = 2 * time.Minute
DeferCleanup(func() {
conf.Server.Plugins.Folder = originalPluginsFolder
conf.Server.DevPluginCompilationTimeout = originalTimeout
})
conf.Server.Plugins.Enabled = true
conf.Server.Plugins.Folder = testDataDir

View File

@@ -83,6 +83,15 @@ nfpms:
owner: navidrome
group: navidrome
- src: release/linux/.package.rpm # contents: "rpm"
dst: /var/lib/navidrome/.package
type: "config|noreplace"
packager: rpm
- src: release/linux/.package.deb # contents: "deb"
dst: /var/lib/navidrome/.package
type: "config|noreplace"
packager: deb
scripts:
preinstall: "release/linux/preinstall.sh"
postinstall: "release/linux/postinstall.sh"

View File

@@ -0,0 +1 @@
deb

View File

@@ -0,0 +1 @@
rpm

View File

@@ -49,6 +49,9 @@ cp "${DOWNLOAD_FOLDER}"/extracted_ffmpeg/${FFMPEG_FILE}/bin/ffmpeg.exe "$MSI_OUT
cp "$WORKSPACE"/LICENSE "$WORKSPACE"/README.md "$MSI_OUTPUT_DIR"
cp "$BINARY" "$MSI_OUTPUT_DIR"
# package type indicator file
echo "msi" > "$MSI_OUTPUT_DIR/.package"
# workaround for wixl WixVariable not working to override bmp locations
cp "$WORKSPACE"/release/wix/bmp/banner.bmp /usr/share/wixl-*/ext/ui/bitmaps/bannrbmp.bmp
cp "$WORKSPACE"/release/wix/bmp/dialogue.bmp /usr/share/wixl-*/ext/ui/bitmaps/dlgbmp.bmp

View File

@@ -69,6 +69,12 @@
</Directory>
</Directory>
<Directory Id="ND_DATAFOLDER" name="[ND_DATAFOLDER]">
<Component Id='PackageFile' Guid='9eec0697-803c-4629-858f-20dc376c960b' Win64="$(var.Win64)">
<File Id='package' Name='.package' DiskId='1' Source='.package' KeyPath='no' />
</Component>
</Directory>
</Directory>
<InstallUISequence>
@@ -81,6 +87,7 @@
<ComponentRef Id='Configuration'/>
<ComponentRef Id='MainExecutable' />
<ComponentRef Id='FFMpegExecutable' />
<ComponentRef Id='PackageFile' />
</Feature>
</Product>
</Wix>
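
Between the nFPM contents entries, the MSI build script, and the WiX component above, every installer now drops a one-line .package marker (deb, rpm, or msi) into the Navidrome data folder. How that marker is consumed is not part of these diffs; a minimal hypothetical reader, needing only os, path/filepath, and strings, could look like this:

// packageType returns the contents of the installer marker, or "unknown" when it is
// absent (e.g. Docker or manual installs). Hypothetical sketch, not Navidrome code.
func packageType(dataFolder string) string {
	b, err := os.ReadFile(filepath.Join(dataFolder, ".package"))
	if err != nil {
		return "unknown"
	}
	return strings.TrimSpace(string(b))
}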

View File

@@ -31,8 +31,12 @@
"mood": "Estado",
"participants": "Participantes adicionais",
"tags": "Etiquetas adicionais",
"mappedTags": "",
"rawTags": "Etiquetas en cru"
"mappedTags": "Etiquetas mapeadas",
"rawTags": "Etiquetas en cru",
"bitDepth": "Calidade de Bit",
"sampleRate": "Taxa de mostra",
"missing": "Falta",
"libraryName": "Biblioteca"
},
"actions": {
"addToQueue": "Ao final da cola",
@@ -41,7 +45,8 @@
"shuffleAll": "Remexer todo",
"download": "Descargar",
"playNext": "A continuación",
"info": "Obter info"
"info": "Obter info",
"showInPlaylist": "Mostrar en Lista de reprodución"
}
},
"album": {
@@ -70,7 +75,10 @@
"releaseType": "Tipo",
"grouping": "Grupos",
"media": "Multimedia",
"mood": "Estado"
"mood": "Estado",
"date": "Data de gravación",
"missing": "Falta",
"libraryName": "Biblioteca"
},
"actions": {
"playAll": "Reproducir",
@@ -102,7 +110,8 @@
"rating": "Valoración",
"genre": "Xénero",
"size": "Tamaño",
"role": "Rol"
"role": "Rol",
"missing": "Falta"
},
"roles": {
"albumartist": "Artista do álbum |||| Artistas do álbum",
@@ -117,7 +126,13 @@
"mixer": "Mistura |||| Mistura",
"remixer": "Remezcla |||| Remezcla",
"djmixer": "Mezcla DJs |||| Mezcla DJs",
"performer": "Intérprete |||| Intérpretes"
"performer": "Intérprete |||| Intérpretes",
"maincredit": "Artista do álbum ou Artista |||| Artistas do álbum ou Artistas"
},
"actions": {
"shuffle": "Barallar",
"radio": "Radio",
"topSongs": "Cancións destacadas"
}
},
"user": {
@@ -134,10 +149,12 @@
"currentPassword": "Contrasinal actual",
"newPassword": "Novo contrasinal",
"token": "Token",
"lastAccessAt": "Último acceso"
"lastAccessAt": "Último acceso",
"libraries": "Bibliotecas"
},
"helperTexts": {
"name": "Os cambios no nome aplicaranse a próxima vez que accedas"
"name": "Os cambios no nome aplicaranse a próxima vez que accedas",
"libraries": "Selecciona bibliotecas específicas para esta usuaria, ou deixa baleiro para usar as bibliotecas por defecto"
},
"notifications": {
"created": "Creouse a usuaria",
@@ -146,7 +163,12 @@
},
"message": {
"listenBrainzToken": "Escribe o token de usuaria de ListenBrainz",
"clickHereForToken": "Preme aquí para obter o token"
"clickHereForToken": "Preme aquí para obter o token",
"selectAllLibraries": "Seleccionar todas as bibliotecas",
"adminAutoLibraries": "As usuarias Admin teñen acceso por defecto a todas as bibliotecas"
},
"validation": {
"librariesRequired": "Debes seleccionar polo menos unha biblioteca para usuarias non admins"
}
},
"player": {
@@ -190,11 +212,17 @@
"addNewPlaylist": "Crear \"%{name}\"",
"export": "Exportar",
"makePublic": "Facela Pública",
"makePrivate": "Facela Privada"
"makePrivate": "Facela Privada",
"saveQueue": "Salvar a Cola como Lista de reprodución",
"searchOrCreate": "Buscar listas ou escribe para crear nova…",
"pressEnterToCreate": "Preme Enter para crear nova lista",
"removeFromSelection": "Retirar da selección"
},
"message": {
"duplicate_song": "Engadir cancións duplicadas",
"song_exist": "Hai duplicadas que serán engadidas á lista de reprodución. Desexas engadir as duplicadas ou omitilas?"
"song_exist": "Hai duplicadas que serán engadidas á lista de reprodución. Desexas engadir as duplicadas ou omitilas?",
"noPlaylistsFound": "Sen listas de reprodución",
"noPlaylists": "Sen listas dispoñibles"
}
},
"radio": {
@@ -232,13 +260,68 @@
"fields": {
"path": "Ruta",
"size": "Tamaño",
"updatedAt": "Desapareceu o"
"updatedAt": "Desapareceu o",
"libraryName": "Biblioteca"
},
"actions": {
"remove": "Retirar"
"remove": "Retirar",
"remove_all": "Retirar todo"
},
"notifications": {
"removed": "Ficheiro(s) faltantes retirados"
},
"empty": "Sen ficheiros faltantes"
},
"library": {
"name": "Biblioteca |||| Bibliotecas",
"fields": {
"name": "Nome",
"path": "Ruta",
"remotePath": "Ruta remota",
"lastScanAt": "Último escaneado",
"songCount": "Cancións",
"albumCount": "Álbums",
"artistCount": "Artistas",
"totalSongs": "Cancións",
"totalAlbums": "Álbums",
"totalArtists": "Artistas",
"totalFolders": "Cartafoles",
"totalFiles": "Ficheiros",
"totalMissingFiles": "Ficheiros que faltan",
"totalSize": "Tamaño total",
"totalDuration": "Duración",
"defaultNewUsers": "Por defecto para novas usuarias",
"createdAt": "Creada",
"updatedAt": "Actualizada"
},
"sections": {
"basic": "Información básica",
"statistics": "Estatísticas"
},
"actions": {
"scan": "Escanear Biblioteca",
"manageUsers": "Xestionar acceso das usuarias",
"viewDetails": "Ver detalles"
},
"notifications": {
"created": "Biblioteca creada correctamente",
"updated": "Biblioteca actualizada correctamente",
"deleted": "Biblioteca eliminada correctamente",
"scanStarted": "Comezou o escaneo da biblioteca",
"scanCompleted": "Completouse o escaneado da biblioteca"
},
"validation": {
"nameRequired": "Requírese un nome para a biblioteca",
"pathRequired": "Requírese unha ruta para a biblioteca",
"pathNotDirectory": "A ruta á biblioteca ten que ser un directorio",
"pathNotFound": "Non se atopa a ruta á biblioteca",
"pathNotAccessible": "A ruta á biblioteca non é accesible",
"pathInvalid": "Ruta non válida á biblioteca"
},
"messages": {
"deleteConfirm": "Tes certeza de querer eliminar esta biblioteca? Isto eliminará todos os datos asociados e accesos de usuarias.",
"scanInProgress": "Escaneo en progreso…",
"noLibrariesAssigned": "Sen bibliotecas asignadas a esta usuaria"
}
}
},
@@ -419,7 +502,11 @@
"downloadDialogTitle": "Descargar %{resource} '%{name}' (%{size})",
"shareCopyToClipboard": "Copiar ao portapapeis: Ctrl+C, Enter",
"remove_missing_title": "Retirar ficheiros que faltan",
"remove_missing_content": "Tes certeza de querer retirar da base de datos os ficheiros que faltan? Isto retirará de xeito permanente todas a referencias a eles, incluíndo a conta de reproducións e valoracións."
"remove_missing_content": "Tes certeza de querer retirar da base de datos os ficheiros que faltan? Isto retirará de xeito permanente todas a referencias a eles, incluíndo a conta de reproducións e valoracións.",
"remove_all_missing_title": "Retirar todos os ficheiros que faltan",
"remove_all_missing_content": "Tes certeza de querer retirar da base de datos todos os ficheiros que faltan? Isto eliminará todas as referencias a eles, incluíndo o número de reproducións e valoracións.",
"noSimilarSongsFound": "Sen cancións parecidas",
"noTopSongsFound": "Sen cancións destacadas"
},
"menu": {
"library": "Biblioteca",
@@ -448,7 +535,13 @@
"albumList": "Álbums",
"about": "Acerca de",
"playlists": "Listas de reprodución",
"sharedPlaylists": "Listas compartidas"
"sharedPlaylists": "Listas compartidas",
"librarySelector": {
"allLibraries": "Todas as bibliotecas (%{count})",
"multipleLibraries": "%{selected} de %{total} Bibliotecas",
"selectLibraries": "Seleccionar Bibliotecas",
"none": "Ningunha"
}
},
"player": {
"playListsText": "Reproducir cola",
@@ -485,6 +578,21 @@
"disabled": "Desactivado",
"waiting": "Agardando"
}
},
"tabs": {
"about": "Sobre",
"config": "Configuración"
},
"config": {
"configName": "Nome",
"environmentVariable": "Variable de entorno",
"currentValue": "Valor actual",
"configurationFile": "Ficheiro de configuración",
"exportToml": "Exportar configuración (TOML)",
"exportSuccess": "Configuración exportada ao portapapeis no formato TOML",
"exportFailed": "Fallou a copia da configuración",
"devFlagsHeader": "Configuracións de Desenvolvemento (suxeitas a cambio/retirada)",
"devFlagsComment": "Son axustes experimentais e poden retirarse en futuras versións"
}
},
"activity": {
@@ -493,7 +601,10 @@
"quickScan": "Escaneo rápido",
"fullScan": "Escaneo completo",
"serverUptime": "Servidor a funcionar",
"serverDown": "SEN CONEXIÓN"
"serverDown": "SEN CONEXIÓN",
"scanType": "Tipo",
"status": "Erro de escaneado",
"elapsedTime": "Tempo transcurrido"
},
"help": {
"title": "Atallos de Navidrome",
@@ -508,5 +619,10 @@
"toggle_love": "Engadir canción a favoritas",
"current_song": "Ir á Canción actual "
}
},
"nowPlaying": {
"title": "En reprodución",
"empty": "Sen reprodución",
"minutesAgo": "hai %{smart_count} minuto |||| hai %{smart_count} minutos"
}
}

View File

@@ -400,8 +400,8 @@
},
"albumList": "Album",
"about": "Info",
"playlists": "Scalette",
"sharedPlaylists": "Scalette Condivise"
"playlists": "Playlist",
"sharedPlaylists": "Playlist Condivise"
},
"player": {
"playListsText": "Coda",
@@ -457,4 +457,4 @@
"current_song": ""
}
}
}
}

View File

@@ -12,6 +12,7 @@
"artist": "아티스트",
"album": "앨범",
"path": "파일 경로",
"libraryName": "라이브러리",
"genre": "장르",
"compilation": "컴필레이션",
"year": "년",
@@ -34,7 +35,8 @@
"participants": "추가 참가자",
"tags": "추가 태그",
"mappedTags": "매핑된 태그",
"rawTags": "원시 태그"
"rawTags": "원시 태그",
"missing": "누락"
},
"actions": {
"addToQueue": "나중에 재생",
@@ -56,6 +58,7 @@
"playCount": "재생 횟수",
"size": "크기",
"name": "이름",
"libraryName": "라이브러리",
"genre": "장르",
"compilation": "컴필레이션",
"year": "년",
@@ -73,7 +76,8 @@
"releaseType": "유형",
"grouping": "그룹",
"media": "미디어",
"mood": "분위기"
"mood": "분위기",
"missing": "누락"
},
"actions": {
"playAll": "재생",
@@ -105,7 +109,8 @@
"playCount": "재생 횟수",
"rating": "평가",
"genre": "장르",
"role": "역할"
"role": "역할",
"missing": "누락"
},
"roles": {
"albumartist": "앨범 아티스트 |||| 앨범 아티스트들",
@@ -120,7 +125,13 @@
"mixer": "믹서 |||| 믹서들",
"remixer": "리믹서 |||| 리믹서들",
"djmixer": "DJ 믹서 |||| DJ 믹서들",
"performer": "공연자 |||| 공연자들"
"performer": "공연자 |||| 공연자들",
"maincredit": "앨범 아티스트 또는 아티스트 |||| 앨범 아티스트들 또는 아티스트들"
},
"actions": {
"topSongs": "인기곡",
"shuffle": "셔플",
"radio": "라디오"
}
},
"user": {
@@ -137,19 +148,26 @@
"changePassword": "비밀번호를 변경할까요?",
"currentPassword": "현재 비밀번호",
"newPassword": "새 비밀번호",
"token": "토큰"
"token": "토큰",
"libraries": "라이브러리"
},
"helperTexts": {
"name": "이름 변경 사항은 다음 로그인 시에만 반영됨"
"name": "이름 변경 사항은 다음 로그인 시에만 반영됨",
"libraries": "이 사용자에 대한 특정 라이브러리를 선택하거나 기본 라이브러리를 사용하려면 비움"
},
"notifications": {
"created": "사용자 생성됨",
"updated": "사용자 업데이트됨",
"deleted": "사용자 삭제됨"
},
"validation": {
"librariesRequired": "관리자가 아닌 사용자의 경우 최소한 하나의 라이브러리를 선택해야 함"
},
"message": {
"listenBrainzToken": "ListenBrainz 사용자 토큰을 입력하세요.",
"clickHereForToken": "여기를 클릭하여 토큰을 얻으세요"
"clickHereForToken": "여기를 클릭하여 토큰을 얻으세요",
"selectAllLibraries": "모든 라이브러리 선택",
"adminAutoLibraries": "관리자 사용자는 자동으로 모든 라이브러리에 접속할 수 있음"
}
},
"player": {
@@ -192,12 +210,18 @@
"selectPlaylist": "재생목록 선택:",
"addNewPlaylist": "\"%{name}\" 만들기",
"export": "내보내기",
"saveQueue": "재생목록에 대기열 저장",
"makePublic": "공개 만들기",
"makePrivate": "비공개 만들기"
"makePrivate": "비공개 만들기",
"searchOrCreate": "재생목록을 검색하거나 입력하여 새 재생목록을 만드세요...",
"pressEnterToCreate": "새 재생목록을 만드려면 Enter 키를 누름",
"removeFromSelection": "선택에서 제거"
},
"message": {
"duplicate_song": "중복된 노래 추가",
"song_exist": "이미 재생목록에 존재하는 노래입니다. 중복을 추가할까요 아니면 건너뛸까요?"
"song_exist": "이미 재생목록에 존재하는 노래입니다. 중복을 추가할까요 아니면 건너뛸까요?",
"noPlaylistsFound": "재생목록을 찾을 수 없음",
"noPlaylists": "사용 가능한 재생 목록이 없음"
}
},
"radio": {
@@ -238,14 +262,68 @@
"fields": {
"path": "경로",
"size": "크기",
"libraryName": "라이브러리",
"updatedAt": "사라짐"
},
"actions": {
"remove": "제거"
"remove": "제거",
"remove_all": "모두 제거"
},
"notifications": {
"removed": "누락된 파일이 제거되었음"
}
},
"library": {
"name": "라이브러리 |||| 라이브러리들",
"fields": {
"name": "이름",
"path": "경로",
"remotePath": "원격 경로",
"lastScanAt": "최근 스캔",
"songCount": "노래",
"albumCount": "앨범",
"artistCount": "아티스트",
"totalSongs": "노래",
"totalAlbums": "앨범",
"totalArtists": "아티스트",
"totalFolders": "폴더",
"totalFiles": "파일",
"totalMissingFiles": "누락된 파일",
"totalSize": "총 크기",
"totalDuration": "기간",
"defaultNewUsers": "신규 사용자 기본값",
"createdAt": "생성됨",
"updatedAt": "업데이트됨"
},
"sections": {
"basic": "기본 정보",
"statistics": "통계"
},
"actions": {
"scan": "라이브러리 스캔",
"manageUsers": "자용자 접속 관리",
"viewDetails": "상세 보기"
},
"notifications": {
"created": "라이브러리가 성공적으로 생성됨",
"updated": "라이브러리가 성공적으로 업데이트됨",
"deleted": "라이브러리가 성공적으로 삭제됨",
"scanStarted": "라이브러리 스캔 스작됨",
"scanCompleted": "라이브러리 스캔 완료됨"
},
"validation": {
"nameRequired": "라이브러리 이름이 필요함",
"pathRequired": "라이브러리 경로가 필요함",
"pathNotDirectory": "라이브러리 경로는 디렉터리여야 함",
"pathNotFound": "라이브러리 경로를 찾을 수 없음",
"pathNotAccessible": "라이브러리 경로에 접근할 수 없음",
"pathInvalid": "잘못된 라이브러리 경로"
},
"messages": {
"deleteConfirm": "이 라이브러리를 삭제할까요? 삭제하면 연결된 모든 데이터와 사용자 접속 권한이 제거됩니다.",
"scanInProgress": "스캔 진행 중...",
"noLibrariesAssigned": "이 사용자에게 할당된 라이브러리가 없음"
}
}
},
"ra": {
@@ -398,11 +476,15 @@
"transcodingDisabled": "웹 인터페이스를 통한 트랜스코딩 구성 변경은 보안상의 이유로 비활성화되어 있습니다. 트랜스코딩 옵션을 변경(편집 또는 추가)하려면, %{config} 구성 옵션으로 서버를 다시 시작하세요.",
"transcodingEnabled": "Navidrome은 현재 %{config}로 실행 중이므로 웹 인터페이스를 사용하여 트랜스코딩 설정에서 시스템 명령을 실행할 수 있습니다. 보안상의 이유로 비활성화하고 트랜스코딩 옵션을 구성할 때만 활성화하는 것이 좋습니다.",
"songsAddedToPlaylist": "1 개의 노래를 재생목록에 추가하였음 |||| %{smart_count} 개의 노래를 재생 목록에 추가하였음",
"noSimilarSongsFound": "비슷한 노래를 찾을 수 없음",
"noTopSongsFound": "인기곡을 찾을 수 없음",
"noPlaylistsAvailable": "사용 가능한 노래 없음",
"delete_user_title": "사용자 '%{name}' 삭제",
"delete_user_content": "이 사용자와 해당 사용자의 모든 데이터(재생 목록 및 환경 설정 포함)를 삭제할까요?",
"remove_missing_title": "누락된 파일들 제거",
"remove_missing_content": "선택한 누락된 파일을 데이터베이스에서 삭제할까요? 삭제하면 재생 횟수 및 평점을 포함하여 해당 파일에 대한 모든 참조가 영구적으로 삭제됩니다.",
"remove_all_missing_title": "누락된 모든 파일 제거",
"remove_all_missing_content": "데이터베이스에서 누락된 모든 파일을 제거할까요? 이렇게 하면 해당 게임의 플레이 횟수와 평점을 포함한 모든 참조 내용이 영구적으로 삭제됩니다.",
"notifications_blocked": "브라우저 설정에서 이 사이트의 알림을 차단하였음",
"notifications_not_available": "이 브라우저는 데스크톱 알림을 지원하지 않거나 https를 통해 Navidrome에 접속하고 있지 않음",
"lastfmLinkSuccess": "Last.fm이 성공적으로 연결되었고 스크로블링이 활성화되었음",
@@ -429,6 +511,12 @@
},
"menu": {
"library": "라이브러리",
"librarySelector": {
"allLibraries": "모든 라이브러리 (%{count})",
"multipleLibraries": "%{selected} / %{total} 라이브러리",
"selectLibraries": "라이브러리 선택",
"none": "없음"
},
"settings": "설정",
"version": "버전",
"theme": "테마",
@@ -491,6 +579,21 @@
"disabled": "비활성화",
"waiting": "대기중"
}
},
"tabs": {
"about": "정보",
"config": "구성"
},
"config": {
"configName": "구성 이름",
"environmentVariable": "환경 변수",
"currentValue": "현재 값",
"configurationFile": "구성 파일",
"exportToml": "구성 내보내기 (TOML)",
"exportSuccess": "TOML 형식으로 클립보드로 내보낸 구성",
"exportFailed": "구성 복사 실패",
"devFlagsHeader": "개발 플래그 (변경/삭제 가능)",
"devFlagsComment": "이는 실험적 설정이므로 향후 버전에서 제거될 수 있음"
}
},
"activity": {
@@ -499,7 +602,15 @@
"quickScan": "빠른 스캔",
"fullScan": "전체 스캔",
"serverUptime": "서버 가동 시간",
"serverDown": "오프라인"
"serverDown": "오프라인",
"scanType": "유형",
"status": "스캔 오류",
"elapsedTime": "경과 시간"
},
"nowPlaying": {
"title": "현재 재생 중",
"empty": "재생 중인 콘텐츠 없음",
"minutesAgo": "%{smart_count} 분 전"
},
"help": {
"title": "Navidrome 단축키",

View File

@@ -5,7 +5,7 @@
"name": "Nummer |||| Nummers",
"fields": {
"albumArtist": "Album Artiest",
"duration": "Lengte",
"duration": "Afspeelduur",
"trackNumber": "Nummer #",
"playCount": "Aantal keren afgespeeld",
"title": "Titel",
@@ -35,7 +35,8 @@
"rawTags": "Onbewerkte tags",
"bitDepth": "Bit diepte",
"sampleRate": "Sample waarde",
"missing": "Ontbrekend"
"missing": "Ontbrekend",
"libraryName": "Bibliotheek"
},
"actions": {
"addToQueue": "Voeg toe aan wachtrij",
@@ -44,7 +45,8 @@
"shuffleAll": "Shuffle alles",
"download": "Downloaden",
"playNext": "Volgende",
"info": "Meer info"
"info": "Meer info",
"showInPlaylist": "Toon in afspeellijst"
}
},
"album": {
@@ -55,7 +57,7 @@
"duration": "Afspeelduur",
"songCount": "Nummers",
"playCount": "Aantal keren afgespeeld",
"name": "Naam",
"name": "Titel",
"genre": "Genre",
"compilation": "Compilatie",
"year": "Jaar",
@@ -65,9 +67,9 @@
"createdAt": "Datum toegevoegd",
"size": "Grootte",
"originalDate": "Origineel",
"releaseDate": "Uitgegeven",
"releaseDate": "Uitgave",
"releases": "Uitgave |||| Uitgaven",
"released": "Uitgegeven",
"released": "Uitgave",
"recordLabel": "Label",
"catalogNum": "Catalogus nummer",
"releaseType": "Type",
@@ -75,7 +77,8 @@
"media": "Media",
"mood": "Sfeer",
"date": "Opnamedatum",
"missing": "Ontbrekend"
"missing": "Ontbrekend",
"libraryName": "Bibliotheek"
},
"actions": {
"playAll": "Afspelen",
@@ -123,7 +126,13 @@
"mixer": "Mixer |||| Mixers",
"remixer": "Remixer |||| Remixers",
"djmixer": "DJ Mixer |||| DJ Mixers",
"performer": "Performer |||| Performers"
"performer": "Performer |||| Performers",
"maincredit": "Album Artiest of Artiest |||| Album Artiesten or Artiesten"
},
"actions": {
"shuffle": "Shuffle",
"radio": "Radio",
"topSongs": "Beste nummers"
}
},
"user": {
@@ -132,7 +141,7 @@
"userName": "Gebruikersnaam",
"isAdmin": "Is beheerder",
"lastLoginAt": "Laatst ingelogd op",
"updatedAt": "Laatst gewijzigd op",
"updatedAt": "Laatst bijgewerkt op",
"name": "Naam",
"password": "Wachtwoord",
"createdAt": "Aangemaakt op",
@@ -140,19 +149,26 @@
"currentPassword": "Huidig wachtwoord",
"newPassword": "Nieuw wachtwoord",
"token": "Token",
"lastAccessAt": "Meest recente toegang"
"lastAccessAt": "Meest recente toegang",
"libraries": "Bibliotheken"
},
"helperTexts": {
"name": "Naamswijziging wordt pas zichtbaar bij de volgende login"
"name": "Naamswijziging wordt pas zichtbaar bij de volgende login",
"libraries": "Selecteer specifieke bibliotheken voor deze gebruiker, of laat leeg om de standaardbiblliotheken te gebruiken"
},
"notifications": {
"created": "Aangemaakt door gebruiker",
"updated": "Gewijzigd door gebruiker",
"deleted": "Gewist door gebruiker"
"updated": "Bijgewerkt door gebruiker",
"deleted": "Gebruiker verwijderd"
},
"message": {
"listenBrainzToken": "Vul je ListenBrainz gebruikers-token in.",
"clickHereForToken": "Klik hier voor je token"
"clickHereForToken": "Klik hier voor je token",
"selectAllLibraries": "Selecteer alle bibliotheken",
"adminAutoLibraries": "Admin gebruikers hebben automatisch toegang tot alle bibliotheken"
},
"validation": {
"librariesRequired": "Minstens één bibliotheek moet geselecteerd worden voor niet-admin gebruikers"
}
},
"player": {
@@ -181,10 +197,10 @@
"name": "Afspeellijst |||| Afspeellijsten",
"fields": {
"name": "Titel",
"duration": "Lengte",
"duration": "Afspeelduur",
"ownerName": "Eigenaar",
"public": "Publiek",
"updatedAt": "Laatst gewijzigd op",
"updatedAt": "Laatst bijgewerkt op",
"createdAt": "Aangemaakt op",
"songCount": "Nummers",
"comment": "Commentaar",
@@ -197,11 +213,16 @@
"export": "Exporteer",
"makePublic": "Openbaar maken",
"makePrivate": "Privé maken",
"saveQueue": "Bewaar wachtrij als playlist"
"saveQueue": "Bewaar wachtrij als playlist",
"searchOrCreate": "Zoek afspeellijsten of typ om een nieuwe te starten...",
"pressEnterToCreate": "Druk Enter om nieuwe afspeellijst te maken",
"removeFromSelection": "Verwijder van selectie"
},
"message": {
"duplicate_song": "Dubbele nummers toevoegen",
"song_exist": "Er komen nummers dubbel in de afspeellijst. Wil je de dubbele nummers toevoegen of overslaan?"
"song_exist": "Er komen nummers dubbel in de afspeellijst. Wil je de dubbele nummers toevoegen of overslaan?",
"noPlaylistsFound": "Geen playlists gevonden",
"noPlaylists": "Geen playlists beschikbaar"
}
},
"radio": {
@@ -210,8 +231,8 @@
"name": "Naam",
"streamUrl": "Stream URL",
"homePageUrl": "Hoofdpagina URL",
"updatedAt": "Geüpdate op",
"createdAt": "Gecreëerd op"
"updatedAt": "Bijgewerkt op",
"createdAt": "Aangemaakt op"
},
"actions": {
"playNow": "Speel nu"
@@ -229,8 +250,8 @@
"visitCount": "Bezocht",
"format": "Formaat",
"maxBitRate": "Max. bitrate",
"updatedAt": "Geüpdatet op",
"createdAt": "Gecreëerd op",
"updatedAt": "Bijgewerkt op",
"createdAt": "Aangemaakt op",
"downloadable": "Downloads toestaan?"
}
},
@@ -239,7 +260,8 @@
"fields": {
"path": "Pad",
"size": "Grootte",
"updatedAt": "Verdwenen op"
"updatedAt": "Verdwenen op",
"libraryName": "Bibliotheek"
},
"actions": {
"remove": "Verwijder",
@@ -249,6 +271,58 @@
"removed": "Ontbrekende bestanden verwijderd"
},
"empty": "Geen ontbrekende bestanden"
},
"library": {
"name": "Bibliotheek |||| Bibliotheken",
"fields": {
"name": "Naam",
"path": "Pad",
"remotePath": "Extern pad",
"lastScanAt": "Laatste scan",
"songCount": "Nummers",
"albumCount": "Albums",
"artistCount": "Artiesten",
"totalSongs": "Nummers",
"totalAlbums": "Albums",
"totalArtists": "Artiesten",
"totalFolders": "Mappen",
"totalFiles": "Bestanden",
"totalMissingFiles": "Ontbrekende bestanden",
"totalSize": "Totale bestandsgrootte",
"totalDuration": "Afspeelduur",
"defaultNewUsers": "Standaard voor nieuwe gebruikers",
"createdAt": "Aangemaakt",
"updatedAt": "Bijgewerkt"
},
"sections": {
"basic": "Basisinformatie",
"statistics": "Statistieken"
},
"actions": {
"scan": "Scan bibliotheek",
"manageUsers": "Beheer gebruikerstoegang",
"viewDetails": "Bekijk details"
},
"notifications": {
"created": "Bibliotheek succesvol aangemaakt",
"updated": "Bibliotheek succesvol bijgewerkt",
"deleted": "Bibliotheek succesvol verwijderd",
"scanStarted": "Bibliotheekscan is gestart",
"scanCompleted": "Bibliotheekscan is voltooid"
},
"validation": {
"nameRequired": "Bibliotheek naam is vereist",
"pathRequired": "Pad naar bibliotheek is vereist",
"pathNotDirectory": "Pad naar bibliotheek moet een map zijn",
"pathNotFound": "Pad naar bibliotheek niet gevonden",
"pathNotAccessible": "Pad naar bibliotheek is niet toegankelijk",
"pathInvalid": "Ongeldig pad naar bibliotheek"
},
"messages": {
"deleteConfirm": "Weet je zeker dat je deze bibliotheek wil verwijderen? Dit verwijdert ook alle gerelateerde data en gebruikerstoegang.",
"scanInProgress": "Scan is bezig...",
"noLibrariesAssigned": "Geen bibliotheken aan deze gebruiker toegewezen"
}
}
},
"ra": {
@@ -430,7 +504,9 @@
"remove_missing_title": "Verwijder ontbrekende bestanden",
"remove_missing_content": "Weet je zeker dat je alle ontbrekende bestanden van de database wil verwijderen? Dit wist permanent al hun referenties inclusief afspeel tellers en beoordelingen.",
"remove_all_missing_title": "Verwijder alle ontbrekende bestanden",
"remove_all_missing_content": "Weet je zeker dat je alle ontbrekende bestanden van de database wil verwijderen? Dit wist permanent al hun referenties inclusief afspeel tellers en beoordelingen."
"remove_all_missing_content": "Weet je zeker dat je alle ontbrekende bestanden van de database wil verwijderen? Dit wist permanent al hun referenties inclusief afspeel tellers en beoordelingen.",
"noSimilarSongsFound": "Geen vergelijkbare nummers gevonden",
"noTopSongsFound": "Geen beste nummers gevonden"
},
"menu": {
"library": "Bibliotheek",
@@ -459,7 +535,13 @@
"albumList": "Albums",
"about": "Over",
"playlists": "Afspeellijsten",
"sharedPlaylists": "Gedeelde afspeellijsten"
"sharedPlaylists": "Gedeelde afspeellijsten",
"librarySelector": {
"allLibraries": "Alle bibliotheken (%{count})",
"multipleLibraries": "%{selected} van %{total} bibliotheken",
"selectLibraries": "Selecteer bibliotheken",
"none": "Geen"
}
},
"player": {
"playListsText": "Wachtrij",
@@ -468,7 +550,7 @@
"notContentText": "Geen muziek",
"clickToPlayText": "Klik om af te spelen",
"clickToPauseText": "Klik om te pauzeren",
"nextTrackText": "Volgende",
"nextTrackText": "Volgend nummer",
"previousTrackText": "Vorige",
"reloadText": "Herladen",
"volumeText": "Volume",
@@ -496,11 +578,26 @@
"disabled": "Uitgeschakeld",
"waiting": "Wachten"
}
},
"tabs": {
"about": "Over",
"config": "Configuratie"
},
"config": {
"configName": "Config Naam",
"environmentVariable": "Omgevingsvariabele",
"currentValue": "Huidige waarde",
"configurationFile": "Configuratiebestand",
"exportToml": "Exporteer configuratie (TOML)",
"exportSuccess": "Configuratie geëxporteerd naar klembord in TOML formaat",
"exportFailed": "Kopiëren van configuratie mislukt",
"devFlagsHeader": "Ontwikkelaarsinstellingen (onder voorbehoud)",
"devFlagsComment": "Dit zijn experimentele instellingen en worden mogelijk in latere versies verwijderd"
}
},
"activity": {
"title": "Activiteit",
"totalScanned": "Totaal gescande folders",
"totalScanned": "Totaal gescande mappen",
"quickScan": "Snelle scan",
"fullScan": "Volledige scan",
"serverUptime": "Server uptime",
@@ -522,5 +619,10 @@
"toggle_love": "Voeg toe aan favorieten",
"current_song": "Ga naar huidig nummer"
}
},
"nowPlaying": {
"title": "Speelt nu",
"empty": "Er wordt niets afgespeed",
"minutesAgo": "%{smart_count} minuut geleden |||| %{smart_count} minuten geleden"
}
}

View File

@@ -300,6 +300,8 @@
},
"actions": {
"scan": "Scanear Biblioteca",
"quickScan": "Scan Rápido",
"fullScan": "Scan Completo",
"manageUsers": "Gerenciar Acesso do Usuário",
"viewDetails": "Ver Detalhes"
},
@@ -308,6 +310,9 @@
"updated": "Biblioteca atualizada com sucesso",
"deleted": "Biblioteca excluída com sucesso",
"scanStarted": "Scan da biblioteca iniciada",
"quickScanStarted": "Scan rápido iniciado",
"fullScanStarted": "Scan completo iniciado",
"scanError": "Erro ao iniciar o scan. Verifique os logs",
"scanCompleted": "Scan da biblioteca concluída"
},
"validation": {
@@ -598,11 +603,12 @@
"activity": {
"title": "Atividade",
"totalScanned": "Total de pastas scaneadas",
"quickScan": "Scan rápido",
"fullScan": "Scan completo",
"quickScan": "Rápido",
"fullScan": "Completo",
"selectiveScan": "Seletivo",
"serverUptime": "Uptime do servidor",
"serverDown": "DESCONECTADO",
"scanType": "Tipo",
"scanType": "Último Scan",
"status": "Erro",
"elapsedTime": "Duração"
},

View File

@@ -26,7 +26,17 @@
"bpm": "BPM",
"playDate": "เล่นล่าสุด",
"channels": "ช่อง",
"createdAt": "เพิ่มเมื่อ"
"createdAt": "เพิ่มเมื่อ",
"grouping": "จัดกลุ่ม",
"mood": "อารมณ์",
"participants": "ผู้มีส่วนร่วม",
"tags": "แทกเพิ่มเติม",
"mappedTags": "แมพแทก",
"rawTags": "แทกเริ่มต้น",
"bitDepth": "Bit depth",
"sampleRate": "แซมเปิ้ลเรต",
"missing": "หายไป",
"libraryName": "ห้องสมุด"
},
"actions": {
"addToQueue": "เพิ่มในคิว",
@@ -35,7 +45,8 @@
"shuffleAll": "สุ่มทั้งหมด",
"download": "ดาวน์โหลด",
"playNext": "เล่นถัดไป",
"info": "ดูรายละเอียด"
"info": "ดูรายละเอียด",
"showInPlaylist": "แสดงในเพลย์ลิสต์"
}
},
"album": {
@@ -58,7 +69,16 @@
"originalDate": "วันที่เริ่ม",
"releaseDate": "เผยแพร่เมื่อ",
"releases": "เผยแพร่ |||| เผยแพร่",
"released": "เผยแพร่เมื่อ"
"released": "เผยแพร่เมื่อ",
"recordLabel": "ป้าย",
"catalogNum": "หมายเลขแคตาล็อก",
"releaseType": "ประเภท",
"grouping": "จัดกลุ่ม",
"media": "มีเดีย",
"mood": "อารมณ์",
"date": "บันทึกเมื่อ",
"missing": "หายไป",
"libraryName": "ห้องสมุด"
},
"actions": {
"playAll": "เล่นทั้งหมด",
@@ -89,7 +109,30 @@
"playCount": "เล่นแล้ว",
"rating": "ความนิยม",
"genre": "ประเภท",
"size": "ขนาด"
"size": "ขนาด",
"role": "Role",
"missing": "หายไป"
},
"roles": {
"albumartist": "ศิลปินอัลบั้ม |||| ศิลปินอัลบั้ม",
"artist": "ศิลปิน |||| ศิลปิน",
"composer": "ผู้แต่ง |||| ผู้แต่ง",
"conductor": "คอนดักเตอร์ |||| คอนดักเตอร์",
"lyricist": "เนื้อเพลง |||| เนื้อเพลง",
"arranger": "ผู้ดำเนินการ |||| ผู้ดำเนินการ",
"producer": "ผู้จัด |||| ผู้จัด",
"director": "ไดเรกเตอร์ |||| ไดเรกเตอร์",
"engineer": "วิศวกร |||| วิศวกร",
"mixer": "มิกเซอร์ |||| มิกเซอร์",
"remixer": "รีมิกเซอร์ |||| รีมิกเซอร์",
"djmixer": "ดีเจมิกเซอร์ |||| ดีเจมิกเซอร์",
"performer": "ผู้เล่น |||| ผู้เล่น",
"maincredit": "ศิลปิน |||| ศิลปิน"
},
"actions": {
"shuffle": "เล่นสุ่ม",
"radio": "วิทยุ",
"topSongs": "เพลงยอดนิยม"
}
},
"user": {
@@ -106,10 +149,12 @@
"currentPassword": "รหัสผ่านปัจจุบัน",
"newPassword": "รหัสผ่านใหม่",
"token": "โทเคน",
"lastAccessAt": "เข้าใช้ล่าสุด"
"lastAccessAt": "เข้าใช้ล่าสุด",
"libraries": "ห้องสมุด"
},
"helperTexts": {
"name": "การเปลี่ยนชื่อจะมีผลในการล็อกอินครั้งถัดไป"
"name": "การเปลี่ยนชื่อจะมีผลในการล็อกอินครั้งถัดไป",
"libraries": "เลือกห้องสมุดสำหรับผู้ใช้นี้หรือปล่อยว่างเพื่อใช้ห้องสมุดเริ่มต้น"
},
"notifications": {
"created": "สร้างชื่อผู้ใช้",
@@ -118,7 +163,12 @@
},
"message": {
"listenBrainzToken": "ใส่โทเคน ListenBrainz ของคุณ",
"clickHereForToken": "กดที่นี่เพื่อรับโทเคนของคุณ"
"clickHereForToken": "กดที่นี่เพื่อรับโทเคนของคุณ",
"selectAllLibraries": "เลือกห้องสมุดทั้งหมด",
"adminAutoLibraries": "ผู้ดูแลเข้าถึงห้องสมุดทั้งหมดโดยอัตโนมัติ"
},
"validation": {
"librariesRequired": "ต้องเลือกห้องสมุด 1 ห้อง สำหรับผู้ใช้ที่ไม่ใช่ผู้ดูแล"
}
},
"player": {
@@ -162,11 +212,17 @@
"addNewPlaylist": "สร้าง \"%{name}\"",
"export": "ส่งออก",
"makePublic": "ทำเป็นสาธารณะ",
"makePrivate": "ทำเป็นส่วนตัว"
"makePrivate": "ทำเป็นส่วนตัว",
"saveQueue": "บันทึกคิวลงเพลย์ลิสต์",
"searchOrCreate": "ค้นหาเพลย์ลิสต์หรือพิมพ์เพื่อสร้างใหม่",
"pressEnterToCreate": "กด Enter เพื่อสร้างเพลย์ลิสต์",
"removeFromSelection": "เอาออกจากที่เลือกไว้"
},
"message": {
"duplicate_song": "เพิ่มเพลงซ้ำ",
"song_exist": "เพิ่มเพลงซ้ำกันในเพลย์ลิสต์ คุณจะเพิ่มเพลงต่อหรือข้าม"
"song_exist": "เพิ่มเพลงซ้ำกันในเพลย์ลิสต์ คุณจะเพิ่มเพลงต่อหรือข้าม",
"noPlaylistsFound": "ไม่พบเพลย์ลิสต์",
"noPlaylists": "ไม่มีเพลย์ลิสต์อยู่"
}
},
"radio": {
@@ -198,6 +254,75 @@
"createdAt": "สร้างเมื่อ",
"downloadable": "อนุญาตให้ดาวโหลด?"
}
},
"missing": {
"name": "ไฟล์ที่หายไป |||| ไฟล์ที่หายไป",
"fields": {
"path": "พาร์ท",
"size": "ขนาด",
"updatedAt": "หายไปจาก",
"libraryName": "ห้องสมุด"
},
"actions": {
"remove": "เอาออก",
"remove_all": "เอาออกทั้งหมด"
},
"notifications": {
"removed": "เอาไฟล์ที่หายไปออกแล้ว"
},
"empty": "ไม่มีไฟล์หาย"
},
"library": {
"name": "ห้องสมุด |||| ห้องสมุด",
"fields": {
"name": "ชื่อ",
"path": "พาร์ท",
"remotePath": "รีโมทพาร์ท",
"lastScanAt": "สแกนล่าสุด",
"songCount": "เพลง",
"albumCount": "อัลบัม",
"artistCount": "ศิลปิน",
"totalSongs": "เพลง",
"totalAlbums": "อัลบัม",
"totalArtists": "ศิลปิน",
"totalFolders": "แฟ้ม",
"totalFiles": "ไฟล์",
"totalMissingFiles": "ไฟล์ที่หายไป",
"totalSize": "ขนาดทั้งหมด",
"totalDuration": "ความยาว",
"defaultNewUsers": "ค่าเริ่มต้นผู้ใช้ใหม่",
"createdAt": "สร้าง",
"updatedAt": "อัพเดท"
},
"sections": {
"basic": "ข้อมูลเบื้องต้น",
"statistics": "สถิติ"
},
"actions": {
"scan": "สแกนห้องสมุด",
"manageUsers": "ตั้งค่าการเข้าถึง",
"viewDetails": "ดูรายละเอียด"
},
"notifications": {
"created": "สร้างห้องสมุดเรียบร้อย",
"updated": "อัพเดทห้องสมุดเรียบร้อย",
"deleted": "ลบห้องสมุดเพลงเรียบร้อยแล้ว",
"scanStarted": "เริ่มสแกนห้องสมุด",
"scanCompleted": "สแกนห้องสมุดเสร็จแล้ว"
},
"validation": {
"nameRequired": "ต้องใส่ชื่อห้องสมุดเพลง",
"pathRequired": "ต้องใส่พาร์ทของห้องสมุด",
"pathNotDirectory": "พาร์ทของห้องสมุดต้องเป็นแฟ้ม",
"pathNotFound": "ไม่เจอพาร์ทของห้องสมุด",
"pathNotAccessible": "ไม่สามารถเข้าพาร์ทของห้องสมุด",
"pathInvalid": "พาร์ทห้องสมุดไม่ถูก"
},
"messages": {
"deleteConfirm": "คุณแน่ใจว่าจะลบห้องสมุดนี้? นี่จะลบข้อมูลและการเข้าถึงของผู้ใช้ที่เกี่ยวข้องทั้งหมด",
"scanInProgress": "กำลังสแกน...",
"noLibrariesAssigned": "ไม่มีห้องสมุดสำหรับผู้ใช้นี้"
}
}
},
"ra": {
@@ -375,7 +500,13 @@
"shareSuccess": "คัดลอก URL ไปคลิปบอร์ด: %{url}",
"shareFailure": "คัดลอก URL %{url} ไปคลิปบอร์ดผิดพลาด",
"downloadDialogTitle": "ดาวโหลด %{resource} '%{name}' (%{size})",
"shareCopyToClipboard": "คัดลอกไปคลิปบอร์ด: Ctrl+C, Enter"
"shareCopyToClipboard": "คัดลอกไปคลิปบอร์ด: Ctrl+C, Enter",
"remove_missing_title": "ลบรายการไฟล์ที่หายไป",
"remove_missing_content": "คุณแน่ใจว่าจะเอารายการไฟล์ที่หายไปออกจากดาต้าเบส นี่จะเป็นการลบข้อมูลอ้างอิงทั้งหมดของไฟล์ออกอย่างถาวร",
"remove_all_missing_title": "เอารายการไฟล์ที่หายไปออกทั้งหมด",
"remove_all_missing_content": "คุณแน่ใจว่าจะเอารายการไฟล์ที่หายไปออกจากดาต้าเบส นี่จะเป็นการลบข้อมูลอ้างอิงทั้งหมดของไฟล์ออกอย่างถาวร",
"noSimilarSongsFound": "ไม่มีเพลงคล้ายกัน",
"noTopSongsFound": "ไม่พบเพลงยอดนิยม"
},
"menu": {
"library": "ห้องสมุดเพลง",
@@ -404,7 +535,13 @@
"albumList": "อัลบั้ม",
"about": "เกี่ยวกับ",
"playlists": "เพลย์ลิสต์",
"sharedPlaylists": "เพลย์ลิสต์ที่แบ่งปัน"
"sharedPlaylists": "เพลย์ลิสต์ที่แบ่งปัน",
"librarySelector": {
"allLibraries": "ห้องสมุด (%{count}) ห้อง",
"multipleLibraries": "%{selected} ของ %{total} ห้องสมุด",
"selectLibraries": "เลือกห้องสมุด",
"none": "ไม่มี"
}
},
"player": {
"playListsText": "คิวเล่น",
@@ -441,6 +578,21 @@
"disabled": "ปิดการทำงาน",
"waiting": "รอ"
}
},
"tabs": {
"about": "เกี่ยวกับ",
"config": "การตั้งค่า"
},
"config": {
"configName": "ชื่อการตั้งค่า",
"environmentVariable": "ค่าทั่วไป",
"currentValue": "ค่าปัจจุบัน",
"configurationFile": "ไฟล์การตั้งค่า",
"exportToml": "นำออกการตั้งค่า (TOML)",
"exportSuccess": "นำออกการตั้งค่าไปยังคลิปบอร์ดในรูปแบบ TOML แล้ว",
"exportFailed": "คัดลอกการตั้งค่าล้มเหลว",
"devFlagsHeader": "ปักธงการพัฒนา (อาจมีการเปลี่ยน/เอาออก)",
"devFlagsComment": "การตั้งค่านี้อยู่ในช่วงทดลองและอาจจะมีการเอาออกในเวอร์ชั่นหลัง"
}
},
"activity": {
@@ -449,7 +601,10 @@
"quickScan": "สแกนแบบเร็ว",
"fullScan": "สแกนทั้งหมด",
"serverUptime": "เซิร์ฟเวอร์ออนไลน์นาน",
"serverDown": "ออฟไลน์"
"serverDown": "ออฟไลน์",
"scanType": "ประเภท",
"status": "สแกนผิดพลาด",
"elapsedTime": "เวลาที่ใช้"
},
"help": {
"title": "คีย์ลัด Navidrome",
@@ -464,5 +619,10 @@
"toggle_love": "เพิ่มเพลงนี้ไปยังรายการโปรด",
"current_song": "ไปยังเพลงปัจจุบัน"
}
},
"nowPlaying": {
"title": "กำลังเล่น",
"empty": "ไม่มีเพลงเล่น",
"minutesAgo": "%{smart_count} นาทีที่แล้ว |||| %{smart_count} นาทีที่แล้ว"
}
}

View File

File diff suppressed because it is too large

View File

@@ -26,24 +26,8 @@ var (
ErrAlreadyScanning = errors.New("already scanning")
)
type Scanner interface {
// ScanAll starts a full scan of the music library. This is a blocking operation.
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
Status(context.Context) (*StatusInfo, error)
}
type StatusInfo struct {
Scanning bool
LastScan time.Time
Count uint32
FolderCount uint32
LastError string
ScanType string
ElapsedTime time.Duration
}
func New(rootCtx context.Context, ds model.DataStore, cw artwork.CacheWarmer, broker events.Broker,
pls core.Playlists, m metrics.Metrics) Scanner {
pls core.Playlists, m metrics.Metrics) model.Scanner {
c := &controller{
rootCtx: rootCtx,
ds: ds,
@@ -65,9 +49,10 @@ func (s *controller) getScanner() scanner {
return &scannerImpl{ds: s.ds, cw: s.cw, pls: s.pls}
}
// CallScan starts an in-process scan of the music library.
// CallScan starts an in-process scan of specific library/folder pairs.
// If targets is empty, it scans all libraries.
// This is meant to be called from the command line (see cmd/scan.go).
func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullScan bool) (<-chan *ProgressInfo, error) {
func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullScan bool, targets []model.ScanTarget) (<-chan *ProgressInfo, error) {
release, err := lockScan(ctx)
if err != nil {
return nil, err
@@ -79,7 +64,7 @@ func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullS
go func() {
defer close(progress)
scanner := &scannerImpl{ds: ds, cw: artwork.NoopCacheWarmer(), pls: pls}
scanner.scanAll(ctx, fullScan, progress)
scanner.scanFolders(ctx, fullScan, targets, progress)
}()
return progress, nil
}
@@ -99,8 +84,11 @@ type ProgressInfo struct {
ForceUpdate bool
}
// scanner defines the interface for different scanner implementations.
// This allows for swapping between in-process and external scanners.
type scanner interface {
scanAll(ctx context.Context, fullScan bool, progress chan<- *ProgressInfo)
// scanFolders performs the actual scanning of folders. If targets is nil, it scans all libraries.
scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo)
}
type controller struct {
@@ -158,7 +146,7 @@ func (s *controller) getScanInfo(ctx context.Context) (scanType string, elapsed
return scanType, elapsed, lastErr
}
func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
func (s *controller) Status(ctx context.Context) (*model.ScannerStatus, error) {
lastScanTime, err := s.getLastScanTime(ctx)
if err != nil {
return nil, fmt.Errorf("getting last scan time: %w", err)
@@ -167,7 +155,7 @@ func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
scanType, elapsed, lastErr := s.getScanInfo(ctx)
if running.Load() {
status := &StatusInfo{
status := &model.ScannerStatus{
Scanning: true,
LastScan: lastScanTime,
Count: s.count.Load(),
@@ -183,7 +171,7 @@ func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
if err != nil {
return nil, fmt.Errorf("getting library stats: %w", err)
}
return &StatusInfo{
return &model.ScannerStatus{
Scanning: false,
LastScan: lastScanTime,
Count: uint32(count),
@@ -208,6 +196,10 @@ func (s *controller) getCounters(ctx context.Context) (int64, int64, error) {
}
func (s *controller) ScanAll(requestCtx context.Context, fullScan bool) ([]string, error) {
return s.ScanFolders(requestCtx, fullScan, nil)
}
func (s *controller) ScanFolders(requestCtx context.Context, fullScan bool, targets []model.ScanTarget) ([]string, error) {
release, err := lockScan(requestCtx)
if err != nil {
return nil, err
@@ -224,7 +216,7 @@ func (s *controller) ScanAll(requestCtx context.Context, fullScan bool) ([]strin
go func() {
defer close(progress)
scanner := s.getScanner()
scanner.scanAll(ctx, fullScan, progress)
scanner.scanFolders(ctx, fullScan, targets, progress)
}()
// Wait for the scan to finish, sending progress events to all connected clients
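
For orientation, here is a minimal sketch of how command-line code (in the spirit of cmd/scan.go) might drive the new selective-scan entry point. It follows the signatures shown above; the package name, the runScan helper, and the way the []model.ScanTarget slice is obtained are assumptions, since target parsing is not part of this file.

package scancmd // hypothetical package, for illustration only

import (
	"context"

	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/scanner"
)

// runScan starts an in-process scan and drains the progress channel.
// Passing a nil (or empty) targets slice scans all libraries, per the
// CallScan documentation above.
func runScan(ctx context.Context, ds model.DataStore, pls core.Playlists, targets []model.ScanTarget) error {
	progress, err := scanner.CallScan(ctx, ds, pls, false /* fullScan */, targets)
	if err != nil {
		return err // e.g. scanner.ErrAlreadyScanning if another scan already holds the lock
	}
	for info := range progress {
		if info.Error != "" {
			log.Warn(ctx, "Scan reported an error", "error", info.Error)
		}
	}
	return nil
}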

View File

@@ -9,6 +9,7 @@ import (
"github.com/navidrome/navidrome/core/artwork"
"github.com/navidrome/navidrome/core/metrics"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/server/events"
@@ -20,7 +21,7 @@ import (
var _ = Describe("Controller", func() {
var ctx context.Context
var ds *tests.MockDataStore
var ctrl scanner.Scanner
var ctrl model.Scanner
Describe("Status", func() {
BeforeEach(func() {

View File

@@ -11,7 +11,7 @@ import (
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
. "github.com/navidrome/navidrome/utils/gg"
"github.com/navidrome/navidrome/model"
)
// scannerExternal is a scanner that runs an external process to do the scanning. It is used to avoid
@@ -23,19 +23,42 @@ import (
// process will forward them to the caller.
type scannerExternal struct{}
func (s *scannerExternal) scanAll(ctx context.Context, fullScan bool, progress chan<- *ProgressInfo) {
func (s *scannerExternal) scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
s.scan(ctx, fullScan, targets, progress)
}
func (s *scannerExternal) scan(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
exe, err := os.Executable()
if err != nil {
progress <- &ProgressInfo{Error: fmt.Sprintf("failed to get executable path: %s", err)}
return
}
log.Debug(ctx, "Spawning external scanner process", "fullScan", fullScan, "path", exe)
cmd := exec.CommandContext(ctx, exe, "scan",
// Build command arguments
args := []string{
"scan",
"--nobanner", "--subprocess",
"--configfile", conf.Server.ConfigFile,
"--datafolder", conf.Server.DataFolder,
"--cachefolder", conf.Server.CacheFolder,
If(fullScan, "--full", ""))
}
// Add targets if provided
if len(targets) > 0 {
for _, target := range targets {
args = append(args, "-t", target.String())
}
log.Debug(ctx, "Spawning external scanner process with targets", "fullScan", fullScan, "path", exe, "targets", targets)
} else {
log.Debug(ctx, "Spawning external scanner process", "fullScan", fullScan, "path", exe)
}
// Add full scan flag if needed
if fullScan {
args = append(args, "--full")
}
cmd := exec.CommandContext(ctx, exe, args...)
in, out := io.Pipe()
defer in.Close()
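
Put together, the argument building above means the child scanner is spawned with a command line roughly of this shape (placeholders in angle brackets; the exact format produced by target.String() is defined elsewhere and not shown in this diff):

	<navidrome binary> scan --nobanner --subprocess --configfile <config> --datafolder <data> --cachefolder <cache> [-t <target> ...] [--full]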

View File

@@ -15,9 +15,7 @@ import (
"github.com/navidrome/navidrome/utils/chrono"
)
func newFolderEntry(job *scanJob, path string) *folderEntry {
id := model.FolderID(job.lib, path)
info := job.popLastUpdate(id)
func newFolderEntry(job *scanJob, id, path string, updTime time.Time, hash string) *folderEntry {
f := &folderEntry{
id: id,
job: job,
@@ -25,8 +23,8 @@ func newFolderEntry(job *scanJob, path string) *folderEntry {
audioFiles: make(map[string]fs.DirEntry),
imageFiles: make(map[string]fs.DirEntry),
albumIDMap: make(map[string]string),
updTime: info.UpdatedAt,
prevHash: info.Hash,
updTime: updTime,
prevHash: hash,
}
return f
}

View File

@@ -40,9 +40,8 @@ var _ = Describe("folder_entry", func() {
UpdatedAt: time.Now().Add(-30 * time.Minute),
Hash: "previous-hash",
}
job.lastUpdates[folderID] = updateInfo
entry := newFolderEntry(job, path)
entry := newFolderEntry(job, folderID, path, updateInfo.UpdatedAt, updateInfo.Hash)
Expect(entry.id).To(Equal(folderID))
Expect(entry.job).To(Equal(job))
@@ -53,15 +52,10 @@ var _ = Describe("folder_entry", func() {
Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
Expect(entry.prevHash).To(Equal(updateInfo.Hash))
})
})
It("creates a new folder entry with zero time when no previous update exists", func() {
entry := newFolderEntry(job, path)
Expect(entry.updTime).To(BeZero())
Expect(entry.prevHash).To(BeEmpty())
})
It("removes the lastUpdate from the job after popping", func() {
Describe("createFolderEntry", func() {
It("removes the lastUpdate from the job after creation", func() {
folderID := model.FolderID(lib, path)
updateInfo := model.FolderUpdateInfo{
UpdatedAt: time.Now().Add(-30 * time.Minute),
@@ -69,8 +63,10 @@ var _ = Describe("folder_entry", func() {
}
job.lastUpdates[folderID] = updateInfo
newFolderEntry(job, path)
entry := job.createFolderEntry(path)
Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
Expect(entry.prevHash).To(Equal(updateInfo.Hash))
Expect(job.lastUpdates).ToNot(HaveKey(folderID))
})
})
@@ -79,7 +75,8 @@ var _ = Describe("folder_entry", func() {
var entry *folderEntry
BeforeEach(func() {
entry = newFolderEntry(job, path)
folderID := model.FolderID(lib, path)
entry = newFolderEntry(job, folderID, path, time.Time{}, "")
})
Describe("hasNoFiles", func() {
@@ -458,7 +455,9 @@ var _ = Describe("folder_entry", func() {
Describe("integration scenarios", func() {
It("handles complete folder lifecycle", func() {
// Create new folder entry
entry := newFolderEntry(job, "music/rock/album")
folderPath := "music/rock/album"
folderID := model.FolderID(lib, folderPath)
entry := newFolderEntry(job, folderID, folderPath, time.Time{}, "")
// Initially new and has no files
Expect(entry.isNew()).To(BeTrue())

scanner/ignore_checker.go (new file, 163 lines)
View File

@@ -0,0 +1,163 @@
package scanner
import (
"bufio"
"context"
"io/fs"
"path"
"strings"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
ignore "github.com/sabhiram/go-gitignore"
)
// IgnoreChecker manages .ndignore patterns using a stack-based approach.
// Use Push() to add patterns when entering a folder, Pop() when leaving,
// and ShouldIgnore() to check if a path should be ignored.
type IgnoreChecker struct {
fsys fs.FS
patternStack [][]string // Stack of patterns for each folder level
currentPatterns []string // Flattened current patterns
matcher *ignore.GitIgnore // Compiled matcher for current patterns
}
// newIgnoreChecker creates a new IgnoreChecker for the given filesystem.
func newIgnoreChecker(fsys fs.FS) *IgnoreChecker {
return &IgnoreChecker{
fsys: fsys,
patternStack: make([][]string, 0),
}
}
// Push loads .ndignore patterns from the specified folder and adds them to the pattern stack.
// Use this when entering a folder during directory tree traversal.
func (ic *IgnoreChecker) Push(ctx context.Context, folder string) error {
patterns := ic.loadPatternsFromFolder(ctx, folder)
ic.patternStack = append(ic.patternStack, patterns)
ic.rebuildCurrentPatterns()
return nil
}
// Pop removes the most recent patterns from the stack.
// Use this when leaving a folder during directory tree traversal.
func (ic *IgnoreChecker) Pop() {
if len(ic.patternStack) > 0 {
ic.patternStack = ic.patternStack[:len(ic.patternStack)-1]
ic.rebuildCurrentPatterns()
}
}
// PushAllParents pushes patterns from root down to the target path.
// This is a convenience method for when you need to check a specific path
// without recursively walking the tree. It handles the common pattern of
// pushing all parent directories from root to the target.
// This method is optimized to compile patterns only once at the end.
func (ic *IgnoreChecker) PushAllParents(ctx context.Context, targetPath string) error {
if targetPath == "." || targetPath == "" {
// Simple case: just push root
return ic.Push(ctx, ".")
}
// Load patterns for root
patterns := ic.loadPatternsFromFolder(ctx, ".")
ic.patternStack = append(ic.patternStack, patterns)
// Load patterns for each parent directory
currentPath := "."
parts := strings.Split(path.Clean(targetPath), "/")
for _, part := range parts {
if part == "." || part == "" {
continue
}
currentPath = path.Join(currentPath, part)
patterns = ic.loadPatternsFromFolder(ctx, currentPath)
ic.patternStack = append(ic.patternStack, patterns)
}
// Rebuild and compile patterns only once at the end
ic.rebuildCurrentPatterns()
return nil
}
// ShouldIgnore checks if the given path should be ignored based on the current patterns.
// Returns true if the path matches any ignore pattern, false otherwise.
func (ic *IgnoreChecker) ShouldIgnore(ctx context.Context, relPath string) bool {
// Handle root/empty path - never ignore
if relPath == "" || relPath == "." {
return false
}
// If no patterns loaded, nothing to ignore
if ic.matcher == nil {
return false
}
matches := ic.matcher.MatchesPath(relPath)
if matches {
log.Trace(ctx, "Scanner: Ignoring entry matching .ndignore", "path", relPath)
}
return matches
}
// loadPatternsFromFolder reads the .ndignore file in the specified folder and returns the patterns.
// If the file doesn't exist, returns an empty slice.
// If the file exists but is empty, returns a pattern to ignore everything ("**/*").
func (ic *IgnoreChecker) loadPatternsFromFolder(ctx context.Context, folder string) []string {
ignoreFilePath := path.Join(folder, consts.ScanIgnoreFile)
var patterns []string
// Check if .ndignore file exists
if _, err := fs.Stat(ic.fsys, ignoreFilePath); err != nil {
// No .ndignore file in this folder
return patterns
}
// Read and parse the .ndignore file
ignoreFile, err := ic.fsys.Open(ignoreFilePath)
if err != nil {
log.Warn(ctx, "Scanner: Error opening .ndignore file", "path", ignoreFilePath, err)
return patterns
}
defer ignoreFile.Close()
lineScanner := bufio.NewScanner(ignoreFile)
for lineScanner.Scan() {
line := strings.TrimSpace(lineScanner.Text())
if line == "" || strings.HasPrefix(line, "#") {
continue // Skip empty lines, whitespace-only lines, and comments
}
patterns = append(patterns, line)
}
if err := lineScanner.Err(); err != nil {
log.Warn(ctx, "Scanner: Error reading .ndignore file", "path", ignoreFilePath, err)
return patterns
}
// If the .ndignore file is empty, ignore everything
if len(patterns) == 0 {
log.Trace(ctx, "Scanner: .ndignore file is empty, ignoring everything", "path", folder)
patterns = []string{"**/*"}
}
return patterns
}
// rebuildCurrentPatterns flattens the pattern stack into currentPatterns and recompiles the matcher.
func (ic *IgnoreChecker) rebuildCurrentPatterns() {
ic.currentPatterns = make([]string, 0)
for _, patterns := range ic.patternStack {
ic.currentPatterns = append(ic.currentPatterns, patterns...)
}
ic.compilePatterns()
}
// compilePatterns compiles the current patterns into a GitIgnore matcher.
func (ic *IgnoreChecker) compilePatterns() {
if len(ic.currentPatterns) == 0 {
ic.matcher = nil
return
}
ic.matcher = ignore.CompileIgnoreLines(ic.currentPatterns...)
}
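
The new IgnoreChecker is meant to be driven by the directory tree walker, so a short sketch of the intended Push/Pop discipline may help. The walk helper below is hypothetical and only illustrates the protocol described in the comments above: patterns pushed for a parent folder stay in effect for every descendant until the matching Pop. PushAllParents covers the other case, where a single target folder needs checking without walking down from the root.

package scanner // sketch only; walk is a hypothetical driver

import (
	"context"
	"io/fs"
	"path"
)

// walk descends the tree depth-first: Push on entering a folder, filter its
// children with ShouldIgnore, and Pop on the way back out.
// Usage: ic := newIgnoreChecker(fsys); _ = walk(ctx, fsys, ic, ".")
func walk(ctx context.Context, fsys fs.FS, ic *IgnoreChecker, dir string) error {
	if err := ic.Push(ctx, dir); err != nil {
		return err
	}
	defer ic.Pop()

	entries, err := fs.ReadDir(fsys, dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		child := path.Join(dir, e.Name())
		if ic.ShouldIgnore(ctx, child) {
			continue // matched an accumulated .ndignore pattern
		}
		if e.IsDir() {
			if err := walk(ctx, fsys, ic, child); err != nil {
				return err
			}
		}
	}
	return nil
}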

View File

@@ -0,0 +1,313 @@
package scanner
import (
"context"
"testing/fstest"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("IgnoreChecker", func() {
Describe("loadPatternsFromFolder", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
})
Context("when .ndignore file does not exist", func() {
It("should return empty patterns", func() {
fsys := fstest.MapFS{}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(BeEmpty())
})
})
Context("when .ndignore file is empty", func() {
It("should return wildcard to ignore everything", func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("")},
}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(Equal([]string{"**/*"}))
})
})
DescribeTable("parsing .ndignore content",
func(content string, expectedPatterns []string) {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte(content)},
}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(Equal(expectedPatterns))
},
Entry("single pattern", "*.txt", []string{"*.txt"}),
Entry("multiple patterns", "*.txt\n*.log", []string{"*.txt", "*.log"}),
Entry("with comments", "# comment\n*.txt\n# another\n*.log", []string{"*.txt", "*.log"}),
Entry("with empty lines", "*.txt\n\n*.log\n\n", []string{"*.txt", "*.log"}),
Entry("mixed content", "# header\n\n*.txt\n# middle\n*.log\n\n", []string{"*.txt", "*.log"}),
Entry("only comments and empty lines", "# comment\n\n# another\n", []string{"**/*"}),
Entry("trailing newline", "*.txt\n*.log\n", []string{"*.txt", "*.log"}),
Entry("directory pattern", "temp/", []string{"temp/"}),
Entry("wildcard pattern", "**/*.mp3", []string{"**/*.mp3"}),
Entry("multiple wildcards", "**/*.mp3\n**/*.flac\n*.log", []string{"**/*.mp3", "**/*.flac", "*.log"}),
Entry("negation pattern", "!important.txt", []string{"!important.txt"}),
Entry("comment with hash not at start is pattern", "not#comment", []string{"not#comment"}),
Entry("whitespace-only lines skipped", "*.txt\n \n*.log\n\t\n", []string{"*.txt", "*.log"}),
Entry("patterns with whitespace trimmed", " *.txt \n\t*.log\t", []string{"*.txt", "*.log"}),
)
})
Describe("Push and Pop", func() {
var ic *IgnoreChecker
var fsys fstest.MapFS
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
fsys = fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("*.txt")},
"folder1/.ndignore": &fstest.MapFile{Data: []byte("*.mp3")},
"folder2/.ndignore": &fstest.MapFile{Data: []byte("*.flac")},
}
ic = newIgnoreChecker(fsys)
})
Context("Push", func() {
It("should add patterns to stack", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(ContainElement("*.txt"))
})
It("should compile matcher after push", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
})
It("should accumulate patterns from multiple levels", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(2))
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))
})
It("should handle push when no .ndignore exists", func() {
err := ic.Push(ctx, "nonexistent")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(BeEmpty())
})
})
Context("Pop", func() {
It("should remove most recent patterns", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
ic.Pop()
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
})
It("should handle Pop on empty stack gracefully", func() {
Expect(func() { ic.Pop() }).ToNot(Panic())
Expect(ic.patternStack).To(BeEmpty())
})
It("should set matcher to nil when all patterns popped", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
ic.Pop()
Expect(ic.matcher).To(BeNil())
})
It("should update matcher after pop", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
matcher1 := ic.matcher
ic.Pop()
matcher2 := ic.matcher
Expect(matcher1).ToNot(Equal(matcher2))
})
})
Context("multiple Push/Pop cycles", func() {
It("should maintain correct state through cycles", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))
ic.Pop()
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
err = ic.Push(ctx, "folder2")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.flac"))
ic.Pop()
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
ic.Pop()
Expect(ic.currentPatterns).To(BeEmpty())
})
})
})
Describe("PushAllParents", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("root.txt")},
"folder1/.ndignore": &fstest.MapFile{Data: []byte("level1.txt")},
"folder1/folder2/.ndignore": &fstest.MapFile{Data: []byte("level2.txt")},
"folder1/folder2/folder3/.ndignore": &fstest.MapFile{Data: []byte("level3.txt")},
}
ic = newIgnoreChecker(fsys)
})
DescribeTable("loading parent patterns",
func(targetPath string, expectedStackDepth int, expectedPatterns []string) {
err := ic.PushAllParents(ctx, targetPath)
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(expectedStackDepth))
Expect(ic.currentPatterns).To(ConsistOf(expectedPatterns))
},
Entry("root path", ".", 1, []string{"root.txt"}),
Entry("empty path", "", 1, []string{"root.txt"}),
Entry("single level", "folder1", 2, []string{"root.txt", "level1.txt"}),
Entry("two levels", "folder1/folder2", 3, []string{"root.txt", "level1.txt", "level2.txt"}),
Entry("three levels", "folder1/folder2/folder3", 4, []string{"root.txt", "level1.txt", "level2.txt", "level3.txt"}),
)
It("should only compile patterns once at the end", func() {
// This is more of a behavioral test - we verify the matcher is not nil after PushAllParents
err := ic.PushAllParents(ctx, "folder1/folder2")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
})
It("should handle paths with dot", func() {
err := ic.PushAllParents(ctx, "./folder1")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(2))
})
Context("when some parent folders have no .ndignore", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("root.txt")},
"folder1/folder2/.ndignore": &fstest.MapFile{Data: []byte("level2.txt")},
}
ic = newIgnoreChecker(fsys)
})
It("should still push all parent levels", func() {
err := ic.PushAllParents(ctx, "folder1/folder2")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(3)) // root, folder1 (empty), folder2
Expect(ic.currentPatterns).To(ConsistOf("root.txt", "level2.txt"))
})
})
})
Describe("ShouldIgnore", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
})
Context("with no patterns loaded", func() {
It("should not ignore any path", func() {
fsys := fstest.MapFS{}
ic = newIgnoreChecker(fsys)
Expect(ic.ShouldIgnore(ctx, "anything.txt")).To(BeFalse())
Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeFalse())
})
})
Context("special paths", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("**/*")},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
})
It("should never ignore root or empty paths", func() {
Expect(ic.ShouldIgnore(ctx, "")).To(BeFalse())
Expect(ic.ShouldIgnore(ctx, ".")).To(BeFalse())
})
It("should ignore all other paths with wildcard", func() {
Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeTrue())
})
})
DescribeTable("pattern matching",
func(pattern string, path string, shouldMatch bool) {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte(pattern)},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.ShouldIgnore(ctx, path)).To(Equal(shouldMatch))
},
Entry("glob match", "*.txt", "file.txt", true),
Entry("glob no match", "*.txt", "file.mp3", false),
Entry("directory pattern match", "tmp/", "tmp/file.txt", true),
Entry("directory pattern no match", "tmp/", "temporary/file.txt", false),
Entry("nested glob match", "**/*.log", "deep/nested/file.log", true),
Entry("nested glob no match", "**/*.log", "deep/nested/file.txt", false),
Entry("specific file match", "ignore.me", "ignore.me", true),
Entry("specific file no match", "ignore.me", "keep.me", false),
Entry("wildcard all", "**/*", "any/path/file.txt", true),
Entry("nested specific match", "temp/*", "temp/cache.db", true),
Entry("nested specific no match", "temp/*", "temporary/cache.db", false),
)
Context("with multiple patterns", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("*.txt\n*.log\ntemp/")},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
})
It("should match any of the patterns", func() {
Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "debug.log")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "temp/cache")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "music.mp3")).To(BeFalse())
})
})
})
})

View File

@@ -26,58 +26,46 @@ import (
"github.com/navidrome/navidrome/utils/slice"
)
func createPhaseFolders(ctx context.Context, state *scanState, ds model.DataStore, cw artwork.CacheWarmer, libs []model.Library) *phaseFolders {
func createPhaseFolders(ctx context.Context, state *scanState, ds model.DataStore, cw artwork.CacheWarmer) *phaseFolders {
var jobs []*scanJob
var updatedLibs []model.Library
for _, lib := range libs {
if lib.LastScanStartedAt.IsZero() {
err := ds.Library(ctx).ScanBegin(lib.ID, state.fullScan)
if err != nil {
log.Error(ctx, "Scanner: Error updating last scan started at", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
// Reload library to get updated state
l, err := ds.Library(ctx).Get(lib.ID)
if err != nil {
log.Error(ctx, "Scanner: Error reloading library", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
lib = *l
} else {
log.Debug(ctx, "Scanner: Resuming previous scan", "lib", lib.Name, "lastScanStartedAt", lib.LastScanStartedAt, "fullScan", lib.FullScanInProgress)
// Create scan jobs for all libraries
for _, lib := range state.libraries {
// Get target folders for this library if selective scan
var targetFolders []string
if state.isSelectiveScan() {
targetFolders = state.targets[lib.ID]
}
job, err := newScanJob(ctx, ds, cw, lib, state.fullScan)
job, err := newScanJob(ctx, ds, cw, lib, state.fullScan, targetFolders)
if err != nil {
log.Error(ctx, "Scanner: Error creating scan context", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
jobs = append(jobs, job)
updatedLibs = append(updatedLibs, lib)
}
// Update the state with the libraries that have been processed and have their scan timestamps set
state.libraries = updatedLibs
return &phaseFolders{jobs: jobs, ctx: ctx, ds: ds, state: state}
}
type scanJob struct {
lib model.Library
fs storage.MusicFS
cw artwork.CacheWarmer
lastUpdates map[string]model.FolderUpdateInfo
lock sync.Mutex
numFolders atomic.Int64
lib model.Library
fs storage.MusicFS
cw artwork.CacheWarmer
lastUpdates map[string]model.FolderUpdateInfo // Holds last update info for all (DB) folders in this library
targetFolders []string // Specific folders to scan (including all descendants)
lock sync.Mutex
numFolders atomic.Int64
}
func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer, lib model.Library, fullScan bool) (*scanJob, error) {
lastUpdates, err := ds.Folder(ctx).GetLastUpdates(lib)
func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer, lib model.Library, fullScan bool, targetFolders []string) (*scanJob, error) {
// Get folder updates, optionally filtered to specific target folders
lastUpdates, err := ds.Folder(ctx).GetFolderUpdateInfo(lib, targetFolders...)
if err != nil {
return nil, fmt.Errorf("getting last updates: %w", err)
}
fileStore, err := storage.For(lib.Path)
if err != nil {
log.Error(ctx, "Error getting storage for library", "library", lib.Name, "path", lib.Path, err)
@@ -88,15 +76,17 @@ func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer,
log.Error(ctx, "Error getting fs for library", "library", lib.Name, "path", lib.Path, err)
return nil, fmt.Errorf("getting fs for library: %w", err)
}
lib.FullScanInProgress = lib.FullScanInProgress || fullScan
return &scanJob{
lib: lib,
fs: fsys,
cw: cw,
lastUpdates: lastUpdates,
lib: lib,
fs: fsys,
cw: cw,
lastUpdates: lastUpdates,
targetFolders: targetFolders,
}, nil
}
// popLastUpdate retrieves and removes the last update info for the given folder ID
// This is used to track which folders have been found during the walk_dir_tree
func (j *scanJob) popLastUpdate(folderID string) model.FolderUpdateInfo {
j.lock.Lock()
defer j.lock.Unlock()
@@ -106,6 +96,15 @@ func (j *scanJob) popLastUpdate(folderID string) model.FolderUpdateInfo {
return lastUpdate
}
// createFolderEntry creates a new folderEntry for the given path, using the last update info from the job
// to populate the previous update time and hash. It also removes the folder from the job's lastUpdates map.
// This is used to track which folders have been found during the walk_dir_tree.
func (j *scanJob) createFolderEntry(path string) *folderEntry {
id := model.FolderID(j.lib, path)
info := j.popLastUpdate(id)
return newFolderEntry(j, id, path, info.UpdatedAt, info.Hash)
}
// phaseFolders represents the first phase of the scanning process, which is responsible
// for scanning all libraries and importing new or updated files. This phase involves
// traversing the directory tree of each library, identifying new or modified media files,
@@ -144,7 +143,8 @@ func (p *phaseFolders) producer() ppl.Producer[*folderEntry] {
if utils.IsCtxDone(p.ctx) {
break
}
outputChan, err := walkDirTree(p.ctx, job)
outputChan, err := walkDirTree(p.ctx, job, job.targetFolders...)
if err != nil {
log.Warn(p.ctx, "Scanner: Error scanning library", "lib", job.lib.Name, err)
}
@@ -324,6 +324,9 @@ func (p *phaseFolders) persistChanges(entry *folderEntry) (*folderEntry, error)
defer p.measure(entry)()
p.state.changesDetected.Store(true)
// Collect artwork IDs to pre-cache after the transaction commits
var artworkIDs []model.ArtworkID
err := p.ds.WithTx(func(tx model.DataStore) error {
// Instantiate all repositories just once per folder
folderRepo := tx.Folder(p.ctx)
@@ -362,7 +365,7 @@ func (p *phaseFolders) persistChanges(entry *folderEntry) (*folderEntry, error)
return err
}
if entry.artists[i].Name != consts.UnknownArtist && entry.artists[i].Name != consts.VariousArtists {
entry.job.cw.PreCache(entry.artists[i].CoverArtID())
artworkIDs = append(artworkIDs, entry.artists[i].CoverArtID())
}
}
@@ -374,7 +377,7 @@ func (p *phaseFolders) persistChanges(entry *folderEntry) (*folderEntry, error)
return err
}
if entry.albums[i].Name != consts.UnknownAlbum {
entry.job.cw.PreCache(entry.albums[i].CoverArtID())
artworkIDs = append(artworkIDs, entry.albums[i].CoverArtID())
}
}
@@ -411,6 +414,14 @@ func (p *phaseFolders) persistChanges(entry *folderEntry) (*folderEntry, error)
if err != nil {
log.Error(p.ctx, "Scanner: Error persisting changes to DB", "folder", entry.path, err)
}
// Pre-cache artwork after the transaction commits successfully
if err == nil {
for _, artID := range artworkIDs {
entry.job.cw.PreCache(artID)
}
}
return entry, err
}
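
The persistChanges change above applies a collect-then-apply pattern: artwork IDs are gathered while the transaction is open, and pre-caching only happens once WithTx has returned without error, so a rolled-back folder never warms the cache with uncommitted data. A condensed, hypothetical sketch of that shape (the save callback stands in for the real repository calls):

package scanner // condensed sketch, not the actual persistChanges code

import (
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/model"
)

// persistAndWarm gathers artwork IDs inside the transaction and only calls
// PreCache after the commit succeeds.
func persistAndWarm(ds model.DataStore, cw artwork.CacheWarmer, albums []model.Album,
	save func(tx model.DataStore, a *model.Album) error) error {
	var artworkIDs []model.ArtworkID
	err := ds.WithTx(func(tx model.DataStore) error {
		for i := range albums {
			if err := save(tx, &albums[i]); err != nil {
				return err // transaction rolls back; nothing gets pre-cached
			}
			artworkIDs = append(artworkIDs, albums[i].CoverArtID())
		}
		return nil
	})
	if err != nil {
		return err
	}
	for _, id := range artworkIDs {
		cw.PreCache(id) // safe now: the changes are committed and visible
	}
	return nil
}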

View File

@@ -69,9 +69,6 @@ func (p *phaseMissingTracks) produce(put func(tracks *missingTracks)) error {
}
}
for _, lib := range p.state.libraries {
if lib.LastScanStartedAt.IsZero() {
continue
}
log.Debug(p.ctx, "Scanner: Checking missing tracks", "libraryId", lib.ID, "libraryName", lib.Name)
cursor, err := p.ds.MediaFile(p.ctx).GetMissingAndMatching(lib.ID)
if err != nil {

Some files were not shown because too many files have changed in this diff.