Mirror of https://github.com/navidrome/navidrome.git (synced 2025-12-31 19:08:06 -05:00)

Compare commits: plugins-mc...codex (12 commits)

| SHA1 |
|---|
| e0f1ddecbe |
| 1e4e3eac6e |
| 19d443ec7f |
| db92cf9e47 |
| ec9f9aa243 |
| 0d1f2bcc8a |
| dfa217ab51 |
| 3d6a2380bc |
| 53aa640f35 |
| e4d65a7828 |
| b41123f75e |
| 6f52c0201c |

AGENTS.md (new file, 110 lines)
@@ -0,0 +1,110 @@
# Testing Instructions

- **No implementation task is considered complete until it includes thorough, passing tests that cover the new or changed functionality. All new code must be accompanied by Ginkgo/Gomega tests, and PRs/commits without tests should be considered incomplete.**
- All Go tests in this project **MUST** be written using the **Ginkgo v2** and **Gomega** frameworks.
- To run all tests, use `make test`.
- To run tests for a specific package, use `make test PKG=./pkgname/...`
- Do not run tests in parallel.
- Don't use `--fail-on-pending`.

## Mocking Convention

- Always try to use the mocks provided in the `tests` package before creating a new mock implementation (see the sketch below).
- Only create a new mock if the required functionality is not covered by the existing mocks in `tests`.
- Never mock a real implementation when testing. Remember: there is no value in testing an interface, only the real implementation.
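
A minimal sketch of this convention, assuming a `tests.MockDataStore` mock exists (the concrete mock names and repository accessors should be checked in the `tests` package; this block is illustrative, not part of the committed file):

```go
package core

import (
	"context"

	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Using the shared mocks", func() {
	var ds model.DataStore

	BeforeEach(func() {
		// Prefer the mocks shipped in the tests package over ad-hoc mocks.
		// tests.MockDataStore is assumed here; verify the actual names in `tests`.
		ds = &tests.MockDataStore{}
	})

	It("provides a mocked album repository", func() {
		Expect(ds.Album(context.Background())).ToNot(BeNil())
	})
})
```
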
## Example

Every package that you write tests for should have a `*_suite_test.go` file to hook up the Ginkgo test suite. Example:

```go
package core

import (
	"testing"

	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestCore(t *testing.T) {
	tests.Init(t, false)
	log.SetLevel(log.LevelFatal)
	RegisterFailHandler(Fail)
	RunSpecs(t, "Core Suite")
}
```

Never put a `func Test*` in regular `*_test.go` files, only in `*_suite_test.go` files; a regular spec file is sketched below.

Refer to existing test suites for examples of proper setup and usage, such as the one defined in @core_suite_test.go
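
For illustration, a regular spec file in the same package contains only Ginkgo containers and Gomega assertions, with no `func Test*`; the container name and values here are made up:

```go
package core

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Example spec file (e.g. something_test.go). It relies on the suite bootstrap
// in core_suite_test.go and therefore declares no func Test*.
var _ = Describe("Something", func() {
	var values []string

	BeforeEach(func() {
		values = []string{"a", "b"}
	})

	It("appends items", func() {
		values = append(values, "c")
		Expect(values).To(HaveLen(3))
		Expect(values).To(ContainElement("c"))
	})
})
```
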
## Exceptions

There should be no exceptions to this rule. If you encounter tests written with the standard `testing` package or other frameworks, they should be refactored to use Ginkgo/Gomega. If you need a new mock, first confirm that it does not already exist in the `tests` package.

### Configuration

You can set config values in the BeforeEach/BeforeAll blocks. If you do so, remember to add `DeferCleanup(configtest.SetupConfig())` to reset the values. Example:

```go
BeforeEach(func() {
	DeferCleanup(configtest.SetupConfig())
	conf.Server.EnableDownloads = true
})
```

# Logging System Usage Guide

This project uses a custom logging system built on top of logrus (see `log/log.go`). Follow these conventions for all logging:

## Logging API
- Use the provided functions for logging at different levels:
  - `Error(...)`, `Warn(...)`, `Info(...)`, `Debug(...)`, `Trace(...)`, `Fatal(...)`
- These functions accept flexible arguments:
  - The first argument can be a context (`context.Context`), an HTTP request, or `nil`.
  - The next argument is the log message (string or error).
  - Additional arguments are key-value pairs (e.g., `"key", value`).
  - If the last argument is an error, it is logged under the `error` key.

**Examples:**
```go
log.Error("A message")
log.Error(ctx, "A message with context")
log.Error("Failed to save", "id", 123, err)
log.Info(req, "Request received", "user", userID)
```

## Logging errors
- You don't need to add an "err" key when logging an error; it is added automatically.
- The error must always be the last parameter in the log call.

Examples:
```go
log.Error("Failed to save", "id", 123, err)        // GOOD
log.Error("Failed to save", "id", 123, "err", err) // BAD
log.Error("Failed to save", err, "id", 123)        // BAD
```

## Context and Request Logging
- If a context or HTTP request is passed as the first argument, any logger fields in the context are included in the log entry.
- Use `log.NewContext(ctx, "key", value, ...)` to add fields to a context for logging; a short example follows.
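
A minimal sketch of the pattern described above (the handler, package name, and field values are illustrative, not taken from the codebase):

```go
package example // illustrative package name

import (
	"net/http"

	"github.com/navidrome/navidrome/log"
)

func handle(w http.ResponseWriter, r *http.Request) {
	// Attach request-scoped fields once; they are then included in every log
	// entry that receives this context as its first argument.
	ctx := log.NewContext(r.Context(), "requestId", r.Header.Get("X-Request-Id"))

	log.Debug(ctx, "Handling request", "path", r.URL.Path)

	// The request itself can also be passed directly as the first argument:
	log.Info(r, "Request received", "remote", r.RemoteAddr)
}
```
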
## Log Levels
- Set the global log level with `log.SetLevel(log.LevelInfo)` or `log.SetLevelString("info")`.
- Per-path log levels can be set with `log.SetLogLevels(map[string]string{"path": "level"})`.
- Use `log.IsGreaterOrEqualTo(level)` to check if a log level is enabled for the current code path; see the example below.
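
A small sketch combining the calls listed above; the path passed to `SetLogLevels` and the helper function are hypothetical:

```go
package example // illustrative

import "github.com/navidrome/navidrome/log"

func configureLogging() {
	// Global level, by constant or by name.
	log.SetLevel(log.LevelInfo)
	log.SetLevelString("info")

	// More verbose logging for a specific code path (path value is illustrative).
	log.SetLogLevels(map[string]string{"scanner": "debug"})
}

func dumpState() {
	// Guard expensive log payloads behind a level check.
	if log.IsGreaterOrEqualTo(log.LevelDebug) {
		log.Debug("Detailed state dump", "state", buildExpensiveState())
	}
}

// buildExpensiveState stands in for any costly-to-compute value (hypothetical).
func buildExpensiveState() string { return "..." }
```
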
## Source Line Logging
- Enable source file/line logging with `log.SetLogSourceLine(true)`.

## Best Practices
- Always use the logging API; never log directly with logrus or `fmt`.
- Prefer structured logging (key-value pairs) for important data.
- Use context/request logging for traceability in web handlers.
- For tests, use Ginkgo/Gomega and set up a test logger as in `log/log_test.go`.

## See Also
- `log/log.go` for implementation details
- `log/log_test.go` for usage examples and test patterns

Makefile (11 changes)
@@ -36,8 +36,9 @@ watch: ##@Development Start Go tests in watch mode (re-run when code changes)
	go tool ginkgo watch -tags=netgo -notify ./...
.PHONY: watch

PKG ?= ./...
test: ##@Development Run Go tests
	go test -tags netgo ./...
	go test -tags netgo $(PKG)
.PHONY: test

testrace: ##@Development Run Go tests with race detector
@@ -156,10 +157,10 @@ package: docker-build ##@Cross_Compilation Create binaries and packages for ALL
get-music: ##@Development Download some free music from Navidrome's demo instance
	mkdir -p music
	( cd music; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=ec2093ec4801402f1e17cc462195cdbb" > brock.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=b376eeb4652d2498aa2b25ba0696725e" > back_on_earth.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=e49c609b542fc51899ee8b53aa858cb4" > ugress.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=350bcab3a4c1d93869e39ce496464f03" > voodoocuts.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=2Y3qQA6zJC3ObbBrF9ZBoV" > brock.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=04HrSORpypcLGNUdQp37gn" > back_on_earth.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=5xcMPJdeEgNrGtnzYbzAqb" > ugress.zip; \
	curl "https://demo.navidrome.org/rest/download?u=demo&p=demo&f=json&v=1.8.0&c=dev_download&id=1jjQMAZrG3lUsJ0YH6ZRS0" > voodoocuts.zip; \
	for file in *.zip; do unzip -n $${file}; done )
	@echo "Done. Remember to set your MusicFolder to ./music"
.PHONY: get-music

@@ -82,15 +82,15 @@ var _ = Describe("Extractor", func() {

// TabLib 1.12 returns 18, previous versions return 39.
// See https://github.com/taglib/taglib/commit/2f238921824741b2cfe6fbfbfc9701d9827ab06b
Expect(m.AudioProperties.BitRate).To(BeElementOf(18, 39, 40, 43, 49))
Expect(m.AudioProperties.BitRate).To(BeElementOf(18, 19, 39, 40, 43, 49))
Expect(m.AudioProperties.Channels).To(BeElementOf(2))
Expect(m.AudioProperties.SampleRate).To(BeElementOf(8000))
Expect(m.AudioProperties.SampleRate).To(BeElementOf(8000))
Expect(m.HasPicture).To(BeFalse())
Expect(m.HasPicture).To(BeTrue())
})

DescribeTable("Format-Specific tests",
func(file, duration string, channels, samplerate, bitdepth int, albumGain, albumPeak, trackGain, trackPeak string, id3Lyrics bool) {
func(file, duration string, channels, samplerate, bitdepth int, albumGain, albumPeak, trackGain, trackPeak string, id3Lyrics bool, image bool) {
file = "tests/fixtures/" + file
mds, err := e.Parse(file)
Expect(err).NotTo(HaveOccurred())
@@ -98,7 +98,7 @@ var _ = Describe("Extractor", func() {

m := mds[file]

Expect(m.HasPicture).To(BeFalse())
Expect(m.HasPicture).To(Equal(image))
Expect(m.AudioProperties.Duration.String()).To(Equal(duration))
Expect(m.AudioProperties.Channels).To(Equal(channels))
Expect(m.AudioProperties.SampleRate).To(Equal(samplerate))
@@ -168,24 +168,24 @@ var _ = Describe("Extractor", func() {
},

// ffmpeg -f lavfi -i "sine=frequency=1200:duration=1" test.flac
Entry("correctly parses flac tags", "test.flac", "1s", 1, 44100, 16, "+4.06 dB", "0.12496948", "+4.06 dB", "0.12496948", false),
Entry("correctly parses flac tags", "test.flac", "1s", 1, 44100, 16, "+4.06 dB", "0.12496948", "+4.06 dB", "0.12496948", false, true),

Entry("correctly parses m4a (aac) gain tags", "01 Invisible (RED) Edit Version.m4a", "1.04s", 2, 44100, 16, "0.37", "0.48", "0.37", "0.48", false),
Entry("correctly parses m4a (aac) gain tags (uppercase)", "test.m4a", "1.04s", 2, 44100, 16, "0.37", "0.48", "0.37", "0.48", false),
Entry("correctly parses ogg (vorbis) tags", "test.ogg", "1.04s", 2, 8000, 0, "+7.64 dB", "0.11772506", "+7.64 dB", "0.11772506", false),
Entry("correctly parses m4a (aac) gain tags", "01 Invisible (RED) Edit Version.m4a", "1.04s", 2, 44100, 16, "0.37", "0.48", "0.37", "0.48", false, true),
Entry("correctly parses m4a (aac) gain tags (uppercase)", "test.m4a", "1.04s", 2, 44100, 16, "0.37", "0.48", "0.37", "0.48", false, true),
Entry("correctly parses ogg (vorbis) tags", "test.ogg", "1.04s", 2, 8000, 0, "+7.64 dB", "0.11772506", "+7.64 dB", "0.11772506", false, true),

// ffmpeg -f lavfi -i "sine=frequency=900:duration=1" test.wma
// Weird note: for the tag parsing to work, the lyrics are actually stored in the reverse order
Entry("correctly parses wma/asf tags", "test.wma", "1.02s", 1, 44100, 16, "3.27 dB", "0.132914", "3.27 dB", "0.132914", false),
Entry("correctly parses wma/asf tags", "test.wma", "1.02s", 1, 44100, 16, "3.27 dB", "0.132914", "3.27 dB", "0.132914", false, true),

// ffmpeg -f lavfi -i "sine=frequency=800:duration=1" test.wv
Entry("correctly parses wv (wavpak) tags", "test.wv", "1s", 1, 44100, 16, "3.43 dB", "0.125061", "3.43 dB", "0.125061", false),
Entry("correctly parses wv (wavpak) tags", "test.wv", "1s", 1, 44100, 16, "3.43 dB", "0.125061", "3.43 dB", "0.125061", false, false),

// ffmpeg -f lavfi -i "sine=frequency=1000:duration=1" test.wav
Entry("correctly parses wav tags", "test.wav", "1s", 1, 44100, 16, "3.06 dB", "0.125056", "3.06 dB", "0.125056", true),
Entry("correctly parses wav tags", "test.wav", "1s", 1, 44100, 16, "3.06 dB", "0.125056", "3.06 dB", "0.125056", true, true),

// ffmpeg -f lavfi -i "sine=frequency=1400:duration=1" test.aiff
Entry("correctly parses aiff tags", "test.aiff", "1s", 1, 44100, 16, "2.00 dB", "0.124972", "2.00 dB", "0.124972", true),
Entry("correctly parses aiff tags", "test.aiff", "1s", 1, 44100, 16, "2.00 dB", "0.124972", "2.00 dB", "0.124972", true, true),
)

// Skip these tests when running as root

@@ -225,18 +225,25 @@ char has_cover(const TagLib::FileRef f) {
else if (TagLib::Ogg::Opus::File * opusFile{dynamic_cast<TagLib::Ogg::Opus::File *>(f.file())}) {
hasCover = !opusFile->tag()->pictureList().isEmpty();
}
// ----- WMA
else if (TagLib::ASF::File * asfFile{dynamic_cast<TagLib::ASF::File *>(f.file())}) {
const TagLib::ASF::Tag *tag{asfFile->tag()};
hasCover = tag && tag->attributeListMap().contains("WM/Picture");
}
// ----- WAV
else if (TagLib::RIFF::WAV::File * wavFile{ dynamic_cast<TagLib::RIFF::WAV::File*>(f.file()) }) {
if (wavFile->hasID3v2Tag()) {
const auto& frameListMap{ wavFile->ID3v2Tag()->frameListMap() };
hasCover = !frameListMap["APIC"].isEmpty();
const auto& frameListMap{ wavFile->ID3v2Tag()->frameListMap() };
hasCover = !frameListMap["APIC"].isEmpty();
}
}
// ----- AIFF
else if (TagLib::RIFF::AIFF::File * aiffFile{ dynamic_cast<TagLib::RIFF::AIFF::File *>(f.file())}) {
if (aiffFile->hasID3v2Tag()) {
const auto& frameListMap{ aiffFile->tag()->frameListMap() };
hasCover = !frameListMap["APIC"].isEmpty();
}
}
// ----- WMA
else if (TagLib::ASF::File * asfFile{dynamic_cast<TagLib::ASF::File *>(f.file())}) {
const TagLib::ASF::Tag *tag{ asfFile->tag() };
hasCover = tag && asfFile->tag()->attributeListMap().contains("WM/Picture");
}

return hasCover;
}

@@ -93,6 +93,7 @@ type configOptions struct {
PID pidOptions
Inspect inspectOptions
Subsonic subsonicOptions
LyricsPriority string

Agents string
LastFM lastfmOptions
@@ -109,7 +110,6 @@ type configOptions struct {
DevActivityPanel bool
DevActivityPanelUpdateRate time.Duration
DevSidebarPlaylists bool
DevEnableBufferedScrobble bool
DevShowArtistPage bool
DevOffsetOptimize int
DevArtworkMaxRequests int
@@ -132,6 +132,7 @@ type scannerOptions struct {
ArtistJoiner string
GenreSeparators string // Deprecated: Use Tags.genre.Split instead
GroupAlbumReleases bool // Deprecated: Use PID.Album instead
FollowSymlinks bool // Whether to follow symlinks when scanning directories
}

type subsonicOptions struct {
@@ -312,6 +313,7 @@ func Load(noConfigDump bool) {
}
logDeprecatedOptions("Scanner.GenreSeparators")
logDeprecatedOptions("Scanner.GroupAlbumReleases")
logDeprecatedOptions("DevEnableBufferedScrobble") // Deprecated: Buffered scrobbling is now always enabled and this option is ignored

// Call init hooks
for _, hook := range hooks {
@@ -498,6 +500,7 @@ func init() {
viper.SetDefault("scanner.artistjoiner", consts.ArtistJoiner)
viper.SetDefault("scanner.genreseparators", "")
viper.SetDefault("scanner.groupalbumreleases", false)
viper.SetDefault("scanner.followsymlinks", true)

viper.SetDefault("subsonic.appendsubtitle", true)
viper.SetDefault("subsonic.artistparticipations", false)
@@ -528,6 +531,8 @@ func init() {
viper.SetDefault("inspect.backloglimit", consts.RequestThrottleBacklogLimit)
viper.SetDefault("inspect.backlogtimeout", consts.RequestThrottleBacklogTimeout)

viper.SetDefault("lyricspriority", ".lrc,.txt,embedded")

// DevFlags. These are used to enable/disable debugging and incomplete features
viper.SetDefault("devlogsourceline", false)
viper.SetDefault("devenableprofiler", false)
@@ -535,7 +540,6 @@ func init() {
viper.SetDefault("devautologinusername", "")
viper.SetDefault("devactivitypanel", true)
viper.SetDefault("devactivitypanelupdaterate", 300*time.Millisecond)
viper.SetDefault("devenablebufferedscrobble", true)
viper.SetDefault("devsidebarplaylists", true)
viper.SetDefault("devshowartistpage", true)
viper.SetDefault("devoffsetoptimize", 50000)

@@ -344,18 +344,10 @@ func (l *lastfmAgent) IsAuthorized(ctx context.Context, userId string) bool {
func init() {
conf.AddHook(func() {
agents.Register(lastFMAgentName, func(ds model.DataStore) agents.Interface {
a := lastFMConstructor(ds)
if a != nil {
return a
}
return nil
return lastFMConstructor(ds)
})
scrobbler.Register(lastFMAgentName, func(ds model.DataStore) scrobbler.Scrobbler {
a := lastFMConstructor(ds)
if a != nil {
return a
}
return nil
return lastFMConstructor(ds)
})
})
}

core/agents/mcp/mcp-server/.gitignore (vendored, 2 lines)
@@ -1,2 +0,0 @@
mcp-server
*.wasm

@@ -1,17 +0,0 @@
# MCP Server (Proof of Concept)

This directory contains the source code for the `mcp-server`, a simple server implementation used as a proof-of-concept (PoC) for the Navidrome Plugin/MCP agent system.

This server is designed to be compiled into a WebAssembly (WASM) module (`.wasm`) using the `wasip1` target.

## Compilation

To compile the server into a WASM module (`mcp-server.wasm`), navigate to this directory in your terminal and run the following command:

```bash
CGO_ENABLED=0 GOOS=wasip1 GOARCH=wasm go build -o mcp-server.wasm .
```

**Note:** This command compiles the WASM module _without_ the `netgo` tag. Networking operations (like HTTP requests) are expected to be handled by host functions provided by the embedding application (Navidrome's `MCPAgent`) rather than directly within the WASM module itself.

Place the resulting `mcp-server.wasm` file where the Navidrome `MCPAgent` expects it (currently configured via the `McpServerPath` constant in `core/agents/mcp/mcp_agent.go`).

@@ -1,172 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json" // Reusing ErrNotFound from wikidata.go (implicitly via main)
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const dbpediaEndpoint = "https://dbpedia.org/sparql"
|
||||
|
||||
// Default timeout for DBpedia requests
|
||||
const defaultDbpediaTimeout = 20 * time.Second
|
||||
|
||||
// Can potentially reuse SparqlResult, SparqlBindings, SparqlValue from wikidata.go
|
||||
// if the structure is identical. Assuming it is for now.
|
||||
|
||||
// GetArtistBioFromDBpedia queries DBpedia for an artist's abstract using their name.
|
||||
func GetArtistBioFromDBpedia(fetcher Fetcher, ctx context.Context, name string) (string, error) {
|
||||
log.Printf("[MCP] Debug: GetArtistBioFromDBpedia called for name: %s", name)
|
||||
if name == "" {
|
||||
log.Printf("[MCP] Error: GetArtistBioFromDBpedia requires a name.")
|
||||
return "", fmt.Errorf("name is required to query DBpedia by name")
|
||||
}
|
||||
|
||||
// Escape name for SPARQL query literal
|
||||
escapedName := strings.ReplaceAll(name, "\"", "\\\"")
|
||||
|
||||
// SPARQL query using DBpedia ontology (dbo)
|
||||
// Prefixes are recommended but can be omitted if endpoint resolves them.
|
||||
// Searching case-insensitively on the label.
|
||||
// Filtering for dbo:MusicalArtist or dbo:Band.
|
||||
// Selecting the English abstract.
|
||||
sparqlQuery := fmt.Sprintf(`
|
||||
PREFIX dbo: <http://dbpedia.org/ontology/>
|
||||
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
|
||||
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
|
||||
|
||||
SELECT DISTINCT ?abstract WHERE {
|
||||
?artist rdfs:label ?nameLabel .
|
||||
FILTER(LCASE(STR(?nameLabel)) = LCASE("%s") && LANG(?nameLabel) = "en")
|
||||
|
||||
# Ensure it's a musical artist or band
|
||||
{ ?artist rdf:type dbo:MusicalArtist } UNION { ?artist rdf:type dbo:Band }
|
||||
|
||||
?artist dbo:abstract ?abstract .
|
||||
FILTER(LANG(?abstract) = "en")
|
||||
} LIMIT 1`, escapedName)
|
||||
|
||||
// Prepare and execute HTTP request
|
||||
queryValues := url.Values{}
|
||||
queryValues.Set("query", sparqlQuery)
|
||||
queryValues.Set("format", "application/sparql-results+json") // DBpedia standard format
|
||||
|
||||
reqURL := fmt.Sprintf("%s?%s", dbpediaEndpoint, queryValues.Encode())
|
||||
log.Printf("[MCP] Debug: DBpedia Bio Request URL: %s", reqURL)
|
||||
|
||||
timeout := defaultDbpediaTimeout
|
||||
if deadline, ok := ctx.Deadline(); ok {
|
||||
timeout = time.Until(deadline)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Fetching from DBpedia with timeout: %v", timeout)
|
||||
|
||||
statusCode, bodyBytes, err := fetcher.Fetch(ctx, "GET", reqURL, nil, timeout)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Fetcher failed for DBpedia bio request (name: '%s'): %v", name, err)
|
||||
return "", fmt.Errorf("failed to execute DBpedia request: %w", err)
|
||||
}
|
||||
|
||||
if statusCode != http.StatusOK {
|
||||
log.Printf("[MCP] Error: DBpedia bio query failed for name '%s' with status %d: %s", name, statusCode, string(bodyBytes))
|
||||
return "", fmt.Errorf("DBpedia query failed with status %d: %s", statusCode, string(bodyBytes))
|
||||
}
|
||||
log.Printf("[MCP] Debug: DBpedia bio query successful (status %d), %d bytes received.", statusCode, len(bodyBytes))
|
||||
|
||||
var result SparqlResult
|
||||
if err := json.Unmarshal(bodyBytes, &result); err != nil {
|
||||
// Try reading the raw body for debugging if JSON parsing fails
|
||||
// (Seek back to the beginning might be needed if already read for error)
|
||||
// For simplicity, just return the parsing error now.
|
||||
log.Printf("[MCP] Error: Failed to decode DBpedia bio response for name '%s': %v", name, err)
|
||||
return "", fmt.Errorf("failed to decode DBpedia response: %w", err)
|
||||
}
|
||||
|
||||
// Extract the abstract
|
||||
if len(result.Results.Bindings) > 0 {
|
||||
if abstractVal, ok := result.Results.Bindings[0]["abstract"]; ok {
|
||||
log.Printf("[MCP] Debug: Found DBpedia abstract for '%s'.", name)
|
||||
return abstractVal.Value, nil
|
||||
}
|
||||
}
|
||||
|
||||
// Use the shared ErrNotFound
|
||||
log.Printf("[MCP] Warn: No abstract found on DBpedia for name '%s'.", name)
|
||||
return "", ErrNotFound
|
||||
}
|
||||
|
||||
// GetArtistWikipediaURLFromDBpedia queries DBpedia for an artist's Wikipedia URL using their name.
|
||||
func GetArtistWikipediaURLFromDBpedia(fetcher Fetcher, ctx context.Context, name string) (string, error) {
|
||||
log.Printf("[MCP] Debug: GetArtistWikipediaURLFromDBpedia called for name: %s", name)
|
||||
if name == "" {
|
||||
log.Printf("[MCP] Error: GetArtistWikipediaURLFromDBpedia requires a name.")
|
||||
return "", fmt.Errorf("name is required to query DBpedia by name for URL")
|
||||
}
|
||||
|
||||
escapedName := strings.ReplaceAll(name, "\"", "\\\"")
|
||||
|
||||
// SPARQL query using foaf:isPrimaryTopicOf
|
||||
sparqlQuery := fmt.Sprintf(`
|
||||
PREFIX dbo: <http://dbpedia.org/ontology/>
|
||||
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
|
||||
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
|
||||
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
|
||||
|
||||
SELECT DISTINCT ?wikiPage WHERE {
|
||||
?artist rdfs:label ?nameLabel .
|
||||
FILTER(LCASE(STR(?nameLabel)) = LCASE("%s") && LANG(?nameLabel) = "en")
|
||||
|
||||
{ ?artist rdf:type dbo:MusicalArtist } UNION { ?artist rdf:type dbo:Band }
|
||||
|
||||
?artist foaf:isPrimaryTopicOf ?wikiPage .
|
||||
# Ensure it links to the English Wikipedia
|
||||
FILTER(STRSTARTS(STR(?wikiPage), "https://en.wikipedia.org/"))
|
||||
} LIMIT 1`, escapedName)
|
||||
|
||||
// Prepare and execute HTTP request (similar structure to bio query)
|
||||
queryValues := url.Values{}
|
||||
queryValues.Set("query", sparqlQuery)
|
||||
queryValues.Set("format", "application/sparql-results+json")
|
||||
|
||||
reqURL := fmt.Sprintf("%s?%s", dbpediaEndpoint, queryValues.Encode())
|
||||
log.Printf("[MCP] Debug: DBpedia URL Request URL: %s", reqURL)
|
||||
|
||||
timeout := defaultDbpediaTimeout
|
||||
if deadline, ok := ctx.Deadline(); ok {
|
||||
timeout = time.Until(deadline)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Fetching DBpedia URL with timeout: %v", timeout)
|
||||
|
||||
statusCode, bodyBytes, err := fetcher.Fetch(ctx, "GET", reqURL, nil, timeout)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Fetcher failed for DBpedia URL request (name: '%s'): %v", name, err)
|
||||
return "", fmt.Errorf("failed to execute DBpedia URL request: %w", err)
|
||||
}
|
||||
|
||||
if statusCode != http.StatusOK {
|
||||
log.Printf("[MCP] Error: DBpedia URL query failed for name '%s' with status %d: %s", name, statusCode, string(bodyBytes))
|
||||
return "", fmt.Errorf("DBpedia URL query failed with status %d: %s", statusCode, string(bodyBytes))
|
||||
}
|
||||
log.Printf("[MCP] Debug: DBpedia URL query successful (status %d), %d bytes received.", statusCode, len(bodyBytes))
|
||||
|
||||
var result SparqlResult
|
||||
if err := json.Unmarshal(bodyBytes, &result); err != nil {
|
||||
log.Printf("[MCP] Error: Failed to decode DBpedia URL response for name '%s': %v", name, err)
|
||||
return "", fmt.Errorf("failed to decode DBpedia URL response: %w", err)
|
||||
}
|
||||
|
||||
// Extract the URL
|
||||
if len(result.Results.Bindings) > 0 {
|
||||
if pageVal, ok := result.Results.Bindings[0]["wikiPage"]; ok {
|
||||
log.Printf("[MCP] Debug: Found DBpedia Wikipedia URL for '%s': %s", name, pageVal.Value)
|
||||
return pageVal.Value, nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Warn: No Wikipedia URL found on DBpedia for name '%s'.", name)
|
||||
return "", ErrNotFound
|
||||
}
|
||||
@@ -1,16 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Fetcher defines an interface for making HTTP requests, abstracting
|
||||
// over native net/http and WASM host functions.
|
||||
type Fetcher interface {
|
||||
// Fetch performs an HTTP request.
|
||||
// Returns the status code, response body, and any error encountered.
|
||||
// Note: Implementations should aim to return the body even on non-2xx status codes
|
||||
// if the body was successfully read, allowing callers to potentially inspect it.
|
||||
Fetch(ctx context.Context, method, url string, requestBody []byte, timeout time.Duration) (statusCode int, responseBody []byte, err error)
|
||||
}
|
||||
@@ -1,83 +0,0 @@
|
||||
//go:build !wasm
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
type nativeFetcher struct {
|
||||
// We could hold a shared client, but creating one per request
|
||||
// with the specific timeout is simpler for this adapter.
|
||||
}
|
||||
|
||||
// Ensure nativeFetcher implements Fetcher
|
||||
var _ Fetcher = (*nativeFetcher)(nil)
|
||||
|
||||
// NewFetcher creates the default native HTTP fetcher.
|
||||
func NewFetcher() Fetcher {
|
||||
log.Println("[MCP] Debug: Using Native HTTP fetcher")
|
||||
return &nativeFetcher{}
|
||||
}
|
||||
|
||||
func (nf *nativeFetcher) Fetch(ctx context.Context, method, urlStr string, requestBody []byte, timeout time.Duration) (statusCode int, responseBody []byte, err error) {
|
||||
log.Printf("[MCP] Debug: Native Fetch: Method=%s, URL=%s, Timeout=%v", method, urlStr, timeout)
|
||||
// Create a client with the specific timeout for this request
|
||||
client := &http.Client{Timeout: timeout}
|
||||
|
||||
var bodyReader io.Reader
|
||||
if requestBody != nil {
|
||||
bodyReader = bytes.NewReader(requestBody)
|
||||
}
|
||||
|
||||
req, err := http.NewRequestWithContext(ctx, method, urlStr, bodyReader)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Native Fetch failed to create request: %v", err)
|
||||
return 0, nil, fmt.Errorf("failed to create native request: %w", err)
|
||||
}
|
||||
|
||||
// Set headers consistent with previous direct client usage
|
||||
req.Header.Set("Accept", "application/sparql-results+json, application/json")
|
||||
// Note: Specific User-Agent was set per call site previously, might need adjustment
|
||||
// if different user agents are desired per service.
|
||||
req.Header.Set("User-Agent", "MCPGoServerExample/0.1 (Native Client)")
|
||||
|
||||
log.Printf("[MCP] Debug: Native Fetch executing request...")
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
// Let context cancellation errors pass through
|
||||
if ctx.Err() != nil {
|
||||
log.Printf("[MCP] Debug: Native Fetch context cancelled: %v", ctx.Err())
|
||||
return 0, nil, ctx.Err()
|
||||
}
|
||||
log.Printf("[MCP] Error: Native Fetch HTTP request failed: %v", err)
|
||||
return 0, nil, fmt.Errorf("native HTTP request failed: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
statusCode = resp.StatusCode
|
||||
log.Printf("[MCP] Debug: Native Fetch received status code: %d", statusCode)
|
||||
responseBodyBytes, readErr := io.ReadAll(resp.Body)
|
||||
if readErr != nil {
|
||||
// Still return status code if body read fails
|
||||
log.Printf("[MCP] Error: Native Fetch failed to read response body: %v", readErr)
|
||||
return statusCode, nil, fmt.Errorf("failed to read native response body: %w", readErr)
|
||||
}
|
||||
responseBody = responseBodyBytes
|
||||
log.Printf("[MCP] Debug: Native Fetch read %d bytes from response body", len(responseBodyBytes))
|
||||
|
||||
// Mimic behavior of returning body even on error status
|
||||
if statusCode < 200 || statusCode >= 300 {
|
||||
log.Printf("[MCP] Warn: Native Fetch request failed with status %d. Body: %s", statusCode, string(responseBody))
|
||||
return statusCode, responseBody, fmt.Errorf("native request failed with status %d", statusCode)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Native Fetch completed successfully.")
|
||||
return statusCode, responseBody, nil
|
||||
}
|
||||
@@ -1,171 +0,0 @@
|
||||
//go:build wasm
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"log"
|
||||
"time"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// --- WASM Host Function Import --- (Copied from user prompt)
|
||||
|
||||
//go:wasmimport env http_fetch
|
||||
//go:noescape
|
||||
func http_fetch(
|
||||
// Request details
|
||||
urlPtr, urlLen uint32,
|
||||
methodPtr, methodLen uint32,
|
||||
bodyPtr, bodyLen uint32,
|
||||
timeoutMillis uint32,
|
||||
// Result pointers
|
||||
resultStatusPtr uint32,
|
||||
resultBodyPtr uint32, resultBodyCapacity uint32, resultBodyLenPtr uint32,
|
||||
resultErrorPtr uint32, resultErrorCapacity uint32, resultErrorLenPtr uint32,
|
||||
) uint32 // 0 on success, 1 on host error
|
||||
|
||||
// --- Go Wrapper for Host Function --- (Copied from user prompt)
|
||||
|
||||
const (
|
||||
defaultResponseBodyCapacity = 1024 * 10 // 10 KB for response body
|
||||
defaultResponseErrorCapacity = 1024 // 1 KB for error messages
|
||||
)
|
||||
|
||||
// callHostHTTPFetch provides a Go-friendly interface to the http_fetch host function.
|
||||
func callHostHTTPFetch(ctx context.Context, method, url string, requestBody []byte, timeout time.Duration) (statusCode int, responseBody []byte, err error) {
|
||||
log.Printf("[MCP] Debug: WASM Fetch (Host Call): Method=%s, URL=%s, Timeout=%v", method, url, timeout)
|
||||
|
||||
// --- Prepare Input Pointers ---
|
||||
urlPtr, urlLen := stringToPtr(url)
|
||||
methodPtr, methodLen := stringToPtr(method)
|
||||
bodyPtr, bodyLen := bytesToPtr(requestBody)
|
||||
|
||||
timeoutMillis := uint32(timeout.Milliseconds())
|
||||
if timeoutMillis <= 0 {
|
||||
timeoutMillis = 30000 // Default 30 seconds if 0 or negative
|
||||
}
|
||||
if timeout == 0 {
|
||||
// Handle case where context might already be cancelled
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
log.Printf("[MCP] Debug: WASM Fetch context cancelled before host call: %v", ctx.Err())
|
||||
return 0, nil, ctx.Err()
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
// --- Prepare Output Buffers and Pointers ---
|
||||
resultBodyBuffer := make([]byte, defaultResponseBodyCapacity)
|
||||
resultErrorBuffer := make([]byte, defaultResponseErrorCapacity)
|
||||
|
||||
resultStatus := uint32(0)
|
||||
resultBodyLen := uint32(0)
|
||||
resultErrorLen := uint32(0)
|
||||
|
||||
resultStatusPtr := &resultStatus
|
||||
resultBodyPtr, resultBodyCapacity := bytesToPtr(resultBodyBuffer)
|
||||
resultBodyLenPtr := &resultBodyLen
|
||||
resultErrorPtr, resultErrorCapacity := bytesToPtr(resultErrorBuffer)
|
||||
resultErrorLenPtr := &resultErrorLen
|
||||
|
||||
// --- Call the Host Function ---
|
||||
log.Printf("[MCP] Debug: WASM Fetch calling host function http_fetch...")
|
||||
hostReturnCode := http_fetch(
|
||||
urlPtr, urlLen,
|
||||
methodPtr, methodLen,
|
||||
bodyPtr, bodyLen,
|
||||
timeoutMillis,
|
||||
uint32(uintptr(unsafe.Pointer(resultStatusPtr))),
|
||||
resultBodyPtr, resultBodyCapacity, uint32(uintptr(unsafe.Pointer(resultBodyLenPtr))),
|
||||
resultErrorPtr, resultErrorCapacity, uint32(uintptr(unsafe.Pointer(resultErrorLenPtr))),
|
||||
)
|
||||
log.Printf("[MCP] Debug: WASM Fetch host function returned code: %d", hostReturnCode)
|
||||
|
||||
// --- Process Results ---
|
||||
if hostReturnCode != 0 {
|
||||
err = errors.New("host function http_fetch failed internally")
|
||||
log.Printf("[MCP] Error: WASM Fetch host function failed: %v", err)
|
||||
return 0, nil, err
|
||||
}
|
||||
|
||||
statusCode = int(resultStatus)
|
||||
log.Printf("[MCP] Debug: WASM Fetch received status code from host: %d", statusCode)
|
||||
|
||||
if resultErrorLen > 0 {
|
||||
actualErrorLen := min(resultErrorLen, resultErrorCapacity)
|
||||
errMsg := string(resultErrorBuffer[:actualErrorLen])
|
||||
err = errors.New(errMsg)
|
||||
log.Printf("[MCP] Error: WASM Fetch received error from host: %s", errMsg)
|
||||
return statusCode, nil, err
|
||||
}
|
||||
|
||||
if resultBodyLen > 0 {
|
||||
actualBodyLen := min(resultBodyLen, resultBodyCapacity)
|
||||
responseBody = make([]byte, actualBodyLen)
|
||||
copy(responseBody, resultBodyBuffer[:actualBodyLen])
|
||||
log.Printf("[MCP] Debug: WASM Fetch received %d bytes from host body (reported size: %d)", actualBodyLen, resultBodyLen)
|
||||
|
||||
if resultBodyLen > resultBodyCapacity {
|
||||
err = fmt.Errorf("response body truncated: received %d bytes, but actual size was %d", actualBodyLen, resultBodyLen)
|
||||
log.Printf("[MCP] Warn: WASM Fetch %v", err)
|
||||
return statusCode, responseBody, err // Return truncated body with error
|
||||
}
|
||||
log.Printf("[MCP] Debug: WASM Fetch completed successfully.")
|
||||
return statusCode, responseBody, nil
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: WASM Fetch completed successfully (no body, no error).")
|
||||
return statusCode, nil, nil
|
||||
}
|
||||
|
||||
// --- Pointer Helper Functions --- (Copied from user prompt)
|
||||
|
||||
func stringToPtr(s string) (ptr uint32, length uint32) {
|
||||
if len(s) == 0 {
|
||||
return 0, 0
|
||||
}
|
||||
// Use unsafe.StringData for potentially safer pointer access in modern Go
|
||||
// Needs Go 1.20+
|
||||
// return uint32(uintptr(unsafe.Pointer(unsafe.StringData(s)))), uint32(len(s))
|
||||
// Fallback to slice conversion for broader compatibility / if StringData isn't available
|
||||
buf := []byte(s)
|
||||
return bytesToPtr(buf)
|
||||
}
|
||||
|
||||
func bytesToPtr(b []byte) (ptr uint32, length uint32) {
|
||||
if len(b) == 0 {
|
||||
return 0, 0
|
||||
}
|
||||
// Use unsafe.SliceData for potentially safer pointer access in modern Go
|
||||
// Needs Go 1.20+
|
||||
// return uint32(uintptr(unsafe.Pointer(unsafe.SliceData(b)))), uint32(len(b))
|
||||
// Fallback for broader compatibility
|
||||
return uint32(uintptr(unsafe.Pointer(&b[0]))), uint32(len(b))
|
||||
}
|
||||
|
||||
func min(a, b uint32) uint32 {
|
||||
if a < b {
|
||||
return a
|
||||
}
|
||||
return b
|
||||
}
|
||||
|
||||
// --- WASM Fetcher Implementation ---
|
||||
type wasmFetcher struct{}
|
||||
|
||||
// Ensure wasmFetcher implements Fetcher
|
||||
var _ Fetcher = (*wasmFetcher)(nil)
|
||||
|
||||
// NewFetcher creates the WASM host function fetcher.
|
||||
func NewFetcher() Fetcher {
|
||||
log.Println("[MCP] Debug: Using WASM host fetcher")
|
||||
return &wasmFetcher{}
|
||||
}
|
||||
|
||||
func (wf *wasmFetcher) Fetch(ctx context.Context, method, url string, requestBody []byte, timeout time.Duration) (statusCode int, responseBody []byte, err error) {
|
||||
// Directly call the wrapper which now contains logging
|
||||
return callHostHTTPFetch(ctx, method, url, requestBody, timeout)
|
||||
}
|
||||
@@ -1,19 +0,0 @@
|
||||
module mcp-server
|
||||
|
||||
go 1.24.2
|
||||
|
||||
require github.com/metoro-io/mcp-golang v0.11.0
|
||||
|
||||
require (
|
||||
github.com/bahlo/generic-list-go v0.2.0 // indirect
|
||||
github.com/buger/jsonparser v1.1.1 // indirect
|
||||
github.com/invopop/jsonschema v0.12.0 // indirect
|
||||
github.com/mailru/easyjson v0.7.7 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/tidwall/gjson v1.18.0 // indirect
|
||||
github.com/tidwall/match v1.1.1 // indirect
|
||||
github.com/tidwall/pretty v1.2.1 // indirect
|
||||
github.com/tidwall/sjson v1.2.5 // indirect
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
||||
@@ -1,34 +0,0 @@
|
||||
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
|
||||
github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
|
||||
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
|
||||
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/invopop/jsonschema v0.12.0 h1:6ovsNSuvn9wEQVOyc72aycBMVQFKz7cPdMJn10CvzRI=
|
||||
github.com/invopop/jsonschema v0.12.0/go.mod h1:ffZ5Km5SWWRAIN6wbDXItl95euhFz2uON45H2qjYt+0=
|
||||
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
|
||||
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
|
||||
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
|
||||
github.com/metoro-io/mcp-golang v0.11.0 h1:1k+VSE9QaeMTLn0gJ3FgE/DcjsCBsLFnz5eSFbgXUiI=
|
||||
github.com/metoro-io/mcp-golang v0.11.0/go.mod h1:ifLP9ZzKpN1UqFWNTpAHOqSvNkMK6b7d1FSZ5Lu0lN0=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
|
||||
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
|
||||
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
|
||||
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
|
||||
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
|
||||
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
|
||||
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
|
||||
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
|
||||
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
|
||||
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
|
||||
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
@@ -1,289 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"flag"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/url"
|
||||
"os"
|
||||
|
||||
mcp_golang "github.com/metoro-io/mcp-golang"
|
||||
"github.com/metoro-io/mcp-golang/transport/stdio"
|
||||
)
|
||||
|
||||
type Content struct {
|
||||
Title string `json:"title" jsonschema:"required,description=The title to submit"`
|
||||
Description *string `json:"description" jsonschema:"description=The description to submit"`
|
||||
}
|
||||
type MyFunctionsArguments struct {
|
||||
Submitter string `json:"submitter" jsonschema:"required,description=The name of the thing calling this tool (openai, google, claude, etc)"`
|
||||
Content Content `json:"content" jsonschema:"required,description=The content of the message"`
|
||||
}
|
||||
|
||||
type ArtistBiography struct {
|
||||
ID string `json:"id" jsonschema:"required,description=The id of the artist"`
|
||||
Name string `json:"name" jsonschema:"required,description=The name of the artist"`
|
||||
MBID string `json:"mbid" jsonschema:"description=The mbid of the artist"`
|
||||
}
|
||||
|
||||
type ArtistURLArgs struct {
|
||||
ID string `json:"id" jsonschema:"required,description=The id of the artist"`
|
||||
Name string `json:"name" jsonschema:"required,description=The name of the artist"`
|
||||
MBID string `json:"mbid" jsonschema:"description=The mbid of the artist"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
log.Println("[MCP] Starting mcp-server...")
|
||||
done := make(chan struct{})
|
||||
|
||||
// Create the appropriate fetcher (native or WASM based on build tags)
|
||||
log.Printf("[MCP] Debug: Creating fetcher...")
|
||||
fetcher := NewFetcher()
|
||||
log.Printf("[MCP] Debug: Fetcher created successfully.")
|
||||
|
||||
// --- Command Line Flag Handling ---
|
||||
nameFlag := flag.String("name", "", "Artist name to query directly")
|
||||
mbidFlag := flag.String("mbid", "", "Artist MBID to query directly")
|
||||
flag.Parse()
|
||||
|
||||
if *nameFlag != "" || *mbidFlag != "" {
|
||||
log.Printf("[MCP] Debug: Running tools directly via CLI flags (Name: '%s', MBID: '%s')", *nameFlag, *mbidFlag)
|
||||
fmt.Println("--- Running Tools Directly ---")
|
||||
|
||||
// Call getArtistBiography
|
||||
fmt.Printf("Calling get_artist_biography (Name: '%s', MBID: '%s')...\n", *nameFlag, *mbidFlag)
|
||||
if *mbidFlag == "" && *nameFlag == "" {
|
||||
fmt.Println(" Error: --mbid or --name is required for get_artist_biography")
|
||||
} else {
|
||||
// Use context.Background for CLI calls
|
||||
log.Printf("[MCP] Debug: CLI calling getArtistBiography...")
|
||||
bio, bioErr := getArtistBiography(fetcher, context.Background(), "cli", *nameFlag, *mbidFlag)
|
||||
if bioErr != nil {
|
||||
fmt.Printf(" Error: %v\n", bioErr)
|
||||
log.Printf("[MCP] Error: CLI getArtistBiography failed: %v", bioErr)
|
||||
} else {
|
||||
fmt.Printf(" Result: %s\n", bio)
|
||||
log.Printf("[MCP] Debug: CLI getArtistBiography succeeded.")
|
||||
}
|
||||
}
|
||||
|
||||
// Call getArtistURL
|
||||
fmt.Printf("Calling get_artist_url (Name: '%s', MBID: '%s')...\n", *nameFlag, *mbidFlag)
|
||||
if *mbidFlag == "" && *nameFlag == "" {
|
||||
fmt.Println(" Error: --mbid or --name is required for get_artist_url")
|
||||
} else {
|
||||
log.Printf("[MCP] Debug: CLI calling getArtistURL...")
|
||||
urlResult, urlErr := getArtistURL(fetcher, context.Background(), "cli", *nameFlag, *mbidFlag)
|
||||
if urlErr != nil {
|
||||
fmt.Printf(" Error: %v\n", urlErr)
|
||||
log.Printf("[MCP] Error: CLI getArtistURL failed: %v", urlErr)
|
||||
} else {
|
||||
fmt.Printf(" Result: %s\n", urlResult)
|
||||
log.Printf("[MCP] Debug: CLI getArtistURL succeeded.")
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println("-----------------------------")
|
||||
log.Printf("[MCP] Debug: CLI execution finished.")
|
||||
return // Exit after direct execution
|
||||
}
|
||||
// --- End Command Line Flag Handling ---
|
||||
|
||||
log.Printf("[MCP] Debug: Initializing MCP server...")
|
||||
server := mcp_golang.NewServer(stdio.NewStdioServerTransport())
|
||||
|
||||
log.Printf("[MCP] Debug: Registering tool 'hello'...")
|
||||
err := server.RegisterTool("hello", "Say hello to a person", func(arguments MyFunctionsArguments) (*mcp_golang.ToolResponse, error) {
|
||||
log.Printf("[MCP] Debug: Tool 'hello' called with args: %+v", arguments)
|
||||
return mcp_golang.NewToolResponse(mcp_golang.NewTextContent(fmt.Sprintf("Hello, %server!", arguments.Submitter))), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register tool 'hello': %v", err)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Registering tool 'get_artist_biography'...")
|
||||
err = server.RegisterTool("get_artist_biography", "Get the biography of an artist", func(arguments ArtistBiography) (*mcp_golang.ToolResponse, error) {
|
||||
log.Printf("[MCP] Debug: Tool 'get_artist_biography' called with args: %+v", arguments)
|
||||
// Using background context in handlers as request context isn't passed through MCP library currently
|
||||
bio, err := getArtistBiography(fetcher, context.Background(), arguments.ID, arguments.Name, arguments.MBID)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: getArtistBiography handler failed: %v", err)
|
||||
return nil, fmt.Errorf("handler returned an error: %w", err) // Return structured error
|
||||
}
|
||||
log.Printf("[MCP] Debug: Tool 'get_artist_biography' succeeded.")
|
||||
return mcp_golang.NewToolResponse(mcp_golang.NewTextContent(bio)), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register tool 'get_artist_biography': %v", err)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Registering tool 'get_artist_url'...")
|
||||
err = server.RegisterTool("get_artist_url", "Get the artist's specific Wikipedia URL via MBID, or a search URL using name as fallback", func(arguments ArtistURLArgs) (*mcp_golang.ToolResponse, error) {
|
||||
log.Printf("[MCP] Debug: Tool 'get_artist_url' called with args: %+v", arguments)
|
||||
urlResult, err := getArtistURL(fetcher, context.Background(), arguments.ID, arguments.Name, arguments.MBID)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: getArtistURL handler failed: %v", err)
|
||||
return nil, fmt.Errorf("handler returned an error: %w", err)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Tool 'get_artist_url' succeeded.")
|
||||
return mcp_golang.NewToolResponse(mcp_golang.NewTextContent(urlResult)), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register tool 'get_artist_url': %v", err)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Registering prompt 'prompt_test'...")
|
||||
err = server.RegisterPrompt("prompt_test", "This is a test prompt", func(arguments Content) (*mcp_golang.PromptResponse, error) {
|
||||
log.Printf("[MCP] Debug: Prompt 'prompt_test' called with args: %+v", arguments)
|
||||
return mcp_golang.NewPromptResponse("description", mcp_golang.NewPromptMessage(mcp_golang.NewTextContent(fmt.Sprintf("Hello, %server!", arguments.Title)), mcp_golang.RoleUser)), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register prompt 'prompt_test': %v", err)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Registering resource 'test://resource'...")
|
||||
err = server.RegisterResource("test://resource", "resource_test", "This is a test resource", "application/json", func() (*mcp_golang.ResourceResponse, error) {
|
||||
log.Printf("[MCP] Debug: Resource 'test://resource' called")
|
||||
return mcp_golang.NewResourceResponse(mcp_golang.NewTextEmbeddedResource("test://resource", "This is a test resource", "application/json")), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register resource 'test://resource': %v", err)
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: Registering resource 'file://app_logs'...")
|
||||
err = server.RegisterResource("file://app_logs", "app_logs", "The app logs", "text/plain", func() (*mcp_golang.ResourceResponse, error) {
|
||||
log.Printf("[MCP] Debug: Resource 'file://app_logs' called")
|
||||
return mcp_golang.NewResourceResponse(mcp_golang.NewTextEmbeddedResource("file://app_logs", "This is a test resource", "text/plain")), nil
|
||||
})
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Failed to register resource 'file://app_logs': %v", err)
|
||||
}
|
||||
|
||||
log.Println("[MCP] MCP server initialized and starting to serve...")
|
||||
err = server.Serve()
|
||||
if err != nil {
|
||||
log.Fatalf("[MCP] Fatal: Server exited with error: %v", err)
|
||||
}
|
||||
|
||||
log.Println("[MCP] Server exited cleanly.")
|
||||
<-done // Keep running until interrupted (though server.Serve() is blocking)
|
||||
}
|
||||
|
||||
func getArtistBiography(fetcher Fetcher, ctx context.Context, id, name, mbid string) (string, error) {
|
||||
log.Printf("[MCP] Debug: getArtistBiography called (id: %s, name: %s, mbid: %s)", id, name, mbid)
|
||||
if mbid == "" {
|
||||
fmt.Fprintf(os.Stderr, "MBID not provided, attempting DBpedia lookup by name: %s\n", name)
|
||||
log.Printf("[MCP] Debug: MBID not provided, attempting DBpedia lookup by name: %s", name)
|
||||
} else {
|
||||
// 1. Attempt Wikidata MBID lookup first
|
||||
log.Printf("[MCP] Debug: Attempting Wikidata URL lookup for MBID: %s", mbid)
|
||||
wikiURL, err := GetArtistWikipediaURL(fetcher, ctx, mbid)
|
||||
if err == nil {
|
||||
// 1a. Found Wikidata URL, now fetch from Wikipedia API
|
||||
log.Printf("[MCP] Debug: Found Wikidata URL '%s', fetching bio from Wikipedia API...", wikiURL)
|
||||
bio, errBio := GetBioFromWikipediaAPI(fetcher, ctx, wikiURL)
|
||||
if errBio == nil {
|
||||
log.Printf("[MCP] Debug: Successfully fetched bio from Wikipedia API for '%s'.", name)
|
||||
return bio, nil // Success via Wikidata/Wikipedia!
|
||||
} else {
|
||||
// Failed to get bio even though URL was found
|
||||
log.Printf("[MCP] Error: Found Wikipedia URL (%s) via MBID %s, but failed to fetch bio: %v", wikiURL, mbid, errBio)
|
||||
fmt.Fprintf(os.Stderr, "Found Wikipedia URL (%s) via MBID %s, but failed to fetch bio: %v\n", wikiURL, mbid, errBio)
|
||||
// Fall through to try DBpedia by name as a last resort?
|
||||
// Let's fall through for now.
|
||||
}
|
||||
} else if !errors.Is(err, ErrNotFound) {
|
||||
// Wikidata lookup failed for a reason other than not found (e.g., network)
|
||||
log.Printf("[MCP] Error: Wikidata URL lookup failed for MBID %s (non-NotFound error): %v", mbid, err)
|
||||
fmt.Fprintf(os.Stderr, "Wikidata URL lookup failed for MBID %s (non-NotFound error): %v\n", mbid, err)
|
||||
// Don't proceed to DBpedia name lookup if Wikidata had a technical failure
|
||||
return "", fmt.Errorf("Wikidata lookup failed: %w", err)
|
||||
} else {
|
||||
// Wikidata lookup returned ErrNotFound for MBID
|
||||
log.Printf("[MCP] Debug: MBID %s not found on Wikidata, attempting DBpedia lookup by name: %s", mbid, name)
|
||||
fmt.Fprintf(os.Stderr, "MBID %s not found on Wikidata, attempting DBpedia lookup by name: %s\n", mbid, name)
|
||||
}
|
||||
}
|
||||
|
||||
// 2. Attempt DBpedia lookup by name (if MBID was missing or failed with ErrNotFound)
|
||||
if name == "" {
|
||||
log.Printf("[MCP] Error: Cannot find artist bio: MBID lookup failed/missing, and no name provided.")
|
||||
return "", fmt.Errorf("cannot find artist: MBID lookup failed or MBID not provided, and no name provided for DBpedia fallback")
|
||||
}
|
||||
log.Printf("[MCP] Debug: Attempting DBpedia bio lookup by name: %s", name)
|
||||
dbpediaBio, errDb := GetArtistBioFromDBpedia(fetcher, ctx, name)
|
||||
if errDb == nil {
|
||||
log.Printf("[MCP] Debug: Successfully fetched bio from DBpedia for '%s'.", name)
|
||||
return dbpediaBio, nil // Success via DBpedia!
|
||||
}
|
||||
|
||||
// 3. If both Wikidata (MBID) and DBpedia (Name) failed
|
||||
if errors.Is(errDb, ErrNotFound) {
|
||||
log.Printf("[MCP] Error: Artist '%s' (MBID: %s) not found via Wikidata or DBpedia name lookup.", name, mbid)
|
||||
return "", fmt.Errorf("artist '%s' (MBID: %s) not found via Wikidata MBID or DBpedia Name lookup", name, mbid)
|
||||
}
|
||||
|
||||
// Return DBpedia's error if it wasn't ErrNotFound
|
||||
log.Printf("[MCP] Error: DBpedia lookup failed for name '%s': %v", name, errDb)
|
||||
return "", fmt.Errorf("DBpedia lookup failed for name '%s': %w", name, errDb)
|
||||
}
|
||||
|
||||
// getArtistURL attempts to find the specific Wikipedia URL using MBID (via Wikidata),
|
||||
// then by Name (via DBpedia), falling back to a search URL using name.
func getArtistURL(fetcher Fetcher, ctx context.Context, id, name, mbid string) (string, error) {
	log.Printf("[MCP] Debug: getArtistURL called (id: %s, name: %s, mbid: %s)", id, name, mbid)
	if mbid == "" {
		fmt.Fprintf(os.Stderr, "getArtistURL: MBID not provided, attempting DBpedia lookup by name: %s\n", name)
		log.Printf("[MCP] Debug: getArtistURL: MBID not provided, attempting DBpedia lookup by name: %s", name)
	} else {
		// Try to get the specific URL from Wikidata using MBID
		log.Printf("[MCP] Debug: getArtistURL: Attempting Wikidata URL lookup for MBID: %s", mbid)
		wikiURL, err := GetArtistWikipediaURL(fetcher, ctx, mbid)
		if err == nil && wikiURL != "" {
			log.Printf("[MCP] Debug: getArtistURL: Found specific URL '%s' via Wikidata MBID.", wikiURL)
			return wikiURL, nil // Found specific URL via MBID
		}
		// Log error if Wikidata lookup failed for reasons other than not found
		if err != nil && !errors.Is(err, ErrNotFound) {
			log.Printf("[MCP] Error: getArtistURL: Wikidata URL lookup failed for MBID %s (non-NotFound error): %v", mbid, err)
			fmt.Fprintf(os.Stderr, "getArtistURL: Wikidata URL lookup failed for MBID %s (non-NotFound error): %v\n", mbid, err)
			// Fall through to try DBpedia if name is available
		} else if errors.Is(err, ErrNotFound) {
			log.Printf("[MCP] Debug: getArtistURL: MBID %s not found on Wikidata, attempting DBpedia lookup by name: %s", mbid, name)
			fmt.Fprintf(os.Stderr, "getArtistURL: MBID %s not found on Wikidata, attempting DBpedia lookup by name: %s\n", mbid, name)
		}
	}

	// Fallback 1: Try DBpedia lookup by name
	if name != "" {
		log.Printf("[MCP] Debug: getArtistURL: Attempting DBpedia URL lookup by name: %s", name)
		dbpediaWikiURL, errDb := GetArtistWikipediaURLFromDBpedia(fetcher, ctx, name)
		if errDb == nil && dbpediaWikiURL != "" {
			log.Printf("[MCP] Debug: getArtistURL: Found specific URL '%s' via DBpedia Name lookup.", dbpediaWikiURL)
			return dbpediaWikiURL, nil // Found specific URL via DBpedia Name lookup
		}
		// Log error if DBpedia lookup failed for reasons other than not found
		if errDb != nil && !errors.Is(errDb, ErrNotFound) {
			log.Printf("[MCP] Error: getArtistURL: DBpedia URL lookup failed for name '%s' (non-NotFound error): %v", name, errDb)
			fmt.Fprintf(os.Stderr, "getArtistURL: DBpedia URL lookup failed for name '%s' (non-NotFound error): %v\n", name, errDb)
			// Fall through to search URL fallback
		} else if errors.Is(errDb, ErrNotFound) {
			log.Printf("[MCP] Debug: getArtistURL: Name '%s' not found on DBpedia, attempting search fallback", name)
			fmt.Fprintf(os.Stderr, "getArtistURL: Name '%s' not found on DBpedia, attempting search fallback\n", name)
		}
	}

	// Fallback 2: Generate a search URL if name is provided
	if name != "" {
		searchURL := fmt.Sprintf("https://en.wikipedia.org/w/index.php?search=%s", url.QueryEscape(name))
		log.Printf("[MCP] Debug: getArtistURL: Falling back to search URL: %s", searchURL)
		fmt.Fprintf(os.Stderr, "getArtistURL: Falling back to search URL: %s\n", searchURL)
		return searchURL, nil
	}

	// Final error: MBID lookup failed (or no MBID given) AND no name provided for fallback
	log.Printf("[MCP] Error: getArtistURL: Cannot generate Wikipedia URL: Lookups failed and no name provided.")
	return "", fmt.Errorf("cannot generate Wikipedia URL: Wikidata/DBpedia lookups failed and no artist name provided for search fallback")
}
@@ -1,205 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"os"
|
||||
"time"
|
||||
)
|
||||
|
||||
const wikidataEndpoint = "https://query.wikidata.org/sparql"
|
||||
|
||||
// ErrNotFound indicates a specific item (like an artist or URL) was not found on Wikidata.
|
||||
var ErrNotFound = errors.New("item not found on Wikidata")
|
||||
|
||||
// Wikidata SPARQL query result structures
|
||||
type SparqlResult struct {
|
||||
Results SparqlBindings `json:"results"`
|
||||
}
|
||||
|
||||
type SparqlBindings struct {
|
||||
Bindings []map[string]SparqlValue `json:"bindings"`
|
||||
}
|
||||
|
||||
type SparqlValue struct {
|
||||
Type string `json:"type"`
|
||||
Value string `json:"value"`
|
||||
Lang string `json:"xml:lang,omitempty"` // Handle language tags like "en"
|
||||
}
|
||||
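These structs mirror the standard SPARQL 1.1 JSON results layout (`results.bindings[*].<var>.{type,value}`). A minimal illustration of how they decode, using a hand-written payload rather than a captured Wikidata response:

```go
// Illustrative only: decodes an invented SPARQL JSON payload using the
// SparqlResult/SparqlBindings/SparqlValue types declared above (this file's
// "encoding/json", "fmt" and "log" imports are assumed to be in scope).
func exampleDecodeSparqlResult() {
	payload := []byte(`{"results":{"bindings":[
		{"article":{"type":"uri","value":"https://en.wikipedia.org/wiki/The_Beatles"}}
	]}}`)
	var res SparqlResult
	if err := json.Unmarshal(payload, &res); err != nil {
		log.Printf("[MCP] Error: example decode failed: %v", err)
		return
	}
	// Prints: https://en.wikipedia.org/wiki/The_Beatles
	fmt.Println(res.Results.Bindings[0]["article"].Value)
}
```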
|
||||
// GetArtistBioFromWikidata queries Wikidata for an artist's description using their MBID.
|
||||
// NOTE: This function is currently UNUSED as the main logic prefers Wikipedia/DBpedia.
|
||||
func GetArtistBioFromWikidata(client *http.Client, mbid string) (string, error) {
|
||||
log.Printf("[MCP] Debug: GetArtistBioFromWikidata called for MBID: %s", mbid)
|
||||
if mbid == "" {
|
||||
log.Printf("[MCP] Error: GetArtistBioFromWikidata requires an MBID.")
|
||||
return "", fmt.Errorf("MBID is required to query Wikidata")
|
||||
}
|
||||
|
||||
// SPARQL query to find the English description for an entity with a specific MusicBrainz ID
|
||||
sparqlQuery := fmt.Sprintf(`
|
||||
SELECT ?artistDescription WHERE {
|
||||
?artist wdt:P434 "%s" . # P434 is the property for MusicBrainz artist ID
|
||||
OPTIONAL {
|
||||
?artist schema:description ?artistDescription .
|
||||
FILTER(LANG(?artistDescription) = "en")
|
||||
}
|
||||
SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
|
||||
}
|
||||
LIMIT 1`, mbid)
|
||||
|
||||
// Prepare the HTTP request
|
||||
queryValues := url.Values{}
|
||||
queryValues.Set("query", sparqlQuery)
|
||||
queryValues.Set("format", "json")
|
||||
|
||||
reqURL := fmt.Sprintf("%s?%s", wikidataEndpoint, queryValues.Encode())
|
||||
log.Printf("[MCP] Debug: Wikidata Bio Request URL: %s", reqURL)
|
||||
|
||||
req, err := http.NewRequest("GET", reqURL, nil)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Failed to create Wikidata bio request: %v", err)
|
||||
return "", fmt.Errorf("failed to create Wikidata request: %w", err)
|
||||
}
|
||||
req.Header.Set("Accept", "application/sparql-results+json")
|
||||
req.Header.Set("User-Agent", "MCPGoServerExample/0.1 (https://example.com/contact)") // Good practice to identify your client
|
||||
|
||||
// Execute the request
|
||||
log.Printf("[MCP] Debug: Executing Wikidata bio request...")
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Failed to execute Wikidata bio request: %v", err)
|
||||
return "", fmt.Errorf("failed to execute Wikidata request: %w", err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
// Attempt to read body for more error info, but don't fail if it doesn't work
|
||||
bodyBytes, readErr := io.ReadAll(resp.Body)
|
||||
errorMsg := "Could not read error body"
|
||||
if readErr == nil {
|
||||
errorMsg = string(bodyBytes)
|
||||
}
|
||||
log.Printf("[MCP] Error: Wikidata bio query failed with status %d: %s", resp.StatusCode, errorMsg)
|
||||
return "", fmt.Errorf("Wikidata query failed with status %d: %s", resp.StatusCode, errorMsg)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Wikidata bio query successful (status %d).", resp.StatusCode)
|
||||
|
||||
// Parse the response
|
||||
var result SparqlResult
|
||||
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
|
||||
log.Printf("[MCP] Error: Failed to decode Wikidata bio response: %v", err)
|
||||
return "", fmt.Errorf("failed to decode Wikidata response: %w", err)
|
||||
}
|
||||
|
||||
// Extract the description
|
||||
if len(result.Results.Bindings) > 0 {
|
||||
if descriptionVal, ok := result.Results.Bindings[0]["artistDescription"]; ok {
|
||||
log.Printf("[MCP] Debug: Found description for MBID %s", mbid)
|
||||
return descriptionVal.Value, nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Warn: No English description found on Wikidata for MBID %s", mbid)
|
||||
return "", fmt.Errorf("no English description found on Wikidata for MBID %s", mbid)
|
||||
}
|
||||
|
||||
// GetArtistWikipediaURL queries Wikidata for an artist's English Wikipedia page URL using MBID.
|
||||
// It requires an MBID; the name-based DBpedia fallback is handled by the caller (getArtistURL).
|
||||
func GetArtistWikipediaURL(fetcher Fetcher, ctx context.Context, mbid string) (string, error) {
|
||||
log.Printf("[MCP] Debug: GetArtistWikipediaURL called for MBID: %s", mbid)
|
||||
// 1. Try finding by MBID
|
||||
if mbid == "" {
|
||||
log.Printf("[MCP] Error: GetArtistWikipediaURL requires an MBID.")
|
||||
return "", fmt.Errorf("MBID is required to find Wikipedia URL on Wikidata")
|
||||
} else {
|
||||
// SPARQL query to find the enwiki URL for an entity with a specific MusicBrainz ID
|
||||
sparqlQuery := fmt.Sprintf(`
|
||||
SELECT ?article WHERE {
|
||||
?artist wdt:P434 "%s" . # P434 is MusicBrainz artist ID
|
||||
?article schema:about ?artist ;
|
||||
schema:isPartOf <https://en.wikipedia.org/> .
|
||||
}
|
||||
LIMIT 1`, mbid)
|
||||
|
||||
log.Printf("[MCP] Debug: Executing Wikidata URL query for MBID: %s", mbid)
|
||||
foundURL, err := executeWikidataURLQuery(fetcher, ctx, sparqlQuery)
|
||||
if err == nil && foundURL != "" {
|
||||
log.Printf("[MCP] Debug: Found Wikipedia URL '%s' via MBID %s", foundURL, mbid)
|
||||
return foundURL, nil // Found via MBID
|
||||
}
|
||||
// Use the specific ErrNotFound
|
||||
if errors.Is(err, ErrNotFound) {
|
||||
log.Printf("[MCP] Debug: MBID %s not found on Wikidata for URL lookup.", mbid)
|
||||
return "", ErrNotFound // Explicitly return ErrNotFound
|
||||
}
|
||||
// Log other errors
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Wikidata URL lookup via MBID %s failed: %v", mbid, err)
|
||||
fmt.Fprintf(os.Stderr, "Wikidata URL lookup via MBID %s failed: %v\n", mbid, err)
|
||||
return "", fmt.Errorf("Wikidata URL lookup via MBID failed: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Defensive fallback: only reachable if the Wikidata query succeeded but returned an empty URL.
|
||||
log.Printf("[MCP] Warn: Reached end of GetArtistWikipediaURL unexpectedly for MBID %s", mbid)
|
||||
return "", ErrNotFound // Return ErrNotFound if somehow reached
|
||||
}
|
||||
|
||||
// executeWikidataURLQuery is a helper to run SPARQL and extract the first bound URL for '?article'.
|
||||
func executeWikidataURLQuery(fetcher Fetcher, ctx context.Context, sparqlQuery string) (string, error) {
|
||||
log.Printf("[MCP] Debug: executeWikidataURLQuery called.")
|
||||
queryValues := url.Values{}
|
||||
queryValues.Set("query", sparqlQuery)
|
||||
queryValues.Set("format", "json")
|
||||
|
||||
reqURL := fmt.Sprintf("%s?%s", wikidataEndpoint, queryValues.Encode())
|
||||
log.Printf("[MCP] Debug: Wikidata Sparql Request URL: %s", reqURL)
|
||||
|
||||
// Directly use the fetcher
|
||||
// Note: Headers (Accept, User-Agent) are now handled by the Fetcher implementation
|
||||
// The WASM fetcher currently doesn't support setting them via the host func interface.
|
||||
// Timeout is handled via context for native, and passed to host func for WASM.
|
||||
// Let's use a default timeout here if not provided via context (e.g., 15s)
|
||||
// TODO: Consider making timeout configurable or passed down
|
||||
timeout := 15 * time.Second
|
||||
if deadline, ok := ctx.Deadline(); ok {
|
||||
timeout = time.Until(deadline)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Fetching from Wikidata with timeout: %v", timeout)
|
||||
|
||||
statusCode, bodyBytes, err := fetcher.Fetch(ctx, "GET", reqURL, nil, timeout)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Fetcher failed for Wikidata SPARQL request: %v", err)
|
||||
return "", fmt.Errorf("failed to execute Wikidata request: %w", err)
|
||||
}
|
||||
|
||||
// Check status code. Fetcher interface implies body might be returned even on error.
|
||||
if statusCode != http.StatusOK {
|
||||
log.Printf("[MCP] Error: Wikidata SPARQL query failed with status %d: %s", statusCode, string(bodyBytes))
|
||||
return "", fmt.Errorf("Wikidata query failed with status %d: %s", statusCode, string(bodyBytes))
|
||||
}
|
||||
log.Printf("[MCP] Debug: Wikidata SPARQL query successful (status %d), %d bytes received.", statusCode, len(bodyBytes))
|
||||
|
||||
var result SparqlResult
|
||||
if err := json.Unmarshal(bodyBytes, &result); err != nil { // Use Unmarshal for byte slice
|
||||
log.Printf("[MCP] Error: Failed to decode Wikidata SPARQL response: %v", err)
|
||||
return "", fmt.Errorf("failed to decode Wikidata response: %w", err)
|
||||
}
|
||||
|
||||
if len(result.Results.Bindings) > 0 {
|
||||
if articleVal, ok := result.Results.Bindings[0]["article"]; ok {
|
||||
log.Printf("[MCP] Debug: Found Wikidata article URL: %s", articleVal.Value)
|
||||
return articleVal.Value, nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Debug: No Wikidata article URL found in SPARQL response.")
|
||||
return "", ErrNotFound
|
||||
}
|
||||
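The `Fetcher` abstraction used throughout these lookups is defined elsewhere in the mcp-server sources and does not appear in this diff. Judging only from the call sites above (`fetcher.Fetch(ctx, "GET", reqURL, nil, timeout)` returning a status code, the body bytes and an error), it is roughly shaped like the sketch below; the body parameter type and the exact method set are assumptions:

```go
// Sketch inferred from usage in this file, not the actual definition.
type Fetcher interface {
	// Fetch performs an HTTP request and returns the status code, the response
	// body and any transport error. One implementation would wrap net/http for
	// native builds; another would delegate to the host's http_fetch function
	// when compiled to WASM.
	Fetch(ctx context.Context, method, url string, body []byte, timeout time.Duration) (status int, respBody []byte, err error)
}
```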
@@ -1,121 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const mediaWikiAPIEndpoint = "https://en.wikipedia.org/w/api.php"
|
||||
|
||||
// Structures for parsing MediaWiki API response (query extracts)
|
||||
type MediaWikiQueryResult struct {
|
||||
Query MediaWikiQuery `json:"query"`
|
||||
}
|
||||
|
||||
type MediaWikiQuery struct {
|
||||
Pages map[string]MediaWikiPage `json:"pages"`
|
||||
}
|
||||
|
||||
type MediaWikiPage struct {
|
||||
PageID int `json:"pageid"`
|
||||
Ns int `json:"ns"`
|
||||
Title string `json:"title"`
|
||||
Extract string `json:"extract"`
|
||||
}
|
||||
|
||||
// Default timeout for Wikipedia API requests
|
||||
const defaultWikipediaTimeout = 15 * time.Second
|
||||
|
||||
// GetBioFromWikipediaAPI fetches the introductory text of a Wikipedia page.
|
||||
func GetBioFromWikipediaAPI(fetcher Fetcher, ctx context.Context, wikipediaURL string) (string, error) {
|
||||
log.Printf("[MCP] Debug: GetBioFromWikipediaAPI called for URL: %s", wikipediaURL)
|
||||
pageTitle, err := extractPageTitleFromURL(wikipediaURL)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Could not extract title from Wikipedia URL '%s': %v", wikipediaURL, err)
|
||||
return "", fmt.Errorf("could not extract title from Wikipedia URL %s: %w", wikipediaURL, err)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Extracted Wikipedia page title: %s", pageTitle)
|
||||
|
||||
// Prepare API request parameters
|
||||
apiParams := url.Values{}
|
||||
apiParams.Set("action", "query")
|
||||
apiParams.Set("format", "json")
|
||||
apiParams.Set("prop", "extracts") // Request page extracts
|
||||
apiParams.Set("exintro", "true") // Get only the intro section
|
||||
apiParams.Set("explaintext", "true") // Get plain text instead of HTML
|
||||
apiParams.Set("titles", pageTitle) // Specify the page title
|
||||
apiParams.Set("redirects", "1") // Follow redirects
|
||||
|
||||
reqURL := fmt.Sprintf("%s?%s", mediaWikiAPIEndpoint, apiParams.Encode())
|
||||
log.Printf("[MCP] Debug: MediaWiki API Request URL: %s", reqURL)
|
||||
|
||||
timeout := defaultWikipediaTimeout
|
||||
if deadline, ok := ctx.Deadline(); ok {
|
||||
timeout = time.Until(deadline)
|
||||
}
|
||||
log.Printf("[MCP] Debug: Fetching from MediaWiki with timeout: %v", timeout)
|
||||
|
||||
statusCode, bodyBytes, err := fetcher.Fetch(ctx, "GET", reqURL, nil, timeout)
|
||||
if err != nil {
|
||||
log.Printf("[MCP] Error: Fetcher failed for MediaWiki request (title: '%s'): %v", pageTitle, err)
|
||||
return "", fmt.Errorf("failed to execute MediaWiki request for title '%s': %w", pageTitle, err)
|
||||
}
|
||||
|
||||
if statusCode != http.StatusOK {
|
||||
log.Printf("[MCP] Error: MediaWiki query for '%s' failed with status %d: %s", pageTitle, statusCode, string(bodyBytes))
|
||||
return "", fmt.Errorf("MediaWiki query for '%s' failed with status %d: %s", pageTitle, statusCode, string(bodyBytes))
|
||||
}
|
||||
log.Printf("[MCP] Debug: MediaWiki query successful (status %d), %d bytes received.", statusCode, len(bodyBytes))
|
||||
|
||||
// Parse the response
|
||||
var result MediaWikiQueryResult
|
||||
if err := json.Unmarshal(bodyBytes, &result); err != nil {
|
||||
log.Printf("[MCP] Error: Failed to decode MediaWiki response for '%s': %v", pageTitle, err)
|
||||
return "", fmt.Errorf("failed to decode MediaWiki response for '%s': %w", pageTitle, err)
|
||||
}
|
||||
|
||||
// Extract the text - MediaWiki API returns pages keyed by page ID
|
||||
for pageID, page := range result.Query.Pages {
|
||||
log.Printf("[MCP] Debug: Processing MediaWiki page ID: %s, Title: %s", pageID, page.Title)
|
||||
if page.Extract != "" {
|
||||
// Often includes a newline at the end, trim it
|
||||
log.Printf("[MCP] Debug: Found extract for '%s'. Length: %d", pageTitle, len(page.Extract))
|
||||
return strings.TrimSpace(page.Extract), nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("[MCP] Warn: No extract found in MediaWiki response for title '%s'", pageTitle)
|
||||
return "", fmt.Errorf("no extract found in MediaWiki response for title '%s' (page might not exist or be empty)", pageTitle)
|
||||
}
|
||||
|
||||
// extractPageTitleFromURL attempts to get the page title from a standard Wikipedia URL.
|
||||
// Example: https://en.wikipedia.org/wiki/The_Beatles -> The_Beatles
|
||||
func extractPageTitleFromURL(wikiURL string) (string, error) {
|
||||
parsedURL, err := url.Parse(wikiURL)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if parsedURL.Host != "en.wikipedia.org" {
|
||||
return "", fmt.Errorf("URL host is not en.wikipedia.org: %s", parsedURL.Host)
|
||||
}
|
||||
pathParts := strings.Split(strings.TrimPrefix(parsedURL.Path, "/"), "/")
|
||||
if len(pathParts) < 2 || pathParts[0] != "wiki" {
|
||||
return "", fmt.Errorf("URL path does not match /wiki/<title> format: %s", parsedURL.Path)
|
||||
}
|
||||
title := pathParts[1]
|
||||
if title == "" {
|
||||
return "", fmt.Errorf("extracted title is empty")
|
||||
}
|
||||
// URL Decode the title (e.g., %27 -> ')
|
||||
decodedTitle, err := url.PathUnescape(title)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to decode title '%s': %w", title, err)
|
||||
}
|
||||
return decodedTitle, nil
|
||||
}
|
||||
@@ -1,207 +0,0 @@
|
||||
package mcp
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
mcp "github.com/metoro-io/mcp-golang"
|
||||
"github.com/tetratelabs/wazero"
|
||||
|
||||
"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/core/agents"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
)
|
||||
|
||||
// Constants used by the MCP agent
|
||||
const (
|
||||
McpAgentName = "mcp"
|
||||
initializationTimeout = 5 * time.Second
|
||||
// McpServerPath defines the location of the MCP server executable or WASM module.
|
||||
// McpServerPath = "./core/agents/mcp/mcp-server/mcp-server"
|
||||
McpServerPath = "./core/agents/mcp/mcp-server/mcp-server.wasm"
|
||||
McpToolNameGetBio = "get_artist_biography"
|
||||
McpToolNameGetURL = "get_artist_url"
|
||||
)
|
||||
|
||||
// mcpClient interface matching the methods used from mcp.Client.
|
||||
type mcpClient interface {
|
||||
Initialize(ctx context.Context) (*mcp.InitializeResponse, error)
|
||||
CallTool(ctx context.Context, toolName string, args any) (*mcp.ToolResponse, error)
|
||||
}
|
||||
|
||||
// mcpImplementation defines the common interface for both native and WASM MCP agents.
|
||||
// This allows the main MCPAgent to delegate calls without knowing the underlying type.
|
||||
type mcpImplementation interface {
|
||||
Close() error // For cleaning up resources associated with this specific implementation.
|
||||
|
||||
// callMCPTool is the core method implemented differently by native/wasm
|
||||
callMCPTool(ctx context.Context, toolName string, args any) (string, error)
|
||||
}
|
||||
|
||||
// MCPAgent is the public-facing agent registered with Navidrome.
|
||||
// It acts as a wrapper around the actual implementation (native or WASM).
|
||||
type MCPAgent struct {
|
||||
// No mutex needed here if impl is set once at construction
|
||||
// and the implementation handles its own internal state synchronization.
|
||||
impl mcpImplementation
|
||||
|
||||
// Shared Wazero resources (runtime, cache) are managed externally
|
||||
// and closed separately, likely during application shutdown.
|
||||
}
|
||||
|
||||
// mcpConstructor creates the appropriate MCP implementation (native or WASM)
|
||||
// and wraps it in the MCPAgent.
|
||||
func mcpConstructor(ds model.DataStore) agents.Interface {
|
||||
if _, err := os.Stat(McpServerPath); os.IsNotExist(err) {
|
||||
log.Warn("MCP server executable/WASM not found, disabling agent", "path", McpServerPath, "error", err)
|
||||
return nil
|
||||
}
|
||||
|
||||
var agentImpl mcpImplementation
|
||||
var err error
|
||||
|
||||
if strings.HasSuffix(McpServerPath, ".wasm") {
|
||||
log.Info("Configuring MCP agent for WASM execution", "path", McpServerPath)
|
||||
ctx := context.Background()
|
||||
|
||||
// Setup Shared Wazero Resources
|
||||
var cache wazero.CompilationCache
|
||||
cacheDir := filepath.Join(conf.Server.DataFolder, "cache", "wazero")
|
||||
if errMkdir := os.MkdirAll(cacheDir, 0755); errMkdir != nil {
|
||||
log.Error(ctx, "Failed to create Wazero cache directory, WASM caching disabled", "path", cacheDir, "error", errMkdir)
|
||||
} else {
|
||||
cache, err = wazero.NewCompilationCacheWithDir(cacheDir)
|
||||
if err != nil {
|
||||
log.Error(ctx, "Failed to create Wazero compilation cache, WASM caching disabled", "path", cacheDir, "error", err)
|
||||
cache = nil
|
||||
}
|
||||
}
|
||||
|
||||
runtimeConfig := wazero.NewRuntimeConfig()
|
||||
if cache != nil {
|
||||
runtimeConfig = runtimeConfig.WithCompilationCache(cache)
|
||||
}
|
||||
|
||||
runtime := wazero.NewRuntimeWithConfig(ctx, runtimeConfig)
|
||||
|
||||
if err = registerHostFunctions(ctx, runtime); err != nil {
|
||||
_ = runtime.Close(ctx)
|
||||
if cache != nil {
|
||||
_ = cache.Close(ctx)
|
||||
}
|
||||
return nil // Fatal error: Host functions required
|
||||
}
|
||||
|
||||
if _, err = wasi_snapshot_preview1.Instantiate(ctx, runtime); err != nil {
|
||||
log.Error(ctx, "Failed to instantiate WASI on shared Wazero runtime, MCP WASM agent disabled", "error", err)
|
||||
_ = runtime.Close(ctx)
|
||||
if cache != nil {
|
||||
_ = cache.Close(ctx)
|
||||
}
|
||||
return nil // Fatal error: WASI required
|
||||
}
|
||||
|
||||
// Compile the module
|
||||
log.Debug(ctx, "Pre-compiling WASM module...", "path", McpServerPath)
|
||||
wasmBytes, err := os.ReadFile(McpServerPath)
|
||||
if err != nil {
|
||||
log.Error(ctx, "Failed to read WASM module file, disabling agent", "path", McpServerPath, "error", err)
|
||||
_ = runtime.Close(ctx)
|
||||
if cache != nil {
|
||||
_ = cache.Close(ctx)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
compiledModule, err := runtime.CompileModule(ctx, wasmBytes)
|
||||
if err != nil {
|
||||
log.Error(ctx, "Failed to pre-compile WASM module, disabling agent", "path", McpServerPath, "error", err)
|
||||
_ = runtime.Close(ctx)
|
||||
if cache != nil {
|
||||
_ = cache.Close(ctx)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
agentImpl = newMCPWasm(runtime, cache, compiledModule)
|
||||
log.Info(ctx, "Shared Wazero runtime, WASI, cache, host functions initialized, and module pre-compiled for MCP agent")
|
||||
|
||||
} else {
|
||||
log.Info("Configuring MCP agent for native execution", "path", McpServerPath)
|
||||
agentImpl = newMCPNative()
|
||||
}
|
||||
|
||||
log.Info("MCP Agent implementation created successfully")
|
||||
return &MCPAgent{impl: agentImpl}
|
||||
}
|
||||
|
||||
// NewAgentForTesting is a constructor specifically for tests.
|
||||
// It creates the appropriate implementation based on McpServerPath
|
||||
// and injects a mock mcpClient into its ClientOverride field.
|
||||
func NewAgentForTesting(mockClient mcpClient) agents.Interface {
|
||||
// We need to replicate the logic from mcpConstructor to determine
|
||||
// the implementation type, but without actually starting processes.
|
||||
|
||||
var agentImpl mcpImplementation
|
||||
|
||||
if strings.HasSuffix(McpServerPath, ".wasm") {
|
||||
// For WASM testing, we might not need the full runtime setup,
|
||||
// just the struct. Pass nil for shared resources for now.
|
||||
// We rely on the mockClient being used before any real WASM interaction.
|
||||
wasmImpl := newMCPWasm(nil, nil, nil)
|
||||
wasmImpl.ClientOverride = mockClient
|
||||
agentImpl = wasmImpl
|
||||
} else {
|
||||
nativeImpl := newMCPNative()
|
||||
nativeImpl.ClientOverride = mockClient
|
||||
agentImpl = nativeImpl
|
||||
}
|
||||
|
||||
return &MCPAgent{impl: agentImpl}
|
||||
}
|
||||
|
||||
func (a *MCPAgent) AgentName() string {
|
||||
return McpAgentName
|
||||
}
|
||||
|
||||
func (a *MCPAgent) GetArtistBiography(ctx context.Context, id, name, mbid string) (string, error) {
|
||||
if a.impl == nil {
|
||||
return "", errors.New("MCP agent implementation is nil")
|
||||
}
|
||||
// Construct args and call the implementation's specific tool caller
|
||||
args := ArtistArgs{ID: id, Name: name, Mbid: mbid}
|
||||
return a.impl.callMCPTool(ctx, McpToolNameGetBio, args)
|
||||
}
|
||||
|
||||
func (a *MCPAgent) GetArtistURL(ctx context.Context, id, name, mbid string) (string, error) {
|
||||
if a.impl == nil {
|
||||
return "", errors.New("MCP agent implementation is nil")
|
||||
}
|
||||
// Construct args and call the implementation's specific tool caller
|
||||
args := ArtistArgs{ID: id, Name: name, Mbid: mbid}
|
||||
return a.impl.callMCPTool(ctx, McpToolNameGetURL, args)
|
||||
}
|
||||
|
||||
// Note: A Close method on MCPAgent itself isn't part of agents.Interface.
|
||||
// Cleanup of the specific implementation happens via impl.Close().
|
||||
// Cleanup of shared Wazero resources needs separate handling (e.g., on app shutdown).
|
||||
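No shutdown hook for those shared resources appears in this diff. As a hedged sketch of what the note above implies, a helper along these lines could be called from Navidrome's shutdown path (the function name, signature and call site are assumptions, not existing code):

```go
// closeSharedWasmResources is a hypothetical cleanup helper for the shared
// Wazero runtime and compilation cache created in mcpConstructor.
func closeSharedWasmResources(ctx context.Context, rt wazero.Runtime, cache wazero.CompilationCache) {
	if rt != nil {
		if err := rt.Close(ctx); err != nil {
			log.Error(ctx, "Failed to close shared Wazero runtime", "error", err)
		}
	}
	if cache != nil {
		if err := cache.Close(ctx); err != nil {
			log.Error(ctx, "Failed to close Wazero compilation cache", "error", err)
		}
	}
}
```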
|
||||
// ArtistArgs defines the structure for MCP tool arguments requiring artist info.
|
||||
type ArtistArgs struct {
|
||||
ID string `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Mbid string `json:"mbid,omitempty"`
|
||||
}
|
||||
|
||||
var _ agents.ArtistBiographyRetriever = (*MCPAgent)(nil)
|
||||
var _ agents.ArtistURLRetriever = (*MCPAgent)(nil)
|
||||
|
||||
func init() {
|
||||
agents.Register(McpAgentName, mcpConstructor)
|
||||
}
|
||||
@@ -1,237 +0,0 @@
|
||||
package mcp_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
mcp_client "github.com/metoro-io/mcp-golang" // Renamed alias for clarity
|
||||
"github.com/navidrome/navidrome/core/agents"
|
||||
"github.com/navidrome/navidrome/core/agents/mcp"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
// Define the mcpClient interface locally for mocking, matching the one
|
||||
// used internally by MCPNative/MCPWasm.
|
||||
type mcpClient interface {
|
||||
Initialize(ctx context.Context) (*mcp_client.InitializeResponse, error)
|
||||
CallTool(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error)
|
||||
}
|
||||
|
||||
// mockMCPClient is a mock implementation of mcpClient for testing.
|
||||
type mockMCPClient struct {
|
||||
InitializeFunc func(ctx context.Context) (*mcp_client.InitializeResponse, error)
|
||||
CallToolFunc func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error)
|
||||
callToolArgs []any // Store args for verification
|
||||
callToolName string // Store tool name for verification
|
||||
}
|
||||
|
||||
func (m *mockMCPClient) Initialize(ctx context.Context) (*mcp_client.InitializeResponse, error) {
|
||||
if m.InitializeFunc != nil {
|
||||
return m.InitializeFunc(ctx)
|
||||
}
|
||||
return &mcp_client.InitializeResponse{}, nil // Default success
|
||||
}
|
||||
|
||||
func (m *mockMCPClient) CallTool(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
m.callToolName = toolName
|
||||
m.callToolArgs = append(m.callToolArgs, args)
|
||||
if m.CallToolFunc != nil {
|
||||
return m.CallToolFunc(ctx, toolName, args)
|
||||
}
|
||||
return &mcp_client.ToolResponse{}, nil
|
||||
}
|
||||
|
||||
// Ensure mock implements the local interface (compile-time check)
|
||||
var _ mcpClient = (*mockMCPClient)(nil)
|
||||
|
||||
var _ = Describe("MCPAgent", func() {
|
||||
var (
|
||||
ctx context.Context
|
||||
// We test the public MCPAgent wrapper, which uses the implementations internally.
|
||||
// The actual agent instance might be native or wasm depending on McpServerPath
|
||||
agent agents.Interface // Use the public agents.Interface
|
||||
mockClient *mockMCPClient
|
||||
)
|
||||
|
||||
BeforeEach(func() {
|
||||
ctx = context.Background()
|
||||
mockClient = &mockMCPClient{
|
||||
callToolArgs: make([]any, 0), // Reset args on each test
|
||||
}
|
||||
|
||||
// Instantiate the real agent using a testing constructor
|
||||
// This constructor needs to be added to the mcp package.
|
||||
agent = mcp.NewAgentForTesting(mockClient)
|
||||
Expect(agent).NotTo(BeNil(), "Agent should be created")
|
||||
})
|
||||
|
||||
// Helper to get the concrete agent type for calling specific methods
|
||||
getConcreteAgent := func() *mcp.MCPAgent {
|
||||
concreteAgent, ok := agent.(*mcp.MCPAgent)
|
||||
Expect(ok).To(BeTrue(), "Agent should be of type *mcp.MCPAgent")
|
||||
return concreteAgent
|
||||
}
|
||||
|
||||
Describe("GetArtistBiography", func() {
|
||||
It("should call the correct tool and return the biography", func() {
|
||||
expectedBio := "This is the artist bio."
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
Expect(toolName).To(Equal(mcp.McpToolNameGetBio))
|
||||
Expect(args).To(BeAssignableToTypeOf(mcp.ArtistArgs{}))
|
||||
typedArgs := args.(mcp.ArtistArgs)
|
||||
Expect(typedArgs.ID).To(Equal("id1"))
|
||||
Expect(typedArgs.Name).To(Equal("Artist Name"))
|
||||
Expect(typedArgs.Mbid).To(Equal("mbid1"))
|
||||
return mcp_client.NewToolResponse(mcp_client.NewTextContent(expectedBio)), nil
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).NotTo(HaveOccurred())
|
||||
Expect(bio).To(Equal(expectedBio))
|
||||
})
|
||||
|
||||
It("should return error if CallTool fails", func() {
|
||||
expectedErr := errors.New("mcp tool error")
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return nil, expectedErr
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).To(HaveOccurred())
|
||||
// The error originates from the implementation now, check for specific part
|
||||
Expect(err.Error()).To(SatisfyAny(
|
||||
ContainSubstring("native MCP agent not ready"), // Error from native
|
||||
ContainSubstring("WASM MCP agent not ready"), // Error from WASM
|
||||
ContainSubstring("failed to call native MCP tool"),
|
||||
ContainSubstring("failed to call WASM MCP tool"),
|
||||
))
|
||||
Expect(errors.Is(err, expectedErr)).To(BeTrue())
|
||||
Expect(bio).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return ErrNotFound if CallTool response is empty", func() {
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
// Return a response created with no content parts
|
||||
return mcp_client.NewToolResponse(), nil
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).To(MatchError(agents.ErrNotFound))
|
||||
Expect(bio).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return ErrNotFound if CallTool response has nil TextContent (simulated by empty string)", func() {
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
// Simulate nil/empty text content by creating response with empty string text
|
||||
return mcp_client.NewToolResponse(mcp_client.NewTextContent("")), nil
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).To(MatchError(agents.ErrNotFound))
|
||||
Expect(bio).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return comm error if CallTool returns pipe error", func() {
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return nil, io.ErrClosedPipe
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).To(HaveOccurred())
|
||||
Expect(err.Error()).To(SatisfyAny(
|
||||
ContainSubstring("native MCP agent process communication error"),
|
||||
ContainSubstring("WASM MCP agent module communication error"),
|
||||
))
|
||||
Expect(errors.Is(err, io.ErrClosedPipe)).To(BeTrue())
|
||||
Expect(bio).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return ErrNotFound if MCP tool returns an error string", func() {
|
||||
mcpErrorString := "handler returned an error: something went wrong on the server"
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return mcp_client.NewToolResponse(mcp_client.NewTextContent(mcpErrorString)), nil
|
||||
}
|
||||
|
||||
bio, err := getConcreteAgent().GetArtistBiography(ctx, "id1", "Artist Name", "mbid1")
|
||||
Expect(err).To(MatchError(agents.ErrNotFound))
|
||||
Expect(bio).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
|
||||
Describe("GetArtistURL", func() {
|
||||
It("should call the correct tool and return the URL", func() {
|
||||
expectedURL := "http://example.com/artist"
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
Expect(toolName).To(Equal(mcp.McpToolNameGetURL))
|
||||
Expect(args).To(BeAssignableToTypeOf(mcp.ArtistArgs{}))
|
||||
typedArgs := args.(mcp.ArtistArgs)
|
||||
Expect(typedArgs.ID).To(Equal("id2"))
|
||||
Expect(typedArgs.Name).To(Equal("Another Artist"))
|
||||
Expect(typedArgs.Mbid).To(Equal("mbid2"))
|
||||
return mcp_client.NewToolResponse(mcp_client.NewTextContent(expectedURL)), nil
|
||||
}
|
||||
|
||||
url, err := getConcreteAgent().GetArtistURL(ctx, "id2", "Another Artist", "mbid2")
|
||||
Expect(err).NotTo(HaveOccurred())
|
||||
Expect(url).To(Equal(expectedURL))
|
||||
})
|
||||
|
||||
It("should return error if CallTool fails", func() {
|
||||
expectedErr := errors.New("mcp tool error url")
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return nil, expectedErr
|
||||
}
|
||||
|
||||
url, err := getConcreteAgent().GetArtistURL(ctx, "id2", "Another Artist", "mbid2")
|
||||
Expect(err).To(HaveOccurred())
|
||||
Expect(err.Error()).To(SatisfyAny(
|
||||
ContainSubstring("native MCP agent not ready"), // Error from native
|
||||
ContainSubstring("WASM MCP agent not ready"), // Error from WASM
|
||||
ContainSubstring("failed to call native MCP tool"),
|
||||
ContainSubstring("failed to call WASM MCP tool"),
|
||||
))
|
||||
Expect(errors.Is(err, expectedErr)).To(BeTrue())
|
||||
Expect(url).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return ErrNotFound if CallTool response is empty", func() {
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
// Return a response created with no content parts
|
||||
return mcp_client.NewToolResponse(), nil
|
||||
}
|
||||
|
||||
url, err := getConcreteAgent().GetArtistURL(ctx, "id2", "Another Artist", "mbid2")
|
||||
Expect(err).To(MatchError(agents.ErrNotFound))
|
||||
Expect(url).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return comm error if CallTool returns pipe error", func() {
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return nil, fmt.Errorf("write: %w", io.ErrClosedPipe)
|
||||
}
|
||||
|
||||
url, err := getConcreteAgent().GetArtistURL(ctx, "id2", "Another Artist", "mbid2")
|
||||
Expect(err).To(HaveOccurred())
|
||||
Expect(err.Error()).To(SatisfyAny(
|
||||
ContainSubstring("native MCP agent process communication error"),
|
||||
ContainSubstring("WASM MCP agent module communication error"),
|
||||
))
|
||||
Expect(errors.Is(err, io.ErrClosedPipe)).To(BeTrue())
|
||||
Expect(url).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should return ErrNotFound if MCP tool returns an error string", func() {
|
||||
mcpErrorString := "handler returned an error: could not find url"
|
||||
mockClient.CallToolFunc = func(ctx context.Context, toolName string, args any) (*mcp_client.ToolResponse, error) {
|
||||
return mcp_client.NewToolResponse(mcp_client.NewTextContent(mcpErrorString)), nil
|
||||
}
|
||||
|
||||
url, err := getConcreteAgent().GetArtistURL(ctx, "id2", "Another Artist", "mbid2")
|
||||
Expect(err).To(MatchError(agents.ErrNotFound))
|
||||
Expect(url).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
})
|
||||
@@ -1,189 +0,0 @@
|
||||
package mcp
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"time"
|
||||
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/tetratelabs/wazero"
|
||||
"github.com/tetratelabs/wazero/api"
|
||||
)
|
||||
|
||||
// httpClient is a shared HTTP client for host function reuse.
|
||||
var httpClient = &http.Client{
|
||||
// Default timeout for host-initiated HTTP requests.
|
||||
Timeout: 30 * time.Second,
|
||||
}
|
||||
|
||||
// registerHostFunctions defines and registers the host functions (e.g., http_fetch)
|
||||
// into the provided Wazero runtime.
|
||||
func registerHostFunctions(ctx context.Context, runtime wazero.Runtime) error {
|
||||
// Define and Instantiate Host Module "env"
|
||||
_, err := runtime.NewHostModuleBuilder("env"). // "env" is the conventional module name
|
||||
NewFunctionBuilder().
|
||||
WithFunc(httpFetch). // Register our Go function
|
||||
Export("http_fetch"). // Export it with the name WASM will use
|
||||
Instantiate(ctx)
|
||||
if err != nil {
|
||||
log.Error(ctx, "Failed to instantiate 'env' host module with httpFetch", "error", err)
|
||||
return fmt.Errorf("instantiate host module 'env': %w", err)
|
||||
}
|
||||
log.Info(ctx, "Instantiated 'env' host module with http_fetch function")
|
||||
return nil
|
||||
}
|
||||
|
||||
// httpFetch is the host function exposed to WASM.
|
||||
// It performs an HTTP request on behalf of the WASM guest and writes the status code, response body, and any error message back into the guest's linear memory at the provided result pointers.
|
||||
// Returns:
|
||||
// - 0 on success (request completed, results written).
|
||||
// - 1 on host-side failure (e.g., memory access error, invalid input).
|
||||
func httpFetch(
|
||||
ctx context.Context, mod api.Module, // Standard Wazero host function params
|
||||
// Request details
|
||||
urlPtr, urlLen uint32,
|
||||
methodPtr, methodLen uint32,
|
||||
bodyPtr, bodyLen uint32,
|
||||
timeoutMillis uint32,
|
||||
// Result pointers
|
||||
resultStatusPtr uint32,
|
||||
resultBodyPtr uint32, resultBodyCapacity uint32, resultBodyLenPtr uint32,
|
||||
resultErrorPtr uint32, resultErrorCapacity uint32, resultErrorLenPtr uint32,
|
||||
) uint32 { // Using uint32 for status code convention (0=success, 1=failure)
|
||||
mem := mod.Memory()
|
||||
|
||||
// --- Read Inputs ---
|
||||
urlBytes, ok := mem.Read(urlPtr, urlLen)
|
||||
if !ok {
|
||||
log.Error(ctx, "httpFetch host error: failed to read URL from WASM memory")
|
||||
// Cannot write error back as we don't have the pointers validated yet
|
||||
return 1
|
||||
}
|
||||
url := string(urlBytes)
|
||||
|
||||
methodBytes, ok := mem.Read(methodPtr, methodLen)
|
||||
if !ok {
|
||||
log.Error(ctx, "httpFetch host error: failed to read method from WASM memory", "url", url)
|
||||
return 1 // Bail out
|
||||
}
|
||||
method := string(methodBytes)
|
||||
if method == "" {
|
||||
method = "GET" // Default to GET
|
||||
}
|
||||
|
||||
var reqBody io.Reader
|
||||
if bodyLen > 0 {
|
||||
bodyBytes, ok := mem.Read(bodyPtr, bodyLen)
|
||||
if !ok {
|
||||
log.Error(ctx, "httpFetch host error: failed to read body from WASM memory", "url", url, "method", method)
|
||||
return 1 // Bail out
|
||||
}
|
||||
reqBody = bytes.NewReader(bodyBytes)
|
||||
}
|
||||
|
||||
timeout := time.Duration(timeoutMillis) * time.Millisecond
|
||||
if timeout <= 0 {
|
||||
timeout = 30 * time.Second // Default timeout matching httpClient
|
||||
}
|
||||
|
||||
// --- Prepare and Execute Request ---
|
||||
log.Debug(ctx, "httpFetch executing request", "method", method, "url", url, "timeout", timeout)
|
||||
|
||||
// Use a specific context for the request, derived from the host function's context
|
||||
// but with the specific timeout for this call.
|
||||
reqCtx, cancel := context.WithTimeout(ctx, timeout)
|
||||
defer cancel()
|
||||
|
||||
req, err := http.NewRequestWithContext(reqCtx, method, url, reqBody)
|
||||
if err != nil {
|
||||
errMsg := fmt.Sprintf("failed to create request: %v", err)
|
||||
log.Error(ctx, "httpFetch host error", "url", url, "method", method, "error", errMsg)
|
||||
writeStringResult(mem, resultErrorPtr, resultErrorCapacity, resultErrorLenPtr, errMsg)
|
||||
mem.WriteUint32Le(resultStatusPtr, 0) // Write 0 status on creation error
|
||||
mem.WriteUint32Le(resultBodyLenPtr, 0) // No body
|
||||
return 0 // Indicate results (including error) were written
|
||||
}
|
||||
|
||||
// TODO: Consider adding a User-Agent?
|
||||
// req.Header.Set("User-Agent", "Navidrome/MCP-Agent-Host")
|
||||
|
||||
resp, err := httpClient.Do(req)
|
||||
if err != nil {
|
||||
// Handle client-side errors (network, DNS, timeout)
|
||||
errMsg := fmt.Sprintf("failed to execute request: %v", err)
|
||||
log.Error(ctx, "httpFetch host error", "url", url, "method", method, "error", errMsg)
|
||||
writeStringResult(mem, resultErrorPtr, resultErrorCapacity, resultErrorLenPtr, errMsg)
|
||||
mem.WriteUint32Le(resultStatusPtr, 0) // Write 0 status on transport error
|
||||
mem.WriteUint32Le(resultBodyLenPtr, 0)
|
||||
return 0 // Indicate results written
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
// --- Process Response ---
|
||||
statusCode := uint32(resp.StatusCode)
|
||||
responseBodyBytes, readErr := io.ReadAll(resp.Body)
|
||||
if readErr != nil {
|
||||
errMsg := fmt.Sprintf("failed to read response body: %v", readErr)
|
||||
log.Error(ctx, "httpFetch host error", "url", url, "method", method, "status", statusCode, "error", errMsg)
|
||||
writeStringResult(mem, resultErrorPtr, resultErrorCapacity, resultErrorLenPtr, errMsg)
|
||||
mem.WriteUint32Le(resultStatusPtr, statusCode) // Write actual status code
|
||||
mem.WriteUint32Le(resultBodyLenPtr, 0)
|
||||
return 0 // Indicate results written
|
||||
}
|
||||
|
||||
// --- Write Results Back to WASM Memory ---
|
||||
log.Debug(ctx, "httpFetch writing results", "url", url, "method", method, "status", statusCode, "bodyLen", len(responseBodyBytes))
|
||||
|
||||
// Write status code
|
||||
if !mem.WriteUint32Le(resultStatusPtr, statusCode) {
|
||||
log.Error(ctx, "httpFetch host error: failed to write status code to WASM memory")
|
||||
return 1 // Host error
|
||||
}
|
||||
|
||||
// Write response body (checking capacity)
|
||||
if !writeBytesResult(mem, resultBodyPtr, resultBodyCapacity, resultBodyLenPtr, responseBodyBytes) {
|
||||
// If body write fails (likely due to capacity), write an error message instead.
|
||||
errMsg := fmt.Sprintf("response body size (%d) exceeds buffer capacity (%d)", len(responseBodyBytes), resultBodyCapacity)
|
||||
log.Error(ctx, "httpFetch host error", "url", url, "method", method, "status", statusCode, "error", errMsg)
|
||||
writeStringResult(mem, resultErrorPtr, resultErrorCapacity, resultErrorLenPtr, errMsg)
|
||||
mem.WriteUint32Le(resultBodyLenPtr, 0) // Ensure body length is 0 if we wrote an error
|
||||
} else {
|
||||
// Write empty error string if body write was successful
|
||||
mem.WriteUint32Le(resultErrorLenPtr, 0)
|
||||
}
|
||||
|
||||
return 0 // Success
|
||||
}
|
||||
|
||||
// Helper to write string results, respecting capacity. Returns true on success.
|
||||
func writeStringResult(mem api.Memory, ptr, capacity, lenPtr uint32, result string) bool {
|
||||
bytes := []byte(result)
|
||||
return writeBytesResult(mem, ptr, capacity, lenPtr, bytes)
|
||||
}
|
||||
|
||||
// Helper to write byte results, respecting capacity. Returns true on success.
|
||||
func writeBytesResult(mem api.Memory, ptr, capacity, lenPtr uint32, result []byte) bool {
|
||||
resultLen := uint32(len(result))
|
||||
writeLen := resultLen
|
||||
if writeLen > capacity {
|
||||
log.Warn(context.Background(), "WASM host write truncated", "requested", resultLen, "capacity", capacity)
|
||||
writeLen = capacity // Truncate if too large for buffer
|
||||
}
|
||||
|
||||
if writeLen > 0 {
|
||||
if !mem.Write(ptr, result[:writeLen]) {
|
||||
log.Error(context.Background(), "WASM host memory write failed", "ptr", ptr, "len", writeLen)
|
||||
return false // Memory write failed
|
||||
}
|
||||
}
|
||||
|
||||
// Write the *original* length of the data (even if truncated) so the WASM side knows.
|
||||
if !mem.WriteUint32Le(lenPtr, resultLen) {
|
||||
log.Error(context.Background(), "WASM host memory length write failed", "lenPtr", lenPtr, "len", resultLen)
|
||||
return false // Memory write failed
|
||||
}
|
||||
return true
|
||||
}
|
||||
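The guest half of this contract lives in the mcp-server WASM module, which is not part of this hunk. As a rough sketch only, a Go guest compiled for a 32-bit wasm target (TinyGo, or Go's wasip1 port) would declare the import with a signature mirroring httpFetch above; every identifier here is an assumption for illustration:

```go
// hostHTTPFetch mirrors the host function registered under module "env" as
// "http_fetch". All pointer parameters are offsets into the guest's linear
// memory; the host writes the status code, body and error message back through
// the result pointers, and reports the original body length even when the copy
// was truncated (see writeBytesResult), so the guest can detect truncation.
//
//go:wasmimport env http_fetch
func hostHTTPFetch(
	urlPtr, urlLen uint32,
	methodPtr, methodLen uint32,
	bodyPtr, bodyLen uint32,
	timeoutMillis uint32,
	resultStatusPtr uint32,
	resultBodyPtr, resultBodyCap, resultBodyLenPtr uint32,
	resultErrorPtr, resultErrorCap, resultErrorLenPtr uint32,
) uint32
```

The WASM Fetcher implementation referenced earlier would presumably wrap this declaration, allocating the buffers, passing their capacities, and turning a non-zero return value or a non-empty error buffer into a Go error.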
@@ -1,252 +0,0 @@
|
||||
package mcp
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
mcp "github.com/metoro-io/mcp-golang"
|
||||
"github.com/metoro-io/mcp-golang/transport/stdio"
|
||||
"github.com/navidrome/navidrome/core/agents"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
)
|
||||
|
||||
// MCPNative implements the mcpImplementation interface for running the MCP server as a native process.
|
||||
type MCPNative struct {
|
||||
mu sync.Mutex
|
||||
cmd *exec.Cmd // Stores the running command
|
||||
stdin io.WriteCloser
|
||||
client mcpClient
|
||||
|
||||
// ClientOverride allows injecting a mock client for testing this specific implementation.
|
||||
ClientOverride mcpClient // TODO: Consider if this is the best way to test
|
||||
}
|
||||
|
||||
// newMCPNative creates a new instance of the native MCP agent implementation.
|
||||
func newMCPNative() *MCPNative {
|
||||
return &MCPNative{}
|
||||
}
|
||||
|
||||
// --- mcpImplementation interface methods ---
|
||||
|
||||
func (n *MCPNative) Close() error {
|
||||
n.mu.Lock()
|
||||
defer n.mu.Unlock()
|
||||
n.cleanupResources_locked()
|
||||
return nil // Currently, cleanup doesn't return errors
|
||||
}
|
||||
|
||||
// --- Internal Helper Methods ---
|
||||
|
||||
// ensureClientInitialized starts the MCP server process and initializes the client if needed.
|
||||
// MUST be called with the mutex HELD.
|
||||
func (n *MCPNative) ensureClientInitialized_locked(ctx context.Context) error {
|
||||
// Use override if provided (for testing)
|
||||
if n.ClientOverride != nil {
|
||||
if n.client == nil {
|
||||
n.client = n.ClientOverride
|
||||
log.Debug(ctx, "Using provided MCP client override for native testing")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Check if already initialized
|
||||
if n.client != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
log.Info(ctx, "Initializing Native MCP client and starting/restarting server process...", "serverPath", McpServerPath)
|
||||
|
||||
// Clean up any old resources *before* starting new ones
|
||||
n.cleanupResources_locked()
|
||||
|
||||
hostStdinWriter, hostStdoutReader, nativeCmd, startErr := n.startProcess_locked(ctx)
|
||||
if startErr != nil {
|
||||
log.Error(ctx, "Failed to start Native MCP server process", "error", startErr)
|
||||
// Ensure pipes are closed if start failed (startProcess might handle this, but be sure)
|
||||
if hostStdinWriter != nil {
|
||||
_ = hostStdinWriter.Close()
|
||||
}
|
||||
if hostStdoutReader != nil {
|
||||
_ = hostStdoutReader.Close()
|
||||
}
|
||||
return fmt.Errorf("failed to start native MCP server: %w", startErr)
|
||||
}
|
||||
|
||||
// --- Initialize MCP client ---
|
||||
transport := stdio.NewStdioServerTransportWithIO(hostStdoutReader, hostStdinWriter)
|
||||
clientImpl := mcp.NewClient(transport)
|
||||
|
||||
initCtx, cancel := context.WithTimeout(context.Background(), initializationTimeout)
|
||||
defer cancel()
|
||||
if _, initErr := clientImpl.Initialize(initCtx); initErr != nil {
|
||||
err := fmt.Errorf("failed to initialize native MCP client: %w", initErr)
|
||||
log.Error(ctx, "Native MCP client initialization failed", "error", err)
|
||||
// Cleanup the newly started process and close pipes
|
||||
n.cmd = nativeCmd // Temporarily set cmd so cleanup can kill it
|
||||
n.cleanupResources_locked()
|
||||
if hostStdinWriter != nil {
|
||||
_ = hostStdinWriter.Close()
|
||||
}
|
||||
if hostStdoutReader != nil {
|
||||
_ = hostStdoutReader.Close()
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// --- Initialization successful, update agent state ---
|
||||
n.cmd = nativeCmd
|
||||
n.stdin = hostStdinWriter // This is the pipe the agent writes to
|
||||
n.client = clientImpl
|
||||
|
||||
log.Info(ctx, "Native MCP client initialized successfully", "pid", n.cmd.Process.Pid)
|
||||
return nil // Success
|
||||
}
|
||||
|
||||
// callMCPTool handles ensuring initialization and calling the MCP tool.
|
||||
func (n *MCPNative) callMCPTool(ctx context.Context, toolName string, args any) (string, error) {
|
||||
// Ensure the client is initialized and the server is running (attempts restart if needed)
|
||||
n.mu.Lock()
|
||||
err := n.ensureClientInitialized_locked(ctx)
|
||||
if err != nil {
|
||||
n.mu.Unlock()
|
||||
log.Error(ctx, "Native MCP agent initialization/restart failed", "tool", toolName, "error", err)
|
||||
return "", fmt.Errorf("native MCP agent not ready: %w", err)
|
||||
}
|
||||
|
||||
// Keep a reference to the client while locked
|
||||
currentClient := n.client
|
||||
// Unlock mutex *before* making the potentially blocking MCP call
|
||||
n.mu.Unlock()
|
||||
|
||||
// Call the tool using the client reference
|
||||
log.Debug(ctx, "Calling Native MCP tool", "tool", toolName, "args", args)
|
||||
response, callErr := currentClient.CallTool(ctx, toolName, args)
|
||||
if callErr != nil {
|
||||
// Handle potential pipe closures or other communication errors
|
||||
log.Error(ctx, "Failed to call Native MCP tool", "tool", toolName, "error", callErr)
|
||||
// Check if the error indicates a broken pipe, suggesting the server died
|
||||
// The monitoring goroutine will handle cleanup, just return error here.
|
||||
if errors.Is(callErr, io.ErrClosedPipe) || strings.Contains(callErr.Error(), "broken pipe") || strings.Contains(callErr.Error(), "EOF") {
|
||||
log.Warn(ctx, "Native MCP tool call failed, possibly due to server process exit.", "tool", toolName)
|
||||
// No need to explicitly call cleanup, monitoring goroutine handles it.
|
||||
return "", fmt.Errorf("native MCP agent process communication error: %w", callErr)
|
||||
}
|
||||
return "", fmt.Errorf("failed to call native MCP tool '%s': %w", toolName, callErr)
|
||||
}
|
||||
|
||||
// Process the response (same logic as before)
|
||||
if response == nil || len(response.Content) == 0 || response.Content[0].TextContent == nil || response.Content[0].TextContent.Text == "" {
|
||||
log.Warn(ctx, "Native MCP tool returned empty/invalid response", "tool", toolName)
|
||||
// Treat as not found for agent interface consistency
|
||||
return "", agents.ErrNotFound // Import agents package if needed, or define locally
|
||||
}
|
||||
resultText := response.Content[0].TextContent.Text
|
||||
if strings.HasPrefix(resultText, "handler returned an error:") {
|
||||
log.Warn(ctx, "Native MCP tool returned an error message", "tool", toolName, "mcpError", resultText)
|
||||
return "", agents.ErrNotFound // Treat MCP tool errors as "not found"
|
||||
}
|
||||
|
||||
log.Debug(ctx, "Received response from Native MCP agent", "tool", toolName, "length", len(resultText))
|
||||
return resultText, nil
|
||||
}
|
||||
|
||||
// cleanupResources closes existing resources (stdin, server process).
|
||||
// MUST be called with the mutex HELD.
|
||||
func (n *MCPNative) cleanupResources_locked() {
|
||||
log.Debug(context.Background(), "Cleaning up Native MCP instance resources...")
|
||||
if n.stdin != nil {
|
||||
_ = n.stdin.Close()
|
||||
n.stdin = nil
|
||||
}
|
||||
if n.cmd != nil && n.cmd.Process != nil {
|
||||
pid := n.cmd.Process.Pid
|
||||
log.Debug(context.Background(), "Killing native MCP process", "pid", pid)
|
||||
// Kill the process. Ignore error if it's already done.
|
||||
if err := n.cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
|
||||
log.Error(context.Background(), "Failed to kill native process", "pid", pid, "error", err)
|
||||
}
|
||||
// Wait for the process to release resources. Ignore error.
|
||||
_ = n.cmd.Wait()
|
||||
n.cmd = nil
|
||||
}
|
||||
// Mark client as invalid
|
||||
n.client = nil
|
||||
}
|
||||
|
||||
// startProcess starts the MCP server as a native executable and sets up monitoring.
|
||||
// MUST be called with the mutex HELD.
|
||||
func (n *MCPNative) startProcess_locked(ctx context.Context) (stdin io.WriteCloser, stdout io.ReadCloser, cmd *exec.Cmd, err error) {
|
||||
log.Debug(ctx, "Starting native MCP server process", "path", McpServerPath)
|
||||
// Use Background context for the command itself, as it should outlive the request context (ctx)
|
||||
cmd = exec.CommandContext(context.Background(), McpServerPath)
|
||||
|
||||
stdinPipe, err := cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, fmt.Errorf("native stdin pipe: %w", err)
|
||||
}
|
||||
|
||||
stdoutPipe, err := cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
_ = stdinPipe.Close()
|
||||
return nil, nil, nil, fmt.Errorf("native stdout pipe: %w", err)
|
||||
}
|
||||
|
||||
// Get stderr pipe to stream logs
|
||||
stderrPipe, err := cmd.StderrPipe()
|
||||
if err != nil {
|
||||
_ = stdinPipe.Close()
|
||||
_ = stdoutPipe.Close()
|
||||
return nil, nil, nil, fmt.Errorf("native stderr pipe: %w", err)
|
||||
}
|
||||
|
||||
if err = cmd.Start(); err != nil {
|
||||
_ = stdinPipe.Close()
|
||||
_ = stdoutPipe.Close()
|
||||
// stderrPipe gets closed implicitly if cmd.Start() fails
|
||||
return nil, nil, nil, fmt.Errorf("native start: %w", err)
|
||||
}
|
||||
|
||||
currentPid := cmd.Process.Pid
|
||||
currentCmd := cmd // Capture the current cmd pointer for the goroutine
|
||||
log.Info(ctx, "Native MCP server process started", "pid", currentPid)
|
||||
|
||||
// Start monitoring goroutine for process exit
|
||||
go func() {
|
||||
// Start separate goroutine to stream stderr
|
||||
go func() {
|
||||
scanner := bufio.NewScanner(stderrPipe)
|
||||
for scanner.Scan() {
|
||||
log.Info("[MCP-SERVER] " + scanner.Text())
|
||||
}
|
||||
if err := scanner.Err(); err != nil {
|
||||
log.Error("Error reading MCP server stderr", "pid", currentPid, "error", err)
|
||||
}
|
||||
log.Debug("MCP server stderr pipe closed", "pid", currentPid)
|
||||
}()
|
||||
|
||||
waitErr := currentCmd.Wait() // Wait for the specific process this goroutine monitors
|
||||
n.mu.Lock()
|
||||
// Stderr is now streamed, so we don't capture it here anymore.
|
||||
log.Warn("Native MCP server process exited", "pid", currentPid, "error", waitErr)
|
||||
|
||||
// Critical: Check if the agent's current command is STILL the one we were monitoring.
|
||||
// If it's different, it means cleanup/restart already happened, so we shouldn't cleanup again.
|
||||
if n.cmd == currentCmd {
|
||||
n.cleanupResources_locked() // Use the locked version as we hold the lock
|
||||
log.Info("MCP Native agent state cleaned up after process exit", "pid", currentPid)
|
||||
} else {
|
||||
log.Debug("Native MCP process exited, but state already updated/cmd mismatch", "exitedPid", currentPid)
|
||||
}
|
||||
n.mu.Unlock()
|
||||
}()
|
||||
|
||||
// Return the pipes connected to the process and the Cmd object
|
||||
return stdinPipe, stdoutPipe, cmd, nil
|
||||
}
|
||||
@@ -1,305 +0,0 @@
|
||||
package mcp
|
||||
|
||||
import (
	"context"
	"errors"
	"fmt"
	"io"
	"os"
	"strings"
	"sync"
	"time"

	mcp "github.com/metoro-io/mcp-golang"
	"github.com/metoro-io/mcp-golang/transport/stdio"
	"github.com/navidrome/navidrome/core/agents" // Needed for ErrNotFound
	"github.com/navidrome/navidrome/log"
	"github.com/tetratelabs/wazero"     // Needed for types
	"github.com/tetratelabs/wazero/api" // Needed for types
)

// MCPWasm implements the mcpImplementation interface for running the MCP server as a WASM module.
type MCPWasm struct {
	mu           sync.Mutex
	wasmModule   api.Module // Stores the instantiated module
	wasmCompiled api.Closer // Stores the compiled module reference for this instance
	stdin        io.WriteCloser
	client       mcpClient

	// Shared resources (passed in, not owned by this struct)
	wasmRuntime       api.Closer              // Shared Wazero Runtime
	wasmCache         wazero.CompilationCache // Shared Compilation Cache (can be nil)
	preCompiledModule wazero.CompiledModule   // Pre-compiled module from constructor

	// ClientOverride allows injecting a mock client for testing this specific implementation.
	ClientOverride mcpClient // TODO: Consider if this is the best way to test
}

// newMCPWasm creates a new instance of the WASM MCP agent implementation.
// It stores the shared runtime, cache, and the pre-compiled module.
func newMCPWasm(runtime api.Closer, cache wazero.CompilationCache, compiledModule wazero.CompiledModule) *MCPWasm {
	return &MCPWasm{
		wasmRuntime:       runtime,
		wasmCache:         cache,
		preCompiledModule: compiledModule,
	}
}

// --- mcpImplementation interface methods ---

// Close cleans up instance-specific WASM resources.
// It does NOT close the shared runtime or cache.
func (w *MCPWasm) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.cleanupResources_locked()
	return nil
}

// --- Internal Helper Methods ---

// ensureClientInitialized_locked starts the MCP WASM module and initializes the client if needed.
// MUST be called with the mutex HELD.
func (w *MCPWasm) ensureClientInitialized_locked(ctx context.Context) error {
	// Use override if provided (for testing)
	if w.ClientOverride != nil {
		if w.client == nil {
			w.client = w.ClientOverride
			log.Debug(ctx, "Using provided MCP client override for WASM testing")
		}
		return nil
	}

	// Check if already initialized
	if w.client != nil {
		return nil
	}

	log.Info(ctx, "Initializing WASM MCP client and starting/restarting server module...", "serverPath", McpServerPath)

	w.cleanupResources_locked()

	// Check if shared runtime exists
	if w.wasmRuntime == nil {
		return errors.New("shared Wazero runtime not initialized for MCPWasm")
	}

	hostStdinWriter, hostStdoutReader, mod, err := w.startModule_locked(ctx)
	if err != nil {
		log.Error(ctx, "Failed to start WASM MCP server module", "error", err)
		// Ensure pipes are closed if start failed
		if hostStdinWriter != nil {
			_ = hostStdinWriter.Close()
		}
		if hostStdoutReader != nil {
			_ = hostStdoutReader.Close()
		}
		// startModule_locked handles cleanup of mod/compiled on error
		return fmt.Errorf("failed to start WASM MCP server: %w", err)
	}

	transport := stdio.NewStdioServerTransportWithIO(hostStdoutReader, hostStdinWriter)
	clientImpl := mcp.NewClient(transport)

	initCtx, cancel := context.WithTimeout(context.Background(), initializationTimeout)
	defer cancel()
	if _, initErr := clientImpl.Initialize(initCtx); initErr != nil {
		err := fmt.Errorf("failed to initialize WASM MCP client: %w", initErr)
		log.Error(ctx, "WASM MCP client initialization failed", "error", err)
		// Cleanup the newly started module and close pipes
		w.wasmModule = mod   // Temporarily set so cleanup can close it
		w.wasmCompiled = nil // We don't store the compiled instance ref anymore, just the module instance
		w.cleanupResources_locked()
		if hostStdinWriter != nil {
			_ = hostStdinWriter.Close()
		}
		if hostStdoutReader != nil {
			_ = hostStdoutReader.Close()
		}
		return err
	}

	w.wasmModule = mod
	w.wasmCompiled = nil // We don't store the compiled instance ref anymore, just the module instance
	w.stdin = hostStdinWriter
	w.client = clientImpl

	log.Info(ctx, "WASM MCP client initialized successfully")
	return nil
}

// callMCPTool handles ensuring initialization and calling the MCP tool.
func (w *MCPWasm) callMCPTool(ctx context.Context, toolName string, args any) (string, error) {
	w.mu.Lock()
	err := w.ensureClientInitialized_locked(ctx)
	if err != nil {
		w.mu.Unlock()
		log.Error(ctx, "WASM MCP agent initialization/restart failed", "tool", toolName, "error", err)
		return "", fmt.Errorf("WASM MCP agent not ready: %w", err)
	}

	// Keep a reference to the client while locked
	currentClient := w.client
	// Unlock mutex *before* making the potentially blocking MCP call
	w.mu.Unlock()

	// Call the tool using the client reference
	log.Debug(ctx, "Calling WASM MCP tool", "tool", toolName, "args", args)
	response, callErr := currentClient.CallTool(ctx, toolName, args)
	if callErr != nil {
		// Handle potential pipe closures or other communication errors
		log.Error(ctx, "Failed to call WASM MCP tool", "tool", toolName, "error", callErr)
		// Check if the error indicates a broken pipe, suggesting the server died.
		// The monitoring goroutine will handle cleanup, just return the error here.
		if errors.Is(callErr, io.ErrClosedPipe) || strings.Contains(callErr.Error(), "broken pipe") || strings.Contains(callErr.Error(), "EOF") {
			log.Warn(ctx, "WASM MCP tool call failed, possibly due to server module exit.", "tool", toolName)
			// No need to explicitly call cleanup, monitoring goroutine handles it.
			return "", fmt.Errorf("WASM MCP agent module communication error: %w", callErr)
		}
		return "", fmt.Errorf("failed to call WASM MCP tool '%s': %w", toolName, callErr)
	}

	// Process the response (same logic as native)
	if response == nil || len(response.Content) == 0 || response.Content[0].TextContent == nil || response.Content[0].TextContent.Text == "" {
		log.Warn(ctx, "WASM MCP tool returned empty/invalid response", "tool", toolName)
		return "", agents.ErrNotFound
	}
	resultText := response.Content[0].TextContent.Text
	if strings.HasPrefix(resultText, "handler returned an error:") {
		log.Warn(ctx, "WASM MCP tool returned an error message", "tool", toolName, "mcpError", resultText)
		return "", agents.ErrNotFound // Treat MCP tool errors as "not found"
	}

	log.Debug(ctx, "Received response from WASM MCP agent", "tool", toolName, "length", len(resultText))
	return resultText, nil
}

// cleanupResources_locked closes instance-specific WASM resources (stdin, module, compiled ref).
// It specifically avoids closing the shared runtime or cache.
// MUST be called with the mutex HELD.
func (w *MCPWasm) cleanupResources_locked() {
	log.Debug(context.Background(), "Cleaning up WASM MCP instance resources...")
	if w.stdin != nil {
		_ = w.stdin.Close()
		w.stdin = nil
	}
	// Close the module instance
	if w.wasmModule != nil {
		log.Debug(context.Background(), "Closing WASM module instance")
		ctxClose, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		if err := w.wasmModule.Close(ctxClose); err != nil && !errors.Is(err, context.DeadlineExceeded) {
			log.Error(context.Background(), "Failed to close WASM module instance", "error", err)
		}
		cancel()
		w.wasmModule = nil
	}
	// DO NOT close w.wasmCompiled (instance ref)
	// DO NOT close w.preCompiledModule (shared pre-compiled code)
	// DO NOT close w.wasmRuntime or w.wasmCache here!
	w.client = nil
}

// startModule_locked loads and starts the MCP server as a WASM module.
// It uses the pre-compiled module.
// MUST be called with the mutex HELD.
func (w *MCPWasm) startModule_locked(ctx context.Context) (hostStdinWriter io.WriteCloser, hostStdoutReader io.ReadCloser, mod api.Module, err error) {
	// Check for pre-compiled module
	if w.preCompiledModule == nil {
		return nil, nil, nil, errors.New("pre-compiled WASM module is nil")
	}

	// Create pipes for stdio redirection
	wasmStdinReader, hostStdinWriter, err := os.Pipe()
	if err != nil {
		return nil, nil, nil, fmt.Errorf("wasm stdin pipe: %w", err)
	}
	// Defer close pipes on error exit
	shouldClosePipesOnError := true
	defer func() {
		if shouldClosePipesOnError {
			if wasmStdinReader != nil {
				_ = wasmStdinReader.Close()
			}
			if hostStdinWriter != nil {
				_ = hostStdinWriter.Close()
			}
			// hostStdoutReader and wasmStdoutWriter handled below
		}
	}()

	hostStdoutReader, wasmStdoutWriter, err := os.Pipe()
	if err != nil {
		return nil, nil, nil, fmt.Errorf("wasm stdout pipe: %w", err)
	}
	// Defer close pipes on error exit
	defer func() {
		if shouldClosePipesOnError {
			if hostStdoutReader != nil {
				_ = hostStdoutReader.Close()
			}
			if wasmStdoutWriter != nil {
				_ = wasmStdoutWriter.Close()
			}
		}
	}()

	// Use the SHARED runtime from the agent struct
	runtime, ok := w.wasmRuntime.(wazero.Runtime)
	if !ok || runtime == nil {
		return nil, nil, nil, errors.New("wasmRuntime is not initialized or not a wazero.Runtime")
	}

	// Prepare module configuration (host funcs/WASI already instantiated on runtime)
	config := wazero.NewModuleConfig().
		WithStdin(wasmStdinReader).
		WithStdout(wasmStdoutWriter).
		WithStderr(os.Stderr).
		WithArgs(McpServerPath).
		WithFS(os.DirFS("/")) // Keep FS access for now

	log.Info(ctx, "Instantiating pre-compiled WASM module (will run _start)...")
	var moduleInstance api.Module
	instanceErrChan := make(chan error, 1)
	go func() {
		var instantiateErr error
		// Use context.Background() for the module's main execution context
		moduleInstance, instantiateErr = runtime.InstantiateModule(context.Background(), w.preCompiledModule, config)
		instanceErrChan <- instantiateErr
	}()

	// Wait briefly for immediate instantiation errors
	select {
	case instantiateErr := <-instanceErrChan:
		if instantiateErr != nil {
			log.Error(ctx, "Failed to instantiate pre-compiled WASM module", "error", instantiateErr)
			// Pipes closed by defer
			return nil, nil, nil, fmt.Errorf("instantiate wasm module: %w", instantiateErr)
		}
		log.Warn(ctx, "WASM module instantiation returned (exited?) immediately without error.")
	case <-time.After(1 * time.Second): // Shorter wait now, instantiation should be faster
		log.Debug(ctx, "WASM module instantiation likely blocking (server running), proceeding...")
	}

	// Start a monitoring goroutine for WASM module exit/error
	go func(instanceToMonitor api.Module, errChan chan error) {
		// This blocks until the instance created by InstantiateModule exits or errors.
		instantiateErr := <-errChan

		w.mu.Lock() // Lock the specific MCPWasm instance
		log.Warn("WASM module exited/errored", "error", instantiateErr)

		// Critical: Check if the agent's current module is STILL the one we were monitoring.
		if w.wasmModule == instanceToMonitor {
			w.cleanupResources_locked() // Use the locked version
			log.Info("MCP WASM agent state cleaned up after module exit/error")
		} else {
			log.Debug("WASM module exited, but state already updated/module mismatch. No cleanup needed by this goroutine.")
			// No need to close anything here, the pre-compiled module is shared
		}
		w.mu.Unlock()
	}(moduleInstance, instanceErrChan)

	// Success: prevent deferred cleanup of pipes
	shouldClosePipesOnError = false
	return hostStdinWriter, hostStdoutReader, moduleInstance, nil
}
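For context on the pipe-and-instantiate pattern used above, here is a minimal, self-contained sketch of running a WASI module over stdio pipes with wazero. It is not part of this diff; the module bytes and the "hello" exchange are assumptions for illustration only.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

// runWasmWithPipes wires the module's stdin/stdout to in-process pipes,
// mirroring the general flow of startModule_locked above (hypothetical example).
func runWasmWithPipes(ctx context.Context, wasmBytes []byte) error {
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	compiled, err := r.CompileModule(ctx, wasmBytes)
	if err != nil {
		return fmt.Errorf("compile: %w", err)
	}

	stdinR, stdinW, _ := os.Pipe()   // host writes -> module stdin
	stdoutR, stdoutW, _ := os.Pipe() // module stdout -> host reads
	defer stdinW.Close()
	defer stdoutR.Close()

	cfg := wazero.NewModuleConfig().WithStdin(stdinR).WithStdout(stdoutW)

	// Instantiate in a goroutine, because _start blocks while the module runs.
	errCh := make(chan error, 1)
	go func() {
		_, instErr := r.InstantiateModule(ctx, compiled, cfg)
		errCh <- instErr
	}()

	// The host can now talk to the module over the pipes.
	_, _ = io.WriteString(stdinW, "hello\n")
	// ... read responses from stdoutR, then wait for the module to exit:
	return <-errCh
}
```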
core/external/provider.go (1 change, vendored)
@@ -13,7 +13,6 @@ import (
	"github.com/navidrome/navidrome/core/agents"
	_ "github.com/navidrome/navidrome/core/agents/lastfm"
	_ "github.com/navidrome/navidrome/core/agents/listenbrainz"
-	_ "github.com/navidrome/navidrome/core/agents/mcp"
	_ "github.com/navidrome/navidrome/core/agents/spotify"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
core/lyrics/lyrics.go (new file, 37 lines)
@@ -0,0 +1,37 @@
package lyrics

import (
	"context"
	"strings"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

func GetLyrics(ctx context.Context, mf *model.MediaFile) (model.LyricList, error) {
	var lyricsList model.LyricList
	var err error

	for pattern := range strings.SplitSeq(strings.ToLower(conf.Server.LyricsPriority), ",") {
		pattern = strings.TrimSpace(pattern)
		switch {
		case pattern == "embedded":
			lyricsList, err = fromEmbedded(ctx, mf)
		case strings.HasPrefix(pattern, "."):
			lyricsList, err = fromExternalFile(ctx, mf, pattern)
		default:
			log.Error(ctx, "Invalid lyric pattern", "pattern", pattern)
		}

		if err != nil {
			log.Error(ctx, "error parsing lyrics", "source", pattern, err)
		}

		if len(lyricsList) > 0 {
			return lyricsList, nil
		}
	}

	return nil, nil
}
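A minimal usage sketch of the new lyrics package, mirroring how the tests further below exercise it. The media file path and log messages are hypothetical, not part of this diff.

```go
package main

import (
	"context"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/core/lyrics"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

// printLyricsInfo resolves lyrics for one track, honoring LyricsPriority.
func printLyricsInfo(ctx context.Context) {
	conf.Server.LyricsPriority = "embedded,.lrc,.txt"                 // embedded first, then sidecar files
	mf := model.MediaFile{Path: "music/artist/album/01 - track.mp3"} // hypothetical path
	list, err := lyrics.GetLyrics(ctx, &mf)
	if err != nil || len(list) == 0 {
		log.Warn(ctx, "no lyrics found", err)
		return
	}
	log.Info(ctx, "found lyrics", "synced", list[0].Synced, "lines", len(list[0].Line))
}
```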
@@ -1,4 +1,4 @@
-package mcp_test
+package lyrics_test

import (
	"testing"

@@ -9,9 +9,9 @@ import (
	. "github.com/onsi/gomega"
)

-func TestMCPAgent(t *testing.T) {
+func TestLyrics(t *testing.T) {
	tests.Init(t, false)
	log.SetLevel(log.LevelFatal)
	RegisterFailHandler(Fail)
-	RunSpecs(t, "MCP Agent Test Suite")
+	RunSpecs(t, "Lyrics Suite")
}
core/lyrics/lyrics_test.go (new file, 124 lines)
@@ -0,0 +1,124 @@
|
||||
package lyrics_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"os"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/conf/configtest"
|
||||
"github.com/navidrome/navidrome/core/lyrics"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
"github.com/navidrome/navidrome/utils"
|
||||
"github.com/navidrome/navidrome/utils/gg"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("sources", func() {
|
||||
var mf model.MediaFile
|
||||
var ctx context.Context
|
||||
|
||||
const badLyrics = "This is a set of lyrics\nThat is not good"
|
||||
unsynced, _ := model.ToLyrics("xxx", badLyrics)
|
||||
embeddedLyrics := model.LyricList{*unsynced}
|
||||
|
||||
syncedLyrics := model.LyricList{
|
||||
model.Lyrics{
|
||||
DisplayArtist: "Rick Astley",
|
||||
DisplayTitle: "That one song",
|
||||
Lang: "eng",
|
||||
Line: []model.Line{
|
||||
{
|
||||
Start: gg.P(int64(18800)),
|
||||
Value: "We're no strangers to love",
|
||||
},
|
||||
{
|
||||
Start: gg.P(int64(22801)),
|
||||
Value: "You know the rules and so do I",
|
||||
},
|
||||
},
|
||||
Offset: gg.P(int64(-100)),
|
||||
Synced: true,
|
||||
},
|
||||
}
|
||||
|
||||
unsyncedLyrics := model.LyricList{
|
||||
model.Lyrics{
|
||||
Lang: "xxx",
|
||||
Line: []model.Line{
|
||||
{
|
||||
Value: "We're no strangers to love",
|
||||
},
|
||||
{
|
||||
Value: "You know the rules and so do I",
|
||||
},
|
||||
},
|
||||
Synced: false,
|
||||
},
|
||||
}
|
||||
|
||||
BeforeEach(func() {
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
|
||||
lyricsJson, _ := json.Marshal(embeddedLyrics)
|
||||
|
||||
mf = model.MediaFile{
|
||||
Lyrics: string(lyricsJson),
|
||||
Path: "tests/fixtures/test.mp3",
|
||||
}
|
||||
ctx = context.Background()
|
||||
})
|
||||
|
||||
DescribeTable("Lyrics Priority", func(priority string, expected model.LyricList) {
|
||||
conf.Server.LyricsPriority = priority
|
||||
list, err := lyrics.GetLyrics(ctx, &mf)
|
||||
Expect(err).To(BeNil())
|
||||
Expect(list).To(Equal(expected))
|
||||
},
|
||||
Entry("embedded > lrc > txt", "embedded,.lrc,.txt", embeddedLyrics),
|
||||
Entry("lrc > embedded > txt", ".lrc,embedded,.txt", syncedLyrics),
|
||||
Entry("txt > lrc > embedded", ".txt,.lrc,embedded", unsyncedLyrics))
|
||||
|
||||
Context("Errors", func() {
|
||||
var RegularUserContext = XContext
|
||||
var isRegularUser = os.Getuid() != 0
|
||||
if isRegularUser {
|
||||
RegularUserContext = Context
|
||||
}
|
||||
|
||||
RegularUserContext("run without root permissions", func() {
|
||||
var accessForbiddenFile string
|
||||
|
||||
BeforeEach(func() {
|
||||
accessForbiddenFile = utils.TempFileName("access_forbidden-", ".mp3")
|
||||
|
||||
f, err := os.OpenFile(accessForbiddenFile, os.O_WRONLY|os.O_CREATE, 0222)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
mf.Path = accessForbiddenFile
|
||||
|
||||
DeferCleanup(func() {
|
||||
Expect(f.Close()).To(Succeed())
|
||||
Expect(os.Remove(accessForbiddenFile)).To(Succeed())
|
||||
})
|
||||
})
|
||||
|
||||
It("should fallback to embedded if an error happens when parsing file", func() {
|
||||
conf.Server.LyricsPriority = ".mp3,embedded"
|
||||
|
||||
list, err := lyrics.GetLyrics(ctx, &mf)
|
||||
Expect(err).To(BeNil())
|
||||
Expect(list).To(Equal(embeddedLyrics))
|
||||
})
|
||||
|
||||
It("should return nothing if error happens when trying to parse file", func() {
|
||||
conf.Server.LyricsPriority = ".mp3"
|
||||
|
||||
list, err := lyrics.GetLyrics(ctx, &mf)
|
||||
Expect(err).To(BeNil())
|
||||
Expect(list).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
})
|
||||
})
|
||||
core/lyrics/sources.go (new file, 51 lines)
@@ -0,0 +1,51 @@
package lyrics

import (
	"context"
	"errors"
	"os"
	"path"

	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

func fromEmbedded(ctx context.Context, mf *model.MediaFile) (model.LyricList, error) {
	if mf.Lyrics != "" {
		log.Trace(ctx, "embedded lyrics found in file", "title", mf.Title)
		return mf.StructuredLyrics()
	}

	log.Trace(ctx, "no embedded lyrics for file", "path", mf.Title)

	return nil, nil
}

func fromExternalFile(ctx context.Context, mf *model.MediaFile, suffix string) (model.LyricList, error) {
	basePath := mf.AbsolutePath()
	ext := path.Ext(basePath)

	externalLyric := basePath[0:len(basePath)-len(ext)] + suffix

	contents, err := os.ReadFile(externalLyric)

	if errors.Is(err, os.ErrNotExist) {
		log.Trace(ctx, "no lyrics found at path", "path", externalLyric)
		return nil, nil
	} else if err != nil {
		return nil, err
	}

	lyrics, err := model.ToLyrics("xxx", string(contents))
	if err != nil {
		log.Error(ctx, "error parsing lyric external file", "path", externalLyric, err)
		return nil, err
	} else if lyrics == nil {
		log.Trace(ctx, "empty lyrics from external file", "path", externalLyric)
		return nil, nil
	}

	log.Trace(ctx, "retrieved lyrics from external file", "path", externalLyric)

	return model.LyricList{*lyrics}, nil
}
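The external-file lookup above simply swaps the audio extension for the configured suffix. A tiny sketch of that derivation, with a hypothetical path:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	basePath := "Music/Artist/01 - Song.mp3" // hypothetical path
	ext := path.Ext(basePath)                // ".mp3"
	external := basePath[:len(basePath)-len(ext)] + ".lrc"
	fmt.Println(external) // Music/Artist/01 - Song.lrc
}
```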
core/lyrics/sources_test.go (new file, 112 lines)
@@ -0,0 +1,112 @@
|
||||
package lyrics
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
|
||||
"github.com/navidrome/navidrome/model"
|
||||
"github.com/navidrome/navidrome/utils/gg"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("sources", func() {
|
||||
ctx := context.Background()
|
||||
|
||||
Describe("fromEmbedded", func() {
|
||||
It("should return nothing for a media file with no lyrics", func() {
|
||||
mf := model.MediaFile{}
|
||||
lyrics, err := fromEmbedded(ctx, &mf)
|
||||
|
||||
Expect(err).To(BeNil())
|
||||
Expect(lyrics).To(HaveLen(0))
|
||||
})
|
||||
|
||||
It("should return lyrics for a media file with well-formatted lyrics", func() {
|
||||
const syncedLyrics = "[00:18.80]We're no strangers to love\n[00:22.801]You know the rules and so do I"
|
||||
const unsyncedLyrics = "We're no strangers to love\nYou know the rules and so do I"
|
||||
|
||||
synced, _ := model.ToLyrics("eng", syncedLyrics)
|
||||
unsynced, _ := model.ToLyrics("xxx", unsyncedLyrics)
|
||||
|
||||
expectedList := model.LyricList{*synced, *unsynced}
|
||||
lyricsJson, err := json.Marshal(expectedList)
|
||||
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
mf := model.MediaFile{
|
||||
Lyrics: string(lyricsJson),
|
||||
}
|
||||
|
||||
lyrics, err := fromEmbedded(ctx, &mf)
|
||||
Expect(err).To(BeNil())
|
||||
Expect(lyrics).ToNot(BeNil())
|
||||
Expect(lyrics).To(Equal(expectedList))
|
||||
})
|
||||
|
||||
It("should return an error if somehow the JSON is bad", func() {
|
||||
mf := model.MediaFile{Lyrics: "["}
|
||||
lyrics, err := fromEmbedded(ctx, &mf)
|
||||
|
||||
Expect(lyrics).To(HaveLen(0))
|
||||
Expect(err).ToNot(BeNil())
|
||||
})
|
||||
})
|
||||
|
||||
Describe("fromExternalFile", func() {
|
||||
It("should return nil for lyrics that don't exist", func() {
|
||||
mf := model.MediaFile{Path: "tests/fixtures/01 Invisible (RED) Edit Version.mp3"}
|
||||
lyrics, err := fromExternalFile(ctx, &mf, ".lrc")
|
||||
|
||||
Expect(err).To(BeNil())
|
||||
Expect(lyrics).To(HaveLen(0))
|
||||
})
|
||||
|
||||
It("should return synchronized lyrics from a file", func() {
|
||||
mf := model.MediaFile{Path: "tests/fixtures/test.mp3"}
|
||||
lyrics, err := fromExternalFile(ctx, &mf, ".lrc")
|
||||
|
||||
Expect(err).To(BeNil())
|
||||
Expect(lyrics).To(Equal(model.LyricList{
|
||||
model.Lyrics{
|
||||
DisplayArtist: "Rick Astley",
|
||||
DisplayTitle: "That one song",
|
||||
Lang: "eng",
|
||||
Line: []model.Line{
|
||||
{
|
||||
Start: gg.P(int64(18800)),
|
||||
Value: "We're no strangers to love",
|
||||
},
|
||||
{
|
||||
Start: gg.P(int64(22801)),
|
||||
Value: "You know the rules and so do I",
|
||||
},
|
||||
},
|
||||
Offset: gg.P(int64(-100)),
|
||||
Synced: true,
|
||||
},
|
||||
}))
|
||||
})
|
||||
|
||||
It("should return unsynchronized lyrics from a file", func() {
|
||||
mf := model.MediaFile{Path: "tests/fixtures/test.mp3"}
|
||||
lyrics, err := fromExternalFile(ctx, &mf, ".txt")
|
||||
|
||||
Expect(err).To(BeNil())
|
||||
Expect(lyrics).To(Equal(model.LyricList{
|
||||
model.Lyrics{
|
||||
Lang: "xxx",
|
||||
Line: []model.Line{
|
||||
{
|
||||
Value: "We're no strangers to love",
|
||||
},
|
||||
{
|
||||
Value: "You know the rules and so do I",
|
||||
},
|
||||
},
|
||||
Synced: false,
|
||||
},
|
||||
}))
|
||||
})
|
||||
})
|
||||
})
|
||||
@@ -5,7 +5,6 @@ import (
	"sort"
	"time"

-	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
@@ -61,9 +60,7 @@ func newPlayTracker(ds model.DataStore, broker events.Broker) *playTracker {
			continue
		}
		enabled = append(enabled, name)
-		if conf.Server.DevEnableBufferedScrobble {
-			s = newBufferedScrobbler(ds, s, name)
-		}
+		s = newBufferedScrobbler(ds, s, name)
		p.scrobblers[name] = s
	}
	log.Debug("List of scrobblers enabled", "names", enabled)
@@ -183,11 +180,7 @@ func (p *playTracker) dispatchScrobble(ctx context.Context, t *model.MediaFile,
		if !s.IsAuthorized(ctx, u.ID) {
			continue
		}
-		if conf.Server.DevEnableBufferedScrobble {
-			log.Debug(ctx, "Buffering Scrobble", "scrobbler", name, "track", t.Title, "artist", t.Artist)
-		} else {
-			log.Debug(ctx, "Sending Scrobble", "scrobbler", name, "track", t.Title, "artist", t.Artist)
-		}
+		log.Debug(ctx, "Buffering Scrobble", "scrobbler", name, "track", t.Title, "artist", t.Artist)
		err := s.Scrobble(ctx, u.ID, scrobble)
		if err != nil {
			log.Error(ctx, "Error sending Scrobble", "scrobbler", name, "track", t.Title, "artist", t.Artist, err)
@@ -5,7 +5,6 @@ import (
	"errors"
	"time"

-	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"

	"github.com/navidrome/navidrome/model"
@@ -27,9 +26,6 @@ var _ = Describe("PlayTracker", func() {
	var fake fakeScrobbler

	BeforeEach(func() {
-		// Remove buffering to simplify tests
-		conf.Server.DevEnableBufferedScrobble = false
-
		ctx = context.Background()
		ctx = request.WithUser(ctx, model.User{ID: "u-1"})
		ctx = request.WithPlayer(ctx, model.Player{ScrobbleEnabled: true})
@@ -42,6 +38,7 @@ var _ = Describe("PlayTracker", func() {
			return nil
		})
		tracker = newPlayTracker(ds, events.GetBroker())
+		tracker.(*playTracker).scrobblers["fake"] = &fake // Bypass buffering for tests

		track = model.MediaFile{
			ID: "123",
go.mod (12 changes)
@@ -38,7 +38,6 @@ require (
|
||||
github.com/lestrrat-go/jwx/v2 v2.1.4
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0
|
||||
github.com/mattn/go-sqlite3 v1.14.27
|
||||
github.com/metoro-io/mcp-golang v0.11.0
|
||||
github.com/microcosm-cc/bluemonday v1.0.27
|
||||
github.com/mileusna/useragent v1.3.5
|
||||
github.com/onsi/ginkgo/v2 v2.23.4
|
||||
@@ -54,7 +53,6 @@ require (
|
||||
github.com/spf13/cobra v1.9.1
|
||||
github.com/spf13/viper v1.20.1
|
||||
github.com/stretchr/testify v1.10.0
|
||||
github.com/tetratelabs/wazero v1.9.0
|
||||
github.com/unrolled/secure v1.17.0
|
||||
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1
|
||||
go.uber.org/goleak v1.3.0
|
||||
@@ -70,9 +68,7 @@ require (
|
||||
|
||||
require (
|
||||
github.com/aymerick/douceur v0.2.0 // indirect
|
||||
github.com/bahlo/generic-list-go v0.2.0 // indirect
|
||||
github.com/beorn7/perks v1.0.1 // indirect
|
||||
github.com/buger/jsonparser v1.1.1 // indirect
|
||||
github.com/cespare/reflex v0.3.1 // indirect
|
||||
github.com/cespare/xxhash/v2 v2.3.0 // indirect
|
||||
github.com/creack/pty v1.1.11 // indirect
|
||||
@@ -89,7 +85,6 @@ require (
|
||||
github.com/gorilla/css v1.0.1 // indirect
|
||||
github.com/hashicorp/errwrap v1.1.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
github.com/invopop/jsonschema v0.12.0 // indirect
|
||||
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
|
||||
github.com/klauspost/compress v1.18.0 // indirect
|
||||
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
|
||||
@@ -101,11 +96,9 @@ require (
|
||||
github.com/lestrrat-go/httprc v1.0.6 // indirect
|
||||
github.com/lestrrat-go/iter v1.0.2 // indirect
|
||||
github.com/lestrrat-go/option v1.0.1 // indirect
|
||||
github.com/mailru/easyjson v0.7.7 // indirect
|
||||
github.com/mfridman/interpolate v0.0.2 // indirect
|
||||
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
|
||||
github.com/ogier/pflag v0.0.1 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
|
||||
github.com/prometheus/client_model v0.6.1 // indirect
|
||||
github.com/prometheus/common v0.62.0 // indirect
|
||||
@@ -120,11 +113,6 @@ require (
|
||||
github.com/spf13/pflag v1.0.6 // indirect
|
||||
github.com/stretchr/objx v0.5.2 // indirect
|
||||
github.com/subosito/gotenv v1.6.0 // indirect
|
||||
github.com/tidwall/gjson v1.18.0 // indirect
|
||||
github.com/tidwall/match v1.1.1 // indirect
|
||||
github.com/tidwall/pretty v1.2.1 // indirect
|
||||
github.com/tidwall/sjson v1.2.5 // indirect
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
|
||||
github.com/zeebo/xxh3 v1.0.2 // indirect
|
||||
go.uber.org/automaxprocs v1.6.0 // indirect
|
||||
go.uber.org/multierr v1.11.0 // indirect
|
||||
|
||||
go.sum (27 changes)
@@ -8,16 +8,12 @@ github.com/andybalholm/cascadia v1.3.3 h1:AG2YHrzJIm4BZ19iwJ/DAua6Btl3IwJX+VI4kk
|
||||
github.com/andybalholm/cascadia v1.3.3/go.mod h1:xNd9bqTn98Ln4DwST8/nG+H0yuB8Hmgu1YHNnWw0GeA=
|
||||
github.com/aymerick/douceur v0.2.0 h1:Mv+mAeH1Q+n9Fr+oyamOlAkUNPWPlA8PPGR0QAaYuPk=
|
||||
github.com/aymerick/douceur v0.2.0/go.mod h1:wlT5vV2O3h55X9m7iVYN0TBM0NH/MmbLnd30/FjWUq4=
|
||||
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
|
||||
github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/bmatcuk/doublestar/v4 v4.8.1 h1:54Bopc5c2cAvhLRAzqOGCYHYyhcDHsFF4wWIR5wKP38=
|
||||
github.com/bmatcuk/doublestar/v4 v4.8.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
|
||||
github.com/bradleyjkemp/cupaloy/v2 v2.8.0 h1:any4BmKE+jGIaMpnU8YgH/I2LPiLBufr6oMMlVBbn9M=
|
||||
github.com/bradleyjkemp/cupaloy/v2 v2.8.0/go.mod h1:bm7JXdkRd4BHJk9HpwqAI8BoAY1lps46Enkdqw6aRX0=
|
||||
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
|
||||
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
|
||||
github.com/cespare/reflex v0.3.1 h1:N4Y/UmRrjwOkNT0oQQnYsdr6YBxvHqtSfPB4mqOyAKk=
|
||||
github.com/cespare/reflex v0.3.1/go.mod h1:I+0Pnu2W693i7Hv6ZZG76qHTY0mgUa7uCIfCtikXojE=
|
||||
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||
@@ -108,11 +104,8 @@ github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+l
|
||||
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
|
||||
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
|
||||
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
|
||||
github.com/invopop/jsonschema v0.12.0 h1:6ovsNSuvn9wEQVOyc72aycBMVQFKz7cPdMJn10CvzRI=
|
||||
github.com/invopop/jsonschema v0.12.0/go.mod h1:ffZ5Km5SWWRAIN6wbDXItl95euhFz2uON45H2qjYt+0=
|
||||
github.com/jellydator/ttlcache/v3 v3.3.0 h1:BdoC9cE81qXfrxeb9eoJi9dWrdhSuwXMAnHTbnBm4Wc=
|
||||
github.com/jellydator/ttlcache/v3 v3.3.0/go.mod h1:bj2/e0l4jRnQdrnSTaGTsh4GSXvMjQcy41i7th0GVGw=
|
||||
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
|
||||
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
|
||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
||||
github.com/kardianos/service v1.2.2 h1:ZvePhAHfvo0A7Mftk/tEzqEZ7Q4lgnR8sGz4xu1YX60=
|
||||
@@ -149,16 +142,12 @@ github.com/lestrrat-go/jwx/v2 v2.1.4 h1:uBCMmJX8oRZStmKuMMOFb0Yh9xmEMgNJLgjuKKt4
|
||||
github.com/lestrrat-go/jwx/v2 v2.1.4/go.mod h1:nWRbDFR1ALG2Z6GJbBXzfQaYyvn751KuuyySN2yR6is=
|
||||
github.com/lestrrat-go/option v1.0.1 h1:oAzP2fvZGQKWkvHa1/SAcFolBEca1oN+mQ7eooNBEYU=
|
||||
github.com/lestrrat-go/option v1.0.1/go.mod h1:5ZHFbivi4xwXxhxY9XHDe2FHo6/Z7WWmtT7T5nBBp3I=
|
||||
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
|
||||
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0 h1:P64+dmq21hhWdtvZfEAofnvJULaRR1Yib0+PnU669bE=
|
||||
github.com/matoous/go-nanoid/v2 v2.1.0/go.mod h1:KlbGNQ+FhrUNIHUxZdL63t7tl4LaPkZNpUULS8H4uVM=
|
||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/mattn/go-sqlite3 v1.14.27 h1:drZCnuvf37yPfs95E5jd9s3XhdVWLal+6BOK6qrv6IU=
|
||||
github.com/mattn/go-sqlite3 v1.14.27/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
|
||||
github.com/metoro-io/mcp-golang v0.11.0 h1:1k+VSE9QaeMTLn0gJ3FgE/DcjsCBsLFnz5eSFbgXUiI=
|
||||
github.com/metoro-io/mcp-golang v0.11.0/go.mod h1:ifLP9ZzKpN1UqFWNTpAHOqSvNkMK6b7d1FSZ5Lu0lN0=
|
||||
github.com/mfridman/interpolate v0.0.2 h1:pnuTK7MQIxxFz1Gr+rjSIx9u7qVjf5VOoM/u6BbAxPY=
|
||||
github.com/mfridman/interpolate v0.0.2/go.mod h1:p+7uk6oE07mpE/Ik1b8EckO0O4ZXiGAfshKBWLUM9Xg=
|
||||
github.com/microcosm-cc/bluemonday v1.0.27 h1:MpEUotklkwCSLeH+Qdx1VJgNqLlpY2KXwXFM08ygZfk=
|
||||
@@ -178,8 +167,6 @@ github.com/onsi/gomega v1.37.0/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU
|
||||
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
|
||||
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
|
||||
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
@@ -247,22 +234,8 @@ github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOf
|
||||
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
|
||||
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
|
||||
github.com/tetratelabs/wazero v1.9.0 h1:IcZ56OuxrtaEz8UYNRHBrUa9bYeX9oVY93KspZZBf/I=
|
||||
github.com/tetratelabs/wazero v1.9.0/go.mod h1:TSbcXCfFP0L2FGkRPxHphadXPjo1T6W+CseNNY7EkjM=
|
||||
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
|
||||
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
|
||||
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
|
||||
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
|
||||
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
|
||||
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
|
||||
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
|
||||
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
|
||||
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
|
||||
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
|
||||
github.com/unrolled/secure v1.17.0 h1:Io7ifFgo99Bnh0J7+Q+qcMzWM6kaDPCA5FroFZEdbWU=
|
||||
github.com/unrolled/secure v1.17.0/go.mod h1:BmF5hyM6tXczk3MpQkFf1hpKSRqCyhqcbiQtiAF7+40=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
|
||||
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
|
||||
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1 h1:gEOO8jv9F4OT7lGCjxCBTO/36wtF6j2nSip77qHd4x4=
|
||||
github.com/xrash/smetrics v0.0.0-20240521201337-686a1a2994c1/go.mod h1:Ohn+xnUBiLI6FVj/9LpzZWtj1/D6lUovWYBkxHVV3aM=
|
||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||
|
||||
@@ -1 +0,0 @@
@@ -32,7 +32,7 @@ var (
	// Should either be at the beginning of file, or beginning of line
	syncRegex  = regexp.MustCompile(`(^|\n)\s*` + timeRegexString)
	timeRegex  = regexp.MustCompile(timeRegexString)
-	lrcIdRegex = regexp.MustCompile(`\[(ar|ti|offset):([^]]+)]`)
+	lrcIdRegex = regexp.MustCompile(`\[(ar|ti|offset|lang):([^]]+)]`)
)

func (l Lyrics) IsEmpty() bool {
@@ -72,6 +72,8 @@ func ToLyrics(language, text string) (*Lyrics, error) {
			switch idTag[1] {
			case "ar":
				artist = str.SanitizeText(strings.TrimSpace(idTag[2]))
+			case "lang":
+				language = str.SanitizeText(strings.TrimSpace(idTag[2]))
			case "offset":
				{
					off, err := strconv.ParseInt(strings.TrimSpace(idTag[2]), 10, 64)

@@ -9,8 +9,9 @@ import (
var _ = Describe("ToLyrics", func() {
	It("should parse tags with spaces", func() {
		num := int64(1551)
-		lyrics, err := ToLyrics("xxx", "[offset: 1551 ]\n[ti: A title ]\n[ar: An artist ]\n[00:00.00]Hi there")
+		lyrics, err := ToLyrics("xxx", "[lang: eng ]\n[offset: 1551 ]\n[ti: A title ]\n[ar: An artist ]\n[00:00.00]Hi there")
		Expect(err).ToNot(HaveOccurred())
+		Expect(lyrics.Lang).To(Equal("eng"))
		Expect(lyrics.Synced).To(BeTrue())
		Expect(lyrics.DisplayArtist).To(Equal("An artist"))
		Expect(lyrics.DisplayTitle).To(Equal("A title"))
@@ -315,7 +315,7 @@ func (r *albumRepository) GetTouchedAlbums(libID int) (model.AlbumCursor, error)
// RefreshPlayCounts updates the play count and last play date annotations for all albums, based
// on the media files associated with them.
func (r *albumRepository) RefreshPlayCounts() (int64, error) {
-	query := rawSQL(`
+	query := Expr(`
with play_counts as (
	select user_id, album_id, sum(play_count) as total_play_count, max(play_date) as last_play_date
	from media_file
@@ -239,7 +239,7 @@ func (r *artistRepository) purgeEmpty() error {
// RefreshPlayCounts updates the play count and last play date annotations for all artists, based
// on the media files associated with them.
func (r *artistRepository) RefreshPlayCounts() (int64, error) {
-	query := rawSQL(`
+	query := Expr(`
with play_counts as (
	select user_id, atom as artist_id, sum(play_count) as total_play_count, max(play_date) as last_play_date
	from media_file
@@ -259,76 +259,123 @@ on conflict (user_id, item_id, item_type) do update
	return r.executeSQL(query)
}

-// RefreshStats updates the stats field for all artists, based on the media files associated with them.
-// BFR Maybe filter by "touched" artists?
+// RefreshStats updates the stats field for artists whose associated media files were updated after the oldest recorded library scan time.
+// It processes artists in batches to handle potentially large updates.
func (r *artistRepository) RefreshStats() (int64, error) {
-	// First get all counters, one query groups by artist/role, and another with totals per artist.
-	// Union both queries and group by artist to get a single row of counters per artist/role.
-	// Then format the counters in a JSON object, one key for each role.
-	// Finally update the artist table with the new counters
-	// In all queries, atom is the artist ID and path is the role (or "total" for the totals)
-	query := rawSQL(`
--- CTE to get counters for each artist, grouped by role
-with artist_role_counters as (
-    -- Get counters for each artist, grouped by role
-    -- (remove the index from the role: composer[0] => composer
-    select atom as artist_id,
-           substr(
-                   replace(jt.path, '$.', ''),
-                   1,
-                   case when instr(replace(jt.path, '$.', ''), '[') > 0
-                            then instr(replace(jt.path, '$.', ''), '[') - 1
-                        else length(replace(jt.path, '$.', ''))
-                       end
-           ) as role,
-           count(distinct album_id) as album_count,
-           count(mf.id) as count,
-           sum(size) as size
-    from media_file mf
-             left join json_tree(participants) jt
-    where atom is not null and key = 'id'
-    group by atom, role
-),
+	touchedArtistsQuerySQL := `
+		SELECT DISTINCT mfa.artist_id
+		FROM media_file_artists mfa
+		JOIN media_file mf ON mfa.media_file_id = mf.id
+		WHERE mf.updated_at > (SELECT last_scan_at FROM library ORDER BY last_scan_at ASC LIMIT 1)
+	`

-    -- CTE to get the totals for each artist
-    artist_total_counters as (
-        select mfa.artist_id,
-               'total' as role,
-               count(distinct mf.album_id) as album_count,
-               count(distinct mf.id) as count,
-               sum(mf.size) as size
-        from (select artist_id, media_file_id
-              from main.media_file_artists) as mfa
-                 join main.media_file mf on mfa.media_file_id = mf.id
-        group by mfa.artist_id
-    ),
+	var allTouchedArtistIDs []string
+	if err := r.db.NewQuery(touchedArtistsQuerySQL).Column(&allTouchedArtistIDs); err != nil {
+		return 0, fmt.Errorf("fetching touched artist IDs: %w", err)
+	}

-    -- CTE to combine role and total counters
-    combined_counters as (
-        select artist_id, role, album_count, count, size
-        from artist_role_counters
-        union
-        select artist_id, role, album_count, count, size
-        from artist_total_counters
-    ),
+	if len(allTouchedArtistIDs) == 0 {
+		log.Debug(r.ctx, "RefreshStats: No artists to update.")
+		return 0, nil
+	}
+	log.Debug(r.ctx, "RefreshStats: Found artists to update.", "count", len(allTouchedArtistIDs))

-    -- CTE to format the counters in a JSON object
-    artist_counters as (
-        select artist_id as id,
-               json_group_object(
-                       replace(role, '"', ''),
-                       json_object('a', album_count, 'm', count, 's', size)
-               ) as counters
-        from combined_counters
-        group by artist_id
-    )
+	// Template for the batch update with placeholder markers that we'll replace
+	batchUpdateStatsSQL := `
+		WITH artist_role_counters AS (
+			SELECT jt.atom AS artist_id,
+				substr(
+					replace(jt.path, '$.', ''),
+					1,
+					CASE WHEN instr(replace(jt.path, '$.', ''), '[') > 0
+						THEN instr(replace(jt.path, '$.', ''), '[') - 1
+						ELSE length(replace(jt.path, '$.', ''))
+					END
+				) AS role,
+				count(DISTINCT mf.album_id) AS album_count,
+				count(mf.id) AS count,
+				sum(mf.size) AS size
+			FROM media_file mf
+			JOIN json_tree(mf.participants) jt ON jt.key = 'id' AND jt.atom IS NOT NULL
+			WHERE jt.atom IN (ROLE_IDS_PLACEHOLDER) -- Will replace with actual placeholders
+			GROUP BY jt.atom, role
+		),
+		artist_total_counters AS (
+			SELECT mfa.artist_id,
+				'total' AS role,
+				count(DISTINCT mf.album_id) AS album_count,
+				count(DISTINCT mf.id) AS count,
+				sum(mf.size) AS size
+			FROM media_file_artists mfa
+			JOIN media_file mf ON mfa.media_file_id = mf.id
+			WHERE mfa.artist_id IN (TOTAL_IDS_PLACEHOLDER) -- Will replace with actual placeholders
+			GROUP BY mfa.artist_id
+		),
+		combined_counters AS (
+			SELECT artist_id, role, album_count, count, size FROM artist_role_counters
+			UNION
+			SELECT artist_id, role, album_count, count, size FROM artist_total_counters
+		),
+		artist_counters AS (
+			SELECT artist_id AS id,
+				json_group_object(
+					replace(role, '"', ''),
+					json_object('a', album_count, 'm', count, 's', size)
+				) AS counters
+			FROM combined_counters
+			GROUP BY artist_id
+		)
+		UPDATE artist
+		SET stats = coalesce((SELECT counters FROM artist_counters ac WHERE ac.id = artist.id), '{}'),
+			updated_at = datetime(current_timestamp, 'localtime')
+		WHERE artist.id IN (UPDATE_IDS_PLACEHOLDER) AND artist.id <> '';` // Will replace with actual placeholders

--- Update the artist table with the new counters
-update artist
-set stats = coalesce((select counters from artist_counters where artist_counters.id = artist.id), '{}'),
-    updated_at = datetime(current_timestamp, 'localtime')
-where id <> ''; -- always true, to avoid warnings`)
-	return r.executeSQL(query)
+	var totalRowsAffected int64 = 0
+	const batchSize = 1000
+
+	batchCounter := 0
+	for artistIDBatch := range slice.CollectChunks(slices.Values(allTouchedArtistIDs), batchSize) {
+		batchCounter++
+		log.Trace(r.ctx, "RefreshStats: Processing batch", "batchNum", batchCounter, "batchSize", len(artistIDBatch))
+
+		// Create placeholders for each ID in the IN clauses
+		placeholders := make([]string, len(artistIDBatch))
+		for i := range artistIDBatch {
+			placeholders[i] = "?"
+		}
+		// Don't add extra parentheses, the IN clause already expects them in SQL syntax
+		inClause := strings.Join(placeholders, ",")
+
+		// Replace the placeholder markers with actual SQL placeholders
+		batchSQL := strings.Replace(batchUpdateStatsSQL, "ROLE_IDS_PLACEHOLDER", inClause, 1)
+		batchSQL = strings.Replace(batchSQL, "TOTAL_IDS_PLACEHOLDER", inClause, 1)
+		batchSQL = strings.Replace(batchSQL, "UPDATE_IDS_PLACEHOLDER", inClause, 1)
+
+		// Create a single parameter array with all IDs (repeated 3 times for each IN clause)
+		// We need to repeat each ID 3 times (once for each IN clause)
+		var args []interface{}
+		for _, id := range artistIDBatch {
+			args = append(args, id) // For ROLE_IDS_PLACEHOLDER
+		}
+		for _, id := range artistIDBatch {
+			args = append(args, id) // For TOTAL_IDS_PLACEHOLDER
+		}
+		for _, id := range artistIDBatch {
+			args = append(args, id) // For UPDATE_IDS_PLACEHOLDER
+		}
+
+		// Now use Expr with the expanded SQL and all parameters
+		sqlizer := Expr(batchSQL, args...)
+
+		rowsAffected, err := r.executeSQL(sqlizer)
+		if err != nil {
+			return totalRowsAffected, fmt.Errorf("executing batch update for artist stats (batch %d): %w", batchCounter, err)
+		}
+		totalRowsAffected += rowsAffected
+	}
+
+	log.Debug(r.ctx, "RefreshStats: Successfully updated stats.", "totalArtistsProcessed", len(allTouchedArtistIDs), "totalDBRowsAffected", totalRowsAffected)
+	return totalRowsAffected, nil
}

func (r *artistRepository) Search(q string, offset int, size int, includeMissing bool) (model.Artists, error) {
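The batching idea above (chunk the ID list, build one "?" per ID, and repeat the arguments once per IN clause) can be sketched independently of the repository code. A minimal illustration using the standard library's slices.Chunk; the table name and SQL are placeholders, not Navidrome's actual statements.

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// buildBatchedUpdates splits IDs into fixed-size batches and builds a
// "?,?,..." placeholder list for each batch's IN clause.
func buildBatchedUpdates(ids []string, batchSize int) []string {
	var stmts []string
	for batch := range slices.Chunk(ids, batchSize) {
		in := strings.TrimSuffix(strings.Repeat("?,", len(batch)), ",")
		stmts = append(stmts, fmt.Sprintf(
			"UPDATE some_table SET stats = '{}' WHERE id IN (%s)", in)) // placeholder SQL
	}
	return stmts
}

func main() {
	ids := []string{"a", "b", "c", "d", "e"}
	for _, s := range buildBatchedUpdates(ids, 2) {
		fmt.Println(s)
	}
}
```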
@@ -57,14 +57,6 @@ func toCamelCase(str string) string {
	})
}

-// rawSQL is a string that will be used as is in the SQL query executor
-// It does not support arguments
-type rawSQL string
-
-func (r rawSQL) ToSql() (string, []interface{}, error) {
-	return string(r), nil, nil
-}
-
func Exists(subTable string, cond squirrel.Sqlizer) existsCond {
	return existsCond{subTable: subTable, cond: cond, not: false}
}
@@ -136,7 +136,7 @@ func (r *libraryRepository) ScanEnd(id int) error {
		return err
	}
	// https://www.sqlite.org/pragma.html#pragma_optimize
-	_, err = r.executeSQL(rawSQL("PRAGMA optimize=0x10012;"))
+	_, err = r.executeSQL(Expr("PRAGMA optimize=0x10012;"))
	return err
}
@@ -87,11 +87,12 @@ func NewMediaFileRepository(ctx context.Context, db dbx.Builder) model.MediaFile

var mediaFileFilter = sync.OnceValue(func() map[string]filterFunc {
	filters := map[string]filterFunc{
-		"id":       idFilter("media_file"),
-		"title":    fullTextFilter("media_file"),
-		"starred":  booleanFilter,
-		"genre_id": tagIDFilter,
-		"missing":  booleanFilter,
+		"id":         idFilter("media_file"),
+		"title":      fullTextFilter("media_file"),
+		"starred":    booleanFilter,
+		"genre_id":   tagIDFilter,
+		"missing":    booleanFilter,
+		"artists_id": artistFilter,
	}
	// Add all album tags as filters
	for tag := range model.TagMappings() {
@@ -60,7 +60,7 @@ where tag.id = updated_values.id;
`
	for _, table := range []string{"album", "media_file"} {
		start := time.Now()
-		query := rawSQL(fmt.Sprintf(template, table))
+		query := Expr(fmt.Sprintf(template, table))
		c, err := r.executeSQL(query)
		log.Debug(r.ctx, "Updated tag counts", "table", table, "elapsed", time.Since(start), "updated", c)
		if err != nil {
@@ -1,460 +1,518 @@
|
||||
{
|
||||
"languageName": "한국어",
|
||||
"resources": {
|
||||
"song": {
|
||||
"name": "노래 |||| 노래들",
|
||||
"fields": {
|
||||
"albumArtist": "앨범 아티스트",
|
||||
"duration": "시간",
|
||||
"trackNumber": "#",
|
||||
"playCount": "재생 횟수",
|
||||
"title": "제목",
|
||||
"artist": "아티스트",
|
||||
"album": "앨범",
|
||||
"path": "파일 경로",
|
||||
"genre": "장르",
|
||||
"compilation": "컴필레이션",
|
||||
"year": "년",
|
||||
"size": "파일 크기",
|
||||
"updatedAt": "업데이트됨",
|
||||
"bitRate": "비트레이트",
|
||||
"discSubtitle": "디스크 서브타이틀",
|
||||
"starred": "즐겨찾기",
|
||||
"comment": "댓글",
|
||||
"rating": "평가",
|
||||
"quality": "품질",
|
||||
"bpm": "BPM",
|
||||
"playDate": "마지막 재생",
|
||||
"channels": "채널",
|
||||
"createdAt": "추가된 날짜"
|
||||
},
|
||||
"actions": {
|
||||
"addToQueue": "나중에 재생",
|
||||
"playNow": "지금 재생",
|
||||
"addToPlaylist": "재생목록에 추가",
|
||||
"shuffleAll": "모든 노래 셔플",
|
||||
"download": "다운로드",
|
||||
"playNext": "다음 재생",
|
||||
"info": "정보"
|
||||
}
|
||||
},
|
||||
"album": {
|
||||
"name": "앨범 |||| 앨범들",
|
||||
"fields": {
|
||||
"albumArtist": "앨범 아티스트",
|
||||
"artist": "아티스트",
|
||||
"duration": "시간",
|
||||
"songCount": "노래",
|
||||
"playCount": "재생 횟수",
|
||||
"name": "이름",
|
||||
"genre": "장르",
|
||||
"compilation": "컴필레이션",
|
||||
"year": "년",
|
||||
"updatedAt": "업데이트됨",
|
||||
"comment": "댓글",
|
||||
"rating": "평가",
|
||||
"createdAt": "추가된 날짜",
|
||||
"size": "크기",
|
||||
"originalDate": "오리지널",
|
||||
"releaseDate": "발매일",
|
||||
"releases": "발매 음반 |||| 발매 음반들",
|
||||
"released": "발매됨"
|
||||
},
|
||||
"actions": {
|
||||
"playAll": "재생",
|
||||
"playNext": "다음에 재생",
|
||||
"addToQueue": "나중에 재생",
|
||||
"shuffle": "셔플",
|
||||
"addToPlaylist": "재생목록 추가",
|
||||
"download": "다운로드",
|
||||
"info": "정보",
|
||||
"share": "공유"
|
||||
},
|
||||
"lists": {
|
||||
"all": "모두",
|
||||
"random": "랜덤",
|
||||
"recentlyAdded": "최근 추가됨",
|
||||
"recentlyPlayed": "최근 재생됨",
|
||||
"mostPlayed": "가장 많이 재생됨",
|
||||
"starred": "즐겨찾기",
|
||||
"topRated": "높은 평가"
|
||||
}
|
||||
},
|
||||
"artist": {
|
||||
"name": "아티스트 |||| 아티스트들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"albumCount": "앨범 수",
|
||||
"songCount": "노래 수",
|
||||
"playCount": "재생 횟수",
|
||||
"rating": "평가",
|
||||
"genre": "장르",
|
||||
"size": "크기"
|
||||
}
|
||||
},
|
||||
"user": {
|
||||
"name": "사용자 |||| 사용자들",
|
||||
"fields": {
|
||||
"userName": "사용자이름",
|
||||
"isAdmin": "관리자",
|
||||
"lastLoginAt": "마지막 로그인",
|
||||
"updatedAt": "업데이트됨",
|
||||
"name": "이름",
|
||||
"password": "비밀번호",
|
||||
"createdAt": "생성됨",
|
||||
"changePassword": "비밀번호를 변경할까요?",
|
||||
"currentPassword": "현재 비밀번호",
|
||||
"newPassword": "새 비밀번호",
|
||||
"token": "토큰"
|
||||
},
|
||||
"helperTexts": {
|
||||
"name": "이름 변경 사항은 다음 로그인 이후에 반영됨"
|
||||
},
|
||||
"notifications": {
|
||||
"created": "사용자 생성됨",
|
||||
"updated": "사용자 업데이트됨",
|
||||
"deleted": "사용자 삭제됨"
|
||||
},
|
||||
"message": {
|
||||
"listenBrainzToken": "ListenBrainz 사용자 토큰을 입력하세요.",
|
||||
"clickHereForToken": "여기를 클릭하여 토큰을 얻으세요"
|
||||
}
|
||||
},
|
||||
"player": {
|
||||
"name": "플레이어 |||| 플레이어들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"transcodingId": "트랜스코딩",
|
||||
"maxBitRate": "최대 비트레이트",
|
||||
"client": "클라이언트",
|
||||
"userName": "사용자이름",
|
||||
"lastSeen": "마지막으로 봤음",
|
||||
"reportRealPath": "실제 경로 보고서",
|
||||
"scrobbleEnabled": "외부 서비스에 스크로블 보내기"
|
||||
}
|
||||
},
|
||||
"transcoding": {
|
||||
"name": "트랜스코딩 |||| 트랜스코딩들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"targetFormat": "대상 포맷",
|
||||
"defaultBitRate": "기본 비트레이트",
|
||||
"command": "명령"
|
||||
}
|
||||
},
|
||||
"playlist": {
|
||||
"name": "재생목록 |||| 재생목록들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"duration": "지속",
|
||||
"ownerName": "소유자",
|
||||
"public": "공개",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨",
|
||||
"songCount": "노래",
|
||||
"comment": "댓글",
|
||||
"sync": "자동 가져오기",
|
||||
"path": "다음에서 가져오기"
|
||||
},
|
||||
"actions": {
|
||||
"selectPlaylist": "재생목록 선택:",
|
||||
"addNewPlaylist": "\"%{name}\" 만들기",
|
||||
"export": "내보내기",
|
||||
"makePublic": "공개",
|
||||
"makePrivate": "비공개"
|
||||
},
|
||||
"message": {
|
||||
"duplicate_song": "중복된 노래 추가",
|
||||
"song_exist": "이미 재생목록에 존재하는 노래입니다. 중복을 추가할까요 아니면 건너뛸까요?"
|
||||
}
|
||||
},
|
||||
"radio": {
|
||||
"name": "라디오 |||| 라디오들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"streamUrl": "스트리밍 URL",
|
||||
"homePageUrl": "홈페이지 URL",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨"
|
||||
},
|
||||
"actions": {
|
||||
"playNow": "지금 재생"
|
||||
}
|
||||
},
|
||||
"share": {
|
||||
"name": "공유 |||| 공유되는 것들",
|
||||
"fields": {
|
||||
"username": "공유됨",
|
||||
"url": "URL",
|
||||
"description": "설명",
|
||||
"contents": "컨텐츠",
|
||||
"expiresAt": "만료",
|
||||
"lastVisitedAt": "마지막 방문",
|
||||
"visitCount": "방문 수",
|
||||
"format": "포맷",
|
||||
"maxBitRate": "최대 비트레이트",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨",
|
||||
"downloadable": "다운로드를 허용할까요?"
|
||||
}
|
||||
}
|
||||
"languageName": "한국어",
|
||||
"resources": {
|
||||
"song": {
|
||||
"name": "노래 |||| 노래들",
|
||||
"fields": {
|
||||
"albumArtist": "앨범 아티스트",
|
||||
"duration": "시간",
|
||||
"trackNumber": "#",
|
||||
"playCount": "재생 횟수",
|
||||
"title": "제목",
|
||||
"artist": "아티스트",
|
||||
"album": "앨범",
|
||||
"path": "파일 경로",
|
||||
"genre": "장르",
|
||||
"compilation": "컴필레이션",
|
||||
"year": "년",
|
||||
"size": "파일 크기",
|
||||
"updatedAt": "업데이트됨",
|
||||
"bitRate": "비트레이트",
|
||||
"bitDepth": "비트 심도",
|
||||
"sampleRate": "샘플레이트",
|
||||
"channels": "채널",
|
||||
"discSubtitle": "디스크 서브타이틀",
|
||||
"starred": "즐겨찾기",
|
||||
"comment": "댓글",
|
||||
"rating": "평가",
|
||||
"quality": "품질",
|
||||
"bpm": "BPM",
|
||||
"playDate": "마지막 재생",
|
||||
"createdAt": "추가된 날짜",
|
||||
"grouping": "그룹",
|
||||
"mood": "분위기",
|
||||
"participants": "추가 참가자",
|
||||
"tags": "추가 태그",
|
||||
"mappedTags": "매핑된 태그",
|
||||
"rawTags": "원시 태그"
|
||||
},
|
||||
"actions": {
|
||||
"addToQueue": "나중에 재생",
|
||||
"playNow": "지금 재생",
|
||||
"addToPlaylist": "재생목록에 추가",
|
||||
"shuffleAll": "모든 노래 셔플",
|
||||
"download": "다운로드",
|
||||
"playNext": "다음 재생",
|
||||
"info": "정보 얻기"
|
||||
}
|
||||
},
|
||||
"ra": {
|
||||
"auth": {
|
||||
"welcome1": "Navidrome을 설치해 주셔서 감사합니다!",
|
||||
"welcome2": "관리자를 만들고 시작해 보세요",
|
||||
"confirmPassword": "비밀번호 확인",
|
||||
"buttonCreateAdmin": "관리자 만들기",
|
||||
"auth_check_error": "계속하려면 로그인하세요",
|
||||
"user_menu": "프로파일",
|
||||
"username": "사용자이름",
|
||||
"password": "비밀번호",
|
||||
"sign_in": "가입",
|
||||
"sign_in_error": "인증에 실패했습니다. 다시 시도하세요",
|
||||
"logout": "로그아웃"
|
||||
},
|
||||
"validation": {
|
||||
"invalidChars": "문자와 숫자만 사용하세요",
|
||||
"passwordDoesNotMatch": "비밀번호가 일치하지 않음",
|
||||
"required": "필수 항목임",
|
||||
"minLength": "%{min}자 이하여야 함",
|
||||
"maxLength": "%{max}자 이하여야 함",
|
||||
"minValue": "%{min}자 이상이어야 함",
|
||||
"maxValue": "%{max}자 이하여야 함",
|
||||
"number": "숫자여야 함",
|
||||
"email": "유효한 이메일이어야 함",
|
||||
"oneOf": "다음 중 하나여야 함: %{options}",
|
||||
"regex": "특정 형식(정규식)과 일치해야 함: %{pattern}",
|
||||
"unique": "고유해야 함",
|
||||
"url": "유효한 URL이어야 함"
|
||||
},
|
||||
"action": {
|
||||
"add_filter": "필터 추가",
|
||||
"add": "추가",
|
||||
"back": "뒤로 가기",
|
||||
"bulk_actions": "1 개 항목이 선택되었음 |||| %{smart_count} 개 항목이 선택되었음",
|
||||
"cancel": "취소",
|
||||
"clear_input_value": "값 지우기",
|
||||
"clone": "복제",
|
||||
"confirm": "확인",
|
||||
"create": "만들기",
|
||||
"delete": "삭제",
|
||||
"edit": "편집",
|
||||
"export": "내보내기",
|
||||
"list": "목록",
|
||||
"refresh": "새로 고침",
|
||||
"remove_filter": "이 필터 제거",
|
||||
"remove": "제거",
|
||||
"save": "저장",
|
||||
"search": "검색",
|
||||
"show": "표시",
|
||||
"sort": "정렬",
|
||||
"undo": "실행 취소",
|
||||
"expand": "확장",
|
||||
"close": "닫기",
|
||||
"open_menu": "메뉴 열기",
|
||||
"close_menu": "메뉴 닫기",
|
||||
"unselect": "선택 해제",
|
||||
"skip": "건너뛰기",
|
||||
"bulk_actions_mobile": "1 |||| %{smart_count}",
|
||||
"share": "공유",
|
||||
"download": "다운로드"
|
||||
},
|
||||
"boolean": {
|
||||
"true": "예",
|
||||
"false": "아니요"
|
||||
},
|
||||
"page": {
|
||||
"create": "%{name} 만들기",
|
||||
"dashboard": "대시보드",
|
||||
"edit": "%{name} #%{id}",
|
||||
"error": "문제가 발생하였음",
|
||||
"list": "%{name}",
|
||||
"loading": "로딩 중",
|
||||
"not_found": "찾을 수 없음",
|
||||
"show": "%{name} #%{id}",
|
||||
"empty": "아직 %{name}이(가) 없습니다.",
|
||||
"invite": "추가할까요?"
|
||||
},
|
||||
"input": {
|
||||
"file": {
|
||||
"upload_several": "업로드할 파일을 몇 개 놓거나 클릭하여 하나를 선택하세요.",
|
||||
"upload_single": "업로드할 파일을 몇 개 놓거나 클릭하여 선택하세요."
|
||||
},
|
||||
"image": {
|
||||
"upload_several": "업로드할 사진을 몇 개 놓거나 클릭하여 하나를 선택하세요.",
|
||||
"upload_single": "업로드할 사진을 몇 개 놓거나 클릭하여 선택하세요."
|
||||
},
|
||||
"references": {
|
||||
"all_missing": "참조 데이터를 찾을 수 없습니다.",
|
||||
"many_missing": "연관된 참조 중 적어도 하나는 더 이상 사용할 수 없는 것 같습니다.",
|
||||
"single_missing": "연관된 참조는 더 이상 사용할 수 없는 것 같습니다."
|
||||
},
|
||||
"password": {
|
||||
"toggle_visible": "비밀번호 숨기기",
|
||||
"toggle_hidden": "비밀번호 표시"
|
||||
}
|
||||
},
|
||||
"message": {
|
||||
"about": "정보",
|
||||
"are_you_sure": "확실한가요?",
|
||||
"bulk_delete_content": "이 %{name}을(를) 삭제할까요? |||| 이 %{smart_count} 개의 항목을 삭제할까요?",
|
||||
"bulk_delete_title": "%{name} 삭제 |||| %{smart_count} %{name} 삭제",
|
||||
"delete_content": "이 항목을 삭제할까요?",
|
||||
"delete_title": "%{name} #%{id} 삭제",
|
||||
"details": "상세 정보",
|
||||
"error": "클라이언트 오류가 발생하여 요청을 완료할 수 없습니다.",
|
||||
"invalid_form": "양식이 유효하지 않습니다. 오류를 확인하세요",
|
||||
"loading": "페이지가 로드 중입니다. 잠시만 기다려 주세요",
|
||||
"no": "아니요",
|
||||
"not_found": "잘못된 URL을 입력했거나 잘못된 링크를 클릭했습니다.",
|
||||
"yes": "예",
|
||||
"unsaved_changes": "일부 변경 사항이 저장되지 않았습니다. 무시할까요?"
|
||||
},
|
||||
"navigation": {
|
||||
"no_results": "결과를 찾을 수 없음",
|
||||
"no_more_results": "페이지 번호 %{page}이(가) 경계를 벗어났습니다. 이전 페이지를 시도해 보세요.",
|
||||
"page_out_of_boundaries": "페이지 번호 %{page}이(가) 경계를 벗어남",
|
||||
"page_out_from_end": "마지막 페이지 뒤로 갈 수 없음",
|
||||
"page_out_from_begin": "첫 페이지 앞으로 갈 수 없음",
|
||||
"page_range_info": "%{offsetBegin}-%{offsetEnd} / %{total}",
|
||||
"page_rows_per_page": "페이지당 항목:",
|
||||
"next": "다음",
|
||||
"prev": "이전",
|
||||
"skip_nav": "콘텐츠 건너뛰기"
|
||||
},
|
||||
"notification": {
|
||||
"updated": "요소 업데이트됨 |||| %{smart_count} 개 요소 업데이트됨",
|
||||
"created": "요소 생성됨",
|
||||
"deleted": "요소 삭제됨 |||| %{smart_count} 개 요소 삭제됨",
|
||||
"bad_item": "잘못된 요소",
|
||||
"item_doesnt_exist": "요소가 존재하지 않음",
|
||||
"http_error": "서버 통신 오류",
|
||||
"data_provider_error": "dataProvider 오류입니다. 자세한 내용은 콘솔을 확인하세요.",
|
||||
"i18n_error": "지정된 언어에 대한 번역을 로드할 수 없음",
|
||||
"canceled": "작업이 취소됨",
|
||||
"logged_out": "세션이 종료되었습니다. 다시 연결하세요.",
|
||||
"new_version": "새로운 버전이 출시되었습니다! 이 창을 새로 고침하세요."
|
||||
},
|
||||
"toggleFieldsMenu": {
|
||||
"columnsToDisplay": "표시할 열",
|
||||
"layout": "레이아웃",
|
||||
"grid": "격자",
|
||||
"table": "표"
|
||||
}
|
||||
"album": {
|
||||
"name": "앨범 |||| 앨범들",
|
||||
"fields": {
|
||||
"albumArtist": "앨범 아티스트",
|
||||
"artist": "아티스트",
|
||||
"duration": "시간",
|
||||
"songCount": "노래",
|
||||
"playCount": "재생 횟수",
|
||||
"size": "크기",
|
||||
"name": "이름",
|
||||
"genre": "장르",
|
||||
"compilation": "컴필레이션",
|
||||
"year": "년",
|
||||
"date": "녹음 날짜",
|
||||
"originalDate": "오리지널",
|
||||
"releaseDate": "발매일",
|
||||
"releases": "발매 음반 |||| 발매 음반들",
|
||||
"released": "발매됨",
|
||||
"updatedAt": "업데이트됨",
|
||||
"comment": "댓글",
|
||||
"rating": "평가",
|
||||
"createdAt": "추가된 날짜",
|
||||
"recordLabel": "레이블",
|
||||
"catalogNum": "카탈로그 번호",
|
||||
"releaseType": "유형",
|
||||
"grouping": "그룹",
|
||||
"media": "미디어",
|
||||
"mood": "분위기"
|
||||
},
|
||||
"actions": {
|
||||
"playAll": "재생",
|
||||
"playNext": "다음 재생",
|
||||
"addToQueue": "나중에 재생",
|
||||
"share": "공유",
|
||||
"shuffle": "셔플",
|
||||
"addToPlaylist": "재생목록 추가",
|
||||
"download": "다운로드",
|
||||
"info": "정보 얻기"
|
||||
},
|
||||
"lists": {
|
||||
"all": "모두",
|
||||
"random": "랜덤",
|
||||
"recentlyAdded": "최근 추가됨",
|
||||
"recentlyPlayed": "최근 재생됨",
|
||||
"mostPlayed": "가장 많이 재생됨",
|
||||
"starred": "즐겨찾기",
|
||||
"topRated": "높은 평가"
|
||||
}
|
||||
},
|
||||
"message": {
|
||||
"note": "참고",
|
||||
"transcodingDisabled": "웹 인터페이스를 통한 트랜스코딩 구성 변경은 보안상의 이유로 비활성화되어 있습니다. 트랜스코딩 옵션을 변경(편집 또는 추가)하려면, %{config} 구성 옵션으로 서버를 다시 시작하세요.",
|
||||
"transcodingEnabled": "Navidrome은 현재 %{config}로 실행 중이므로 웹 인터페이스를 사용하여 트랜스코딩 설정에서 시스템 명령을 실행할 수 있습니다. 보안상의 이유로 비활성화하고 트랜스코딩 옵션을 구성할 때만 활성화하는 것이 좋습니다.",
|
||||
"songsAddedToPlaylist": "1 개의 노래를 재생목록에 추가하였음 |||| %{smart_count} 개의 노래를 재생 목록에 추가하였음",
|
||||
"noPlaylistsAvailable": "사용 가능한 노래 없음",
|
||||
"delete_user_title": "사용자 '%{name}' 삭제",
|
||||
"delete_user_content": "이 사용자와 (재생목록 및 기본 설정 포함된) 모든 데이터를 삭제할까요?",
|
||||
"notifications_blocked": "탐색기 설정에서 이 사이트의 알림을 차단하였음",
|
||||
"notifications_not_available": "이 탐색기는 데스크톱 알림을 지원하지 않거나 https를 통해 Navidrome에 접속하지 않음",
|
||||
"lastfmLinkSuccess": "Last.fm이 성공적으로 연결되었고 스크로블링이 활성화되었음",
|
||||
"lastfmLinkFailure": "Last.fm을 연결할 수 없음",
|
||||
"lastfmUnlinkSuccess": "Last.fm이 연결 해제되었고 스크로블링이 비활성화되었음",
|
||||
"lastfmUnlinkFailure": "Last.fm을 연결 해제할 수 없음",
|
||||
"openIn": {
|
||||
"lastfm": "Last.fm에서 열기",
|
||||
"musicbrainz": "MusicBrainz에서 열기"
|
||||
},
|
||||
"lastfmLink": "더 읽기...",
|
||||
"listenBrainzLinkSuccess": "ListenBrainz가 성공적으로 연결되었고 스크로블링이 사용자로 활성화되었음: %{user}",
|
||||
"listenBrainzLinkFailure": "ListenBrainz를 연결할 수 없음: %{error}",
|
||||
"listenBrainzUnlinkSuccess": "ListenBrainz가 연결 해제되었고 스크로블링이 비활성화되었음",
|
||||
"listenBrainzUnlinkFailure": "ListenBrainz를 연결 해제할 수 없음",
|
||||
"downloadOriginalFormat": "오리지널 형식으로 다운로드",
|
||||
"shareOriginalFormat": "오리지널 형식으로 공유",
|
||||
"shareDialogTitle": "%{resource} '%{name}' 공유",
|
||||
"shareBatchDialogTitle": "1 %{resource} 공유 |||| %{smart_count} %{resource} 공유",
|
||||
"shareSuccess": "URL이 클립보드에 복사됨: %{url}",
|
||||
"shareFailure": "%{url}을 클립보드에 복사하는 중 오류 발생",
|
||||
"downloadDialogTitle": "%{resource} '%{name}' (%{size}) 다운로드",
|
||||
"shareCopyToClipboard": "클립보드에 복사: Ctrl+C, Enter"
|
||||
"artist": {
|
||||
"name": "아티스트 |||| 아티스트들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"albumCount": "앨범 수",
|
||||
"songCount": "노래 수",
|
||||
"size": "크기",
|
||||
"playCount": "재생 횟수",
|
||||
"rating": "평가",
|
||||
"genre": "장르",
|
||||
"role": "역할"
|
||||
},
|
||||
"roles": {
|
||||
"albumartist": "앨범 아티스트 |||| 앨범 아티스트들",
|
||||
"artist": "아티스트 |||| 아티스트들",
|
||||
"composer": "작곡가 |||| 작곡가들",
|
||||
"conductor": "지휘자 |||| 지휘자들",
|
||||
"lyricist": "작사가 |||| 작사가들",
|
||||
"arranger": "편곡가 |||| 편곡가들",
|
||||
"producer": "제작자 |||| 제작자들",
|
||||
"director": "감독 |||| 감독들",
|
||||
"engineer": "엔지니어 |||| 엔지니어들",
|
||||
"mixer": "믹서 |||| 믹서들",
|
||||
"remixer": "리믹서 |||| 리믹서들",
|
||||
"djmixer": "DJ 믹서 |||| DJ 믹서들",
|
||||
"performer": "공연자 |||| 공연자들"
|
||||
}
|
||||
},
|
||||
"menu": {
|
||||
"library": "라이브러리",
|
||||
"settings": "설정",
|
||||
"version": "버전",
|
||||
"theme": "테마",
|
||||
"personal": {
|
||||
"name": "개인 설정",
|
||||
"options": {
|
||||
"theme": "테마",
|
||||
"language": "언어",
|
||||
"defaultView": "기본 보기",
|
||||
"desktop_notifications": "데스크톱 알림",
|
||||
"lastfmScrobbling": "Last.fm으로 스크로블",
|
||||
"listenBrainzScrobbling": "ListenBrainz로 스크로블",
|
||||
"replaygain": "리플레이게인 모드",
|
||||
"preAmp": "리플레이게인 프리앰프 (dB)",
|
||||
"gain": {
|
||||
"none": "비활성화",
|
||||
"album": "앨범 게인 사용",
|
||||
"track": "트랙 게인 사용"
|
||||
}
|
||||
}
|
||||
},
|
||||
"albumList": "앨범",
|
||||
"about": "정보",
|
||||
"playlists": "재생목록",
|
||||
"sharedPlaylists": "공유된 재생목록"
|
||||
"user": {
|
||||
"name": "사용자 |||| 사용자들",
|
||||
"fields": {
|
||||
"userName": "사용자이름",
|
||||
"isAdmin": "관리자",
|
||||
"lastLoginAt": "마지막 로그인",
|
||||
"lastAccessAt": "마지막 접속",
|
||||
"updatedAt": "업데이트됨",
|
||||
"name": "이름",
|
||||
"password": "비밀번호",
|
||||
"createdAt": "생성됨",
|
||||
"changePassword": "비밀번호를 변경할까요?",
|
||||
"currentPassword": "현재 비밀번호",
|
||||
"newPassword": "새 비밀번호",
|
||||
"token": "토큰"
|
||||
},
|
||||
"helperTexts": {
|
||||
"name": "이름 변경 사항은 다음 로그인 시에만 반영됨"
|
||||
},
|
||||
"notifications": {
|
||||
"created": "사용자 생성됨",
|
||||
"updated": "사용자 업데이트됨",
|
||||
"deleted": "사용자 삭제됨"
|
||||
},
|
||||
"message": {
|
||||
"listenBrainzToken": "ListenBrainz 사용자 토큰을 입력하세요.",
|
||||
"clickHereForToken": "여기를 클릭하여 토큰을 얻으세요"
|
||||
}
|
||||
},
|
||||
"player": {
|
||||
"playListsText": "대기열 재생",
|
||||
"openText": "열기",
|
||||
"closeText": "닫기",
|
||||
"notContentText": "음악 없음",
|
||||
"clickToPlayText": "재생하려면 클릭",
|
||||
"clickToPauseText": "일시 중지하려면 클릭",
|
||||
"nextTrackText": "다음 트랙",
|
||||
"previousTrackText": "이전 트랙",
|
||||
"reloadText": "다시 로드하기",
|
||||
"volumeText": "볼륨",
|
||||
"toggleLyricText": "가사 전환",
|
||||
"toggleMiniModeText": "최소화",
|
||||
"destroyText": "제거",
|
||||
"downloadText": "다운로드",
|
||||
"removeAudioListsText": "오디오 목록 삭제",
|
||||
"clickToDeleteText": "%{name}을(를) 삭제하려면 클릭",
|
||||
"emptyLyricText": "가사 없음",
|
||||
"playModeText": {
|
||||
"order": "순서대로",
|
||||
"orderLoop": "반복",
|
||||
"singleLoop": "노래 하나 반복",
|
||||
"shufflePlay": "셔플"
|
||||
}
|
||||
"name": "플레이어 |||| 플레이어들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"transcodingId": "트랜스코딩",
|
||||
"maxBitRate": "최대 비트레이트",
|
||||
"client": "클라이언트",
|
||||
"userName": "사용자이름",
|
||||
"lastSeen": "마지막으로 봤음",
|
||||
"reportRealPath": "실제 경로 보고서",
|
||||
"scrobbleEnabled": "외부 서비스에 스크로블 보내기"
|
||||
}
|
||||
},
|
||||
"about": {
|
||||
"links": {
|
||||
"homepage": "홈페이지",
|
||||
"source": "소스 코드",
|
||||
"featureRequests": "기능 요청"
|
||||
}
|
||||
"transcoding": {
|
||||
"name": "트랜스코딩 |||| 트랜스코딩",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"targetFormat": "대상 형식",
|
||||
"defaultBitRate": "기본 비트레이트",
|
||||
"command": "명령"
|
||||
}
|
||||
},
|
||||
"activity": {
|
||||
"title": "활동",
|
||||
"totalScanned": "스캔된 전체 폴더",
|
||||
"quickScan": "빠른 스캔",
|
||||
"fullScan": "전체 스캔",
|
||||
"serverUptime": "서버 가동 시간",
|
||||
"serverDown": "오프라인"
|
||||
"playlist": {
|
||||
"name": "재생목록 |||| 재생목록들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"duration": "지속",
|
||||
"ownerName": "소유자",
|
||||
"public": "공개",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨",
|
||||
"songCount": "노래",
|
||||
"comment": "댓글",
|
||||
"sync": "자동 가져오기",
|
||||
"path": "다음에서 가져오기"
|
||||
},
|
||||
"actions": {
|
||||
"selectPlaylist": "재생목록 선택:",
|
||||
"addNewPlaylist": "\"%{name}\" 만들기",
|
||||
"export": "내보내기",
|
||||
"makePublic": "공개 만들기",
|
||||
"makePrivate": "비공개 만들기"
|
||||
},
|
||||
"message": {
|
||||
"duplicate_song": "중복된 노래 추가",
|
||||
"song_exist": "이미 재생목록에 존재하는 노래입니다. 중복을 추가할까요 아니면 건너뛸까요?"
|
||||
}
|
||||
},
|
||||
"help": {
|
||||
"title": "Navidrome 단축키",
|
||||
"hotkeys": {
|
||||
"show_help": "이 도움말 표시",
|
||||
"toggle_menu": "메뉴 사이드바 전환",
|
||||
"toggle_play": "재생 / 일시 중지",
|
||||
"prev_song": "이전 노래",
|
||||
"next_song": "다음 노래",
|
||||
"vol_up": "볼륨 높이기",
|
||||
"vol_down": "볼륨 낮추기",
|
||||
"toggle_love": "이 트랙을 즐겨찾기에 추가",
|
||||
"current_song": "현재 노래로 이동"
|
||||
}
|
||||
"radio": {
|
||||
"name": "라디오 |||| 라디오들",
|
||||
"fields": {
|
||||
"name": "이름",
|
||||
"streamUrl": "스트리밍 URL",
|
||||
"homePageUrl": "홈페이지 URL",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨"
|
||||
},
|
||||
"actions": {
|
||||
"playNow": "지금 재생"
|
||||
}
|
||||
},
|
||||
"share": {
|
||||
"name": "공유 |||| 공유되는 것들",
|
||||
"fields": {
|
||||
"username": "공유됨",
|
||||
"url": "URL",
|
||||
"description": "설명",
|
||||
"downloadable": "다운로드를 허용할까요?",
|
||||
"contents": "컨텐츠",
|
||||
"expiresAt": "만료",
|
||||
"lastVisitedAt": "마지막 방문",
|
||||
"visitCount": "방문 수",
|
||||
"format": "형식",
|
||||
"maxBitRate": "최대 비트레이트",
|
||||
"updatedAt": "업데이트됨",
|
||||
"createdAt": "생성됨"
|
||||
},
|
||||
"notifications": {},
|
||||
"actions": {}
|
||||
},
|
||||
"missing": {
|
||||
"name": "누락 파일 |||| 누락 파일들",
|
||||
"empty": "누락 파일 없음",
|
||||
"fields": {
|
||||
"path": "경로",
|
||||
"size": "크기",
|
||||
"updatedAt": "사라짐"
|
||||
},
|
||||
"actions": {
|
||||
"remove": "제거"
|
||||
},
|
||||
"notifications": {
|
||||
"removed": "누락된 파일이 제거되었음"
|
||||
}
|
||||
}
|
||||
},
|
||||
"ra": {
|
||||
"auth": {
|
||||
"welcome1": "Navidrome을 설치해 주셔서 감사합니다!",
|
||||
"welcome2": "관리자를 만들고 시작해 보세요",
|
||||
"confirmPassword": "비밀번호 확인",
|
||||
"buttonCreateAdmin": "관리자 만들기",
|
||||
"auth_check_error": "계속하려면 로그인하세요",
|
||||
"user_menu": "프로파일",
|
||||
"username": "사용자이름",
|
||||
"password": "비밀번호",
|
||||
"sign_in": "가입",
|
||||
"sign_in_error": "인증에 실패했습니다. 다시 시도하세요",
|
||||
"logout": "로그아웃",
|
||||
"insightsCollectionNote": "Navidrome은 프로젝트 개선을 위해 익명의 사용 데이터를\n수집합니다. 자세한 내용을 알아보고 원치 않으시면 [여기]를\n클릭하여 수신 거부하세요"
|
||||
},
|
||||
"validation": {
|
||||
"invalidChars": "문자와 숫자만 사용하세요",
|
||||
"passwordDoesNotMatch": "비밀번호가 일치하지 않음",
|
||||
"required": "필수 항목임",
|
||||
"minLength": "%{min}자 이하여야 함",
|
||||
"maxLength": "%{max}자 이하여야 함",
|
||||
"minValue": "%{min}자 이상이어야 함",
|
||||
"maxValue": "%{max}자 이하여야 함",
|
||||
"number": "숫자여야 함",
|
||||
"email": "유효한 이메일이어야 함",
|
||||
"oneOf": "다음 중 하나여야 함: %{options}",
|
||||
"regex": "특정 형식(정규식)과 일치해야 함: %{pattern}",
|
||||
"unique": "고유해야 함",
|
||||
"url": "유효한 URL이어야 함"
|
||||
},
|
||||
"action": {
|
||||
"add_filter": "필터 추가",
|
||||
"add": "추가",
|
||||
"back": "뒤로 가기",
|
||||
"bulk_actions": "1 item selected |||| %{smart_count} 개 항목이 선택되었음",
|
||||
"bulk_actions_mobile": "1 |||| %{smart_count}",
|
||||
"cancel": "취소",
|
||||
"clear_input_value": "값 지우기",
|
||||
"clone": "복제",
|
||||
"confirm": "확인",
|
||||
"create": "만들기",
|
||||
"delete": "삭제",
|
||||
"edit": "편집",
|
||||
"export": "내보내기",
|
||||
"list": "목록",
|
||||
"refresh": "새로 고침",
|
||||
"remove_filter": "이 필터 제거",
|
||||
"remove": "제거",
|
||||
"save": "저장",
|
||||
"search": "검색",
|
||||
"show": "표시",
|
||||
"sort": "정렬",
|
||||
"undo": "실행 취소",
|
||||
"expand": "확장",
|
||||
"close": "닫기",
|
||||
"open_menu": "메뉴 열기",
|
||||
"close_menu": "메뉴 닫기",
|
||||
"unselect": "선택 해제",
|
||||
"skip": "건너뛰기",
|
||||
"share": "공유",
|
||||
"download": "다운로드"
|
||||
},
|
||||
"boolean": {
|
||||
"true": "예",
|
||||
"false": "아니오"
|
||||
},
|
||||
"page": {
|
||||
"create": "%{name} 만들기",
|
||||
"dashboard": "대시보드",
|
||||
"edit": "%{name} #%{id}",
|
||||
"error": "문제가 발생하였음",
|
||||
"list": "%{name}",
|
||||
"loading": "불러오기 중",
|
||||
"not_found": "찾을 수 없음",
|
||||
"show": "%{name} #%{id}",
|
||||
"empty": "아직 %{name}이(가) 없습니다.",
|
||||
"invite": "추가할까요?"
|
||||
},
|
||||
"input": {
|
||||
"file": {
|
||||
"upload_several": "업로드할 파일을 몇 개 놓거나 클릭하여 하나를 선택하세요.",
|
||||
"upload_single": "업로드할 파일을 몇 개 놓거나 클릭하여 선택하세요."
|
||||
},
|
||||
"image": {
|
||||
"upload_several": "업로드할 사진을 몇 개 놓거나 클릭하여 하나를 선택하세요.",
|
||||
"upload_single": "업로드할 사진을 몇 개 놓거나 클릭하여 선택하세요."
|
||||
},
|
||||
"references": {
|
||||
"all_missing": "참조 데이터를 찾을 수 없습니다.",
|
||||
"many_missing": "연관된 참조 중 적어도 하나는 더 이상 사용할 수 없는 것 같습니다.",
|
||||
"single_missing": "연관된 참조는 더 이상 사용할 수 없는 것 같습니다."
|
||||
},
|
||||
"password": {
|
||||
"toggle_visible": "비밀번호 숨기기",
|
||||
"toggle_hidden": "비밀번호 표시"
|
||||
}
|
||||
},
|
||||
"message": {
|
||||
"about": "정보",
|
||||
"are_you_sure": "확실한가요?",
|
||||
"bulk_delete_content": "이 %{name}을(를) 삭제할까요? |||| 이 %{smart_count} 개의 항목을 삭제할까요?",
|
||||
"bulk_delete_title": "%{name} 삭제 |||| %{smart_count} %{name} 삭제",
|
||||
"delete_content": "이 항목을 삭제할까요?",
|
||||
"delete_title": "%{name} #%{id} 삭제",
|
||||
"details": "상세정보",
|
||||
"error": "클라이언트 오류가 발생하여 요청을 완료할 수 없습니다.",
|
||||
"invalid_form": "양식이 유효하지 않습니다. 오류를 확인하세요",
|
||||
"loading": "페이지가 로드 중입니다. 잠시만 기다려 주세요",
|
||||
"no": "아니오",
|
||||
"not_found": "잘못된 URL을 입력했거나 잘못된 링크를 클릭했습니다.",
|
||||
"yes": "예",
|
||||
"unsaved_changes": "일부 변경 사항이 저장되지 않았습니다. 무시할까요?"
|
||||
},
|
||||
"navigation": {
|
||||
"no_results": "결과를 찾을 수 없음",
|
||||
"no_more_results": "페이지 번호 %{page}이(가) 경계를 벗어났습니다. 이전 페이지를 시도해 보세요.",
|
||||
"page_out_of_boundaries": "페이지 번호 %{page}이(가) 경계를 벗어남",
|
||||
"page_out_from_end": "마지막 페이지 뒤로 갈 수 없음",
|
||||
"page_out_from_begin": "1 페이지 앞으로 갈 수 없음",
|
||||
"page_range_info": "%{offsetBegin}-%{offsetEnd} / %{total}",
|
||||
"page_rows_per_page": "페이지당 항목:",
|
||||
"next": "다음",
|
||||
"prev": "이전",
|
||||
"skip_nav": "콘텐츠 건너뛰기"
|
||||
},
|
||||
"notification": {
|
||||
"updated": "요소 업데이트됨 |||| %{smart_count} 개 요소 업데이트됨",
|
||||
"created": "요소 생성됨",
|
||||
"deleted": "요소 삭제됨 |||| %{smart_count} 개 요소 삭제됨",
|
||||
"bad_item": "잘못된 요소",
|
||||
"item_doesnt_exist": "요소가 존재하지 않음",
|
||||
"http_error": "서버 통신 오류",
|
||||
"data_provider_error": "dataProvider 오류입니다. 자세한 내용은 콘솔을 확인하세요.",
|
||||
"i18n_error": "지정된 언어에 대한 번역을 로드할 수 없음",
|
||||
"canceled": "작업이 취소됨",
|
||||
"logged_out": "세션이 종료되었습니다. 다시 연결하세요.",
|
||||
"new_version": "새로운 버전이 출시되었습니다! 이 창을 새로 고침하세요."
|
||||
},
|
||||
"toggleFieldsMenu": {
|
||||
"columnsToDisplay": "표시할 열",
|
||||
"layout": "레이아웃",
|
||||
"grid": "격자",
|
||||
"table": "표"
|
||||
}
|
||||
},
|
||||
"message": {
|
||||
"note": "참고",
|
||||
"transcodingDisabled": "웹 인터페이스를 통한 트랜스코딩 구성 변경은 보안상의 이유로 비활성화되어 있습니다. 트랜스코딩 옵션을 변경(편집 또는 추가)하려면, %{config} 구성 옵션으로 서버를 다시 시작하세요.",
|
||||
"transcodingEnabled": "Navidrome은 현재 %{config}로 실행 중이므로 웹 인터페이스를 사용하여 트랜스코딩 설정에서 시스템 명령을 실행할 수 있습니다. 보안상의 이유로 비활성화하고 트랜스코딩 옵션을 구성할 때만 활성화하는 것이 좋습니다.",
|
||||
"songsAddedToPlaylist": "1 개의 노래를 재생목록에 추가하였음 |||| %{smart_count} 개의 노래를 재생 목록에 추가하였음",
|
||||
"noPlaylistsAvailable": "사용 가능한 노래 없음",
|
||||
"delete_user_title": "사용자 '%{name}' 삭제",
|
||||
"delete_user_content": "이 사용자와 해당 사용자의 모든 데이터(재생 목록 및 환경 설정 포함)를 삭제할까요?",
|
||||
"remove_missing_title": "누락된 파일들 제거",
|
||||
"remove_missing_content": "선택한 누락된 파일을 데이터베이스에서 삭제할까요? 삭제하면 재생 횟수 및 평점을 포함하여 해당 파일에 대한 모든 참조가 영구적으로 삭제됩니다.",
|
||||
"notifications_blocked": "브라우저 설정에서 이 사이트의 알림을 차단하였음",
|
||||
"notifications_not_available": "이 브라우저는 데스크톱 알림을 지원하지 않거나 https를 통해 Navidrome에 접속하고 있지 않음",
|
||||
"lastfmLinkSuccess": "Last.fm이 성공적으로 연결되었고 스크로블링이 활성화되었음",
|
||||
"lastfmLinkFailure": "Last.fm을 연결 해제할 수 없음",
|
||||
"lastfmUnlinkSuccess": "Last.fm 연결 해제 및 스크로블링 비활성화",
|
||||
"lastfmUnlinkFailure": "Last.fm을 연결 해제할 수 없음",
|
||||
"listenBrainzLinkSuccess": "ListenBrainz가 성공적으로 연결되었고, 다음 사용자로 스크로블링이 활성화되었음: %{user}",
|
||||
"listenBrainzLinkFailure": "ListenBrainz를 연결할 수 없음: %{error}",
|
||||
"listenBrainzUnlinkSuccess": "ListenBrainz가 연결 해제되었고 스크로블링이 비활성화되었음",
|
||||
"listenBrainzUnlinkFailure": "ListenBrainz를 연결 해제할 수 없음",
|
||||
"openIn": {
|
||||
"lastfm": "Last.fm에서 열기",
|
||||
"musicbrainz": "MusicBrainz에서 열기"
|
||||
},
|
||||
"lastfmLink": "더 읽기...",
|
||||
"shareOriginalFormat": "오리지널 형식으로 공유",
|
||||
"shareDialogTitle": "%{resource} '%{name}' 공유",
|
||||
"shareBatchDialogTitle": "1 %{resource} 공유 |||| %{smart_count} %{resource} 공유",
|
||||
"shareCopyToClipboard": "클립보드에 복사: Ctrl+C, Enter",
|
||||
"shareSuccess": "URL이 클립보드에 복사되었음: %{url}",
|
||||
"shareFailure": "URL %{url}을 클립보드에 복사하는 중 오류가 발생하였음",
|
||||
"downloadDialogTitle": "%{resource} '%{name}' (%{size}) 다운로드",
|
||||
"downloadOriginalFormat": "오리지널 형식으로 다운로드"
|
||||
},
|
||||
"menu": {
|
||||
"library": "라이브러리",
|
||||
"settings": "설정",
|
||||
"version": "버전",
|
||||
"theme": "테마",
|
||||
"personal": {
|
||||
"name": "개인 설정",
|
||||
"options": {
|
||||
"theme": "테마",
|
||||
"language": "언어",
|
||||
"defaultView": "기본 보기",
|
||||
"desktop_notifications": "데스크톱 알림",
|
||||
"lastfmNotConfigured": "Last.fm API 키가 구성되지 않았음",
|
||||
"lastfmScrobbling": "Last.fm으로 스크로블",
|
||||
"listenBrainzScrobbling": "ListenBrainz로 스크로블",
|
||||
"replaygain": "리플레이게인 모드",
|
||||
"preAmp": "리플레이게인 프리앰프 (dB)",
|
||||
"gain": {
|
||||
"none": "비활성화",
|
||||
"album": "앨범 게인 사용",
|
||||
"track": "트랙 게인 사용"
|
||||
}
|
||||
}
|
||||
},
|
||||
"albumList": "앨범",
|
||||
"playlists": "재생목록",
|
||||
"sharedPlaylists": "공유된 재생목록",
|
||||
"about": "정보"
|
||||
},
|
||||
"player": {
|
||||
"playListsText": "대기열 재생",
|
||||
"openText": "열기",
|
||||
"closeText": "닫기",
|
||||
"notContentText": "음악 없음",
|
||||
"clickToPlayText": "재생하려면 클릭",
|
||||
"clickToPauseText": "일시 중지하려면 클릭",
|
||||
"nextTrackText": "다음 트랙",
|
||||
"previousTrackText": "이전 트랙",
|
||||
"reloadText": "다시 로드하기",
|
||||
"volumeText": "볼륨",
|
||||
"toggleLyricText": "가사 전환",
|
||||
"toggleMiniModeText": "최소화",
|
||||
"destroyText": "제거",
|
||||
"downloadText": "다운로드",
|
||||
"removeAudioListsText": "오디오 목록 삭제",
|
||||
"clickToDeleteText": "%{name}을(를) 삭제하려면 클릭",
|
||||
"emptyLyricText": "가사 없음",
|
||||
"playModeText": {
|
||||
"order": "순서대로",
|
||||
"orderLoop": "반복",
|
||||
"singleLoop": "노래 하나 반복",
|
||||
"shufflePlay": "셔플"
|
||||
}
|
||||
},
|
||||
"about": {
|
||||
"links": {
|
||||
"homepage": "홈페이지",
|
||||
"source": "소스 코드",
|
||||
"featureRequests": "기능 요청",
|
||||
"lastInsightsCollection": "마지막 인사이트 컬렉션",
|
||||
"insights": {
|
||||
"disabled": "비활성화",
|
||||
"waiting": "대기중"
|
||||
}
|
||||
}
|
||||
},
|
||||
"activity": {
|
||||
"title": "활동",
|
||||
"totalScanned": "스캔된 전체 폴더",
|
||||
"quickScan": "빠른 스캔",
|
||||
"fullScan": "전체 스캔",
|
||||
"serverUptime": "서버 가동 시간",
|
||||
"serverDown": "오프라인"
|
||||
},
|
||||
"help": {
|
||||
"title": "Navidrome 단축키",
|
||||
"hotkeys": {
|
||||
"show_help": "이 도움말 표시",
|
||||
"toggle_menu": "메뉴 사이드바 전환",
|
||||
"toggle_play": "재생 / 일시 중지",
|
||||
"prev_song": "이전 노래",
|
||||
"next_song": "다음 노래",
|
||||
"current_song": "현재 노래로 이동",
|
||||
"vol_up": "볼륨 높이기",
|
||||
"vol_down": "볼륨 낮추기",
|
||||
"toggle_love": "이 트랙을 즐겨찾기에 추가"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -108,7 +108,7 @@ main:
|
||||
bpm:
|
||||
aliases: [ tbpm, bpm, tmpo, wm/beatsperminute ]
|
||||
lyrics:
|
||||
aliases: [ uslt:description, lyrics, ©lyr, wm/lyrics ]
|
||||
aliases: [ uslt:description, lyrics, ©lyr, wm/lyrics, unsyncedlyrics ]
|
||||
maxLength: 32768
|
||||
type: pair # ex: lyrics:eng, lyrics:xxx
|
||||
comment:
|
||||
|
||||
466  scanner/README.md  (new file)
@@ -0,0 +1,466 @@
|
||||
# Navidrome Scanner: Technical Overview
|
||||
|
||||
This document provides a comprehensive technical explanation of Navidrome's music library scanner system.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
The Navidrome scanner is built on a multi-phase pipeline architecture designed for efficient processing of music files. It systematically traverses file system directories, processes metadata, and maintains a database representation of the music library. A key performance feature is that some phases run sequentially while others execute in parallel.
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
subgraph "Scanner Execution Flow"
|
||||
Controller[Scanner Controller] --> Scanner[Scanner Implementation]
|
||||
|
||||
Scanner --> Phase1[Phase 1: Folders Scan]
|
||||
Phase1 --> Phase2[Phase 2: Missing Tracks]
|
||||
|
||||
Phase2 --> ParallelPhases
|
||||
|
||||
subgraph ParallelPhases["Parallel Execution"]
|
||||
Phase3[Phase 3: Refresh Albums]
|
||||
Phase4[Phase 4: Playlist Import]
|
||||
end
|
||||
|
||||
ParallelPhases --> FinalSteps[Final Steps: GC + Stats]
|
||||
end
|
||||
|
||||
%% Triggers that can initiate a scan
|
||||
FileChanges[File System Changes] -->|Detected by| Watcher[Filesystem Watcher]
|
||||
Watcher -->|Triggers| Controller
|
||||
|
||||
ScheduledJob[Scheduled Job] -->|Based on Scanner.Schedule| Controller
|
||||
ServerStartup[Server Startup] -->|If Scanner.ScanOnStartup=true| Controller
|
||||
ManualTrigger[Manual Scan via UI/API] -->|Admin user action| Controller
|
||||
CLICommand[Command Line: navidrome scan] -->|Direct invocation| Controller
|
||||
PIDChange[PID Configuration Change] -->|Forces full scan| Controller
|
||||
DBMigration[Database Migration] -->|May require full scan| Controller
|
||||
|
||||
Scanner -.->|Alternative| External[External Scanner Process]
|
||||
```
|
||||
|
||||
The execution flow shows that Phases 1 and 2 run sequentially, while Phases 3 and 4 execute in parallel to maximize performance before the final processing steps.
|
||||
|
||||
## Core Components
|
||||
|
||||
### Scanner Controller (`controller.go`)
|
||||
|
||||
This is the entry point for all scanning operations. It provides:
|
||||
|
||||
- Public API for initiating scans and checking scan status
|
||||
- Event broadcasting to notify clients about scan progress
|
||||
- Serialization of scan operations (prevents concurrent scans)
|
||||
- Progress tracking and monitoring
|
||||
- Error collection and reporting
|
||||
|
||||
```go
|
||||
type Scanner interface {
|
||||
// ScanAll starts a full scan of the music library. This is a blocking operation.
|
||||
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
|
||||
Status(context.Context) (*StatusInfo, error)
|
||||
}
|
||||
```
|
||||
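For illustration, a caller outside this package (the CLI, the scheduler, or the watcher) can drive a scan entirely through this interface. The sketch below is not taken from the codebase; it assumes only the two methods shown above and that the interface is exported from this `scanner` package, and `runFullScan` is a hypothetical helper name.

```go
import (
	"context"
	"fmt"
	"log"

	"github.com/navidrome/navidrome/scanner"
)

// runFullScan triggers a blocking full scan and reports any warnings.
// It relies only on the Scanner interface shown above.
func runFullScan(ctx context.Context, s scanner.Scanner) error {
	warnings, err := s.ScanAll(ctx, true)
	if err != nil {
		return fmt.Errorf("scan failed: %w", err)
	}
	for _, w := range warnings {
		log.Printf("scan warning: %s", w)
	}

	// Status can also be polled from another goroutine while a scan is running.
	if _, err := s.Status(ctx); err != nil {
		return fmt.Errorf("could not read scanner status: %w", err)
	}
	return nil
}
```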
|
||||
### Scanner Implementation (`scanner.go`)
|
||||
|
||||
The primary implementation that orchestrates the four-phase scanning pipeline. Each phase follows the Phase interface pattern:
|
||||
|
||||
```go
|
||||
type phase[T any] interface {
|
||||
producer() ppl.Producer[T]
|
||||
stages() []ppl.Stage[T]
|
||||
finalize(error) error
|
||||
description() string
|
||||
}
|
||||
```
|
||||
|
||||
This design enables:
|
||||
- Type-safe pipeline construction with generics
|
||||
- Modular phase implementation
|
||||
- Separation of concerns
|
||||
- Easy measurement of performance
|
||||
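To make the pattern concrete, here is a self-contained, simplified sketch of a phase and a generic runner. It deliberately replaces the `ppl` producer/stage types with plain slices and functions so it has no external dependencies; `countingPhase` and `runPhase` are illustrative names, not the real scanner types.

```go
import "fmt"

// phase mirrors the interface above, with the pipeline types reduced to plain
// functions and slices so the example stays dependency-free.
type phase[T any] interface {
	producer() []T
	stages() []func(T) T
	finalize(err error) error
	description() string
}

// countingPhase is a toy phase that "processes" a list of folder names.
type countingPhase struct {
	folders []string
	seen    int
}

func (p *countingPhase) producer() []string { return p.folders }

func (p *countingPhase) stages() []func(string) string {
	return []func(string) string{
		// In the real scanner this is where tags are read and MediaFiles are built.
		func(f string) string { p.seen++; return f },
	}
}

func (p *countingPhase) finalize(err error) error {
	if err != nil {
		return fmt.Errorf("%s failed: %w", p.description(), err)
	}
	fmt.Printf("%s: processed %d folders\n", p.description(), p.seen)
	return nil
}

func (p *countingPhase) description() string { return "toy folder scan" }

// runPhase wires producer -> stages -> finalize, roughly what scanner.go does
// for each real phase (there the items are streamed, not collected in a slice).
func runPhase[T any](p phase[T]) error {
	for _, item := range p.producer() {
		for _, stage := range p.stages() {
			item = stage(item)
		}
	}
	return p.finalize(nil)
}
```

Calling `runPhase[string](&countingPhase{folders: []string{"a", "b"}})` walks every item through every stage and lets the phase report its own summary; the real pipeline does the same, but with streaming producers and concurrent stages.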
|
||||
### External Scanner (`external.go`)
|
||||
|
||||
The External Scanner is a specialized implementation that offloads the scanning process to a separate subprocess. This is specifically designed to address memory management challenges in long-running Navidrome instances.
|
||||
|
||||
```go
|
||||
// scannerExternal is a scanner that runs an external process to do the scanning. It is used to avoid
|
||||
// memory leaks or retention in the main process, as the scanner can consume a lot of memory. The
|
||||
// external process will be spawned with the same executable as the current process, and will run
|
||||
// the "scan" command with the "--subprocess" flag.
|
||||
//
|
||||
// The external process will send progress updates to the main process through its STDOUT, and the main
|
||||
// process will forward them to the caller.
|
||||
```
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant MP as Main Process
|
||||
participant ES as External Scanner
|
||||
participant SP as Subprocess (navidrome scan --subprocess)
|
||||
participant FS as File System
|
||||
participant DB as Database
|
||||
|
||||
Note over MP: DevExternalScanner=true
|
||||
MP->>ES: ScanAll(ctx, fullScan)
|
||||
activate ES
|
||||
|
||||
ES->>ES: Locate executable path
|
||||
ES->>SP: Start subprocess with args:<br>scan --subprocess --configfile ... etc.
|
||||
activate SP
|
||||
|
||||
Note over ES,SP: Create pipe for communication
|
||||
|
||||
par Subprocess executes scan
|
||||
SP->>FS: Read files & metadata
|
||||
SP->>DB: Update database
|
||||
and Main process monitors progress
|
||||
loop For each progress update
|
||||
SP->>ES: Send encoded progress info via stdout pipe
|
||||
ES->>MP: Forward progress info
|
||||
end
|
||||
end
|
||||
|
||||
SP-->>ES: Subprocess completes (success/error)
|
||||
deactivate SP
|
||||
ES-->>MP: Return aggregated warnings/errors
|
||||
deactivate ES
|
||||
```
|
||||
|
||||
Technical details:
|
||||
|
||||
1. **Process Isolation**
|
||||
- Spawns a separate process using the same executable
|
||||
- Uses the `--subprocess` flag to indicate it's running as a child process
|
||||
- Preserves configuration by passing required flags (`--configfile`, `--datafolder`, etc.)
|
||||
|
||||
2. **Inter-Process Communication**
|
||||
- Uses a pipe for bidirectional communication
|
||||
- Encodes/decodes progress updates using Go's `gob` encoding for efficient binary transfer (see the sketch after this list)
|
||||
- Properly handles process termination and error propagation
|
||||
|
||||
3. **Memory Management Benefits**
|
||||
- Scanning operations can be memory-intensive, especially with large music libraries
|
||||
- Memory leaks or excessive allocations are automatically cleaned up when the process terminates
|
||||
- Main Navidrome process remains stable even if scanner encounters memory-related issues
|
||||
|
||||
4. **Error Handling**
|
||||
- Detects non-zero exit codes from the subprocess
|
||||
- Propagates error messages back to the main process
|
||||
- Ensures resources are properly cleaned up, even in error conditions
|
||||
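As a rough illustration of the progress channel described in item 2, the following self-contained sketch streams gob-encoded updates between a writer and a reader. The `progressUpdate` struct, its fields, and the function names are invented for the example; the real scanner exchanges its own internal progress type over the subprocess's stdout.

```go
import (
	"encoding/gob"
	"errors"
	"fmt"
	"io"
)

// progressUpdate is a stand-in for the scanner's internal progress message.
type progressUpdate struct {
	FolderCount int
	Warning     string
}

// sendProgress is the subprocess side: it gob-encodes updates into the writer
// (in the real implementation, its own stdout).
func sendProgress(w io.Writer, updates []progressUpdate) error {
	enc := gob.NewEncoder(w)
	for _, u := range updates {
		if err := enc.Encode(u); err != nil {
			return err
		}
	}
	return nil
}

// receiveProgress is the main-process side: it decodes updates until the
// subprocess closes its end of the stream, forwarding each one to the caller.
func receiveProgress(r io.Reader, forward func(progressUpdate)) error {
	dec := gob.NewDecoder(r)
	for {
		var u progressUpdate
		if err := dec.Decode(&u); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return fmt.Errorf("decoding progress: %w", err)
		}
		forward(u)
	}
}
```

Wiring the two ends to `io.Pipe()` (or to the subprocess's stdout) reproduces the flow shown in the sequence diagram above.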
|
||||
## Scanning Process Flow
|
||||
|
||||
### Phase 1: Folder Scan (`phase_1_folders.go`)
|
||||
|
||||
This phase handles the initial traversal and media file processing.
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[Start Phase 1] --> B{Full Scan?}
|
||||
B -- Yes --> C[Scan All Folders]
|
||||
B -- No --> D[Scan Modified Folders]
|
||||
C --> E[Read File Metadata]
|
||||
D --> E
|
||||
E --> F[Create Artists]
|
||||
E --> G[Create Albums]
|
||||
F --> H[Save to Database]
|
||||
G --> H
|
||||
H --> I[Mark Missing Folders]
|
||||
I --> J[End Phase 1]
|
||||
```
|
||||
|
||||
**Technical implementation details:**
|
||||
|
||||
1. **Folder Traversal**
|
||||
- Uses `walkDirTree` to traverse the directory structure
|
||||
- Handles symbolic links and hidden files
|
||||
- Processes `.ndignore` files for exclusions
|
||||
- Maps files to appropriate types (audio, image, playlist)
|
||||
|
||||
2. **Metadata Extraction**
|
||||
- Processes files in batches (defined by `filesBatchSize = 200`; see the sketch after this list)
|
||||
- Extracts metadata using the configured storage backend
|
||||
- Converts raw metadata to `MediaFile` objects
|
||||
- Collects and normalizes tag information
|
||||
|
||||
3. **Album and Artist Creation**
|
||||
- Groups tracks by album ID
|
||||
- Creates album records from track metadata
|
||||
- Handles album ID changes by tracking previous IDs
|
||||
- Creates artist records from track participants
|
||||
|
||||
4. **Database Persistence**
|
||||
- Uses transactions for atomic updates
|
||||
- Preserves album annotations across ID changes
|
||||
- Updates library-artist mappings
|
||||
- Marks missing tracks for later processing
|
||||
- Pre-caches artwork for performance
|
||||
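The batching behaviour from the metadata-extraction step can be pictured with a few lines of plain Go. Only the constant name and value come from the scanner code; `inBatches` and its `processBatch` callback are illustrative.

```go
const filesBatchSize = 200

// inBatches splits the files found in a folder into fixed-size chunks, so tag
// extraction and database writes operate on at most filesBatchSize files at a time.
func inBatches(files []string, processBatch func([]string) error) error {
	for start := 0; start < len(files); start += filesBatchSize {
		end := start + filesBatchSize
		if end > len(files) {
			end = len(files)
		}
		if err := processBatch(files[start:end]); err != nil {
			return err
		}
	}
	return nil
}
```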
|
||||
### Phase 2: Missing Tracks Processing (`phase_2_missing_tracks.go`)
|
||||
|
||||
This phase identifies tracks that have moved or been deleted.
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[Start Phase 2] --> B[Load Libraries]
|
||||
B --> C[Get Missing and Matching Tracks]
|
||||
C --> D[Group by PID]
|
||||
D --> E{Match Type?}
|
||||
E -- Exact --> F[Update Path]
|
||||
E -- Same PID --> G[Update If Only One]
|
||||
E -- Equivalent --> H[Update If No Better Match]
|
||||
F --> I[End Phase 2]
|
||||
G --> I
|
||||
H --> I
|
||||
```
|
||||
|
||||
**Technical implementation details:**
|
||||
|
||||
1. **Track Identification Strategy**
|
||||
- Uses persistent identifiers (PIDs) to track tracks across scans
|
||||
- Loads missing tracks and potential matches from the database
|
||||
- Groups tracks by PID to limit comparison scope
|
||||
|
||||
2. **Match Analysis**
|
||||
- Applies three levels of matching criteria:
|
||||
- Exact match (full metadata equivalence)
|
||||
- Single match for a PID
|
||||
- Equivalent match (same base path or similar metadata)
|
||||
- Prioritizes matches in order of confidence (a simplified sketch follows this list)
|
||||
|
||||
3. **Database Update Strategy**
|
||||
- Preserves the original track ID
|
||||
- Updates the path to the new location
|
||||
- Deletes the duplicate entry
|
||||
- Uses transactions to ensure atomicity
|
||||
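A simplified version of the matching logic can be sketched as follows. The `track` struct, its fields, and the equivalence test are invented for the example; the real phase compares full metadata and applies its updates inside a database transaction.

```go
import "path"

// track is a toy stand-in for a media file row.
type track struct {
	ID   string
	PID  string // persistent ID
	Path string
	Tags string // collapsed metadata, just for the example
}

// bestMatch picks the new location for a missing track among candidates that
// share its PID, applying the three levels of confidence in priority order.
func bestMatch(missing track, candidates []track) (track, bool) {
	// 1. Exact match: identical metadata.
	for _, c := range candidates {
		if c.Tags == missing.Tags {
			return c, true
		}
	}
	// 2. Single candidate for this PID: assume the file simply moved.
	if len(candidates) == 1 {
		return candidates[0], true
	}
	// 3. Equivalent match: e.g. the same file name under a different folder.
	for _, c := range candidates {
		if path.Base(c.Path) == path.Base(missing.Path) {
			return c, true
		}
	}
	return track{}, false
}
```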
|
||||
### Phase 3: Album Refresh (`phase_3_refresh_albums.go`)
|
||||
|
||||
This phase updates album information based on the latest track metadata.
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[Start Phase 3] --> B[Load Touched Albums]
|
||||
B --> C[Filter Unmodified]
|
||||
C --> D{Changes Detected?}
|
||||
D -- Yes --> E[Refresh Album Data]
|
||||
D -- No --> F[Skip]
|
||||
E --> G[Update Database]
|
||||
F --> H[End Phase 3]
|
||||
G --> H
|
||||
H --> I[Refresh Statistics]
|
||||
```
|
||||
|
||||
**Technical implementation details:**
|
||||
|
||||
1. **Album Selection Logic**
|
||||
- Loads albums that have been "touched" in previous phases
|
||||
- Uses a producer-consumer pattern for efficient processing
|
||||
- Retrieves all media files for each album for completeness
|
||||
|
||||
2. **Change Detection**
|
||||
- Rebuilds album metadata from associated tracks
|
||||
- Compares album attributes for changes (see the sketch after this list)
|
||||
- Skips albums with no media files
|
||||
- Avoids unnecessary database updates
|
||||
|
||||
3. **Statistics Refreshing**
|
||||
- Updates album play counts
|
||||
- Updates artist play counts
|
||||
- Maintains consistency between related entities
|
||||
|
||||
### Phase 4: Playlist Import (`phase_4_playlists.go`)
|
||||
|
||||
This phase imports and updates playlists from the file system.
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[Start Phase 4] --> B{AutoImportPlaylists?}
|
||||
B -- No --> C[Skip]
|
||||
B -- Yes --> D{Admin User Exists?}
|
||||
D -- No --> E[Log Warning & Skip]
|
||||
D -- Yes --> F[Load Folders with Playlists]
|
||||
F --> G{For Each Folder}
|
||||
G --> H[Read Directory]
|
||||
H --> I{For Each Playlist}
|
||||
I --> J[Import Playlist]
|
||||
J --> K[Pre-cache Artwork]
|
||||
K --> L[End Phase 4]
|
||||
C --> L
|
||||
E --> L
|
||||
```
|
||||
|
||||
**Technical implementation details:**
|
||||
|
||||
1. **Playlist Discovery**
|
||||
- Loads folders known to contain playlists
|
||||
- Focuses on folders that have been touched in previous phases
|
||||
- Handles both playlist formats (M3U, NSP)
|
||||
|
||||
2. **Import Process**
|
||||
- Uses the core.Playlists service for import
|
||||
- Handles both regular and smart playlists
|
||||
- Updates existing playlists when changed
|
||||
- Pre-caches playlist cover art
|
||||
|
||||
3. **Configuration Awareness**
|
||||
- Respects the AutoImportPlaylists setting
|
||||
- Requires an admin user for playlist import
|
||||
- Logs appropriate messages for configuration issues
|
||||
|
||||
## Final Processing Steps
|
||||
|
||||
After the four main phases, several finalization steps occur:
|
||||
|
||||
1. **Garbage Collection**
|
||||
- Removes dangling tracks with no files
|
||||
- Cleans up empty albums
|
||||
- Removes orphaned artists
|
||||
- Deletes orphaned annotations
|
||||
|
||||
2. **Statistics Refresh**
|
||||
- Updates artist song and album counts
|
||||
- Refreshes tag usage statistics
|
||||
- Updates aggregate metrics
|
||||
|
||||
3. **Library Status Update**
|
||||
- Marks scan as completed
|
||||
- Updates last scan timestamp
|
||||
- Stores persistent ID configuration
|
||||
|
||||
4. **Database Optimization**
|
||||
- Performs database maintenance
|
||||
- Optimizes tables and indexes
|
||||
- Reclaims space from deleted records
|
||||
|
||||
## File System Watching
|
||||
|
||||
The watcher system (`watcher.go`) provides real-time monitoring of file system changes:
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[Start Watcher] --> B[For Each Library]
|
||||
B --> C[Start Library Watcher]
|
||||
C --> D[Monitor File Events]
|
||||
D --> E{Change Detected?}
|
||||
E -- Yes --> F[Wait for More Changes]
|
||||
F --> G{Time Elapsed?}
|
||||
G -- Yes --> H[Trigger Scan]
|
||||
G -- No --> F
|
||||
H --> I[Wait for Scan Completion]
|
||||
I --> D
|
||||
```
|
||||
|
||||
**Technical implementation details:**
|
||||
|
||||
1. **Event Throttling**
|
||||
- Uses a timer to batch changes (see the debounce sketch after this list)
|
||||
- Prevents excessive rescanning
|
||||
- Configurable wait period
|
||||
|
||||
2. **Library-specific Watching**
|
||||
- Each library has its own watcher goroutine
|
||||
- Translates paths to library-relative paths
|
||||
- Filters irrelevant changes
|
||||
|
||||
3. **Platform Adaptability**
|
||||
- Uses storage-provided watcher implementation
|
||||
- Supports different notification mechanisms per platform
|
||||
- Graceful fallback when watching is not supported
|
||||
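The throttling behaviour can be sketched with a plain timer loop. The channel, duration, and callback names below are illustrative; the real watcher wires this logic to the storage layer's change notifications and to the scan controller, with the quiet period coming from `Scanner.WatcherWait`.

```go
import "time"

// watchAndScan batches change notifications: a scan is only triggered after
// `wait` has elapsed with no further changes.
func watchAndScan(changes <-chan string, wait time.Duration, scan func()) {
	for range changes { // the first change opens a batching window
		t := time.NewTimer(wait)
		for quiet := false; !quiet; {
			select {
			case _, ok := <-changes: // more changes: restart the quiet period
				if !ok {
					t.Stop()
					quiet = true
					break
				}
				if !t.Stop() {
					<-t.C
				}
				t.Reset(wait)
			case <-t.C: // no changes for `wait`: the batch is complete
				quiet = true
			}
		}
		scan()
	}
}
```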
|
||||
## Edge Cases and Optimizations
|
||||
|
||||
### Handling Album ID Changes
|
||||
|
||||
The scanner carefully manages album identity across scans:
|
||||
- Tracks previous album IDs to handle ID generation changes
|
||||
- Preserves annotations when IDs change
|
||||
- Maintains creation timestamps for consistent sorting
|
||||
|
||||
### Detecting Moved Files
|
||||
|
||||
A sophisticated algorithm identifies moved files:
|
||||
1. Groups missing and new files by their Persistent ID
|
||||
2. Applies multiple matching strategies in priority order
|
||||
3. Updates paths rather than creating duplicate entries
|
||||
|
||||
### Resuming Interrupted Scans
|
||||
|
||||
If a scan is interrupted:
|
||||
- The next scan detects this condition
|
||||
- Forces a full scan if the previous one was a full scan
|
||||
- Continues from where it left off for incremental scans
|
||||
|
||||
### Memory Efficiency
|
||||
|
||||
Several strategies minimize memory usage:
|
||||
- Batched file processing (200 files at a time)
|
||||
- External scanner process option
|
||||
- Database-side filtering where possible
|
||||
- Stream processing with pipelines
|
||||
|
||||
### Concurrency Control
|
||||
|
||||
The scanner implements a sophisticated concurrency model to optimize performance:
|
||||
|
||||
1. **Phase-Level Parallelism**:
|
||||
- Phases 1 and 2 run sequentially due to their dependencies
|
||||
- Phases 3 and 4 run in parallel using the `chain.RunParallel()` function (an equivalent pattern is sketched after this list)
|
||||
- Final steps run sequentially to ensure data consistency
|
||||
|
||||
2. **Within-Phase Concurrency**:
|
||||
- Each phase has configurable concurrency for its stages
|
||||
- For example, `phase_1_folders.go` processes folders concurrently: `ppl.NewStage(p.processFolder, ppl.Name("process folder"), ppl.Concurrency(conf.Server.DevScannerThreads))`
|
||||
- Multiple stages can exist within a phase, each with its own concurrency level
|
||||
|
||||
3. **Pipeline Architecture Benefits**:
|
||||
- Producer-consumer pattern minimizes memory usage
|
||||
- Work is streamed through stages rather than accumulated
|
||||
- Back-pressure is automatically managed
|
||||
|
||||
4. **Thread Safety Mechanisms**:
|
||||
- Atomic counters for statistics gathering
|
||||
- Mutex protection for shared resources
|
||||
- Transactional database operations
|
||||
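The phase-level split can be expressed with `errgroup`, which the project already uses in its tests. The helper below is a stand-in for what `chain.RunParallel()` achieves, not its actual implementation, and `refreshAlbums`/`importPlaylists` are placeholder names for the two phase entry points.

```go
import (
	"context"

	"golang.org/x/sync/errgroup"
)

// runParallelPhases runs the two independent phases concurrently and returns
// the first error; the sequential phases and final steps stay outside this helper.
func runParallelPhases(ctx context.Context, refreshAlbums, importPlaylists func(context.Context) error) error {
	g, ctx := errgroup.WithContext(ctx)
	g.Go(func() error { return refreshAlbums(ctx) })   // Phase 3
	g.Go(func() error { return importPlaylists(ctx) }) // Phase 4
	return g.Wait() // blocks until both finish
}
```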
|
||||
## Configuration Options
|
||||
|
||||
The scanner's behavior can be customized through several configuration settings that directly affect its operation:
|
||||
|
||||
### Core Scanner Options
|
||||
|
||||
| Setting | Description | Default |
|
||||
|-------------------------|------------------------------------------------------------------|----------------|
|
||||
| `Scanner.Enabled` | Whether the automatic scanner is enabled | true |
|
||||
| `Scanner.Schedule` | Cron expression or duration for scheduled scans (e.g., "@daily") | "0" (disabled) |
|
||||
| `Scanner.ScanOnStartup` | Whether to scan when the server starts | true |
|
||||
| `Scanner.WatcherWait` | Delay before triggering scan after file changes detected | 5s |
|
||||
| `Scanner.ArtistJoiner` | String used to join multiple artists in track metadata | " • " |
|
||||
|
||||
### Playlist Processing
|
||||
|
||||
| Setting | Description | Default |
|
||||
|-----------------------------|----------------------------------------------------------|---------|
|
||||
| `PlaylistsPath` | Path(s) to search for playlists (supports glob patterns) | "" |
|
||||
| `AutoImportPlaylists` | Whether to import playlists during scanning | true |
|
||||
|
||||
### Performance Options
|
||||
|
||||
| Setting | Description | Default |
|
||||
|----------------------|-----------------------------------------------------------|---------|
|
||||
| `DevExternalScanner` | Use external process for scanning (reduces memory issues) | true |
|
||||
| `DevScannerThreads` | Number of concurrent processing threads during scanning | 5 |
|
||||
|
||||
### Persistent ID Options
|
||||
|
||||
| Setting | Description | Default |
|
||||
|-------------|---------------------------------------------------------------------|---------------------------------------------------------------------|
|
||||
| `PID.Track` | Format for track persistent IDs (critical for tracking moved files) | "musicbrainz_trackid\|albumid,discnumber,tracknumber,title" |
|
||||
| `PID.Album` | Format for album persistent IDs (affects album grouping) | "musicbrainz_albumid\|albumartistid,album,albumversion,releasedate" |
|
||||
|
||||
These options can be set in the Navidrome configuration file (e.g., `navidrome.toml`) or via environment variables with the `ND_` prefix (e.g., `ND_SCANNER_ENABLED=false`). For environment variables, dots in option names are replaced with underscores.
|
||||
|
||||
## Conclusion
|
||||
|
||||
The Navidrome scanner represents a sophisticated system for efficiently managing music libraries. Its phase-based pipeline architecture, careful handling of edge cases, and performance optimizations allow it to handle libraries of significant size while maintaining data integrity and providing a responsive user experience.
|
||||
@@ -11,6 +11,7 @@ import (
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/consts"
|
||||
"github.com/navidrome/navidrome/core"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
@@ -266,6 +267,10 @@ func isDirOrSymlinkToDir(fsys fs.FS, baseDir string, dirEnt fs.DirEntry) (bool,
|
||||
if dirEnt.Type()&fs.ModeSymlink == 0 {
|
||||
return false, nil
|
||||
}
|
||||
// If symlinks are disabled, return false for symlinks
|
||||
if !conf.Server.Scanner.FollowSymlinks {
|
||||
return false, nil
|
||||
}
|
||||
// Does this symlink point to a directory?
|
||||
fileInfo, err := fs.Stat(fsys, path.Join(baseDir, dirEnt.Name()))
|
||||
if err != nil {
|
||||
|
||||
@@ -8,6 +8,8 @@ import (
|
||||
"path/filepath"
|
||||
"testing/fstest"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/conf/configtest"
|
||||
"github.com/navidrome/navidrome/core/storage"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
@@ -17,8 +19,15 @@ import (
|
||||
|
||||
var _ = Describe("walk_dir_tree", func() {
|
||||
Describe("walkDirTree", func() {
|
||||
var fsys storage.MusicFS
|
||||
var (
|
||||
fsys storage.MusicFS
|
||||
job *scanJob
|
||||
ctx context.Context
|
||||
)
|
||||
|
||||
BeforeEach(func() {
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
ctx = GinkgoT().Context()
|
||||
fsys = &mockMusicFS{
|
||||
FS: fstest.MapFS{
|
||||
"root/a/.ndignore": {Data: []byte("ignored/*")},
|
||||
@@ -32,21 +41,22 @@ var _ = Describe("walk_dir_tree", func() {
|
||||
"root/d/f1.mp3": {},
|
||||
"root/d/f2.mp3": {},
|
||||
"root/d/f3.mp3": {},
|
||||
"root/e/original/f1.mp3": {},
|
||||
"root/e/symlink": {Mode: fs.ModeSymlink, Data: []byte("root/e/original")},
|
||||
},
|
||||
}
|
||||
})
|
||||
|
||||
It("walks all directories", func() {
|
||||
job := &scanJob{
|
||||
job = &scanJob{
|
||||
fs: fsys,
|
||||
lib: model.Library{Path: "/music"},
|
||||
}
|
||||
ctx := context.Background()
|
||||
})
|
||||
|
||||
// Helper function to call walkDirTree and collect folders from the results channel
|
||||
getFolders := func() map[string]*folderEntry {
|
||||
results, err := walkDirTree(ctx, job)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
folders := map[string]*folderEntry{}
|
||||
|
||||
g := errgroup.Group{}
|
||||
g.Go(func() error {
|
||||
for folder := range results {
|
||||
@@ -55,24 +65,42 @@ var _ = Describe("walk_dir_tree", func() {
|
||||
return nil
|
||||
})
|
||||
_ = g.Wait()
|
||||
return folders
|
||||
}
|
||||
|
||||
Expect(folders).To(HaveLen(6))
|
||||
Expect(folders["root/a/ignored"].audioFiles).To(BeEmpty())
|
||||
Expect(folders["root/a"].audioFiles).To(SatisfyAll(
|
||||
HaveLen(2),
|
||||
HaveKey("f1.mp3"),
|
||||
HaveKey("f2.mp3"),
|
||||
))
|
||||
Expect(folders["root/a"].imageFiles).To(BeEmpty())
|
||||
Expect(folders["root/b"].audioFiles).To(BeEmpty())
|
||||
Expect(folders["root/b"].imageFiles).To(SatisfyAll(
|
||||
HaveLen(1),
|
||||
HaveKey("cover.jpg"),
|
||||
))
|
||||
Expect(folders["root/c"].audioFiles).To(BeEmpty())
|
||||
Expect(folders["root/c"].imageFiles).To(BeEmpty())
|
||||
Expect(folders).ToNot(HaveKey("root/d"))
|
||||
})
|
||||
DescribeTable("symlink handling",
|
||||
func(followSymlinks bool, expectedFolderCount int) {
|
||||
conf.Server.Scanner.FollowSymlinks = followSymlinks
|
||||
folders := getFolders()
|
||||
|
||||
Expect(folders).To(HaveLen(expectedFolderCount + 2)) // +2 for `.` and `root`
|
||||
|
||||
// Basic folder structure checks
|
||||
Expect(folders["root/a"].audioFiles).To(SatisfyAll(
|
||||
HaveLen(2),
|
||||
HaveKey("f1.mp3"),
|
||||
HaveKey("f2.mp3"),
|
||||
))
|
||||
Expect(folders["root/a"].imageFiles).To(BeEmpty())
|
||||
Expect(folders["root/b"].audioFiles).To(BeEmpty())
|
||||
Expect(folders["root/b"].imageFiles).To(SatisfyAll(
|
||||
HaveLen(1),
|
||||
HaveKey("cover.jpg"),
|
||||
))
|
||||
Expect(folders["root/c"].audioFiles).To(BeEmpty())
|
||||
Expect(folders["root/c"].imageFiles).To(BeEmpty())
|
||||
Expect(folders).ToNot(HaveKey("root/d"))
|
||||
|
||||
// Symlink specific checks
|
||||
if followSymlinks {
|
||||
Expect(folders["root/e/symlink"].audioFiles).To(HaveLen(1))
|
||||
} else {
|
||||
Expect(folders).ToNot(HaveKey("root/e/symlink"))
|
||||
}
|
||||
},
|
||||
Entry("with symlinks enabled", true, 7),
|
||||
Entry("with symlinks disabled", false, 6),
|
||||
)
|
||||
})
|
||||
|
||||
Describe("helper functions", func() {
|
||||
@@ -81,74 +109,88 @@ var _ = Describe("walk_dir_tree", func() {
|
||||
baseDir := filepath.Join("tests", "fixtures")
|
||||
|
||||
Describe("isDirOrSymlinkToDir", func() {
|
||||
It("returns true for normal dirs", func() {
|
||||
dirEntry := getDirEntry("tests", "fixtures")
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(BeTrue())
|
||||
BeforeEach(func() {
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
})
|
||||
It("returns true for symlinks to dirs", func() {
|
||||
dirEntry := getDirEntry(baseDir, "symlink2dir")
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(BeTrue())
|
||||
})
|
||||
It("returns false for files", func() {
|
||||
dirEntry := getDirEntry(baseDir, "test.mp3")
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(BeFalse())
|
||||
})
|
||||
It("returns false for symlinks to files", func() {
|
||||
dirEntry := getDirEntry(baseDir, "symlink")
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(BeFalse())
|
||||
})
|
||||
})
|
||||
Describe("isDirIgnored", func() {
|
||||
It("returns false for normal dirs", func() {
|
||||
Expect(isDirIgnored("empty_folder")).To(BeFalse())
|
||||
})
|
||||
It("returns true when folder name starts with a `.`", func() {
|
||||
Expect(isDirIgnored(".hidden_folder")).To(BeTrue())
|
||||
})
|
||||
It("returns false when folder name starts with ellipses", func() {
|
||||
Expect(isDirIgnored("...unhidden_folder")).To(BeFalse())
|
||||
})
|
||||
It("returns true when folder name is $Recycle.Bin", func() {
|
||||
Expect(isDirIgnored("$Recycle.Bin")).To(BeTrue())
|
||||
})
|
||||
It("returns true when folder name is #snapshot", func() {
|
||||
Expect(isDirIgnored("#snapshot")).To(BeTrue())
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
Describe("fullReadDir", func() {
|
||||
var fsys fakeFS
|
||||
var ctx context.Context
|
||||
BeforeEach(func() {
|
||||
ctx = context.Background()
|
||||
fsys = fakeFS{MapFS: fstest.MapFS{
|
||||
"root/a/f1": {},
|
||||
"root/b/f2": {},
|
||||
"root/c/f3": {},
|
||||
}}
|
||||
Context("with symlinks enabled", func() {
|
||||
BeforeEach(func() {
|
||||
conf.Server.Scanner.FollowSymlinks = true
|
||||
})
|
||||
|
||||
DescribeTable("returns expected result",
|
||||
func(dirName string, expected bool) {
|
||||
dirEntry := getDirEntry("tests/fixtures", dirName)
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(Equal(expected))
|
||||
},
|
||||
Entry("normal dir", "empty_folder", true),
|
||||
Entry("symlink to dir", "symlink2dir", true),
|
||||
Entry("regular file", "test.mp3", false),
|
||||
Entry("symlink to file", "symlink", false),
|
||||
)
|
||||
})
|
||||
|
||||
Context("with symlinks disabled", func() {
|
||||
BeforeEach(func() {
|
||||
conf.Server.Scanner.FollowSymlinks = false
|
||||
})
|
||||
|
||||
DescribeTable("returns expected result",
|
||||
func(dirName string, expected bool) {
|
||||
dirEntry := getDirEntry("tests/fixtures", dirName)
|
||||
Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(Equal(expected))
|
||||
},
|
||||
Entry("normal dir", "empty_folder", true),
|
||||
Entry("symlink to dir", "symlink2dir", false),
|
||||
Entry("regular file", "test.mp3", false),
|
||||
Entry("symlink to file", "symlink", false),
|
||||
)
|
||||
})
|
||||
})
|
||||
It("reads all entries", func() {
|
||||
dir, _ := fsys.Open("root")
|
||||
entries := fullReadDir(ctx, dir.(fs.ReadDirFile))
|
||||
Expect(entries).To(HaveLen(3))
|
||||
Expect(entries[0].Name()).To(Equal("a"))
|
||||
Expect(entries[1].Name()).To(Equal("b"))
|
||||
Expect(entries[2].Name()).To(Equal("c"))
|
||||
|
||||
Describe("isDirIgnored", func() {
|
||||
DescribeTable("returns expected result",
|
||||
func(dirName string, expected bool) {
|
||||
Expect(isDirIgnored(dirName)).To(Equal(expected))
|
||||
},
|
||||
Entry("normal dir", "empty_folder", false),
|
||||
Entry("hidden dir", ".hidden_folder", true),
|
||||
Entry("dir starting with ellipsis", "...unhidden_folder", false),
|
||||
Entry("recycle bin", "$Recycle.Bin", true),
|
||||
Entry("snapshot dir", "#snapshot", true),
|
||||
)
|
||||
})
|
||||
It("skips entries with permission error", func() {
|
||||
fsys.failOn = "b"
|
||||
dir, _ := fsys.Open("root")
|
||||
entries := fullReadDir(ctx, dir.(fs.ReadDirFile))
|
||||
Expect(entries).To(HaveLen(2))
|
||||
Expect(entries[0].Name()).To(Equal("a"))
|
||||
Expect(entries[1].Name()).To(Equal("c"))
|
||||
})
|
||||
It("aborts if it keeps getting 'readdirent: no such file or directory'", func() {
|
||||
fsys.err = fs.ErrNotExist
|
||||
dir, _ := fsys.Open("root")
|
||||
entries := fullReadDir(ctx, dir.(fs.ReadDirFile))
|
||||
Expect(entries).To(BeEmpty())
|
||||
|
||||
Describe("fullReadDir", func() {
|
||||
var (
|
||||
fsys fakeFS
|
||||
ctx context.Context
|
||||
)
|
||||
|
||||
BeforeEach(func() {
|
||||
ctx = GinkgoT().Context()
|
||||
fsys = fakeFS{MapFS: fstest.MapFS{
|
||||
"root/a/f1": {},
|
||||
"root/b/f2": {},
|
||||
"root/c/f3": {},
|
||||
}}
|
||||
})
|
||||
|
||||
DescribeTable("reading directory entries",
|
||||
func(failOn string, expectedErr error, expectedNames []string) {
|
||||
fsys.failOn = failOn
|
||||
fsys.err = expectedErr
|
||||
dir, _ := fsys.Open("root")
|
||||
entries := fullReadDir(ctx, dir.(fs.ReadDirFile))
|
||||
Expect(entries).To(HaveLen(len(expectedNames)))
|
||||
for i, name := range expectedNames {
|
||||
Expect(entries[i].Name()).To(Equal(name))
|
||||
}
|
||||
},
|
||||
Entry("reads all entries", "", nil, []string{"a", "b", "c"}),
|
||||
Entry("skips entries with permission error", "b", nil, []string{"a", "c"}),
|
||||
Entry("aborts on fs.ErrNotExist", "", fs.ErrNotExist, []string{}),
|
||||
)
|
||||
})
|
||||
})
|
||||
})
|
||||
@@ -205,11 +247,54 @@ func getDirEntry(baseDir, name string) os.DirEntry {
|
||||
panic(fmt.Sprintf("Could not find %s in %s", name, baseDir))
|
||||
}
|
||||
|
||||
// mockMusicFS is a mock implementation of the MusicFS interface that supports symlinks
|
||||
type mockMusicFS struct {
|
||||
storage.MusicFS
|
||||
fs.FS
|
||||
}
|
||||
|
||||
// Open resolves symlinks
|
||||
func (m *mockMusicFS) Open(name string) (fs.File, error) {
|
||||
return m.FS.Open(name)
|
||||
f, err := m.FS.Open(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
info, err := f.Stat()
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if info.Mode()&fs.ModeSymlink != 0 {
|
||||
// For symlinks, read the target path from the Data field
|
||||
target := string(m.FS.(fstest.MapFS)[name].Data)
|
||||
f.Close()
|
||||
return m.FS.Open(target)
|
||||
}
|
||||
|
||||
return f, nil
|
||||
}
|
||||
|
||||
// Stat uses Open to resolve symlinks
|
||||
func (m *mockMusicFS) Stat(name string) (fs.FileInfo, error) {
|
||||
f, err := m.Open(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
return f.Stat()
|
||||
}
|
||||
|
||||
// ReadDir uses Open to resolve symlinks
|
||||
func (m *mockMusicFS) ReadDir(name string) ([]fs.DirEntry, error) {
|
||||
f, err := m.Open(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
if dirFile, ok := f.(fs.ReadDirFile); ok {
|
||||
return dirFile.ReadDir(-1)
|
||||
}
|
||||
return nil, fmt.Errorf("not a directory")
|
||||
}
|
||||
|
||||
@@ -4,11 +4,13 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/navidrome/navidrome/tests"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSubsonicApi(t *testing.T) {
|
||||
tests.Init(t, false)
|
||||
log.SetLevel(log.LevelFatal)
|
||||
RegisterFailHandler(Fail)
|
||||
RunSpecs(t, "Subsonic API Suite")
|
||||
|
||||
@@ -108,12 +108,12 @@ func SongsByRandom(genre string, fromYear, toYear int) Options {
|
||||
return addDefaultFilters(options)
|
||||
}
|
||||
|
||||
func SongWithLyrics(artist, title string) Options {
|
||||
func SongWithArtistTitle(artist, title string) Options {
|
||||
return addDefaultFilters(Options{
|
||||
Sort: "updated_at",
|
||||
Order: "desc",
|
||||
Max: 1,
|
||||
Filters: And{Eq{"artist": artist, "title": title}, NotEq{"lyrics": ""}},
|
||||
Filters: And{Eq{"artist": artist, "title": title}},
|
||||
})
|
||||
}
|
||||
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/consts"
|
||||
"github.com/navidrome/navidrome/core/lyrics"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
"github.com/navidrome/navidrome/resources"
|
||||
@@ -95,9 +96,9 @@ func (api *Router) GetLyrics(r *http.Request) (*responses.Subsonic, error) {
|
||||
artist, _ := p.String("artist")
|
||||
title, _ := p.String("title")
|
||||
response := newResponse()
|
||||
lyrics := responses.Lyrics{}
|
||||
response.Lyrics = &lyrics
|
||||
mediaFiles, err := api.ds.MediaFile(r.Context()).GetAll(filter.SongWithLyrics(artist, title))
|
||||
lyricsResponse := responses.Lyrics{}
|
||||
response.Lyrics = &lyricsResponse
|
||||
mediaFiles, err := api.ds.MediaFile(r.Context()).GetAll(filter.SongWithArtistTitle(artist, title))
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -107,7 +108,7 @@ func (api *Router) GetLyrics(r *http.Request) (*responses.Subsonic, error) {
|
||||
return response, nil
|
||||
}
|
||||
|
||||
structuredLyrics, err := mediaFiles[0].StructuredLyrics()
|
||||
structuredLyrics, err := lyrics.GetLyrics(r.Context(), &mediaFiles[0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -116,15 +117,15 @@ func (api *Router) GetLyrics(r *http.Request) (*responses.Subsonic, error) {
|
||||
return response, nil
|
||||
}
|
||||
|
||||
lyrics.Artist = artist
|
||||
lyrics.Title = title
|
||||
lyricsResponse.Artist = artist
|
||||
lyricsResponse.Title = title
|
||||
|
||||
lyricsText := ""
|
||||
for _, line := range structuredLyrics[0].Line {
|
||||
lyricsText += line.Value + "\n"
|
||||
}
|
||||
|
||||
lyrics.Value = lyricsText
|
||||
lyricsResponse.Value = lyricsText
|
||||
|
||||
return response, nil
|
||||
}
|
||||
@@ -140,13 +141,13 @@ func (api *Router) GetLyricsBySongId(r *http.Request) (*responses.Subsonic, erro
|
||||
return nil, err
|
||||
}
|
||||
|
||||
lyrics, err := mediaFile.StructuredLyrics()
|
||||
structuredLyrics, err := lyrics.GetLyrics(r.Context(), mediaFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
response := newResponse()
|
||||
response.LyricsList = buildLyricsList(mediaFile, lyrics)
|
||||
response.LyricsList = buildLyricsList(mediaFile, structuredLyrics)
|
||||
|
||||
return response, nil
|
||||
}
|
||||
|
||||
@@ -9,6 +9,8 @@ import (
|
||||
"net/http/httptest"
|
||||
"time"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/conf/configtest"
|
||||
"github.com/navidrome/navidrome/core/artwork"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
@@ -32,6 +34,8 @@ var _ = Describe("MediaRetrievalController", func() {
|
||||
artwork = &fakeArtwork{}
|
||||
router = New(ds, artwork, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
|
||||
w = httptest.NewRecorder()
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
conf.Server.LyricsPriority = "embedded,.lrc"
|
||||
})
|
||||
|
||||
Describe("GetCoverArt", func() {
|
||||
@@ -109,6 +113,22 @@ var _ = Describe("MediaRetrievalController", func() {
|
||||
Expect(response.Lyrics.Title).To(Equal(""))
|
||||
Expect(response.Lyrics.Value).To(Equal(""))
|
||||
})
|
||||
It("should return lyric file when finding mediafile with no embedded lyrics but present on filesystem", func() {
|
||||
r := newGetRequest("artist=Rick+Astley", "title=Never+Gonna+Give+You+Up")
|
||||
mockRepo.SetData(model.MediaFiles{
|
||||
{
|
||||
Path: "tests/fixtures/test.mp3",
|
||||
ID: "1",
|
||||
Artist: "Rick Astley",
|
||||
Title: "Never Gonna Give You Up",
|
||||
},
|
||||
})
|
||||
response, err := router.GetLyrics(r)
|
||||
Expect(err).To(BeNil())
|
||||
Expect(response.Lyrics.Artist).To(Equal("Rick Astley"))
|
||||
Expect(response.Lyrics.Title).To(Equal("Never Gonna Give You Up"))
|
||||
Expect(response.Lyrics.Value).To(Equal("We're no strangers to love\nYou know the rules and so do I\n"))
|
||||
})
|
||||
})
|
||||
|
||||
Describe("getLyricsBySongId", func() {
|
||||
|
||||
BIN  tests/fixtures/01 Invisible (RED) Edit Version.m4a  (vendored; binary file not shown)
BIN  tests/fixtures/test.aiff  (vendored; binary file not shown)
BIN  tests/fixtures/test.flac  (vendored; binary file not shown)

6  tests/fixtures/test.lrc  (vendored; new file)
@@ -0,0 +1,6 @@
[ar:Rick Astley]
[ti:That one song]
[offset:-100]
[lang:eng]
[00:18.80]We're no strangers to love
[00:22.801]You know the rules and so do I

BIN  tests/fixtures/test.m4a  (vendored; binary file not shown)
BIN  tests/fixtures/test.mp3  (vendored; binary file not shown)
BIN  tests/fixtures/test.ogg  (vendored; binary file not shown)

2  tests/fixtures/test.txt  (vendored; new file)
@@ -0,0 +1,2 @@
We're no strangers to love
You know the rules and so do I

BIN  tests/fixtures/test.wav  (vendored; binary file not shown)
BIN  tests/fixtures/test.wma  (vendored; binary file not shown)
BIN  tests/fixtures/test.wv  (vendored; binary file not shown)