mirror of
https://github.com/spacedriveapp/spacedrive.git
synced 2026-04-19 14:08:45 -04:00
* feat: add redundancy awareness & cross-volume file comparison

  Surface content redundancy data so users can answer "if this drive dies,
  what do I lose?" — builds on existing content identity and volume systems.

  Backend:
  - New `redundancy.summary` library query with per-volume at-risk vs
    redundant byte/file counts and a library-wide replication score
  - Extend `SearchFilters` with `at_risk`, `on_volumes`, `not_on_volumes`,
    `min_volume_count`, `max_volume_count` filters
  - Add composite index migration on entries(content_id, volume_id)

  Frontend:
  - `/redundancy` dashboard with replication score, volume bars, at-risk callout
  - `/redundancy/at-risk` paginated file list sorted by size
  - `/redundancy/compare` two-volume comparison (unique/shared toggle)
  - Sidebar ShieldCheck button linking to redundancy view

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* redundancy UI improvements + ZFS volume detection fix

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix ZFS pool capacity reporting and stats filtering

  - ZFS: override total_capacity for pool-root volumes using zfs list
    used+available. df under-reports pool-root Size because it only counts
    the root dataset's own used bytes plus avail — on a 60 TB raidz2 pool
    this shows as ~15 TB instead of ~62 TB. The pool root's own used
    property includes descendants, so used+available is the real usable
    capacity.
  - Library stats: drop volumes where is_user_visible=false AND re-apply
    should_hide_by_mount_path retroactively so stale DB rows (detected
    before the Linux visibility filters existed) don't inflate reported
    capacity.
  - Extract should_hide_by_mount_path into volume/utils as a shared helper
    used by both the list query and the stats calculation.
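The used+available arithmetic behind the capacity fix can be sketched in shell. The pool name "tank" and the captured sample line are illustrative stand-ins for live `zfs list -Hp -o name,used,avail <pool>` output (`-H` drops the header, `-p` prints exact byte counts, tab-separated); this is not Spacedrive's actual Rust implementation.

```shell
#!/bin/sh
# A pool root's own `used` property already includes all descendants, so
# used + available is the pool's real usable capacity. Sample line stands
# in for `zfs list -Hp -o name,used,avail tank` (hypothetical pool).
sample="$(printf 'tank\t48000000000000\t14000000000000')"

name=$(printf '%s' "$sample" | cut -f1)
used=$(printf '%s' "$sample" | cut -f2)
avail=$(printf '%s' "$sample" | cut -f3)

total=$((used + avail))   # ~62 TB: what df's Size column fails to report
echo "$name total_capacity=$total"
```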
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* show capacity and visibility in sd volume list

  Makes it possible to verify library-level capacity aggregation from the
  CLI — previously the list only showed mount, fingerprint, and
  tracked/mounted state, which meant debugging the ZFS pool capacity issue
  required querying the library DB directly.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* document filesystem support matrix and detection

  New docs/core/filesystems.mdx covering per-filesystem capabilities (CoW,
  pool-awareness, visibility filtering, capacity correction), platform
  detection strategies, the FilesystemHandler trait, Linux/macOS/ZFS
  visibility rules, the ZFS pool-root capacity problem and fix, copy
  strategy selection, and known limitations. Registered under File
  Management in both mint.json and docs.json.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* WIP: redundancy filter wiring across search, CLI, and UI

  - core/src/ops/search: wire redundancy filters (at_risk, on_volumes,
    not_on_volumes, min/max volume_count) through the search query; fix
    the UUID-to-SQLite BLOB literal so volume UUID comparisons actually
    match (volumes.uuid is stored as a 16-byte BLOB, so a quoted-string
    comparison silently returned zero rows).
  - apps/cli: new redundancy subcommand + populate the new SearchFilters
    fields from search args.
  - packages/interface: redundancy at-risk and compare pages reworked to
    consume the new filter surface; explorer context/hook updates to
    support redundancy-scoped views.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* add web context menu renderer and UI polish

  - New WebContextMenuProvider + Radix DropdownMenu-based renderer anchored
    at cursor via a 1x1 virtual trigger. Handles separators, submenus,
    disabled, and the danger variant via text-status-error.
  - useContextMenu now routes web clicks through the provider instead of
    parking data in unused local state, and trims leading/trailing/adjacent
    separators so condition-filtered menus don't render orphaned lines.
  - Drop app-frame corner rounding on the web build.
  - Add shrink-0 to the sidebar space switcher so the scrollable sibling
    can't compress it vertically.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* sd-server dev workflow: auto web build, shutdown watchdog, stable data dir

  - build.rs runs `bun run build` in apps/web so `just dev-server` always
    embeds the latest UI. rerun-if-changed covers apps/web/src,
    packages/interface/src, and packages/ts-client/src so Rust-only edits
    skip the rebuild. Skips gracefully when bun isn't on PATH or
    SD_SKIP_WEB_BUILD is set; the Dockerfile sets the latter since dist is
    pre-built and bun isn't in the Rust stage.
  - Graceful shutdown was hanging because the browser holds the /events SSE
    stream open forever and axum waits for all connections to drain. After
    the first signal, arm a background force-exit on a second Ctrl+C or a
    5s timeout so the process can't stick.
  - Debug builds were starting from a fresh tempfile::tempdir() on every
    run (the TempDir handle dropped at the end of the closure, deleting the
    dir we had just taken a path to). Default to ~/.spacedrive in debug so
    data persists and `just dev-server` shares a data dir with the Tauri
    app.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* add Sources space item alongside Redundancy in default library layout

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* add TrueNAS native build script

  Uses zig cc as the C/C++ compiler on TrueNAS Scale, where /usr is
  read-only and no system gcc exists. Dev tools live at
  /mnt/pool/dev-tools/ (zig, cmake, make, extracted deb headers). Builds
  sd-server + sd-cli in ~4 min on a 12-core NAS. AI feature disabled
  (whisper.cpp C11 atomics incompatible with zig clang-18).
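The web-build skip conditions from the dev-workflow commit above can be sketched as a small guard. The real logic lives in build.rs (Rust); the function name here is illustrative, not actual project code.

```shell
#!/bin/sh
# Skip the embedded web build when bun is absent from PATH or when
# SD_SKIP_WEB_BUILD is set (the Dockerfile sets the latter, since dist is
# pre-built and bun isn't in the Rust stage).
should_build_web() {
  [ -n "$SD_SKIP_WEB_BUILD" ] && return 1     # explicit opt-out
  command -v bun >/dev/null 2>&1 || return 1  # no bun on PATH: skip gracefully
  return 0
}

SD_SKIP_WEB_BUILD=1   # simulate the Docker Rust stage
if should_build_web; then decision=build; else decision=skip; fi
echo "$decision"
```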
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* never block RPC on synchronous statistics calculation

  On first load (fresh library, all stats zero), libraries.info used to
  calculate statistics synchronously before responding. On large libraries
  during active indexing this hangs indefinitely — the closure-table walk
  in calculate_file_statistics loads every descendant ID into a Vec, then
  issues a WHERE IN(...) with millions of entries, which SQLite can't
  finish while the indexer is writing.

  Now always return cached (possibly zero) stats and let the background
  recalculate_statistics task fill them in. The UI refreshes via the
  ResourceChanged event when the calculation completes.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* self-heal protocol handler registration on re-init

  Core::new() registers default protocol handlers after starting
  networking, but swallows any failure (the error is only logged). If the
  initial registration fails — e.g. on a host where start_networking hasn't
  fully set up the event loop command sender by the time
  register_default_protocol_handlers runs — the registry is left empty. A
  subsequent call to Core::init_networking() would see
  `services.networking().is_some()` and skip re-registration, permanently
  leaving protocols unregistered for the life of the process. sd-server
  calls init_networking() right after Core::new(), so it's the client most
  exposed to this.

  Symptom: pairing over the web UI returns "Pairing protocol not
  registered" while the same library works fine from Tauri and mobile.

  Fix: init_networking now queries the registry directly for the pairing
  handler and re-registers the default set if it's missing, independent of
  whether networking is already initialized.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fall back to pkarr+DNS discovery when mDNS port is unavailable

  Iroh's endpoint.bind() fails wholesale if any configured discovery
  service fails to initialize.
  MdnsDiscovery requires binding UDP :5353, which on most Linux systems
  (including TrueNAS) is already owned by avahi-daemon. Result: endpoint
  creation errors out with "Service 'mdns' error", the event loop never
  starts, command_sender stays None, and protocol registration fails — so
  sd-server has no working networking at all.

  Make mDNS best-effort: on any error whose message mentions "mdns", retry
  endpoint creation with only pkarr + DNS discovery. Local-network
  auto-discovery is lost but remote pairing via node ID (which uses n0's
  DNS infrastructure, not mDNS) continues to work normally.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* succeed pairing if either mDNS or relay discovery wins

  The dual-path discovery in start_pairing_as_joiner_with_code used
  tokio::select! to race mDNS and relay. select! resolves on the first
  branch to complete — including errors — so a host that can't bind mDNS
  (e.g. a Linux box where avahi already owns UDP :5353) would fail pairing
  wholesale: mDNS discovery errors out in <1ms with "Failed to create mDNS
  discovery: Service 'mdns' error", that Err wins the race, and relay
  discovery gets cancelled before it can even begin.

  Switch to futures::select_ok so we only return the error if EVERY
  discovery path has failed. mDNS failing immediately now leaves relay
  running to completion, which is the common case for remote pairing into
  a NAS.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
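The select_ok property described in the last commit can be sketched with plain processes: a fast mDNS failure must not decide the race while the relay path is still running, and the whole operation fails only when every path has failed. The timings and the paired/failed strings are illustrative, not the protocol's real behavior.

```shell
#!/bin/sh
# Race two "discovery paths"; succeed if ANY of them exits 0.
( exit 1 ) & mdns=$!             # mDNS path: errors out almost instantly
( sleep 1; exit 0 ) & relay=$!   # relay path: succeeds a moment later

ok=1
for pid in "$mdns" "$relay"; do  # sketch waits in listed order, not completion order
  if wait "$pid"; then ok=0; break; fi
done
[ "$ok" -eq 0 ] && result=paired || result=failed
echo "$result"
```

With tokio::select!-style first-to-finish semantics, the instant mDNS failure would have been the overall result; here the relay success still wins.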
25 lines
1016 B
Bash
Executable File
#!/bin/bash
# Build sd-server + sd-cli natively on TrueNAS Scale
# Uses zig cc as C compiler (since gcc/clang not installed)
# Dev tools at /mnt/pool/dev-tools/

set -e

SR=/mnt/pool/dev-tools/sysroot
export BINDGEN_EXTRA_CLANG_ARGS="-I$SR/usr/lib/gcc/x86_64-linux-gnu/12/include -I$SR/usr/include -I$SR/usr/include/x86_64-linux-gnu"
export PATH="/mnt/pool/dev-tools:/mnt/pool/dev-tools/bin:/mnt/pool/dev-tools/sysroot/usr/bin:$PATH"
export CC=/mnt/pool/dev-tools/cc
export CXX="/mnt/pool/dev-tools/c++"
export AR=/mnt/pool/dev-tools/ar
export C_INCLUDE_PATH="$SR/usr/include:$SR/usr/include/x86_64-linux-gnu:$SR/usr/lib/gcc/x86_64-linux-gnu/12/include"
export CPLUS_INCLUDE_PATH="$C_INCLUDE_PATH"
export OPENSSL_INCLUDE_DIR="$SR/usr/include"
export OPENSSL_LIB_DIR="$SR/usr/lib/x86_64-linux-gnu"

cd /mnt/pool/spacedrive
cargo build --release --bin sd-server --bin sd-cli \
  --features sd-core/heif,sd-core/ffmpeg \
  -j10 "$@"

echo "Binaries at:"
ls -lh target/release/sd-server target/release/sd-cli 2>/dev/null
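The script points CC at /mnt/pool/dev-tools/cc rather than at zig directly, which suggests a thin wrapper so cargo's single-binary CC invocation resolves to zig's clang frontend. The wrapper contents and the zig path below are guesses for illustration (the real shim isn't shown in this mirror); this sketch writes them to a temp file since the NAS layout isn't available here.

```shell
#!/bin/sh
# Hypothetical contents of the cc shim the script exports as CC.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
exec /mnt/pool/dev-tools/zig/zig cc "$@"
EOF
chmod +x "$wrapper"
cat "$wrapper"
```

The same pattern would cover the c++ shim (`zig c++`) and ar (`zig ar`) referenced by the CXX and AR exports.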