huge refactor

This commit is contained in:
Jamie Pine
2025-09-06 21:00:37 -04:00
parent 5001646cb3
commit 29a87a91ce
1258 changed files with 3195 additions and 123076 deletions


@@ -1,31 +0,0 @@
const path = require('node:path');
/**
 * @type {import('prettier').Config}
 */
module.exports = {
	useTabs: true,
	printWidth: 100,
	singleQuote: true,
	trailingComma: 'none',
	bracketSameLine: false,
	semi: true,
	quoteProps: 'consistent',
	importOrder: [
		// external packages
		'<THIRD_PARTY_MODULES>',
		// spacedrive packages
		'^@sd/(interface|client|ui)(/.*)?$',
		// internal packages
		'^@/',
		'^~/',
		'',
		// relative
		'^[../]',
		'^[./]'
	],
	importOrderParserPlugins: ['typescript', 'jsx', 'decorators'],
	importOrderTypeScriptVersion: '5.0.0',
	tailwindConfig: path.resolve(path.join(__dirname, 'packages/ui/tailwind.config.js')),
	plugins: ['@ianvs/prettier-plugin-sort-imports', 'prettier-plugin-tailwindcss']
};


@@ -11,7 +11,7 @@ whitepaper: Section 5.1
## Description
-Design the infrastructure for provisioning and running isolated `sd-core-new` instances for users in a cloud environment. This involves creating a scalable and secure architecture, likely using containerization and orchestration technologies like Kubernetes.
+Design the infrastructure for provisioning and running isolated `sd-core` instances for users in a cloud environment. This involves creating a scalable and secure architecture, likely using containerization and orchestration technologies like Kubernetes.
## Implementation Steps
@@ -21,6 +21,7 @@ Design the infrastructure for provisioning and running isolated `sd-core-new` in
4. Specify the security and networking policies for the cloud environment.
## Acceptance Criteria
- [ ] A detailed architecture document is created.
- [ ] The design addresses scalability, security, and cost-effectiveness.
- [ ] The design is approved and ready for implementation.


@@ -1,8 +1,8 @@
---
id: CORE-008
title: Virtual Sidecar System
-status: To Do
-assignee: unassigned
+status: In Progress
+assignee: james
parent: CORE-000
priority: High
tags: [core, vdfs, sidecars, derivatives]


@@ -25,7 +25,7 @@ What started as an ambitious vision became an engineering lesson. Now we're ship
<br/>
> **The Revolution**
>
> Copy files between your iPhone and MacBook as easily as moving between folders. Search across all your devices with a single query. Organize photos that live anywhere. **Device boundaries disappear.**
<p align="center">
@@ -50,9 +50,9 @@ What started as an ambitious vision became an engineering lesson. Now we're ship
## The Vision Realized
**Copy iPhone video to MacBook storage?** Done.
**Search across all devices instantly?** Built-in.
**Organize files that live everywhere?** Native.
**Keep it private and lightning fast?** Always.
The original Spacedrive captured imaginations with a bold promise: the **Virtual Distributed File System**. Manage all your files across all your devices as if they were one giant drive. We delivered impressive file management, but the revolutionary cross-device magic remained just out of reach.
@@ -64,21 +64,25 @@ The original Spacedrive captured imaginations with a bold promise: the **Virtual
Your files are scattered across devices, cloud services, and external drives. Traditional file managers trap you in local boundaries. Spacedrive makes those boundaries disappear:
**🌐 Universal File Access**
- Browse files on any device from any device
- External drives, cloud storage, remote servers - all unified
- Offline files show up with cached metadata
**⚡ Lightning Search**
- Find files across all locations with a single search
- Content search inside documents, PDFs, and media
- AI-powered semantic search: "find sunset photos from vacation"
**🔄 Seamless Operations**
- Copy, move, and organize files between any devices
- Drag and drop across device boundaries
- Batch operations on distributed collections
**🔒 Privacy First**
- Your data stays on your devices
- Optional cloud sync, never required
- End-to-end encryption for all transfers
@@ -88,12 +92,14 @@ Your files are scattered across devices, cloud services, and external drives. Tr
The original Spacedrive got 500,000 installs because the vision was right. Development paused because the execution was flawed:
### The Problems (2022-2024)
- **Split personality**: Couldn't copy between different location types
- **Search limitations**: Basic filename matching, not true content discovery
- **Technical debt**: Built on foundations that couldn't scale
- **Feature paralysis**: Perfect became the enemy of good
### The Breakthrough (2024-2025)
- **Unified experience**: Every operation works everywhere
- **Real search**: Content indexing, semantic understanding, instant results
- **Modern foundation**: Built for performance and extensibility
@@ -118,6 +124,7 @@ We kept the revolutionary vision. We rebuilt the foundation to deliver it.
```
**Cross-device operations made simple:**
- Drag photos from your iPhone to external storage
- Search finds files regardless of which device they're on
- Organize distributed media collections as if they were local
@@ -138,8 +145,9 @@ spacedrive server --host 0.0.0.0 --port 8080
```
**Perfect for:**
- **Creators**: Manage media across multiple workstations
- **Developers**: Sync projects between dev environments
- **Families**: Shared photo organization across devices
- **Self-hosters**: Private cloud with true file management
@@ -150,6 +158,7 @@ Access your files from any browser, anywhere. Full Spacedrive functionality with
## Architecture: Built to Last
### Self-Contained Libraries
```
My Photos.sdlibrary/
├── library.json # Configuration & device registry
@@ -159,11 +168,13 @@ My Photos.sdlibrary/
```
**Portable by design:**
- **Backup** = copy the folder
- **Share** = send the folder
- **Migrate** = move the folder
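Because a library is a plain folder, the three bullets above reduce to ordinary filesystem operations. A minimal sketch (Python; the `library.json` payload and folder names here are invented for illustration):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout: a .sdlibrary is an ordinary folder, so backup/share/
# migrate are plain copy/send/move operations on that folder.
root = Path(tempfile.mkdtemp())
src = root / "My Photos.sdlibrary"
src.mkdir()
(src / "library.json").write_text('{"name": "My Photos"}')

# "Backup = copy the folder"
backup = root / "My Photos (backup).sdlibrary"
shutil.copytree(src, backup)

print(backup / "library.json")
```

The same operation with `shutil.move` would be "migrate = move the folder"; nothing library-specific is needed.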
### Unified Operations
No more confusion between "indexed" and "direct" files. Every file operation works the same way:
- **Indexed locations**: Rich metadata, lightning search, smart organization
@@ -171,16 +182,18 @@ No more confusion between "indexed" and "direct" files. Every file operation wor
- **Hybrid mode**: Best of both worlds automatically
### Real Search Engine
```
🔍 Search: "sunset photos from vacation"
Results across all devices:
📱 iPhone/Photos/Vacation2024/sunset_beach.jpg
💾 External/Backup/2024/vacation_sunset.mov
☁️ iCloud/Memories/golden_hour_sunset.heic
```
**Beyond filename matching:**
- Full-text content search in documents
- Image recognition and scene detection
- Vector search for semantic queries
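To make the vector-search bullet concrete, here is a toy sketch of semantic ranking by cosine similarity. This is illustrative only, not Spacedrive's engine: the paths, embedding vectors, and query vector are all made up, and a real system would produce embeddings with a learned model.

```python
from math import sqrt

# Invented 3-dimensional "embeddings" keyed by file path.
index = {
    "iPhone/Photos/Vacation2024/sunset_beach.jpg": [0.9, 0.8, 0.1],
    "External/Backup/2024/vacation_sunset.mov": [0.8, 0.9, 0.2],
    "Documents/taxes_2024.pdf": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def search(query_vec, k=2):
    # Rank every indexed path by similarity to the query vector.
    ranked = sorted(index, key=lambda p: cosine(index[p], query_vec), reverse=True)
    return ranked[:k]

# Hypothetical embedding of the query "sunset photos from vacation".
results = search([0.85, 0.85, 0.1])
print(results)
```

The two vacation files rank above the tax PDF because their vectors point in a similar direction to the query's, regardless of filename.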
@@ -189,24 +202,28 @@ Results across all devices:
## What's Shipping: The VDFS Roadmap
### Q1 2025: Foundation
- **Core rewrite** with unified file system
- **Working CLI** with daemon architecture
- 🚧 **Desktop app** rebuilt on new foundation
- 🚧 **Real search** with content indexing
### Q2 2025: Device Communication
- 🔄 **P2P discovery** and secure connections
- 🔄 **Cross-device operations** (copy, move, sync)
- 🔄 **Mobile apps** with desktop feature parity
- 🔄 **Web interface** for universal access
### Q3 2025: Intelligence
- 🎯 **AI-powered organization** with local models
- 🎯 **Smart collections** and auto-tagging
- 🎯 **Cloud integrations** (iCloud, Google Drive, etc.)
- 🎯 **Advanced media analysis**
### Q4 2025: Ecosystem
- 🚀 **Extension system** for community features
- 🚀 **Professional tools** for creators and teams
- 🚀 **Enterprise features** and compliance
@@ -236,6 +253,7 @@ spacedrive job monitor
```
**Working today:**
- ✅ Multi-location management
- ✅ Smart indexing with progress tracking
- ✅ Content-aware search
@@ -245,18 +263,21 @@ spacedrive job monitor
## Sustainable Open Source
### Always Free & Open
- **Core file management** and VDFS operations
- **Local search** and organization features
- **P2P sync** between your own devices
- **Privacy-first** architecture
### Premium Value-Adds
- **Spacedrive Cloud**: Cross-internet sync and backup
- **Advanced AI**: Professional media analysis and organization
- **Team features**: Shared libraries and collaboration
- **Enterprise**: SSO, compliance, and enterprise deployment
### Community First
- **Weekly dev streams** showing real progress
- **Open roadmap** with community voting
- **Contributor rewards** and recognition program
@@ -265,20 +286,26 @@ spacedrive job monitor
## Why It Will Work This Time
### Technical Maturity
From 500k installs and 34k stars, we learned what users actually need:
- **Performance first**: Sub-second search, responsive UI, efficient sync
- **Reliability**: Robust error handling, data integrity, graceful failures
- **Simplicity**: Complex features with simple interfaces
### Market Reality
The world has changed since 2022:
- **Privacy concerns** have intensified with cloud services
- **AI expectations** for semantic search and smart organization
- **Multi-device life** is now universal, not niche
- **Creator economy** needs professional file management tools
### Execution Discipline
No more feature paralysis:
- **Ship working features**, enhance over time
- **Measure real usage**, not just code metrics
- **Community feedback** drives priority decisions
@@ -287,18 +314,21 @@ No more feature paralysis:
## Get Involved
### For Users
- **Star the repo** to follow development
- 💬 **Join Discord** for updates and early access
- 🐛 **Report issues** and request features
- 📖 **Beta testing** as features ship
### For Developers
- 🔧 **Contribute code** to the core rewrite
- 📚 **Improve docs** and tutorials
- 🧪 **Write tests** and benchmarks
- 🎨 **Design interfaces** for new features
### For Organizations
- 💼 **Early access** to enterprise features
- 🤝 **Partnership** opportunities
- 💰 **Sponsorship** and development funding
@@ -320,12 +350,12 @@ The future of file management isn't about better folder hierarchies or cloud sto
<p align="center">
<strong>Follow the comeback</strong><br/>
<a href="https://spacedrive.com">Website</a> ·
<a href="https://discord.gg/gTaF2Z44f5">Discord</a> ·
<a href="https://x.com/spacedriveapp">Twitter</a> ·
-<a href="https://github.com/spacedriveapp/spacedrive/tree/main/core-new">Core Development</a>
+<a href="https://github.com/spacedriveapp/spacedrive/tree/main/core">Core Development</a>
</p>
<p align="center">
<em>The file manager that should exist. Finally being built right.</em>
</p>

@@ -1,72 +0,0 @@
#!/usr/bin/env python3
import os
import sys
import argparse
from pathlib import Path


def combine_paths(paths, output_file):
    """Combine multiple paths into a single text file."""
    with open(output_file, 'w', encoding='utf-8') as out:
        for path_str in paths:
            path = Path(path_str)
            if not path.exists():
                print(f"Warning: {path} does not exist, skipping...")
                continue
            out.write(f"\n{'='*80}\n")
            out.write(f"PATH: {path}\n")
            out.write(f"{'='*80}\n\n")
            if path.is_file():
                try:
                    with open(path, 'r', encoding='utf-8') as f:
                        out.write(f.read())
                    out.write('\n\n')
                except UnicodeDecodeError:
                    out.write("[Binary file - skipped]\n\n")
                except Exception as e:
                    out.write(f"[Error reading file: {e}]\n\n")
            elif path.is_dir():
                for root, dirs, files in os.walk(path):
                    # Skip common ignored directories
                    dirs[:] = [d for d in dirs if not d.startswith('.') and d not in ['node_modules', '__pycache__', 'target', 'dist', 'build']]
                    for file in files:
                        if file.startswith('.'):
                            continue
                        file_path = Path(root) / file
                        out.write(f"\n{'-'*60}\n")
                        out.write(f"FILE: {file_path}\n")
                        out.write(f"{'-'*60}\n\n")
                        try:
                            with open(file_path, 'r', encoding='utf-8') as f:
                                out.write(f.read())
                            out.write('\n\n')
                        except UnicodeDecodeError:
                            out.write("[Binary file - skipped]\n\n")
                        except Exception as e:
                            out.write(f"[Error reading file: {e}]\n\n")


def main():
    parser = argparse.ArgumentParser(description='Combine multiple paths into a single text file')
    parser.add_argument('paths', nargs='+', help='Paths to combine')
    parser.add_argument('-o', '--output', help='Output file name (default: foldername.txt based on first path)')
    args = parser.parse_args()
    if args.output:
        output_file = args.output
    else:
        first_path = Path(args.paths[0])
        folder_name = first_path.name if first_path.is_dir() else first_path.stem
        output_file = f"{folder_name}.txt"
    combine_paths(args.paths, output_file)
    print(f"Combined {len(args.paths)} paths into {output_file}")


if __name__ == "__main__":
    main()


@@ -1,161 +0,0 @@
[package]
name = "sd-core-new"
version = "0.1.0"
edition = "2021"
[features]
default = []
# FFmpeg support for video thumbnails
ffmpeg = ["dep:sd-ffmpeg"]
[workspace]
members = [
"benchmarks",
"task-validator",
]
[dependencies]
# Async runtime
tokio = { version = "1.40", features = ["full"] }
futures = "0.3"
async-trait = "0.1"
# Database
sea-orm = { version = "1.1", features = ["sqlx-sqlite", "runtime-tokio-rustls", "macros", "uuid", "with-json", "with-chrono"] }
sea-orm-migration = { version = "1.1", features = ["runtime-tokio-rustls", "sqlx-sqlite"] }
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "sqlite"] }
# API (temporarily disabled)
# axum = "0.7"
# async-graphql = "7.0"
# async-graphql-axum = "7.0"
# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
toml = "0.8"
int-enum = "1.1"
strum = { version = "0.26", features = ["derive"] }
# Error handling
thiserror = "1.0"
anyhow = "1.0"
# File operations
notify = "6.1" # File system watching
blake3 = "1.5" # Content addressing
sha2 = "0.10" # SHA-256 hashing for CAS IDs
hex = "0.4" # Hex encoding for volume fingerprints
# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# Indexer rules engine
globset = { version = "0.4", features = ["serde1"] }
gix-ignore = { version = "0.11", features = ["serde"] }
futures-concurrency = "7.6"
# Job system dependencies
rmp = "0.8" # MessagePack core types
rmp-serde = "1.3" # MessagePack serialization for job state
inventory = "0.3" # Automatic job registration
sd-task-system = { path = "../crates/task-system" }
spacedrive-jobs-derive = { path = "spacedrive-jobs-derive" } # Job derive macros
# Media processing dependencies
sd-images = { path = "../crates/images" }
sd-ffmpeg = { path = "../crates/ffmpeg", optional = true }
webp = "0.3"
image = "0.25"
tokio-rustls = "0.26"
# Networking
# Iroh P2P networking
iroh = "0.28"
iroh-net = "0.28"
iroh-blobs = "0.28"
iroh-gossip = "0.28"
# Serialization for protocols
serde_cbor = "0.11"
# Cryptography for signing (backward compatibility)
ed25519-dalek = "2.1"
# Legacy networking (kept for compatibility during transition)
mdns-sd = "0.13" # mDNS service discovery (DEPRECATED - use libp2p DHT)
snow = "0.9" # Noise Protocol encryption (DEPRECATED - use libp2p noise)
ring = "0.16" # Crypto primitives
argon2 = "0.5" # Password derivation
aes-gcm = "0.10" # AES-GCM encryption for secure storage
async-stream = "0.3" # File streaming
backoff = "0.4" # Retry logic
bincode = "2.0.0-rc.3" # Efficient encoding
# futures-util = "0.3" # WebSocket utilities (disabled for now)
rustls = { version = "0.23", features = ["aws_lc_rs"] } # TLS implementation (DEPRECATED - use libp2p noise)
rcgen = "0.11" # Certificate generation (DEPRECATED - use libp2p noise)
tokio-stream = "0.1" # Async streams
# BIP39 wordlist support
bip39 = "2.0"
# Additional cryptography
chacha20poly1305 = "0.10" # Authenticated encryption for chunk-level security
hkdf = "0.12" # Key derivation function for session keys
x25519-dalek = "2.0"
hmac = "0.12"
# Network utilities
if-watch = "3.0"
local-ip-address = "0.5"
# colored already defined above
# Utils
uuid = { version = "1.11", features = ["v4", "v5", "v7", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
once_cell = "1.20"
dirs = "5.0"
whoami = "1.5"
rand = "0.8" # Random number generation for secure delete
tempfile = "3.14" # Temporary directories for testing
# Secure storage
keyring = "3.6"
# CLI dependencies
clap = { version = "4.5", features = ["derive", "env"] }
comfy-table = "7.1"
dialoguer = "0.11"
indicatif = "0.17"
owo-colors = "4.1"
supports-color = "3.0"
console = "0.15"
colored = "2.1"
[build-dependencies]
vergen = { version = "8", features = ["git", "gitcl", "cargo"] }
ratatui = "0.29"
crossterm = "0.28"
indicatif = "0.17"
console = "0.15"
dialoguer = "0.11"
colored = "2.1"
comfy-table = "7.1"
owo-colors = "4.1"
supports-color = "3.0"
# Platform specific
[target.'cfg(unix)'.dependencies]
libc = "0.2"
[[bin]]
name = "spacedrive"
path = "src/bin/cli.rs"
[dev-dependencies]
tempfile = "3.14"
pretty_assertions = "1.4"
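The manifest above pulls in `sha2` "for CAS IDs", i.e. content-addressable identifiers. One plausible scheme (illustrative only, not the crate's or Spacedrive's actual algorithm): hash the file bytes and use the hex digest as the ID, so identical content on different devices maps to the same identifier.

```python
import hashlib
import os
import tempfile

def cas_id(path: str) -> str:
    # Stream the file through SHA-256 in 1 MiB chunks so large files
    # never have to fit in memory; the hex digest is the content ID.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Two copies of the same bytes yield the same CAS ID, enabling deduplication.
fd, p = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello")
print(cas_id(p))  # sha256 of b"hello"
```

Deduplication then reduces to a dictionary lookup keyed by the ID.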


@@ -1,39 +0,0 @@
[package]
name = "sd-bench"
version = "0.1.0"
edition = "2021"
[dependencies]
anyhow = "1.0"
serde = { version = "1.0", features = ["derive"] }
serde_yaml = "0.9"
serde_json = "1.0"
clap = { version = "4.5", features = ["derive", "env"] }
tokio = { version = "1.40", features = ["full"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
rand = "0.8"
regex = "1.10"
walkdir = "2.5"
indicatif = "0.17"
humantime-serde = "1.1"
humantime = "2.1"
chrono = { version = "0.4", features = ["serde"] }
dirs = "5.0"
uuid = { version = "1.11", features = ["v4", "serde"] }
tempfile = "3.14"
sd-core-new = { path = ".." }
serde_with = { version = "3.9", features = ["json"] }
async-trait = "0.1"
blake3 = "1.5"
sysinfo = { version = "0.30", default-features = false, features = ["multithread"] }
[lib]
name = "sd_bench"
path = "src/lib.rs"
[[bin]]
name = "sd-bench"
path = "src/bin/sd-bench-new.rs"


@@ -1,13 +0,0 @@
use vergen::EmitBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
	// Emit the instructions
	EmitBuilder::builder()
		.git_sha(true)
		.git_commit_timestamp()
		.git_branch()
		.cargo_opt_level()
		.cargo_target_triple()
		.emit()?;
	Ok(())
}


@@ -1,136 +0,0 @@
//! Showcase of the production-ready indexer implementation
//!
//! This example demonstrates the sophisticated features of our new indexer:
//! - Multi-phase processing (Discovery → Processing → Content)
//! - Hardcoded filtering with should_skip_path
//! - Incremental indexing with inode tracking
//! - Performance metrics and reporting
//! - Full resumability with checkpoints
use std::path::Path;

fn main() {
	println!("🚀 Spacedrive Production Indexer Showcase\n");

	// Demonstrate the filtering system
	showcase_filtering();

	// Show the modular architecture
	showcase_architecture();

	// Display sample metrics output
	showcase_metrics();
}

fn showcase_filtering() {
	println!("📁 Smart Filtering System");
	println!("========================\n");

	// Import the actual function from our implementation
	use sd_core_new::operations::indexing::filters::should_skip_path;

	let test_paths = vec![
		// Files that should be skipped
		(".DS_Store", true, "macOS system file"),
		("Thumbs.db", true, "Windows thumbnail cache"),
		("node_modules", true, "npm packages directory"),
		(".git", true, "Git repository data"),
		("target", true, "Rust build directory"),
		("__pycache__", true, "Python cache"),
		(".mypy_cache", true, "Python type checker cache"),
		// Files that should NOT be skipped
		("document.pdf", false, "Regular document"),
		("photo.jpg", false, "Image file"),
		("src", false, "Source code directory"),
		(".config", false, "User config directory (allowed)"),
		("project.rs", false, "Rust source file"),
	];

	println!("Testing path filtering:");
	for (path_str, should_skip, description) in test_paths {
		let path = Path::new(path_str);
		let skipped = should_skip_path(path);
		let result = if skipped == should_skip { "✅" } else { "❌" };
		println!(
			"  {} {:20} -> {:8} ({})",
			result,
			path_str,
			if skipped { "SKIP" } else { "INDEX" },
			description
		);
	}

	println!("\n💡 Note: This is where the future IndexerRuleEngine will integrate!");
	println!("   The should_skip_path function has a clear TODO marker for rules system.\n");
}

fn showcase_architecture() {
	println!("🏗️ Modular Architecture");
	println!("=======================\n");

	println!("core-new/src/operations/indexing/");
	println!("├── mod.rs             # Module exports and documentation");
	println!("├── job.rs             # Main IndexerJob with state machine");
	println!("├── state.rs           # Resumable state management");
	println!("├── entry.rs           # Entry processing with inode support");
	println!("├── filters.rs         # Hardcoded filtering (→ future rules)");
	println!("├── metrics.rs         # Performance tracking");
	println!("├── change_detection/  # Incremental indexing");
	println!("│   └── mod.rs         # Inode-based change detection");
	println!("└── phases/            # Multi-phase processing");
	println!("    ├── discovery.rs   # Directory walking");
	println!("    ├── processing.rs  # Database operations");
	println!("    └── content.rs     # CAS ID generation\n");

	println!("Key Features:");
	println!("✅ Full resumability with checkpoint system");
	println!("✅ Inode tracking for move/rename detection");
	println!("✅ Batch processing (1000 items per batch)");
	println!("✅ Non-critical error collection");
	println!("✅ Path prefix optimization");
	println!("✅ Content deduplication ready\n");
}

fn showcase_metrics() {
	println!("📊 Performance Metrics");
	println!("=====================\n");

	// Show what metrics output looks like
	let sample_output = r#"Indexing completed in 12.5s:
- Files: 10,234 (818.7/s)
- Directories: 1,523 (121.8/s)
- Total size: 2.34 GB (191.23 MB/s)
- Database writes: 10,234 in 11 batches (avg 930.4 items/batch)
- Errors: 5 (skipped 1,523 paths)
- Phase timing: discovery 5.2s, processing 6.1s, content 1.2s"#;

	println!("Sample metrics output:");
	println!("{}\n", sample_output);

	// Show the indexer progress phases
	println!("Progress Tracking Phases:");
	println!("1⃣ Discovery: 'Found 245 entries in /Users/demo/Documents'");
	println!("2⃣ Processing: 'Batch 3/11' (database operations)");
	println!("3⃣ Content: 'Generating content identities (456/1234)'");
	println!("4⃣ Finalizing: 'Cleaning up and saving final state'\n");

	// Show change detection in action
	println!("🔄 Incremental Indexing Example:");
	println!("First run:  Indexed 5,000 files");
	println!("Second run: Detected 3 new, 5 modified, 2 moved files");
	println!("            Only processed 10 files instead of 5,000!");
	println!("            Used inode tracking to detect moves efficiently\n");
}

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn test_showcase_runs() {
		// Just verify our showcase compiles and runs
		showcase_filtering();
		showcase_architecture();
		showcase_metrics();
	}
}
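The expectations in the showcase's test table can be captured in a few lines. Below is a Python re-sketch of the same filtering behavior, inferred from the table above rather than ported from the Rust `filters.rs`; the "skip hidden entries unless allowed" rule is an assumption that happens to satisfy every row.

```python
# Assumed behavior, inferred from the showcase's test table (not the Rust source).
SKIP_NAMES = {".DS_Store", "Thumbs.db", "node_modules", ".git",
              "target", "__pycache__", ".mypy_cache"}
ALLOWED_DOTFILES = {".config"}  # dot-directories the indexer still wants

def should_skip(name: str) -> bool:
    if name in SKIP_NAMES:
        return True
    # Hypothetical rule: skip other hidden entries unless explicitly allowed.
    return name.startswith(".") and name not in ALLOWED_DOTFILES

checks = {".DS_Store": True, "node_modules": True, "target": True,
          "document.pdf": False, "src": False, ".config": False}
print(all(should_skip(n) == expect for n, expect in checks.items()))
```

The real implementation operates on `Path` values and is the integration point for the future rules engine, but the decision logic reduces to set membership like this.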

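The incremental-indexing example above rests on one property: on POSIX filesystems, renaming a file within a volume preserves its inode number. A minimal Python sketch of that detection step (assumes a POSIX system; the field names and flow are illustrative, not Spacedrive's change-detection code):

```python
import os
import tempfile

# First indexer pass: remember each file's inode -> path.
d = tempfile.mkdtemp()
old = os.path.join(d, "a.txt")
with open(old, "w") as f:
    f.write("data")
seen = {os.stat(old).st_ino: old}

# The user renames/moves the file between passes.
new = os.path.join(d, "b.txt")
os.rename(old, new)

# Second pass: a "new" path whose inode is already known is a move,
# not a fresh file, so its metadata can be carried over cheaply.
ino = os.stat(new).st_ino
status = "moved" if ino in seen else "new"
print(status)
```

This is why the second run above can reconcile 5,000 files by touching only the handful that actually changed.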

@@ -1,78 +0,0 @@
//! Shared context providing access to core application components.
use crate::{
	config::JobLoggingConfig,
	device::DeviceManager,
	infrastructure::actions::manager::ActionManager,
	infrastructure::events::EventBus,
	keys::library_key_manager::LibraryKeyManager,
	library::LibraryManager,
	services::networking::NetworkingService,
	volume::VolumeManager,
};
use std::{path::PathBuf, sync::Arc};
use tokio::sync::RwLock;

/// Shared context providing access to core application components.
#[derive(Clone)]
pub struct CoreContext {
	pub events: Arc<EventBus>,
	pub device_manager: Arc<DeviceManager>,
	pub library_manager: Arc<LibraryManager>,
	pub volume_manager: Arc<VolumeManager>,
	pub library_key_manager: Arc<LibraryKeyManager>,
	// This is wrapped in an RwLock to allow it to be set after initialization
	pub action_manager: Arc<RwLock<Option<Arc<ActionManager>>>>,
	pub networking: Arc<RwLock<Option<Arc<NetworkingService>>>>,
	// Job logging configuration
	pub job_logging_config: Option<JobLoggingConfig>,
	pub job_logs_dir: Option<PathBuf>,
}

impl CoreContext {
	/// Create a new context with the given components
	pub fn new(
		events: Arc<EventBus>,
		device_manager: Arc<DeviceManager>,
		library_manager: Arc<LibraryManager>,
		volume_manager: Arc<VolumeManager>,
		library_key_manager: Arc<LibraryKeyManager>,
	) -> Self {
		Self {
			events,
			device_manager,
			library_manager,
			volume_manager,
			library_key_manager,
			action_manager: Arc::new(RwLock::new(None)),
			networking: Arc::new(RwLock::new(None)),
			job_logging_config: None,
			job_logs_dir: None,
		}
	}

	/// Set job logging configuration
	pub fn set_job_logging(&mut self, config: JobLoggingConfig, logs_dir: PathBuf) {
		self.job_logging_config = Some(config);
		self.job_logs_dir = Some(logs_dir);
	}

	/// Helper method for services to get the networking service
	pub async fn get_networking(&self) -> Option<Arc<NetworkingService>> {
		self.networking.read().await.clone()
	}

	/// Method for Core to set networking after it's initialized
	pub async fn set_networking(&self, networking: Arc<NetworkingService>) {
		*self.networking.write().await = Some(networking);
	}

	/// Helper method to get the action manager
	pub async fn get_action_manager(&self) -> Option<Arc<ActionManager>> {
		self.action_manager.read().await.clone()
	}

	/// Method for Core to set action manager after it's initialized
	pub async fn set_action_manager(&self, action_manager: Arc<ActionManager>) {
		*self.action_manager.write().await = Some(action_manager);
	}
}


@@ -1,499 +0,0 @@
#![allow(warnings)]
//! Spacedrive Core v2
//!
//! A unified, simplified architecture for cross-platform file management.

pub mod config;
pub mod context;
pub mod device;
pub mod domain;
pub mod file_type;
pub mod infrastructure;
pub mod keys;
pub mod library;
pub mod location;
pub mod operations;
pub mod services;
pub mod shared;
pub mod test_framework;
pub mod volume;

use services::networking::protocols::PairingProtocolHandler;
use services::networking::utils::logging::NetworkLogger;

// Compatibility module for legacy networking references
pub mod networking {
	pub use crate::services::networking::*;
}

use crate::config::AppConfig;
use crate::context::CoreContext;
use crate::device::DeviceManager;
use crate::infrastructure::actions::manager::ActionManager;
use crate::infrastructure::events::{Event, EventBus};
use crate::library::LibraryManager;
use crate::services::Services;
use crate::volume::{VolumeDetectionConfig, VolumeManager};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::{mpsc, RwLock};
use tracing::{error, info};

/// Pending pairing request information
#[derive(Debug, Clone)]
pub struct PendingPairingRequest {
	pub request_id: uuid::Uuid,
	pub device_id: uuid::Uuid,
	pub device_name: String,
	pub received_at: chrono::DateTime<chrono::Utc>,
}

/// Spacedrop request message
#[derive(serde::Serialize, serde::Deserialize)]
struct SpacedropRequest {
	transfer_id: uuid::Uuid,
	file_path: String,
	sender_name: String,
	message: Option<String>,
	file_size: u64,
}

// NOTE: SimplePairingUI has been moved to CLI infrastructure
// See: src/infrastructure/cli/pairing_ui.rs for CLI-specific implementations

/// Bridge between networking events and core events
pub struct NetworkEventBridge {
	network_events: mpsc::UnboundedReceiver<networking::NetworkEvent>,
	core_events: Arc<EventBus>,
}

impl NetworkEventBridge {
	pub fn new(
		network_events: mpsc::UnboundedReceiver<networking::NetworkEvent>,
		core_events: Arc<EventBus>,
	) -> Self {
		Self {
			network_events,
			core_events,
		}
	}

	pub async fn run(mut self) {
		while let Some(event) = self.network_events.recv().await {
			if let Some(core_event) = self.translate_event(event) {
				self.core_events.emit(core_event);
			}
		}
	}

	fn translate_event(&self, event: networking::NetworkEvent) -> Option<Event> {
		match event {
			networking::NetworkEvent::ConnectionEstablished { device_id, .. } => {
				Some(Event::DeviceConnected {
					device_id,
					device_name: "Connected Device".to_string(),
				})
			}
			networking::NetworkEvent::ConnectionLost { device_id, .. } => {
				Some(Event::DeviceDisconnected { device_id })
			}
			networking::NetworkEvent::PairingCompleted {
				device_id,
				device_info,
			} => Some(Event::DeviceConnected {
				device_id,
				device_name: device_info.device_name,
			}),
			_ => None, // Some events don't map to core events
		}
	}
}

/// The main context for all core operations
pub struct Core {
	/// Application configuration
	pub config: Arc<RwLock<AppConfig>>,
	/// Device manager
	pub device: Arc<DeviceManager>,
	/// Library manager
	pub libraries: Arc<LibraryManager>,
	/// Volume manager
	pub volumes: Arc<VolumeManager>,
	/// Event bus for state changes
	pub events: Arc<EventBus>,
	/// Container for high-level services
	pub services: Services,
	/// Shared context for core components
	pub context: Arc<CoreContext>,
}

impl Core {
	/// Initialize a new Core instance with default data directory
	pub async fn new() -> Result<Self, Box<dyn std::error::Error>> {
		let data_dir = crate::config::default_data_dir()?;
		Self::new_with_config(data_dir).await
	}

	/// Initialize a new Core instance with custom data directory
	pub async fn new_with_config(data_dir: PathBuf) -> Result<Self, Box<dyn std::error::Error>> {
		info!("Initializing Spacedrive Core at {:?}", data_dir);

		// 1. Load or create app config
		let config = AppConfig::load_or_create(&data_dir)?;
		config.ensure_directories()?;
		let config = Arc::new(RwLock::new(config));

		// 2. Initialize device manager
		let device = Arc::new(DeviceManager::init_with_path(&data_dir)?);

		// Set the global device ID for legacy compatibility
		shared::utils::set_current_device_id(device.device_id()?);

		// 3. Create event bus
		let events = Arc::new(EventBus::default());

		// 4. Initialize volume manager
		let volume_config = VolumeDetectionConfig::default();
		let device_id = device.device_id()?;
		let volumes = Arc::new(VolumeManager::new(device_id, volume_config, events.clone()));

		// 5. Initialize volume detection
		// info!("Initializing volume detection...");
		// match volumes.initialize().await {
		// 	Ok(()) => info!("Volume manager initialized"),
		// 	Err(e) => error!("Failed to initialize volume manager: {}", e),
		// }

		// 6. Initialize library manager with libraries directory
		let libraries_dir = config.read().await.libraries_dir();
		let libraries = Arc::new(LibraryManager::new_with_dir(libraries_dir, events.clone()));

		// 7. Initialize library key manager
		let library_key_manager =
			Arc::new(crate::keys::library_key_manager::LibraryKeyManager::new()?);

		// 8. Register all job types
		info!("Registering job types...");
		crate::operations::register_all_jobs();
		info!("Job types registered");

		// 9. Create the context that will be shared with services
		let mut context_inner = CoreContext::new(
			events.clone(),
			device.clone(),
			libraries.clone(),
			volumes.clone(),
			library_key_manager.clone(),
		);

		// Set job logging configuration if enabled
		let app_config = config.read().await;
		if app_config.job_logging.enabled {
			context_inner
				.set_job_logging(app_config.job_logging.clone(), app_config.job_logs_dir());
		}
		drop(app_config);

		let context = Arc::new(context_inner);

		// 10. Initialize services first, passing them the context
		let services = Services::new(context.clone());

		// 11. Auto-load all libraries with context for job manager initialization
		info!("Loading existing libraries...");
		let loaded_libraries: Vec<Arc<crate::library::Library>> =
			match libraries.load_all_with_context(context.clone()).await {
				Ok(count) => {
					info!("Loaded {} libraries", count);
					libraries.list().await
				}
				Err(e) => {
					error!("Failed to load libraries: {}", e);
					vec![]
				}
			};

		// Initialize sidecar manager for each loaded library
		for library in &loaded_libraries {
			info!("Initializing sidecar manager for library {}", library.id());
			if let Err(e) = services.sidecar_manager.init_library(&library).await {
				error!(
					"Failed to initialize sidecar manager for library {}: {}",
					library.id(),
					e
				);
			} else {
				// Run bootstrap scan
				if let Err(e) = services.sidecar_manager.bootstrap_scan(&library).await {
					error!(
						"Failed to run sidecar bootstrap scan for library {}: {}",
						library.id(),
						e
					);
				}
			}
		}

		info!("Starting background services...");
		match services.start_all().await {
			Ok(()) => info!("Background services started"),
			Err(e) => error!("Failed to start services: {}", e),
		}

		// 12. Initialize ActionManager and set it in context
		let action_manager = Arc::new(crate::infrastructure::actions::manager::ActionManager::new(
			context.clone(),
		));
		context.set_action_manager(action_manager).await;

		// 13. Emit startup event
		events.emit(Event::CoreStarted);

		Ok(Self {
			config,
			device,
			libraries,
			volumes,
			events,
			services,
			context,
		})
	}

	/// Get the application configuration
	pub fn config(&self) -> Arc<RwLock<AppConfig>> {
		self.config.clone()
	}

	/// Initialize networking using master key
	pub async fn init_networking(&mut self) -> Result<(), Box<dyn std::error::Error>> {
		self.init_networking_with_logger(Arc::new(networking::SilentLogger))
			.await
	}

	/// Initialize networking with custom logger
	pub async fn init_networking_with_logger(
		&mut self,
		logger: Arc<dyn networking::NetworkLogger>,
	) -> Result<(), Box<dyn std::error::Error>> {
		logger.info("Initializing networking...").await;

		// Initialize networking service through the services container
		let data_dir = self.config.read().await.data_dir.clone();
		self.services
			.init_networking(
				self.device.clone(),
				self.services.library_key_manager.clone(),
				data_dir,
			)
			.await?;

		// Start the networking service
		self.services.start_networking().await?;

		// Get the networking service for protocol registration
		if let Some(networking_service) = self.services.networking() {
			// Register default protocol handlers
			self.register_default_protocols(&networking_service).await?;

			// Set up event bridge to integrate with core event system
			let event_bridge = NetworkEventBridge::new(
				networking_service
					.subscribe_events()
					.await
					.unwrap_or_else(|| {
						let (_, rx) = tokio::sync::mpsc::unbounded_channel();
						rx
					}),
				self.events.clone(),
			);
			tokio::spawn(event_bridge.run());

			// Make networking service available to the context for other services
			self.context.set_networking(networking_service).await;
		}

		logger.info("Networking initialized successfully").await;
		Ok(())
	}

	/// Register default protocol handlers
async fn register_default_protocols(
&self,
networking: &networking::NetworkingService,
) -> Result<(), Box<dyn std::error::Error>> {
let logger = std::sync::Arc::new(networking::utils::logging::ConsoleLogger);
// Get command sender for the pairing handler's state machine
let command_sender = networking
.command_sender()
.ok_or("NetworkingEventLoop command sender not available")?
.clone();
// Get data directory from config
let data_dir = {
let config = self.config.read().await;
config.data_dir.clone()
};
let pairing_handler = Arc::new(networking::protocols::PairingProtocolHandler::new_with_persistence(
networking.identity().clone(),
networking.device_registry(),
logger.clone(),
command_sender,
data_dir,
));
// Try to load persisted sessions, but don't fail if there's an error
if let Err(e) = pairing_handler.load_persisted_sessions().await {
logger.warn(&format!("Failed to load persisted pairing sessions: {}. Starting with empty sessions.", e)).await;
}
// Start the state machine task for pairing
networking::protocols::PairingProtocolHandler::start_state_machine_task(
pairing_handler.clone(),
);
// Start cleanup task for expired sessions
networking::protocols::PairingProtocolHandler::start_cleanup_task(pairing_handler.clone());
let messaging_handler = networking::protocols::MessagingProtocolHandler::new();
let mut file_transfer_handler =
networking::protocols::FileTransferProtocolHandler::new_default(logger.clone());
// Inject device registry into file transfer handler for encryption
file_transfer_handler.set_device_registry(networking.device_registry());
let protocol_registry = networking.protocol_registry();
{
let mut registry = protocol_registry.write().await;
registry.register_handler(pairing_handler)?;
registry.register_handler(Arc::new(messaging_handler))?;
registry.register_handler(Arc::new(file_transfer_handler))?;
}
Ok(())
}
/// Initialize networking from Arc<Core> - for daemon use
pub async fn init_networking_shared(
core: Arc<Core>,
) -> Result<Arc<Core>, Box<dyn std::error::Error>> {
info!("Initializing networking for shared core...");
// Create a new Core with networking enabled
let mut new_core =
Core::new_with_config(core.config().read().await.data_dir.clone()).await?;
// Initialize networking on the new core
new_core.init_networking().await?;
info!("Networking initialized successfully for shared core");
Ok(Arc::new(new_core))
}
/// Get the networking service (if initialized)
pub fn networking(&self) -> Option<Arc<networking::NetworkingService>> {
self.services.networking()
}
/// Get list of connected devices
pub async fn get_connected_devices(
&self,
) -> Result<Vec<uuid::Uuid>, Box<dyn std::error::Error>> {
Ok(self.services.device.get_connected_devices().await?)
}
/// Get detailed information about connected devices
pub async fn get_connected_devices_info(
&self,
) -> Result<Vec<networking::DeviceInfo>, Box<dyn std::error::Error>> {
Ok(self.services.device.get_connected_devices_info().await?)
}
/// Add a location to the file system watcher
pub async fn add_watched_location(
&self,
location_id: uuid::Uuid,
library_id: uuid::Uuid,
path: std::path::PathBuf,
enabled: bool,
) -> Result<(), Box<dyn std::error::Error>> {
use crate::services::location_watcher::WatchedLocation;
let watched_location = WatchedLocation {
id: location_id,
library_id,
path,
enabled,
};
Ok(self
.services
.location_watcher
.add_location(watched_location)
.await?)
}
/// Remove a location from the file system watcher
pub async fn remove_watched_location(
&self,
location_id: uuid::Uuid,
) -> Result<(), Box<dyn std::error::Error>> {
Ok(self
.services
.location_watcher
.remove_location(location_id)
.await?)
}
/// Update file watching settings for a location
pub async fn update_watched_location(
&self,
location_id: uuid::Uuid,
enabled: bool,
) -> Result<(), Box<dyn std::error::Error>> {
Ok(self
.services
.location_watcher
.update_location(location_id, enabled)
.await?)
}
/// Get all currently watched locations
pub async fn get_watched_locations(
&self,
) -> Vec<crate::services::location_watcher::WatchedLocation> {
self.services.location_watcher.get_watched_locations().await
}
/// Shutdown the core gracefully
pub async fn shutdown(&self) -> Result<(), Box<dyn std::error::Error>> {
info!("Shutting down Spacedrive Core...");
// Networking service is stopped by services.stop_all()
// Stop all services
self.services.stop_all().await?;
// Stop volume monitoring
self.volumes.stop_monitoring().await;
// Close all libraries
self.libraries.close_all().await?;
// Save configuration
self.config.write().await.save()?;
// Emit shutdown event
self.events.emit(Event::CoreShutdown);
info!("Spacedrive Core shutdown complete");
Ok(())
}
}


@@ -1,208 +0,0 @@
//! Library configuration types
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use uuid::Uuid;
/// Library configuration stored in library.json
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LibraryConfig {
/// Version of the configuration format
pub version: u32,
/// Unique identifier for this library
pub id: Uuid,
/// Human-readable name
pub name: String,
/// Optional description
pub description: Option<String>,
/// When the library was created
pub created_at: DateTime<Utc>,
/// When the library was last modified
pub updated_at: DateTime<Utc>,
/// Library-specific settings
pub settings: LibrarySettings,
/// Library statistics
pub statistics: LibraryStatistics,
}
/// Library-specific settings
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LibrarySettings {
/// Whether to generate thumbnails for media files
pub generate_thumbnails: bool,
/// Thumbnail quality (0-100)
pub thumbnail_quality: u8,
/// Whether to enable AI-powered tagging
pub enable_ai_tagging: bool,
/// Whether sync is enabled for this library
pub sync_enabled: bool,
/// Whether the library is encrypted at rest
pub encryption_enabled: bool,
/// Custom thumbnail sizes to generate
pub thumbnail_sizes: Vec<u32>,
/// File extensions to ignore during indexing
pub ignored_extensions: Vec<String>,
/// Maximum file size to index (in bytes)
pub max_file_size: Option<u64>,
/// Whether to automatically track system volumes
pub auto_track_system_volumes: bool,
/// Whether to automatically track external volumes when connected
pub auto_track_external_volumes: bool,
/// Indexer settings (rule toggles and related)
#[serde(default)]
pub indexer: IndexerSettings,
}
impl LibraryConfig {
/// Load library configuration from a JSON file
pub async fn load(path: &std::path::Path) -> Result<Self, super::error::LibraryError> {
let config_data = tokio::fs::read_to_string(path)
.await
.map_err(super::error::LibraryError::IoError)?;
let config: LibraryConfig = serde_json::from_str(&config_data)
.map_err(super::error::LibraryError::JsonError)?;
Ok(config)
}
}
impl Default for LibrarySettings {
fn default() -> Self {
Self {
generate_thumbnails: true,
thumbnail_quality: 85,
enable_ai_tagging: false,
sync_enabled: false,
encryption_enabled: false,
thumbnail_sizes: vec![128, 256, 512],
ignored_extensions: vec![
".tmp".to_string(),
".temp".to_string(),
".cache".to_string(),
".part".to_string(),
],
max_file_size: Some(100 * 1024 * 1024 * 1024), // 100GB
auto_track_system_volumes: true, // Default to true for user convenience
auto_track_external_volumes: false, // Default to false for privacy
indexer: IndexerSettings::default(),
}
}
}
/// Indexer settings controlling rule toggles
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IndexerSettings {
#[serde(default = "IndexerSettings::default_true")]
pub no_system_files: bool,
#[serde(default = "IndexerSettings::default_true")]
pub no_git: bool,
#[serde(default = "IndexerSettings::default_true")]
pub no_dev_dirs: bool,
#[serde(default)]
pub no_hidden: bool,
#[serde(default = "IndexerSettings::default_true")]
pub gitignore: bool,
#[serde(default)]
pub only_images: bool,
}
impl IndexerSettings {
fn default_true() -> bool {
true
}
}
impl Default for IndexerSettings {
fn default() -> Self {
Self {
no_system_files: true,
no_git: true,
no_dev_dirs: true,
no_hidden: false,
gitignore: true,
only_images: false,
}
}
}
/// Library statistics
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LibraryStatistics {
/// Total number of files indexed
pub total_files: u64,
/// Total size of all files in bytes
pub total_size: u64,
/// Number of locations in this library
pub location_count: u32,
/// Number of tags created
pub tag_count: u32,
/// Number of thumbnails generated
pub thumbnail_count: u64,
/// Last time the library was fully indexed
pub last_indexed: Option<DateTime<Utc>>,
/// When these statistics were last updated
pub updated_at: DateTime<Utc>,
}
impl Default for LibraryStatistics {
fn default() -> Self {
Self {
total_files: 0,
total_size: 0,
location_count: 0,
tag_count: 0,
thumbnail_count: 0,
last_indexed: None,
updated_at: Utc::now(),
}
}
}
/// Thumbnail generation metadata
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ThumbnailMetadata {
/// Version of the thumbnail format
pub version: u32,
/// Quality setting used for generation
pub quality: u8,
/// Sizes that were generated
pub sizes: Vec<u32>,
/// When this metadata was created
pub created_at: DateTime<Utc>,
}
impl Default for ThumbnailMetadata {
fn default() -> Self {
Self {
version: 1,
quality: 85,
sizes: vec![128, 256, 512],
created_at: Utc::now(),
}
}
}


@@ -1,238 +0,0 @@
//! Library management system
//!
//! This module provides the core library functionality for Spacedrive.
//! Each library is a self-contained directory with its own database,
//! thumbnails, and other data.
mod config;
mod error;
mod lock;
mod manager;
pub use config::{LibraryConfig, LibrarySettings, LibraryStatistics};
pub use error::{LibraryError, Result};
pub use lock::LibraryLock;
pub use manager::{LibraryManager, DiscoveredLibrary};
use crate::infrastructure::{
database::Database,
jobs::manager::JobManager,
};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::sync::RwLock;
use uuid::Uuid;
/// Represents an open Spacedrive library
pub struct Library {
/// Root directory of the library (the .sdlibrary folder)
path: PathBuf,
/// Library configuration
config: RwLock<LibraryConfig>,
/// Database connection
db: Arc<Database>,
/// Job manager for this library
jobs: Arc<JobManager>,
/// Lock preventing concurrent access
_lock: LibraryLock,
}
impl Library {
/// Get the library ID
pub fn id(&self) -> Uuid {
// The ID never changes after creation, so a non-blocking read is sufficient
self.config
.try_read()
.map(|c| c.id)
.expect("library config lock should not be held across ID reads")
}
/// Get the library name
pub async fn name(&self) -> String {
self.config.read().await.name.clone()
}
/// Get the library path
pub fn path(&self) -> &Path {
&self.path
}
/// Get the database
pub fn db(&self) -> &Arc<Database> {
&self.db
}
/// Get the job manager
pub fn jobs(&self) -> &Arc<JobManager> {
&self.jobs
}
/// Get a copy of the current configuration
pub async fn config(&self) -> LibraryConfig {
self.config.read().await.clone()
}
/// Update library configuration
pub async fn update_config<F>(&self, f: F) -> Result<()>
where
F: FnOnce(&mut LibraryConfig),
{
let mut config = self.config.write().await;
f(&mut config);
config.updated_at = chrono::Utc::now();
// Save to disk
let config_path = self.path.join("library.json");
let json = serde_json::to_string_pretty(&*config)?;
tokio::fs::write(config_path, json).await?;
Ok(())
}
/// Save library configuration to disk
pub async fn save_config(&self, config: &LibraryConfig) -> Result<()> {
let config_path = self.path.join("library.json");
let json = serde_json::to_string_pretty(config)?;
tokio::fs::write(config_path, json).await?;
Ok(())
}
/// Get the thumbnail directory for this library
pub fn thumbnails_dir(&self) -> PathBuf {
self.path.join("thumbnails")
}
/// Get the path for a specific thumbnail with size
pub fn thumbnail_path(&self, cas_id: &str, size: u32) -> PathBuf {
if cas_id.len() < 4 {
// Fallback for short IDs
return self.thumbnails_dir().join(format!("{}_{}.webp", cas_id, size));
}
// Two-level sharding based on first four characters
let shard1 = &cas_id[0..2];
let shard2 = &cas_id[2..4];
self.thumbnails_dir()
.join(shard1)
.join(shard2)
.join(format!("{}_{}.webp", cas_id, size))
}
/// Get the path for any thumbnail size (legacy compatibility)
pub fn thumbnail_path_legacy(&self, cas_id: &str) -> PathBuf {
self.thumbnail_path(cas_id, 256) // Default to 256px
}
/// Save a thumbnail with specific size
pub async fn save_thumbnail(&self, cas_id: &str, size: u32, data: &[u8]) -> Result<()> {
let path = self.thumbnail_path(cas_id, size);
// Ensure parent directory exists
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent).await?;
}
// Write thumbnail
tokio::fs::write(path, data).await?;
Ok(())
}
/// Check if a thumbnail exists for a specific size
pub async fn has_thumbnail(&self, cas_id: &str, size: u32) -> bool {
tokio::fs::metadata(self.thumbnail_path(cas_id, size))
.await
.is_ok()
}
/// Shutdown the library, gracefully stopping all jobs
pub async fn shutdown(&self) -> Result<()> {
// Shutdown the job manager, which will pause all running jobs
self.jobs.shutdown().await?;
// Save config to ensure any updates are persisted
let config = self.config.read().await;
self.save_config(&*config).await?;
Ok(())
}
/// Check if thumbnails exist for all specified sizes
pub async fn has_all_thumbnails(&self, cas_id: &str, sizes: &[u32]) -> bool {
for &size in sizes {
if !self.has_thumbnail(cas_id, size).await {
return false;
}
}
true
}
/// Get thumbnail data for specific size
pub async fn get_thumbnail(&self, cas_id: &str, size: u32) -> Result<Vec<u8>> {
let path = self.thumbnail_path(cas_id, size);
Ok(tokio::fs::read(path).await?)
}
/// Get the best available thumbnail (largest size available)
pub async fn get_best_thumbnail(&self, cas_id: &str, preferred_sizes: &[u32]) -> Result<Option<(u32, Vec<u8>)>> {
// Try sizes in descending order
let mut sizes = preferred_sizes.to_vec();
sizes.sort_by(|a, b| b.cmp(a));
for &size in &sizes {
if self.has_thumbnail(cas_id, size).await {
let data = self.get_thumbnail(cas_id, size).await?;
return Ok(Some((size, data)));
}
}
Ok(None)
}
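`get_best_thumbnail` walks the preferred sizes largest-first and returns the first one that exists on disk. The selection logic, isolated from the filesystem I/O, looks like this (hypothetical `pick_best` helper for illustration, not part of the actual API):

```rust
/// Pick the largest preferred size that is actually available.
fn pick_best(available: &[u32], preferred: &[u32]) -> Option<u32> {
    let mut sizes = preferred.to_vec();
    // Descending order, mirroring `sizes.sort_by(|a, b| b.cmp(a))` above
    sizes.sort_by(|a, b| b.cmp(a));
    sizes.into_iter().find(|s| available.contains(s))
}

fn main() {
    // 512 was never generated, so 256 wins
    assert_eq!(pick_best(&[128, 256], &[128, 256, 512]), Some(256));
    // Nothing generated at all
    assert_eq!(pick_best(&[], &[128, 256]), None);
}
```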
/// Start thumbnail generation job
pub async fn generate_thumbnails(&self, entry_ids: Option<Vec<Uuid>>) -> Result<crate::infrastructure::jobs::handle::JobHandle> {
use crate::operations::media::thumbnail::{ThumbnailJob, ThumbnailJobConfig};
// Read the config once instead of awaiting it per field
let settings = self.config().await.settings;
let config = ThumbnailJobConfig {
sizes: settings.thumbnail_sizes.clone(),
quality: settings.thumbnail_quality,
regenerate: false,
batch_size: 50,
max_concurrent: 4,
};
let job = if let Some(ids) = entry_ids {
ThumbnailJob::for_entries(ids, config)
} else {
ThumbnailJob::new(config)
};
self.jobs().dispatch(job).await
.map_err(|e| LibraryError::JobError(e))
}
/// Update library statistics
pub async fn update_statistics<F>(&self, f: F) -> Result<()>
where
F: FnOnce(&mut LibraryStatistics),
{
self.update_config(|config| {
f(&mut config.statistics);
config.statistics.updated_at = chrono::Utc::now();
}).await
}
}
// Note: Library does not implement Clone due to the exclusive lock
// Use Arc<Library> when you need shared access
/// Current library configuration version
pub const LIBRARY_CONFIG_VERSION: u32 = 2;
/// Library directory extension
pub const LIBRARY_EXTENSION: &str = "sdlibrary";
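The two-level sharding in `Library::thumbnail_path` splits the first four characters of the CAS ID into two directory levels so no single directory accumulates millions of files. A standalone sketch of the same layout (std-only; the `thumbnails_dir` argument stands in for the library's real directory):

```rust
use std::path::PathBuf;

/// Mirror of the two-level sharding in `Library::thumbnail_path`.
fn thumbnail_path(thumbnails_dir: &str, cas_id: &str, size: u32) -> PathBuf {
    if cas_id.len() < 4 {
        // Fallback for short IDs, exactly as in the method above
        return PathBuf::from(thumbnails_dir).join(format!("{}_{}.webp", cas_id, size));
    }
    // Shard on the first four (ASCII hex) characters
    let shard1 = &cas_id[0..2];
    let shard2 = &cas_id[2..4];
    PathBuf::from(thumbnails_dir)
        .join(shard1)
        .join(shard2)
        .join(format!("{}_{}.webp", cas_id, size))
}

fn main() {
    let p = thumbnail_path("thumbnails", "deadbeef", 256);
    assert_eq!(p, PathBuf::from("thumbnails/de/ad/deadbeef_256.webp"));
}
```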


@@ -1,563 +0,0 @@
//! Location management - simplified implementation matching core patterns
pub mod manager;
use crate::{
infrastructure::{
database::entities::{self, entry::EntryKind},
events::{Event, EventBus},
jobs::{handle::JobHandle, output::IndexedOutput, types::JobStatus},
},
library::Library,
operations::indexing::{IndexMode as JobIndexMode, IndexerJob, IndexerJobConfig, PathResolver, rules::RuleToggles},
domain::addressing::SdPath,
};
use sea_orm::{ActiveModelTrait, ActiveValue::Set, ColumnTrait, EntityTrait, QueryFilter, TransactionTrait};
use serde::{Deserialize, Serialize};
use std::{path::PathBuf, sync::Arc};
use tokio::fs;
use tracing::{error, info, warn};
use uuid::Uuid;
pub use manager::LocationManager;
/// Location creation arguments (simplified from production version)
#[derive(Debug, Serialize, Deserialize)]
pub struct LocationCreateArgs {
pub path: PathBuf,
pub name: Option<String>,
pub index_mode: IndexMode,
}
/// Location indexing mode
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum IndexMode {
/// Only scan file/directory structure
Shallow,
/// Quick scan (metadata only)
Quick,
/// Include content hashing for deduplication
Content,
/// Full indexing with content analysis and metadata
Deep,
/// Full indexing with all features
Full,
}
impl From<IndexMode> for JobIndexMode {
fn from(mode: IndexMode) -> Self {
match mode {
IndexMode::Shallow => JobIndexMode::Shallow,
IndexMode::Quick => JobIndexMode::Content,
IndexMode::Content => JobIndexMode::Content,
IndexMode::Deep => JobIndexMode::Deep,
IndexMode::Full => JobIndexMode::Deep,
}
}
}
impl From<&str> for IndexMode {
fn from(s: &str) -> Self {
match s.to_lowercase().as_str() {
"shallow" => IndexMode::Shallow,
"quick" => IndexMode::Quick,
"content" => IndexMode::Content,
"deep" => IndexMode::Deep,
"full" => IndexMode::Full,
_ => IndexMode::Full,
}
}
}
impl std::fmt::Display for IndexMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
IndexMode::Shallow => write!(f, "shallow"),
IndexMode::Quick => write!(f, "quick"),
IndexMode::Content => write!(f, "content"),
IndexMode::Deep => write!(f, "deep"),
IndexMode::Full => write!(f, "full"),
}
}
}
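Note that the `From<&str>` impl above is lossy: parsing is case-insensitive, unknown strings fall back to `Full`, and `Quick`/`Full` later collapse into `Content`/`Deep` via `From<IndexMode> for JobIndexMode`. The string side alone can be sketched as (standalone copy of the mapping, not the real enum):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum IndexMode { Shallow, Quick, Content, Deep, Full }

/// Mirror of `From<&str> for IndexMode` above.
fn parse(s: &str) -> IndexMode {
    match s.to_lowercase().as_str() {
        "shallow" => IndexMode::Shallow,
        "quick" => IndexMode::Quick,
        "content" => IndexMode::Content,
        "deep" => IndexMode::Deep,
        "full" => IndexMode::Full,
        // Unknown strings fall back to the most thorough mode
        _ => IndexMode::Full,
    }
}

fn main() {
    assert_eq!(parse("Deep"), IndexMode::Deep);  // case-insensitive
    assert_eq!(parse("turbo"), IndexMode::Full); // unknown -> Full
}
```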
/// Managed location representation
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ManagedLocation {
pub id: Uuid,
pub name: String,
pub path: PathBuf,
pub device_id: i32,
pub library_id: Uuid,
pub indexing_enabled: bool,
pub index_mode: IndexMode,
pub watch_enabled: bool,
}
/// Location management errors
#[derive(Debug, thiserror::Error)]
pub enum LocationError {
#[error("Database error: {0}")]
Database(#[from] sea_orm::DbErr),
#[error("Database error: {0}")]
DatabaseError(String),
#[error("Path does not exist: {path}")]
PathNotFound { path: PathBuf },
#[error("Path not accessible: {path}")]
PathNotAccessible { path: PathBuf },
#[error("Location already exists: {path}")]
LocationExists { path: PathBuf },
#[error("Location not found: {id}")]
LocationNotFound { id: Uuid },
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
#[error("Invalid path: {0}")]
InvalidPath(String),
#[error("Job error: {0}")]
Job(#[from] crate::infrastructure::jobs::error::JobError),
#[error("Other error: {0}")]
Other(String),
}
pub type LocationResult<T> = Result<T, LocationError>;
/// Create a new location and start indexing (production pattern)
pub async fn create_location(
library: Arc<Library>,
events: &EventBus,
args: LocationCreateArgs,
device_id: i32,
) -> LocationResult<i32> {
let path_str = args
.path
.to_str()
.ok_or_else(|| LocationError::InvalidPath("Non-UTF8 path".to_string()))?;
// Validate path exists
if !args.path.exists() {
return Err(LocationError::PathNotFound { path: args.path });
}
if !args.path.is_dir() {
return Err(LocationError::InvalidPath(
"Path must be a directory".to_string(),
));
}
// Begin transaction to ensure atomicity
let txn = library.db().conn().begin().await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
// Create a root entry for the location directory; the duplicate-location
// check happens later in this same transaction, before commit
let directory_name = args.path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("Unknown")
.to_string();
// Create entry for the location directory
let entry_model = entities::entry::ActiveModel {
uuid: Set(Some(Uuid::new_v4())),
name: Set(directory_name.clone()),
kind: Set(EntryKind::Directory as i32),
extension: Set(None),
metadata_id: Set(None),
content_id: Set(None),
size: Set(0),
aggregate_size: Set(0),
child_count: Set(0),
file_count: Set(0),
created_at: Set(chrono::Utc::now()),
modified_at: Set(chrono::Utc::now()),
accessed_at: Set(None),
permissions: Set(None),
inode: Set(None),
parent_id: Set(None), // Location root has no parent
..Default::default()
};
let entry_record = entry_model.insert(&txn).await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
let entry_id = entry_record.id;
// Add self-reference to closure table
let self_closure = entities::entry_closure::ActiveModel {
ancestor_id: Set(entry_id),
descendant_id: Set(entry_id),
depth: Set(0),
..Default::default()
};
self_closure.insert(&txn).await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
// Add to directory_paths table
let dir_path_entry = entities::directory_paths::ActiveModel {
entry_id: Set(entry_id),
path: Set(path_str.to_string()),
..Default::default()
};
dir_path_entry.insert(&txn).await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
// Check if a location already exists for this entry
let existing = entities::location::Entity::find()
.filter(entities::location::Column::EntryId.eq(entry_id))
.one(&txn)
.await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
if existing.is_some() {
// Rollback transaction
txn.rollback().await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
return Err(LocationError::LocationExists { path: args.path });
}
// Create location record
let location_id = Uuid::new_v4();
let name = args.name.unwrap_or_else(|| {
args.path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("Unknown")
.to_string()
});
let location_model = entities::location::ActiveModel {
id: Set(0), // Auto-increment
uuid: Set(location_id),
device_id: Set(device_id),
entry_id: Set(entry_id),
name: Set(Some(name.clone())),
index_mode: Set(args.index_mode.to_string()),
scan_state: Set("pending".to_string()),
last_scan_at: Set(None),
error_message: Set(None),
total_file_count: Set(0),
total_byte_size: Set(0),
created_at: Set(chrono::Utc::now()),
updated_at: Set(chrono::Utc::now()),
};
let location_record = location_model.insert(&txn).await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
let location_db_id = location_record.id;
// Commit transaction
txn.commit().await
.map_err(|e| LocationError::DatabaseError(e.to_string()))?;
info!("Created location '{}' with ID: {}", name, location_db_id);
// Emit location added event
events.emit(Event::LocationAdded {
library_id: library.id(),
location_id,
path: args.path.clone(),
});
// Start indexing; this dispatches through the library's job manager
start_location_indexing(
library.clone(),
events,
location_db_id,
location_id,
args.path,
args.index_mode,
)
.await?;
Ok(location_db_id)
}
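`create_location` seeds the `entry_closure` table with a single depth-0 self-row for the location root; as the indexer inserts children, each new entry gains one row per ancestor, which is what makes subtree queries a single join. The invariant can be sketched in memory (hypothetical helper types for illustration, not the SeaORM entities):

```rust
#[derive(Debug, PartialEq)]
struct ClosureRow { ancestor: i64, descendant: i64, depth: i64 }

/// Insert an entry: a depth-0 self-row, plus one inherited row per ancestor.
fn insert_entry(rows: &mut Vec<ClosureRow>, id: i64, parent: Option<i64>) {
    rows.push(ClosureRow { ancestor: id, descendant: id, depth: 0 });
    if let Some(p) = parent {
        // Every ancestor of the parent (including the parent itself, via its
        // self-row) becomes an ancestor of the new entry, one level deeper.
        let inherited: Vec<ClosureRow> = rows
            .iter()
            .filter(|r| r.descendant == p)
            .map(|r| ClosureRow { ancestor: r.ancestor, descendant: id, depth: r.depth + 1 })
            .collect();
        rows.extend(inherited);
    }
}

fn main() {
    let mut rows = Vec::new();
    insert_entry(&mut rows, 1, None);    // location root: self-row only
    insert_entry(&mut rows, 2, Some(1)); // child: 2 rows
    insert_entry(&mut rows, 3, Some(2)); // grandchild: 3 rows
    assert_eq!(rows.len(), 6);
    // Root reaches the grandchild at depth 2
    assert!(rows.contains(&ClosureRow { ancestor: 1, descendant: 3, depth: 2 }));
}
```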
/// Start indexing for a location (production implementation)
async fn start_location_indexing(
library: Arc<Library>,
events: &EventBus,
location_db_id: i32,
location_uuid: Uuid,
path: PathBuf,
index_mode: IndexMode,
) -> LocationResult<()> {
info!("Starting indexing for location: {}", path.display());
// Update scan state to "running"
update_location_scan_state(library.clone(), location_db_id, "running", None).await?;
// Emit indexing started event
events.emit(Event::IndexingStarted {
location_id: location_uuid,
});
// Get device UUID for SdPath
let device_uuid = get_device_uuid(library.clone()).await?;
let location_sd_path = SdPath::new(device_uuid, path.clone());
// Create and dispatch indexer job through the proper job manager
let lib_cfg = library.config().await;
let idx_cfg = lib_cfg.settings.indexer;
let mut config = IndexerJobConfig::new(location_uuid, location_sd_path, index_mode.into());
config.rule_toggles = RuleToggles {
no_system_files: idx_cfg.no_system_files,
no_hidden: idx_cfg.no_hidden,
no_git: idx_cfg.no_git,
gitignore: idx_cfg.gitignore,
only_images: idx_cfg.only_images,
no_dev_dirs: idx_cfg.no_dev_dirs,
};
let indexer_job = IndexerJob::new(config);
match library.jobs().dispatch(indexer_job).await {
Ok(job_handle) => {
info!(
"Successfully dispatched indexer job {} for location: {}",
job_handle.id(),
path.display()
);
// Monitor job progress asynchronously
let events_clone = events.clone();
let library_clone = library.clone();
let handle_clone = job_handle.clone();
tokio::spawn(async move {
monitor_indexing_job(
handle_clone,
events_clone,
library_clone,
location_db_id,
location_uuid,
path,
)
.await;
});
}
Err(e) => {
error!(
"Failed to dispatch indexer job for {}: {}",
path.display(),
e
);
// Update scan state to failed
if let Err(update_err) = update_location_scan_state(
library.clone(),
location_db_id,
"failed",
Some(e.to_string()),
)
.await
{
error!("Failed to update scan state: {}", update_err);
}
events.emit(Event::IndexingFailed {
location_id: location_uuid,
error: e.to_string(),
});
return Err(LocationError::Other(format!(
"Failed to start indexing: {}",
e
)));
}
}
Ok(())
}
/// Monitor indexing job progress and update location state accordingly
async fn monitor_indexing_job(
job_handle: JobHandle,
events: EventBus,
library: Arc<Library>,
location_db_id: i32,
location_uuid: Uuid,
path: PathBuf,
) {
info!(
"Monitoring indexer job {} for location: {}",
job_handle.id(),
path.display()
);
// Wait for job completion
let job_result = job_handle.wait().await;
match job_result {
Ok(output) => {
info!(
"Indexing completed successfully for location: {}",
path.display()
);
// Parse output to get statistics
if let Some(indexer_output) = output.as_indexed() {
// Update location stats
if let Err(e) = update_location_stats(
library.clone(),
location_db_id,
indexer_output.total_files,
indexer_output.total_bytes,
)
.await
{
error!("Failed to update location stats: {}", e);
}
// Update scan state to completed
if let Err(e) =
update_location_scan_state(library.clone(), location_db_id, "completed", None)
.await
{
error!("Failed to update scan state: {}", e);
}
// Emit completion events
events.emit(Event::IndexingCompleted {
location_id: location_uuid,
total_files: indexer_output.total_files,
total_dirs: indexer_output.total_dirs,
});
events.emit(Event::FilesIndexed {
library_id: library.id(),
location_id: location_uuid,
count: indexer_output.total_files as usize,
});
info!(
"Location indexing completed: {} ({} files, {} dirs, {} bytes)",
path.display(),
indexer_output.total_files,
indexer_output.total_dirs,
indexer_output.total_bytes
);
} else {
warn!("Job completed but output format was unexpected");
// Update scan state to completed anyway
if let Err(e) =
update_location_scan_state(library.clone(), location_db_id, "completed", None)
.await
{
error!("Failed to update scan state: {}", e);
}
}
}
Err(e) => {
error!("Indexing failed for {}: {}", path.display(), e);
// Update scan state to failed
if let Err(update_err) = update_location_scan_state(
library.clone(),
location_db_id,
"failed",
Some(e.to_string()),
)
.await
{
error!("Failed to update scan state: {}", update_err);
}
events.emit(Event::IndexingFailed {
location_id: location_uuid,
error: e.to_string(),
});
}
}
}
/// Scan directory to get basic stats
async fn scan_directory_stats(path: &std::path::Path) -> Result<(u64, u64), std::io::Error> {
let mut file_count = 0u64;
let mut total_size = 0u64;
let mut stack = vec![path.to_path_buf()];
while let Some(current_path) = stack.pop() {
if let Ok(mut entries) = fs::read_dir(&current_path).await {
while let Ok(Some(entry)) = entries.next_entry().await {
if let Ok(metadata) = entry.metadata().await {
if metadata.is_file() {
file_count += 1;
total_size += metadata.len();
} else if metadata.is_dir() {
stack.push(entry.path());
}
}
}
}
}
Ok((file_count, total_size))
}
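`scan_directory_stats` uses an explicit stack instead of recursion, which sidesteps async-recursion boxing and unbounded call depth. The same traversal in blocking `std::fs` (a hypothetical standalone sketch; unreadable directories are skipped rather than treated as fatal, as in the original):

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Count files and total bytes under `root`, iteratively.
fn scan_stats(root: &Path) -> io::Result<(u64, u64)> {
    let (mut files, mut bytes) = (0u64, 0u64);
    let mut stack: Vec<PathBuf> = vec![root.to_path_buf()];
    while let Some(dir) = stack.pop() {
        // Skip directories we cannot read, mirroring the `if let Ok(...)` above
        let Ok(entries) = fs::read_dir(&dir) else { continue };
        for entry in entries.flatten() {
            if let Ok(meta) = entry.metadata() {
                if meta.is_file() {
                    files += 1;
                    bytes += meta.len();
                } else if meta.is_dir() {
                    stack.push(entry.path());
                }
            }
        }
    }
    Ok((files, bytes))
}

fn main() -> io::Result<()> {
    // Build a tiny throwaway tree: two files totalling 11 bytes
    let dir = std::env::temp_dir().join("sd_scan_stats_demo");
    let _ = fs::remove_dir_all(&dir);
    fs::create_dir_all(dir.join("sub"))?;
    fs::write(dir.join("a.txt"), b"hello")?;              // 5 bytes
    fs::write(dir.join("sub").join("b.txt"), b"worlds")?; // 6 bytes
    assert_eq!(scan_stats(&dir)?, (2, 11));
    let _ = fs::remove_dir_all(&dir);
    Ok(())
}
```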
/// Update location scan state
async fn update_location_scan_state(
library: Arc<Library>,
location_id: i32,
state: &str,
error_message: Option<String>,
) -> LocationResult<()> {
let location = entities::location::Entity::find_by_id(location_id)
.one(library.db().conn())
.await?
.ok_or_else(|| LocationError::LocationNotFound { id: Uuid::nil() })?;
let mut active_location: entities::location::ActiveModel = location.into();
active_location.scan_state = Set(state.to_string());
active_location.error_message = Set(error_message);
active_location.updated_at = Set(chrono::Utc::now());
if state == "running" {
active_location.last_scan_at = Set(Some(chrono::Utc::now()));
}
active_location.update(library.db().conn()).await?;
Ok(())
}
/// Update location statistics
async fn update_location_stats(
library: Arc<Library>,
location_id: i32,
file_count: u64,
total_size: u64,
) -> LocationResult<()> {
let location = entities::location::Entity::find_by_id(location_id)
.one(library.db().conn())
.await?
.ok_or_else(|| LocationError::LocationNotFound { id: Uuid::nil() })?;
let mut active_location: entities::location::ActiveModel = location.into();
active_location.total_file_count = Set(file_count as i64);
active_location.total_byte_size = Set(total_size as i64);
active_location.updated_at = Set(chrono::Utc::now());
active_location.update(library.db().conn()).await?;
Ok(())
}
/// Get device UUID for current device
async fn get_device_uuid(_library: Arc<Library>) -> LocationResult<Uuid> {
// Get the current device ID from the global state
let device_uuid = crate::shared::utils::get_current_device_id();
if device_uuid.is_nil() {
return Err(LocationError::Other("Current device ID not initialized".to_string()));
}
Ok(device_uuid)
}
/// List all locations for a library
pub async fn list_locations(
library: Arc<Library>,
) -> LocationResult<Vec<entities::location::Model>> {
Ok(entities::location::Entity::find()
.all(library.db().conn())
.await?)
}


@@ -1,83 +0,0 @@
//! Volume-related error types
use thiserror::Error;
/// Errors that can occur during volume operations
#[derive(Error, Debug)]
pub enum VolumeError {
/// IO error during volume operations
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
/// Platform-specific error
#[error("Platform error: {0}")]
Platform(String),
/// Volume not found
#[error("Volume not found: {0}")]
NotFound(String),
/// Volume is not mounted
#[error("Volume is not mounted: {0}")]
NotMounted(String),
/// Volume is read-only
#[error("Volume is read-only: {0}")]
ReadOnly(String),
/// Insufficient space on volume
#[error("Insufficient space on volume: required {required}, available {available}")]
InsufficientSpace { required: u64, available: u64 },
/// Speed test was cancelled or failed
#[error("Speed test cancelled or failed")]
SpeedTestFailed,
/// Volume detection failed
#[error("Volume detection failed: {0}")]
DetectionFailed(String),
/// Permission denied
#[error("Permission denied: {0}")]
PermissionDenied(String),
/// Operation timed out
#[error("Operation timed out")]
Timeout,
/// Invalid volume data
#[error("Invalid volume data: {0}")]
InvalidData(String),
/// Database operation failed
#[error("Database error: {0}")]
Database(String),
/// Volume is already tracked
#[error("Volume is already tracked: {0}")]
AlreadyTracked(String),
/// Volume is not tracked
#[error("Volume is not tracked: {0}")]
NotTracked(String),
}
impl VolumeError {
/// Create a platform-specific error
pub fn platform(msg: impl Into<String>) -> Self {
Self::Platform(msg.into())
}
/// Create a detection failed error
pub fn detection_failed(msg: impl Into<String>) -> Self {
Self::DetectionFailed(msg.into())
}
/// Create an insufficient space error
pub fn insufficient_space(required: u64, available: u64) -> Self {
Self::InsufficientSpace { required, available }
}
}
/// Result type for volume operations
pub type VolumeResult<T> = Result<T, VolumeError>;
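A small std-only sketch of the helper-constructor pattern `VolumeError` uses (the real enum derives `thiserror::Error`; the hand-written `Display` impl here reproduces the same messages the `#[error(...)]` attributes generate):

```rust
use std::fmt;

// Cut-down stand-in for VolumeError, showing the constructor-helper pattern.
#[derive(Debug, PartialEq)]
enum VolumeError {
    Platform(String),
    InsufficientSpace { required: u64, available: u64 },
}

impl fmt::Display for VolumeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            VolumeError::Platform(msg) => write!(f, "Platform error: {}", msg),
            VolumeError::InsufficientSpace { required, available } => write!(
                f,
                "Insufficient space on volume: required {}, available {}",
                required, available
            ),
        }
    }
}

impl VolumeError {
    // `impl Into<String>` lets callers pass &str or String alike.
    fn platform(msg: impl Into<String>) -> Self {
        Self::Platform(msg.into())
    }
    fn insufficient_space(required: u64, available: u64) -> Self {
        Self::InsufficientSpace { required, available }
    }
}

fn main() {
    let e = VolumeError::insufficient_space(2048, 1024);
    assert_eq!(
        e.to_string(),
        "Insufficient space on volume: required 2048, available 1024"
    );
    assert_eq!(
        VolumeError::platform("disk arbitration failed").to_string(),
        "Platform error: disk arbitration failed"
    );
}
```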

View File

@@ -1,92 +0,0 @@
//! Volume management for Spacedrive Core v2
//!
//! This module provides functionality for detecting, monitoring, and managing storage volumes
//! across different platforms. It's designed to integrate with the copy system for optimal
//! file operation routing.
pub mod classification;
pub mod error;
pub mod manager;
pub mod os_detection;
pub mod speed;
pub mod types;
pub use error::VolumeError;
pub use manager::VolumeManager;
pub use types::{
DiskType, FileSystem, MountType, Volume, VolumeDetectionConfig, VolumeEvent, VolumeFingerprint,
VolumeInfo,
};
// Re-export platform-specific detection
pub use os_detection::detect_volumes;
/// Extension trait for Volume operations
pub trait VolumeExt {
/// Checks if volume is mounted and accessible
async fn is_available(&self) -> bool;
/// Checks if volume has enough free space
fn has_space(&self, required_bytes: u64) -> bool;
/// Check if path is on this volume
fn contains_path(&self, path: &std::path::Path) -> bool;
}
impl VolumeExt for Volume {
async fn is_available(&self) -> bool {
self.is_mounted && tokio::fs::metadata(&self.mount_point).await.is_ok()
}
fn has_space(&self, required_bytes: u64) -> bool {
self.total_bytes_available >= required_bytes
}
fn contains_path(&self, path: &std::path::Path) -> bool {
// Check primary mount point
if path.starts_with(&self.mount_point) {
return true;
}
// Check additional mount points (for APFS volumes)
self.mount_points.iter().any(|mp| path.starts_with(mp))
}
}
/// Utilities for volume operations
pub mod util {
use super::*;
use std::path::Path;
/// Check if a path is on the specified volume
pub fn is_path_on_volume(path: &Path, volume: &Volume) -> bool {
volume.contains_path(&path.to_path_buf())
}
/// Calculate relative path from volume mount point
pub fn relative_path_on_volume(path: &Path, volume: &Volume) -> Option<std::path::PathBuf> {
// Try primary mount point first
if let Ok(relative) = path.strip_prefix(&volume.mount_point) {
return Some(relative.to_path_buf());
}
// Try additional mount points
for mount_point in &volume.mount_points {
if let Ok(relative) = path.strip_prefix(mount_point) {
return Some(relative.to_path_buf());
}
}
None
}
/// Find the volume that contains the given path
pub fn find_volume_for_path<'a>(
path: &Path,
volumes: impl Iterator<Item = &'a Volume>,
) -> Option<&'a Volume> {
volumes
.filter(|vol| vol.contains_path(&path.to_path_buf()))
.max_by_key(|vol| vol.mount_point.as_os_str().len()) // Prefer most specific mount
}
}
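The "prefer most specific mount" rule in `find_volume_for_path` isolates to a std-only sketch: among mount points containing the path, the longest one wins, so a path under a nested mount resolves to the inner volume rather than the root:

```rust
use std::path::{Path, PathBuf};

// Same selection logic as find_volume_for_path, reduced to bare mount points.
fn find_mount<'a>(path: &Path, mounts: &'a [PathBuf]) -> Option<&'a PathBuf> {
    mounts
        .iter()
        .filter(|m| path.starts_with(m))
        .max_by_key(|m| m.as_os_str().len()) // longest prefix = most specific
}

fn main() {
    let mounts = vec![PathBuf::from("/"), PathBuf::from("/mnt/usb")];
    // Both "/" and "/mnt/usb" contain the path; the nested mount wins.
    let hit = find_mount(Path::new("/mnt/usb/photos/a.jpg"), &mounts).unwrap();
    assert_eq!(hit, &PathBuf::from("/mnt/usb"));
    // A path only under "/" resolves to the root mount.
    let root = find_mount(Path::new("/var/log/syslog"), &mounts).unwrap();
    assert_eq!(root, &PathBuf::from("/"));
}
```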

View File

@@ -1,369 +0,0 @@
//! Volume speed testing functionality
use crate::volume::{
error::{VolumeError, VolumeResult},
types::{MountType, Volume, VolumeType},
};
use std::time::Instant;
use tokio::{
fs::{File, OpenOptions},
io::{AsyncReadExt, AsyncWriteExt},
time::{timeout, Duration},
};
use tracing::{debug, instrument, warn};
/// Configuration for speed tests
#[derive(Debug, Clone)]
pub struct SpeedTestConfig {
/// Size of the test file in megabytes
pub file_size_mb: usize,
/// Timeout for the test in seconds
pub timeout_secs: u64,
/// Number of test iterations for averaging (must be at least 1)
pub iterations: usize,
}
impl Default for SpeedTestConfig {
fn default() -> Self {
Self {
file_size_mb: 10,
timeout_secs: 30,
iterations: 1,
}
}
}
/// Result of a speed test
#[derive(Debug, Clone)]
pub struct SpeedTestResult {
/// Write speed in MB/s
pub write_speed_mbps: f64,
/// Read speed in MB/s
pub read_speed_mbps: f64,
/// Total time taken for the test
pub duration_secs: f64,
}
/// Run a speed test on the given volume, returning `(read_speed_mbps, write_speed_mbps)`
#[instrument(skip(volume), fields(volume_name = %volume.name))]
pub async fn run_speed_test(volume: &Volume) -> VolumeResult<(u64, u64)> {
run_speed_test_with_config(volume, SpeedTestConfig::default()).await
}
/// Run a speed test with custom configuration, returning `(read_speed_mbps, write_speed_mbps)`
#[instrument(skip(volume, config), fields(volume_name = %volume.name))]
pub async fn run_speed_test_with_config(
volume: &Volume,
config: SpeedTestConfig,
) -> VolumeResult<(u64, u64)> {
if !volume.is_mounted {
return Err(VolumeError::NotMounted(volume.name.clone()));
}
if volume.read_only {
return Err(VolumeError::ReadOnly(volume.name.clone()));
}
debug!("Starting speed test with config: {:?}", config);
let test_location = TestLocation::new(&volume.mount_point, &volume.mount_type).await?;
let result = perform_speed_test(&test_location, &config).await?;
// Cleanup
test_location.cleanup().await?;
debug!(
"Speed test completed: {:.2} MB/s write, {:.2} MB/s read",
result.write_speed_mbps, result.read_speed_mbps
);
Ok((
result.read_speed_mbps as u64,
result.write_speed_mbps as u64,
))
}
/// Helper for managing test files and directories
struct TestLocation {
test_file: std::path::PathBuf,
created_dir: Option<std::path::PathBuf>,
}
impl TestLocation {
/// Create a new test location
async fn new(volume_path: &std::path::Path, mount_type: &MountType) -> VolumeResult<Self> {
let (dir, created_dir) = get_writable_directory(volume_path, mount_type).await?;
let test_file = dir.join("spacedrive_speed_test.tmp");
Ok(Self {
test_file,
created_dir,
})
}
/// Clean up test files and directories
async fn cleanup(&self) -> VolumeResult<()> {
// Remove test file
if self.test_file.exists() {
if let Err(e) = tokio::fs::remove_file(&self.test_file).await {
warn!("Failed to remove test file: {}", e);
}
}
// Remove created directory if we created it
if let Some(ref dir) = self.created_dir {
if let Err(e) = tokio::fs::remove_dir_all(dir).await {
warn!("Failed to remove test directory: {}", e);
}
}
Ok(())
}
}
/// Perform the actual speed test
async fn perform_speed_test(
location: &TestLocation,
config: &SpeedTestConfig,
) -> VolumeResult<SpeedTestResult> {
let test_data = generate_test_data(config.file_size_mb);
let timeout_duration = Duration::from_secs(config.timeout_secs);
let mut write_speeds = Vec::new();
let mut read_speeds = Vec::new();
let overall_start = Instant::now();
for iteration in 0..config.iterations {
debug!(
"Speed test iteration {}/{}",
iteration + 1,
config.iterations
);
// Write test
let write_speed = timeout(
timeout_duration,
perform_write_test(&location.test_file, &test_data),
)
.await
.map_err(|_| VolumeError::Timeout)??;
write_speeds.push(write_speed);
// Read test
let read_speed = timeout(
timeout_duration,
perform_read_test(&location.test_file, test_data.len()),
)
.await
.map_err(|_| VolumeError::Timeout)??;
read_speeds.push(read_speed);
// Clean up test file between iterations
if iteration < config.iterations - 1 {
let _ = tokio::fs::remove_file(&location.test_file).await;
}
}
let avg_write_speed = write_speeds.iter().sum::<f64>() / write_speeds.len() as f64;
let avg_read_speed = read_speeds.iter().sum::<f64>() / read_speeds.len() as f64;
Ok(SpeedTestResult {
write_speed_mbps: avg_write_speed,
read_speed_mbps: avg_read_speed,
duration_secs: overall_start.elapsed().as_secs_f64(),
})
}
/// Generate test data for speed testing
fn generate_test_data(size_mb: usize) -> Vec<u8> {
let size_bytes = size_mb * 1024 * 1024;
// Use a pattern instead of zeros to avoid compression optimizations
let pattern = b"SpacedriveSpeedTest0123456789ABCDEF";
let mut data = Vec::with_capacity(size_bytes);
data.extend(pattern.iter().cycle().take(size_bytes));
data
}
/// Perform write speed test
async fn perform_write_test(file_path: &std::path::Path, data: &[u8]) -> VolumeResult<f64> {
let start = Instant::now();
let mut file = OpenOptions::new()
.write(true)
.create(true)
.truncate(true)
.open(file_path)
.await?;
file.write_all(data).await?;
file.sync_all().await?; // Ensure data is written to disk
let duration = start.elapsed();
let speed_mbps = (data.len() as f64 / 1024.0 / 1024.0) / duration.as_secs_f64();
Ok(speed_mbps)
}
/// Perform read speed test
///
/// Note: the freshly written file may still sit in the OS page cache, so this
/// can measure a best-case (cached) read speed rather than raw disk throughput.
async fn perform_read_test(file_path: &std::path::Path, expected_size: usize) -> VolumeResult<f64> {
let start = Instant::now();
let mut file = File::open(file_path).await?;
let mut buffer = Vec::with_capacity(expected_size);
file.read_to_end(&mut buffer).await?;
let duration = start.elapsed();
let speed_mbps = (buffer.len() as f64 / 1024.0 / 1024.0) / duration.as_secs_f64();
Ok(speed_mbps)
}
/// Get a writable directory within the volume
async fn get_writable_directory(
volume_path: &std::path::Path,
mount_type: &MountType,
) -> VolumeResult<(std::path::PathBuf, Option<std::path::PathBuf>)> {
match mount_type {
MountType::System => {
// For system volumes, prefer using temp directory
let temp_dir = std::env::temp_dir();
Ok((temp_dir, None))
}
_ => {
// For external volumes, try to write in the root or create a temp directory
let candidates = [
volume_path.join("tmp"),
volume_path.join(".spacedrive_temp"),
volume_path.to_path_buf(),
];
for candidate in &candidates {
// Try to create the directory
if let Ok(()) = tokio::fs::create_dir_all(candidate).await {
// Test if we can write to it
let test_file = candidate.join("test_write_permissions");
if tokio::fs::write(&test_file, b"test").await.is_ok() {
let _ = tokio::fs::remove_file(&test_file).await;
// If we created a directory specifically for this test, mark it for cleanup
let created_dir = if candidate
.file_name()
.is_some_and(|name| name == "tmp" || name == ".spacedrive_temp")
{
Some(candidate.clone())
} else {
None
};
return Ok((candidate.clone(), created_dir));
}
}
}
Err(VolumeError::PermissionDenied(format!(
"No writable directory found in volume: {}",
volume_path.display()
)))
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::volume::{
types::{DiskType, FileSystem},
VolumeFingerprint,
};
use tempfile::TempDir;
#[tokio::test]
async fn test_speed_test_config() {
let config = SpeedTestConfig::default();
assert_eq!(config.file_size_mb, 10);
assert_eq!(config.timeout_secs, 30);
assert_eq!(config.iterations, 1);
}
#[tokio::test]
async fn test_generate_test_data() {
let data = generate_test_data(1); // 1MB
assert_eq!(data.len(), 1024 * 1024);
// Verify pattern is not all zeros
assert!(data.iter().any(|&b| b != 0));
}
#[tokio::test]
async fn test_writable_directory_external() {
let temp_dir = TempDir::new().unwrap();
let volume_path = temp_dir.path();
let (writable_dir, created_dir) = get_writable_directory(volume_path, &MountType::External)
.await
.unwrap();
assert!(writable_dir.exists());
// Cleanup if we created a directory
if let Some(dir) = created_dir {
let _ = tokio::fs::remove_dir_all(dir).await;
}
}
#[tokio::test]
async fn test_writable_directory_system() {
let (writable_dir, created_dir) =
get_writable_directory(&std::path::PathBuf::from("/"), &MountType::System)
.await
.unwrap();
assert!(writable_dir.exists());
assert!(created_dir.is_none()); // Should use system temp, not create new dir
}
#[tokio::test]
async fn test_full_speed_test() {
let temp_dir = TempDir::new().unwrap();
let volume = Volume::new(
uuid::Uuid::new_v4(), // Test device ID
"Test Volume".to_string(),
MountType::External,
VolumeType::External,
temp_dir.path().to_path_buf(),
vec![],
DiskType::Unknown,
FileSystem::Other("test".to_string()),
1000000000, // 1GB capacity
500000000, // 500MB available
false, // Not read-only
None,
VolumeFingerprint::new(
"Test Volume",
1000000000,
"test",
),
);
let config = SpeedTestConfig {
file_size_mb: 1, // Small test file
timeout_secs: 10,
iterations: 1,
};
let result = run_speed_test_with_config(&volume, config).await;
assert!(result.is_ok());
let (read_speed, write_speed) = result.unwrap();
assert!(read_speed > 0);
assert!(write_speed > 0);
}
}
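Both the read and write tests above reduce to the same arithmetic: bytes converted to mebibytes, divided by elapsed seconds. A tiny sketch of that computation:

```rust
// The throughput formula shared by perform_write_test and perform_read_test.
fn mbps(bytes: usize, secs: f64) -> f64 {
    (bytes as f64 / 1024.0 / 1024.0) / secs
}

fn main() {
    // 10 MiB transferred in 0.5 s is 20 MB/s.
    let speed = mbps(10 * 1024 * 1024, 0.5);
    assert!((speed - 20.0).abs() < 1e-9);
}
```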

View File

@@ -1,685 +0,0 @@
//! Volume type definitions
use serde::{Deserialize, Serialize};
use std::fmt;
use std::path::PathBuf;
use uuid::Uuid;
/// Spacedrive volume identifier file content
/// This file is created in the root of writable volumes for persistent identification
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SpacedriveVolumeId {
/// Unique identifier for this volume
pub id: Uuid,
/// When this identifier was created
pub created: chrono::DateTime<chrono::Utc>,
/// Name of the device that created this identifier
pub device_name: Option<String>,
/// Original volume name when identifier was created
pub volume_name: String,
/// Device ID that created this identifier
pub device_id: Uuid,
}
/// Unique fingerprint for a storage volume
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Hash)]
pub struct VolumeFingerprint(pub String);
impl VolumeFingerprint {
/// Create a new volume fingerprint from volume properties
/// Uses intrinsic volume characteristics for cross-device portable identification
pub fn new(name: &str, total_bytes: u64, file_system: &str) -> Self {
let mut hasher = blake3::Hasher::new();
hasher.update(b"content_based:");
hasher.update(name.as_bytes());
hasher.update(&total_bytes.to_be_bytes());
hasher.update(file_system.as_bytes());
hasher.update(&(name.len() as u64).to_be_bytes());
Self(hasher.finalize().to_hex().to_string())
}
/// Create a fingerprint from a Spacedrive identifier UUID (preferred method)
/// This provides stable identification across devices, renames and remounts
pub fn from_spacedrive_id(spacedrive_id: Uuid) -> Self {
let mut hasher = blake3::Hasher::new();
hasher.update(b"spacedrive_id:");
hasher.update(spacedrive_id.as_bytes());
Self(hasher.finalize().to_hex().to_string())
}
/// Generate 8-character short ID for CLI display and commands
pub fn short_id(&self) -> String {
self.0.chars().take(8).collect()
}
/// Generate 12-character medium ID for disambiguation
pub fn medium_id(&self) -> String {
self.0.chars().take(12).collect()
}
/// Create fingerprint from hex string
pub fn from_hex(hex: impl Into<String>) -> Self {
Self(hex.into())
}
/// Create fingerprint from string (alias for `from_hex`; infallible, but kept as a `Result` for API symmetry)
pub fn from_string(s: &str) -> Result<Self, crate::volume::VolumeError> {
Ok(Self(s.to_string()))
}
/// Check if a string could be a short ID (8 chars, hex)
pub fn is_short_id(s: &str) -> bool {
s.len() == 8 && s.chars().all(|c| c.is_ascii_hexdigit())
}
/// Check if a string could be a medium ID (12 chars, hex)
pub fn is_medium_id(s: &str) -> bool {
s.len() == 12 && s.chars().all(|c| c.is_ascii_hexdigit())
}
/// Check if this fingerprint matches a short or medium ID
pub fn matches_short_id(&self, short_id: &str) -> bool {
if Self::is_short_id(short_id) {
self.short_id() == short_id
} else if Self::is_medium_id(short_id) {
self.medium_id() == short_id
} else {
false
}
}
}
impl fmt::Display for VolumeFingerprint {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.0)
}
}
/// Classification of volume types for UX and auto-tracking decisions
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum VolumeType {
/// Primary system drive containing OS and user data
/// Examples: C:\ on Windows, / on Linux, Macintosh HD on macOS
Primary,
/// Dedicated user data volumes (separate from OS)
/// Examples: /System/Volumes/Data on macOS, separate /home on Linux
UserData,
/// External or removable storage devices
/// Examples: USB drives, external HDDs, /Volumes/* on macOS
External,
/// Secondary internal storage (additional drives/partitions)
/// Examples: D:, E: drives on Windows, additional mounted drives
Secondary,
/// System/OS internal volumes (hidden from normal view)
/// Examples: /System/Volumes/* on macOS, Recovery partitions
System,
/// Network attached storage
/// Examples: SMB mounts, NFS, cloud storage
Network,
/// Unknown or unclassified volumes
Unknown,
}
impl VolumeType {
/// Should this volume type be auto-tracked by default?
pub fn auto_track_by_default(&self) -> bool {
match self {
// Only auto-track the primary system volume
// Users should explicitly choose to track other volumes
VolumeType::Primary => true,
VolumeType::UserData
| VolumeType::External
| VolumeType::Secondary
| VolumeType::Network
| VolumeType::System
| VolumeType::Unknown => false,
}
}
/// Should this volume be shown in the default UI view?
pub fn show_by_default(&self) -> bool {
!matches!(self, VolumeType::System | VolumeType::Unknown)
}
/// User-friendly display name for the volume type
pub fn display_name(&self) -> &'static str {
match self {
VolumeType::Primary => "Primary Drive",
VolumeType::UserData => "User Data",
VolumeType::External => "External Drive",
VolumeType::Secondary => "Secondary Drive",
VolumeType::System => "System Volume",
VolumeType::Network => "Network Drive",
VolumeType::Unknown => "Unknown",
}
}
/// Icon/indicator for CLI display
pub fn icon(&self) -> &'static str {
match self {
VolumeType::Primary => "[PRI]",
VolumeType::UserData => "[USR]",
VolumeType::External => "[EXT]",
VolumeType::Secondary => "[SEC]",
VolumeType::System => "[SYS]",
VolumeType::Network => "[NET]",
VolumeType::Unknown => "[UNK]",
}
}
}
/// Events emitted by the Volume Manager when volume state changes
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum VolumeEvent {
/// Emitted when a new volume is discovered
VolumeAdded(Volume),
/// Emitted when a volume is removed/unmounted
VolumeRemoved { fingerprint: VolumeFingerprint },
/// Emitted when a volume's properties are updated
VolumeUpdated {
fingerprint: VolumeFingerprint,
old: VolumeInfo,
new: VolumeInfo,
},
/// Emitted when a volume's speed test completes
VolumeSpeedTested {
fingerprint: VolumeFingerprint,
read_speed_mbps: u64,
write_speed_mbps: u64,
},
/// Emitted when a volume's mount status changes
VolumeMountChanged {
fingerprint: VolumeFingerprint,
is_mounted: bool,
},
/// Emitted when a volume encounters an error
VolumeError {
fingerprint: VolumeFingerprint,
error: String,
},
}
/// Represents a physical or virtual storage volume in the system
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct Volume {
/// Unique fingerprint for this volume
pub fingerprint: VolumeFingerprint,
/// Device this volume belongs to
pub device_id: uuid::Uuid,
/// Human-readable volume name
pub name: String,
/// Type of mount (system, external, etc)
pub mount_type: MountType,
/// Classification of this volume for UX decisions
pub volume_type: VolumeType,
/// Primary path where the volume is mounted
pub mount_point: PathBuf,
/// Additional mount points (for APFS volumes, etc.)
pub mount_points: Vec<PathBuf>,
/// Whether the volume is currently mounted
pub is_mounted: bool,
/// Type of storage device (SSD, HDD, etc)
pub disk_type: DiskType,
/// Filesystem type (NTFS, EXT4, etc)
pub file_system: FileSystem,
/// Whether the volume is mounted read-only
pub read_only: bool,
/// Hardware identifier (platform-specific)
pub hardware_id: Option<String>,
/// Current error status if any
pub error_status: Option<String>,
// Storage information
/// Total storage capacity in bytes
pub total_bytes_capacity: u64,
/// Available storage space in bytes
pub total_bytes_available: u64,
// Performance metrics (populated by speed tests)
/// Read speed in megabytes per second
pub read_speed_mbps: Option<u64>,
/// Write speed in megabytes per second
pub write_speed_mbps: Option<u64>,
/// Whether this volume should be visible in default views
pub is_user_visible: bool,
/// Whether this volume should be auto-tracked
pub auto_track_eligible: bool,
/// When this volume information was last updated
pub last_updated: chrono::DateTime<chrono::Utc>,
}
/// Summary information about a volume (for updates and caching)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct VolumeInfo {
pub is_mounted: bool,
pub total_bytes_available: u64,
pub read_speed_mbps: Option<u64>,
pub write_speed_mbps: Option<u64>,
pub error_status: Option<String>,
}
/// Information about a tracked volume in the database
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrackedVolume {
pub id: i32,
pub uuid: uuid::Uuid,
pub device_id: uuid::Uuid,
pub fingerprint: VolumeFingerprint,
pub display_name: Option<String>,
pub tracked_at: chrono::DateTime<chrono::Utc>,
pub last_seen_at: chrono::DateTime<chrono::Utc>,
pub is_online: bool,
pub total_capacity: Option<u64>,
pub available_capacity: Option<u64>,
pub read_speed_mbps: Option<u32>,
pub write_speed_mbps: Option<u32>,
pub last_speed_test_at: Option<chrono::DateTime<chrono::Utc>>,
pub file_system: Option<String>,
pub mount_point: Option<String>,
pub is_removable: Option<bool>,
pub is_network_drive: Option<bool>,
pub device_model: Option<String>,
pub volume_type: String,
pub is_user_visible: Option<bool>,
pub auto_track_eligible: Option<bool>,
}
impl From<&Volume> for VolumeInfo {
fn from(volume: &Volume) -> Self {
Self {
is_mounted: volume.is_mounted,
total_bytes_available: volume.total_bytes_available,
read_speed_mbps: volume.read_speed_mbps,
write_speed_mbps: volume.write_speed_mbps,
error_status: volume.error_status.clone(),
}
}
}
impl TrackedVolume {
/// Convert a TrackedVolume back to a Volume for display purposes
/// This is used for offline volumes that aren't currently detected
pub fn to_offline_volume(&self) -> Volume {
use std::path::PathBuf;
Volume {
fingerprint: self.fingerprint.clone(),
device_id: self.device_id,
name: self
.display_name
.clone()
.unwrap_or_else(|| "Unknown".to_string()),
mount_type: crate::volume::types::MountType::External, // Default for tracked volumes
volume_type: match self.volume_type.as_str() {
"Primary" => VolumeType::Primary,
"UserData" => VolumeType::UserData,
"External" => VolumeType::External,
"Secondary" => VolumeType::Secondary,
"System" => VolumeType::System,
"Network" => VolumeType::Network,
_ => VolumeType::Unknown,
},
mount_point: PathBuf::from(
self.mount_point
.clone()
.unwrap_or_else(|| "Not connected".to_string()),
),
mount_points: vec![], // Not available for offline volumes
disk_type: crate::volume::types::DiskType::Unknown,
file_system: crate::volume::types::FileSystem::from_string(
&self
.file_system
.clone()
.unwrap_or_else(|| "Unknown".to_string()),
),
total_bytes_capacity: self.total_capacity.unwrap_or(0),
total_bytes_available: self.available_capacity.unwrap_or(0),
read_only: false, // Assume not read-only for tracked volumes
hardware_id: self.device_model.clone(),
is_mounted: false, // Offline volumes are not mounted
error_status: None,
read_speed_mbps: self.read_speed_mbps.map(|s| s as u64),
write_speed_mbps: self.write_speed_mbps.map(|s| s as u64),
last_updated: self.last_seen_at,
is_user_visible: self.is_user_visible.unwrap_or(true),
auto_track_eligible: self.auto_track_eligible.unwrap_or(false),
}
}
}
impl Volume {
/// Create a new Volume instance
pub fn new(
device_id: uuid::Uuid,
name: String,
mount_type: MountType,
volume_type: VolumeType,
mount_point: PathBuf,
additional_mount_points: Vec<PathBuf>,
disk_type: DiskType,
file_system: FileSystem,
total_bytes_capacity: u64,
total_bytes_available: u64,
read_only: bool,
hardware_id: Option<String>,
fingerprint: VolumeFingerprint, // Accept pre-computed fingerprint
) -> Self {
Self {
fingerprint,
device_id,
name,
mount_type,
volume_type,
mount_point,
mount_points: additional_mount_points,
is_mounted: true,
disk_type,
file_system,
total_bytes_capacity,
total_bytes_available,
read_only,
hardware_id,
error_status: None,
read_speed_mbps: None,
write_speed_mbps: None,
auto_track_eligible: volume_type.auto_track_by_default(),
is_user_visible: volume_type.show_by_default(),
last_updated: chrono::Utc::now(),
}
}
/// Update volume information
pub fn update_info(&mut self, info: VolumeInfo) {
self.is_mounted = info.is_mounted;
self.total_bytes_available = info.total_bytes_available;
self.read_speed_mbps = info.read_speed_mbps;
self.write_speed_mbps = info.write_speed_mbps;
self.error_status = info.error_status;
self.last_updated = chrono::Utc::now();
}
/// Check if this volume supports fast copy operations (CoW)
pub fn supports_fast_copy(&self) -> bool {
matches!(
self.file_system,
FileSystem::APFS | FileSystem::Btrfs | FileSystem::ZFS | FileSystem::ReFS
)
}
/// Get the optimal chunk size for copying to/from this volume
pub fn optimal_chunk_size(&self) -> usize {
match self.disk_type {
DiskType::SSD => 1024 * 1024, // 1MB for SSDs
DiskType::HDD => 256 * 1024, // 256KB for HDDs
DiskType::Unknown => 64 * 1024, // 64KB default
}
}
/// Estimate copy speed between this and another volume
pub fn estimate_copy_speed(&self, other: &Volume) -> Option<u64> {
let self_read = self.read_speed_mbps?;
let other_write = other.write_speed_mbps?;
// Bottleneck is the slower of read or write speed
Some(self_read.min(other_write))
}
/// Check if a path is contained within this volume
pub fn contains_path(&self, path: &std::path::Path) -> bool {
// Check primary mount point
if path.starts_with(&self.mount_point) {
return true;
}
// Check additional mount points
for mount_point in &self.mount_points {
if path.starts_with(mount_point) {
return true;
}
}
false
}
}
/// Represents the type of physical storage device
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Hash)]
pub enum DiskType {
/// Solid State Drive
SSD,
/// Hard Disk Drive
HDD,
/// Unknown or virtual disk type
Unknown,
}
impl fmt::Display for DiskType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
DiskType::SSD => write!(f, "SSD"),
DiskType::HDD => write!(f, "HDD"),
DiskType::Unknown => write!(f, "Unknown"),
}
}
}
impl DiskType {
pub fn from_string(disk_type: &str) -> Self {
match disk_type.to_uppercase().as_str() {
"SSD" => Self::SSD,
"HDD" => Self::HDD,
_ => Self::Unknown,
}
}
}
/// Represents the filesystem type of the volume
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Hash)]
pub enum FileSystem {
/// Windows NTFS filesystem
NTFS,
/// FAT32 filesystem
FAT32,
/// Linux EXT4 filesystem
EXT4,
/// Apple APFS filesystem
APFS,
/// ExFAT filesystem
ExFAT,
/// Btrfs filesystem (Linux)
Btrfs,
/// ZFS filesystem
ZFS,
/// Windows ReFS filesystem
ReFS,
/// Other/unknown filesystem type
Other(String),
}
impl fmt::Display for FileSystem {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
FileSystem::NTFS => write!(f, "NTFS"),
FileSystem::FAT32 => write!(f, "FAT32"),
FileSystem::EXT4 => write!(f, "EXT4"),
FileSystem::APFS => write!(f, "APFS"),
FileSystem::ExFAT => write!(f, "ExFAT"),
FileSystem::Btrfs => write!(f, "Btrfs"),
FileSystem::ZFS => write!(f, "ZFS"),
FileSystem::ReFS => write!(f, "ReFS"),
FileSystem::Other(name) => write!(f, "{}", name),
}
}
}
impl FileSystem {
pub fn from_string(fs: &str) -> Self {
match fs.to_uppercase().as_str() {
"NTFS" => Self::NTFS,
"FAT32" => Self::FAT32,
"EXT4" => Self::EXT4,
"APFS" => Self::APFS,
"EXFAT" => Self::ExFAT,
"BTRFS" => Self::Btrfs,
"ZFS" => Self::ZFS,
"REFS" => Self::ReFS,
other => Self::Other(other.to_string()),
}
}
/// Check if this filesystem supports reflinks/clones
pub fn supports_reflink(&self) -> bool {
matches!(self, Self::APFS | Self::Btrfs | Self::ZFS | Self::ReFS)
}
/// Check if this filesystem supports sendfile optimization
pub fn supports_sendfile(&self) -> bool {
matches!(self, Self::EXT4 | Self::Btrfs | Self::ZFS | Self::NTFS)
}
}
/// Represents how the volume is mounted in the system
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq, Hash)]
pub enum MountType {
/// System/boot volume
System,
/// External/removable volume
External,
/// Network-attached volume
Network,
/// Virtual/container volume
Virtual,
}
impl fmt::Display for MountType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
MountType::System => write!(f, "System"),
MountType::External => write!(f, "External"),
MountType::Network => write!(f, "Network"),
MountType::Virtual => write!(f, "Virtual"),
}
}
}
impl MountType {
pub fn from_string(mount_type: &str) -> Self {
match mount_type.to_uppercase().as_str() {
"SYSTEM" => Self::System,
"EXTERNAL" => Self::External,
"NETWORK" => Self::Network,
"VIRTUAL" => Self::Virtual,
_ => Self::System,
}
}
}
/// Configuration for volume detection and monitoring
#[derive(Debug, Clone)]
pub struct VolumeDetectionConfig {
/// Whether to include system volumes
pub include_system: bool,
/// Whether to include virtual volumes
pub include_virtual: bool,
/// Whether to run speed tests on discovery
pub run_speed_test: bool,
/// How often to refresh volume information (in seconds)
pub refresh_interval_secs: u64,
}
impl Default for VolumeDetectionConfig {
fn default() -> Self {
Self {
include_system: true,
include_virtual: false,
run_speed_test: false, // Expensive operation, off by default
refresh_interval_secs: 30,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_volume_fingerprint() {
let volume = Volume::new(
uuid::Uuid::new_v4(),
"Test Volume".to_string(),
MountType::External,
VolumeType::External,
PathBuf::from("/mnt/test"),
vec![],
DiskType::SSD,
FileSystem::EXT4,
1000000000,
500000000,
false,
Some("test-hw-id".to_string()),
VolumeFingerprint::new("Test", 500000000, "ext4"),
);
// Test basic fingerprint creation
let fingerprint = VolumeFingerprint::new(
"Test Volume",
1000000000, // 1GB
"ext4",
);
assert!(!fingerprint.0.is_empty());
// Test Spacedrive ID fingerprint
let spacedrive_id = Uuid::new_v4();
let spacedrive_fingerprint = VolumeFingerprint::from_spacedrive_id(spacedrive_id);
assert!(!spacedrive_fingerprint.0.is_empty());
assert_ne!(fingerprint, spacedrive_fingerprint);
}
#[test]
fn test_volume_contains_path() {
let volume = Volume::new(
uuid::Uuid::new_v4(),
"Test".to_string(),
MountType::System,
VolumeType::System,
PathBuf::from("/home"),
vec![PathBuf::from("/home"), PathBuf::from("/mnt/home")],
DiskType::SSD,
FileSystem::EXT4,
1000000,
500000,
false,
None,
VolumeFingerprint::new("Test", 1000000, "ext4"),
);
assert!(volume.contains_path(&PathBuf::from("/home/user/file.txt")));
assert!(volume.contains_path(&PathBuf::from("/mnt/home/user/file.txt")));
assert!(!volume.contains_path(&PathBuf::from("/var/log/file.txt")));
}
#[test]
fn test_filesystem_capabilities() {
assert!(FileSystem::APFS.supports_reflink());
assert!(FileSystem::Btrfs.supports_reflink());
assert!(!FileSystem::FAT32.supports_reflink());
assert!(FileSystem::EXT4.supports_sendfile());
assert!(!FileSystem::FAT32.supports_sendfile());
}
}
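The short-ID scheme above is just prefix matching on the hex fingerprint: an 8-character (or 12-character) hex prefix identifies a volume in CLI commands. A std-only sketch, with free functions standing in for the `VolumeFingerprint` methods:

```rust
// Mirrors VolumeFingerprint::is_short_id: exactly 8 hex characters.
fn is_short_id(s: &str) -> bool {
    s.len() == 8 && s.chars().all(|c| c.is_ascii_hexdigit())
}

// Mirrors the short-ID branch of matches_short_id: the candidate must be a
// valid short ID and equal the fingerprint's first 8 characters.
fn matches(fingerprint: &str, candidate: &str) -> bool {
    is_short_id(candidate) && fingerprint.starts_with(candidate)
}

fn main() {
    let fp = "a3f09b12c4d5e6f7a8b9c0d1";
    assert!(matches(fp, "a3f09b12")); // correct 8-char prefix
    assert!(!matches(fp, "deadbeef")); // valid shape, wrong prefix
    assert!(!matches(fp, "xyz")); // not a short ID at all
}
```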

View File

@@ -1,7 +1,7 @@
#!/bin/bash
# --- Configuration ---
PROJECT_ROOT="/Users/jamespine/Projects/spacedrive/core-new"
PROJECT_ROOT="/Users/jamespine/Projects/spacedrive/core"
INSTALL_DIR="/usr/local/bin"
BINARY_NAME="spacedrive"

View File

View File

Binary file not shown.

View File

@@ -1,161 +1,166 @@
[package]
edition = "2021"
name = "sd-core"
version = "0.5.0"
authors = ["Spacedrive Technology Inc <support@spacedrive.com>"]
description = "Virtual distributed filesystem engine that powers Spacedrive."
edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
version = "0.1.0"
[features]
default = []
# This feature allows features to be disabled when the Core is running on mobile.
mobile = []
# This feature controls whether the Spacedrive Core contains functionality which requires FFmpeg.
ai = ["dep:sd-ai"]
ffmpeg = ["sd-core-heavy-lifting/ffmpeg", "sd-media-metadata/ffmpeg"]
heif = ["sd-images/heif"]
# FFmpeg support for video thumbnails
ffmpeg = ["dep:sd-ffmpeg"]
[workspace]
members = ["benchmarks", "crates/*"]
[dependencies]
# Inner Core Sub-crates
sd-core-cloud-services = { path = "./crates/cloud-services" }
sd-core-file-path-helper = { path = "./crates/file-path-helper" }
sd-core-heavy-lifting = { path = "./crates/heavy-lifting" }
sd-core-indexer-rules = { path = "./crates/indexer-rules" }
sd-core-prisma-helpers = { path = "./crates/prisma-helpers" }
sd-core-sync = { path = "./crates/sync" }
# Async runtime
async-trait = "0.1"
futures = "0.3"
tokio = { version = "1.40", features = ["full"] }
# Spacedrive Sub-crates
sd-actors = { path = "../crates/actors" }
sd-ai = { path = "../crates/ai", optional = true }
sd-crypto = { path = "../crates/crypto" }
sd-ffmpeg = { path = "../crates/ffmpeg", optional = true }
sd-file-ext = { path = "../crates/file-ext" }
sd-images = { path = "../crates/images", features = ["rspc", "serde", "specta"] }
sd-media-metadata = { path = "../crates/media-metadata" }
sd-old-p2p = { path = "../crates/old-p2p", features = ["specta"] }
sd-old-p2p-block = { path = "../crates/old-p2p/crates/block" }
sd-old-p2p-proto = { path = "../crates/old-p2p/crates/proto" }
sd-old-p2p-tunnel = { path = "../crates/old-p2p/crates/tunnel" }
sd-prisma = { path = "../crates/prisma" }
sd-sync = { path = "../crates/sync" }
sd-task-system = { path = "../crates/task-system" }
sd-utils = { path = "../crates/utils" }
# Workspace dependencies
async-channel = { workspace = true }
async-stream = { workspace = true }
async-trait = { workspace = true }
axum = { workspace = true, features = ["ws"] }
base64 = { workspace = true }
blake3 = { workspace = true }
bytes = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
futures = { workspace = true }
futures-concurrency = { workspace = true }
hyper = { workspace = true, features = ["client", "http1", "server"] }
image = { workspace = true }
itertools = { workspace = true }
libc = { workspace = true }
normpath = { workspace = true, features = ["localization"] }
pin-project-lite = { workspace = true }
prisma-client-rust = { workspace = true, features = ["rspc"] }
regex = { workspace = true }
reqwest = { workspace = true, features = ["json", "native-tls-vendored"] }
rmp-serde = { workspace = true }
rmpv = { workspace = true }
rspc = { workspace = true, features = ["alpha", "axum", "chrono", "unstable", "uuid"] }
sd-cloud-schema = { workspace = true }
serde = { workspace = true, features = ["derive", "rc"] }
serde_json = { workspace = true }
specta = { workspace = true }
strum = { workspace = true, features = ["derive"] }
strum_macros = { workspace = true }
tempfile = { workspace = true }
thiserror = { workspace = true }
tokio-stream = { workspace = true, features = ["fs"] }
tokio-util = { workspace = true, features = ["io"] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
uuid = { workspace = true, features = ["serde", "v4", "v7"] }
# Specific Core dependencies
async-recursion = "1.1"
base91 = "0.1.0"
ctor = "0.2.8"
directories = "5.0"
flate2 = "1.0"
fsevent = "2.1.2"
hex = "0.4.3"
hostname = "0.4.0"
http-body = "1.0"
http-range = "0.1.5"
hyper-util = { version = "0.1.9", features = ["tokio"] }
int-enum = "0.5" # Update blocked due to API breaking changes
mini-moka = "0.10.3"
once_cell = "1.19.0"
serde-hashkey = "0.4.5"
serde_repr = "0.1.19"
serde_with = "3.8"
slotmap = "1.0"
sysinfo = "0.29.11" # Update blocked due to API breaking changes
tar = "0.4.41"
tower-service = "0.3.2"
tracing-appender = "0.2.3"
whoami = "1.5.2"
[dependencies.tokio]
features = ["io-util", "macros", "process", "rt-multi-thread", "sync", "time"]
workspace = true
[dependencies.notify]
default-features = false
features = ["macos_fsevent"]
git = "https://github.com/notify-rs/notify.git"
rev = "c3929ed114"
# Override features of transitive dependencies
[dependencies.openssl]
features = ["vendored"]
version = "0.10.66"
[dependencies.openssl-sys]
features = ["vendored"]
version = "0.9.103"
# Platform-specific dependencies
[target.'cfg(target_os = "macos")'.dependencies]
plist = "1.6"
trash = "5.1"
[target.'cfg(target_os = "linux")'.dependencies]
inotify = "0.11.0"
trash = "5.1"
[target.'cfg(target_os = "windows")'.dependencies]
trash = "5.1"
windows = { features = [
"Win32_Storage_FileSystem",
"Win32_System_IO",
"Win32_System_Ioctl",
"Win32_System_WindowsProgramming"
], version = "0.58" }
[target.'cfg(target_os = "ios")'.dependencies]
icrate = { version = "0.1.2", features = [
"Foundation",
"Foundation_NSFileManager",
"Foundation_NSNumber",
"Foundation_NSString"
# Database
sea-orm = { version = "1.1", features = [
"macros",
"runtime-tokio-rustls",
"sqlx-sqlite",
"uuid",
"with-chrono",
"with-json"
] }
sea-orm-migration = { version = "1.1", features = ["runtime-tokio-rustls", "sqlx-sqlite"] }
sqlx = { version = "0.8", features = ["runtime-tokio-rustls", "sqlite"] }
# API (temporarily disabled)
# axum = "0.7"
# async-graphql = "7.0"
# async-graphql-axum = "7.0"
# Serialization
int-enum = "1.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
strum = { version = "0.26", features = ["derive"] }
toml = "0.8"
# Error handling
anyhow = "1.0"
thiserror = "1.0"
# File operations
blake3 = "1.5" # Content addressing
hex = "0.4" # Hex encoding for volume fingerprints
notify = "6.1" # File system watching
sha2 = "0.10" # SHA-256 hashing for CAS IDs
# Logging
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# Indexer rules engine
futures-concurrency = "7.6"
gix-ignore = { version = "0.11", features = ["serde"] }
globset = { version = "0.4", features = ["serde1"] }
# Job system dependencies
inventory = "0.3" # Automatic job registration
rmp = "0.8" # MessagePack core types
rmp-serde = "1.3" # MessagePack serialization for job state
sd-task-system = { path = "crates/task-system" }
spacedrive-jobs-derive = { path = "crates/spacedrive-jobs-derive" } # Job derive macros
# Media processing dependencies
image = "0.25"
sd-ffmpeg = { path = "crates/ffmpeg", optional = true }
sd-images = { path = "crates/images" }
tokio-rustls = "0.26"
webp = "0.3"
# Networking
# Iroh P2P networking
iroh = "0.28"
iroh-blobs = "0.28"
iroh-gossip = "0.28"
iroh-net = "0.28"
# Serialization for protocols
serde_cbor = "0.11"
# Cryptography for signing (backward compatibility)
ed25519-dalek = "2.1"
# Legacy networking (kept for compatibility during transition)
aes-gcm = "0.10" # AES-GCM encryption for secure storage
argon2 = "0.5" # Password derivation
async-stream = "0.3" # File streaming
backoff = "0.4" # Retry logic
bincode = "2.0.0-rc.3" # Efficient encoding
mdns-sd = "0.13" # mDNS service discovery (DEPRECATED - use libp2p DHT)
ring = "0.16" # Crypto primitives
snow = "0.9" # Noise Protocol encryption (DEPRECATED - use libp2p noise)
# futures-util = "0.3" # WebSocket utilities (disabled for now)
rcgen = "0.11" # Certificate generation (DEPRECATED - use libp2p noise)
rustls = { version = "0.23", features = [
"aws_lc_rs"
] } # TLS implementation (DEPRECATED - use libp2p noise)
tokio-stream = "0.1" # Async streams
# BIP39 wordlist support
bip39 = "2.0"
# Additional cryptography
chacha20poly1305 = "0.10" # Authenticated encryption for chunk-level security
hkdf = "0.12" # Key derivation function for session keys
hmac = "0.12"
x25519-dalek = "2.0"
# Network utilities
if-watch = "3.0"
local-ip-address = "0.5"
# colored already defined above
# Utils
chrono = { version = "0.4", features = ["serde"] }
dirs = "5.0"
once_cell = "1.20"
rand = "0.8" # Random number generation for secure delete
tempfile = "3.14" # Temporary directories for testing
uuid = { version = "1.11", features = ["serde", "v4", "v5", "v7"] }
whoami = "1.5"
# Secure storage
keyring = "3.6"
# CLI dependencies
clap = { version = "4.5", features = ["derive", "env"] }
colored = "2.1"
comfy-table = "7.1"
console = "0.15"
dialoguer = "0.11"
indicatif = "0.17"
owo-colors = "4.1"
supports-color = "3.0"
[build-dependencies]
colored = "2.1"
comfy-table = "7.1"
console = "0.15"
crossterm = "0.28"
dialoguer = "0.11"
indicatif = "0.17"
owo-colors = "4.1"
ratatui = "0.29"
supports-color = "3.0"
vergen = { version = "8", features = ["cargo", "git", "gitcl"] }
# Platform specific
[target.'cfg(unix)'.dependencies]
libc = "0.2"
[[bin]]
name = "spacedrive"
path = "src/bin/cli.rs"
[target.'cfg(target_os = "android")'.dependencies]
tracing-android = "0.2.0"
[dev-dependencies]
# Workspace dependencies
tracing-test = { workspace = true }
# Specific Core dependencies
boxcar = "0.2.5"
pretty_assertions = "1.4"
tempfile = "3.14"
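The `ffmpeg = ["dep:sd-ffmpeg"]` line in the `[features]` table above uses Cargo's `dep:` syntax: the optional `sd-ffmpeg` crate is only compiled in when the feature is enabled, and code selects behavior with `#[cfg(feature = ...)]`. A small runnable sketch of that pattern (the function names here are hypothetical, not from the Spacedrive codebase):

```rust
// Compiled only when building with `--features ffmpeg`;
// would delegate to the optional sd-ffmpeg crate.
#[cfg(feature = "ffmpeg")]
fn thumbnail_backend() -> &'static str {
    "ffmpeg"
}

// Fallback used when the feature is off: image-based thumbnails only.
#[cfg(not(feature = "ffmpeg"))]
fn thumbnail_backend() -> &'static str {
    "image-only"
}

fn main() {
    // Built without the feature, the fallback branch is the one that exists.
    println!("backend: {}", thumbnail_backend());
}
```

Gating at compile time rather than runtime keeps FFmpeg out of builds (e.g. mobile) that cannot ship it, which matches the commented purpose of the feature flags above.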


@@ -293,7 +293,7 @@ Currently working features:
 ```bash
 # Clone and build
 git clone https://github.com/spacedriveapp/spacedrive
-cd spacedrive/core-new
+cd spacedrive/core
 cargo build --release
 # Try the CLI


@@ -0,0 +1,37 @@
[package]
edition = "2021"
name = "sd-bench"
version = "0.1.0"
[dependencies]
anyhow = "1.0"
async-trait = "0.1"
blake3 = "1.5"
chrono = { version = "0.4", features = ["serde"] }
clap = { version = "4.5", features = ["derive", "env"] }
dirs = "5.0"
humantime = "2.1"
humantime-serde = "1.1"
indicatif = "0.17"
rand = "0.8"
regex = "1.10"
sd-core = { path = ".." }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_with = { version = "3.9", features = ["json"] }
serde_yaml = "0.9"
sysinfo = { version = "0.30", default-features = false, features = ["multithread"] }
tempfile = "3.14"
tokio = { version = "1.40", features = ["full"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
uuid = { version = "1.11", features = ["serde", "v4"] }
walkdir = "2.5"
[lib]
name = "sd_bench"
path = "src/lib.rs"
[[bin]]
name = "sd-bench"
path = "src/bin/sd-bench-new.rs"


@@ -21,7 +21,7 @@
   },
   "id": "6a501e72-da12-43bc-a4f6-e336ec7a1d56",
   "location_paths": [
-    "/Users/jamespine/Projects/spacedrive/core-new/benchdata/shape_large"
+    "/Users/jamespine/Projects/spacedrive/core/benchdata/shape_large"
   ],
   "recipe_name": "shape_large",
   "timestamp_utc": "2025-08-10T09:12:02.347142+00:00"

Some files were not shown because too many files have changed in this diff.