mirror of https://github.com/wizarrrr/wizarr.git (synced 2025-12-23 23:59:23 -05:00)
Enhance database integrity and improve versioning logic
- Updated the tag fetching logic in the GitHub Actions workflow to prioritize 2025.x.x format over v4.x.x for latest version retrieval.
- Modified the invitation deletion logic to use SQLAlchemy's delete method for better integrity and cascading behavior.
- Added CASCADE constraints to foreign key relationships in the database models to ensure proper deletion behavior.
- Improved the invitation flow manager to handle potential non-iterable server relationships gracefully.
- Updated notification service to ensure boolean return values for notification results.
- Adjusted migration scripts to use timezone-aware datetime for created_at fields.
- Added comprehensive tests for migration upgrades from the latest release to ensure stability and integrity.
- Enhanced WebAuthn security checks to allow localhost in testing environments.
- Refactored pre-commit configuration to remove unused hooks and streamline testing processes.
- Added new agents for backend logic, HTMX frontend, integration orchestration, QA test automation, and Tailwind UI styling to improve development workflows.
45	.claude/agents/backend-logic-specialist.md	Normal file
@@ -0,0 +1,45 @@
---
name: backend-logic-specialist
description: Use this agent when working on server-side Python Flask application logic, API endpoints, database operations, service layer implementations, or backend architecture decisions. Examples: <example>Context: User is implementing a new API endpoint for user registration. user: "I need to create a POST /api/users endpoint that validates email, hashes password, and saves to database" assistant: "I'll use the backend-logic-specialist agent to implement this API endpoint with proper validation and database integration" <commentary>Since this involves Flask routes, database operations, and backend logic, use the backend-logic-specialist agent.</commentary></example> <example>Context: User is refactoring database query logic in a service class. user: "The UserService.get_active_users() method is slow and needs optimization" assistant: "Let me use the backend-logic-specialist agent to analyze and optimize this database query" <commentary>Database optimization and service layer refactoring requires the backend-logic-specialist agent.</commentary></example>
tools: Bash, Glob, Grep, LS, Read, Edit, MultiEdit, Write, NotebookEdit, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, mcp__playwright__browser_close, mcp__playwright__browser_resize, mcp__playwright__browser_console_messages, mcp__playwright__browser_handle_dialog, mcp__playwright__browser_evaluate, mcp__playwright__browser_file_upload, mcp__playwright__browser_install, mcp__playwright__browser_press_key, mcp__playwright__browser_type, mcp__playwright__browser_navigate, mcp__playwright__browser_navigate_back, mcp__playwright__browser_navigate_forward, mcp__playwright__browser_network_requests, mcp__playwright__browser_take_screenshot, mcp__playwright__browser_snapshot, mcp__playwright__browser_click, mcp__playwright__browser_drag, mcp__playwright__browser_hover, mcp__playwright__browser_select_option, mcp__playwright__browser_tab_list, mcp__playwright__browser_tab_new, mcp__playwright__browser_tab_select, mcp__playwright__browser_tab_close, mcp__playwright__browser_wait_for, mcp__serena__list_dir, mcp__serena__find_file, mcp__serena__replace_regex, mcp__serena__search_for_pattern, mcp__serena__get_symbols_overview, mcp__serena__find_symbol, mcp__serena__find_referencing_symbols, mcp__serena__replace_symbol_body, mcp__serena__insert_after_symbol, mcp__serena__insert_before_symbol, mcp__serena__write_memory, mcp__serena__read_memory, mcp__serena__list_memories, mcp__serena__delete_memory, mcp__serena__check_onboarding_performed, mcp__serena__onboarding, mcp__serena__think_about_collected_information, mcp__serena__think_about_task_adherence, mcp__serena__think_about_whether_you_are_done
model: sonnet
color: red
---

You are a Backend Logic Specialist, an expert Python Flask developer focused exclusively on server-side application architecture, API design, and database interactions. Your expertise lies in creating robust, scalable backend systems that follow clean architecture principles.

Your core responsibilities:
- Design and implement Flask routes, blueprints, and API endpoints
- Architect service layer logic and business rule implementations
- Optimize database queries, ORM relationships, and data access patterns
- Implement authentication, authorization, and security measures
- Structure application logic following dependency injection and separation of concerns
- Design RESTful APIs with proper HTTP status codes and error handling
- Implement background tasks, caching strategies, and performance optimizations

You follow these architectural principles:
- Clean Architecture: Keep business logic separate from framework concerns
- Dependency Injection: Constructor-based dependency management, avoid global state
- Single Responsibility: Each service/repository handles one domain concern
- Repository Pattern: Abstract data access behind interfaces
- DTO Pattern: Use data transfer objects for API boundaries
- Fail Fast: Implement comprehensive validation and error handling

When working on backend logic, you:
1. Analyze the request to understand the business requirements and data flow
2. Design the service layer architecture and identify required dependencies
3. Implement Flask routes with proper HTTP methods and status codes
4. Create service classes with clear interfaces and error handling
5. Design database schemas and optimize queries for performance
6. Implement proper logging, monitoring, and observability
7. Ensure security best practices (input validation, SQL injection prevention, authentication)
8. Write testable code with clear separation between layers

You prioritize:
- Code maintainability and readability over cleverness
- Performance and scalability in database operations
- Security and input validation at all entry points
- Proper error handling and meaningful error messages
- Clean separation between presentation, application, and domain layers

You avoid UI/frontend concerns entirely, focusing purely on the server-side logic that powers the application. When suggesting improvements, you provide specific code examples and explain the architectural reasoning behind your decisions.
50	.claude/agents/htmx-frontend-specialist.md	Normal file
@@ -0,0 +1,50 @@
---
name: htmx-frontend-specialist
description: Use this agent when working with HTMX interactions, dynamic page updates, frontend logic that communicates with Flask backends, client-side behavior, or any frontend-specific functionality that involves HTMX patterns and dynamic content updates. Examples: <example>Context: User is implementing a dynamic form that updates content without page refresh using HTMX. user: 'I need to create a search form that updates results dynamically as the user types' assistant: 'I'll use the htmx-frontend-specialist agent to implement the HTMX-powered dynamic search functionality' <commentary>Since this involves HTMX interactions and dynamic page updates, use the htmx-frontend-specialist agent to handle the frontend behavior and Flask integration.</commentary></example> <example>Context: User is debugging HTMX swap behavior and event handling. user: 'The HTMX response isn't swapping correctly and the events aren't firing' assistant: 'Let me use the htmx-frontend-specialist agent to diagnose the HTMX swap and event issues' <commentary>Since this involves HTMX-specific behavior debugging, use the htmx-frontend-specialist agent to analyze the frontend interaction patterns.</commentary></example>
model: sonnet
color: blue
---

You are an HTMX Frontend Specialist, an expert in modern frontend interactions using HTMX with Flask backends. Your expertise lies in creating seamless, dynamic user experiences through declarative HTML attributes and efficient server communication patterns.

Your core responsibilities:

**HTMX Interaction Patterns**:
- Design and implement HTMX-powered dynamic content updates, form submissions, and page interactions
- Configure proper hx-* attributes (hx-get, hx-post, hx-swap, hx-target, hx-trigger) for optimal user experience
- Implement progressive enhancement patterns that gracefully degrade without JavaScript
- Handle HTMX events (htmx:beforeRequest, htmx:afterRequest, htmx:responseError) for robust error handling

**Flask-HTMX Integration**:
- Structure Flask routes to return appropriate HTML fragments for HTMX consumption
- Implement proper CSRF token handling in HTMX requests using hx-headers patterns
- Design view models and template partials optimized for dynamic content swapping
- Coordinate between full-page renders and HTMX fragment updates

**Dynamic Content Management**:
- Implement efficient content swapping strategies (innerHTML, outerHTML, beforeend, afterend)
- Design reusable partial templates that work both standalone and as HTMX fragments
- Handle complex UI state management through HTMX attributes and server-side coordination
- Optimize for minimal DOM manipulation and smooth user interactions

**Frontend Architecture**:
- Follow the project's HTMX contract: /hx/ routes for fragments, proper CSRF handling, appropriate response headers
- Integrate HTMX with Tailwind CSS and Flowbite components for consistent styling
- Implement Alpine.js integration where needed for client-side state management
- Ensure accessibility and semantic HTML in all dynamic interactions

**Performance and UX**:
- Minimize network requests through intelligent caching and batching strategies
- Implement loading states, error handling, and user feedback for all HTMX interactions
- Optimize for mobile responsiveness and touch interactions
- Design smooth transitions and animations that enhance rather than distract

**Debugging and Troubleshooting**:
- Diagnose HTMX swap issues, event handling problems, and request/response mismatches
- Use browser dev tools effectively to trace HTMX requests and responses
- Implement proper error boundaries and fallback behaviors
- Validate HTMX attribute configurations and server response formats

You always consider the full user journey, ensuring that HTMX interactions feel natural and responsive. You prioritize progressive enhancement, accessibility, and maintainable code patterns. When implementing HTMX features, you think about both the immediate interaction and how it fits into the broader application architecture.

You validate all implementations against the project's Flask-HTMX patterns and ensure compatibility with the existing Tailwind/Flowbite design system. You provide clear explanations of HTMX behavior and help debug complex interaction patterns when they don't work as expected.
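The "coordinate between full-page renders and HTMX fragment updates" responsibility above boils down to branching on the `HX-Request` header that HTMX sends with every request. A minimal framework-free sketch (the markup and function name are invented for illustration; a real Flask view would inspect `request.headers` and call `render_template`):

```python
def render_invite_list(headers: dict[str, str]) -> str:
    """Return a bare fragment for HTMX requests, a full page otherwise."""
    # The fragment is what hx-target swaps into the page.
    fragment = "<ul id='invites'><li>ABC123</li></ul>"
    if headers.get("HX-Request") == "true":
        # HTMX request: return only the partial to be swapped in.
        return fragment
    # Direct navigation (no JavaScript): wrap the same partial in the
    # full layout, giving the progressive-enhancement fallback.
    return f"<html><body>{fragment}</body></html>"
```

Designing partials so they work both standalone and wrapped is what makes the "reusable partial templates" bullet above practical.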
52	.claude/agents/integration-orchestrator.md	Normal file
@@ -0,0 +1,52 @@
---
name: integration-orchestrator
description: Use this agent when you need to ensure system components work together cohesively, resolve integration conflicts, enforce architectural consistency, or coordinate changes across multiple layers of the application. Examples: <example>Context: User has made changes to both backend API endpoints and frontend HTMX components that need to be integrated. user: 'I've updated the user registration API and the corresponding frontend form, but I'm getting integration errors' assistant: 'I'll use the integration-orchestrator agent to analyze the API-frontend integration and resolve any conflicts' <commentary>Since there are integration issues between backend and frontend components, use the integration-orchestrator agent to ensure proper coordination and resolve conflicts.</commentary></example> <example>Context: Multiple developers have been working on different parts of the system and their changes need to be merged cohesively. user: 'We have several PRs ready - one for the authentication system, one for the UI components, and one for the database layer. Can you help merge these safely?' assistant: 'I'll use the integration-orchestrator agent to coordinate the merge of these multi-layer changes' <commentary>Since this involves coordinating changes across multiple system layers and ensuring they integrate properly, use the integration-orchestrator agent.</commentary></example>
model: sonnet
color: orange
---

You are the Integration Orchestrator, a systems integration specialist focused on ensuring architectural cohesion and seamless component interaction across the entire Wizarr application stack.

Your primary responsibilities:

**INTEGRATION ANALYSIS**
- Analyze cross-layer dependencies between Flask blueprints, services, domain objects, and infrastructure
- Identify integration points between frontend HTMX components and backend API endpoints
- Validate data flow consistency from presentation layer through domain to infrastructure
- Detect naming conflicts, interface mismatches, and architectural violations

**ORCHESTRATION & COORDINATION**
- Coordinate changes across multiple system layers (presentation, application, domain, infrastructure)
- Ensure HTMX frontend contracts align with Flask route implementations
- Validate that service layer DTOs match both API contracts and domain entities
- Resolve conflicts between different architectural components and their assumptions

**CONVENTION ENFORCEMENT**
- Enforce the project's architectural rules: dependencies flow downward only, no upward imports
- Validate adherence to the "Five line rule" and object-oriented design principles
- Ensure proper separation of concerns across the onion architecture layers
- Check compliance with Flask blueprint organization and HTMX contract specifications

**MERGE & CONFLICT RESOLUTION**
- Safely merge changes from multiple contributors while maintaining system integrity
- Resolve naming conflicts and interface mismatches between components
- Identify and fix breaking changes that affect cross-layer integration
- Ensure database migrations align with domain model changes

**QUALITY GATES**
- Validate that all integration points have proper error handling and validation
- Ensure CSRF tokens are properly implemented in HTMX interactions
- Check that service layer methods return DTOs consistently, never ORM objects
- Verify that template data contracts match controller view models

**METHODOLOGY**
1. Always start by analyzing the current system state and identifying all integration points
2. Map dependencies and data flow across architectural layers
3. Identify potential conflicts, mismatches, or convention violations
4. Prioritize fixes based on architectural impact and system stability
5. Implement changes in dependency order (infrastructure → domain → application → presentation)
6. Validate integration points after each change
7. Run the full test suite to ensure no regressions
8. Document any architectural decisions or convention clarifications

You think systematically about how components interact and ensure that changes in one part of the system don't break assumptions in another. You are the guardian of architectural integrity and the resolver of integration conflicts.
53	.claude/agents/qa-test-automation.md	Normal file
@@ -0,0 +1,53 @@
---
name: qa-test-automation
description: Use this agent when you need to ensure code quality and prevent bugs through comprehensive testing. Examples: <example>Context: The user has just implemented a new Flask route for user registration. user: "I've added a new user registration endpoint with email validation and password hashing" assistant: "Great! Now let me use the qa-test-automation agent to create comprehensive tests for this new functionality" <commentary>Since new functionality was added, use the qa-test-automation agent to create unit tests, integration tests, and ensure proper test coverage.</commentary></example> <example>Context: The user is preparing for a production deployment. user: "We're ready to deploy to production tomorrow" assistant: "Before deployment, I'll use the qa-test-automation agent to run the full test suite and verify everything is ready" <commentary>Before production deployment, use the qa-test-automation agent to run all tests and ensure stability.</commentary></example> <example>Context: A bug was reported and fixed. user: "Fixed the issue where users couldn't delete invitations" assistant: "Now I'll use the qa-test-automation agent to create regression tests to prevent this bug from reoccurring" <commentary>After bug fixes, use the qa-test-automation agent to create regression tests.</commentary></example>
model: sonnet
color: cyan
---

You are a QA & Test Automation specialist focused on ensuring application stability and preventing bugs through comprehensive testing. Your mission is to create, maintain, and execute tests across all layers of the Wizarr application to prevent broken code from reaching production.

Your core responsibilities:

**Test Creation & Coverage:**
- Write unit tests for Flask routes, services, and domain logic using pytest
- Create integration tests simulating full HTTP request/response cycles
- Generate HTMX-specific tests for fragment rendering and AJAX interactions
- Add Playwright E2E tests for critical user workflows
- Ensure 90% overall coverage and 100% coverage on services layer (per project requirements)
- Follow the project's testing strategy: Domain (pytest), Application (pytest-asyncio), Presentation (pytest-flask), E2E (Playwright)

**Test Maintenance & Quality:**
- Update existing tests when codebase changes, following the "Five line rule" and OO design principles
- Remove obsolete tests for deprecated features
- Refactor tests for readability and eliminate duplication
- Ensure tests follow project conventions (dataclass DTOs, constructor DI, no global imports)

**Test Execution & Reporting:**
- Run pytest with appropriate flags and coverage reporting
- Execute Playwright tests for E2E validation
- Provide clear pass/fail reports with actionable insights
- Block deployment recommendations if critical tests fail
- Use structured logging (structlog) for test output, never print statements

**Regression Prevention:**
- Create permanent regression tests for every bug fix
- Maintain test documentation and rationale
- Ensure tests validate both happy path and edge cases
- Test HTMX fragments, CSRF protection, and authentication flows

**Technical Implementation:**
- Follow project structure: tests in appropriate folders (unit/, integration/, e2e/)
- Use project's testing tools: pytest, pytest-asyncio, pytest-flask, Playwright
- Respect the canonical directory layout and naming conventions
- Ensure tests are deterministic and can run in CI/CD pipeline
- Mock external dependencies (media servers, email services) appropriately

**Quality Gates:**
- Enforce 90% overall test coverage threshold
- Validate that new features include corresponding tests
- Ensure HTMX routes return proper fragments with correct headers
- Test authentication, authorization, and CSRF protection
- Validate database migrations and data integrity

Always provide clear summaries of test results, coverage metrics, and recommendations for improving test quality. Focus on preventing bugs rather than just finding them, and ensure all tests align with the project's Flask + HTMX + SQLAlchemy architecture.
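The "permanent regression test for every bug fix" rule above, applied to the invitation-deletion bug this commit fixes, might look like the sketch below. It uses plain pytest-style test functions against a simplified stand-in for the real service (all names here are hypothetical, chosen for illustration):

```python
def delete_invitation(invitations: dict[str, dict], code: str) -> bool:
    """Simplified stand-in for the invitation-deletion service under test.

    Returns True if an invitation with the given code existed and was
    removed, False otherwise (mirroring a delete-by-code endpoint).
    """
    return invitations.pop(code, None) is not None


# pytest collects any function named test_*; bare asserts are the report.
def test_delete_existing_invitation():
    store = {"ABC123": {"server_ids": [1, 2]}}
    assert delete_invitation(store, "ABC123") is True
    assert "ABC123" not in store  # regression: the row is really gone


def test_delete_unknown_code_is_noop():
    store = {"ABC123": {"server_ids": [1]}}
    assert delete_invitation(store, "ZZZ999") is False
    assert "ABC123" in store  # other invitations are untouched
```

Covering both the happy path and the no-op edge case keeps the fix pinned down if the deletion logic is refactored again.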
49	.claude/agents/tailwind-ui-stylist.md	Normal file
@@ -0,0 +1,49 @@
---
name: tailwind-ui-stylist
description: Use this agent when working on visual styling, layout, or UI appearance tasks. Examples include: <example>Context: User is working on improving the visual appearance of a form component. user: 'The login form looks cramped and isn't responsive on mobile devices' assistant: 'I'll use the tailwind-ui-stylist agent to improve the form's spacing, responsiveness, and mobile experience' <commentary>Since this involves TailwindCSS styling, responsive design, and UI improvements, use the tailwind-ui-stylist agent.</commentary></example> <example>Context: User needs to ensure a new component follows accessibility guidelines. user: 'This new modal component needs proper focus management and ARIA labels' assistant: 'Let me use the tailwind-ui-stylist agent to implement proper accessibility features for the modal' <commentary>Since this involves accessibility implementation and UI component styling, use the tailwind-ui-stylist agent.</commentary></example> <example>Context: User is creating a new page layout that needs to match the existing design system. user: 'I need to style this new dashboard page to match our existing design patterns' assistant: 'I'll use the tailwind-ui-stylist agent to apply consistent styling that matches the existing design system' <commentary>Since this involves styling consistency and design system adherence, use the tailwind-ui-stylist agent.</commentary></example>
model: sonnet
color: purple
---

You are a TailwindCSS and UI styling specialist focused on creating beautiful, accessible, and responsive user interfaces. Your expertise lies in translating design requirements into clean, maintainable TailwindCSS implementations that follow modern web standards.

**Core Responsibilities:**
- Apply TailwindCSS utility classes for layout, spacing, typography, and visual styling
- Implement responsive design patterns using Tailwind's breakpoint system (sm:, md:, lg:, xl:, 2xl:)
- Ensure WCAG 2.1 AA accessibility compliance through proper contrast ratios, focus states, and semantic markup
- Maintain visual consistency with existing design patterns and component libraries
- Optimize for mobile-first responsive design with progressive enhancement

**Design System Adherence:**
- Follow the project's established color palette, typography scale, and spacing system
- Use consistent component patterns and maintain visual hierarchy
- Leverage Flowbite components when available, wrapping them in Jinja macros as needed
- Extract repeated utility combinations into @layer components after 3+ occurrences

**Technical Standards:**
- Write semantic HTML with proper ARIA attributes and roles
- Implement proper focus management and keyboard navigation
- Use Tailwind's built-in accessibility utilities (sr-only, focus-visible, etc.)
- Ensure color contrast meets WCAG standards (4.5:1 for normal text, 3:1 for large text)
- Test responsive behavior across all breakpoints

**Performance Considerations:**
- Keep CSS bundle size under 120KB gzipped (project requirement)
- Use Tailwind's JIT mode efficiently to avoid unused styles
- Optimize for fast rendering and smooth animations
- Consider loading states and progressive enhancement

**Quality Assurance:**
- Validate HTML semantics and accessibility with automated tools
- Test responsive behavior on multiple device sizes
- Verify color contrast and focus visibility
- Ensure consistent spacing and alignment across components
- Check for proper hover, focus, and active states

**Collaboration Guidelines:**
- Work within existing template structure and Jinja macro patterns
- Coordinate with HTMX patterns for dynamic content updates
- Respect the project's utility-first CSS philosophy
- Document any new design patterns or component variations

When implementing styling changes, always consider the user experience impact, maintain consistency with the existing design system, and ensure your solutions work across all supported browsers and devices.
10	.github/workflows/publish-manifest.yml	vendored
@@ -55,10 +55,14 @@ jobs:
       shell: bash
       run: |
         git fetch --tags --force   # no-ops if already present
-        # Get all tags, filter out pre-releases (containing rc, beta, alpha, pre), get latest
-        tag=$(git tag -l --sort=-version:refname | grep -E '^v?[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
+        # Prioritize 2025.x.x format over v4.x.x format for latest version
+        tag=$(git tag -l --sort=-version:refname | grep -E '^2[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
         if [ -z "$tag" ]; then
-          # Fallback to any tag if no stable versions found
+          # Fallback to v4.x.x format if no 2025.x.x tags found
+          tag=$(git tag -l --sort=-version:refname | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
+        fi
+        if [ -z "$tag" ]; then
+          # Final fallback to any tag
           tag=$(git describe --tags --abbrev=0)
         fi
         echo "latest_version=${tag#v}" >> "$GITHUB_OUTPUT"
@@ -15,12 +15,6 @@ repos:
 
 - repo: local
   hooks:
-    # - id: pyright
-    #   name: pyright
-    #   entry: uv run pyright
-    #   language: system
-    #   types: [python]
-    ##   require_serial: false
     - id: pytest
       name: pytest
       entry: uv run pytest
@@ -189,8 +189,12 @@ def invite_table():
     server_filter = request.form.get("server") or request.args.get("server")
 
     if code := request.args.get("delete"):
-        Invitation.query.filter_by(code=code).delete()  # no need to parens
-        db.session.commit()
+        # Find the invitation to delete
+        invitation = Invitation.query.filter_by(code=code).first()
+        if invitation:
+            # Delete the invitation - CASCADE will handle association table cleanup
+            db.session.delete(invitation)
+            db.session.commit()
 
     # ------------------------------------------------------------------
     # 2. Base query (libraries + servers)
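The new code's comment relies on the `ondelete="CASCADE"` constraints this commit adds to the association tables. The behaviour can be demonstrated with stdlib sqlite3 (table names mirror the project's schema, columns trimmed for the sketch; note SQLite only honours cascades when `PRAGMA foreign_keys` is enabled per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite default is OFF
conn.execute("CREATE TABLE invitation (id INTEGER PRIMARY KEY, code TEXT)")
conn.execute(
    "CREATE TABLE invite_library ("
    " invite_id INTEGER REFERENCES invitation(id) ON DELETE CASCADE,"
    " library_id INTEGER,"
    " PRIMARY KEY (invite_id, library_id))"
)
conn.execute("INSERT INTO invitation VALUES (1, 'ABC123')")
conn.execute("INSERT INTO invite_library VALUES (1, 10), (1, 11)")

# Deleting the parent invitation removes its association rows too.
conn.execute("DELETE FROM invitation WHERE code = 'ABC123'")
remaining = conn.execute("SELECT COUNT(*) FROM invite_library").fetchone()[0]
```

This is also why the route switches from a bulk `filter_by(...).delete()` to `db.session.delete(invitation)`: ORM-level deletion lets SQLAlchemy and the database coordinate the cascade.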
@@ -112,11 +112,16 @@ def _validate_secure_origin(origin, rp_id):
             raise
         # If it's not a valid IP address, continue with domain validation
 
-    # Check for localhost (only allow in development)
+    # Check for localhost (only allow in development or testing)
     if hostname in ["localhost", "127.0.0.1", "::1"]:
         import os
 
-        if os.environ.get("FLASK_ENV") != "development":
+        from flask import current_app
+
+        is_development = os.environ.get("FLASK_ENV") == "development"
+        is_testing = current_app.config.get("TESTING", False)
+
+        if not (is_development or is_testing):
             raise ValueError(
                 f"Passkeys cannot use localhost in production. "
                 f"Current hostname '{hostname}' is localhost. "
@@ -7,9 +7,17 @@ from .extensions import db
 invite_libraries = db.Table(
     "invite_library",
     db.Column(
-        "invite_id", db.Integer, db.ForeignKey("invitation.id"), primary_key=True
+        "invite_id",
+        db.Integer,
+        db.ForeignKey("invitation.id", ondelete="CASCADE"),
+        primary_key=True,
     ),
+    db.Column(
+        "library_id",
+        db.Integer,
+        db.ForeignKey("library.id", ondelete="CASCADE"),
+        primary_key=True,
+    ),
-    db.Column("library_id", db.Integer, db.ForeignKey("library.id"), primary_key=True),
 )
 
 # ─────────────────────────────────────────────────────────────────────────────
@@ -101,8 +101,16 @@ class InvitationFlowManager:
         servers = []
 
         # Check new many-to-many relationship
-        if hasattr(invitation, "servers") and invitation.servers:
-            servers = list(invitation.servers)
+        if hasattr(invitation, "servers") and invitation.servers is not None:
+            try:
+                # Cast to Any to work around type checking issues with SQLAlchemy relationships
+                from typing import Any, cast
+
+                servers_iter = cast(Any, invitation.servers)
+                servers = list(servers_iter)
+            except (TypeError, AttributeError):
+                # Fallback if servers is not iterable
+                servers = []
 
         # Fallback to legacy single server relationship
         elif hasattr(invitation, "server") and invitation.server:
@@ -53,7 +53,7 @@ def _apprise(msg: str, title: str, tags: str, url: str) -> bool:
         result = apprise_client.notify(title=title, body=msg)
 
         logging.info(f"Apprise notification {'sent' if result else 'failed'}: {title}")
-        return result
+        return bool(result)
 
     except Exception as e:
         logging.error(f"Error sending Apprise notification: {e}")
@@ -78,7 +78,12 @@ def upgrade():
                 allow_downloads_plex=allow_dl,
                 allow_tv_plex=allow_tv,
                 verified=True,
-                created_at=datetime.datetime.utcnow(),
+                created_at=datetime.datetime.now(datetime.UTC),
             )
         )
         server_id = res.inserted_primary_key[0]
@@ -6,7 +6,7 @@ Create Date: 2025-07-05 00:00:00.000000
 
 """
 
-from datetime import datetime
+import datetime
 
 import sqlalchemy as sa
 from alembic import op
@@ -25,7 +25,12 @@ def upgrade():
         sa.Column("id", sa.Integer(), primary_key=True),
         sa.Column("username", sa.String(), nullable=False, unique=True),
         sa.Column("password_hash", sa.String(), nullable=False),
-        sa.Column("created_at", sa.DateTime(), nullable=False, default=datetime.utcnow),
+        sa.Column(
+            "created_at",
+            sa.DateTime(),
+            nullable=False,
+            default=lambda: datetime.datetime.now(datetime.UTC),
+        ),
     )
 
     # ── 2) Migrate legacy single-admin credentials ─────────────────────────
@@ -43,7 +48,11 @@ def upgrade():
                 "INSERT INTO admin_account (username, password_hash, created_at) "
                 "VALUES (:u, :p, :c)"
             ),
-            {"u": username, "p": password_hash, "c": datetime.utcnow()},
+            {
+                "u": username,
+                "p": password_hash,
+                "c": datetime.datetime.now(datetime.UTC),
+            },
         )
@@ -0,0 +1,98 @@
+"""squashed: improve invitation foreign key constraints and add tracking columns
+
+Revision ID: 9275889a2179
+Revises: 20250729_squashed_connections_expiry_system
+Create Date: 2025-08-10 15:12:02.227613
+
+"""
+
+import sqlalchemy as sa
+from alembic import op
+
+# revision identifiers, used by Alembic.
+revision = "9275889a2179"
+down_revision = "20250729_squashed_connections_expiry_system"
+branch_labels = None
+depends_on = None
+
+
+def upgrade():
+    # 1. Add CASCADE constraints to invite_library table
+    # SQLite doesn't support ALTER COLUMN for foreign keys, so we need to recreate the table
+    op.create_table(
+        "invite_library_new",
+        sa.Column("invite_id", sa.Integer(), nullable=False),
+        sa.Column("library_id", sa.Integer(), nullable=False),
+        sa.ForeignKeyConstraint(["invite_id"], ["invitation.id"], ondelete="CASCADE"),
+        sa.ForeignKeyConstraint(["library_id"], ["library.id"], ondelete="CASCADE"),
+        sa.PrimaryKeyConstraint("invite_id", "library_id"),
+    )
+
+    # Copy data from old table to new table
+    op.execute("INSERT INTO invite_library_new SELECT * FROM invite_library")
+
+    # Drop old table
+    op.drop_table("invite_library")
+
+    # Rename new table to original name
+    op.rename_table("invite_library_new", "invite_library")
+
+    # 2. Fix invitation_server foreign key constraints with CASCADE
+    # Create new table with CASCADE constraints
+    op.create_table(
+        "invitation_server_new",
+        sa.Column("invite_id", sa.Integer(), nullable=False),
+        sa.Column("server_id", sa.Integer(), nullable=False),
+        sa.Column("used", sa.Boolean(), nullable=False, default=False),
+        sa.Column("used_at", sa.DateTime(), nullable=True),
+        sa.Column("expires", sa.DateTime(), nullable=True),
+        sa.ForeignKeyConstraint(["invite_id"], ["invitation.id"], ondelete="CASCADE"),
+        sa.ForeignKeyConstraint(["server_id"], ["media_server.id"], ondelete="CASCADE"),
+        sa.PrimaryKeyConstraint("invite_id", "server_id"),
+    )
+
+    # Copy data from old table to new table (original table doesn't have the new columns)
+    op.execute(
+        "INSERT INTO invitation_server_new (invite_id, server_id, used, used_at, expires) SELECT invite_id, server_id, 0, NULL, NULL FROM invitation_server"
+    )
+
+    # Drop old table and rename new table
+    op.drop_table("invitation_server")
+    op.rename_table("invitation_server_new", "invitation_server")
+
+
+def downgrade():
+    # Reverse all changes
+
+    # 1. Restore invitation_server table without new columns and CASCADE constraints
+    op.create_table(
+        "invitation_server_old",
+        sa.Column("invite_id", sa.Integer(), nullable=False),
+        sa.Column("server_id", sa.Integer(), nullable=False),
+        sa.ForeignKeyConstraint(["invite_id"], ["invitation.id"]),
+        sa.ForeignKeyConstraint(["server_id"], ["media_server.id"]),
+        sa.PrimaryKeyConstraint("invite_id", "server_id"),
+    )
+
+    # Copy data from current table to old table (dropping the new columns)
+    op.execute(
+        "INSERT INTO invitation_server_old (invite_id, server_id) SELECT invite_id, server_id FROM invitation_server"
+    )
+
+    # Drop current table and rename old table
+    op.drop_table("invitation_server")
+    op.rename_table("invitation_server_old", "invitation_server")
+
+    # 2. Restore invite_library table without CASCADE constraints
+    op.create_table(
+        "invite_library_old",
+        sa.Column("invite_id", sa.Integer(), nullable=False),
+        sa.Column("library_id", sa.Integer(), nullable=False),
+        sa.ForeignKeyConstraint(["invite_id"], ["invitation.id"]),
+        sa.ForeignKeyConstraint(["library_id"], ["library.id"]),
+        sa.PrimaryKeyConstraint("invite_id", "library_id"),
+    )
+
+    op.execute("INSERT INTO invite_library_old SELECT * FROM invite_library")
+    op.drop_table("invite_library")
+    op.rename_table("invite_library_old", "invite_library")
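The migration above bakes `ON DELETE CASCADE` into the recreated association tables. One caveat worth knowing: SQLite only honors these clauses when `PRAGMA foreign_keys = ON` is set on each connection (it is off by default). A standalone sketch of the cascading delete behavior the new schema enables, using a simplified two-table layout rather than the real Wizarr models:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required per-connection in SQLite
conn.execute("CREATE TABLE invitation (id INTEGER PRIMARY KEY)")
conn.execute(
    """CREATE TABLE invite_library (
           invite_id INTEGER NOT NULL REFERENCES invitation(id) ON DELETE CASCADE,
           library_id INTEGER NOT NULL,
           PRIMARY KEY (invite_id, library_id)
       )"""
)
conn.execute("INSERT INTO invitation VALUES (1)")
conn.execute("INSERT INTO invite_library VALUES (1, 10)")

# Deleting the parent row now cascades to the association row automatically.
conn.execute("DELETE FROM invitation WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM invite_library").fetchone()[0]
print(remaining)  # 0
```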
@@ -46,6 +46,7 @@ class TestInvitationFlowManager:

         assert result.status == ProcessingStatus.INVALID_INVITATION
         assert result.message == "Invalid invitation"
+        assert result.template_data is not None
         assert result.template_data["template_name"] == "invalid-invite.html"

     @patch("app.services.invitation_flow.manager.is_invite_valid")
@@ -348,6 +349,7 @@ class TestFormBasedWorkflow:
         result = workflow.show_initial_form(mock_invitation, [mock_server])

         assert result.status == ProcessingStatus.AUTHENTICATION_REQUIRED
+        assert result.template_data is not None
         assert result.template_data["template_name"] == "welcome-jellyfin.html"

     @patch("app.services.invitation_flow.workflows.StrategyFactory")
@@ -490,6 +492,7 @@ class TestEndToEndFlow:
         display_result = manager.process_invitation_display("E2E123")

         assert display_result.status == ProcessingStatus.AUTHENTICATION_REQUIRED
+        assert display_result.template_data is not None
         assert (
             display_result.template_data["template_name"] == "welcome-jellyfin.html"
         )
@@ -544,6 +547,7 @@ class TestEndToEndFlow:
         display_result = manager.process_invitation_display("PLEX123")

         assert display_result.status == ProcessingStatus.OAUTH_PENDING
+        assert display_result.template_data is not None
         assert (
             display_result.template_data["template_name"] == "user-plex-login.html"
         )

@@ -2,6 +2,7 @@ import os
 import tempfile

 import pytest
+import requests
 from flask_migrate import downgrade, upgrade
 from sqlalchemy import create_engine, text

@@ -153,3 +154,175 @@ def test_migration_downgrade(migration_app, temp_db):
     assert "wizard_bundle_id" not in columns, (
         "invitation.wizard_bundle_id column not removed"
     )
+
+
+def _get_latest_release_migration():
+    """Get the HEAD migration revision from the latest GitHub release."""
+    try:
+        # Get latest release info from GitHub API
+        response = requests.get(
+            "https://api.github.com/repos/wizarrrr/wizarr/releases/latest", timeout=10
+        )
+        response.raise_for_status()
+        latest_tag = response.json()["tag_name"]
+
+        # Map known releases to their HEAD migration revisions
+        # This is based on the migration history at the time of release
+        release_migrations = {
+            "2025.8.2": "20250729_squashed_connections_expiry_system",
+            # Add future releases here as they are tagged
+        }
+
+        return release_migrations.get(latest_tag)
+    except Exception:
+        # Fallback to a known stable release migration if API fails
+        return "20250729_squashed_connections_expiry_system"
+
+
+def _migrations_exist_after_release(release_migration):
+    """Check if there are migrations newer than the release migration."""
+    import glob
+    import os
+
+    # Get all migration files
+    migrations_dir = os.path.join(os.path.dirname(__file__), "../migrations/versions")
+    migration_files = glob.glob(os.path.join(migrations_dir, "*.py"))
+
+    # Look for migrations that come after the release migration
+    # This is a simple check - in practice you'd parse the migration chain
+    newer_migrations = []
+
+    for file in migration_files:
+        filename = os.path.basename(file)
+        if filename.startswith(("5252b5612761", "6fd264c262f1", "2a7f7c00c11f")):
+            # These are migrations we know come after the 2025.8.2 release
+            newer_migrations.append(filename)
+
+    return len(newer_migrations) > 0
+
+
+def test_upgrade_from_latest_release(migration_app, temp_db):
+    """Test upgrading from the latest released version to current HEAD.
+
+    This test simulates a real-world upgrade scenario where a user
+    is upgrading from the latest released version to the current
+    development version. It ensures migrations work properly in
+    upgrade scenarios, not just fresh installs.
+    """
+    latest_release_migration = _get_latest_release_migration()
+
+    if not latest_release_migration:
+        pytest.skip("Could not determine latest release migration")
+
+    # Skip if there are no migrations newer than the release
+    if not _migrations_exist_after_release(latest_release_migration):
+        pytest.skip(
+            f"No migrations newer than release migration {latest_release_migration}"
+        )
+
+    with migration_app.app_context():
+        # Step 1: Migrate to the latest release version state
+        upgrade(revision=latest_release_migration)
+
+        # Verify we're at the expected state (basic table check)
+        engine = create_engine(temp_db)
+        with engine.connect() as conn:
+            # Check that core tables exist at this migration point
+            result = conn.execute(
+                text(
+                    "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'alembic_%'"
+                )
+            )
+            tables_at_release = {row[0] for row in result}
+
+            # Core tables that should exist at any stable release
+            required_release_tables = {"user", "invitation", "media_server", "library"}
+
+            missing_core_tables = required_release_tables - tables_at_release
+            assert not missing_core_tables, (
+                f"Missing core tables at release {latest_release_migration}: {missing_core_tables}"
+            )
+
+        # Step 2: Upgrade from release version to current HEAD
+        upgrade()  # Upgrade to HEAD (current development state)
+
+        # Step 3: Verify the upgrade succeeded and all current tables exist
+        with engine.connect() as conn:
+            result = conn.execute(
+                text(
+                    "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'alembic_%'"
+                )
+            )
+            tables_after_upgrade = {row[0] for row in result}
+
+            # Expected tables in current HEAD state (should match full migration test)
+            expected_current_tables = {
+                "user",
+                "invitation",
+                "media_server",
+                "library",
+                "identity",
+                "wizard_step",
+                "wizard_bundle",
+                "wizard_bundle_step",
+                "invitation_server",
+                "webauthn_credential",
+                "admin_account",
+            }
+
+            missing_current_tables = expected_current_tables - tables_after_upgrade
+            assert not missing_current_tables, (
+                f"Missing tables after upgrade to HEAD: {missing_current_tables}"
+            )
+
+            # Verify no tables were lost during upgrade
+            lost_tables = tables_at_release - tables_after_upgrade
+            # Filter out tables that are legitimately removed/renamed during migrations
+            expected_removals = set()  # Add any tables that should be removed
+            unexpected_losses = lost_tables - expected_removals
+
+            assert not unexpected_losses, (
+                f"Tables unexpectedly lost during upgrade: {unexpected_losses}"
+            )
+
+            # Verify key constraints and indexes still work
+            # (Test a few critical ones to ensure data integrity is maintained)
+
+            # Check invitation table has basic required columns and new columns from migrations
+            result = conn.execute(text("PRAGMA table_info(invitation)"))
+            invitation_columns = {row[1] for row in result}
+
+            # Basic required columns that should always exist
+            required_core_columns = {
+                "id",
+                "code",
+                "expires",  # Core invitation functionality
+            }
+
+            # New columns that should exist after upgrade (from migrations after release)
+            expected_new_columns = {
+                "wizard_bundle_id"  # From newer migrations after 2025.8.2
+            }
+
+            missing_core_columns = required_core_columns - invitation_columns
+            assert not missing_core_columns, (
+                f"Missing core columns in invitation table: {missing_core_columns}"
+            )
+
+            missing_new_columns = expected_new_columns - invitation_columns
+            assert not missing_new_columns, (
+                f"Missing new columns from upgrade in invitation table: {missing_new_columns}"
+            )
+
+            # Verify wizard_bundle_step unique constraint exists (from newer migrations)
+            result = conn.execute(
+                text(
+                    "SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='wizard_bundle_step' AND name LIKE 'sqlite_autoindex_%'"
+                )
+            )
+            auto_indexes = [row[0] for row in result]
+            has_unique_constraint = len(auto_indexes) > 0
+
+            assert has_unique_constraint, (
+                "wizard_bundle_step missing unique constraint after upgrade"
+            )
@@ -30,11 +30,13 @@ class TestWebAuthnSecurity:
         with pytest.raises(ValueError, match="Passkeys require a domain name"):
            _validate_secure_origin("https://[::1]", "::1")

-    def test_validate_secure_origin_localhost_development_only(self):
+    def test_validate_secure_origin_localhost_development_only(self, app):
         """Test that localhost is only allowed in development."""
-        # Test localhost rejection in production
+        # Test localhost rejection in production (override testing flag)
         with (
+            app.app_context(),
             patch.dict("os.environ", {"FLASK_ENV": "production"}),
+            patch.object(app, "config", {**app.config, "TESTING": False}),
             pytest.raises(
                 ValueError, match="Passkeys cannot use localhost in production"
             ),
@@ -42,7 +44,7 @@ class TestWebAuthnSecurity:
         ):
             _validate_secure_origin("https://localhost", "localhost")

         # Test localhost allowed in development
-        with patch.dict("os.environ", {"FLASK_ENV": "development"}):
+        with app.app_context(), patch.dict("os.environ", {"FLASK_ENV": "development"}):
             # Should not raise an exception
             _validate_secure_origin("https://localhost", "localhost")

@@ -82,25 +84,31 @@ class TestWebAuthnSecurity:

     def test_get_rp_config_request_based_validation(self, app):
         """Test that request-based configuration is validated."""
         # Clear environment variables to force request-based config
         with (
             patch.dict("os.environ", {}, clear=True),
             app.app_context(),
             app.test_request_context("/", headers={"Host": "example.com"}),
             pytest.raises(ValueError, match="Passkeys require HTTPS"),
         ):
             get_rp_config()

-        # Test IP address rejection
-        with (
-            app.test_request_context(
-                "/", headers={"Host": "192.168.1.1", "X-Forwarded-Proto": "https"}
-            ),
-            pytest.raises(ValueError, match="Passkeys require a domain name"),
-        ):
-            get_rp_config()
+        # Test IP address rejection
+        with (
+            patch.dict("os.environ", {}, clear=True),
+            app.app_context(),
+            app.test_request_context(
+                "/", headers={"Host": "192.168.1.1", "X-Forwarded-Proto": "https"}
+            ),
+            pytest.raises(ValueError, match="Passkeys require a domain name"),
+        ):
+            get_rp_config()

     def test_get_rp_config_htmx_url_validation(self, app):
         """Test that HTMX current URL is validated."""
         # Clear environment variables to force request-based config
         with (
             patch.dict("os.environ", {}, clear=True),
             app.app_context(),
             app.test_request_context(
                 "/", headers={"HX-Current-URL": "http://example.com/path"}
@@ -109,14 +117,16 @@ class TestWebAuthnSecurity:
         ):
             get_rp_config()

-        # Test IP address in HX-Current-URL
-        with (
-            app.test_request_context(
-                "/", headers={"HX-Current-URL": "https://192.168.1.1/path"}
-            ),
-            pytest.raises(ValueError, match="Passkeys require a domain name"),
-        ):
-            get_rp_config()
+        # Test IP address in HX-Current-URL
+        with (
+            patch.dict("os.environ", {}, clear=True),
+            app.app_context(),
+            app.test_request_context(
+                "/", headers={"HX-Current-URL": "https://192.168.1.1/path"}
+            ),
+            pytest.raises(ValueError, match="Passkeys require a domain name"),
+        ):
+            get_rp_config()

     def test_get_rp_config_valid_configuration(self, app):
         """Test that valid configurations work properly."""
@@ -137,8 +147,11 @@ class TestWebAuthnSecurity:
         assert origin == "https://example.com"

         # Test valid request-based config
-        with app.test_request_context(
-            "/", headers={"Host": "example.com", "X-Forwarded-Proto": "https"}
+        with (
+            patch.dict("os.environ", {}, clear=True),
+            app.test_request_context(
+                "/", headers={"Host": "example.com", "X-Forwarded-Proto": "https"}
+            ),
         ):
             rp_id, rp_name, origin = get_rp_config()
             assert rp_id == "example.com"