# Code Commenting Guidelines

## Core Philosophy
We generally want to avoid over-commenting and let clear variable and method naming convey intent. We believe in the philosophy: "Before adding a comment, ask yourself how the code can be changed to make the comment unnecessary."
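As a minimal sketch of this philosophy (names here are hypothetical, not from the codebase), a rename can often replace a comment outright:

```python
from dataclasses import dataclass

@dataclass
class Session:
    last_interaction: float  # seconds since epoch
    idle_timeout: float      # seconds

# Before: a vague name forces a comment.
def check(session, t):
    # True if the session has been idle longer than its timeout
    return (t - session.last_interaction) >= session.idle_timeout

# After: the name states the intent, and the comment becomes unnecessary.
def is_idle_past_timeout(session, now):
    return (now - session.last_interaction) >= session.idle_timeout
```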
## Guidelines

### ✅ GOOD Comments - What TO Include
- **Design rationale that is non-obvious**
  - Example: `# Single source of truth for position vs path classification` - Explains architectural decisions
- **Complex domain logic explanations**
  - Example: Multi-line explanation of the entity delegation concept
  - Business rules that aren't obvious from code structure
- **Summaries of complex or non-standard coding approaches**
  - When using unusual patterns or workarounds
  - Algorithm explanations that aren't obvious
- **Design decision rationale**
  - Example: "The general types could be used for these, since all are just name-value pairs. However, by being more specific, we can provide more specific visual and processing"
  - Explains why one approach was chosen over alternatives
- **Mathematical/geometric calculations**
  - Example: `'80.0,40.0',  # top-left (100-20, 50-10)` - Coordinate calculations that are difficult to verify mentally
  - Especially valuable in test cases for validation
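A hypothetical test in this spirit (the function and values are invented for illustration), where comments record the arithmetic behind the expected coordinates so a reader can verify them without re-deriving the trigonometry:

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate (x, y) around center (cx, cy) by the given angle."""
    rad = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))

def test_rotate_180_around_center():
    x, y = rotate_point(120.0, 60.0, 100.0, 50.0, 180.0)
    assert round(x, 1) == 80.0   # 100 - (120 - 100) = 100 - 20
    assert round(y, 1) == 40.0   # 50 - (60 - 50) = 50 - 10
```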
- **Cross-file coordination notes**
  - Example: `# Match SvgItemFactory.NEW_PATH_RADIUS_PERCENT` - Important synchronization between related constants/values in different files
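A sketch of the pattern (module names and values are hypothetical; in practice the two definitions live in separate files that cannot easily import each other):

```python
# svg_item_factory.py (hypothetical module A)
class SvgItemFactory:
    NEW_PATH_RADIUS_PERCENT = 5.0

# svg_path_defaults.py (hypothetical module B)
# Match SvgItemFactory.NEW_PATH_RADIUS_PERCENT - these must stay in sync
# manually; this comment is the cross-file breadcrumb for future editors.
NEW_PATH_RADIUS_PERCENT = 5.0
```

Where the two modules can share a test, a simple equality assertion backs the comment up and turns silent drift into a test failure.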
- **Complex domain abstractions**
  - Example: Multi-line explanation of the LocationItem interface concept
  - Abstract concepts that need implementation guidance
- **Multi-step process/algorithm documentation**
  - Example: `alert_manager.py:40-56` - Breaks down the three distinct checks and explains why HTML is always returned
  - Complex workflows that need step-by-step explanation of the "why"
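A hypothetical sketch of this shape of comment (this is not the actual `alert_manager.py` code; the checks and rationale here are invented to show the pattern of numbering steps and stating the "why"):

```python
def build_alert_response(alerts):
    # Three distinct checks, in order:
    # 1. No alerts: return an empty fragment so the client clears its banner.
    # 2. Only acknowledged alerts: render them dimmed rather than hiding them.
    # 3. Unacknowledged alerts: render at full prominence.
    # HTML is always returned because the response is swapped directly into
    # the page as a fragment, regardless of what the client says it accepts.
    if not alerts:
        return '<div class="alerts"></div>'
    unacknowledged = [a for a in alerts if not a['acknowledged']]
    if not unacknowledged:
        items = ''.join(f'<li class="dimmed">{a["title"]}</li>' for a in alerts)
    else:
        items = ''.join(f'<li>{a["title"]}</li>' for a in unacknowledged)
    return f'<div class="alerts"><ul>{items}</ul></div>'
```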
- **External API/library limitations and workarounds**
  - Example: `wmo_units.py:838-841` - Documents Pint library limitations requiring unit mappings
  - Example: `zoneminder/monitors.py:81-87` - pyzm timezone parsing quirks
  - Critical for understanding why non-obvious code patterns exist
  - Brief expressions of frustration (e.g., "ugh") are acceptable when documenting known pain points
- **External service configuration rationale**
  - Example: `usno.py:59-62` - Documents API priority, rate limits, polling intervals
  - Explains constraints and decisions for external integrations
- **Future extension points**
  - Example: `daily_weather_tracker.py:96-100` - "Future: Add other weather field tracking here"
  - Marks logical insertion points for anticipated features
  - Should be brief hints, not commented-out code blocks
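A sketch of what a brief extension-point hint looks like in context (function names and fields are hypothetical, loosely in the spirit of the `daily_weather_tracker.py` example):

```python
def track_fields(record, daily):
    # Each tracked field updates its daily (min, max) pair.
    _track_min_max(record, daily, 'temperature')
    # Future: add other weather field tracking here (e.g. humidity, pressure)

def _track_min_max(record, daily, field):
    value = record.get(field)
    if value is None:
        return
    lo, hi = daily.get(field, (value, value))
    daily[field] = (min(lo, value), max(hi, value))
```

Note the hint is one line at the exact insertion point, not a commented-out block of speculative code.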
- **Temporal/timing complexity in APIs**
  - Example: `zoneminder/monitors.py:99-110` - Events as intervals vs points, open/closed handling
  - Example: `zoneminder/monitors.py:166-171` - Why polling time cannot advance
  - Critical for understanding time-based edge cases in external systems
- **Bug fix documentation**
  - Example: `zoneminder/monitors.py:132-133` - "This fixes the core bug where..."
  - Documents what was broken and why the current approach fixes it
  - Helps prevent regression
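A hypothetical sketch of such a comment (the function and the bug described are invented for illustration, not taken from `zoneminder/monitors.py`):

```python
def next_poll_start(previous_poll_start, events):
    # This fixes the core bug where the poll window advanced past events
    # that were still open: their end times were not yet known, so later
    # updates to them were silently missed. Hold the window at the earliest
    # open event; only advance once every event in the batch has closed.
    open_starts = [e['start'] for e in events if e['end'] is None]
    if open_starts:
        return min(open_starts)
    if not events:
        return previous_poll_start
    return max(e['end'] for e in events)
```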
### ❌ BAD Comments - What NOT To Include
- **Commenting obvious variable purposes**
  - Bad: `# Store original entity_type_str to detect changes` - The variable name should make this clear
- **Work-stream artifacts**
  - Bad: Comments explaining why tests were removed or referencing specific issues
  - Comments should be timeless, not tied to particular development contexts
- **Redundant descriptions of clear code**
  - Bad: `# Track EntityType change and response needed after transaction` - When variable names already convey this information
- **Cryptic references**
  - Bad: `# Recreate to preserve "max" to show new form` - If unclear, either explain properly or remove
- **Development phase/work-stream artifacts**
  - Bad: Comments explaining "Phase 3/Phase 4" development contexts
  - Bad: Explanations of why code was removed or changed
  - These belong in commit messages or PR descriptions, not in main branch code
- **TODO comments (high bar to justify)**
  - Generally avoid TODOs in the main branch
  - If something is important enough for a TODO, create an issue and prioritize it
  - Only acceptable when there's a compelling reason not to address the work immediately
  - Must be specific and actionable, not vague intentions
- **Commented-out code**
  - Bad: Dead code that's been disabled or replaced
  - Bad: Example implementations as large code blocks
  - Bad: Logic that looks like it might need uncommenting (e.g., `zoneminder/integration.py:65-66`)
  - If needed for future reference, create an issue instead
  - Exception: Very brief one-line hints at extension points (see "Future extension points" in good comments)
  - Commented code creates confusion about whether it should be active
### 🤔 Consider Alternatives

- **Method names over behavioral comments**
  - Instead of commenting behavior, consider renaming methods to be self-documenting
  - Example: A comment about "always attempt regardless of mode/view" might be better expressed as a more descriptive method name
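A sketch of the rename (the function names and the transition helper are hypothetical, not the actual `entity/views.py` code):

```python
def _do_advanced_transition(entity):
    # Stand-in for the real transition work.
    return f'transitioned:{entity}'

# Before: a comment carries the business rule.
def handle_transition(entity, mode):
    # Always attempt advanced transition handling regardless of mode/view
    return _do_advanced_transition(entity)

# After: the rule lives in the name, and the comment can go.
def handle_transition_regardless_of_mode(entity, mode):
    return _do_advanced_transition(entity)
```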
## Examples from Entity Module Review

### Keep These Comments:
- `entity/enums.py:104` - "Single source of truth for position vs path classification"
- `entity/entity_pairing_manager.py:20-23` - Multi-line explanation of delegation concept
- `entity/enums.py:160-163` - Explains why specific types exist vs general types
- `location/models.py:224-226` - LocationItem interface concept
- `alert/alert_manager.py:40-56` - Multi-step process breakdown
- `alert/alarm.py:25-27` - Data structure behavior explanation
- `weather/monitors.py:47-50` - Rate-limiting protection rationale
- `weather/wmo_units.py:838-841` - Pint library limitation workarounds
- `weather/aggregated_weather_data.py:26-32` - Comprehensive class documentation
- `zoneminder/monitors.py:99-110` - Complex temporal logic explanation
- `zoneminder/monitors.py:166-171` - Critical edge case reasoning
- `zm_manager.py:22` - External library frustration ("ugh") with context
### Remove/Improve These:
- `entity/views.py:37` - "Store original entity_type_str to detect changes"
- `entity/views.py:48` - "Track EntityType change and response needed after transaction"
- `entity/tests/test_views.py:66-72` - Long explanations of why tests were removed
- `location/edit/views.py:171` - "Recreate to preserve 'max' to show new form" - Cryptic reference
- `location/location_manager.py:169` - "Only collect position OR path based on EntityType, not both" - States the obvious
- `alert/alert_queue.py:62` - "Use queue_insertion_datetime instead of start_datetime" - States what the code does
- Test files with "Phase 3/Phase 4" development context comments
- `weather/daily_weather_tracker.py:39,42,45` - Simple labels that state the obvious
- `zoneminder/integration.py:65-66` - Commented-out logic that creates confusion
### Borderline (Keep for now):
- `entity/views.py:87` - "Always attempt advanced transition handling regardless of mode/view" - Conveys an important business rule despite unclear phrasing
- Enum inline comments - Generally add semantic value about domain purpose
## Special Cases

### TRACE Pattern (Accepted)
- `TRACE = False  # for debugging` is an accepted pattern
- Addresses a Python logging limitation (lacks a TRACE level below DEBUG)
- Used for very frequent or large-volume debug output
- Keep the comment to explain purpose for those unfamiliar with the pattern
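A minimal sketch of the pattern in use (the function and logger names are hypothetical):

```python
import logging

logger = logging.getLogger(__name__)

TRACE = False  # for debugging

def process_events(events):
    for event in events:
        if TRACE:
            # Very high-volume output: one line per event. Flip TRACE to
            # True only while chasing a specific problem.
            logger.debug('event=%r', event)
    return len(events)
```

The module-level flag keeps the hot path to a single boolean check, and the comment on `TRACE` signals the convention to readers who haven't seen it before.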
## Summary

Comments should explain the *why*, not the *what*. Good comments document:
- Non-obvious design decisions
- Complex business logic
- External API/library quirks and workarounds
- Time-based complexities
- Bug fixes that prevent regression
Avoid comments that:
- State what the code obviously does
- Contain development artifacts or work-stream context
- Preserve commented-out code (creates confusion)
- Explain what better naming could clarify
When in doubt, ask: "Can I change the code to make this comment unnecessary?"