Merge pull request #1459 from adamoutler/skills-docs

Agentic Skills & Environmental Vars docs
This commit is contained in:
Jokob @NetAlertX
2026-01-26 10:09:06 +11:00
committed by GitHub
23 changed files with 852 additions and 135 deletions

View File

@@ -1,59 +0,0 @@
# Gemini-CLI Agent Instructions for NetAlertX
## 1. Environment & Devcontainer
When starting a session, always identify the active development container.
### Finding the Container
Run `docker ps` to list running containers. Look for an image name containing `vsc-netalertx` or similar.
```bash
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}" | grep netalertx
```
- **If no container is found:** Inform the user. You cannot run integration tests or backend logic without it.
- **If multiple containers are found:** Ask the user to clarify which one to use (e.g., provide the Container ID).
### Running Commands in the Container
Prefix commands with `docker exec <CONTAINER_ID>` to run them inside the environment. Use the scripts in `/services/` to control backend and other processes.
```bash
docker exec <CONTAINER_ID> bash /workspaces/NetAlertX/.devcontainer/scripts/setup.sh
```
*Note: This script wipes `/tmp` ramdisks, resets DBs, and restarts services (python server, cron, php-fpm, nginx).*
## 2. Codebase Structure & Key Paths
- **Source Code:** `/workspaces/NetAlertX` (mapped to `/app` in container via symlink).
- **Backend Entry:** `server/api_server/api_server_start.py` (Flask) and `server/__main__.py`.
- **Frontend:** `front/` (PHP/JS).
- **Plugins:** `front/plugins/`.
- **Config:** `/data/config/app.conf` (runtime) or `back/app.conf` (default).
- **Database:** `/data/db/app.db` (SQLite).
## 3. Testing Workflow
**Crucial:** Tests MUST be run inside the container to access the correct runtime environment (DB, Config, Dependencies).
### Running Tests
Use `pytest` with the correct PYTHONPATH.
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest <test_file>"
```
*Example:*
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest test/api_endpoints/test_mcp_extended_endpoints.py"
```
### Authentication in Tests
The test environment uses `API_TOKEN`. The most reliable way to retrieve the current token from a running container is:
```bash
docker exec <CONTAINER_ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"
```
*Troubleshooting:* If tests fail with 403 Forbidden or empty tokens:
1. Verify the server is running and run the aforementioned `setup.sh` if required.
2. Verify `app.conf` inside the container: `docker exec <ID> cat /data/config/app.conf`
3. Verify Python can read it: `docker exec <ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"`

View File

@@ -0,0 +1,31 @@
---
name: devcontainer-management
description: Guide for identifying, managing, and running commands within the NetAlertX development container. Use this when asked to run backend logic, setup scripts, or troubleshoot container issues.
---
# Devcontainer Management
When starting a session or performing tasks requiring the runtime environment, you must identify and use the active development container.
## Finding the Container
Run `docker ps` to list running containers. Look for an image name containing `vsc-netalertx` or similar.
```bash
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}" | grep netalertx
```
- **If no container is found:** Inform the user. You cannot run integration tests or backend logic without it.
- **If multiple containers are found:** Ask the user to clarify which one to use (e.g., provide the Container ID).
## Running Commands in the Container
Prefix commands with `docker exec <CONTAINER_ID>` to run them inside the environment. Use the scripts in `/services/` to control backend and other processes.
```bash
docker exec <CONTAINER_ID> bash /workspaces/NetAlertX/.devcontainer/scripts/setup.sh
```
*Note: This script wipes `/tmp` ramdisks, resets DBs, and restarts services (python server, cron, php-fpm, nginx).*

View File

@@ -0,0 +1,15 @@
---
name: project-navigation
description: Reference for the NetAlertX codebase structure, key file paths, and configuration locations. Use this when exploring the codebase or looking for specific components like the backend entry point, frontend files, or database location.
---
# Project Navigation & Structure
## Codebase Structure & Key Paths
- **Source Code:** `/workspaces/NetAlertX` (mapped to `/app` in container via symlink).
- **Backend Entry:** `server/api_server/api_server_start.py` (Flask) and `server/__main__.py`.
- **Frontend:** `front/` (PHP/JS).
- **Plugins:** `front/plugins/`.
- **Config:** `/data/config/app.conf` (runtime) or `back/app.conf` (default).
- **Database:** `/data/db/app.db` (SQLite).

View File

@@ -0,0 +1,52 @@
---
name: testing-workflow
description: Guide for running tests within the NetAlertX environment. Detailed instructions for standard unit tests (fast), full suites (slow), and handling authentication.
---
# Testing Workflow
**Crucial:** Tests MUST be run inside the container to access the correct runtime environment (DB, Config, Dependencies).
## 1. Standard Unit Tests (Recommended)
By default, run the standard unit test suite. This **excludes** slow tests marked with `docker` (requires socket access) or `feature_complete` (extended coverage).
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest -m 'not docker and not feature_complete'"
```
## 2. Full Test Suite (Slow)
To run **all** tests, including integration tests that require Docker socket access and extended feature coverage:
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest"
```
## 3. Running Specific Tests
To run a specific file or folder:
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest <path_to_test>"
```
*Example:*
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest test/api_endpoints/test_mcp_extended_endpoints.py"
```
## Authentication in Tests
The test environment uses `API_TOKEN`. The most reliable way to retrieve the current token from a running container is:
```bash
docker exec <CONTAINER_ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"
```
### Troubleshooting
If tests fail with 403 Forbidden or empty tokens:
1. Verify server is running and use the setup script (`/workspaces/NetAlertX/.devcontainer/scripts/setup.sh`) if required.
2. Verify `app.conf` inside the container: `docker exec <ID> cat /data/config/app.conf`
3. Verify Python can read it: `docker exec <ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"`

.github/copilot-instructions.md vendored Executable file → Normal file
View File

@@ -1,89 +1,49 @@
### ROLE: NETALERTX ARCHITECT & STRICT CODE AUDITOR
You are a cynical Security Engineer and Core Maintainer of NetAlertX. Your goal is not just to "help," but to "deliver verified, secure, and production-ready solutions."
You are a cynical Security Engineer and Core Maintainer of NetAlertX. Your goal is to deliver verified, secure, and production-ready solutions.
### MANDATORY BEHAVIORAL OVERRIDES:
1. **Obsessive Verification:** Never provide a solution without a corresponding proof of correctness. If you write a function, you MUST write a test case or validation step immediately after.
2. **Anti-Laziness Protocol:** You are forbidden from using placeholders (e.g., `// ... rest of code`, ``). You must output the full, functional block every time to ensure context is preserved.
3. **Priority Hierarchy:** Priority 1 is Correctness. Priority 2 is Completeness. Priority 3 is Speed.
4. **Mantra:** "Job's not done 'till unit tests run."
### MANDATORY BEHAVIORAL OVERRIDES
1. **Obsessive Verification:** Never provide a solution without proof of correctness. Write test cases or validation immediately after writing functions.
2. **Anti-Laziness Protocol:** No placeholders. Output full, functional blocks every time.
3. **Priority Hierarchy:** Correctness > Completeness > Speed.
4. **Mantra:** "Job's not done 'till unit tests run."
---
# NetAlertX AI Assistant Instructions
This is NetAlertX — network monitoring & alerting. NetAlertX provides network inventory, awareness, insight, categorization, and intruder and presence detection. This is a heavily community-driven project, welcoming of all contributions.
# NetAlertX
## Architecture (what runs where)
- Backend (Python): main loop + GraphQL/REST endpoints orchestrate scans, plugins, workflows, notifications, and JSON export.
- Key: `server/__main__.py`, `server/plugin.py`, `server/initialise.py`, `server/api_server/api_server_start.py`
- Data (SQLite): persistent state in `db/app.db`; helpers in `server/database.py` and `server/db/*`.
- Frontend (Nginx + PHP + JS): UI reads JSON, triggers execution queue events.
- Key: `front/`, `front/js/common.js`, `front/php/server/*.php`
- Plugins (Python): acquisition/enrichment/publishers under `front/plugins/*` with `config.json` manifests.
- Messaging/Workflows: `server/messaging/*`, `server/workflows/*`
- API JSON Cache for UI: generated under `api/*.json`
Network monitoring & alerting. Provides inventory, awareness, insight, categorization, intruder and presence detection.
Backend loop phases (see `server/__main__.py` and `server/plugin.py`): `once`, `schedule`, `always_after_scan`, `before_name_updates`, `on_new_device`, `on_notification`, plus ad-hoc `run` via the execution queue. Plugins execute as scripts that write result logs for ingestion.
## Architecture
## Plugin patterns that matter
- Manifest lives at `front/plugins/<code_name>/config.json`; `code_name` == folder, `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`).
- Control via settings: `<PREF>_RUN` (phase), `<PREF>_RUN_SCHD` (cron-like), `<PREF>_CMD` (script path), `<PREF>_RUN_TIMEOUT`, `<PREF>_WATCH` (diff columns).
- Data contract: scripts write `/tmp/log/plugins/last_result.<PREF>.log` (pipe-delimited: 9 required cols + optional 4). Use `front/plugins/plugin_helper.py`'s `Plugin_Objects` to sanitize text and normalize MACs, then `write_result_file()`.
- Device import: define `database_column_definitions` when creating/updating devices; watched fields trigger notifications.
- **Backend (Python):** `server/__main__.py`, `server/plugin.py`, `server/api_server/api_server_start.py`
- **Backend Config:** `/data/config/app.conf`
- **Data (SQLite):** `/data/db/app.db`; helpers in `server/db/*`
- **Frontend (Nginx + PHP + JS):** `front/`
- **Plugins (Python):** `front/plugins/*` with `config.json` manifests
### Standard Plugin Formats
* publisher: Sends notifications to services. Runs `on_notification`. Data source: self.
* dev scanner: Creates devices and manages online/offline status. Runs on `schedule`. Data source: self / SQLite DB.
* name discovery: Discovers device names via various protocols. Runs `before_name_updates` or on `schedule`. Data source: self.
* importer: Imports devices from another service. Runs on `schedule`. Data source: self / SQLite DB.
* system: Provides core system functionality. Runs on `schedule` or is always on. Data source: self / Template.
* other: Miscellaneous plugins. Runs at various times. Data source: self / Template.
## Skills
### Plugin logging & outputs
- Always check relevant logs first.
- Use logging as shown in other plugins.
- Collect results with `Plugin_Objects.add_object(...)` during processing and call `plugin_objects.write_result_file()` exactly once at the end of the script.
- Prefer to log a brief summary before writing (e.g., total objects added) to aid troubleshooting; keep logs concise at `info` level and use `verbose` or `debug` for extra context.
- Do not write ad-hoc files for results; the only consumable output is `last_result.<PREF>.log` generated by `Plugin_Objects`.
Procedural knowledge lives in `.github/skills/`. Load the appropriate skill when performing these tasks:
## API/Endpoints quick map
- Flask app: `server/api_server/api_server_start.py` exposes routes like `/device/<mac>`, `/devices`, `/devices/export/{csv,json}`, `/devices/import`, `/devices/totals`, `/devices/by-status`, plus `nettools`, `events`, `sessions`, `dbquery`, `metrics`, `sync`.
- Authorization: all routes expect header `Authorization: Bearer <API_TOKEN>` via `get_setting_value('API_TOKEN')`.
- All responses must return `"success": <true|false>`; if `false`, an `"error"` message must be included, e.g. `{"success": False, "error": "No stored open ports for Device"}`
| Task | Skill |
|------|-------|
| Run tests, check failures | `testing-workflow` |
| Start/stop/restart services | `devcontainer-services` |
| Wipe database, fresh start | `database-reset` |
| Load sample devices | `sample-data` |
| Build Docker images | `docker-build` |
| Reprovision devcontainer | `devcontainer-setup` |
| Create or run plugins | `plugin-run-development` |
| Analyze PR comments | `pr-analysis` |
| Clean Docker resources | `docker-prune` |
| Generate devcontainer configs | `devcontainer-configs` |
| Create API endpoints | `api-development` |
| Logging conventions | `logging-standards` |
| Settings and config | `settings-management` |
| Find files and paths | `project-navigation` |
| Coding standards | `code-standards` |
## Conventions & helpers to reuse
- Settings: add/modify via `ccd()` in `server/initialise.py` or per-plugin manifest. Never hardcode ports or secrets; use `get_setting_value()`.
- Logging: use `mylog(level, [message])`; levels: none/minimal/verbose/debug/trace. `none` is used for most important messages that should always appear, such as exceptions. Do NOT use `error` as level.
- Time/MAC/strings: `server/utils/datetime_utils.py` (`timeNowDB`), `front/plugins/plugin_helper.py` (`normalize_mac`), `server/helper.py` (sanitizers). Validate MACs before DB writes.
- DB helpers: prefer `server/db/db_helper.py` functions (e.g., `get_table_json`, device condition helpers) over raw SQL in new paths.
## Execution Protocol
## Dev workflow (devcontainer)
- **Devcontainer philosophy: brutal simplicity.** One user, everything writable, completely idempotent. No permission checks, no conditional logic, no sudo needed. If something doesn't work, tear down the wall and rebuild - don't patch. We unit test permissions in the hardened build.
- **Permissions:** Never `chmod` or `chown` during operations. Everything is already writable. If you need permissions, the devcontainer setup is broken - fix `.devcontainer/scripts/setup.sh` or `.devcontainer/resources/devcontainer-Dockerfile` instead.
- **Files & Paths:** Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) everywhere. `/data` for persistent config/db, `/tmp` for runtime logs/api/nginx state. Never hardcode `/data/db` or relative paths.
- **Database reset:** Use the `[Dev Container] Wipe and Regenerate Database` task. Kills backend, deletes `/data/{db,config}/*`, runs first-time setup scripts. Clean slate, no questions.
- Services: use tasks to (re)start backend and nginx/PHP-FPM. Backend runs with debugpy on 5678; attach a Python debugger if needed.
- Run a plugin manually: `python3 front/plugins/<code_name>/script.py` (ensure `sys.path` includes `/app/front/plugins` and `/app/server` like the template).
- Testing: pytest available via Alpine packages. Tests live in `test/`; app code is under `server/`. PYTHONPATH is preconfigured to include the workspace and `/opt/venv` site-packages.
- **Subprocess calls:** ALWAYS set explicit timeouts. Default to 60s minimum unless plugin config specifies otherwise. Nested subprocess calls (e.g., plugins calling external tools) need their own timeout - outer plugin timeout won't save you.
- You need to set the `BACKEND_API_URL` setting (e.g. in the `app.conf` file or via the `APP_CONF_OVERRIDE` env variable) to the backend API port URL, e.g. `https://something-20212.app.github.dev/`, depending on your GitHub Codespaces URL.
## What “done right” looks like
- When adding a plugin, start from `front/plugins/__template`, implement with `plugin_helper`, define manifest settings, and wire phase via `<PREF>_RUN`. Verify logs in `/tmp/log/plugins/` and data in `api/*.json`.
- When introducing new config, define it once (core `ccd()` or plugin manifest) and read it via helpers everywhere.
- When exposing new server functionality, add endpoints in `server/api_server/*` and keep authorization consistent; update UI by reading/writing JSON cache rather than bypassing the pipeline.
- Always follow the DRY principle: do not re-implement functionality; reuse existing methods where possible, or refactor to a common method that is called from multiple places.
- If new functionality needs to be added, look at implementing it in existing handlers (e.g. `DeviceInstance` in `server/models/device_instance.py`) or create a new one if it makes sense. Do not access the DB from other application layers.
- Code files shouldn't be longer than 500 lines of code.
## Useful references
- Docs: `docs/PLUGINS_DEV.md`, `docs/SETTINGS_SYSTEM.md`, `docs/API_*.md`, `docs/DEBUG_*.md`
- Logs: All logs are under `/tmp/log/`. Plugin logs live only briefly under `/tmp/log/plugins/` before being picked up by the server.
- plugin logs: `/tmp/log/plugins/*.log`
- backend logs: `/tmp/log/stdout.log` and `/tmp/log/stderr.log`
- php errors: `/tmp/log/app.php_errors.log`
- nginx logs: `/tmp/log/nginx-access.log` and `/tmp/log/nginx-error.log`
## Execution Protocol (Strict)
- Always run the `testFailure` tool before executing any tests to gather current failure information and avoid redundant runs.
- Always prioritize using the appropriate tools in the environment first. Example: if a test is failing use `testFailure` then `runTests`.
- Docker tests take an extremely long time to run. Avoid changes to docker or tests until you've examined the existing `testFailure`s and `runTests` results.
- **Before running tests:** Always use `testFailure` tool first to gather current failures.
- **Docker tests are slow.** Examine existing failures before changing tests or Dockerfiles.

.github/skills/api-development/SKILL.md vendored Normal file
View File

@@ -0,0 +1,69 @@
---
name: api-development
description: Develop and extend NetAlertX REST API endpoints. Use this when asked to create endpoint, add API route, implement API, or modify API responses.
---
# API Development
## Entry Point
Flask app: `server/api_server/api_server_start.py`
## Existing Routes
- `/device/<mac>` - Single device operations
- `/devices` - Device list
- `/devices/export/{csv,json}` - Export devices
- `/devices/import` - Import devices
- `/devices/totals` - Device counts
- `/devices/by-status` - Devices grouped by status
- `/nettools` - Network utilities
- `/events` - Event log
- `/sessions` - Session management
- `/dbquery` - Database queries
- `/metrics` - Prometheus metrics
- `/sync` - Synchronization
## Authorization
All routes require header:
```
Authorization: Bearer <API_TOKEN>
```
Retrieve token via `get_setting_value('API_TOKEN')`.
## Response Contract
**MANDATORY:** All responses must include `"success": true|false`
```python
return {"success": False, "error": "Description of what went wrong"}
```
On success:
```python
return {"success": True, "data": result}
```
**Exception:** The legacy `/device/<mac>` GET endpoint does not follow this contract to maintain backward compatibility with the UI.
## Adding New Endpoints
1. Add route in `server/api_server/` directory
2. Follow authorization pattern
3. Return proper response contract
4. Update UI to read/write JSON cache (don't bypass pipeline)
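The steps above can be sketched as a minimal Flask route. This is a hypothetical example: the route name, token value, and app wiring are illustrative only; the real app and token lookup (`get_setting_value('API_TOKEN')`) live in `server/api_server/`.

```python
# Hypothetical endpoint following the authorization pattern and response
# contract above. Not the project's implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)

EXPECTED_TOKEN = "example-token"  # real code reads get_setting_value('API_TOKEN')

@app.route("/devices/example", methods=["GET"])
def devices_example():
    # Authorization: every route checks the Bearer token first
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_TOKEN}":
        return jsonify({"success": False, "error": "Forbidden"}), 403
    # Response contract: always include "success"
    return jsonify({"success": True, "data": []})
```

Requests without the exact `Authorization: Bearer <token>` header get a 403 with `"success": false`; valid requests get the data payload.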

.github/skills/authentication/SKILL.md vendored Normal file
View File

@@ -0,0 +1,60 @@
---
name: netalertx-authentication-tokens
description: Manage and troubleshoot API tokens and authentication-related secrets. Use this when you need to find, rotate, verify, or debug authentication issues (401/403) in NetAlertX.
---
# Authentication
## Purpose ✅
Explain how to locate, validate, rotate, and troubleshoot API tokens and related authentication settings used by NetAlertX.
## Pre-Flight Check (MANDATORY) ⚠️
1. Ensure the backend is running (use devcontainer services or `ps`/systemd checks).
2. Verify the `API_TOKEN` setting can be read with Python (see below).
3. If a token-related error occurs, gather logs (`/tmp/log/app.log`, nginx logs) before changing secrets.
## Retrieve the API token (Python — preferred) 🐍
Always use Python helpers to read secrets to avoid accidental exposure in shells or logs:
```python
from helper import get_setting_value
token = get_setting_value("API_TOKEN")
```
If you must inspect from a running container (read-only), use:
```bash
docker exec <CONTAINER_ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"
```
You can also check the runtime config file:
```bash
docker exec <CONTAINER_ID> grep API_TOKEN /data/config/app.conf
```
## Rotate / Generate a new token 🔁
- Preferred: Use the web UI (Settings / System) and click **Generate** for the `API_TOKEN` field — this updates the value safely and immediately.
- Manual: Edit `/data/config/app.conf` and restart the backend if required (use the existing devcontainer service tasks).
- After rotation: verify the value with `get_setting_value('API_TOKEN')` and update any clients or sync nodes to use the new token.
## Troubleshooting 401 / 403 Errors 🔍
1. Confirm backend is running and reachable.
2. Confirm `get_setting_value('API_TOKEN')` returns a non-empty value.
3. Ensure client requests send the header exactly: `Authorization: Bearer <API_TOKEN>`.
4. Check `/tmp/log/app.log` and plugin logs (e.g., sync plugin) for "Incorrect API Token" messages.
5. If using multiple nodes, ensure the token matches across nodes for sync operations.
6. If token appears missing or incorrect, rotate via UI or update `app.conf` and re-verify.
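Step 3 can be checked programmatically. This sketch assumes a reachable instance; the base URL and `/devices` endpoint are illustrative, and `requests` is used for brevity.

```python
# Hypothetical token check. Assumes the instance is reachable at base_url.
import requests

def auth_headers(token: str) -> dict:
    # The header must be exactly: Authorization: Bearer <API_TOKEN>
    return {"Authorization": f"Bearer {token}"}

def check_token(base_url: str, token: str) -> bool:
    resp = requests.get(
        f"{base_url}/devices",
        headers=auth_headers(token),
        timeout=10,  # never call the API without a timeout
    )
    return resp.status_code == 200
```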
## Best Practices & Security 🔐
- Never commit tokens to source control or paste them in public issues. Redact tokens when sharing logs.
- Rotate tokens when a secret leak is suspected or per your security policy.
- Use `get_setting_value()` in tests and scripts — do not hardcode secrets.
## Related Skills & Docs 📚
- `testing-workflow` — how to use `API_TOKEN` in tests
- `settings-management` — where settings live and how they are managed
- Docs: `docs/API.md`, `docs/API_OLD.md`, `docs/API_SSE.md`
---
_Last updated: 2026-01-23_

.github/skills/code-standards/SKILL.md vendored Normal file
View File

@@ -0,0 +1,65 @@
---
name: netalertx-code-standards
description: NetAlertX coding standards and conventions. Use this when writing code, reviewing code, or implementing features.
---
# Code Standards
## File Length
Keep code files under 500 lines. Split larger files into modules.
## DRY Principle
Do not re-implement functionality. Reuse existing methods or refactor to create shared methods.
## Database Access
- Never access DB directly from application layers
- Use `server/db/db_helper.py` functions (e.g., `get_table_json`)
- Implement new functionality in handlers (e.g., `DeviceInstance` in `server/models/device_instance.py`)
## MAC Address Handling
Always validate and normalize MACs before DB writes:
```python
from plugin_helper import normalize_mac
mac = normalize_mac(raw_mac)
```
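If `plugin_helper` is unavailable (e.g. in an isolated test), the expected behavior can be sketched as follows. This is a hypothetical stand-in for illustration, not the project's `normalize_mac` implementation:

```python
import re

def normalize_mac_sketch(raw: str) -> str:
    """Hypothetical normalizer: lowercase, colon-separated, zero-padded.
    Rejects anything that is not a plausible MAC before a DB write."""
    parts = re.split(r"[:\-.]", raw.strip().lower())
    if len(parts) == 3:  # Cisco dotted format: aabb.ccdd.eeff
        parts = [p for chunk in parts for p in (chunk[:2], chunk[2:])]
    if len(parts) != 6 or not all(re.fullmatch(r"[0-9a-f]{1,2}", p) for p in parts):
        raise ValueError(f"invalid MAC: {raw!r}")
    return ":".join(p.zfill(2) for p in parts)
```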
## Subprocess Safety
**MANDATORY:** All subprocess calls must set explicit timeouts.
```python
result = subprocess.run(cmd, timeout=60) # Minimum 60s
```
Nested subprocess calls need their own timeout—outer timeout won't save you.
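A minimal sketch of this rule (the function name and fallback behavior are illustrative):

```python
import subprocess

def run_tool(cmd: list, timeout: int = 60) -> str:
    """Run an external tool with an explicit timeout (60s minimum default)."""
    try:
        result = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout, check=True
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        # Treat a hung tool as "no data" rather than stalling the plugin loop
        return ""
```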
## Time Utilities
```python
from utils.datetime_utils import timeNowDB
timestamp = timeNowDB()
```
## String Sanitization
Use sanitizers from `server/helper.py` before storing user input.
## Devcontainer Constraints
- Never `chmod` or `chown` during operations
- Everything is already writable
- If permissions needed, fix `.devcontainer/scripts/setup.sh`
## Path Hygiene
- Use environment variables for runtime paths
- `/data` for persistent config/db
- `/tmp` for runtime logs/api/nginx state
- Never hardcode `/data/db` or use relative paths
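The env-var pattern above can be sketched like this; `NETALERTX_DB` and `NETALERTX_LOG` are the variables named in the devcontainer docs, while the fallback values are assumptions based on the paths listed above:

```python
import os

def db_path() -> str:
    # Prefer the environment variable; the fallback mirrors the documented default
    return os.environ.get("NETALERTX_DB", "/data/db/app.db")

def log_dir() -> str:
    return os.environ.get("NETALERTX_LOG", "/tmp/log")
```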

.github/skills/database-reset/SKILL.md vendored Normal file
View File

@@ -0,0 +1,38 @@
---
name: reset-netalertx-database
description: Wipe and regenerate the NetAlertX database and config. Use this when asked to reset database, wipe db, fresh database, clean slate, or start fresh.
---
# Database Reset
Completely wipes devcontainer database and config, then regenerates from scratch.
## Command
```bash
killall 'python3' || true
sleep 1
rm -rf /data/db/* /data/config/*
bash /entrypoint.d/15-first-run-config.sh
bash /entrypoint.d/20-first-run-db.sh
```
## What This Does
1. Kills backend to release database locks
2. Deletes all files in `/data/db/` and `/data/config/`
3. Runs first-run config provisioning
4. Runs first-run database initialization
## After Reset
Run the startup script to restart services:
```bash
/workspaces/NetAlertX/.devcontainer/scripts/setup.sh
```
## Database Location
- Runtime: `/data/db/app.db` (SQLite)
- Config: `/data/config/app.conf`

View File

@@ -0,0 +1,28 @@
---
name: netalertx-devcontainer-configs
description: Generate devcontainer configuration files. Use this when asked to generate devcontainer configs, update devcontainer template, or regenerate devcontainer.
---
# Devcontainer Config Generation
Generates devcontainer configs from the template. Must be run after changes to devcontainer configuration.
## Command
```bash
/workspaces/NetAlertX/.devcontainer/scripts/generate-configs.sh
```
## What It Does
Combines and merges template configurations into the final config used by VS Code.
## When to Run
- After modifying `.devcontainer/` template files
- After changing devcontainer features or settings
- Before committing devcontainer changes
## Note
This affects only the devcontainer configuration. It has no bearing on the production or test Docker image.

View File

@@ -0,0 +1,50 @@
---
name: restarting-netalertx-services
description: Control NetAlertX services inside the devcontainer. Use this when asked to start backend, start frontend, start nginx, start php-fpm, start crond, stop services, restart services, or check if services are running.
---
# Devcontainer Services
You operate inside the devcontainer. Do not use `docker exec`.
## Start Backend (Python)
```bash
/services/start-backend.sh
```
Backend runs with debugpy on port 5678 for debugging. Takes ~5 seconds to be ready.
## Start Frontend (nginx + PHP-FPM)
```bash
/services/start-php-fpm.sh &
/services/start-nginx.sh &
```
Launches almost instantly.
## Start Scheduler (CronD)
```bash
/services/start-crond.sh
```
## Stop All Services
```bash
pkill -f 'php-fpm83|nginx|crond|python3' || true
```
## Check Running Services
```bash
pgrep -a 'python3|nginx|php-fpm|crond'
```
## Service Ports
- Frontend (nginx): 20211
- Backend API: 20212
- GraphQL: 20212
- Debugpy: 5678

View File

@@ -0,0 +1,36 @@
---
name: netalertx-idempotent-setup
description: Reprovision and reset the devcontainer environment. Use this when asked to re-run startup, reprovision, setup devcontainer, fix permissions, or reset runtime state.
---
# Devcontainer Setup
The setup script forcefully resets all runtime state. It is idempotent—every run wipes and recreates all relevant folders, symlinks, and files.
## Command
```bash
/workspaces/NetAlertX/.devcontainer/scripts/setup.sh
```
## What It Does
1. Kills all services (php-fpm, nginx, crond, python3)
2. Mounts tmpfs ramdisks for `/tmp/log`, `/tmp/api`, `/tmp/run`, `/tmp/nginx`
3. Creates critical subdirectories
4. Links `/entrypoint.d` and `/app` symlinks
5. Creates `/data`, `/data/config`, `/data/db` directories
6. Creates all log files
7. Runs `/entrypoint.sh` to start services
8. Writes version to `.VERSION`
## When to Use
- After modifying setup scripts
- After container rebuild
- When environment is in broken state
- After database reset
## Philosophy
No conditional logic. Everything is recreated unconditionally. If something doesn't work, run setup again.

.github/skills/docker-build/SKILL.md vendored Normal file
View File

@@ -0,0 +1,38 @@
---
name: netalertx-docker-build
description: Build Docker images for testing or production. Use this when asked to build container, build image, docker build, build test image, or launch production container.
---
# Docker Build
## Build Unit Test Image
Required after container/Dockerfile changes. Tests won't see changes until image is rebuilt.
```bash
docker buildx build -t netalertx-test .
```
Build time: ~30 seconds (or ~90s if venv stage changes)
## Build and Launch Production Container
Before launching, stop devcontainer services first to free ports.
```bash
cd /workspaces/NetAlertX
docker compose up -d --build --force-recreate
```
## Pre-Launch Checklist
1. Stop devcontainer services: `pkill -f 'php-fpm83|nginx|crond|python3'`
2. Close VS Code forwarded ports
3. Run the build command
## Production Container Details
- Image: `netalertx:latest`
- Container name: `netalertx`
- Network mode: host
- Ports: 20211 (UI), 20212 (API/GraphQL)

.github/skills/docker-prune/SKILL.md vendored Normal file
View File

@@ -0,0 +1,32 @@
---
name: netalertx-docker-prune
description: Clean up unused Docker resources. Use this when asked to prune docker, clean docker, remove unused images, free disk space, or docker cleanup. DANGEROUS operation. Requires human confirmation.
---
# Docker Prune
**DANGER:** This destroys containers, images, volumes, and networks. Any stopped container will be wiped and data will be lost.
## Command
```bash
/workspaces/NetAlertX/.devcontainer/scripts/confirm-docker-prune.sh
```
## What Gets Deleted
- All stopped containers
- All unused images
- All unused volumes
- All unused networks
## When to Use
- Disk space is low
- Build cache is corrupted
- Clean slate needed for testing
- After many image rebuilds
## Safety
The script requires explicit confirmation before proceeding.

View File

@@ -0,0 +1,85 @@
---
name: netalertx-plugin-run-development
description: Create and run NetAlertX plugins. Use this when asked to create plugin, run plugin, test plugin, plugin development, or execute plugin script.
---
# Plugin Development
## Expected Workflow for Running Plugins
1. Read this skill document for context and instructions.
2. Find the plugin in `front/plugins/<code_name>/`.
3. Read the plugin's `config.json` and `script.py` to understand its functionality and settings.
4. Formulate and run the command: `python3 front/plugins/<code_name>/script.py`.
5. Retrieve the result from the plugin log folder (`/tmp/log/plugins/last_result.<PREF>.log`) quickly, as the backend may delete it after processing.
## Run a Plugin Manually
```bash
python3 front/plugins/<code_name>/script.py
```
Ensure `sys.path` includes `/app/front/plugins` and `/app/server` (as in the template).
## Plugin Structure
```text
front/plugins/<code_name>/
├── config.json # Manifest with settings
├── script.py # Main script
└── ...
```
## Manifest Location
`front/plugins/<code_name>/config.json`
- `code_name` == folder name
- `unique_prefix` drives settings and filenames (e.g., `ARPSCAN`)
## Settings Pattern
- `<PREF>_RUN`: execution phase
- `<PREF>_RUN_SCHD`: cron-like schedule
- `<PREF>_CMD`: script path
- `<PREF>_RUN_TIMEOUT`: timeout in seconds
- `<PREF>_WATCH`: columns to watch for changes
## Data Contract
Scripts write to `/tmp/log/plugins/last_result.<PREF>.log`
**Important:** The backend will almost immediately process this result file and delete it after ingestion. If you need to inspect the output, run the plugin and immediately retrieve the result file before the backend processes it.
Use `front/plugins/plugin_helper.py`:
```python
from plugin_helper import Plugin_Objects

# RESULT_FILE is the path to /tmp/log/plugins/last_result.<PREF>.log
plugin_objects = Plugin_Objects(RESULT_FILE)
plugin_objects.add_object(...)       # during processing
plugin_objects.write_result_file()   # exactly once, at the end
```
## Execution Phases
- `once`: runs once at startup
- `schedule`: runs on cron schedule
- `always_after_scan`: runs after every scan
- `before_name_updates`: runs before name resolution
- `on_new_device`: runs when new device detected
- `on_notification`: runs when notification triggered
## Plugin Formats
| Format | Purpose | Runs |
|--------|---------|------|
| publisher | Send notifications | `on_notification` |
| dev scanner | Create/manage devices | `schedule` |
| name discovery | Discover device names | `before_name_updates` |
| importer | Import from services | `schedule` |
| system | Core functionality | `schedule` |
## Starting Point
Copy from `front/plugins/__template` and customize.
@@ -0,0 +1,59 @@
---
name: about-netalertx-project-structure
description: Navigate the NetAlertX codebase structure. Use this when asked about file locations, project structure, where to find code, or key paths.
---
# Project Navigation
## Key Paths
| Component | Path |
|-----------|------|
| Workspace root | `/workspaces/NetAlertX` |
| Backend entry | `server/__main__.py` |
| API server | `server/api_server/api_server_start.py` |
| Plugin system | `server/plugin.py` |
| Initialization | `server/initialise.py` |
| Frontend | `front/` |
| Frontend JS | `front/js/common.js` |
| Frontend PHP | `front/php/server/*.php` |
| Plugins | `front/plugins/` |
| Plugin template | `front/plugins/__template` |
| Database helpers | `server/db/db_helper.py` |
| Device model | `server/models/device_instance.py` |
| Messaging | `server/messaging/` |
| Workflows | `server/workflows/` |
## Architecture
NetAlertX uses a split frontend/backend architecture: the frontend runs on **PHP + Nginx** (see `front/`), the backend is implemented in **Python** (see `server/`), and scheduled tasks are managed by a **supercronic** scheduler that runs periodic jobs.
## Runtime Paths
| Data | Path |
|------|------|
| Config (runtime) | `/data/config/app.conf` |
| Config (default) | `back/app.conf` |
| Database | `/data/db/app.db` |
| API JSON cache | `/tmp/api/*.json` |
| Logs | `/tmp/log/` |
| Plugin logs | `/tmp/log/plugins/` |
## Environment Variables
Use these `NETALERTX_*` environment variables instead of hardcoding paths. Examples:
- `NETALERTX_DB`
- `NETALERTX_LOG`
- `NETALERTX_CONFIG`
- `NETALERTX_DATA`
- `NETALERTX_APP`
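For example, in Python (the fallback values mirror the runtime paths table above and are illustrative defaults only, not guaranteed behaviour):

```python
import os

# Resolve paths from the environment; the fallbacks mirror the runtime
# paths table above and are illustrative defaults only.
db_path = os.environ.get("NETALERTX_DB", "/data/db/app.db")
log_dir = os.environ.get("NETALERTX_LOG", "/tmp/log")
config_dir = os.environ.get("NETALERTX_CONFIG", "/data/config")
```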
## Documentation
| Topic | Path |
|-------|------|
| Plugin development | `docs/PLUGINS_DEV.md` |
| System settings | `docs/SETTINGS_SYSTEM.md` |
| API docs | `docs/API_*.md` |
| Debug guides | `docs/DEBUG_*.md` |
.github/skills/sample-data/SKILL.md vendored Normal file
@@ -0,0 +1,31 @@
---
name: netalertx-sample-data
description: Load synthetic device data into the devcontainer. Use this when asked to load sample devices, seed data, import test devices, populate database, or generate test data.
---
# Sample Data Loading
Generates synthetic device inventory and imports it via the `/devices/import` API endpoint.
## Command
```bash
cd /workspaces/NetAlertX/.devcontainer/scripts
./load-devices.sh
```
## Environment
- `CSV_PATH`: defaults to `/tmp/netalertx-devices.csv`
## Prerequisites
- Backend must be running
- API must be accessible
## What It Does
1. Generates synthetic device records (MAC addresses, IPs, names, vendors)
2. Creates CSV file at `$CSV_PATH`
3. POSTs to `/devices/import` endpoint
4. Devices appear in database and UI
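The generation step can be sketched like this (the column names are illustrative; `load-devices.sh` defines the actual CSV schema expected by `/devices/import`):

```python
import csv
import io
import random

def random_mac() -> str:
    # Locally administered MAC (0x02 first octet) so synthetic data
    # never collides with a real vendor prefix.
    octets = [0x02] + [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{o:02x}" for o in octets)

# Build the CSV in memory; load-devices.sh writes it to $CSV_PATH instead.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["MAC", "IP", "Name"])
writer.writeheader()
for i in range(1, 4):
    writer.writerow({"MAC": random_mac(), "IP": f"192.168.1.{i}", "Name": f"test-device-{i}"})
csv_text = buf.getvalue()
```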
@@ -0,0 +1,47 @@
---
name: netalertx-settings-management
description: Manage NetAlertX configuration settings. Use this when asked to add setting, read config, get_setting_value, ccd, or configure options.
---
# Settings Management
## Reading Settings
```python
from helper import get_setting_value
value = get_setting_value('SETTING_NAME')
```
Never hardcode ports, secrets, or configuration values. Always use `get_setting_value()`.
## Adding Core Settings
Use `ccd()` in `server/initialise.py`:
```python
ccd('SETTING_NAME', 'default_value', 'description')
```
## Adding Plugin Settings
Define in plugin's `config.json` manifest under the settings section.
## Config Files
| File | Purpose |
|------|---------|
| `/data/config/app.conf` | Runtime config (modified by app) |
| `back/app.conf` | Default config (template) |
## Environment Override
Use `APP_CONF_OVERRIDE` environment variable for settings that must be set before startup.
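The override value is a JSON object of setting/value pairs; a sketch of building one (the `GRAPHQL_PORT` value matches the example in the Portainer install docs):

```python
import json

# The override is a JSON object of setting/value pairs; GRAPHQL_PORT
# here matches the example used in the Portainer install docs.
override = json.dumps({"GRAPHQL_PORT": "22023"})
print(override)
```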
## Backend API URL
For Codespaces, set `BACKEND_API_URL` to your Codespace URL:
```
BACKEND_API_URL=https://something-20212.app.github.dev/
```
@@ -0,0 +1,61 @@
---
name: netalertx-testing-workflow
description: Run and debug tests in the NetAlertX devcontainer. Use this when asked to run tests, check test failures, debug failing tests, or execute pytest.
---
# Testing Workflow
## Pre-Flight Check (MANDATORY)
Before running any tests, always check for existing failures first:
1. Use the `testFailure` tool to gather current failure information
2. Review the failures to understand what's already broken
3. Only then proceed with test execution
## Running Tests
Use VS Code's testing interface or the `runTests` tool with appropriate parameters:
- To run all tests: invoke `runTests` without a file filter
- To run a specific test file: invoke `runTests` with the test file path
- To run only previously failed tests: invoke `runTests` with the `--lf` flag
## Test Location
Tests live in `test/` directory. App code is under `server/`.
`PYTHONPATH` is preconfigured to include the following paths, which should cover all needs:
- `/app` # the primary location where Python runs in the production system
- `/app/server` # symbolic link to /workspaces/NetAlertX/server
- `/app/front/plugins` # symbolic link to /workspaces/NetAlertX/front/plugins
- `/opt/venv/lib/pythonX.Y/site-packages`
- `/workspaces/NetAlertX/test`
- `/workspaces/NetAlertX/server`
- `/workspaces/NetAlertX`
- `/usr/lib/pythonX.Y/site-packages`
## Authentication in Tests
Retrieve `API_TOKEN` using Python (not shell):
```python
from helper import get_setting_value
token = get_setting_value("API_TOKEN")
```
## Troubleshooting 403 Forbidden
1. Ensure backend is running (use devcontainer-services skill)
2. Verify config loaded: `get_setting_value("API_TOKEN")` returns non-empty
3. Re-run startup if needed (use devcontainer-setup skill)
## Docker Test Image
If container changes affect tests, rebuild the test image first:
```bash
docker buildx build -t netalertx-test .
```
This takes ~30 seconds unless venv stage changes (~90s).
@@ -31,5 +31,6 @@
"python.formatting.blackArgs": [
"--line-length=180"
],
"chat.useAgentSkills": true,
}
@@ -23,6 +23,8 @@ curl 'http://host:GRAPHQL_PORT/graphql' \
The API server runs on `0.0.0.0:<graphql_port>` with **CORS enabled** for all main endpoints.
CORS configuration: You can limit allowed CORS origins with the `CORS_ORIGINS` environment variable. Set it to a comma-separated list of origins (for example: `CORS_ORIGINS="https://example.com,http://localhost:3000"`). The server parses this list at startup and only allows origins that begin with `http://` or `https://`. If `CORS_ORIGINS` is unset or parses to an empty list, the API falls back to a safe development default list (localhost origins) and will include `*` as a last-resort permissive origin.
---
## Authentication
@@ -72,6 +72,13 @@ In the **Environment variables** section of Portainer, add the following:
* `PORT=22022` (or another port if needed)
* `APP_CONF_OVERRIDE={"GRAPHQL_PORT":"22023"}` (optional advanced settings, otherwise the backend API server PORT defaults to `20212`)
Additional environment variables (advanced / testing):
* `SKIP_TESTS=1` — when set, the container entrypoint skips all startup checks and prints `Skipping startup checks as SKIP_TESTS is set.` Useful for automated test runs or CI where the container should not perform environment-specific checks.
* `SKIP_STARTUP_CHECKS="<check names>"` — space-delimited list of specific startup checks to skip. Names are the human-friendly names derived from files in `/entrypoint.d` (remove the leading numeric prefix and file extension). Example: `SKIP_STARTUP_CHECKS="mandatory folders"` will skip `30-mandatory-folders.sh`.
Note: these variables are primarily useful for non-production scenarios (testing, CI, or specific deployments) and are processed by the entrypoint scripts. See `entrypoint.sh` and `entrypoint.d/*` for exact behaviour and available check names.
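The name-derivation rule can be sketched as follows (illustrative; see `entrypoint.sh` for the actual logic):

```python
import re

def check_name(filename: str) -> str:
    # "30-mandatory-folders.sh" -> "mandatory folders"
    stem = filename.rsplit(".", 1)[0]      # drop the extension
    stem = re.sub(r"^\d+-", "", stem)      # drop the leading numeric prefix
    return stem.replace("-", " ")
```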
---
## 5. Ensure permissions
@@ -81,3 +81,12 @@ def test_no_app_conf_override_when_no_graphql_port():
result = _run_entrypoint(env={"SKIP_TESTS": "1"}, check_only=True)
assert 'Setting APP_CONF_OVERRIDE to' not in result.stdout
assert result.returncode == 0
def test_skip_startup_checks_env_var():
# If SKIP_STARTUP_CHECKS contains the human-readable name of a check (e.g. "mandatory folders"),
# the entrypoint should skip that specific check. We check that the "Creating NetAlertX log directory."
# message (from the mandatory folders check) is not printed when skipped.
result = _run_entrypoint(env={"SKIP_STARTUP_CHECKS": "mandatory folders"}, check_only=True)
assert "Creating NetAlertX log directory" not in result.stdout
assert result.returncode == 0