This commit is contained in:
jokob-sk
2026-02-01 16:16:01 +11:00
4 changed files with 109 additions and 14 deletions


@@ -1,26 +1,28 @@
---
name: testing-workflow
description: Guide for running tests within the NetAlertX environment. Detailed instructions for standard unit tests (fast), full suites (slow), and handling authentication.
description: Read before running tests. Detailed instructions for single tests, standard unit tests (fast), full suites (slow), and handling authentication. Tests must be run once a job is complete.
---
# Testing Workflow
After code is developed, tests must be run to ensure the integrity of the final result.
**Crucial:** Tests MUST be run inside the container to access the correct runtime environment (DB, Config, Dependencies).
## 1. Standard Unit Tests (Recommended)
By default, run the standard unit test suite. This **excludes** slow tests marked with `docker` (requires socket access) or `feature_complete` (extended coverage).
## 1. Full Test Suite (MANDATORY DEFAULT)
Unless the user **explicitly** requests "fast" or "quick" tests, you **MUST** run the full test suite. **Do not** optimize for time. Comprehensive coverage is the priority over speed.
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest -m 'not docker and not feature_complete'"
cd /workspaces/NetAlertX; pytest test/
```
## 2. Full Test Suite (Slow)
## 2. Fast Unit Tests (Conditional)
To run **all** tests, including integration tests that require Docker socket access and extended feature coverage:
**ONLY** use this if the user explicitly asks for "fast tests", "quick tests", or "unit tests only". This **excludes** slow tests marked with `docker` or `feature_complete`.
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest"
cd /workspaces/NetAlertX; pytest test/ -m 'not docker and not feature_complete'
```
## 3. Running Specific Tests
@@ -28,12 +30,12 @@ docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest"
To run a specific file or folder:
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest <path_to_test>"
cd /workspaces/NetAlertX; pytest test/<path_to_test>
```
*Example:*
```bash
docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest test/api_endpoints/test_mcp_extended_endpoints.py"
cd /workspaces/NetAlertX; pytest test/api_endpoints/test_mcp_extended_endpoints.py
```
## Authentication in Tests
@@ -41,12 +43,12 @@ docker exec <CONTAINER_ID> bash -c "cd /workspaces/NetAlertX && pytest test/api_
The test environment uses `API_TOKEN`. The most reliable way to retrieve the current token from a running container is:
```bash
docker exec <CONTAINER_ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"
python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"
```
### Troubleshooting
If tests fail with 403 Forbidden or empty tokens:
1. Verify server is running and use the setup script (`/workspaces/NetAlertX/.devcontainer/scripts/setup.sh`) if required.
2. Verify `app.conf` inside the container: `docker exec <ID> cat /data/config/app.conf`
3. Verify Python can read it: `docker exec <ID> python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"`
2. Verify `app.conf` inside the container: `cat /data/config/app.conf`
3. Verify Python can read it: `python3 -c "from helper import get_setting_value; print(get_setting_value('API_TOKEN'))"`
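If 403s persist, it helps to distinguish an empty token from a rejected one before hitting the API. A minimal sketch of such a guard; `check_token` is a hypothetical helper and the token value is simulated, since no running container is assumed here:

```shell
# Hypothetical guard: fail loudly on an empty token before making API calls.
check_token() {
  if [ -z "$1" ]; then
    echo "empty API_TOKEN -- rerun setup.sh and inspect /data/config/app.conf" >&2
    return 1
  fi
  echo "token present (${#1} chars)"
}

check_token "abc123"   # prints: token present (6 chars)
```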


@@ -135,6 +135,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
procps \
gosu \
jq \
ipcalc \
&& wget -qO /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg \
&& echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list \
&& apt-get update \


@@ -1,4 +1,4 @@
#!/bin/sh
#!/bin/bash
# first-run-check.sh - Checks and initializes configuration files on first run
# Fix permissions if config directory exists but is unreadable
@@ -7,6 +7,33 @@ if [ -d "${NETALERTX_CONFIG}" ]; then
fi
chmod u+rw "${NETALERTX_CONFIG}/app.conf" 2>/dev/null || true
### Helper function to set the SCAN_SUBNETS based on active interfaces during first run
get_scan_subnets() {
_list=""
while read -r _cidr _iface; do
[[ "$_iface" =~ ^(lo|docker|veth) ]] && continue
# Robustly get network address regardless of ipcalc version
if ipcalc -n "$_cidr" | grep -q '^Network:'; then
# Debian-style
_net=$(ipcalc -n "$_cidr" | grep '^Network:' | awk '{print $2}' | cut -d/ -f1)
else
# Alpine-style (Busybox)
_net=$(ipcalc -n "$_cidr" | awk -F= '{print $2}' | awk '{print $1}')
fi
_mask=$(echo "$_cidr" | cut -d/ -f2)
_entry="${_net}/${_mask} --interface=${_iface}"
if [ -z "$_list" ]; then
_list="'$_entry'"
else
_list="$_list,'$_entry'"
fi
done < <(ip -o -4 addr show scope global | awk '{print $4, $2}')
[ -z "$_list" ] && printf "['--localnet']" || printf "[%s]" "$_list"
}
set -eu
CYAN=$(printf '\033[1;36m')
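The two parsing branches in `get_scan_subnets` exist because Debian-style `ipcalc` prints labeled lines (`Network:   a.b.c.d/nn`) while BusyBox-style `ipcalc` prints `KEY=VALUE` pairs. A sketch of both extractions, run on canned output strings (illustrative addresses) since `ipcalc` itself may not be installed where this is read:

```shell
# Canned samples of the two ipcalc output styles (illustrative values).
debian_out='Network:   192.168.1.0/24'
busybox_out='NETWORK=192.168.1.0'

# Debian-style: take the Network: line, field 2, strip the /prefix.
net1=$(printf '%s\n' "$debian_out" | grep '^Network:' | awk '{print $2}' | cut -d/ -f1)
# BusyBox-style: split the KEY=VALUE pair on '='.
net2=$(printf '%s\n' "$busybox_out" | awk -F= '{print $2}' | awk '{print $1}')

printf '%s %s\n' "$net1" "$net2"   # prints: 192.168.1.0 192.168.1.0
```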
@@ -36,7 +63,7 @@ fi
# Fresh rebuild requested
if [ "${ALWAYS_FRESH_INSTALL:-false}" = "true" ] && [ -e "${NETALERTX_CONFIG}/app.conf" ]; then
>&2 echo "INFO: ALWAYS_FRESH_INSTALL enabled — removing existing config."
rm -rf "${NETALERTX_CONFIG}"/*
rm -rf "${NETALERTX_CONFIG:?}"/*
fi
# Check for app.conf and deploy if required
@@ -45,6 +72,12 @@ if [ ! -f "${NETALERTX_CONFIG}/app.conf" ]; then
>&2 echo "ERROR: Failed to deploy default config to ${NETALERTX_CONFIG}/app.conf"
exit 2
}
# Generate the dynamic subnet list
SCAN_LIST=$(get_scan_subnets | tr -d '\n\r')
# Inject into the newly deployed config
sed -i "s|^SCAN_SUBNETS=.*|SCAN_SUBNETS=$SCAN_LIST|" "${NETALERTX_CONFIG}/app.conf" || true
>&2 printf "%s" "${CYAN}"
>&2 cat <<EOF
══════════════════════════════════════════════════════════════════════════════
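The `sed` injection above can be sketched in isolation against a throwaway config file; the subnet value is illustrative:

```shell
# Simulate the seeded default and the dynamic rewrite (illustrative values).
conf=$(mktemp)
printf "SCAN_SUBNETS=['--localnet']\nOTHER=1\n" > "$conf"

SCAN_LIST="['192.168.1.0/24 --interface=eth0']"
sed -i "s|^SCAN_SUBNETS=.*|SCAN_SUBNETS=$SCAN_LIST|" "$conf"

line=$(grep '^SCAN_SUBNETS=' "$conf")
echo "$line"   # prints: SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']
rm -f "$conf"
```

Note the `|` delimiter in the `s` command, chosen because the replacement contains `/` characters.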


@@ -937,6 +937,23 @@ def test_missing_app_conf_triggers_seed(tmp_path: pathlib.Path) -> None:
volume_specs=[f"{vol}:/data"],
sleep_seconds=15,
)
# Verify the generated configuration contains the dynamic subnet detection
# (check that it didn't fall back to default '--localnet')
check_conf = subprocess.run(
[
"docker", "run", "--rm", "-v", f"{vol}:/data",
"alpine:3.22", "cat", "/data/config/app.conf"
],
capture_output=True, text=True, timeout=SUBPROCESS_TIMEOUT_SECONDS
)
assert check_conf.returncode == 0, f"Failed to read config. Stderr: {check_conf.stderr}, Stdout: {check_conf.stdout}"
match = re.search(r"SCAN_SUBNETS\s*=\s*(.*)", check_conf.stdout)
if match:
val = match.group(1)
assert "interface=" in val, f"SCAN_SUBNETS should have interface: {val}"
assert val != "['--localnet']", "SCAN_SUBNETS should not be default localnet"
finally:
_docker_volume_rm(vol)
# The key assertion: config seeding happened
@@ -945,6 +962,48 @@ def test_missing_app_conf_triggers_seed(tmp_path: pathlib.Path) -> None:
# test passes if the config file was created. Full startup success is tested elsewhere.
def test_first_run_dynamic_subnet(tmp_path: pathlib.Path) -> None:
"""Test dynamic subnet detection during first run config generation.
Ensures that when app.conf is generated, it detects the actual network interfaces
instead of defaulting to '--localnet'.
"""
paths = _setup_mount_tree(tmp_path, "dynamic_subnet", seed_config=False)
mount_args = _build_volume_args_for_keys(paths, CONTAINER_TARGETS.keys())
result_container = _run_container(
"dyn-subnet",
volumes=mount_args,
sleep_seconds=15,
user="0:0",
)
assert result_container.returncode == 0, f"Container failed: {result_container.output}"
# Use docker to read the file to avoid permission issues (file is 600 root:root)
# paths["app_config"] is the host absolute path
cmd = [
"docker", "run", "--rm",
"-v", f"{paths['app_config']}:/mnt",
"alpine:3.22",
"cat", "/mnt/app.conf"
]
read_result = subprocess.run(cmd, capture_output=True, text=True, timeout=SUBPROCESS_TIMEOUT_SECONDS)
assert read_result.returncode == 0, f"Could not read app.conf. Stderr: {read_result.stderr}, Stdout: {read_result.stdout}"
content = read_result.stdout
# Check that SCAN_SUBNETS was set to something other than the default fallback
# The default fallback in the script is ['--localnet'] if no interfaces found.
# But in test environment (and prod), we expect interfaces.
match = re.search(r"SCAN_SUBNETS\s*=\s*(.*)", content)
assert match, "SCAN_SUBNETS not found in config"
val = match.group(1)
# verify it contains an interface definition
assert "interface=" in val, f"SCAN_SUBNETS should contain interface spec, got: {val}"
assert val != "['--localnet']", "SCAN_SUBNETS should not be default localnet"
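For quick manual verification outside pytest, the same check the test performs can be sketched as a shell snippet against a sample config line (value illustrative):

```shell
# Classify a SCAN_SUBNETS line the same way the test asserts on it.
content="SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0']"
val=$(printf '%s\n' "$content" | sed -n 's/^SCAN_SUBNETS[[:space:]]*=[[:space:]]*//p')

case "$val" in
  "['--localnet']") result="default-localnet" ;;  # fallback: no interfaces found
  *interface=*)     result="dynamic" ;;           # expected: per-interface subnets
  *)                result="unexpected" ;;
esac
echo "$result"   # prints: dynamic
```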
def test_missing_app_db_triggers_seed(tmp_path: pathlib.Path) -> None:
"""Test missing database file seeding - simulates corrupted/missing app.db.