{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Home","text":"NetAlertX

Centralized network visibility and continuous asset discovery.

NetAlertX delivers a scalable and secure solution for comprehensive network monitoring, supporting security awareness and operational efficiency.

Learn

Understand NetAlertX core features, discovery, and alerting concepts

Explore Features Install

Step-by-step installation guides for Docker, Home Assistant, Unraid, and bare-metal setups

View Installation Guides Notifications

Learn how NetAlertX provides device presence, alerting, and compliance-friendly monitoring

Explore Notifications Contribute

Source code, development environment setup, and contribution guidelines

Contribute on GitHub"},{"location":"#help-and-support","title":"Help and Support","text":"

If you need help or run into issues, here are some resources to guide you:

Before opening an issue, please:

Need more help? Join the community discussions or submit a support request:

"},{"location":"#contributing","title":"Contributing","text":"

NetAlertX is open-source and welcomes contributions from the community! If you'd like to help improve the software, please follow the guidelines below:

For more information on contributing, check out our Dev Guide.

"},{"location":"#stay-updated","title":"Stay Updated","text":"

To keep up with the latest changes and updates to NetAlertX, please refer to the following resources:

Make sure to follow the project on GitHub to get notifications for new releases and important updates.

"},{"location":"#additional-info","title":"Additional info","text":"

If you have any suggestions or improvements, please don\u2019t hesitate to contribute!

NetAlertX is actively maintained. You can find the source code, report bugs, or request new features on our GitHub page.

"},{"location":"ADVISORY_EYES_ON_GLASS/","title":"Eyes on glass","text":""},{"location":"ADVISORY_EYES_ON_GLASS/#build-an-msp-wallboard-for-network-monitoring","title":"Build an MSP Wallboard for Network Monitoring","text":"

For Managed Service Providers (MSPs) and Network Operations Centers (NOCs), \"Eyes on Glass\" monitoring requires a UI that is both self-healing (auto-refreshing) and focused only on critical data. By leveraging the UI Settings Plugin, you can transform NetAlertX from a management tool into a dedicated live monitor.

"},{"location":"ADVISORY_EYES_ON_GLASS/#1-configure-auto-refresh-for-live-monitoring","title":"1. Configure Auto-Refresh for Live Monitoring","text":"

Static dashboards are the enemy of real-time response. NetAlertX allows you to force the UI to pull fresh data without manual page reloads.

"},{"location":"ADVISORY_EYES_ON_GLASS/#2-streamlining-the-dashboard-msp-mode","title":"2. Streamlining the Dashboard (MSP Mode)","text":"

An MSP's focus is on what is broken, not what is working. Hide the noise to increase reaction speed.

"},{"location":"ADVISORY_EYES_ON_GLASS/#3-creating-custom-noc-views","title":"3. Creating Custom NOC Views","text":"

Use the UI Filters in tandem with UI Settings to create custom views.

Feature NOC/MSP Application Site-Specific Nodes Filter the view by a specific \"Sync Node\" or \"Location\" filter to monitor a single client site. Filter by Criticality Filter devices where Group == \"Infrastructure\" or \"Server\". (depending on your predefined values) Predefined \"Down\" View Bookmark the URL with the /devices.php#down path to ensure the dashboard always loads into an \"Alert Only\" mode."},{"location":"ADVISORY_EYES_ON_GLASS/#4-browser-cache-stability","title":"4. Browser & Cache Stability","text":"

Because the UI is a web application, long-running sessions can occasionally experience cache drift.

Tip

NetAlertX - Detailed Dashboard Guide: This video provides a visual walkthrough of the NetAlertX dashboard features, including how to map and visualize devices, which is crucial for setting up a clear \"Eyes on Glass\" monitoring environment.

"},{"location":"ADVISORY_EYES_ON_GLASS/#summary-checklist","title":"Summary Checklist","text":""},{"location":"ADVISORY_MULTI_NETWORK/","title":"Multi-network monitoring","text":""},{"location":"ADVISORY_MULTI_NETWORK/#advisory-best-practices-for-monitoring-multiple-networks-with-netalertx","title":"ADVISORY: Best Practices for Monitoring Multiple Networks with NetAlertX","text":""},{"location":"ADVISORY_MULTI_NETWORK/#1-define-monitoring-scope-architecture","title":"1. Define Monitoring Scope & Architecture","text":"

Effective multi-network monitoring starts with understanding how NetAlertX \"sees\" your traffic.

Tip

Explore the remote networks documentation for more details on how to set up the approaches mentioned above.

"},{"location":"ADVISORY_MULTI_NETWORK/#2-automating-it-asset-inventory-with-workflows","title":"2. Automating IT Asset Inventory with Workflows","text":"

Workflows are the \"engine\" of NetAlertX, reducing manual overhead as your device list grows.

{\n  \"name\": \"Assign Location - BranchOffice\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devLastIP\",\n          \"operator\": \"contains\",\n          \"value\": \"10.10.20.\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devLocation\",\n      \"value\": \"BranchOffice\"\n    }\n  ]\n}\n

Tip

Always test new workflows in a \"Staging\" instance. A misconfigured workflow can trigger thousands of unintended updates across your database.

"},{"location":"ADVISORY_MULTI_NETWORK/#3-notification-strategy-low-noise-high-signal","title":"3. Notification Strategy: Low Noise, High Signal","text":"

A multi-network environment can generate significant \"alert fatigue.\" Use a layered filtering approach.

Level Strategy Recommended Action Device Silence Flapping Use \"Skip repeated notifications\" for unstable IoT devices. Plugin Tune Watchers Only enable _WATCH on reliable plugins (e.g., ICMP/SNMP). Global Filter Sections Limit NTFPRCS_INCLUDED_SECTIONS to new_devices and down_devices.

Tip

Ignore Rules: Maintain strict Ignored MAC (NEWDEV_ignored_MACs) and Ignored IP (NEWDEV_ignored_IPs) lists for guest networks or broadcast scanners to keep your logs clean.
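A minimal app.conf fragment for such ignore lists might look like the sketch below. The MAC and IP values are placeholders, and the Python-style list syntax reflects how other NetAlertX settings are written; adjust to your own guest-network addresses.

```python
# Illustrative values only - replace with the guest/scanner addresses on your network
NEWDEV_ignored_MACs=['aa:aa:aa:aa:aa:aa','bb:bb:bb:cc:cc:cc']
NEWDEV_ignored_IPs=['192.168.50.200']
```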

"},{"location":"ADVISORY_MULTI_NETWORK/#4-ui-filters-for-multi-network-clarity","title":"4. UI Filters for Multi-Network Clarity","text":"

Don't let a massive device list overwhelm you. Use the Multi-edit features to categorize devices and create focused views:

Tip

If you are providing services as a Managed Service Provider (MSP), customize your default UI to be exactly how you need it by hiding parts of the UI that you are not interested in, or by configuring an auto-refreshed screen monitoring your most important clients. See the Eyes on glass advisory for more details.

"},{"location":"ADVISORY_MULTI_NETWORK/#5-operational-stability-sync-health","title":"5. Operational Stability & Sync Health","text":""},{"location":"ADVISORY_MULTI_NETWORK/#6-optimize-performance","title":"6. Optimize Performance","text":"

As your environment grows, tuning the underlying engine is vital to maintain a snappy UI and reliable discovery cycles.

Important

For a deep dive into hardware requirements, database vacuuming, and specific environment variables for high-load instances, refer to the full Performance Optimization Guide.

"},{"location":"ADVISORY_MULTI_NETWORK/#summary-checklist","title":"Summary Checklist","text":""},{"location":"API/","title":"API Documentation","text":"

This API provides programmatic access to devices, events, sessions, metrics, network tools, and sync in NetAlertX. It is implemented as a REST and GraphQL server. All requests require authentication via an API token (the API_TOKEN setting) unless explicitly noted. For example, to authorize a GraphQL request, send an Authorization: Bearer API_TOKEN header, as shown in the example below:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n

The API server runs on 0.0.0.0:<GRAPHQL_PORT> with CORS enabled for all main endpoints.

CORS configuration: You can limit allowed CORS origins with the CORS_ORIGINS environment variable. Set it to a comma-separated list of origins (for example: CORS_ORIGINS=\"https://example.com,http://localhost:3000\"). The server parses this list at startup and only allows origins that begin with http:// or https://. If CORS_ORIGINS is unset or parses to an empty list, the API falls back to a safe development default list (localhosts) and will include * as a last-resort permissive origin.
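When running in a container, the variable can be supplied at startup. The sketch below is illustrative: the image name and port mapping are assumptions, not authoritative values; substitute your own deployment details.

```
# Restrict CORS to two trusted origins; the list is parsed once at startup.
# Image name and port mapping are placeholders for your deployment.
docker run -d \
  -e CORS_ORIGINS="https://example.com,http://localhost:3000" \
  -p 20212:20212 \
  <your-netalertx-image>
```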

"},{"location":"API/#authentication","title":"Authentication","text":"

All endpoints require an API token provided in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid, the server will return:

{\n  \"success\": false,\n  \"message\": \"ERROR: Not authorized\",\n  \"error\": \"Forbidden\"\n}\n

HTTP Status: 403 Forbidden

"},{"location":"API/#base-url","title":"Base URL","text":"
http://<server>:<GRAPHQL_PORT>/\n
"},{"location":"API/#endpoints","title":"Endpoints","text":"

Note

You can explore the API endpoints by using the interactive API docs at http://<server>:<GRAPHQL_PORT>/docs.

Tip

When retrieving devices or settings, try the GraphQL API endpoint first, as it is read-optimized.

"},{"location":"API/#standard-rest-endpoints","title":"Standard REST Endpoints","text":""},{"location":"API/#mcp-server-bridge","title":"MCP Server Bridge","text":"

NetAlertX includes an MCP (Model Context Protocol) Server Bridge that provides AI assistants access to NetAlertX functionality through standardized tools. MCP endpoints are available at /mcp/sse/* paths and mirror the functionality of standard REST endpoints:

MCP endpoints require the same Bearer token authentication as REST endpoints.

\ud83d\udcd6 See MCP Server Bridge API for complete documentation, tool specifications, and integration examples.

See Testing for example requests and usage.

"},{"location":"API/#notes","title":"Notes","text":""},{"location":"API_DBQUERY/","title":"Database Query API","text":"

The Database Query API provides direct, low-level access to the NetAlertX database. It allows read, write, update, and delete operations against tables, using base64-encoded SQL or structured parameters.

Warning

This API is primarily used internally to generate and render the application UI. These endpoints are low-level and powerful, and should be used with caution. Wherever possible, prefer the standard API endpoints. Invalid or unsafe queries can corrupt data. If you need data in a specific format that is not already provided, please open an issue or pull request with a clear, broadly useful use case. This helps ensure new endpoints benefit the wider community rather than relying on raw database queries.

"},{"location":"API_DBQUERY/#authentication","title":"Authentication","text":"

All /dbquery/* endpoints require an API token in the HTTP headers:

Authorization: Bearer <API_TOKEN>\n

If the token is missing or invalid (HTTP 403):

{\n  \"success\": false,\n  \"message\": \"ERROR: Not authorized\",\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_DBQUERY/#endpoints","title":"Endpoints","text":""},{"location":"API_DBQUERY/#1-post-dbqueryread","title":"1. POST /dbquery/read","text":"

Execute a read-only SQL query (e.g., SELECT).

"},{"location":"API_DBQUERY/#request-body","title":"Request Body","text":"
{\n  \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"   // base64 encoded SQL\n}\n

Decoded SQL:

SELECT * FROM DEVICES\n
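The rawSql value is simply the base64 encoding of the query, which can be produced with standard tools. A minimal sketch using coreutils; printf '%s' is used so no trailing newline is encoded into the payload:

```shell
# Base64-encode a read-only SQL query for the rawSql field.
sql='SELECT * FROM DEVICES'
rawSql=$(printf '%s' "$sql" | base64)
echo "$rawSql"   # U0VMRUNUICogRlJPTSBERVZJQ0VT
# Round-trip check: decode back to the original query.
printf '%s' "$rawSql" | base64 -d && echo
```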
"},{"location":"API_DBQUERY/#response","title":"Response","text":"
{\n  \"success\": true,\n  \"results\": [\n    { \"devMac\": \"AA:BB:CC:DD:EE:FF\", \"devName\": \"Phone\" }\n  ]\n}\n
"},{"location":"API_DBQUERY/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/read\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"U0VMRUNUICogRlJPTSBERVZJQ0VT\"\n  }'\n
"},{"location":"API_DBQUERY/#2-post-dbqueryupdate-safer-than-dbquerywrite","title":"2. POST /dbquery/update (safer than /dbquery/write)","text":"

Update rows in a table by columnName + id. /dbquery/update is parameterized to reduce the risk of SQL injection, while /dbquery/write executes raw SQL directly.

"},{"location":"API_DBQUERY/#request-body_1","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\",\n  \"columns\": [\"devName\", \"devOwner\"],\n  \"values\": [\"Laptop\", \"Alice\"]\n}\n
"},{"location":"API_DBQUERY/#response_1","title":"Response","text":"
{ \"success\": true, \"updated_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_1","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/update\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\",\n    \"columns\": [\"devName\", \"devOwner\"],\n    \"values\": [\"Laptop\", \"Alice\"]\n  }'\n
"},{"location":"API_DBQUERY/#3-post-dbquerywrite","title":"3. POST /dbquery/write","text":"

Execute a write query (INSERT, UPDATE, DELETE).

"},{"location":"API_DBQUERY/#request-body_2","title":"Request Body","text":"
{\n  \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n}\n

Decoded SQL:

INSERT INTO Devices (devMac, devName, devFirstConnection, devLastConnection, devLastIP)\nVALUES ('6A:BB:4C:5D:6E', 'TestDevice', '2025-08-30 12:00:00', '2025-08-30 12:00:00', '10.0.0.10');\n
"},{"location":"API_DBQUERY/#response_2","title":"Response","text":"
{ \"success\": true, \"affected_rows\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"rawSql\": \"SU5TRVJUIElOVE8gRGV2aWNlcyAoZGV2TWFjLCBkZXYgTmFtZSwgZGV2Rmlyc3RDb25uZWN0aW9uLCBkZXZMYXN0Q29ubmVjdGlvbiwgZGV2TGFzdElQKSBWQUxVRVMgKCc2QTpCQjo0Qzo1RDo2RTonLCAnVGVzdERldmljZScsICcyMDI1LTA4LTMwIDEyOjAwOjAwJywgJzIwMjUtMDgtMzAgMTI6MDA6MDAnLCAnMTAuMC4wLjEwJyk=\"\n  }'\n
"},{"location":"API_DBQUERY/#4-post-dbquerydelete","title":"4. POST /dbquery/delete","text":"

Delete rows in a table by columnName + id.

"},{"location":"API_DBQUERY/#request-body_3","title":"Request Body","text":"
{\n  \"columnName\": \"devMac\",\n  \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n  \"dbtable\": \"Devices\"\n}\n
"},{"location":"API_DBQUERY/#response_3","title":"Response","text":"
{ \"success\": true, \"deleted_count\": 1 }\n
"},{"location":"API_DBQUERY/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/dbquery/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"columnName\": \"devMac\",\n    \"id\": [\"AA:BB:CC:DD:EE:FF\"],\n    \"dbtable\": \"Devices\"\n  }'\n
"},{"location":"API_DEVICE/","title":"Device API Endpoints","text":"

Manage a single device by its MAC address. Operations include retrieval, updates, deletion, resetting properties, and copying data between devices. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICE/#1-retrieve-device-details","title":"1. Retrieve Device Details","text":"

Special case: mac=new returns a template for a new device with default values.

Response (success):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n  \"devName\": \"Net - Huawei\",\n  \"devOwner\": \"Admin\",\n  \"devType\": \"Router\",\n  \"devVendor\": \"Huawei\",\n  \"devStatus\": \"On-line\",\n  \"devSessions\": 12,\n  \"devEvents\": 5,\n  \"devDownAlerts\": 1,\n  \"devPresenceHours\": 32,\n  \"devChildrenDynamic\": [...],\n  \"devChildrenNicsDynamic\": [...],\n  ...\n}\n

Error Responses:

MCP Integration: Available as get_device_info and set_device_alias tools. See MCP Server Bridge API.

"},{"location":"API_DEVICE/#2-update-device-fields","title":"2. Update Device Fields","text":"

Request Body:

{\n  \"devName\": \"New Device\",\n  \"devOwner\": \"Admin\",\n  \"createNew\": true\n}\n

Behavior:

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#3-delete-a-device","title":"3. Delete a Device","text":"

Response:

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#4-delete-all-events-for-a-device","title":"4. Delete All Events for a Device","text":"

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#5-reset-device-properties","title":"5. Reset Device Properties","text":"

Request Body: Optional JSON for additional parameters.

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_DEVICE/#6-copy-device-data","title":"6. Copy Device Data","text":"

Request Body:

{\n  \"macFrom\": \"AA:BB:CC:DD:EE:FF\",\n  \"macTo\": \"11:22:33:44:55:66\"\n}\n

Response:

{\n  \"success\": true,\n  \"message\": \"Device copied from AA:BB:CC:DD:EE:FF to 11:22:33:44:55:66\"\n}\n

Error Responses:

"},{"location":"API_DEVICE/#7-update-a-single-column","title":"7. Update a Single Column","text":"

Request Body:

{\n  \"columnName\": \"devName\",\n  \"columnValue\": \"Updated Device Name\"\n}\n

Response (success):

{\n  \"success\": true\n}\n

Error Responses:

"},{"location":"API_DEVICE/#example-curl-requests","title":"Example curl Requests","text":"

Get Device Details:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Update Device Fields:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devName\": \"New Device Name\"}'\n

Delete Device:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Copy Device Data:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/copy\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macFrom\":\"AA:BB:CC:DD:EE:FF\",\"macTo\":\"11:22:33:44:55:66\"}'\n

Update Single Column:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/device/AA:BB:CC:DD:EE:FF/update-column\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"columnName\":\"devName\",\"columnValue\":\"Updated Device\"}'\n
"},{"location":"API_DEVICES/","title":"Devices Collection API Endpoints","text":"

The Devices Collection API provides operations to retrieve, manage, import/export, and filter devices in bulk. All endpoints require authorization via Bearer token.

"},{"location":"API_DEVICES/#endpoints","title":"Endpoints","text":""},{"location":"API_DEVICES/#1-get-all-devices","title":"1. Get All Devices","text":"

Response (success):

{\n  \"success\": true,\n  \"devices\": [\n    {\n      \"devName\": \"Net - Huawei\",\n      \"devMAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"devIP\": \"192.168.1.1\",\n      \"devType\": \"Router\",\n      \"devFavorite\": 0,\n      \"devStatus\": \"online\"\n    },\n    ...\n  ]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#2-delete-devices-by-mac","title":"2. Delete Devices by MAC","text":"

Request Body:

{\n  \"macs\": [\"AA:BB:CC:DD:EE:FF\", \"11:22:33:*\"]\n}\n

Behavior:

Response:

{\n  \"success\": true,\n  \"deleted_count\": 5\n}\n

Error Responses:

"},{"location":"API_DEVICES/#3-delete-devices-with-empty-macs","title":"3. Delete Devices with Empty MACs","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 3\n}\n
"},{"location":"API_DEVICES/#4-delete-unknown-devices","title":"4. Delete Unknown Devices","text":"

Response:

{\n  \"success\": true,\n  \"deleted\": 2\n}\n
"},{"location":"API_DEVICES/#5-export-devices","title":"5. Export Devices","text":"

Query Parameter / URL Parameter:

CSV Response:

JSON Response:

{\n  \"data\": [\n    { \"devName\": \"Net - Huawei\", \"devMAC\": \"AA:BB:CC:DD:EE:FF\", ... },\n    ...\n  ],\n  \"columns\": [\"devName\", \"devMAC\", \"devIP\", \"devType\", \"devFavorite\", \"devStatus\"]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#6-import-devices-from-csv","title":"6. Import Devices from CSV","text":"

Request Body (multipart file or JSON with content field):

{\n  \"content\": \"<base64-encoded CSV content>\"\n}\n

Response:

{\n  \"success\": true,\n  \"inserted\": 25,\n  \"skipped_lines\": [3, 7]\n}\n

Error Responses:

"},{"location":"API_DEVICES/#7-get-device-totals","title":"7. Get Device Totals","text":"

Response:

[\n  120,    // Total devices\n  85,     // Connected\n  5,      // Favorites\n  10,     // New\n  8,      // Down\n  12      // Archived\n]\n

Order: [all, connected, favorites, new, down, archived]

"},{"location":"API_DEVICES/#8-get-devices-by-status","title":"8. Get Devices by Status","text":"

Query Parameter:

Response (success):

[\n  { \"id\": \"AA:BB:CC:DD:EE:FF\", \"title\": \"Net - Huawei\", \"favorite\": 0 },\n  { \"id\": \"11:22:33:44:55:66\", \"title\": \"\u2605 USG Firewall\", \"favorite\": 1 }\n]\n

If devFavorite=1, the title is prepended with a star \u2605.

"},{"location":"API_DEVICES/#9-search-devices","title":"9. Search Devices","text":"

Request Body (JSON):

{\n  \"query\": \".50\"\n}\n

Response:

{\n  \"success\": true,\n  \"devices\": [\n    {\n      \"devName\": \"Test Device\",\n      \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n      \"devLastIP\": \"192.168.1.50\"\n    }\n  ]\n}\n
"},{"location":"API_DEVICES/#10-get-latest-device","title":"10. Get Latest Device","text":"

Response:

[\n  {\n    \"devName\": \"Latest Device\",\n    \"devMac\": \"AA:BB:CC:DD:EE:FF\",\n    \"devLastIP\": \"192.168.1.100\",\n    \"devFirstConnection\": \"2025-12-07 10:30:00\"\n  }\n]\n
"},{"location":"API_DEVICES/#11-get-network-topology","title":"11. Get Network Topology","text":"

Response:

{\n  \"nodes\": [\n    {\n      \"id\": \"AA:AA:AA:AA:AA:AA\",\n      \"name\": \"Router\",\n      \"vendor\": \"VendorA\"\n    }\n  ],\n  \"links\": [\n    {\n      \"source\": \"AA:AA:AA:AA:AA:AA\",\n      \"target\": \"BB:BB:BB:BB:BB:BB\",\n      \"port\": \"eth1\"\n    }\n  ]\n}\n
"},{"location":"API_DEVICES/#mcp-tools","title":"MCP Tools","text":"

These endpoints are also available as MCP Tools for AI assistant integration: - list_devices, search_devices, get_latest_device, get_network_topology, set_device_alias

\ud83d\udcd6 See MCP Server Bridge API for AI integration details.

"},{"location":"API_DEVICES/#example-curl-requests","title":"Example curl Requests","text":"

Get All Devices:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Devices by MAC:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/devices\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"macs\":[\"AA:BB:CC:DD:EE:FF\",\"11:22:33:*\"]}'\n

Export Devices CSV:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/export?format=csv\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Import Devices from CSV:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/devices/import\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -F \"file=@devices.csv\"\n

Get Devices by Status:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/by-status?status=online\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Search Devices:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/devices/search\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"query\": \"192.168.1\"}'\n

Get Latest Device:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/latest\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Get Network Topology:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/devices/network/topology\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_DEVICE_FIELD_LOCK/","title":"Device Field Lock/Unlock API","text":""},{"location":"API_DEVICE_FIELD_LOCK/#overview","title":"Overview","text":"

The Device Field Lock/Unlock feature allows users to lock specific device fields to prevent plugin overwrites. This is part of the authoritative device field update system that ensures data integrity while maintaining flexibility for user customization.

"},{"location":"API_DEVICE_FIELD_LOCK/#concepts","title":"Concepts","text":""},{"location":"API_DEVICE_FIELD_LOCK/#tracked-fields","title":"Tracked Fields","text":"

Only certain device fields support locking. These are the fields that can be modified by both plugins and users:

"},{"location":"API_DEVICE_FIELD_LOCK/#field-source-tracking","title":"Field Source Tracking","text":"

Every tracked field has an associated *Source field that indicates where the current value originated:

"},{"location":"API_DEVICE_FIELD_LOCK/#locking-mechanism","title":"Locking Mechanism","text":"

When a field is locked, its source is set to LOCKED. This prevents plugin overwrites based on the authorization logic:

  1. Plugin wants to update field
  2. Authoritative handler checks field's *Source value
  3. If *Source == LOCKED, plugin update is rejected
  4. User can still manually unlock the field

When a field is unlocked, its source is set to NEWDEV, allowing plugins to resume updates.

"},{"location":"API_DEVICE_FIELD_LOCK/#endpoints","title":"Endpoints","text":""},{"location":"API_DEVICE_FIELD_LOCK/#lock-or-unlock-a-field","title":"Lock or Unlock a Field","text":"
POST /device/{mac}/field/lock\nAuthorization: Bearer {API_TOKEN}\nContent-Type: application/json\n\n{\n  \"fieldName\": \"devName\",\n  \"lock\": true\n}\n
"},{"location":"API_DEVICE_FIELD_LOCK/#parameters","title":"Parameters","text":""},{"location":"API_DEVICE_FIELD_LOCK/#responses","title":"Responses","text":"

Success (200)

{\n  \"success\": true,\n  \"message\": \"Field devName locked\",\n  \"fieldName\": \"devName\",\n  \"locked\": true\n}\n

Bad Request (400)

{\n  \"success\": false,\n  \"error\": \"fieldName is required\"\n}\n

{\n  \"success\": false,\n  \"error\": \"Field 'devInvalidField' cannot be locked\"\n}\n

Unauthorized (403)

{\n  \"success\": false,\n  \"error\": \"Unauthorized\"\n}\n

Not Found (404)

{\n  \"success\": false,\n  \"error\": \"Device not found\"\n}\n

"},{"location":"API_DEVICE_FIELD_LOCK/#examples","title":"Examples","text":""},{"location":"API_DEVICE_FIELD_LOCK/#lock-a-device-name","title":"Lock a Device Name","text":"

Prevent the device name from being overwritten by plugins:

curl -X POST https://your-netalertx.local/api/device/AA:BB:CC:DD:EE:FF/field/lock \\\n  -H \"Authorization: Bearer your-api-token\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"fieldName\": \"devName\",\n    \"lock\": true\n  }'\n
"},{"location":"API_DEVICE_FIELD_LOCK/#unlock-a-field","title":"Unlock a Field","text":"

Allow plugins to resume updating a field:

curl -X POST https://your-netalertx.local/api/device/AA:BB:CC:DD:EE:FF/field/lock \\\n  -H \"Authorization: Bearer your-api-token\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"fieldName\": \"devName\",\n    \"lock\": false\n  }'\n
"},{"location":"API_DEVICE_FIELD_LOCK/#ui-integration","title":"UI Integration","text":"

The Device Edit form displays lock/unlock buttons for all tracked fields:

  1. Lock Button (\ud83d\udd12): Click to prevent plugin overwrites
  2. Unlock Button (\ud83d\udd13): Click to allow plugin overwrites again
  3. Source Indicator: Shows current field source (USER, LOCKED, NEWDEV, or plugin name)
"},{"location":"API_DEVICE_FIELD_LOCK/#authorization-handler","title":"Authorization Handler","text":"

The authoritative field update logic prevents plugin overwrites:

  1. Plugin provides new value for field via plugin config SET_ALWAYS/SET_EMPTY
  2. Authoritative handler (in DeviceInstance) checks {field}Source value
  3. If source is LOCKED or USER, plugin update is rejected
  4. If source is NEWDEV or plugin name, plugin update is accepted
"},{"location":"API_DEVICE_FIELD_LOCK/#see-also","title":"See Also","text":""},{"location":"API_EVENTS/","title":"Events API Endpoints","text":"

The Events API provides access to device event logs, allowing creation, retrieval, deletion, and summary of events over time.

"},{"location":"API_EVENTS/#endpoints","title":"Endpoints","text":""},{"location":"API_EVENTS/#1-create-event","title":"1. Create Event","text":"

Request Body (JSON):

{\n  \"ip\": \"192.168.1.10\",\n  \"event_type\": \"Device Down\",\n  \"additional_info\": \"Optional info about the event\",\n  \"pending_alert\": 1,\n  \"event_time\": \"2025-08-24T12:00:00Z\"\n}\n

Response (JSON):

{\n  \"success\": true,\n  \"message\": \"Event created for 00:11:22:33:44:55\"\n}\n
"},{"location":"API_EVENTS/#2-get-events","title":"2. Get Events","text":"
/events?mac=<mac>\n

Response:

{\n  \"success\": true,\n  \"events\": [\n    {\n      \"eve_MAC\": \"00:11:22:33:44:55\",\n      \"eve_IP\": \"192.168.1.10\",\n      \"eve_DateTime\": \"2025-08-24T12:00:00Z\",\n      \"eve_EventType\": \"Device Down\",\n      \"eve_AdditionalInfo\": \"\",\n      \"eve_PendingAlertEmail\": 1\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#3-delete-events","title":"3. Delete Events","text":"

Response:

{\n  \"success\": true,\n  \"message\": \"Deleted events older than <days> days\"\n}\n
"},{"location":"API_EVENTS/#4-get-recent-events","title":"4. Get Recent Events","text":"

Response (JSON):

{\n  \"success\": true,\n  \"hours\": 24,\n  \"count\": 5,\n  \"events\": [\n    {\n      \"eve_DateTime\": \"2025-12-07 12:00:00\",\n      \"eve_EventType\": \"New Device\",\n      \"eve_MAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"eve_IP\": \"192.168.1.100\",\n      \"eve_AdditionalInfo\": \"Device detected\"\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#5-get-latest-events","title":"5. Get Latest Events","text":"

Response (JSON):

{\n  \"success\": true,\n  \"count\": 10,\n  \"events\": [\n    {\n      \"eve_DateTime\": \"2025-12-07 12:00:00\",\n      \"eve_EventType\": \"Device Down\",\n      \"eve_MAC\": \"AA:BB:CC:DD:EE:FF\"\n    }\n  ]\n}\n
"},{"location":"API_EVENTS/#6-event-totals-over-a-period","title":"6. Event Totals Over a Period","text":"

Query Parameters:

Parameter Description period Time period for totals, e.g., \"7 days\", \"1 month\", \"1 year\", \"100 years\"

Sample Response (JSON Array):

[120, 85, 5, 10, 3, 7]\n

Meaning of Values:

  1. Total events in the period
  2. Total sessions
  3. Missing sessions
  4. Voided events (eve_EventType LIKE 'VOIDED%')
  5. New device events (eve_EventType LIKE 'New Device')
  6. Device down events (eve_EventType LIKE 'Device Down')
"},{"location":"API_EVENTS/#mcp-tools","title":"MCP Tools","text":"

Event endpoints are also exposed as MCP Tools for AI assistant integration: get_recent_alerts and get_last_events.

\ud83d\udcd6 See MCP Server Bridge API for AI integration details.

"},{"location":"API_EVENTS/#notes","title":"Notes","text":"
{\n  \"success\": false,\n  \"message\": \"ERROR: Not authorized\",\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_EVENTS/#example-curl-requests","title":"Example curl Requests","text":"

Create Event:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/events/create/00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\n    \"ip\": \"192.168.1.10\",\n    \"event_type\": \"Device Down\",\n    \"additional_info\": \"Power outage\",\n    \"pending_alert\": 1\n  }'\n
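The request body from the curl example can also be assembled programmatically; a Python sketch using the fields shown above (the helper name is hypothetical):

```python
import json

# Build the JSON body for POST /events/create/<mac>, mirroring the
# fields used in the curl example above.
def build_event_payload(ip, event_type, additional_info="", pending_alert=1):
    return {
        "ip": ip,
        "event_type": event_type,
        "additional_info": additional_info,
        "pending_alert": pending_alert,
    }

body = build_event_payload("192.168.1.10", "Device Down", "Power outage")
print(json.dumps(body))
```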

Get Events for a Device:

curl \"http://<server_ip>:<GRAPHQL_PORT>/events?mac=00:11:22:33:44:55\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Delete Events Older Than 30 Days:

curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/events/30\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Get Event Totals for 7 Days:

curl \"http://<server_ip>:<GRAPHQL_PORT>/sessions/totals?period=7 days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_GRAPHQL/","title":"GraphQL API Endpoint","text":"

GraphQL queries are read-optimized for speed. Data may be slightly out of date until the file system cache refreshes. The GraphQL endpoints allow you to access the following objects:

"},{"location":"API_GRAPHQL/#endpoints","title":"Endpoints","text":""},{"location":"API_GRAPHQL/#devices-query","title":"Devices Query","text":""},{"location":"API_GRAPHQL/#sample-query","title":"Sample Query","text":"
query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters","title":"Query Parameters","text":"Parameter Description page Page number of results to fetch. limit Number of results per page. sort Sorting options (field = field name, order = asc or desc). search Term to filter devices. status Filter devices by status: my_devices, connected, favorites, new, down, archived, offline. filters Additional filters (array of { filterColumn, filterValue })."},{"location":"API_GRAPHQL/#curl-example","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response","title":"Sample Response","text":"
{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 1\n    }\n  }\n}\n
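For scripted access, the same request body can be built in Python before posting it to the GraphQL endpoint; a sketch mirroring the curl example (the query is abbreviated to a few fields):

```python
import json

# Assemble the GraphQL request body used in the curl example above.
QUERY = """
query GetDevices($options: PageQueryOptionsInput) {
  devices(options: $options) {
    devices { rowid devMac devName devStatus }
    count
  }
}
"""

def devices_payload(page=1, limit=10, status="connected", search=""):
    return json.dumps({
        "query": QUERY,
        "variables": {
            "options": {
                "page": page,
                "limit": limit,
                "sort": [{"field": "devName", "order": "asc"}],
                "search": search,
                "status": status,
            }
        },
    })

print(devices_payload())
```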
"},{"location":"API_GRAPHQL/#settings-query","title":"Settings Query","text":"

The settings query provides access to NetAlertX configuration stored in the settings table.

"},{"location":"API_GRAPHQL/#sample-query_1","title":"Sample Query","text":"
query GetSettings {\n  settings {\n    settings {\n      setKey\n      setName\n      setDescription\n      setType\n      setOptions\n      setGroup\n      setValue\n      setEvents\n      setOverriddenByEnv\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#schema-fields","title":"Schema Fields","text":"Field Type Description setKey String Unique key identifier for the setting. setName String Human-readable name. setDescription String Description or documentation of the setting. setType String Data type (string, int, bool, json, etc.). setOptions String Available options (for dropdown/select-type settings). setGroup String Group/category the setting belongs to. setValue String Current value of the setting. setEvents String Events or triggers related to this setting. setOverriddenByEnv Boolean Whether the setting is overridden by an environment variable at runtime."},{"location":"API_GRAPHQL/#curl-example_1","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_1","title":"Sample Response","text":"
{\n  \"data\": {\n    \"settings\": {\n      \"settings\": [\n        {\n          \"setKey\": \"UI_MY_DEVICES\",\n          \"setName\": \"My Devices Filter\",\n          \"setDescription\": \"Defines which statuses to include in the 'My Devices' view.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"online\\\",\\\"new\\\",\\\"down\\\",\\\"offline\\\",\\\"archived\\\"]\",\n          \"setGroup\": \"UI\",\n          \"setValue\": \"[\\\"online\\\",\\\"new\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": false\n        },\n        {\n          \"setKey\": \"NETWORK_DEVICE_TYPES\",\n          \"setName\": \"Network Device Types\",\n          \"setDescription\": \"Types of devices considered as network infrastructure.\",\n          \"setType\": \"list\",\n          \"setOptions\": \"[\\\"Router\\\",\\\"Switch\\\",\\\"AP\\\"]\",\n          \"setGroup\": \"Network\",\n          \"setValue\": \"[\\\"Router\\\",\\\"Switch\\\"]\",\n          \"setEvents\": null,\n          \"setOverriddenByEnv\": true\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#langstrings-query","title":"LangStrings Query","text":"

The LangStrings query provides access to localized strings. It supports filtering by langCode and langStringKey. If the requested string is missing or empty, you can optionally fall back to en_us.

"},{"location":"API_GRAPHQL/#sample-query_2","title":"Sample Query","text":"
query GetLangStrings {\n  langStrings(langCode: \"de_de\", langStringKey: \"settings_other_scanners\") {\n    langStrings {\n      langCode\n      langStringKey\n      langStringText\n    }\n    count\n  }\n}\n
"},{"location":"API_GRAPHQL/#query-parameters_1","title":"Query Parameters","text":"Parameter Type Description langCode String Optional language code (e.g., en_us, de_de). If omitted, all languages are returned. langStringKey String Optional string key to retrieve a specific entry. fallback_to_en Boolean Optional (default true). If true, empty or missing strings fallback to en_us."},{"location":"API_GRAPHQL/#curl-example_2","title":"curl Example","text":"
curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetLangStrings { langStrings(langCode: \\\"de_de\\\", langStringKey: \\\"settings_other_scanners\\\") { langStrings { langCode langStringKey langStringText } count } }\"\n  }'\n
"},{"location":"API_GRAPHQL/#sample-response_2","title":"Sample Response","text":"
{\n  \"data\": {\n    \"langStrings\": {\n      \"count\": 1,\n      \"langStrings\": [\n        {\n          \"langCode\": \"de_de\",\n          \"langStringKey\": \"settings_other_scanners\",\n          \"langStringText\": \"Other, non-device scanner plugins that are currently enabled.\"  // falls back to en_us if empty\n        }\n      ]\n    }\n  }\n}\n
"},{"location":"API_GRAPHQL/#notes","title":"Notes","text":""},{"location":"API_LOGS/","title":"Logs API Endpoints","text":"

Purge application log files stored under /app/log and manage the execution queue. These endpoints are primarily used for maintenance tasks, such as clearing accumulated logs or queuing system actions without restarting the container.

Only specific, pre-approved log files can be purged for security and stability reasons.

"},{"location":"API_LOGS/#delete-purge-a-log-file","title":"Delete (Purge) a Log File","text":"

Query Parameter:

Allowed Files:

app.log\nIP_changes.log\nstdout.log\nstderr.log\napp.php_errors.log\nexecution_queue.log\ndb_is_locked.log\n
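Clients can mirror the server-side allowlist to fail fast before issuing a DELETE; a small sketch using the file names listed above:

```python
# Mirror of the server-side purge allowlist documented above, so a
# client can validate the file name before calling the endpoint.
ALLOWED_LOG_FILES = {
    "app.log",
    "IP_changes.log",
    "stdout.log",
    "stderr.log",
    "app.php_errors.log",
    "execution_queue.log",
    "db_is_locked.log",
}

def can_purge(filename):
    return filename in ALLOWED_LOG_FILES

print(can_purge("app.log"), can_purge("not_allowed.log"))
```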

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success","title":"curl Example (Success)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[clean_log] File app.log purged successfully\"\n}\n
"},{"location":"API_LOGS/#curl-example-not-allowed","title":"curl Example (Not Allowed)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=not_allowed.log' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": false,\n  \"message\": \"[clean_log] File not_allowed.log is not allowed to be purged\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl -X DELETE 'http://<server_ip>:<GRAPHQL_PORT>/logs?file=app.log' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#add-an-action-to-the-execution-queue","title":"Add an Action to the Execution Queue","text":"

Request Body (JSON):

{\n  \"action\": \"update_api|devices\"\n}\n

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_LOGS/#curl-example-success_1","title":"curl Example (Success)","text":"

The request below updates the API cache for Devices:

curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"success\": true,\n  \"message\": \"[UserEventsQueueInstance] Action \\\"update_api|devices\\\" added to the execution queue.\"\n}\n
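The action string appears to follow a pipe-separated "command|argument" convention (an inference from the update_api|devices example above, not a documented guarantee); a small helper sketch:

```python
# Join and split execution-queue action strings of the assumed form
# "<command>|<argument>", e.g. "update_api|devices".
def build_action(command, argument):
    return f"{command}|{argument}"

def parse_action(action):
    command, _, argument = action.partition("|")
    return command, argument

print(build_action("update_api", "devices"))
print(parse_action("update_api|devices"))
```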
"},{"location":"API_LOGS/#curl-example-missing-parameter","title":"curl Example (Missing Parameter)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{}'\n

Response:

{\n  \"success\": false,\n  \"message\": \"Missing parameters\",\n  \"error\": \"Missing required 'action' field in JSON body\"\n}\n
"},{"location":"API_LOGS/#curl-example-unauthorized_1","title":"curl Example (Unauthorized)","text":"
curl -X POST 'http://<server_ip>:<GRAPHQL_PORT>/logs/add-to-execution-queue' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\"action\": \"update_api|devices\"}'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_LOGS/#notes","title":"Notes","text":""},{"location":"API_MCP/","title":"MCP Server Bridge API","text":"

The MCP (Model Context Protocol) Server Bridge provides AI assistants with standardized access to NetAlertX functionality through tools and server-sent events. This enables AI systems to interact with your network monitoring data in real-time.

"},{"location":"API_MCP/#overview","title":"Overview","text":"

The MCP Server Bridge exposes NetAlertX functionality as MCP Tools that AI assistants can call to:

All MCP endpoints mirror the functionality of standard REST endpoints but are optimized for AI assistant integration.

"},{"location":"API_MCP/#architecture-overview","title":"Architecture Overview","text":""},{"location":"API_MCP/#mcp-connection-flow","title":"MCP Connection Flow","text":"
graph TB\n    A[AI Assistant<br/>Claude Desktop] -->|SSE Connection| B[NetAlertX MCP Server<br/>:20212/mcp/sse]\n    B -->|JSON-RPC Messages| C[MCP Bridge<br/>api_server_start.py]\n    C -->|Tool Calls| D[NetAlertX Tools<br/>Device/Network APIs]\n    D -->|Response Data| C\n    C -->|JSON Response| B\n    B -->|Stream Events| A
"},{"location":"API_MCP/#mcp-tool-integration","title":"MCP Tool Integration","text":"
sequenceDiagram\n    participant AI as AI Assistant\n    participant MCP as MCP Server (:20212)\n    participant API as NetAlertX API (:20211)\n    participant DB as SQLite Database\n\n    AI->>MCP: 1. Connect via SSE\n    MCP-->>AI: 2. Session established\n    AI->>MCP: 3. tools/list request\n    MCP->>API: 4. GET /mcp/sse/openapi.json\n    API-->>MCP: 5. Available tools spec\n    MCP-->>AI: 6. Tool definitions\n    AI->>MCP: 7. tools/call: search_devices\n    MCP->>API: 8. POST /devices/search\n    API->>DB: 9. Query devices\n    DB-->>API: 10. Device data\n    API-->>MCP: 11. JSON response\n    MCP-->>AI: 12. Tool result
"},{"location":"API_MCP/#component-architecture","title":"Component Architecture","text":"
graph LR\n    subgraph \"AI Client\"\n        A[Claude Desktop]\n        B[Custom MCP Client]\n    end\n\n    subgraph \"NetAlertX MCP Server (:20212)\"\n        C[SSE Endpoint<br/>/mcp/sse]\n        D[Message Handler<br/>/mcp/messages]\n        E[OpenAPI Spec<br/>/mcp/sse/openapi.json]\n    end\n\n    subgraph \"NetAlertX API Server (:20211)\"\n        F[Device APIs<br/>/devices/*]\n        G[Network Tools<br/>/nettools/*]\n        H[Events API<br/>/events/*]\n    end\n\n    subgraph \"Backend\"\n        I[SQLite Database]\n        J[Network Scanners]\n        K[Plugin System]\n    end\n\n    A -.->|Bearer Auth| C\n    B -.->|Bearer Auth| C\n    C --> D\n    C --> E\n    D --> F\n    D --> G\n    D --> H\n    F --> I\n    G --> J\n    H --> I
"},{"location":"API_MCP/#authentication","title":"Authentication","text":"

MCP endpoints use the same Bearer token authentication as REST endpoints:

Authorization: Bearer <API_TOKEN>\n

Unauthorized requests return HTTP 403:

{\n  \"success\": false,\n  \"message\": \"ERROR: Not authorized\",\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_MCP/#mcp-connection-endpoint","title":"MCP Connection Endpoint","text":""},{"location":"API_MCP/#server-sent-events-sse","title":"Server-Sent Events (SSE)","text":"

Main MCP connection endpoint for AI clients. Establishes a persistent connection using Server-Sent Events for real-time communication between AI assistants and NetAlertX.

Connection Example:

// Note: the native browser EventSource API cannot send custom headers,\n// so a client such as the 'eventsource' npm package (or a fetch-based\n// SSE reader) is needed to pass the Authorization header.\nconst EventSource = require('eventsource');\n\nconst eventSource = new EventSource('/mcp/sse', {\n  headers: {\n    'Authorization': 'Bearer <API_TOKEN>'\n  }\n});\n\neventSource.onmessage = function(event) {\n  const response = JSON.parse(event.data);\n  console.log('MCP Response:', response);\n};\n
"},{"location":"API_MCP/#openapi-specification","title":"OpenAPI Specification","text":""},{"location":"API_MCP/#get-mcp-tools-specification","title":"Get MCP Tools Specification","text":"

Returns the OpenAPI specification for all available MCP tools, describing the parameters and schemas for each tool.

Response:

{\n  \"openapi\": \"3.0.0\",\n  \"info\": {\n    \"title\": \"NetAlertX Tools\",\n    \"version\": \"1.1.0\"\n  },\n  \"servers\": [{\"url\": \"/\"}],\n  \"paths\": {\n    \"/devices/by-status\": {\n      \"post\": {\"operationId\": \"list_devices\"}\n    },\n    \"/device/{mac}\": {\n      \"post\": {\"operationId\": \"get_device_info\"}\n    },\n    \"/devices/search\": {\n      \"post\": {\"operationId\": \"search_devices\"}\n    }\n  }\n}\n
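A client can discover the available tools by walking the spec's paths; a Python sketch over the example document above:

```python
# Extract MCP tool operationIds from the OpenAPI spec shown above
# (trimmed to the same three example paths).
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/devices/by-status": {"post": {"operationId": "list_devices"}},
        "/device/{mac}": {"post": {"operationId": "get_device_info"}},
        "/devices/search": {"post": {"operationId": "search_devices"}},
    },
}

def tool_names(spec):
    return sorted(
        op["operationId"]
        for methods in spec["paths"].values()
        for op in methods.values()
        if "operationId" in op
    )

print(tool_names(spec))
```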
"},{"location":"API_MCP/#available-mcp-tools","title":"Available MCP Tools","text":""},{"location":"API_MCP/#device-management-tools","title":"Device Management Tools","text":"Tool Endpoint Description list_devices /devices/by-status List devices by online status get_device_info /device/{mac} Get detailed device information search_devices /devices/search Search devices by MAC, name, or IP get_latest_device /devices/latest Get most recently connected device set_device_alias /device/{mac}/set-alias Set device friendly name"},{"location":"API_MCP/#network-tools","title":"Network Tools","text":"Tool Endpoint Description trigger_scan /nettools/trigger-scan Trigger network discovery scan to find new devices. run_nmap_scan /nettools/nmap Perform NMAP scan on a target to identify open ports. get_open_ports /device/open_ports Get stored NMAP open ports. Use run_nmap_scan first if empty. wol_wake_device /nettools/wakeonlan Wake device using Wake-on-LAN get_network_topology /devices/network/topology Get network topology map"},{"location":"API_MCP/#event-monitoring-tools","title":"Event & Monitoring Tools","text":"Tool Endpoint Description get_recent_alerts /events/recent Get events from last 24 hours get_last_events /events/last Get 10 most recent events"},{"location":"API_MCP/#tool-usage-examples","title":"Tool Usage Examples","text":""},{"location":"API_MCP/#search-devices-tool","title":"Search Devices Tool","text":"

Tool Call:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"1\",\n  \"method\": \"tools/call\",\n  \"params\": {\n    \"name\": \"search_devices\",\n    \"arguments\": {\n      \"query\": \"192.168.1\"\n    }\n  }\n}\n

Response:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"1\",\n  \"result\": {\n    \"content\": [\n      {\n        \"type\": \"text\",\n        \"text\": \"{\\n  \\\"success\\\": true,\\n  \\\"devices\\\": [\\n    {\\n      \\\"devName\\\": \\\"Router\\\",\\n      \\\"devMac\\\": \\\"AA:BB:CC:DD:EE:FF\\\",\\n      \\\"devLastIP\\\": \\\"192.168.1.1\\\"\\n    }\\n  ]\\n}\"\n      }\n    ],\n    \"isError\": false\n  }\n}\n
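All tool calls share the same JSON-RPC 2.0 envelope; a minimal builder sketch (the local id counter is a convenience for this example, not an MCP requirement):

```python
import itertools
import json

_ids = itertools.count(1)  # locally increasing request ids

def tool_call(name, arguments):
    """Build a JSON-RPC 2.0 tools/call message like the one above."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(next(_ids)),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

msg = json.loads(tool_call("search_devices", {"query": "192.168.1"}))
print(msg["id"], msg["params"]["name"])
```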

"},{"location":"API_MCP/#trigger-network-scan-tool","title":"Trigger Network Scan Tool","text":"

Tool Call:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"2\",\n  \"method\": \"tools/call\",\n  \"params\": {\n    \"name\": \"trigger_scan\",\n    \"arguments\": {\n      \"type\": \"ARPSCAN\"\n    }\n  }\n}\n

Response:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"2\",\n  \"result\": {\n    \"content\": [\n      {\n        \"type\": \"text\",\n        \"text\": \"{\\n  \\\"success\\\": true,\\n  \\\"message\\\": \\\"Scan triggered for type: ARPSCAN\\\"\\n}\"\n      }\n    ],\n    \"isError\": false\n  }\n}\n

"},{"location":"API_MCP/#wake-on-lan-tool","title":"Wake-on-LAN Tool","text":"

Tool Call:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"3\",\n  \"method\": \"tools/call\",\n  \"params\": {\n    \"name\": \"wol_wake_device\",\n    \"arguments\": {\n      \"devMac\": \"AA:BB:CC:DD:EE:FF\"\n    }\n  }\n}\n

"},{"location":"API_MCP/#integration-with-ai-assistants","title":"Integration with AI Assistants","text":""},{"location":"API_MCP/#claude-desktop-integration","title":"Claude Desktop Integration","text":"

Add to your Claude Desktop mcp.json configuration:

{\n  \"mcp\": {\n    \"servers\": {\n      \"netalertx\": {\n        \"command\": \"node\",\n        \"args\": [\"/path/to/mcp-client.js\"],\n        \"env\": {\n          \"NETALERTX_URL\": \"http://your-server:<GRAPHQL_PORT>\",\n          \"NETALERTX_TOKEN\": \"your-api-token\"\n        }\n      }\n    }\n  }\n}\n
"},{"location":"API_MCP/#generic-mcp-client","title":"Generic MCP Client","text":"
import asyncio\nimport json\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\n\nasync def main():\n    # Connect to NetAlertX MCP server\n    server_params = StdioServerParameters(\n        command=\"curl\",\n        args=[\n            \"-N\", \"-H\", \"Authorization: Bearer <API_TOKEN>\",\n            \"http://your-server:<GRAPHQL_PORT>/mcp/sse\"\n        ]\n    )\n\n    async with stdio_client(server_params) as (read, write):\n        async with ClientSession(read, write) as session:\n            # Initialize connection\n            await session.initialize()\n\n            # List available tools\n            tools = await session.list_tools()\n            print(f\"Available tools: {[t.name for t in tools.tools]}\")\n\n            # Call a tool\n            result = await session.call_tool(\"search_devices\", {\"query\": \"router\"})\n            print(f\"Search result: {result}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n
"},{"location":"API_MCP/#error-handling","title":"Error Handling","text":"

MCP tool calls return structured error information:

Error Response:

{\n  \"jsonrpc\": \"2.0\",\n  \"id\": \"1\",\n  \"result\": {\n    \"content\": [\n      {\n        \"type\": \"text\",\n        \"text\": \"Error calling tool: Device not found\"\n      }\n    ],\n    \"isError\": true\n  }\n}\n

Common Error Types: - 401/403 - Authentication failure - 400 - Invalid parameters or missing required fields - 404 - Resource not found (device, scan results, etc.) - 500 - Internal server error

"},{"location":"API_MCP/#notes","title":"Notes","text":""},{"location":"API_MCP/#related-documentation","title":"Related Documentation","text":""},{"location":"API_MESSAGING_IN_APP/","title":"In-app Notifications API","text":"

Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.

"},{"location":"API_MESSAGING_IN_APP/#write-notification","title":"Write Notification","text":"

Request Body:

{\n  \"content\": \"This is a test notification\",\n  \"level\": \"alert\"   // optional, [\"interrupt\",\"info\",\"alert\"]  default: \"alert\"\n}\n

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_MESSAGING_IN_APP/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"content\": \"This is a test notification\",\n    \"level\": \"alert\"\n  }'\n
"},{"location":"API_MESSAGING_IN_APP/#get-unread-notifications","title":"Get Unread Notifications","text":"

Response:

[\n  {\n    \"timestamp\": \"2025-10-10T12:34:56\",\n    \"guid\": \"f47ac10b-58cc-4372-a567-0e02b2c3d479\",\n    \"read\": 0,\n    \"level\": \"alert\",\n    \"content\": \"This is a test notification\"\n  }\n]\n
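A client might bucket the unread list by severity before displaying it; a sketch over the response shape above (levels as documented for the write endpoint):

```python
from collections import defaultdict

def group_by_level(notifications):
    """Group notification contents by their level field."""
    groups = defaultdict(list)
    for note in notifications:
        groups[note["level"]].append(note["content"])
    return dict(groups)

# Shape taken from the sample response above.
unread = [
    {
        "timestamp": "2025-10-10T12:34:56",
        "guid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
        "read": 0,
        "level": "alert",
        "content": "This is a test notification",
    }
]
print(group_by_level(unread))
```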
"},{"location":"API_MESSAGING_IN_APP/#curl-example_1","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-all-notifications-as-read","title":"Mark All Notifications as Read","text":"

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_MESSAGING_IN_APP/#curl-example_2","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#mark-single-notification-as-read","title":"Mark Single Notification as Read","text":"

Response (success):

{\n  \"success\": true\n}\n

Response (failure):

{\n  \"success\": false,\n  \"error\": \"Notification not found\"\n}\n
"},{"location":"API_MESSAGING_IN_APP/#curl-example_3","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-all-notifications","title":"Delete All Notifications","text":"

Response:

{\n  \"success\": true\n}\n
"},{"location":"API_MESSAGING_IN_APP/#curl-example_4","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_MESSAGING_IN_APP/#delete-single-notification","title":"Delete Single Notification","text":"

Response (success):

{\n  \"success\": true\n}\n

Response (failure):

{\n  \"success\": false,\n  \"error\": \"Notification not found\"\n}\n
"},{"location":"API_MESSAGING_IN_APP/#curl-example_5","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_METRICS/","title":"Metrics API Endpoint","text":"

The /metrics endpoint exposes Prometheus-compatible metrics for NetAlertX, including aggregate device counts and per-device status.

"},{"location":"API_METRICS/#endpoint-details","title":"Endpoint Details","text":""},{"location":"API_METRICS/#example-output","title":"Example Output","text":"
netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
"},{"location":"API_METRICS/#metrics-overview","title":"Metrics Overview","text":""},{"location":"API_METRICS/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"Metric Description netalertx_connected_devices Devices currently connected netalertx_offline_devices Devices currently offline netalertx_down_devices Down/unreachable devices netalertx_new_devices Recently detected devices netalertx_archived_devices Archived devices netalertx_favorite_devices User-marked favorites netalertx_my_devices Devices associated with the current user"},{"location":"API_METRICS/#2-per-device-status","title":"2. Per-Device Status","text":"

Metric: netalertx_device_status. Each device carries the following labels:

Metric value is always 1 (presence indicator).

"},{"location":"API_METRICS/#querying-with-curl","title":"Querying with curl","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n
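For quick checks outside Prometheus, the aggregate gauges are easy to parse by hand; a minimal sketch (it deliberately skips the labelled netalertx_device_status lines):

```python
import re

# Parse unlabelled gauge lines of the form "name value" from the
# /metrics output above; labelled lines (with {...}) do not match.
GAUGE_RE = re.compile(r"^(netalertx_\w+) (\d+)$")

def parse_gauges(text):
    gauges = {}
    for line in text.splitlines():
        m = GAUGE_RE.match(line)
        if m:
            gauges[m.group(1)] = int(m.group(2))
    return gauges

sample = "netalertx_connected_devices 31\nnetalertx_offline_devices 54"
print(parse_gauges(sample))
```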

Replace placeholders:

"},{"location":"API_METRICS/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_METRICS/#grafana-dashboard-template","title":"Grafana Dashboard Template","text":"

Sample template JSON: Download

"},{"location":"API_NETTOOLS/","title":"Net Tools API Endpoints","text":"

The Net Tools API provides network diagnostic utilities, including Wake-on-LAN, traceroute, speed testing, DNS resolution, nmap scanning, internet connection information, and network interface info.

All endpoints require authorization via Bearer token.

"},{"location":"API_NETTOOLS/#endpoints","title":"Endpoints","text":""},{"location":"API_NETTOOLS/#1-wake-on-lan","title":"1. Wake-on-LAN","text":"

Request Body (JSON):

{\n  \"devMac\": \"AA:BB:CC:DD:EE:FF\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"message\": \"WOL packet sent\",\n  \"output\": \"Sent magic packet to AA:BB:CC:DD:EE:FF\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#2-traceroute","title":"2. Traceroute","text":"

Request Body:

{\n  \"devLastIP\": \"192.168.1.1\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": \"traceroute output as string\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#3-speedtest","title":"3. Speedtest","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Ping: 15 ms\",\n    \"Download: 120.5 Mbit/s\",\n    \"Upload: 22.4 Mbit/s\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#4-dns-lookup-nslookup","title":"4. DNS Lookup (nslookup)","text":"

Request Body:

{\n  \"devLastIP\": \"8.8.8.8\"\n}\n

Response (success):

{\n  \"success\": true,\n  \"output\": [\n    \"Server: 8.8.8.8\",\n    \"Address: 8.8.8.8#53\",\n    \"Name: google-public-dns-a.google.com\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#5-nmap-scan","title":"5. Nmap Scan","text":"

Request Body:

{\n  \"scan\": \"192.168.1.0/24\",\n  \"mode\": \"fast\"\n}\n

Supported Modes:

Mode nmap Arguments fast -F normal default detail -A skipdiscovery -Pn
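The mode table maps directly onto nmap argument lists; a client-side sketch ("normal" adds no extra flag, per the table above):

```python
# Mode-to-argument mapping from the table above; "normal" runs nmap
# with its default arguments (no extra flag).
NMAP_MODES = {
    "fast": "-F",
    "normal": "",
    "detail": "-A",
    "skipdiscovery": "-Pn",
}

def nmap_args(mode):
    if mode not in NMAP_MODES:
        raise ValueError(f"unsupported mode: {mode}")
    flag = NMAP_MODES[mode]
    return ["nmap"] + ([flag] if flag else [])

print(nmap_args("fast"))
```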

Response (success):

{\n  \"success\": true,\n  \"mode\": \"fast\",\n  \"ip\": \"192.168.1.0/24\",\n  \"output\": [\n    \"Starting Nmap 7.91\",\n    \"Host 192.168.1.1 is up\",\n    \"... scan results ...\"\n  ]\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#6-internet-connection-info","title":"6. Internet Connection Info","text":"

Response (success):

{\n  \"success\": true,\n  \"output\": \"IP: 203.0.113.5 City: Sydney Country: AU Org: Example ISP\"\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#7-network-interfaces","title":"7. Network Interfaces","text":"

Response (success):

{\n  \"success\": true,\n  \"interfaces\": {\n    \"eth0\": {\n      \"name\": \"eth0\",\n      \"short\": \"eth0\",\n      \"type\": \"ethernet\",\n      \"state\": \"up\",\n      \"mtu\": 1500,\n      \"mac\": \"00:11:32:EF:A5:6B\",\n      \"ipv4\": [\"192.168.1.82/24\"],\n      \"ipv6\": [\"fe80::211:32ff:feef:a56c/64\"],\n      \"rx_bytes\": 18488221,\n      \"tx_bytes\": 1443944\n    },\n    \"lo\": {\n      \"name\": \"lo\",\n      \"short\": \"lo\",\n      \"type\": \"loopback\",\n      \"state\": \"up\",\n      \"mtu\": 65536,\n      \"mac\": null,\n      \"ipv4\": [\"127.0.0.1/8\"],\n      \"ipv6\": [\"::1/128\"],\n      \"rx_bytes\": 123456,\n      \"tx_bytes\": 123456\n    }\n  }\n}\n

Error Responses:

"},{"location":"API_NETTOOLS/#example-curl-requests","title":"Example curl Requests","text":"

Wake-on-LAN:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/wakeonlan\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devMac\":\"AA:BB:CC:DD:EE:FF\"}'\n

Traceroute:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/traceroute\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"192.168.1.1\"}'\n

Speedtest:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/speedtest\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Nslookup:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nslookup\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"devLastIP\":\"8.8.8.8\"}'\n

Nmap Scan:

curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/nettools/nmap\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Content-Type: application/json\" \\\n  --data '{\"scan\":\"192.168.1.0/24\",\"mode\":\"fast\"}'\n

Internet Info:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/internetinfo\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n

Network Interfaces:

curl \"http://<server_ip>:<GRAPHQL_PORT>/nettools/interfaces\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_NETTOOLS/#mcp-tools","title":"MCP Tools","text":"

Network tools are available as MCP Tools for AI assistant integration:

\ud83d\udcd6 See MCP Server Bridge API for AI integration details.

"},{"location":"API_OLD/","title":"[Deprecated] API endpoints","text":"

Warning

Some of these endpoints will be deprecated soon. Please refer to the new API endpoints docs for details on the new API layer.

NetAlertX exposes several API endpoints. All requests must be authorized: execute them in a logged-in browser session, or pass the value of the API_TOKEN setting as a bearer token, for example:

curl 'http://host:GRAPHQL_PORT/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer API_TOKEN' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#api-endpoint-graphql","title":"API Endpoint: GraphQL","text":""},{"location":"API_OLD/#example-query-to-fetch-devices","title":"Example Query to Fetch Devices","text":"

First, let's define the GraphQL query to fetch devices with pagination and sorting options.

query GetDevices($options: PageQueryOptionsInput) {\n  devices(options: $options) {\n    devices {\n      rowid\n      devMac\n      devName\n      devOwner\n      devType\n      devVendor\n      devLastConnection\n      devStatus\n    }\n    count\n  }\n}\n

See also: Debugging GraphQL issues

"},{"location":"API_OLD/#curl-command","title":"curl Command","text":"

You can use the following curl command to execute the query.

curl 'http://host:GRAPHQL_PORT/graphql'   -X POST   -H 'Authorization: Bearer API_TOKEN'  -H 'Content-Type: application/json'   --data '{\n    \"query\": \"query GetDevices($options: PageQueryOptionsInput) { devices(options: $options) { devices { rowid devMac devName devOwner devType devVendor devLastConnection devStatus } count } }\",\n    \"variables\": {\n      \"options\": {\n        \"page\": 1,\n        \"limit\": 10,\n        \"sort\": [{ \"field\": \"devName\", \"order\": \"asc\" }],\n        \"search\": \"\",\n        \"status\": \"connected\"\n      }\n    }\n  }'\n
"},{"location":"API_OLD/#explanation","title":"Explanation:","text":"
  1. GraphQL Query:
     - The query parameter contains the GraphQL query as a string.
     - The variables parameter contains the input variables for the query.
  2. Query Variables:
     - page: the page number of results to fetch.
     - limit: the number of results per page.
     - sort: the sorting options, with field being the field to sort by and order being the sort order (asc for ascending or desc for descending).
     - search: a search term to filter the devices.
     - status: the status filter to apply (valid values are my_devices (determined by the UI_MY_DEVICES setting), connected, favorites, new, down, archived, offline).
  3. curl Command:
     - The -X POST option specifies that we are making a POST request.
     - The -H "Content-Type: application/json" header sets the content type of the request to JSON.
     - The --data option provides the request payload, which includes the GraphQL query and variables.
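The payload described above can also be assembled programmatically. Below is a minimal sketch in Python; `build_devices_query` is a hypothetical helper, but the query string and variable names match the example shown earlier:

```python
import json

def build_devices_query(page=1, limit=10, sort_field="devName", order="asc",
                        search="", status="connected"):
    # Build the JSON payload expected by the /graphql endpoint.
    query = (
        "query GetDevices($options: PageQueryOptionsInput) { "
        "devices(options: $options) { devices { rowid devMac devName devOwner "
        "devType devVendor devLastConnection devStatus } count } }"
    )
    return {
        "query": query,
        "variables": {
            "options": {
                "page": page,
                "limit": limit,
                "sort": [{"field": sort_field, "order": order}],
                "search": search,
                "status": status,
            }
        },
    }

# Serialize before sending it in the POST body.
payload = build_devices_query(status="new")
print(json.dumps(payload)[:40])
```

The resulting dictionary can be passed as the JSON body of the POST request in place of the hand-written `--data` string.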
"},{"location":"API_OLD/#sample-response","title":"Sample Response","text":"

The response will be in JSON format, similar to the following:

{\n  \"data\": {\n    \"devices\": {\n      \"devices\": [\n        {\n          \"rowid\": 1,\n          \"devMac\": \"00:11:22:33:44:55\",\n          \"devName\": \"Device 1\",\n          \"devOwner\": \"Owner 1\",\n          \"devType\": \"Type 1\",\n          \"devVendor\": \"Vendor 1\",\n          \"devLastConnection\": \"2025-01-01T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        },\n        {\n          \"rowid\": 2,\n          \"devMac\": \"66:77:88:99:AA:BB\",\n          \"devName\": \"Device 2\",\n          \"devOwner\": \"Owner 2\",\n          \"devType\": \"Type 2\",\n          \"devVendor\": \"Vendor 2\",\n          \"devLastConnection\": \"2025-01-02T00:00:00Z\",\n          \"devStatus\": \"connected\"\n        }\n      ],\n      \"count\": 2\n    }\n  }\n}\n
"},{"location":"API_OLD/#api-endpoint-json-files","title":"API Endpoint: JSON files","text":"

This API endpoint retrieves static files that are updated periodically.

"},{"location":"API_OLD/#when-are-the-endpoints-updated","title":"When are the endpoints updated","text":"

The endpoint files are regenerated whenever the underlying objects they expose change.

"},{"location":"API_OLD/#location-of-the-endpoints","title":"Location of the endpoints","text":"

In the container, these files are located under the API directory (default: /tmp/api/, configurable via NETALERTX_API environment variable). You can access them via the /php/server/query_json.php?file=user_notifications.json endpoint.

"},{"location":"API_OLD/#available-endpoints","title":"Available endpoints","text":"

You can access the following files:

File name Description notification_json_final.json The json version of the last notification (e.g. used for webhooks - sample JSON). table_devices.json All of the available Devices detected by the app. table_plugins_events.json The list of the unprocessed (pending) notification events (plugins_events DB table). table_plugins_history.json The list of notification events history. table_plugins_objects.json The content of the plugins_objects table. Find more info on the Plugin system here language_strings.json The content of the language_strings table, which in turn is loaded from the plugins config.json definitions. table_custom_endpoint.json A custom endpoint generated by the SQL query specified by the API_CUSTOM_SQL setting. table_settings.json The content of the settings table. app_state.json Contains the current application state."},{"location":"API_OLD/#json-data-format","title":"JSON Data format","text":"

The endpoints starting with the table_ prefix contain most, if not all, of the data in the corresponding database table. The common format for those is:

{\n  \"data\": [\n        {\n          \"db_column_name\": \"data\",\n          \"db_column_name2\": \"data2\"\n        },\n        {\n          \"db_column_name\": \"data3\",\n          \"db_column_name2\": \"data4\"\n        }\n    ]\n}\n

Example JSON of the table_devices.json endpoint with two Devices (database rows):

{\n  \"data\": [\n        {\n          \"devMac\": \"Internet\",\n          \"devName\": \"Net - Huawei\",\n          \"devType\": \"Router\",\n          \"devVendor\": null,\n          \"devGroup\": \"Always on\",\n          \"devFirstConnection\": \"2021-01-01 00:00:00\",\n          \"devLastConnection\": \"2021-01-28 22:22:11\",\n          \"devLastIP\": \"192.168.1.24\",\n          \"devStaticIP\": 0,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2023-01-28 22:22:28.998715\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"\",\n          \"devParentPort\": \"\",\n          \"devIcon\": \"globe\"\n        },\n        {\n          \"devMac\": \"a4:8f:ff:aa:ba:1f\",\n          \"devName\": \"Net - USG\",\n          \"devType\": \"Firewall\",\n          \"devVendor\": \"Ubiquiti Inc\",\n          \"devGroup\": \"\",\n          \"devFirstConnection\": \"2021-02-12 22:05:00\",\n          \"devLastConnection\": \"2021-07-17 15:40:00\",\n          \"devLastIP\": \"192.168.1.1\",\n          \"devStaticIP\": 1,\n          \"devPresentLastScan\": 1,\n          \"devLastNotification\": \"2021-07-17 15:40:10.667717\",\n          \"devIsNew\": 0,\n          \"devParentMAC\": \"Internet\",\n          \"devParentPort\": 1,\n          \"devIcon\": \"shield-halved\"\n      }\n    ]\n}\n
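Once fetched (for example via the query_json.php endpoint), the data array can be processed like any JSON payload. A hedged sketch filtering devices that were present in the last scan, using a trimmed-down sample in the same shape as above:

```python
import json

# Trimmed-down sample in the table_devices.json shape shown above.
raw = json.loads("""
{"data": [
  {"devMac": "Internet", "devName": "Net - Huawei", "devPresentLastScan": 1},
  {"devMac": "a4:8f:ff:aa:ba:1f", "devName": "Net - USG", "devPresentLastScan": 0}
]}
""")

# Keep only devices flagged as seen in the most recent scan.
present = [d["devName"] for d in raw["data"] if d["devPresentLastScan"] == 1]
print(present)  # → ['Net - Huawei']
```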
"},{"location":"API_OLD/#api-endpoint-prometheus-exporter","title":"API Endpoint: Prometheus Exporter","text":""},{"location":"API_OLD/#example-output-of-the-metrics-endpoint","title":"Example Output of the /metrics Endpoint","text":"

Below is a representative snippet of the metrics returned by the /metrics endpoint. It includes both aggregate counters and per-device netalertx_device_status entries.

netalertx_connected_devices 31\nnetalertx_offline_devices 54\nnetalertx_down_devices 0\nnetalertx_new_devices 0\nnetalertx_archived_devices 31\nnetalertx_favorite_devices 2\nnetalertx_my_devices 54\n\nnetalertx_device_status{device=\"Net - Huawei\", mac=\"Internet\", ip=\"1111.111.111.111\", vendor=\"None\", first_connection=\"2021-01-01 00:00:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Router\", device_status=\"Online\"} 1\nnetalertx_device_status{device=\"Net - USG\", mac=\"74:ac:74:ac:74:ac\", ip=\"192.168.1.1\", vendor=\"Ubiquiti Networks Inc.\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-06-07 08:16:49\", dev_type=\"Firewall\", device_status=\"Archived\"} 1\nnetalertx_device_status{device=\"Raspberry Pi 4 LAN\", mac=\"74:ac:74:ac:74:74\", ip=\"192.168.1.9\", vendor=\"Raspberry Pi Trading Ltd\", first_connection=\"2022-02-12 22:05:00\", last_connection=\"2025-08-04 17:57:00\", dev_type=\"Singleboard Computer (SBC)\", device_status=\"Online\"} 1\n...\n
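The output above follows the standard Prometheus text exposition format, so it is easy to parse outside of Prometheus as well. A minimal, offline sketch that extracts the unlabeled aggregate counters from a sample of that output:

```python
# Sample in the Prometheus text format shown above (abridged).
sample = """netalertx_connected_devices 31
netalertx_offline_devices 54
netalertx_down_devices 0
netalertx_device_status{device="Net - USG"} 1
"""

counts = {}
for line in sample.splitlines():
    # Aggregate counters have no labels, so skip lines containing "{".
    if line.startswith("netalertx_") and "{" not in line:
        name, value = line.rsplit(" ", 1)
        counts[name] = int(value)

print(counts["netalertx_connected_devices"])  # → 31
```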
"},{"location":"API_OLD/#metrics-explanation","title":"Metrics Explanation","text":""},{"location":"API_OLD/#1-aggregate-device-counts","title":"1. Aggregate Device Counts","text":"

Metric names prefixed with netalertx_ provide aggregated counts by device status:

These numeric values give a high-level overview of device distribution.

"},{"location":"API_OLD/#2-perdevice-status-with-labels","title":"2. Per\u2011Device Status with Labels","text":"

Each individual device is represented by a netalertx_device_status metric, with descriptive labels:

The metric value is always 1 (indicating presence or active state) and the combination of labels identifies the device.

"},{"location":"API_OLD/#how-to-query-with-curl","title":"How to Query with curl","text":"

To fetch the metrics from the NetAlertX exporter:

curl 'http://<server_ip>:<GRAPHQL_PORT>/metrics' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: text/plain'\n

Replace:

"},{"location":"API_OLD/#summary","title":"Summary","text":""},{"location":"API_OLD/#prometheus-scraping-configuration","title":"Prometheus Scraping Configuration","text":"
scrape_configs:\n  - job_name: 'netalertx'\n    metrics_path: /metrics\n    scheme: http\n    scrape_interval: 60s\n    static_configs:\n      - targets: ['<server_ip>:<GRAPHQL_PORT>']\n    authorization:\n      type: Bearer\n      credentials: <API_TOKEN>\n
"},{"location":"API_OLD/#grafana-template","title":"Grafana template","text":"

Grafana template sample: Download json

"},{"location":"API_OLD/#api-endpoint-log-files","title":"API Endpoint: /log files","text":"

This API endpoint retrieves files from the /tmp/log folder.

File Description IP_changes.log Logs of IP address changes app.log Main application log app.php_errors.log PHP error log app_front.log Frontend application log app_nmap.log Logs of Nmap scan results db_is_locked.log Logs when the database is locked execution_queue.log Logs of execution queue activities plugins/ Directory for temporary plugin-related files (not accessible) report_output.html HTML report output report_output.json JSON format report output report_output.txt Text format report output stderr.log Logs of standard error output stdout.log Logs of standard output"},{"location":"API_OLD/#api-endpoint-config-files","title":"API Endpoint: /config files","text":"

To retrieve files from the /data/config folder.

File Description devices.csv Devices csv file app.conf Application config file"},{"location":"API_ONLINEHISTORY/","title":"Online History API Endpoints","text":"

Manage the online history records of devices. Currently, the API supports deletion of all history entries. All endpoints require authorization.

"},{"location":"API_ONLINEHISTORY/#1-delete-online-history","title":"1. Delete Online History","text":"

Response (success):

{\n  \"success\": true,\n  \"message\": \"Deleted online history\"\n}\n

Error Responses:

"},{"location":"API_ONLINEHISTORY/#example-curl-request","title":"Example curl Request","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/history\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\"\n
"},{"location":"API_SESSIONS/","title":"Sessions API Endpoints","text":"

Track and manage device connection sessions. Sessions record when a device connects or disconnects on the network.

"},{"location":"API_SESSIONS/#create-a-session","title":"Create a Session","text":"

Request Body:

{\n  \"mac\": \"AA:BB:CC:DD:EE:FF\",\n  \"ip\": \"192.168.1.10\",\n  \"start_time\": \"2025-08-01T10:00:00\",\n  \"end_time\": \"2025-08-01T12:00:00\",      // optional\n  \"event_type_conn\": \"Connected\",         // optional, default \"Connected\"\n  \"event_type_disc\": \"Disconnected\"       // optional, default \"Disconnected\"\n}\n
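Since several fields are optional with documented defaults, a small helper can build the request body. This is a sketch (`build_session_payload` is a hypothetical name); the keys and defaults mirror the schema above:

```python
import json

def build_session_payload(mac, ip, start_time, end_time=None,
                          event_type_conn="Connected",
                          event_type_disc="Disconnected"):
    # Only include end_time when supplied; other optionals use their defaults.
    payload = {
        "mac": mac,
        "ip": ip,
        "start_time": start_time,
        "event_type_conn": event_type_conn,
        "event_type_disc": event_type_disc,
    }
    if end_time is not None:
        payload["end_time"] = end_time
    return payload

print(json.dumps(build_session_payload(
    "AA:BB:CC:DD:EE:FF", "192.168.1.10", "2025-08-01T10:00:00")))
```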

Response:

{\n  \"success\": true,\n  \"message\": \"Session created for MAC AA:BB:CC:DD:EE:FF\"\n}\n
"},{"location":"API_SESSIONS/#curl-example","title":"curl Example","text":"
curl -X POST \"http://<server_ip>:<GRAPHQL_PORT>/sessions/create\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\",\n    \"ip\": \"192.168.1.10\",\n    \"start_time\": \"2025-08-01T10:00:00\",\n    \"end_time\": \"2025-08-01T12:00:00\",\n    \"event_type_conn\": \"Connected\",\n    \"event_type_disc\": \"Disconnected\"\n  }'\n
"},{"location":"API_SESSIONS/#delete-sessions","title":"Delete Sessions","text":"

Request Body:

{\n  \"mac\": \"AA:BB:CC:DD:EE:FF\"\n}\n

Response:

{\n  \"success\": true,\n  \"message\": \"Deleted sessions for MAC AA:BB:CC:DD:EE:FF\"\n}\n
"},{"location":"API_SESSIONS/#curl-example_1","title":"curl Example","text":"
curl -X DELETE \"http://<server_ip>:<GRAPHQL_PORT>/sessions/delete\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\" \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"mac\": \"AA:BB:CC:DD:EE:FF\"\n  }'\n
"},{"location":"API_SESSIONS/#list-sessions","title":"List Sessions","text":"

Query Parameters:

Example:

/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21\n

Response:

{\n  \"success\": true,\n  \"sessions\": [\n    {\n      \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"ses_Connection\": \"2025-08-01 10:00\",\n      \"ses_Disconnection\": \"2025-08-01 12:00\",\n      \"ses_Duration\": \"2h 0m\",\n      \"ses_IP\": \"192.168.1.10\",\n      \"ses_Info\": \"\"\n    }\n  ]\n}\n
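The ses_Duration field is also derivable from the timestamps, which is useful when aggregating sessions client-side. A sketch computing it from the timestamp format shown in the response:

```python
from datetime import datetime

session = {
    "ses_Connection": "2025-08-01 10:00",
    "ses_Disconnection": "2025-08-01 12:00",
}

fmt = "%Y-%m-%d %H:%M"
delta = (datetime.strptime(session["ses_Disconnection"], fmt)
         - datetime.strptime(session["ses_Connection"], fmt))
hours, minutes = divmod(int(delta.total_seconds()) // 60, 60)
print(f"{hours}h {minutes}m")  # → 2h 0m
```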
"},{"location":"API_SESSIONS/#curl-example_2","title":"curl Example","text":"

Get sessions for a given MAC address:

curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/list?mac=AA:BB:CC:DD:EE:FF&start_date=2025-08-01&end_date=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#calendar-view-of-sessions","title":"Calendar View of Sessions","text":"

Query Parameters:

Example:

/sessions/calendar?start=2025-08-01&end=2025-08-21\n

Response:

{\n  \"success\": true,\n  \"sessions\": [\n    {\n      \"resourceId\": \"AA:BB:CC:DD:EE:FF\",\n      \"title\": \"\",\n      \"start\": \"2025-08-01T10:00:00\",\n      \"end\": \"2025-08-01T12:00:00\",\n      \"color\": \"#00a659\",\n      \"tooltip\": \"Connection: 2025-08-01 10:00\\nDisconnection: 2025-08-01 12:00\\nIP: 192.168.1.10\",\n      \"className\": \"no-border\"\n    }\n  ]\n}\n
"},{"location":"API_SESSIONS/#curl-example_3","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/calendar?start=2025-08-01&end=2025-08-21\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#device-sessions","title":"Device Sessions","text":"

Query Parameters:

Example:

/sessions/AA:BB:CC:DD:EE:FF?period=7 days\n

Response:

{\n  \"success\": true,\n  \"sessions\": [\n    {\n      \"ses_MAC\": \"AA:BB:CC:DD:EE:FF\",\n      \"ses_Connection\": \"2025-08-01 10:00\",\n      \"ses_Disconnection\": \"2025-08-01 12:00\",\n      \"ses_Duration\": \"2h 0m\",\n      \"ses_IP\": \"192.168.1.10\",\n      \"ses_Info\": \"\"\n    }\n  ]\n}\n
"},{"location":"API_SESSIONS/#curl-example_4","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/AA:BB:CC:DD:EE:FF?period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SESSIONS/#session-events-summary","title":"Session Events Summary","text":"

Query Parameters:

Example:

/sessions/session-events?type=all&period=7 days\n

Response: Returns a list of events or sessions with formatted connection, disconnection, duration, and IP information.

"},{"location":"API_SESSIONS/#curl-example_5","title":"curl Example","text":"
curl -X GET \"http://<server_ip>:<GRAPHQL_PORT>/sessions/session-events?type=all&period=7%20days\" \\\n  -H \"Authorization: Bearer <API_TOKEN>\" \\\n  -H \"Accept: application/json\"\n
"},{"location":"API_SETTINGS/","title":"Settings API Endpoints","text":"

Retrieve application settings stored in the configuration system. This endpoint is useful for quickly fetching individual settings such as API_TOKEN or TIMEZONE.

For bulk or structured access (all settings, schema details, or filtering), use the GraphQL API Endpoint.

"},{"location":"API_SETTINGS/#get-a-setting","title":"Get a Setting","text":"

Path Parameter:

Authorization: Requires a valid API token in the Authorization header.

"},{"location":"API_SETTINGS/#curl-example-success","title":"curl Example (Success)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": \"my-secret-token\"\n}\n
"},{"location":"API_SETTINGS/#curl-example-invalid-key","title":"curl Example (Invalid Key)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/DOES_NOT_EXIST' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"success\": true,\n  \"value\": null\n}\n
"},{"location":"API_SETTINGS/#curl-example-unauthorized","title":"curl Example (Unauthorized)","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/settings/API_TOKEN' \\\n  -H 'Accept: application/json'\n

Response:

{\n  \"error\": \"Forbidden\"\n}\n
"},{"location":"API_SETTINGS/#notes","title":"Notes","text":"
curl 'http://<server_ip>:<GRAPHQL_PORT>/graphql' \\\n  -X POST \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -H 'Content-Type: application/json' \\\n  --data '{\n    \"query\": \"query GetSettings { settings { settings { setKey setName setDescription setType setOptions setGroup setValue setEvents setOverriddenByEnv } count } }\"\n  }'\n

See the GraphQL API Endpoint for more details.

"},{"location":"API_SSE/","title":"SSE (Server-Sent Events)","text":"

Real-time app state updates via Server-Sent Events, reducing server load by roughly 95% compared to polling.

"},{"location":"API_SSE/#endpoints","title":"Endpoints","text":"Endpoint Method Purpose /sse/state GET Stream state updates (requires Bearer token) /sse/stats GET Debug: connected clients, queued events"},{"location":"API_SSE/#usage","title":"Usage","text":""},{"location":"API_SSE/#connect-to-sse-stream","title":"Connect to SSE Stream","text":"
curl -H \"Authorization: Bearer YOUR_API_TOKEN\" \\\n  http://localhost:5000/sse/state\n
"},{"location":"API_SSE/#check-connection-stats","title":"Check Connection Stats","text":"
curl -H \"Authorization: Bearer YOUR_API_TOKEN\" \\\n  http://localhost:5000/sse/stats\n
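SSE is a plain-text protocol: each event arrives as one or more `data:` lines followed by a blank line. A minimal, offline sketch of parsing such a stream chunk in Python (the sample payload is illustrative, not NetAlertX's exact event schema):

```python
import json

# Two events as they would appear on the wire.
chunk = (
    'data: {"current_state": "Scanning"}\n'
    '\n'
    'data: {"current_state": "Idle"}\n'
    '\n'
)

events = []
# Events are separated by a blank line; payload follows the "data: " prefix.
for block in chunk.strip().split("\n\n"):
    for line in block.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))

print([e["current_state"] for e in events])  # → ['Scanning', 'Idle']
```

In the browser this framing is handled automatically by EventSource; a non-browser consumer reading the curl stream would apply logic like the above.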
"},{"location":"API_SSE/#event-types","title":"Event Types","text":""},{"location":"API_SSE/#backend-integration","title":"Backend Integration","text":"

Broadcasts automatically triggered in app_state.py via broadcast_state_update():

from api_server.sse_broadcast import broadcast_state_update\n\n# Called on every state change - no additional code needed\nbroadcast_state_update(current_state=\"Scanning\", settings_imported=time.time())\n
"},{"location":"API_SSE/#frontend-integration","title":"Frontend Integration","text":"

Auto-enabled via sse_manager.js:

// In browser console:\nnetAlertXStateManager.getStats().then(stats => {\n  console.log(\"Connected clients:\", stats.connected_clients);\n});\n
"},{"location":"API_SSE/#fallback-behavior","title":"Fallback Behavior","text":""},{"location":"API_SSE/#files","title":"Files","text":"File Purpose server/api_server/sse_endpoint.py SSE endpoints & event queue server/api_server/sse_broadcast.py Broadcast helper functions front/js/sse_manager.js Client-side SSE connection manager"},{"location":"API_SSE/#troubleshooting","title":"Troubleshooting","text":"Issue Solution Connection refused Check backend running, API token correct No events received Verify broadcast_state_update() is called on state changes High memory Events not processed fast enough, check client logs Using polling instead of SSE Normal fallback - check browser console for errors"},{"location":"API_SYNC/","title":"Sync API Endpoint","text":"

The /sync endpoint is used by the SYNC plugin to synchronize data between multiple NetAlertX instances (e.g., from a node to a hub). It supports both GET and POST requests.

"},{"location":"API_SYNC/#91-get-sync","title":"9.1 GET /sync","text":"

Fetches data from a node to the hub. The data is returned as a base64-encoded JSON file.

Example Request:

curl 'http://<server>:<GRAPHQL_PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>'\n

Response Example:

{\n  \"node_name\": \"NODE-01\",\n  \"status\": 200,\n  \"message\": \"OK\",\n  \"data_base64\": \"eyJkZXZpY2VzIjogW3siZGV2TWFjIjogIjAwOjExOjIyOjMzOjQ0OjU1IiwiZGV2TmFtZSI6ICJEZXZpY2UgMSJ9XSwgImNvdW50Ijog1fQ==\",\n  \"timestamp\": \"2025-08-24T10:15:00+10:00\"\n}\n
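The data_base64 field is standard Base64 over a JSON document. The sketch below demonstrates the round trip with a self-generated payload (the sample token in the response above is truncated, so it is not decoded here):

```python
import base64
import json

# Self-generated example payload; encode then decode to show the round trip.
payload = {"devices": [{"devMac": "00:11:22:33:44:55", "devName": "Device 1"}]}
encoded = base64.b64encode(json.dumps(payload).encode()).decode()

# A hub would apply this step to the data_base64 field of the response.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["devices"][0]["devName"])  # → Device 1
```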

Notes:

"},{"location":"API_SYNC/#92-post-sync","title":"9.2 POST /sync","text":"

The POST endpoint is used by nodes to send data to the hub. The hub expects the data as form-encoded fields (application/x-www-form-urlencoded or multipart/form-data) and stores the payload in the plugin log folder for processing.

"},{"location":"API_SYNC/#required-fields","title":"Required Fields","text":"Field Type Description data string The payload from the plugin or devices. Typically plain text, JSON, or encrypted Base64 data. In your Python script, encrypt_data() is applied before sending. node_name string The name of the node sending the data. Matches the node\u2019s SYNC_node_name setting. Used to generate the filename on the hub. plugin string The name of the plugin sending the data. Determines the filename prefix (last_result.<plugin>...). file_path string (optional) Path of the local file being sent. Used only for logging/debugging purposes on the hub; not required for processing."},{"location":"API_SYNC/#how-the-hub-processes-the-post-data","title":"How the Hub Processes the POST Data","text":"
  1. Receives the data and validates the API token.
  2. Stores the raw payload in:
INSTALL_PATH/log/plugins/last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
processed_last_result.<plugin>.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#example-post-payload","title":"Example POST Payload","text":"

If a node is sending device data:

curl -X POST 'http://<hub>:<PORT>/sync' \\\n  -H 'Authorization: Bearer <API_TOKEN>' \\\n  -F 'data={\"data\":[{\"devMac\":\"00:11:22:33:44:55\",\"devName\":\"Device 1\",\"devVendor\":\"Vendor A\",\"devLastIP\":\"192.168.1.10\"}]}' \\\n  -F 'node_name=NODE-01' \\\n  -F 'plugin=SYNC'\n
"},{"location":"API_SYNC/#key-notes","title":"Key Notes","text":"

Storage Details:

last_result.<plugin>.encoded.<node_name>.<sequence>.log\n
"},{"location":"API_SYNC/#93-notes-and-best-practices","title":"9.3 Notes and Best Practices","text":""},{"location":"API_TESTS/","title":"Tests","text":""},{"location":"API_TESTS/#unit-tests","title":"Unit Tests","text":"

Warning

Please note that these tests modify data in the database.

  1. See the /test directory for available test cases. These are not exhaustive but cover the main API endpoints.
  2. To run a test case, SSH into the container: sudo docker exec -it netalertx /bin/bash
  3. Inside the container, install pytest (if not already installed): pip install pytest
  4. Run a specific test case: pytest /app/test/TESTFILE.py
"},{"location":"AUTHELIA/","title":"Authelia","text":""},{"location":"AUTHELIA/#authelia-support","title":"Authelia support","text":"

Note

This is community-contributed. Due to environment, setup, or networking differences, results may vary. Please open a PR to improve it instead of creating an issue, as the maintainer is not actively maintaining it.

theme: dark\n\ndefault_2fa_method: \"totp\"\n\nserver:\n  address: 0.0.0.0:9091\n  endpoints:\n    enable_expvars: false\n    enable_pprof: false\n    authz:\n      forward-auth:\n        implementation: 'ForwardAuth'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      ext-authz:\n        implementation: 'ExtAuthz'\n        authn_strategies:\n          - name: 'HeaderAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      auth-request:\n        implementation: 'AuthRequest'\n        authn_strategies:\n          - name: 'HeaderAuthRequestProxyAuthorization'\n            schemes:\n              - 'Basic'\n          - name: 'CookieSession'\n      legacy:\n        implementation: 'Legacy'\n        authn_strategies:\n          - name: 'HeaderLegacy'\n          - name: 'CookieSession'\n  disable_healthcheck: false\n  tls:\n    key: \"\"\n    certificate: \"\"\n    client_certificates: []\n  headers:\n    csp_template: \"\"\n\nlog:\n  ## Level of verbosity for logs: info, debug, trace.\n  level: info\n\n###############################################################\n# The most important section\n###############################################################\naccess_control:\n  ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'.\n  default_policy: deny\n  networks:\n    - name: internal\n      networks:\n        - '192.168.0.0/18'\n        - '10.10.10.0/8' # Zerotier\n    - name: private\n      networks:\n        - '172.16.0.0/12'\n  rules:\n    - networks:\n        - private\n      domain:\n        - '*'\n      policy: bypass\n    - networks:\n        - internal\n      domain:\n        - '*'\n      policy: bypass\n    - domain:\n        # exclude itself from auth, should not happen as we use Traefik middleware on a case-by-case screnario\n        - 'auth.MYDOMAIN1.TLD'\n        - 
'authelia.MYDOMAIN1.TLD'\n        - 'auth.MYDOMAIN2.TLD'\n        - 'authelia.MYDOMAIN2.TLD'\n      policy: bypass\n    - domain:\n        #All subdomains match\n        - 'MYDOMAIN1.TLD'\n        - '*.MYDOMAIN1.TLD'\n      policy: two_factor\n    - domain:\n        # This will not work yet as Authelio does not support multi-domain authentication\n        - 'MYDOMAIN2.TLD'\n        - '*.MYDOMAIN2.TLD'\n      policy: two_factor\n\n\n############################################################\nidentity_validation:\n  reset_password:\n    jwt_secret: \"[REDACTED]\"\n\nidentity_providers:\n  oidc:\n    enable_client_debug_messages: true\n    enforce_pkce: public_clients_only\n    hmac_secret: [REDACTED]\n    lifespans:\n      authorize_code: 1m\n      id_token: 1h\n      refresh_token: 90m\n      access_token: 1h\n    cors:\n      endpoints:\n        - authorization\n        - token\n        - revocation\n        - introspection\n        - userinfo\n      allowed_origins:\n        - \"*\"\n      allowed_origins_from_client_redirect_uris: false\n    jwks:\n      - key: [REDACTED]\n        certificate_chain:\n    clients:\n      - client_id: portainer\n        client_name: Portainer\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_post'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://portainer.MYDOMAIN1.LTD\n        userinfo_signed_response_alg: none\n\n      - client_id: openproject\n        client_name: 
OpenProject\n        # generate secret with \"authelia crypto hash generate pbkdf2 --random --random.length 32 --random.charset alphanumeric\"\n        # Random Password: [REDACTED]\n        # Digest: [REDACTED]\n        client_secret: [REDACTED]\n        token_endpoint_auth_method: 'client_secret_basic'\n        public: false\n        authorization_policy: two_factor\n        consent_mode: pre-configured #explicit\n        pre_configured_consent_duration: '6M' #Must be re-authorised every 6 Months\n        scopes:\n          - openid\n          #- groups #Currently not supported in Authelia V\n          - email\n          - profile\n        redirect_uris:\n          - https://op.MYDOMAIN.TLD\n        #grant_types:\n        #  - refresh_token\n        #  - authorization_code\n        #response_types:\n        #  - code\n        #response_modes:\n        #  - form_post\n        #  - query\n        #  - fragment\n        userinfo_signed_response_alg: none\n##################################################################\n\n\ntelemetry:\n  metrics:\n    enabled: false\n    address: tcp://0.0.0.0:9959\n\ntotp:\n  disable: false\n  issuer: authelia.com\n  algorithm: sha1\n  digits: 6\n  period: 30 ## The period in seconds a one-time password is valid for.\n  skew: 1\n  secret_size: 32\n\nwebauthn:\n  disable: false\n  timeout: 60s ## Adjust the interaction timeout for Webauthn dialogues.\n  display_name: Authelia\n  attestation_conveyance_preference: indirect\n  user_verification: preferred\n\nntp:\n  address: \"pool.ntp.org\"\n  version: 4\n  max_desync: 5s\n  disable_startup_check: false\n  disable_failure: false\n\nauthentication_backend:\n  password_reset:\n    disable: false\n    custom_url: \"\"\n  refresh_interval: 5m\n  file:\n    path: /config/users_database.yml\n    watch: true\n    password:\n      algorithm: argon2\n      argon2:\n        variant: argon2id\n        iterations: 3\n        memory: 65536\n        parallelism: 4\n        key_length: 32\n       
 salt_length: 16\n\npassword_policy:\n  standard:\n    enabled: false\n    min_length: 8\n    max_length: 0\n    require_uppercase: true\n    require_lowercase: true\n    require_number: true\n    require_special: true\n  ## zxcvbn is a well known and used password strength algorithm. It does not have tunable settings.\n  zxcvbn:\n    enabled: false\n    min_score: 3\n\nregulation:\n  max_retries: 3\n  find_time: 2m\n  ban_time: 5m\n\nsession:\n  name: authelia_session\n  secret: [REDACTED]\n  expiration: 60m\n  inactivity: 15m\n  cookies:\n    - domain: 'MYDOMAIN1.LTD'\n      authelia_url: 'https://auth.MYDOMAIN1.LTD'\n      name: 'authelia_session'\n      default_redirection_url: 'https://MYDOMAIN1.LTD'\n    - domain: 'MYDOMAIN2.LTD'\n      authelia_url: 'https://auth.MYDOMAIN2.LTD'\n      name: 'authelia_session_other'\n      default_redirection_url: 'https://MYDOMAIN2.LTD'\n\nstorage:\n  encryption_key: [REDACTED]\n  local:\n    path: /config/db.sqlite3\n\nnotifier:\n  disable_startup_check: true\n  smtp:\n    address: MYOTHERDOMAIN.LTD:465\n    timeout: 5s\n    username: \"USER@DOMAIN\"\n    password: \"[REDACTED]\"\n    sender: \"Authelia <postmaster@MYOTHERDOMAIN.LTD>\"\n    identifier: NAME@MYOTHERDOMAIN.LTD\n    subject: \"[Authelia] {title}\"\n    startup_check_address: postmaster@MYOTHERDOMAIN.LTD\n
"},{"location":"BACKUPS/","title":"Backing Things Up","text":"

Note

To back up 99% of your configuration, back up at least the /data/config folder. Database definitions can change between releases, so the safest method is to restore backups using the same app version they were taken from, then upgrade incrementally by following the Migration documentation.

"},{"location":"BACKUPS/#what-to-back-up","title":"What to Back Up","text":"

There are four key artifacts you can use to back up your NetAlertX configuration:

File Description Limitations /db/app.db The application database Might be in an uncommitted state or corrupted /config/app.conf Configuration file Can be overridden using the APP_CONF_OVERRIDE variable /config/devices.csv CSV file containing device data Does not include historical data /config/workflows.json JSON file containing your workflows N/A"},{"location":"BACKUPS/#where-the-data-lives","title":"Where the Data Lives","text":"

Understanding where your data is stored helps you plan your backup strategy.

"},{"location":"BACKUPS/#core-configuration","title":"Core Configuration","text":"

Stored in /data/config/app.conf. This includes settings for:

(See Settings System for details.)

"},{"location":"BACKUPS/#device-data","title":"Device Data","text":"

Stored in /data/config/devices_<timestamp>.csv or /data/config/devices.csv, created by the CSV Backup CSVBCKP Plugin. Contains:

"},{"location":"BACKUPS/#historical-data","title":"Historical Data","text":"

Stored in /data/db/app.db (see Database Overview). Contains:

"},{"location":"BACKUPS/#backup-strategies","title":"Backup Strategies","text":"

The safest approach is to back up both the /db and /config folders regularly. Tools like Kopia make this simple and efficient.

If you can only keep a few files, prioritize:

  1. The latest devices_<timestamp>.csv or devices.csv
  2. app.conf
  3. workflows.json

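A minimal shell sketch of copying those priority files into a backup folder (both paths are placeholders; point CONFIG_DIR at wherever your /data/config folder is mapped):

```shell
# Both paths are assumptions -- adjust them to your setup.
CONFIG_DIR="${CONFIG_DIR:-./config}"
BACKUP_DIR="${BACKUP_DIR:-./netalertx-backup}"

mkdir -p "$BACKUP_DIR"

# Copy the three highest-priority files; skip any that don't exist yet.
for f in "$CONFIG_DIR"/devices*.csv "$CONFIG_DIR/app.conf" "$CONFIG_DIR/workflows.json"; do
  if [ -e "$f" ]; then
    cp "$f" "$BACKUP_DIR/"
  fi
done
```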
You can also download the app.conf and devices.csv files from the Maintenance section:

"},{"location":"BACKUPS/#scenario-1-full-backup-and-restore","title":"Scenario 1: Full Backup and Restore","text":"

Goal: Full recovery of your configuration and data.

"},{"location":"BACKUPS/#what-to-back-up_1","title":"\ud83d\udcbe What to Back Up","text":""},{"location":"BACKUPS/#how-to-restore","title":"\ud83d\udce5 How to Restore","text":"

Map these files into your container as described in the Setup documentation.

"},{"location":"BACKUPS/#scenario-2-corrupted-database","title":"Scenario 2: Corrupted Database","text":"

Goal: Recover configuration and device data when the database is lost or corrupted.

"},{"location":"BACKUPS/#what-to-back-up_2","title":"\ud83d\udcbe What to Back Up","text":""},{"location":"BACKUPS/#how-to-restore_1","title":"\ud83d\udce5 How to Restore","text":"
  1. Copy app.conf and workflows.json into /data/config/
  2. Rename and place devices_<timestamp>.csv \u2192 /data/config/devices.csv
  3. Restore via the Maintenance section under Devices \u2192 Bulk Editing

This recovers nearly all configuration, workflows, and device metadata.

"},{"location":"BACKUPS/#docker-based-backup-and-restore","title":"Docker-Based Backup and Restore","text":"

For users running NetAlertX via Docker, you can back up or restore directly from your host system \u2014 a convenient and scriptable option.

"},{"location":"BACKUPS/#full-backup-file-level","title":"Full Backup (File-Level)","text":"
  1. Stop the container:
docker stop netalertx\n
  2. Create a compressed archive of your configuration and database volumes:
docker run --rm -v local_path/config:/config -v local_path/db:/db alpine tar -cz /config /db > netalertx-backup.tar.gz\n
  3. Restart the container:
docker start netalertx\n
"},{"location":"BACKUPS/#restore-from-backup","title":"Restore from Backup","text":"
  1. Stop the container:
docker stop netalertx\n
  2. Restore from your backup file:
docker run --rm -i -v local_path/config:/config -v local_path/db:/db alpine tar -C / -xz < netalertx-backup.tar.gz\n
  3. Restart the container:
docker start netalertx\n

This approach uses a temporary, minimal alpine container to access Docker-managed volumes. The tar command creates or extracts an archive directly from your host\u2019s filesystem, making it fast, clean, and reliable for both automation and manual recovery.

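Before relying on the archive, it is worth listing its contents to confirm both folders were captured (file name as created in the backup step above):

```shell
tar -tzf netalertx-backup.tar.gz | head -n 20
```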
"},{"location":"BACKUPS/#summary","title":"Summary","text":""},{"location":"BUILDS/","title":"NetAlertX Builds: Choose Your Path","text":"

NetAlertX provides different installation methods for different needs. This guide helps you choose the right path for security, experimentation, or development.

"},{"location":"BUILDS/#1-hardened-appliance-default-production","title":"1. Hardened Appliance (Default Production)","text":"

Note

Use this image if: You want to use NetAlertX securely.

"},{"location":"BUILDS/#who-is-this-for","title":"Who is this for?","text":"

All users who want a stable, secure, \"set-it-and-forget-it\" appliance.

"},{"location":"BUILDS/#methodology","title":"Methodology","text":""},{"location":"BUILDS/#source","title":"Source","text":"

Dockerfile (hardened target)

"},{"location":"BUILDS/#2-tinkerers-image-insecure-vm-style","title":"2. \"Tinkerer's\" Image (Insecure VM-Style)","text":"

Note

Use this image if: You want to experiment with NetAlertX.

"},{"location":"BUILDS/#who-is-this-for_1","title":"Who is this for?","text":"

Power users, developers, and \"tinkerers\" wanting a familiar \"VM-like\" experience.

"},{"location":"BUILDS/#methodology_1","title":"Methodology","text":""},{"location":"BUILDS/#source_1","title":"Source","text":"

Dockerfile.debian

"},{"location":"BUILDS/#3-contributors-devcontainer-project-developers","title":"3. Contributor's Devcontainer (Project Developers)","text":"

Note

Use this image if: You want to develop NetAlertX itself.

"},{"location":"BUILDS/#who-is-this-for_2","title":"Who is this for?","text":"

Project contributors who are actively writing and debugging code for NetAlertX.

"},{"location":"BUILDS/#methodology_2","title":"Methodology","text":""},{"location":"BUILDS/#source_2","title":"Source","text":"

Dockerfile (devcontainer target)

"},{"location":"BUILDS/#visualizing-the-trade-offs","title":"Visualizing the Trade-Offs","text":"

This chart compares the three builds across key attributes. A higher score means \"more of\" that attribute. Notice the clear trade-offs between security and development features.

"},{"location":"BUILDS/#build-process-origins","title":"Build Process & Origins","text":"

The final images originate from two different files and build paths. The main Dockerfile uses stages to create both the hardened and development container images.

"},{"location":"BUILDS/#official-build-path","title":"Official Build Path","text":"

Dockerfile -> builder (Stage 1) -> runner (Stage 2) -> hardened (Final Stage) (Production Image) + devcontainer (Final Stage) (Developer Image)

"},{"location":"BUILDS/#legacy-build-path","title":"Legacy Build Path","text":"

Dockerfile.debian -> \"Tinkerer's\" Image (Insecure VM-Style Image)

"},{"location":"COMMON_ISSUES/","title":"Troubleshooting Common Issues","text":"

Tip

Before troubleshooting, ensure you have set the correct Debugging and LOG_LEVEL.

"},{"location":"COMMON_ISSUES/#docker-container-doesnt-start","title":"Docker Container Doesn't Start","text":"

Initial setup issues are often caused by missing permissions or incorrectly mapped volumes. Always double-check your docker run or docker-compose.yml against the official setup guide before proceeding.

"},{"location":"COMMON_ISSUES/#permissions","title":"Permissions","text":"

Make sure your file permissions are correctly set:

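For Docker installs, a common fix is handing the mapped data folder to the container user. The uid/gid 20211 below is an assumption that matches the tmpfs options used in the Debug Tips run example; adjust it to your image:

```shell
sudo chown -R 20211:20211 /local_data_dir   # assumption: container runs as uid/gid 20211
sudo chmod -R u+rwX /local_data_dir
```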
"},{"location":"COMMON_ISSUES/#container-restarts-crashes","title":"Container Restarts / Crashes","text":"
docker run --rm -it <your_image>\n
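If the container has already crashed, the final log lines usually contain the fatal error. A quick way to capture them (container name assumed to be netalertx):

```shell
docker logs --tail 100 netalertx
```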
"},{"location":"COMMON_ISSUES/#docker-container-starts-but-the-application-misbehaves","title":"Docker Container Starts, But the Application Misbehaves","text":"

If the container starts but the app shows unexpected behavior, the cause is often data corruption, incorrect configuration, or unexpected input data.

"},{"location":"COMMON_ISSUES/#continuous-loading-screen","title":"Continuous \"Loading...\" Screen","text":"

A misconfigured application may display a persistent Loading... dialog. This is usually caused by the backend failing to start.

Steps to troubleshoot:

  1. Check Maintenance \u2192 Logs for exceptions.
  2. If no exception is visible, check the Portainer logs.
  3. Start the container in the foreground to observe exceptions.
  4. Enable trace or debug logging for detailed output (see Debug Tips).
  5. Verify that GRAPHQL_PORT is correctly configured.
  6. Check browser logs (press F12):
     - Console tab \u2192 refresh the page
     - Network tab \u2192 refresh the page

If you are unsure how to resolve errors, provide screenshots or log excerpts in your issue report or Discord discussion.

"},{"location":"COMMON_ISSUES/#common-configuration-issues","title":"Common Configuration Issues","text":""},{"location":"COMMON_ISSUES/#incorrect-scan_subnets","title":"Incorrect SCAN_SUBNETS","text":"

If SCAN_SUBNETS is misconfigured, you may see only a few devices in your device list after a scan. See the Subnets Documentation for proper configuration.

"},{"location":"COMMON_ISSUES/#duplicate-devices-and-notifications","title":"Duplicate Devices and Notifications","text":""},{"location":"COMMON_ISSUES/#unable-to-resolve-host","title":"Unable to Resolve Host","text":""},{"location":"COMMON_ISSUES/#invalid-json-errors","title":"Invalid JSON Errors","text":""},{"location":"COMMON_ISSUES/#sudo-execution-fails-eg-on-arpscan-on-raspberry-pi-4","title":"Sudo Execution Fails (e.g., on arpscan on Raspberry Pi 4)","text":"

Error:

sudo: unexpected child termination condition: 0\n

Resolution:

wget ftp.us.debian.org/debian/pool/main/libs/libseccomp/libseccomp2_2.5.3-2_armhf.deb\nsudo dpkg -i libseccomp2_2.5.3-2_armhf.deb\n

\u26a0\ufe0f The link may break over time. Check Debian Packages for the latest version.

"},{"location":"COMMON_ISSUES/#only-router-and-own-device-show-up","title":"Only Router and Own Device Show Up","text":""},{"location":"COMMON_ISSUES/#losing-settings-or-devices-after-update","title":"Losing Settings or Devices After Update","text":""},{"location":"COMMON_ISSUES/#application-performance-issues","title":"Application Performance Issues","text":"

Slowness can be caused by:

See Performance Tips for detailed optimization steps.

"},{"location":"COMMON_ISSUES/#ip-flipping","title":"IP flipping","text":"

With ARPSCAN scans, some devices might flip IP addresses after each scan, triggering false notifications. This happens because some devices respond to broadcast calls, so a different IP may be logged for them after each scan.

See how to prevent IP flipping in the ARPSCAN plugin guide.

Alternatively, adjust your notification settings to prevent false positives by filtering out events or devices.

"},{"location":"COMMON_ISSUES/#multiple-nics-on-same-host-reporting-same-ip","title":"Multiple NICs on Same Host Reporting Same IP","text":"

On systems with multiple NICs (like a Proxmox server), each NIC has its own MAC address. Sometimes NetAlertX can incorrectly assign the same IP to all NICs, causing false device mappings. This is due to the way ARP responses are handled by the OS and cannot be overridden directly in NetAlertX.

Resolution (Linux-based systems, e.g., Proxmox):

Run the following commands on the host to fix ARP behavior:

sudo sysctl -w net.ipv4.conf.all.arp_ignore=1\nsudo sysctl -w net.ipv4.conf.all.arp_announce=2\n

This ensures each NIC responds correctly to ARP requests and prevents NetAlertX from misassigning IPs.

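These sysctl changes do not survive a reboot. To make them persistent, they can be dropped into a sysctl.d file (the file name below is arbitrary):

```shell
cat <<'EOF' | sudo tee /etc/sysctl.d/99-netalertx-arp.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sudo sysctl --system   # reload all sysctl settings
```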
For setups with multiple interfaces on the same switch, consider workflows, device exclusions, or dummy devices as additional workarounds. See Feature Requests for reporting edge cases.

"},{"location":"COMMUNITY_GUIDES/","title":"Community Guides","text":"

Note

This content is community-contributed. Due to differences in environment, setup, or networking, results may vary. Please open a PR to improve it rather than creating an issue, as this material is not actively maintained.

Use the official installation guides first and treat community content as supplementary material. (Ordered by last update time)

"},{"location":"CUSTOM_PROPERTIES/","title":"Custom Properties for Devices","text":""},{"location":"CUSTOM_PROPERTIES/#overview","title":"Overview","text":"

This functionality allows you to define custom properties for devices, which can store and display additional information on the device listing page. By marking properties as \"Show\", you can enhance the user interface with quick actions, notes, or external links.

"},{"location":"CUSTOM_PROPERTIES/#key-features","title":"Key Features:","text":""},{"location":"CUSTOM_PROPERTIES/#usage-on-the-device-listing-page","title":"Usage on the Device Listing Page","text":"

Visible properties (CUSTPROP_show: true) are displayed as interactive icons in the device listing. Each icon can perform one of the following actions based on the CUSTPROP_type:

  1. Modals (e.g., Show Notes):
     - Displays detailed information in a popup modal.
     - Example: Firmware version details.

  2. Links:
     - Redirect to an external or internal URL.
     - Example: Open a device's documentation or external site.

  3. Device Actions:
     - Manage devices with actions like delete.
     - Example: Quickly remove a device from the network.

  4. Plugins:
     - Future placeholder for running custom plugin scripts.
     - Note: Not implemented yet.
"},{"location":"CUSTOM_PROPERTIES/#example-use-cases","title":"Example Use Cases","text":"
  1. Device Documentation Link:
     - Add a custom property with CUSTPROP_type set to link or link_new_tab to allow quick navigation to the external documentation of the device.

  2. Firmware Details:
     - Use CUSTPROP_type: show_notes to display firmware versions or upgrade instructions in a modal.

  3. Device Removal:
     - Enable device removal functionality using CUSTPROP_type: delete_dev.
"},{"location":"CUSTOM_PROPERTIES/#defining-custom-properties","title":"Defining Custom Properties","text":"

Custom properties are structured as a list of objects, where each property includes the following fields:

Field Description CUSTPROP_icon The icon (Base64-encoded HTML) displayed for the property. CUSTPROP_type The action type (e.g., show_notes, link, delete_dev). CUSTPROP_name A short name or title for the property. CUSTPROP_args Arguments for the action (e.g., URL or modal text). CUSTPROP_notes Additional notes or details displayed when applicable. CUSTPROP_show A boolean to control visibility (true to show on the listing page)."},{"location":"CUSTOM_PROPERTIES/#available-action-types","title":"Available Action Types","text":""},{"location":"CUSTOM_PROPERTIES/#notes","title":"Notes","text":"

This feature provides a flexible way to enhance device management and display with interactive elements tailored to your needs.

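For reference, a single entry in the custom properties list might look like this (all values hypothetical; the CUSTPROP_icon value is truncated base64-encoded HTML):

```json
[
  {
    "CUSTPROP_icon": "PGkgY2xhc3M9...",
    "CUSTPROP_type": "link_new_tab",
    "CUSTPROP_name": "Device docs",
    "CUSTPROP_args": "https://example.com/device-manual",
    "CUSTPROP_notes": "Opens the vendor manual in a new tab",
    "CUSTPROP_show": true
  }
]
```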
"},{"location":"DATABASE/","title":"A high-level description of the database structure","text":"

An overview of the most important database tables, as well as a detailed overview of the Devices table. The MAC address is used as a foreign key in most cases.

"},{"location":"DATABASE/#devices-database-table","title":"Devices database table","text":"Field Name Description Sample Value devMac MAC address of the device. 00:1A:2B:3C:4D:5E devName Name of the device. iPhone 12 devOwner Owner of the device. John Doe devType Type of the device (e.g., phone, laptop, etc.). If set to a network type (e.g., switch), it will become selectable as a Network Parent Node. Laptop devVendor Vendor/manufacturer of the device. Apple devFavorite Whether the device is marked as a favorite. 1 devGroup Group the device belongs to. Home Devices devComments User comments or notes about the device. Used for work purposes devFirstConnection Timestamp of the device's first connection. 2025-03-22 12:07:26+11:00 devLastConnection Timestamp of the device's last connection. 2025-03-22 12:07:26+11:00 devLastIP Last known IP address of the device. 192.168.1.5 devStaticIP Whether the device has a static IP address. 0 devScan Whether the device should be scanned. 1 devLogEvents Whether events related to the device should be logged. 0 devAlertEvents Whether alerts should be generated for events. 1 devAlertDown Whether an alert should be sent when the device goes down. 0 devCanSleep Whether the device can enter a sleep window. When 1, offline periods within the NTFPRCS_sleep_time window are shown as Sleeping instead of Down and no down alert is fired. 0 devSkipRepeated Whether to skip repeated alerts for this device. 1 devLastNotification Timestamp of the last notification sent for this device. 2025-03-22 12:07:26+11:00 devPresentLastScan Whether the device was present during the last scan. 1 devIsNew Whether the device is marked as new. 0 devLocation Physical or logical location of the device. Living Room devIsArchived Whether the device is archived. 0 devParentMAC MAC address of the parent device (if applicable) to build the Network Tree. 00:1A:2B:3C:4D:5F devParentPort Port of the parent device to which this device is connected. 
Port 3 devIcon Icon representing the device. The value is a base64-encoded SVG or Font Awesome HTML tag. PHN2ZyB... devGUID Unique identifier for the device. a2f4b5d6-7a8c-9d10-11e1-f12345678901 devSite Site or location where the device is registered. Office devSSID SSID of the Wi-Fi network the device is connected to. HomeNetwork devSyncHubNode The NetAlertX node ID used for synchronization between NetAlertX instances. node_1 devSourcePlugin Source plugin that discovered the device. ARPSCAN devCustomProps Custom properties related to the device. The value is a base64-encoded JSON object. PHN2ZyB... devFQDN Fully qualified domain name. raspberrypi.local devParentRelType The type of relationship between the current device and its parent node. By default, selecting nic will hide it from lists. nic devReqNicsOnline Whether all NICs are required to be online to mark the current device online. 0

Note

DevicesView extends the Devices table with three computed fields that are never persisted: - devIsSleeping (1 when devCanSleep=1, device is offline, and devLastConnection is within the NTFPRCS_sleep_time window). - devFlapping (1 when the device has changed state more than the flap threshold times in the trailing window). - devStatus \u2014 derived string: On-line, Sleeping, Down, or Off-line.

To understand how the values of these fields influence application behavior, such as Notifications or Network topology, see also:

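Since devIcon and devCustomProps are stored base64-encoded, integrations need to decode them. A minimal Python sketch of round-tripping a devCustomProps value (property field names taken from the Custom Properties docs, values hypothetical):

```python
import base64
import json

# Hypothetical custom-properties list, as stored in devCustomProps.
props = [{
    "CUSTPROP_name": "Docs",
    "CUSTPROP_type": "link",
    "CUSTPROP_args": "https://example.com/manual",
    "CUSTPROP_show": True,
}]

# devCustomProps holds this list as a base64-encoded JSON string.
encoded = base64.b64encode(json.dumps(props).encode()).decode()

# Decoding reverses the process.
decoded = json.loads(base64.b64decode(encoded))
assert decoded == props
```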
"},{"location":"DATABASE/#other-tables-overview","title":"Other Tables overview","text":"Table name Description Sample data CurrentScan Result of the current scan Devices The main devices database that also contains the Network tree mappings. If ScanCycle is set to 0 device is not scanned. Events Used to collect connection/disconnection events. Online_History Used to display the Device presence chart Parameters Used to pass values between the frontend and backend. Plugins_Events For capturing events exposed by a plugin via the last_result.log file. If unique then saved into the Plugins_Objects table. Entries are deleted once processed and stored in the Plugins_History and/or Plugins_Objects tables. Plugins_History History of all entries from the Plugins_Events table Plugins_Language_Strings Language strings collected from the plugin config.json files used for string resolution in the frontend. Plugins_Objects Unique objects detected by individual plugins. Sessions Used to display sessions in the charts Settings Database representation of the sum of all settings from app.conf and plugins coming from config.json files."},{"location":"DEBUG_API_SERVER/","title":"Debugging GraphQL server issues","text":"

The GraphQL server is an API middle layer, running on its own port specified by GRAPHQL_PORT, used to retrieve and show the data in the UI. It can also be used to retrieve data for custom third-party integrations. Check the API documentation for details.

The most common issue is that the GraphQL server doesn't start properly, usually due to a port conflict. If you are running multiple NetAlertX instances, make sure to use unique ports by changing the GRAPHQL_PORT setting. The default is 20212.

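To confirm a port conflict before changing the setting, check what is listening on the default port from the host (a sketch; on older systems, ss may need to be replaced with netstat):

```shell
ss -tulpn | grep 20212
```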
"},{"location":"DEBUG_API_SERVER/#how-to-update-the-graphql_port-in-case-of-issues","title":"How to update the GRAPHQL_PORT in case of issues","text":"

As a first troubleshooting step, try changing the default GRAPHQL_PORT setting. Remember that NetAlertX runs on the host network, so any other application using the same port will cause conflicts.

"},{"location":"DEBUG_API_SERVER/#updating-the-setting-via-the-settings-ui","title":"Updating the setting via the Settings UI","text":"

Ideally use the Settings UI to update the setting under General -> Core -> GraphQL port:

You might need to temporarily stop other applications or NetAlertX instances that are causing conflicts before updating the setting. The API_TOKEN is used to authenticate any API calls, including GraphQL requests.

"},{"location":"DEBUG_API_SERVER/#updating-the-appconf-file","title":"Updating the app.conf file","text":"

If the UI is not accessible, you can directly edit the app.conf file in your /config folder:

"},{"location":"DEBUG_API_SERVER/#using-a-docker-variable","title":"Using a docker variable","text":"

All application settings can also be initialized via the APP_CONF_OVERRIDE docker env variable.

...\n environment:\n      - PORT=20213\n      - APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"}\n...\n
"},{"location":"DEBUG_API_SERVER/#how-to-check-the-graphql-server-is-running","title":"How to check the GraphQL server is running?","text":"

There are several ways to check if the GraphQL server is running.

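One quick probe from the host is an authenticated POST to the GraphQL endpoint (port and token are placeholders, and the query body is only illustrative; any JSON response, even an error, confirms the server is reachable):

```shell
curl -s -X POST http://localhost:20212/graphql \
  -H 'Authorization: Bearer <API_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ __typename }"}'
```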
"},{"location":"DEBUG_API_SERVER/#flask-debug-mode-environment","title":"Flask debug mode (environment)","text":"

You can control whether the Flask development debugger is enabled by setting the environment variable FLASK_DEBUG (default: False). Enabling debug mode will turn on the interactive debugger which may expose a remote code execution (RCE) vector if the server is reachable; only enable this for local development and never in production. Valid truthy values are: 1, true, yes, on (case-insensitive).

In the running container you can set this variable via Docker Compose or your environment, for example:

environment:\n  - FLASK_DEBUG=1\n

When enabled, the GraphQL server startup logs will indicate the debug setting.

"},{"location":"DEBUG_API_SERVER/#init-check","title":"Init Check","text":"

You can navigate to System Info -> Init Check to see if isGraphQLServerRunning is ticked:

"},{"location":"DEBUG_API_SERVER/#checking-the-logs","title":"Checking the Logs","text":"

You can navigate to Maintenance -> Logs and search for graphql to see if it started correctly and serving requests:

"},{"location":"DEBUG_API_SERVER/#inspecting-the-browser-console","title":"Inspecting the Browser console","text":"

In your browser open the dev console (usually F12) and navigate to the Network tab where you can filter GraphQL requests (e.g., reload the Devices page).

You can then inspect any of the POST requests by opening them in a new tab.

"},{"location":"DEBUG_INVALID_JSON/","title":"How to debug the Invalid JSON response error","text":"

Check the HTTP response of the failing backend call by following these steps:

For reference, the above queries should return results in the following format:

"},{"location":"DEBUG_INVALID_JSON/#first-url","title":"First URL:","text":""},{"location":"DEBUG_INVALID_JSON/#second-url","title":"Second URL:","text":""},{"location":"DEBUG_INVALID_JSON/#third-url","title":"Third URL:","text":"

You can copy and paste any JSON result (result of the First and Third query) into an online JSON checker, such as this one to check if it's valid.

"},{"location":"DEBUG_PHP/","title":"Debugging backend PHP issues","text":""},{"location":"DEBUG_PHP/#logs-in-ui","title":"Logs in UI","text":"

You can view recent backend PHP errors directly in the Maintenance > Logs section of the UI. This provides quick access to logs without needing terminal access.

"},{"location":"DEBUG_PHP/#accessing-logs-directly","title":"Accessing logs directly","text":"

Sometimes, the UI might not be accessible. In that case, you can access the logs directly inside the container.

"},{"location":"DEBUG_PHP/#step-by-step","title":"Step-by-step:","text":"
  1. Open a shell into the container:
docker exec -it netalertx /bin/sh\n
  2. Check the NGINX error log:
cat /var/log/nginx/error.log\n
  3. Check the PHP application error log:
cat /tmp/log/app.php_errors.log\n

These logs will help identify syntax issues, fatal errors, or startup problems when the UI fails to load properly.

"},{"location":"DEBUG_PLUGINS/","title":"Troubleshooting plugins","text":"

Tip

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set in Settings.

"},{"location":"DEBUG_PLUGINS/#high-level-overview","title":"High-level overview","text":"

If a plugin supplies data to the main app, it does so either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/).

For a more in-depth overview on how plugins work check the Plugins development docs.

"},{"location":"DEBUG_PLUGINS/#prerequisites","title":"Prerequisites","text":""},{"location":"DEBUG_PLUGINS/#potential-issues","title":"Potential issues","text":""},{"location":"DEBUG_PLUGINS/#incorrect-input-data","title":"Incorrect input data","text":"

Input data from the plugin might cause mapping issues in specific edge cases. Look for a corresponding section in the app.log file, and search for [Scheduler] run for PLUGINNAME: YES, so for ICMP you would look for [Scheduler] run for ICMP: YES. You can find examples of useful logs below. If your issue is related to a plugin, and you don't include a log section with this data, we can't help you to resolve your issue.

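A quick way to pull just that section from a running container (the log path is an assumption based on the folder layout above; adjust it to your mapping):

```shell
docker exec netalertx grep -n 'Scheduler] run for ICMP' /app/log/app.log
```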
"},{"location":"DEBUG_PLUGINS/#icmp-log-example","title":"ICMP log example","text":"
20:39:04 [Scheduler] run for ICMP: YES\n20:39:04 [ICMP] fping skipping 192.168.1.124 : [2], timed out (NaN avg, 100% loss)\n20:39:04 [ICMP] adding 192.168.1.123 from 192.168.1.123 : [2], 64 bytes, 20.1 ms (8.22 avg, 0% loss)\n20:39:04 [ICMP] fping skipping 192.168.1.157 : [1], timed out (NaN avg, 100% loss)\n20:39:04 [ICMP] adding 192.168.1.79 from 192.168.1.79  : [2], 64 bytes, 48.3 ms (60.9 avg, 0% loss)\n20:39:04 [ICMP] fping skipping 192.168.1.128 : [2], timed out (NaN avg, 100% loss)\n20:39:04 [ICMP] fping skipping 192.168.1.129 : [2], timed out (NaN avg, 100% loss)\n
"},{"location":"DEBUG_PLUGINS/#pihole-log-example","title":"PIHOLE log example","text":"
17:31:05 [Scheduler] run for PIHOLE: YES\n17:31:05 [Plugin utils] ---------------------------------------------\n17:31:05 [Plugin utils] display_name: PiHole (Device sync)\n17:31:05 [Plugins] CMD: SELECT n.hwaddr AS Object_PrimaryID, {s-quote}null{s-quote} AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, {s-quote}null{s-quote} AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE {s-quote}ip-%{s-quote} AND n.hwaddr is not {s-quote}00:00:00:00:00:00{s-quote}  AND na.ip is not null\n17:31:05 [Plugins] setTyp: subnets\n17:31:05 [Plugin utils] Flattening the below array\n17:31:05 ['192.168.1.0/24 --interface=eth1']\n17:31:05 [Plugin utils] isinstance(arr, list) : False | isinstance(arr, str) : True\n17:31:05 [Plugins] Resolved value: 192.168.1.0/24 --interface=eth1\n17:31:05 [Plugins] Convert to Base64: True\n17:31:05 [Plugins] base64 value: b'MTkyLjE2OC4xLjAvMjQgLS1pbnRlcmZhY2U9ZXRoMQ=='\n17:31:05 [Plugins] Timeout: 10\n17:31:05 [Plugins] Executing: SELECT n.hwaddr AS Object_PrimaryID, 'null' AS Object_SecondaryID, datetime() AS DateTime, na.ip  AS Watched_Value1, n.lastQuery AS Watched_Value2, na.name AS Watched_Value3, n.macVendor AS Watched_Value4, 'null' AS Extra, n.hwaddr AS ForeignKey FROM EXTERNAL_PIHOLE.Network AS n LEFT JOIN EXTERNAL_PIHOLE.Network_Addresses AS na ON na.network_id = n.id WHERE n.hwaddr NOT LIKE 'ip-%' AND n.hwaddr is not '00:00:00:00:00:00'  AND na.ip is not null\n\ud83d\udd3b\n17:31:05 [Plugins] SUCCESS, received 2 entries\n17:31:05 [Plugins] sqlParam entries: [(0, 'PIHOLE', '01:01:01:01:01:01', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'not-processed', 'null', 'null', '01:01:01:01:01:01'), (0, 'PIHOLE', '02:42:ac:1e:00:02', 'null', 'null', '2023-12-25 06:31:05', '172.30.0.2', 0, 'dddd', 
'vvvvv2222', 'not-processed', 'null', 'null', '02:42:ac:1e:00:02')]\n17:31:05 [Plugins] Processing        : PIHOLE\n17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n17:31:05 [Plugin utils] In pluginEvents there are 2 events with the status \"watched-not-changed\"\n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"missing-in-last-scan\"\n17:31:05 [Plugin utils] In pluginObjects there are 2 events with the status \"watched-not-changed\"\n17:31:05 [Plugins] Mapping objects to database table: CurrentScan\n17:31:05 [Plugins] SQL query for mapping: INSERT into CurrentScan ( \"scanMac\", \"scanLastIP\", \"scanLastQuery\", \"scanName\", \"scanVendor\", \"scanSourcePlugin\") VALUES ( ?, ?, ?, ?, ?, ?)\n17:31:05 [Plugins] SQL sqlParams for mapping: [('01:01:01:01:01:01', '172.30.0.1', 0, 'aaaa', 'vvvvvvvvv', 'PIHOLE'), ('02:42:ac:1e:00:02', '172.30.0.2', 0, 'dddd', 'vvvvv2222', 'PIHOLE')]\n\ud83d\udd3a\n17:31:05 [API] Update API starting\n17:31:06 [API] Updating table_plugins_history.json file in /api\n

Note

The debug output between the \ud83d\udd3bred arrows\ud83d\udd3a is important for debugging (arrows added only to highlight the section on this page, they are not available in the actual debug log)

In the above output notice the section logging how many events are produced by the plugin:

17:31:05 [Plugins] Existing objects from Plugins_Objects: 4\n17:31:05 [Plugins] Logged events from the plugin run    : 2\n17:31:05 [Plugins] pluginEvents      count: 2\n17:31:05 [Plugins] pluginObjects     count: 4\n17:31:05 [Plugins] events_to_insert  count: 0\n17:31:05 [Plugins] history_to_insert count: 4\n17:31:05 [Plugins] objects_to_insert count: 0\n17:31:05 [Plugins] objects_to_update count: 4\n

These values, if formatted correctly, will also show up in the UI:

"},{"location":"DEBUG_PLUGINS/#sharing-application-state","title":"Sharing application state","text":"

Sometimes specific log sections are needed to debug issues. The Devices and CurrentScan table data is sometimes needed to figure out what's wrong.

  1. Please set LOG_LEVEL to trace in the Settings (Disable it once you have the info as this produces big log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  6. Please set LOG_LEVEL to debug or lower.
"},{"location":"DEBUG_TIPS/","title":"Debugging and troubleshooting","text":"

Please follow tips 1 - 4 to get a more detailed error.

"},{"location":"DEBUG_TIPS/#1-more-logging","title":"1. More Logging","text":"

When debugging an issue always set the highest log level in Settings -> Core:

LOG_LEVEL='trace'

"},{"location":"DEBUG_TIPS/#2-surfacing-errors-when-container-restarts","title":"2. Surfacing errors when container restarts","text":"

Start the container via the terminal with a command similar to this one:

docker run \\\n  --network=host \\\n  --restart unless-stopped \\\n  -v /local_data_dir:/data \\\n  -v /etc/localtime:/etc/localtime:ro \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  -e PORT=20211 \\\n  -e APP_CONF_OVERRIDE='{\"GRAPHQL_PORT\":\"20214\"}' \\\n  ghcr.io/netalertx/netalertx:latest\n

Note: Your /local_data_dir should contain a config and db folder.

Note

\u26a0 The most important part is NOT to use the -d parameter so you see the error when the container crashes. Use this error in your issue description.

"},{"location":"DEBUG_TIPS/#3-check-the-_dev-image-and-open-issues","title":"3. Check the _dev image and open issues","text":"

If possible, check whether your issue has already been fixed in the _dev image before opening a new one. The image is:

ghcr.io/netalertx/netalertx-dev:latest

\u26a0 Please backup your DB and config beforehand!

Please also search open issues.

"},{"location":"DEBUG_TIPS/#4-disable-restart-behavior","title":"4. Disable restart behavior","text":"

To prevent a Docker container from automatically restarting in a Docker Compose file, specify the restart policy as "no" (quoted, so YAML does not parse it as a boolean):

version: '3'\n\nservices:\n  your-service:\n    image: your-image:tag\n    restart: \"no\"\n    # Other service configurations...\n
"},{"location":"DEBUG_TIPS/#5-tmp-mount-directories-to-rule-host-out-permission-issues","title":"5. TMP mount directories to rule host out permission issues","text":"

Try starting the container with all data in non-persistent volumes. If this works, the issue might be related to the permissions of your persistent data mount locations on your server. See the Permissions guide for details.

"},{"location":"DEBUG_TIPS/#6-sharing-application-state","title":"6. Sharing application state","text":"

Sometimes specific log sections are needed to debug an issue; in particular, the Devices and CurrentScan table contents often help figure out what's wrong.

  1. Please set LOG_LEVEL to trace (Disable it once you have the info as this produces big log files).
  2. Wait for the issue to occur.
  3. Search for ================ DEVICES table content ================ in your logs.
  4. Search for ================ CurrentScan table content ================ in your logs.
  5. Open a new issue and post (redacted) output into the issue description (or send to the netalertx@gmail.com email if sensitive data present).
  6. Please set LOG_LEVEL to debug or lower.
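To pull just those sections out of a large trace log, a rough sketch like the following can help. The marker format is taken from the steps above; the inline sample stands in for your real log contents, whose file path depends on your setup:

```python
import re

def extract_section(log_text: str, marker: str) -> str:
    """Return the content following a '================ <marker> ================' header."""
    pattern = rf"=+ {re.escape(marker)} =+\n(.*?)(?=\n=+|\Z)"
    m = re.search(pattern, log_text, re.DOTALL)
    return m.group(1).strip() if m else ""

# Stand-in for the real log contents (read your app log file here instead).
sample = (
    "================ DEVICES table content ================\n"
    "('01:01:01:01:01:01', '172.30.0.1')\n"
    "================ CurrentScan table content ================\n"
    "('02:42:ac:1e:00:02', '172.30.0.2')\n"
)
print(extract_section(sample, "DEVICES table content"))
```

Remember to redact MAC and IP data before posting the extracted output in an issue.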
"},{"location":"DEBUG_TIPS/#common-issues","title":"Common issues","text":"

See Common issues for additional troubleshooting tips.

"},{"location":"DEVICES_BULK_EDITING/","title":"Editing multiple devices at once","text":"

NetAlertX allows you to mass-edit devices via a CSV export and import feature, or directly in the UI.

"},{"location":"DEVICES_BULK_EDITING/#ui-multi-edit","title":"UI multi edit","text":"

Note

Make sure you have your backups saved and restorable before doing any mass edits. Check Backup strategies.

You can edit multiple devices in the Devices view by selecting the devices to edit and then clicking the Multi-edit button, or via the Maintenance > Multi-Edit section.

"},{"location":"DEVICES_BULK_EDITING/#csv-bulk-edit","title":"CSV bulk edit","text":"

The database and device structure may change with new releases. When using the CSV import functionality, ensure the format matches what the application expects. To avoid issues, you can first export the devices and review the column formats before importing any custom data.

Note

As always, backup everything, just in case.

  1. In Maintenance > Backup / Restore click the CSV Export button.
  2. A devices.csv file is generated in the /config folder.
  3. Edit the devices.csv file however you like.

Note

The file contains a list of Devices, including the network relationships between Network Nodes and connected devices. You can also generate it with the CSV Backup plugin. (\ud83d\udca1 You can schedule this)

"},{"location":"DEVICES_BULK_EDITING/#file-encoding-format","title":"File encoding format","text":"

Note

Keep Linux line endings (suggested editors: Nano, Notepad++)
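A quick way to sanity-check an edited file before importing it back is to confirm LF line endings and eyeball the header row. This is only a sketch: the column names below are illustrative, not the exact export format, so export first and compare against your own header:

```python
import csv
import io

# Stand-in for the contents of your exported devices.csv (illustrative columns).
sample = "devMac,devName,devLastIP\n01:01:01:01:01:01,Router,192.168.1.1\n"

# Windows editors often save CRLF endings; the import expects Linux (LF) endings.
assert "\r\n" not in sample, "Convert CRLF line endings to LF before importing"

# Preview the header row to confirm the column layout matches the export.
header = next(csv.reader(io.StringIO(sample)))
print(header)
```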

"},{"location":"DEVICE_DISPLAY_SETTINGS/","title":"Device Display Settings","text":"

This set of settings allows you to group Devices under different views. The Archived toggle allows you to exclude a Device from most listings and notifications.

"},{"location":"DEVICE_DISPLAY_SETTINGS/#status-colors","title":"Status Colors","text":"Icon Status Image Description Online (Green) A device that is no longer marked as a \"New Device\". New (Green) A newly discovered device that is online and is still marked as a \"New Device\". Online (Orange) The device is online, but unstable and flapping (3 status changes in the last hour). New (Grey) Same as \"New (Green)\" but the device is now offline. New (Grey) Same as \"New (Green)\" but the device is now offline and archived. Offline (Grey) A device that was not detected online in the last scan. Archived (Grey) A device that has been archived and is excluded from most listings and notifications. Sleeping (Aqua) A device with Can Sleep enabled that has gone offline within the NTFPRCS_sleep_time window. No down alert is fired while the device is in this state. See Notifications. Down (Red) A device marked as \"Alert Down\" and offline for the duration set in NTFPRCS_alert_down_time.

See also Notification guide.

"},{"location":"DEVICE_FIELD_LOCK/","title":"Quick Reference Guide - Device Field Lock/Unlock System","text":""},{"location":"DEVICE_FIELD_LOCK/#overview","title":"Overview","text":"

The device field lock/unlock system allows you to protect specific device fields from being automatically overwritten by scanning plugins. When you lock a field, NetAlertX remembers your choice and prevents plugins from changing that value until you unlock it.

Use case: You've manually corrected a device name or port number and want to keep it that way, even when plugins discover different values.

"},{"location":"DEVICE_FIELD_LOCK/#tracked-fields","title":"Tracked Fields","text":"

These are the ONLY fields that can be locked:

Additional fields that are tracked (their source is displayed in the UI if available):

"},{"location":"DEVICE_FIELD_LOCK/#source-values-explained","title":"Source Values Explained","text":"

Each locked field has a \"source\" indicator that shows you why the value is protected:

Indicator Meaning Can It Change? \ud83d\udd12 LOCKED You locked this field No, until you unlock it \u270f\ufe0f USER You edited this field No, plugins can't overwrite \ud83d\udce1 NEWDEV Default/unset value Yes, plugins can update \ud83d\udce1 Plugin name Last updated by a plugin (e.g., UNIFIAPI) Yes, plugins can update if field in SET_ALWAYS

The overwrite rules are:

Tip

You can bulk-unlock devices in the Multi-edit dialog. This removes all USER and LOCKED values from all *Source fields of selected devices.

"},{"location":"DEVICE_FIELD_LOCK/#usage-examples","title":"Usage Examples","text":""},{"location":"DEVICE_FIELD_LOCK/#lock-a-field-prevent-plugin-changes","title":"Lock a Field (Prevent Plugin Changes)","text":"
  1. Navigate to Device Details for the device
  2. Find the field you want to protect (e.g., device name)
  3. Click the lock button (\ud83d\udd12) next to the field
  4. The button changes to unlock (\ud83d\udd13)
  5. That field is now protected
"},{"location":"DEVICE_FIELD_LOCK/#unlock-a-field-allow-plugin-updates","title":"Unlock a Field (Allow Plugin Updates)","text":"
  1. Go to Device Details
  2. Find the locked field (shows \ud83d\udd13)
  3. Click the unlock button (\ud83d\udd13)
  4. The button changes back to lock (\ud83d\udd12)
  5. Plugins can now update that field again
"},{"location":"DEVICE_FIELD_LOCK/#common-scenarios","title":"Common Scenarios","text":""},{"location":"DEVICE_FIELD_LOCK/#scenario-1-youve-named-your-device-and-want-to-keep-the-name","title":"Scenario 1: You've Named Your Device and Want to Keep the Name","text":"
  1. You manually edit device name to \"Living Room Smart TV\"
  2. A scanning plugin later discovers it as \"Unknown Device\" or \"DEVICE-ABC123\"
  3. Solution: Lock the device name field
  4. Your custom name is preserved even after future scans
"},{"location":"DEVICE_FIELD_LOCK/#scenario-2-you-lock-a-field-but-it-still-changes","title":"Scenario 2: You Lock a Field, But It Still Changes","text":"

A locked field (source USER or LOCKED) is protected and should not change. If it still appears to change, check: - Is the field actually showing the lock icon? (If yes, it's protected) - Wait a moment\u2014sometimes changes take a few seconds to display - Try refreshing the page

"},{"location":"DEVICE_FIELD_LOCK/#scenario-3-you-want-to-let-plugins-update-again","title":"Scenario 3: You Want to Let Plugins Update Again","text":"
  1. Find the device with locked fields
  2. Click the unlock button (\ud83d\udd13) next to each field
  3. Refresh the page
  4. Next time a plugin runs, it can update that field
"},{"location":"DEVICE_FIELD_LOCK/#what-happens-when-you-lock-a-field","title":"What Happens When You Lock a Field","text":""},{"location":"DEVICE_FIELD_LOCK/#what-happens-when-you-unlock-a-field","title":"What Happens When You Unlock a Field","text":""},{"location":"DEVICE_FIELD_LOCK/#error-messages-solutions","title":"Error Messages & Solutions","text":"Message What It Means What to Do \"Field cannot be locked\" You tried to lock a field that doesn't support locking Only lock the fields listed above \"Device not found\" The device MAC address doesn't exist Verify the device hasn't been deleted Lock button doesn't work Network or permission issue Refresh the page and try again Unexpected field changed Field might have been unlocked Check if field shows unlock icon (\ud83d\udd13)"},{"location":"DEVICE_FIELD_LOCK/#quick-tips","title":"Quick Tips","text":""},{"location":"DEVICE_FIELD_LOCK/#when-to-lock-vs-when-not-to-lock","title":"When to Lock vs. When NOT to Lock","text":""},{"location":"DEVICE_FIELD_LOCK/#good-reasons-to-lock","title":"\u2705 Good reasons to lock:","text":""},{"location":"DEVICE_FIELD_LOCK/#bad-reasons-to-lock","title":"\u274c Bad reasons to lock:","text":""},{"location":"DEVICE_FIELD_LOCK/#troubleshooting","title":"Troubleshooting","text":"

Lock button not appearing:

Lock button is there but click doesn't work:

Field still changes after locking:

"},{"location":"DEVICE_FIELD_LOCK/#see-also","title":"See also","text":""},{"location":"DEVICE_HEURISTICS/","title":"Device Heuristics: Icon and Type Guessing","text":"

This module is responsible for inferring the most likely device type and icon based on minimal identifying data like MAC address, vendor, IP, or device name.

It does this using a set of heuristics defined in an external JSON rules file, which it evaluates in priority order.

Note

You can find the full source code of the heuristics module in the device_heuristics.py file.

"},{"location":"DEVICE_HEURISTICS/#json-rule-format","title":"JSON Rule Format","text":"

Rules are defined in a file called device_heuristics_rules.json (located under /back), structured like:

[\n  {\n    \"dev_type\": \"Phone\",\n    \"icon_html\": \"<i class=\\\"fa-brands fa-apple\\\"></i>\",\n    \"matching_pattern\": [\n      { \"mac_prefix\": \"001A79\", \"vendor\": \"Apple\" }\n    ],\n    \"name_pattern\": [\"iphone\", \"pixel\"]\n  }\n]\n

Note

Feel free to raise a PR in case you'd like to add any rules into the device_heuristics_rules.json file. Please place new rules into the correct position and consider the priority of already available rules.

"},{"location":"DEVICE_HEURISTICS/#supported-fields","title":"Supported fields:","text":"Field Type Description dev_type string Type to assign if rule matches (e.g. \"Gateway\", \"Phone\") icon_html string Icon (HTML string) to assign if rule matches. Encoded to base64 at load time. matching_pattern array List of { mac_prefix, vendor } objects for first strict and then loose matching name_pattern array (optional) List of lowercase substrings (used with regex) ip_pattern array (optional) Regex patterns to match IPs

Order in this array defines priority \u2014 rules are checked top-down and short-circuit on first match.

"},{"location":"DEVICE_HEURISTICS/#matching-flow-in-priority-order","title":"Matching Flow (in Priority Order)","text":"

The function guess_device_attributes(...) runs a series of matching functions in strict order:

  1. MAC + Vendor \u2192 match_mac_and_vendor()
  2. Vendor only \u2192 match_vendor()
  3. Name pattern \u2192 match_name()
  4. IP pattern \u2192 match_ip()
  5. Final fallback \u2192 defaults defined in the NEWDEV_devIcon and NEWDEV_devType settings.
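The cascade above can be sketched as follows. This is a simplified illustration, not the actual device_heuristics.py code: the rule shape follows the JSON example earlier, only the type (not the icon) is returned, and the helper name is made up for the sketch:

```python
import re

# Simplified rules in the shape of device_heuristics_rules.json.
RULES = [
    {"dev_type": "Phone",
     "matching_pattern": [{"mac_prefix": "001A79", "vendor": "Apple"}],
     "name_pattern": ["iphone", "pixel"],
     "ip_pattern": []},
]

def guess_device_type(mac, vendor, name, ip, default_type="unknown"):
    mac_clean = mac.replace(":", "").upper()
    checks = [
        # 1. MAC prefix + vendor (strict)
        lambda r: any(mac_clean.startswith(p["mac_prefix"]) and p["vendor"].lower() in vendor.lower()
                      for p in r.get("matching_pattern", [])),
        # 2. Vendor only (loose)
        lambda r: any(p["vendor"].lower() in vendor.lower()
                      for p in r.get("matching_pattern", [])),
        # 3. Name substrings
        lambda r: any(s in name.lower() for s in r.get("name_pattern", [])),
        # 4. IP regex patterns
        lambda r: any(re.search(p, ip) for p in r.get("ip_pattern", [])),
    ]
    for check in checks:            # matching functions run in strict order...
        for rule in RULES:          # ...rules checked top-down, short-circuit on first match
            if check(rule):
                return rule["dev_type"]
    return default_type             # 5. fall back to the NEWDEV_devType default

print(guess_device_type("00:1A:79:AA:BB:CC", "Apple, Inc.", "bedroom", "10.0.0.5"))
```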

Note

The app will try guessing the device type or icon if devType or devIcon are \"\" or \"null\".

"},{"location":"DEVICE_HEURISTICS/#use-of-default-values","title":"Use of default values","text":"

The guessing process runs for every device as long as the current type or icon still matches the default values. Even if earlier heuristics return a match, the system continues evaluating additional clues \u2014 like name or IP \u2014 to try and replace placeholders.

# Still considered a match attempt if current values are defaults\nif (not type_ or type_ == default_type) or (not icon or icon == default_icon):\n    type_, icon = match_ip(ip, default_type, default_icon)\n

In other words: if the type or icon is still \"unknown\" (or matches the default), the system assumes the match isn\u2019t final \u2014 and keeps looking. It stops only when both values are non-default (defaults are defined in the NEWDEV_devIcon and NEWDEV_devType settings).

"},{"location":"DEVICE_HEURISTICS/#match-behavior-per-function","title":"Match Behavior (per function)","text":"

These functions are executed in the following order:

"},{"location":"DEVICE_HEURISTICS/#match_mac_and_vendormac_clean-vendor","title":"match_mac_and_vendor(mac_clean, vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_vendorvendor","title":"match_vendor(vendor, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_namename","title":"match_name(name, ...)","text":""},{"location":"DEVICE_HEURISTICS/#match_ipip","title":"match_ip(ip, ...)","text":""},{"location":"DEVICE_HEURISTICS/#icons","title":"Icons","text":"

TL;DR: Type and icon must both be matched. If only one is matched, the other falls back to the default.

"},{"location":"DEVICE_HEURISTICS/#priority-mechanics","title":"Priority Mechanics","text":""},{"location":"DEVICE_MANAGEMENT/","title":"Device Management","text":"

The Main Info section is where most of a device's identifiable information is stored and edited. Some of the information is autodetected via various plugins. Initial values for most of the fields can be specified in the NEWDEV plugin.

Note

You can multi-edit devices by selecting them in the main Devices view, from the Maintenance section, or via the CSV Export functionality under Maintenance. More info can be found in the Devices Bulk-editing docs.

"},{"location":"DEVICE_MANAGEMENT/#main-info","title":"Main Info","text":"

Note

Please note the above usage of the fields are only suggestions. You can use most of these fields for other purposes, such as storing the network interface, company owning a device, or similar.

"},{"location":"DEVICE_MANAGEMENT/#dummy-devices","title":"Dummy devices","text":"

You can create dummy devices from the Devices listing screen.

The MAC field and the Last IP field will then become editable.

"},{"location":"DEVICE_MANAGEMENT/#dummy-or-manually-created-device-status","title":"Dummy or Manually Created Device Status","text":"

You can control a dummy device\u2019s status either via ICMP (automatic) or the Force Status field (manual). Choose based on whether the device is real and how important data hygiene is.

"},{"location":"DEVICE_MANAGEMENT/#icmp-real-devices","title":"ICMP (Real Devices)","text":"

Use a real IP that responds to ping so status is updated automatically.

"},{"location":"DEVICE_MANAGEMENT/#force-status-best-for-data-hygiene","title":"Force Status (Best for Data Hygiene)","text":"

Manually set the status when the device is not reachable or is purely logical. This keeps your data clean and avoids fake IPs.

"},{"location":"DEVICE_MANAGEMENT/#loopback-ip-127001-0000","title":"Loopback IP (127.0.0.1, 0.0.0.0)","text":"

Use when you want the device to always appear online via ICMP. Note that this simulates reachability and introduces artificial data. This approach might be preferred if you want to distinguish dummy devices by IP when filtering your asset lists.

"},{"location":"DEVICE_MANAGEMENT/#copying-data-from-an-existing-device","title":"Copying data from an existing device.","text":"

To speed up device population you can also copy data from an existing device. This can be done from the Tools tab on the Device details.

"},{"location":"DEVICE_MANAGEMENT/#field-locking-preventing-plugin-overwrites","title":"Field Locking (Preventing Plugin Overwrites)","text":"

NetAlertX allows you to \"lock\" specific device fields to prevent plugins from automatically overwriting your custom values. This is useful when you've manually corrected information that might be discovered differently by discovery plugins.

"},{"location":"DEVICE_MANAGEMENT/#quick-start","title":"Quick Start","text":"
  1. Open a device for editing
  2. Click the lock button (\ud83d\udd12) next to any tracked field
  3. The field is now protected\u2014plugins cannot change it until you unlock it
"},{"location":"DEVICE_MANAGEMENT/#see-also","title":"See Also","text":""},{"location":"DEVICE_SOURCE_FIELDS/","title":"Understanding Device Source Fields and Field Updates","text":"

When the system scans a network, it finds various details about devices (like names, IP addresses, and manufacturers). To ensure the data remains accurate without accidentally overwriting manual changes, the system uses a set of \"Source Rules.\"

"},{"location":"DEVICE_SOURCE_FIELDS/#the-protection-levels","title":"The \"Protection\" Levels","text":"

Every piece of information for a device has a Source. This source determines whether a new scan is allowed to change that value.

Source Status Description Can a Scan Overwrite it? USER You manually entered this value. Never LOCKED This value is pinned and protected. Never NEWDEV This value was initialized from NEWDEV plugin settings. Always (Plugin Name) The value was found by a specific scanner (e.g., NBTSCAN). Only if specific rules are met"},{"location":"DEVICE_SOURCE_FIELDS/#how-scans-update-information","title":"How Scans Update Information","text":"

If a field is not protected by a USER or LOCKED status, the system follows these rules to decide if it should update the info:

"},{"location":"DEVICE_SOURCE_FIELDS/#1-the-empty-field-rule-default","title":"1. The \"Empty Field\" Rule (Default)","text":"

By default, the system is cautious. It will only fill in a piece of information if the current field is empty (showing as \"unknown,\" \"0.0.0.0,\" or blank). For example, it won't change an existing name unless you tell it to.

"},{"location":"DEVICE_SOURCE_FIELDS/#2-set_always","title":"2. SET_ALWAYS","text":"

Some plugins are configured to be \"authoritative.\" If a field is in the SET_ALWAYS setting of a plugin:

"},{"location":"DEVICE_SOURCE_FIELDS/#3-set_empty","title":"3. SET_EMPTY","text":"

If a field is in the SET_EMPTY list:

"},{"location":"DEVICE_SOURCE_FIELDS/#4-automatic-overrides-live-tracking","title":"4. Automatic Overrides (Live Tracking)","text":"

Some fields, like IP Addresses (devLastIP) and Full Domain Names (devFQDN), are set to automatically update whenever they change. This ensures that if a device moves to a new IP on your network, the system reflects that change immediately without you having to do anything.
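Putting the rules together, the decision of whether a scan result may overwrite a stored value looks roughly like this. This is only a sketch, not the actual implementation: the "empty" values and the SET_EMPTY handling are simplified away, and the function name is made up for illustration:

```python
# Fields the docs describe as live-tracked (auto-updated on change).
AUTO_UPDATE_FIELDS = {"devLastIP", "devFQDN"}
# Values treated as "empty" (simplified).
EMPTY_VALUES = {"", "unknown", "0.0.0.0", None}

def may_overwrite(field, current_value, source, set_always=()):
    if source in ("USER", "LOCKED"):
        return False          # manual edits and locks are never overwritten
    if source == "NEWDEV":
        return True           # initialized defaults can always be replaced
    if field in AUTO_UPDATE_FIELDS:
        return True           # e.g. devLastIP follows the device across the network
    if field in set_always:
        return True           # plugin is authoritative for this field
    return current_value in EMPTY_VALUES   # default: only fill empty fields

print(may_overwrite("devName", "Living Room TV", "USER"))
```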

"},{"location":"DEVICE_SOURCE_FIELDS/#summary-of-field-logic","title":"Summary of Field Logic","text":"If the current value is... And the Scan finds... Does it update? USER / LOCKED Anything No Empty A new value Yes A \"Plugin\" value A different value No (Unless SET_ALWAYS is on) An IP Address A different IP Yes (Updates automatically)"},{"location":"DEVICE_SOURCE_FIELDS/#see-also","title":"See also:","text":""},{"location":"DEV_DEVCONTAINER/","title":"Devcontainer for NetAlertX Guide","text":"

This devcontainer is designed to mirror the production container environment as closely as possible, while providing a rich set of tools for development.

"},{"location":"DEV_DEVCONTAINER/#how-to-get-started","title":"How to Get Started","text":"
  1. Prerequisites:

  2. Launch the Devcontainer:

"},{"location":"DEV_DEVCONTAINER/#key-workflows-features","title":"Key Workflows & Features","text":"

Once you're inside the container, everything is set up for you.

"},{"location":"DEV_DEVCONTAINER/#1-services-frontend-backend","title":"1. Services (Frontend & Backend)","text":"

The container's startup script (.devcontainer/scripts/setup.sh) automatically starts the Nginx/PHP frontend and the Python backend. You can restart them at any time using the built-in tasks.

"},{"location":"DEV_DEVCONTAINER/#2-integrated-debugging-just-press-f5","title":"2. Integrated Debugging (Just Press F5!)","text":"

Debugging for both the Python backend and PHP frontend is pre-configured and ready to go.

"},{"location":"DEV_DEVCONTAINER/#3-common-tasks-f1-run-task","title":"3. Common Tasks (F1 -> Run Task)","text":"

We've created several VS Code Tasks to simplify common operations. Access them by pressing F1 and typing \"Tasks: Run Task\".

"},{"location":"DEV_DEVCONTAINER/#4-running-tests","title":"4. Running Tests","text":"

The environment includes pytest. You can run tests directly from the VS Code Test Explorer UI or by running pytest -q in the integrated terminal. The necessary PYTHONPATH is already configured so that tests can correctly import the server modules.

"},{"location":"DEV_DEVCONTAINER/#how-to-maintain-this-devcontainer","title":"How to Maintain This Devcontainer","text":"

The setup is designed to be easy to manage. Here are the core principles:

This setup provides a powerful and consistent foundation for all current and future contributors to NetAlertX.

"},{"location":"DEV_ENV_SETUP/","title":"Development Environment Setup","text":"

I truly appreciate all contributions! To help keep this project maintainable, this guide provides an overview of project priorities, key design considerations, and overall philosophy. It also includes instructions for setting up your environment so you can start contributing right away.

"},{"location":"DEV_ENV_SETUP/#development-guidelines","title":"Development Guidelines","text":"

Before starting development, please review the following guidelines.

"},{"location":"DEV_ENV_SETUP/#priority-order-highest-to-lowest","title":"Priority Order (Highest to Lowest)","text":"
  1. \ud83d\udd3c Fixing core bugs that lack workarounds
  2. \ud83d\udd35 Adding core functionality that unlocks other features (e.g., plugins)
  3. \ud83d\udd35 Refactoring to enable faster development
  4. \ud83d\udd3d UI improvements (PRs welcome, but low priority)
"},{"location":"DEV_ENV_SETUP/#design-philosophy","title":"Design Philosophy","text":"

The application architecture is designed for extensibility and maintainability. It relies heavily on configuration manifests via plugins and settings to dynamically build the UI and populate the application with data from various sources.

For details, see: - Plugins Development (includes video) - Settings System

Focus on core functionality and integrate with existing tools rather than reinventing the wheel.

Examples: - Using Apprise for notifications instead of implementing multiple separate gateways - Implementing regex-based validation instead of one-off validation for each setting

Note

UI changes have lower priority. PRs are welcome, but please keep them small and focused.

"},{"location":"DEV_ENV_SETUP/#development-environment-set-up","title":"Development Environment Set Up","text":"

Tip

There is also a ready-to-use devcontainer available.

The following steps will guide you through setting up your environment for local development and running a custom Docker build on your system. For most changes the container doesn't need to be rebuilt, which speeds up development significantly.

Note

Replace /development with the path where your code files will be stored. The default container name is netalertx so there might be a conflict with your running containers.

"},{"location":"DEV_ENV_SETUP/#1-download-the-code","title":"1. Download the code:","text":""},{"location":"DEV_ENV_SETUP/#2-create-a-dev-env_dev-file","title":"2. Create a DEV .env_dev file","text":"

touch /development/.env_dev && sudo nano /development/.env_dev

The file content should be following, with your custom values.

#--------------------------------\n#NETALERTX\n#--------------------------------\nPORT=22222    # make sure this port is unique on your whole network\nDEV_LOCATION=/development/NetAlertX\nAPP_DATA_LOCATION=/volume/docker_appdata\n# Make sure your GRAPHQL_PORT setting has a port that is unique on your whole host network\nAPP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"22223\"}\n# ALWAYS_FRESH_INSTALL=true # uncommenting this will always delete the content of /config and /db dirs on boot to simulate a fresh install\n
"},{"location":"DEV_ENV_SETUP/#3-create-db-and-config-dirs","title":"3. Create /db and /config dirs","text":"

Create a folder netalertx in the APP_DATA_LOCATION (in this example in /volume/docker_appdata) with 2 subfolders db and config.

"},{"location":"DEV_ENV_SETUP/#4-run-the-container","title":"4. Run the container","text":"

You can then modify the python script without restarting/rebuilding the container every time. Additionally, you can trigger a plugin run via the UI:

"},{"location":"DEV_ENV_SETUP/#tips","title":"Tips","text":"

A quick cheat sheet of useful commands.

"},{"location":"DEV_ENV_SETUP/#removing-the-container-and-image","title":"Removing the container and image","text":"

A command to stop, remove the container and the image (replace netalertx and netalertx-netalertx with the appropriate values)

"},{"location":"DEV_ENV_SETUP/#restart-the-server-backend","title":"Restart the server backend","text":"

Most code changes can be tested without rebuilding the container. When working on the python server backend, you only need to restart the server.

  1. You can usually restart the backend via Maintenance > Logs > Restart server

  2. If the above doesn't work, SSH into the container and kill & restart the main script loop:

  3. sudo docker exec -it netalertx /bin/bash

  4. pkill -f \"python /app/server\" && python /app/server &

  5. If none of the above work, restart the Docker container. This is usually the last resort, as sometimes the Docker engine becomes unresponsive and the whole engine needs to be restarted.

"},{"location":"DEV_ENV_SETUP/#contributing-pull-requests","title":"Contributing & Pull Requests","text":""},{"location":"DEV_ENV_SETUP/#before-submitting-a-pr-please-ensure","title":"Before submitting a PR, please ensure:","text":"

\u2714 Changes are backward-compatible with existing installs. \u2714 No unnecessary changes are made. \u2714 New features are reusable, not narrowly scoped. \u2714 Features are implemented via plugins if possible.

"},{"location":"DEV_ENV_SETUP/#mandatory-test-cases","title":"Mandatory Test Cases","text":"

Note

Always run all available tests as per the Testing documentation.

"},{"location":"DEV_PORTS_HOST_MODE/","title":"Dev Ports in Host Network Mode","text":"

When using \"--network=host\" in the devcontainer, VS Code's normal port forwarding model doesn't apply. All container ports are already on the host network namespace, so:

"},{"location":"DEV_PORTS_HOST_MODE/#recommended-pattern","title":"Recommended Pattern","text":"
  1. Only include debugger ports in forwardPorts:
    \"forwardPorts\": [5678, 9003]\n
  2. Do NOT list application service ports (e.g. 20211, 20212) there when in host mode.
  3. Use the helper task to enumerate current bindings:
  4. Run task: > Tasks: Run Task \u2192 [Dev Container] List NetAlertX Ports
"},{"location":"DEV_PORTS_HOST_MODE/#port-enumeration-script","title":"Port Enumeration Script","text":"

Script: scripts/list-ports.sh. It outputs the binding address, PID (if resolvable), and process name for each watched port.

You can edit the PORTS variable inside that script to add/remove watched ports.
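If you'd rather not edit the shell script, a rough Python equivalent can probe the same ports. The port list below uses this guide's defaults (app, GraphQL, debugpy, Xdebug); adjust it as needed:

```python
import socket

def is_listening(port, host="127.0.0.1"):
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket() as s:
        s.settimeout(0.2)
        return s.connect_ex((host, port)) == 0

for port in (20211, 20212, 5678, 9003):   # app, GraphQL, debugpy, Xdebug
    print(port, "open" if is_listening(port) else "closed")
```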

"},{"location":"DEV_PORTS_HOST_MODE/#xdebug-notes","title":"Xdebug Notes","text":"

Set in 99-xdebug.ini:

xdebug.client_host=127.0.0.1\nxdebug.client_port=9003\nxdebug.discover_client_host=1\n
Ensure your IDE is listening on 9003.

"},{"location":"DEV_PORTS_HOST_MODE/#troubleshooting","title":"Troubleshooting","text":"Symptom Cause Fix Waiting for port 20211 to free... repeats VS Code pre-bound the port via forwardPorts Remove the port from forwardPorts, rebuild, retry PHP request hangs at start Xdebug trying to connect to unresolved host (host.docker.internal) Use 127.0.0.1 or rely on discovery PORTS panel empty Expected in host mode Use the port enumeration task"},{"location":"DEV_PORTS_HOST_MODE/#future-improvements","title":"Future Improvements","text":""},{"location":"DOCKER_COMPOSE/","title":"NetAlertX and Docker Compose","text":"

Warning

\u26a0\ufe0f Important: The docker-compose has recently changed. Carefully read the Migration guide for detailed instructions.

Great care is taken to ensure NetAlertX meets the needs of everyone while being flexible enough for anyone. This document outlines how you can configure your docker-compose. There are many settings, so we recommend using the Baseline Docker Compose as-is, or modifying it for your system.

Note

The container needs to run in network_mode:\"host\" to access Layer 2 networking such as arp, nmap, and others. Because Windows hosts lack support for this feature, Windows is not a supported operating system.

"},{"location":"DOCKER_COMPOSE/#baseline-docker-compose","title":"Baseline Docker Compose","text":"

There is one baseline for NetAlertX. That's the default security-enabled official distribution.

services:\n  netalertx:\n  #use an environmental variable to set host networking mode if needed\n    container_name: netalertx                       # The name when you docker contiainer ls\n    image: ghcr.io/netalertx/netalertx:latest\n    network_mode: ${NETALERTX_NETWORK_MODE:-host}   # Use host networking for ARP scanning and other services\n\n    read_only: true                                 # Make the container filesystem read-only\n    cap_drop:                                       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:                                        # Add only the necessary capabilities\n      - NET_ADMIN                                   # Required for ARP scanning\n      - NET_RAW                                     # Required for raw socket operations\n      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)\n      - CHOWN                                       # Required for root-entrypoint to chown /data + /tmp before dropping privileges\n      - SETUID                                      # Required for root-entrypoint to switch to non-root user\n      - SETGID                                      # Required for root-entrypoint to switch to non-root group\n\n    volumes:\n      - type: volume                                # Persistent Docker-managed named volume for config + database\n        source: netalertx_data\n        target: /data                               # `/data/config` and `/data/db` live inside this mount\n        read_only: false\n\n    # Example custom local folder called /home/user/netalertx_data\n    # - type: bind\n    #   source: /home/user/netalertx_data\n    #   target: /data\n    #   read_only: false\n    # ... 
or use the alternative format\n    # - /home/user/netalertx_data:/data:rw\n\n      - type: bind                                  # Bind mount for timezone consistency\n        source: /etc/localtime\n        target: /etc/localtime\n        read_only: true\n\n      # Mount your DHCP server file into NetAlertX for a plugin to access\n      # - path/on/host/to/dhcp.file:/resources/dhcp.file\n\n    # tmpfs mount consolidates writable state for a read-only container and improves performance\n    # uid/gid default to the service user (NETALERTX_UID/GID, default 20211)\n    # mode=1700 grants rwx------ permissions to the runtime user only\n    tmpfs:\n      # Comment out to retain logs between container restarts - this has a server performance impact.\n      - \"/tmp:uid=${NETALERTX_UID:-20211},gid=${NETALERTX_GID:-20211},mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n      # Retain logs - comment out tmpfs /tmp if you want to retain logs between container restarts\n      # Please note if you remove the /tmp mount, you must create and maintain sub-folder mounts.\n      # - /path/on/host/log:/tmp/log\n      # - \"/tmp/api:uid=${NETALERTX_UID:-20211},gid=${NETALERTX_GID:-20211},mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/nginx:uid=${NETALERTX_UID:-20211},gid=${NETALERTX_GID:-20211},mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      # - \"/tmp/run:uid=${NETALERTX_UID:-20211},gid=${NETALERTX_GID:-20211},mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n\n    environment:\n      LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0}                   # Listen for connections on all interfaces\n      PORT: ${PORT:-20211}                                   # Application port\n      GRAPHQL_PORT: ${GRAPHQL_PORT:-20212}                   # GraphQL API port (passed into APP_CONF_OVERRIDE at runtime)\n  #    NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0}                 # 0=kill all services and restart if any dies. 
1=keep the container running even if a service dies.\n  #    PUID: 20211                             # Runtime PUID override, set to 0 to run as root\n  #    PGID: 20211                             # Runtime PGID override\n\n    # Resource limits to prevent resource exhaustion\n    mem_limit: 2048m            # Maximum memory usage\n    mem_reservation: 1024m      # Soft memory limit\n    cpu_shares: 512             # Relative CPU weight for CPU contention scenarios\n    pids_limit: 512             # Limit the number of processes/threads to prevent fork bombs\n    logging:\n      options:\n        max-size: \"10m\"         # Rotate log files after they reach 10MB\n        max-file: \"3\"           # Keep a maximum of 3 log files\n\n    # Always restart the container unless explicitly stopped\n    restart: unless-stopped\n\nvolumes:                        # Persistent volume for configuration and database storage\n  netalertx_data:\n

Run or re-run it:

docker compose up --force-recreate\n

Tip

Runtime UID/GID: The image ships with a service user netalertx (UID/GID 20211) and a readonly lock owner also at 20211 for 004/005 immutability. If you override the runtime user (compose user: or NETALERTX_UID/GID vars), ensure your /data volume and tmpfs mounts use matching uid/gid so startup checks and writable paths succeed.
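The ${VAR:-default} fallbacks used throughout the compose file follow standard POSIX parameter-expansion rules, so you can preview the values Compose will resolve in any shell (the values below are illustrative):

```shell
# The baseline uses ${VAR:-default} substitution; POSIX shell has the same
# semantics, so you can preview what Compose will resolve:
unset NETALERTX_UID NETALERTX_GID
echo "uid=${NETALERTX_UID:-20211} gid=${NETALERTX_GID:-20211}"   # → uid=20211 gid=20211

NETALERTX_UID=1000 NETALERTX_GID=1000                            # simulate an override
echo "uid=${NETALERTX_UID:-20211} gid=${NETALERTX_GID:-20211}"   # → uid=1000 gid=1000
```

This is why exporting NETALERTX_UID/GID before docker compose up is enough to retarget both the tmpfs ownership options and any permission fixes.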

"},{"location":"DOCKER_COMPOSE/#customize-with-environmental-variables","title":"Customize with Environmental Variables","text":"

You can override the default settings by passing environmental variables to the docker compose up command.

Example using a single variable:

This command runs NetAlertX on port 8080 instead of the default 20211.

PORT=8080 docker compose up\n

Example using all available variables:

This command demonstrates overriding all primary environmental variables: running with host networking, on port 20211, GraphQL on 20212, and listening on all IPs.

NETALERTX_NETWORK_MODE=host \\\nLISTEN_ADDR=0.0.0.0 \\\nPORT=20211 \\\nGRAPHQL_PORT=20212 \\\nNETALERTX_DEBUG=0 \\\ndocker compose up\n
"},{"location":"DOCKER_COMPOSE/#docker-composeyaml-modifications","title":"docker-compose.yaml Modifications","text":""},{"location":"DOCKER_COMPOSE/#modification-1-use-a-local-folder-bind-mount","title":"Modification 1: Use a Local Folder (Bind Mount)","text":"

By default, the baseline compose file uses a single named volume (netalertx_data) mounted at /data. This single-volume layout is preferred because NetAlertX manages both configuration and the database under /data (for example, /data/config and /data/db) via its web UI. Using one named volume simplifies permissions and portability: Docker manages the storage and NetAlertX manages the files inside /data.

A two-volume layout that mounts /data/config and /data/db separately (for example, netalertx_config and netalertx_db) is supported for backward compatibility and some advanced workflows, but it is a legacy layout and not recommended for new deployments.

However, if you prefer to have direct, file-level access to your configuration for manual editing, a \"bind mount\" is a simple alternative. This tells Docker to use a specific folder from your computer (the \"host\") inside the container.

How to make the change:

  1. Choose a location on your computer. For example, /local_data_dir.

  2. Create the subfolders: mkdir -p /local_data_dir/config and mkdir -p /local_data_dir/db.

  3. Edit your docker-compose.yml and find the volumes: section (the one inside the netalertx: service).

  4. Comment out (add a # in front) or delete the type: volume blocks for netalertx_config and netalertx_db.

  5. Add new lines pointing to your local folders.

Before (Using Named Volumes - Preferred):

...\n    volumes:\n      - netalertx_config:/data/config:rw #short-form volume (no /path is a short volume)\n      - netalertx_db:/data/db:rw\n...\n

After (Using a Local Folder / Bind Mount): Make sure to replace /local_data_dir with your actual path. The format is <path_on_your_computer>:<path_inside_container>:<options>.

...\n    volumes:\n#      - netalertx_config:/data/config:rw\n#      - netalertx_db:/data/db:rw\n      - /local_data_dir/config:/data/config:rw\n      - /local_data_dir/db:/data/db:rw\n...\n

Now, any files created by NetAlertX in /data/config will appear in your /local_data_dir/config folder.

This same method works for mounting other things, like custom plugins or enterprise NGINX files, as shown in the commented-out examples in the baseline file.
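As a sketch, the host-side preparation for a bind mount looks like this (DATA_DIR is a placeholder for whatever path you choose; the default below is just an example):

```shell
# Pick a host folder for the bind mount; override DATA_DIR to use your own path.
DATA_DIR="${DATA_DIR:-$HOME/netalertx_data}"

# Create the two sub-folders NetAlertX expects under /data
mkdir -p "$DATA_DIR/config" "$DATA_DIR/db"

# The compose entries then follow <host path>:<container path>:<options>
echo "      - $DATA_DIR/config:/data/config:rw"
echo "      - $DATA_DIR/db:/data/db:rw"
```

Creating the folders before the first docker compose up avoids Docker creating them as root, which is a common source of the permission issues described later.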

"},{"location":"DOCKER_COMPOSE/#example-2-external-env-file-for-paths","title":"Example 2: External .env File for Paths","text":"

This method is useful for keeping your paths and other settings separate from your main compose file, making it more portable.

docker-compose.yml changes:

...\nservices:\n  netalertx:\n    environment:\n      - PORT=${PORT}\n      - GRAPHQL_PORT=${GRAPHQL_PORT}\n\n...\n

.env file contents:

PORT=20211\nNETALERTX_NETWORK_MODE=host\nLISTEN_ADDR=0.0.0.0\nGRAPHQL_PORT=20212\n

Run with: sudo docker-compose --env-file /path/to/.env up
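Outside of Compose, the same .env file can be loaded into a plain shell session for inspection or scripting. A minimal sketch using the standard POSIX allexport trick (file path and values are illustrative):

```shell
# Write a sample .env like the one above (illustrative values only):
cat > /tmp/netalertx.env <<'EOF'
PORT=20211
GRAPHQL_PORT=20212
LISTEN_ADDR=0.0.0.0
EOF

# 'set -a' marks every assignment for export, so sourcing the file
# populates the environment with the same variables Compose substitutes:
set -a
. /tmp/netalertx.env
set +a

echo "web on $LISTEN_ADDR:$PORT, GraphQL on $GRAPHQL_PORT"
```

This is handy for double-checking what docker compose --env-file will actually see before deploying.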

"},{"location":"DOCKER_COMPOSE/#example-3-docker-swarm","title":"Example 3: Docker Swarm","text":"

This is for deploying on a Docker Swarm cluster. The key differences from the baseline are the removal of network_mode: from the service, and the addition of deploy: and networks: blocks at both the service and top-level.

Here are the only changes you need to make to the baseline compose file to make it Swarm-compatible.

services:\n  netalertx:\n    ...\n    #    network_mode: ${NETALERTX_NETWORK_MODE:-host} # <-- DELETE THIS LINE\n    ...\n\n    # 2. ADD a 'networks:' block INSIDE the service to connect to the external host network.\n    networks:\n      - outside\n    # 3. ADD a 'deploy:' block to manage the service as a swarm replica.\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n\n\n# 4. ADD a new top-level 'networks:' block at the end of the file to define 'outside' as the external 'host' network.\nnetworks:\n  outside:\n    external:\n      name: \"host\"\n
"},{"location":"DOCKER_INSTALLATION/","title":"Docker Guide","text":""},{"location":"DOCKER_INSTALLATION/#netalertx-network-visibility-asset-intelligence-framework","title":"NetAlertX - Network Visibility & Asset Intelligence Framework","text":""},{"location":"DOCKER_INSTALLATION/#docker-guide-releases-docs-plugins-website","title":"|| Docker guide || Releases || Docs || Plugins || Website","text":"

Head to https://netalertx.com/ for more gifs and screenshots \ud83d\udcf7.

Note

There is also an experimental \ud83e\uddea bare-metal install method available.

"},{"location":"DOCKER_INSTALLATION/#basic-usage","title":"\ud83d\udcd5 Basic Usage","text":"

Warning

You will have to run the container on the host network and specify SCAN_SUBNETS unless you use other plugin scanners. The initial scan can take a few minutes, so please wait 5-10 minutes for the initial discovery to finish.

docker run -d --rm --network=host \\\n  -v /local_data_dir:/data \\\n  -v /etc/localtime:/etc/localtime:ro \\\n  --tmpfs /tmp:uid=${NETALERTX_UID:-20211},gid=${NETALERTX_GID:-20211},mode=1700 \\\n  -e PORT=20211 \\\n  -e APP_CONF_OVERRIDE='{\"GRAPHQL_PORT\":\"20214\"}' \\\n  ghcr.io/netalertx/netalertx:latest\n

Runtime UID/GID: The image defaults to a service user netalertx (UID/GID 20211). A separate readonly lock owner also uses UID/GID 20211 for 004/005 immutability. You can override the runtime UID/GID at build (ARG) or run (--user / compose user:) but must align writable mounts (/data, /tmp*) and tmpfs uid/gid to that choice.

See alternative docker-compose examples.

"},{"location":"DOCKER_INSTALLATION/#default-ports","title":"Default ports","text":"Default Description How to override 20211 Port of the web interface -e PORT=20222 20212 Port of the backend API server -e APP_CONF_OVERRIDE={\"GRAPHQL_PORT\":\"20214\"} or via the GRAPHQL_PORT Setting"},{"location":"DOCKER_INSTALLATION/#docker-environment-variables","title":"Docker environment variables","text":"Variable Description Example/Default Value PUID Runtime UID override, set to 0 to run as root. 20211 PGID Runtime GID override 20211 PORT Port of the web interface 20211 LISTEN_ADDR Set the specific IP Address for the listener address for the nginx webserver (web interface). This could be useful when using multiple subnets to hide the web interface from all untrusted networks. 0.0.0.0 LOADED_PLUGINS Default plugins to load. Plugins cannot be loaded with APP_CONF_OVERRIDE, you need to use this variable instead and then specify the plugins settings with APP_CONF_OVERRIDE. [\"PIHOLE\",\"ASUSWRT\"] APP_CONF_OVERRIDE JSON override for settings (except LOADED_PLUGINS). {\"SCAN_SUBNETS\":\"['192.168.1.0/24 --interface=eth1']\",\"GRAPHQL_PORT\":\"20212\"} ALWAYS_FRESH_INSTALL \u26a0 If true will delete the content of the /db & /config folders. For testing purposes. Can be coupled with watchtower to have an always freshly installed netalertx/netalertx-dev image. true

You can override the default GraphQL port setting GRAPHQL_PORT (set to 20212) by using the APP_CONF_OVERRIDE env variable. LOADED_PLUGINS and settings in APP_CONF_OVERRIDE can be specified via the UI as well.
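Since APP_CONF_OVERRIDE is parsed as JSON, it can be worth validating the string before handing it to Docker. A minimal pre-flight sketch, assuming python3 is available on the host:

```shell
# Validate the override before passing it via -e / environment:
APP_CONF_OVERRIDE='{"GRAPHQL_PORT":"20214"}'

if printf '%s' "$APP_CONF_OVERRIDE" | python3 -m json.tool >/dev/null 2>&1; then
  echo "APP_CONF_OVERRIDE is valid JSON"
else
  echo "APP_CONF_OVERRIDE is NOT valid JSON" >&2
fi
```

Single-quoting the value in the shell keeps the inner double quotes intact, which is the usual pitfall with this variable.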

"},{"location":"DOCKER_INSTALLATION/#docker-paths","title":"Docker paths","text":"

Note

See also Backup strategies.

Required Path Description \u2705 :/data Folder which needs to contain the /db and /config sub-folders. \u2705 /etc/localtime:/etc/localtime:ro Ensuring the timezone is the same as on the server. :/tmp/log Logs folder useful for debugging if you have issues setting up the container :/tmp/api The API endpoint containing static (but regularly updated) json and other files. Path configurable via NETALERTX_API environment variable. :/app/front/plugins/<plugin>/ignore_plugin Map a file ignore_plugin to ignore a plugin. Plugins can be soft-disabled via settings. More in the Plugin docs. :/etc/resolv.conf Use a custom resolv.conf file for better name resolution."},{"location":"DOCKER_INSTALLATION/#folder-structure","title":"Folder structure","text":"

Use separate db and config directories, do not nest them:

data\n\u251c\u2500\u2500 config\n\u2514\u2500\u2500 db\n
"},{"location":"DOCKER_INSTALLATION/#permissions","title":"Permissions","text":"

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

# Use the runtime UID/GID you intend to run with (default 20211:20211)\nsudo chown -R ${NETALERTX_UID:-20211}:${NETALERTX_GID:-20211} /local_data_dir\nsudo chmod -R a+rwx /local_data_dir\n
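To confirm the ownership change actually took effect, a small hypothetical helper like this can compare a directory's owner against the intended runtime IDs (uses stat -c, as provided by GNU coreutils and busybox on most Linux hosts):

```shell
# Hypothetical helper: check a data directory is owned by the runtime UID/GID
# before starting the container (defaults mirror the image's 20211:20211):
check_owner() {
  dir="$1"; want_uid="${2:-20211}"; want_gid="${3:-20211}"
  have="$(stat -c '%u:%g' "$dir")"
  if [ "$have" = "$want_uid:$want_gid" ]; then
    echo "OK: $dir owned by $have"
  else
    echo "MISMATCH: $dir owned by $have, expected $want_uid:$want_gid" >&2
    return 1
  fi
}

# Example: check a directory you own against your own IDs
tmpdir="$(mktemp -d)"
check_owner "$tmpdir" "$(id -u)" "$(id -g)"
```

In practice you would run check_owner /local_data_dir 20211 20211 (or your overridden IDs) after the chown above.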
"},{"location":"DOCKER_INSTALLATION/#initial-setup","title":"Initial setup","text":""},{"location":"DOCKER_INSTALLATION/#setting-up-scanners","title":"Setting up scanners","text":"

You have to specify which network(s) should be scanned. This is done by entering subnets that are accessible from the host. If you use the default ARPSCAN plugin, you have to specify at least one valid subnet and interface in the SCAN_SUBNETS setting. See the documentation on How to set up multiple SUBNETS, VLANs and what are limitations for troubleshooting and more advanced scenarios.

If you are running PiHole you can synchronize devices directly. Check the PiHole configuration guide for details.

Note

You can bulk-import devices via the CSV import method.

"},{"location":"DOCKER_INSTALLATION/#community-guides","title":"Community guides","text":"

You can read or watch several community configuration guides in Chinese, Korean, German, or French.

Please note these might be outdated. Rely on official documentation first.

"},{"location":"DOCKER_INSTALLATION/#common-issues","title":"Common issues","text":""},{"location":"DOCKER_INSTALLATION/#support-me","title":"\ud83d\udc99 Support me","text":"

\ud83d\udce7 Email me at netalertx@gmail.com if you want to get in touch or if I should add other sponsorship platforms.

"},{"location":"DOCKER_MAINTENANCE/","title":"The NetAlertX Container Operator's Guide","text":"

Warning

\u26a0\ufe0f Important: The docker-compose file has recently changed. Carefully read the Migration guide for detailed instructions.

This guide assumes you are starting with the official docker-compose.yml file provided with the project. We strongly recommend you start with or migrate to this file as your baseline and modify it to suit your specific needs (e.g., changing file paths). While there are many ways to configure NetAlertX, the default file is designed to meet the mandatory security baseline with layer-2 networking capabilities while operating securely and without startup warnings.

This guide provides direct, concise solutions for common NetAlertX administrative tasks. It is structured to help you identify a problem, implement the solution, and understand the details.

"},{"location":"DOCKER_MAINTENANCE/#guide-contents","title":"Guide Contents","text":"

Note

Other relevant resources - Fixing Permission Issues - Handling Backups - Accessing Application Logs

"},{"location":"DOCKER_MAINTENANCE/#task-using-a-local-folder-for-configuration","title":"Task: Using a Local Folder for Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem","title":"Problem","text":"

You want to edit your app.conf and other configuration files directly from your host machine, instead of using a Docker-managed volume.

"},{"location":"DOCKER_MAINTENANCE/#solution","title":"Solution","text":"
  1. Stop the container:

docker-compose down\n
2. (Optional but Recommended) Back up your data using the method in Part 1. 3. Create a local folder on your host machine (e.g., /data/netalertx_config). 4. Edit docker-compose.yml:

...\n    volumes:\n      # - type: volume\n      #   source: netalertx_config\n      #   target: /data/config\n      #   read_only: false\n...\n    # Example custom local folder called /data/netalertx_config\n    - type: bind\n      source: /data/netalertx_config\n      target: /data/config\n      read_only: false\n...\n
5. (Optional) Restore your backup. 6. Restart the container:

docker-compose up -d\n
"},{"location":"DOCKER_MAINTENANCE/#about-this-method","title":"About This Method","text":"

This replaces the Docker-managed volume with a \"bind mount.\" This is a direct mapping between a folder on your host computer (/data/netalertx_config) and a folder inside the container (/data/config), allowing you to edit the files directly.

"},{"location":"DOCKER_MAINTENANCE/#task-migrating-from-a-local-folder-to-a-docker-volume","title":"Task: Migrating from a Local Folder to a Docker Volume","text":""},{"location":"DOCKER_MAINTENANCE/#problem_1","title":"Problem","text":"

You are currently using a local folder (bind mount) for your configuration (e.g., /data/netalertx_config) and want to switch to the recommended Docker-managed volume (netalertx_config).

"},{"location":"DOCKER_MAINTENANCE/#solution_1","title":"Solution","text":"
  1. Stop the container:

docker-compose down\n
2. Edit docker-compose.yml:

...\n    volumes:\n      - type: volume\n        source: netalertx_config\n        target: /data/config\n        read_only: false\n...\n    # Example custom local folder called /data/netalertx_config\n    # - type: bind\n    #   source: /data/netalertx_config\n    #   target: /data/config\n    #   read_only: false\n...\n
3. (Optional) Initialize the volume:

docker-compose up -d && docker-compose down\n
4. Run the migration command (replace /data/netalertx_config with your actual path):

docker run --rm -v netalertx_config:/config -v /data/netalertx_config:/local-config alpine \\\n  sh -c \"tar -C /local-config -c . | tar -C /config -x\"\n
5. Restart the container:

docker-compose up -d\n
"},{"location":"DOCKER_MAINTENANCE/#about-this-method_1","title":"About This Method","text":"

This uses a temporary alpine container that mounts both your source folder (/local-config) and destination volume (/config). The tar ... | tar ... command safely copies all files, including hidden ones, preserving structure.
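The same tar-pipe pattern can be tried on two local folders first; note the lone . after -c, which makes tar include dotfiles that a naive cp src/* would miss:

```shell
# Demonstrate the tar-pipe copy between two scratch folders,
# including a hidden file:
SRC="$(mktemp -d)"; DST="$(mktemp -d)"
echo 'settings' > "$SRC/app.conf"
echo 'secret'   > "$SRC/.hidden"

# Archive everything under SRC (dotfiles included) and unpack into DST:
tar -C "$SRC" -c . | tar -C "$DST" -x

ls -A "$DST"    # app.conf and .hidden both arrive
```

Ownership and permissions are preserved the same way when the pipe runs inside the alpine container above.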

"},{"location":"DOCKER_MAINTENANCE/#task-applying-a-custom-nginx-configuration","title":"Task: Applying a Custom Nginx Configuration","text":""},{"location":"DOCKER_MAINTENANCE/#problem_2","title":"Problem","text":"

You need to override the default Nginx configuration to add features like LDAP, SSO, or custom SSL settings.

"},{"location":"DOCKER_MAINTENANCE/#solution_2","title":"Solution","text":"
  1. Stop the container:

docker-compose down\n
2. Create your custom config file on your host (e.g., /data/my-netalertx.conf). 3. Edit docker-compose.yml:

...\n    # Use a custom Enterprise-configured nginx config for ldap or other settings\n    - /data/my-netalertx.conf:/tmp/nginx/active-config/netalertx.conf:ro\n...\n
4. Restart the container:

docker-compose up -d\n
"},{"location":"DOCKER_MAINTENANCE/#about-this-method_2","title":"About This Method","text":"

Docker\u2019s bind mount overlays your host file (my-netalertx.conf) on top of the default file inside the container. The container remains read-only, but Nginx reads your file as if it were the default.

"},{"location":"DOCKER_MAINTENANCE/#task-mounting-additional-files-for-plugins","title":"Task: Mounting Additional Files for Plugins","text":""},{"location":"DOCKER_MAINTENANCE/#problem_3","title":"Problem","text":"

A plugin (like DHCPLSS) needs to read a file from your host machine (e.g., /var/lib/dhcp/dhcpd.leases).

"},{"location":"DOCKER_MAINTENANCE/#solution_3","title":"Solution","text":"
  1. Stop the container:

docker-compose down\n
2. Edit docker-compose.yml and add a new line under the volumes: section:

...\n    volumes:\n...\n      # Mount for DHCPLSS plugin\n      - /var/lib/dhcp/dhcpd.leases:/mnt/dhcpd.leases:ro\n...\n
3. Restart the container:

docker-compose up -d\n
4. In the NetAlertX web UI, configure the plugin to read from:

/mnt/dhcpd.leases\n
"},{"location":"DOCKER_MAINTENANCE/#about-this-method_3","title":"About This Method","text":"

This maps your host file to a new, read-only (:ro) location inside the container. The plugin can then safely read this file without exposing anything else on your host filesystem.

"},{"location":"DOCKER_PORTAINER/","title":"Deploying NetAlertX in Portainer (via Stacks)","text":"

This guide shows you how to set up NetAlertX using Portainer\u2019s Stacks feature.

"},{"location":"DOCKER_PORTAINER/#1-prepare-your-host","title":"1. Prepare Your Host","text":"

Before deploying, make sure you have a folder on your Docker host for NetAlertX data. Replace APP_FOLDER with your preferred location, for example /local_data_dir here:

mkdir -p /local_data_dir/netalertx/config\nmkdir -p /local_data_dir/netalertx/db\nmkdir -p /local_data_dir/netalertx/log\n
"},{"location":"DOCKER_PORTAINER/#2-open-portainer-stacks","title":"2. Open Portainer Stacks","text":"
  1. Log in to your Portainer UI.
  2. Navigate to Stacks \u2192 Add stack.
  3. Give your stack a name (e.g., netalertx).
"},{"location":"DOCKER_PORTAINER/#3-paste-the-stack-configuration","title":"3. Paste the Stack Configuration","text":"

Copy and paste the following YAML into the Web editor:

services:\n  netalertx:\n    container_name: netalertx\n    # Use this line for stable release\n    image: \"ghcr.io/netalertx/netalertx:latest\"\n    # Or, use this for the latest development build\n    # image: \"ghcr.io/netalertx/netalertx-dev:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n      - CHOWN\n      - SETUID\n      - SETGID\n    volumes:\n      - ${APP_FOLDER}/netalertx/config:/data/config\n      - ${APP_FOLDER}/netalertx/db:/data/db\n      # to sync with system time\n      - /etc/localtime:/etc/localtime:ro\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    environment:\n      - PORT=${PORT}\n      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}\n
"},{"location":"DOCKER_PORTAINER/#4-configure-environment-variables","title":"4. Configure Environment Variables","text":"

In the Environment variables section of Portainer, add the following:

Additional environment variables (advanced / testing):

Note: these variables are primarily useful for non-production scenarios (testing, CI, or specific deployments) and are processed by the entrypoint scripts. See entrypoint.sh and entrypoint.d/* for exact behaviour and available check names.

"},{"location":"DOCKER_PORTAINER/#5-ensure-permissions","title":"5. Ensure permissions","text":"

Tip

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"DOCKER_PORTAINER/#6-deploy-the-stack","title":"6. Deploy the Stack","text":"
  1. Scroll down and click Deploy the stack.
  2. Portainer will pull the image and start NetAlertX.
  3. Once running, access the app at the PORT you configured (default 20211):
http://<your-docker-host-ip>:20211\n
"},{"location":"DOCKER_PORTAINER/#7-verify-and-troubleshoot","title":"7. Verify and Troubleshoot","text":"

Once the application is running, configure it by reading the initial setup guide, or troubleshoot common issues.

"},{"location":"DOCKER_SWARM/","title":"Docker Swarm Deployment Guide (IPvlan)","text":"

Note

This is community-contributed. Due to environment, setup, or networking differences, results may vary. Please open a PR to improve it instead of creating an issue, as the maintainer is not actively maintaining it.

This guide describes how to deploy NetAlertX in a Docker Swarm environment using an ipvlan network. This enables the container to receive a LAN IP address directly, which is ideal for network monitoring.

"},{"location":"DOCKER_SWARM/#step-1-create-an-ipvlan-config-only-network-on-all-nodes","title":"\u2699\ufe0f Step 1: Create an IPvlan Config-Only Network on All Nodes","text":"

Run this command on each node in the Swarm.

docker network create -d ipvlan \\\n  --subnet=192.168.1.0/24 \\              # \ud83d\udd27 Replace with your LAN subnet\n  --gateway=192.168.1.1 \\                # \ud83d\udd27 Replace with your LAN gateway\n  -o ipvlan_mode=l2 \\\n  -o parent=eno1 \\                       # \ud83d\udd27 Replace with your network interface (e.g., eth0, eno1)\n  --config-only \\\n  ipvlan-swarm-config\n
"},{"location":"DOCKER_SWARM/#step-2-create-the-swarm-scoped-ipvlan-network-one-time-setup","title":"\ud83d\udda5\ufe0f Step 2: Create the Swarm-Scoped IPvlan Network (One-Time Setup)","text":"

Run this on one Swarm manager node only.

docker network create -d ipvlan \\\n  --scope swarm \\\n  --config-from ipvlan-swarm-config \\\n  swarm-ipvlan\n
"},{"location":"DOCKER_SWARM/#step-3-deploy-netalertx-with-docker-compose","title":"\ud83e\uddfe Step 3: Deploy NetAlertX with Docker Compose","text":"

Use the following Compose snippet to deploy NetAlertX with a static LAN IP assigned via the swarm-ipvlan network.

services:\n  netalertx:\n    image: ghcr.io/netalertx/netalertx:latest\n...\n    networks:\n      swarm-ipvlan:\n        ipv4_address: 192.168.1.240     # \u26a0\ufe0f Choose a free IP from your LAN\n    deploy:\n      mode: replicated\n      replicas: 1\n      restart_policy:\n        condition: on-failure\n      placement:\n        constraints:\n          - node.role == manager        # \ud83d\udd04 Or use: node.labels.netalertx == true\n\nnetworks:\n  swarm-ipvlan:\n    external: true\n
"},{"location":"DOCKER_SWARM/#notes","title":"\u2705 Notes","text":""},{"location":"FEATURES/","title":"NetAlertX Features Overview","text":"

NetAlertX is a lightweight, flexible platform for monitoring networks, tracking devices, and delivering actionable alerts. It combines discovery, change detection, and multi-channel notification into a single, streamlined solution.

"},{"location":"FEATURES/#network-discovery-device-tracking","title":"Network Discovery & Device Tracking","text":""},{"location":"FEATURES/#lan-visualization","title":"LAN Visualization","text":""},{"location":"FEATURES/#event-driven-alerts","title":"Event-Driven Alerts","text":""},{"location":"FEATURES/#workflows-for-implementing-business-rules","title":"Workflows for implementing Business rules","text":""},{"location":"FEATURES/#multi-channel-notification","title":"Multi-Channel Notification","text":""},{"location":"FEATURES/#security-compliance-friendly-logging","title":"Security & Compliance-Friendly Logging","text":""},{"location":"FEATURES/#mcp-server-and-openapi","title":"MCP Server and OpenAPI","text":""},{"location":"FEATURES/#extensible-open-source","title":"Extensible & Open Source","text":"

NetAlertX provides a centralized, proactive approach to network awareness, combining device visibility, event-driven alerting, and flexible notifications into a single, deployable solution. Its design prioritizes efficiency, clarity, and actionable insights, making it ideal for monitoring dynamic environments.

"},{"location":"FILE_PERMISSIONS/","title":"Managing File Permissions for NetAlertX on a Read-Only Container","text":"

Sometimes, permission issues arise if your existing host directories were created by a previous container running as root or another UID. The container will fail to start with \"Permission Denied\" errors.

Tip

NetAlertX runs in a secure, read-only Alpine-based container under a dedicated netalertx user (UID 20211, GID 20211). All writable paths are either mounted as persistent volumes or tmpfs filesystems. This ensures consistent file ownership and prevents privilege escalation.

As a test, start the container with all data in non-persistent volumes. If this works, the issue is likely related to the permissions of your persistent data mount locations on the server.

docker run --rm --network=host \\\n  -v /etc/localtime:/etc/localtime:ro \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  -e PORT=20211 \\\n  ghcr.io/netalertx/netalertx:latest\n

Warning

The above should only be used as a test: once the container stops, all data is lost.

"},{"location":"FILE_PERMISSIONS/#writable-paths","title":"Writable Paths","text":"

NetAlertX requires certain paths to be writable at runtime. These paths should be mounted either as host volumes or tmpfs in your docker-compose.yml or docker run command:

Path Purpose Notes /data/config Application configuration Persistent volume recommended /data/db Database files Persistent volume recommended /tmp/log Logs Lives under /tmp; optional host bind to retain logs /tmp/api API cache Subdirectory of /tmp /tmp/nginx/active-config Active nginx configuration override Mount /tmp (or override specific file) /tmp/run Runtime directories for nginx & PHP Subdirectory of /tmp /tmp PHP session save directory Backed by tmpfs for runtime writes

Mounting /tmp as tmpfs automatically covers all of its subdirectories (log, api, run, nginx/active-config, etc.).

All these paths will have UID 20211 / GID 20211 inside the container. Files on the host will appear owned by 20211:20211.
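The mode=1700 in the tmpfs options maps to standard Unix permission bits (sticky bit plus rwx for the owning user only); a quick local sketch assuming GNU or busybox stat:

```shell
# Preview what mode=1700 means on an ordinary directory:
d="$(mktemp -d)"
chmod 1700 "$d"        # sticky bit + rwx------ for the owner
stat -c '%a %U' "$d"   # prints the octal mode and owning user
```

So inside the container, only the netalertx user (UID 20211) can read or write under /tmp, which is the intent of the baseline tmpfs line.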

"},{"location":"FILE_PERMISSIONS/#running-as-root","title":"Running as root","text":"

You can override the default PUID and PGID using environment variables:

...\n  environment:\n      PUID: 20211                             # Runtime PUID override, set to 0 to run as root\n      PGID: 20211                             # Runtime PGID override\n...\n

To run as the root user, it usually looks like this (verify the IDs on your server first by executing id root):

...\n  environment:\n      PUID: 0                             # Runtime PUID override, set to 0 to run as root\n      PGID: 100                           # Runtime PGID override\n...\n

If you use a custom PUID (e.g. 0) and PGID (e.g. 100), make sure you also update the tmpfs ownership, e.g. /tmp:uid=0,gid=100...

"},{"location":"FILE_PERMISSIONS/#solution","title":"Solution","text":"
  1. Run the container once as root (--user \"0\") to allow it to correct permissions automatically:
docker run -it --rm --name netalertx --user \"0\" \\\n  -v /local_data_dir:/data \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  ghcr.io/netalertx/netalertx:latest\n
  2. Wait for logs showing permissions being fixed. The container will then hang intentionally.
  3. Press Ctrl+C to stop the container.
  4. Start the container normally with your docker-compose.yml or docker run command.

The container startup script detects root and runs chown -R 20211:20211 on all volumes, fixing ownership for the secure netalertx user.

Tip

If you are facing permissions issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"FILE_PERMISSIONS/#example-docker-composeyml-with-tmpfs","title":"Example: docker-compose.yml with tmpfs","text":"
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/netalertx/netalertx\"\n    network_mode: \"host\"\n    cap_drop:                                       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:                                        # Add only the necessary capabilities\n      - NET_ADMIN                                   # Required for ARP scanning\n      - NET_RAW                                     # Required for raw socket operations\n      - NET_BIND_SERVICE                            # Required to bind to privileged ports (nbtscan)\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir:/data\n      - /etc/localtime:/etc/localtime\n    environment:\n      - PORT=20211\n    tmpfs:\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n

This setup ensures all writable paths are either in tmpfs or host-mounted, and the container never writes outside of controlled volumes.

"},{"location":"FIX_OFFLINE_DETECTION/","title":"Troubleshooting: Devices Show Offline When They Are Online","text":"

In some network setups, certain devices may intermittently appear as offline in NetAlertX, even though they are connected and responsive. This issue is often more noticeable with devices that have higher IP addresses within the subnet.

Note

Network presence graph showing increased dropouts before enabling additional ICMP scans, and continuous online presence after following this guide. The graph also shows a sudden spike in dropouts, probably caused by a device software update.

"},{"location":"FIX_OFFLINE_DETECTION/#symptoms","title":"Symptoms","text":""},{"location":"FIX_OFFLINE_DETECTION/#cause","title":"Cause","text":"

This issue is typically related to scanning limitations:

"},{"location":"FIX_OFFLINE_DETECTION/#recommended-fixes","title":"Recommended Fixes","text":"

To improve presence accuracy and reduce false offline states:

"},{"location":"FIX_OFFLINE_DETECTION/#increase-arp-scan-timeout","title":"\u2705 Increase ARP Scan Timeout","text":"

Extend the ARP scanner timeout and DURATION to ensure full subnet coverage:

ARPSCAN_RUN_TIMEOUT=360\nARPSCAN_DURATION=30\n

Adjust based on your network size and device count.
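For intuition on why larger networks need more headroom, here is a back-of-envelope estimate. The per-probe pacing and retry count below are illustrative assumptions, not arp-scan defaults:

```shell
# Rough worst-case estimate for sweeping one /24 (all numbers are illustrative)
HOSTS=254        # usable addresses in a /24
RETRIES=3        # probes sent to each silent host
MS_PER_PROBE=10  # assumed pacing per probe, in milliseconds

TOTAL_S=$(( HOSTS * RETRIES * MS_PER_PROBE / 1000 ))
echo "worst-case sweep: ~${TOTAL_S}s per /24"
```

Multiply by the number of subnets you scan; if the result approaches ARPSCAN_RUN_TIMEOUT, raise the timeout.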

"},{"location":"FIX_OFFLINE_DETECTION/#add-icmp-ping-scanning","title":"\u2705 Add ICMP (Ping) Scanning","text":"

Enable the ICMP scan plugin to complement ARP detection. ICMP is often more reliable for detecting active hosts, especially when ARP fails.

Important

If using AdGuard/Pi-hole: If devices still show offline after enabling ICMP, temporarily disable your content blocker. If the issue disappears, whitelist the NetAlertX host IP in your blocker's settings to prevent pings from being dropped.

"},{"location":"FIX_OFFLINE_DETECTION/#use-multiple-detection-methods","title":"\u2705 Use Multiple Detection Methods","text":"

A combined approach greatly improves detection robustness:

This hybrid strategy increases reliability, especially for down detection and alerting. See other plugins that might be compatible with your setup, and review the benefits and drawbacks of individual scan methods in their respective docs.

"},{"location":"FIX_OFFLINE_DETECTION/#results","title":"Results","text":"

After increasing the ARP timeout and adding ICMP scanning (on select IP ranges), users typically report:

"},{"location":"FIX_OFFLINE_DETECTION/#summary","title":"Summary","text":"Setting Recommendation ARPSCAN_RUN_TIMEOUT Increase to ensure scans reach all IPs ICMP Scan Enable to detect devices ARP might miss Multi-method Scanning Use a mix of ARP, ICMP, and NMAP-based methods

Tip: Each environment is unique. Consider fine-tuning scan settings based on your network size, device behavior, and desired detection accuracy.

Let us know in the NetAlertX Discussions if you have further feedback or edge cases.

See also Remote Networks for more advanced setups.

"},{"location":"FRONTEND_DEVELOPMENT/","title":"Frontend development","text":"

This page contains tips for frontend development when extending NetAlertX. Guiding principles are:

  1. Maintainability
  2. Extendability
  3. Reusability
  4. Placing more functionality into Plugins and enhancing core Plugins functionality

That means that, when writing code, focus on reusing what's available instead of writing quick fixes, and on creating reusable functions instead of bespoke functionality.

"},{"location":"FRONTEND_DEVELOPMENT/#examples","title":"\ud83d\udd0d Examples","text":"

Some examples of how to apply the above:

Example 1

I want to implement a scan function. Options would be:

  1. To add a manual scan functionality to the deviceDetails.php page.
  2. To create a separate page that handles the execution of the scan.
  3. To create a configurable Plugin.

From the above, number 3 would be the most appropriate solution. Then followed by number 2. Number 1 would be approved only in special circumstances.

Example 2

I want to change the behavior of the application. Options to implement this could be:

  1. Hard-code the changes in the code.
  2. Implement the changes and add settings to influence the behavior in the initialize.py file so the user can adjust these.
  3. Implement the changes and add settings via a setting-only plugin.
  4. Implement the changes in a way so the behavior can be toggled on each plugin so the core capabilities of Plugins get extended.

From the above, number 4 would be the most appropriate solution. Then followed by number 3. Number 1 or 2 would be approved only in special circumstances.

"},{"location":"FRONTEND_DEVELOPMENT/#frontend-tips","title":"\ud83d\udca1 Frontend tips","text":"

Some useful frontend JavaScript functions:

Check the common.js file for more frontend functions.

"},{"location":"HELPER_SCRIPTS/","title":"Community Helper Scripts Overview","text":"

This page provides an overview of community-contributed scripts for NetAlertX. These scripts are not actively maintained and are provided as-is.

"},{"location":"HELPER_SCRIPTS/#community-scripts","title":"Community Scripts","text":"

You can find all scripts in this scripts GitHub folder.

Script Name Description Author Version Release Date New Devices Checkmk Script Checks for new devices in NetAlertX and reports status to Checkmk. N/A 1.0 08-Jan-2025 DB Cleanup Script Queries and removes old device-related entries from the database. laxduke 1.0 23-Dec-2024 OPNsense DHCP Lease Converter Retrieves DHCP lease data from OPNsense and converts it to dnsmasq format. im-redactd 1.0 24-Feb-2025"},{"location":"HELPER_SCRIPTS/#important-notes","title":"Important Notes","text":"

Note

These scripts are community-supplied and not actively maintained. Use at your own discretion.

For detailed usage instructions, refer to each script's documentation in the scripts GitHub folder.

"},{"location":"HOME_ASSISTANT/","title":"Home Assistant integration overview","text":"

NetAlertX comes with MQTT support, allowing you to surface all detected devices as devices in Home Assistant. It also supplies a collection of stats, such as the number of online devices.

Tip

You can also install NetAlertX as a Home Assistant add-on via the alexbelgium/hassio-addons repository. This is only possible if you run a supervised instance of Home Assistant. If not, you can still run NetAlertX in a separate Docker container and follow this guide to configure MQTT.

"},{"location":"HOME_ASSISTANT/#note","title":"\u26a0 Note","text":""},{"location":"HOME_ASSISTANT/#guide","title":"\ud83e\udded Guide","text":"

\ud83d\udca1 This guide was tested only with the Mosquitto MQTT broker

  1. Enable Mosquitto MQTT in Home Assistant by following the documentation

  2. Configure a user name and password on your broker.

  3. Note down the following details that you will need to configure NetAlertX:

  4. Open the NetAlertX > Settings > MQTT settings group

"},{"location":"HOME_ASSISTANT/#screenshots","title":"\ud83d\udcf7 Screenshots","text":""},{"location":"HOME_ASSISTANT/#troubleshooting","title":"Troubleshooting","text":"

If you can't see all devices detected, run sudo arp-scan --interface=eth0 192.168.1.0/24 (adjust these values based on your setup; read the Subnets docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.

You can access the NetAlertX container via Portainer on your host or via ssh. The container name will be something like addon_db21ed7f_netalertx (you can copy the db21ed7f_netalertx part from the browser when accessing the UI of NetAlertX).

"},{"location":"HOME_ASSISTANT/#accessing-the-netalertx-container-via-ssh","title":"Accessing the NetAlertX container via SSH","text":"
  1. Log into your Home Assistant host via SSH

local@local:~ $ ssh pi@192.168.1.9\n
2. Find the NetAlertX container name, in this case addon_db21ed7f_netalertx

pi@raspberrypi:~ $ sudo docker container ls | grep netalertx\n06c540d97f67   ghcr.io/alexbelgium/netalertx-armv7:25.3.1                   \"/init\"               6 days ago      Up 6 days (healthy)    addon_db21ed7f_netalertx\n
  3. SSH into the NetAlertX container
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx  /bin/sh\n/ #\n
  4. Execute a test arp-scan scan
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0\nInterface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9\nStarting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)\n192.168.1.1     74:ac:b9:54:09:fb       Ubiquiti Networks Inc.\n192.168.1.21    74:ac:b9:ad:c3:30       Ubiquiti Networks Inc.\n192.168.1.58    1c:69:7a:a2:34:7b       EliteGroup Computer Systems Co., LTD\n192.168.1.57    f4:92:bf:a3:f3:56       Ubiquiti Networks Inc.\n...\n

If your output doesn't contain results similar to the above, double-check your subnet and interface, and if you are dealing with an inaccessible network segment, read the Remote networks documentation.

"},{"location":"HW_INSTALL/","title":"How to install NetAlertX on the server hardware","text":"

To download and install NetAlertX directly on your hardware/server, use the curl or wget commands at the bottom of this page.

Note

This is an Experimental feature \ud83e\uddea and it relies on community support.

\ud83d\ude4f Looking for maintainers for this installation method \ud83d\ude42 Current community volunteers: - slammingprogramming - ingoratsdorf

There is no guarantee that the install script or any other script will gracefully handle other installed software. Data loss is a possibility; it is recommended to install NetAlertX using the supplied Docker image.

Warning

A warning about the installation method below: piping to bash is controversial and may be dangerous, as you cannot see the code that's about to be executed on your system.

If you trust this repo, you can download the install script via one of the methods (curl/wget) below and it will do its best to install NetAlertX on your system.

Alternatively you can download the installation script from the repository and check the code yourself.

NetAlertX will be installed in /app and run on port number 20211.

Some facts about what the HW install setup changes/installs and where (this list may not be exhaustive):

"},{"location":"HW_INSTALL/#limitations","title":"Limitations","text":"

Tip

If the below fails, try grabbing and installing one of the previous releases and run the installation from the zip package.

These commands will download the install.debian12.sh script from the GitHub repository, make it executable with chmod, and then run it using ./install.debian12.sh.

Make sure you have the necessary permissions to execute the script.

"},{"location":"HW_INSTALL/#debian-12-bookworm","title":"\ud83d\udce5 Debian 12 (Bookworm)","text":""},{"location":"HW_INSTALL/#installation-via-curl","title":"Installation via curl","text":"
curl -o install.debian12.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/install.debian12.sh -O install.debian12.sh && sudo chmod +x install.debian12.sh && sudo ./install.debian12.sh\n
"},{"location":"HW_INSTALL/#ubuntu-24-noble-numbat","title":"\ud83d\udce5 Ubuntu 24 (Noble Numbat)","text":"

Note

Maintained by ingoratsdorf

"},{"location":"HW_INSTALL/#installation-via-curl_1","title":"Installation via curl","text":"
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#installation-via-wget_1","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh\n
"},{"location":"HW_INSTALL/#bare-metal-proxmox","title":"\ud83d\udce5 Bare Metal - Proxmox","text":"

Note

Use this on a clean LXC/VM running Debian 13 OR Ubuntu 24. The script will detect the OS and build accordingly. Maintained by JVKeller

"},{"location":"HW_INSTALL/#installation-via-wget_2","title":"Installation via wget","text":"
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh\n
"},{"location":"ICONS/","title":"Icons","text":""},{"location":"ICONS/#icons-overview","title":"Icons overview","text":"

Icons are used to visually distinguish devices in the app in most of the device listing tables and the network tree.

"},{"location":"ICONS/#icons-support","title":"Icons Support","text":"

Two types of icons are supported:

You can assign icons individually on each device in the Details tab.

"},{"location":"ICONS/#adding-new-icons","title":"Adding new icons","text":"
  1. Get an SVG or a Font Awesome HTML code

Copying the SVG (for example from iconify.design):

Copying the HTML code from Font Awesome.

  2. Navigate to the device you want to use the icon on and click the \"+\" icon:

  3. Paste in the copied HTML or SVG code and click \"OK\":

  4. \"Save\" the device

Note

If you want to mass-apply an icon to all devices of the same device type (Field: Type), you can click the mass-copy button (next to the \"+\" button). A confirmation prompt is displayed. If you proceed, the icons of all devices with the same device type as the current device will be overwritten with the current device's icon.

"},{"location":"ICONS/#font-awesome-pro-icons","title":"Font Awesome Pro icons","text":"

If you own the premium package of Font Awesome icons you can mount it in your Docker container the following way:

/font-awesome:/app/front/lib/font-awesome:ro\n

You can use the full range of Font Awesome icons afterwards.

"},{"location":"INITIAL_SETUP/","title":"\u26a1 Quick Start Guide","text":"

Get NetAlertX up and running in a few simple steps.

"},{"location":"INITIAL_SETUP/#1-configure-scanner-plugins","title":"1. Configure Scanner Plugin(s)","text":"

Tip

Enable additional plugins under Settings \u2192 LOADED_PLUGINS. Make sure to save your changes and reload the page to activate them.

Initial configuration: ARPSCAN, INTRNT

Note

ARPSCAN and INTRNT scan the current network. You can complement them with other \ud83d\udd0d dev scanner plugins like NMAPDEV, or import devices using \ud83d\udce5 importer plugins. See the Subnet & VLAN Setup Guide and Remote Networks for advanced configurations.

"},{"location":"INITIAL_SETUP/#2-choose-a-publisher-plugin","title":"2. Choose a Publisher Plugin","text":"

Initial configuration: SMTP

Note

Configure your SMTP settings or enable additional \u25b6\ufe0f publisher plugins to send alerts. For more flexibility, try \ud83d\udcda _publisher_apprise, which supports over 80 notification services.

"},{"location":"INITIAL_SETUP/#3-set-up-a-network-topology-diagram","title":"3. Set Up a Network Topology Diagram","text":"

Initial configuration: The app auto-selects a root node (MAC internet) and attempts to identify other network devices by vendor or name.

Note

Visualize and manage your network using the Network Guide. Some plugins (e.g., UNFIMP) build the topology automatically, or you can use Custom Workflows to generate it based on your own rules.

"},{"location":"INITIAL_SETUP/#4-configure-notifications","title":"4. Configure Notifications","text":"

Initial configuration: Notifies on new_devices, down_devices, and events as defined in NTFPRCS_INCLUDED_SECTIONS.

Note

Notification settings support global, plugin-specific, and per-device rules. For fine-tuning, refer to the Notification Guide.

"},{"location":"INITIAL_SETUP/#5-set-up-workflows","title":"5. Set Up Workflows","text":"

Initial configuration: N/A

Note

Automate responses to device status changes, group management, topology updates, and more. See the Workflows Guide to simplify your network operations.

"},{"location":"INITIAL_SETUP/#6-backup-your-configuration","title":"6. Backup Your Configuration","text":"

Initial configuration: The CSVBCKP plugin creates a daily backup to /config/devices.csv.

Note

For a complete backup strategy, follow the Backup Guide.

"},{"location":"INITIAL_SETUP/#7-optional-create-custom-plugins","title":"7. (Optional) Create Custom Plugins","text":"

Initial configuration: N/A

Note

Build your own scanner, importer, or publisher plugin. See the Plugin Development Guide and included video tutorials.

"},{"location":"INITIAL_SETUP/#recommended-guides","title":"\ud83d\udcc1 Recommended Guides","text":""},{"location":"INITIAL_SETUP/#troubleshooting-help","title":"\ud83d\udee0\ufe0f Troubleshooting & Help","text":"

Before opening a new issue:


"},{"location":"INSTALLATION/","title":"Installation","text":""},{"location":"INSTALLATION/#installation-options","title":"Installation options","text":"

NetAlertX can be installed in several ways. The best supported option is Docker, followed by a supervised Home Assistant instance, as an Unraid app, and lastly, on bare metal.

"},{"location":"INSTALLATION/#help","title":"Help","text":"

If you are facing issues, please spend a few minutes searching the docs and existing issues first.

Note

If you can't find a solution anywhere, ask in Discord if you think it's a quick question, otherwise open a new issue. Please fill in as much as possible to speed up the help process.

"},{"location":"LOGGING/","title":"Logging","text":"

NetAlertX comes with several logs that help identify application issues, including nginx, app, and plugin logs. For plugin-specific log debugging, please read the Debug Plugins guide.

Note

When debugging any issue, increase the LOG_LEVEL Setting as per the Debug tips documentation.

"},{"location":"LOGGING/#main-logs","title":"Main logs","text":"

You can find most of the logs exposed in the UI under Maintenance -> Logs.

If the UI is inaccessible, you can access them under /tmp/log.

In Maintenance -> Logs you can purge logs, download the full log file, or filter the lines by a substring to narrow down your search.
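The same narrowing-down works from the command line on a downloaded log file. The snippet below demonstrates it on a generated sample; sample.log and its line format are made up for illustration:

```shell
# Create a small stand-in for a downloaded log file
printf '%s\n' \
  '10:00:01 [INFO] scan started' \
  '10:00:05 [ERROR] arp-scan timed out' \
  '10:00:09 [INFO] scan finished' > sample.log

# Show numbered, case-insensitive matches with one line of context
grep -n -i -C 1 "error" sample.log
```

Point the same grep at the file you downloaded from the UI, or at the files under /tmp/log.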

"},{"location":"LOGGING/#plugin-logging","title":"Plugin logging","text":"

If a Plugin supplies data to the main app, it's done either via a SQL query or via a script that updates the last_result.log file in the plugin log folder (app/log/plugins/). These files are processed at the end of the scan and deleted on successful processing.

In most cases, the data is then displayed in the application under Integrations -> Plugins (or Device -> Plugins if the plugin supplies device-specific data).

"},{"location":"LOGGING/#viewing-logs-on-the-file-system","title":"Viewing Logs on the File System","text":"

By default, you will not find any log files on the filesystem. The container is read-only and writes logs to a temporary in-memory filesystem (tmpfs) for security and performance. The application follows container best practices by writing all logs to the standard output (stdout) and standard error (stderr) streams. Docker's logging driver (set in docker-compose.yml) captures this stream automatically, allowing you to access it with the docker logs <container_name> command.

docker logs netalertx\n
* To follow the logs live:

docker logs -f netalertx\n
"},{"location":"LOGGING/#enabling-persistent-file-based-logs","title":"Enabling Persistent File-Based Logs","text":"

The default logs are erased every time the container restarts because they are stored in temporary in-memory storage (tmpfs). If you need to keep a persistent, file-based log history, follow the steps below.

Note

This might lead to performance degradation, so this approach is only suggested when actively debugging issues. See the Performance optimization documentation for details.

  1. Stop the container:
docker-compose down\n
  2. Edit your docker-compose.yml file:

  3. Comment out the /tmp/log line under the tmpfs: section.

  4. Uncomment the \"Retain logs\" line under the volumes: section and set your desired host path.

...\n    tmpfs:\n      # - \"/tmp/log:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n...\n    volumes:\n...\n      # Retain logs - comment out tmpfs /tmp/log if you want to retain logs between container restarts\n      - /home/adam/netalertx_logs:/tmp/log\n...\n
5. Restart the container:

docker-compose up -d\n

This change stops Docker from mounting a temporary in-memory volume at /tmp/log. Instead, it bind-mounts a persistent folder from your host computer (e.g., /home/adam/netalertx_logs) to that same location inside the container.

"},{"location":"MIGRATION/","title":"Migration","text":"

When upgrading from older versions of NetAlertX (or PiAlert by jokob-sk), follow the migration steps below to ensure your data and configuration are properly transferred.

Tip

It's always important to have a backup strategy in place.

"},{"location":"MIGRATION/#migration-scenarios","title":"Migration scenarios","text":""},{"location":"MIGRATION/#10-manual-migration","title":"1.0 Manual Migration","text":"

You can migrate data manually, for example by exporting and importing devices using the CSV import method.

"},{"location":"MIGRATION/#11-migration-from-pialert-to-netalertx-v25524","title":"1.1 Migration from PiAlert to NetAlertX v25.5.24","text":""},{"location":"MIGRATION/#steps","title":"STEPS:","text":"

The application will automatically migrate the database, configuration, and all device information. A banner message will appear at the top of the web UI reminding you to update your Docker mount points.

  1. Stop the container
  2. Back up your setup
  3. Update the Docker file mount locations in your docker-compose.yml or docker run command (See below New Docker mount locations).
  4. Rename the DB and conf files to app.db and app.conf and place them in the appropriate location.
  5. Start the container
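Step 4 can be done on the host with a couple of mv commands. The sketch below assumes your data lives under /local_data_dir, as in the other examples in this guide:

```shell
# Rename the old PiAlert files to the new NetAlertX names (no-op if already renamed)
rename_old_files() {
  data_dir="$1"
  if [ -f "$data_dir/config/pialert.conf" ]; then
    mv "$data_dir/config/pialert.conf" "$data_dir/config/app.conf"
  fi
  if [ -f "$data_dir/db/pialert.db" ]; then
    mv "$data_dir/db/pialert.db" "$data_dir/db/app.db"
  fi
}

rename_old_files /local_data_dir
```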

Tip

If you have trouble accessing past backups, config or database files you can copy them into the newly mapped directories, for example by running this command in the container: cp -r /data/config /home/pi/pialert/config/old_backup_files. This should create a folder in the config directory called old_backup_files containing all the files in that location. Another approach is to map the old location and the new one at the same time to copy things over.

"},{"location":"MIGRATION/#new-docker-mount-locations","title":"New Docker mount locations","text":"

The internal application path in the container has changed from /home/pi/pialert to /app. Update your volume mounts as follows:

Old mount point New mount point /home/pi/pialert/config /data/config /home/pi/pialert/db /data/db
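If your compose file still references the old paths, a one-off sed rewrite can update the two standard mounts. The demo below runs on a generated snippet so the effect is visible; point the same sed at your real docker-compose.yml (and keep the backup):

```shell
# Demo snippet containing the old PiAlert mount targets
cat > compose-snippet.yml <<'EOF'
    volumes:
      - /local_data_dir/config:/home/pi/pialert/config
      - /local_data_dir/db:/home/pi/pialert/db
EOF

cp compose-snippet.yml compose-snippet.yml.bak   # always keep a backup
sed -i \
  -e 's|/home/pi/pialert/config|/data/config|g' \
  -e 's|/home/pi/pialert/db|/data/db|g' \
  compose-snippet.yml

grep '/data/' compose-snippet.yml
```

Note: on macOS/BSD sed, use sed -i '' instead of sed -i.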

If you were mounting files directly, please note the file names have changed:

Old file name New file name pialert.conf app.conf pialert.db app.db

Note

The application automatically creates symlinks from the old database and config locations to the new ones, so data loss should not occur. Read the backup strategies guide to backup your setup.

"},{"location":"MIGRATION/#examples","title":"Examples","text":"

Examples of docker files with the new mount points.

"},{"location":"MIGRATION/#example-1-mapping-folders","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#old-docker-composeyml","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"jokobsk/pialert:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/home/pi/pialert/config\n      - /local_data_dir/db:/home/pi/pialert/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed\n    container_name: netalertx                 # \ud83c\udd95 This has changed\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config         # \ud83c\udd95 This has changed\n      - /local_data_dir/db:/data/db                 # \ud83c\udd95 This has changed\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log        # \ud83c\udd95 This has changed\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#example-2-mapping-files","title":"Example 2: Mapping files","text":"

Note

The recommendation is to map folders as in Example 1, map files directly only when needed.

"},{"location":"MIGRATION/#old-docker-composeyml_1","title":"Old docker-compose.yml","text":"
services:\n  pialert:\n    container_name: pialert\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/jokob-sk/netalertx-dev:latest\"\n    image: \"jokobsk/pialert:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config/pialert.conf:/home/pi/pialert/config/pialert.conf\n      - /local_data_dir/db/pialert.db:/home/pi/pialert/db/pialert.db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/home/pi/pialert/front/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#new-docker-composeyml_1","title":"New docker-compose.yml","text":"
services:\n  netalertx:                                  # \ud83c\udd95 This has changed\n    container_name: netalertx                 # \ud83c\udd95 This has changed\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config/app.conf:/data/config/app.conf # \ud83c\udd95 This has changed\n      - /local_data_dir/db/app.db:/data/db/app.db             # \ud83c\udd95 This has changed\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log                  # \ud83c\udd95 This has changed\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#12-migration-from-netalertx-v25524","title":"1.2 Migration from NetAlertX v25.5.24","text":"

Versions before v25.10.1 require an intermediate migration through v25.5.24 to ensure database compatibility. Skipping this step may cause compatibility issues due to database schema changes introduced after v25.5.24.

"},{"location":"MIGRATION/#steps_1","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.5.24 by pinning the release version (See Examples below)
  4. Start the container and verify everything works as expected.
  5. Stop the container
  6. Upgrade to v25.10.1 by pinning the release version (See Examples below)
  7. Start the container and verify everything works as expected.
"},{"location":"MIGRATION/#examples_1","title":"Examples","text":"

Examples of docker files with the tagged version.

"},{"location":"MIGRATION/#example-1-mapping-folders_1","title":"Example 1: Mapping folders","text":""},{"location":"MIGRATION/#docker-composeyml-changes","title":"docker-compose.yml changes","text":"
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.5.24\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config\n      - /local_data_dir/db:/data/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/data/config\n      - /local_data_dir/db:/data/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
"},{"location":"MIGRATION/#13-migration-from-netalertx-v25101","title":"1.3 Migration from NetAlertX v25.10.1","text":"

Starting from v25.10.1, the container uses a more secure, read-only runtime environment, which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as tmpfs or permanent writable volumes, with sufficient access permissions. The data location has also changed from /app/db and /app/config to /data/db and /data/config. See the detailed steps below.

"},{"location":"MIGRATION/#steps_2","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Upgrade to v25.10.1 by pinning the release version (See the example below)
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.10.1\"         # \ud83c\udd95 This is important\n    network_mode: \"host\"\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir/config:/app/config\n      - /local_data_dir/db:/app/db\n      # (optional) useful for debugging if you have issues setting up the container\n      - /local_data_dir/logs:/tmp/log\n    environment:\n      - TZ=Europe/Berlin\n      - PORT=20211\n
  4. Start the container and verify everything works as expected.
  5. Stop the container.
  6. Update the docker-compose.yml as per the example below.

services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/jokob-sk/netalertx:25.11.29\"  # \ud83c\udd95 This has changed\n    network_mode: \"host\"\n    cap_drop:                # \ud83c\udd95 New line\n      - ALL                  # \ud83c\udd95 New line\n    cap_add:                 # \ud83c\udd95 New line\n      - NET_RAW              # \ud83c\udd95 New line\n      - NET_ADMIN            # \ud83c\udd95 New line\n      - NET_BIND_SERVICE     # \ud83c\udd95 New line\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir:/data  # \ud83c\udd95 This folder contains your /db and /config directories and the parent changed from /app to /data\n      # Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured\n      - /etc/localtime:/etc/localtime:ro    # \ud83c\udd95 New line\n    environment:\n      - PORT=20211\n    # \ud83c\udd95 New \"tmpfs\" section START \ud83d\udd3d\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    # \ud83c\udd95 New \"tmpfs\" section END  \ud83d\udd3c\n
7. Perform a one-off migration to the latest netalertx image and 20211 user.

Note

The examples below assume your /config and /db folders are stored in local_data_dir. Replace this path with your actual configuration directory. netalertx is the container name, which might differ from your setup.

Automated approach:

Run the container with the --user \"0\" parameter. Please note, some systems will require the manual approach below.

docker run -it --rm --name netalertx --user \"0\" \\\n  -v /local_data_dir/config:/app/config \\\n  -v /local_data_dir/db:/app/db \\\n  -v /local_data_dir:/data \\\n  --tmpfs /tmp:uid=20211,gid=20211,mode=1700 \\\n  ghcr.io/jokob-sk/netalertx:latest\n

Stop the container and run it as you would normally.

Manual approach:

Use the manual approach if the Automated approach fails. Execute the following commands:

sudo chown -R 20211:20211 /local_data_dir\nsudo chmod -R a+rwx /local_data_dir\n
  1. Start the container and verify everything works as expected.
  2. Check the Permissions -> Writable-paths section to see which directories to mount if you'd like to access the API or log files.
"},{"location":"MIGRATION/#14-migration-from-netalertx-v251129","title":"1.4 Migration from NetAlertX v25.11.29","text":"

As per user feedback, we\u2019ve re-introduced the ability to control which user the application runs as via the PUID and PGID environment variables. This required additional changes to the container to safely handle permission adjustments at runtime.

"},{"location":"MIGRATION/#steps_3","title":"STEPS:","text":"
  1. Stop the container
  2. Back up your setup
  3. Update the docker-compose.yml as per the example below.
services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/netalertx/netalertx\"\n    network_mode: \"host\"\n    cap_drop:\n      - ALL\n    cap_add:\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n      - CHOWN                # \ud83c\udd95 New line\n      - SETUID               # \ud83c\udd95 New line\n      - SETGID               # \ud83c\udd95 New line\n    restart: unless-stopped\n    volumes:\n      - /local_data_dir:/data\n      # Ensuring the timezone is the same as on the server - make sure also the TIMEZONE setting is configured\n      - /etc/localtime:/etc/localtime:ro\n    environment:\n      - PORT=20211\n      # - PUID=0    # New optional variable to run as root\n      # - PGID=100  # New optional variable to run as root\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n
  1. If you use a custom PUID (e.g. 0) and PGID (e.g. 100), make sure you also update the tmpfs ownership, e.g. /tmp:uid=0,gid=100...
  2. Start the container and verify everything works as expected.
  3. If running a reverse proxy, review the Reverse proxy documentation, as a new BACKEND_API_URL setting was added.
"},{"location":"NAME_RESOLUTION/","title":"Device Name Resolution","text":"

Name resolution in NetAlertX relies on multiple plugins to resolve device names from IP addresses. If you are seeing (name not found) as device names, follow these steps to diagnose and fix the issue.

Tip

Before proceeding, make sure Reverse DNS is enabled on your network. You can control how names are handled and cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.
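As an illustration, a cleanup regex of this kind can be exercised in isolation. The pattern below is a hypothetical example, not the default NEWDEV_NAME_CLEANUP_REGEX value:

```python
import re

# Hypothetical cleanup pattern -- the actual NEWDEV_NAME_CLEANUP_REGEX value
# is whatever you configure; this one strips common local-network suffixes.
CLEANUP_REGEX = r"(\.local|\.lan|\.home)$"

def clean_name(raw_name: str) -> str:
    """Apply a cleanup regex to a resolved device name."""
    return re.sub(CLEANUP_REGEX, "", raw_name.strip(), flags=re.IGNORECASE)

print(clean_name("nas-01.local"))  # -> nas-01
print(clean_name("printer.LAN"))   # -> printer
```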

"},{"location":"NAME_RESOLUTION/#required-plugins","title":"Required Plugins","text":"

For best results, ensure the following name resolution plugins are enabled:

You can check which plugins are active in your Settings section and enable any that are missing.

There are other plugins that can supply device names as well, but they rely on bespoke hardware and services. See Plugins overview for details and look for plugins with name discovery (\ud83c\udd8e) features.

"},{"location":"NAME_RESOLUTION/#checking-logs","title":"Checking Logs","text":"

If names are not resolving, check the logs for errors or timeouts.

See how to explore logs in the Logging guide.

Logs will show which plugins attempted resolution and any failures encountered.

"},{"location":"NAME_RESOLUTION/#adjusting-timeout-settings","title":"Adjusting Timeout Settings","text":"

If resolution is slow or failing due to timeouts, increase the timeout settings in your configuration, for example:

NSLOOKUP_RUN_TIMEOUT = 30\n

Raising the timeout may help if your network has high latency or slow DNS responses.
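The effect of a *_RUN_TIMEOUT-style setting can be sketched with a generic wrapper. The function name and the treatment of a timeout as "no result" are illustrative assumptions, not NetAlertX internals:

```python
import subprocess

def run_with_timeout(cmd: list, timeout_sec: float):
    """Run an external lookup command, giving up after timeout_sec seconds.

    Mirrors the idea behind *_RUN_TIMEOUT settings: a hung resolver should
    be treated as 'no result' instead of stalling the whole scan.
    """
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_sec)
        return result.stdout
    except subprocess.TimeoutExpired:
        return None

print(run_with_timeout(["sleep", "2"], 0.2))  # -> None (timed out)
print(run_with_timeout(["echo", "hi"], 5))    # -> hi
```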

"},{"location":"NAME_RESOLUTION/#checking-plugin-objects","title":"Checking Plugin Objects","text":"

Each plugin stores results in its respective object. You can inspect these objects to see if they contain valid name resolution data.

See Logging guide and Debug plugins guides for details.

If the object contains no results, the issue may be with DNS settings or network access.

"},{"location":"NAME_RESOLUTION/#improving-name-resolution","title":"Improving name resolution","text":"

For more details on how to improve name resolution, refer to the Reverse DNS Documentation.

"},{"location":"NETWORK_TREE/","title":"How to Set Up Your Network Page","text":"

The Network page lets you map how devices connect \u2014 visually and logically. It\u2019s especially useful for planning infrastructure, assigning parent-child relationships, and spotting gaps.

To get started, you\u2019ll need to define at least one root node and mark certain devices as network nodes (like Switches or Routers).

Start by creating a root device with the MAC address Internet, if the application didn\u2019t create one already. This special MAC address (Internet) is required for the root network node \u2014 no other value is currently supported. Set its Type to a valid network type \u2014 such as Router or Gateway.

Tip

If you don\u2019t have one, use the Create new device button on the Devices page to add a root device.

"},{"location":"NETWORK_TREE/#quick-setup","title":"\u26a1 Quick Setup","text":"
  1. Open the device you want to use as a network node (e.g. a Switch).
  2. Set its Type to one of the following: AP, Firewall, Gateway, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN (Or add custom types under Settings \u2192 General \u2192 NETWORK_DEVICE_TYPES.)
  3. Save the device.
  4. Go to the Network page \u2014 supported device types will appear as tabs.
  5. Use the Assign button to connect unassigned devices to a network node.
  6. If the Port is 0 or empty, a Wi-Fi icon is shown. Otherwise, an Ethernet icon appears.
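The icon rule in the last step boils down to a single check, sketched here (the function name is illustrative):

```python
def connection_icon(port) -> str:
    """Mirror the Network page rule: port 0 or empty -> Wi-Fi, otherwise Ethernet."""
    if port in (0, "", None):
        return "wifi"
    return "ethernet"

print(connection_icon(0))   # -> wifi
print(connection_icon(""))  # -> wifi
print(connection_icon(3))   # -> ethernet
```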

Note

Use bulk editing with CSV Export to fix Internet root assignments or update many devices at once.

"},{"location":"NETWORK_TREE/#example-setting-up-a-raspberrypi-as-a-switch","title":"Example: Setting up a raspberrypi as a Switch","text":"

Let\u2019s walk through setting up a device named raspberrypi to act as a network Switch that other devices connect through.

"},{"location":"NETWORK_TREE/#1-set-device-type-and-parent","title":"1. Set Device Type and Parent","text":"

Note

Only certain device types can act as network nodes: AP, Firewall, Gateway, Hypervisor, PLC, Powerline, Router, Switch, USB LAN Adapter, USB WIFI Adapter, WLAN You can add custom types via the NETWORK_DEVICE_TYPES setting.

"},{"location":"NETWORK_TREE/#2-confirm-the-device-appears-as-a-network-node","title":"2. Confirm The Device Appears as a Network Node","text":"

You can confirm that raspberrypi now acts as a network device in two places:

"},{"location":"NETWORK_TREE/#3-assign-connected-devices","title":"3. Assign Connected Devices","text":"

Hovering over devices in the tree reveals connection details and tooltips for quick inspection.

Note

Selecting certain relationship types hides the device in the default device views. You can change this behavior by adjusting the UI_hide_rel_types setting, which by default is set to [\"nic\",\"virtual\"]. This means devices with devParentRelType set to nic or virtual will not be shown. All devices, regardless of relationship type, are always accessible in the All devices view.
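A minimal sketch of this filtering, assuming devices are represented as dictionaries with a devParentRelType key (the devName key is illustrative):

```python
# Default value of the UI_hide_rel_types setting
UI_HIDE_REL_TYPES = ["nic", "virtual"]

def visible_devices(devices, hidden_types=UI_HIDE_REL_TYPES):
    """Filter out devices whose devParentRelType is hidden by default.
    (The 'All devices' view skips this filter entirely.)"""
    return [d for d in devices if d.get("devParentRelType") not in hidden_types]

devices = [
    {"devName": "laptop", "devParentRelType": "default"},
    {"devName": "laptop-wifi-nic", "devParentRelType": "nic"},
    {"devName": "vm-01", "devParentRelType": "virtual"},
]
print([d["devName"] for d in visible_devices(devices)])  # -> ['laptop']
```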

"},{"location":"NETWORK_TREE/#troubleshooting","title":"Troubleshooting","text":"

If the Network page doesn't load, reset your parent nodes. This can be done with bulk-edit.

  1. Backup your setup just in case
  2. Navigate to Maintenance -> Multi edit ( (1), (2) )
  3. Add all devices (3) (clear the cache with the refresh button if you seem to be missing devices in the dropdown (4))
  4. Select None as parent node (5) and save (6)

  1. Now find your root Internet node by searching for \"Internet\" in the My Devices view
  2. If not found, make sure the INTRNT plugin runs and creates the internet device
  3. If the above fails, create a manual device with the MAC set to Internet

  1. You should now be able to start configuring your Network view again.
"},{"location":"NETWORK_TREE/#summary","title":"\u2705 Summary","text":"

To configure devices on the Network page:

Need to reset or undo changes? Use backups or bulk editing to manage devices at scale. You can also automate device assignment with Workflows.

"},{"location":"NOTIFICATIONS/","title":"Notifications \ud83d\udce7","text":"

There are four ways to influence notifications:

  1. On the device itself
  2. On the settings of the plugin
  3. Globally
  4. Ignoring devices

Note

It's recommended to use the same schedule interval for all plugins responsible for scanning devices, otherwise false positives might be reported if different devices are discovered by different plugins. Check the Settings > Enabled settings section for a warning:

"},{"location":"NOTIFICATIONS/#device-settings","title":"Device settings \ud83d\udcbb","text":"

The following device properties influence notifications. You can:

  1. Alert Events - Enables alerts for connections, disconnections, and IP changes (down and down reconnected notifications are still sent even if this is disabled).
  2. Alert Down - Alerts when a device goes down. This setting overrides a disabled Alert Events setting, so you will get a notification of a device going down even if you don't have Alert Events ticked. Disabling this will disable down and down reconnected notifications on the device.
  3. Can Sleep - Marks the device as sleep-capable (e.g. a battery-powered sensor that deep-sleeps between readings). When enabled, offline periods within the Alert down after (sleep) (NTFPRCS_sleep_time) global window are shown as Sleeping (aqua badge \ud83c\udf19) instead of Down, and no down alert is fired during that window. Once the window expires the device falls back to normal down-alert logic. \u26a0 Requires Alert Down to be enabled \u2014 sleeping suppresses the alert during the window only.
  4. Skip repeated notifications - Pauses repeat notifications for this device for a given time, useful if, for example, you know there is a temporary issue.
  5. Require NICs Online - Indicates whether this device should be considered online only if all associated NICs (devices with the nic relationship type) are online. If disabled, the device is considered online if any NIC is online. If a NIC is online, it sets the parent (this) device's status to online irrespective of the detected device's status. The Relationship type is set on the child device.
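The "Require NICs Online" rule described above can be sketched as follows. This is a simplified illustration of the precedence, not the actual implementation:

```python
def device_online(own_status: bool, nic_statuses: list, require_all_nics: bool) -> bool:
    """Simplified sketch of the 'Require NICs Online' rule.

    - require_all_nics enabled: online only if every NIC is online
    - disabled: any online NIC forces this device online, regardless of
      its own detected status
    """
    if not nic_statuses:          # no NICs associated -> own status decides
        return own_status
    if require_all_nics:
        return all(nic_statuses)
    return any(nic_statuses) or own_status

print(device_online(False, [True, False], require_all_nics=False))  # -> True
print(device_online(True,  [True, False], require_all_nics=True))   # -> False
```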

Note

Please read through the NTFPRCS plugin documentation to understand how device and global settings influence the notification processing.

"},{"location":"NOTIFICATIONS/#plugin-settings","title":"Plugin settings \ud83d\udd0c","text":"

On almost all plugins there are 2 core settings, <plugin>_WATCH and <plugin>_REPORT_ON.

  1. <plugin>_WATCH specifies the columns the app should watch. If a watched column changes, the device state is considered changed. This changed state then determines, based on the <plugin>_REPORT_ON setting, whether notifications are sent out.
  2. <plugin>_REPORT_ON lets you specify which events the app should notify you about. This is tied to the <plugin>_WATCH setting: if you select watched-changed and in <plugin>_WATCH you only select Watched_Value1, then a notification is triggered when Watched_Value1 changes from its previous value, but no notification is sent if Watched_Value2 changes.
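The interplay of the two settings can be sketched like this (a simplified illustration covering only the watched-changed event):

```python
def should_notify(watched_columns, changed_columns, report_on):
    """Sketch: 'watched-changed' fires only when a *watched* column changed."""
    if "watched-changed" in report_on:
        return any(col in watched_columns for col in changed_columns)
    return False

watch = ["Watched_Value1"]
print(should_notify(watch, ["Watched_Value1"], ["watched-changed"]))  # -> True
print(should_notify(watch, ["Watched_Value2"], ["watched-changed"]))  # -> False
```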

Click the Read more in the docs link at the top of each plugin to get more details on how the given plugin works.

"},{"location":"NOTIFICATIONS/#global-settings","title":"Global settings \u2699","text":"

In Notification Processing settings, you can specify blanket rules. These allow you to specify exceptions to the Plugin and Device settings and will override those.

  1. Notify on (NTFPRCS_INCLUDED_SECTIONS) allows you to specify which events trigger notifications. Usual setups will have new_devices, down_devices, and possibly down_reconnected set. Including plugin (dependent on the plugin's <plugin>_WATCH and <plugin>_REPORT_ON settings) and events (dependent on the on-device Alert Events setting) might be too noisy for most setups. More info in the NTFPRCS plugin on what events these selections include.
  2. Alert down after (NTFPRCS_alert_down_time) is useful if you want to wait for some time before the system sends out a down notification for a device. This is related to the on-device Alert down setting and only devices with this checked will trigger a down notification.
  3. Alert down after (sleep) (NTFPRCS_sleep_time) sets the sleep window in minutes. If a device has Can Sleep enabled and goes offline, it is shown as Sleeping (aqua \ud83c\udf19 badge) for this many minutes before down-alert logic kicks in. Default is 30 minutes. Changing this setting takes effect after saving \u2014 no restart required.
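The sleep window's interaction with down detection can be sketched as follows, assuming Alert Down is enabled on the device (simplified; not the actual NTFPRCS code):

```python
def device_badge(can_sleep: bool, minutes_offline: int, sleep_time: int = 30) -> str:
    """Sketch of Can Sleep + NTFPRCS_sleep_time:
    within the sleep window a sleep-capable device shows as Sleeping,
    afterwards normal down-alert logic applies."""
    if can_sleep and minutes_offline <= sleep_time:
        return "Sleeping"
    return "Down"

print(device_badge(True, 10))   # -> Sleeping (inside the 30 min window)
print(device_badge(True, 45))   # -> Down (window expired)
print(device_badge(False, 10))  # -> Down (device is not sleep-capable)
```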

You can filter out unwanted notifications globally. This could be because of a misbehaving device (GoogleNest/GoogleHub (See also ARPSAN docs and the --exclude-broadcast flag)) which flips between IP addresses, or because you want to ignore new device notifications of a certain pattern.

  1. Events Filter (NTFPRCS_event_condition) - Filter out Events from notifications.
  2. New Devices Filter (NTFPRCS_new_dev_condition) - Filter out New Devices from notifications, but log and keep a new device in the system.
"},{"location":"NOTIFICATIONS/#ignoring-devices","title":"Ignoring devices \ud83d\udcbb","text":"

You can completely ignore detected devices globally. This could be because your instance detects docker containers, you want to ignore devices from a specific manufacturer via MAC rules, or you want to ignore devices on a specific IP range.

  1. Ignored MACs (NEWDEV_ignored_MACs) - List of MACs to ignore.
  2. Ignored IPs (NEWDEV_ignored_IPs) - List of IPs to ignore.
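A minimal sketch of this kind of filtering, using exact matches only (the sample MAC and IP values are illustrative):

```python
IGNORED_MACS = {"02:42:ac:11:00:02"}   # e.g. a docker container's MAC
IGNORED_IPS = {"192.168.1.250"}

def is_ignored(mac: str, ip: str) -> bool:
    """Sketch of NEWDEV_ignored_MACs / NEWDEV_ignored_IPs filtering."""
    return mac.lower() in IGNORED_MACS or ip in IGNORED_IPS

print(is_ignored("02:42:AC:11:00:02", "10.0.0.9"))     # -> True (MAC match)
print(is_ignored("aa:bb:cc:dd:ee:ff", "192.168.1.10")) # -> False
```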
"},{"location":"PERFORMANCE/","title":"Performance Optimization Guide","text":"

There are several ways to improve the application's performance. The application has been tested on a range of devices, from Raspberry Pi 4 units to NAS and NUC systems. If you are running the application on a lower-end device, fine-tuning the performance settings can significantly improve the user experience.

"},{"location":"PERFORMANCE/#common-causes-of-slowness","title":"Common Causes of Slowness","text":"

Performance issues are usually caused by:

The application performs regular maintenance and database cleanup. If these tasks are failing, you will see slowdowns.

"},{"location":"PERFORMANCE/#database-and-log-file-size","title":"Database and Log File Size","text":"

A large database or oversized log files can impact performance. You can check database and table sizes on the Maintenance page.

Note

"},{"location":"PERFORMANCE/#maintenance-plugins","title":"Maintenance Plugins","text":"

Two plugins help maintain the system\u2019s performance:

"},{"location":"PERFORMANCE/#1-database-cleanup-dbclnp","title":"1. Database Cleanup (DBCLNP)","text":""},{"location":"PERFORMANCE/#2-maintenance-maint","title":"2. Maintenance (MAINT)","text":""},{"location":"PERFORMANCE/#database-performance-tuning","title":"Database Performance Tuning","text":"

The application automatically maintains database performance as data accumulates. However, you can adjust settings to balance CPU usage, disk usage, and responsiveness.

"},{"location":"PERFORMANCE/#wal-size-tuning-storage-vs-cpu-tradeoff","title":"WAL Size Tuning (Storage vs. CPU Tradeoff)","text":"

The SQLite Write-Ahead Log (WAL) is a temporary file that grows during normal operation. On systems with constrained resources (NAS, Raspberry Pi), controlling WAL size is important.

Setting: PRAGMA_JOURNAL_SIZE_LIMIT (default: 50 MB)

Setting Effect Use Case 10\u201320 MB Smaller storage footprint; more frequent disk operations NAS with SD card (storage priority) 50 MB (default) Balanced; recommended for most setups General use 75\u2013100 MB Smoother performance; larger WAL on disk High-speed NAS or servers

Recommendation: For NAS devices with SD cards, leave at default (50 MB) or increase slightly (75 MB). Avoid very low values (< 10 MB) as they cause frequent disk thrashing and CPU spikes.
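The setting maps to a standard SQLite PRAGMA. A quick way to see the mechanism, using the default 50 MB limit:

```python
import os
import sqlite3
import tempfile

# Demonstrates the SQLite PRAGMA behind PRAGMA_JOURNAL_SIZE_LIMIT.
# 50 MB (the default) expressed in bytes:
LIMIT_BYTES = 50 * 1024 * 1024

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(db_path)
con.execute("PRAGMA journal_mode=WAL")  # WAL journaling, as used at runtime
limit = con.execute(f"PRAGMA journal_size_limit={LIMIT_BYTES}").fetchone()[0]
print(limit)  # the PRAGMA echoes back the new limit in bytes
con.close()
```

Very low limits force SQLite to truncate the WAL file frequently, which is where the disk-thrashing warning comes from.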

"},{"location":"PERFORMANCE/#automatic-cleanup","title":"Automatic Cleanup","text":"

The DB cleanup plugin (DBCLNP) automatically optimizes query performance and trims old data:

If cleanup fails, performance degrades quickly. Check Maintenance \u2192 Logs for errors. If you see frequent failures, increase the timeout (DBCLNP_RUN_TIMEOUT).

"},{"location":"PERFORMANCE/#scan-frequency-and-coverage","title":"Scan Frequency and Coverage","text":"

Frequent scans increase resource usage, network traffic, and database read/write cycles.

"},{"location":"PERFORMANCE/#optimizations","title":"Optimizations","text":"

Some plugins also include options to limit which devices are scanned. If certain plugins consistently run long, consider narrowing their scope.

For example, the ICMP plugin allows scanning only IPs that match a specific regular expression.
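For example, a scope-limiting regular expression might look like this (the pattern is illustrative; see the ICMP plugin docs for the exact setting format):

```python
import re

# Hypothetical scope filter: only ping the 192.168.1.x range.
SCAN_PATTERN = re.compile(r"^192\.168\.1\.\d{1,3}$")

ips = ["192.168.1.15", "192.168.2.15", "10.0.0.3"]
in_scope = [ip for ip in ips if SCAN_PATTERN.match(ip)]
print(in_scope)  # -> ['192.168.1.15']
```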

"},{"location":"PERFORMANCE/#storing-temporary-files-in-memory","title":"Storing Temporary Files in Memory","text":"

On devices with slower I/O, you can improve performance by storing temporary files (and optionally the database) in memory using tmpfs.

Warning

Storing the database in tmpfs is generally discouraged. Use this only if device data and historical records are not required to persist. If needed, you can pair this setup with the SYNC plugin to store important persistent data on another node. See the Plugins docs for details.

Using tmpfs reduces disk writes and speeds up I/O, but all data stored in memory will be lost on restart.

Below is an optimized docker-compose.yml snippet using non-persistent logs, API data, and DB:

services:\n  netalertx:\n    container_name: netalertx\n    # Use this line for the stable release\n    image: \"ghcr.io/netalertx/netalertx:latest\"\n    # Or use this line for the latest development build\n    # image: \"ghcr.io/netalertx/netalertx-dev:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n      - CHOWN\n      - SETUID\n      - SETGID\n\n    volumes:\n      - ${APP_FOLDER}/netalertx/config:/data/config\n      - /etc/localtime:/etc/localtime:ro\n\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n      - \"/data/db:uid=20211,gid=20211,mode=1700\"  # \u26a0 You will lose historical data on restart\n\n    environment:\n      - PORT=${PORT}\n      - APP_CONF_OVERRIDE=${APP_CONF_OVERRIDE}\n
"},{"location":"PIHOLE_GUIDE/","title":"Integration with PiHole","text":"

NetAlertX comes with three plugins suitable for integrating with your existing PiHole instance. The first uses the v6 API, the second uses a direct SQLite DB connection, and the third leverages the dhcp.leases file generated by PiHole. You can combine multiple approaches and also supplement scans with other plugins.

"},{"location":"PIHOLE_GUIDE/#approach-1-piholeapi-plugin-import-devices-directly-from-pihole-v6-api","title":"Approach 1: PIHOLEAPI Plugin - Import devices directly from PiHole v6 API","text":"

To use this approach make sure the Web UI password in Pi-hole is set.

Setting Description Recommended value PIHOLEAPI_URL Your Pi-hole base URL including port. http://192.168.1.82:9880/ PIHOLEAPI_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * PIHOLEAPI_PASSWORD The Web UI base64 encoded (en-/decoding handled by the app) admin password. passw0rd PIHOLEAPI_SSL_VERIFY Whether to verify HTTPS certificates. Disable only for self-signed certificates. False PIHOLEAPI_API_MAXCLIENTS Maximum number of devices to request from Pi-hole. Defaults are usually fine. 500 PIHOLEAPI_FAKE_MAC Generate FAKE MAC from IP. False

Check the PiHole API plugin readme for details and troubleshooting.

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes","title":"docker-compose changes","text":"

No changes needed

"},{"location":"PIHOLE_GUIDE/#approach-2-dhcplss-plugin-import-devices-from-the-pihole-dhcp-leases-file","title":"Approach 2: DHCPLSS Plugin - Import devices from the PiHole DHCP leases file","text":""},{"location":"PIHOLE_GUIDE/#settings","title":"Settings","text":"Setting Description Recommended value DHCPLSS_RUN When the plugin should run. schedule DHCPLSS_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * DHCPLSS_paths_to_check You need to map the value in this setting in the docker-compose.yml file. The in-container path must contain pihole so it's parsed correctly. ['/etc/pihole/dhcp.leases']

Check the DHCPLSS plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_1","title":"docker-compose changes","text":"Path Description :/etc/pihole/dhcp.leases PiHole's dhcp.leases file. Required if you want to use PiHole dhcp.leases file. This has to be matched with a corresponding DHCPLSS_paths_to_check setting entry (the path in the container must contain pihole)"},{"location":"PIHOLE_GUIDE/#approach-3-pihole-plugin-import-devices-directly-from-the-pihole-database","title":"Approach 3: PIHOLE Plugin - Import devices directly from the PiHole database","text":"Setting Description Recommended value PIHOLE_RUN When the plugin should run. schedule PIHOLE_RUN_SCHD If you run multiple device scanner plugins, align the schedules of all plugins to the same value. */5 * * * * PIHOLE_DB_PATH You need to map the value in this setting in the docker-compose.yml file. /etc/pihole/pihole-FTL.db

Check the PiHole plugin readme for details

"},{"location":"PIHOLE_GUIDE/#docker-compose-changes_2","title":"docker-compose changes","text":"Path Description :/etc/pihole/pihole-FTL.db PiHole's pihole-FTL.db database file.

Check out other plugins that can help you discover more about your network or check how to scan Remote networks.

"},{"location":"PLUGINS/","title":"\ud83d\udd0c Plugins","text":"

NetAlertX supports additional plugins to extend its functionality, each with its own settings and options. Plugins can be loaded via the General -> LOADED_PLUGINS setting. For custom plugin development, refer to the Plugin development guide.

Note

Please check this Plugins debugging guide and the corresponding Plugin documentation in the table below if you are facing issues.

"},{"location":"PLUGINS/#quick-start","title":"\u26a1 Quick start","text":"

Tip

You can load additional Plugins via the General -> LOADED_PLUGINS setting. You need to save the settings for the new plugins to load (cache/page reload may be necessary).

  1. Pick your \ud83d\udd0d dev scanner plugin (e.g. ARPSCAN or NMAPDEV), or import devices into the application with an \ud83d\udce5 importer plugin. (See Enabling plugins below)
  2. Pick a \u25b6\ufe0f publisher plugin, if you want to send notifications. If you don't see a publisher you'd like to use, look at the \ud83d\udcda_publisher_apprise plugin which is a proxy for over 80 notification services.
  3. Setup your Network topology diagram
  4. Fine-tune Notifications
  5. Setup Workflows
  6. Backup your setup
  7. Contribute and Create custom plugins
"},{"location":"PLUGINS/#plugin-types","title":"Plugin types","text":"Plugin type Icon Description When to run Required Data source ? publisher \u25b6\ufe0f Sending notifications to services. on_notification \u2716 Script dev scanner \ud83d\udd0d Create devices in the app, manages online/offline device status. schedule \u2716 Script / SQLite DB name discovery \ud83c\udd8e Discovers names of devices via various protocols. before_name_updates, schedule \u2716 Script importer \ud83d\udce5 Importing devices from another service. schedule \u2716 Script / SQLite DB system \u2699 Providing core system functionality. schedule / always on \u2716/\u2714 Script / Template other \u267b Other plugins misc \u2716 Script / Template"},{"location":"PLUGINS/#features","title":"Features","text":"Icon Description \ud83d\udda7 Auto-imports the network topology diagram \ud83d\udd04 Has the option to sync some data back into the plugin source"},{"location":"PLUGINS/#available-plugins","title":"Available Plugins","text":"

Device-detecting plugins insert values into the CurrentScan database table. Plugins that are not required are safe to ignore; however, it makes sense to have at least some device-detecting plugins enabled, such as ARPSCAN or NMAPDEV.

ID Plugin docs Type Description Features Required APPRISE _publisher_apprise \u25b6\ufe0f Apprise notification proxy ARPSCAN arp_scan \ud83d\udd0d ARP-scan on current network AVAHISCAN avahi_scan \ud83c\udd8e Avahi (mDNS-based) name resolution ASUSWRT asuswrt_import \ud83d\udd0d Import connected devices from AsusWRT CSVBCKP csv_backup \u2699 CSV devices backup CUSTPROP custom_props \u2699 Managing custom device properties values Yes DBCLNP db_cleanup \u2699 Database cleanup Yes* DDNS ddns_update \u2699 DDNS update DHCPLSS dhcp_leases \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e Import devices from DHCP leases DHCPSRVS dhcp_servers \u267b DHCP servers DIGSCAN dig_scan \ud83c\udd8e Dig (DNS) Name resolution FREEBOX freebox \ud83d\udd0d/\u267b/\ud83c\udd8e Pull data and names from Freebox/Iliadbox ICMP icmp_scan \u267b ICMP (ping) status checker INTRNT internet_ip \ud83d\udd0d Internet IP scanner INTRSPD internet_speedtest \u267b Internet speed test IPNEIGH ipneigh \ud83d\udd0d Scan ARP (IPv4) and NDP (IPv6) tables LUCIRPC luci_import \ud83d\udd0d Import connected devices from OpenWRT MAINT maintenance \u2699 Maintenance of logs, etc. 
MQTT _publisher_mqtt \u25b6\ufe0f MQTT for synching to Home Assistant MTSCAN mikrotik_scan \ud83d\udd0d Mikrotik device import & sync NBTSCAN nbtscan_scan \ud83c\udd8e Nbtscan (NetBIOS-based) name resolution NEWDEV newdev_template \u2699 New device template Yes NMAP nmap_scan \u267b Nmap port scanning & discovery NMAPDEV nmap_dev_scan \ud83d\udd0d Nmap dev scan on current network NSLOOKUP nslookup_scan \ud83c\udd8e NSLookup (DNS-based) name resolution NTFPRCS notification_processing \u2699 Notification processing Yes NTFY _publisher_ntfy \u25b6\ufe0f NTFY notifications OMDSDN omada_sdn_imp \ud83d\udce5/\ud83c\udd8e \u274c UNMAINTAINED use OMDSDNOPENAPI \ud83d\udda7 \ud83d\udd04 OMDSDNOPENAPI omada_sdn_openapi \ud83d\udce5/\ud83c\udd8e OMADA TP-Link import via OpenAPI \ud83d\udda7 PIHOLE pihole_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync PIHOLEAPI pihole_api_scan \ud83d\udd0d/\ud83c\udd8e/\ud83d\udce5 Pi-hole device import & sync via API v6+ PUSHSAFER _publisher_pushsafer \u25b6\ufe0f Pushsafer notifications PUSHOVER _publisher_pushover \u25b6\ufe0f Pushover notifications SETPWD set_password \u2699 Set password Yes SMTP _publisher_email \u25b6\ufe0f Email notifications SNMPDSC snmp_discovery \ud83d\udd0d/\ud83d\udce5 SNMP device import & sync SYNC sync \ud83d\udd0d/\u2699/\ud83d\udce5 Sync & import from NetAlertX instances \ud83d\udda7 \ud83d\udd04 Yes TELEGRAM _publisher_telegram \u25b6\ufe0f Telegram notifications UI ui_settings \u267b UI specific settings Yes UNFIMP unifi_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import & sync \ud83d\udda7 UNIFIAPI unifi_api_import \ud83d\udd0d/\ud83d\udce5/\ud83c\udd8e UniFi device import (SM API, multi-site) VNDRPDT vendor_update \u2699 Vendor database update WEBHOOK _publisher_webhook \u25b6\ufe0f Webhook notifications WEBMON website_monitor \u267b Website down monitoring WOL wake_on_lan \u267b Automatic wake-on-lan

* The database cleanup plugin (DBCLNP) is not required, but the app will become unusable after a while if it is not executed. \u274c marked for removal/unmaintained - looking for help. \u231a It's recommended to use the same schedule interval for all plugins responsible for discovering new devices.

"},{"location":"PLUGINS/#enabling-plugins","title":"Enabling plugins","text":"

Plugins can be enabled via Settings, and can be disabled as needed.

  1. Research which plugin you'd like to use, enable DISCOVER_PLUGINS and load the required plugins in Settings via the LOADED_PLUGINS setting.
  2. Save the changes and review the Settings of the newly loaded plugins.
  3. Change the <prefix>_RUN Setting to the recommended or custom value as per the documentation of the given setting
"},{"location":"PLUGINS/#disabling-unloading-and-ignoring-plugins","title":"Disabling, Unloading and Ignoring plugins","text":"
  1. Change the <prefix>_RUN Setting to disabled if you want to disable the plugin, but keep the settings
  2. If you want to speed up the application, you can unload the plugin by unselecting it in the LOADED_PLUGINS setting.
  3. You can completely ignore plugins by placing an ignore_plugin file into the plugin directory. Ignored plugins won't show up in the LOADED_PLUGINS setting.
"},{"location":"PLUGINS/#developing-new-custom-plugins","title":"\ud83c\udd95 Developing new custom plugins","text":"

If you want to develop a custom plugin, please read this Plugin development guide.

"},{"location":"PLUGINS_DEV/","title":"Plugin Development Guide","text":"

This comprehensive guide covers how to build plugins for NetAlertX.

Tip

New to plugin development? Start with the Quick Start Guide to get a working plugin in 5 minutes.

NetAlertX comes with a plugin system to feed events from third-party scripts into the UI and then send notifications, if desired. The highlighted core functionality this plugin system supports:

Note

For a high-level overview of how the config.json is used and its lifecycle, see the config.json Lifecycle Guide.

"},{"location":"PLUGINS_DEV/#quick-links","title":"Quick Links","text":""},{"location":"PLUGINS_DEV/#getting-started","title":"\ud83d\ude80 Getting Started","text":""},{"location":"PLUGINS_DEV/#core-concepts","title":"\ud83d\udcda Core Concepts","text":""},{"location":"PLUGINS_DEV/#architecture","title":"\ud83c\udfd7\ufe0f Architecture","text":""},{"location":"PLUGINS_DEV/#troubleshooting","title":"\ud83d\udc1b Troubleshooting","text":""},{"location":"PLUGINS_DEV/#video-tutorial","title":"\ud83c\udfa5 Video Tutorial","text":""},{"location":"PLUGINS_DEV/#screenshots","title":"\ud83d\udcf8 Screenshots","text":""},{"location":"PLUGINS_DEV/#use-cases","title":"Use Cases","text":"

Plugins are infinitely flexible. Here are some examples:

If you can imagine it and script it, you can build a plugin.

"},{"location":"PLUGINS_DEV/#limitations-notes","title":"Limitations & Notes","text":""},{"location":"PLUGINS_DEV/#plugin-development-workflow","title":"Plugin Development Workflow","text":""},{"location":"PLUGINS_DEV/#step-1-understand-the-basics","title":"Step 1: Understand the Basics","text":"
  1. Read Quick Start Guide - 5 minute overview
  2. Study the Data Contract - Understand the output format
  3. Choose a Data Source - Where does your data come from?
"},{"location":"PLUGINS_DEV/#step-2-create-your-plugin","title":"Step 2: Create Your Plugin","text":"
  1. Copy the __template plugin folder (see below for structure)
  2. Update config.json with your plugin metadata
  3. Implement script.py (or configure alternative data source)
  4. Test locally in the devcontainer
"},{"location":"PLUGINS_DEV/#step-3-configure-display","title":"Step 3: Configure & Display","text":"
  1. Define Settings for user configuration
  2. Design UI Components for result display
  3. Map to database tables if needed (for notifications, etc.)
"},{"location":"PLUGINS_DEV/#step-4-deploy-test","title":"Step 4: Deploy & Test","text":"
  1. Restart the backend
  2. Test via Settings \u2192 Plugin Settings
  3. Verify results in UI and logs
  4. Check /tmp/log/plugins/last_result.<PREFIX>.log

See Quick Start Guide for detailed step-by-step instructions.

"},{"location":"PLUGINS_DEV/#plugin-file-structure","title":"Plugin File Structure","text":"

Every plugin lives in its own folder under /app/front/plugins/.

Important: Folder name must match the \"code_name\" value in config.json

/app/front/plugins/\n\u251c\u2500\u2500 __template/          # Copy this as a starting point\n\u2502   \u251c\u2500\u2500 config.json      # Plugin manifest (configuration)\n\u2502   \u251c\u2500\u2500 script.py        # Your plugin logic (optional, depends on data_source)\n\u2502   \u2514\u2500\u2500 README.md        # Setup and usage documentation\n\u251c\u2500\u2500 my_plugin/           # Your new plugin\n\u2502   \u251c\u2500\u2500 config.json      # REQUIRED - Plugin manifest\n\u2502   \u251c\u2500\u2500 script.py        # OPTIONAL - Python script (if using script data source)\n\u2502   \u251c\u2500\u2500 README.md        # REQUIRED - Documentation for users\n\u2502   \u2514\u2500\u2500 other_files...   # Your supporting files\n
"},{"location":"PLUGINS_DEV/#plugin-manifest-configjson","title":"Plugin Manifest (config.json)","text":"

The config.json file is the plugin manifest - it tells NetAlertX everything about your plugin:

Example minimal config.json:

{\n  \"code_name\": \"my_plugin\",\n  \"unique_prefix\": \"MYPLN\",\n  \"display_name\": [{\"language_code\": \"en_us\", \"string\": \"My Plugin\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"My awesome plugin\"}],\n  \"icon\": \"fa-plug\",\n  \"data_source\": \"script\",\n  \"execution_order\": \"Layer_0\",\n  \"settings\": [\n    {\n      \"function\": \"RUN\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"select\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"disabled\",\n      \"options\": [\"disabled\", \"once\", \"schedule\"],\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"When to run\"}]\n    },\n    {\n      \"function\": \"CMD\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"python3 /app/front/plugins/my_plugin/script.py\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Command\"}]\n    }\n  ],\n  \"database_column_definitions\": []\n}\n

For comprehensive config.json documentation, see PLUGINS_DEV_CONFIG.md

"},{"location":"PLUGINS_DEV/#full-reference-below","title":"Full Reference (Below)","text":"

The sections below provide complete reference documentation for all plugin development topics. Use the quick links above to jump to a specific section, or read sequentially for a deep dive.

"},{"location":"PLUGINS_DEV/#data-contract-output-format","title":"Data Contract & Output Format","text":"

For detailed information on plugin output format, see PLUGINS_DEV_DATA_CONTRACT.md.

Quick reference:

- Format: Pipe-delimited (|) text file
- Location: /tmp/log/plugins/last_result.<PREFIX>.log
- Columns: 9 required + 4 optional = 13 maximum
- Helper: Use plugin_helper.py for easy formatting

"},{"location":"PLUGINS_DEV/#the-9-mandatory-columns","title":"The 9 Mandatory Columns","text":"
| Column | Name | Required | Example |
|--------|------|----------|---------|
| 0 | Object_PrimaryID | YES | \"device_name\" or \"192.168.1.1\" |
| 1 | Object_SecondaryID | no | \"secondary_id\" or null |
| 2 | DateTime | YES | \"2023-01-02 15:56:30\" |
| 3 | Watched_Value1 | YES | \"online\" or \"200\" |
| 4 | Watched_Value2 | no | \"ip_address\" or null |
| 5 | Watched_Value3 | no | null |
| 6 | Watched_Value4 | no | null |
| 7 | Extra | no | \"additional data\" or null |
| 8 | ForeignKey | no | \"aa:bb:cc:dd:ee:ff\" or null |

See Data Contract for examples, validation, and debugging tips.

"},{"location":"PLUGINS_DEV/#configjson-settings-configuration","title":"Config.json: Settings & Configuration","text":"

For detailed settings documentation, see PLUGINS_DEV_SETTINGS.md and PLUGINS_DEV_DATASOURCES.md.

"},{"location":"PLUGINS_DEV/#setting-object-structure","title":"Setting Object Structure","text":"

Every setting in your plugin has this structure:

{\n  \"function\": \"UNIQUE_CODE\",\n  \"type\": {\"dataType\": \"string\", \"elements\": [...]},\n  \"default_value\": \"...\",\n  \"options\": [...],\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Display Name\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Help text\"}]\n}\n
"},{"location":"PLUGINS_DEV/#reserved-function-names","title":"Reserved Function Names","text":"

These control core plugin behavior:

| Function | Purpose | Required | Options |
|----------|---------|----------|---------|
| RUN | When to execute | YES | disabled, once, schedule, always_after_scan, before_name_updates, on_new_device |
| RUN_SCHD | Cron schedule | If RUN=schedule | Cron format: \"0 * * * *\" |
| CMD | Command to run | YES | Shell command or script path |
| RUN_TIMEOUT | Max execution time | optional | Seconds: \"60\" |
| WATCH | Monitor for changes | optional | Column names |
| REPORT_ON | When to notify | optional | new, watched-changed, watched-not-changed, missing-in-last-scan |
| DB_PATH | External DB path | If using SQLite | /path/to/db.db |
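As an illustration, a plugin that sets RUN to schedule pairs it with a RUN_SCHD setting holding the cron expression. A fragment modeled on the CMD setting example earlier in this guide (the exact element options may differ per plugin):

```json
{
  "function": "RUN_SCHD",
  "type": {"dataType": "string", "elements": [{"elementType": "input", "elementOptions": [], "transformers": []}]},
  "default_value": "0 * * * *",
  "localized": ["name"],
  "name": [{"language_code": "en_us", "string": "Schedule"}]
}
```

Here `0 * * * *` runs the plugin at minute 0 of every hour.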

See PLUGINS_DEV_SETTINGS.md for full component types and examples.

"},{"location":"PLUGINS_DEV/#filters-data-display","title":"Filters & Data Display","text":"

For comprehensive display configuration, see PLUGINS_DEV_UI_COMPONENTS.md.

"},{"location":"PLUGINS_DEV/#filters","title":"Filters","text":"

Control which rows display in the UI:

{\n  \"data_filters\": [\n    {\n      \"compare_column\": \"Object_PrimaryID\",\n      \"compare_operator\": \"==\",\n      \"compare_field_id\": \"txtMacFilter\",\n      \"compare_js_template\": \"'{value}'.toString()\",\n      \"compare_use_quotes\": true\n    }\n  ]\n}\n

See UI Components: Filters for full documentation.

"},{"location":"PLUGINS_DEV/#database-mapping","title":"Database Mapping","text":"

To import plugin data into NetAlertX tables for device discovery or notifications:

{\n  \"mapped_to_table\": \"CurrentScan\",\n  \"database_column_definitions\": [\n    {\n      \"column\": \"Object_PrimaryID\",\n      \"mapped_to_column\": \"scanMac\",\n      \"show\": true,\n      \"type\": \"device_mac\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"MAC Address\"}]\n    }\n  ]\n}\n

See UI Components: Database Mapping for full documentation.

"},{"location":"PLUGINS_DEV/#static-value-mapping","title":"Static Value Mapping","text":"

To always map a static value (not read from plugin output):

{\n  \"column\": \"NameDoesntMatter\",\n  \"mapped_to_column\": \"scanSourcePlugin\",\n  \"mapped_to_column_data\": {\n    \"value\": \"MYPLN\"\n  }\n}\n
"},{"location":"PLUGINS_DEV/#ui-component-types","title":"UI Component Types","text":"

Plugin results are displayed in the web interface using various component types. See PLUGINS_DEV_UI_COMPONENTS.md for complete documentation.

"},{"location":"PLUGINS_DEV/#common-display-types","title":"Common Display Types","text":"

Read settings in your Python script:

from helper import get_setting_value\n\n# Read a setting by code name (prefix + function)\napi_url = get_setting_value('MYPLN_API_URL')\napi_key = get_setting_value('MYPLN_API_KEY')\nwatch_columns = get_setting_value('MYPLN_WATCH')\n\nprint(f\"Connecting to {api_url}\")\n

Pass settings as command parameters:

Define params in config to pass settings as script arguments:

{\n  \"params\": [\n    {\n      \"name\": \"api_url\",\n      \"type\": \"setting\",\n      \"value\": \"MYPLN_API_URL\"\n    }\n  ]\n}\n

Then use in CMD: python3 script.py --url={api_url}
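The {api_url} placeholder is substituted with the setting's value before the command runs, so inside the script you read it like any other command-line argument. A minimal sketch with argparse (the --url flag name is whatever you chose in your CMD template):

```python
import argparse

def parse_cmd_args(argv):
    """Parse arguments passed via the CMD template, e.g. --url={api_url}."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--url", required=True,
                        help="API URL substituted from the plugin setting")
    return parser.parse_args(argv)

# Simulate the command line NetAlertX would build from the setting value
args = parse_cmd_args(["--url", "http://example.local/api"])
print(args.url)
```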

See PLUGINS_DEV_SETTINGS.md for complete settings documentation, and PLUGINS_DEV_DATASOURCES.md for data source details.

"},{"location":"PLUGINS_DEV/#quick-reference-key-concepts","title":"Quick Reference: Key Concepts","text":""},{"location":"PLUGINS_DEV/#plugin-output-format","title":"Plugin Output Format","text":"

Object_PrimaryID|Object_SecondaryID|DateTime|Watched_Value1|Watched_Value2|Watched_Value3|Watched_Value4|Extra|ForeignKey\n
9 required columns, 4 optional helpers = 13 max

See: Data Contract

"},{"location":"PLUGINS_DEV/#plugin-metadata-configjson","title":"Plugin Metadata (config.json)","text":"
{\n  \"code_name\": \"my_plugin\",           // Folder name\n  \"unique_prefix\": \"MYPLN\",           // Settings prefix\n  \"display_name\": [...],              // UI label\n  \"data_source\": \"script\",            // Where data comes from\n  \"settings\": [...],                  // User configurable\n  \"database_column_definitions\": [...] // How to display\n}\n

See: Full Guide, Settings

"},{"location":"PLUGINS_DEV/#reserved-settings","title":"Reserved Settings","text":"

See: Settings System

"},{"location":"PLUGINS_DEV/#display-types","title":"Display Types","text":"

label, device_mac, device_ip, url, threshold, replace, regex, textbox_save, and more.

See: UI Components

"},{"location":"PLUGINS_DEV/#tools-references","title":"Tools & References","text":""},{"location":"PLUGINS_DEV_CONFIG/","title":"Plugins Implementation Details","text":"

Plugins provide data to the NetAlertX core, which processes it to detect changes, discover new devices, raise alerts, and apply heuristics.

"},{"location":"PLUGINS_DEV_CONFIG/#overview-plugin-data-flow","title":"Overview: Plugin Data Flow","text":"
  1. Each plugin runs on a defined schedule.
  2. Aligning all plugin schedules is recommended so they execute in the same loop.
  3. During execution, all plugins write their collected data into the CurrentScan table.
  4. After all plugins complete, the CurrentScan table is evaluated to detect new devices, changes, and triggers.

Although plugins run independently, they contribute to the shared CurrentScan table. To inspect its contents, set LOG_LEVEL=trace and check for the log section:

================ CurrentScan table content ================\n
"},{"location":"PLUGINS_DEV_CONFIG/#configjson-lifecycle","title":"config.json Lifecycle","text":"

This section outlines how each plugin\u2019s config.json manifest is read, validated, and used by the core and plugins. It also describes plugin output expectations and the main plugin categories.

Tip

For detailed schema and examples, see the Plugin Development Guide.

"},{"location":"PLUGINS_DEV_CONFIG/#1-loading","title":"1. Loading","text":""},{"location":"PLUGINS_DEV_CONFIG/#2-validation","title":"2. Validation","text":""},{"location":"PLUGINS_DEV_CONFIG/#3-preparation","title":"3. Preparation","text":""},{"location":"PLUGINS_DEV_CONFIG/#4-execution","title":"4. Execution","text":""},{"location":"PLUGINS_DEV_CONFIG/#5-parsing","title":"5. Parsing","text":""},{"location":"PLUGINS_DEV_CONFIG/#6-mapping","title":"6. Mapping","text":"

Example: Object_PrimaryID → devMac

"},{"location":"PLUGINS_DEV_CONFIG/#6a-plugin-output-contract","title":"6a. Plugin Output Contract","text":"

All plugins must follow the Plugin Interface Contract defined in PLUGINS_DEV.md. Output values are pipe-delimited in a fixed order.

"},{"location":"PLUGINS_DEV_CONFIG/#identifiers","title":"Identifiers","text":""},{"location":"PLUGINS_DEV_CONFIG/#watched-values-watched_value14","title":"Watched Values (Watched_Value1\u20134)","text":""},{"location":"PLUGINS_DEV_CONFIG/#extra-field-extra","title":"Extra Field (Extra)","text":""},{"location":"PLUGINS_DEV_CONFIG/#helper-values-helper_value13","title":"Helper Values (Helper_Value1\u20133)","text":""},{"location":"PLUGINS_DEV_CONFIG/#mapping","title":"Mapping","text":""},{"location":"PLUGINS_DEV_CONFIG/#7-persistence","title":"7. Persistence","text":""},{"location":"PLUGINS_DEV_CONFIG/#plugin-categories","title":"Plugin Categories","text":"

Plugins fall into several functional categories depending on their purpose and expected outputs.

"},{"location":"PLUGINS_DEV_CONFIG/#1-device-discovery-plugins","title":"1. Device Discovery Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#2-device-data-enrichment-plugins","title":"2. Device Data Enrichment Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#3-name-resolver-plugins","title":"3. Name Resolver Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#4-generic-plugins","title":"4. Generic Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#5-configuration-only-plugins","title":"5. Configuration-Only Plugins","text":""},{"location":"PLUGINS_DEV_CONFIG/#post-processing","title":"Post-Processing","text":"

After persistence:

"},{"location":"PLUGINS_DEV_CONFIG/#field-update-authorization-set_always-set_empty","title":"Field Update Authorization (SET_ALWAYS / SET_EMPTY)","text":"

For tracked fields (devMac, devName, devLastIP, devVendor, devFQDN, devSSID, devParentMAC, devParentPort, devParentRelType, devVlan), plugins can configure how they interact with the authoritative field update system.

"},{"location":"PLUGINS_DEV_CONFIG/#set_always","title":"SET_ALWAYS","text":"

Mandatory when field is tracked.

Controls whether a plugin field is enabled:

Authorization logic: Even with a field listed in SET_ALWAYS, the plugin respects source-based permissions:

Example in config.json:

{\n  \"SET_ALWAYS\": [\"devName\", \"devLastIP\"]\n}\n
"},{"location":"PLUGINS_DEV_CONFIG/#set_empty","title":"SET_EMPTY","text":"

Optional field override.

Restricts when a plugin can update a field:

Use case: Some plugins discover optional enrichment data (like vendor/hostname) that shouldn't override user-set or existing values. Use SET_EMPTY to be less aggressive.
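Like SET_ALWAYS, SET_EMPTY is declared in the plugin's config.json. An illustrative fragment (the field names are examples):

```json
{
  "SET_EMPTY": ["devVendor", "devName"]
}
```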

"},{"location":"PLUGINS_DEV_CONFIG/#authorization-decision-flow","title":"Authorization Decision Flow","text":"
  1. Source check: Is field LOCKED or USER? \u2192 REJECT (protected)
  2. Field in SET_ALWAYS check: Is SET_ALWAYS enabled for this plugin+field? \u2192 YES: ALLOW (can overwrite empty values, NEWDEV, plugin sources, etc.) | NO: Continue to step 3
  3. Field in SET_EMPTY check: Is SET_EMPTY enabled AND field non-empty+non-NEWDEV? \u2192 REJECT
  4. Default behavior: Allow overwrite if field empty or NEWDEV source
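The four-step flow above can be sketched as a single predicate. This is an illustrative simplification, not the actual NetAlertX internals; the argument names and the "LOCKED"/"USER"/"NEWDEV" source labels follow the description above:

```python
def may_update_field(source, in_set_always, in_set_empty, current_value):
    """Sketch of the SET_ALWAYS / SET_EMPTY authorization decision flow.

    source        -- who last wrote the field: "LOCKED", "USER", "NEWDEV", or a plugin prefix
    in_set_always -- field is listed in this plugin's SET_ALWAYS
    in_set_empty  -- field is listed in this plugin's SET_EMPTY
    current_value -- the field's current value ("" if empty)
    """
    # 1. Source check: LOCKED/USER fields are protected
    if source in ("LOCKED", "USER"):
        return False
    # 2. SET_ALWAYS: may overwrite empty values, NEWDEV, and plugin sources
    if in_set_always:
        return True
    # 3. SET_EMPTY: reject when the field is already non-empty and not NEWDEV-sourced
    if in_set_empty and current_value and source != "NEWDEV":
        return False
    # 4. Default: allow overwrite only if the field is empty or NEWDEV-sourced
    return current_value == "" or source == "NEWDEV"

print(may_update_field("USER", True, False, "MyDevice"))  # rejected at step 1
```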

Note: Check each plugin's config.json manifest for its specific SET_ALWAYS/SET_EMPTY configuration.

"},{"location":"PLUGINS_DEV_CONFIG/#summary","title":"Summary","text":"

The lifecycle of a plugin configuration is:

Load \u2192 Validate \u2192 Prepare \u2192 Execute \u2192 Parse \u2192 Map \u2192 Persist \u2192 Post-process

Each plugin must:

"},{"location":"PLUGINS_DEV_DATASOURCES/","title":"Plugin Data Sources","text":"

Learn how to configure different data sources for your plugin.

"},{"location":"PLUGINS_DEV_DATASOURCES/#overview","title":"Overview","text":"

Data sources determine where the plugin gets its data and what format it returns. NetAlertX supports multiple data source types, each suited for different use cases.

| Data Source | Type | Purpose | Returns | Example |
|-------------|------|---------|---------|---------|
| script | Code Execution | Execute Linux commands or Python scripts | Pipeline | Scan network, collect metrics, call APIs |
| app-db-query | Database Query | Query the NetAlertX database | Result set | Show devices, open ports, recent events |
| sqlite-db-query | External DB | Query external SQLite databases | Result set | PiHole database, external logs |
| template | Template | Generate values from templates | Values | Initialize default settings |
"},{"location":"PLUGINS_DEV_DATASOURCES/#data-source-script","title":"Data Source: script","text":"

Execute any Linux command or Python script and capture its output.

"},{"location":"PLUGINS_DEV_DATASOURCES/#configuration","title":"Configuration","text":"
{\n  \"data_source\": \"script\",\n  \"show_ui\": true,\n  \"mapped_to_table\": \"CurrentScan\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#how-it-works","title":"How It Works","text":"
  1. Command specified in CMD setting is executed
  2. Script writes results to /tmp/log/plugins/last_result.<PREFIX>.log
  3. Core reads file and parses pipe-delimited results
  4. Results inserted into database
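Without plugin_helper.py, the flow above boils down to the script writing the pipe-delimited result file itself. A minimal illustrative script.py (the MYPLN prefix and the discovered values are placeholders):

```python
from datetime import datetime

PREFIX = "MYPLN"  # must match "unique_prefix" in config.json (placeholder)
RESULT_FILE = f"/tmp/log/plugins/last_result.{PREFIX}.log"

def make_record(primary_id, secondary_id, watched1, extra):
    """Build one pipe-delimited record: the 9 mandatory columns in fixed order."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return "|".join([
        primary_id,    # 0 Object_PrimaryID
        secondary_id,  # 1 Object_SecondaryID
        now,           # 2 DateTime
        watched1,      # 3 Watched_Value1
        "null",        # 4 Watched_Value2
        "null",        # 5 Watched_Value3
        "null",        # 6 Watched_Value4
        extra,         # 7 Extra
        "null",        # 8 ForeignKey
    ])

record = make_record("aa:bb:cc:dd:ee:ff", "192.168.1.10", "online", "example scan")
print(record)
# The core picks the file up after the CMD finishes:
# with open(RESULT_FILE, "w") as f:
#     f.write(record + "\n")
```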
"},{"location":"PLUGINS_DEV_DATASOURCES/#example-simple-python-script","title":"Example: Simple Python Script","text":"
{\n  \"function\": \"CMD\",\n  \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"python3 /app/front/plugins/my_plugin/script.py\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Command\"}]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#example-bash-script","title":"Example: Bash Script","text":"
{\n  \"function\": \"CMD\",\n  \"default_value\": \"bash /app/front/plugins/my_plugin/script.sh\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Command\"}]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#best-practices","title":"Best Practices","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#output-format","title":"Output Format","text":"

Must write to: /tmp/log/plugins/last_result.<PREFIX>.log

Format: Pipe-delimited, 9 or 13 columns

See Plugin Data Contract for exact format

"},{"location":"PLUGINS_DEV_DATASOURCES/#data-source-app-db-query","title":"Data Source: app-db-query","text":"

Query the NetAlertX SQLite database and display results.

"},{"location":"PLUGINS_DEV_DATASOURCES/#configuration_1","title":"Configuration","text":"
{\n  \"data_source\": \"app-db-query\",\n  \"show_ui\": true,\n  \"mapped_to_table\": \"CurrentScan\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#how-it-works_1","title":"How It Works","text":"
  1. SQL query specified in CMD setting is executed against app.db
  2. Results parsed according to column definitions
  3. Inserted into plugin display/database
"},{"location":"PLUGINS_DEV_DATASOURCES/#sql-query-requirements","title":"SQL Query Requirements","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#example-open-ports-from-nmap","title":"Example: Open Ports from Nmap","text":"
{\n  \"function\": \"CMD\",\n  \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"SELECT dv.devName as Object_PrimaryID, cast(dv.devLastIP as VARCHAR(100)) || ':' || cast(SUBSTR(ns.Port, 0, INSTR(ns.Port, '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, ns.Service as Watched_Value1, ns.State as Watched_Value2, null as Watched_Value3, null as Watched_Value4, ns.Extra as Extra, dv.devMac as ForeignKey FROM (SELECT * FROM Nmap_Scan) ns LEFT JOIN (SELECT devName, devMac, devLastIP FROM Devices) dv ON ns.MAC = dv.devMac\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"SQL to run\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"This SQL query populates the plugin table\"}]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#example-recent-device-events","title":"Example: Recent Device Events","text":"
SELECT\n  e.EventValue as Object_PrimaryID,\n  d.devName as Object_SecondaryID,\n  e.EventDateTime as DateTime,\n  e.EventType as Watched_Value1,\n  d.devLastIP as Watched_Value2,\n  null as Watched_Value3,\n  null as Watched_Value4,\n  e.EventDetails as Extra,\n  d.devMac as ForeignKey\nFROM\n  Events e\nLEFT JOIN\n  Devices d ON e.DeviceMac = d.devMac\nWHERE\n  e.EventDateTime > datetime('now', '-24 hours')\nORDER BY\n  e.EventDateTime DESC\n

See the Database documentation for a list of common columns.

"},{"location":"PLUGINS_DEV_DATASOURCES/#data-source-sqlite-db-query","title":"Data Source: sqlite-db-query","text":"

Query an external SQLite database mounted in the container.

"},{"location":"PLUGINS_DEV_DATASOURCES/#configuration_2","title":"Configuration","text":"

First, define the database path in a setting:

{\n  \"function\": \"DB_PATH\",\n  \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"/etc/pihole/pihole-FTL.db\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Database path\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Path to external SQLite database\"}]\n}\n

Then set data source and query:

{\n  \"data_source\": \"sqlite-db-query\",\n  \"show_ui\": true\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#how-it-works_2","title":"How It Works","text":"
  1. External database file path specified in DB_PATH setting
  2. Database mounted at that path (e.g., via Docker volume)
  3. SQL query executed against external database using EXTERNAL_<PREFIX>. prefix
  4. Results returned in standard format
"},{"location":"PLUGINS_DEV_DATASOURCES/#sql-query-example-pihole-data","title":"SQL Query Example: PiHole Data","text":"
{\n  \"function\": \"CMD\",\n  \"default_value\": \"SELECT hwaddr as Object_PrimaryID, cast('http://' || (SELECT ip FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC LIMIT 1) as VARCHAR(100)) || ':' || cast(SUBSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC LIMIT 1), 0, INSTR((SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC LIMIT 1), '/')) as VARCHAR(100)) as Object_SecondaryID, datetime() as DateTime, macVendor as Watched_Value1, lastQuery as Watched_Value2, (SELECT name FROM EXTERNAL_PIHOLE.network_addresses WHERE network_id = id ORDER BY lastseen DESC LIMIT 1) as Watched_Value3, null as Watched_Value4, '' as Extra, hwaddr as ForeignKey FROM EXTERNAL_PIHOLE.network WHERE hwaddr NOT LIKE 'ip-%' AND hwaddr <> '00:00:00:00:00:00'\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"SQL to run\"}]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#key-points","title":"Key Points","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#docker-volume-setup","title":"Docker Volume Setup","text":"

To mount an external database in docker-compose:

services:\n  netalertx:\n    volumes:\n      - /path/on/host/pihole-FTL.db:/etc/pihole/pihole-FTL.db:ro\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#data-source-template","title":"Data Source: template","text":"

Generate values from a template. Usually used for initialization and default settings.

"},{"location":"PLUGINS_DEV_DATASOURCES/#configuration_3","title":"Configuration","text":"
{\n  \"data_source\": \"template\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#how-it-works_3","title":"How It Works","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#example-default-device-template","title":"Example: Default Device Template","text":"
{\n  \"function\": \"DEFAULT_DEVICE_PROPERTIES\",\n  \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"textarea\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"type=Unknown|vendor=Unknown\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Default properties\"}]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#data-source-plugin_type","title":"Data Source: plugin_type","text":"

Declare the plugin category. Controls where settings appear in the UI.

"},{"location":"PLUGINS_DEV_DATASOURCES/#configuration_4","title":"Configuration","text":"
{\n  \"data_source\": \"plugin_type\",\n  \"value\": \"scanner\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#supported-values","title":"Supported Values","text":"
| Value | Section | Purpose |
|-------|---------|---------|
| scanner | Device Scanners | Discovers devices on network |
| system | System Plugins | Core system functionality |
| publisher | Notification/Alert Plugins | Sends notifications/alerts |
| importer | Data Importers | Imports devices from external sources |
| other | Other Plugins | Miscellaneous functionality |
"},{"location":"PLUGINS_DEV_DATASOURCES/#example","title":"Example","text":"
{\n  \"settings\": [\n    {\n      \"function\": \"plugin_type\",\n      \"type\": {\"dataType\": \"string\", \"elements\": []},\n      \"default_value\": \"scanner\",\n      \"options\": [\"scanner\"],\n      \"data_source\": \"plugin_type\",\n      \"value\": \"scanner\",\n      \"localized\": []\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#execution-order","title":"Execution Order","text":"

Control plugin execution priority. Higher priority plugins run first.

{\n  \"execution_order\": \"Layer_0\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#levels-highest-to-lowest-priority","title":"Levels (highest to lowest priority)","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#example-device-discovery","title":"Example: Device Discovery","text":"
{\n  \"code_name\": \"device_scanner\",\n  \"unique_prefix\": \"DEVSCAN\",\n  \"execution_order\": \"Layer_0\",\n  \"data_source\": \"script\",\n  \"mapped_to_table\": \"CurrentScan\"\n}\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#performance-considerations","title":"Performance Considerations","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#script-source","title":"Script Source","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#database-query-source","title":"Database Query Source","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#external-db-query-source","title":"External DB Query Source","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#debugging-data-sources","title":"Debugging Data Sources","text":""},{"location":"PLUGINS_DEV_DATASOURCES/#check-script-output","title":"Check Script Output","text":"
# Run script manually\npython3 /app/front/plugins/my_plugin/script.py\n\n# Check result file\ncat /tmp/log/plugins/last_result.MYPREFIX.log\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#test-sql-query","title":"Test SQL Query","text":"
# Connect to app database\nsqlite3 /data/db/app.db\n\n# Run query\nsqlite> SELECT ... ;\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#monitor-execution","title":"Monitor Execution","text":"
# Watch backend logs\ntail -f /tmp/log/stdout.log | grep -i \"data_source\\|MYPREFIX\"\n
"},{"location":"PLUGINS_DEV_DATASOURCES/#see-also","title":"See Also","text":""},{"location":"PLUGINS_DEV_DATA_CONTRACT/","title":"Plugin Data Contract","text":"

This document specifies the exact interface between plugins and the NetAlertX core.

Important

Every plugin must output data in this exact format to be recognized and processed correctly.

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#overview","title":"Overview","text":"

Plugins communicate with NetAlertX by writing results to a pipe-delimited log file. The core reads this file, parses the data, and processes it for notifications, device discovery, and data integration.

File Location: /tmp/log/plugins/last_result.<PREFIX>.log

Format: Pipe-delimited (|), one record per line

Required Columns: 9 (mandatory) + up to 4 optional helper columns = 13 total

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#column-specification","title":"Column Specification","text":"

Note

The order of columns is FIXED and cannot be changed. All 9 mandatory columns must be provided. If you use any optional column (HelpVal1), you must supply all optional columns (HelpVal1 through HelpVal4).

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#mandatory-columns-08","title":"Mandatory Columns (0\u20138)","text":"
| Order | Column Name | Type | Required | Description |
|-------|-------------|------|----------|-------------|
| 0 | Object_PrimaryID | string | YES | The primary identifier for grouping. Examples: device MAC, hostname, service name, or any unique ID |
| 1 | Object_SecondaryID | string | no | Secondary identifier for relationships (e.g., IP address, port, sub-ID). Use null if not needed |
| 2 | DateTime | string | YES | Timestamp when the event/data was collected. Format: YYYY-MM-DD HH:MM:SS |
| 3 | Watched_Value1 | string | YES | Primary watched value. Changes trigger notifications. Examples: IP address, status, version |
| 4 | Watched_Value2 | string | no | Secondary watched value. Use null if not needed |
| 5 | Watched_Value3 | string | no | Tertiary watched value. Use null if not needed |
| 6 | Watched_Value4 | string | no | Quaternary watched value. Use null if not needed |
| 7 | Extra | string | no | Any additional metadata to display in UI and notifications. Use null if not needed |
| 8 | ForeignKey | string | no | Foreign key linking to parent object (usually MAC address for device relationship). Use null if not needed |
"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#optional-columns-912","title":"Optional Columns (9\u201312)","text":"
| Order | Column Name | Type | Required | Description |
|-------|-------------|------|----------|-------------|
| 9 | HelpVal1 | string | conditional | Helper value 1. If used, all help values must be supplied |
| 10 | HelpVal2 | string | conditional | Helper value 2. If used, all help values must be supplied |
| 11 | HelpVal3 | string | conditional | Helper value 3. If used, all help values must be supplied |
| 12 | HelpVal4 | string | conditional | Helper value 4. If used, all help values must be supplied |
"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#usage-guide","title":"Usage Guide","text":""},{"location":"PLUGINS_DEV_DATA_CONTRACT/#emptynull-values","title":"Empty/Null Values","text":""},{"location":"PLUGINS_DEV_DATA_CONTRACT/#watched-values","title":"Watched Values","text":"

What are Watched Values?

Watched values are fields that the NetAlertX core monitors for changes between scans. When a watched value differs from the previous scan, it can trigger notifications.

How to use them:

Example:
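For instance, suppose a plugin reports a service's HTTP status in Watched_Value1. Two consecutive scans might produce these records (illustrative):

```text
https://example.com|null|2023-01-02 15:56:30|200|null|null|null|null|null
https://example.com|null|2023-01-02 16:56:30|503|null|null|null|null|null
```

Because Watched_Value1 changed from 200 to 503 between scans, the core can raise a notification if the plugin's REPORT_ON setting includes watched-changed.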

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#foreign-key","title":"Foreign Key","text":"

Use the ForeignKey column to link objects to a parent device by MAC address:

device_name|192.168.1.100|2023-01-02 15:56:30|online|null|null|null|Found on network|aa:bb:cc:dd:ee:ff\n                                                                                              \u2191\n                                                                                        ForeignKey (MAC)\n

This allows NetAlertX to:

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#examples","title":"Examples","text":""},{"location":"PLUGINS_DEV_DATA_CONTRACT/#valid-data-9-columns-minimal","title":"Valid Data (9 columns, minimal)","text":"
https://example.com|null|2023-01-02 15:56:30|200|null|null|null|null|null\nprinter-hp-1|192.168.1.50|2023-01-02 15:56:30|online|50%|null|null|Last seen in office|aa:11:22:33:44:55\ngateway.local|null|2023-01-02 15:56:30|active|v2.1.5|null|null|Firmware version|null\n
"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#valid-data-13-columns-with-helpers","title":"Valid Data (13 columns, with helpers)","text":"
service-api|192.168.1.100:8080|2023-01-02 15:56:30|200|45ms|true|null|Responding normally|aa:bb:cc:dd:ee:ff|extra1|extra2|extra3|extra4\nhost-web-1|10.0.0.20|2023-01-02 15:56:30|active|256GB|online|ok|Production server|null|cpu:80|mem:92|disk:45|alerts:0\n
"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#invalid-data-common-errors","title":"Invalid Data (Common Errors)","text":"

\u274c Missing required column (only 8 columns instead of 9):

https://google.com|null|2023-01-02 15:56:30|200|0.7898||null|null\n                                                      \u2191\n                                                  Missing pipe\n

\u274c Missing mandatory Watched_Value1 (column 3):

https://duckduckgo.com|192.168.1.1|2023-01-02 15:56:30|null|0.9898|null|null|Best|null\n                                                         \u2191\n                                          Must not be null\n

\u274c Incomplete optional columns (has HelpVal1 but missing HelpVal2\u20134):

device|null|2023-01-02 15:56:30|status|null|null|null|null|null|helper1\n                                                                    \u2191\n                                                    Has helper but incomplete\n

\u2705 Complete with helpers (all 4 helpers provided):

device|null|2023-01-02 15:56:30|status|null|null|null|null|null|h1|h2|h3|h4\n

\u2705 Complete without helpers (9 columns exactly):

device|null|2023-01-02 15:56:30|status|null|null|null|null|null\n
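The rules illustrated above can be sketched as a small line validator: exactly 9 or 13 pipe-delimited columns, and a non-null Watched_Value1 in column 4. This is illustrative only; the core's actual validation may differ.

```python
def validate_line(line):
    """Return a list of contract violations for one result line (empty if valid)."""
    cols = line.rstrip("\n").split("|")
    errors = []
    if len(cols) not in (9, 13):
        # 10-12 columns means helpers were started but not completed
        errors.append(f"expected 9 or 13 columns, got {len(cols)}")
    elif cols[3] in ("", "null"):
        errors.append("Watched_Value1 (column 4) must not be null")
    return errors
```

For example, the "incomplete helpers" line above fails with a column-count error, while both complete variants pass.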

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#using-plugin_helperpy","title":"Using plugin_helper.py","text":"

The easiest way to ensure correct output is to use the plugin_helper.py library:

from plugin_helper import Plugin_Objects\n\n# Initialize with the path to your plugin's result file\nplugin_objects = Plugin_Objects(\"/tmp/log/plugins/last_result.YOURPREFIX.log\")\n\n# Add objects\nplugin_objects.add_object(\n    Object_PrimaryID=\"device_id\",\n    Object_SecondaryID=\"192.168.1.1\",\n    DateTime=\"2023-01-02 15:56:30\",\n    Watched_Value1=\"online\",\n    Watched_Value2=None,\n    Watched_Value3=None,\n    Watched_Value4=None,\n    Extra=\"Additional data\",\n    ForeignKey=\"aa:bb:cc:dd:ee:ff\",\n    HelpVal1=None,\n    HelpVal2=None,\n    HelpVal3=None,\n    HelpVal4=None\n)\n\n# Write results (handles formatting, sanitization, and file creation)\nplugin_objects.write_result_file()\n

The library automatically:

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#de-duplication","title":"De-duplication","text":"

The core runs de-duplication once per hour on the Plugins_Objects table:

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#datetime-format","title":"DateTime Format","text":"

Required Format: YYYY-MM-DD HH:MM:SS

Examples: - 2023-01-02 15:56:30 \u2705 - 2023-1-2 15:56:30 \u274c (missing leading zeros) - 2023-01-02T15:56:30 \u274c (wrong separator) - 15:56:30 2023-01-02 \u274c (wrong order)

Python Helper:

from datetime import datetime\n\n# Current time in correct format\nnow = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n# Output: \"2023-01-02 15:56:30\"\n

Bash Helper:

# Current time in correct format\ndate '+%Y-%m-%d %H:%M:%S'\n# Output: 2023-01-02 15:56:30\n
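To check an existing timestamp against the required format programmatically, a parse-and-reformat round trip catches both wrong separators and missing leading zeros (a sketch; `strptime` alone would accept non-zero-padded values):

```python
from datetime import datetime

def is_valid_datetime(value):
    """True only if value is exactly in YYYY-MM-DD HH:MM:SS format."""
    try:
        parsed = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    except ValueError:
        return False  # wrong separator, wrong order, or missing parts
    # Re-format and compare: rejects inputs without leading zeros
    return parsed.strftime("%Y-%m-%d %H:%M:%S") == value
```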

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#validation-checklist","title":"Validation Checklist","text":"

Before writing your plugin's script.py, ensure:

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#debugging","title":"Debugging","text":"

View raw plugin output:

cat /tmp/log/plugins/last_result.YOURPREFIX.log\n

Check line count:

wc -l /tmp/log/plugins/last_result.YOURPREFIX.log\n

Validate column count (each line should have 9 or 13 fields, i.e. 8 or 12 pipes):

awk -F'|' '{print NF}' /tmp/log/plugins/last_result.YOURPREFIX.log | sort | uniq\n# Output: 9 (for minimal) or 13 (for with helpers)\n

Check core processing in logs:

tail -f /tmp/log/stdout.log | grep -i \"YOURPREFIX\\|Plugins_Objects\"\n

"},{"location":"PLUGINS_DEV_DATA_CONTRACT/#see-also","title":"See Also","text":""},{"location":"PLUGINS_DEV_QUICK_START/","title":"Plugin Development Quick Start","text":"

Get a working plugin up and running in 5 minutes.

"},{"location":"PLUGINS_DEV_QUICK_START/#prerequisites","title":"Prerequisites","text":""},{"location":"PLUGINS_DEV_QUICK_START/#quick-start-steps","title":"Quick Start Steps","text":""},{"location":"PLUGINS_DEV_QUICK_START/#1-create-your-plugin-folder","title":"1. Create Your Plugin Folder","text":"

Start from the template to get the basic structure:

cd /workspaces/NetAlertX/front/plugins\ncp -r __template my_plugin\ncd my_plugin\n
"},{"location":"PLUGINS_DEV_QUICK_START/#2-update-configjson-identifiers","title":"2. Update config.json Identifiers","text":"

Edit my_plugin/config.json and update these critical fields:

{\n  \"code_name\": \"my_plugin\",\n  \"unique_prefix\": \"MYPLN\",\n  \"display_name\": [{\"language_code\": \"en_us\", \"string\": \"My Plugin\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"My custom plugin\"}]\n}\n

Important: - code_name must match the folder name - unique_prefix must be unique and uppercase (check existing plugins to avoid conflicts) - unique_prefix is used as a prefix for all generated settings (e.g., MYPLN_RUN, MYPLN_CMD)

"},{"location":"PLUGINS_DEV_QUICK_START/#3-implement-your-script","title":"3. Implement Your Script","text":"

Edit my_plugin/script.py and implement your data collection logic:

#!/usr/bin/env python3\n\nimport sys\nimport os\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../server'))\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), '../plugins'))\n\nfrom plugin_helper import Plugin_Objects, mylog\nfrom helper import get_setting_value\nfrom const import logPath\n\npluginName = \"MYPLN\"\n\nLOG_PATH = logPath + \"/plugins\"\nLOG_FILE = os.path.join(LOG_PATH, f\"script.{pluginName}.log\")\nRESULT_FILE = os.path.join(LOG_PATH, f\"last_result.{pluginName}.log\")\n\n# Initialize\nplugin_objects = Plugin_Objects(RESULT_FILE)\n\ntry:\n    # Your data collection logic here\n    # For example, scan something and collect results\n\n    # Add an object to results\n    plugin_objects.add_object(\n        Object_PrimaryID=\"example_id\",\n        Object_SecondaryID=None,\n        DateTime=\"2023-01-02 15:56:30\",\n        Watched_Value1=\"value1\",\n        Watched_Value2=None,\n        Watched_Value3=None,\n        Watched_Value4=None,\n        Extra=\"additional_data\",\n        ForeignKey=None\n    )\n\n    # Write results to the log file\n    plugin_objects.write_result_file()\n\nexcept Exception as e:\n    mylog(\"none\", f\"Error: {e}\")\n    sys.exit(1)\n
"},{"location":"PLUGINS_DEV_QUICK_START/#4-configure-execution","title":"4. Configure Execution","text":"

Edit the RUN and CMD settings in config.json:

{\n  \"function\": \"RUN\",\n  \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\": \"select\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"disabled\",\n  \"options\": [\"disabled\", \"once\", \"schedule\"],\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\":\"en_us\", \"string\": \"When to run\"}],\n  \"description\": [{\"language_code\":\"en_us\", \"string\": \"Enable plugin execution\"}]\n},\n{\n  \"function\": \"CMD\",\n  \"type\": {\"dataType\":\"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n  \"default_value\": \"python3 /app/front/plugins/my_plugin/script.py\",\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\":\"en_us\", \"string\": \"Command\"}],\n  \"description\": [{\"language_code\":\"en_us\", \"string\": \"Command to execute\"}]\n}\n
"},{"location":"PLUGINS_DEV_QUICK_START/#5-test-your-plugin","title":"5. Test Your Plugin","text":""},{"location":"PLUGINS_DEV_QUICK_START/#in-dev-container","title":"In Dev Container","text":"
# Test the script directly\npython3 /workspaces/NetAlertX/front/plugins/my_plugin/script.py\n\n# Check the results\ncat /tmp/log/plugins/last_result.MYPLN.log\n
"},{"location":"PLUGINS_DEV_QUICK_START/#via-ui","title":"Via UI","text":"
  1. Restart backend: Run task [Dev Container] Start Backend (Python)
  2. Open Settings \u2192 Plugin Settings \u2192 My Plugin
  3. Set My Plugin - When to run to once
  4. Click Save
  5. Check /tmp/log/plugins/last_result.MYPLN.log for output
"},{"location":"PLUGINS_DEV_QUICK_START/#6-check-results","title":"6. Check Results","text":"

Verify your plugin is working:

# Check if result file was generated\nls -la /tmp/log/plugins/last_result.MYPLN.log\n\n# View contents\ncat /tmp/log/plugins/last_result.MYPLN.log\n\n# Check backend logs for errors\ntail -f /tmp/log/stdout.log | grep \"my_plugin\\|MYPLN\"\n
"},{"location":"PLUGINS_DEV_QUICK_START/#next-steps","title":"Next Steps","text":"

Now that you have a working basic plugin:

  1. Add Settings: Customize behavior via user-configurable settings (see PLUGINS_DEV_SETTINGS.md)
  2. Implement Data Contract: Structure your output correctly (see PLUGINS_DEV_DATA_CONTRACT.md)
  3. Configure UI: Display plugin results in the web interface (see PLUGINS_DEV_UI_COMPONENTS.md)
  4. Map to Database: Import data into NetAlertX tables like CurrentScan or Devices
  5. Set Schedules: Run your plugin on a schedule (see PLUGINS_DEV_CONFIG.md)
"},{"location":"PLUGINS_DEV_QUICK_START/#common-issues","title":"Common Issues","text":"Issue Solution \"Module not found\" errors Ensure sys.path includes /app/server and /app/front/plugins Settings not appearing Restart backend and clear browser cache Results not showing up Check /tmp/log/plugins/*.log and /tmp/log/stdout.log for errors Permission denied Plugin runs in container, use absolute paths like /app/front/plugins/..."},{"location":"PLUGINS_DEV_QUICK_START/#resources","title":"Resources","text":""},{"location":"PLUGINS_DEV_SETTINGS/","title":"Plugin Settings System","text":"

Learn how to let users configure your plugin via the NetAlertX UI Settings page.

Tip

For the higher-level settings flow and lifecycle, see Settings System Documentation.

"},{"location":"PLUGINS_DEV_SETTINGS/#overview","title":"Overview","text":"

Plugin settings allow users to configure:

All settings are defined in your plugin's config.json file under the \"settings\" array.

"},{"location":"PLUGINS_DEV_SETTINGS/#setting-definition-structure","title":"Setting Definition Structure","text":"

Each setting is a JSON object with required and optional properties:

{\n  \"function\": \"UNIQUE_CODE\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"input\",\n        \"elementOptions\": [],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": \"default_value_here\",\n  \"options\": [],\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [\n    {\n      \"language_code\": \"en_us\",\n      \"string\": \"Display Name\"\n    }\n  ],\n  \"description\": [\n    {\n      \"language_code\": \"en_us\",\n      \"string\": \"Help text describing the setting\"\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#required-properties","title":"Required Properties","text":"Property Type Description Example function string Unique identifier for the setting. Used in manifest and when reading values. See Reserved Function Names for special values \"MY_CUSTOM_SETTING\" type object Defines the UI component and data type See Component Types default_value varies Initial value shown in UI \"https://example.com\" localized array Which properties have translations [\"name\", \"description\"] name array Display name in Settings UI (localized) See Localized Strings description array Help text in Settings UI (localized) See Localized Strings"},{"location":"PLUGINS_DEV_SETTINGS/#optional-properties","title":"Optional Properties","text":"Property Type Description Example options array Valid values for select/checkbox controls [\"option1\", \"option2\"] events string Trigger action button: \"test\" or \"run\" \"test\" for notifications maxLength number Character limit for input fields 100 readonly boolean Make field read-only true override_value object Template-based value override (WIP) Work in Progress"},{"location":"PLUGINS_DEV_SETTINGS/#reserved-function-names","title":"Reserved Function Names","text":"

These function names have special meaning and control core plugin behavior:

"},{"location":"PLUGINS_DEV_SETTINGS/#core-execution-settings","title":"Core Execution Settings","text":"Function Purpose Type Required Options RUN When to execute the plugin select YES \"disabled\", \"once\", \"schedule\", \"always_after_scan\", \"before_name_updates\", \"on_new_device\", \"before_config_save\" RUN_SCHD Cron schedule input If RUN=\"schedule\" Cron format: \"0 * * * *\" (hourly) CMD Command/script to execute input YES Linux command or path to script RUN_TIMEOUT Maximum execution time in seconds input optional Numeric: \"60\", \"120\", etc."},{"location":"PLUGINS_DEV_SETTINGS/#data-filtering-settings","title":"Data & Filtering Settings","text":"Function Purpose Type Required Options WATCH Which columns to monitor for changes multi-select optional Column names from data contract REPORT_ON When to send notifications select optional \"new\", \"watched-changed\", \"watched-not-changed\", \"missing-in-last-scan\" DB_PATH External database path input If using SQLite plugin File path: \"/etc/pihole/pihole-FTL.db\""},{"location":"PLUGINS_DEV_SETTINGS/#api-data-settings","title":"API & Data Settings","text":"Function Purpose Type Required Options API_SQL Generate API JSON file (reserved) Not implemented \u2014"},{"location":"PLUGINS_DEV_SETTINGS/#component-types","title":"Component Types","text":""},{"location":"PLUGINS_DEV_SETTINGS/#text-input","title":"Text Input","text":"

Simple text field for API keys, URLs, thresholds, etc.

{\n  \"function\": \"URL\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"input\",\n        \"elementOptions\": [],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": \"https://api.example.com\",\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"API URL\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"The API endpoint to query\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#password-input","title":"Password Input","text":"

Secure field with SHA256 hashing transformer.

{\n  \"function\": \"API_KEY\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"input\",\n        \"elementOptions\": [{\"type\": \"password\"}],\n        \"transformers\": [\"sha256\"]\n      }\n    ]\n  },\n  \"default_value\": \"\",\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"API Key\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Stored securely with SHA256 hashing\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#dropdownselect","title":"Dropdown/Select","text":"

Choose from predefined options.

{\n  \"function\": \"RUN\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"select\",\n        \"elementOptions\": [],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": \"disabled\",\n  \"options\": [\"disabled\", \"once\", \"schedule\", \"always_after_scan\"],\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"When to run\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Select execution trigger\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#multi-select","title":"Multi-Select","text":"

Select multiple values (returns array).

{\n  \"function\": \"WATCH\",\n  \"type\": {\n    \"dataType\": \"array\",\n    \"elements\": [\n      {\n        \"elementType\": \"select\",\n        \"elementOptions\": [{\"isMultiSelect\": true}],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": [],\n  \"options\": [\"Status\", \"IP_Address\", \"Response_Time\"],\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Watch columns\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Which columns trigger notifications on change\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#checkbox","title":"Checkbox","text":"

Boolean toggle.

{\n  \"function\": \"ENABLED\",\n  \"type\": {\n    \"dataType\": \"boolean\",\n    \"elements\": [\n      {\n        \"elementType\": \"input\",\n        \"elementOptions\": [{\"type\": \"checkbox\"}],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": false,\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Enable feature\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Toggle this feature on/off\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#textarea","title":"Textarea","text":"

Multi-line text input.

{\n  \"function\": \"CUSTOM_CONFIG\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"textarea\",\n        \"elementOptions\": [],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": \"\",\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Custom Configuration\"}],\n  \"description\": [{\"language_code\": \"en_us\", \"string\": \"Enter configuration (one per line)\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#read-only-label","title":"Read-Only Label","text":"

Display information without user input.

{\n  \"function\": \"STATUS_DISPLAY\",\n  \"type\": {\n    \"dataType\": \"string\",\n    \"elements\": [\n      {\n        \"elementType\": \"input\",\n        \"elementOptions\": [{\"readonly\": true}],\n        \"transformers\": []\n      }\n    ]\n  },\n  \"default_value\": \"Ready\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Status\"}]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#using-settings-in-your-script","title":"Using Settings in Your Script","text":""},{"location":"PLUGINS_DEV_SETTINGS/#method-1-via-get_setting_value-helper","title":"Method 1: Via get_setting_value() Helper","text":"

Recommended approach \u2014 clean and simple:

from helper import get_setting_value\n\n# Read the setting by function name with plugin prefix\napi_url = get_setting_value('MYPLN_API_URL')\napi_key = get_setting_value('MYPLN_API_KEY')\nwatch_columns = get_setting_value('MYPLN_WATCH')  # Returns list if multi-select\n\n# Use in your script\nmylog(\"none\", f\"Connecting to {api_url} with key {api_key}\")\n
"},{"location":"PLUGINS_DEV_SETTINGS/#method-2-via-command-parameters","title":"Method 2: Via Command Parameters","text":"

For more complex scenarios where you need to pass settings as command-line arguments:

Define params in your config.json:

{\n  \"params\": [\n    {\n      \"name\": \"api_url\",\n      \"type\": \"setting\",\n      \"value\": \"MYPLN_API_URL\"\n    },\n    {\n      \"name\": \"timeout\",\n      \"type\": \"setting\",\n      \"value\": \"MYPLN_RUN_TIMEOUT\"\n    }\n  ]\n}\n

Update your CMD setting:

{\n  \"function\": \"CMD\",\n  \"default_value\": \"python3 /app/front/plugins/my_plugin/script.py --url={api_url} --timeout={timeout}\"\n}\n

The framework will replace {api_url} and {timeout} with actual values before execution.
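On the script side, those substituted flags can be consumed with argparse. A sketch, where the --url/--timeout flag names simply mirror the hypothetical CMD above:

```python
import argparse

def parse_args(argv=None):
    # Flags correspond to the {api_url} and {timeout} placeholders in CMD
    parser = argparse.ArgumentParser(description="Example plugin arguments")
    parser.add_argument("--url", required=True, help="resolved from MYPLN_API_URL")
    parser.add_argument("--timeout", type=int, default=60, help="resolved from MYPLN_RUN_TIMEOUT")
    return parser.parse_args(argv)
```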

"},{"location":"PLUGINS_DEV_SETTINGS/#method-3-via-environment-variables-check-with-maintainer","title":"Method 3: Via Environment Variables (check with maintainer)","text":"

Settings are also available as environment variables:

# Environment variable format: <PREFIX>_<FUNCTION>\nMYPLN_API_URL\nMYPLN_API_KEY\nMYPLN_RUN\n

In Python:

import os\n\napi_url = os.environ.get('MYPLN_API_URL', 'default_value')\n

"},{"location":"PLUGINS_DEV_SETTINGS/#localized-strings","title":"Localized Strings","text":"

Settings and UI text support multiple languages. Define translations in the name and description arrays:

{\n  \"localized\": [\"name\", \"description\"],\n  \"name\": [\n    {\n      \"language_code\": \"en_us\",\n      \"string\": \"API URL\"\n    },\n    {\n      \"language_code\": \"es_es\",\n      \"string\": \"URL de API\"\n    },\n    {\n      \"language_code\": \"de_de\",\n      \"string\": \"API-URL\"\n    }\n  ],\n  \"description\": [\n    {\n      \"language_code\": \"en_us\",\n      \"string\": \"Enter the API endpoint URL\"\n    },\n    {\n      \"language_code\": \"es_es\",\n      \"string\": \"Ingrese la URL del endpoint de API\"\n    },\n    {\n      \"language_code\": \"de_de\",\n      \"string\": \"Geben Sie die API-Endpunkt-URL ein\"\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#examples","title":"Examples","text":""},{"location":"PLUGINS_DEV_SETTINGS/#example-1-website-monitor-plugin","title":"Example 1: Website Monitor Plugin","text":"
{\n  \"settings\": [\n    {\n      \"function\": \"RUN\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"select\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"disabled\",\n      \"options\": [\"disabled\", \"once\", \"schedule\"],\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"When to run\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Enable website monitoring\"}]\n    },\n    {\n      \"function\": \"RUN_SCHD\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"*/5 * * * *\",\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Schedule\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Cron format (default: every 5 minutes)\"}]\n    },\n    {\n      \"function\": \"CMD\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"python3 /app/front/plugins/website_monitor/script.py urls={urls}\",\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Command\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Command to execute\"}]\n    },\n    {\n      \"function\": \"RUN_TIMEOUT\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"60\",\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Timeout\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Maximum execution time in seconds\"}]\n    },\n    {\n      \"function\": 
\"URLS\",\n      \"type\": {\"dataType\": \"array\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": [\"https://example.com\"],\n      \"maxLength\": 200,\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"URLs to monitor\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"One URL per line\"}]\n    },\n    {\n      \"function\": \"WATCH\",\n      \"type\": {\"dataType\": \"array\", \"elements\": [{\"elementType\": \"select\", \"elementOptions\": [{\"isMultiSelect\": true}], \"transformers\": []}]},\n      \"default_value\": [\"Status_Code\"],\n      \"options\": [\"Status_Code\", \"Response_Time\", \"Certificate_Expiry\"],\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Watch columns\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Which changes trigger notifications\"}]\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#example-2-pihole-integration-plugin","title":"Example 2: PiHole Integration Plugin","text":"
{\n  \"settings\": [\n    {\n      \"function\": \"RUN\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"select\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"disabled\",\n      \"options\": [\"disabled\", \"schedule\"],\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"When to run\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Enable PiHole integration\"}]\n    },\n    {\n      \"function\": \"DB_PATH\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [], \"transformers\": []}]},\n      \"default_value\": \"/etc/pihole/pihole-FTL.db\",\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Database path\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"Path to pihole-FTL.db inside container\"}]\n    },\n    {\n      \"function\": \"API_KEY\",\n      \"type\": {\"dataType\": \"string\", \"elements\": [{\"elementType\": \"input\", \"elementOptions\": [{\"type\": \"password\"}], \"transformers\": [\"sha256\"]}]},\n      \"default_value\": \"\",\n      \"localized\": [\"name\", \"description\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"API Key\"}],\n      \"description\": [{\"language_code\": \"en_us\", \"string\": \"PiHole API key (optional, stored securely)\"}]\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_SETTINGS/#validation-testing","title":"Validation & Testing","text":""},{"location":"PLUGINS_DEV_SETTINGS/#check-settings-are-recognized","title":"Check Settings Are Recognized","text":"

After saving your config.json:

  1. Restart the backend: Run task [Dev Container] Start Backend (Python)
  2. Open Settings page in UI
  3. Navigate to Plugin Settings
  4. Look for your plugin's settings
"},{"location":"PLUGINS_DEV_SETTINGS/#read-setting-values-in-script","title":"Read Setting Values in Script","text":"

Test that values are accessible:

from helper import get_setting_value\n\n# Try to read a setting\nvalue = get_setting_value('MYPLN_API_URL')\nmylog('none', f\"Setting value: {value}\")\n\n# Should print the user-configured value or default\n
"},{"location":"PLUGINS_DEV_SETTINGS/#debug-settings","title":"Debug Settings","text":"

Check backend logs:

tail -f /tmp/log/stdout.log | grep -i \"setting\\|MYPLN\"\n
"},{"location":"PLUGINS_DEV_SETTINGS/#see-also","title":"See Also","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/","title":"Plugin UI Components","text":"

Configure how your plugin's data is displayed in the NetAlertX web interface.

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#overview","title":"Overview","text":"

Plugin results are displayed in the UI via the Plugins page and Device details tabs. You control the appearance and functionality of these displays by defining database_column_definitions in your plugin's config.json.

Each column definition specifies: - Which data field to display - How to render it (label, link, color-coded badge, etc.) - What CSS classes to apply - What transformations to apply (regex, string replacement, etc.)

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#column-definition-structure","title":"Column Definition Structure","text":"
{\n  \"column\": \"Object_PrimaryID\",\n  \"mapped_to_column\": \"devMac\",\n  \"mapped_to_column_data\": null,\n  \"css_classes\": \"col-sm-2\",\n  \"show\": true,\n  \"type\": \"device_mac\",\n  \"default_value\": \"\",\n  \"options\": [],\n  \"options_params\": [],\n  \"localized\": [\"name\"],\n  \"name\": [\n    {\n      \"language_code\": \"en_us\",\n      \"string\": \"MAC Address\"\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#properties","title":"Properties","text":"Property Type Required Description column string YES Source column name from data contract (e.g., Object_PrimaryID, Watched_Value1) mapped_to_column string no Target database column if mapping to a table like CurrentScan mapped_to_column_data object no Static value to map instead of using column data css_classes string no Bootstrap CSS classes for width/spacing (e.g., \"col-sm-2\", \"col-sm-6\") show boolean YES Whether to display in UI (must be true to appear) type string YES How to render the value (see Render Types) default_value varies YES Default if column is empty options array no Options for select/threshold/replace/regex types options_params array no Dynamic options from SQL or settings localized array YES Which properties need translations (e.g., [\"name\", \"description\"]) name array YES Display name in UI (localized strings) description array no Help text in UI (localized strings) maxLength number no Character limit for input fields"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#render-types","title":"Render Types","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#display-only-types","title":"Display-Only Types","text":"

These render as read-only display elements:

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#label","title":"label","text":"

Plain text display (read-only).

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"label\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Status\"}]\n}\n

Output: online

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#device_mac","title":"device_mac","text":"

Renders as a clickable link to the device with the given MAC address.

{\n  \"column\": \"ForeignKey\",\n  \"show\": true,\n  \"type\": \"device_mac\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Device\"}]\n}\n

Input: aa:bb:cc:dd:ee:ff Output: Clickable link to device details page

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#device_ip","title":"device_ip","text":"

Resolves an IP address to a MAC address and creates a device link.

{\n  \"column\": \"Object_SecondaryID\",\n  \"show\": true,\n  \"type\": \"device_ip\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Host\"}]\n}\n

Input: 192.168.1.100 Output: Link to device with that IP (if known)

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#device_name_mac","title":"device_name_mac","text":"

Creates a device link with the target device's name as the link label.

{\n  \"column\": \"Object_PrimaryID\",\n  \"show\": true,\n  \"type\": \"device_name_mac\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Device Name\"}]\n}\n

Input: aa:bb:cc:dd:ee:ff Output: Device name (clickable link to device)

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#url","title":"url","text":"

Renders as a clickable HTTP/HTTPS link.

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"url\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Endpoint\"}]\n}\n

Input: https://example.com/api Output: Clickable link

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#url_http_https","title":"url_http_https","text":"

Creates two links (HTTP and HTTPS) as lock icons for the given IP/hostname.

{\n  \"column\": \"Object_SecondaryID\",\n  \"show\": true,\n  \"type\": \"url_http_https\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Web Links\"}]\n}\n

Input: 192.168.1.50 Output: \ud83d\udd13 HTTP link | \ud83d\udd12 HTTPS link

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#textarea_readonly","title":"textarea_readonly","text":"

Multi-line read-only display with newlines preserved.

{\n  \"column\": \"Extra\",\n  \"show\": true,\n  \"type\": \"textarea_readonly\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Details\"}]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#interactive-types","title":"Interactive Types","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#textbox_save","title":"textbox_save","text":"

User-editable text box that persists changes to the database (typically UserData column).

{\n  \"column\": \"UserData\",\n  \"show\": true,\n  \"type\": \"textbox_save\",\n  \"default_value\": \"\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Notes\"}]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#styledtransformed-types","title":"Styled/Transformed Types","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#label-with-threshold","title":"label with threshold","text":"

Color-codes values based on ranges. Useful for status codes, latency, capacity percentages.

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"threshold\",\n  \"options\": [\n    {\n      \"maximum\": 199,\n      \"hexColor\": \"#792D86\"  // Purple for <199\n    },\n    {\n      \"maximum\": 299,\n      \"hexColor\": \"#5B862D\"  // Green for 200-299\n    },\n    {\n      \"maximum\": 399,\n      \"hexColor\": \"#7D862D\"  // Orange for 300-399\n    },\n    {\n      \"maximum\": 499,\n      \"hexColor\": \"#BF6440\"  // Red-orange for 400-499\n    },\n    {\n      \"maximum\": 999,\n      \"hexColor\": \"#D33115\"  // Dark red for 500+\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"HTTP Status\"}]\n}\n

How it works: the cell value is compared against each option's maximum in ascending order, and the first option whose maximum is greater than or equal to the value determines the color.
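That selection logic can be sketched as follows. This mirrors the HTTP Status example above, but it is an illustrative approximation, not NetAlertX's actual implementation:

```python
# Sketch of "threshold" color selection (assumption: first option whose
# maximum is >= the value wins). Option data mirrors the example above.
OPTIONS = [
    {"maximum": 199, "hexColor": "#792D86"},
    {"maximum": 299, "hexColor": "#5B862D"},
    {"maximum": 399, "hexColor": "#7D862D"},
    {"maximum": 499, "hexColor": "#BF6440"},
    {"maximum": 999, "hexColor": "#D33115"},
]

def pick_color(value, options=OPTIONS):
    """Return the hexColor of the first option whose maximum is >= value."""
    for opt in sorted(options, key=lambda o: o["maximum"]):
        if value <= opt["maximum"]:
            return opt["hexColor"]
    return None  # value exceeds every configured maximum

print(pick_color(200))   # "#5B862D" (200-299 bucket)
```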

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#replace","title":"replace","text":"

Replaces specific values with display strings or HTML.

{\n  \"column\": \"Watched_Value2\",\n  \"show\": true,\n  \"type\": \"replace\",\n  \"options\": [\n    {\n      \"equals\": \"online\",\n      \"replacement\": \"<i class='fa-solid fa-circle' style='color: green;'></i> Online\"\n    },\n    {\n      \"equals\": \"offline\",\n      \"replacement\": \"<i class='fa-solid fa-circle' style='color: red;'></i> Offline\"\n    },\n    {\n      \"equals\": \"idle\",\n      \"replacement\": \"<i class='fa-solid fa-circle' style='color: yellow;'></i> Idle\"\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Status\"}]\n}\n

Output Examples: - \"online\" \u2192 \ud83d\udfe2 Online - \"offline\" \u2192 \ud83d\udd34 Offline - \"idle\" \u2192 \ud83d\udfe1 Idle

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#regex","title":"regex","text":"

Applies a regular expression to extract/transform values.

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"regex\",\n  \"options\": [\n    {\n      \"type\": \"regex\",\n      \"param\": \"([0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3}\\\\.[0-9]{1,3})\"\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"IP Address\"}]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#eval","title":"eval","text":"

Evaluates JavaScript code with access to the column value (use ${value} or {value}).

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"eval\",\n  \"default_value\": \"\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Formatted Value\"}]\n}\n

Example with custom formatting:

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"eval\",\n  \"options\": [\n    {\n      \"type\": \"eval\",\n      \"param\": \"`<b>${value}</b> units`\"\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Value with Units\"}]\n}\n

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#chaining-types","title":"Chaining Types","text":"

You can chain multiple transformations with dot notation:

{\n  \"column\": \"Watched_Value3\",\n  \"show\": true,\n  \"type\": \"regex.url_http_https\",\n  \"options\": [\n    {\n      \"type\": \"regex\",\n      \"param\": \"([\\\\d.:]+)\"  // Extract IP/host\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"HTTP/S Links\"}]\n}\n

Flow:

  1. Apply regex to extract 192.168.1.50 from input
  2. Create HTTP/HTTPS links for that host
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#dynamic-options","title":"Dynamic Options","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#sql-driven-select","title":"SQL-Driven Select","text":"

Use SQL query results to populate dropdown options:

{\n  \"column\": \"Watched_Value2\",\n  \"show\": true,\n  \"type\": \"select\",\n  \"options\": [\"{value}\"],\n  \"options_params\": [\n    {\n      \"name\": \"value\",\n      \"type\": \"sql\",\n      \"value\": \"SELECT devType as id, devType as name FROM Devices UNION SELECT 'Unknown' as id, 'Unknown' as name ORDER BY id\"\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Device Type\"}]\n}\n

The SQL query must return exactly 2 columns: id (the value that is stored) and name (the label that is displayed).

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#setting-driven-select","title":"Setting-Driven Select","text":"

Use plugin settings to populate options:

{\n  \"column\": \"Watched_Value1\",\n  \"show\": true,\n  \"type\": \"select\",\n  \"options\": [\"{value}\"],\n  \"options_params\": [\n    {\n      \"name\": \"value\",\n      \"type\": \"setting\",\n      \"value\": \"MYPLN_AVAILABLE_STATUSES\"\n    }\n  ],\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Status\"}]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#mapping-to-database-tables","title":"Mapping to Database Tables","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#mapping-to-currentscan","title":"Mapping to CurrentScan","text":"

To import plugin data into the device scan pipeline (for notifications, heuristics, etc.):

  1. Add \"mapped_to_table\": \"CurrentScan\" at the root level of config.json
  2. Add \"mapped_to_column\" property to each column definition
{\n  \"code_name\": \"my_device_scanner\",\n  \"unique_prefix\": \"MYSCAN\",\n  \"mapped_to_table\": \"CurrentScan\",\n  \"database_column_definitions\": [\n    {\n      \"column\": \"Object_PrimaryID\",\n      \"mapped_to_column\": \"scanMac\",\n      \"show\": true,\n      \"type\": \"device_mac\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"MAC Address\"}]\n    },\n    {\n      \"column\": \"Object_SecondaryID\",\n      \"mapped_to_column\": \"scanLastIP\",\n      \"show\": true,\n      \"type\": \"device_ip\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"IP Address\"}]\n    },\n    {\n      \"column\": \"NameDoesntMatter\",\n      \"mapped_to_column\": \"scanSourcePlugin\",\n      \"mapped_to_column_data\": {\n        \"value\": \"MYSCAN\"\n      },\n      \"show\": true,\n      \"type\": \"label\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Scan Method\"}]\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#using-static-values","title":"Using Static Values","text":"

Use mapped_to_column_data to map a static value instead of reading from a column:

{\n  \"column\": \"NameDoesntMatter\",\n  \"mapped_to_column\": \"scanSourcePlugin\",\n  \"mapped_to_column_data\": {\n    \"value\": \"MYSCAN\"\n  },\n  \"show\": true,\n  \"type\": \"label\",\n  \"localized\": [\"name\"],\n  \"name\": [{\"language_code\": \"en_us\", \"string\": \"Discovery Method\"}]\n}\n

This always sets scanSourcePlugin to \"MYSCAN\" regardless of column data.

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#filters","title":"Filters","text":"

Control which rows are displayed based on filter conditions. Filters are applied client-side in JavaScript.

{\n  \"data_filters\": [\n    {\n      \"compare_column\": \"Object_PrimaryID\",\n      \"compare_operator\": \"==\",\n      \"compare_field_id\": \"txtMacFilter\",\n      \"compare_js_template\": \"'{value}'.toString()\",\n      \"compare_use_quotes\": true\n    }\n  ]\n}\n
Property Description compare_column The column from plugin results to compare (left side) compare_operator JavaScript operator: ==, !=, <, >, <=, >=, includes, startsWith compare_field_id HTML input field ID containing the filter value (right side) compare_js_template JavaScript template to transform values. Use {value} placeholder compare_use_quotes If true, wrap result in quotes for string comparison

Example: Filter by MAC address

{\n  \"data_filters\": [\n    {\n      \"compare_column\": \"ForeignKey\",\n      \"compare_operator\": \"==\",\n      \"compare_field_id\": \"txtMacFilter\",\n      \"compare_js_template\": \"'{value}'.toString()\",\n      \"compare_use_quotes\": true\n    }\n  ]\n}\n

When viewing a device detail page, the txtMacFilter field is populated with that device's MAC, and only rows where ForeignKey == MAC are shown.
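The comparison itself can be sketched like this. The real evaluation runs client-side in JavaScript, so this Python approximation (with illustrative names) only shows the intent:

```python
# Python approximation of the client-side data-filter check; names and
# semantics are simplified assumptions, not NetAlertX's actual code.
def row_visible(row_value, field_value, operator="=="):
    ops = {
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        "includes": lambda a, b: b in a,
        "startsWith": lambda a, b: a.startswith(b),
    }
    # compare_js_template "'{value}'.toString()" amounts to string coercion:
    return ops[operator](str(row_value), str(field_value))

print(row_visible("AA:BB:CC:11:22:33", "AA:BB:CC:11:22:33"))       # True
print(row_visible("AA:BB:CC:11:22:33", "AA:BB:CC", "startsWith"))  # True
```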

"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#example-complete-column-definitions","title":"Example: Complete Column Definitions","text":"
{\n  \"database_column_definitions\": [\n    {\n      \"column\": \"Object_PrimaryID\",\n      \"mapped_to_column\": \"scanMac\",\n      \"css_classes\": \"col-sm-2\",\n      \"show\": true,\n      \"type\": \"device_mac\",\n      \"default_value\": \"\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"MAC Address\"}]\n    },\n    {\n      \"column\": \"Object_SecondaryID\",\n      \"mapped_to_column\": \"scanLastIP\",\n      \"css_classes\": \"col-sm-2\",\n      \"show\": true,\n      \"type\": \"device_ip\",\n      \"default_value\": \"unknown\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"IP Address\"}]\n    },\n    {\n      \"column\": \"DateTime\",\n      \"css_classes\": \"col-sm-2\",\n      \"show\": true,\n      \"type\": \"label\",\n      \"default_value\": \"\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Last Seen\"}]\n    },\n    {\n      \"column\": \"Watched_Value1\",\n      \"css_classes\": \"col-sm-2\",\n      \"show\": true,\n      \"type\": \"threshold\",\n      \"options\": [\n        {\"maximum\": 199, \"hexColor\": \"#792D86\"},\n        {\"maximum\": 299, \"hexColor\": \"#5B862D\"},\n        {\"maximum\": 399, \"hexColor\": \"#7D862D\"},\n        {\"maximum\": 499, \"hexColor\": \"#BF6440\"},\n        {\"maximum\": 999, \"hexColor\": \"#D33115\"}\n      ],\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"HTTP Status\"}]\n    },\n    {\n      \"column\": \"Watched_Value2\",\n      \"css_classes\": \"col-sm-1\",\n      \"show\": true,\n      \"type\": \"label\",\n      \"default_value\": \"\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Response Time\"}]\n    },\n    {\n      \"column\": \"Extra\",\n      \"css_classes\": \"col-sm-3\",\n      \"show\": true,\n      \"type\": 
\"textarea_readonly\",\n      \"default_value\": \"\",\n      \"localized\": [\"name\"],\n      \"name\": [{\"language_code\": \"en_us\", \"string\": \"Additional Info\"}]\n    }\n  ]\n}\n
"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#css-classes","title":"CSS Classes","text":"

Use Bootstrap grid classes to control column widths in tables:

Class Width Usage col-sm-1 ~8% Very narrow (icons, status) col-sm-2 ~16% Narrow (MACs, IPs) col-sm-3 ~25% Medium (names, URLs) col-sm-4 ~33% Medium-wide (descriptions) col-sm-6 ~50% Wide (large content)"},{"location":"PLUGINS_DEV_UI_COMPONENTS/#validation-checklist","title":"Validation Checklist","text":""},{"location":"PLUGINS_DEV_UI_COMPONENTS/#see-also","title":"See Also","text":""},{"location":"PUID_PGID_SECURITY/","title":"PUID/PGID Security \u2014 Why the entrypoint requires numeric IDs","text":""},{"location":"PUID_PGID_SECURITY/#purpose","title":"Purpose","text":"

This short document explains the security rationale behind the root-priming entrypoint's validation of runtime user IDs (PUID) and group IDs (PGID). The validation is intentionally strict and is a safety measure to prevent environment-variable-based command injection when running as root during the initial priming stage.
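As an illustration of the kind of check involved: the actual entrypoint is a shell script, but this Python sketch shows why strict numeric validation blocks injection attempts:

```python
import re

# Illustrative sketch only; the real entrypoint is a shell script and the
# function name here is an assumption.
def validate_id(value: str) -> int:
    """Accept only plain non-negative integers, rejecting anything that
    could smuggle shell metacharacters (e.g. "1000;rm -rf /")."""
    if not re.fullmatch(r"[0-9]+", value):
        raise ValueError(f"invalid PUID/PGID: {value!r}")
    return int(value)

print(validate_id("1000"))   # 1000
# validate_id("1000;id") raises ValueError instead of reaching a shell
```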

"},{"location":"PUID_PGID_SECURITY/#key-points","title":"Key points","text":""},{"location":"PUID_PGID_SECURITY/#behavior-on-malformed-input","title":"Behavior on malformed input","text":""},{"location":"PUID_PGID_SECURITY/#operator-guidance","title":"Operator guidance","text":""},{"location":"PUID_PGID_SECURITY/#related-docs","title":"Related docs","text":"

Document created to clarify the security behavior of the root-priming entrypoint (PUID/PGID validation).

"},{"location":"RANDOM_MAC/","title":"Privacy & Random MAC's","text":"

Some operating systems randomize MAC addresses to improve privacy.

This functionality hides the device's real MAC address by assigning a random MAC each time it connects to a Wi-Fi network.

This behavior is especially useful when connecting to unfamiliar Wi-Fi networks, but it serves no purpose on your own or other trusted networks.

I recommend disabling this on-device functionality when connecting devices to your own Wi-Fi networks. That way, NetAlertX can reliably identify the device instead of registering it as a new device every time iOS or Android randomizes the MAC.

Random MACs are recognized by the characters "2", "6", "A", or "E" as the 2nd character in the MAC address. You can exclude specific prefixes from random-MAC detection by specifying the UI_NOT_RANDOM_MAC setting.
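The detection rule above can be expressed in a few lines. This is an illustrative check, not NetAlertX's actual code:

```python
# A "random" (locally administered) MAC has 2, 6, A, or E as its second
# hex character. Illustrative sketch, not NetAlertX's implementation.
def is_random_mac(mac: str) -> bool:
    return mac[1].upper() in {"2", "6", "A", "E"}

print(is_random_mac("DA:16:9C:72:11:F0"))  # True  (second char is "A")
print(is_random_mac("D8:16:9C:72:11:F0"))  # False
```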

"},{"location":"RANDOM_MAC/#windows","title":"Windows","text":""},{"location":"RANDOM_MAC/#ios","title":"IOS","text":""},{"location":"RANDOM_MAC/#android","title":"Android","text":""},{"location":"REMOTE_NETWORKS/","title":"Scanning Remote or Inaccessible Networks","text":"

By design, local network scanners such as arp-scan use ARP (Address Resolution Protocol) to map IP addresses to MAC addresses on the local network. Since ARP operates at Layer 2 (Data Link Layer), it typically works only within a single broadcast domain, usually limited to a single router or network segment.

Note

Ping and ARPSCAN use different protocols, so even if you can ping devices it doesn't mean ARPSCAN can detect them.

To scan multiple locally accessible network segments, add them as subnets according to the subnets documentation. If ARPSCAN is not suitable for your setup, read on.

"},{"location":"REMOTE_NETWORKS/#complex-use-cases","title":"Complex Use Cases","text":"

The following network setups might make some devices undetectable with ARPSCAN. Review the relevant setup below to understand the cause and find potential workarounds for reporting on these devices.

"},{"location":"REMOTE_NETWORKS/#wi-fi-extenders","title":"Wi-Fi Extenders","text":"

Wi-Fi extenders often block or proxy Layer-2 broadcast traffic, which can prevent network scanning tools like arp-scan from detecting devices behind the extender. This can happen even when the extender uses the same SSID and the same IP subnet as the main network.

Note that being able to ping a device does not mean it is discoverable via arp-scan; this is why devices behind extenders may respond to ping yet remain invisible to ARP-based scans.

Possible workaround: If the extender uses a separate subnet, scan that subnet directly. Otherwise, use DHCP-based discovery plugins or router integration instead of ARP. See the Other Workarounds section below for more details.

"},{"location":"REMOTE_NETWORKS/#vpns","title":"VPNs","text":"

ARP operates at Layer 2 (Data Link Layer) and works only within a local area network (LAN). VPNs, which operate at Layer 3 (Network Layer), route traffic between networks, preventing ARP requests from discovering devices outside the local network.

VPNs use virtual interfaces (e.g., tun0, tap0) to encapsulate traffic, bypassing ARP-based discovery. Additionally, many VPNs use NAT, which masks individual devices behind a shared IP address.

Possible workaround: Configure the VPN to bridge networks instead of routing to enable ARP, though this depends on the VPN setup and security requirements.

"},{"location":"REMOTE_NETWORKS/#other-workarounds","title":"Other Workarounds","text":"

The following workarounds should work for most complex network setups.

"},{"location":"REMOTE_NETWORKS/#supplementing-plugins","title":"Supplementing Plugins","text":"

You can use supplementary plugins that employ alternate methods. Protocols used by the SNMPDSC or DHCPLSS plugins are widely supported on different routers and can be effective as workarounds. Check the plugins list to find a plugin that works with your router and network setup.

"},{"location":"REMOTE_NETWORKS/#multiple-netalertx-instances","title":"Multiple NetAlertX Instances","text":"

If you have servers in different networks, you can set up separate NetAlertX instances on those subnets and synchronize the results into one instance using the SYNC plugin.

"},{"location":"REMOTE_NETWORKS/#manual-entry","title":"Manual Entry","text":"

If you don't need to discover new devices and only need to report on their status (online, offline, down), you can manually enter devices and check their status using the ICMP plugin, which uses the ping command internally.

For more information on how to add devices manually (or dummy devices), refer to the Device Management documentation.

To create truly dummy devices, you can use an unroutable or loopback IP address (e.g., 0.0.0.0 or 127.0.0.1), or use the Force Status field so they appear online.

"},{"location":"REMOTE_NETWORKS/#nmap-and-fake-mac-addresses","title":"NMAP and Fake MAC Addresses","text":"

Scanning remote networks with NMAP is possible (via the NMAPDEV plugin), but since it cannot retrieve the MAC address, you need to enable the NMAPDEV_FAKE_MAC setting. This will generate a fake MAC address based on the IP address, allowing you to track devices. However, this can lead to inconsistencies, especially if the IP address changes or a previously logged device is rediscovered. If this setting is disabled, only the IP address will be discovered, and devices with missing MAC addresses will be skipped.

Check the NMAPDEV plugin for details

"},{"location":"REVERSE_DNS/","title":"Reverse DNS","text":""},{"location":"REVERSE_DNS/#setting-up-better-name-discovery-with-reverse-dns","title":"Setting up better name discovery with Reverse DNS","text":"

If you are running a DNS server, such as AdGuard, set up Private reverse DNS servers for better name resolution on your network. Enabling this setting allows NetAlertX to execute dig and nslookup commands to automatically resolve device names based on their IP addresses.

Tip

Before proceeding, ensure that name resolution plugins are enabled. You can customize how names are cleaned using the NEWDEV_NAME_CLEANUP_REGEX setting. To auto-update Fully Qualified Domain Names (FQDN), enable the REFRESH_FQDN setting.

Example 1: Reverse DNS disabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58\n** server can't find 58.1.168.192.in-addr.arpa: NXDOMAIN\n

Example 2: Reverse DNS enabled

jokob@Synology-NAS:/$ nslookup 192.168.1.58\n58.1.168.192.in-addr.arpa       name = jokob-NUC.localdomain.\n
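The in-addr.arpa name that nslookup queries is simply the IP address with its octets reversed. The Python standard library can construct it:

```python
import ipaddress

# Build the reverse-DNS (PTR) query name for an IPv4 address,
# matching the nslookup examples above.
def reverse_pointer(ip: str) -> str:
    return ipaddress.ip_address(ip).reverse_pointer

print(reverse_pointer("192.168.1.58"))  # 58.1.168.192.in-addr.arpa
```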
"},{"location":"REVERSE_DNS/#enabling-reverse-dns-in-adguard","title":"Enabling reverse DNS in AdGuard","text":"
  1. Navigate to Settings -> DNS Settings
  2. Locate Private reverse DNS servers
  3. Enter your router IP address, such as 192.168.1.1
  4. Make sure you have Use private reverse DNS resolvers ticked.
  5. Click Apply to save your settings.
"},{"location":"REVERSE_DNS/#specifying-the-dns-in-the-container","title":"Specifying the DNS in the container","text":"

You can specify the DNS server in the docker-compose to improve name resolution on your network.

services:\n  netalertx:\n    container_name: netalertx\n    image: \"ghcr.io/netalertx/netalertx:latest\"\n...\n    dns:           # specifying the DNS servers used for the container\n      - 10.8.0.1\n      - 10.8.0.17\n
"},{"location":"REVERSE_DNS/#using-a-custom-resolvconf-file","title":"Using a custom resolv.conf file","text":"

You can configure a custom /etc/resolv.conf file in docker-compose.yml and set the nameserver to your LAN DNS server (e.g.: Pi-Hole). See the relevant resolv.conf man entry for details.

"},{"location":"REVERSE_DNS/#docker-composeyml","title":"docker-compose.yml:","text":"
services:\n  netalertx:\n    container_name: netalertx\n    volumes:\n...\n      - /local_data_dir/config/resolv.conf:/etc/resolv.conf                          # \u26a0 Mapping the /resolv.conf file for better name resolution\n...\n
"},{"location":"REVERSE_DNS/#local_data_dirconfigresolvconf","title":"/local_data_dir/config/resolv.conf:","text":"

The most important below is the nameserver entry (you can add multiple):

nameserver 192.168.178.11\noptions edns0 trust-ad\nsearch example.com\n
"},{"location":"REVERSE_PROXY/","title":"Reverse Proxy Configuration","text":"

A reverse proxy is a server that sits between users and your NetAlertX instance. It allows you to: - Access NetAlertX via a domain name (e.g., https://netalertx.example.com). - Add HTTPS/SSL encryption. - Enforce authentication (like SSO).

flowchart LR\n  Browser --HTTPS--> Proxy[Reverse Proxy] --HTTP--> Container[NetAlertX Container]
"},{"location":"REVERSE_PROXY/#netalertx-ports","title":"NetAlertX Ports","text":"

NetAlertX exposes two ports that serve different purposes. Your reverse proxy can target one or both, depending on your needs.

Port Service Purpose 20211 Nginx (Web UI) The main interface. 20212 Backend API Direct access to the API and GraphQL. Includes API docs you can view with a browser.

Warning

Do not document or use /server as an external API endpoint. It is an internal route used by the Nginx frontend to communicate with the backend.

"},{"location":"REVERSE_PROXY/#connection-patterns","title":"Connection Patterns","text":""},{"location":"REVERSE_PROXY/#1-default-no-proxy","title":"1. Default (No Proxy)","text":"

For local testing or LAN access. The browser accesses the UI on port 20211; the API and its docs are accessible on 20212.

flowchart LR\n  B[Browser]\n  subgraph NAC[NetAlertX Container]\n    N[Nginx listening on port 20211]\n    A[Service on port 20212]\n    N -->|Proxy /server to localhost:20212| A\n  end\n  B -->|port 20211| NAC\n  B -->|port 20212| NAC
"},{"location":"REVERSE_PROXY/#2-direct-api-consumer-not-recommended","title":"2. Direct API Consumer (Not Recommended)","text":"

Connecting directly to the backend API port (20212).

Caution

This exposes the API directly to the network without additional protection. Avoid this on untrusted networks.

flowchart LR\n  B[Browser] -->|HTTPS| S[Any API Consumer app]\n  subgraph NAC[NetAlertX Container]\n    N[Nginx listening on port 20211]\n    N -->|Proxy /server to localhost:20212| A[Service on port 20212]\n  end\n  S -->|Port 20212| NAC
"},{"location":"REVERSE_PROXY/#3-recommended-reverse-proxy-to-web-ui","title":"3. Recommended: Reverse Proxy to Web UI","text":"

Using a reverse proxy (Nginx, Traefik, Caddy, etc.) to handle HTTPS and Auth in front of the main UI.

flowchart LR\n  B[Browser] -->|HTTPS| S[Any Auth/SSL proxy]\n  subgraph NAC[NetAlertX Container]\n    N[Nginx listening on port 20211]\n    N -->|Proxy /server to localhost:20212| A[Service on port 20212]\n  end\n  S -->|port 20211| NAC
"},{"location":"REVERSE_PROXY/#4-recommended-proxied-api-consumer","title":"4. Recommended: Proxied API Consumer","text":"

Using a proxy to secure API access with TLS or IP limiting.

Why is this important? The backend API (:20212) is powerful\u2014more so than the Web UI, which is a safer, password-protectable interface. By using a reverse proxy to limit sources (e.g., allowing only your Home Assistant server's IP), you ensure that only trusted devices can talk to your backend.
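Such source limiting can be sketched in Nginx as follows. The listen port, backend IP, and allowed client IP below are placeholders, not NetAlertX defaults:

```nginx
# Sketch: expose the backend API only to one trusted host.
server {
    listen 20213;

    allow 192.168.1.20;   # e.g. your Home Assistant server (placeholder IP)
    deny  all;            # everyone else is refused

    location / {
        proxy_pass http://192.168.1.10:20212;   # NetAlertX backend API
    }
}
```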

flowchart LR\n  B[Browser] -->|HTTPS| S[Any API Consumer app]\n  C[HTTPS/source-limiting Proxy]\n  subgraph NAC[NetAlertX Container]\n    N[Nginx listening on port 20211]\n    N -->|Proxy /server to localhost:20212| A[Service on port 20212]\n  end\n  S -->|HTTPS| C\n  C -->|Port 20212| NAC
"},{"location":"REVERSE_PROXY/#getting-started-nginx-proxy-manager","title":"Getting Started: Nginx Proxy Manager","text":"

For beginners, we recommend Nginx Proxy Manager. It provides a user-friendly interface to manage proxy hosts and free SSL certificates via Let's Encrypt.

  1. Install Nginx Proxy Manager alongside NetAlertX.
  2. Create a Proxy Host pointing to your NetAlertX IP and Port 20211 for the Web UI.
  3. (Optional) Create a second host for the API on Port 20212.

"},{"location":"REVERSE_PROXY/#configuration-settings","title":"Configuration Settings","text":"

When using a reverse proxy, you should verify two settings in Settings > Core > General:

  1. BACKEND_API_URL: This should be set to /server. Reason: the frontend should communicate with the backend via the internal Nginx proxy rather than routing out to the internet and back.

  2. REPORT_DASHBOARD_URL: Set this to your external proxy URL (e.g., https://netalertx.example.com). Reason: this URL is used to generate proper clickable links in emails and HTML reports.

"},{"location":"REVERSE_PROXY/#other-reverse-proxies","title":"Other Reverse Proxies","text":"

NetAlertX uses standard HTTP. Any reverse proxy will work. Simply forward traffic to the appropriate port (20211 or 20212).
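As a minimal sketch, an Nginx server block terminating HTTPS and forwarding to the Web UI might look like this. The hostname, certificate paths, and backend IP are placeholders:

```nginx
# Minimal reverse-proxy sketch for the NetAlertX Web UI (port 20211).
server {
    listen 443 ssl;
    server_name netalertx.example.com;            # placeholder hostname

    ssl_certificate     /etc/ssl/certs/netalertx.pem;     # placeholder path
    ssl_certificate_key /etc/ssl/private/netalertx.key;   # placeholder path

    location / {
        proxy_pass http://192.168.1.10:20211;     # NetAlertX Web UI
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```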

For configuration details, consult the documentation for your preferred proxy:

"},{"location":"REVERSE_PROXY/#authentication","title":"Authentication","text":"

If you wish to add Single Sign-On (SSO) or other authentication in front of NetAlertX, refer to the documentation for your identity provider:

"},{"location":"REVERSE_PROXY/#further-reading","title":"Further Reading","text":"

If you want to understand more about reverse proxies and networking concepts:

"},{"location":"SECURITY/","title":"Security Considerations","text":""},{"location":"SECURITY/#responsibility-disclaimer","title":"\ud83e\udded Responsibility Disclaimer","text":"

NetAlertX provides powerful tools for network scanning, presence detection, and automation. However, it is up to you\u2014the deployer\u2014to ensure that your instance is properly secured.

This includes (but is not limited to): - Controlling who has access to the UI and API - Following network and container security best practices - Running NetAlertX only on networks where you have legal authorization - Keeping your deployment up to date with the latest patches

NetAlertX is not responsible for misuse, misconfiguration, or insecure deployments. Always test and secure your setup before exposing it to the outside world.

"},{"location":"SECURITY/#securing-your-netalertx-instance","title":"\ud83d\udd10 Securing Your NetAlertX Instance","text":"

NetAlertX is a powerful network scanning and automation framework. With that power comes responsibility. It is your responsibility to secure your deployment, especially if you're running it outside a trusted local environment.

"},{"location":"SECURITY/#tldr-key-security-recommendations","title":"\u26a0\ufe0f TL;DR \u2013 Key Security Recommendations","text":""},{"location":"SECURITY/#access-control-with-vpn-or-tailscale","title":"\ud83d\udd17 Access Control with VPN (or Tailscale)","text":"

NetAlertX is designed to be run on private LANs, not the open internet.

Recommended: Use a VPN to access NetAlertX from remote locations.

"},{"location":"SECURITY/#tailscale-easy-vpn-alternative","title":"\u2705 Tailscale (Easy VPN Alternative)","text":"

Tailscale sets up a private mesh network between your devices. It's fast to configure and ideal for NetAlertX. \ud83d\udc49 Get started with Tailscale

"},{"location":"SECURITY/#web-ui-password-protection","title":"\ud83d\udd11 Web UI Password Protection","text":"

By default, NetAlertX does not require login. Before exposing the UI in any way:

  1. Enable password protection:

    SETPWD_enable_password=true\nSETPWD_password=your_secure_password\n

  2. Passwords are stored as SHA256 hashes

  3. Default password (if not changed): 123456 \u2014 change it ASAP!

To disable authenticated login, set SETPWD_enable_password=false in app.conf
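Since passwords are stored as SHA256 hashes, the stored value can be reproduced like this. This is a sketch; the exact encoding NetAlertX applies is an assumption:

```python
import hashlib

def hash_password(pwd: str) -> str:
    # SHA256 hex digest of the UTF-8 encoded password
    # (the exact encoding NetAlertX uses is an assumption).
    return hashlib.sha256(pwd.encode("utf-8")).hexdigest()

print(hash_password("123456"))
```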

"},{"location":"SECURITY/#additional-security-measures","title":"\ud83d\udd25 Additional Security Measures","text":""},{"location":"SECURITY/#docker-hardening-tips","title":"\ud83e\uddf1 Docker Hardening Tips","text":""},{"location":"SECURITY/#responsible-disclosure","title":"\ud83d\udce3 Responsible Disclosure","text":"

If you discover a vulnerability or security concern, please report it privately to:

\ud83d\udce7 jokob@duck.com

We take security seriously and will work to patch confirmed issues promptly. Your help in responsible disclosure is appreciated!

By following these recommendations, you can ensure your NetAlertX deployment is both powerful and secure.

"},{"location":"SECURITY_FEATURES/","title":"NetAlertX Security: A Layered Defense","text":"

Your network security monitor has the \"keys to the kingdom,\" making it a prime target for attackers. If it gets compromised, the game is over.

NetAlertX is engineered from the ground up to prevent this. It's not just an app; it's a purpose-built security appliance. Its core design is built on a zero-trust philosophy, which is a modern way of saying we assume a breach will happen and plan for it. This isn't a single \"lock on the door\"; it's a \"defense-in-depth\" strategy, more like a medieval castle with a moat, high walls, and guards at every door.

Here\u2019s a breakdown of the defensive layers you get, right out of the box using the default configuration.

"},{"location":"SECURITY_FEATURES/#feature-1-the-digital-concrete-filesystem","title":"Feature 1: The \"Digital Concrete\" Filesystem","text":"

Methodology: The core application and its system files are treated as immutable. Once built, the app's code is \"set in concrete,\" preventing attackers from modifying it or planting malware.

What's this mean to you: Even if an attacker gets in, they cannot modify the application code or plant malware. It's like the app is set in digital concrete.

"},{"location":"SECURITY_FEATURES/#feature-2-surgical-keycard-only-access","title":"Feature 2: Surgical, \"Keycard-Only\" Access","text":"

Methodology: The principle of least privilege is strictly enforced. Every process gets only the absolute minimum set of permissions needed for its specific job.

What's this mean to you: A security breach is firewalled. An attacker who gets into the web UI does not have the \"keycard\" to start scanning your network or take over the system. The breach is contained.

"},{"location":"SECURITY_FEATURES/#feature-3-attack-surface-amputation","title":"Feature 3: Attack Surface \"Amputation\"","text":"

Methodology: The potential attack surface is aggressively minimized by removing every non-essential tool an attacker would want to use.

What's this mean to you: An attacker who breaks in finds themselves in an empty room with no tools. They have no sudo to get more power, no package manager to download weapons, and no compilers to build new ones.

"},{"location":"SECURITY_FEATURES/#feature-4-self-cleaning-writable-areas","title":"Feature 4: \"Self-Cleaning\" Writable Areas","text":"

Methodology: All writable locations are treated as untrusted, temporary, and non-executable by default.

What's this mean to you: Any malicious file an attacker does manage to drop is written in invisible, non-permanent ink. The file is written to RAM, not disk, so it vaporizes the instant you restart the container. Even worse for them, the noexec flag means they can't even run the file in the first place.

"},{"location":"SECURITY_FEATURES/#feature-5-built-in-resource-guardrails","title":"Feature 5: Built-in Resource Guardrails","text":"

Methodology: The container is constrained by resource limits to function as a \"good citizen\" on the host system. This prevents a compromised or runaway process from consuming excessive resources, a common vector for Denial of Service (DoS) attacks.

What's this mean to you: NetAlertX is a \"good neighbor\" and can't be used to crash your host machine. Even if a process is compromised, it's in a digital straitjacket and cannot pull a \"denial of service\" attack by hogging all your CPU or memory.

"},{"location":"SECURITY_FEATURES/#feature-6-the-pre-flight-self-check","title":"Feature 6: The \"Pre-Flight\" Self-Check","text":"

Methodology: Before any services start, NetAlertX runs a comprehensive \"pre-flight\" check to ensure its own security and configuration are sound. It's like a built-in auditor who verifies its own defenses.

What's this mean to you: The system is self-aware and checks its own work. You get instant feedback if a setting is wrong, and you get peace of mind on every single boot knowing all these security layers are active and verified, all in about one second.

"},{"location":"SECURITY_FEATURES/#conclusion-security-by-default","title":"Conclusion: Security by Default","text":"

No single security control is a silver bullet. The robust security posture of NetAlertX is achieved through defense in depth, layering these methodologies.

An adversary must not only gain initial access but must also find a way to write a payload to a non-executable, in-memory location, without access to any standard system tools, sudo, or a package manager. And they must do this while operating as an unprivileged user in a resource-limited environment where the application code is immutable and actively checks its own integrity on every boot.

"},{"location":"SESSION_INFO/","title":"Sessions Section \u2013 Device View","text":"

The Sessions Section shows a device\u2019s connection history. All data is automatically detected and cannot be edited.

"},{"location":"SESSION_INFO/#key-fields","title":"Key Fields","text":"
| Field | Description | Editable? |
|---|---|---|
| First Connection | The first time the device was detected on the network. | \u274c Auto-detected |
| Last Connection | The most recent time the device was online. | \u274c Auto-detected |
"},{"location":"SESSION_INFO/#how-session-information-works","title":"How Session Information Works","text":""},{"location":"SESSION_INFO/#1-detecting-new-devices","title":"1. Detecting New Devices","text":""},{"location":"SESSION_INFO/#2-recording-connection-sessions","title":"2. Recording Connection Sessions","text":""},{"location":"SESSION_INFO/#3-handling-missing-or-conflicting-data","title":"3. Handling Missing or Conflicting Data","text":""},{"location":"SESSION_INFO/#4-updating-sessions","title":"4. Updating Sessions","text":"

This session information feeds directly into Monitoring \u2192 Presence, providing a live view of which devices are currently online.

"},{"location":"SETTINGS_SYSTEM/","title":"Settings","text":""},{"location":"SETTINGS_SYSTEM/#setting-system","title":"\u2699 Setting system","text":"

This is an explanation of how settings are handled, intended for anyone thinking about writing their own plugin or contributing to the project.

If you are a user of the app, each setting has a detailed description in the Settings section of the app. Open an issue if you'd like to clarify any of the settings.

"},{"location":"SETTINGS_SYSTEM/#data-storage","title":"\ud83d\udee2 Data storage","text":"

The source of truth for user-defined values is the app.conf file. When the file is edited, the App overwrites the values in the Settings database table and in the table_settings.json file.

"},{"location":"SETTINGS_SYSTEM/#settings-database-table","title":"Settings database table","text":"

The Settings database table contains settings for App run purposes. The table is recreated every time the App restarts, and the settings are loaded from the source of truth, the app.conf file. A high-level overview of the database structure can be found in the database documentation.

"},{"location":"SETTINGS_SYSTEM/#table_settingsjson","title":"table_settings.json","text":"

This is the API endpoint that reflects the state of the Settings database table. Settings can be accessed with the:

The JSON file is also cached in the browser's client-side local storage.

"},{"location":"SETTINGS_SYSTEM/#appconf","title":"app.conf","text":"

Note

This is the source of truth for settings. User-defined values in this file always override default values specified in the Plugin definition.

The App generates two app.conf entries for every setting (since version 23.8). One entry is the setting value; the second is the __metadata associated with the setting. This __metadata entry contains the full setting definition in JSON format. It is currently unused, but is intended to extend the Settings system in the future.
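As an illustration, a single setting could therefore appear as a pair of entries similar to the following (a hypothetical sketch; the exact key naming and the JSON fields of the __metadata entry come from the setting's definition and may differ):

```ini
# The setting value itself
SMTP_PORT=465
# The associated __metadata entry holding the full setting definition in JSON format (abbreviated)
SMTP_PORT__metadata={"function": "SMTP_PORT", "type": "integer.number", "default_value": 587}
```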

"},{"location":"SETTINGS_SYSTEM/#plugin-settings","title":"Plugin settings","text":"

Note

This is the preferred way of adding settings going forward. I'll likely be migrating all app settings into plugin-based settings.

Plugin settings are loaded dynamically from the config.json of individual plugins. If a setting isn't defined in the app.conf file, it is initialized via the default_value property of a setting from the config.json file. Check the Plugins documentation, section \u2699 Setting object structure for details on the structure of the setting.
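The fallback described above can be sketched in a few lines of Python (a simplified illustration, not the actual NetAlertX code; the function name and the 'function'/'default_value' keys mirror the documented setting object structure but are assumptions here):

```python
def resolve_setting(name, app_conf, plugin_settings):
    """Return the user-defined value from app.conf if present,
    otherwise fall back to the plugin's default_value."""
    if name in app_conf:  # user-defined values always win
        return app_conf[name]
    for setting in plugin_settings:  # definitions from the plugin's config.json
        if setting['function'] == name:
            return setting['default_value']
    raise KeyError(f'Setting {name} is not defined anywhere')

# Example with made-up data:
app_conf = {'SMTP_PORT': 465}
plugin_settings = [
    {'function': 'SMTP_PORT', 'default_value': 587},
    {'function': 'SMTP_SERVER', 'default_value': 'localhost'},
]

print(resolve_setting('SMTP_PORT', app_conf, plugin_settings))    # 465 (user override)
print(resolve_setting('SMTP_SERVER', app_conf, plugin_settings))  # localhost (default)
```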

"},{"location":"SETTINGS_SYSTEM/#settings-process-flow","title":"Settings Process flow","text":"

The process flow is mostly managed by the initialise.py file.

The script is responsible for reading user-defined values from a configuration file (app.conf), initializing settings, and importing them into a database. It also handles plugins and their configurations.

Here's a high-level description of the code:

  1. Function Definitions:
     - ccd: This function handles user-defined settings and configurations. It takes several parameters related to the setting's name, default value, input type, options, group, and more, and saves the settings and their metadata in different lists (conf.mySettingsSQLsafe and conf.mySettings).
     - importConfigs: This function is the main entry point of the script. It imports user settings from a configuration file, processes them, and saves them to the database.
     - read_config_file: This function reads the configuration file (app.conf) and returns a dictionary containing the key-value pairs from the file.

  2. Importing Configuration and Initializing Settings:
     - importConfigs starts by checking the modification time of the configuration file to determine if it needs to be re-imported. If the file has not been modified since the last import, the import is skipped.
     - The function reads the configuration file using read_config_file, which returns a dictionary of settings.
     - The script then initializes the user-defined settings using the ccd function, based on the values read from the configuration file. These settings are categorized into groups such as \"General,\" \"Email,\" \"Webhooks,\" \"Apprise,\" and more.

  3. Plugin Handling:
     - The script loads and handles plugins dynamically. It retrieves plugin configurations and iterates through each plugin.
     - For each plugin, it extracts the prefix and the settings related to that plugin and processes them like other user-defined settings.
     - It also handles scheduling for plugins with specific RUN_SCHD settings.

  4. Saving Settings to the Database:
     - The script clears the existing settings in the database and inserts the updated settings using SQL queries.

  5. Updating the API and Performing Cleanup:
     - After importing the configurations, the script updates the API to reflect the changes in the settings.
     - It saves the current timestamp to determine the next import time.
     - Finally, it logs the successful import of the new configuration.
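The import flow above can be condensed into a short sketch (a deliberately simplified stand-in for initialise.py, not the real implementation; the parsing rules and function names are approximations):

```python
import ast
import os
import tempfile

def read_config_file(path):
    # Parse simple KEY=value lines into a dict; values are Python literals
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            settings[key.strip()] = ast.literal_eval(value.strip())
    return settings

def import_configs(path, last_import_time):
    # Skip the import entirely if app.conf has not changed since the last run
    if os.path.getmtime(path) <= last_import_time:
        return None
    return read_config_file(path)

# Demo with a throwaway config file:
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write("SMTP_PORT=465\nSMTP_SERVER='smtp.gmail.com'\n")
print(read_config_file(f.name))  # {'SMTP_PORT': 465, 'SMTP_SERVER': 'smtp.gmail.com'}
```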
"},{"location":"SMTP/","title":"\ud83d\udce7 SMTP server guides","text":"

The SMTP plugin supports any SMTP server. Here are some commonly used services to help speed up your configuration.

Note

If you are using a self-hosted SMTP server, ssh into the container and verify (e.g. via ping) that your server is reachable from within the NetAlertX container. See also how to ssh into the container if you are running it as a Home Assistant addon.

"},{"location":"SMTP/#gmail","title":"Gmail","text":"
  1. Create an app password by following the instructions from Google (you need to enable 2FA for this to work): https://support.google.com/accounts/answer/185833

  2. Specify the following settings:

    SMTP_RUN='on_notification'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True \n    SMTP_PORT=465\n    SMTP_SERVER='smtp.gmail.com'\n    SMTP_PASS='16-digit passcode from google'\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SMTP/#brevo","title":"Brevo","text":"

Brevo allows 300 free emails per day at the time of writing.

  1. Create an account on Brevo: https://www.brevo.com/free-smtp-server/
  2. Click your name -> SMTP & API
  3. Click Generate a new SMTP key
  4. Save the details and fill in the NetAlertX settings as below.
SMTP_SERVER='smtp-relay.brevo.com'\nSMTP_PORT=587\nSMTP_SKIP_LOGIN=False\nSMTP_USER='user@email.com'\nSMTP_PASS='xsmtpsib-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxx'\nSMTP_SKIP_TLS=False\nSMTP_FORCE_SSL=False\nSMTP_REPORT_TO='some_target_email@gmail.com'\nSMTP_REPORT_FROM='NetAlertX <user@email.com>'\n
"},{"location":"SMTP/#gmx","title":"GMX","text":"
  1. Go to your GMX account https://account.gmx.com
  2. Under Security Options enable 2FA (Two-factor authentication)
  3. Under Security Options generate an Application-specific password
  4. Home -> Email Settings -> POP3 & IMAP -> Enable access to this account via POP3 and IMAP
  5. In NetAlertX specify these settings:
    SMTP_RUN='on_notification'\n    SMTP_SERVER='mail.gmx.com'\n    SMTP_PORT=465\n    SMTP_USER='gmx_email@gmx.com'\n    SMTP_PASS='<your Application-specific password>'\n    SMTP_SKIP_TLS=True\n    SMTP_FORCE_SSL=True\n    SMTP_SKIP_LOGIN=False\n    SMTP_REPORT_FROM='gmx_email@gmx.com' # this has to be the same email as in SMTP_USER\n    SMTP_REPORT_TO='some_target_email@gmail.com'\n
"},{"location":"SUBNETS/","title":"Subnets Configuration","text":"

You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).

ARPSCAN can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.

Warning

If you don't see all expected devices, run the following command in the NetAlertX container (replace the interface and IP mask): sudo arp-scan --interface=eth0 192.168.1.0/24

If this command returns no results, the network is not accessible due to network or firewall restrictions (Wi-Fi extenders, VPNs, and inaccessible networks). If direct scans are not possible, check the remote networks documentation for workarounds.

"},{"location":"SUBNETS/#example-values","title":"Example Values","text":"

Note

Please use the UI to configure settings as it ensures the config file is in the correct format. Edit app.conf directly only when really necessary.

Tip

When adding more subnets, you may need to increase both the scan interval (ARPSCAN_RUN_SCHD) and the timeout (ARPSCAN_RUN_TIMEOUT)\u2014as well as similar settings for related plugins.

If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.

To fix this:

- Reduce the subnet size (e.g., change /16 to /24).
- Increase the timeout (e.g., set ARPSCAN_RUN_TIMEOUT to 300 for a 5-minute timeout).
- Extend the scan interval (e.g., set ARPSCAN_RUN_SCHD to */10 * * * * to scan every 10 minutes).

For more troubleshooting tips, see Debugging Plugins.

"},{"location":"SUBNETS/#explanation","title":"Explanation","text":""},{"location":"SUBNETS/#network-mask","title":"Network Mask","text":"

Example value: 192.168.1.0/24

The arp-scan time itself depends on the number of IP addresses to check.

The number of IPs to check depends on the network mask you set in the SCAN_SUBNETS setting. For example, a /24 mask results in 256 IPs to check, whereas a /16 mask checks around 65,536 IPs. Each IP takes a couple of seconds, so an incorrect configuration could make arp-scan take hours instead of seconds.

Specify the network filter, which significantly speeds up the scan process. For example, the filter 192.168.1.0/24 covers IP ranges from 192.168.1.0 to 192.168.1.255.
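The arithmetic above is easy to check with Python's standard ipaddress module:

```python
import ipaddress

for cidr in ('192.168.1.0/24', '192.168.0.0/16'):
    net = ipaddress.ip_network(cidr)
    # num_addresses includes the network and broadcast addresses
    print(cidr, '->', net.num_addresses, 'addresses,', net[0], 'to', net[-1])
# 192.168.1.0/24 -> 256 addresses, 192.168.1.0 to 192.168.1.255
# 192.168.0.0/16 -> 65536 addresses, 192.168.0.0 to 192.168.255.255
```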

"},{"location":"SUBNETS/#network-interface-adapter","title":"Network Interface (Adapter)","text":"

Example value: --interface=eth0

The adapter will probably be eth0 or eth1. (Check System Info > Network Hardware, or run iwconfig in the container to find your interface name(s)).

Tip

As an alternative to iwconfig, run ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}' in your container to find your interface name(s) (e.g.: eth0, eth1):

Synology-NAS:/# ip -o link show | awk -F': ' '!/lo|vir|docker/ {print $2}'\nsit0@NONE\neth1\neth0\n

"},{"location":"SUBNETS/#vlans","title":"VLANs","text":"

Example value: --vlan=107

"},{"location":"SUBNETS/#vlans-on-a-hyper-v-setup","title":"VLANs on a Hyper-V Setup","text":"

Community-sourced content by mscreations from this discussion.

Tested Setup: Bare Metal \u2192 Hyper-V on Win Server 2019 \u2192 Ubuntu 22.04 VM \u2192 Docker \u2192 NetAlertX.

Approach 1 (may cause issues): Configure multiple network adapters in Hyper-V, with a distinct VLAN connected to each one, using Hyper-V's network setup. However, this can leave the Docker host unable to handle network traffic correctly and might interfere with other applications such as Authentik.

Approach 2 (working example):

Network connections to switches are configured as trunk and allow all VLANs access to the server.

By default, Hyper-V only allows untagged packets through to the VM interface, blocking VLAN-tagged packets. To fix this, follow these steps:

  1. Run the following command in PowerShell on the Hyper-V machine:
Set-VMNetworkAdapterVlan -VMName <Docker VM Name> -Trunk -NativeVlanId 0 -AllowedVlanIdList \"<comma separated list of vlans>\"\n
  2. Within the VM, set up sub-interfaces for each VLAN to enable scanning. On Ubuntu 22.04, Netplan can be used. In /etc/netplan/00-installer-config.yaml, add VLAN definitions:
    network:\n      ethernets:\n        eth0:\n          dhcp4: yes\n      vlans:\n        eth0.2:\n          id: 2\n          link: eth0\n          addresses: [ \"192.168.2.2/24\" ]\n          routes:\n            - to: 192.168.2.0/24\n              via: 192.168.1.1\n
  3. Run sudo netplan apply to activate the interfaces for scanning in NetAlertX.

In this case, use 192.168.2.0/24 --interface=eth0.2 in NetAlertX.

"},{"location":"SUBNETS/#vlan-support-exceptions","title":"VLAN Support & Exceptions","text":"

Please note that macvlan interfaces are generally not accessible from the host they are configured on. This is general networking behavior, but feel free to clarify via a PR/issue.

"},{"location":"SYNOLOGY_GUIDE/","title":"Installation on a Synology NAS","text":"

There are different ways to install NetAlertX on a Synology, including SSH-ing into the machine and using the command line. For this guide, we will use the Project option in Container manager.

"},{"location":"SYNOLOGY_GUIDE/#create-the-folder-structure","title":"Create the folder structure","text":"

The folders you are creating below will contain the configuration and the database. Back them up regularly.

  1. Create a parent folder named netalertx
  2. Create a db sub-folder

  3. Create a config sub-folder

  4. Note down the folders' locations:

"},{"location":"SYNOLOGY_GUIDE/#creating-the-project","title":"Creating the Project","text":"
  1. Open Container manager -> Project and click Create.
  2. Fill in the details:

  3. Project name: netalertx

  4. Path: /app_storage/netalertx (will differ from yours)
  5. Paste in the following template:
services:\n  netalertx:\n    container_name: netalertx\n    # use the below line if you want to test the latest dev image\n    # image: \"ghcr.io/netalertx/netalertx-dev:latest\"\n    image: \"ghcr.io/netalertx/netalertx:latest\"\n    network_mode: \"host\"\n    restart: unless-stopped\n    cap_drop:       # Drop all capabilities for enhanced security\n      - ALL\n    cap_add:        # Re-add necessary capabilities\n      - NET_RAW\n      - NET_ADMIN\n      - NET_BIND_SERVICE\n      - CHOWN\n      - SETUID\n      - SETGID\n    volumes:\n      - /app_storage/netalertx:/data\n      # to sync with system time\n      - /etc/localtime:/etc/localtime:ro\n    tmpfs:\n      # All writable runtime state resides under /tmp; comment out to persist logs between restarts\n      - \"/tmp:uid=20211,gid=20211,mode=1700,rw,noexec,nosuid,nodev,async,noatime,nodiratime\"\n    environment:\n      - PORT=20211\n

  6. Replace the paths to your volume and comment out unnecessary line(s).

This is only an example, your paths will differ.

volumes:\n      - /volume1/app_storage/netalertx:/data\n

  7. (optional) Change the port number from 20211 to an unused port if this port is already used.
  8. Build the project:

  9. Navigate to <Synology URL>:20211 (or your custom port).
  10. Read the Subnets and Plugins docs to complete your setup.
"},{"location":"SYNOLOGY_GUIDE/#solving-permission-issues","title":"Solving permission issues","text":"

See also the Permission overview guide.

"},{"location":"SYNOLOGY_GUIDE/#configuring-the-permissions-via-ssh","title":"Configuring the permissions via SSH","text":"

Tip

If you are facing permission issues, run the following commands on your server. This will change the owner and ensure sufficient access to the database and config files stored in the /local_data_dir/db and /local_data_dir/config folders (replace local_data_dir with the location where your /db and /config folders are located).

sudo chown -R 20211:20211 /local_data_dir

sudo chmod -R a+rwx /local_data_dir

"},{"location":"SYNOLOGY_GUIDE/#configuring-the-permissions-via-the-synology-ui","title":"Configuring the permissions via the Synology UI","text":"

You can also execute the above bash commands via the UI by creating a one-off scheduled task.

  1. Control panel -> Task Scheduler
  2. Create -> Scheduled Task -> User-defined Script

  3. Give your task a name.

  4. Specify a one-off execution time (e.g. 5 minutes from now).

  5. Paste the commands from the above SSH section and replace /local_data_dir with the parent folder of your /db and /config folders.

  6. Wait until the execution time passes and verify the new ownership.

In case of issues, double-check the Permission overview guide.

"},{"location":"UPDATES/","title":"Docker Update Strategies to upgrade NetAlertX","text":"

Warning

For versions prior to v25.6.7, upgrade to version v25.5.24 first (docker pull ghcr.io/jokob-sk/netalertx:25.5.24), as later versions don't support a full upgrade. Alternatively, devices and settings can be migrated manually, e.g. via CSV import. See the Migration guide for details.

This guide outlines approaches for updating Docker containers, usually when upgrading to a newer version of NetAlertX. Each method offers different benefits depending on the situation. Here are the methods:

You can choose any approach that fits your workflow.

In the examples I assume that the container name is netalertx and the image name is netalertx as well.

Note

See also Backup strategies to be on the safe side.

"},{"location":"UPDATES/#1-manual-updates","title":"1. Manual Updates","text":"

Use this method when you need precise control over a single container or when dealing with a broken container that needs immediate attention.

Example Commands

To manually update the netalertx container, stop it, delete it, remove the old image, and start a fresh one with docker-compose.

# Stop the container\nsudo docker container stop netalertx\n\n# Remove the container\nsudo docker container rm netalertx\n\n# Remove the old image\nsudo docker image rm netalertx\n\n# Pull and start a new container\nsudo docker-compose up -d\n
"},{"location":"UPDATES/#alternative-force-pull-with-docker-compose","title":"Alternative: Force Pull with Docker Compose","text":"

You can also use --pull always to ensure Docker pulls the latest image before starting the container:

sudo docker-compose up --pull always -d\n
"},{"location":"UPDATES/#2-dockcheck-for-bulk-container-updates","title":"2. Dockcheck for Bulk Container Updates","text":"

Always check the Dockcheck docs if encountering issues with the guide below.

Dockcheck is a useful tool if you have multiple containers to update and some flexibility for handling potential issues that might arise during mass updates. Dockcheck allows you to inspect each container and decide when to update.

"},{"location":"UPDATES/#example-workflow-with-dockcheck","title":"Example Workflow with Dockcheck","text":"

You might use Dockcheck to:

Dockcheck can help streamline bulk updates, especially if you\u2019re managing multiple containers.

Below is a script I use to run an update of the Dockcheck script and start a check for new containers:

cd /path/to/Docker &&\nrm dockcheck.sh &&\nwget https://raw.githubusercontent.com/mag37/dockcheck/main/dockcheck.sh &&\nsudo chmod +x dockcheck.sh &&\nsudo ./dockcheck.sh\n
"},{"location":"UPDATES/#3-automated-updates-with-watchtower","title":"3. Automated Updates with Watchtower","text":"

Always check the watchtower docs if encountering issues with the guide below.

Watchtower monitors your Docker containers and automatically updates them when new images are available. This is ideal for ongoing updates without manual intervention.

"},{"location":"UPDATES/#setting-up-watchtower","title":"Setting Up Watchtower","text":""},{"location":"UPDATES/#1-pull-the-watchtower-image","title":"1. Pull the Watchtower Image:","text":"
docker pull containrrr/watchtower\n
"},{"location":"UPDATES/#2-run-watchtower-to-update-all-images","title":"2. Run Watchtower to update all images:","text":"
docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower \\\n  --interval 300 # Check for updates every 5 minutes\n
"},{"location":"UPDATES/#3-run-watchtower-to-update-only-netalertx","title":"3. Run Watchtower to update only NetAlertX:","text":"

You can specify which containers to monitor by listing them. For example, to monitor netalertx only:

docker run -d \\\n  --name watchtower \\\n  -v /var/run/docker.sock:/var/run/docker.sock \\\n  containrrr/watchtower netalertx\n
"},{"location":"UPDATES/#4-portainer-controlled-image","title":"4. Portainer controlled image","text":"

This assumes you're using Portainer to manage Docker (or Docker Swarm) and want to pull the latest version of an image and redeploy the container.

Note

"},{"location":"UPDATES/#41-steps-to-update-an-image-in-portainer-standalone-docker","title":"4.1 Steps to Update an Image in Portainer (Standalone Docker)","text":"
  1. Login to Portainer.
  2. Go to \"Containers\" in the left sidebar.
  3. Find the container you want to update, click its name.
  4. Click \"Recreate\" (top right).
  5. Tick: Pull latest image (this ensures Portainer fetches the newest version from Docker Hub or your registry).
  6. Click \"Recreate\" again.
  7. Wait for the container to be stopped, removed, and recreated with the updated image.
"},{"location":"UPDATES/#42-for-docker-swarm-services","title":"4.2 For Docker Swarm Services","text":"

If you're using Docker Swarm (under \"Stacks\" or \"Services\"):

  1. Go to \"Stacks\".
  2. Select the stack managing the container.
  3. Click \"Editor\" (or \"Update the Stack\").
  4. Add a version tag or use :latest if your image tag is latest (not recommended for production).
  5. Click \"Update the Stack\". \u26a0 Portainer will not pull the new image unless the tag changes OR the stack is forced to recreate.
  6. If the image tag hasn't changed, go to \"Services\", find the service, and click \"Force Update\".
"},{"location":"UPDATES/#summary","title":"Summary","text":"
| Method | Type | Pros | Cons |
|---|---|---|---|
| Manual | CLI | Full control, no dependencies | Tedious for many containers |
| Dockcheck | CLI Script | Great for batch updates | Needs setup, semi-automated |
| Watchtower | Daemonized | Fully automated updates | Less control, version drift |
| Portainer | UI | Easy via web interface | No auto-updates |

These approaches allow you to maintain flexibility in how you update Docker containers, depending on the urgency and scale of the update.

"},{"location":"VERSIONS/","title":"Versions","text":""},{"location":"VERSIONS/#am-i-running-the-latest-released-version","title":"Am I running the latest released version?","text":"

Since version 23.01.14 NetAlertX uses a simple timestamp-based version check to verify if a new version is available. You can check the current and past releases here, or have a look at what I'm currently working on.

If you are not on the latest version, the app will notify you that a new version is available in the following ways:

"},{"location":"VERSIONS/#via-email-on-a-notification-event","title":"\ud83d\udce7 Via email on a notification event","text":"

If any notification occurs and an email is sent, the email will contain a note that a new version is available. See the sample email below:

"},{"location":"VERSIONS/#in-the-ui","title":"\ud83c\udd95 In the UI","text":"

In the UI via a notification icon and via a custom message in the Maintenance section.

For comparison, this is how the UI looks if you are on the latest stable image:

"},{"location":"VERSIONS/#implementation-details","title":"Implementation details","text":"

During the build, a /app/front/buildtimestamp.txt file is created. The app then periodically checks GitHub's REST-based JSON endpoint for a release with a newer timestamp (check the def isNewVersion: method for details).
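The comparison itself can be sketched as follows (a simplified illustration, not the actual isNewVersion implementation; the sample payload only assumes GitHub's standard published_at release field, and the values are hypothetical):

```python
from datetime import datetime, timezone

def is_new_version(build_time: datetime, release_json: dict) -> bool:
    # A newer release exists if it was published after the local image was built
    published = datetime.fromisoformat(
        release_json['published_at'].replace('Z', '+00:00'))
    return published > build_time

# Hypothetical release payload and build time:
release = {'tag_name': 'v25.6.7', 'published_at': '2025-06-07T00:00:00Z'}
build_time = datetime(2025, 5, 24, tzinfo=timezone.utc)
print(is_new_version(build_time, release))  # True
```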

"},{"location":"WEBHOOK_N8N/","title":"Webhooks (n8n)","text":""},{"location":"WEBHOOK_N8N/#create-a-simple-n8n-workflow","title":"Create a simple n8n workflow","text":"

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

n8n can be used for more advanced conditional notification use cases. For example, you may only want to be notified if two out of a specified list of devices are down, or you can use other plugins to process the notifications further. Below is a simple example of sending an email on a webhook.

"},{"location":"WEBHOOK_N8N/#specify-your-email-template","title":"Specify your email template","text":"

See sample JSON if you want to see the JSON paths used in the email template below

Events count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"events\"].length }}\nNew devices count: {{ $json[\"body\"][\"attachments\"][0][\"text\"][\"new_devices\"].length }}\n
"},{"location":"WEBHOOK_N8N/#get-your-webhook-in-n8n","title":"Get your webhook in n8n","text":""},{"location":"WEBHOOK_N8N/#configure-netalertx-to-point-to-the-above-url","title":"Configure NetAlertX to point to the above URL","text":""},{"location":"WEBHOOK_SECRET/","title":"Webhook Secrets","text":"

Note

This is community-contributed. Due to environment, setup, or networking differences, results may vary. Please open a PR to improve it instead of creating an issue, as the maintainer is not actively maintaining it.

Note

You need to enable the WEBHOOK plugin first in order to follow this guide. See the Plugins guide for details.

"},{"location":"WEBHOOK_SECRET/#how-does-the-signing-work","title":"How does the signing work?","text":"

NetAlertX will use the configured secret to create a hash signature of the request body. This SHA256-HMAC signature will appear in the X-Webhook-Signature header of each request to the webhook target URL. You can use the value of this header to validate the request was sent by NetAlertX.

"},{"location":"WEBHOOK_SECRET/#activating-webhook-signatures","title":"Activating webhook signatures","text":"

All you need to do in order to add a signature to the request headers is to set the WEBHOOK_SECRET config value to a non-empty string.

"},{"location":"WEBHOOK_SECRET/#validating-webhook-deliveries","title":"Validating webhook deliveries","text":"

There are a few things to keep in mind when validating the webhook delivery:

"},{"location":"WEBHOOK_SECRET/#testing-the-webhook-payload-validation","title":"Testing the webhook payload validation","text":"

You can use the following secret and payload to verify that your implementation is working correctly.

secret: 'this is my secret'

payload: '{\"test\":\"this is a test body\"}'

If your implementation is correct, the signature you generated should match the following:

signature: bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9

X-Webhook-Signature: sha256=bed21fcc34f98e94fd71c7edb75e51a544b4a3b38b069ebaaeb19bf4be8147e9
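A minimal Python verifier for this scheme, using only the standard library (a sketch; you can plug in the sample secret and payload above and compare the result against the documented digest):

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    # Recompute the SHA256-HMAC of the body and compare it to the header value
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    received = signature_header.removeprefix('sha256=')  # header format: sha256=<hex>
    return hmac.compare_digest(expected, received)

secret = 'this is my secret'
payload = b'{"test":"this is a test body"}'
header = 'sha256=' + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

print(verify_signature(secret, payload, header))               # True
print(verify_signature(secret, b'{"tampered":true}', header))  # False
```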

"},{"location":"WEBHOOK_SECRET/#more-information","title":"More information","text":"

If you want to learn more about webhook security, take a look at GitHub's webhook documentation.

You can find examples for validating a webhook delivery here.

"},{"location":"WEB_UI_PORT_DEBUG/","title":"Debugging inaccessible UI","text":"

The application uses the following default ports:

The Web UI is served by an nginx server, while the API backend runs on a Flask (Python) server.

"},{"location":"WEB_UI_PORT_DEBUG/#changing-ports","title":"Changing Ports","text":"

For more information, check the Docker installation guide.

"},{"location":"WEB_UI_PORT_DEBUG/#possible-issues-and-troubleshooting","title":"Possible issues and troubleshooting","text":"

Follow all of the steps below to rule out potential causes of issues and troubleshoot problems faster.

"},{"location":"WEB_UI_PORT_DEBUG/#1-port-conflicts","title":"1. Port conflicts","text":"

When opening an issue or debugging:

  1. Include a screenshot of what you see when accessing HTTP://<your_server>:20211 (or your custom port)
  2. Follow steps 1, 2, 3, 4 on this page
  3. Execute the following in the container to see the processes and their ports, and submit a screenshot of the result:
     - sudo apk add lsof
     - sudo lsof -i
  4. Try running the nginx command in the container:
     - If you get nginx: [emerg] bind() to 0.0.0.0:20211 failed (98: Address in use), try using a different port number.

"},{"location":"WEB_UI_PORT_DEBUG/#2-javascript-issues","title":"2. JavaScript issues","text":"

Check for browser console (F12 browser dev console) errors + check different browsers.

"},{"location":"WEB_UI_PORT_DEBUG/#3-clear-the-app-cache-and-cached-javascript-files","title":"3. Clear the app cache and cached JavaScript files","text":"

Refresh the browser cache (usually Shift + refresh), try a private window, or different browsers. Please also refresh the app cache by clicking the \ud83d\udd03 (reload) button in the header of the application.

"},{"location":"WEB_UI_PORT_DEBUG/#4-disable-proxies","title":"4. Disable proxies","text":"

If you have any reverse proxy or similar, try disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#5-disable-your-firewall","title":"5. Disable your firewall","text":"

If you are using a firewall, try temporarily disabling it.

"},{"location":"WEB_UI_PORT_DEBUG/#6-post-your-docker-start-details","title":"6. Post your docker start details","text":"

If you haven't, post your docker compose/run command.

"},{"location":"WEB_UI_PORT_DEBUG/#7-check-for-errors-in-your-phpnginx-error-logs","title":"7. Check for errors in your PHP/NGINX error logs","text":"

In the container execute and investigate:

cat /var/log/nginx/error.log

cat /tmp/log/app.php_errors.log

"},{"location":"WEB_UI_PORT_DEBUG/#8-make-sure-permissions-are-correct","title":"8. Make sure permissions are correct","text":"

Tip

You can try to start the container without mapping the /data/config and /data/db dirs and if the UI shows up then the issue is most likely related to your file system permissions or file ownership.

Please read the Permissions troubleshooting guide and provide a screenshot of the permissions and ownership in the /data/db and /data/config directories.

"},{"location":"WORKFLOWS/","title":"Workflows Overview","text":"

The workflows module allows you to automate repetitive tasks, making network management more efficient. Whether you need to assign newly discovered devices to a specific Network Node, auto-group devices from a given vendor, unarchive a device when it is detected online, or automatically delete devices, this module provides the flexibility to tailor the automations to your needs.

Below are a few examples that demonstrate how this module can be used to simplify network management tasks.

"},{"location":"WORKFLOWS/#updating-workflows","title":"Updating Workflows","text":"

Note

In order to apply a workflow change, you must first Save the changes and then reload the application by clicking Restart server.

"},{"location":"WORKFLOWS/#workflow-components","title":"Workflow components","text":""},{"location":"WORKFLOWS/#triggers","title":"Triggers","text":"

Triggers define the event that activates a workflow. They monitor changes to objects within the system, such as updates to devices or the insertion of new entries. When the specified event occurs, the workflow is executed.

Tip

Workflows not running? Check the Workflows debugging guide to learn how to troubleshoot triggers and conditions.

"},{"location":"WORKFLOWS/#example-trigger","title":"Example Trigger:","text":"

This trigger will activate when a Device object is updated.
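In the saved workflow JSON (as shown on the Workflow Examples page), this trigger is expressed as:

```json
"trigger": {
  "object_type": "Devices",
  "event_type": "update"
}
```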

"},{"location":"WORKFLOWS/#conditions","title":"Conditions","text":"

Conditions determine whether a workflow should proceed based on certain criteria. These criteria can be set for specific fields, such as whether a device is from a certain vendor, or whether it is new or archived. You can combine conditions using logical operators (AND, OR).
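To build intuition for how AND/OR groups combine, here is a small, hypothetical evaluator for a nested condition tree. This is an illustrative sketch, not NetAlertX's actual implementation; only the JSON shape (logic, conditions, field, operator, value) mirrors the workflow examples below.

```python
def evaluate(node, device):
    """Recursively evaluate a workflow condition tree against a device dict."""
    # A condition group: {"logic": "AND"|"OR", "conditions": [...]}
    if "logic" in node:
        results = [evaluate(child, device) for child in node["conditions"]]
        return all(results) if node["logic"] == "AND" else any(results)
    # A leaf condition: {"field": ..., "operator": ..., "value": ...}
    actual = device.get(node["field"], "")
    if node["operator"] == "equals":
        return actual == node["value"]
    if node["operator"] == "contains":
        return node["value"] in actual
    raise ValueError(f"unknown operator: {node['operator']}")
```

With a device record like `{"devVendor": "Google LLC", "devIsNew": "1"}`, an AND group requiring vendor "contains Google" and `devIsNew` equals "1" evaluates to true.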

Tip

To better understand how to use specific Device fields, please read through the Database overview guide.

"},{"location":"WORKFLOWS/#example-condition","title":"Example Condition:","text":"

This condition checks if the device's vendor is Google. The workflow will only proceed if the condition is true.
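Expressed in workflow JSON (the contains operator is used because vendor strings often include suffixes, e.g. "Google LLC"):

```json
{
  "field": "devVendor",
  "operator": "contains",
  "value": "Google"
}
```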

"},{"location":"WORKFLOWS/#actions","title":"Actions","text":"

Actions define the tasks that the workflow will perform once the conditions are met. Actions can include updating fields or deleting devices.

You can include multiple actions that should execute once the conditions are met.

"},{"location":"WORKFLOWS/#example-action","title":"Example Action:","text":"

This action updates the devIsNew field to 0, marking the device as no longer new.
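In workflow JSON this action looks like:

```json
{
  "type": "update_field",
  "field": "devIsNew",
  "value": "0"
}
```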

"},{"location":"WORKFLOWS/#examples","title":"Examples","text":"

You can find a couple of configuration examples in Workflow Examples.

Tip

Share your workflows in Discord or GitHub Discussions.

"},{"location":"WORKFLOWS_DEBUGGING/","title":"Workflows debugging and troubleshooting","text":"

Tip

Before troubleshooting, please ensure you have the right Debugging and LOG_LEVEL set.

Workflows are triggered by various events. These events are captured and listed in the Integrations -> App Events section of the application.

"},{"location":"WORKFLOWS_DEBUGGING/#troubleshooting-triggers","title":"Troubleshooting triggers","text":"

Note

Workflow events are processed once every 5 seconds. However, if a scan or other background task is running, processing can be delayed by up to a few minutes.

If an event doesn't trigger a workflow as expected, check the App Events section for the event. You can filter these by the ID of the device (devMAC or devGUID).

Once you find the Event GUID and Object GUID, use them to locate the relevant debug entries.

Navigate to Maintenance -> Logs, where you can filter the logs by the Event or Object GUID.

Below you can find some example app.log entries that will help you understand why a Workflow was or was not triggered.

16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Sample Device Update Workflow'\n16:27:03 [WF] self.triggered 'False' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"insert\"}'\n16:27:03 [WF] Checking if '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggers the workflow 'Location Change'\n16:27:03 [WF] self.triggered 'True' for event '[[155], ['13f0ce26-1835-4c48-ae03-cdaf38f328fe'], [0], ['2025-04-02 05:26:56'], ['Devices'], ['050b6980-7af6-4409-950d-08e9786b7b33'], ['DEVICES'], ['00:11:32:ef:a5:6c'], ['192.168.1.82'], ['050b6980-7af6-4409-950d-08e9786b7b33'], [None], [0], [0], ['devPresentLastScan'], ['online'], ['update'], [None], [None], [None], [None]] and trigger {\"object_type\": \"Devices\", \"event_type\": \"update\"}'\n16:27:03 [WF] Event with GUID '13f0ce26-1835-4c48-ae03-cdaf38f328fe' triggered the workflow 'Location Change'\n

Note how one trigger executed while the other didn't, based on their different "event_type" values: one is "event_type": "insert", the other "event_type": "update".

Given the Event is an update event (note ...['online'], ['update'], [None]... in the event structure), the "event_type": "insert" trigger didn't execute.

"},{"location":"WORKFLOW_EXAMPLES/","title":"Workflow examples","text":"

Workflows in NetAlertX automate actions based on real-time events and conditions. Below are practical examples that demonstrate how to build automation using triggers, conditions, and actions.

"},{"location":"WORKFLOW_EXAMPLES/#example-1-un-archive-devices-if-detected-online","title":"Example 1: Un-archive devices if detected online","text":"

This workflow automatically unarchives a device if it was previously archived but has now been detected as online.

"},{"location":"WORKFLOW_EXAMPLES/#use-case","title":"\ud83d\udccb Use Case","text":"

Sometimes devices are manually archived (e.g., no longer expected on the network), but they reappear unexpectedly. This workflow reverses the archive status when such devices are detected during a scan.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Un-archive devices if detected online\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devIsArchived\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        },\n        {\n          \"field\": \"devPresentLastScan\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsArchived\",\n      \"value\": \"0\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation","title":"\ud83d\udd0d Explanation","text":"
- Trigger: Listens for updates to device records.\n- Conditions:\n    - `devIsArchived` is `1` (archived).\n    - `devPresentLastScan` is `1` (device was detected in the latest scan).\n- Action: Updates the device to set `devIsArchived` to `0` (unarchived).\n
"},{"location":"WORKFLOW_EXAMPLES/#result","title":"\u2705 Result","text":"

Whenever a previously archived device shows up during a network scan, it will be automatically unarchived \u2014 allowing it to reappear in your device lists and dashboards.


"},{"location":"WORKFLOW_EXAMPLES/#example-2-assign-device-to-network-node-based-on-ip","title":"Example 2: Assign Device to Network Node Based on IP","text":"

This workflow assigns newly added devices with IP addresses in the 192.168.1.* range to a specific network node with MAC address 6c:6d:6d:6c:6c:6c.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_1","title":"\ud83d\udccb Use Case","text":"

When new devices join your network, assigning them to the correct network node is important for accurate topology and grouping. This workflow ensures devices in a specific subnet are automatically linked to the intended node.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_1","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Assign Device to Network Node Based on IP\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"insert\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devLastIP\",\n          \"operator\": \"contains\",\n          \"value\": \"192.168.1.\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devNetworkNode\",\n      \"value\": \"6c:6d:6d:6c:6c:6c\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_1","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_1","title":"\u2705 Result","text":"

New devices with IPs in the 192.168.1.* subnet are automatically assigned to the correct network node, streamlining device organization and reducing manual work.

"},{"location":"WORKFLOW_EXAMPLES/#example-3-mark-device-as-not-new-and-delete-if-from-google-vendor","title":"Example 3: Mark Device as Not New and Delete If from Google Vendor","text":"

This workflow automatically marks newly detected Google devices as not new and deletes them immediately.

"},{"location":"WORKFLOW_EXAMPLES/#use-case_2","title":"\ud83d\udccb Use Case","text":"

You may want to automatically clear out newly detected Google devices (such as Chromecast or Google Home) if they\u2019re not needed in your device database. This workflow handles that clean-up automatically.

"},{"location":"WORKFLOW_EXAMPLES/#workflow-configuration_2","title":"\u2699\ufe0f Workflow Configuration","text":"
{\n  \"name\": \"Mark Device as Not New and Delete If from Google Vendor\",\n  \"trigger\": {\n    \"object_type\": \"Devices\",\n    \"event_type\": \"update\"\n  },\n  \"conditions\": [\n    {\n      \"logic\": \"AND\",\n      \"conditions\": [\n        {\n          \"field\": \"devVendor\",\n          \"operator\": \"contains\",\n          \"value\": \"Google\"\n        },\n        {\n          \"field\": \"devIsNew\",\n          \"operator\": \"equals\",\n          \"value\": \"1\"\n        }\n      ]\n    }\n  ],\n  \"actions\": [\n    {\n      \"type\": \"update_field\",\n      \"field\": \"devIsNew\",\n      \"value\": \"0\"\n    },\n    {\n      \"type\": \"delete_device\"\n    }\n  ],\n  \"enabled\": \"Yes\"\n}\n
"},{"location":"WORKFLOW_EXAMPLES/#explanation_2","title":"\ud83d\udd0d Explanation","text":""},{"location":"WORKFLOW_EXAMPLES/#result_2","title":"\u2705 Result","text":"

Any newly detected Google devices are cleaned up instantly \u2014 first marked as not new, then deleted \u2014 helping you avoid clutter in your device records.

"},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/","title":"PUID/PGID Security \u2014 Why the entrypoint requires numeric IDs","text":""},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/#purpose","title":"Purpose","text":"

This short document explains the security rationale behind the root-priming entrypoint's validation of runtime user IDs (PUID) and group IDs (PGID). The validation is intentionally strict and is a safety measure to prevent environment-variable-based command injection when running as root during the initial priming stage.

"},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/#key-points","title":"Key points","text":""},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/#behavior-on-malformed-input","title":"Behavior on malformed input","text":""},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/#operator-guidance","title":"Operator guidance","text":""},{"location":"docker-troubleshooting/PUID_PGID_SECURITY/#required-capabilities-for-privilege-drop","title":"Required Capabilities for Privilege Drop","text":"

If you are hardening your container by dropping capabilities (e.g., cap_drop: [ALL]), you must explicitly grant the SETUID and SETGID capabilities.

cap_drop:\n  - ALL\ncap_add:\n  - SETUID\n  - SETGID\n  # ... other required caps like CHOWN, NET_ADMIN, etc.\n

Document created to clarify the security behavior of the root-priming entrypoint (PUID/PGID validation).

"},{"location":"docker-troubleshooting/aufs-capabilities/","title":"AUFS Legacy Storage Driver Support","text":""},{"location":"docker-troubleshooting/aufs-capabilities/#issue-description","title":"Issue Description","text":"

NetAlertX automatically detects the legacy aufs storage driver, which is commonly found on older Synology NAS devices (DSM 6.x/7.0.x) or Linux systems where the underlying filesystem lacks d_type support. This occurs on older ext4 and other filesystems that did not support capabilities at the time they were last formatted. While current ext4 supports capabilities and filesystem overlays, older variants did not and require a reformat to enable that support. On old variants Docker falls back to aufs, while newer ones may use overlayfs.

The Technical Limitation: AUFS (Another Union File System) does not support or preserve extended file attributes (xattrs) during Docker image extraction. NetAlertX relies on these attributes to grant granular privileges (CAP_NET_RAW and CAP_NET_ADMIN) to network scanning binaries like arp-scan, nmap, and nbtscan.

The Result: When the container runs as a standard non-root user (default) on AUFS, these binaries are stripped of their capabilities. Consequently, layer-2 network discovery will fail silently, find zero devices, or exit with \"Operation not permitted\" errors.

"},{"location":"docker-troubleshooting/aufs-capabilities/#operational-logic","title":"Operational Logic","text":"

The container is designed to inspect the runtime environment at startup (/root-entrypoint.sh). It respects user configuration first, falling back to safe defaults (with warnings) where necessary.

Behavior Matrix:

| Filesystem | PUID Config | Runtime User | Outcome |
|---|---|---|---|
| Modern (Overlay2/Btrfs) | Unset | 20211 | Secure. Full functionality via preserved setcap. |
| Legacy (AUFS) | Unset | 20211 | Degraded. Logs warning. L2 scans fail due to missing perms. |
| Legacy (AUFS) | PUID=0 | Root | Functional. Root privileges bypass capability requirements. |
| Legacy (AUFS) | PUID=1000 | 1000 | Degraded. Logs warning. L2 scans fail due to missing perms. |"},{"location":"docker-troubleshooting/aufs-capabilities/#warning-log","title":"Warning Log","text":"

When AUFS is detected without root privileges, the system emits the following warning during startup:

\u26a0\ufe0f WARNING: Reduced functionality (AUFS + non-root user).

AUFS strips Linux file capabilities, so tools like arp-scan, nmap, and nbtscan fail when NetAlertX runs as a non-root PUID.

Action: Set PUID=0 on AUFS hosts for full functionality.

"},{"location":"docker-troubleshooting/aufs-capabilities/#security-ramifications","title":"Security Ramifications","text":"

To mitigate the AUFS limitation, the recommended fix is to run the application as the root user (PUID=0).

"},{"location":"docker-troubleshooting/aufs-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Choose the scenario that best matches your environment and security requirements.

"},{"location":"docker-troubleshooting/aufs-capabilities/#scenario-a-modern-systems-recommended","title":"Scenario A: Modern Systems (Recommended)","text":"

Context: Systems using overlay2, btrfs, or zfs. Action: No action required. The system auto-configures PUID=20211.

services:\n  netalertx:\n    image: netalertx/netalertx\n    # No PUID/PGID needed; defaults to secure non-root\n
"},{"location":"docker-troubleshooting/aufs-capabilities/#scenario-b-legacysynology-aufs-the-fix","title":"Scenario B: Legacy/Synology AUFS (The Fix)","text":"

Context: Synology DSM 6.x/7.x or Linux hosts using AUFS. Action: Explicitly elevate to root. This bypasses the need for file capabilities because root inherits runtime capabilities directly from Docker.

services:\n  netalertx:\n    image: netalertx/netalertx\n    environment:\n      - PUID=0  # Required for arp-scan/nmap on AUFS\n      - PGID=0\n
"},{"location":"docker-troubleshooting/aufs-capabilities/#scenario-c-forced-non-root-on-aufs","title":"Scenario C: Forced Non-Root on AUFS","text":"

Context: Strict security compliance requires non-root, even if it breaks functionality. Action: The warning will persist. The Web UI and Database will function, but network discovery (ARP/Nmap) will be severely limited.

services:\n  netalertx:\n    image: netalertx/netalertx\n    environment:\n      - PUID=1000\n      - PGID=1000\n    # Note: cap_add is ineffective here due to AUFS stripping the binary's file caps\n
"},{"location":"docker-troubleshooting/aufs-capabilities/#infrastructure-upgrades-long-term-fix","title":"Infrastructure Upgrades (Long-term Fix)","text":"

To solve the root cause and run securely as non-root, you must migrate off the AUFS driver.

"},{"location":"docker-troubleshooting/aufs-capabilities/#1-switch-to-btrfs-synology-recommended","title":"1. Switch to Btrfs (Synology Recommended)","text":"

If your NAS supports it, creating a new volume formatted as Btrfs allows Docker to use the native btrfs storage driver.

"},{"location":"docker-troubleshooting/aufs-capabilities/#2-reformat-ext4-with-d_type-support","title":"2. Reformat Ext4 with d_type Support","text":"

If you must use ext4, the issue is likely that your volume lacks d_type support (common on older volumes created before DSM 6).

"},{"location":"docker-troubleshooting/aufs-capabilities/#technical-implementation","title":"Technical Implementation","text":""},{"location":"docker-troubleshooting/aufs-capabilities/#detection-mechanism","title":"Detection Mechanism","text":"

The logic resides in _detect_storage_driver() within /root-entrypoint.sh. It parses the root mount point (/) to identify the underlying driver.

# Modern (overlay2) - Pass\noverlay / overlay rw,relatime,lowerdir=...\n\n# Legacy (AUFS) - Triggers Warning\nnone / aufs rw,relatime,si=...\n
"},{"location":"docker-troubleshooting/aufs-capabilities/#verification-troubleshooting","title":"Verification & Troubleshooting","text":"

1. Confirm Storage Driver. If your host is using ext4, you might be defaulting to aufs:

docker info | grep \"Storage Driver\"\n# OR inside the container:\ndocker exec netalertx grep \" / \" /proc/mounts\n

2. Verify Capability Loss. If scans fail, check whether the binary's file capabilities were stripped; empty getcap output means they were.

docker exec netalertx getcap /usr/sbin/arp-scan\n

3. Simulating AUFS (Dev/Test). Developers can force the AUFS logic path on a modern machine by mocking the mounts file. Note: Docker often restricts direct bind-mounts of host /proc paths, so the test suite uses an environment-variable injection instead (see test_puid_pgid.py).

# Create mock mounts content and encode it as base64\necho \"none / aufs rw,relatime 0 0\" | base64\n\n# Run the container passing the encoded mounts via NETALERTX_PROC_MOUNTS_B64\n# (the entrypoint decodes this and uses it instead of reading /proc/mounts directly)\ndocker run --rm -e NETALERTX_PROC_MOUNTS_B64=\"bm9uZSAvIGF1ZnMgcncs...\" netalertx/netalertx\n
"},{"location":"docker-troubleshooting/aufs-capabilities/#additional-resources","title":"Additional Resources","text":""},{"location":"docker-troubleshooting/excessive-capabilities/","title":"Excessive Capabilities","text":""},{"location":"docker-troubleshooting/excessive-capabilities/#issue-description","title":"Issue Description","text":"

Excessive Linux capabilities were detected beyond the necessary NET_ADMIN, NET_BIND_SERVICE, and NET_RAW. This may indicate an overly permissive container configuration.

"},{"location":"docker-troubleshooting/excessive-capabilities/#security-ramifications","title":"Security Ramifications","text":"

While the detected capabilities might not directly harm operation, running with more privileges than necessary increases the attack surface. If the container is compromised, additional capabilities could allow broader system access or privilege escalation.

"},{"location":"docker-troubleshooting/excessive-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration grants more capabilities than required for network monitoring. The application only needs specific network-related capabilities for proper function.

"},{"location":"docker-troubleshooting/excessive-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Limit capabilities to only those required:
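One way to express this in docker-compose.yml. This is a sketch: the three network caps are the ones this page names; CHOWN, SETUID, and SETGID are additionally required by the entrypoint when you drop ALL, as covered on the missing-capabilities and PUID/PGID pages. Trim to match your setup.

```yaml
cap_drop:
  - ALL
cap_add:
  - NET_ADMIN          # interface/ARP-level network control
  - NET_RAW            # raw sockets for arp-scan and nmap -sS
  - NET_BIND_SERVICE   # bind privileged ports
  - CHOWN              # root priming step chowns /data and /tmp
  - SETUID             # needed to drop privileges to PUID
  - SETGID             # needed to drop privileges to PGID
```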

"},{"location":"docker-troubleshooting/excessive-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/file-permissions/","title":"File Permission Issues","text":""},{"location":"docker-troubleshooting/file-permissions/#issue-description","title":"Issue Description","text":"

NetAlertX cannot read from or write to critical configuration and database files. This prevents the application from saving data, logs, or configuration changes.

"},{"location":"docker-troubleshooting/file-permissions/#security-ramifications","title":"Security Ramifications","text":"

Incorrect file permissions can expose sensitive configuration data or database contents to unauthorized access. Network monitoring tools handle sensitive information about devices on your network, and improper permissions could lead to information disclosure.

"},{"location":"docker-troubleshooting/file-permissions/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the mounted volumes for configuration and database files don't have proper ownership or permissions set for the netalertx user (UID 20211). The container expects these files to be accessible by the service account, not root or other users.

"},{"location":"docker-troubleshooting/file-permissions/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Fix permissions on the host system for the mounted directories:
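For example, assuming the bind-mount paths from the example compose file (adjust to your own), run this on the Docker host; 20211 is the container's netalertx service UID/GID:

```shell
# Create the host directories if missing, then hand them to UID/GID 20211.
mkdir -p ./data/db ./data/config
sudo chown -R 20211:20211 ./data/db ./data/config
```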

"},{"location":"docker-troubleshooting/file-permissions/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/incorrect-user/","title":"Incorrect Container User","text":""},{"location":"docker-troubleshooting/incorrect-user/#issue-description","title":"Issue Description","text":"

NetAlertX is running as a UID:GID that does not match the runtime service user configured for this container (default 20211:20211). Hardened ownership on writable paths may block writes if the UID/GID do not align with mounted volumes and tmpfs settings.

"},{"location":"docker-troubleshooting/incorrect-user/#security-ramifications","title":"Security Ramifications","text":"

The image uses a dedicated service user for writes and a readonly lock owner (UID 20211) for code/venv with 004/005 permissions. Running as an arbitrary UID is supported, but only when writable mounts (/data, /tmp/*) are owned by that UID. Misalignment can cause startup failures or unexpected permission escalation attempts.

"},{"location":"docker-troubleshooting/incorrect-user/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":""},{"location":"docker-troubleshooting/incorrect-user/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Option A: Use defaults (recommended)

- Remove custom user: overrides and --user flags.
- Let the container run as the built-in service user (UID/GID 20211) and keep tmpfs at uid=20211,gid=20211.

Option B: Run with a custom UID/GID

- Set user: (or NETALERTX_UID/NETALERTX_GID) to your desired UID/GID.
- Align mounts: ensure /data (and any /tmp/* tmpfs) use the same uid=/gid= and that host bind mounts are chowned to that UID/GID.
- Recreate the container so ownership is consistent.
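A minimal compose sketch for running with a custom UID/GID, assuming 1000:1000 as an illustrative value; the paths are examples, and any tmpfs mounts must be aligned the same way:

```yaml
services:
  netalertx:
    image: netalertx/netalertx
    user: "1000:1000"    # custom UID:GID (illustrative value)
    volumes:
      - ./data:/data     # host dir must be chowned to 1000:1000
```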

"},{"location":"docker-troubleshooting/incorrect-user/#additional-resources","title":"Additional Resources","text":""},{"location":"docker-troubleshooting/missing-capabilities/","title":"Missing Network Capabilities","text":""},{"location":"docker-troubleshooting/missing-capabilities/#issue-description","title":"Issue Description","text":"

Raw network capabilities (NET_RAW, NET_ADMIN, NET_BIND_SERVICE) are missing. Tools that rely on these capabilities (e.g., nmap -sS, arp-scan, nbtscan) will not function.

"},{"location":"docker-troubleshooting/missing-capabilities/#security-ramifications","title":"Security Ramifications","text":"

Network scanning and monitoring requires low-level network access that these capabilities provide. Without them, the application cannot perform essential functions like ARP scanning, port scanning, or passive network discovery, severely limiting its effectiveness.

"},{"location":"docker-troubleshooting/missing-capabilities/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the container doesn't have the necessary Linux capabilities granted. Docker containers run with limited capabilities by default, and network monitoring tools need elevated network privileges.

"},{"location":"docker-troubleshooting/missing-capabilities/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Add the required capabilities to your container:
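In docker-compose.yml, the three capabilities named above can be granted like this:

```yaml
services:
  netalertx:
    image: netalertx/netalertx
    cap_add:
      - NET_RAW           # raw sockets (arp-scan, nmap -sS)
      - NET_ADMIN         # interface/ARP-level access
      - NET_BIND_SERVICE  # bind privileged ports
```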

"},{"location":"docker-troubleshooting/missing-capabilities/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/missing-capabilities/#cap_chown-required-when-cap_drop-all","title":"CAP_CHOWN required when cap_drop: [ALL]","text":"

When you start NetAlertX with cap_drop: [ALL], the container loses CAP_CHOWN. The root priming step needs CAP_CHOWN to adjust ownership of /data and /tmp before dropping privileges to PUID:PGID. Without it, startup fails with a fatal failed to chown message and exits.

To fix: - Add CHOWN back in cap_add when you also set cap_drop: [ALL]:

cap_drop:\n  - ALL\ncap_add:\n  - CHOWN\n

If you harden capabilities further, expect priming to fail until you restore the minimum set needed for ownership changes.

"},{"location":"docker-troubleshooting/mount-configuration-issues/","title":"Mount Configuration Issues","text":""},{"location":"docker-troubleshooting/mount-configuration-issues/#issue-description","title":"Issue Description","text":"

NetAlertX has detected configuration issues with your Docker volume mounts. These may include write permission problems, data loss risks, or performance concerns marked with \u274c in the table.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#security-ramifications","title":"Security Ramifications","text":"

Improper mount configurations can lead to data loss, performance degradation, or security vulnerabilities. For persistent data (database and configuration), using non-persistent storage like tmpfs can result in complete data loss on container restart. For temporary data, using persistent storage may unnecessarily expose sensitive logs or cache data.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker Compose or run configuration doesn't properly map host directories to container paths, or when the mounted volumes have incorrect permissions. The application requires specific paths to be writable for operation, and some paths should use persistent storage while others should be temporary.

"},{"location":"docker-troubleshooting/mount-configuration-issues/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Review and correct your volume mounts in docker-compose.yml:

Example volume configuration:

volumes:\n  - ./data/db:/data/db\n  - ./data/config:/data/config\n  - ./data/log:/tmp/log\n

"},{"location":"docker-troubleshooting/mount-configuration-issues/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/network-mode/","title":"Network Mode Configuration","text":""},{"location":"docker-troubleshooting/network-mode/#issue-description","title":"Issue Description","text":"

NetAlertX is not running with --network=host. Bridge networking blocks passive discovery (ARP, NBNS, mDNS) and active scanning accuracy.

"},{"location":"docker-troubleshooting/network-mode/#security-ramifications","title":"Security Ramifications","text":"

Host networking is required for comprehensive network monitoring. Bridge mode isolates the container from raw network access needed for ARP scanning, passive discovery protocols, and accurate device detection. Without host networking, the application cannot fully monitor your network.

"},{"location":"docker-troubleshooting/network-mode/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when your Docker configuration uses bridge networking instead of host networking. Network monitoring requires direct access to the host's network interfaces to perform passive discovery and active scanning.

"},{"location":"docker-troubleshooting/network-mode/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable host networking mode:
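In docker-compose.yml this is a single setting (note that with host networking, any ports: mappings are ignored):

```yaml
services:
  netalertx:
    image: netalertx/netalertx
    network_mode: host   # replaces the default bridge network
```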

"},{"location":"docker-troubleshooting/network-mode/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/nginx-configuration-mount/","title":"Nginx Configuration Mount Issues","text":""},{"location":"docker-troubleshooting/nginx-configuration-mount/#issue-description","title":"Issue Description","text":"

You've configured a custom port for NetAlertX, but the required nginx configuration mount is missing or not writable. Without this mount, the container cannot apply your port changes and will fall back to the default port 20211.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#security-ramifications","title":"Security Ramifications","text":"

Running in read-only mode (as recommended) prevents the container from modifying its own nginx configuration. Without a writable mount, custom port configurations cannot be applied, potentially exposing the service on unintended ports or requiring fallback to defaults.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when you set a custom PORT environment variable (other than 20211) but haven't provided a writable mount for nginx configuration. The container needs to write custom nginx config files when running in read-only mode.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

If you want to use a custom port, create a bind mount for the nginx configuration:

If you don't need a custom port, simply omit the PORT environment variable and the container will use 20211 by default.

"},{"location":"docker-troubleshooting/nginx-configuration-mount/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/port-conflicts/","title":"Port Conflicts","text":""},{"location":"docker-troubleshooting/port-conflicts/#issue-description","title":"Issue Description","text":"

The configured application port (default 20211) or GraphQL API port (default 20212) is already in use by another service. This commonly occurs when you already have another NetAlertX instance running.

"},{"location":"docker-troubleshooting/port-conflicts/#security-ramifications","title":"Security Ramifications","text":"

Port conflicts prevent the application from starting properly, leaving network monitoring services unavailable. Running multiple instances on the same ports can also create configuration confusion and potential security issues if services are inadvertently exposed.

"},{"location":"docker-troubleshooting/port-conflicts/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This error typically occurs when another NetAlertX instance (or a devcontainer session) is already running, or when an unrelated service is already bound to port 20211 or 20212.

"},{"location":"docker-troubleshooting/port-conflicts/#how-to-correct-the-issue","title":"How to Correct the Issue","text":""},{"location":"docker-troubleshooting/port-conflicts/#check-for-existing-netalertx-instances","title":"Check for Existing NetAlertX Instances","text":"

First, check if you already have NetAlertX running:

# Check for running NetAlertX containers\ndocker ps | grep netalertx\n\n# Check for devcontainer processes\nps aux | grep netalertx\n\n# Check what services are using the ports\nnetstat -tlnp | grep :20211\nnetstat -tlnp | grep :20212\n
"},{"location":"docker-troubleshooting/port-conflicts/#stop-conflicting-instances","title":"Stop Conflicting Instances","text":"

If you find another NetAlertX instance:

# Stop specific container\ndocker stop <container_name>\n\n# Stop all NetAlertX containers\ndocker stop $(docker ps -q --filter ancestor=jokob-sk/netalertx)\n\n# Stop devcontainer services\n# Use VS Code command palette: \"Dev Containers: Rebuild Container\"\n
"},{"location":"docker-troubleshooting/port-conflicts/#configure-different-ports","title":"Configure Different Ports","text":"

If you need multiple instances, configure unique ports:

environment:\n  - PORT=20211          # Main application port\n  - GRAPHQL_PORT=20212  # GraphQL API port\n

For a second instance, use different ports:

environment:\n  - PORT=20213          # Different main port\n  - GRAPHQL_PORT=20214  # Different API port\n
"},{"location":"docker-troubleshooting/port-conflicts/#alternative-use-different-container-names","title":"Alternative: Use Different Container Names","text":"

When running multiple instances, use unique container names:

services:\n  netalertx-primary:\n    # ... existing config\n  netalertx-secondary:\n    # ... config with different ports\n
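Putting the port and naming advice together, a sketch of a two-instance setup might look like the following (the ports and service names reuse the values above; volumes and other per-instance settings are omitted for brevity):

```yaml
services:
  netalertx-primary:
    image: jokob-sk/netalertx
    environment:
      - PORT=20211          # default main application port
      - GRAPHQL_PORT=20212  # default GraphQL API port
  netalertx-secondary:
    image: jokob-sk/netalertx
    environment:
      - PORT=20213          # different main port to avoid a conflict
      - GRAPHQL_PORT=20214  # different API port to avoid a conflict
```

Unique service names also give the containers distinct default container names, which avoids the naming collisions described above.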
"},{"location":"docker-troubleshooting/port-conflicts/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/read-only-filesystem/","title":"Read-Only Filesystem Mode","text":""},{"location":"docker-troubleshooting/read-only-filesystem/#issue-description","title":"Issue Description","text":"

The container is running with a writable root filesystem instead of in read-only mode. This weakens the security hardening of the appliance.

"},{"location":"docker-troubleshooting/read-only-filesystem/#security-ramifications","title":"Security Ramifications","text":"

Read-only root filesystem is a security best practice that prevents malicious modifications to the container's filesystem. Running read-write allows potential attackers to modify system files or persist malware within the container.

"},{"location":"docker-troubleshooting/read-only-filesystem/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This occurs when the Docker configuration doesn't mount the root filesystem as read-only. The application is designed as a security appliance that should prevent filesystem modifications.

"},{"location":"docker-troubleshooting/read-only-filesystem/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Enable read-only mode:
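A minimal docker-compose.yml sketch for enabling this is shown below. Note that most images still need some writable scratch space at runtime; the tmpfs path here is an assumption, so adjust it to whatever writable locations the image documentation lists:

```yaml
services:
  netalertx:
    image: jokob-sk/netalertx
    read_only: true   # mount the container's root filesystem read-only
    tmpfs:
      # Writable in-memory scratch space; the path is an assumption -
      # add whichever runtime directories the image requires.
      - /tmp
```

The equivalent `docker run` flag is `--read-only`.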

"},{"location":"docker-troubleshooting/read-only-filesystem/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"},{"location":"docker-troubleshooting/running-as-root/","title":"Running as Root User","text":"

Tip

Looking for how to run the container as root? See the File permissions documentation for details.

"},{"location":"docker-troubleshooting/running-as-root/#issue-description","title":"Issue Description","text":"

NetAlertX has detected that the container is running with root privileges (UID 0). This configuration bypasses all built-in security hardening measures designed to protect your system.

"},{"location":"docker-troubleshooting/running-as-root/#security-ramifications","title":"Security Ramifications","text":"

Running security-critical applications like network monitoring tools as root grants unrestricted access to your host system. A successful compromise here could jeopardize your entire infrastructure, including other containers, host services, and potentially your network.

"},{"location":"docker-troubleshooting/running-as-root/#why-youre-seeing-this-issue","title":"Why You're Seeing This Issue","text":"

This typically occurs when you've explicitly overridden the container's default user in your Docker configuration, such as using user: root or --user 0:0 in docker-compose.yml or docker run commands. The application is designed to run under a dedicated, non-privileged service account for security.

"},{"location":"docker-troubleshooting/running-as-root/#how-to-correct-the-issue","title":"How to Correct the Issue","text":"

Switch to the dedicated 'netalertx' user by removing any custom user directives:
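Concretely, this means deleting any user override from your docker-compose.yml so the image's built-in non-privileged account takes effect. A sketch of the change, assuming a typical compose layout:

```yaml
services:
  netalertx:
    image: jokob-sk/netalertx
    # user: root        # <- delete an override like this
    # user: "0:0"       # <- or like this
    # With no 'user:' directive, the image's default non-privileged
    # 'netalertx' service account is used.
```

For `docker run`, the equivalent fix is dropping any `--user 0:0` (or `--user root`) flag from the command.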

After making these changes, restart the container. The application will automatically adjust ownership of required directories.

"},{"location":"docker-troubleshooting/running-as-root/#additional-resources","title":"Additional Resources","text":"

Docker Compose setup can be complex. We recommend starting with the default docker-compose.yml as a base and modifying it incrementally.

For detailed Docker Compose configuration guidance, see: DOCKER_COMPOSE.md

"}]}