Running Ad-hoc Custom Command Tests through the API

NetBeez’s BeezKeeper API supports ad-hoc custom command tests. You can trigger on-demand custom script runs from one or more agents and retrieve the parsed results (e.g. for dashboards or automation).

How It Works

  1. Create an ad-hoc custom command run with POST /multiagent_nb_test_runs/ad_hoc.
  2. Poll the same run with GET /multiagent_nb_test_runs?filter[multiagent_nb_test_runs]=<run_id>&include=results until state is completed.
  3. Read the per-agent results from the included array (scheduled_nb_test_result with result_values).

1. Setting Up the API Token and Base URL

Use your API token and hostname:

API_TOKEN = "your_api_token_here"
BASE_URL = "https://[HOSTNAME]"
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/vnd.api+json"
}

2. Create a New Ad-hoc Custom Command Test

Send a POST request to /multiagent_nb_test_runs/ad_hoc with:

  • attributes: custom_command (the script body) and output_schema (the metrics to parse from its output).
  • relationships: agents (the agents that will run the script) and test_type (id 11, the custom command test type).

Requirements:

  • custom_command must start with #!/usr/bin/env bash or #!/usr/bin/env python.
  • output_schema is an array of { "metric": "name", "unit": "..." } defining the metrics your script outputs (e.g. key=value lines).
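To make that contract concrete, here is a sketch of how key=value lines in a script's stdout line up with an output_schema. The metric names (cpu_load, mem_used) and the parser itself are illustrative assumptions, not NetBeez's actual parsing code:

```python
# Illustration only: how key=value stdout lines map onto an output_schema.
# Metric names below are made up; the parser is a sketch of the contract,
# not the server-side implementation.

script_output = "cpu_load=0.42\nmem_used=512\n"
output_schema = [
    {"metric": "cpu_load", "unit": "float"},
    {"metric": "mem_used", "unit": "int"},
]

def parse_metrics(stdout: str, schema: list) -> dict:
    """Keep only key=value lines whose key appears in the schema."""
    wanted = {entry["metric"] for entry in schema}
    values = {}
    for line in stdout.splitlines():
        if "=" not in line:
            continue  # ignore lines that are not key=value pairs
        key, _, raw = line.partition("=")
        if key.strip() in wanted:
            values[key.strip()] = float(raw)
    return values

print(parse_metrics(script_output, output_schema))
# → {'cpu_load': 0.42, 'mem_used': 512.0}
```

Any line your script prints that is not declared in output_schema would simply be ignored under this scheme.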

Example request (curl)

curl --location -g 'https://[HOSTNAME]/multiagent_nb_test_runs/ad_hoc' \
  -H "Content-Type: application/vnd.api+json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  --data '{
    "data": {
        "type": "multiagent_nb_test_run",
        "attributes": {
            "custom_command": "#!/usr/bin/env bash\necho \"value=123\"",
            "output_schema": [
                {
                    "metric": "value",
                    "unit": "int"
                }
            ]
        },
        "relationships": {
            "agents": {
                "data": [
                    { "id": 3646, "type": "agent" },
                    { "id": 3588, "type": "agent" }
                ]
            },
            "test_type": {
                "data": { "id": 11, "type": "test_type" }
            }
        }
    }
}'

Example response (created)

{
  "data": {
    "id": "450374",
    "type": "multiagent_nb_test_run",
    "attributes": {
      "ts": 1773652644033,
      "state": "initialization",
      "configuration": {
        "schedule_type": "ad_hoc",
        "custom_command": "#!/usr/bin/env bash\necho \"value=123\"",
        "output_schema": [
          { "metric": "value", "unit": "int" }
        ],
        "agent_ids": [3646, 3588],
        "test_type_id": 11
      }
    },
    "relationships": {
      "scheduled_nb_test_template": { "data": null },
      "scheduled_nb_test_results": { "data": [] },
      "test_type": { "data": null }
    }
  }
}

Save the run id (e.g. 450374) for the next step.

3. Poll for Test Results

Call the index endpoint with a filter on the run id and include=results:

GET https://[HOSTNAME]/multiagent_nb_test_runs?filter[multiagent_nb_test_runs]=450374&include=results

Repeat until data[0].attributes.state is completed (or failed). Then read the results from included.
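The polling logic can be wrapped in a small helper with a timeout. This is a hedged sketch: fetch_state stands in for the GET request above (returning data[0].attributes.state), and "running" is an assumed in-progress state name — only "initialization" and "completed" appear in the responses shown here:

```python
import time

def wait_for_completion(fetch_state, timeout_s=120, interval_s=5):
    """Call fetch_state() until it returns a terminal state or we time out.

    fetch_state is a placeholder for the GET request above, returning
    data["data"][0]["attributes"]["state"].
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = fetch_state()
        if state in ("completed", "failed"):
            return state
        time.sleep(interval_s)
    raise TimeoutError("ad-hoc run did not finish in time")

# Demo with a fake fetcher ("running" is an assumed in-progress state):
states = iter(["initialization", "running", "completed"])
print(wait_for_completion(lambda: next(states), interval_s=0))  # completed
```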

Example response (completed)

{
  "data": [
    {
      "id": "450374",
      "type": "multiagent_nb_test_run",
      "attributes": {
        "ts": 1773652644033,
        "state": "completed",
        "configuration": {
          "schedule_type": "ad_hoc",
          "custom_command": "#!/usr/bin/env bash\necho \"value=123\"",
          "output_schema": [
            { "metric": "value", "unit": "int" }
          ],
          "agent_ids": [3646, 3588],
          "test_type_id": 11
        }
      },
      "relationships": {
        "scheduled_nb_test_template": { "data": null },
        "scheduled_nb_test_results": {
          "data": [
            { "id": "720055", "type": "scheduled_nb_test_result" },
            { "id": "720056", "type": "scheduled_nb_test_result" }
          ]
        },
        "test_type": { "data": null }
      }
    }
  ],
  "included": [
    {
      "id": "720055",
      "type": "scheduled_nb_test_result",
      "attributes": {
        "error_message": null,
        "severity": 6,
        "ts": 1773652644033,
        "ssid": "netbeez",
        "result_values": [
          { "key": "value", "value": 123.0 }
        ]
      },
      "relationships": {
        "agent": { "data": { "id": "3646", "type": "agent" } },
        "multiagent_nb_test_run": { "data": { "id": "450374", "type": "multiagent_nb_test_run" } }
      }
    },
    {
      "id": "720056",
      "type": "scheduled_nb_test_result",
      "attributes": {
        "error_message": null,
        "severity": 6,
        "ts": 1773652644033,
        "ssid": "netbeez",
        "result_values": [
          { "key": "value", "value": 123.0 }
        ]
      },
      "relationships": {
        "agent": { "data": { "id": "3588", "type": "agent" } },
        "multiagent_nb_test_run": { "data": { "id": "450374", "type": "multiagent_nb_test_run" } }
      }
    }
  ],
  "meta": {
    "filters": { "multiagent_nb_test_runs": "450374" },
    "include": "results",
    "page": { "offset": 1, "limit": 25, "total": 1 }
  }
}

Each item in included is a scheduled_nb_test_result: one per agent. Use relationships.agent.data.id to match the agent, and attributes.result_values for the parsed metrics (e.g. key: "value", value: 123.0).
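For dashboards it is often handy to reshape included into a per-agent mapping. A minimal sketch, run against a trimmed stand-in for the JSON response above (the helper name results_by_agent is ours, not part of the API):

```python
# Trimmed stand-in for the completed-run response shown above.
response = {
    "included": [
        {
            "type": "scheduled_nb_test_result",
            "attributes": {"result_values": [{"key": "value", "value": 123.0}]},
            "relationships": {"agent": {"data": {"id": "3646", "type": "agent"}}},
        },
        {
            "type": "scheduled_nb_test_result",
            "attributes": {"result_values": [{"key": "value", "value": 123.0}]},
            "relationships": {"agent": {"data": {"id": "3588", "type": "agent"}}},
        },
    ]
}

def results_by_agent(resp: dict) -> dict:
    """Reshape included into {agent_id: {metric: value}}."""
    per_agent = {}
    for item in resp.get("included", []):
        if item.get("type") != "scheduled_nb_test_result":
            continue
        agent_id = item["relationships"]["agent"]["data"]["id"]
        per_agent[agent_id] = {
            rv["key"]: rv["value"]
            for rv in item["attributes"].get("result_values", [])
        }
    return per_agent

print(results_by_agent(response))
# → {'3646': {'value': 123.0}, '3588': {'value': 123.0}}
```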

Putting It All Together (Python)

import requests
import time

API_TOKEN = "your_api_token_here"
BASE_URL = "https://[HOSTNAME]"
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/vnd.api+json"
}

# 1. Create ad-hoc custom command test
payload = {
    "data": {
        "type": "multiagent_nb_test_run",
        "attributes": {
            "custom_command": "#!/usr/bin/env bash\necho \"value=123\"",
            "output_schema": [
                {"metric": "value", "unit": "int"}
            ]
        },
        "relationships": {
            "agents": {
                "data": [
                    {"id": 3646, "type": "agent"},
                    {"id": 3588, "type": "agent"}
                ]
            },
            "test_type": {
                "data": {"id": 11, "type": "test_type"}
            }
        }
    }
}

resp = requests.post(
    f"{BASE_URL}/multiagent_nb_test_runs/ad_hoc",
    headers=HEADERS,
    json=payload
)
resp.raise_for_status()
run_id = resp.json()["data"]["id"]

# 2. Poll until the run reaches a terminal state (give up after ~5 minutes)
for _ in range(60):
    resp = requests.get(
        f"{BASE_URL}/multiagent_nb_test_runs",
        headers=HEADERS,
        params={
            "filter[multiagent_nb_test_runs]": run_id,
            "include": "results"
        }
    )
    resp.raise_for_status()
    data = resp.json()
    state = data["data"][0]["attributes"]["state"]
    if state in ("completed", "failed"):
        break
    time.sleep(5)
else:
    raise TimeoutError(f"Run {run_id} did not finish in time")

# 3. Use results (included = scheduled_nb_test_result per agent)
for item in data.get("included", []):
    if item.get("type") == "scheduled_nb_test_result":
        agent_id = item["relationships"]["agent"]["data"]["id"]
        result_values = item["attributes"].get("result_values", [])
        print(f"Agent {agent_id}: {result_values}")

Summary

  • Endpoint: POST /multiagent_nb_test_runs/ad_hoc with test_type id 11 (custom command).
  • Required attributes: custom_command (script with correct shebang) and output_schema (metric names and units).
  • Results: Poll GET /multiagent_nb_test_runs?filter[multiagent_nb_test_runs]=<run_id>&include=results; when state is completed, read included for per-agent scheduled_nb_test_result and result_values.

For more on ad-hoc tests (e.g. speed tests), see Running Ad-hoc Network Speed Tests through the API.
