Introduction
In the previous post, we talked about Advanced Targeting Best Practices. Now it's time to round out our testing capacity with scheduled tests. These tests run on a set schedule, typically hourly, a few times a day, or daily.
Types of Scheduled Tests
There are three types of scheduled tests: speed test, iperf and VoIP. Each test gathers different types of data for different needs and problems to solve. Below are the types of data gathered for each test type.
- Speed Test - Download/upload (Mbps) and latency
- Iperf - Throughput (Mbps), retransmissions, jitter (UDP only) and packet loss (UDP only)
- VoIP - MOS, jitter, packet loss and latency
Configuration Suggestions
For speed test and iperf, you have several options to consider. One is whether to assign the test by agent or by group. Most of the time we recommend groups, especially the default group, as this allows the test to be auto-assigned to new agents.
For all three test types, we covered some of the alerting details in my previous post, NetBeez Alerting Best Practices; specifically, towards the end we talk about “Scheduled Tests (Watermark or Baseline)”. To navigate to alerts, go to the top right cog > Anomaly Detection > + Add Alert. From there you’ll have three options: Scheduled Test Error, Performance Baseline and Performance Watermark, as seen below.
Depending on your situation, a watermark or baseline may be the right fit. If you are monitoring a wide range of devices and expect a large variation in performance (different generations of networking devices), then a baseline like the one below would be ideal. If you have a specific SLA to maintain, a watermark is the better option.
Each alert can cover nearly any reported metric, and AND/OR statements let you craft a very specific alerting profile to suit your situation. A standard one we see for RWA deployments is 25 Mbps download or 10 Mbps upload. This will notify you if any agent reports either test running below these thresholds and lets you enforce SLAs.
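To make the AND/OR logic concrete, here is a minimal sketch of how a watermark condition like the 25/10 Mbps example could be evaluated. The result fields and function are hypothetical illustrations, not the NetBeez API.

```python
# Minimal sketch of an OR-style watermark check on scheduled speed test results.
# The result dictionary and field names are hypothetical, not the NetBeez API.

DOWNLOAD_FLOOR_MBPS = 25  # alert if download drops below this
UPLOAD_FLOOR_MBPS = 10    # alert if upload drops below this

def breaches_watermark(result: dict) -> bool:
    """Return True if either metric falls below its SLA floor (OR condition)."""
    return (result["download_mbps"] < DOWNLOAD_FLOOR_MBPS
            or result["upload_mbps"] < UPLOAD_FLOOR_MBPS)

sample = {"agent": "remote-worker-42", "download_mbps": 18.4, "upload_mbps": 11.2}
if breaches_watermark(sample):
    print(f"ALERT: {sample['agent']} is below the SLA watermark")
```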
Another option to think about is the run mode, which has three options: parallel, serialized or randomized.
- Parallel - Run all of the tests at the same time
- Serialized - Run one test at a time, waiting for it to complete before the next starts (limit of 70-80 per hour)
- Randomized - Run all of them with as little overlap as possible in a 30/60/90/120 minute window
Your deployment, needs, testing frequency and other factors will dictate how you configure these. Below I will outline a few things to consider for each, though this won’t cover everything.
- Parallel
- Preferred for widely distributed deployments where there isn’t a single funnel/bottleneck such as an on-prem VPN
- Running tests often (every hour)
- Speed test only
- Serialized
- Can’t exceed 70-80 tests per hour
- Not ideal for hourly testing
- Potential funnel/choke point on the network
- Randomized
- Good for testing every 1-2 hours
- Consider the number of agents vs the timeframe of running the tests (see the sketch after this list)
- There will be some overlap, so be mindful of both server and network resources
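To get a feel for how the randomized window trades off against agent count, here is a conceptual sketch that spreads start times across a window. This only illustrates the trade-off; it is not NetBeez’s actual scheduler.

```python
import random

# Conceptual sketch: spread scheduled test start times across a randomized
# window so that runs from many agents overlap as little as possible.

def randomized_start_offsets(num_agents: int, window_minutes: int,
                             test_minutes: float = 1.0) -> list[float]:
    """Assign each agent a random start offset (in minutes) within the window."""
    latest_start = max(window_minutes - test_minutes, 0)
    return sorted(random.uniform(0, latest_start) for _ in range(num_agents))

# 120 agents in a 60-minute window averages about 2 test starts per minute,
# so size the window (30/60/90/120) against the number of agents.
offsets = randomized_start_offsets(num_agents=120, window_minutes=60)
print(f"first start at {offsets[0]:.1f} min, last at {offsets[-1]:.1f} min")
```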
For VoIP, there is not nearly as much to consider; the main decision is whether there is a specific codec you want to use.
Iperf
While setting up an iperf test is straightforward, I must explain a few things about this test and how it differs from the speed test. One key difference is that iperf generates real traffic on the network. So if you only have a 1 Gbps uplink and your devices are all 1G, you can easily saturate the network, which would impact user experience. Also, processing iperf traffic can take more resources on the receiving end than one might realize. For this reason, iperf testing is limited to one-to-one, agent to agent.
We recommend setting up a dedicated iperf server to get scalable test results. The server can be configured as a VM in the cloud, in a datacenter or on-prem. From there, navigate to TESTS > Iperf > + Add Test, select Multiple Agents to Server, then select a group or specific agents, the destination agent (one-to-one) or the server IP (many-to-server), time, run mode, monitoring conditions (only applicable for RWA agents) and alerts. You can cap the bandwidth if you only need to confirm a certain amount is available.
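If you want to sanity-check a dedicated server outside of NetBeez, the sketch below runs a bandwidth-capped iperf3 test via Python and parses the JSON report, assuming the iperf3 binary is installed locally and the server is running iperf3 -s. The hostname is a placeholder; this only mirrors what a capped test measures and is not how NetBeez invokes iperf internally.

```python
import json
import subprocess

# Rough sketch: run a bandwidth-capped iperf3 test against a dedicated server
# and pull throughput/retransmits out of the JSON report.

SERVER = "iperf.example.com"  # hypothetical dedicated iperf3 server

proc = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-b", "100M", "-J"],  # cap at 100 Mbps
    capture_output=True, text=True, check=True,
)
report = json.loads(proc.stdout)

sent = report["end"]["sum_sent"]
recv = report["end"]["sum_received"]
print(f"sent {sent['bits_per_second'] / 1e6:.1f} Mbps, "
      f"received {recv['bits_per_second'] / 1e6:.1f} Mbps, "
      f"retransmits {sent['retransmits']}")
```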
Speed Test
Speed testing on NetBeez has received several recent updates. With release version 14.5 we added both the Cloudflare speed test and the custom NDT server option. Unlike iperf, you do not need a specific destination, as all three of our options (Cloudflare, NDT and Fast.com) have publicly accessible endpoints to hit. This won’t flood the network with traffic the same way iperf does, but it is still a concern if you try to run dozens of tests on the same network over the same uplink.
To configure, navigate to TESTS > Network Speed > + Add Test, select the agents you want to run the tests on or select a group, and configure the tests to your preferences. Unlike iperf, there are not many options to select other than the test type, a specific server (NDT only), time, run mode, monitoring conditions (only applicable for RWA agents) and alerts.
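If you’re curious what a download measurement boils down to, here’s a back-of-the-envelope sketch that times a large HTTP download and converts it to Mbps. The Cloudflare URL is, to my knowledge, the endpoint their browser speed test fetches, but treat it as an assumption and substitute any large, publicly reachable file.

```python
import time
import urllib.request

# Back-of-the-envelope download measurement to illustrate what a scheduled
# speed test reports. The URL below is assumed to serve a ~25 MB payload.

URL = "https://speed.cloudflare.com/__down?bytes=25000000"

start = time.monotonic()
with urllib.request.urlopen(URL) as response:
    payload = response.read()
elapsed = time.monotonic() - start

mbps = (len(payload) * 8) / elapsed / 1e6
print(f"downloaded {len(payload) / 1e6:.1f} MB in {elapsed:.1f} s -> {mbps:.1f} Mbps")
```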
VoIP
This one is a bit simpler to set up: navigate to TESTS > VoIP > + Add Test, then select the source and destination agents, the duration, codec, time, run mode, monitoring conditions (only applicable for RWA agents) and alerts. This will only measure a simulated call across your network from one agent to another.
For continuous voice testing, I recommend searching our site and checking out our many blog posts about VoIP monitoring. The general recommendation is to configure a ping test to a VoIP application and gather jitter/MOS statistics, as outlined in our NetBeez Advanced Targeting Best Practices post.
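To show how those ping-derived metrics relate to a MOS score, here is one widely circulated simplification of the ITU-T E-model. NetBeez’s own MOS calculation may differ; this is only to illustrate how latency, jitter and loss pull the score down.

```python
# Simplified E-model MOS estimate from ping-style metrics. This is a common
# approximation, not necessarily the formula NetBeez uses internally.

def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    effective_latency = latency_ms + 2 * jitter_ms + 10.0
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r -= 2.5 * loss_pct            # packet loss penalty
    r = max(0.0, min(r, 100.0))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(f"clean path (20 ms, 2 ms jitter, 0% loss):     MOS ~ {estimate_mos(20, 2, 0):.2f}")
print(f"degraded path (300 ms, 50 ms jitter, 5% loss): MOS ~ {estimate_mos(300, 50, 5):.2f}")
```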
Mistakes to Avoid
I know I’ve said this before, but don’t overdo it. I swear this is a copy/paste from each post I write. Doing too much is almost as bad as not doing enough. Configure iperf servers instead of a ton of one-to-one tests. Leverage the cloud as much as possible and configure your tests accordingly. Does a speed test or iperf test NEED to run every hour? In most cases, especially for RWA, the answer is no. A few times a day to capture baselines is all that is necessary. Real-time testing for metrics such as packet loss, jitter, MOS and RTT will be far more useful for detecting problems as they arise.
Conclusion
Scheduled testing is a core function of network engineering teams. People complain when the network is slow, and there is also a rising problem where remote workers may not have the internet speeds needed to perform certain duties or tasks. This is why scheduled testing is so important.