My Web Markups - Chris Ryan
Firewall Configuration - AppNeta Documentation | AppNeta
Packet shaping is similar to on-ramp traffic lights that pace cars onto a highway: the cars are still allowed to queue up on the ramp, and the highway is protected from flooding. A rate-limiting queue, by contrast, is like a tank that blows up cars when too many are lined up on the ramp.
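The contrast between the two approaches can be sketched in a toy simulation. This is illustrative only: the buffer size, drain rate, and burst pattern below are invented numbers, not AppNeta or Cisco defaults.

```python
# Toy comparison: a packet shaper (queues bursts and paces them out) versus a
# rate-limiting queue / policer (drops packets once its small buffer fills).

def policer(arrivals, queue_size, drain_per_tick):
    """Drop-tail rate limiter: packets beyond the buffer are lost."""
    queue, delivered, dropped = 0, 0, 0
    for burst in arrivals:
        for _ in range(burst):
            if queue < queue_size:
                queue += 1
            else:
                dropped += 1          # the "tank blows up the car"
        sent = min(queue, drain_per_tick)
        queue -= sent
        delivered += sent
    return delivered + queue, dropped  # queued packets eventually get through

def shaper(arrivals, drain_per_tick):
    """Shaper: nothing dropped (unbounded ramp for this sketch), just paced."""
    queue, delivered = 0, 0
    for burst in arrivals:
        queue += burst
        sent = min(queue, drain_per_tick)
        queue -= sent
        delivered += sent
    return delivered + queue, 0

bursts = [10, 0, 0, 10, 0, 0]  # a bursty source
print(policer(bursts, queue_size=4, drain_per_tick=2))  # (8, 12): 12 packets lost
print(shaper(bursts, drain_per_tick=2))                 # (20, 0): all delivered
```

With identical offered load, the policer discards more than half the packets (which TCP will then retransmit), while the shaper delivers everything, just later.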
It is much better to implement a rate-limiting server, also called a packet shaper. This method relies on spacing out packets on the downstream link. Packets are not dropped unnecessarily, and the rate limiting cannot be bypassed with upstream traffic shaping.
Rate-limiting queues are found in routers implementing Cisco’s CAR feature. We have also detected rate-limiting queues in African and Eastern European Frame Relay networks.
You may be able to adjust upstream flows to bypass the effect of rate-limiting queues. By pacing packets, the downstream link can be saturated. Sending many simultaneous streams of data can sometimes fill the downstream network.
Dropping packets to limit data rates is a very bad idea. Remember that users create traffic. By simply dropping packets, data does not go away. Eventually the data is retransmitted. Rate-limiting queues can actually increase overall traffic, and may drive larger networks to the point of saturation.
As long as combined upstream data rates never exceed the capacity of the server, a rate-limiting queue has no effect. But usually this is not the case. As soon as 4 or 5 packets are sent in unison, the queue is filled and packets are dropped.
Rate-limiting queues are a method of limiting traffic by reducing the size of the queue buffer.
A clean link without media errors will only drop packets at points in the network where queues are full. Queue buffer sizes vary, but 64 KB is a common size. Much larger, and a network may experience delays that cause TCP to retransmit data unnecessarily. Much smaller, and queues can cripple TCP efficiency.
The server sends data to the routers and switches at 100Mbps. The switches and routers then must queue the data while it is being fed to the client at 10Mbps. In this environment the network will be better protected if the workstations are set to 100Mbps rather than 10Mbps.
All servers and workstations have 10/100Mbps NIC cards. A decision was made to tune the speed of the network cards to protect the load on the network. All workstations were to be set to 10Mbps, and all servers would be set to 100Mbps.
For example, a user browsing the internet usually will generate more traffic on a fast link than on a slow link. However, an individual who uses a network for business purposes has a predefined amount of traffic that must be passed over the network. A hospital worker must process a fixed number of patients. The number of transactions a bank must perform is based on the number of customers that enter its branches. If network traffic is predefined, implementing rate limiting will not reduce the amount of traffic that passes over the network. In fact, rate limiting could actually increase the total amount of traffic on a network.
Some may say that traffic is generated by computers. But, even more basically, traffic is generated by people. Individuals request information, and that information is passed over the network. The amount of traffic generated by an individual depends on what that individual is using the network for.
Occasionally the full capacity of a network is not made available to end users. Usually rate-limiting is implemented to protect networks or servers. However, often the opposite happens. Rate-limiting algorithms can actually break an otherwise operational network.
Advanced Analysis "Rate-limiting Behavior Detected"
Advanced Analysis "Rate-limiting behavior detected" - AppNeta Documentation | AppNeta
In general, transmission parameters (Maximum Transmission Unit (MTU), Auto negotiation, Speed, and Duplex) use default values but can be changed if necessary.
Transmission parameters - AppNeta Documentation | AppNeta
Over time, these measurements should be comparable but not equivalent.
‘Application latency’ and ‘server response’ are calculated in the same way: the total response time (GET to first byte in Usage monitoring, and SYN to first byte in Experience monitoring) minus the network component
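That calculation is simple enough to show directly. A minimal sketch, assuming millisecond timestamps; the function name and example numbers are invented, not taken from APM:

```python
def application_latency(request_sent, first_byte_received, network_component):
    """'Application latency' (Usage) / 'server response' (Experience):
    total response time (GET/SYN to first byte) minus the network component.
    All times in milliseconds."""
    total_response_time = first_byte_received - request_sent
    return total_response_time - network_component

# e.g. request at t=0 ms, first response byte at t=180 ms, network RTT 50 ms:
print(application_latency(0.0, 180.0, 50.0))  # 130.0 ms spent in the server
```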
‘application latency’ in Usage monitoring and ‘server response’ in Experience monitoring are comparable but not equivalent.
same values in each direction
Calculated application latency applies equally to the inbound and outbound directions
Whenever flow history is summarized or collated for presentation, the average of the average and the max of the max are taken
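The "average of the average, max of the max" collation rule looks like this in miniature. The per-bucket (avg, max) pairs below are made-up example data:

```python
def summarize(buckets):
    """Collate per-bucket (avg, max) latency pairs the way flow history is
    summarized for presentation: average of the averages, max of the maxes."""
    avgs = [avg for avg, _ in buckets]
    maxes = [mx for _, mx in buckets]
    return sum(avgs) / len(avgs), max(maxes)

history = [(10.0, 40.0), (20.0, 35.0), (30.0, 90.0)]  # hypothetical buckets, ms
print(summarize(history))  # (20.0, 90.0)
```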
Application Usage Monitoring Point Details charts provide two application latency values: the average of all matching flows, and the maximum of all matching flows
For any flow where network latency is much greater than application latency, application latency becomes inconsequential.
The TCP handshake must be seen so that server network delay can be calculated.
Because the measured response time includes server network delay, that delay must be subtracted to get the application latency
In practice, application delay is the time from the last request byte sent by the client to the first response byte received by the client
Application latency is the time the server takes to respond to a request from the client.
‘network latency’ in Usage monitoring is different than ‘network response’ in Experience monitoring
Again, the handshake is only present in the initial flow
Mid-stream TOS/DSCP changes
handshake is only present in the initial flow.
Long-lived sessions (like SSL)
Two cases in which the handshake will not be present
network latency is only available for TCP flows in which the handshake is seen
Network latency is calculated using the TCP handshake because these packets are processed by the client/server only up to Layer 3, so we can assume negligible host delay
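Since the SYN/SYN-ACK exchange is handled in the kernel with no application processing, its round trip is almost pure network time. A sketch of that estimate; taking one-way latency as half the RTT is a simplification made here, not necessarily AppNeta's exact formula:

```python
def network_latency_from_handshake(syn_sent, synack_received):
    """Estimate one-way network latency from TCP handshake timestamps (ms).
    Handshake packets are processed only up to Layer 3, so host delay is
    assumed negligible; one-way latency is approximated as RTT / 2."""
    rtt = synack_received - syn_sent
    return rtt / 2

print(network_latency_from_handshake(0.0, 84.0))  # 42.0 ms one-way
```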
Network latency is the time it takes a packet to travel between a client and a server
The charts contain columns showing network and application latency
Outbound traffic is that sent from a host on a local subnet to an external destination
Inbound traffic is that coming from an external source to a host on one of the local subnets you identified during setup
a few concepts and calculations you should be aware of
Traffic direction
Application and network latency calculations
Usage chart data - AppNeta Documentation | AppNeta
Depending on the workflow type you specify when creating the web app group, the web paths you create will either emulate users accessing a web app via a browser or they will send HTTP requests to the web app’s API.
In order to use Experience monitoring to monitor an application, you need to create a web app group. When you create it you are asked to specify the source Monitoring Points, web app URLs, and workflows you wish to use. A web path is then created for all combinations of these components. For example, four Monitoring Points, two web app URLs, and one workflow results in eight web paths being created in the web app group. Typically though, all web paths within a web app group use the same target web app.
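The web-path count is the Cartesian product of the components. A quick sketch of the 4 × 2 × 1 example; the Monitoring Point names, URLs, and workflow label are invented:

```python
from itertools import product

# A web path is created for every combination of Monitoring Point,
# web app URL, and workflow.
monitoring_points = ["mp-nyc", "mp-sfo", "mp-lon", "mp-syd"]
urls = ["https://app.example.com", "https://app.example.com/api"]
workflows = ["browser-login"]

web_paths = list(product(monitoring_points, urls, workflows))
print(len(web_paths))  # 4 x 2 x 1 = 8 web paths
```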
Web App Groups - AppNeta Documentation | AppNeta
It is good practice to rename these files to match the hostname but leave the .qcow2 and .iso extensions.
Copy the KVM base image (.qcow2) and the config image (.iso) to /var/lib/libvirt/images/ on the host system.
Helpful commands: The following commands can be used on the host to manage your KVM virtual machines.
The AppNeta software required for the v35 includes a base image (.qcow2) and a config image (.iso). These must be downloaded and then copied to the KVM host.
There are two scripts used for v35 setup that must be downloaded and copied to the KVM host: vk35tool.py - Script to create a v35 and configure its interfaces. vk35hook.sh - Hook script to automatically connect a mirror port to an OVS bridge.
Intel X540, Intel X550, Intel 82599
For 10 Gbps: A 10Gbps NIC with SR-IOV support.
Open vSwitch (OVS) software must be installed
The OVS bridge is required only if you want to use a VLAN
For 1Gbps: A Linux bridge or an OVS bridge to bind the interface on the v35 to the physical interface on the host
Has up to four Ethernet NICs installed on the KVM host.
CentOS-7.x, RHEL 7.x: qemu-kvm, qemu-img, libvirt, libvirt-python, libvirt-client, virt-install, bridge-utils. Ubuntu 16.04.3 LTS: qemu-kvm, libvirt-bin, ubuntu-vm-builder, bridge-utils, virtinst.
Runs CentOS-7.x, RHEL 7.x or Ubuntu 16.04.3 LTS.
Capable of running a guest virtual machine with at least the v35 requirements.
KVM runs on one of the following Linux versions: CentOS-7.x, RHEL 7.x or Ubuntu 16.04.3 LTS.
runs on a host machine that has the KVM hypervisor installed
v35 virtual Monitoring Point
Virtual Monitoring Point Setup - v35 on KVM - AppNeta Documentation | AppNeta
Step 3: Check application performance metrics. Filter to the application experiencing an issue. Filter to the relevant time range. Check for anomalies in key application performance metrics. If network latency is unusually high, head back to Delivery and review any diagnostic tests completed during the relevant time range. Look for the hop that introduces excessive latency. If application latency is unusually high, correlate with results from Experience to collect a comprehensive data set to report to the application team for further investigation (Tip: provide timing data for any notable events to help the app team correlate with app logs). If the retransmit rate is unusually high, the issue is likely related to data loss and possibly congestion. Check for overutilization if you haven't already. Correlate with results from Delivery. Check network path charts and diagnostics for data loss between the relevant Monitoring Point and target. If there’s a correlated spike in data loss in Delivery, investigate the hop.
Intro to Usage Results
Navigate to Usage > Packet Captures. For the capture you want to schedule, select > Schedule This.
Create a schedule using an existing packet capture
Navigate to Usage > Packet Capture Schedules. Click + Create New Schedule
Prior to creating a packet capture schedule you must set up for packet capture (Note that if you don’t set the passphrase, the schedule will run but every capture will fail). When creating a packet capture schedule, you can create the capture and the schedule from scratch or you can create a schedule based on an existing packet capture.
Instead of starting a packet capture manually, you can schedule captures to start and stop automatically once or on a schedule.
Managing Packet Capture Schedules
Managing packet capture schedules - AppNeta Documentation | AppNeta
Rate limit: AppNeta limits the number of API requests that can be made over a given period of time. Currently the rate limit is 50 requests every 10 seconds.
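A client that calls the API in a loop should throttle itself against that 50-requests-per-10-seconds limit. A minimal sliding-window sketch; the class name is invented and the numbers simply restate the documented limit:

```python
import time
from collections import deque

class ApiRateLimiter:
    """Client-side guard for the documented limit of 50 requests / 10 s:
    returns how long to sleep before the next request may be sent."""
    def __init__(self, max_requests=50, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = deque()  # send times of requests still inside the window

    def wait(self, now=None):
        now = time.monotonic() if now is None else now
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()                      # expire old requests
        delay = 0.0
        if len(self.sent) >= self.max_requests:
            delay = self.window - (now - self.sent[0])
        self.sent.append(now + delay)
        return delay  # caller should time.sleep(delay) before sending

limiter = ApiRateLimiter()
delays = [limiter.wait(now=t * 0.1) for t in range(51)]  # 51 calls in 5 s
print(delays[50])  # the 51st call must wait until the first one expires
```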
Access Token authentication requires an access token that you create on APM. Basic authentication requires an APM username and password. In either case the authentication information is passed as part of the API request.
The APM API supports Access Token authentication (recommended) and Basic authentication
When using the interactive APM API interface you will typically use the current API version (for example, V3). At times, new or changed endpoints will become available for early access prior to being promoted to the latest API version
Care should be taken with PUT, POST, and DELETE requests as they affect your live data.
Click Try it out!. The Curl field shows the curl command string (except for the authentication information) that can be used from the command line to produce the same result. See Using curl for more information. The Request URL field shows the URL the request was made to. The Response Body field shows the JSON-formatted response sent by APM; in this case, all network paths within the selected organization. The Response Code is the response code sent by APM. Successful responses have a “2xx” code. See Error codes for a list of error codes and their meaning. The Response Headers field shows the header information sent by APM.
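The same request can be staged in a script before actually sending it. A sketch that only builds the request object; the host name, the `/api/v3/path` URL shape, and the `Authorization: Token …` header format are assumptions here — copy the exact values from the Curl field shown in Explore API:

```python
from urllib.request import Request

def build_path_request(host, token):
    """Build (but do not send) a GET /v3/path request.
    URL shape and header format are assumed, not authoritative."""
    url = f"https://{host}/api/v3/path"
    return Request(url, headers={"Authorization": f"Token {token}"}, method="GET")

req = build_path_request("demo.pm.appneta.com", "MY-ACCESS-TOKEN")
print(req.full_url)      # https://demo.pm.appneta.com/api/v3/path
print(req.get_method())  # GET
```

Sending it (e.g. with `urllib.request.urlopen(req)`) would then return the JSON response body shown in the interactive interface.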
Log in to APM. If you have more than one organization, change the organization to the one you are interested in. Navigate to > Explore API. The interactive APM API interface appears. Navigate to path > GET /v3/path.
The interactive interface provides detailed documentation about each endpoint and its parameters.
The best place to start learning and experimenting with the APM API is through its interactive interface. It provides a good place to stage API calls before implementing them in your integration software
Getting started with the APM API
In addition to the APM API, there is an API for administering Enterprise Monitoring Points (EMPs) - the Admin API.
The API also provides the ability to configure and control APM
Results are delivered in a lightweight JSON format
The APM API makes data collected and generated by AppNeta Performance Manager (APM) available for analysis, reporting, and presentation in third-party systems
APM API - AppNeta Documentation | AppNeta
Copy the token and save it. Important: It will not appear again.
In the Select Organizations section, check any organizations you want the token to have access to
In the Dynamic Access section, check Add all organizations and any in the future if you want the token to access any organizations the user has access to now and in the future (as new organizations are added or removed).
Create a token: To create an access token, navigate to > Manage Access Tokens.
There are a few limitations to be aware of: Token generation is available to all users except those with custom roles. Token permissions are less than or equal to those of the user that created it. A token cannot be modified once it is created. You must revoke it then create a new one. If the user that created a token is deleted from APM, any tokens they created are immediately revoked. Tokens cannot be used to call the observer API endpoint for creating, viewing, or deleting an observer URL. To access the observer API endpoint, use basic authentication or use the interactive API interface.
API Access Tokens are easily generated within APM and are revocable.
Another benefit is that single sign-on users do not require a local account to utilize the API.
A user can generate a token and define the scope of access available to that token to a degree that is equal to or less than that user’s own scope of access.
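The "equal to or less than" scope rule is a subset check. A sketch; the permission names are invented examples, not APM's actual permission model:

```python
def token_scope_allowed(user_scope, requested_scope):
    """A token's scope must be a subset of (equal to or less than)
    the scope of the user creating it."""
    return set(requested_scope) <= set(user_scope)

user = {"read:paths", "write:paths", "read:usage"}  # hypothetical permissions
print(token_scope_allowed(user, {"read:paths", "read:usage"}))  # True
print(token_scope_allowed(user, {"admin:org"}))                 # False
```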
Access management is improved because you can generate access keys granting API access within various scope and permission levels without requiring the creation of additional APM users
Security is improved because username and password do not need to be encoded into scripts or applications that access the API. Also, the authentication token does not carry any user information.
In addition, they provide greater control over API access management compared with Basic authentication (i.e., username and password).
API access tokens provide a secure method of accessing the APM API from a script or from an application.
API Access Tokens
API access tokens - AppNeta Documentation | AppNeta
Configuring APM event integration consists of specifying the URL of the server you want to send the events to and the type of events you want to send there.
Configure APM event integration
You can use either HTTPS or HTTP to communicate between APM and your server, though we do not recommend using HTTP. HTTPS is recommended because it provides encrypted communications using Secure Sockets Layer (SSL). SSL communication requires that your server has an SSL certificate. If you are using APM-Public Cloud (as opposed to APM-Private Cloud), you’ll need an SSL certificate from a recognized Certificate Authority (CA) as self-signed/untrusted certificates are not supported
limitations Event integration does not currently support Usage Events.
APM event integration is configured using the observer API endpoint in the APM API.
There is a different JSON object for each event type. The properties within each object are described in Event data.
Each event that is sent consists of data in JSON format.
Network change events - These are notifications that alert you to changes in the sequence of networks (BGP Autonomous Systems) on the path between a source and a target.
Web application events - These are notifications that a web path alert profile was violated or cleared
Service quality events - These are notifications that an alert condition was violated or cleared.
Sequencer events - These are notifications that APM lost or reestablished connectivity with a Monitoring Point
Test events - These are notifications that a diagnostic test has completed (or was halted).
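A receiving server typically dispatches on the event type before processing the JSON payload. A minimal sketch; the `"type"` field name and the property names below are hypothetical — see Event data in the docs for the real per-event-type properties:

```python
import json

def handle_event(raw):
    """Parse an incoming APM event and dispatch by type.
    Field names are illustrative placeholders."""
    event = json.loads(raw)
    kind = event.get("type", "unknown")
    handlers = {
        "sequencer": lambda e: f"connectivity change: {e.get('monitoringPoint')}",
        "serviceQuality": lambda e: f"alert {e.get('state')}",
    }
    return handlers.get(kind, lambda e: f"unhandled event type: {kind}")(event)

sample = json.dumps({"type": "sequencer", "monitoringPoint": "mp-nyc"})
print(handle_event(sample))  # connectivity change: mp-nyc
```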
The event integration functionality provides the ability to have APM events sent to a server or servers of your choice so that you can process the events as required.
Event Integration - AppNeta Documentation | AppNeta
Retrieve the embeddable URL: At the time “Embeddable Session Login” is enabled on a user account, an email with the embeddable URL is sent to the user. Users with “Embeddable Session Login” enabled can also retrieve the embeddable URL at any time. To retrieve the embeddable URL: In APM, hover over your user icon at the top right of the page. Select Get Embed from the dropdown. Copy the embeddable content and click OK. Paste the embeddable content into the web page being modified.
Be aware that embedding an application within another has the potential for exposure to cross-site scripting (XSS) types of security vulnerabilities
They can also retrieve this URL at any time. The URL (and surrounding HTML code) must then be added to the web page being modified.
The embeddable session URL is sent to a user when they are enabled for embeddable session logins
require the “Embeddable Session Login” add-on privilege
There are situations where it is desirable to consolidate the presentation of monitoring tools from different vendors onto a single web page
Embeddable Session Login
Embeddable Session Login - AppNeta Documentation | AppNeta
To embed charts into a web page, embeddable chart configuration must first be enabled
There are situations where it is desirable to display charts available within APM on a web page that you have created.
Embeddable charts - AppNeta Documentation | AppNeta
Troubleshoot a blank chart: If your chart is blank, possible issues include: You are reporting on a range that has no data. You have an error in your code. The Monitoring Point is offline. If you are using Internet Explorer 11, the embed code must be hosted before it can be viewed. It can’t simply be opened as a local file or it will be blank.
Customize an embeddable chart
To retrieve embeddable chart content: For network path performance charts: Navigate to Delivery > Network Paths. Click the network path you are interested in. Click the Performance tab. The network path performance page is displayed. For the DNS chart: Navigate to Experience > Web Paths. Click the web path you are interested in. Click the Performance tab. The Test Timeline page is displayed. Scroll down to the DNS pane. Click the “</>” symbol within the chart pane to view the embed code. The embed code is displayed. Copy the embed code and paste it in the web page you are modifying as per the comments in the code. If you want, you can customize the embeddable chart.
Retrieve embeddable chart content
Navigate to > Embeddable Charts Configuration. Select Embeddable Charts. Embeddable chart configuration is enabled
Enable embeddable chart configuration
Add a related network path
When creating or editing a packet capture configuration, you can associate it with one or more network paths using the Related Network Paths feature. This enables you to easily see all packet captures related to a given network path and to have the packet captures appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart).
Associate a packet capture with a network path
Add comments to a packet capture: Navigate to Usage > Packet Captures. Click the name of the capture you are interested in. Click the Overview tab. In the Comments field, click the edit link. Add your comments. Click OK. Your comments are added to the capture overview.
Packet Capture uses the following Wireshark filters to provide alert and warning statistics:
On physical Monitoring Points, sorting by timestamp will produce the correct order. Timestamps are taken before the packets are split into hardware receive queues and thus respect the absolute order of the packet, which means that sorting a capture file (.pcap) by time will produce a better picture of packet ordering than sorting by packet index.
Between flows (different Layer 3 source/dest addresses), packets may be reordered. Two flows may not be processed by the same receive queue, which results in nondeterministic ordering when they’re inserted into the final capture file
Within a given flow (same Layer 3 source and destination IP addresses), packets will not be reordered. Every packet in a flow will be processed by the same hardware receive queue and thus fed into the capture file (.pcap) in order.
Note the following regarding packet order:
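The packet-ordering notes above can be illustrated with a tiny capture. The index/timestamp/flow records below are invented, showing why timestamp order recovers the true arrival order when two flows hit different receive queues:

```python
def by_index(packets):
    """Order packets as they were written to the capture file."""
    return sorted(packets, key=lambda p: p["index"])

def by_timestamp(packets):
    """Order packets by timestamp, taken before receive-queue splitting."""
    return sorted(packets, key=lambda p: p["ts"])

# Flow B's packet was handled by a different hardware receive queue, so it
# got a later file index despite arriving between flow A's two packets.
capture = [
    {"index": 1, "ts": 0.010, "flow": "A"},
    {"index": 2, "ts": 0.030, "flow": "A"},
    {"index": 3, "ts": 0.020, "flow": "B"},
]
print([p["flow"] for p in by_index(capture)])      # ['A', 'A', 'B']
print([p["flow"] for p in by_timestamp(capture)])  # ['A', 'B', 'A'] - true order
```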
Navigate to Usage > Packet Capture. Click the name of the capture you are interested in. The capture results are displayed on a number of tabs: Overview - provides high-level capture details. Alerts and Warnings - displays the number of packets in the capture that match a predefined set of display filters that identify notable network behavior that you may be interested in. Protocol Breakdown - displays the number of packets, and the number of bytes in those packets, for each protocol in the capture. Conversations - displays the network conversations (traffic between two specific endpoints for a protocol layer) with the highest total number of bytes. Related Network Paths - lists the network paths associated with the capture. Click a path to display all of the captures related to that path.
View packet capture results
A capture can be stopped automatically as part of a stop condition specified when it is started or scheduled, or it can be stopped manually. Stopping a packet capture will not stop a packet capture schedule.
Stop a packet capture
To use an existing packet capture configuration as a template: Navigate to Usage > Packet Capture. For the capture you want to repeat, select > Start Again.
Navigate to Usage > Packet Capture. Click + Start New Capture. In the Name field, specify a name for the capture. In the Monitoring Point dropdown, select the Monitoring Point to capture from. In the Capture Interface dropdown, select the Monitoring Point capture interface to use. In the Packet Limit field, specify the maximum number of bytes to store of each captured packet. Default: 96 bytes. Range: 68 - 65,535. Deselect this option to capture entire packets. In the Capture Filter field, use a filter to specify which packets are captured. The filter uses libpcap syntax. For examples, click the icon. Filtering only the traffic you care about will reduce the capture size. This provides a longer captured duration, and it ensures that the capture analysis is relevant to the problem you are trying to solve. Leave the field blank to capture all packets. In the Capture Stop Condition(s) field, specify when to stop the capture. In the Related Network Paths field, specify network paths associated with the capture to have the path name appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart), and to filter completed packet captures by related network paths. Click Start. The capture is started.
To start a new packet capture:
Prior to starting a packet capture you must set up for packet capture. You can then start a new capture, use an existing capture configuration as a template to start a capture, or you can edit an existing packet capture configuration. Capture files are capped at 1GB. In addition, regardless of any stop conditions specified, capturing ends when the space remaining on the Monitoring Point is too low: For full-packet captures (where maximum of 1500 bytes per packet are captured), capturing ends when less than 10MB remains on the device. For partial-packet captures (where less than 1500 bytes per packet are captured), capturing ends when less than 1MB remains on the device.
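The low-disk stop rule above is easy to express directly. A sketch; the function name is invented, and it assumes "full-packet" means a snap length of 1500 bytes as described above:

```python
MB = 1024 * 1024

def must_stop(free_bytes, snap_length):
    """Capture stops when Monitoring Point free space runs low:
    < 10 MB free for full-packet captures (1500 bytes per packet),
    < 1 MB free for partial-packet captures (< 1500 bytes per packet)."""
    threshold = 10 * MB if snap_length >= 1500 else 1 * MB
    return free_bytes < threshold

print(must_stop(5 * MB, 1500))  # True: full-packet capture, under 10 MB free
print(must_stop(5 * MB, 96))    # False: partial capture, still over 1 MB free
```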
The captured packets are packaged into a standard file format and securely uploaded to AppNeta Performance Manager (APM). You can view the results there, or you can download the capture file to analyze using third-party software (for example, Wireshark).
Packets from AppNeta monitoring, assessments, diagnostic tests, and Monitoring Point communication are not captured.
captures packets based on user-defined parameters that include which packets to capture, how much of each packet to capture, and when to start and stop capturing
Packet Capture is a way of setting up a Monitoring Point to copy and store IP packets that it sees on its Usage monitoring interface
Managing Packet Captures
Managing Packet Captures - AppNeta Documentation | AppNeta
To save a view of the page: On all but the Traffic Summary page, click > Save Current View (next to the page title) and enter the view details. You can view a list of saved views once they have been created
To view details for a particular host: On the Top Sources, Top Destinations, or Top Hosts pages, in the table below the chart, click the host you are interested in. A page containing details for the selected host is opened
(8) To select a recent filter: In the 10 Recent Filter(s) pane (bottom right), click the recent filter you want to see. A new tab is opened.
(7) To select a different time range: In the Filter Options pane (center right), specify the time range and click Apply Filter
(6) To select a different Monitoring Point/interface: In the Monitoring Point pane (top right), select a Monitoring Point/interface from the dropdown
(5) To show both inbound and outbound traffic: In the chart pane, click View Options (top right) and select Show as Inbound / Outbound Traffic
(4) To select a different traffic parameter to display: In the chart pane, click View Options (top right) and select the traffic parameter in the dropdown
(3) To select a new chart: In the chart pane, click the chart title and select a new chart from the dropdown
(2) To drill down on a chart: Hover over the chart and click a blue pip for the detail you are interested in
(1) To open a detail chart (containing a chart and a details table) in a new tab: In the top pane, click one of the chart links
The Application Usage Monitoring Point Details page is displayed. The top pane allows you to select from a number of charts. Selecting a chart will open a new tab. The Traffic Summary pane provides a summary of various traffic parameters for the selected Monitoring Points over the selected period. The chart pane displays one of the selected charts: Top Applications - displays the applications generating the most traffic. Top Categories - displays the application categories generating the most traffic. Top QoS - displays the QoS values seen most frequently. Top Sources - displays the hosts sending the most traffic. Top Destinations - displays the hosts receiving the most traffic. Top Hosts - displays the hosts sending and receiving the most traffic.
Application Usage Monitoring Point Details charts
(1) To change the traffic parameter the charts display: In the 24 Hour column heading, select the traffic parameter to display in the charts. These include: Traffic Rate - the average traffic rate (bits/second) in the measurement period. Traffic Volume - the total traffic volume (bytes) in the measurement period. Number of Packets - the total number of packets in the measurement period. Flows / second - the average number of flows in progress per second in the measurement period. None - no charts appear
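Two of those parameters are related by simple arithmetic: the average Traffic Rate (bits/second) over a period follows from the Traffic Volume (bytes). A sketch with made-up numbers:

```python
def traffic_rate_bps(volume_bytes, period_seconds):
    """Average traffic rate in bits/second from total volume in bytes."""
    return volume_bytes * 8 / period_seconds

day = 24 * 60 * 60  # the 24 Hour chart's measurement period, in seconds
print(traffic_rate_bps(10_800_000_000, day))  # 10.8 GB over 24 h -> 1,000,000 bps
```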
Navigate to Usage > Monitoring Points. There is a 24 Hour chart displayed for each Monitoring Point/interface.
24 Hour charts
(5) To view detailed charts for a Monitoring Point: In the Top Applications pane, click the Monitoring Point name/port.
(4) To view application traffic details: In the Top Applications pane, hover over the application name or click it for additional detail
(3) To filter what is being displayed in the Top Applications pane: In the Top Applications pane, adjust the filter fields in the top right of the pane.
(2) To select the Monitoring Points to display: In the Traffic Summary pane, click the Locations link.
(1) To adjust the time period to display: In the top pane, select a zoom link, use the calendar, or use the slider.
The Top Applications pane provides a graphic of the relative traffic volume for the top applications against selected Monitoring Points.
The Traffic Summary pane provides a summary of various traffic parameters for the selected Monitoring Points over the selected period.
Navigate to Usage > Summary
The Application Usage Summary page provides an overview of traffic (by application) across selected Monitoring Points for a selected timeframe
Application Usage Summary page
Saved Views - provides a way to save a view of a Application Usage Monitoring Point Details chart so that it doesn’t have to be reconfigured each time you want to run it.
Application Usage Monitoring Point Details charts - provides a variety of highly customizable and detailed charts showing results from a selected Monitoring Point Usage monitoring interface.
24 Hour charts - provides a view of a selected traffic parameter (for example, Traffic Volume) for each Monitoring Point.
Application Usage Summary page - provides an overview of traffic (by application) across selected Monitoring Points for the selected timeframe (up to one day).
View Usage monitoring results
Usage Monitoring Data - AppNeta Documentation | AppNeta
Apply a Usage alert profile to a Monitoring Point: The last step in setting up Usage alerts is to apply a Usage alert profile to a Monitoring Point. To apply a Usage alert profile to a Monitoring Point: Navigate to Usage > Monitoring Points. For the Monitoring Point interface you want to configure, click the icon. In the Configure dropdown, select Alert Profiles. In the Flow Alert Profile section, select the profile to apply. Click Apply. The selected alert profile is applied to the Monitoring Point.
Usage alerts - AppNeta Documentation | AppNeta
a number of metrics for the path
source, the target, and all hops between the source and target
The Data Details and Voice Details tabs provide per-hop details
Host name table
provides diagnostic messages that describe issues detected
provides a number of metrics for the path
Hop diagram - shows the source, the target, and all hops between the source and target.
The Summary tab provides a high level summary of the information obtained during the diagnostic test
In addition to being triggered automatically when an alert condition is violated, diagnostics are also triggered automatically when a network path is created. They can also be triggered manually.
If more than five network paths have diagnostics tests started due to the same condition, to the same target, in the same direction, and at a similar time, only five of these tests are run
If the condition that triggered a diagnostic test is cleared before the test has run, the test is not run
Note that the path status changes to “testing” only when the test starts, not while it is queued.
Diagnostics initiated manually, or triggered when a path is created, are always run
events are marked on the Capacity chart
Tests triggered when the queue is full are not run
a limited number of tests may be queued
Within a parent or child organization, the number of concurrent diagnostic tests triggered by a violation condition is subject to a limit
When the return path is through more than a single NAT device.
When the firewall in front of the source is blocking inbound UDP - In this instance, the inbound diagnostics cannot be forwarded to the source. Diagnostics revert to ICMP.
When the target and one or more other Monitoring Points are behind a firewall - In this instance, APM cannot uniquely identify the target.
When the source connects to APM via the AppNeta relay server - In this instance, APM does not know the public IP address of the source.
There are circumstances in which inbound diagnostics (target-to-source), cannot be run:
If a hostname is used to identify the target then a hostname must also be used to identify the source. An IP address cannot be used.
If they are not, single-ended diagnostics are run in the outbound direction, and inbound diagnostics are not available.
Diagnostics are only available for dual-ended paths when the source and target Monitoring Points are in the same organizational hierarchy
Diagnostics on dual-ended paths are subject to some restrictions:
For dual-ended paths, the diagnostics tests in both directions (source-to-target and target-to-source) use ICMP to test path hops and UDP to test the endpoints.
For single-ended paths, the diagnostics tests use ICMP for testing both the path hops and the path target
Diagnostics help you to pinpoint problematic hops using the same packet dispersion techniques as CPA, but rather than sending test packets only to the target, test packets are sent to all hops on the network path in addition to the target.
Diagnostics - AppNeta Documentation | AppNeta
To use Rate-limited monitoring: Open a support ticket to enable Rate-limited monitoring. Once enabled, the Rate Limiting Capacity chart appears on the path performance page.
As Rate-limited monitoring generates network load, it must be used with caution. To that end, it is not enabled by default and must be enabled by AppNeta Support.
These bursts, though brief, have more impact on the network than AppNeta’s standard monitoring techniques, but are required in order to trigger the rate limiting mechanism
The rate at which the Monitoring Points receive the packet bursts determines the rate-limited capacity.
The source sends packet bursts outbound and the target sends packet bursts inbound.
When Rate-limited monitoring is enabled, APM instruments the network path with controlled bursts of packets at the total capacity rate in order to trigger the network’s rate-limiting mechanism
To run a basic PathTest using UDP:
Tests are bidirectional. The source sends packets and the target simply replies.
The target does not need to be a Monitoring Point
considered control traffic rather than true data traffic
For PathTest using ICMP
Tests must use port numbers not blocked by a firewall.
Tests must use port numbers not in use by the Monitoring Point (e.g., 80, 443)
Tests are unidirectional
The target must be a Monitoring Point or an AppNeta WAN Target
Test with multiple PathTest streams to better saturate the network with TCP traffic
As TCP uses flow control and retransmissions (unlike UDP and ICMP), the PathTest results can show a lower capacity than with UDP. The results, however, will show the true capacity to carry TCP traffic.
For PathTest using TCP: PathTests using TCP are the best way to test the capacity of networks carrying TCP traffic
Tests are unidirectional. Inbound tests are started after outbound tests complete.
The target must be a Monitoring Point or an AppNeta WAN Target.
it doesn’t have flow control and retransmissions like TCP.
UDP is considered true data traffic
PathTests using UDP are the best way to test general network capacity.
For example, on physical Monitoring Points the multiple is 20. So 20 packets are sent on the wire for every one seen in a packet capture.
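The relationship between captured and on-the-wire packet counts can be expressed directly. The multiple of 20 is taken from the text and applies to physical Monitoring Points:

```python
def wire_packet_count(captured_packets, multiple=20):
    # PathTest sends multiple copies of each packet "on the wire";
    # a packet capture therefore undercounts by that factor.
    return captured_packets * multiple
```

For example, a capture showing 100 PathTest packets corresponds to 2000 packets actually sent.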
PathTest sends multiple copies of a packet “on the wire”
The number of packets seen in a packet capture will be less than the number actually sent on the network by PathTest.
gradually increase the number of streams (e.g., 5, 10, …) and decrease the bandwidth proportionately (e.g., 200Mbps, 100Mbps, …) until UDP tests and TCP tests show the same results
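The tuning procedure above keeps the aggregate offered load constant while raising the stream count. A minimal sketch of that schedule (the function name and defaults are illustrative):

```python
def stream_schedule(max_bandwidth_mbps=1000, stream_counts=(1, 5, 10, 20)):
    # Per-stream bandwidth drops proportionately as streams increase,
    # so the total offered load stays at the expected capacity.
    return [(n, max_bandwidth_mbps / n) for n in stream_counts]
```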
Set bandwidth to maximum expected capacity (e.g., 1000Mbps) and run a test
This can be addressed by running more than one stream when using TCP
can also be due to the distance between source and target
due to TCP’s flow control mechanism as well as the effect of packet loss and retransmissions on TCP
There can be a large discrepancy between a test using UDP and the same test using TCP
Networks can be provisioned asymmetrically
For example: data vs voice, protocol used (ICMP, UDP, TCP), and QoS type
Networks treat different traffic types in different ways.
A PathTest can only be sourced from a Monitoring Point’s default interface
By default, only Advanced and Organization Admin user roles can run a PathTest.
When separate bandwidth is allocated to data and voice, use PathTest to stress the network with data and voice traffic.
Generate traffic with a QoS DSCP marking to verify or stress a traffic engineering strategy. This can be used to confirm that traffic with a given DSCP marking is given priority over best-effort traffic.
Verify that a link can achieve the capacity provisioned by your ISP.
possible to accurately measure the available ICMP, UDP and/or TCP capacity and analyze the behavior of a network under different traffic loads
PathTest is a powerful load generation tool used to examine the network capacity between two endpoints on a LAN or WAN link.
Caution: While these loading techniques produce accurate results, be aware that they can have an impact on the available network capacity for the duration of a test.
Both of these tools require a Monitoring Point at each end of the network path being tested (i.e. a dual-ended path).
Rate-limited monitoring is used when you need to make measurements at regular intervals over time
PathTest is used when you need to make a single measurement
PathTest and Rate-limited monitoring are tools that can be used to generate a load that triggers these techniques and then measure the capacity of the network under load
In either case, your true available capacity is the one in the rate-limited state
other ISPs market a ‘burst’ technology, which means that they’re limiting your regular capacity, but give you more for short periods of time, for example, during large downloads
some ISPs trigger rate limiting when your traffic rate crosses a threshold
possible that the network will behave differently (provide a lower capacity) once it is actually loaded
create very little load
by sending relatively small packet trains and then calculate the path’s capacity based on the dispersion between the packets on the receiving end
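The dispersion calculation above reduces to a simple formula: the bottleneck capacity is the packet size divided by the inter-packet spacing at the receiver. A sketch, with invented names:

```python
def dispersion_capacity_bps(packet_size_bytes, dispersion_seconds):
    # If back-to-back packets arrive spaced by dispersion_seconds,
    # the bottleneck link forwarded one packet in that interval.
    return packet_size_bytes * 8 / dispersion_seconds
```

For example, 1500-byte packets arriving 120 µs apart imply a 100 Mbps bottleneck.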
Delivery monitoring tools that run by default (Continuous Path Analysis (CPA) and Deep Path Analysis (DPA) / Diagnostics) measure the capacity of a network path
Stress, Capacity, and Availability testing - AppNeta Documentation | AppNeta
OIDs that can’t be matched to a MIB are grouped under ‘unknown OIDs’, and if the device doesn’t return a value for a MIB, it is hidden by default.
APM can automatically determine which MIBs are available on a device, and then walk each one
APM has access to an extensive SNMP MIB library that covers common vendors such as Alcatel, Broadcom, Avaya, Cisco, IBM, Lucent, Nortel, and Xerox
Initiate an SNMP walk
Network devices - AppNeta Documentation | AppNeta
If you want to influence your readiness score, increase throughput. Keep in mind however that readiness is not throughput, nor is it a linear function of throughput.
This estimate considers total capacity and utilization, and the effect of loss, latency, and jitter on TCP congestion control.
Modeling is used to estimate throughput at layer 4, based on layer 3 behaviors and a set of parameters that represent a typical TCP configuration
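A well-known example of this kind of layer-4 model is the Mathis formula, which estimates steady-state TCP throughput from MSS, RTT, and loss rate. Note this is a generic model, not necessarily the one APM uses internally:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate):
    # Mathis et al.: rate ~ (MSS / RTT) * (1 / sqrt(p))
    return (mss_bytes * 8 / rtt_seconds) / math.sqrt(loss_rate)
```

With a 1460-byte MSS, 100 ms RTT, and 1% loss, the estimate is roughly 1.2 Mbps — illustrating why loss and latency, not just raw capacity, dominate readiness.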
The basis for readiness is throughput,
Readiness, on the other hand, leverages the diagnostic capability of APM to discover the origins of loss, latency, and jitter, and then characterizes the range of performance possible in the presence of any issues.
MOS is based on current performance and is typically related to the symptomatic evaluation of the network path
Conceptually, readiness is like MOS for data, but with key differences
return familiar metrics (loss, latency, jitter, etc.), but they also generate a ‘readiness’ value
A data assessment runs diagnostics tests on multiple paths at once, with the goal of determining your network’s ability to deliver data-intensive applications
Data assessments are not as relevant to voice, video, best-effort, or transactional data
Data assessments are tests used to determine a network path’s readiness by evaluating its ability to handle activities like ftp transfers, backups, and recovery
The readiness metric is a succinct representation of how well a network path is expected to handle data-intensive applications.
Data assessments - AppNeta Documentation | AppNeta
Create a Voice test schedule Voice test schedules are used to repeatedly execute a Voice test at a given interval for a specified period of time.
When your call is traversing a network with insufficient bandwidth for the VOIP configuration, the call will experience higher latency and possibly packet loss. Packet loss is important because VOIP traffic uses the UDP protocol. With UDP, lost packets are not re-transmitted, resulting in broken audio on the listener’s end.
The amount of compression, the number of samples taken, and the number of samples packed into each IP packet all directly affect how much bandwidth is consumed by the call
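The bandwidth arithmetic can be made concrete. The defaults below describe G.711 at 20 ms packetization with 40 bytes of IP/UDP/RTP headers — standard figures used here for illustration; layer-2 framing adds more:

```python
def voip_ip_bandwidth_bps(codec_rate_bps=64000, packetization_ms=20,
                          overhead_bytes=40):
    # Payload per packet: 64 kbps * 20 ms = 160 bytes for G.711.
    payload_bytes = codec_rate_bps / 8 * packetization_ms / 1000
    packets_per_second = 1000 / packetization_ms  # 50 pps at 20 ms
    return (payload_bytes + overhead_bytes) * 8 * packets_per_second
```

This yields 80 kbps of IP-level bandwidth per G.711 call; packing more samples into each packet lowers the overhead share.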
When human voice is converted from analog to digital, it is sampled thousands of times per second using one of several techniques called ‘codecs’ that not only convert the sampled voice, but also compress it
Voice quality assessments are based on four metrics that affect voice quality: bandwidth utilization, packet loss, latency, and jitter
This results in a low MOS even though the audio sounds fine
not all VOIP handsets will respond to test packets
Devices such as printers will fail a voice assessment.
When you are assessing voice endpoints on a LAN, make sure you are targeting the correct devices
Contact AppNeta Support
against the service provider’s PBX.
Run an advanced voice assessment
against the service provider’s PBX.
Run a basic voice assessment
Apply the default voice WAN alert profile to capture any events and initiate diagnostics tests
Create a single-ended path from the location experiencing the problem to the service provider’s PBX
Test the connection to the VOIP server
Test the WAN link
Assessing a voice service provider
Voice packet loss
MOS - An estimate of the rating that a typical user would give to the sound quality of a call
Readiness - A representation of MOS that helps you understand how well a path is handling voice traffic
Voice monitoring metrics
Note: If you do not have a voice license, you can use PathTest as a replacement for Voice test, though it does not use voice protocols (SIP/RTP/RTCP) to simulate voice traffic, so results are less accurate than using Voice test.
Test a network’s ability to handle inter-office voice traffic
Test a network’s ability to handle actual voice traffic
Create voice traffic load to expose network problems
Voice tests require a Monitoring Point as the target (though the target Monitoring Point does not require a voice license)
one or more sessions; each session specifies a path, a number of concurrent calls, and QoS settings
allow you to more accurately measure how your network would treat voice traffic, but at the expense of greater bandwidth consumption
Voice tests provide a way to simulate multiple simultaneous voice calls using the actual voice codecs and protocols
Determine how varying call loads affect the ability to handle voice traffic using the call ramp-up feature
Assess network paths (up to 25) over a period of time in order to capture transient conditions
Check an existing voice deployment for issues
Provide a quick voice assessment of your network in advance of a VOIP deployment
Advanced voice assessments do not require a Monitoring Point as a target
The difference from Basic assessments is that this testing happens continually over a specified period of time and relies on both diagnostic and continuous monitoring techniques to infer voice quality
Advanced voice assessments also provide a way to assess the ability of multiple network paths to carry voice traffic.
Assess network paths using a large number of concurrent calls (up to 250)
Check an existing voice deployment for issues
Provide a quick voice assessment of your network in advance of a VOIP deployment
do not require a Monitoring Point as a target
happens at a point in time and relies on diagnostic techniques to infer voice quality
Basic voice assessments provide a way to assess the ability of multiple network paths to carry voice traffic
the source Monitoring Point needs to be licensed for voice.
enables you to assess your network’s ability to handle voice traffic with three tools: Basic voice assessments, Advanced voice assessments, and Voice tests
(APM) has monitoring capability designed specifically for ensuring good voice quality
Voice delivery - AppNeta Documentation | AppNeta
The tables below show the maximum call loads for each Monitoring Point type for different codecs, bit rates, and call directions
number of simultaneous video calls supported depends on the codec used, the bit rate, and whether they are one-way or two-way calls
Monitoring Points have different capacities with respect to video call load
Monitoring Point load limits
To support these use cases, tests are constructed in units called sessions. Each session is based on one path, a number of calls, call direction, codec, bit rate, and QoS
Video tests can simulate one-to-one, one-to-many, and many-to-many conferencing scenarios
Load warning - Be aware that video tests can create significant network load, especially when tests include several sessions, or when multiple tests are run concurrently.
The target must also be a Monitoring Point but does not require a video license.
source Monitoring Point needs to be licensed for video
enable you to measure how network security and traffic-shaping impacts video calls, and verify that QoS is having the desired effect
You can perform discrete tests that account for network path, number of calls, direction, codec, etc.
AppNeta Performance Manager (APM) has monitoring capability designed specifically for ensuring good video conference quality
Video delivery - AppNeta Documentation | AppNeta
You can configure the tree map to display network paths according to their Importance setting (where the larger the Importance number, the larger the rectangle) or the number of violations on the path in the last 24 hours or 7 days (where the more violations in the period, the larger the rectangle, and paths with no violations are not shown).
The Network Path Status tree map is like a word cloud but represents monitored network paths in your organization. Each path is represented by a color-coded rectangle where its color indicates the path’s status and its size represents either its Importance setting or the number of violations on the path in the last 24 hours or 7 days
Path status is indicated by its line color. For aggregated paths, the line color is that of the path with the most severe status.
Network Path Status - AppNeta Documentation | AppNeta
The provisioned capacity of a network path is indicated by a horizontal yellow line on the network path’s Capacity chart. It shows either the highest total capacity seen during the specified time range, or the capacity you expect (and have set) based on the service level agreement (SLA) with your ISP.
White box with dashed lines - means that monitoring is disabled on the path.
Grey chart - means that APM has not heard from the Monitoring Point.
Black chart - means that the Monitoring Point can’t reach the path target
Red chart - indicates that path performance violated a condition threshold
Yellow horizontal line - indicates the provisioned capacity
Black horizontal line - indicates a condition threshold
Black vertical line - indicates an attribute essential to monitoring has changed
Voice Loss chart
The Round-Trip Time chart shows the average round-trip time (RTT) measured on the network path during the specified time period
Round-Trip Time chart
The Latency chart shows the latency (calculated as 1/2 RTT) measured on the network path during the specified time period
The Data Jitter chart shows the data jitter (data packet delay variation) measured on the network path during the specified time period
Data Jitter chart
Each data point is calculated as a rolling average of the last five samples during normal sampling (once per minute by default) and the last ten samples during escalated sampling (once every ten seconds)
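The rolling-average smoothing described above can be sketched as follows (5 samples during normal sampling, 10 during escalated sampling):

```python
from collections import deque

def rolling_average(samples, window=5):
    # Average each sample with up to window-1 preceding samples.
    buf = deque(maxlen=window)
    averaged = []
    for s in samples:
        buf.append(s)
        averaged.append(sum(buf) / len(buf))
    return averaged
```

A single 100% loss sample among four clean ones thus plots as 20%, which smooths transient spikes.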
shows the percentage data loss (loss of simulated data packets) measured on the network path during the specified time period
Data Loss chart
In addition, it shows the provisioned capacity (yellow horizontal line) either measured or set on the network path
shows the total, utilized, and available capacity measured on the network path during the specified time period
Hovering over a circle shows the associated events.
Circle position on the x-axis indicates the time that the events occurred. Circle size indicates the number of events of a given type. Circle color indicates the event type as follows:
provides a summary of the types of events that have occurred on the network path during the specified time period.
The Route chart shows the latest routes taken by TCP, UDP, and ICMP packets from the source Monitoring Point to the target. Hover over the nodes to display detailed hop information. You can also view route details for the network path by using the Routes pane.
The charts are plotted in real-time so no page refresh is required
By default, they show information from the last hour, but the timeline can easily be modified
Network path performance details are provided on a number of charts - each chart representing a separate metric.
Network Path Performance - AppNeta Documentation | AppNeta
In general, a condition is violated when it is outside a configured limit for a specified period of time and is cleared when it returns to an acceptable value for another period of time. The time periods specified for a given condition (the evaluation periods) are used to confirm that the condition is persistent for that time before declaring a violation or clear event.
Violations that occur while a Monitoring Point is disconnected from APM don’t raise an event until the connection is restored.
Alerts - things to know
If an alert profile time range expires while a condition is in violation, the condition is cleared and a ‘threshold suppressed’ entry is added to the event log.
Navigate to Delivery > Events
Network path events indicate changes to conditions on a network path that exceed alert thresholds
All events for all paths in an organization are logged and retained for seven days.
Events include: condition violation and clear, Monitoring Point availability changes, diagnostic test completions, route changes, packet captures, and ISP changes.
A network path event is an indicator that something about a path has changed
Network Path Events
Network Path Events - AppNeta Documentation | AppNeta
Use the Route Visualization in cases where a routing issue is suspected. These include: a significant change in RTT (due to packets taking a non-optimal route, potentially because of a network failure or non-optimal DNS configuration); a significant change in capacity (due to packets taking a non-optimal route, potentially because of a network failure); unexpected route changes; and connectivity failures.
The Route Visualization provides a way to view traceroutes taken on multiple paths in a single view and to see how the routes taken by packets on those paths change over time. This enables you to: see routes beyond your private network; review historical routing changes; watch how routes change over time; and compare routes of different paths to find common/different hops between them.
The primary tool for route analysis is the Route Visualization on the Network Paths page as it provides a visual of historical routes taken for up to 20 paths at the same time.
As part of Continuous Path Analysis (CPA), APM regularly traces the route for each network path, hop-by-hop, from source to target. This information can be used to confirm whether traffic is traveling through expected hops and networks.
Network Path Routes - AppNeta Documentation | AppNeta
To use a satellite, data must travel about 50,000 miles, which translates to about 260 msec of latency. But to go halfway around the world, you only need to travel about 12,000 miles.
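The latency figures above follow directly from propagation at the speed of light. A quick check (constant in miles per second):

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def propagation_delay_ms(miles):
    # One-way delay for a signal traveling at the speed of light.
    return miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000
```

About 50,000 miles via satellite gives roughly 268 ms; about 12,000 miles terrestrially gives roughly 64 ms. Real links add queuing and routing delay on top of this floor.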
Inevitably this flag is raised when the path under test includes one or more satellite links.
Advanced Analysis "Excessive Latency Measured"
Advanced Analysis "Excessive latency measured" - AppNeta Documentation | AppNeta
"Excessive packet round-trip time (RTT) detected" is a diagnostic that indicates that your network could be duplicating network traffic during busy periods, which in turn can lead to a traffic snowball effect. By reducing queue time (and thereby round-trip time), packets are dropped promptly during periods of congestion rather than surviving long enough to be duplicated. Simply said, by dropping packets sooner, TCP flows work more efficiently.
Consider the fact that TCP must recover from data that is damaged, lost, duplicated, or delivered out of order across the path. The least efficient of these is duplicated data
Similarly, huge queues can be encountered if too many queues are used in series. For example, Frame Relay Vendor A has 4 switches in its cloud connection between New York and Chicago, while Frame Relay Vendor B may use 80 switches. Vendor B’s network may exhibit "Excessive packet round-trip time (RTT) detected" during busy periods
This diagnostic can be triggered if queues are simply too big.
Less commonly, queues are stopped because of device failures. Sometimes a reboot of a router or switch is in order.
Queues that are too big
a downstream intermittent media error can cause a queue to stop, causing packets to be delayed
One of three network situations can lead to this condition: media errors or router hangs where queues are stopping, or excessive queue sizes where packets must pass through queues that are too big.
The "Excessive packet round-trip time (RTT) detected" diagnostic indicates that your network is "queuing" (holding onto) packets for an unreasonable amount of time, and that this condition may be affecting the efficiency of your traffic flows.
As a network is driven close to a point of congestion, you may find that data is repeatedly retransmitted, and TCP windows will start to slide back and forth. The overall result is very inefficient traffic flows.
However, packets that survive long after they have been given up for dead make a mess of TCP flows.
TCP flows will respond to packet loss by backing off transmission rates.
In a properly tuned network, congestion results in packet loss
in general, queue sizes are limited to about 8 to 64Kbits, which results in proper TCP flow control.
TCP flows do not work well if queues are too large
In older switches and routers, queues have been known to be very large
The "server" is a processor that puts bits onto the down-stream wire. The "queue" is the memory that will hold packets that are waiting to be processed by the server. If the queue is full when a packet arrives, a packet is lost (typically the packet at the front of the queue is dropped to accommodate the incoming packet).
In its simplest form, a gateway/router/switch port can be represented by its two simplest components: a server and a queue.
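The server-and-queue model above can be sketched as a drop-from-front queue, matching the behavior described (the packet at the front of the queue is dropped to accommodate the arrival):

```python
from collections import deque

def enqueue(queue, packet, capacity):
    # If the queue is full, drop the packet at the front to make
    # room for the incoming packet; return the dropped packet.
    dropped = None
    if len(queue) >= capacity:
        dropped = queue.popleft()
    queue.append(packet)
    return dropped
```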
This condition usually indicates the existence of intermittent media errors, or an over-queued network condition.
For example, a packet that survives for 3 seconds on a small LAN would trigger this diagnostic message.
The "Excessive packet round-trip time (RTT) detected" diagnostic indicates that a packet has survived an unreasonably long time based on the characteristics of the network path.
Advanced Analysis "Excessive Packet Round-trip Time (RTT) Detected"
Advanced Analysis "Excessive packet round-trip time (RTT) detected" - AppNeta Documentation | AppNeta
Most often the problem arises by attempting to use a router as a firewall.
When testing "through" an intermediate router, very little delay is realized. However, when testing "to" the router, test packets leave the high-speed path and other delays may be experienced. Usually delays occur because of a busy router management CPU. Therefore, if you see high utilization on a hop in the middle of a network, you will inevitably find that the router CPU is overworked.
Occasionally you may run across a situation where utilization is high at an intermediate hop, but not at the target hop.
High router CPU utilization
Note that it is important that good packet drivers are installed on the Monitoring Point, so that Delivery monitoring can achieve good test results. This will allow you to properly isolate which devices along the test path are flawed.
A more normal scenario would be where Total Capacity is measured at 94Mbps, and no "High utilization detected" message appears. In both these cases, Total Capacity and Utilization reflect the Apparent Network that is available to the application. However, in the second case, Utilization is reflecting the effects of actual traffic on the wire rather than flaws in the NIC driver.
if Delivery monitoring measured the test path as 67Mbps with 94% frequency of "High utilization detected" and no packet loss, experience will lead you to understand that these results are typical of a certain NIC card running down-level drivers. Upgrade the NIC drivers and typically the Total Capacity increases and Utilization will reflect the actual traffic on the wire
Delivery monitoring reports end-to-end characteristics to each layer 3 device along a path
Advanced Analysis "High Utilization Detected"
Advanced Analysis "High utilization detected" - AppNeta Documentation | AppNeta
MTU negotiation is an important part of the overall health of a network. During a test, Delivery monitoring will report the following MTU conditions detected along the test path:
The (apparent measurable) PMTU
Nonstandard MTUs in use
Standard MTUs in use, other than 1500 bytes (Ethernet)
"Black-hole" hop, where a router fails to send the "fragmentation needed and DF set" ICMP message. In other words, it is unable to properly participate in MTU negotiation.
"Gray-hole" hop, where a router returns the wrong MTU value in the "fragmentation needed and DF set" ICMP message. In other words, it is responding with an incorrect MTU value for the constricting hop.
MTU conflicts, where the network is exhibiting behavior, including packet loss, that corresponds to an MTU conflict with one or more devices on the network path.
The Delivery monitoring Approach
To avoid MTU problems, consider the following:
Use one common MTU per subnet.
Separate different MTUs with Layer 3 routers.
Avoid Layer 2 FDDI/Token Ring/Ethernet bridges.
Do not filter ICMP packets.
Use reputable VPN solutions.
Test network paths from high MTU to low MTU (appliance on large MTU end).
Maintain logical diagrams to establish rule sets for MTUs, and to ensure that routers separate MTU domains.
Avoid adjusting MTU of clients to compensate for network issues.
Ensure server and network personnel have established common MTU policies.
Avoiding MTU Conflicts
Therefore, if you want to set up a GigE connection with a 9000 byte MTU, you must set the frame size of the NIC to 9018. When the cause of this condition is a reduced MTU at a destination hop, Maximum Segment Size (MSS) negotiation can protect TCP from failure. However, many Black-Hole Hops are caused by incorrectly configured mid-point layer-2 devices, in which case MSS negotiations are ineffective. Furthermore, in many scenarios MSS negotiation is ignored.
Notice that at Layer 2, a typical Ethernet frame has a maximum size of 1518 bytes. But at Layer 3 we deal with packets, and in this example the MTU represents the maximum packet size of 1500 bytes. It is important to understand that the difference between packet size and Ethernet frame size is 18 bytes.
Another common cause of black-hole hops is confusion between frame size and MTU. The following diagram illustrates a breakdown of a typical Ethernet frame.
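The frame-versus-MTU arithmetic is simple but worth making explicit. The 18 bytes are the Ethernet header (14) plus FCS (4), as described in the surrounding text:

```python
ETHERNET_OVERHEAD_BYTES = 18  # 14-byte header + 4-byte FCS

def frame_size_for_mtu(mtu_bytes):
    # Layer-2 frame size needed to carry a layer-3 packet of mtu_bytes.
    return mtu_bytes + ETHERNET_OVERHEAD_BYTES
```

So a 1500-byte MTU needs 1518-byte frames, and a 9000-byte jumbo MTU requires the NIC frame size to be set to 9018.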
In complex environments it is easy to forget to install Layer 3 routers between MTU boundaries. To avoid problems, we suggest maintaining MTU values on logical diagrams outlining the rules for Layer 3 subnets. Doing so establishes rule sets for portions of networks regardless of how they are physically connected. Here is an example.
Most modern operating systems and applications are PMTU enabled, and thereby the "Don't Fragment" bit is set in all IP headers. Therefore, when the 9000 byte packet is received by the router, fragmentation is not attempted. Rather, the result is an ICMP "fragmentation required but DF set" message, also known as the "too big" message. When the server receives this ICMP message, it updates its routing table for the client with the MTU reported in the message, and will remember to send smaller packets to the client. Note that once the client's MTU has been discovered, the MTU is not renegotiated on subsequent connections.
The following diagram illustrates how PMTU discovery works.
To avoid MTU conflicts, you must ensure that you deploy Layer 3 devices on MTU boundaries, and that you do not filter out ICMP messages. This will ensure that Path MTU discovery (PMTU) works as described in RFC 1191.
Eventually the connection develops a cycle of sending a 2250 byte packet that is lost, waiting a timeout period, and then sending a 1125 byte packet that is received. Overall, the connection does not die. Rather, it runs very slowly.
In the case of a black-hole hop, the result is a retransmitted packet that is half its previous size. In this example, 3 packets are lost before the TCP transmit window produces a 1125 byte packet. This packet does survive, which produces a response from the client and in turn keeps the connection alive.
TCP contains a slow-start congestion avoidance algorithm that shrinks the TCP transmit window to half its size when a packet has timed out
The server receives a request from the client, and a 9000 byte packet is generated. The Layer 2 switch accepts the packet, but drops it once it discovers that the packet is too large to send to the client. Since Layer 2 switches have no knowledge of Layer 3 content, they cannot inspect the DF bit in the IP header, nor can they generate a sufficient response to the server to explain why it dropped the packet. As far as the server is concerned, the packet was lost due to congestion.
To understand why MTU conflicts often result in slow links, consider the following client-server TCP flow. It illustrates a typical environment where a Gigabit server configured with 9000 byte MTU is incorrectly connected via a Layer 2 switch to a 10/100 client employing a 1500 byte MTU
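The halving behavior in this scenario can be simulated directly. This sketch models only the packet-size halving, not TCP's timers or window state:

```python
def blackhole_retransmits(initial_bytes, path_mtu):
    # Each oversized packet is silently dropped, times out, and is
    # retransmitted at half size until it fits the path MTU.
    size, lost = initial_bytes, 0
    while size > path_mtu:
        lost += 1
        size //= 2
    return lost, size
```

For a 9000-byte packet on a 1500-byte path, three packets are lost (9000, 4500, 2250) before a 1125-byte packet survives — matching the slow cycle described above.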
Normal server to client traffic is quick, but client backups fail.
Email works, but discussion databases take forever to load.
Web pages load quickly, but .GIF files are very slow to load, and occasionally fail.
FTP downloads work only with small files.
FTP upload of a large file takes a few minutes, but download takes hours, or even fails.
What is often misunderstood is the fact that an MTU conflict typically results in slow network connections, not broken connections.
traffic will fail in one direction but not in the other
MTU negotiation errors are very difficult to detect and manifest themselves in subtle but destructive behaviors. When an MTU conflict exists, MTU negotiation fails and packets will be lost when they are too large to traverse the network.
Understanding MTU Conflicts
Theoretically, increasing MTU should have only a minor effect on a network’s maximum performance. However, most high-speed networks do not operate at full capacity. For these networks, increasing MTU has proven to greatly increase overall throughput
Increasing frame size gives better performance.
Why Use Large MTUs?
The following diagram shows a 1500 byte TCP/IP packet passing through Ethernet. Notice that although Ethernet supports 1518-byte frames, it is designed to carry at most 1500 byte packets; therefore, the MTU of Ethernet is 1500 bytes.
Frames are generated by Layer-2 devices and encapsulate Layer-3 packets.
To understand MTU, one must be very aware of the difference between frames and packets.
Typically the internet operates with an MTU of 1500 bytes; however, other values are acceptable. Mixing different MTU values within one network path is also acceptable, provided that all components within the path share similar rules for handling conflicting MTUs. As a general rule, fragmentation should be avoided at all costs.
Network links that have properly configured MTUs are more efficient
The MTU of the medium determines the maximum size of the packets that can be transmitted without fragmentation
If the application is sending a small chunk of data, for example 500 bytes, it will usually be sent within a single packet. However, when the application must send a larger chunk of data, the data must be distributed over several packets.
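The splitting of application data across MTU-sized packets can be sketched as follows. The 40-byte figure (20 bytes of IP header plus 20 bytes of TCP header) is a typical value, not something stated in these notes.

```python
# A minimal sketch of how a payload is split across packets under an MTU.
# IP_TCP_HEADERS is an assumed typical value (20 IP + 20 TCP bytes).

IP_TCP_HEADERS = 40


def packetize(data: bytes, mtu: int = 1500) -> list:
    """Split data into chunks that each fit in one MTU-sized packet."""
    max_payload = mtu - IP_TCP_HEADERS
    return [data[i:i + max_payload] for i in range(0, len(data), max_payload)]


print(len(packetize(b"x" * 500)))   # 1: a 500-byte chunk fits in a single packet
print(len(packetize(b"x" * 9000)))  # 7: a larger chunk spans several packets
```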
When Internet Protocol is used to transfer data across a path, data is encapsulated into packets before it leaves the physical interface
What is MTU?
Delivery monitoring checks for several significant MTU error conditions for each link within a network path. Passing these checks ensures that MTU works properly along the entire path.
MTU misalignments can result in network idiosyncrasies and degradations that are extremely difficult to diagnose
One of the most powerful and yet subtle capabilities of Delivery monitoring is its ability to discover Maximum Transmission Unit (MTU) conditions and problems on a link
Maximum Transmission Unit
Maximum Transmission Unit - AppNeta Documentation | AppNeta
The Location Bandwidth Quality Report compares the performance of a WAN network path to the stated performance of the internet service package you purchased from your ISP
Delivery reports - AppNeta Documentation | AppNeta
You will see this issue on the Round-Trip Time chart, the Page Load Time chart, and on the Routes pane. To fix the issue, configure clients to use only local DNS resolvers.
If you notice that users are reporting that page loads are very fast at times and quite slow at other times, one possibility is that their clients are configured to use both local and remote DNS resolvers. This can be an issue when the web app being accessed is served by a Content Delivery Network (CDN), with target servers distributed globally. In these cases, the remote DNS resolver can resolve the IP address of a target server located close to it rather than close to the client. This results in longer page load times than if the content were served from a server located closer to the client.
Poor browsing experience
To determine how long a network path was in a violation state, you need to find the violation event and the corresponding clear event
When a Monitoring Point generates diagnostic tests on a network path, it targets each hop on the path, one at a time, to help determine whether any of the hops is performing poorly. One of the metrics that’s calculated for each hop (and displayed on the Data Details tab of the Diagnostics page) is the Total Capacity. One would expect that the total end-to-end capacity should be no greater than that of the lowest capacity hop. This is true, the lowest capacity device is the bottleneck in the path, but often you will see hops showing a lower capacity number than the end-to-end capacity. The reason for this has to do with router architecture. Traffic passing through the router is given higher priority than traffic destined for the router itself. In other words, the capacity numbers for the intermediate hops may not actually represent the true forwarding capacity of the device.
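The two observations above reduce to a simple check: the lowest-capacity hop bounds the end-to-end capacity, and any hop reporting less than the end-to-end figure is probably deprioritizing traffic addressed to itself rather than forwarding slowly. The numbers below are invented for illustration (Mbps).

```python
# Hops reporting less capacity than the end-to-end measurement likely
# understate their true forwarding capacity (router architecture effect
# described above). Hop values are hypothetical.

def suspect_hops(hop_capacities, end_to_end):
    """Indices of hops whose reported capacity is below the end-to-end figure."""
    return [i for i, cap in enumerate(hop_capacities) if cap < end_to_end]


# End-to-end capacity measured at 94 Mbps; hop 1's 60 Mbps reading is suspect.
print(suspect_hops([1000, 60, 940], 94))  # [1]
```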
Mid-path device shows lower capacity than the target
Often we see the terms network bottleneck and network congestion point used interchangeably. We believe that there is a distinction between these terms. A network bottleneck is the slowest point on a network path. Every network path has a bottleneck (for example, a low speed link). If the performance of the bottleneck is improved, the bottleneck moves to another point in the path. A congestion point, on the other hand, is the point in a network path where production traffic is backing up. Often there is congestion at the bottleneck, but not always. There may be several congestion points on a network. A congestion point is usually a transient condition, whereas a bottleneck is not.
Bottleneck vs. Congestion Point
High utilized capacity coupled with no increase in flow data is a classic sign of oversubscription. You should contact your ISP if this is the case.
The first thing you want to do is corroborate capacity measurements with round-trip time, loss, and jitter. If there are no corresponding anomalies, then whatever triggered the high utilization isn’t really impacting performance. If there are, you’ll then use Usage monitoring to check for an increase in network utilization.
Oversubscription is a technique your ISP uses to sell the full bandwidth of a link to multiple customers. It’s a common practice and is usually not problematic, but if it is impacting performance, you’ll see it first in your utilized capacity measurements on the Capacity chart.
As issues with ICMP traffic may not be present in TCP or UDP traffic, you can set up a dual-ended path to test whether the other protocols are affected in the same way.
If a path shows sustained packet loss, review its latest diagnostic results to understand where the loss is occurring: If the loss is occurring at the last hop, make sure that firewall/endpoint protection at the target allows ICMP. If the loss is occurring mid-path, make sure routing policies are not de-prioritizing ICMP, and access control lists are not blocking ICMP.
If none of the previous subsections is applicable to your situation, you can use PathTest to corroborate the low capacity measurements. Remember that this is a load test and it measures bandwidth, not capacity.
Capacity is measured by sending multiple bursts of back-to-back packets every minute (as described in TruPath). To measure total capacity, at least one burst must come back with zero packet loss. If that is not the case, then the capacity measurement is skipped for that interval. If packet loss is intermittent, the result is a choppy Capacity chart. If packet loss is sustained, the Capacity chart will show no capacity while the packet loss is present.
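The per-interval rule described above can be sketched in a few lines. The burst records are invented for illustration, and taking the best loss-free burst as the interval's sample is an assumption; the notes only say the measurement is skipped when no burst is loss-free.

```python
# A sketch of the rule above: a total-capacity sample is produced only if
# at least one burst in the interval came back with zero packet loss.
# Aggregating with max() is an assumption, not documented behavior.

def capacity_for_interval(bursts):
    """bursts: list of (measured_mbps, packets_lost) for one interval.

    Returns a loss-free measurement, or None if every burst saw loss
    (the Capacity chart shows a gap for that interval).
    """
    clean = [mbps for mbps, lost in bursts if lost == 0]
    return max(clean) if clean else None


print(capacity_for_interval([(92.0, 0), (95.5, 0), (88.1, 3)]))  # 95.5
print(capacity_for_interval([(90.2, 1), (87.4, 2)]))             # None
```

Intermittent loss yields a mix of samples and gaps (a choppy chart); sustained loss yields `None` every interval, which is the "no capacity" condition described above.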
Capacity chart shows no capacity
This can be due to sustained packet loss.
Asymmetric links, if measured using single-ended paths, will show the capacity of the slowest of the uplink and downlink directions. This can be misleading. Measuring a link using a dual-ended path will show the capacity of each direction.
Some devices make better targets than others. Choosing a good target is important in order to get good measurements.
Note: Total capacity is based on the assumption that traffic will flow in both directions. Therefore, you can expect the total capacity for half-duplex links to be roughly half of what it would be with full-duplex.
When a low capacity condition is persistent rather than transient, it is caused by a network bottleneck, not by congestion
‘Saturate’ refers to the ability to transmit packets at line rate without any gaps between them. All switches can run at line rate for the length of time that a packet is being sent, but some are unable to send the next packet without a rest in between. This determines the ‘switch capacity’. APM provides a range for Total Capacity that you can expect given the physical medium and modern equipment with good switching capacity.
Bandwidth is the transmission rate of the physical media link between your site and your ISP. The bandwidth number is what the ISP quotes you. Capacity is the end-to-end network layer measurement of a network path - from a source to a target. Link-layer headers and framing overhead reduce the rated bandwidth to a theoretical maximum capacity. This maximum is different for every network technology. Further reducing capacity is the fact that NICs, routers, and switches are sometimes unable to saturate the network path, and therefore the theoretical maximum can’t be achieved.
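A back-of-the-envelope calculation shows why capacity sits below rated bandwidth. The per-frame Ethernet overhead figures used here (8-byte preamble, 14-byte header, 4-byte FCS, 12-byte inter-frame gap) are standard Ethernet values, not numbers taken from these notes.

```python
# Why measured capacity is below the ISP's bandwidth figure: every full-MTU
# packet carries fixed link-layer overhead on the wire.
# Overhead figures are standard Ethernet values (assumed, not from the notes).

ETH_OVERHEAD = 8 + 14 + 4 + 12   # bytes per frame beyond the IP packet


def theoretical_capacity(bandwidth_mbps, mtu=1500):
    """Network-layer ceiling for full-MTU packets on an Ethernet link."""
    return bandwidth_mbps * mtu / (mtu + ETH_OVERHEAD)


print(round(theoretical_capacity(1000), 1))  # ~975.3 Mbps on a 1 Gbps link
```

Real measurements then fall below even this ceiling when NICs, routers, or switches cannot keep the path saturated.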
Capacity and bandwidth are different
Capacity lower than expected
For cases where you need to measure capacity over time, you can use rate-limited monitoring. Rate-limited monitoring is similar to PathTest in that it loads the network while testing, but instead of a single measurement, it makes measurements at regular intervals over time. Contact AppNeta Support to enable rate-limited monitoring.
Because the Continuous Path Analysis testing used by Delivery monitoring is extremely lightweight, it too might not be able to trigger the rate limiter. As a result, you’ll end up seeing the entire capacity of the link, rather than the amount that has been provisioned for you by your ISP. To confirm this, try the following:
Confirm that the speed test run by the ISP is effectively using the same source and target as your test.
Use dual-ended monitoring (testing a path between two AppNeta Monitoring Points). Dual-ended monitoring measures network capacity in both directions (source to target and target to source), similar to speed tests. Testing each direction independently allows you to account for asymmetry in the network path. For example, upload and download rates may be different and may take different routes. Single-ended monitoring can only determine the capacity in the direction with the lowest capacity.
Run PathTest. PathTest does not use lightweight packet dispersion, but rather generates bursts of packets which may trigger carrier shaping technologies.
Usually transactional data and control data is allowed through at full capacity because they are short bursts of traffic, but sustained data transfers like streaming media will trigger the rate limiter.
There are times when the network capacity numbers returned by APM do not match those from a speed test provided by your ISP. If total capacity measurements from APM are greater than what you expect, this is usually because the link to your ISP is physically capable of greater speed, but your ISP has used a traffic engineering technique called ‘rate limiting’ to limit your bandwidth to the amount specified in your Service Level Agreement.
APM and ISP capacity numbers differ
Network Path Issues - AppNeta Documentation | AppNeta
Note: On Linux hosts, you may need to run sudo docker-compose -f mp-compose.yaml ps for this to execute successfully. Alternatively, you can use docker ps to view the status of the two containers used by the Monitoring Point.
View CMP status - Docker Compose
To view CMP status:
1. Log in to the host the Monitoring Point is deployed on.
2. Navigate to the directory the Monitoring Point was deployed from (it contains the mp-compose.yaml file).
3. View the status of the CMP and verify that the two main system containers (with names ending in “sequencer_1” and “talos-001_1”) are up and running. For example:
View CMP status - AKS
To view CMP status:
1. Sign in to the Azure Cloud Shell.
2. To view deployments: kubectl get deployments
3. To view deployment details: kubectl describe deployment <deployment-name>
4. To view pods: kubectl get pods
5. To view pod details: kubectl describe pod <pod-name>
For additional kubectl commands see kubectl Commands.
Manage Monitoring Points
For a Container-based Monitoring Point (CMP), you can also check its status on the host it was deployed on.
In order to collect monitoring data or to access a Monitoring Point from APM, it must first be connected to APM
Monitoring Point status - AppNeta Documentation | AppNeta
Step 6: Analyze monitoring results
SNMP notifications - Use this method if you are integrating with an SNMP system. Set up using the Manage SNMP page.
Event integration - Use this method if you already have an event monitoring system in place. Integrate directly with that system via POSTs that contain JSON event payloads.
Email notification -
Step 5: Set up alert notifications
Delivery monitoring procedure
1. Create a Path Template Group to monitor the health of the underlay network.
2. Use the Data Center Monitoring Point IP address as the target.
3. Specify “Dual Ended” paths.
4. Add source interfaces from each remote office Monitoring Point (typically the “Auto” interface) to create paths.
Important: SD-WAN must be configured to route traffic destined for the Data Center Monitoring Point over specific underlay paths based on traffic identifiers such as port, IP address, or QoS markings, and not via the overlay.
Are my service providers living up to their SLAs? How do I determine where network problems are originating?
When you set up Experience monitoring, single-ended network paths are automatically created for Delivery monitoring. These allow you to confirm that traffic is being routed as you expect (for example, over MPLS or over the internet).
Step 4: Monitor network health
Create a Web App Group for each web app you want to monitor. Use the web app URL as the test target. Add a Selenium workflow that accesses the web app. At a minimum, it should login to the web app. Include at least one interface on each Monitoring Point as a test source.
Important: Your SD-WAN must be configured to route Experience traffic (TCP port 443) and its associated Delivery traffic (ICMP and UDP) out the same interface. This allows you to see the path the Experience traffic takes using Delivery monitoring.
Experience monitoring prerequisites Monitoring Points deployed with interface(s) on the desired end-user subnets.
Monitor key applications to identify any issues affecting end user experience. Use associated Delivery paths to see the overlay path taken for specific applications. Compare user experience of app performance before and after SD-WAN transition.
Step 3: Emulate web app users By emulating a user, Experience monitoring helps you answer the question: What sort of app performance are my users experiencing?
Usage monitoring prerequisites Monitoring Points deployed with capture interface(s) connected to switch ports that SPAN/mirror all WAN traffic.
This is particularly helpful prior to SD-WAN implementation to determine how best to use SD-WAN to route different traffic types and how best to size the WAN links it uses. For example, voice/video traffic over MPLS and all other traffic over the internet.
Step 2: Understand WAN traffic Usage monitoring is used to monitor WAN traffic to and from a site. It helps you to answer the questions: What apps are users using? How much bandwidth is devoted to each app? Which users are consuming the most bandwidth?
Connect the Monitoring Point’s Usage monitoring port to a SPAN/mirror of the WAN traffic (prior to any NAT or encapsulation) at each location. The egress interface of the core switch is typically the best place for this
Connect each Monitoring Point to the same network subnet/segment as users in those locations in order to monitor from a user perspective
Deploy Enterprise Monitoring Points (EMPs) in remote offices and in your Data Center(s) in order to monitor all sites.
Step 1: Deploy Monitoring Points
In this example, end user experience over the SD-WAN is measured using web paths (Experience monitoring) to a SaaS application (P1) and an Enterprise Web Application (P2). The network health through the SD-WAN is measured using (auto-created) single-ended network paths (Delivery monitoring) to these same targets. The health of the underlay network is measured through a dual-ended network path (P3) (Delivery monitoring) to the Data Center Monitoring Point via the underlying MPLS WAN.
Recommended approach AppNeta recommends deploying Monitoring Points to remote office and Data Center locations and configuring them to provide visibility into remote site application usage, to emulate users accessing priority apps through the SD-WAN, and to monitor the health of both the overlay network (through the SD-WAN) and the underlay network (the network used by the SD-WAN).
If you are considering a transition to SD-WAN or have already implemented an SD-WAN, AppNeta Performance Manager (APM) can answer questions that will help you to successfully manage your network. For example: What apps are users actually using? (Usage) How much bandwidth is being used by each app? (Usage) Which users are consuming the most bandwidth? (Usage) What sort of app performance are my users experiencing? (Experience) What path is user-to-app traffic taking? (Delivery) Are my service providers living up to their SLAs? (Delivery) How do I determine where network problems are originating? (Delivery)
Monitoring an SD-WAN installation - AppNeta Documentation | AppNeta
As you zoom out, sources and targets that are geographically close to one another are aggregated into a cluster.
The Current Network Violation Map dashboard provides a geographical view of your current network status. It displays all path sources (Monitoring Points), path targets (URLs, server/workstation IP, or Monitoring Point), and the WAN paths between them on a world map.
Web App Performance—The Web App Performance dashboard provides the status of selected web app groups over time (up to 30 days).
Current Network Violation Map—The Current Network Violation Map dashboard provides a geographical view of your current network status. It also allows you to filter paths directly rather than indirectly via Network Path Groups or Saved Lists.
Overview—The Overview dashboard provides a high-level system summary that includes the number of web paths and network paths, Monitoring Point status, service level compliance, violations, and network path status in the form of a GeoMap and a TreeMap.
Dashboards provide an “at a glance” way to view the status and health of your network and applications.
Dashboards - AppNeta Documentation | AppNeta
Upgrading EMP software
AppNeta recommends keeping your Enterprise Monitoring Point (EMP) software up to date to take advantage of the latest features and bug fixes.
Managing software on an EMP - AppNeta Documentation | AppNeta
The software upgrade schedule is listed on the AppNeta service status page: http://status.appneta.com/.
can result in a gap of up to 15 minutes of monitoring history
You can configure APM to have a Monitoring Point upgraded automatically or you can upgrade it manually at any time
You will see the warning symbol appear at various places in APM (including the Manage Monitoring Points page) when a Monitoring Point is no longer running the latest software version
The effects of these procedures are summarized in the following table:
Procedure | Software change? | Network config change? | APM access config change?
Upgrade software | Y (latest software) | N | N
Reflash (local image) | Y (local image) | N | N
Reflash (USB image) | Y (USB image) | N | N
Decommission | N | N | Y (factory; no APM access)
Reset to factory defaults (local image) | Y (local image) | Y (factory) | Y (factory; no APM access)
Reset to factory defaults (USB image) | Y (USB image) | Y (factory) | Y (factory; no APM access)
Reset a Monitoring Point to its factory default configuration with the locally stored system image or with a downloaded system image.
Decommission a Monitoring Point so that it can no longer access APM without affecting its software or network configuration
Reflash a Monitoring Point with the locally stored system image or with a downloaded system image without affecting its configuration.
Upgrade a Monitoring Point to the latest software version without affecting its configuration
There are a number of procedures that you can use to affect the software version, the network configuration, and the APM access configuration on a Monitoring Point:
Managing Software on an EMP
The procedures for managing Monitoring Point software depend on the type of Monitoring Point:
Managing software on physical and virtual Monitoring Points: Upgrading EMP software; Reflashing an EMP (local or USB image); Decommissioning an EMP; Resetting an EMP to factory defaults (local or USB image).
Managing software on a CMP deployed using AKS: Upgrade CMP software; Roll back CMP software; Uninstall CMP software; Remove all resources associated with a CMP.
Managing software on a CMP deployed using Docker Compose: Recreate a CMP; Upgrade CMP software; Uninstall CMP software.
Managing software on a Windows NMP: Upgrade NMP software; Uninstall NMP software.
Managing software on a macOS NMP: Upgrade NMP software; Uninstall NMP software; Quit the menu bar app; Restart the menu bar app.
Managing software on an EMP - AppNeta Documentation | AppNeta
To migrate monitoring from one Monitoring Point to another:
1. Navigate to > Manage Monitoring Points.
2. For the Monitoring Point you want to migrate from, select > Migrate Monitoring.
3. In the Use as field, select Source.
4. In the dropdown under the Replacement column, select the Monitoring Point you want to migrate to. The remaining fields in the Replacement column are filled in to show the default mapping between the source and replacement Monitoring Points. Any potential migration problems are identified.
5. In the Licensing section, add or move licenses to the replacement as required. Select Assign New Licenses to use new licenses. Select Move From Source to move licenses from the source Monitoring Point. Select Advanced to override the automatic licensing.
6. In the Interface Mapping section, make changes to the automatic mapping as required. The Launch Monitoring Point Web Admin link is available if you need to modify interfaces on the replacement.
7. Click Move Now to migrate monitoring from the source to the replacement. The migration proceeds and the source Monitoring Point is deleted from APM.
Restrictions include:
Roles - Only users with Organization Admin or Advanced roles can migrate monitoring.
Organizations - You can only migrate monitoring between Monitoring Points in the same organization. A child organization is considered different than its siblings and its parent.
Licensing - You cannot migrate from APM licensing to legacy licensing.
Hardware - You cannot migrate from current hardware to legacy hardware (m20, m22, m25, m30, r40, r45, r400).
10Gbps Usage interface - You cannot migrate from a 10Gbps Usage interface to a 1Gbps interface.
NMP - You cannot migrate from a Monitoring Point that has web paths to a Native Monitoring Point (NMP) as NMPs do not support Experience monitoring.
CMP - Container-based Monitoring Points do not support monitoring migration.
The following are included in the migration:
Delivery - All network paths are migrated from the source to mapped interfaces on the replacement.
Experience - All web paths are migrated from the source to mapped interfaces on the replacement.
Usage - Usage history is reattached to mapped interfaces on the replacement.
Usage Packet Captures - Packet Captures are preserved and associated with mapped Usage interfaces on the replacement.
Usage Packet Capture Schedules - Packet Capture schedules are preserved and continue normally on the replacement (given that a passphrase is configured on the replacement).
Also migrated: Network Devices, Voice/Video Tests, Voice/Video Schedules, Shared Appliances, SNMP Trap Sending, Saved-List Membership, and Base and Add-on licenses (where applicable).
As part of the migration process, the source Monitoring Point is deleted from APM
or when aggregating monitoring from two or more Monitoring Points to a single higher capacity replacement
typically used when moving to a newer Monitoring Point
APM provides the ability to migrate monitoring from one Enterprise Monitoring Point (EMP) to another while preserving monitoring history and (where applicable) licensing
Migrating monitoring between EMPs - AppNeta Documentation | AppNeta
Sharing a Monitoring Point has the following effects and restrictions: Users in the selected organizations will have first-come, first-served access to the Monitoring Point’s licenses. Sharing does not grant access to the Monitoring Point via the Manage Monitoring Points and Manage Licenses pages. In child organizations, users are not permitted to make a shared Monitoring Point an SNMP notification sender. Un-sharing a Monitoring Point deletes all paths and corresponding monitoring data in the affected child organizations. Usage monitoring is not supported on a shared Monitoring Point
Can be enabled by AppNeta Support on a per-organization basis
There may be cases where you require child organizations to share a Monitoring Point
Sharing a Monitoring Point - AppNeta Documentation | AppNeta
To move a Monitoring Point between organizations:
1. Download the nis.config file for the new organization.
2. Update the Monitoring Point with the new nis.config settings.
3. Finalize the move to the new organization.
4. Clean up the Monitoring Point from the old organization.
You can move an Enterprise Monitoring Point (EMP) from one organization to another but any paths and monitoring history associated with the Monitoring Point are not moved. These can, however, be moved to another Monitoring Point in the old organization. You need to be an Organization Admin or Advanced user in the old and new organizations in order to move a Monitoring Point between them.
Moving an EMP between Organizations - AppNeta Documentation | AppNeta
Find an organization ID
The organization ID is a number that uniquely identifies an organization. It is required for some API calls. You can determine the organization ID for a given organization using the GET /v3/organization API call. Alternatively, you can see it in the APM user interface. To find the organization ID for a given organization:
1. In APM, hover over your user icon at the top right of the page.
2. Select Organization Summary from the dropdown.
3. In the Organizations pane, hover over the organization you are interested in. The organization ID appears in a tool tip and at the end of a URL on the bottom left of the page.
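Programmatically, the ID can be pulled from the GET /v3/organization response. The endpoint name comes from the notes above, but the response shape used here (a list of objects with "id" and "displayName" fields) is an assumption for illustration.

```python
# A sketch of extracting an organization ID from a GET /v3/organization
# response. The response shape and field names are assumed, not documented
# in these notes.

def find_org_id(orgs, name):
    """Return the id of the organization whose displayName matches, or None."""
    for org in orgs:
        if org.get("displayName") == name:
            return org.get("id")
    return None


# A hypothetical parsed JSON response:
sample = [{"id": 1201, "displayName": "Acme HQ"},
          {"id": 1305, "displayName": "Acme EMEA"}]
print(find_org_id(sample, "Acme EMEA"))  # 1305
```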
An organization is a container for users, licenses, and Monitoring Points
Organizations - AppNeta Documentation | AppNeta
SSO behavior
Upon enabling SSO:
Users in a mapped security group may log in to APM via your custom URL (https://<keyword>.pm.appneta.com/ ).
Once users log in to the custom URL for the first time, access via the regular APM login is automatically disabled.
Upon disabling SSO:
Single sign-on is disabled for the organizations associated with the identity provider.
Users in mapped security groups will have their federated profiles converted to local profiles, which must then be managed via the Manage Users page.
Affected users must revert to logging in via the regular APM login.
Affected users must reset their passwords before they can log in again.
Notifications will continue to be delivered to affected users.
Step 4: Configure SSO access control on APM
1. Within APM, log in as a user with Organization Admin credentials.
2. Navigate to > Manage Identity Provider.
3. For the identity provider you want to edit, select > Edit.
4. In the Organization field, select the organizations you want SSO users at the selected identity provider to have access to.
5. In the Role Mapping field, map security groups in your IdP to user roles within APM. All users that you want to log in via SSO must belong to a group that is mapped to an APM role. All mapped groups will have access to all organizations specified.
6. If you are using APM-Private and an external SP, port 443 on your firewall must be open between the APM-Private server and the PingOne SP.
7. Notify AppNeta Support that the configuration is complete. AppNeta Support will enable single sign-on. Users can then access APM via https://<keyword>.pm.appneta.com (where “<keyword>” is the keyword you provided).
Step 3: Map IdP SAML attributes
Within your IdP, map the correct attributes from the corporate directory to properties in SAML assertions that APM expects. For example, for an Active Directory IdP this looks as follows:
Active Directory attribute | SAML property
mail | NameID * (also set nameid-format to emailAddress)
mail | email *
givenName | firstName
sn | lastName *
member | groups *
title | title
telephoneNumber (or mobile) | phone
extensionAttribute1#OrgNames | orgNames
Step 2: Install APM SAML metadata on your IdP Within your IdP, register APM as a service provider using the APM SAML metadata (entity descriptor) file provided by AppNeta Support.
Step 1: Send IdP SAML metadata to AppNeta
1. Within your IdP, generate an IdP SAML metadata (entity descriptor) file.
2. Contact AppNeta Support and ask them to add the IdP to your organization in APM. Provide them the following:
The IdP SAML metadata (entity descriptor) file you generated.
The APM organizations you control that you want to use single sign-on.
If your deployment model is APM-Public with PingOne Identity Bridge: a keyword to use for your new custom URL. It will take the form https://<keyword>.pm.appneta.com.
If your deployment model is APM-Private with PingOne Identity Bridge: a keyword to use for your new custom URL. To determine the keyword, navigate to > Manage Identity Provider and, in the Login URL column, find the keyword in the form https://<FQDN>:443/pvc/?sitename= <keyword>
If your deployment model is APM-Private with PingFederate Server: nothing else is required. AppNeta Support will provide you with an APM SAML metadata (entity descriptor) file.
Note: Coordination with AppNeta Support is required to set up SSO.
PingFederate SAML requests - TCP Port 9031
PingOne SAML requests - TCP Port 443 (default - can be customized)
If the OrgNames attribute is not set, the user may access any child organization on authentication.
This single-valued attribute contains a comma-separated list of child organizations to which the user may be granted access once successfully authenticated
For customers with multiple child organizations, users’ access to child organizations can be controlled through the use of a custom attribute: OrgNames
Most customers use the member: attribute for auto role assignment; note that it contains all of the user’s group memberships.
After successful SAML authentication, users can be automatically assigned APM roles (i.e., authorized) based on their group memberships. To do this, APM makes a REST API call to PingOne over SSL using the access token to query for the group membership. Based on this and the group-to-role mapping, a role is auto-assigned.
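The group-to-role mapping step can be illustrated with a small sketch. The mapping table, role names aside from those mentioned elsewhere in these notes, and the most-privileged-wins resolution rule are all invented for the example; APM's actual resolution rules are not documented here.

```python
# An illustration of group -> role auto-assignment. The mapping and the
# precedence rule (most privileged role wins) are assumptions for this sketch.

GROUP_TO_ROLE = {
    "net-admins": "Organization Admin",
    "net-ops": "Advanced",
    "help-desk": "Standard",
}

ROLE_PRECEDENCE = ["Organization Admin", "Advanced", "Standard"]


def assign_role(user_groups):
    """Pick the most privileged role granted by any of the user's groups.

    Extra, unmapped groups (e.g. "all-staff") are simply ignored, which is
    why the member: attribute carrying all memberships is workable.
    """
    roles = {GROUP_TO_ROLE[g] for g in user_groups if g in GROUP_TO_ROLE}
    for role in ROLE_PRECEDENCE:
        if role in roles:
            return role
    return None


print(assign_role(["help-desk", "net-ops", "all-staff"]))  # Advanced
```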
The only information exchanged between the IdP and PingOne is an encrypted SAML token which contains:
email - normally an email address.
groups - normally the group memberships of the user; used for authorization.
Optionally, first and last name for auto-user creation.
User credentials are never exposed beyond your corporate IdP. APM never has access to them.
The entire authentication transaction is accomplished using browser redirects and SAML exchanges.
Ping’s proven SAML implementation ensures a completely secure authentication and single sign-on experience
APM-Private with a PingFederate SP is used to allow the entire application and identity framework to operate within your corporate network
APM-Private is used in order to keep all measurement data within your corporate network
APM-Public is used for ease of deployment
The deployment chosen depends on your corporate security policy
APM-Private with PingFederate Server - Private version of APM with internal SAML federation server as SP. For example, the following diagram shows SAML SSO with APM-Private, a PingOne PingFederate Server (SP), and on-premise IdP.
APM-Private with PingOne Identity Bridge - Private version of APM with an external SAML Identity Bridge. For example, the following diagram shows SAML SSO with APM-Private, a PingOne SAML Identity Bridge (SP), and on-premise IdP.
Several deployment types are available, depending on your security needs. For example: APM-Public with PingOne Identity Bridge - Public version of APM with an external SAML Identity Bridge. For example, the following diagram shows SAML SSO with APM-Public, a PingOne SAML Identity Bridge (SP), and on-premise IdP.
must be SAML 2.0-compliant
APM communicates with a Service Provider (SP) which, in turn, communicates with an Identity Provider (IdP) that you control. In all deployments, the SAML SP function is provided by Ping
APM supports SAML-based SSO
Advantages include: no new credentials for users to remember; no login required when accessing a deep link; centralized management of credentials (e.g., password policy, lockouts, audit); no storage of, or access to, credentials by APM; auto-provisioning of users into APM; and automated role assignments based on group membership.
The web browser single sign-on (SSO) feature allows your users to access AppNeta Performance Manager (APM) using their existing corporate credentials
Single sign-on (SSO) - AppNeta Documentation | AppNeta
Create a Path Template Group and path templates To simplify network path creation from each user’s NMP to the selected targets (for example, a web app, central site, and potentially your VPN Gateway’s public IP), create a Path Template Group with a separate path template for each target.
We recommend creating an alert profile called “WFH Users” containing the following conditions: Data Loss - violates when data loss is above 2% for 2 minutes and clears when it is below 2% for 2 minutes. Voice Loss - violates when voice loss is above 2% for 2 minutes and clears when it is below 2% for 2 minutes. MOS - violates when MOS is below 3.7 for 2 minutes and clears when it is above 3.7 for 2 minutes.
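The loss and MOS conditions above can be sketched as a simple check (a simplification that ignores the 2-minute sustain/clear window; thresholds are those of the "WFH Users" profile):

```shell
# wfh_status <data_loss_pct> <mos> -> VIOLATE or OK
# Violates when loss is above 2% or MOS is below 3.7 (the sustained
# 2-minute condition from the profile is omitted for brevity).
wfh_status() {
  local loss="$1" mos="$2"
  if awk -v l="$loss" -v m="$mos" 'BEGIN { exit !(l > 2 || m < 3.7) }'; then
    echo VIOLATE
  else
    echo OK
  fi
}
```

For example, `wfh_status 1 3.5` violates on MOS even though loss is acceptable.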
Create an alert profile In order to trigger an alert when a user is experiencing network performance issues, you need to create an alert profile that specifies the limits of acceptable network performance
Create a time range for alerting In order to alert on network issues only when users are active, we recommend creating an alerting time range called “Business Hours” that spans your typical business hours
In order to monitor network performance, you need to create network paths from user computers to the targets identified in the diagrams above
There are two ways to install the NMP software: Manual install - users install the software themselves. Unattended install - the software is installed on user computers remotely.
A given computer can only have one NMP instance installed and it can only be connected to one organization.
If you have multiple APM organizations, you will need one downloadable package per relevant operating system for each organization you want the NMPs to connect to
Create NMP deployment packages You must also deploy an NMP on the computer of each work-from-home user you want to monitor. To prepare for this you'll need to create a separate downloadable package for each client operating system (supported operating systems include Windows and macOS). The appropriate package can then be downloaded and installed on the user's computer.
Deploy an AppNeta Enterprise Monitoring Point (EMP) of sufficient capacity (typically an r90 or an r1000) to your central site (data center, hub, corporate head office) as a VPN performance monitoring target
P3 - (Applies to Split tunnel only) Monitoring to the VPN Gateway at the central site via single-ended path measures the performance of the infrastructure the VPN operates on and shows the route the VPN traffic takes to the central site.
P2 - Monitoring to a central site through the VPN via dual-ended path measures the VPN performance. Applies to both use cases.
P1 - Monitoring to a web app via single-ended path measures the user’s network performance to that app. Split tunnel - The measurement is of infrastructure strictly outside the VPN. Full tunnel - The measurement is through the VPN tunnel and then, once at the central site, outside of it to the web app.
In the Full tunnel scenario, all traffic passes through the VPN to the central corporate site. Traffic to external applications/services is routed from there.
In the Split tunnel scenario, only corporate network traffic passes from the user through the VPN to the central corporate site. All other traffic is routed outside the VPN. The advantage of this scenario, from a monitoring perspective, is that we can review the performance of the non-VPN paths (P1 and P3) using tools available in APM to isolate issues with the user’s ISP infrastructure
Prerequisites An organization set up in APM. A Monitoring Point for the central site (r90 or r1000 are typical). A Workstation (c10/n10) license for each user. Organization Admin or Advanced user role privileges on APM for setup. Administrative privileges on the work-from-home user’s computer. In the Split tunnel scenario, if traffic passes through a firewall at the corporate site, it must allow ICMP traffic to the VPN gateway.
it’s best practice to test a small pilot group first to make sure everything is working as expected prior to rolling out to all users
In this article you’ll learn how to monitor the network performance of work-from-home VPN users by: determining where to deploy AppNeta Monitoring Points and deploying them. setting up AppNeta Performance Manager (APM) to monitor a user’s network performance. setting up alerting, notifications, and reporting to help locate and resolve network issues proactively.
Set up and monitor a work-from-home VPN user - AppNeta Documentation | AppNeta
Monitor network performance between remote offices and WFH sites and the Teams infrastructure.
The egress interface of the core switch is often the best place for this
Remote office Monitoring Points require a SPAN/mirror of the WAN traffic of the entire office prior to any NAT or encapsulation for optimal Usage monitoring.
Monitoring Point deployment Deploy Enterprise Monitoring Points (EMPs) in remote offices and in Data Centers. Deploy Native Monitoring Points (NMPs) on work-from-home (WFH) user computers. Remote office Monitoring Points should belong to the same network subnet/segment as the end-users in those offices for optimal Delivery and Experience visibility.
This article covers the recommended approach to monitoring Microsoft Teams with AppNeta Performance Manager (APM). It assumes that Teams traffic does not pass through the corporate network but exits to the internet as close to the user as possible
Monitoring Microsoft Teams - AppNeta Documentation | AppNeta
To specify local subnets: Navigate to Usage > Monitoring Points. For the Monitoring Point interface you want to configure, click the icon. In the Configure dropdown, select Traffic Direction. In the IP Address and Netmask fields, enter a local subnet (e.g., 172.16.123.0) and associated subnet mask (e.g., 255.255.255.0). Click to add any additional local subnets. Click Apply before selecting any other configuration option (e.g., Alert Profiles).
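As a rough sketch of the inbound/outbound distinction these subnets enable, the following assumed logic tests whether an address falls inside a configured local subnet (addresses are the placeholder examples from the steps above; APM performs this classification internally):

```shell
# ip_to_int converts a dotted quad to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_subnet <ip> <network> <netmask> -> yes or no
# An address matching a local subnet marks traffic sourced from it as
# outbound; traffic destined to it is inbound.
in_subnet() {
  local ip net mask
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2"); mask=$(ip_to_int "$3")
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then
    echo yes
  else
    echo no
  fi
}
```

For example, `in_subnet 172.16.123.45 172.16.123.0 255.255.255.0` reports the address as local.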
hostnames are resolved every time a Usage monitoring page is loaded
also enables you to resolve hostnames on local subnets.
In order for APM to be able to distinguish between inbound and outbound traffic, you need to identify the subnets local to the Monitoring Point
The Monitoring Point location must be set
A Monitoring Point must be set up and cabled for Usage monitoring. In particular, the Monitoring Point’s Primary Usage monitoring port must be physically connected to the switch port being monitored.
identify local subnets prior to monitoring
need a Monitoring Point deployed and connected to the switch port you want to monitor
Set up Usage monitoring - AppNeta Documentation | AppNeta
The profile must be applied to a Monitoring Point for it to take effect.
Navigate to > Manage Alert Profiles > Flow Analysis. Click New. The New Flow Alert Profile page appears.
Create a Usage alert profile
traffic volume, application classification, and QoS type
alert you if application usage conditions are outside of acceptable limits
Usage alert profile (or flow analysis alert profile) is a set of conditions that define the limits of acceptable application usage for a monitored link
Usage alerts - AppNeta Documentation | AppNeta
APM provides a way to add to the library
There will be cases, however, where the library does not include an application you want to monitor (for example, your company's custom internal applications).
To use the custom application definition, it must be applied to a Monitoring Point.
To create a custom application definition: Navigate to > Manage Application Identification. In the Custom Applications section, click + Define Application.
Custom Applications - AppNeta Documentation | AppNeta
If this is the issue, you’ll also notice that the Usage traffic seen is primarily broadcast traffic like DHCP or Netbios.
Confirm that the switch port the Monitoring Point is connected to is a mirror port that is mirroring the port you want to monitor, not a regular access port.
When connecting to a switch in an “out-of-band” setup, low packet counts will be seen if you connect to a regular access port on a switch rather than to a mirror port (also known as a “SPAN” port).
Very low packet counts
Check for encapsulations by reviewing the configuration of network devices that may encapsulate traffic and/or reviewing packet captures of the traffic in question
traffic encapsulated using CAPWAP, Q-in-Q, or Cisco MetaData (Ethertype 0x8909) headers is not currently counted.
Usage is unable to see inside some encapsulated traffic to capture packet counts, so it is likely that some of the traffic is encapsulated and is not captured in the counts
Lower than expected packet counts
troubleshoot the connectivity problem.
Monitoring Point connection to APM is down.
’-’ in Traffic Rate column
Check your Usage monitoring cabling and, if you are using port mirroring, the port mirroring configuration on your switch.
Monitoring Point has established a connection with APM but has not seen traffic on the Usage port.
‘0 kbps’ in Traffic Rate column
Usage troubleshooting - AppNeta Documentation | AppNeta
To start a packet capture, see Start a packet capture. To schedule a packet capture, see Schedule a packet capture.
A passphrase is used to secure your packet captures
A Monitoring Point must be set up and cabled for Usage monitoring. In particular, the Monitoring Point’s Primary Usage monitoring port must be physically connected to the switch port being monitored. The Monitoring Point location must be set.
In order to capture the traffic passing through your network, you need a Monitoring Point deployed and connected to the switch port you want to monitor. Once this is done, you need to set a passphrase for capture files you’ll generate to keep them secure.
Set up Packet capture - AppNeta Documentation | AppNeta
Associate a packet capture with a network path When creating or editing a packet capture configuration, you can associate it with one or more network paths using the Related Network Paths feature. This enables you to easily see all packet captures related to a given network path and to have the packet captures appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart).
Alert and warning statistic filters Packet Capture uses the following Wireshark display filters to provide alert and warning statistics: ICMP errors or warnings - icmp.type eq 3 or icmp.type eq 4 or icmp.type eq 5. DNS errors - dns.flags.rcode > 0. Bad TCP - tcp.analysis.flags. BitTorrent - bittorrent. SMTP errors - smtp.response.code >= 400 and smtp.response.code < 600. FTP errors - ftp.response.code >= 400 and ftp.response.code < 600. HTTP server errors - http.response.code >= 500 and http.response.code < 600. HTTP client errors - http.response.code >= 400 and http.response.code < 500. SIP errors - sip.Status-Code >= 400. OSPF State Change - ospf.msg != 1. Spanning Tree topology change - stp.type == 0x80.
Timestamps are taken before the packets are split into hardware receive queues and thus respect the absolute order of the packets, which means that sorting a capture file (.pcap) by time will produce a better picture of packet ordering than sorting by packet index.
On physical Monitoring Points, sorting by timestamp will produce the correct order
Between flows (different Layer 3 source/dest addresses), packets may be reordered. Two flows may not be processed by the same receive queue, which results in nondeterministic ordering when they’re inserted into the final capture file (.pcap).
Within a given flow (same Layer 3 source and destination IP addresses), packets will not be reordered. Every packet in a flow will be processed by the same hardware receive queue and thus fed into the capture file (.pcap) in order.
Note the following regarding packet order:
Related Network Paths - lists the network paths associated with the capture. Click a path to display all of the captures related to that path.
Conversations - displays the network conversations (traffic between two specific endpoints for a protocol layer) with the highest total number of bytes.
Protocol Breakdown - displays the number of packets, and the number of bytes in those packets, for each protocol in the capture.
Alerts and Warnings - displays the number of packets in the capture that match a predefined set of display filters that identify notable network behavior that you may be interested in.
To view packet capture results: Navigate to Usage > Packet Capture.
In the Related Network Paths field, specify network paths associated with the capture to have the path name appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart), and to filter completed packet captures by related network paths.
In the Capture Stop Condition(s) field, specify when to stop the capture.
In the Capture Filter field, use a filter to specify which packets are captured. The filter uses libpcap syntax. For examples, click the icon. Filtering only the traffic you care about will reduce the capture size. This provides a longer captured duration, and it ensures that the capture analysis is relevant to the problem you are trying to solve. Leave the field blank to capture all packets.
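A few generic examples of libpcap capture-filter syntax (standard libpcap expressions; the addresses and ports shown are placeholders, not AppNeta-specific):

```
host 10.1.2.3             traffic to or from a single address
tcp port 443              HTTPS traffic only
udp portrange 3478-3481   a typical STUN/TURN range for real-time media
net 172.16.123.0/24       traffic to or from a subnet
not arp and not icmp6     exclude protocols you don't care about
```

Filters can be combined with and/or/not, for example `host 10.1.2.3 and tcp port 443`.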
In the Packet Limit field, specify the maximum number of bytes to store of each captured packet. Default: 96 bytes. Range: 68 - 65,535 Deselect this option to capture entire packets.
To start a new packet capture: Navigate to Usage > Packet Capture. Click + Start New Capture.
Capture files are capped at 1GB. In addition, regardless of any stop conditions specified, capturing ends when the space remaining on the Monitoring Point is too low: For full-packet captures (where maximum of 1500 bytes per packet are captured), capturing ends when less than 10MB remains on the device. For partial-packet captures (where less than 1500 bytes per packet are captured), capturing ends when less than 1MB remains on the device.
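The free-space stop conditions above amount to the following assumed logic (a sketch, not AppNeta code):

```shell
# must_stop <free_mb> <snaplen_bytes> -> stop or continue
# Full-packet captures (snaplen of 1500 bytes) stop when less than
# 10MB remains; partial-packet captures stop below 1MB.
must_stop() {
  local free_mb="$1" snaplen="$2" limit
  if [ "$snaplen" -ge 1500 ]; then
    limit=10
  else
    limit=1
  fi
  if [ "$free_mb" -lt "$limit" ]; then
    echo stop
  else
    echo continue
  fi
}
```

For example, with 5MB free a full-packet capture ends while a 96-byte partial capture continues.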
Prior to starting a packet capture you must set up for packet capture
Managing Packet Captures - AppNeta Documentation | AppNeta
Instead of starting a packet capture manually, you can schedule captures to start and stop automatically once or on a schedule.
Managing packet capture schedules - AppNeta Documentation | AppNeta
To load the AppNeta MIB onto your NMSs: Navigate to > Manage SNMP. If you do not see the Manage SNMP option, you do not have the necessary (Organization Admin) privileges. Click the download the latest MIB link to download the AppNeta MIB to your computer. Copy the MIB to each NMS that will receive AppNeta notifications.
Configure a Primary sender and optionally a Secondary sender. Note that the NMS Hosts field can contain multiple NMS IP addresses or hostnames. Click Test SNMP Traps to test your configuration.
Configure SNMP notification forwarding Organization Admin privileges are required to configure this feature. To configure SNMP notification forwarding: Navigate to > Manage SNMP. If you do not see the Manage SNMP option, you do not have the necessary (Organization Admin) privileges.
Edit default email addresses To edit the list of default email addresses: Navigate to > Update Notification Options. Click the link next to the Default Email Addresses field.
To enable or disable Service Bulletins: Navigate to > Update Notification Options. To enable Service Bulletins, select Yes in the Enable Service Bulletins field. To disable Service Bulletins, select No in the Enable Service Bulletins field.
Enable for changes in Monitoring Point availability to be reported in the notification emails. If you specify a time range, note that it is based on the Monitoring Point’s time zone.
Monitoring Point Availability
Flow analysis events occur when Usage alerts are triggered.
Flow Analysis Event
Enable for network route change events to be included in the notification emails. Network route change events occur when there is a change to the sequence of Autonomous System (AS) numbers from the path source to its target (indicating a change in provider network)
Network Route Change Events
Web path events occur when Experience alerts are triggered
Enable for web path events to be included in the notification emails
Web Path Event
Network path events occur when Delivery alerts are triggered
Network Path Event
all events occurring within the Digest Period are added to a Digest Summary and sent as a single email
To create a notification profile: Navigate to > Update Notification Options. Click + Add Profile. In the Name field, enter the name of the profile. In the Organization dropdown, select the organization associated with the profile. Click Submit. The profile is created. You still need to edit it in order to activate it.
Notifications - AppNeta Documentation | AppNeta
Go to APM-Private Cloud Configuration to configure APM-Private Cloud services.
Log in using the Email address specified in the AppNeta - Private Cloud Setup form as the username. Use “Superp@th” as the password. You should change the password once logged in.
Access the PCS from a web browser using the Virtual Machine Hostname and Domain specified in the AppNeta - Private Cloud Setup form. For example: https://my_vm_hostname.my_domain_name.com
Once you receive your PCS, install it as follows: (Optional) Place the PCS in a server rack. Connect the PCS to power. The PCS has redundant power supplies. Connect both to surge-protected power - ideally into separate circuits. Connect the PCS to your network using Port 1. Port 1 is pre-configured as requested (normally for DHCP).
Prior to shipment of your PCS, contact AppNeta Support to create a case for your installation. You will receive an AppNeta - Private Cloud Setup form where you will specify configuration information about your system. The AppNeta Support team will apply this configuration to your PCS before it ships.
The AppNeta Private Cloud Server (PCS) is an AppNeta-supplied device that runs APM-Private Cloud software
Private Cloud Server Setup
Private Cloud Server Setup - AppNeta Documentation | AppNeta
Configuring settings using the API
Configuring TACACS+ authentication
Configuring an email server
Log into your APM-Private Cloud system: From a browser, log into your APM-Private Cloud. Use the IP address or login URL provided at the end of the initial setup procedure.
Configuring firewall rules The table below shows the ports and protocols that must be permitted through your firewall for access to the APM-Private Cloud server.
Configure branding to have your company’s branding appear on APM-Private Cloud
Configure TACACS+ authentication (optional) to authenticate users using a TACACS+ authentication server.
Configure an email server in order to receive email notifications.
Configure firewall rules to allow inbound connections to, and outbound connections from, the APM-Private Cloud server.
APM-Private Cloud Configuration
APM-Private Cloud Configuration - AppNeta Documentation | AppNeta
From a browser, log into your APM-Private Cloud. Use the IP address or login URL provided at the end of the initial setup procedure. For example: https://192.168.1.100 or https://my-vpca.mydomain.org/pvc/login.html Use the email address and password you provided in the setup procedure as login credentials. Provide AppNeta Support access to the system to install licenses. See APM-Private Cloud Maintenance. Contact AppNeta Support to install your licenses. Go to APM-Private Cloud Configuration to configure APM-Private Cloud services.
Complete the initial setup.
In virt-manager, access the APM-Private Cloud virtual machine console.
Use virt-manager (or a similar application) to view the virtual machines running on the KVM host
Confirm that the virtual machine is persistent and will start automatically when the KVM host is restarted.
Start the virtual machine.
Set the virtual machine to start automatically when the KVM host is restarted.
Define a virtual machine based on the KVM domain definition file
Create a KVM domain definition file. See Creating a KVM domain definition file.
Copy the image to your KVM host machine. Copy it to a directory where you’d like the KVM disk image files to reside. For example: /data/kvm_images/myvpca
To install APM-Private Cloud on KVM:
Storage resources can be thin-provisioned
Prerequisites The APM-Private Cloud KVM image requires KVM on a system capable of hosting a guest machine with the following minimum hardware requirements: vCPUs - 4. Memory - 16 GB. Hard Disk 1 (pca-base) - 40 GB (SSD performance required). Hard Disk 2 (pca-data) - 750 GB (SSD performance required). Hard Disk 3 (pca-backup) - 2000 GB. Hard Disk 4 (pca-flow-data) - 326 GB (SSD performance required). Network Adapter - 1 x 1 GigE. Video Card - 4 MB. The compute resources should be adjusted based on planned usage: a Trial or a deployment of up to 250 Monitoring Points requires 4 vCPUs and 16 GB of memory; a deployment of up to 1000 Monitoring Points requires 16 vCPUs and 64 GB of memory.
APM-Private Cloud is a virtual machine (VM) image that can be deployed on customer-supplied hardware running a KVM hypervisor.
APM-Private Cloud - KVM Setup - AppNeta Documentation | AppNeta
source bridge - The name of a bridge interface on the KVM host.
source file - Absolute paths to the KVM disk image files you downloaded. There are four of these that must be changed.
memory - The maximum amount of memory allocated to the virtual machine at boot time
cpu - The guest CPU requirements.
vcpu - The maximum number of virtual CPUs allocated to the virtual machine
name - The virtual machine name. This should be unique across all VMs on the KVM host.
A KVM domain definition file is required to specify the configuration of the KVM virtual machine APM-Private Cloud will run on. Create a file using the XML that follows as the basis for your KVM domain definition. Adjust the following elements and attributes as required:
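The clipped page omits the XML itself. As a rough illustration only, here is a minimal libvirt domain skeleton showing where the elements listed above live (every name, path, and size below is a placeholder; use the XML provided in the AppNeta documentation, not this sketch):

```xml
<domain type='kvm'>
  <name>myvpca</name>                 <!-- unique across VMs on this host -->
  <memory unit='GiB'>16</memory>      <!-- max memory allocated at boot -->
  <vcpu>4</vcpu>                      <!-- max virtual CPUs -->
  <cpu mode='host-model'/>            <!-- guest CPU requirements -->
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <!-- absolute path to a downloaded disk image (placeholder name) -->
      <source file='/data/kvm_images/myvpca/pca-base.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- repeat <disk> for the pca-data, pca-backup, and pca-flow-data images -->
    <interface type='bridge'>
      <source bridge='br0'/>          <!-- a bridge interface on the KVM host -->
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

All four source file paths must point at the image files you copied to the KVM host.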
APM-Private Cloud - KVM Setup - AppNeta Documentation | AppNeta
Disabling a maintenance tunnel When the maintenance is complete, disable the maintenance tunnel. To disable the maintenance tunnel: Navigate to > Remote Maintenance. Click Disable Tunnel. The Tunnel Status should show “Not connected”.
Enabling a maintenance tunnel To enable the maintenance tunnel for the AppNeta Support team: Navigate to > Remote Maintenance. Click Enable Tunnel. The Tunnel Status should show “Connected”.
APM-Private Cloud Maintenance - AppNeta Documentation | AppNeta
Provide AppNeta Support access to the system to install licenses. See APM-Private Cloud Maintenance. Contact AppNeta Support to install your licenses. Go to APM-Private Cloud Configuration to configure APM-Private Cloud services.
From a browser, log into your APM-Private Cloud. Use the IP address or login URL provided at the end of the initial setup procedure (use https://). For example: https://192.168.1.100 or https://my-vpca.mydomain.org/pvc/login.html Use the email address and password you provided in the setup procedure as login credentials.
Complete the initial setup.
Click Actions > Deploy OVF Template….
Storage resources can be thin-provisioned.
The APM-Private Cloud VMware image requires a VMware vSphere Hypervisor host capable of hosting a guest machine with the following minimum hardware requirements: vCPUs - 4. Memory - 16 GB. Hard Disk 1 (pca-base) - 40 GB (SSD performance required). Hard Disk 2 (pca-data) - 750 GB (SSD performance required). Hard Disk 3 (pca-backup) - 2000 GB. Hard Disk 4 (pca-flow-data) - 326 GB (SSD performance required). Network Adapter - 1 x 1 GigE. Video Card - 4 MB. The compute resources should be adjusted based on planned usage: a Trial or a deployment of up to 250 Monitoring Points requires 4 vCPUs and 16 GB of memory; a deployment of up to 1000 Monitoring Points requires 16 vCPUs and 64 GB of memory.
APM-Private Cloud is a virtual machine (VM) image that can be deployed on customer-supplied hardware running VMware vSphere 5.5, 6.0, or 6.5.
APM-Private Cloud - VMware Setup - AppNeta Documentation | AppNeta
To enable the maintenance tunnel for the AppNeta Support team: Navigate to > Remote Maintenance. Click Enable Tunnel. The Tunnel Status should show “Connected”.
Providing maintenance access to APM-Private Cloud AppNeta Support is responsible for APM-Private Cloud maintenance and administrative tasks such as: Software upgrades License provisioning User management Account management Security updates Configuration management In order for the AppNeta Support team to perform these tasks, you must provide access to your APM-Private Cloud system. Enable a secure (TLS) maintenance tunnel to the APM-Private Cloud server. Disable the maintenance tunnel once the AppNeta Support team is done. Use a dedicated port for maintenance access if required.
APM-Private Cloud provides an API endpoint that can be used to assess its health. The API call returns ‘true’ if the system is healthy and ‘false’ (with a list of tests that failed) if it is not
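A minimal sketch of consuming such an endpoint (the URL shown is a placeholder and the exact health-check path is an assumption; consult your APM-Private Cloud documentation for the real one):

```shell
# Hypothetical endpoint URL -- substitute your system's actual address/path.
HEALTH_URL="https://my-vpca.mydomain.org/api/healthcheck"

# check_health classifies the boolean body returned by the endpoint.
# A 'false' response includes a list of failed tests, so match on prefix.
check_health() {
  local body="$1"
  case "$body" in
    true*) echo healthy ;;
    *)     echo unhealthy ;;
  esac
}

# In practice: check_health "$(curl -sk "$HEALTH_URL")"
```

A monitoring job could run this periodically and alert whenever the result is unhealthy.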
APM-Private Cloud Maintenance Checking the system health Providing maintenance access to APM-Private Cloud Enabling a maintenance tunnel Disabling a maintenance tunnel Adding a dedicated maintenance port System maintenance involves periodically checking system health in addition to tasks such as deploying software upgrades and managing users (which requires a secure tunnel to be enabled for AppNeta Customer Care to access the system).
APM-Private Cloud Maintenance - AppNeta Documentation | AppNeta
The setup procedure is the same for both PCA 125 and PCA 250:
Legacy Private Cloud Appliance (PCA) - AppNeta Documentation | AppNeta
Reset Monitoring Point password To reset the password on a virtual Monitoring Point: Create a new virtual Monitoring Point with a new password (for example, on KVM or VMware). Migrate monitoring from the old Monitoring Point to the new Monitoring Point.
Reset Monitoring Point password To reset the password: Download the reset password config file. Edit the downloaded config file for your needs. Uncomment sections to be used (if required). Replace content in arrow brackets (no arrow brackets should remain). Copy the file onto a USB stick. Make sure the Monitoring Point is ready. Insert the USB stick into the Monitoring Point. The Monitoring Point reads the configuration from the USB stick and indicates that it is doing so. Wait until the Monitoring Point is finished. Remove the USB stick. The Monitoring Point configuration is updated. Any problems updating the configuration are logged in the usb.log file on the USB stick. Log in to Web Admin using your new password.
If you change the password and then forget it, there is a procedure to reset it.
Every Enterprise Monitoring Point (EMP) with Web Admin access is shipped with a set of default credentials: a single username and password
EMP Access Credentials - AppNeta Documentation | AppNeta
Determine the Monitoring Point hostname or IP address Once a Monitoring Point has connected to APM, you can view its hostname and IP address. To view a Monitoring Point’s hostname or IP address from APM: Log in to APM. Make sure you are using the correct organization Navigate to > Manage Monitoring Points. Click the Monitoring Point you want to connect to. In the right pane: the Name field contains the hostname. the Public IP Address field contains the public IP address. Use this if you are connecting to the Monitoring Point across the internet. the Local Network Interfaces field contains the active Monitoring Point interfaces and the local IP address of each. Use this if you are connecting to the Monitoring Point locally.
There are a few different methods to log into the Web Admin interface depending on its state. If the Monitoring Point is currently connected to APM, use Method 1. If the Monitoring Point has been connected to APM in the past but is not connected now, use Method 2. If the Monitoring Point has never been connected to APM, use Method 3.
Some AppNeta Monitoring Point models have a Web Admin interface that’s used as the primary management interface.
Access via Web admin - AppNeta Documentation | AppNeta
Rate limit There are a number of Monitoring Point API endpoints that restrict the number of requests they will process over a given time period: PUT /service/<service_name>/ - 12/min and 1/sec. PUT /service//settings/ - 12/min and 1/sec. POST /access_control/ldap/ - 12/min and 1/sec. DELETE /access_control/ldap/ - 12/min and 1/sec. PUT /ntp/ - 12/min and 2/sec. DELETE /ntp/ - 12/min and 2/sec. POST /web/ssl/ - 12/min and 2/sec. DELETE /web/ssl - 12/min and 2/sec.
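A client staying under a limit like "12/min and 1/sec" could guard its own calls as follows (an assumed sliding-window sketch; the Monitoring Point enforces its limits server-side regardless):

```shell
# CALLS holds epoch-second timestamps of previously allowed requests.
CALLS=()

# allow <now_epoch_sec> -> allow or deny, recording allowed calls.
# Denies when 12 calls landed in the last minute or 1 in the last second.
allow() {
  local now="$1" recent_min=0 recent_sec=0 t
  for t in "${CALLS[@]}"; do
    if [ $(( now - t )) -lt 60 ]; then recent_min=$(( recent_min + 1 )); fi
    if [ $(( now - t )) -lt 1 ]; then recent_sec=$(( recent_sec + 1 )); fi
  done
  if [ "$recent_min" -ge 12 ] || [ "$recent_sec" -ge 1 ]; then
    echo deny
  else
    CALLS+=("$now")
    echo allow
  fi
}
```

A wrapper script would call `allow "$(date +%s)"` before each request and sleep on deny.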
Access the Admin API directly (hostname or IP address from APM) To access the Admin API directly: Log in to APM. Navigate to > Manage Monitoring Points. Click the Monitoring Point you want to connect to. The Name field contains the hostname. The Local Network Interface field in the right pane contains the IP address. Use the Public IP if you are connecting across the internet. Substituting the IP address or hostname, enter https://<hostname-or-ip>/swagger/index.html in the address bar of your web browser. The Monitoring Point Admin API appears.
Note that some API endpoints have limits to how often they can be called.
Some AppNeta Monitoring Point models have a web hosted API for performing device management tasks. You can access the Admin API in two different ways: via Web Admin or by specifying the hostname or IP address directly.
Access via Admin API - AppNeta Documentation | AppNeta
Some AppNeta Monitoring Point models have a web hosted API for performing device management tasks. You can interact with the API via the interactive Admin API interface, programmatically, or via a command line tool like curl.
Access via curl - AppNeta Documentation | AppNeta
You can read a Monitoring Point's configuration using a blank FAT32-formatted USB stick. To read a Monitoring Point configuration: Power on the Monitoring Point and wait until it is ready. Insert the USB stick into the Monitoring Point. The Monitoring Point writes the configuration to the USB stick and indicates that it is doing so. Wait until the Monitoring Point is finished. Remove the USB stick and insert it in your computer. The configuration is in the .html file.
You can use a USB stick on a physical Monitoring Point to perform the initial setup and to read its configuration. On some Monitoring Points, a USB stick can also be used to update the Monitoring Point configuration.
Access via USB - AppNeta Documentation | AppNeta
Container - Docker Compose To restart a Container-based Monitoring Point (CMP) installed using Docker Compose: Login to the host the Monitoring Point is deployed on. Navigate to the directory the Monitoring Point was deployed from (it contains the mp-compose.yaml file). Perform the restart. docker-compose -f mp-compose.yaml restart Note: On Linux hosts, you may need to run sudo docker-compose -f mp-compose.yaml restart for this to execute successfully. Confirm that the Monitoring Point restarted successfully. docker-compose -f mp-compose.yaml ps Note: On Linux hosts, you may need to run sudo docker-compose -f mp-compose.yaml ps for this to execute successfully.
Container - AKS
To restart a Container-based Monitoring Point (CMP) installed using Azure Kubernetes Service (AKS):
1. Sign in to the Azure Cloud Shell.
2. Determine the deployment name for the Monitoring Point: kubectl get deployments
3. Perform the restart: kubectl rollout restart deployment <deployment name>
4. Confirm that the Monitoring Point restarted successfully: kubectl get deployments
Admin API
Access the Admin API, then:
1. Navigate to Appliance > PUT /appliance/.
2. Click Try it out.
3. In the Parameters section, for the action parameter, select reboot.
4. Click Execute. The Server response section should show Code “202”.
Restarting an EMP
Restarting an EMP - AppNeta Documentation | AppNeta
Shutting down a CMP installed using Docker Compose
To shut down a CMP installed using Docker Compose:
1. Log in to the host the Monitoring Point is deployed on.
2. Navigate to the directory the Monitoring Point was deployed from (it contains the mp-compose.yaml file).
3. Perform the shutdown: docker-compose -f mp-compose.yaml stop
To start it back up again: docker-compose -f mp-compose.yaml start
Note: On Linux hosts, you may need to run sudo docker-compose -f mp-compose.yaml stop and sudo docker-compose -f mp-compose.yaml start for these commands to execute successfully.
If necessary, you can use docker ps and docker kill to find and stop the two containers used by the Monitoring Point. Their names end in “sequencer_1” and “talos-001_1”.
Shutting down a CMP installed using AKS
To shut down a CMP installed using Azure Kubernetes Service (AKS):
1. Sign in to the Azure Cloud Shell.
2. Determine the deployment name for the Monitoring Point: kubectl get deployments
3. Perform the shutdown: kubectl scale deploy <deployment name> --replicas=0
To start it back up again: kubectl scale deploy <deployment name> --replicas=1
To gracefully shut down a Monitoring Point (shutdown method by Monitoring Point model):
m25, m35, m50, m70, r90, r1000 - Press the power button on the back of the device and wait for the Power and Heartbeat LEDs to turn off.
v35 - Issue a graceful shutdown request from the hypervisor.
r45 - Use SSH to access the device and run sudo shutdown.
m20, m22, m30, r40, r400 - No graceful shutdown mechanism.
CMP (AKS) - See detail below.
CMP (Docker Compose) - See detail below.
NMP (macOS) - See detail below.
NMP (Windows) - See detail below.
Shutting down an EMP - AppNeta Documentation | AppNeta
To delete a Monitoring Point from your organization:
1. Navigate to > Manage Monitoring Points.
2. For the Monitoring Point you want to delete, select > Delete. You will be prompted to confirm this action, and optionally to move all affected paths to another Monitoring Point.
For Container-based Monitoring Points (CMPs) and Native Monitoring Points (NMPs), you should also remove resources on the deployment host. For CMP - AKS deployments, see Uninstall CMP software - AKS. For CMP - Docker Compose deployments, see Uninstall CMP software - Docker Compose. For NMP - Windows deployments, see Uninstall NMP software - Windows.
Deleting a Monitoring Point has the following effects:
- All paths where the Monitoring Point is the source (and the monitoring history related to those paths) are deleted (though they can be moved to another Monitoring Point during the delete process).
- All Usage monitoring data related to the Monitoring Point is deleted.
- Tests, assessments, and packet captures are not deleted.
- The base license and any add-on licenses that were assigned to the Monitoring Point become available again.
- If the Monitoring Point has a legacy Usage-based license assigned to it, the license is deleted.
- Access to the Monitoring Point from APM is lost.
- The Monitoring Point is decommissioned. This resets the APM connection configuration on the Monitoring Point. In order to use the Monitoring Point again you need to redo the setup procedure.
Deleting an Enterprise Monitoring Point (EMP) from an organization is typically done when you are moving the Monitoring Point to another organization or freeing up its base license so it can be used by another Monitoring Point.
Deleting an EMP - AppNeta Documentation | AppNeta
If the script captures user credentials with a password you do not want visible, you can declare a password variable to mask its value. Variables named “password”, “passwd”, “pwd”, or “secret” have their values masked; all other variable names do not. To declare a variable:
1. Click Need Any Variables? on the Edit Workflow page. The Variables section appears.
2. In the Variables section, add the variable Name and Value.
3. Click + to add a new variable (optional).
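The masking rule described above can be sketched as a small helper. This is illustrative only: the actual masking is performed by APM, the function name is hypothetical, and treating the name match as case-insensitive is an assumption.

```python
# Variable names whose values are masked, per the documented rule
MASKED_NAMES = {"password", "passwd", "pwd", "secret"}

def mask_variables(variables):
    """Return a copy of the variables dict with secret values masked.

    Illustrative sketch only; case-insensitive matching is an assumption.
    """
    return {
        name: "********" if name.lower() in MASKED_NAMES else value
        for name, value in variables.items()
    }
```

For example, `mask_variables({"password": "hunter2", "host": "example"})` masks only the `password` entry.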
Set it to slightly longer than the worst-case script execution time. The maximum Timeout is testing interval / 3. NOTE: Avoid setting an overly long timeout period. If the script does time out, it will consume resources on the Monitoring Point for the full duration of the timeout period.
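The sizing guidance above can be expressed as a small sketch. Only the interval / 3 cap comes from the documentation; the 1.25 safety margin and the function name are assumptions for illustration.

```python
def choose_timeout(worst_case_sec, interval_sec, margin=1.25):
    """Pick a script timeout: slightly longer than the worst-case
    execution time, capped at testing interval / 3.

    The margin value is an assumption, not an AppNeta recommendation.
    """
    return min(worst_case_sec * margin, interval_sec / 3)
```

For a 60-second worst case on a 300-second interval this yields 75 seconds; a 120-second worst case on the same interval is capped at 100 seconds.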
In the Timeout (sec) field, specify how long to wait for the script to complete before automatically terminating it.
In the HTTP Authentication section, specify a valid Username and Password if required by the target web application. These credentials are only used by the Selenium open command (and only if prompted for by the target) with one of the supported authentication methods.
Have the site you’re testing against open in Chrome on one monitor, and APM open in a different browser on another monitor
IP address logging/alerting
This ensures you don’t lose your work if the site you are testing against hangs or crashes Chrome
Use two different browsers on two different monitors
Plan ahead - As with any coding endeavor, it’s best to plan out what you’re trying to do ahead of time. Spend some time thinking about what you want to test, and what results you expect to see from it.
Close all unnecessary tabs. Clear out your browser’s cache and cookies. Disable any plug-ins you don’t need or that could interfere with the script, particularly script blockers and ones that automatically enter text into fields, such as password managers.
each time it runs, your script will be starting from a blank slate, so you’ll need to make sure the browser you are creating the script with is equally clean
Use a clean browser
set up a dedicated account for the script to log into the application being tested
Obtain the login credentials
the script you create will be run on a Monitoring Point and it needs to be able to access the target
Script against an accessible target
script against the same version of the site that the Monitoring Point will access
Script against the target URL
Monitoring Point uses Chrome to execute the script,
Use Chrome for testing
There are several points you should consider before you start scripting:
You can use the Selenium scripting language to create scripts for Experience monitoring.
Selenium Scripting - AppNeta Documentation | AppNeta
The element is only visible if you hover over its parent
Description: You want to access an element but it is only visible when you hover over its parent.
Solution: Use the mouseOver() command to simulate mousing over the parent element.
Solution: Use the setUserAgent() command to set the user agent explicitly so that the page renders as expected.
Rationale: A script can fail if it expects an element to be present but, in a given rendering, it is not.
The page loads in a mobile or unexpected view
Solution: To simplify the logout procedure in the script, use the “open(<target>)” command with the logout URL as the <target>.
Description: You are trying to write steps in your script to log out of a website but are having difficulty.
or increase the script execution interval
reduce the number of web paths
This indicates that the Monitoring Point is overloaded and transactions are failing as a result.
“Delayed transaction” event.
Description: When a link is clicked, it opens the page in a new tab. The new tab is not active.
Solution: Use the selectActiveWindow() command to switch execution to the new tab.
Rationale: The text that Selenium is trying to match is the rendered text, not the text in the page source.
Description: You created a locator that matches link text and it is failing even though the text you are using exactly matches what you see in the page source.
Solution: Use the “clickIfVisible” command to click the element only if it is visible and continue execution if it is not.
Rationale: An example is a pop-up that appears at the end of each month.
You want the script to interact with the element when it is visible and simply continue if it is not.
The script fails because the element or pop-up you want to use only appears periodically.
Pop-up or element appears periodically
Solution: Use a non-dynamic attribute or, if possible, match the part of the attribute that is static.
Another option is to “blacklist the resource” that is slow to load
increase the interval between script executions in APM (configure the Web App Group
There are pages that take so long to load that the script times out waiting for them.
If the issue was due to a pop-up or overlay being in the way, you can try using “clickAt(<locator>, 0, 0)” instead of “click”.
If the issue was due to a resource not being available when the command executed, use a command such as “waitForElementPresent” or “waitForVisible” to wait until the element is available before trying to access it
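The two tips above can be combined in a short Selenium snippet in the pipe-delimited command | target | value style used by recorders such as UI.Vision RPA. The locator id=submit is hypothetical.

```
waitForElementPresent | id=submit |
clickAt | id=submit | 0,0
```

Waiting for the element first avoids the timing failure, and clickAt with a 0,0 offset can succeed where click is blocked by an overlay.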
a pop-up or overlay is blocking the element from view
Something on the page is not yet active or loaded
if the element it is trying to operate on is not ready or is not visible when the command is executed, it will not work
The “Element is not clickable at point (x, y)” error message indicates that the “click” command used was not able to execute.
This issue typically occurs when the locator used with the command does not correctly reference the element you are trying to use. Solution: Revise the locator to properly identify the element
When using a command with the “waitFor” prefix (for example, “waitForElementPresent” or “waitForVisible”), it can time out after 30 seconds rather than continue immediately as expected.
The “AndWait” suffix should only be used with elements that cause a new page to load.
a helpful way to debug script-related problems is with the “captureEntirePageScreenshot” command. It records a screenshot at its location in the script. Placing this command before and after commands in question is a good way to observe what is seen before and after a command is invoked
Resolving Common Issues - AppNeta Documentation | AppNeta
assertAlertPresent() - This will dismiss the alert and continue. It does not need a string to match.
assertAlert(pattern) - This requires a string to match the alert text. If it matches, script execution continues. Otherwise, a milestone error event will occur.
There are two ways of acknowledging an alert with Selenium commands: using assertAlert or using assertAlertPresent.
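For example, a confirmation alert raised by a delete action could be handled either way. The locator and alert text here are illustrative, in pipe-delimited command | target | value style.

```
click | id=delete-button |
assertAlert | Are you sure? |
```

If you do not need to verify the alert text, substitute assertAlertPresent (with no target) for the assertAlert line; it dismisses the alert and continues.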
dialogs that prompt the user for input and contain more than one button
One of these must be pressed in order to continue.
dialogs containing more than one button
that must be pressed in order to continue.
dialogs containing a single button
Selenium pop-up handling - AppNeta Documentation | AppNeta
For example, adding a command that waits for something to load, or checks that a certain label exists, or checks that something is visible.
you may need to make web element locators more robust and you may need to manually add waits, assertions, or tests.
As recording a script is just a starting point, it is likely that, once recorded, you will need to revise it manually
Rather than manually entering all script commands, you can use a third-party script recorder, such as UI.Vision RPA for Chrome (formerly Kantu), to record your interactions with a web application and automatically generate the Selenium script to mimic your interactions. UI.Vision RPA creates the script by recording your mouse clicks and keystrokes as you interact with a web application - just like recording a macro.
Selenium script recording using UI.Vision RPA - AppNeta Documentation | AppNeta
Available alert conditions
There are separate alert conditions available for browser-based and for HTTP request-based web path alert profiles.
The following conditions are available for browser-based web path alert profiles:
- Apdex Score - The minimum tolerable Apdex score for the test, not the web app.
- Connectivity - The source Monitoring Point cannot obtain any HTTP status from the target.
- HTTP Errors - The target returns an HTTP status code between 400 and 505, inclusive.
- HTTP Status - The target returns an HTTP status code other than the value specified.
- Script Errors - The source Monitoring Point returned an error because a workflow execution did not run to completion.
- Page Load Time - A page load takes longer than the specified amount of time.
- Transaction Time - The transaction takes longer than the specified amount of time.
The following conditions are available for HTTP request-based web path alert profiles:
- HTTP Connectivity - The source Monitoring Point cannot obtain a response from the target.
- Expected Response Body - The response body does not match the expected response from the target.
- Expected Status - The response status does not match the expected response from the target.
- Total Time - The time to receive a response from the target is longer than expected.
Experience Alerts - AppNeta Documentation | AppNeta
Adjust if too few alerts
Adjust alert profile if necessary
Adjust if too many alerts
Establish a baseline
The goal of an alert is to let you know when there is a problem you need to attend to. The challenge is to set alert condition thresholds so that you are not alerted too much or too little.
Setting good alert thresholds - AppNeta Documentation | AppNeta
reflashing the Monitoring Point.
files on a USB stick
manage and configure the Monitoring Point from a command line
Admin API interface can generate custom curl commands
manage and configure the Monitoring Point
web-based API interface hosted by the Monitoring Point
restart networking, add static IPs, and VLAN tags
manage and configure the Monitoring Point
Some management and configuration capabilities
change location, rename, and name capture interfaces.
Configuration Methods - AppNeta Documentation | AppNeta
It is required for alert conditions to be applied at the right time according to alert time ranges you have set.
It is required in order to display the correct time in path performance charts when Source Monitoring Point Time Zone is set.
It is required for monitoring results to be timestamped correctly.
In conjunction with the Monitoring Point’s system time, it is required in order for the Monitoring Point to connect to APM. If the time is not correct, then there could be an issue with certificate validity checks when connecting.
where the Monitoring Point is located
where you are located
Set Monitoring Point time zone
Set user time zone
Setting the time zone for both APM users and for Monitoring Points is important in order for monitoring results to display correctly.
There is a minor caveat when using daylight saving time zones. When the time range you select for displaying charts includes the daylight saving time boundary, the x-axis of the time series honors standard time. If this is undesirable, you can temporarily change your user time zone to a standard time zone.
In APM, hover over your user icon at the top right of the page. Select Update Time Zone from the dropdown. In the Time Zone field, select the time zone you are located in. Click Save.
Setting the user time zone correctly is important because charts and reports reflect your local time.
Time Zone - AppNeta Documentation | AppNeta
To edit a target location: Navigate to Delivery > Network Paths. For the path you want to edit, select > Configure. In the Target Location field, specify the target location. Click Next ». Click Save
Specifying the location of a network path target is also important and is required in order for the target to show on the GeoMap
Edit a target location
Edit a Monitoring Point’s location
Manage Monitoring Points
The Monitoring Point location is specified during the setup procedure but it can be edited at any time
necessary for a variety of reports and charts.
Location - AppNeta Documentation | AppNeta
Restart the Monitoring Point
Access the Admin API, then:
1. Navigate to Hostname > PUT /hostname/.
2. Click Try it out.
3. In the Parameters section, in the body field, change “string” to the new hostname.
4. Click Execute. The Server response section should show Code “200”.
Restart Monitoring Point.
Navigate to Monitoring Point Settings > Hostname.
To change the Monitoring Point’s hostname:
Manage Monitoring Points
to the default name generated by APM
reset the APM name
Manage Monitoring Points.
To change the Monitoring Point’s APM name:
if you change the hostname, the APM name is also changed.
By default, both of these names are the same and are generated by APM. They are of the form: <device type>-<unique string>. If you change the APM name, the hostname is not affected
APM name is used to identify the Monitoring Point within APM
hostname is used to identify the Monitoring Point on the network
All Enterprise Monitoring Points (EMP) have two names: a hostname and an APM name
Name - AppNeta Documentation | AppNeta
Navigate to Access Control > GET /access_control/acl/. Click Try it out. Click Execute.
TCP port 22 (SSH)
TCP port 80 (HTTP)
TCP port 443 (HTTPS)
TCP ports 1025-8080 (PathTest)
TCP ports 8082-65535 (PathTest)
UDP port 7 (Traceroute)
UDP port 123 (NTP)
UDP port 161 (SNMP)
UDP ports 1025-65535 (Delivery, including PathTest, voice, and video)
ACL rules can be viewed, created, added to, deleted from, and removed altogether using the Admin API
Typically, ACLs are used to restrict inbound access to specific internal source addresses and/or subnets on specific interfaces
used in conjunction with firewall rules that provide access between the Monitoring Point and APM
ACLs permit access to the Monitoring Point on a specified protocol and port (or port range), from an optional list of source IPv4/IPv6 addresses/networks, on an optional list of interfaces.
By default, all inbound access to a Monitoring Point is denied, with a few exceptions. These exceptions are in the form of Access Control Lists (ACLs)
Access Control Lists (ACLs) - AppNeta Documentation | AppNeta
Different port used
No server certificate validation
Credentials stored in a different group
Make sure all fields in the LDAP configuration have been set correctly for your LDAP environment
Make sure the Monitoring Point has network access to the LDAP server. For example, make sure the appropriate firewall ports are open. LDAP uses TCP and UDP port 389. LDAPS uses TCP and UDP port 636.
LDAP is configured on a Monitoring Point using the Admin API.
LDAP setup on a Monitoring Point should be completed by a system administrator with good knowledge of LDAP system administration
what your policy is for authenticating responses from the LDAP server. If a CA certificate is required, you’ll need to upload it to the Monitoring Point.
The base DN from which LDAP searches will be executed
read-only directory access credentials
credentials required to bind to the LDAP server
If required by the server
The DN of the authorization group containing the admin users
Active Directory and Oracle DSEE use different schemas
the port it is using
address of the server and
Prerequisites for LDAP configuration
Monitoring Point makes a request to the LDAP server to authenticate the user
using their credentials as stored on the LDAP server
Once the Monitoring Point is configured
Monitoring Point must then be configured to access the LDAP server and search the correct group for user credentials when a login attempt is made
accessible by the Monitoring Point and an authorization group on the server containing Monitoring Point administrators
need an LDAPv3 compliant server
Only members of this group
Settings supplied by the network administrator that tell the Monitoring Point to authenticate via LDAP, and where to find the server and authorization group.
perform administration tasks
Created by the network administrator
Logs into Monitoring Point (using their own desktop credentials) and assumes administrator privileges
Monitoring Point administrator
Applies LDAP configuration to Monitoring Points
Has access to the directory server and controls authentication
Better security forensics
Centralized password policy
Easy to control access
More convenient for administrators
allows administrative users to log in to Web Admin or the Admin API using their own credentials rather than the administrator credentials configured on the Monitoring Point.
AppNeta provides support for LDAP
Lightweight Directory Access Protocol (LDAP
LDAP - AppNeta Documentation | AppNeta
By default, it is the Primary network connection port
The default interface is the one the Monitoring Point uses to connect to APM
Default interface - AppNeta Documentation | AppNeta
You must set one as the default interface, which will be used for Monitoring Point connectivity, and use the other as the source interface for a network path.
You can configure additional physical interfaces if required
Physical interfaces - AppNeta Documentation | AppNeta
In cases where name servers are not equivalent (typically when providing sub-domain name resolution), this behavior may result in an incorrect name resolution. For example, a name server that cannot resolve a name may respond first with a negative result. This result is used by the Monitoring Point despite it receiving a positive result later on. In these instances, you want to associate a name server with a DNS Search Domain so that all requests for a given domain are forwarded to a specific name server (or set of servers).
Typically, the IP addresses of DNS name servers are configured within DHCP
The Domain Name System (DNS) is essential internet functionality used for resolving domain names to IP addresses.
DNS - AppNeta Documentation | AppNeta
Depending on where a Monitoring Point is deployed, you may need to add static routes. Static routes are configured per interface.
Static routes - AppNeta Documentation | AppNeta
By default the Monitoring Point uses NTP to keep the clock accurate.
NTP - AppNeta Documentation | AppNeta
The proxy settings on a Monitoring Point only affect how it connects and reports data back to APM. These settings are not used for performance monitoring. If you are using Experience monitoring, you will also need to configure access to the proxy either when you create a web app group or after the web app group is created.
For networks that require internet traffic to be forwarded by a web proxy, the Monitoring Point must be configured to connect to your proxy server so that it can communicate with APM.
Web proxy - AppNeta Documentation | AppNeta
For a VLAN to work with 802.1X, its underlying physical interface must be configured with 802.1X.
For a VLAN to work with 802.1X, its underlying physical interface must be configured as DHCP or static.
EAP and TLS authentication protocols, additional authentication information including certificates
user name and password recognized by the authentication server
Layer-2 switch with 802.1X support configured to access an authentication server
trusted server that validates network access requests
the device that provides the physical link between the supplicant and the network, relays supplicant credentials to the authentication server, and enforces the network access policy (for example, a layer-2 switch).
the device that wishes to use network resources
three entities involved in 802.1X
AppNeta Monitoring Points can be authenticated using PEAP, TLS, or MD5 authentication protocols
802.1X is an IEEE standard for providing port-based layer-2 access control to authenticate users or devices wishing to access a LAN or WAN.
802.1X - AppNeta Documentation | AppNeta
Port pairs are used for inline monitoring
Single ports are used for out-of-band monitoring.
Usage monitoring ports - AppNeta Documentation | AppNeta
This technique drops port information from nfcapd files and results in significant compression of Usage data with a corresponding increase in responsiveness. Drop-port compression is enabled by default on all 10Gbps and faster Usage interfaces.
to increase Usage performance, you can configure “drop-port” compression on Usage interfaces
Compression - AppNeta Documentation | AppNeta
passphrase is required to download capture files
The passphrase is used to encrypt your .pcap files generated by Packet Capture
Packet Capture Passphrase
Packet Capture Passphrase - AppNeta Documentation | AppNeta
you can replace the self-signed certificate/key pair with your own trusted certificate/key pair and, if provided by the Certificate Authority, intermediate/chained CA certificates.
The default SSL certificate is self-signed, hence users accessing the Monitoring Point using these interfaces will receive security warnings complaining about an untrusted or invalid certificate, or that the hostname in the certificate does not match the Monitoring Point’s hostname.
The web server running on the Monitoring Point
Web Server SSL Key/Certificate
Web Server SSL Key/Certificate - AppNeta Documentation | AppNeta
By default, the string is either ‘AppNeta’ (for Monitoring Points with version 9.10.0 or higher) or ‘public’ (for all other Monitoring Points).
A Monitoring Point’s read-only SNMP community string is
SNMP - AppNeta Documentation | AppNeta
Single-ended - Discover My Network - this method is used to create many single-ended network paths at once using network discovery to identify possible targets. Any target type can be used.
Single-ended - hub and spoke - this method allows you to create paths from one Monitoring Point to selected Monitoring Points using the Path Setup Wizard. It requires Monitoring Points on both ends of the path.
Single-ended - mesh - this method allows you to create all possible combinations of paths between selected Monitoring Points using the Path Setup Wizard. It requires Monitoring Points on both ends of the path.
Network Paths - AppNeta Documentation | AppNeta
Saved lists - provide more advanced grouping capabilities. One key difference is that a network path can belong to more than one saved list. This provides flexibility when grouping paths that have multiple attributes in common.
Network path groups - provide a very basic grouping - essentially, providing a group label for a given network path. Each network path can belong to only one network path group.
organize network paths in ways that are of value to you
Creating groups of network paths enables you to operate on a set of related paths rather than on individual paths, making these operations easier and more efficient
There are two main types of groups you can create: network path groups and saved lists.
Grouping Network Paths - AppNeta Documentation | AppNeta
Path template groups are collections of path templates and source Monitoring Point interfaces used to simultaneously configure multiple Monitoring Points with identical network path configurations. When a path template group is created or updated, all path templates in the group are applied to each source interface in the group to create network paths - one network path for each path template/source interface combination.
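The expansion rule above (one network path for each path template/source interface combination) is a simple cross product, sketched here with illustrative names:

```python
import itertools

def expand_path_template_group(templates, interfaces):
    """Return the network paths created when a path template group
    is applied: one (template, source interface) pair per combination.

    Illustrative sketch; names are hypothetical, not AppNeta APIs.
    """
    return list(itertools.product(templates, interfaces))
```

Two templates applied to three source interfaces would therefore create six network paths.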
Path templates are network path configurations that contain all configuration parameters (target, instrumentation, alert profiles, etc.) except the source Monitoring Point interface
Path template groups - AppNeta Documentation | AppNeta
For a network path to use QoS settings within test packets it sends, a QoS settings template must be applied to the network path.
QoS templates are QoS settings that can be applied to a network path when it is created or edited
QoS templates - AppNeta Documentation | AppNeta
If a condition is violated, indicating unacceptable performance, or a violated condition is cleared, an alert is triggered.
for example, network connectivity, utilized capacity, and QoS changes
A network path alert profile is a set of conditions that define the limits of acceptable performance for a network path.
Delivery alerts - AppNeta Documentation | AppNeta
Once your Monitoring Points are deployed you can create web app groups (either browser-based or HTTP request-based) that use them.
Once a web app group is created, the associated Monitoring Point(s) will start executing the workflow(s) against the web app target(s) and sending results back to APM for review.
In order to monitor a web application you need to create a web app group with at least one web path
Set up Experience monitoring - AppNeta Documentation | AppNeta
Two scripting languages are supported: Selenium and AppNeta
As a script proceeds, it measures the timing of responses from the application and forwards these measurements to APM. APM then creates a picture of the application performance from a user perspective and generates alerts when the measurements are outside an acceptable range. Alerts are also generated for issues such as connectivity loss, HTTP errors, and an Apdex value indicating unacceptable user satisfaction.
Experience monitoring uses scripts, run on a browser located on the Monitoring Point, to simulate a typical user’s interactions with a web application
Introduction to scripting - AppNeta Documentation | AppNeta
if the target detects a QoS change in a packet it is sent, it informs the source and the source generates an alert. If the source detects a QoS change in incoming packets it also generates an alert.
source and target Monitoring Points are both configured to use the user-specified QoS value
For dual-ended paths, QoS changes are detected using only UDP messages
Note: Because the “Port Unreachable” check normally occurs every five minutes, if a QoS change violation occurs, it takes at least five minutes to clear.
the result is indeterminate and no alert is generated
If the markings are the same
If the markings are different - a change took place somewhere on the outbound path and an alert is generated.
The expectation is that the target will reply with an ICMP “Port Unreachable” message containing the header of the denied packet in its payload. The QoS markings in that header are then compared to the QoS markings that were sent to determine if they were altered on the outbound (source to target) path.
If they are zero (i.e. cleared) - then, because some targets clear QoS markings in ICMP echo replies (and we do not want to generate an alert if the target clears the markings), we need the results of a UDP test
If they are different than what was sent and are non-zero - a change took place somewhere on the path and an alert is generated
no alert is generated
If they are the same
An ICMP echo response is returned and the QoS markings on the response are evaluated:
An ICMP echo request is sent from the source
For single-ended paths, QoS changes are detected using both ICMP and UDP messages
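The decision logic in the notes above, for evaluating the QoS markings returned in an ICMP response, can be sketched as follows. The function name and return labels are illustrative, not AppNeta terminology.

```python
def classify_returned_qos(sent_marking, returned_marking):
    """Classify the QoS markings in an ICMP response against what was sent.

    Illustrative sketch of the documented single-ended check.
    """
    # Same markings: no change on the outbound path, no alert
    if returned_marking == sent_marking:
        return "no_alert"
    # Some targets clear QoS markings in ICMP echo replies, so a zero
    # result is indeterminate until the results of a UDP test are in
    if returned_marking == 0:
        return "udp_test_needed"
    # Different and non-zero: a change took place somewhere on the path
    return "alert"
```

For example, sending DSCP 46 and receiving 46 back raises no alert, receiving 0 defers to the UDP test, and receiving 8 triggers an alert.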
a QoS change is detected
an alert profile with the QoS change condition configured is applied
the network path is configured with the QoS Settings field set to something other than “None”
APM can generate an alert if it detects that QoS markings are altered
If these markings are altered by a device in the network, a poor user experience can occur
QoS markings on packets are used to prioritize traffic
Alerting on QoS changes - AppNeta Documentation | AppNeta
Each time a script is run or an HTTP request is made against a web application it is known as a web path test.
The set of web paths associated with a given web application are grouped together in a web app group
known as a web path
Setting up Experience monitoring involves specifying which Monitoring Points are going to monitor a given web application and creating a script or specifying the HTTP request to interact with the application
you can set alerts
HTTP workflows generate HTTP requests from a Monitoring Point to an application’s API
determine the web app’s availability and responsiveness
generate direct HTTP requests that emulate a client app’s interactions with the web service using GET, PUT, or POST commands
HTTP workflows are primarily used to monitor web service APIs
you can set alerts
also breaks down the measurements by milestone within the script
Each time a script is run, the Monitoring Point measures the amount of time taken by the browser, the network, and the server running the application
Browser workflows use a Chrome browser located on the Monitoring Point to run scripts that emulate the workflow of a typical user
If the issue is with the server or the web app running on it, you can determine where in the app the issue is originating.
can also determine whether degradation in responsiveness is due to the browser, the network, or the server
allow you to determine the web app’s availability and responsiveness
scripted synthetic transactions that emulate an end user’s interactions with a web page through a browser
Browser workflows are primarily used to monitor HTML-based web apps
are easily identified within APM.
show how application performance changes over time
Because the workflows are run at regular intervals
the monitoring results accurately reflect the application performance experienced at those locations
Because the workflows are executed from Monitoring Points at various locations
HTTP workflows APIs or web services Direct HTTP requests
Browser workflows Web apps (HTML-based) End user interactions with a web page
Experience monitoring provides you insight into how web applications are performing from a user or client application perspective. Monitoring Points execute transactions that emulate user or client interactions with an application.
Experience monitoring - AppNeta Documentation | AppNeta
Milestone Apdex - is calculated based on the page loads within a given milestone over the previous two hours.
Web path Apdex - is calculated based on the page loads within all milestones on a given web path over the previous two hours.
Web application Apdex - is calculated based on the page loads within all milestones, on all web paths, for a given web application over the previous two hours.
Apdex scores are calculated for every milestone, web path, and web application.
Each satisfied page load is counted as 1. Each tolerating page load is counted as 1/2. All others (those taking longer than the ‘tolerating threshold’ and those that fail) are counted as 0.
So the Apdex score is simply a ratio of satisfied and tolerated page load response times to the total number of page load requests made.
Apdex_t = (Satisfied count + (Tolerating count / 2)) / Total samples
Satisfied count - the number of page load samples where the response time is less than the ‘satisfied threshold’ (t).
Tolerating count - the number of page load samples where the response time is between the ‘satisfied threshold’ (t) and the ‘tolerating threshold’ (by default, 4 x t).
Total samples - the total number of page load samples. In APM, the samples used for an Apdex calculation are from the previous two hours.
t - the ‘satisfied threshold’ (in seconds) under which the user is satisfied with an application’s response. By default, this is four seconds.
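As a worked example of the formula above (a minimal sketch; the sample counts are invented):

```python
def apdex(satisfied, tolerating, total):
    # Apdex_t = (Satisfied count + Tolerating count / 2) / Total samples
    return (satisfied + tolerating / 2) / total

# 100 page loads over two hours: 60 satisfied, 30 tolerating, 10 slow or failed.
score = apdex(satisfied=60, tolerating=30, total=100)
print(score)  # 0.75 - shown in APM as 75%
```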
Apdex uses a simple formula to calculate user satisfaction. The result - the Apdex ‘score’ - is a single number between 0 and 1 where 1 indicates that a user would be completely satisfied with the application response time. APM presents the Apdex score as a percentage from 0% to 100%.
Apdex is an industry-standard method for reporting and comparing application performance in terms of end user experience
Apdex - AppNeta Documentation | AppNeta
Execution interval of each web path
Workflow execution time
Number of active web paths
Type of Monitoring Point
consider making changes to one or more of the factors
As load increases, there is a greater chance of workflows being delayed as they wait for others to complete
Experience Load includes the execution of both Browser and HTTP web paths, and is measured every 30 minutes.
represents the average percentage use of available Experience execution time over 30 minutes
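The exact calculation is not given in these notes; a plausible sketch, assuming load is total workflow execution time as a percentage of the 30-minute measurement window:

```python
def experience_load(execution_seconds, window_seconds=1800):
    """Illustrative only: % of available Experience execution time used
    in a 30-minute window, given a list of workflow run durations (seconds)."""
    return 100.0 * sum(execution_seconds) / window_seconds

# Nine one-minute workflow runs in a 30-minute window:
print(experience_load([60] * 9))  # 30.0 (percent)
```

This also shows why more web paths, longer workflows, or shorter execution intervals all raise the load.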
Experience Load - AppNeta Documentation | AppNeta
each Monitoring Point model (and legacy Monitoring Point model) has a limit to the number of simultaneous flows it can monitor
Monitoring Points have separate ports dedicated to Usage monitoring
Usage monitoring is passive - it does not generate any test traffic
see how bandwidth at a given location is being devoted to particular applications, hosts, and users
These records are then collated into groups and classes so that you can easily understand how your network resources are being consumed
When you start Usage monitoring, the Monitoring Point generates traffic flow records in real time
Each TCP conversation, for example, consists of two flows: one from source to target and one from target to source.
All packets with the same 6-tuple of: source IP address, destination IP address, source port, destination port, ToS (Type of Service), and protocol are considered part of the same flow
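A minimal sketch of grouping packets into flows by that 6-tuple (the packet records and field names are illustrative, not AppNeta’s schema):

```python
from collections import defaultdict

# Hypothetical packet records for one TCP conversation.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 44321, "dport": 443,
     "tos": 0, "proto": "tcp", "bytes": 1500},
    {"src": "10.0.0.2", "dst": "10.0.0.1", "sport": 443, "dport": 44321,
     "tos": 0, "proto": "tcp", "bytes": 900},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 44321, "dport": 443,
     "tos": 0, "proto": "tcp", "bytes": 1500},
]

# Key each packet by the 6-tuple; accumulate byte counts per flow.
flows = defaultdict(int)
for p in packets:
    key = (p["src"], p["dst"], p["sport"], p["dport"], p["tos"], p["proto"])
    flows[key] += p["bytes"]

# One TCP conversation yields two flows, one per direction.
print(len(flows))  # 2
```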
also includes the ability to capture packets in order to provide even greater traffic detail.
and on the hosts that are using those applications
It provides traffic volume information on the applications and application categories being used
Usage monitoring provides a view into the traffic that is passing through your network
Usage monitoring - AppNeta Documentation | AppNeta
APM is supported on the following browsers: Chrome & Firefox
Global Monitoring Points are upgraded periodically and the upgrade usually requires a service interruption
Enterprise Monitoring Points should be upgraded periodically
usually requires a service interruption
APM-Private Cloud software should be upgraded periodically to take advantage of new features
Report schedules will be paused during this window
Advanced voice assessments and voice/video tests that are either scheduled to start or are running during this window will need to be restarted.
During the service window you may not be able to log in.
APM-Public Cloud (the SaaS version of APM) is upgraded on a bi-weekly basis and the upgrade generally requires a service interruption
APM-Public Cloud and Global Monitoring Points are updated automatically by AppNeta. APM-Private Cloud (if you use it) and Enterprise Monitoring Points are your responsibility to upgrade.
AppNeta offers full customer support on all products for the duration of your contract
Occasionally incidents or planned maintenance affects only a subset of customers. In these cases, notices we post on status.appneta.com will include mention of the affected application cluster(s).
The most up-to-date APM Public-Cloud service status information is at status.appneta.com.
Service - AppNeta Documentation | AppNeta
Access to systems and customer information in the AppNeta Performance Manager (APM) is controlled using a policy of need-to-know/least privilege
Only Organization Admins can download the audit log.
The APM audit log file contains records of all actions performed on APM, when they were performed, who performed them, and where they were performed from
Software packages are downloaded from the upgrade repository via SSL
Linux-based NMP run as root and require outbound connections to APM servers to report the timing data and to download software updates. Timing data is sent back to APM via HTTPS
prompted for a passphrase once per Monitoring Point per login session
The symmetric key used for encryption is based on a per-Monitoring Point, user-defined passphrase
Captures must be decrypted using the symmetric key created from the passphrase
Captures are uploaded to the Capture Server via SSL where they are encrypted using an AES 256-bit key prior to their transfer to Amazon S3
APM uses standard encryption practices to ensure that the information in your packet captures is securely transmitted and stored.
are masked within the script editor
is done through a secure channel (SSL/TLS)
All Experience workflow script contents, including stored passwords, are encrypted while at rest within the APM database
AppNeta utilizes NIST SP 800-88 on Data Sanitization as its guideline
Customer data is purged within 90 days of being decommissioned or contract termination
Only key engineers may access production data
Data access is restricted solely to AppNeta employees
AppNeta Performance Manager (APM) is hosted on Amazon Web Services
we negotiate to TLS 1.2 whenever possible
We also run Rapid7 vulnerability scans on all releases to ensure that no new vulnerabilities have been created
current generation of modern Monitoring Points will always negotiate to the highest protocol level - TLS 1.2
Within the cloud application infrastructure, each unique SSL/TLS tunnel connection is identified by the GUID associated with the Monitoring Point. Since each GUID is associated with exactly one organization, we ensure that all of the telemetry data arriving on that tunnel is directed to the data store and/or scheme associated with that organization
APM-Public will only support TLS1.2 and higher
all delays or failed attempts reset after an hour of inactivity
all delays or failed attempts reset after the first success
the 30 second delay recurs after each subsequent failure
after 5 consecutive failures, a delay of 30 seconds is imposed
up to 5 attempts are allowed with no delay
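The lockout behavior described above can be sketched as follows (class and method names are illustrative):

```python
import time

class LoginThrottle:
    """Sketch of the described policy: 5 free attempts, then a recurring
    30-second delay; resets on success or after an hour of inactivity."""
    FREE_ATTEMPTS = 5
    DELAY_SECONDS = 30
    RESET_AFTER = 3600  # one hour of inactivity clears the failure count

    def __init__(self):
        self.failures = 0
        self.last_attempt = None

    def delay_before_attempt(self, now=None):
        now = time.time() if now is None else now
        if self.last_attempt is not None and now - self.last_attempt > self.RESET_AFTER:
            self.failures = 0          # inactivity reset
        self.last_attempt = now
        # No delay until 5 consecutive failures; 30 s thereafter.
        return 0 if self.failures < self.FREE_ATTEMPTS else self.DELAY_SECONDS

    def record(self, success):
        self.failures = 0 if success else self.failures + 1
```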
If a user session has been idle for more than 10 minutes, their session times out
APM passwords must contain a minimum of eight characters and must include uppercase alphabetic, numeric, and special characters
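The stated policy can be checked with a sketch like this (the exact set of special characters APM accepts is not specified here, so any non-alphanumeric character is treated as special):

```python
import re

def valid_apm_password(pw):
    """Minimum 8 characters, with uppercase, numeric, and special characters."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)
```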
Security - AppNeta Documentation | AppNeta
or on customer-supplied hardware using either KVM or VMware
APM-Private Cloud can be deployed on AppNeta-supplied hardware
Feature: Public cloud / Private cloud
Monitoring Points supported: Unlimited / 1000
Usage monitoring at 10 Gbps: Yes / No
Global Monitoring Point access
Usage monitoring at 10 Gbps
With APM-Private Cloud: All data remains within the customer network. There are no external network connections except where explicitly enabled. These include: the maintenance tunnel, the ISP resolution service, and the geo map service.
Private Cloud is typically used by customers whose network or security policy
Private Cloud, runs as a virtual machine instance on a server within a customer network
public cloud offering runs as a service (SaaS) within the public cloud
Private Cloud Overview - AppNeta Documentation | AppNeta
Power Redundancy 1+1
4 x 1 GigE
Private Cloud Server hardware (PCS1000)
Private Cloud Server Hardware - AppNeta Documentation | AppNeta
monitor your network and web applications
DNS monitoring is currently not supported. Migrate monitoring is not currently supported. AppNeta Synthetic scripting is not supported. Traceroutes outbound from a GMP will not show the identity of any hops between the source and target
Monitor performance from specific regions to apps used by your distributors, integrators, or retail locations.
Monitor performance to core public-facing apps or internet-facing services from global locations representative of your customer base
typically deployed in one of the following scenarios
used only for Delivery and Experience monitoring
Installed in global cloud provider locations selected by you
Global Monitoring Points are Container-based Monitoring Points owned by you but managed by AppNeta
macOS automatic time updates cause periodic jitter spikes and packet discards in voice and video tests
does not support Apple M-series processors
On macOS machines:
TCP Traceroute and TCP Ping are not supported
Can be used at a maximum speed of 500 Mbps
On Windows machines:
Capacity measurements in either direction are problematic on Wi-Fi networks
Scheduled access network links (for example, Fibre PON, DOCSIS Cable), typically provided by ISPs to residential customers, can show lower uplink capacity than expected.
Does not support voice or video testing when installed on a virtual machine
Does not support PathTest
Delivery monitoring only
typically used to monitor network performance from the perspective of a work-from-home user.
NMP Enterprise Monitoring Point is software that runs on the native operating system of a host computer and is used for Delivery monitoring
Any virtual machine that comes up without the following minimum requirements will show up in APM as a v25, which is an unsupported Monitoring Point type that cannot be licensed
10Gbps NICs are supported on eth0, eth2, and eth3. Only 1Gbps NICs are supported on eth1 - the Usage monitoring port.
AppNeta Synthetic scripting is not supported.
Migrate monitoring is not currently supported.
DNS monitoring is currently not supported.
on a given deployment host.
CMP limitations include:
The host requirements for the CMP are as follows:
To target a CMP deployed using AKS, run the terraform output command within the Azure Cloud Shell to determine the Load balancer fqdn to target.
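That step could look like this from the Azure Cloud Shell (illustrative; the output key label may differ in your Terraform configuration):

```shell
# Run in the Terraform working directory used for the AKS deployment
terraform output
# Use the "Load balancer fqdn" value shown as the path target
```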
When deployed on Azure using AKS, a redundant instance is automatically created and failover is automatic.
targeting a CMP, use dual-ended monitoring
Capacity measurements can be influenced by networking on the host, kernel version on the host, other containers sharing the host
applicable orchestration command
does not have a Web UI
or as a c50
The CMP can be licensed as either a c10
Only one CMP can be running on a given host
The CMP can be deployed in Azure cloud using AKS orchestration or on Azure, AWS, a server, or a workstation, using Docker Compose
Typically though, this use case is achieved using a Native Monitoring Point
deployed on a user workstation to monitor network and application performance from the perspective of the work-from-home user
deployed on a server in a remote office to monitor network and application performance from the perspective of users at that office.
also be used to measure baseline user experience
used as a target
deployed in the same Virtual Private Cloud (VPC) as your critical applications
User experience can then be measured from that region
deployed in the public cloud in a region
Monitoring from a workstation
Monitoring from a remote office
Monitoring to a cloud-based app
Monitoring from a remote region
supports Delivery/Experience monitoring up to 1Gbps
CMP Enterprise Monitoring Point is software that runs in a Docker container
Using PathTest with TCP, the maximum load generated is 37Gbps.
On idle 100Gbps networks you may see average network utilizations of approximately 2Gbps. This shows as “Utilized Capacity (avg)” on Delivery
For Diagnostics tests, the maximum “Total Capacity” measurement is approximately 50Gbps for both dual- and single-ended paths.
On 100Gbps networks, the maximum “Total Capacity” we can detect is approximately 96Gbps (on a dual-ended path, approximately 85Gbps on a single-ended path) rather than 99.58Gbps (maximum capacity with 9000-byte MTU taking packet overhead into consideration).
In order to achieve accurate capacity measurements, the MTU size is important. For 100Gbps networks, use a 9000-byte MTU. For 40Gbps networks, use a minimum 4000-byte MTU.
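The 99.58 Gbps figure above is reproducible assuming standard Ethernet per-frame overhead of 38 bytes (preamble + start delimiter: 8, MAC header: 14, FCS: 4, inter-frame gap: 12):

```python
ETH_OVERHEAD = 38  # bytes per frame: preamble/SFD(8) + MAC header(14) + FCS(4) + IFG(12)

def max_goodput_gbps(line_rate_gbps, mtu_bytes):
    """Maximum achievable capacity once per-frame overhead is accounted for."""
    return line_rate_gbps * mtu_bytes / (mtu_bytes + ETH_OVERHEAD)

print(round(max_goodput_gbps(100, 9000), 2))  # 99.58
```

The same formula shows why a larger MTU matters: with a 1500-byte MTU the per-frame overhead costs proportionally more capacity.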
On 40Gbps and 100Gbps networks, there are a few points to be aware of
Monitoring Point Specifications - AppNeta Documentation | AppNeta
Compare application performance from the data center with that from user locations (Experience monitoring)
Create dual-ended paths to test WAN connectivity between the data center and user locations
another point of reference for network and application performance monitoring
Delivery and Experience monitoring - Deploy in the core (data center) as well as at user sites
deploying at an aggregation point - typically at a gateway or a VLAN trunk.
Delivery monitoring - Deploy such that you can monitor end-to-end from the user to an application.
same physical location, same subnet, same VLAN if applicable, same QoS characteristics if applicable, etc.
You want your monitoring results to mimic those of a user, so the closer the network environment you deploy in is to that of the user, the better
Experience monitoring - Deploy as you would deploy a typical user.
helpful when you need to know which user is connected to a particular application
allows you to see private IP addresses rather than public IP addresses
Usage monitoring - Deploy downstream of network address translation (NAT).
monitor traffic from as many users and applications as possible.
Typically at a gateway or a VLAN trunk
Usage monitoring - Deploy at an aggregation point.
more accurate picture of what users are experiencing
General - Deploy behind the firewall.
deploying “as close to the network core as possible”
Considerations for Monitoring Point placement
requiring cloud-based monitoring but preferring to have AppNeta provide Monitoring Point management
hosting cloud-based applications
recommend deploying a CMP in each region or application environment.
Work-from-home user workstations NMP
recommend deploying one Monitoring Point at each location you want to monitor
Deployment Considerations - AppNeta Documentation | AppNeta
If the Windows NMP is to serve as a target for single-ended paths, you need to add an inbound rule to allow ICMPv4 “Echo Request” packets to any program
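That inbound rule could be added like this (illustrative; run from an elevated prompt, and the rule name is arbitrary):

```shell
netsh advfirewall firewall add rule name="AppNeta ICMPv4 Echo Request" dir=in action=allow protocol=icmpv4:8,any
```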
When you install the AppNeta Native Monitoring Point (NMP) on a Windows machine, firewall rules are automatically added during the installation process. They are all inbound rules and include:
Allow ICMPv4 “Echo Reply” (Type 0, Code Any) packets to the NMP
Allow ICMPv4 “Destination Unreachable” packets to the NMP
Allow UDP packets to the NMP
Allow ICMPv4 “Time Exceeded” packets to any program
Azure - If you are deploying within Azure, the Azure firewalls and Network Security Groups should be configured with “Allow Inbound ICMP”. AWS - If you are deploying within AWS, the default security group needs a rule to allow inbound ICMP Echo Requests.
If you are installing the CMP using Docker Compose, you need to configure the following firewall rules for Delivery monitoring. All the other (non-Delivery) firewall rules shown above still apply:
If you are installing a CMP in Azure using AKS, there are no additional firewall rules to configure. The install process takes care of the firewall rules.
Additional firewall rules may be required depending on where you are deploying the Container-based Monitoring Point (CMP).
Firewall rules allowing access to a Simple Network Management Protocol (SNMP) Network Management System (NMS) server are only required if the Monitoring Point is configured for SNMP notification forwarding and the server is external.
only required if the resolver is external.
only required if the server is external.
The Monitoring Point needs inbound and outbound connections for Network Time Protocol (NTP) to ensure precise timestamping
Domain Name System (DNS) is required for hostname to IP resolution
Container-based Monitoring Point (CMP), proxy configuration is done on the Docker host
If the proxy service requires authentication, it must use either basic or digest authentication; NTLM and Kerberos are not supported.
This might be the case if the Monitoring Point is deployed in a subnet reserved for network infrastructure rather than end-stations
If HTTP traffic is directed to a proxy server, make sure that no ACLs prevent the Monitoring Point from connecting to it (for example, permit tcp host device-ip host proxy-ip eq proxy-port)
Allow outbound connections on port 443 if a workflow includes logging in to the target site.
Outbound TCP connections on port 80 are essential to Experience monitoring
Experience monitoring is a fundamental feature of the AppNeta solution so ports related to Experience monitoring should be opened.
port forwarding is required on the remote firewall to route traffic to the target Monitoring Point.
Monitoring Point is a target behind a NAT device
Delivery monitoring is a fundamental feature of the AppNeta solution so ports related to Delivery monitoring should be opened
Specifying *.pm.appneta.com provides access to all AppNeta APM servers but you can create rules for specific APM servers you need to access
Access to APM is mandatory.
In addition to setting firewall rules, you can create Access Control Lists (ACLs) on some Monitoring Point models to restrict inbound access
Additional configuration beyond this is based on your monitoring needs
At a minimum, the Monitoring Point must be able to connect to APM
Firewall Configuration - AppNeta Documentation | AppNeta
Only one CMP can be installed on a host.
You can install a CMP using AKS either via the APM user interface or via the APM API.
need “Owner” or “Contributor” and “User Access Administrator”
best place in your network to deploy
Kubernetes 1.14+; Azure Cloud Shell, which also provides: Terraform v0.12.23+ and Helm v3.1.1+
Installing the CMP using AKS
Also, if you are installing on Azure, always use a dedicated instance and “host networking”.
If you do use “bridge networking” and install multiple CMPs on the same host, only one of them can be a path target.
Important: CMPs can be installed using either “host networking” or “bridge networking”. We recommend using “host networking” unless you are sharing a host/instance with other containerized apps
AWS: An AWS account. A Security Group with a rule to allow inbound SSH (AppNeta firewall rules are added to this during setup below). A key pair to allow SSH access to your instance (for example, ssh -i key-pair.pem email@example.com). An EC2 instance of the appropriate size that supports nitro in a VPC associated with the Security Group. A Hardware Virtual Machine (HVM) image that supports enhanced networking.
Azure: An Azure account. “Owner” or “Contributor” and “User Access Administrator” roles assigned
Linux hosts: Docker Engine 18.06.0+ Docker Compose file format 2.4
Windows hosts: Docker Desktop 220.127.116.11+ Nested virtualization needs to be enabled when running in a VM. VirtualBox and Docker Desktop (or rather its underlying technology - Hyper-V) cannot be run at the same time on Windows
Prior to deployment you’ll need to: Determine the best place in your network to deploy the Monitoring Point. Configure your firewall rules to enable the Monitoring Point access to APM.
Server or Workstation Use Docker Compose.
AWS Use Docker Compose.
To install the CMP into an Azure Virtual Network (VNet) (Azure’s version of a Virtual Public Cloud) that you control, you’ll need to use Docker Compose.
Docker Compose can also be used to deploy a CMP.
Azure Kubernetes Service (AKS) can be used for simplified container deployment and management.
Monitoring from a workstation
Monitoring from a remote office
Monitoring to a cloud-based app
Monitoring from a remote region
Typical use cases for the CMP include:
Container-based Monitoring Point Setup - AppNeta Documentation | AppNeta
A .ova image file is downloaded to your computer
Download v35 template (vCenter) and generate a configuration file
The first is for connectivity to APM and for Delivery and Experience monitoring. The second is for Usage monitoring.
two physical NICs.
vSphere 5.5.0 or later.
Virtual Monitoring Point Setup - v35 on VMware - AppNeta Documentation | AppNeta
If the firewall within the macOS (Apple menu () > System Preferences > Security & Privacy > Firewall) is turned on, make sure that you: Enable “Automatically allow downloaded signed software to receive incoming connections”. Disable “Enable stealth mode”.
Native Monitoring Point Setup - macOS - AppNeta Documentation | AppNeta
macOS 10.14 (Mojave), macOS 10.15 (Catalina), macOS 11.0 (Big Sur)
Windows 8.1 and newer (all variants) and Windows Server 2012 and newer
Monitoring Point Specifications - AppNeta Documentation | AppNeta
Usage monitoring does not consume application licenses.
Those < 4 hops or with latency < 5ms are considered LAN targets and do not consume an application license.
for each network path created in Delivery monitoring where the target is >= 4 hops away from the source and latency >= 5ms
An application license is consumed: for each web path created in Experience monitoring
Add-on License - Required to enable certain additional features.
Included with each Base license. Also available individually if you need to monitor more applications than the Base license provides. The maximum number that can be assigned depends on the Monitoring Point model.
Enables you to monitor an application on a distinct path from the Monitoring Point to the target application.
Base License - Every Monitoring Point must be assigned a Base license before it can be used. Each Base license type contains a number of Standard Enterprise Application Licenses Each Base license type can be assigned to specific Monitoring Point models. Once a Base license is assigned, the Monitoring Point can (depending on the model) be used for Delivery, Experience, and Usage monitoring. Basic monitoring of data and voice connections is included. Voice and video tests and assessments require an add-on license.
Monitoring Point Licensing - AppNeta Documentation | AppNeta
Enterprise Monitoring Points
Monitoring Point overview - AppNeta Documentation | AppNeta
1. The NMP does not support video testing when installed on a virtual machine.
1. The NMP does not support voice testing when installed on a virtual machine.
1. On Windows machines, the NMP is qualified to be used at a maximum speed of 500 Mbps
5. IPv6 is not supported when using AKS orchestration.
4. The VLAN is configured on the Docker host.
3. The v35 (KVM) can support up to four ports. Up to three of these can be 10Gbps
2. The Azure D1 v2 VM instance supports a network bandwidth of up to 750Mbps
part of the container configuration in Docker
the configuration of network features normally done on
1. For the CMP
— Delivery/Experience ports
10 Gbps monitoring ports
Native Monitoring Points (NMPs) and Container-based Monitoring Points (CMPs) can be deployed with different capabilities
Software Monitoring Points
QSFP monitoring ports
10 Gbps monitoring ports
1 Gbps monitoring ports
Monitoring Point feature comparison - AppNeta Documentation | AppNeta
simulating video calls
simulating voice calls
assess the network’s suitability to handle voice traffic
assess the network’s suitability to handle data-intensive applications like backups and file transfers
a stress testing tool called PathTest
ping, traceroute, and nslookup
basic network investigation tools
Delivery monitoring functionality provides tools used for specific purposes
Deep Path Analysis (DPA)
to help determine where and why it is performing poorly
once an alert is triggered, TruPath runs a Diagnostic test
outside of acceptable limits
Alerts can be set to trigger
typically once per minute
measure a number of network path performance metrics
identifying the path between the source and the target
a network path is created
AppNeta Monitoring Points are deployed to act as one or both of these endpoints
is what we are interested in monitoring
The path this data takes
and a target
between a source
Application data travels back and forth
assess the condition of the networks over which your application data is transported
network performance monitoring
Delivery monitoring - AppNeta Documentation | AppNeta
analyze the behavior of a network under different traffic loads.
accurately measure the available ICMP, UDP and/or TCP capacity
PathTest is a powerful load generation tool used to examine the network capacity between two endpoints on a LAN or WAN link
Stress, Capacity, and Availability testing - AppNeta Documentation | AppNeta
you should definitely configure network paths that use it as dual-ended
Typically, more bandwidth is allocated to the downlink direction (traffic flowing from the internet) than to the uplink direction (traffic flowing to the internet). Because of this, this link can be a network bottleneck.
Prior to creating a network path, you should determine whether QoS is applied within your administrative domain (AD). If it is, you’ll specify that in the network path configuration, and choose an alert profile with a QoS condition so that you can verify that the QoS you expect isn’t being changed en route.
should take into consideration the prioritization, or quality of service (QoS), the traffic receives while in transit
Monitoring the path
video traffic is treated as a large data transfer
To monitor a network for voice quality, you need to know which audio codec your VOIP deployment is configured to use so that APM can properly simulate that traffic
specify the Network Type according to these definitions.
LAN path - < 4 hops (including the target) or latency < 5ms. WAN path - >= 4 hops (including the target) and latency >= 5ms.
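The LAN/WAN definitions above translate directly into code (a sketch; the function name is illustrative):

```python
def network_type(hops, latency_ms):
    """Classify a network path per the definitions above.
    `hops` includes the target."""
    if hops >= 4 and latency_ms >= 5:
        return "WAN"   # consumes an application license
    return "LAN"       # does not consume an application license
```

For example, a 10-hop path with 2 ms latency is still a LAN path, because both conditions must hold for WAN.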
Those with a LAN target do not
Network paths created in Delivery monitoring with a WAN target consume an application license
Data or Voice
For CMPs deployed using AKS, we recommend dual-ended paths.
If you target a Container-based Monitoring Point (CMP) deployed using AKS, run the terraform output command within the Azure Cloud Shell to determine the Load balancer fqdn to target.
you may only be able to connect to the outside interface of a remote WAN router
Voice handsets, depending on the model, can make good targets
voice test, you’ll need a Monitoring Point as the target
best target is an AppNeta Monitoring Point in the same network segment as the handset or workstation
monitor voice or video
up to 500 Mbps
you’ll need to create an exception
not protected by Symantec endpoint protection
configured to allow ICMP responses
If you target a workstation
If you want to monitor the performance of a web application, target the webserver it runs on
quality of the NIC, the network driver
Servers and workstations are good targets but you may see as low as 70% of the expected bandwidth
AppNeta Monitoring Points and AppNeta WAN targets make the best targets
Target selection considerations include:
network devices show a lower than expected capacity
end-stations report the expected capacity
The image below shows the difference between targeting network devices and targeting end-stations on a gigabit network
If you try to target a network device such as a router, the capacity measurements you see may be less than expected as routers prioritize traffic forwarding over responding to ICMP echo requests
The goal is typically to measure performance through the network
the connectivity check will pass but the subsequent diagnostic will hang
all network paths that use the interface will enter the connectivity lost state
interfaces with DHCP-assigned addresses that become unavailable
Known issue on m22, m30, r40, r400:
Re-run the Path Setup Wizard.
configure the additional physical interface or wireless interface
Confirm that your Monitoring Point has a secondary network connection port.
If the secondary port does not appear
When creating a network path using the Path Setup Wizard, select the appropriate Local Network Interface in Step 1 of the wizard.
To use a secondary port for a network path:
Some Monitoring Points have secondary network connection ports
By default, all network paths use the primary network connection port (the default interface)
Source interface selection
If you want the return path to target the source Monitoring Point hostname, create the path using the target’s hostname
If you want the return path to target a specific source Monitoring Point interface, create the path using that interface and specify the target’s IP address
When identifying a dual-ended path target, use either its IP address or its hostname
the path from the target Monitoring Point back to the source depends on how the path is set up
Determining the return path for a Dual-ended path
possible that their respective performance metrics do not match
protocols may be treated differently in your network
two path types use different protocols
there are some restrictions
exceptions are the Latency and Round-Trip Time charts
path performance page displays one chart for each direction
use UDP in addition to ICMP
require Monitoring Points at both ends
Characteristics of dual-ended paths
dual-ended monitoring, capacity can be measured in each direction independently.
using single-ended monitoring will only detect the slower of the two directions
both directions is required for paths that include a link provisioned for asymmetric loads (for example, a typical home ISP connection)
AppNeta Monitoring Point is required as a target to respond to UDP probes and to initiate Diagnostics tests in the reverse direction
uses UDP in addition to ICMP
A dual-ended path is one that is monitored independently in both directions.
Network devices and end-stations generally respond to ICMP echo requests, so the target does not need to be a Monitoring Point.
sending a train of ICMP echo requests to the path target and measuring various aspects of the packets received in response
single-ended path is one that is monitored in only one direction: source to target
Single-ended vs Dual-ended paths
Considerations for Network Paths - AppNeta Documentation | AppNeta
increasing an alert threshold can also increase the number of alerts generated
before an alert is triggered.
Decrease the length of time
Decrease an alert threshold.
before an alert is triggered.
Increase the length of time
Increase an alert threshold
fix the issue causing the violation
may want to customize an alert profile such that alert condition thresholds are violated when they are outside of the normal operating range
Start with a basic system-defined alert profile
Adjust if too few alerts
Adjust if too many alerts
Adjust alert profile if necessary
Establish a baseline
set alert condition thresholds so that you are not alerted too much or too little
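A baseline can be turned into a threshold in many ways; one generic sketch (not APM's actual alert-profile mechanism) is mean plus k standard deviations, with k=3 and the sample values below assumed purely for illustration:

```python
# Turn a performance baseline into an alert threshold: collect samples
# during normal operation, then alert when a new value exceeds
# mean + k standard deviations. Generic sketch; k=3 is an assumption.
import statistics

def alert_threshold(baseline_samples: list[float], k: float = 3.0) -> float:
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    return mean + k * stdev

# Illustrative RTT baseline (ms) gathered during normal operation.
baseline_rtt_ms = [20.1, 19.8, 20.5, 21.0, 20.3, 19.9, 20.6]
threshold = alert_threshold(baseline_rtt_ms)
print(round(threshold, 1))
```

Raising k (or requiring a violation to persist longer before alerting) reduces alert noise; lowering it catches smaller deviations sooner — the same trade-off the adjustment steps above describe.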
Setting Good Alert Thresholds
Setting good alert thresholds - AppNeta Documentation | AppNeta
The Mean Opinion Score (MOS) is an estimate of the rating a typical user would give to the sound quality of a call. It is expressed on a scale of 1 to 5, where 5 is perfect. It is a function of loss, latency, and jitter. It also varies with voice codec and call load.
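The dependence of MOS on loss, latency, and jitter can be sketched with the ITU-T G.107 E-model R-factor. The weighting heuristics below (the doubled jitter term, the fixed 10 ms codec allowance, the 2.5-R-points-per-percent-loss penalty) are common illustrative approximations, not AppNeta's actual MOS computation, which also accounts for codec and call load:

```python
# Simplified MOS estimate based on the ITU-T G.107 E-model R-factor.
# Illustrative approximation only.

def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    # Common heuristic: weight jitter double and add ~10 ms for
    # codec/processing delay.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0

    # R-factor starts near 93.2 for a clean narrowband call.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0

    # Each percent of packet loss costs roughly 2.5 R points.
    r -= 2.5 * loss_pct
    r = max(0.0, min(100.0, r))

    # Standard R-to-MOS mapping from G.107.
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

print(round(estimate_mos(20, 2, 0.0), 2))   # clean path: near-toll quality
print(round(estimate_mos(300, 40, 3.0), 2)) # congested path: degraded score
```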
Measurements are taken every minute. Each iteration consists of multiple packet trains, each with a specific packet size and number of packets (up to 50) per train. Sending multiple packet trains reduces the effect of packet loss. Sending large packets guarantees queuing. If packets did not get queued, there would be no dispersion and capacity would be overestimated. The initial packet size is the path MTU (PMTU). PMTU is the largest packet size a path can handle without fragmentation. APM considers the best option for total capacity measurement to be a packet train with the largest packet size that experiences no packet loss.
To understand how this works, imagine two packets of equal size are sent back-to-back with no other traffic on the line. We’re interested in the distance between those packets by the time they reach the target. The packet dispersion is the time between the arrival of the last byte of the first packet and the last byte of the second packet.
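The dispersion-to-capacity arithmetic can be sketched as follows; this shows the core idea only, not TruPath's implementation, and the packet size and dispersion values are illustrative:

```python
# Capacity estimate from packet-pair dispersion: if two back-to-back
# packets arrive dt seconds apart, the bottleneck link could only have
# forwarded the second packet's bits in that interval.

def capacity_from_dispersion(packet_size_bytes: int, dispersion_s: float) -> float:
    """Return estimated bottleneck capacity in bits per second."""
    return packet_size_bytes * 8 / dispersion_s

# Example: 1500-byte packets arriving 120 microseconds apart imply a
# bottleneck of 1500 * 8 / 0.00012 bits/s = 100 Mbps.
mbps = capacity_from_dispersion(1500, 120e-6) / 1e6
print(f"{mbps:.0f} Mbps")
```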
TruPath uses packet dispersion analysis to calculate capacity
Bandwidth is the transmission rate of the physical media. It is the number quoted by your ISP but it does not take into consideration any protocol overhead or queuing delays. Capacity, on the other hand, takes these into consideration.
Available capacity is the part of the total capacity that is available for use. Utilized capacity is the part of the total capacity that is in use.
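The relationship between the three capacity metrics is simple arithmetic; the figures below are illustrative, not measurements:

```python
# Available capacity = total capacity - utilized capacity.
total_capacity_mbps = 100.0     # highest achievable rate on the path
utilized_capacity_mbps = 37.5   # traffic currently on the path
available_capacity_mbps = total_capacity_mbps - utilized_capacity_mbps
print(available_capacity_mbps)
```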
Total capacity is the highest transmission rate that you can achieve between a sender and a receiver
Delivery monitoring provides both data and voice loss metrics
For UDP traffic (VoIP, for example), loss may or may not have a significant effect on the conversation, depending on how much loss is experienced, as UDP packets are not retransmitted
heavy data loss can cause many retransmissions and can significantly impact throughput
traffic congestion along the network path, an overloaded network device, bad physical media, flapping routes, flapping load balancing, and name resolution issues
Packet loss, whether the packets are data or voice, is simply a measure of the number of packets that did not make it to their intended destination
Delivery monitoring provides both data and voice jitter metrics
Severe jitter is almost always caused by network congestion, a lack of QoS configuration, or misconfigured QoS.
In order to accurately recreate the media at the receiver end, those packets must arrive at a constant rate and in the correct order. If not, the audio may be garbled, or the video may be fuzzy or freeze
Jitter affects time-sensitive applications that use UDP but does not affect applications using TCP
Jitter, also known as packet delay variation, is a measure of variation in latency
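One simple way to quantify this variation is the mean absolute difference between consecutive latency samples. (RFC 3550 defines a smoothed interarrival jitter for RTP; the sketch below uses the plainer mean-difference form, with illustrative sample values:)

```python
# Jitter as variation in latency: mean absolute difference between
# consecutive one-way latency samples. Illustrative sketch only.

def mean_jitter(latencies_ms: list[float]) -> float:
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Five latency samples in ms; consecutive differences are 2, 1, 4, 5.
samples = [20.0, 22.0, 21.0, 25.0, 20.0]
print(round(mean_jitter(samples), 2))
```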
There are several ways latency gets introduced into your data stream. The first is propagation delay. This is the time it takes for a signal to propagate across a link between one device and another. In general, the farther apart the devices are, the greater the propagation delay. The second is queuing delay. Queuing delay is introduced when a network device is congested and can’t route incoming packets immediately upon ingress. Finally, there is handling delay. This is the time it takes to put a packet on the wire. Generally, this is negligible compared to the other two.
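Propagation delay can be estimated directly from distance. The figures below (signal speed of roughly 200,000 km/s in fiber, about 2/3 the speed of light, and a ~5,600 km New York–London great-circle distance) are illustrative assumptions:

```python
# Propagation delay for a long-haul fiber link, showing why this
# component grows with distance.

SPEED_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light

def propagation_delay_ms(distance_km: float) -> float:
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# New York to London is roughly 5,600 km great-circle; the real cable
# route is longer, but even the straight-line figure gives ~28 ms one
# way before any queuing or handling delay is added.
print(round(propagation_delay_ms(5600), 1))
```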
For time-sensitive applications that use UDP (for example, real-time voice and video streaming and Voice over IP (VoIP)), large latencies can introduce both conversational difficulty and packet loss.
the effect of latency is compounded due to the way its congestion control mechanism works. This results in a major decrease in TCP throughput
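One source of that throughput decrease follows from a basic bound: a TCP sender can have at most one window of unacknowledged data in flight per round trip, so throughput cannot exceed window size divided by RTT. The window sizes and RTTs below are assumed for illustration, not a model of any specific TCP implementation:

```python
# TCP throughput ceiling imposed by latency: at most one window of
# unacknowledged data per round trip, so throughput <= window / RTT.

def tcp_throughput_bound_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A 64 KB window over a 10 ms RTT can reach about 52 Mbps...
print(round(tcp_throughput_bound_mbps(65536, 10), 1))
# ...but the same window over a 100 ms RTT is capped near 5 Mbps,
# no matter how fast the underlying link is.
print(round(tcp_throughput_bound_mbps(65536, 100), 1))
```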
High latency values have a detrimental effect on applications that use TCP and time-sensitive applications that use UDP
Latency, the time it takes for a packet to go from a source to a target
RTT is the time it takes for a packet to go from a source to a target and back
round-trip time (RTT)
voice traffic has smaller payloads with wider packet spacing
Delivery monitoring provides tools to evaluate network performance for both data and voice traffic
seeing potentially different performance metrics than traffic with unmarked packets, you will also be able to see which hops (if any) are changing the markings
Specifying DSCP markings on test packets sent by TruPath
potential cause of poor quality with delay-sensitive traffic
DSCP markings are even changed as they pass through a hop
Because honoring these values is not mandatory, there can be variations in how traffic is handled at various hops along a network path through the internet
DSCP 0 (the default value) means forward with “best effort”, whereas DSCP 46 (EF) means “high priority expedited forwarding”
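On Linux, a DSCP marking can be applied to a socket's outgoing packets via the `IP_TOS` option: the six-bit DSCP value occupies the upper bits of the TOS/Traffic Class byte, so DSCP 46 (EF) is written as 46 << 2 = 184 (0xB8). This is a generic sketch, not how TruPath itself sets its markings:

```python
# Mark outgoing UDP packets with DSCP 46 (Expedited Forwarding).
import socket

DSCP_EF = 46  # Expedited Forwarding

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP sits in the upper six bits of the TOS byte, hence the shift.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Packets sent from this socket now carry DSCP 46, though hops along
# the path are free to re-mark or ignore it.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```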
Some DSCP markings have agreed upon meanings and others do not
categorized and handled appropriately by network devices
Differentiated Services Code Point (DSCP) values
different traffic types can be marked
delay-sensitive traffic can be allocated dedicated bandwidth and separate queuing on a network device so that it passes through the device more quickly than delay-insensitive traffic
traffic flows can be prioritized
QoS) is the mechanism used to manage packet loss, delay, and jitter by categorizing traffic types and then handling them appropriately
VoIP traffic is very sensitive to delay and jitter - the variation in packet delay
file transfers must not lose data but delays between packets in the transfer are not a problem
WWW browsing, file transfer, email, and video streaming services like YouTube and Netflix
used for applications where reliability is more important than reduced latency
a connection-oriented protocol that provides a reliable, ordered, and error-checked way to transfer data
UDP for continuous monitoring on dual-ended paths, to expose QoS marking changes during diagnostics tests, and as part of a traceroute for determining the route UDP packets take
real-time voice and video streaming and Voice over IP (VoIP)
typically used by time-sensitive applications where losing packets is preferable to spending time retransmitting lost packets
no guarantee of packet delivery, packet ordering, or duplicate protection
no error checking
connectionless with very little protocol overhead
UDP is a core internet protocol used to transport data
for determining the route
and as part of a traceroute
ICMP is also used to expose QoS marking changes
Delivery monitoring uses ICMP echo request and echo response packets (“ping” packets) for the majority of its continuous monitoring
ICMP is a control message protocol used by network devices to send error messages and operational information
ICMP, UDP, and TCP
ICMP, UDP, and TCP
TruPath uses three common protocols
target Monitoring Point can be one of your AppNeta Enterprise Monitoring Points or an AppNeta WAN Target
from source to target and from target to source
get a more accurate picture of your network performance
In the dual-ended configuration, one Monitoring Point is the source and another is the target and UDP packets (rather than ICMP packets) are used for monitoring
cannot be detected
differences in capacity
network characteristics that are direction dependent
only one Monitoring Point is required
and ICMP echo replies are returned
ICMP echo requests are sent to the target
(the target) can be any TCP/IP device
It acts as one endpoint (the source)
In the single-ended configuration, a single AppNeta Monitoring Point is required
Single-ended and dual-ended network paths
For very slow speed links or networks with other restrictions like small maximum MTU size, TruPath automatically adjusts its traffic loads to minimize network impact even further
Because the packet sequences are very short, the overall load on the network is kept very low
Commonly used packet sequences are 1, 5, 10, 20, 30 and 50 packets in length
the route taken by all protocol types (ICMP, UDP, and TCP) is determined
As part of the diagnostic test
as many as 400-2000 packets can be sent in a series of packet trains
If a network dysfunction is detected (for example, a higher than acceptable data loss) TruPath first confirms the dysfunction is present (by sampling every six seconds for ten samples) and then, once it is confirmed, automatically shifts to DPA mode and runs a diagnostic test which probes not only the target, but all devices on the network path from the source to the target.
CPA mode runs continuously, and every 60 seconds places roughly 20-50 packets onto the network
Continuous Path Analysis™ (CPA) and Deep Path Analysis™ (DPA)
To obtain this information
it can determine if there are Quality of Service (QoS) changes along the network path
It uses information like the time the packets take to go from a source to a target and back, the delay between packets on their return, packet reordering, and the number of packets lost, to directly measure key network performance metrics (round-trip time (RTT), latency, jitter, and data loss), and to infer others (total and utilized capacity)
waits for the replies
probes a network using short bursts of packets
the heart of Delivery monitoring
The lowest capacity device on the network path between the two endpoints determines the capacity of that path.
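A minimal illustration of this bottleneck principle, with assumed per-hop capacities:

```python
# End-to-end path capacity is set by the slowest hop along the path.
# Hop names and capacities (Mbps) below are illustrative assumptions.

hop_capacities_mbps = {
    "source LAN": 1000,
    "office router uplink": 100,
    "ISP last mile": 50,
    "provider backbone": 10000,
}

# Whichever hop forwards most slowly caps the whole path.
path_capacity = min(hop_capacities_mbps.values())
print(path_capacity)
```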
The rate at which a given queue can repeatedly fill and drain without data loss is effectively the maximum capacity of the device for traffic using that queue
a queue fills up, any additional packets destined for that queue are dropped - causing data loss
causes a delay between packets when they are received at their destination
wait their turn to be forwarded
amount of traffic passing through a device at a given time, and the priority of that traffic, affects the amount of time a given packet will be queued (if at all) on that device
they encounter a number of network devices (for example, routers, switches, firewalls, load balancers, etc
As data packets flow through a network
IP Networking and TruPath
IP Networking and TruPath - AppNeta Documentation | AppNeta
Global Monitoring Points are used only for Delivery and Experience monitoring (not for Usage monitoring)
Global Monitoring Points are Container-based Monitoring Points owned by you but managed by AppNeta
used for Delivery, Experience, and Usage
Enterprise Monitoring Points (available as hardware, as containers, as virtual machines, or as software) are owned and managed by you
APM-Public Cloud is a SaaS application deployed on the public cloud. APM-Private Cloud is a software system that can be deployed either on AppNeta-supplied hardware or on your own hardware.
available in two formats
The AppNeta solution consists of two main components: APM, and AppNeta Monitoring Points. APM analyzes and reports on performance data supplied by Monitoring Points located throughout your network.
answers questions like: Which applications are being used at a given location? Which applications are consuming the most bandwidth? Which applications is a given user connecting to? Which users are consuming the most bandwidth? How many users are using a given application?
shows the amount of available bandwidth consumed
determine which applications are being used and who is using them
Usage monitoring enables you to see how bandwidth at a given location is being devoted to particular applications, hosts, and users.
This method answers questions like: Are there particular applications that are running slowly? Are there particular locations that are slow? Are there problems connecting to an application? Is an application available and responding as expected?
measures how long the application takes to respond
simulate machine-to-machine interactions
makes HTTP requests periodically to a web app’s API
This method answers questions like: Are there particular applications that are running slowly? Are there particular locations that are slow? Is the slowness I am experiencing an application issue, a network issue, or a browser issue? Are there problems connecting to an application? Is the issue within an application? For application issues, which part of the application is slow or unresponsive?
Each measurement is broken down by the amount of time taken by the browser, the network, and the server running the application
Experience monitoring enables you to visualize application performance experienced by users at a given location
Delivery monitoring helps you answer questions like: Where on the network path is the problem occurring? Are there particular routes that are slow? What routes are down and when did they go down? How much capacity am I being provided by my ISP? How much of the available capacity am I using?
continuous path analysis (CPA)
Delivery monitoring enables you to visualize network performance and to determine where network problems are occurring
provides this visibility using three distinct mechanisms we call Delivery, Experience, and Usage monitoring
particularly useful if you are using cloud-based applications, or are running any part of your network across the internet
it enables you to: determine the source of network problems; determine how users are experiencing application performance; determine how network bandwidth is being utilized per application and per user; determine whether network and application service providers are meeting their service level agreements; and plan for changes in capacity requirements
APM Overview - AppNeta Documentation | AppNeta