My Web Markups - Chris Ryan
  • Packet shaping is similar to on-ramp traffic lights that pace cars onto a highway. The cars are still allowed to queue up on the ramp, and the highway is protected from flooding. A rate-limiting queue, by contrast, is like a tank that blows up cars when too many are lined up on the ramp.
  • It is much better to implement a rate-limiting server, also called a Packet Shaper. This method relies on spacing packets on the downstream link. Packets are not dropped unnecessarily, and rate limiting cannot be bypassed with upstream traffic shaping (see the sketch after this list).
  • Packet shaping
  • Rate-limiting queues are found in routers that are implementing Cisco’s CAR feature. We have also detected rate-limiting queues in African and Eastern-European Frame Relay Networks.
  • You may be able to adjust upstream flows to bypass the effect of rate-limiting queues. By pacing packets, the downstream link can be saturated. Sending many simultaneous streams of data can sometimes fill the downstream network.
  • Dropping packets to limit data rates is a very bad idea. Remember that users create traffic. Simply dropping packets does not make the data go away; eventually the data is retransmitted. Rate-limiting queues can actually increase overall traffic, and may drive larger networks to the point of saturation.
  • As long as combined upstream data rates never exceed the capacity of the server, a rate-limiting queue has no effect. But usually this is not the case. As soon as 4 or 5 packets are sent in unison, the queue is filled and packets are dropped.
  • Rate-limiting queues are a method of limiting traffic by reducing the size of the queue buffer.
  • Rate-limiting queues
  • A clean link without media errors will only drop packets at points in the network where queues are full. Queue buffer sizes vary; however, 64Kbytes is a common size. Much larger, and a network may experience delays that will cause TCP to retransmit data unnecessarily. Much smaller, and queues can cripple TCP efficiency.
  • The server sends data to the routers and switches at 100Mbps. The switches and routers then must queue the data while it is being fed to the client at 10Mbps. In this environment the network will be better protected if the workstations are set to 100Mbps rather than 10Mbps.
  • All servers and workstations have 10/100Mbps NIC cards. A decision was made to tune the speed of the network cards to protect the load on the network. All workstations were to be set to 10Mbps, and all servers would be set to 100Mbps.
  • For example, a user who is using the internet usually will generate more traffic on a faster link than on a slow link. However, an individual who uses a network for business purposes will have a predefined amount of traffic that must be passed over the network. A hospital worker must process a fixed number of patients. The number of transactions that a bank must perform is based on the number of customers that enter their branches. If network traffic is predefined, implementing rate limiting will not reduce the amount of traffic that will pass over a network. In fact, rate limiting could actually increase the total amount of traffic on a network.
  • Some may say that traffic is generated by computers. But, even more basically, traffic is generated by people. Individuals request information, and that information is passed over the network. The amount of traffic generated by an individual depends on what that individual is using the network for.
  • Network Traffic
  • Occasionally the full capacity of a network is not made available to end users. Usually rate-limiting is implemented to protect networks or servers. However, often the opposite happens. Rate-limiting algorithms can actually break an otherwise operational network.
  • Advanced Analysis "Rate-limiting Behavior Detected"
17 annotations
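To make the queue-versus-shaper contrast above concrete, here is a minimal Python sketch (illustrative only, not AppNeta code; class names, buffer depths, and pacing values are invented):

```python
# A minimal sketch (not AppNeta code) contrasting a drop-tail rate-limiting
# queue with a packet shaper. Units and sizes are abstract.
from collections import deque

class DropTailQueue:
    """Drops arriving packets once the buffer is full ("blows up cars on the ramp")."""
    def __init__(self, max_depth):
        self.buf = deque()
        self.max_depth = max_depth
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.buf) >= self.max_depth:
            self.dropped += 1      # packet is lost and must be retransmitted by TCP
        else:
            self.buf.append(pkt)

class PacketShaper:
    """Buffers every packet and releases them at a fixed pace ("on-ramp traffic light")."""
    def __init__(self, release_interval):
        self.buf = deque()
        self.release_interval = release_interval  # seconds between departures
        self.next_release = 0.0

    def enqueue(self, pkt, now):
        # Packets queue on the "ramp"; each departure is spaced on the downstream link.
        self.next_release = max(self.next_release, now) + self.release_interval
        self.buf.append((pkt, self.next_release))

# A burst of 10 simultaneous packets into a 5-packet drop-tail queue loses half;
# the shaper keeps all 10 and spaces their departure times instead.
q, s = DropTailQueue(max_depth=5), PacketShaper(release_interval=0.01)
for i in range(10):
    q.enqueue(i)
    s.enqueue(i, now=0.0)
print(q.dropped, len(s.buf))   # -> 5 10
```

A drop-tail queue turns bursts into loss and retransmissions; the shaper converts the same burst into delay, which is the behavior these annotations recommend.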
  • over time, these measurements should be comparable but not equivalent.
  • ‘Application latency’ and ‘server response’ are calculated in the same way: the total response time (GET to first byte in Usage monitoring, and SYN to first byte in Experience monitoring) minus the network component (see the sketch after this list).
  • ‘application latency’ in Usage monitoring and ‘server response’ in Experience monitoring are comparable but not equivalent.
  • same values in each direction
  • Calculated application latency applies equally to the inbound and outbound directions
  • Whenever flow history is summarized or collated for presentation, the average of the average and the max of the max are taken
  • Application Usage Monitoring Point Details charts provide two application latency values: the average of all matching flows, and the maximum of all matching flows
  • For any flow where network latency is much greater than application latency, application latency becomes inconsequential.
  • The TCP handshake must be seen so that server network delay can be calculated.
  • includes server network delay, it must be subtracted to get the application latency
  • In practice, application delay is the time between the last request byte sent by the client to the first response byte received by the client
  • Application latency is the time the server takes to respond to a request from the client.
  • ‘network latency’ in Usage monitoring is different than ‘network response’ in Experience monitoring
  • Again, the handshake is only present in the initial flow
  • Mid-stream TOS/DSCP changes
  • handshake is only present in the initial flow.
  • Long-lived sessions (like SSL)
  • Two cases in which the handshake will not be present
  • network latency is only available for TCP flows in which the handshake is seen
  • Network latency is calculated using the TCP handshake because these packets are processed by the client/server only up to Layer 3, so we can assume negligible host delay
  • Network latency is the time it takes a packet to travel between a client and a server
  • The charts contain columns showing network and application latency
  • Outbound traffic is that sent from a host on a local subnet to an external destination
  • Inbound traffic is that coming from an external source to a host on one of the local subnets you identified during setup
  • you should be aware of
  • a few concepts and calculations
  • Traffic direction / Application and network latency calculations
27 annotations
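A minimal sketch of the latency arithmetic described in the notes above; the field names and millisecond values are invented, but the subtraction and the average/maximum summarization follow the annotations:

```python
# application latency = total response time (GET/SYN to first byte) minus the
# network component, per the notes above. Numbers here are hypothetical.

def application_latency(total_response_ms: float, network_ms: float) -> float:
    """Total response time minus the network component (server network delay)."""
    return max(total_response_ms - network_ms, 0.0)

# Per-flow measurements for one summarization interval (invented values).
flows = [
    {"total_ms": 120.0, "network_ms": 35.0},
    {"total_ms": 90.0,  "network_ms": 30.0},
    {"total_ms": 200.0, "network_ms": 40.0},
]
app = [application_latency(f["total_ms"], f["network_ms"]) for f in flows]

# The Monitoring Point Details charts report the average of all matching
# flows and the maximum of all matching flows:
avg_app = sum(app) / len(app)   # -> 101.67 ms
max_app = max(app)              # -> 160.0 ms
print(round(avg_app, 2), max_app)
```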
  • It is good practice to rename these files to match the hostname but leave the .qcow2 and .iso extensions (see the sketch after this list).
  • Copy the KVM base image (.qcow2) and the config image (.iso) to /var/lib/libvirt/images/ on the host system.
  • Helpful commands - The following commands can be used on the host to manage your KVM virtual machines.
  • The AppNeta software required for the v35 includes a base image (.qcow2) and a config image (.iso). These must be downloaded and then copied to the KVM host.
  • There are two scripts used for v35 setup that must be downloaded and copied to the KVM host:
    vk35tool.py - Script to create a v35 and configure its interfaces.
    vk35hook.sh - Hook script to automatically connect a mirror port to an OVS bridge.
  • Intel X540, Intel X550, Intel 82599
  • For 10 Gbps: A 10Gbps NIC with SR-IOV support.
  • Open vSwitch (OVS) software must be installed
  • The OVS bridge is required only if you want to use a VLAN
  • For 1Gbps: A Linux bridge or an OVS bridge to bind the interface on the v35 to the physical interface on the host
  • Has up to four Ethernet NICs installed on the KVM host.
  • CentOS-7.x, RHEL 7.x: qemu-kvm, qemu-img, libvirt, libvirt-python, libvirt-client, virt-install, bridge-utils. Ubuntu 16.04.3 LTS: qemu-kvm, libvirt-bin, ubuntu-vm-builder, bridge-utils, virtinst.
  • Runs CentOS-7.x, RHEL 7.x or Ubuntu 16.04.3 LTS.
  • Capable of running a guest virtual machine with at least the v35 requirements.
  • KVM runs on one of the following Linux versions: CentOS-7.x, RHEL 7.x or Ubuntu 16.04.3 LTS.
  • runs on a host machine that has the KVM hypervisor installed
  • v35 virtual Monitoring Point
17 annotations
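A minimal sketch of the copy-and-rename step described above. The image filenames and hostname are hypothetical; the /var/lib/libvirt/images/ destination comes from the notes:

```python
# Copy the v35 base image (.qcow2) and config image (.iso) into the libvirt
# image directory, renamed to match the hostname but keeping the extensions.
# Filenames and hostname below are invented examples.
import shutil
from pathlib import Path

IMAGES_DIR = Path("/var/lib/libvirt/images")

def stage_v35_images(base_qcow2: str, config_iso: str, hostname: str) -> None:
    shutil.copy(base_qcow2, IMAGES_DIR / f"{hostname}.qcow2")
    shutil.copy(config_iso, IMAGES_DIR / f"{hostname}.iso")

# e.g. stage_v35_images("v35-base.qcow2", "v35-config.iso", "v35-branch-01")
```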
  • Rate limit - AppNeta limits the number of API requests that can be made over a given period of time. Currently the rate limit is 50 requests every 10 seconds (see the sketch after this list).
  • Access Token authentication requires an access token that you create on APM. Basic authentication requires an APM username and password. In either case the authentication information is passed as part of the API request.
  • The APM API supports Access Token authentication (recommended) and Basic authentication
  • When using the interactive APM API interface you will typically use the current API version (for example, V3). At times, new or changed endpoints will become available for early access prior to being promoted to the latest API version
  • Care should be taken with PUT, POST, and DELETE requests as they affect your live data.
  • API version
  • Click Try it out!.
    The Curl field shows the curl command string (except for the authentication information) that can be used from the command line to produce the same result. See Using curl for more information.
    The Request URL field shows the URL the request was made to.
    The Response Body field shows the JSON formatted response sent by APM. In this case, all network paths within the selected organization.
    The Response Code is the response code sent by APM. Successful responses have a “2xx” code. See Error codes for a list of error codes and their meaning.
    The Response Headers is the header information sent by APM.
  • Log in to APM. If you have more than one organization, change the organization to the one you are interested in. Navigate to > Explore API. The interactive APM API interface appears. Navigate to path > GET /v3/path.
  • provides detailed documentation about each endpoint and its parameters.
  • The best place to start learning and experimenting with the APM API is through its interactive interface. It provides a good place to stage API calls before implementing them in your integration software
  • Getting started with the APM API
  • In addition to the APM API, there is an API for administering Enterprise Monitoring Points (EMPs) - the Admin API.
  • API also provides the ability to configure and control APM
  • Results are delivered in a lightweight JSON format
  • The APM API makes data collected and generated by AppNeta Performance Manager (APM) available for analysis, reporting, and presentation in third-party systems
  • APM API
16 annotations
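A minimal sketch of a rate-limit-aware APM API client based on the notes above. The base URL and the Authorization header format are assumptions to verify against your APM documentation; the /v3/path endpoint, JSON responses, token authentication, and the 50-requests-per-10-seconds limit come from the annotations:

```python
# Sliding-window throttle around the documented 50 requests / 10 seconds limit.
# BASE_URL and the "Token" header scheme are assumptions, not confirmed API details.
import time
import requests

BASE_URL = "https://your-apm-instance.example.com/api"   # hypothetical
TOKEN = "your-access-token"                               # created in APM

MAX_REQUESTS, WINDOW_S = 50, 10.0                         # documented rate limit
_timestamps = []

def throttled_get(endpoint, **params):
    """GET an APM endpoint, sleeping as needed to stay under 50 req / 10 s."""
    now = time.monotonic()
    _timestamps[:] = [t for t in _timestamps if now - t < WINDOW_S]
    if len(_timestamps) >= MAX_REQUESTS:
        time.sleep(WINDOW_S - (now - _timestamps[0]))
    _timestamps.append(time.monotonic())
    resp = requests.get(
        f"{BASE_URL}{endpoint}",
        headers={"Authorization": f"Token {TOKEN}"},      # assumed header format
        params=params,
        timeout=30,
    )
    resp.raise_for_status()          # successful responses have a 2xx code
    return resp.json()               # results are lightweight JSON

# e.g. paths = throttled_get("/v3/path")   # all network paths in the org
```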
  • Copy the token and save it. Important: It will not appear again.
  • In the Select Organizations section, check any organizations you want the token to have access to
  • In the Dynamic Access section, check Add all organizations and any in the future if you want the token to access any organizations the user has access to now and in the future (as new organizations are added or removed).
  • Create a token - To create an access token: Navigate to > Manage Access Tokens.
  • There are a few limitations to be aware of:
    Token generation is available to all users except those with custom roles.
    Token permissions are less than or equal to those of the user that created it.
    A token cannot be modified once it is created. You must revoke it then create a new one.
    If the user that created a token is deleted from APM, any tokens they created are immediately revoked.
    Tokens cannot be used to call the observer API endpoint for creating, viewing, or deleting an observer URL. To access the observer API endpoint, use basic authentication or use the interactive API interface.
  • Limitations
  • API Access Tokens are easily generated within APM and are revocable.
  • Another benefit is that single sign-on users do not require a local account to utilize the API.
  • A user can generate a token and define the scope of access available to that token to a degree that is equal to or less than that user’s own scope of access.
  • Access management is improved because you can generate access keys granting API access within various scope and permission levels without requiring the creation of additional APM users
  • Security is improved because username and password do not need to be encoded into scripts or applications that access the API. Also, the authentication token does not carry any user information.
  • they provide greater control over API access management compared with Basic authentication (i.e., username and password).
  • provide a secure method of accessing the APM API from a script or from an application.
  • API Access Tokens
14 annotations
  • Event data
  • Configuring APM event integration consists of specifying the URL of the server you want to send the events to and the type of events you want to send there (see the sketch after this list).
  • Configure APM event integration
  • You can use either HTTPS or HTTP to communicate between APM and your server, though we do not recommend using HTTP. HTTPS is recommended because it provides encrypted communications using Secure Socket Layer (SSL). SSL communication requires that your server has an SSL certificate. If you are using APM-Public Cloud (as opposed to APM-Private Cloud), you’ll need an SSL certificate from a recognized Certificate Authority (CA) as self-signed/untrusted certificates are not supported.
  • Prerequisites
  • Limitations - Event integration does not currently support Usage Events.
  • APM event integration is configured using the observer API endpoint in the APM API.
  • There is a different JSON object for each event type. The properties within each object are described in Event data.
  • Each event that is sent consists of data in JSON format.
  • Network change events - These are notifications that alert you to changes in the sequence of networks (BGP Autonomous Systems) on the path between a source and a target.
  • Web application events - These are notifications that a web path alert profile was violated or cleared
  • Service quality events - These are notifications that an alert condition was violated or cleared.
  • Sequencer events - These are notifications that APM lost or reestablished connectivity with a Monitoring Point
  • Test events - These are notifications that a diagnostic test has completed (or was halted).
  • event integration functionality provides you the ability to have APM events sent to a server or servers of your choice so that you can process the events as required.
15 annotations
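A minimal sketch of a receiving server for event integration, assuming APM delivers each JSON event object via an HTTP POST to the URL you configure (the port and dispatch logic are invented; production deployments should use HTTPS with a CA-signed certificate, per the prerequisites above):

```python
# Tiny receiver for APM event integration. The POST delivery model is an
# assumption; the one-JSON-object-per-event-type shape is from the notes above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        # Each event type (service quality, sequencer, test, web application,
        # network change) arrives as its own JSON object -- dispatch on it here.
        print("received event:", event)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), EventHandler).serve_forever()
```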
  • Add a related network path
  • When creating or editing a packet capture configuration, you can associate it with one or more network paths using the Related Network Paths feature. This enables you to easily see all packet captures related to a given network path and to have the packet captures appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart).
  • Associate a packet capture with a network path
  • Add comments to a packet capture - To add comments to a packet capture: Navigate to Usage > Packet Captures. Click the name of the capture you are interested in. Click the Overview tab. In the Comments field, click the edit link. Add your comments. Click OK. Your comments are added to the capture overview.
  • Packet Capture uses the following Wireshark filters to provide alert and warning statistics:
  • On physical Monitoring Points, sorting by timestamp will produce the correct order. Timestamps are taken before the packets are split into hardware receive queues and thus respect the absolute order of the packets, which means that sorting a capture file (.pcap) by time will produce a better picture of packet ordering than sorting by packet index (see the sketch after this list).
  • Between flows (different Layer 3 source/dest addresses), packets may be reordered. Two flows may not be processed by the same receive queue, which results in nondeterministic ordering when they’re inserted into the final capture file
  • Within a given flow (same Layer 3 source and destination IP addresses), packets will not be reordered. Every packet in a flow will be processed by the same hardware receive queue and thus fed into the capture file (.pcap) in order.
  • Note the following regarding packet order:
  • Navigate to Usage > Packet Capture. Click the name of the capture you are interested in. The capture results are displayed on a number of tabs:
    Overview - provides high-level capture details.
    Alerts and Warnings - displays the number of packets in the capture that match a predefined set of display filters that identify notable network behavior that you may be interested in.
    Protocol Breakdown - displays the number of packets, and the number of bytes in those packets, for each protocol in the capture.
    Conversations - displays the network conversations (traffic between two specific endpoints for a protocol layer) with the highest total number of bytes.
    Related Network Paths - lists the network paths associated with the capture. Click a path to display all of the captures related to that path.
  • View packet capture results
  • A capture can be stopped automatically as part of a stop condition specified when it is started or scheduled, or it can be stopped manually. Stopping a packet capture will not stop a packet capture schedule.
  • Stop a packet capture
  • To use an existing packet capture configuration as a template: Navigate to Usage > Packet Capture. For the capture you want to repeat, select > Start Again.
  • Navigate to Usage > Packet Capture. Click + Start New Capture.
    In the Name field, specify a name for the capture.
    In the Monitoring Point dropdown, select the Monitoring Point to capture from.
    In the Capture Interface dropdown, select the Monitoring Point capture interface to use.
    In the Packet Limit field, specify the maximum number of bytes to store of each captured packet. Default: 96 bytes. Range: 68 - 65,535. Deselect this option to capture entire packets.
    In the Capture Filter field, use a filter to specify which packets are captured. The filter uses libpcap syntax. For examples, click the icon. Filtering only the traffic you care about will reduce the capture size. This provides a longer captured duration, and it ensures that the capture analysis is relevant to the problem you are trying to solve. Leave the field blank to capture all packets.
    In the Capture Stop Condition(s) field, specify when to stop the capture.
    In the Related Network Paths field, specify network paths associated with the capture to have the path name appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart), and to filter completed packet captures by related network paths.
    Click Start. The capture is started.
  • To start a new packet capture:
  • Prior to starting a packet capture you must set up for packet capture. You can then start a new capture, use an existing capture configuration as a template to start a capture, or you can edit an existing packet capture configuration. Capture files are capped at 1GB. In addition, regardless of any stop conditions specified, capturing ends when the space remaining on the Monitoring Point is too low: For full-packet captures (where maximum of 1500 bytes per packet are captured), capturing ends when less than 10MB remains on the device. For partial-packet captures (where less than 1500 bytes per packet are captured), capturing ends when less than 1MB remains on the device.
  • The captured packets are packaged into a standard file format and securely uploaded to AppNeta Performance Manager (APM). You can view the results there, or you can download the capture file to analyze using third-party software (for example, Wireshark).
  • Packets from AppNeta monitoring, assessments, diagnostic tests, and Monitoring Point communication are not captured.
  • captures packets based on user-defined parameters that include which packets to capture, how much of each packet to capture, and when to start and stop capturing
  • Packet Capture is a way of setting up a Monitoring Point to copy and store IP packets that it sees on its Usage monitoring interface
  • Managing Packet Captures
22 annotations
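A minimal sketch of the timestamp-sorting advice above, using the third-party scapy package (any pcap library would do) on a capture file downloaded from APM:

```python
# Re-order a downloaded capture by timestamp rather than packet index, since
# timestamps are taken before packets are split into hardware receive queues.
# The filenames are placeholders.
from scapy.all import rdpcap, wrpcap  # pip install scapy

packets = rdpcap("capture.pcap")                     # downloaded from APM
ordered = sorted(packets, key=lambda pkt: pkt.time)  # absolute packet order
wrpcap("capture-ordered.pcap", ordered)
```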
  • To save a view of the page: On all but the Traffic Summary page, click > Save Current View (next to the page title) and enter the view details. You can view a list of saved views once they have been created
  • To view details for a particular host: On the Top Sources, Top Destinations, or Top Hosts pages, in the table below the chart, click the host you are interested in. A page containing details for the selected host is opened.
  • (8) To select a recent filter: In the 10 Recent Filter(s) pane (bottom right), click the recent filter you want to see. A new tab is opened.
  • (7) To select a different time range: In the Filter Options pane (center right), specify the time range and click Apply Filter
  • (6) To select a different Monitoring Point/interface: In the Monitoring Point pane (top right), select a Monitoring Point/interface from the dropdown
  • (5) To show both inbound and outbound traffic: In the chart pane, click View Options (top right) and select Show as Inbound / Outbound Traffic
  • (4) To select a different traffic parameter to display: In the chart pane, click View Options (top right) and select the traffic parameter in the dropdown
  • (3) To select a new chart: In the chart pane, click the chart title and select a new chart from the dropdown
  • (2) To drill down on a chart: Hover over the chart and click a blue pip for the detail you are interested in
  • (1) To open a detail chart (containing a chart and a details table) in a new tab: In the top pane, click one of the chart links
  • The Application Usage Monitoring Point Details page is displayed. The top pane allows you to select from a number of charts. Selecting a chart will open a new tab. The Traffic Summary pane provides a summary of various traffic parameters for the selected Monitoring Points over the selected period. The chart pane displays one of the selected charts:
    Top Applications - displays the applications generating the most traffic.
    Top Categories - displays the application categories generating the most traffic.
    Top QoS - displays the QoS values seen most frequently.
    Top Sources - displays the hosts sending the most traffic.
    Top Destinations - displays the hosts receiving the most traffic.
    Top Hosts - displays the hosts sending and receiving the most traffic.
  • Application Usage Monitoring Point Details charts
  • (1) To change the traffic parameter the charts display: In the 24 Hour column heading, select the traffic parameter to display in the charts. These include:
    Traffic Rate - the average traffic rate (bits/second) in the measurement period.
    Traffic Volume - the total traffic volume (bytes) in the measurement period.
    Number of Packets - the total number of packets in the measurement period.
    Flows / second - the average number of flows in progress per second in the measurement period.
    None - no charts appear.
  • Navigate to Usage > Monitoring Points. There is a 24 Hour chart displayed for each Monitoring Point/interface.
  • 24 Hour charts
  • (5) To view detailed charts for a Monitoring Point: In the Top Applications pane, click the Monitoring Point name/port.
  • (4) To view application traffic details: In the Top Applications pane, hover over the application name or click it for additional detail
  • (3) To filter what is being displayed in the Top Applications pane: In the Top Applications pane, adjust the filter fields in the top right of the pane.
  • (2) To select the Monitoring Points to display: In the Traffic Summary pane, click the Locations link.
  • (1) To adjust the time period to display: In the top pane, select a zoom link, use the calendar, or use the slider.
  • The Top Applications pane provides a graphic of the relative traffic volume for the top applications against selected Monitoring Points.
  • The Traffic Summary pane provides a summary of various traffic parameters for the selected Monitoring Points over the selected period.
  • Navigate to Usage > Summary
  • Application Usage Summary page provides an overview of traffic (by application) across selected Monitoring Points for a selected timeframe
  • Application Usage Summary page
  • Saved Views - provides a way to save a view of an Application Usage Monitoring Point Details chart so that it doesn’t have to be reconfigured each time you want to run it.
  • Application Usage Monitoring Point Details charts - provides a variety of highly customizable and detailed charts showing results from a selected Monitoring Point Usage monitoring interface.
  • 24 Hour charts - provides a view of a selected traffic parameter (for example, Traffic Volume) for each Monitoring Point.
  • Application Usage Summary page - provides an overview of traffic (by application) across selected Monitoring Points for the selected timeframe (up to one day).
  • View Usage monitoring results
30 annotations
  • Diagnostics
  • Observations
  • QoS
  • Severity
  • Hop table
  • a number of metrics for the path
  • source, the target, and all hops between the source and target
  • Hop diagram
  • The Data Details and Voice Details tabs provide per-hop details
  • Host name table
  • Recommendations
  • provides diagnostic messages that describe issues detected
  • Diagnostics section
  • provides a number of metrics for the path
  • Hop diagram - shows the source, the target, and all hops between the source and target.
  • The Summary tab provides a high level summary of the information obtained during the diagnostic test
  • In addition to being triggered automatically when an alert condition is violated, diagnostics are also triggered automatically when a network path is created. In addition they can be triggered manually.
  • If more than five network paths have diagnostics tests started due to the same condition, to the same target, in the same direction, and at a similar time, only five of these tests are run
  • If the condition that triggered a diagnostic test is cleared before the test has run, the test is not run
  • Note that the path status changes to “testing” only when the test starts, not while it is queued.
  • Diagnostics initiated manually, or triggered when a path is created, are always run
  • events are marked on the Capacity chart
  • Tests triggered when the queue is full are not run
  • a limited number of tests may be queued
  • Within a parent or child organization, the number of concurrent diagnostic tests triggered by a violation condition is subject to a limit
  • When the return path is through more than a single NAT device.
  • When the firewall in front of the source is blocking inbound UDP - In this instance, the inbound diagnostics cannot be forwarded to the source. Diagnostics revert to ICMP.
  • When the target and one or more other Monitoring Points are behind a firewall - In this instance, APM cannot uniquely identify the target.
  • When the source connects to APM via the AppNeta relay server - In this instance, APM does not know the public IP address of the source.
  • There are circumstances in which inbound diagnostics (target-to-source), cannot be run:
  • If a hostname is used to identify the target then a hostname must also be used to identify the source. An IP address cannot be used.
  • If they are not, single-ended diagnostics are run in the outbound direction, and inbound diagnostics are not available.
  • Diagnostics are only available for dual-ended paths when the source and target Monitoring Points are in the same organizational hierarchy
  • Diagnostics on dual-ended paths are subject to some restrictions:
  • For dual-ended paths, the diagnostics tests in both directions (source-to-target and target-to-source) use ICMP to test path hops and UDP to test the endpoints.
  • For single-ended paths, the diagnostics tests use ICMP for testing both the path hops and the path target
  • Diagnostics help you to pinpoint problematic hops using the same packet dispersion techniques as CPA, but rather than sending test packets only to the target, test packets are sent to all hops on the network path in addition to the target.
37 annotations
  • To use Rate-limited monitoring: Open a support ticket to enable Rate-limited monitoring. Once enabled, the Rate Limiting Capacity chart appears on the path performance page.
  • As Rate-limited monitoring generates network load, it must be used with caution. To that end, it is not enabled by default and must be enabled by AppNeta Support.
  • These bursts, though brief, have more impact on the network than AppNeta’s standard monitoring techniques, but are required in order to trigger the rate limiting mechanism.
  • The rate at which the Monitoring Points receive the packet bursts determines the rate-limited capacity.
  • The source sends packet bursts outbound and the target sends packet bursts inbound.
  • When Rate-limited monitoring is enabled, APM instruments the network path with controlled bursts of packets at the total capacity rate in order to trigger the network’s rate-limiting mechanism
  • To run a basic PathTest using UDP:
  • Tests are bidirectional. The source sends packets and the target simply replies.
  • The target does not need to be a Monitoring Point
  • considered control traffic rather than true data traffic
  • For PathTest using ICMP
  • Tests must use port numbers not blocked by a firewall.
  • Tests must use port numbers not in use by the Monitoring Point (e.g. 80, 443)
  • Tests are unidirectional
  • The target must be a Monitoring Point or an AppNeta WAN Target
  • Test with multiple PathTest streams to better saturate the network with TCP traffic
  • As TCP uses flow control and retransmissions (unlike UDP and ICMP), the PathTest results can show a lower capacity than with UDP. The results, however, will show the true capacity to carry TCP traffic.
  • For PathTest using TCP: PathTests using TCP are the best way to test the capacity of networks carrying TCP traffic
  • Tests are unidirectional. Inbound tests are started after outbound tests complete.
  • The target must be a Monitoring Point or an AppNeta WAN Target.
  • it doesn’t have flow control and retransmissions like TCP.
  • UDP is considered true data traffic
  • PathTests using UDP are the best way to test general network capacity.
  • For example, on physical Monitoring Points the multiple is 20. So 20 packets are sent on the wire for every one seen in a packet capture.
  • PathTest sends multiple copies of a packet “on the wire”
  • The number of packets seen in a packet capture will be less than the number actually sent on the network by PathTest.
  • gradually increase the number of streams (e.g., 5, 10, …) and decrease the bandwidth proportionately (e.g., 200Mbps, 100Mbps, …) until UDP tests and TCP tests show the same results (see the sketch after this list)
  • Set bandwidth to maximum expected capacity (e.g., 1000Mbps) and run a test
  • This can be addressed by running more than one stream when using TCP
  • can also be due to the distance between source and target
  • due to TCP’s flow control mechanism as well as the effect of packet loss and retransmissions on TCP
  • There can be a large discrepancy between a test using UDP and the same test using TCP
  • Networks can be provisioned asymmetrically
  • For example: data vs voice, protocol used (ICMP, UDP, TCP), and QoS type
  • Networks treat different traffic types in different ways.
  • A PathTest can only be sourced from a Monitoring Point’s default interface
  • By default, only Advanced and Organization Admin user roles can run a PathTest.
  • When separate bandwidth is allocated to data and voice, use PathTest to stress the network with data and voice traffic.
  • Generate traffic with a QoS DSCP marking to verify or stress a traffic engineering strategy. This can be used to confirm that traffic with a given DSCP marking is given priority over best-effort traffic.
  • Verify that a link can achieve the capacity provisioned by your ISP.
  • possible to accurately measure the available ICMP, UDP and/or TCP capacity and analyze the behavior of a network under different traffic loads
  • PathTest is a powerful load generation tool used to examine the network capacity between two endpoints on a LAN or WAN link.
  • Caution: While these loading techniques produce accurate results, be aware that they can have an impact on the available network capacity for the duration of a test.
  • Both of these tools require a Monitoring Point at each end of the network path being tested (i.e. a dual-ended path).
  • Rate-limited monitoring is used when you need to make measurements at regular intervals over time
  • PathTest is used when you need to make a single measurement
  • PathTest and Rate-limited monitoring are tools that can be used to generate a load that triggers these techniques and then measure the capacity of the network under load
  • In either case, your true available capacity is the one in the rate-limited state
  • other ISPs market a ‘burst’ technology, which means that they’re limiting your regular capacity, but give you more for short periods of time, for example, during large downloads
  • some ISPs trigger rate limiting when your traffic rate crosses a threshold
  • possible that the network will behave differently (provide a lower capacity) once it is actually loaded
  • create very little load
  • by sending relatively small packet trains and then calculate the path’s capacity based on the dispersion between the packets on the receiving end
  • Delivery monitoring tools that run by default (Continuous Path Analysis (CPA) and Deep Path Analysis (DPA) / Diagnostics) measure the capacity of a network path
54 annotations
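A minimal sketch of the TCP PathTest tuning procedure quoted above: hold the total offered load at the expected capacity while splitting it across progressively more streams. The stream counts follow the annotation's example; the convergence check is left as a comment because it depends on your test results:

```python
# Keep total load constant while increasing streams and decreasing per-stream
# bandwidth proportionately, per the tuning procedure in the notes above.
# The capacity figure and stream counts are the example values from the notes.

EXPECTED_CAPACITY_MBPS = 1000          # e.g., the provisioned link speed

def stream_plan(total_mbps, stream_counts=(1, 5, 10, 20)):
    """Yield (streams, per-stream bandwidth) pairs keeping total load constant."""
    for n in stream_counts:
        yield n, total_mbps / n

for streams, per_stream in stream_plan(EXPECTED_CAPACITY_MBPS):
    print(f"run TCP PathTest: {streams} stream(s) x {per_stream:.0f} Mbps")
    # Stop increasing streams once the TCP result matches the UDP result.
```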
  • If you want to influence your readiness score, increase throughput. Keep in mind however that readiness is not throughput, nor is it a linear function of throughput.
  • This estimate considers total capacity and utilization, and the effect of loss, latency, and jitter on TCP congestion control.
  • Modeling is used to estimate throughput at layer 4, based on layer 3 behaviors and a set of parameters that represent a typical TCP configuration (see the sketch after this list)
  • The basis for readiness is throughput.
  • Readiness, on the other hand, leverages the diagnostic capability of APM to discover the origins of loss, latency, and jitter, and then characterizes the range of performance possible in the presence of any issues.
  • MOS is based on current performance and is typically related to the symptomatic evaluation of the network path
  • Conceptually, readiness is like MOS for data, but with key differences
  • return familiar metrics (loss, latency, jitter, etc.), but they also generate a ‘readiness’ value
  • A data assessment runs diagnostics tests on multiple paths at once, with the goal of determining your network’s ability to deliver data-intensive applications
  • Data assessments are not as relevant to voice, video, best-effort, or transactional data
  • Data assessments are tests used to determine a network path’s readiness by evaluating its ability to handle activities like ftp transfers, backups, and recovery
  • The readiness metric is a succinct representation of how well a network path is expected to handle data-intensive applications.
12 annotations
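The notes above say readiness is modeled from layer 3 behaviors and a typical TCP configuration. As an illustration of that kind of modeling (not AppNeta's actual readiness formula), here is the well-known Mathis et al. approximation for TCP throughput under loss:

```python
# Mathis model: an upper bound on steady-state TCP throughput given MSS, RTT,
# and loss rate. Shown only to illustrate layer-4 modeling from layer-3 inputs.
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis model: throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# A typical TCP configuration over a path with 40 ms RTT and 0.1% loss:
estimate = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.040, loss_rate=0.001)
print(f"{estimate / 1e6:.1f} Mbps")   # -> ~11.3 Mbps, regardless of raw capacity
```

This is why loss and latency, not raw capacity, often dominate how "ready" a path is for data-intensive applications.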
  • Create a Voice test schedule Voice test schedules are used to repeatedly execute a Voice test at a given interval for a specified period of time.
  • When your call is traversing a network with insufficient bandwidth for the VOIP configuration, the call will experience higher latency and possibly packet loss. Packet loss is important because VOIP traffic uses the UDP protocol. With UDP, lost packets are not re-transmitted, resulting in broken audio on the listener’s end.
  • The amount of compression, the number of samples taken, and the number of samples packed into each IP packet all directly affect how much bandwidth is consumed by the call (see the sketch after this list)
  • When human voice is converted from analog to digital, it is sampled thousands of times per second, using one of several techniques called ‘codecs’ that not only converts the sampled voice, but also compresses it
  • Voice quality assessments are based on four metrics that affect voice quality: bandwidth utilization, packet loss, latency, and jitter
  • This results in a low MOS even though the audio sounds fine
  • not all VOIP handsets will respond to test packets
  • Devices such as printers will fail a voice assessment.
  • When you are assessing voice endpoints on a LAN, make sure you are targeting the correct devices
  • Contact AppNeta Support
  • Run an advanced voice assessment against the service provider’s PBX.
  • Run a basic voice assessment against the service provider’s PBX.
  • Apply the default voice WAN alert profile to capture any events and initiate diagnostics tests
  • Create a single-ended path from the location experiencing the problem to the service provider’s PBX
  • Test the connection to the VOIP server
  • Test the WAN link
  • Assessing a voice service provider
  • Packet reordering
  • Packet discards
  • Total capacity
  • Availability
  • Latency
  • Voice jitter
  • Voice packet loss
  • MOS - An estimate of the rating that a typical user would give to the sound quality of a call
  • Readiness - A representation of MOS that helps you understand how well a path is handling voice traffic
  • Voice monitoring metrics
  • Note: If you do not have a voice license, you can use PathTest as a replacement for Voice test, though it does not use voice protocols (SIP/RTP/RTCP) to simulate voice traffic so results are less accurate than using Voice test.
  • Test a network’s ability to handle inter-office voice traffic
  • Test a network’s ability to handle actual voice traffic
  • Create voice traffic load to expose network problems
  • Voice tests require a Monitoring Point as the target (though the target Monitoring Point does not require a voice license)
  • one or more sessions; each session specifies a path, a number of concurrent calls, and QoS settings
  • allow you to more accurately measure how your network would treat voice traffic, but at the expense of greater bandwidth consumption
  • Voice tests provide a way to simulate multiple simultaneous voice calls using the actual voice codecs and protocols
  • Determine how varying call loads affect the ability to handle voice traffic using the call ramp-up feature
  • Assess network paths (up to 25) over a period of time in order to capture transient conditions
  • Check an existing voice deployment for issues
  • Provide a quick voice assessment of your network in advance of a VOIP deployment
  • Advanced voice assessments do not require a Monitoring Point as a target
  • The difference from Basic assessments is that this testing happens continually over a specified period of time and relies on both diagnostic and continuous monitoring techniques to infer voice quality
  • Advanced voice assessments also provide a way to assess the ability of multiple network paths to carry voice traffic.
  • Validate QoS
  • Assess network paths using a large number of concurrent calls (up to 250)
  • Check an existing voice deployment for issues
  • Provide a quick voice assessment of your network in advance of a VOIP deployment
  • do not require a Monitoring Point as a target
  • happens at a point in time and relies on diagnostic techniques to infer voice quality
  • Basic voice assessments provide a way to assess the ability of multiple network paths to carry voice traffic
  • the source Monitoring Point needs to be licensed for voice.
  • enables you to assess your network’s ability to handle voice traffic with three tools: Basic voice assessments, Advanced voice assessments, and Voice tests
  • (APM) has monitoring capability designed specifically for ensuring good voice quality
54 annotations
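A minimal sketch of how codec choice and packetization drive per-call bandwidth, per the annotations above. The G.711 bitrate and header sizes are standard figures; the 18-byte constant assumes untagged Ethernet II framing:

```python
# Per-call bandwidth from codec bitrate and packetization interval. Standard
# header sizes; the Ethernet overhead assumes untagged Ethernet II framing.

RTP_UDP_IP_BYTES = 12 + 8 + 20     # per-packet layer 3/4 header overhead
ETHERNET_BYTES = 18                # layer 2 framing overhead

def call_bandwidth_kbps(codec_kbps, packet_interval_ms):
    """Bandwidth on the wire for one call direction."""
    payload_bytes = codec_kbps * 1000 / 8 * (packet_interval_ms / 1000)
    packet_bytes = payload_bytes + RTP_UDP_IP_BYTES + ETHERNET_BYTES
    packets_per_s = 1000 / packet_interval_ms
    return packet_bytes * 8 * packets_per_s / 1000

# G.711 (64 kbps) sampled into one packet every 20 ms -> ~87.2 kbps per call;
# packing 30 ms of samples per packet lowers overhead -> ~79.5 kbps per call.
print(round(call_bandwidth_kbps(64, 20), 1), round(call_bandwidth_kbps(64, 30), 1))
```

Fewer, larger packets trade bandwidth for latency, which is exactly the tension the codec/packetization annotations describe.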
  • The provisioned capacity of a network path is indicated by a horizontal yellow line on the network path’s Capacity chart. It shows either the highest total capacity seen during the specified time range, or the capacity you expect (and have set) based on the service level agreement (SLA) with your ISP.
  • White box with dashed lines - means that monitoring is disabled on the path.
  • Grey chart - means that APM has not heard from the Monitoring Point.
  • Black chart - means that the Monitoring Point can’t reach the path target
  • Red chart - indicates that path performance violated a condition threshold
  • Yellow horizontal line - indicates the provisioned capacity
  • Black horizontal line - indicates a condition threshold
  • Black vertical line - indicates an attribute essential to monitoring has changed
  • Chart annotations
  • Voice Loss chart
  • The Round-Trip Time chart shows the average round-trip time (RTT) measured on the network path during the specified time period
  • Round-Trip Time chart
  • The Latency chart shows the latency (calculated as 1/2 RTT) measured on the network path during the specified time period
  • Latency chart
  • The Data Jitter chart shows the data jitter (data packet delay variation) measured on the network path during the specified time period
  • Data Jitter chart
  • Each data point is calculated as a rolling average of the last five samples during normal sampling (once per minute by default) and the last ten samples during escalated sampling (once every ten seconds); see the sketch after this list.
  • shows the percentage data loss (loss of simulated data packets) measured on the network path during the specified time period
  • Data Loss chart
  • In addition, it shows the provisioned capacity (yellow horizontal line) either measured or set on the network path
  • shows the total, utilized, and available capacity measured on the network path during the specified time period
  • Capacity chart
  • Hovering over a circle shows the associated events.
  • Circle position on the x-axis indicates the time that the events occurred. Circle size indicates the number of events of a given type. Circle color indicates the event type as follows:
  • provides a summary of the types of events that have occurred on the network path during the specified time period.
  • Events chart
  • Timeline
  • The Route chart shows the latest routes taken by TCP, UDP, and ICMP packets from the source Monitoring Point to the target. Hover over the nodes to display detailed hop information. You can also view route details for the network path by using the Routes pane.
  • Route chart
  • The charts are plotted in real-time so no page refresh is required
  • By default, they show information from the last hour, but the timeline can easily be modified
  • Network path performance details are provided on a number of charts - each chart representing a separate metric.
32 annotations
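A minimal sketch of the rolling-average charting described above (each plotted point averages the last five samples during normal sampling, the last ten during escalated sampling); the sample values are invented:

```python
# Each charted point averages the last N raw samples (N=5 normal, N=10
# escalated). The loss samples below are made up for illustration.
from collections import deque

def rolling_points(samples, window=5):
    buf = deque(maxlen=window)
    for s in samples:
        buf.append(s)
        yield sum(buf) / len(buf)

loss_pct = [0, 0, 5, 5, 0, 0, 0, 0]          # raw per-minute loss samples
print([round(p, 1) for p in rolling_points(loss_pct)])
# -> [0.0, 0.0, 1.7, 2.5, 2.0, 2.0, 2.0, 1.0]
```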
  • "Excessive packet round-trip time (RTT) detected" is a diagnostic that indicates that your network could be duplicating network traffic during busy periods, which in turn can lead to a traffic snowball effect. By reducing queue time (and thereby reducing trip time), packet loss will occur during periods of congestion, and thereby traffic duplication is avoided. Simply said, by dropping packets sooner, TCP flows will work more efficiently.
  • Consider the fact that TCP must recover from data that is damaged, lost, duplicated, or delivered out of order across the path. The least efficient of these is duplicated data
  • In conclusion
  • Similarly, huge queues can be encountered if too many queues are used in series. For example, Frame Relay Vendor A has 4 switches in its cloud connection between New York and Chicago, while Frame Relay Vendor B may use 80 switches. Vendor B’s network may exhibit "Excessive packet round-trip time (RTT) detected" during busy periods
  • This diagnostic can be triggered if queues are simply too big.
  • Less commonly queues are stopped because of device failures. Sometimes a reboot of a router or switch is in order.
  • Queues that are too big
  • a downstream intermittent media error can cause a queue to stop, causing packets to be delayed
  • Stopped queues
  • One of three network situations can lead to this condition: media errors or router hangs, where queues are stopping, or excessive queue sizes, where packets must pass through queues that are too big.
  • The "Excessive packet round-trip time (RTT) detected" diagnostic indicates that your network is "queuing" (holding onto) packets for an unreasonable amount of time, and that this condition may be affecting the efficiency of your traffic flows.
  • As a network is driven close to a point of congestion, you may find that data is repeatedly retransmitted, and TCP windows will start to slide back and forth. The overall result is very inefficient traffic flows.
  • However, packets that survive long after they have been given up for dead make a mess of TCP flows.
  • TCP flows will respond to packet loss by backing off transmission rates.
  • In a properly tuned network, congestion results in packet loss
  • in general queue sizes are limited to about 8 to 64Kbits, which results in proper TCP flow control.
  • TCP flows do not work well if queues are too large
  • In older switches and routers, queues have been known to be very large
  • The "server" is a processor that puts bits onto the down-stream wire. The "queue" is the memory that will hold packets that are waiting to be processed by the server. If the queue is full when a packet arrives, a packet is lost (typically the packet at the front of the queue is dropped to accommodate the incoming packet).
  • In its simplest form, a gateway/router/switch port can be represented by its two simplest components: a server and a queue (see the sketch after this list).
  • This condition usually indicates the existence of intermittent media errors, or an over-queued network condition.
  • For example, a packet that survives for 3 seconds on a small LAN would trigger this diagnostic message.
  • The "Excessive packet round-trip time (RTT) detected" diagnostic indicates that a packet has survived an unreasonably long time based on the characteristics of the network path.
  • Advanced Analysis "Excessive Packet Round-trip Time (RTT) Detected"
24 annotations
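A minimal sketch of the server-plus-queue arithmetic behind this diagnostic. The queue sizes and link rate are invented examples, but the 64Kbit figure matches the range the annotations call properly tuned:

```python
# The "server" drains bits onto the downstream wire at a fixed rate; the queue
# holds waiting packets. Oversized queues add delay instead of loss.

def queue_delay_ms(queue_bits, link_bps):
    """Worst-case time a packet waits behind a full queue before service."""
    return queue_bits / link_bps * 1000

# A 64 Kbit queue on a 1.5 Mbps link adds at most ~43 ms of queuing delay;
# an over-sized 8 Mbit queue on the same link can hold packets for ~5.3 s,
# long enough for TCP to retransmit data that is still in flight.
print(round(queue_delay_ms(64_000, 1_500_000), 1))     # -> 42.7
print(round(queue_delay_ms(8_000_000, 1_500_000), 1))  # -> 5333.3
```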
  • Most often the problem arises by attempting to use a router as a firewall.
  • When testing "through" an intermediate router, very little delay is realized. However, when testing "to" the router, test packets leave the high-speed path and other delays may be experienced. Usually delays occur because of a busy router management CPU. Therefore, if you see high utilization on a hop in the middle of a network, you will inevitably find that the router CPU is overworked.
  • Occasionally you may run across a situation where utilization is high at an intermediate hop, but not at the target hop.
  • High router CPU utilization
  • Note that it is important that good packet drivers are installed on the Monitoring Point, so that Delivery monitoring can achieve good test results. This will allow you to properly isolate which devices along the test path are flawed.
  • A more normal scenario would be where Total Capacity is measured at 94Mbps, and no "High utilization detected" message appears. In both these cases, Total Capacity and Utilization reflect the Apparent Network that is available to the application. However, in the second case, Utilization is reflecting the effects of actual traffic on the wire rather than flaws in the NIC driver.
  • if Delivery monitoring measured the test path as 67Mbps with 94% frequency of "High utilization detected" and no packet loss, experience will lead you to understand that these results are typical of a certain NIC card running down-level drivers. Upgrade the NIC drivers and typically the Total Capacity increases and Utilization will reflect the actual traffic on the wire
  • Delivery monitoring reports end-to-end characteristics to each layer 3 device along a path
  • Advanced Analysis "High Utilization Detected"
9 annotations
  • MTU negotiation is an important part of the overall health of a network. During a test, Delivery monitoring will report the following MTU conditions detected along the test path:
    The (apparent measurable) PMTU
    Nonstandard MTUs in use
    Standard MTUs in use, other than 1500 bytes (Ethernet)
    "Black-hole" hop, where a router fails to send the "fragmentation needed and DF set" ICMP message. In other words, it is unable to properly participate in MTU negotiation.
    "Gray-hole" hop, where a router returns the wrong MTU value in the "fragmentation needed and DF set" ICMP message. In other words, it is responding with an incorrect MTU value for the constricting hop.
    MTU conflicts, where the network is exhibiting behavior, including packet loss, that corresponds to an MTU conflict with one or more devices on the network path.
  • The Delivery monitoring Approach
  • To avoid MTU problems, consider the following:
    Use one common MTU per subnet.
    Separate different MTUs with Layer 3 routers. Avoid Layer 2 FDDI/Token Ring/Ethernet bridges.
    Do not filter ICMP packets.
    Use reputable VPN solutions.
    Test network paths from high MTU to low MTU (appliance on large MTU end).
    Maintain logical diagrams to establish rule sets for MTUs, and to ensure that routers separate MTU domains.
    Avoid adjusting MTU of clients to compensate for network issues.
    Ensure server and network personnel have established common MTU policies.
  • Avoiding MTU Conflicts
  • Therefore, if you want to set up a GigE connection with a 9000 byte MTU, you must set the frame size of the NIC to 9018. When the cause of this condition is a reduced MTU at a destination hop, Maximum Segment Size (MSS) negotiation can protect TCP from failure. However, many Black-Hole Hops are caused by incorrectly configured mid-point layer-2 devices, in which case MSS negotiations are ineffective. Furthermore, in many scenarios MSS negotiation is ignored.
  • Notice that at Layer 2, a typical Ethernet frame has a maximum size of 1518 bytes. But at Layer 3 we deal with packets, and in this example the MTU represents the maximum packet size of 1500 bytes. It is important to understand that the difference between packet size and Ethernet frame size is 18 bytes (see the sketch after this list).
  • Another common cause of black-hole hops is confusion between frame size and MTU. The following diagram illustrates a breakdown of a typical Ethernet frame.
  • In complex environments it is easy to forget to install Layer 3 routers between MTU boundaries. To avoid problems, we suggest maintaining MTU values on logical diagrams outlining the rules for Layer 3 subnets. Doing so establishes rule sets for portions of networks regardless of how they are physically connected. Here is an example.
  • Most modern operating systems and applications are PMTU enabled, and thereby the "Don't Fragment" bit is set in all IP headers. Therefore, when the 9000 byte packet is received by the router, fragmentation is not attempted. Rather, the result is an ICMP "fragmentation required but DF set" message, also known as the "too big" message. When the server receives this ICMP message, it updates its routing table for the client with the MTU reported in the message, and will remember to send smaller packets to the client. Note that once the client's MTU has been discovered, the MTU is not renegotiated on subsequent connections.
  • The following diagram illustrates how PMTU discovery works.
  • To avoid MTU conflicts, you must ensure that you deploy Layer 3 devices on MTU boundaries, and that you do not filter out ICMP messages. This will ensure that Path MTU discovery (PMTU) works as described in RFC 1191.
  • Eventually the connection develops a cycle of sending a 2250 byte packet that is lost, waiting a timeout period, and then sending a 1125 byte packet that is received. Overall, the connection does not die. Rather, it runs very slowly.
  • In the case of a black-hole hop, the result is a retransmitted packet that is half its previous size. In this example, 3 packets are lost before the TCP transmit window produces a 1125 byte packet. This packet does survive, which produces a response from the client and in turn keeps the connection alive.
  • TCP contains a slow-start congestion avoidance algorithm that shrinks the TCP transmit window to half its size when a packet has timed out
  • The server receives a request from the client, and a 9000 byte packet is generated. The Layer 2 switch accepts the packet, but drops it once it discovers that the packet is too large to send to the client. Since Layer 2 switches have no knowledge of Layer 3 content, they cannot inspect the DF bit in the IP header, nor can they generate a sufficient response to the server to explain why it dropped the packet. As far as the server is concerned, the packet was lost due to congestion.
  • To understand why MTU conflicts often result in slow links, consider the following client-server TCP flow. It illustrates a typical environment where a Gigabit server configured with 9000 byte MTU is incorrectly connected via a Layer 2 switch to a 10/100 client employing a 1500 byte MTU
  • Normal server to client traffic is quick, but client backups fail.
  • Email works, but discussion databases take forever to load.
  • Web pages load quickly, but .GIF files are very slow to load, and occasionally fail.
  • FTP downloads work only with small files.
  • FTP upload of a large file takes a few minutes, but download takes hours, or even fails.
  • What is often misunderstood is the fact that an MTU conflict typically results in slow network connections, not broken connections.
  • traffic will fail in one direction but not in the other
  • MTU negotiation errors are very difficult to detect and manifest themselves in subtle but destructive behaviors. When an MTU conflict exists, MTU negotiation fails and packets will be lost when they are too large to traverse the network.
  • Understanding MTU Conflicts
  • theoretically, increasing MTU should have just a minor effect on a network’s maximum performance. However, most high-speed networks do not work at full capacity. For these networks, increasing MTU has proven to greatly increase overall throughput
  • Increasing frame size gives better performance.
  • Why Use Large MTUs?
  • The following diagram shows a 1500 byte TCP/IP packet passing through Ethernet. Notice that although Ethernet supports 1518 bytes frames, it is designed to carry at most 1500 byte packets; therefore, the MTU of Ethernet is 1500 bytes.
  • Frames are generated by Layer-2 devices and encapsulate Layer-3 packets.
  • To understand MTU, one must be very aware of the difference between frames and packets.
  • Typically the internet operates with an MTU of 1500 bytes, however other values are acceptable. Mixing different MTU values within one network path is also acceptable, provided that all components within a network path share similar rules regarding conflicting MTUs. As a general rule, fragmentation should be avoided at all costs.
  • Network links that have properly configured MTUs are more efficient
  • The MTU of the medium determines the maximum size of the packets that can be transmitted without fragmentation
  • If the application is sending a small chunk of data, for example 500 bytes, it will usually be sent within a single packet. However, when the application must send a larger chunk of data, the data must be distributed over several packets.
  • When Internet Protocol is used to transfer data across a path, data is encapsulated into packets before it leaves the physical interface
  • What is MTU?
  • Delivery monitoring checks for several significant MTU error conditions for each link within a network path. Passing these checks ensures that MTU works properly along the entire path.
  • MTU misalignments can result in network idiosyncrasies and degradations that are extremely difficult to diagnose
  • One of the most powerful and yet subtle capabilities of Delivery monitoring is its ability to discover Maximum Transmission Unit (MTU) conditions and problems on a link
  • Maximum Transmission Unit
41 annotations
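A minimal sketch of the frame-size arithmetic above, plus an optional path-MTU check. The 18-byte constant is standard untagged Ethernet II overhead (header plus FCS); the PMTU probe relies on Linux-only socket options and is an illustration, not how Delivery monitoring measures PMTU:

```python
# Frame size vs MTU: a 9000-byte MTU needs 9018-byte frames, per the notes.
import socket

ETHERNET_OVERHEAD = 18               # header (14) + FCS (4), untagged frames

def required_frame_size(mtu):
    """NIC frame size needed to carry packets of the given MTU."""
    return mtu + ETHERNET_OVERHEAD

print(required_frame_size(1500), required_frame_size(9000))   # -> 1518 9018

# Linux-only PMTU probe: let the kernel's "don't fragment" path-MTU discovery
# run on a connected socket, then read back the cached value. Constants are
# from <linux/in.h> when the socket module doesn't export them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

def path_mtu(host, port=443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, port))
    mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    s.close()
    return mtu
```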
  • You will see this issue on the Round-Trip Time chart, the Page Load Time chart, and on the Routes pane. To fix the issue, configure clients to use only local DNS resolvers.
  • If you notice that users are reporting that page loads are very fast at times and quite slow at other times, one possibility is that their clients are configured to use both local and remote DNS resolvers. This can be an issue when the web app being accessed is served by a Content Delivery Network (CDN), with target servers distributed globally. In these cases, the remote DNS resolver can resolve the IP address of a target server located close to it rather than close to the client. This will cause longer page load times than from a server located closer to the client.
  • Poor browsing experience
  • To determine how long a network path was in a violation state you need to find the violation event and the corresponding clear event
  • When a Monitoring Point generates diagnostic tests on a network path, it targets each hop on the path, one at a time, to help determine whether any of the hops is performing poorly. One of the metrics calculated for each hop (and displayed on the Data Details tab of the Diagnostics page) is the Total Capacity. One would expect that the total end-to-end capacity should be no greater than that of the lowest-capacity hop. This is true: the lowest-capacity device is the bottleneck in the path. However, you will often see hops showing a lower capacity number than the end-to-end capacity. The reason has to do with router architecture: traffic passing through a router is given higher priority than traffic destined for the router itself. In other words, the capacity numbers for the intermediate hops may not represent the true forwarding capacity of the device.
  • Mid-path device shows lower capacity than the target
  • Often we see the terms network bottleneck and network congestion point used interchangeably. We believe that there is a distinction between these terms. A network bottleneck is the slowest point on a network path. Every network path has a bottleneck (for example, a low speed link). If the performance of the bottleneck is improved, the bottleneck moves to another point in the path. A congestion point, on the other hand, is the point in a network path where production traffic is backing up. Often there is congestion at the bottleneck, but not always. There may be several congestion points on a network. A congestion point is usually a transient condition, whereas a bottleneck is not.
  • Bottleneck vs. Congestion Point
  • High utilized capacity coupled with no increase in flow data is a classic sign of oversubscription. You should contact your ISP if this is the case.
  • The first thing you want to do is corroborate capacity measurements with round-trip time, loss, and jitter. If there are no corresponding anomalies, then whatever triggered the high utilization isn’t really impacting performance. If there are, you’ll then use Usage monitoring to check for an increase in network utilization.
  • Oversubscription is a technique your ISP uses to sell the full bandwidth of a link to multiple customers. It’s a common practice and is usually not problematic, but if it is impacting performance, you’ll see it first in your utilized capacity measurements on the Capacity chart.
  • As issues with ICMP traffic may not be present in TCP or UDP traffic, you can set up a dual-ended path to test whether the other protocols are affected in the same way.
  • If a path shows sustained packet loss, review its latest diagnostic results to understand where the loss is occurring: If the loss is occurring at the last hop, make sure that firewall/endpoint protection at the target allows ICMP. If the loss is occurring mid-path, make sure routing policies are not de-prioritizing ICMP, and access control lists are not blocking ICMP.
  • If none of the previous subsections is applicable to your situation, you can use PathTest to corroborate the low capacity measurements. Remember that this is a load test and it measures bandwidth, not capacity.
  • Capacity is measured by sending multiple bursts of back-to-back packets every minute (as described in TruPath). To measure total capacity, at least one burst must come back with zero packet loss. If that is not the case, then the capacity measurement is skipped for that interval. If packet loss is intermittent, the result is a choppy Capacity chart. If packet loss is sustained, the Capacity chart will show no capacity while the packet loss is present.
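  • The idea behind dispersion-based capacity measurement can be sketched in a few lines: send a burst of back-to-back packets and infer the bottleneck rate from the spacing the bottleneck imposes on arrivals. This is a simplified illustration of the general technique, not AppNeta’s TruPath implementation:

        def capacity_bps(arrival_times_s: list[float], packet_bytes: int) -> float:
            """Estimate bottleneck capacity from receiver-side arrival timestamps."""
            gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:])]
            avg_gap = sum(gaps) / len(gaps)
            return packet_bytes * 8 / avg_gap

        # 1500-byte packets arriving 1.2 ms apart imply a ~10 Mbps bottleneck.
        arrivals = [0.0000, 0.0012, 0.0024, 0.0036, 0.0048]
        print(f"{capacity_bps(arrivals, 1500) / 1e6:.1f} Mbps")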
  • Capacity chart shows no capacity This can be due to sustained packet loss.
  • Asymmetric links, if measured using single-ended paths, will show the capacity of the slowest of the uplink and downlink directions. This can be misleading. Measuring a link using a dual-ended path will show the capacity of each direction.
  • Some devices make better targets than others. Choosing a good target is important in order to get good measurements.
  • Note: Total capacity is based on the assumption that traffic will flow in both directions. Therefore, you can expect the total capacity for half-duplex links to be roughly half of what it would be with full-duplex.
  • When a low capacity condition is persistent rather than transient, it is caused by a network bottleneck, not by congestion
  • ‘Saturate’ means the ability to transmit packets at line rate without any gaps between them. All switches can run at line rate for the length of time that a packet is being sent but some are unable to send the next packet without any rest in between. This determines the ‘switch capacity’. APM provides a range for Total Capacity that you can expect given the physical medium and modern equipment with good switching capacity.
  • Bandwidth is the transmission rate of the physical media link between your site and your ISP. The bandwidth number is what the ISP quotes you. Capacity is the end-to-end network layer measurement of a network path - from a source to a target. Link-layer headers and framing overhead reduce the rated capacity to a theoretical maximum. This maximum is different for every network technology. Further reducing capacity is the fact that NICs, routers, and switches are sometimes unable to saturate the network path, and therefore the theoretical maximum can’t be achieved.
  • Capacity and bandwidth are different
  • Capacity lower than expected
  • For cases where you need to measure capacity over time, you can use rate-limited monitoring. Rate-limited monitoring is similar to PathTest in that it loads the network while testing, but instead of a single measurement, it makes measurements at regular intervals over time. Contact AppNeta Support to enable rate-limited monitoring.
  • Because Continuous Path Analysis testing used by Delivery monitoring is extremely lightweight, it too might not be able to trigger the rate limiter. As a result, you’ll end up seeing the entire capacity of the link, rather than the amount that has been provisioned for you by your ISP. To confirm this, try the following:
    Confirm that the speed test run by the ISP is effectively using the same source and target as your test.
    Use dual-ended monitoring (testing a path between two AppNeta Monitoring Points). Dual-ended monitoring measures network capacity in both directions (source to target and target to source), similar to speed tests. Testing each direction independently allows you to account for asymmetry in the network path. For example, upload and download rates may be different and may take different routes. Single-ended monitoring can only determine the capacity in the direction with the lowest capacity.
    Run PathTest. PathTest does not use lightweight packet dispersion, but rather generates bursts of packets which may trigger carrier shaping technologies.
  • Usually transactional data and control data is allowed through at full capacity because they are short bursts of traffic, but sustained data transfers like streaming media will trigger the rate limiter.
  • There are times when the network capacity numbers returned by APM do not match those from a speed test provided by your ISP. If total capacity measurements from APM are greater than what you expect, this is usually because the link to your ISP is physically capable of greater speed, but your ISP has used a traffic engineering technique called ‘rate limiting’ to limit your bandwidth to the amount specified in your Service Level Agreement.
  • APM and ISP capacity numbers differ
29 annotations
  • Step 6: Analyze monitoring results
  • SNMP notifications - Use this method if you are integrating with an SNMP system. Set up using the Manage SNMP page.
  • Event integration - Use this method if you already have an event monitoring system in place. Integrate directly with that system via POSTs that contain JSON event payloads.
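  • A minimal sketch of an endpoint that could receive such JSON event POSTs, using Flask (the field names are hypothetical; the actual event schema is defined by APM):

        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/apm-events", methods=["POST"])
        def apm_event():
            event = request.get_json(force=True)
            # Forward the event into your event monitoring system here.
            print("received event:", event.get("type"), event)
            return "", 204

        if __name__ == "__main__":
            app.run(port=8080)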
  • Email notification -
  • Step 5: Set up alert notifications
  • Delivery monitoring procedure:
    1. Create a Path Template Group to monitor the health of the underlay network.
    2. Use the Data Center Monitoring Point IP address as the target.
    3. Specify “Dual Ended” paths.
    4. Add source interfaces from each remote office Monitoring Point (typically the “Auto” interface) to create paths.
  • Important: SD-WAN must be configured to route traffic destined for the Data Center Monitoring Point over specific underlay paths based on traffic identifiers such as port, IP address, or QoS markings, and not via the overlay.
  • Are my service providers living up to their SLAs? How do I determine where network problems are originating?
  • When you set up Experience monitoring, single-ended network paths are automatically created for Delivery monitoring. These allow you to confirm that traffic is being routed as you expect (for example, over MPLS or over the internet).
  • Step 4: Monitor network health
  • Create a Web App Group for each web app you want to monitor. Use the web app URL as the test target. Add a Selenium workflow that accesses the web app. At a minimum, it should login to the web app. Include at least one interface on each Monitoring Point as a test source.
  • Important: Your SD-WAN must be configured to route Experience traffic (TCP port 443) and its associated Delivery traffic (ICMP and UDP) out the same interface. This allows you to see the path the Experience traffic takes using Delivery monitoring.
  • Experience monitoring prerequisites Monitoring Points deployed with interface(s) on the desired end-user subnets.
  • Monitor key applications to identify any issues affecting end user experience. Use associated Delivery paths to see the overlay path taken for specific applications. Compare user experience of app performance before and after SD-WAN transition.
  • Step 3: Emulate web app users By emulating a user, Experience monitoring helps you answer the question: What sort of app performance are my users experiencing?
  • Usage monitoring prerequisites Monitoring Points deployed with capture interface(s) connected to switch ports that SPAN/mirror all WAN traffic.
  • This is particularly helpful prior to SD-WAN implementation to determine how best to use SD-WAN to route different traffic types and how best to size the WAN links it uses. For example, voice/video traffic over MPLS and all other traffic over the internet.
  • Step 2: Understand WAN traffic Usage monitoring is used to monitor WAN traffic to and from a site. It helps you to answer the questions: What apps are users using? How much bandwidth is devoted to each app? Which users are consuming the most bandwidth?
  • Connect the Monitoring Point’s Usage monitoring port to a SPAN/mirror of the WAN traffic (prior to any NAT or encapsulation) at each location. The egress interface of the core switch is typically the best place for this
  • Connect each Monitoring Point to the same network subnet/segment as users in those locations in order to monitor from a user perspective
  • Deploy Enterprise Monitoring Points (EMPs) in remote offices and in your Data Center(s) in order to monitor all sites.
  • Step 1: Deploy Monitoring Points
  • In this example, end user experience over the SD-WAN is measured using web paths (Experience monitoring) to a SaaS application (P1) and an Enterprise Web Application (P2). The network health through the SD-WAN is measured using (auto-created) single-ended network paths (Delivery monitoring) to these same targets. The health of the underlay network is measured through a dual-ended network path (P3) (Delivery monitoring) to the Data Center Monitoring Point via the underlying MPLS WAN.
  • Recommended approach AppNeta recommends deploying Monitoring Points to remote office and Data Center locations and configuring them to provide visibility into remote site application usage, to emulate users accessing priority apps through the SD-WAN, and to monitor the health of both the overlay network (through the SD-WAN) and the underlay network (the network used by the SD-WAN).
  • If you are considering a transition to SD-WAN or have already implemented an SD-WAN, AppNeta Performance Manager (APM) can answer questions that will help you to successfully manage your network. For example:
    What apps are users actually using? (Usage)
    How much bandwidth is being used by each app? (Usage)
    Which users are consuming the most bandwidth? (Usage)
    What sort of app performance are my users experiencing? (Experience)
    What path is user-to-app traffic taking? (Delivery)
    Are my service providers living up to their SLAs? (Delivery)
    How do I determine where network problems are originating? (Delivery)
25 annotations
  • The software upgrade schedule is listed on the AppNeta service status page: http://status.appneta.com/.
  • can result in a gap of up to 15 minutes of monitoring history
  • You can configure APM to have a Monitoring Point upgraded automatically or you can upgrade it manually at any time
  • You will see the warning symbol appear at various places in APM (including the Manage Monitoring Points page) when a Monitoring Point is no longer running the latest software version
  • AppNeta recommends keeping your Enterprise Monitoring Point (EMP) software up to date to take advantage of the latest features and bug fixes.
  • The effects of these procedures are summarized in the following table (Procedure: Software change? / Network config change? / APM access config change?):
    Upgrade software: Y (latest software) / N / N
    Reflash (local image): Y (local image) / N / N
    Reflash (USB image): Y (USB image) / N / N
    Decommission: N / N / Y (factory, no APM access)
    Reset to factory defaults (local image): Y (local image) / Y (factory) / Y (factory, no APM access)
    Reset to factory defaults (USB image): Y (USB image) / Y (factory) / Y (factory, no APM access)
  • Reset a Monitoring Point to its factory default configuration with the locally stored system image or with a downloaded system image.
  • Decommission a Monitoring Point so that it can no longer access APM without affecting its software or network configuration
  • Reflash a Monitoring Point with the locally stored system image or with a downloaded system image without affecting its configuration.
  • Upgrade a Monitoring Point to the latest software version without affecting its configuration
  • There are a number of procedures that you can use to affect the software version, the network configuration, and the APM access configuration on a Monitoring Point:
  • Managing Software on an EMP - The procedures for managing Monitoring Point software depend on the type of Monitoring Point:
    Managing software on physical and virtual Monitoring Points: upgrading EMP software; reflashing an EMP (local image or USB image); decommissioning an EMP; resetting an EMP to factory defaults (local image or USB image)
    Managing software on a CMP deployed using AKS: upgrade, roll back, or uninstall CMP software; remove all resources associated with a CMP
    Managing software on a CMP deployed using Docker Compose: recreate a CMP; upgrade or uninstall CMP software
    Managing software on a Windows NMP: upgrade or uninstall NMP software
    Managing software on a macOS NMP: upgrade or uninstall NMP software; quit or restart the menu bar app
12 annotations
  • To migrate monitoring from one Monitoring Point to another:
    1. Navigate to > Manage Monitoring Points.
    2. For the Monitoring Point you want to migrate from, select > Migrate Monitoring.
    3. In the Use as field, select Source.
    4. In the dropdown under the Replacement column, select the Monitoring Point you want to migrate to. The remaining fields in the Replacement column are filled in to show the default mapping between the source and replacement Monitoring Points. Any potential migration problems are identified.
    5. In the Licensing section, add or move licenses to the replacement as required. Select Assign New Licenses to use new licenses. Select Move From Source to move licenses from the source Monitoring Point. Select Advanced to override the automatic licensing.
    6. In the Interface Mapping section, make changes to the automatic mapping as required. The Launch Monitoring Point Web Admin link is available if you need to modify interfaces on the replacement.
    7. Click Move Now to migrate monitoring from the source to the replacement. The migration proceeds and the source Monitoring Point is deleted from APM.
  • Restrictions include:
    Roles - Only users with Organization Admin or Advanced roles can migrate monitoring.
    Organizations - You can only migrate monitoring between Monitoring Points in the same organization. A child organization is considered different than its siblings and its parent.
    Licensing - You cannot migrate from APM licensing to legacy licensing.
    Hardware - You cannot migrate from current hardware to legacy hardware (m20, m22, m25, m30, r40, r45, r400).
    10Gbps Usage interface - You cannot migrate from a 10Gbps Usage interface to a 1Gbps interface.
    NMP - You cannot migrate from a Monitoring Point that has web paths to a Native Monitoring Point (NMP) as NMPs do not support Experience monitoring.
    CMP - Container-based Monitoring Points do not support monitoring migration.
  • The following are included in the migration:
    Delivery - All network paths are migrated from the source to mapped interfaces on the replacement.
    Experience - All web paths are migrated from the source to mapped interfaces on the replacement.
    Usage - Usage history is reattached to mapped interfaces on the replacement.
    Usage Packet Captures - Packet Captures are preserved and associated with mapped Usage interfaces on the replacement.
    Usage Packet Capture Schedules - Packet Capture schedules are preserved and continue normally on the replacement (given that a passphrase is configured on the replacement).
    Also migrated: Network Devices, Voice/Video Tests, Voice/Video Schedules, Shared Appliances, SNMP Trap Sending, Saved-List Membership, and Base and Add-on licenses (where applicable).
  • As part of the migration process, the source Monitoring Point is deleted from APM
  • or when aggregating monitoring from two or more Monitoring Points to a single higher capacity replacement
  • typically used when moving to a newer Monitoring Point
  • APM provides the ability to migrate monitoring from one Enterprise Monitoring Point (EMP) to another while preserving monitoring history and (where applicable) licensing
7 annotations
  • SSO behavior:
    Upon enabling SSO:
        Users in a mapped security group may log in to APM via your custom URL (https://<keyword>.pm.appneta.com/).
        Once users log in to the custom URL for the first time, access via the regular APM login is automatically disabled.
    Upon disabling SSO:
        Single sign-on is disabled for the organizations associated with the identity provider.
        Users in mapped security groups will have their federated profiles converted to local profiles, which must then be managed via the Manage Users page.
        Affected users must revert to logging in via the regular APM login.
        Affected users must reset their passwords before they can log in again.
        Notifications will continue to be delivered to affected users.
  • Step 4: Configure SSO access control on APM
    1. Within APM, log in as a user with Organization Admin credentials.
    2. Navigate to > Manage Identity Provider.
    3. For the identity provider you want to edit, select > Edit.
    4. In the Organization field, select the organizations you want SSO users at the selected identity provider to have access to.
    5. In the Role Mapping field, map security groups in your IdP to user roles within APM. All users that you want to log in via SSO must belong to a group that is mapped to an APM role. All mapped groups will have access to all organizations specified.
    6. If you are using APM-Private and an external SP, port 443 on your firewall must be open between the APM-Private server and the PingOne SP.
    7. Notify AppNeta Support that the configuration is complete. AppNeta Support will enable single sign-on. Users can then access APM via https://<keyword>.pm.appneta.com (where “<keyword>” is the keyword you provided).
  • Step 3: Map IdP SAML attributes - Within your IdP, map the correct attributes from the corporate directory to properties in SAML assertions that APM expects. For example, for an Active Directory IdP this looks as follows (Active Directory attribute → SAML property):
    mail → NameID * (also set nameid-format to emailAddress)
    mail → email *
    givenName → firstName
    sn → lastName *
    member → groups *
    title → title
    telephoneNumber (or mobile) → phone
    extensionAttribute1#OrgNames → orgNames
  • Step 2: Install APM SAML metadata on your IdP Within your IdP, register APM as a service provider using the APM SAML metadata (entity descriptor) file provided by AppNeta Support.
  • Step 1: Send IdP SAML metadata to AppNeta
    1. Within your IdP, generate an IdP SAML metadata (entity descriptor) file.
    2. Contact AppNeta Support and ask them to add the IdP to your organization in APM. Provide them the following:
        The IdP SAML metadata (entity descriptor) file you generated.
        The APM organizations you control that you want to use single sign-on.
        If your deployment model is APM-Public with PingOne Identity Bridge: a keyword to use for your new custom URL. It will take the form https://<keyword>.pm.appneta.com.
        If your deployment model is APM-Private with PingOne Identity Bridge: a keyword to use for your new custom URL. To determine the keyword, navigate to > Manage Identity Provider and, in the Login URL column, find the keyword in the form https://<FQDN>:443/pvc/?sitename=<keyword>.
        If your deployment model is APM-Private with PingFederate Server: nothing else is required.
    3. AppNeta Support will provide you with an APM SAML metadata (entity descriptor) file.
  • Note: Coordination with AppNeta Support is required to set up SSO.
  • PingFederate SAML requests - TCP Port 9031
  • PingOne SAML requests - TCP Port 443 (default; can be customized)
  • If the OrgNames attribute is not set, the user may access any child organization on authentication.
  • single-valued attribute contains a comma-separated list of child organizations to which the user may be granted access once successfully authenticated
  • For customers with multiple child organizations, users’ access to child organizations can be controlled through the use of a custom attribute: OrgNames
  • Although most customers use the member: attribute for auto role assignment, note that it contains all of the user’s group memberships.
  • After successful SAML authentication, users can be automatically assigned APM roles (ie. authorized) based on their group memberships. To do this, APM makes a REST API call to PingOne over SSL using the access token to query for the group membership. Based on this and the group->role mapping, a role is auto-assigned.
  • The only information exchanged between the IdP and PingOne is an encrypted SAML token which contains: email (normally an email address); groups (normally the user’s group memberships, used for authorization); and, optionally, first and last name for auto-user creation.
  • User credentials are never exposed beyond your corporate IdP. APM never has access to them.
  • The entire authentication transaction is accomplished using browser redirects and SAML exchanges.
  • Ping’s proven SAML implementation ensures a completely secure authentication and single sign-on experience
  • APM-Private with a PingFederate SP is used to allow the entire application and identity framework to operate within your corporate network
  • APM-Private is used in order to keep all measurement data within your corporate network
  • APM-Public is used for ease of deployment
  • The deployment chosen depends on your corporate security policy
  • APM-Private with PingFederate Server - Private version of APM with internal SAML federation server as SP. For example, the following diagram shows SAML SSO with APM-Private, a PingOne PingFederate Server (SP), and on-premise IdP.
  • APM-Private with PingOne Identity Bridge - Private version of APM with an external SAML Identity Bridge. For example, the following diagram shows SAML SSO with APM-Private, a PingOne SAML Identity Bridge (SP), and on-premise IdP.
  • Several deployment types are available, depending on your security needs. For example: APM-Public with PingOne Identity Bridge - Public version of APM with an external SAML Identity Bridge. For example, the following diagram shows SAML SSO with APM-Public, a PingOne SAML Identity Bridge (SP), and on-premise IdP.
  • must be SAML2.0-compliant
  • APM communicates with a Service Provider (SP) which, in turn, communicates with an Identity Provider (IdP) that you control. In all deployments, the SAML SP function is provided by Ping
  • APM supports SAML-based SSO
  • Advantages include: No new credentials for users to remember No login required when accessing a deep link Centralized management of credentials (eg: password policy, lockouts, audit, etc.) No storage of, or access to, credentials by APM Auto-provisioning of users into APM Automated role assignments based on group membership
  • The web browser single sign-on (SSO) feature allows your users to access AppNeta Performance Manager (APM) using their existing corporate credentials
29 annotations
  • Create a Path Template Group and path templates To simplify network path creation from each user’s NMP to the selected targets (for example, a web app, central site, and potentially your VPN Gateway’s public IP), create a Path Template Group with a separate path template for each target.
  • We recommend creating an alert profile called “WFH Users” containing the following conditions:
    Data Loss - violates when data loss is above 2% for 2 minutes and clears when it is below 2% for 2 minutes.
    Voice Loss - violates when voice loss is above 2% for 2 minutes and clears when it is below 2% for 2 minutes.
    MOS - violates when MOS is below 3.7 for 2 minutes and clears when it is above 3.7 for 2 minutes.
  • Create an alert profile In order to trigger an alert when a user is experiencing network performance issues, you need to create an alert profile that specifies the limits of acceptable network performance
  • Create a time range for alerting In order to alert on network issues only when users are active, we recommend creating an alerting time range called “Business Hours” that spans your typical business hours
  • In order to monitor network performance, you need to create network paths from user computers to the targets identified in the diagrams above
  • There are two ways to install the NMP software: Manual install - users install the software. Unattended install - software is installed on user computers remotely.
  • A given computer can only have one NMP instance installed and it can only be connected to one organization.
  • If you have multiple APM organizations, you will need one downloadable package per relevant operating system specific to the organization you want the NMP to connect to
  • Create NMP deployment packages You must also deploy an NMP on the computer of each work-from-home user you want to monitor. To prepare for this you’ll need to create a separate downloadable package for each client operating system (Supported OS’s include Windows and Mac OS). The appropriate package can then be downloaded and installed on the user’s computer.
  • Deploy an AppNeta Enterprise Monitoring Point (EMP) of sufficient capacity (typically an r90 or an r1000) to your central site (data center, hub, corporate head office) as a VPN performance monitoring target
  • P3 - (Applies to Split tunnel only) Monitoring to the VPN Gateway at the central site via single-ended path measures the performance of the infrastructure the VPN operates on and shows the route the VPN traffic takes to the central site.
  • P2 - Monitoring to a central site through the VPN via dual-ended path measures the VPN performance. Applies to both use cases.
  • P1 - Monitoring to a web app via single-ended path measures the user’s network performance to that app. Split tunnel - The measurement is of infrastructure strictly outside the VPN. Full tunnel - The measurement is through the VPN tunnel and then, once at the central site, outside of it to the web app.
  • In the Full tunnel scenario, all traffic passes through the VPN to the central corporate site. Traffic to external applications/services is routed from there.
  • In the Split tunnel scenario, only corporate network traffic passes from the user through the VPN to the central corporate site. All other traffic is routed outside the VPN. The advantage of this scenario, from a monitoring perspective, is that we can review the performance of the non-VPN paths (P1 and P3) using tools available in APM to isolate issues with the user’s ISP infrastructure
  • Prerequisites:
    An organization set up in APM.
    A Monitoring Point for the central site (r90 or r1000 are typical).
    A Workstation (c10/n10) license for each user.
    Organization Admin or Advanced user role privileges on APM for setup.
    Administrative privileges on the work-from-home user’s computer.
    In the Split tunnel scenario, if traffic passes through a firewall at the corporate site, it must allow ICMP traffic to the VPN gateway.
  • it’s best practice to test a small pilot group first to make sure everything is working as expected prior to rolling out to all users
  • In this article you’ll learn how to monitor the network performance of work-from-home VPN users by: determining where to deploy AppNeta Monitoring Points and deploying them. setting up AppNeta Performance Manager (APM) to monitor a user’s network performance. setting up alerting, notifications, and reporting to help locate and resolve network issues proactively.
18 annotations
  • If this is the issue, you’ll also notice that the Usage traffic seen is primarily broadcast traffic like DHCP or Netbios.
  • Confirm that the switch port the Monitoring Point is connected to is a mirror port that is mirroring the port you want to monitor, not a regular access port.
  • When connecting to a switch in an “out-of-band” setup, low packet counts will be seen if you connect to a regular access port on a switch rather than to a mirror port (also known as a “SPAN” port).
  • Very low packet counts
  • Check for encapsulations by reviewing the configuration of network devices that may encapsulate traffic and/or reviewing packet captures of the traffic in question
  • traffic encapsulated using capwap, Q-in-Q, or with Cisco MetaData (Ethertype: 0x8909) headers is not currently counted.
  • Usage is unable to see inside some encapsulated traffic to capture packet counts, so it is likely that some of the traffic is encapsulated and is not captured in the counts
  • Lower than expected packet counts
  • troubleshoot the connectivity problem.
  • Monitoring Point connection to APM is down.
  • ’-’ in Traffic Rate column
  • Check your Usage monitoring cabling and, if you are using port mirroring, the port mirroring configuration on your switch.
  • Monitoring Point has established a connection with APM but has not seen traffic on the Usage port.
  • ‘0 kbps’ in Traffic Rate column
14 annotations
  • Associate a packet capture with a network path When creating or editing a packet capture configuration, you can associate it with one or more network paths using the Related Network Paths feature. This enables you to easily see all packet captures related to a given network path and to have the packet captures appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart).
  • Alert and warning statistic filters - Packet Capture uses the following Wireshark filters to provide alert and warning statistics (Filter: Expression):
    ICMP errors or warnings: icmp.type eq 3 or icmp.type eq 4 or icmp.type eq 5
    DNS errors: dns.flags.rcode > 0
    Bad TCP: tcp.analysis.flags
    BitTorrent: bittorrent
    SMTP errors: smtp.response.code >= 400 and smtp.response.code < 600
    FTP errors: ftp.response.code >= 400 and ftp.response.code < 600
    HTTP server errors: http.response.code >= 500 and http.response.code < 600
    HTTP client errors: http.response.code >= 400 and http.response.code < 500
    SIP errors: sip.Status-Code >= 400
    OSPF State Change: ospf.msg != 1
    Spanning Tree topology change: stp.type == 0x80
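  • These are standard Wireshark display filters, so you can reproduce any of these statistics offline against a downloaded capture. A sketch using pyshark, a Python wrapper around tshark (assumes pyshark/tshark are installed and a local capture file exists):

        import pyshark

        # Count "Bad TCP" packets (retransmissions, dup ACKs, etc.) in a capture.
        cap = pyshark.FileCapture("capture.pcap", display_filter="tcp.analysis.flags")
        bad_tcp = sum(1 for _ in cap)
        cap.close()
        print(f"Bad TCP packets: {bad_tcp}")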
  • Timestamps are taken before the packets are split into hardware receive queues and thus respect the absolute order of the packets, which means that sorting a capture file (.pcap) by time will produce a better picture of packet ordering than sorting by packet index.
  • On physical Monitoring Points, sorting by timestamp will produce the correct order
  • Between flows (different Layer 3 source/dest addresses), packets may be reordered. Two flows may not be processed by the same receive queue, which results in nondeterministic ordering when they’re inserted into the final capture file (.pcap).
  • Within a given flow (same Layer 3 source and destination IP addresses), packets will not be reordered. Every packet in a flow will be processed by the same hardware receive queue and thus fed into the capture file (.pcap) in order.
  • Note the following regarding packet order:
  • Related Network Paths - lists the network paths associated with the capture. Click a path to display all of the captures related to that path.
  • Conversations - displays the network conversations (traffic between two specific endpoints for a protocol layer) with the highest total number of bytes.
  • Protocol Breakdown - displays the number of packets, and the number of bytes in those packets, for each protocol in the capture.
  • Alerts and Warnings - displays the number of packets in the capture that match a predefined set of display filters that identify notable network behavior that you may be interested in.
  • Overview
  • To view packet capture results: Navigate to Usage > Packet Capture.
  • In the Related Network Paths field, specify network paths associated with the capture to have the path name appear in relevant areas of the user interface and reports (for example, on the network path performance Events chart), and to filter completed packet captures by related network paths.
  • In the Capture Stop Condition(s) field, specify when to stop the capture.
  • In the Capture Filter field, use a filter to specify which packets are captured. The filter uses libpcap syntax. For examples, click the icon. Filtering only the traffic you care about will reduce the capture size. This provides a longer captured duration, and it ensures that the capture analysis is relevant to the problem you are trying to solve. Leave the field blank to capture all packets.
  • In the Packet Limit field, specify the maximum number of bytes to store of each captured packet. Default: 96 bytes. Range: 68 - 65,535. Deselect this option to capture entire packets.
  • To start a new packet capture: Navigate to Usage > Packet Capture. Click + Start New Capture.
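  • The Capture Filter field above uses libpcap (BPF) syntax. The same filters can be tried locally with scapy’s sniff() before committing to a capture (the host and port below are placeholders; sniffing typically requires elevated privileges):

        from scapy.all import sniff

        # Only HTTPS traffic to or from one server - keeps the capture small and
        # relevant, which extends the captured duration as described above.
        pkts = sniff(filter="host 192.0.2.10 and tcp port 443", count=50)
        pkts.summary()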
  • Capture files are capped at 1GB. In addition, regardless of any stop conditions specified, capturing ends when the space remaining on the Monitoring Point is too low: For full-packet captures (where a maximum of 1500 bytes per packet is captured), capturing ends when less than 10MB remains on the device. For partial-packet captures (where less than 1500 bytes per packet are captured), capturing ends when less than 1MB remains on the device.
  • Prior to starting a packet capture you must set up for packet capture
20 annotations
  • To load the AppNeta MIB onto your NMSs: Navigate to > Manage SNMP. If you do not see the Manage SNMP option, you do not have the necessary (Organization Admin) privileges. Click the download the latest MIB link to download the AppNeta MIB to your computer. Copy the MIB to each NMS that will receive AppNeta notifications.
  • Configure a Primary sender and optionally a Secondary sender. Note that the NMS Hosts field can contain multiple NMS IP addresses or hostnames. Click Test SNMP Traps to test your configuration.
  • Configure SNMP notification forwarding Organization Admin privileges are required to configure this feature. To configure SNMP notification forwarding: Navigate to > Manage SNMP. If you do not see the Manage SNMP option, you do not have the necessary (Organization Admin) privileges.
  • Edit default email addresses To edit the list of default email addresses: Navigate to > Update Notification Options. Click the link next to the Default Email Addresses field.
  • To enable or disable Service Bulletins: Navigate to > Update Notification Options. To enable Service Bulletins, select Yes in the Enable Service Bulletins field. To disable Service Bulletins, select No in the Enable Service Bulletins field.
  • Enable for changes in Monitoring Point availability to be reported in the notification emails. If you specify a time range, note that it is based on the Monitoring Point’s time zone.
  • Monitoring Point Availability
  • Flow analysis events occur when Usage alerts are triggered.
  • Flow Analysis Event
  • Enable for network route change events to be included in the notification emails. Network route change events occur when there is a change to the sequence of Autonomous System (AS) numbers from the path source to its target (indicating a change in provider network)
  • Network Route Change Events
  • Web path events occur when Experience alerts are triggered
  • Enable for web path events to be included in the notification emails
  • Web Path Event
  • Network path events occur when Delivery alerts are triggered
  • Network Path Event
  • all events occurring within the Digest Period are added to a Digest Summary and sent as a single email
  • Digest Period
  • To create a notification profile:
    1. Navigate to > Update Notification Options.
    2. Click + Add Profile.
    3. In the Name field, enter the name of the profile.
    4. In the Organization dropdown, select the organization associated with the profile.
    5. Click Submit.
    The profile is created. You still need to edit it in order to activate it.
19 annotations
  • From a browser, log into your APM-Private Cloud:
    1. Use the IP address or login URL provided at the end of the initial setup procedure. For example: https://192.168.1.100 or https://my-vpca.mydomain.org/pvc/login.html
    2. Use the email address and password you provided in the setup procedure as login credentials.
    3. Provide AppNeta Support access to the system to install licenses. See APM-Private Cloud Maintenance.
    4. Contact AppNeta Support to install your licenses.
    5. Go to APM-Private Cloud Configuration to configure APM-Private Cloud services.
  • Complete the initial setup.
  • In virt-manager, access the APM-Private Cloud virtual machine console.
  • Use virt-manager (or a similar application) to view the virtual machines running on the KVM host
  • Confirm that the virtual machine is persistent and will start automatically when the KVM host is restarted.
  • Start the virtual machine.
  • Set the virtual machine to start automatically when the KVM host is restarted.
  • Define a virtual machine based on the KVM domain definition file
  • Create a KVM domain definition file. See Creating a KVM domain definition file.
  • Copy the image to your KVM host machine. Copy it to a directory where you’d like the KVM disk image files to reside. For example: /data/kvm_images/myvpca
  • To install APM-Private Cloud on KVM:
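  • The define/autostart/start steps above can also be scripted with the libvirt Python bindings rather than performed in virt-manager. A minimal sketch (the domain XML path below is hypothetical):

        import libvirt

        XML_PATH = "/data/kvm_images/myvpca/myvpca.xml"  # hypothetical path

        conn = libvirt.open("qemu:///system")
        with open(XML_PATH) as f:
            dom = conn.defineXML(f.read())  # persistent domain definition
        dom.setAutostart(1)                 # start when the KVM host restarts
        dom.create()                        # boot the VM now
        print(dom.name(), "running:", dom.isActive() == 1)
        conn.close()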
  • Storage resources can be thin-provisioned
  • Prerequisites: The APM-Private Cloud KVM image requires KVM on a system capable of hosting a guest machine with the following minimum hardware requirements:
    vCPUs: 4
    Memory: 16 GB
    Hard Disk 1 (pca-base): 40 GB (SSD performance required)
    Hard Disk 2 (pca-data): 750 GB (SSD performance required)
    Hard Disk 3 (pca-backup): 2000 GB
    Hard Disk 4 (pca-flow-data): 326 GB (SSD performance required)
    Network Adapter: 1 x 1 GigE
    Video Card: 4 MB
    The compute resources should be adjusted based on planned usage:
    Trial: 4 vCPUs, 16 GB memory
    Up to 250 Monitoring Points: 4 vCPUs, 16 GB memory
    Up to 1000 Monitoring Points: 16 vCPUs, 64 GB memory
  • APM-Private Cloud is a virtual machine (VM) image that can be deployed on customer-supplied hardware running a KVM hypervisor.
14 annotations
  • Shutting down a CMP installed using Docker Compose:
    1. Login to the host the Monitoring Point is deployed on.
    2. Navigate to the directory the Monitoring Point was deployed from (it contains the mp-compose.yaml file).
    3. Perform the shutdown:
        docker-compose -f mp-compose.yaml stop
    To start it back up again:
        docker-compose -f mp-compose.yaml start
    Note: On Linux hosts, you may need to run sudo docker-compose -f mp-compose.yaml stop and sudo docker-compose -f mp-compose.yaml start for these commands to execute successfully. If necessary, you can use docker ps and docker kill to find and stop the two containers used by the Monitoring Point. Their names end in “sequencer_1” and “talos-001_1”.
  • Shutting down a CMP installed using Azure Kubernetes Service (AKS):
    1. Sign in to the Azure Cloud Shell.
    2. Determine the deployment name for the Monitoring Point:
        kubectl get deployments
    3. Perform the shutdown:
        kubectl scale deploy <deployment name> --replicas=0
    To start it back up again:
        kubectl scale deploy <deployment name> --replicas=1
  • To gracefully shut down a Monitoring Point (Monitoring Point: Shutdown method):
    m25, m35, m50, m70, r90, r1000: Press the power button on the back of the device and wait for the Power and Heartbeat LEDs to turn off.
    v35: Issue a graceful shutdown request from the hypervisor.
    r45: Use SSH to access the device and run sudo shutdown.
    m20, m22, m30, r40, r400: No graceful shutdown mechanism.
    CMP (AKS): See detail below.
    CMP (Docker Compose): See detail below.
    NMP (macOS): See detail below.
    NMP (Windows): See detail below.
3 annotations
  • To delete a Monitoring Point from your organization:
    1. Navigate to > Manage Monitoring Points.
    2. For the Monitoring Point you want to delete, select > Delete. You will be prompted to confirm this action, and optionally to move all affected paths to another Monitoring Point.
    3. For Container-based Monitoring Points (CMPs) and Native Monitoring Points (NMPs), you should also remove resources on the deployment host:
        CMP - AKS deployments, see Uninstall CMP software - AKS.
        CMP - Docker Compose deployments, see Uninstall CMP software - Docker Compose.
        NMP - Windows deployments, see Uninstall NMP software - Windows.
  • Deleting a Monitoring Point has the following effects:
    All paths where the Monitoring Point is the source (and the monitoring history related to those paths) are deleted (though they can be moved to another Monitoring Point during the delete process).
    All Usage monitoring data related to the Monitoring Point is deleted.
    Tests, assessments, and packet captures are not deleted.
    The base license and any add-on licenses that were assigned to the Monitoring Point become available again.
    If the Monitoring Point has a legacy Usage-based license assigned to it, the license is deleted.
    Access to the Monitoring Point from APM is lost.
    The Monitoring Point is decommissioned. This resets the APM connection configuration on the Monitoring Point. In order to use the Monitoring Point again you need to redo the setup procedure.
  • Deleting an Enterprise Monitoring Point (EMP) from an organization is typically done when you are moving the Monitoring Point to another organization or freeing up its base license so it can be used by another Monitoring Point.
3 annotations
  • If the script captures user credentials that have a password you do not want visible, you can declare a password variable to mask its value. Variables named “password”, “passwd”, “pwd”, or “secret” have their values masked; all other variable names do not. To declare a variable:
    1. Click Need Any Variables? on the Edit Workflow page. The Variables section appears.
    2. In the Variables section, add the variable Name and Value.
    3. Click + to add a new variable (optional).
  • Set it to slightly longer than the worst case script execution time. The maximum Timeout is testing interval / 3. NOTE: Avoid setting an overly long timeout period. If the script does time out, it will consume resources on the monitoring point for the full duration of the timeout period.
  • In the Timeout (sec) field, specify how long to wait for the script to complete before automatically terminating it.
  • In the HTTP Authentication section, specify a valid Username and Password if required by the target web application. These credentials are only used by the Selenium open command (and only if prompted for by the target) with one of the supported authentication methods.
  • Have the site you’re testing against open in Chrome on one monitor, and APM open in a different browser on another monitor
  • IP address logging/alerting
  • This ensures you don’t lose your work if the site you are testing against hangs or crashes Chrome
  • Use two different browsers on two different monitors
  • Plan ahead - As with any coding endeavor, it’s best to plan out what you’re trying to do ahead of time. Spend some time thinking about what you want to test, and what results you expect to see from it.
  • Close all unnecessary tabs. Clear out your browser’s cache and cookies. Disable any plug-ins you don’t need or that could interfere with the script, particularly script blockers and ones that automatically enter text into fields, such as password managers.
  • each time it runs, your script will be starting from a blank slate, so you’ll need to make sure the browser you are creating the script with is equally clean
  • Use a clean browser
  • minimal permissions
  • set up a dedicated account for the script to log into the application being tested
  • Obtain the login credentials
  • the script you create will be run on a Monitoring Point and it needs to be able to access the target
  • Script against an accessible target
  • script against the same version of the site that the Monitoring Point will access
  • Script against the target URL
  • Monitoring Point uses Chrome to execute the script,
  • Use Chrome for testing
  • There are several points you should consider before you start scripting:
  • You can use the Selenium scripting language to create scripts for Experience monitoring.
23 annotations
  • The element is only visible if you hover over its parent Description: You want to access an element but it is only visible when you hover over its parent. Solution: Use the mouseOver() command to simulate mousing over the parent element.
  • Solution: Use the setUserAgent() command to set the user agent explicitly so that the page renders as expected.
  • Rationale: A script can fail if it expects an element to be present but, in a given rendering, it is not.
  • The page loads in a mobile or unexpected view
  • Solution: To simplify the logout procedure in the script, use the “open(<target>)” command with the logout URL as the <target>.
  • Description: You are trying to write steps in your script to log out of a website but are having difficulty.
  • or increase the script execution interval
  • reduce the number of web paths
  • This indicates that the Monitoring Point is overloaded and transactions are failing as a result.
  • “Delayed transaction” event.
  • Purple diamond
  • Description: When a link is clicked, it opens the page in a new tab. The new tab is not active.
  • Solution: Use the selectActiveWindow() command to switch execution to the new tab.
  • Rationale: The text that Selenium is trying to match is the rendered text rather than the text in the page source
  • Description: You created a locator that matches link text and it is failing even though the text you are using exactly matches that you see in the page source.
  • Solution: Use the “clickIfVisible” command to click the element only if it is visible and continue execution if it is not.
  • Rationale: An example is a pop-up that appears at the end of each month.
  • You want the script to interact with the element when it is visible and simply continue if it is not.
  • The script fails because the element or pop-up you want to use only appears periodically.
  • Pop-up or element appears periodically
  • Solution: Use a non-dynamic attribute or, if possible, match the part of the attribute that is static.
  • Another option is to “blacklist the resource” that is slow to load
  • increase the interval between script executions in APM (configure the Web App Group
  • There are pages that take so long to load that the script times out waiting for them.
  • If the issue was due to a pop-up or overlay being in the way, you can try using “clickAt(<locator>, 0, 0)” instead of “click”.
  • If the issue was due to a resource not being available when the command executed, use a command such as “waitForElementPresent” or “waitForVisible” to wait until the element is available before trying to access it
  • a pop-up or overlay) is blocking the element from view
  • Something on the page is not yet active or loaded
  • if the element it is trying to operate on is not ready or is not visible when the command is executed, it will not work
  • The “Element is not clickable at point (x, y)” error message indicates that the “click” command used was not able to execute.
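  • In Python WebDriver terms, the wait-then-click fix described above looks like the following (illustrative only; AppNeta workflows use Selenese commands such as waitForVisible and click, and the URL and locator here are placeholders):

        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        driver = webdriver.Chrome()
        driver.get("https://app.example.com/login")

        # Wait until the element is present and visible before clicking, rather
        # than clicking immediately and failing while the page is still loading.
        button = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "submit"))
        )
        button.click()
        driver.quit()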
  • test it.
  • This issue typically occurs when the locator used with the command does not correctly reference the element you are trying to use. Solution: Revise the locator to properly identify the element
  • When using a command with the “waitFor” prefix (for example, “waitForElementPresent” or “waitForVisible”), it can time out after 30 seconds rather than continue immediately as expected.
  • The “AndWait” suffix should only be used with elements that cause a new page to load.
  • can time out after 30 seconds rather than continue immediately as expected
  • “AndWait
  • a helpful way to debug script-related problems is with the “captureEntirePageScreenshot” command. It records a screenshot at its location in the script. Placing this command before and after commands in question is a good way to observe what is seen before and after a command is invoked
37 annotations
  • Different port used
  • No encryption
  • SSL encryption
  • No server certificate validation
  • Credentials stored in a different group
  • Make sure all fields in the LDAP configuration have been set correctly for your LDAP environment
  • Make sure the Monitoring Point has network access to the LDAP server. For example, make sure the appropriate firewall ports are open. LDAP uses TCP and UDP port 389. LDAPS uses TCP and UDP port 636.
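  • A quick way to verify reachability and the bind credentials from the same network segment is the ldap3 Python library (this exercises the same prerequisites the Monitoring Point needs; the server URI, bind DN, and search base below are placeholders):

        from ldap3 import Server, Connection, ALL

        server = Server("ldaps://ldap.example.com", port=636, use_ssl=True,
                        get_info=ALL)
        conn = Connection(server, "cn=reader,dc=example,dc=com",
                          "read-only-password", auto_bind=True)
        conn.search("dc=example,dc=com", "(objectClass=groupOfNames)",
                    attributes=["cn", "member"])
        print(conn.entries[:3])
        conn.unbind()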
  • Troubleshooting
  • LDAP is configured on a Monitoring Point using the Admin API.
  • LDAP setup on a Monitoring Point should be completed by a system administrator with good knowledge of LDAP system administration
  • what your policy is for authenticating responses from the LDAP server. If a CA certificate is required, you’ll need to upload it to the Monitoring Point.
  • Certificate requirement
  • Encryption type
  • The base DN from which LDAP searches will be executed
  • Search base
  • read-only directory access credentials
  • credentials required to bind to the LDAP server
  • If required by the server
  • Bind name/password
  • The DN of the authorization group containing the admin users
  • Authorization group
  • Active Directory and Oracle DSEE use different schemas
  • Server type
  • the port it is using
  • address of the server and
  • Prerequisites for LDAP configuration
  • Server URI
  • Monitoring Point makes a request to the LDAP server to authenticate the user
  • log in
  • using their credentials as stored on the LDAP server
  • Once the Monitoring Point is configured
  • Monitoring Point must then be configured to access the LDAP server and search the correct group for user credentials when a login attempt is made
  • accessible by the Monitoring Point and an authorization group on the server containing Monitoring Point administrators
  • need an LDAPv3 compliant server
  • Only members of this group
  • Settings supplied by the network administrator that tell the Monitoring Point to authenticate via LDAP, and where to find the server and authorization group.
  • LDAP configuration
  • perform administration tasks
  • Created by the network administrator
  • Authorization group
  • Logs into Monitoring Point (using their own desktop credentials) and assumes administrator privileges
  • Monitoring Point administrator
  • Applies LDAP configuration to Monitoring Points
  • Has access to the directory server and controls authentication
  • Network administrator
  • Key concepts
  • Better security forensics
  • Centralized password policy
  • Easy to control access
  • More secure
  • More convenient for administrators
  • allows administrative users to log in to Web Admin or the Admin API using their own credentials rather than the administrator credentials configured on the Monitoring Point.
  • AppNeta provides support for LDAP
  • Lightweight Directory Access Protocol (LDAP
54 annotations
  • if the target detects a QoS change in a packet sent to it, it informs the source and the source generates an alert. If the source detects a QoS change in incoming packets, it also generates an alert.
  • source and target Monitoring Points are both configured to use the user-specified QoS value
  • For dual-ended paths, QoS changes are detected using only UDP messages
  • Note: Because the “Port Unreachable” check normally occurs every five minutes, if a QoS change violation occurs, it takes at least five minutes to clear.
  • the result is indeterminate and no alert is generated
  • If the markings are the same
  • If the markings are different - a change took place somewhere on the outbound path and an alert is generated.
  • The expectation is that the target will reply with an ICMP “Port Unreachable” message containing the header of the denied packet in its payload. The QoS markings in that header are then compared to the QoS markings that were sent to determine if they were altered on the outbound (source to target) path.
  • If they are zero (i.e. cleared) - then, because some targets clear QoS markings in ICMP echo replies (and we do not want to generate an alert if the target clears the markings), we need the results of a UDP test
  • If they are different than what was sent and are non-zero - a change took place somewhere on the path and an alert is generated
  • no alert is generated
  • If they are the same
  • An ICMP echo response is returned and the QoS markings on the response are evaluated:
  • An ICMP echo request is sent from the source
  • For single-ended paths, QoS changes are detected using both ICMP and UDP messages
  • a QoS change is detected
  • an alert profile with the QoS change condition configured is applied
  • the network path is configured with the QoS Settings field set to something other than “None”
  • APM can generate an alert if it detects that QoS markings are altered
  • If these markings are altered by a device in the network, a poor user experience can occur
  • QoS markings on packets are used to prioritize traffic
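  • The decision logic described in the notes above can be summarized in a short sketch (illustrative only, not AppNeta’s implementation):

        def evaluate_icmp(sent_dscp: int, reply_dscp: int) -> str:
            """Evaluate QoS markings on an ICMP echo reply (single-ended path)."""
            if reply_dscp == sent_dscp:
                return "no alert"                     # markings unchanged
            if reply_dscp == 0:
                return "indeterminate: run UDP test"  # target may clear markings
            return "alert: QoS changed on path"       # changed and non-zero

        def evaluate_udp(sent_dscp: int, echoed_dscp: int) -> str:
            # The ICMP "Port Unreachable" reply echoes the denied packet's header,
            # revealing the marking as it arrived at the target (outbound path).
            return "no alert" if echoed_dscp == sent_dscp else "alert: QoS changed"

        print(evaluate_icmp(sent_dscp=46, reply_dscp=46))  # no alert
        print(evaluate_icmp(sent_dscp=46, reply_dscp=0))   # indeterminate
        print(evaluate_udp(sent_dscp=46, echoed_dscp=0))   # alert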
21 annotations
  • Each time a script is run or an HTTP request is made against a web application it is known as a web path test.
  • The set of web paths associated with a given web application are grouped together in a web app group
  • known as a web path
  • Setting up Experience monitoring involves specifying which Monitoring Points are going to monitor a given web application and creating a script or specifying the HTTP request to interact with the application
  • you can set alerts
  • HTTP workflows generate HTTP requests from a Monitoring Point to an application’s API
  • determine the web app’s availability and responsiveness
  • generate direct HTTP requests that emulate a client app’s interactions with the web service using GET, PUT, or POST commands
  • HTTP workflows are primarily used to monitor web service APIs
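  • The kind of request an HTTP workflow issues can be pictured with the requests library (the URL, token, and timing check below are placeholders, not APM’s implementation):

        import requests

        resp = requests.get(
            "https://api.example.com/v1/status",
            headers={"Authorization": "Bearer <token>"},
            timeout=10,
        )
        resp.raise_for_status()  # a non-2xx response would mark the app unavailable
        print(resp.status_code, resp.elapsed.total_seconds(), "s")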
  • you can set alerts
  • also breaks down the measurements by milestone within the script
  • Each time a script is run, the Monitoring Point measures the amount of time taken by the browser, the network, and the server running the application
  • Browser workflows use a Chrome browser located on the Monitoring Point to run scripts that emulate the workflow of a typical user
  • If the issue is with the server or the web app running on it, you can determine where in the app the issue is originating.
  • can also determine whether degradation in responsiveness is due to the browser, the network, or the server
  • allow you to determine the web app’s availability and responsiveness
  • scripted synthetic transactions that emulate an end user’s interactions with a web page through a browser
  • Browser workflows are primarily used to monitor HTML-based web apps
  • are easily identified within APM.
  • Performance-affecting changes
  • show how application performance changes over time
  • Because the workflows are run at regular intervals
  • the monitoring results accurately reflect the application performance experienced at those locations
  • Because the workflows are executed from Monitoring Points at various locations
  • HTTP workflows APIs or web services Direct HTTP requests
  • Browser workflows Web apps (HTML-based) End user interactions with a web page
  • Experience monitoring provides you insight into how web applications are performing from a user or client application perspective. Monitoring Points execute transactions that emulate user or client interactions with an application.
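As a rough illustration of the HTTP-workflow idea above, the sketch below issues one timed GET and reports availability and response time. The endpoint URL is a placeholder and this is not AppNeta's monitoring code; a real workflow would run at regular intervals from each Monitoring Point:

    import time
    import urllib.request

    def http_workflow(url: str, timeout: float = 10.0) -> dict:
        """Issue one synthetic HTTP request and time the response."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 400
        except OSError:
            return {"available": False, "response_time": None}
        return {"available": ok, "response_time": time.monotonic() - start}

    print(http_workflow("https://api.example.com/health"))  # placeholder URL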
27 annotations
  • Milestone Apdex - is calculated based on the page loads within a given milestone over the previous two hours. Web path Apdex - is calculated based on the page loads within all milestones on a given web path over the previous two hours. Web application Apdex - is calculated based on the page loads within all milestones, on all web paths, for a given web application over the previous two hours.
  • Apdex scores are calculated for every milestone, web path, and web application.
  • Each satisfied page load is counted as 1. Each tolerating page load is counted as 1/2. All others (those taking longer than the ‘tolerating threshold’ and those that fail) are counted as 0.
  • So the Apdex score is simply a ratio of satisfied and tolerated page load response times to the total number of page load requests made.
  • Apdex_t = (Satisfied count + (Tolerating count / 2)) / Total samples (a worked example follows this list)
  • Satisfied count - the number of page load samples where the response time is less than the ‘satisfied threshold’ (t). Tolerating count - the number of page load samples where the response time is between the ‘satisfied threshold’ (t) and the ‘tolerating threshold’ (by default, 4 x t). Total samples - the total number of page load samples. In APM, the samples used for an Apdex calculation are from the previous two hours.
  • t - the ‘satisfied threshold’ (in seconds) under which the user is satisfied with an application’s response. By default, this is four seconds.
  • Apdex uses a simple formula to calculate user satisfaction. The result - the Apdex ‘score’ - is a single number between 0 and 1 where 1 indicates that a user would be completely satisfied with the application response time. APM presents the Apdex score as a percentage from 0% to 100%.
  • Apdex is an industry-standard method for reporting and comparing application performance in terms of end user experience
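The formula above is easy to verify with a small worked example. A sketch, assuming failed page loads are represented as None and the default thresholds of t and 4t:

    def apdex(samples, t=4.0):
        """Apdex_t = (satisfied + tolerating / 2) / total samples."""
        satisfied = sum(1 for s in samples if s is not None and s <= t)
        tolerating = sum(1 for s in samples if s is not None and t < s <= 4 * t)
        return (satisfied + tolerating / 2) / len(samples)

    # 3 satisfied, 1 tolerating, 1 failed -> (3 + 0.5) / 5 = 0.70, i.e. 70%
    print(apdex([1.2, 3.9, 2.0, 6.5, None]))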
9 annotations
  • Access to systems and customer information in the AppNeta Performance Manager (APM) is controlled using a policy of need-to-know/least privilege
  • Only Organization Admins can download the audit log.
  • The APM audit log file contains records of all actions performed on APM, when they were performed, who performed them, and where they were performed from
  • Software packages are downloaded from the upgrade repository via SSL
  • Linux-based NMPs run as root and require outbound connections to APM servers to report the timing data and to download software updates. Timing data is sent back to APM via HTTPS
  • prompted for a passphrase once per Monitoring Point per login session
  • The symmetric key used for encryption is based on a per-Monitoring Point, user-defined passphrase
  • Captures must be decrypted using the symmetric key created from the passphrase
  • Captures are uploaded to the Capture Server via SSL where they are encrypted using an AES 256-bit key prior to their transfer to Amazon S3
  • APM uses standard encryption practices to ensure that the information in your packet captures is securely transmitted and stored.
  • are masked within the script editor
  • Password variables
  • is done through a secure channel (SSL/TLS)
  • Transmission
  • All Experience workflow script contents, including stored passwords, are encrypted while at rest within the APM database
  • AppNeta utilizes NIST SP 800-88 on Data Sanitization as its guideline
  • Customer data is purged within 90 days of decommissioning or contract termination
  • Only key engineers may access production data
  • confidentiality agreements
  • Data access is restricted solely to AppNeta employees
  • AppNeta Performance Manager (APM) is hosted on Amazon Web Services
  • we negotiate to TLS 1.2 whenever possible
  • We also run Rapid7 vulnerability scans on all releases to ensure that no new vulnerabilities have been created
  • current generation of modern Monitoring Points will always negotiate to the highest protocol level - TLS 1.2
  • Within the cloud application infrastructure, each unique SSL/TLS tunnel connection is identified by the GUID associated with the Monitoring Point. Since each GUID is associated with exactly one organization, we ensure that all of the telemetry data arriving on that tunnel is directed to the data store and/or scheme associated with that organization
  • APM-Public will only support TLS 1.2 and higher
  • all delays or failed attempts reset after an hour of inactivity
  • all delays or failed attempts reset after the first success
  • the 30 second delay recurs after each subsequent failure
  • after 5 consecutive failures, a delay of 30 seconds is imposed
  • up to 5 attempts are allowed with no delay
  • If a user session has been idle for more than 10 minutes, their session times out
  • APM passwords must contain a minimum of eight characters and must include uppercase alphabetic, numeric, and special characters
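The lockout behavior above amounts to a small state machine. A minimal sketch with hypothetical names; the thresholds (5 attempts, 30 seconds, one hour) are taken from the bullets above:

    import time

    class LoginThrottle:
        FREE_ATTEMPTS, DELAY_S, IDLE_RESET_S = 5, 30.0, 3600.0

        def __init__(self):
            self.failures = 0
            self.last_attempt = 0.0

        def required_delay(self) -> float:
            """Delay to impose before the next attempt is allowed."""
            if time.monotonic() - self.last_attempt > self.IDLE_RESET_S:
                self.failures = 0          # reset after an hour of inactivity
            return self.DELAY_S if self.failures >= self.FREE_ATTEMPTS else 0.0

        def record(self, success: bool) -> None:
            self.last_attempt = time.monotonic()
            # reset after the first success; otherwise the delay recurs
            self.failures = 0 if success else self.failures + 1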
33 annotations
  • monitor your network and web applications
  • DNS monitoring is currently not supported. Migrate monitoring is not currently supported. AppNeta Synthetic scripting is not supported. Traceroutes outbound from a GMP will not show the identity of any hops between the source and target
  • GMP limitations
  • Performance baseline
  • Regional reference
  • Monitor performance from specific regions to apps used by your distributors, integrators, or retail locations.
  • Partner substitute
  • Monitor performance to core public-facing apps or internet-facing services from global locations representative of your customer base
  • Customer substitute
  • typically deployed in one of the following scenarios
  • used only for Delivery and Experience monitoring
  • Installed in global cloud provider locations selected by you
  • Global Monitoring Points are Container-based Monitoring Points owned by you but managed by AppNeta
  • macOS automatic time updates cause periodic jitter spikes and packet discards in voice and video tests
  • does not support Apple M-series processors
  • On macOS machines:
  • TCP Traceroute and TCP Ping are not supported
  • Can be used at a maximum speed of 500 Mbps
  • On Windows machines:
  • Capacity measurements in either direction are problematic on WiFi networks
  • Scheduled access network links (for example, Fibre PON, DOCSIS Cable), typically provided by ISPs to residential customers, can show lower uplink capacity than expected.
  • Does not support voice or video testing when installed on a virtual machine
  • Does not support PathTest
  • Delivery monitoring only
  • typically used to monitor network performance from the perspective of a work-from-home user.
  • NMP Enterprise Monitoring Point is software that runs on the native operating system of a host computer and is used for Delivery monitoring
  • Any virtual machine that comes up without the following minimum requirements will show up in APM as a v25, which is an unsupported Monitoring Point type that cannot be licensed
  • 10Gbps NICs are supported on eth0, eth2, and eth3. Only 1Gbps NICs are supported on eth1 - the Usage monitoring port.
  • AppNeta Synthetic scripting is not supported.
  • Migrate monitoring is not currently supported.
  • DNS monitoring is currently not supported.
  • only one CMP on a given deployment host
  • CMP limitations include:
  • The host requirements for the CMP are as follows:
  • To target a CMP deployed using AKS, run the terraform output command within the Azure Cloud Shell to determine the Load balancer fqdn to target.
  • When deployed on Azure using AKS, a redundant instance is automatically created and failover is automatic.
  • targeting a CMP, use dual-ended monitoring
  • Capacity measurements can be influenced by networking on the host, kernel version on the host, other containers sharing the host
  • applicable orchestration command
  • does not have a Web UI
  • The CMP can be licensed as either a c10 or as a c50
  • Only one CMP can be running on a given host
  • The CMP can be deployed in Azure cloud using AKS orchestration or on Azure, AWS, a server, or a workstation, using Docker Compose
  • Typically though, this use case is achieved using a Native Monitoring Point
  • deployed on a user workstation to monitor network and application performance from the perspective of the work-from-home user
  • deployed on a server in a remote office to monitor network and application performance from the perspective of users at that office.
  • also be used to measure baseline user experience
  • used as a target
  • deployed in the same Virtual Private Cloud (VPC) as your critical applications
  • User experience can then be measured from that region
  • deployed in the public cloud in a region
  • Monitoring from a workstation
  • Monitoring from a remote office
  • Monitoring to a cloud-based app
  • Monitoring from a remote region
  • supports Delivery/Experience monitoring up to 1Gbps
  • CMP Enterprise Monitoring Point is software that runs in a Docker container
  • Using PathTest with TCP, the maximum load generated is 37Gbps.
  • On idle 100Gbps networks you may see average network utilizations of approximately 2Gbps. This shows as “Utilized Capacity (avg)” on Delivery
  • For Diagnostics tests, the maximum “Total Capacity” measurement is approximately 50Gbps for both dual- and single-ended paths.
  • On 100Gbps networks, the maximum “Total Capacity” we can detect is approximately 96Gbps (on a dual-ended path, approximately 85Gbps on a single-ended path) rather than 99.58Gbps (maximum capacity with 9000-byte MTU taking packet overhead into consideration)
  • In order to achieve accurate capacity measurements, the MTU size is important. For 100Gbps networks, use a 9000-byte MTU. For 40Gbps networks, use a minimum 4000-byte MTU.
  • On 40Gbps and 100Gbps networks, there are a few points to be aware of
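The 99.58Gbps ceiling quoted above follows directly from Ethernet framing overhead. A back-of-the-envelope check, assuming the standard 38 bytes of per-frame overhead (preamble and start delimiter, header, FCS, and inter-frame gap):

    LINE_RATE_GBPS = 100
    OVERHEAD_BYTES = 38    # 8 preamble/SFD + 14 header + 4 FCS + 12 gap

    for mtu in (1500, 4000, 9000):
        capacity = LINE_RATE_GBPS * mtu / (mtu + OVERHEAD_BYTES)
        print(f"MTU {mtu}: {capacity:.2f} Gbps")   # MTU 9000 -> 99.58 Gbps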
65 annotations
  • Compare application performance from the data center with that from user locations (Experience monitoring)
  • Create dual-ended paths to test WAN connectivity between the data center and user locations
  • another point of reference for network and application performance monitoring
  • Delivery and Experience monitoring - Deploy in the core (data center) as well as at user sites
  • deploying at an aggregation point - typically at a gateway or a VLAN trunk.
  • Delivery monitoring - Deploy such that you can monitor end-to-end from the user to an application.
  • same physical location, same subnet, same VLAN if applicable, same QoS characteristics if applicable, etc.
  • You want your monitoring results to mimic those of a user, so the closer the network environment you deploy in is to that of the user, the better
  • Experience monitoring - Deploy as you would deploy a typical user.
  • helpful when you need to know which user is connected to a particular application
  • allows you to see private IP addresses rather than public IP addresses
  • Usage monitoring - Deploy downstream of network address translation (NAT).
  • monitor traffic from as many users and applications as possible.
  • Typically at a gateway or a VLAN trunk
  • Usage monitoring - Deploy at an aggregation point.
  • more accurate picture of what users are experiencing
  • General - Deploy behind the firewall.
  • deploying “as close to the network core as possible”
  • Considerations for Monitoring Point placement
  • requiring cloud-based monitoring but preferring to have AppNeta provide Monitoring Point management
  • GMP
  • hosting cloud-based applications
  • recommend deploying a CMP in each region or application environment.
  • CMP
  • Recommended Monitoring Points by location:
  • Cloud - CMP
  • Data center - r1000, v35, CMP
  • Headquarters - r90, v35, CMP
  • Branch offices - m70 (includes WiFi), CMP
  • Work-from-home user workstations - NMP
  • recommend deploying one Monitoring Point at each location you want to monitor
39 annotations
  • If the Windows NMP is to serve as a target for single-ended paths, you need to add an inbound rule to allow ICMPv4 “Echo Request” packets to any program
  • When you install the AppNeta Native Monitoring Point (NMP) on a Windows machine, firewall rules are automatically added during the installation process. They are all inbound rules and include: Allow ICMPv4 “Echo Reply” (Type 0, Code Any) packets to the NMP Allow ICMPv4 “Destination Unreachable” packets to the NMP Allow UDP packets to the NMP Allow ICMPv4 “Time Exceeded” packets to any program
  • Azure - If you are deploying within Azure, the Azure firewalls and Network Security Groups should be configured with “Allow Inbound ICMP”. AWS - If you are deploying within AWS, the default security group needs a rule to allow inbound ICMP Echo Requests.
  • If you are installing the CMP using Docker Compose, you need to configure the following firewall rules for Delivery monitoring. All the other (non-Delivery) firewall rules shown above still apply:
  • If you are installing a CMP in Azure using AKS, there are no additional firewall rules to configure. The install process takes care of the firewall rules.
  • Additional firewall rules may be required depending on where you are deploying the Container-based Monitoring Point (CMP).
  • Firewall rules allowing access to a Simple Network Management Protocol (SNMP) Network Management System (NMS) server are only required if the Monitoring Point is configured for SNMP notification forwarding and the server is external.
  • only required if the resolver is external.
  • only required if the server is external.
  • The Monitoring Point needs inbound and outbound connections for Network Time Protocol (NTP) to ensure precise timestamping
  • Domain Name System (DNS) is required for hostname to IP resolution
  • Container-based Monitoring Point (CMP), proxy configuration is done on the Docker host
  • If the proxy service requires authentication, it must use either basic or digest authentication; NTLM and Kerberos are not supported.
  • This might be the case if the Monitoring Point is deployed in a subnet reserved for network infrastructure rather than end-stations
  • If HTTP traffic is directed to a proxy server, make sure that no ACLs prevent the Monitoring Point from connecting to it (for example, permit tcp host device-ip host proxy-ip eq proxy-port)
  • Allow outbound connections on port 443 if a workflow includes logging in to the target site.
  • Outbound TCP connections on port 80 are essential to Experience monitoring
  • Experience monitoring is a fundamental feature of the AppNeta solution so ports related to Experience monitoring should be opened.
  • port forwarding is required on the remote firewall to route traffic to the target Monitoring Point.
  • Monitoring Point is a target behind a NAT device
  • Delivery monitoring is a fundamental feature of the AppNeta solution so ports related to Delivery monitoring should be opened
  • Specifying *.pm.appneta.com provides access to all AppNeta APM servers but you can create rules for specific APM servers you need to access
  • Access to APM is mandatory.
  • In addition to setting firewall rules, you can create Access Control Lists (ACLs) on some Monitoring Point models to restrict inbound access
  • Additional configuration beyond this is based on your monitoring needs
  • At a minimum, the Monitoring Point must be able to connect to APM
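Since access to APM is mandatory, a quick sanity check of the firewall rules is a TCP connection test to your APM server on port 443. A minimal sketch; the hostname below is a placeholder under *.pm.appneta.com and should be replaced with the server your organization uses:

    import socket

    def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(can_reach("demo.pm.appneta.com"))   # placeholder APM server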
26 annotations
  • Only one CMP can be installed on a host.
  • You can install a CMP using AKS either via the APM user interface or via the APM API.
  • need “Owner” or “Contributor” and “User Access Administrator”
  • best place in your network to deploy
  • Kubernetes 1.14+ Azure Cloud Shell, which also provides: Terraform v0.12.23+ Helm v3.1.1+
  • Prerequisites
  • Installing the CMP using AKS
  • Also, if you are installing on Azure, always use a dedicated instance and “host networking”.
  • If you do use “bridge networking” and install multiple CMPs on the same host, only one of them can be a path target.
  • Important: CMPs can be installed using either “host networking” or “bridge networking”. We recommend using “host networking” unless you are sharing a host/instance with other containerized apps
  • Install Docker
  • AWS: An AWS account. A Security Group with a rule to allow inbound SSH (AppNeta firewall rules are added to this during setup below). A key pair to allow SSH access to your instance (for example, ssh -i key-pair.pem user-name@x.x.x.x). An EC2 instance of the appropriate size that supports nitro in a VPC associated with the Security Group. A Hardware Virtual Machine (HVM) image that supports enhanced networking.
  • Azure: An Azure account. “Owner” or “Contributor” and “User Access Administrator” roles assigned
  • Linux hosts: Docker Engine 18.06.0+ Docker Compose file format 2.4
  • Windows hosts: Docker Desktop 2.2.0.5+ Nested virtualization needs to be enabled when running in a VM. VirtualBox and Docker Desktop (or rather its underlying technology - Hyper-V) cannot be run at the same time on Windows
  • Prior to deployment you’ll need to: Determine the best place in your network to deploy the Monitoring Point. Configure your firewall rules to enable the Monitoring Point access to APM.
  • Server or Workstation Use Docker Compose.
  • AWS Use Docker Compose.
  • To install the CMP into an Azure Virtual Network (VNet) (Azure’s version of a Virtual Public Cloud) that you control, you’ll need to use Docker Compose.
  • Docker Compose can also be used to deploy a CMP.
  • Azure Kubernetes Service (AKS) can be used for simplified container deployment and management.
  • Microsoft Azure
  • Monitoring from a workstation
  • Monitoring from a remote office
  • Monitoring to a cloud-based app
  • Monitoring from a remote region
  • Typical use cases for the CMP include:
27 annotations
  • Alert settings
  • you should definitely configure network paths that use it as dual-ended
  • Typically, more bandwidth is allocated to the downlink direction (traffic flowing from the internet) than to the uplink direction (traffic flowing to the internet). Because of this, this link can be a network bottleneck.
  • Prior to creating a network path, you should determine whether QoS is applied within your administrative domain (AD). If it is, you’ll specify that in the network path configuration, and choose an alert profile with a QoS condition so that you can verify that the QoS you expect isn’t being changed en route.
  • should take into consideration the prioritization, or quality of service (QoS), the traffic receives while in transit
  • Monitoring the path
  • video traffic is treated as a large data transfer
  • To monitor a network for voice quality, you need to know which audio codec your VoIP deployment is configured to use so that APM can properly simulate that traffic
  • specify the Network Type according to these definitions.
  • LAN path - < 4 hops (including the target) or latency < 5ms. WAN path - >= 4 hops (including the target) and latency >= 5ms.
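Those definitions reduce to a one-line rule; a trivial sketch with illustrative names:

    def network_type(hops_including_target: int, latency_ms: float) -> str:
        """Classify a path per the LAN/WAN definitions above."""
        if hops_including_target >= 4 and latency_ms >= 5:
            return "WAN"
        return "LAN"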
  • Those with a LAN target do not
  • Network paths created in Delivery monitoring with a WAN target consume an application license
  • Asymmetric link
  • QoS settings
  • Data or Voice
  • Network type
  • For CMPs deployed using AKS, we recommend dual-ended paths.
  • If you target a Container-based Monitoring Point (CMP) deployed using AKS, run the terraform output command within the Azure Cloud Shell to determine the Load balancer fqdn to target.
  • you may only be able to connect to the outside interface of a remote WAN router
  • Voice handsets, depending on the model, can make good targets
  • voice test, you’ll need a Monitoring Point as the target
  • best target is an AppNeta Monitoring Point in the same network segment as the handset or workstation
  • monitor voice or video
  • If you target a workstation, ensure it is configured to allow ICMP responses and is not protected by Symantec endpoint protection; otherwise, you’ll need to create an exception. Windows machines can be targeted at up to 500 Mbps.
  • If you want to monitor the performance of a web application, target the webserver it runs on
  • Servers and workstations are good targets but you may see as low as 70% of the expected bandwidth, depending on the quality of the NIC, the network driver, and the operating system.
  • AppNeta Monitoring Points and AppNeta WAN targets make the best targets
  • Target selection considerations include:
  • network devices show a lower than expected capacity
  • end-stations report the expected capacity
  • The image below shows the difference between targeting network devices and targeting end-stations on a gigabit network
  • If you try to target a network device such as a router, the capacity measurements you see may be less than expected as routers prioritize traffic forwarding over responding to ICMP echo requests
  • The goal is typically to measure performance through the network
  • Target selection
  • the connectivity check will pass but the subsequent diagnostic will hang
  • all network paths that use the interface will enter the connectivity lost state
  • interfaces with DHCP-assigned addresses that become unavailable
  • Known issue on m22, m30, r40, r400:
  • To use a secondary port for a network path: when creating the path, select the appropriate Local Network Interface in Step 1 of the Path Setup Wizard.
  • If the secondary port does not appear: confirm that your Monitoring Point has a secondary network connection port, configure the additional physical interface or wireless interface, then re-run the Path Setup Wizard.
  • Some Monitoring Points have secondary network connection ports
  • By default, all network paths use the primary network connection port (the default interface)
  • Source interface selection
  • If you want the return path to target the source Monitoring Point hostname, create the path using the target’s hostname
  • If you want the return path to target a specific source Monitoring Point interface, create the path using that interface and specify the target’s IP address
  • When identifying a dual-ended path target, use either its IP address or its hostname
  • the path from the target Monitoring Point back to the source depends on how the path is set up
  • Determining the return path for a Dual-ended path
  • possible that their respective performance metrics do not match
  • protocols may be treated differently in your network
  • two path types use different protocols
  • there are some restrictions
  • exceptions are the Latency and Round-Trip Time charts
  • path performance page displays one chart for each direction
  • use UDP in addition to ICMP
  • require Monitoring Points at both ends
  • Characteristics of dual-ended paths
  • dual-ended monitoring, capacity can be measured in each direction independently.
  • using single-ended monitoring will only detect the slower of the two directions
  • Monitoring in both directions is required for paths that include a link provisioned for asymmetric loads (for example, a typical home ISP connection)
  • AppNeta Monitoring Point is required as a target to respond to UDP probes and to initiate Diagnostics tests in the reverse direction
  • uses UDP in addition to ICMP
  • A dual-ended path is one that is monitored independently in both directions.
  • Network devices and end-stations generally respond to ICMP echo requests, so the target does not need to be a Monitoring Point.
  • sending a train of ICMP echo requests to the path target and measuring various aspects of the packets received in response
  • single-ended path is one that is monitored in only one direction: source to target
  • Single-ended vs Dual-ended paths
78 annotations
  • The Mean Opinion Score (MOS) is an estimate of the rating a typical user would give to the sound quality of a call. It is expressed on a scale of 1 to 5, where 5 is perfect. It is a function of loss, latency, and jitter. It also varies with voice codec and call load.
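MOS is conventionally derived from an E-model R-factor computed from loss, latency, and jitter. The sketch below is the standard ITU-T G.107 mapping from R to MOS, shown for illustration; it is not necessarily the exact computation APM uses:

    def r_to_mos(r: float) -> float:
        """Map an E-model R-factor (0-100) to a 1-5 MOS estimate (ITU-T G.107)."""
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

    print(r_to_mos(93.2))   # ~4.41, the usual best case for the G.711 codec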
  • Measurements are taken every minute. Each iteration consists of multiple packet trains, each with a specific packet size and number of packets (up to 50) per train. Sending multiple packet trains reduces the effect of packet loss. Sending large packets guarantees queuing. If packets did not get queued, there would be no dispersion and capacity would be overestimated. The initial packet size is the path MTU (PMTU). PMTU is the largest packet size a path can handle without fragmentation. APM considers the best option for total capacity measurement to be a packet train with the largest packet size that experiences no packet loss.
  • To understand how this works, imagine two packets of equal size are sent back-to-back with no other traffic on the line. We’re interested in the distance between those packets by the time they reach the target. The packet dispersion is the time between the arrival of the last byte of the first packet and the last byte of the second packet.
  • TruPath uses packet dispersion analysis to calculate capacity
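Dispersion converts directly into a capacity estimate: if two equal-size back-to-back packets arrive a time Δt apart, the bottleneck forwarded one packet’s worth of bits in Δt. A minimal sketch of that arithmetic:

    def capacity_from_dispersion(packet_bytes: int, dispersion_s: float) -> float:
        """Bottleneck capacity estimate (bits/s) from back-to-back dispersion."""
        return packet_bytes * 8 / dispersion_s

    # A 9000-byte packet arriving 72 microseconds behind its twin
    # implies roughly a 1 Gbps bottleneck.
    print(capacity_from_dispersion(9000, 72e-6) / 1e9, "Gbps")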
  • Bandwidth is the transmission rate of the physical media. It is the number quoted by your ISP but it does not take into consideration any protocol overhead or queuing delays. Capacity, on the other hand, takes these into consideration.
  • Available capacity is the part of the total capacity that is available for use. Utilized capacity is the part of the total capacity that is in use.
  • Total capacity is the highest transmission rate that you can achieve between a sender and a receiver
  • Delivery monitoring provides both data and voice loss metrics
  • With UDP (VoIP, for example), the loss may or may not have a significant effect on the conversation, depending on how much loss is experienced, as UDP packets are not retransmitted
  • heavy data loss can cause many retransmissions and can significantly impact throughput
  • traffic congestion along the network path, an overloaded network device, bad physical media, flapping routes, flapping load balancing, and name resolution issues
  • Packet loss, whether the packets are data or voice, is simply a measure of the number of packets that did not make it to their intended destination
  • Delivery monitoring provides both data and voice jitter metrics
  • Severe jitter is almost always caused either by network congestion, lack of QoS configuration, or mis-configured QoS.
  • In order to accurately recreate the media at the receiver end, those packets must arrive at a constant rate and in the correct order. If not, the audio may be garbled, or the video may be fuzzy or freeze
  • Jitter affects time-sensitive applications that use UDP but does not affect applications using TCP
  • Jitter, also known as packet delay variation, is a measure of variation in latency
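One common way to turn “variation in latency” into a number is the RFC 3550 interarrival-jitter estimator, a smoothed average of transit-time differences. A sketch of that generic estimator (not necessarily the exact metric APM reports):

    def interarrival_jitter(transit_ms):
        """RFC 3550 smoothed jitter over a sequence of per-packet transit times."""
        jitter = 0.0
        for prev, cur in zip(transit_ms, transit_ms[1:]):
            jitter += (abs(cur - prev) - jitter) / 16
        return jitter

    print(interarrival_jitter([20.0, 22.5, 19.0, 35.0, 21.0]))   # milliseconds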
  • There are several ways latency gets introduced into your data stream. The first is propagation delay. This is the time it takes for a signal to propagate across a link between one device and another. In general, the farther apart the devices are, the greater the propagation delay. The second is queuing delay. Queuing delay is introduced when a network device is congested and can’t route incoming packets immediately upon ingress. Finally, there is handling delay. This is the time it takes to put a packet on the wire. Generally, this is negligible compared to the other two.
  • For time-sensitive applications that use UDP (for example, real-time voice and video streaming and Voice over IP (VoIP)), large latencies can introduce both conversational difficulty and packet loss.
  • the effect of latency is compounded due to the way its congestion control mechanism works. This results in a major decrease in TCP throughput
  • High latency values have a detrimental effect on applications that use TCP and time-sensitive applications that use UDP
  • Latency, the time it takes for a packet to go from a source to a target
  • RTT is the time it takes for a packet to go from a source to a target and back
  • round-trip time (RTT)
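The throughput penalty noted above is easy to quantify: a TCP connection can never move more than one window of data per round trip, so throughput is bounded by window / RTT. A back-of-the-envelope check with an assumed 64KB window:

    WINDOW_BYTES = 64 * 1024   # assumed window size for illustration

    for rtt_ms in (10, 50, 200):
        mbps = WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6
        print(f"RTT {rtt_ms} ms -> at most {mbps:.1f} Mbps")
    # 10 ms -> 52.4 Mbps; 50 ms -> 10.5 Mbps; 200 ms -> 2.6 Mbps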
  • voice traffic has smaller payloads with wider packet spacing
  • Delivery monitoring provides tools to evaluate network performance for both data and voice traffic
  • seeing potentially different performance metrics than traffic with unmarked packets, you will also be able to see which hops (if any) are changing the markings
  • Specifying DSCP markings on test packets sent by TruPath
  • potential cause of poor quality with delay-sensitive traffic
  • DSCP markings are even changed as they pass through a hop
  • Because honoring these values is not mandatory, there can be variations in how traffic is handled at various hops along a network path through the internet
  • DSCP 0 (the default value) means forward with “best effort”, whereas DSCP 46 (0xEF) means “high priority expedited forwarding”
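DSCP occupies the top six bits of the IP ToS byte, so marking traffic comes down to setting that byte on the socket. A minimal sketch using the standard IP_TOS socket option; the target address and port are placeholders:

    import socket

    EF = 46   # DSCP 46: "high priority expedited forwarding"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)  # DSCP << 2
    sock.sendto(b"probe", ("192.0.2.1", 33434))   # placeholder target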
  • Some DSCP markings have agreed upon meanings and others do not
  • categorized and handled appropriately by network devices
  • Differentiated Services Code Point (DSCP) values
  • different traffic types can be marked
  • delay-sensitive traffic can be allocated dedicated bandwidth and separate queuing on a network device so that it passes through the device more quickly than delay-insensitive traffic
  • traffic flows can be prioritized
  • QoS) is the mechanism used to manage packet loss, delay, and jitter by categorizing traffic types and then handling them appropriately
  • VoIP traffic is very sensitive to delay and jitter - the variation in packet delay
  • file transfers must not lose data but delays between packets in the transfer are not a problem
  • QoS
  • WWW browsing, file transfer, email, and video streaming services like Youtube and Netflix
  • used for applications where reliability is more important than reduced latency
  • a connection-oriented protocol that provides a reliable, ordered, and error-checked way to transfer data
  • UDP for continuous monitoring on dual-ended paths, to expose QoS marking changes during diagnostics tests, and as part of a traceroute for determining the route UDP packets take
  • real-time voice and video streaming and Voice over IP (VoIP)
  • typically used by time-sensitive applications where losing packets is preferable to spending time retransmitting lost packets
  • no guarantee of packet delivery, packet ordering, or duplicate protection
  • no error checking
  • connectionless with very little protocol overhead
  • UDP is a core internet protocol used to transport data
  • ICMP is also used to expose QoS marking changes, and as part of a traceroute for determining the route
  • Delivery monitoring uses ICMP echo request and echo response packets (“ping” packets) for the majority of our continuous monitoring
  • ICMP is a control message protocol used by network devices to send error messages and operational information
  • ICMP, UDP, and TCP
  • TruPath uses three common protocols
  • target Monitoring Point can be one of your AppNeta Enterprise Monitoring Points or an AppNeta WAN Target
  • from source to target and from target to source
  • independent measurements
  • get a more accurate picture of your network performance
  • In the dual-ended configuration, one Monitoring Point is the source and another is the target and UDP packets (rather than ICMP packets) are used for monitoring
  • disadvantage: network characteristics that are direction dependent, such as differences in capacity, cannot be detected
  • only one Monitoring Point is required
  • and ICMP echo replies are returned
  • ICMP echo requests are sent to the target
  • (the target) can be any TCP/IP device
  • It acts as one endpoint (the source)
  • In the single-ended configuration, a single AppNeta Monitoring Point is required
  • Single-ended and dual-ended network paths
  • For very slow speed links or networks with other restrictions like small maximum MTU size, TruPath automatically adjusts its traffic loads to minimize network impact even further
  • Because the packet sequences are very short, the overall load on the network is kept very low
  • Commonly used packet sequences are 1, 5, 10, 20, 30 and 50 packets in length
  • the route taken by all protocol types (ICMP, UDP, and TCP) is determined
  • As part of the diagnostic test
  • as many as 400-2000 packets can be sent in a series of packet trains
  • If a network dysfunction is detected (for example, a higher than acceptable data loss) TruPath first confirms the dysfunction is present (by sampling every six seconds for ten samples) and then, once it is confirmed, automatically shifts to DPA mode and runs a diagnostic test which probes not only the target, but all devices on the network path from the source to the target.
  • CPA mode runs continuously, and every 60 seconds places roughly 20-50 packets onto the network
  • Continuous Path Analysis™ (CPA) and Deep Path Analysis™ (DPA)
  • To obtain this information
  • it can determine if there are Quality of Service (QoS) changes along the network path
  • It uses information like the time the packets take to go from a source to a target and back, the delay between packets on their return, packet reordering, and the number of packets lost, to directly measure key network performance metrics (round-trip time (RTT), latency, jitter, and data loss), and to infer others (total and utilized capacity)
  • waits for the replies
  • probes a network using short bursts of packets
  • the heart of Delivery monitoring
  • The lowest capacity device on the network path between the two endpoints determines the capacity of that path.
  • The rate at which a given queue can repeatedly fill and drain without data loss is effectively the maximum capacity of the device for traffic using that queue
  • When a queue fills up, any additional packets destined for that queue are dropped - causing data loss
  • causes a delay between packets when they are received at their destination
  • wait their turn to be forwarded
  • amount of traffic passing through a device at a given time, and the priority of that traffic, affects the amount of time a given packet will be queued (if at all) on that device
  • they encounter a number of network devices (for example, routers, switches, firewalls, load balancers, etc.)
  • As data packets flow through a network
  • TruPath™
  • Network traffic
  • IP Networking and TruPath
103 annotations
  • Global Monitoring Points are used only for Delivery and Experience monitoring (not for Usage monitoring)
  • Global Monitoring Points are Container-based Monitoring Points owned by you but managed by AppNeta
  • used for Delivery, Experience, and Usage
  • Enterprise Monitoring Points (available as hardware, as containers, as virtual machines, or as software) are owned and managed by you
  • APM-Public Cloud is a SaaS application deployed on the public cloud. APM-Private Cloud is a software system that can be deployed either on AppNeta-supplied hardware or on your own hardware.
  • available in two formats
  • The AppNeta solution consists of two main components: APM, and AppNeta Monitoring Points. APM analyzes and reports on performance data supplied by Monitoring Points located throughout your network.
  • answers questions like: Which applications are being used at a given location? Which applications are consuming the most bandwidth? Which applications is a given user connecting to? Which users are consuming the most bandwidth? How many users are using a given application?
  • shows the amount of available bandwidth consumed
  • determine which applications are being used and who is using them
  • Usage monitoring enables you to see how bandwidth at a given location is being devoted to particular applications, hosts, and users.
  • This method answers questions like: Are there particular applications that are running slowly? Are there particular locations that are slow? Are there problems connecting to an application? Is an application available and responding as expected?
  • measures how long the application takes to respond
  • simulate machine-to-machine interactions
  • makes HTTP requests periodically to a web app’s API
  • HTTP workflows
  • This method answers questions like: Are there particular applications that are running slowly? Are there particular locations that are slow? Is the slowness I am experiencing an application issue, a network issue, or a browser issue? Are there problems connecting to an application? Is the issue within an application? For application issues, which part of the application is slow or unresponsive?
  • Each measurement is broken down by the amount of time taken by the browser, the network, and the server running the application
  • Browser workflows
  • Experience monitoring enables you to visualize application performance experienced by users at a given location
  • Delivery monitoring helps you answer questions like: Where on the network path is the problem occurring? Are there particular routes that are slow? What routes are down and when did they go down? How much capacity am I being provided by my ISP? How much of the available capacity am I using?
  • continuous path analysis (CPA)
  • Delivery monitoring enables you to visualize network performance and to determine where network problems are occurring
  • provides this visibility using three distinct mechanisms we call Delivery, Experience, and Usage monitoring
  • particularly useful if you are using cloud-based applications, or are running any part of your network across the internet
  • it enables you to: determine the source of network problems determine how users are experiencing application performance determine how network bandwidth is being utilized per application and per user determine whether network and application service providers are meeting their service level agreements plan for changes in capacity requirements
26 annotations