Understanding Errors per Second

How is errors-per-second data calculated on network interfaces?

OmniCenter uses SNMP to collect interface error statistics. (In some cases OmniCenter may use other protocols such as WBEM, WMI, or SOAP; however, the basic concepts are the same.) Devices typically store this data in the form of a counter. This counter is like the odometer in your car: it shows the total number of errors seen on the interface. By default, OmniCenter collects this data every 5 minutes. It calculates the difference between the current counter value and the previous one, then divides the result by the elapsed time to compute an average rate (errors per second) over that period. This is the value that OmniCenter reports in its error graphs.
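
As a rough illustration, here is that delta-over-time calculation in Python. This is a minimal sketch, not OmniCenter's actual code; the function name is invented, and the counter values stand in for a cumulative SNMP counter such as IF-MIB::ifInErrors:

    def errors_per_second(prev_count: int, curr_count: int,
                          elapsed_seconds: float) -> float:
        """Average error rate over one polling interval.

        prev_count and curr_count are readings of a cumulative interface
        error counter (the "odometer"), taken elapsed_seconds apart.
        A production collector would also need to handle counter wraps
        and resets, which are omitted here for clarity.
        """
        delta = curr_count - prev_count   # errors observed during the interval
        return delta / elapsed_seconds    # average rate (the "speedometer")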

Quite simply, if you consider the difference between an odometer and a speedometer on the dashboard of your car, you can easily visualize the difference between the errors-per-second value displayed in OmniCenter (the speedometer) and the error counter on any given device (the odometer).

One side effect of recording errors this way is that, in cases where errors are few and sporadic, OmniCenter can measure less than 1 error per second. For example, if a device interface saw only a single error in a 5-minute window, OmniCenter would divide that 1 error by 300 (the number of seconds in 5 minutes) and conclude that there was an average of roughly 0.003 Err/s in that time period, or about 3 mErr/s (milli-errors per second).
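
Using the sketch above, the single-error example works out like this (the counter values are made up; only the difference of 1 matters):

    # One error in a 5-minute (300-second) polling window.
    rate = errors_per_second(prev_count=1000, curr_count=1001,
                             elapsed_seconds=300)
    print(f"{rate:.3f} Err/s, i.e. about {rate * 1000:.0f} mErr/s")
    # Prints: 0.003 Err/s, i.e. about 3 mErr/s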