
Understanding Errors per Second

How is errors-per-second data calculated on network interfaces?
OmniCenter typically uses SNMP to collect interface error statistics. (In some cases OmniCenter may use other protocols, such as WBEM, WMI, or SOAP, but the basic concepts are the same.) Devices usually store this data as a counter. The counter works like the odometer in your car: it shows how many errors have been seen on the interface since counting began (typically at boot). By default, OmniCenter collects this value every 5 minutes, calculates the difference between the current counter value and the previous counter value, and divides that result by the elapsed time in seconds to produce an average error rate (errors per second) for that period. This is the value that OmniCenter reports in the error graphs.
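
As a rough illustration (this is not OmniCenter's actual code), the calculation for a single polling interval can be sketched in Python as follows. The function name and the sample counter values are hypothetical:

    def errors_per_second(prev_count, curr_count, elapsed_seconds):
        """Average error rate over one polling interval.

        prev_count and curr_count are two successive readings of the
        interface's cumulative error counter (the "odometer"), and
        elapsed_seconds is the time between the two polls.
        """
        return (curr_count - prev_count) / elapsed_seconds

    # Two polls taken 5 minutes (300 seconds) apart:
    previous = 10_482   # counter value at the previous poll
    current = 10_497    # counter value at the current poll
    print(errors_per_second(previous, current, 300))   # 0.05 Err/s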

Quite simply, if you picture the dashboard of a car, the error counter on any given device is the odometer, while the errors-per-second rate displayed in OmniCenter is the speedometer.

One side effect of recording errors this way is that OmniCenter can measure rates of less than 1 error per second (in cases where errors are few and sporadic). For example, if a device interface recorded only a single error in a 5-minute period, OmniCenter would divide that 1 error by the number of seconds in 5 minutes (300) and report an average of roughly 0.0033 Err/s (errors per second) for that period, or about 3.3 mErr/s (milli-errors per second).
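
Using the same hypothetical errors_per_second sketch from above, the single-error case works out like this:

    # One error in a 300-second polling interval:
    rate = errors_per_second(10_497, 10_498, 300)
    print(f"{rate:.4f} Err/s")          # 0.0033 Err/s
    print(f"{rate * 1000:.1f} mErr/s")  # 3.3 mErr/s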

Updated on November 6, 2019
