This document outlines a complete OmniCenter deployment. It includes details about every component involved in a full implementation.
OmniCenter is packaged as an appliance-based solution. Therefore, everything necessary for a complete solution is packaged and bundled together. There are no agents, probes, or additional components to deploy.
The OmniCenter appliance is available in two forms:
Netreo’s OmniCenter can be deployed as a virtual appliance (VA), hosted within the customer’s virtualization environment. All OmniCenter functions and features are fully supported in this VA.
Performance guidelines shown in the following table will assist customers in allocating the appropriate amount of resources to OmniCenter VAs. This guide is not comprehensive; rather, it is intended to give a general idea of the performance levels to expect based on the amount of resources allocated to an OmniCenter VA (OCVA).
Deploying OmniCenter as a physical appliance is also an option; the overall design is the same. As with the virtual appliance, a complete solution is packaged and bundled together. Hardware appliance guidelines can be found in the table below.
The recommendations in the above table are based on Netreo best practices and an average customer deployment. Customers with specialized environments that require large amounts of monitoring, rapid monitoring intervals, or large numbers of simultaneous users, or that collect traffic flow data, will require more resources. Conversely, some smaller environments may be able to exceed these values before performance becomes problematic.
Depending on the size, scope and other circumstances particular to the implementation, other components can be included in a solution. These components are detailed below.
To make log collection and processing more efficient in very large environments, an OmniCenter Log Collector appliance (OLC) can be deployed. This appliance exclusively collects and processes logs, offloading that work from the main OmniCenter appliance. All reporting, searching and alerting, however, are still handled directly through the main OmniCenter, so the only difference a user will notice is improved performance. An OLC is typically deployed as a virtual appliance, so there is no limitation on the number of OLC appliances that can be added to an OmniCenter installation. In a high availability (HA) OmniCenter implementation, an OLC is required, as it protects resources dedicated to active polling.
Traffic Flow Collectors
In addition to the ancillary logging collector, another way to scale OmniCenter horizontally is to add an OmniCenter Traffic Collector appliance (OTC) to the deployment. The idea behind this appliance is identical to that of the OLC, except that the data being collected and processed are traffic statistics from layer 3 devices. Like the OLC, an OTC is typically deployed as a virtual appliance, so there is no limitation on the number of OTC appliances that can be added to an OmniCenter installation. In a high availability (HA) OmniCenter implementation, an OTC is required, as it protects resources dedicated to active polling.
A third type of supplemental appliance that can be added to an OmniCenter solution is the OmniCenter Remote Poller appliance (ORP). The primary use case for this type of appliance is deployment into a portion of your infrastructure where traditional monitoring protocols, such as SNMP and WMI/WinRM, are not allowed to traverse in or out. As with the OLC and OTC appliances, all of the work of data collection and processing is handled on the local appliance; the results are then retrieved from the ORP (via RESTful API calls) for integration into the main OmniCenter UI. Generally speaking, only a single ORP needs to be deployed per restricted section of network, as the deployment specifications of the ORP match those of a standard OmniCenter appliance.
Scaling for Ancillary Servers
Determining the number of remote appliances to add to a solution is a function of two variables: the number of devices transmitting data in the network, and the volume of data sent by those devices to the managing OmniCenter appliance. In our experience, if inbound logging or traffic data is sourced from fewer than 30 devices, a core OmniCenter can handle the entire load; beyond that point, segmentation is recommended.
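As a rough planning aid, the 30-source guideline above can be turned into a sizing heuristic. This is an illustrative sketch, not a Netreo formula; treating each additional collector as handling the same per-appliance load is an assumption made here for planning purposes only.

```python
# Illustrative sizing heuristic for ancillary collectors (OLC/OTC).
# The 30-source capacity figure comes from the guideline above; equal
# per-collector capacity is an assumption, not a documented limit.
import math

def remote_collectors_needed(source_devices: int,
                             devices_per_collector: int = 30) -> int:
    """Estimate how many collector appliances to deploy for a given
    number of devices sending log or traffic-flow data."""
    if source_devices <= devices_per_collector:
        return 0  # the core OmniCenter can absorb the load itself
    return math.ceil(source_devices / devices_per_collector)
```

For example, 90 log sources would suggest three collector appliances under this assumption; actual sizing should be validated against the observed data volume.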
OmniCenter deployments also offer a high availability (HA) option. HA for OmniCenter is implemented as a cluster of multiple, independent appliances. A standard OmniCenter HA setup consists of three deployed OmniCenter appliances: a master, an arbitrator and a slave. The master is the “main” OmniCenter appliance that provides production services under normal circumstances. The slave is the “backup” OmniCenter appliance that becomes active in the event that the master fails. The arbitrator is a third OmniCenter appliance that provides quorum for the cluster in the event of a failure, casting the deciding vote on whether or not the slave should take over, based on the prevailing circumstances. The arbitrator also helps to reduce the database-replication load on the master during initial HA data synchronization.
In order for the HA cluster nodes to communicate properly, they must be able to connect with each other using the ports noted in the table below. For HA to function properly, all managed devices in the customer environment must allow access from the IP addresses of both the master and the slave appliances. OmniCenter high availability configurations do not provide virtual IP functionality; both the master and the slave must have static, permanent IP addresses, and in the event of a failover, end-users must access the slave system via its own IP address. VIP functionality is possible, but the setup and configuration of that architecture are purely the responsibility of the customer. In any high availability (HA) OmniCenter implementation, remote log and traffic collectors are required, as they protect resources dedicated to active polling.
OmniCenter Overview is a deployment option used to scale OmniCenter vertically rather than horizontally, as with remote collectors or pollers. This type of deployment is intended for service provider environments where multi-tenancy is required, but it can be used in any situation. An OmniCenter Overview appliance (which can be deployed either virtually or physically) can be thought of as an “OmniCenter of OmniCenters.” Alert status from all linked OmniCenter clients is aggregated and rolled up through a single Overview user interface. The Overview appliance also polls and monitors its client OmniCenter appliances; it is not meant to monitor other resources in an infrastructure.
The purpose of an OmniCenter Overview deployment is to tie together multiple OmniCenter appliances, where each client OmniCenter is individually deployed and polling/monitoring its own set of devices. In addition to alert aggregation from client OmniCenters to the Overview appliance, the following functions are also available:
- Pass-through Authentication – Since OmniCenter Overview is meant to be a single UI to combine multiple OmniCenters, pass-through authentication can be configured to allow unobstructed access to the client OmniCenters. Logging into an Overview and navigating to a remote OmniCenter through the Overview UI will allow access to the client using the Overview credentials.
- Private Cloud Object Libraries – An OmniCenter Overview can be configured to host authoritative versions of device types, device subtypes, device templates, alert templates, incident management rules and a local users list in its private cloud libraries.
- Private Software Release Repository – An OmniCenter Overview can be configured to mirror the general Netreo software repository that linked OmniCenters look to for updates, thus limiting what version/patch-releases can be installed on the clients.
There are two possible methods for connecting/linking a remote OmniCenter client to an Overview appliance.
- The first option is to use the Overview appliance as an OpenSSL VPN server. A tunnel connection can be configured that initiates from the client OmniCenter and connects to the Overview appliance. Default communication requires outbound TCP/443 access from the client to the Overview. (This configuration is adjustable based on customer requirements.)
- The second option is for the customer to configure their network connections via routing (or some other method) to be able to reach an accessible Overview server.
Linked client OmniCenters have a listening API that allows the Overview server to make calls to the remote system and pull in its aggregated alert status. This API call is made every five minutes and updates the main Overview landing page.
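The poll cycle described above can be sketched as follows. Only the five-minute interval is stated in this document; the endpoint path, the JSON response handling and the function names below are illustrative assumptions, not the documented OmniCenter API.

```python
# Sketch of the Overview-to-client status poll. The route and response
# format are hypothetical; only the five-minute cycle is from the document.
import json
import urllib.request

POLL_INTERVAL_SECONDS = 300  # the document specifies a five-minute cycle

def build_status_url(client_host: str) -> str:
    # Hypothetical route; the actual OmniCenter API path may differ.
    return f"https://{client_host}/api/v1/alert_status"

def fetch_alert_status(client_host: str) -> dict:
    """Pull the aggregated alert status from one linked client OmniCenter."""
    with urllib.request.urlopen(build_status_url(client_host),
                                timeout=10) as resp:
        return json.load(resp)
```

A scheduler (cron, a timer thread, etc.) would invoke `fetch_alert_status` for each linked client every `POLL_INTERVAL_SECONDS` and refresh the Overview landing page from the results.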
Federation features in the Overview (Private Cloud Object Libraries) are only updated on demand. There is no automatic synchronization.
An OmniCenter Overview connects disparate, independent OmniCenter client appliances. Aside from federation settings, which can be pushed down (or pulled in), the clients are controlled and administered as stand-alone entities. The ideal use case for an Overview/client deployment is a widely dispersed network where local-perspective statistics are required, where IP address conflicts exist, or where multi-tenancy is required (or any combination thereof). The primary contrast between this architecture and an OmniCenter Remote Poller deployment is that in the latter, only the time-series data is sourced from the local instance; all other elements are controlled and administered via the central OmniCenter appliance. Combining these features with the device restrictions and user profiles found in the core of OmniCenter makes OmniCenter Overview ideal for service provider environments.
Deployment and Implementation
Preparing for OmniCenter
OmniCenter uses existing manufacturer APIs to collect data from systems without having to install additional agents. Netreo recommends deploying OmniCenter on a core network, and inside of any firewalls used for perimeter protection. Because OmniCenter uses a wide variety of protocols for management (including direct connections to applications for monitoring and management), implementation is greatly simplified by this approach.
Having the necessary credentials for your network handy while configuring OmniCenter will make the initial setup go much more quickly and smoothly. Here’s a list of credentials to gather before you begin:
- Any relevant SNMP read-only strings for devices on your network.
- WMI/WinRM credentials to access Windows devices.
- SSH/Telnet credentials for configuration management.
SNMP (Simple Network Management Protocol) is the main protocol used for Linux servers and network devices (routers, switches, firewalls, load balancers, etc.). SNMP uses port UDP/161 for polled data collection and port UDP/162 for Trap messages (originating from the device to OmniCenter).
Either the WMI or WSMAN protocol can be used to collect data from Windows servers. Communication using WMI requires TCP/135 and all high ports (1024-65535), bidirectionally. To use WSMAN, enable TCP/5985 originating from OmniCenter.
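Before adding Windows devices, it can be useful to confirm that the required ports are reachable from the OmniCenter side. The sketch below is a generic TCP reachability check using Python's standard library; it is not part of OmniCenter, and the hostname in the usage note is an example.

```python
# Generic TCP port reachability check, e.g. for WSMAN (TCP/5985) or the
# WMI endpoint mapper (TCP/135). Not an OmniCenter tool; illustration only.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `tcp_port_open("winhost.example.com", 5985)` verifies that the WSMAN port is reachable from the machine running the check. Note that this only tests TCP ports; SNMP's UDP/161 and UDP/162 cannot be verified this way.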
OmniCenter can operate without Internet access; however, licensing, software updates and remote support are greatly simplified with basic Internet access. The following is a list of IP addresses and ports that can be configured on your outbound firewall to safely allow OmniCenter the access it needs.
An alerting policy is a collection of practices and procedures centered on alerting for non-optimal conditions in the IT infrastructure. Prior to any OmniCenter implementation, Netreo will work with the client to understand their business needs and various requirements. These requirements help dictate a policy that can then be put into place via OmniCenter’s software. The following table outlines the core aspects of an NMS alerting policy.
Installation and Configuration
There is a standardized methodology that Netreo Operations Engineers follow when implementing OmniCenter. It follows these steps:
A) Quick Start
In this step, Netreo Ops will assist the customer in getting OmniCenter licensed and putting the necessary initial configuration information into the OmniCenter setup wizard (device authentication information, alert contacts and other variables).
B) Device Population
The next step is to get target devices that need to be monitored into OmniCenter. This process can be completed in one of four ways:
- Manual device addition
- Import of a device list via CSV upload
- A “device addition” API call to OmniCenter
- Network scan and automatic discovery
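As an illustration of the API-based option above, a device-addition call might look like the sketch below. The route, payload field names and bearer-token authentication are hypothetical assumptions; consult the OmniCenter API documentation for the actual contract.

```python
# Sketch of a "device addition" API call. The /api/device route, the
# payload fields and the auth scheme are illustrative, not documented.
import json
import urllib.request

def add_device(omnicenter_host: str, api_token: str,
               address: str, display_name: str) -> urllib.request.Request:
    """Build (but do not send) a POST request adding one device."""
    payload = json.dumps({"address": address,
                          "display_name": display_name}).encode()
    return urllib.request.Request(
        f"https://{omnicenter_host}/api/device",  # assumed route
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_token}"},  # assumed scheme
        method="POST",
    )
```

Once the real route and credentials are confirmed, the request would be sent with `urllib.request.urlopen(add_device(...))`.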
C) Basic Categorization
This step is the first pass in device organization within OmniCenter. It is a precursor to additional setup activities.
D) Baselining and Templating
After roughly two weeks of data polling, a base of data exists such that reports can be run to determine “normal” operating levels. Exception thresholds can then be altered appropriately.
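The threshold adjustment described above can be sketched as follows. This is a minimal illustration assuming a simple mean-plus-two-standard-deviations rule; it is not OmniCenter's internal baselining algorithm.

```python
# Illustrative baselining step: derive an exception threshold from the
# observed "normal" range of a metric after an initial polling period.
# The mean + 2-sigma rule is an assumption for illustration only.
from statistics import mean, stdev

def baseline_threshold(samples: list[float], sigmas: float = 2.0) -> float:
    """Value above which a metric would be treated as exceptional."""
    return mean(samples) + sigmas * stdev(samples)
```

For instance, two weeks of CPU samples averaging 10% with moderate variance would yield a threshold a little above that average, rather than a one-size-fits-all static value.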
E) Organization
Complete OmniCenter organization by defining functional categories, geographic sites and strategic/application groups. The classifications defined here will largely be governed by what was dictated in the alerting policy.
F) Additional Configuration
This step completes the configuration of additional OmniCenter features, such as the configuration manager, Web Application Response monitoring, mobile application activation, custom dashboards and other elements.