Exporting Results to Elastic Stack
As a highly-scalable, enterprise-class unified monitoring platform, Opsview Monitor aggregates an enormous volume of information from IT infrastructure and applications. That makes Opsview Monitor a “single source of (realtime) truth,” and a natural integration point for numerous systems aimed at improving operational efficiency and making IT smarter.
Results Exporter -- introduced with Opsview Monitor release 6.1 -- is an add-on engineered to simplify sending events and metrics from Opsview Monitor to log servers and external analytics platforms. Results Exporter includes built-in configurations to help you export results data to Splunk -- covered extensively in this recent tutorial, and in Results Exporter documentation. In this article, we provide strategies and code for integrating with Elastic Stack: a widely-used open source toolkit for importing large volumes of data, making it searchable, and visualizing it.
This tutorial is primarily intended for enterprise users who have access to, and familiarity with, Elastic. Other users, however, will find Elastic Stack fairly easy to install on a large VM or server for development and testing. Elastic also offers Elasticsearch Service, which lets you deploy Elasticsearch and Kibana with single-click ease on AWS or GCP.
Elastic Stack Components and Flavors
Elastic Stack (formerly called ELK) components include Elasticsearch, a search engine; Logstash, a log aggregation, filtering, and reformatting solution; and Kibana, a data-visualization system. The stack now also includes Beats, a family of lightweight shippers for sending data to Elasticsearch directly, or (more commonly) to Logstash, which filters and formats data so Elasticsearch can more effectively consume, index, and search it.
Integrating Opsview with Elastic
We’ll demonstrate two methods of integrating Opsview Monitor with Elastic Stack:
- Use Results Exporter to output data to a local file, then ship to Logstash using Filebeat, the Elastic Beats file shipper.
- Use Results Exporter to ship data directly to Logstash, via a syslog TCP or UDP connection.
Prerequisite: Install and Configure Results Exporter
To begin with, if you haven’t already, you'll need to install Results Exporter. Installation is easy: just log into your Opsview Monitor master, become root, and follow the instructions in Opsview Knowledge Center - Results Exporter Component.
Results Exporter is configured by editing the file /opt/opsview/resultsexporter/etc/resultsexporter.yaml, which overrides values in the default configuration file resultsexporter.defaults.yaml. As the docs detail, configuration begins by copying parts of the results_queue and registry stanzas from resultsexporter.defaults.yaml to a new (empty) resultsexporter.yaml file, replacing the values for message queue encoder key, message queue password, and Opsview registry password (not registry root password!) with actual values for your system. The values you need -- auto-generated when you deployed Opsview Monitor -- can be found in the file /opt/opsview/deploy/etc/user_secrets.yml.
We’ll also copy (from resultsexporter.defaults.yaml) the stanzas defining the default filter and output fields. This will make it convenient to modify these values, if needed, and help us keep track of what Results Exporter is sending to Elastic Stack. The resultsexporter.yaml skeleton we'll create will look like this:
[Image: Template resultsexporter.yaml configuration file, showing authentication overrides and skeleton stanzas for configuring syslog, file, and http outputs.]
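Since the screenshot isn't reproduced here, the sketch below shows roughly what that skeleton might look like. The stanza and key names are placeholders inferred from the description above, not the exact schema -- copy the real stanzas from resultsexporter.defaults.yaml on your system and substitute the values from /opt/opsview/deploy/etc/user_secrets.yml:

# Illustrative skeleton only -- key names below are assumptions; copy the exact
# stanzas from resultsexporter.defaults.yaml and fill in real values.
results_queue:
  encoder_key: "<message queue encoder key from user_secrets.yml>"
  password: "<message queue password from user_secrets.yml>"
registry:
  password: "<Opsview registry password from user_secrets.yml>"
filter:
  # copied from resultsexporter.defaults.yaml; adjust if needed
output_fields:
  # copied from resultsexporter.defaults.yaml; adjust if needed
outputs:
  syslog:
  file:
  http: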
Remember that any time you make changes to a Results Exporter configuration file, you need to restart the component for changes to go into effect. To do this, just run (as root):
$ /opt/opsview/watchdog/bin/opsview-monit restart opsview-resultsexporter
Method 1: Shipping Data with Filebeat
As you can see from the sample skeleton, above, empty .yaml stanzas are provided for inserting configurations for syslog, file, and http outputs. The first step in shipping data to Elastic Stack with Filebeat is to create a Results Exporter file output. This is very easy: just change the file stanza to read something like this:
[Image: Example file stanza, inserted into resultsexporter.yaml to configure output to a local log file.]
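As a rough sketch (treat these key names as assumptions and confirm the exact option names against resultsexporter.defaults.yaml on your system):

outputs:
  file:
    # Hypothetical key names -- confirm against resultsexporter.defaults.yaml
    enabled: true
    path: /tmp/resultsexporter.log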
...and restart the component. Check in /tmp to see that the log file is being created.
The next step is to install Filebeat appropriately for the Linux version on which you’ve deployed Opsview Monitor.
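For example, on a Debian or Ubuntu master with the Elastic package repository already configured, installation might look like this (package availability and service management vary by distribution):

$ sudo apt-get update && sudo apt-get install filebeat
$ sudo systemctl enable filebeat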
Finally, we need to configure Filebeat to export data to Logstash. This requires modifying the file /etc/filebeat/filebeat.yml.
First, we’ll tell Filebeat to ship the log file Results Exporter is now generating, by adding or modifying the filebeat.inputs: stanza, providing an input of type: log and the requisite path. The file filebeat.yml is heavily commented to help you, and further help can be found in the Filebeat documentation. The changes you’ll make look roughly like this:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/resultsexporter.log
Next, we’ll configure the output to Logstash, commenting out any existing default output that sends data directly to Elasticsearch. Look in filebeat.yml for the sections marked “Elasticsearch output” and “Logstash output.” Specifying the IP address and port of your Logstash host is done like this:
output.logstash:
  hosts: ["my-logstash-host:5044"]
Make sure filebeat.yml is owned by root.
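If it isn’t, you can fix the ownership with, for example:

$ sudo chown root:root /etc/filebeat/filebeat.yml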
Setting Up Logstash for Beats Input
Configuring Logstash to consume Filebeat input and export it to local Elasticsearch can be done by composing an /etc/logstash/logstash.conf file like the following. It is based on the Elastic-provided sample file logstash-sample.conf, with an additional filter block in the middle that parses and cleans up the output transmitted by Filebeat from Opsview Monitor, via Results Exporter:
[Image: Logstash configuration file example for ingesting data sent by Opsview Monitor Results Exporter and Filebeat.]
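As a rough illustration of that shape -- the grok pattern, field names, and perf_data layout below are assumptions about the Results Exporter message format rather than the exact configuration, so adjust them to match your own data:

input {
  beats {
    port => 5044
    type => "opsview"
  }
}

filter {
  # Assumed layout: a timestamp followed by comma-separated key=value results data
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{GREEDYDATA:results}" }
  }
  # Split the extracted results field into key-value pairs at commas and drop
  # fields not needed for graphing (the field names here are hypothetical)
  kv {
    source => "results"
    field_split => ","
    value_split => "="
    remove_field => [ "results", "message" ]
  }
  # Break an assumed perf_data field such as "rta=0.5ms;;;; pl=0%;;;;" into numeric fields
  ruby {
    code => '
      perf = event.get("perf_data")
      if perf
        perf.split(" ").each do |item|
          name, value = item.split("=", 2)
          next unless value
          event.set("perf_" + name, value.split(";").first.to_f)
        end
      end
    '
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "opsview-%{+YYYY.MM.dd}"
  }
}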
The input block causes all inputs on this port to be tagged with type ‘opsview.’ The filter block uses Logstash’s built-in filter plugins to process the incoming data. The ‘grok’ plugin performs an initial parse of each whole message, isolating the results data. The ‘kv’ (key-value) plugin splits the extracted results field into key-value pairs at commas, and also removes some “noisy” fields that aren’t relevant to graphing (this line can be removed if you want to keep those fields in your dataset). Finally, the ‘ruby’ plugin breaks out the perf_data (performance metrics) from each message into separate event values.
The output block then hands the processed data up to Elasticsearch, which (for our small, single-server test instances) is running locally.
Unless you’ve configured Logstash to pick up configuration file changes automatically, once you’ve created this logstash.conf, activating the new configuration requires you to start Logstash in the appropriate way for your system (if stopped), or force a configuration file reload as recommended in the docs.
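For example, on a systemd-based host (paths depend on how Logstash was installed):

# Either restart the service...
$ sudo systemctl restart logstash
# ...or run Logstash in the foreground with automatic config reloading enabled
$ sudo /usr/share/logstash/bin/logstash -f /etc/logstash/logstash.conf --config.reload.automatic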
Finally, restart Filebeat on the Opsview master so data shipping can begin, e.g.:
$ sudo service filebeat restart
Method 2: Sending Data Via Syslog
In this next example, we forgo Filebeat and instead reconfigure Results Exporter to make a direct UDP connection of type syslog with Logstash. To do this, we’ll configure a syslog output in resultsexporter.yaml on the Opsview master, as shown below. (You’ll probably also want to shut down Filebeat and comment out or remove the file output configuration we created earlier.) Just change the syslog stanza in resultsexporter.yaml (under outputs:) to read something like this, analogous to the file stanza we created in the prior section:
[Image: Example syslog output configuration for Results Exporter, used to send data to Logstash.]
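A rough sketch (again, treat the key names as assumptions and confirm them against resultsexporter.defaults.yaml; the port number is arbitrary but must match the one Logstash will listen on):

outputs:
  syslog:
    # Hypothetical key names -- confirm against resultsexporter.defaults.yaml
    enabled: true
    host: my-logstash-host
    port: 5514
    protocol: udp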
We'll also need to change the logstash.conf file to reflect the new UDP input:
[Image: Example logstash.conf to ingest data from Opsview Monitor Results Exporter syslog/UDP output.]
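A minimal sketch of the revised input block might look like the following; the port must match the one configured in the Results Exporter syslog output, and the filter and output blocks can stay as in the Filebeat example above:

input {
  udp {
    port => 5514
    type => "opsview"
  }
}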
Finally, we can force Logstash to reload its configuration and restart the Results Exporter service on the Opsview master to begin shipping data this new way.
Consuming and Visualizing Opsview Monitor Data
Once Opsview Monitor results data is being shipped into Elastic Stack, the real challenge (read: fun) of analytics and visualization can begin. The screenshot below is a very simple example of a dashboard built to consume and display Opsview Monitor metrics: in this case, metrics pertaining to Opsview Monitor’s own self-monitoring (Opsview deploys with extensive monitoring of all its components active by default), as well as basic monitoring of system health for a Windows VM running on Microsoft Azure -- i.e., a typical host. Metrics for the Azure VM are derived using the Microsoft Azure Virtual Machines Opspack, which draws in host metrics collected and distributed by the Azure cloud.
In coming months, we’ll provide more tutorials on how Results Exporter and other facilities can be used to harness Opsview Monitor as a single source of realtime truth and insight in your organization.