Configuring Elasticsearch, Logstash, and Kibana (ELK) for log management

You can use Elasticsearch, Logstash, and Kibana (ELK) to manage your Pega® Platform log files. ELK provides access to system log files that help you diagnose and debug issues without having to download log files from each node in the cluster.

Before you can use ELK, you must install and configure the following Elastic Stack components:

  • System nodes: Configure the nodes on which the Pega Platform is installed to output Pega log files as JSON files, which serve as the input feed to Filebeat.
  • Filebeat: Filebeat is a lightweight Logstash forwarder that you can run as a service on the system on which it is installed. Set up Filebeat on every system that runs the Pega Platform and use it to forward Pega logs to Logstash.
  • Logstash: Logstash is a logging pipeline that you can configure to gather log events from different sources, transform and filter these events, and export data to various targets such as Elasticsearch.
Logstash is optional. You can configure Filebeat to forward logs directly to Elasticsearch (see the sketch after this list). The advantage of using Logstash is that it can process logs and other event data from a variety of systems.
  • Elasticsearch: Elasticsearch is a NoSQL database for indexing and storing data that is built on the Apache Lucene search library.
  • Kibana: Kibana is a log viewer that you can use to view and search for logs.
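
As noted above, you can skip Logstash and have Filebeat ship logs directly to Elasticsearch. A minimal sketch of that Filebeat output section, using the same dotted-key syntax as the configuration snippets later in this article and assuming Elasticsearch listens on its default port on the same host:

  output.elasticsearch:
    # Direct shipping to Elasticsearch; the host and port are assumptions for a local setup.
    hosts: ["localhost:9200"]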

The following diagram illustrates the Elastic Stack architecture.

Elastic Stack architecture

Setting up and installing the Elastic Stack

These setup and installation instructions assume that you are using the following versions of the Elastic Stack components:

  • Filebeat 1.3.1
  • Logstash 2.4.0 (requires Java 7)
  • Elasticsearch 2.4.0
  • Kibana 4.6.1

Setting up and installing the Elastic Stack consists of the following high-level tasks:

  1. Configure the Pega logs to output as JSON files
  2. Install and configure Filebeat
  3. Install and configure Logstash
  4. Install and configure Elasticsearch
  5. Install and configure Kibana
  6. Configure the Pega Platform to use Kibana

Configure the Pega log files to output as JSON files

Depending on your version of the Pega Platform, do one of the following actions:

For Pega 7.2.x and earlier

Add the following appender to the root logger in the prlogging.xml file to output log files as formatted JSON objects:

<appender name="JSONAppender" class="com.pega.pegarules.priv.util.DailySizeRollingFileAppenderPega">
  <param name="FileNamePattern" value="'PegaRULES-'yyyy-MM-dd'.json.log'"/>
  <layout class="com.pega.pegarules.priv.LogLayoutJSON">
    <param name="UserFields" value="'src-vm:','src-node:','src-env:'"/>
  </layout>
</appender>

You can also add user fields to add custom data to log events. For more information, see Configuring Pega file logging appenders.

Beginning with Pega 7.3

Add the following appender to the root logger in the prlog4j2.xml file to output log files as formatted JSON objects:

<RollingRandomAccessFile name="JSONAppender" fileName="${sys:pega.tmpdir}/PegaRULES.json.log" filePattern="${sys:pega.tmpdir}/PegaRULES-%d{MM-dd-yyyy}-%i.json.log.gz">
  <LogStashJSONLayoutPega userFields="src-vm:<value>,src-node:system1,src-env:<value>" />
  <Filters>
    <!-- Deny messages logged at the ALERT log level -->
    <ThresholdFilter level="ALERT" onMatch="DENY" onMismatch="NEUTRAL"/>
  </Filters>
  <Policies>
    <TimeBasedTriggeringPolicy />
    <SizeBasedTriggeringPolicy size="250 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="20"/>
</RollingRandomAccessFile>

You can also add user fields to add custom data to log events. For more information, see the Apache Log4j 2 documentation.

Installing and configuring Filebeat

To install and configure Filebeat:

  1. Download and install Filebeat from the elastic website.
  2. Navigate to the Filebeat installation folder and modify the filebeat.yml file:
    1. Uncomment the paths variable and provide the path to the JSON log files, for example:

      filebeat.prospectors:
      - input_type: log
        # Paths that should be crawled and fetched. Glob based paths.
        paths:
          - C:/Sust/PegaEclipse-4.2.2.0/StandalonePrograms/Tomcat/apache-tomcat/bin/PegaRULES-2016-Sep-*.json.log

    2. Uncomment the input_type.
    3. In the output section, uncomment the logstash entry.
    4. In the hosts section, enter the system and port where Logstash is hosted. By default, Logstash listens for Beats connections on port 5044, for example:

      output.logstash:
        # The Logstash hosts
        hosts: ["localhost:5044"]

    5. Optional. Configure the output as file and enter a destination for the output, as shown in the sketch below. A file output is useful for testing; you can remove the setting when you finish testing.
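
      A minimal sketch of such a test output, following the dotted-key syntax of the snippets above; the path and file name are placeholders:

      output.file:
        # Placeholder destination; Filebeat writes harvested events here for inspection during testing.
        path: "/tmp/filebeat"
        filename: filebeat-test.out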

Installing and configuring Logstash

To install and configure Logstash:

  1. Download and install Logstash from the elastic website.
  2. Navigate to the Logstash installation folder and create a pipeline.conf file, for example, pega-pipeline.conf.
  3. Configure the input as beats and set the codec for decoding the JSON input to json, for example:

    input {
      beats {
        port => 5044
        codec => json
      }
    }

  4. Configure the output as elasticsearch and enter the URL where Elasticsearch is configured.
  5. Optional. For testing, you can also write the Logstash output to a file and remove this setting when you finish testing, for example:

output {
  elasticsearch {
    hosts => ["URL where Elasticsearch is configured"]
  }
  file {
    path => " "
  }
}

If the json codec receives a payload that it cannot decode as JSON, it falls back to plain text and adds the _jsonparsefailure tag to the event. When this happens, the entire payload is stored in the message field.
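
If you want to handle such events explicitly, the following sketch shows a Logstash filter that marks them; the parse_status field name is an illustrative choice, not a Pega or Logstash default:

filter {
  if "_jsonparsefailure" in [tags] {
    # Flag payloads that could not be decoded as JSON so they are easy to find in Kibana.
    mutate { add_field => { "parse_status" => "failed" } }
  }
}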

Installing and configuring Elasticsearch

To install and configure Elasticsearch:

  1. Download and install Elasticsearch from the elastic website.
  2. Navigate to the ES_HOME/config folder and open the elasticsearch.yml file.
  3. If you are using a cluster, enter the name of the cluster. A cluster is identified by a unique name, which is "elasticsearch" by default. A node can join a cluster only if it is configured with that cluster's name.
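
    A minimal sketch of the relevant elasticsearch.yml settings; the cluster and node names shown here are placeholders, not values that the Pega Platform requires:

    cluster.name: pega-logs
    node.name: es-node-1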

Installing and configuring Kibana

To install and configure Kibana:

  1. Download and install Kibana from the elastic website.
  2. Navigate to the Kibana_Home/config folder and modify the kibana.yml file to point to an Elasticsearch instance to use for queries, for example:

    # The Elasticsearch instance to use for all your queries.
    elasticsearch.url: "http://localhost:9200"

  3. Start Kibana.
  4. Configure an index pattern that identifies the Elasticsearch index that you want to use for search and analytics (see the example after this list). By default, Kibana suggests an index pattern based on the indices that it finds in Elasticsearch. If there is more than one index, choose one to use as the default.
  5. Optional. If your index contains a time stamp field that you want to use for time-based comparisons, click Settings, select Index contains time-based events, and select the field that contains the time stamp. For more information about using and configuring Kibana, refer to the Kibana User Guide on the elastic website.
  6. Configure the Kibana dashboard. See Pega logs in Kibana.
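
For step 4, if Logstash writes to Elasticsearch with its default settings, it creates daily indices named logstash-YYYY.MM.DD, so an index pattern such as the following matches them (this assumes the Logstash defaults; adjust it if you configured an explicit index name):

  logstash-*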

Configure the Pega Platform to use Kibana

To access Kibana from the Pega Platform, configure Kibana as an external log viewer by specifying its URL on the System Settings - Resource URLs tab. For more information, see Viewing log files in an external log viewer.

