
Advanced configurations for the Stream service

Updated on September 17, 2021

When you add Stream nodes in Pega Platform™ to use the decisioning Stream service, your platform instance uses default configuration settings for the service. If you have a reason not to use these default values, for example, because a port is already in use by another process, you can modify the prconfig.xml file or create dynamic system settings to change the defaults.

If you change the value of a property in the prconfig.xml file, you must save the configuration and then restart Pega Platform to apply the new settings to the Stream service.
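
For reference, each prconfig.xml entry described in this article is an env element placed inside the root element of the file. The following is a minimal sketch, assuming a typical prconfig.xml layout; the pyHeapOptions entry is only an illustration, and your file might already contain other entries:

  <?xml version="1.0" encoding="UTF-8"?>
  <pegarules>
    <!-- Stream service settings from this article are added as env entries, for example: -->
    <env name="dsm/services/stream/pyHeapOptions" value="-Xmx4G -Xms4G" />
  </pegarules>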
Note: The Stream service is built on the Apache Kafka® platform. To understand Kafka-related terminology, see the official Apache Kafka documentation.

You can modify the following Stream service settings:

Changing the open file descriptor count

The open file descriptor count for the Stream service is set to 100,000 to support a high load of concurrent write and read operations. With a lower descriptor count, the limit might be exceeded, causing the Stream service to fail.

Note: For information about how to change this value, see the Linux Documentation Project.

Increasing JVM heap sizes

Increasing the size of the Java Virtual Machine (JVM) heap improves the performance and reliability of the Stream service. The default heap setting for the Stream service is -Xmx1G -Xms1G, where:

  • -Xmx - Specifies the maximum heap size.
  • 1G - Sets the heap size to 1 gigabyte.
  • -Xms - Specifies the initial Java heap size.

To use the Stream service in production environments, increase the maximum heap size in one of the following ways:

  • Add an entry in the prconfig.xml file. For example, <env name="dsm/services/stream/pyHeapOptions" value="-Xmx4G -Xms4G" />.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/pyHeapOptions/default
    • Value - For example, -Xmx4G -Xms4G
Note: For more information on tuning Java Virtual Machines (JVMs), see the official Oracle documentation.

Changing the location of the Apache Kafka distribution

When the Stream service is enabled on Pega Platform, the Apache Kafka distribution is unpacked in the top-level directory of the Java EE servlet container. If you need to change the default location, for example, because that directory is write-protected, you can do it in one of the following ways:

  • Add the <env name="dsm/services/stream/pyUnpackBasePath" value="/opt/kafka" /> entry in the prconfig.xml file.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/pyUnpackBasePath/default
    • Value - /opt/kafka
Note: For information on Java EE Servlet Containers, see the official Oracle documentation.

Storing the Apache Kafka commit files

You can change the default storage location of the Apache Kafka commit files. For optimal performance, store the files on an SSD disk or disks.

If you need to change the default location, you can do it in one of the following ways:

  • To change the default directory for a single server, add the <env name="dsm/services/stream/pyBaseLogPath" value="/root/kafkalogs" /> entry in the prconfig.xml file.
  • To change the default directory for all servers in the cluster, create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/pyBaseLogPath/default
    • Value - /root/kafkalogs
  • Optional: To spread the commit files across multiple directories and achieve better performance of the Stream service, do one of the following actions:
    • Add an entry in the prconfig.xml file. For example, <env name="dsm/services/stream/pyBaseLogPath" value="/vol1/kafkalogs,/vol2/kafkalogs" />.
    • Create a dynamic system setting with the following options:
      • Owning Ruleset - Pega-Engine
      • Setting Purpose - prconfig/dsm/services/stream/pyBaseLogPath/default
      • Value - /vol1/kafkalogs,/vol2/kafkalogs

Associating a Kafka broker and keeper with individual IP addresses

The Apache Kafka server consists of two main components: a broker that manages data streaming, and a keeper that manages cluster configuration in real time. In horizontally scaled systems with multiple JVMs for Pega Platform, each Pega Platform instance is assigned an individual IP address. To enable horizontal scaling for the Stream service, assign individual IP addresses to the broker and the keeper by modifying the pyBrokerAddress and pyKeeperAddress properties in one of the following ways:

  • Add an entry in the prconfig.xml file. For example, <env name="dsm/services/stream/pyBrokerAddress" value="192.168.1.1" />.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/pyKeeperAddress/default
    • Value - For example, 192.168.1.2
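
For example, assuming that you want to set both properties on a node through the prconfig.xml file, the entries might look like the following sketch (the IP addresses are placeholders; use the addresses assigned to your own Pega Platform instances):

  <env name="dsm/services/stream/pyBrokerAddress" value="192.168.1.1" />
  <env name="dsm/services/stream/pyKeeperAddress" value="192.168.1.2" />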

Changing the broker, keeper, and JMX port configuration

You can change the port configuration for the broker, keeper, and JMX when you experience port conflicts on your Pega Platform instance. By default, the broker port is set to 9092, the keeper port to 2181, and the JMX port to 9999. If you need to change the default ports, you can do it in one of the following ways:

  • Add the <env name="dsm/services/stream/<property_name>" value="<port_number>" /> entry in the prconfig.xml file.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/<property_name>/default
    • Value - Port number. For example, 1111

In these entries, <property_name> is one of the following:

  • pyBrokerPort for the broker.
  • pyKeeperPort for the keeper.
  • pyJmxPort for JMX.
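
For example, assuming that another process already occupies port 9092 and you want to move the broker to port 9093, the prconfig.xml entry might look like the following sketch:

  <env name="dsm/services/stream/pyBrokerPort" value="9093" />

Following the same pattern, the corresponding dynamic system setting would use the Setting Purpose prconfig/dsm/services/stream/pyBrokerPort/default with the value 9093.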

Increasing the number of partitions per topic

The default number of Kafka partitions is 20, but you can increase it to achieve greater concurrency and better performance of the Stream service. The number of partitions correlates with the number of nodes in your cluster: the more processing nodes you have, the more partitions you can add. Ideally, the number of partitions corresponds to the number of nodes multiplied by the number of processing threads per node. For example, a cluster with five Stream nodes and four processing threads per node matches the default of 20 partitions.

Increase the number of partitions per topic in one of the following ways:

  • Add the <env name="dsm/services/stream/server_properties/num.partitions" value="100" /> entry in the prconfig.xml file to increase the number of partitions to 100.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/server_properties/num.partitions/default
    • Value - Number of partitions. For example, 100

Overriding settings in the server.properties file

When you start the Stream service, Pega Platform generates a server.properties file that contains default Kafka settings. The Stream service provides optimal performance with these settings. You can override the default settings to fine-tune the performance or to meet your requirements for the service configuration.

For example, if you need to change the number of I/O threads from the default of eight to four, you can do it in one of the following ways:

  • Add the <env name="dsm/services/stream/server_properties/num.io.threads" value="4" /> entry in the prconfig.xml file.
  • Create a dynamic system setting with the following options:
    • Owning Ruleset - Pega-Engine
    • Setting Purpose - prconfig/dsm/services/stream/server_properties/num.io.threads/default
    • Value - 4

If you need to change how long Kafka retains data, override the default retention period of seven days in the following way:

  • Add the <env name="dsm/services/stream/server_properties/log.retention.hours" value="<number_of_hours>" /> entry in the prconfig.xml file.
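
For example, assuming that you want Kafka to keep data for three days instead of seven, the entry might look like the following sketch (72 hours is only an illustration):

  <env name="dsm/services/stream/server_properties/log.retention.hours" value="72" />

Following the same pattern as the other settings in this article, you could also apply this value cluster-wide through a dynamic system setting with the Setting Purpose prconfig/dsm/services/stream/server_properties/log.retention.hours/default.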
For more information about the available properties that you can override, see the official Apache Kafka documentation.
