Tips for troubleshooting Decision Data Store nodes

If you experience problems with Decision Data Store (DDS) nodes, verify that the Apache Cassandra database is configured correctly and that adequate resources are allocated. Use the following tips to investigate and solve the most common problems before looking for further guidance.

  • Check that the status of the nodes on the Decision Data Store tab of the Services landing page is NORMAL.

    For more information, see Status information for decision data nodes and Configuring the Decision Data Store service.

  • Check that each instance of Pega® Platform with DDS nodes has a unique IP address.

    When you use one virtual machine to host several instances of Pega Platform, only one instance of the platform can use DDS nodes at a time. All Pega Platform instances share the IP address of the virtual machine. Because the Cassandra database uses IP addresses to identify cluster nodes, each DDS node requires an individual IP address to start and operate correctly.

  • Prevent clock skews.

    Check the system date and time across computers in the cluster. Clock skew across a cluster causes many issues for DDS nodes: for example, data might get modified or might reappear after it was deleted.
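As a sketch, clock agreement can be checked by comparing each node's UTC epoch time against the local clock; the helper name, the empty node list, and the 2-second tolerance below are illustrative assumptions, not Pega or Cassandra requirements:

```shell
# Compare a remote node's clock against the local clock.
# Usage: check_skew REMOTE_EPOCH LOCAL_EPOCH [MAX_SKEW_SECONDS]
check_skew() {
  local remote=$1 local_epoch=$2 max_skew=${3:-2}
  local diff=$(( remote - local_epoch ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  if [ "$diff" -gt "$max_skew" ]; then
    echo "SKEW ${diff}s"
  else
    echo "OK ${diff}s"
  fi
}

# Hypothetical node list; fill in your DDS node addresses.
NODES=""
for node in $NODES; do
  remote=$(ssh "$node" 'date -u +%s') || continue
  echo "$node: $(check_skew "$remote" "$(date -u +%s)")"
done
```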

  • Verify that the number of file handles in use is below the allowed threshold.

    The default and recommended threshold is 100000. You can check the hard and soft limits by running the ulimit -Hn and ulimit -Sn commands.

    For more information, see the DataStax documentation about user resource limits.
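A quick sketch for comparing the current limits against that threshold (the helper function name is illustrative):

```shell
# Recommended minimum number of open file handles for a DDS node.
THRESHOLD=100000

# Report whether a ulimit value meets the threshold.
check_limit() {
  local value=$1
  if [ "$value" = "unlimited" ] || [ "$value" -ge "$THRESHOLD" ]; then
    echo "OK ($value)"
  else
    echo "TOO LOW ($value < $THRESHOLD)"
  fi
}

echo "hard limit: $(check_limit "$(ulimit -Hn)")"
echo "soft limit: $(check_limit "$(ulimit -Sn)")"
```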

  • Monitor the CPU statistics with the Linux top command.

    Check CPU statistics such as %us, %sy, %id, and %wa (I/O wait). A %wa value greater than 1% might indicate that the system RAM is insufficient, which causes disk swapping. When swapping occurs, Cassandra becomes I/O-bound rather than CPU-bound, which degrades performance; a healthy Cassandra node is typically CPU-bound.
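The check can be scripted; the helper below parses the I/O-wait field from a batch-mode top summary line (the parsing approach and the 1% cutoff are illustrative):

```shell
# Extract the "wa" (I/O wait) percentage from a top CPU summary line and
# flag values above 1%.
iowait_flag() {
  local wa
  wa=$(echo "$1" | sed -n 's/.*[ ,]\([0-9.]*\) wa.*/\1/p')
  awk -v w="$wa" 'BEGIN { if (w > 1.0) print "HIGH iowait: " w "%"; else print "iowait OK: " w "%" }'
}

# On a live Linux system, feed the real summary line from batch-mode top:
# iowait_flag "$(top -bn1 | grep '%Cpu')"
```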

  • Verify that the Cassandra database was not terminated during Pega Platform startup.

    Go to the Pega logs and look for exit code 137, which indicates that the Linux operating system terminated Cassandra with the SIGKILL signal. When a process uses an excessive amount of resources, such as memory or file handles, the Linux out-of-memory (OOM) killer might terminate it. Cassandra termination is also logged in the /var/log/kern.log file.

    For more information, see Log files tool.
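Exit code 137 is 128 + 9, that is, termination by signal 9 (SIGKILL). A small helper illustrates the decoding; the log paths in the comments are placeholders, so adjust them to your installation:

```shell
# Decode a process exit code: values above 128 mean "killed by signal N".
explain_exit_code() {
  local code=$1
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $(( code - 128 ))"
  else
    echo "normal exit ($code)"
  fi
}

# On a live system, search for the evidence (placeholder paths):
# grep -n "exit code 137" /path/to/PegaRULES.log
# grep -i "killed process" /var/log/kern.log    # OOM-killer entries
```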

  • Check the ports with the netstat -an | grep 7000 and netstat -an | grep 9042 commands.

    Ports 7000 and 9042 must listen on an IP address that is accessible from other nodes. The Cassandra database is available when port 7000 is in the LISTEN state. When port 7000 is in the ESTABLISHED state, Cassandra is available and other nodes are connected to this computer's Cassandra instance. When port 9042 is in the LISTEN state, DataStax driver clients can query the node.

    For more information, see Data Nodes on Pega Platform.
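A sketch of reducing the filtered netstat output for each port to a single state keyword (the helper is illustrative; run the commented loop on a DDS node where netstat is available):

```shell
# Summarize the state of a Cassandra port from filtered netstat output.
port_state() {
  case "$1" in
    *LISTEN*)      echo "LISTEN" ;;
    *ESTABLISHED*) echo "ESTABLISHED" ;;
    *)             echo "NOT FOUND" ;;
  esac
}

# On a live node:
# for p in 7000 9042; do
#   echo "port $p: $(port_state "$(netstat -an | grep ":$p ")")"
# done
```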

  • Check the logs for the PEGA0085 alert.

    If PEGA0085 is displayed, check the available disk space for the DDS nodes. For more information, see PEGA0085 alert: Decision Data Store disk space below threshold.
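Counting PEGA0085 entries can be scripted as below; the alert log path and the Cassandra data directory in the comments are placeholders for your installation's paths:

```shell
# Count PEGA0085 entries in a Pega alert log; report 0 if the file is absent.
count_alerts() {
  if [ -f "$1" ]; then
    grep -c "PEGA0085" "$1"
  else
    echo 0
  fi
}

# On a live system (placeholder paths):
# count_alerts /path/to/PegaRULES-ALERT.log
# Then check free space on each DDS node's Cassandra data directory:
# df -h /path/to/cassandra/data
```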

If you cannot resolve the issues that you are experiencing, contact Pega Product Support and provide the following details:

  • Cassandra and Pega logs. For more information, see Log files tool.
  • Information from nodetool by running the following commands:
    • nodetool status
    • nodetool status data
    • nodetool ring
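To gather those command outputs into one file for the support ticket, a sketch like the following can help (the output file name is arbitrary; nodetool must be on the PATH of a DDS node):

```shell
# Run the requested nodetool commands and label each section of output.
collect_nodetool() {
  # $cmd is deliberately unquoted so "status data" splits into two arguments.
  for cmd in "status" "status data" "ring"; do
    echo "=== nodetool $cmd ==="
    nodetool $cmd 2>&1 || echo "(nodetool $cmd failed)"
  done
}

# On a DDS node:
# collect_nodetool > dds-diagnostics.txt
```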
