
Troubleshooting Kafka

Review the following troubleshooting scenarios to find solutions to problems with Kafka.

Stream service nodes are NORMAL, but data flow hangs

Even though all nodes are in the NORMAL state, the data flow hangs.

Reason

One or more Kafka broker ports within the cluster are unreachable from the other nodes.

Solution

Perform the following actions:

  1. Check whether the Kafka broker port is exposed to the rest of the nodes (see the connectivity sketch after this list).

    The default port is 9092.

  2. If the port is not exposed, change the setting in the prconfig.xml file.

    For more information, see the “Broker, Keeper and JMX Port Configuration” section in the Kafka common configuration options article.
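
A quick way to verify that the broker port is reachable from another node is a plain TCP connection test. The following is a minimal sketch that assumes the default port 9092 and a hypothetical host name; run it from one of the other cluster nodes and replace the host with the address of the node you are checking.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class BrokerPortCheck {
        public static void main(String[] args) {
            // Hypothetical host name of the stream node to check; replace with a real address.
            String host = args.length > 0 ? args[0] : "stream-node-1.example.com";
            // Default Kafka broker port, as noted above.
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 9092;

            try (Socket socket = new Socket()) {
                // Attempt a plain TCP connection with a 5-second timeout.
                socket.connect(new InetSocketAddress(host, port), 5000);
                System.out.println("Port " + port + " on " + host + " is reachable.");
            } catch (Exception e) {
                System.out.println("Port " + port + " on " + host + " is NOT reachable: " + e.getMessage());
            }
        }
    }

If the connection fails, the port is most likely not exposed, and the prconfig.xml change in step 2 applies.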

Stream service nodes fail to run on WebSphere 8.5

Reason

By default, WebSphere 8.5 runs on Java 6, while Kafka supports only Java 7 and later versions (IBM JVM 7.1 in the case of IBM Java).

Solution

Check which Java version is used by your WebSphere installation and ensure that it is Java 7 or later.
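
To confirm which Java runtime the stream service actually starts with, you can print the JVM system properties. The following is a minimal sketch; run it with the same JVM that WebSphere is configured to use.

    public class JavaVersionCheck {
        public static void main(String[] args) {
            // Reports the version and vendor of the JVM that runs this class.
            // Kafka-backed stream nodes need Java 7 or later (IBM JVM 7.1 for IBM Java).
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
            System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
        }
    }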

This node cannot be replaced in the current production level as data will be lost

Reason

The message appears during the replacement of a stream node when Kafka data or metadata already exists.

Solution

Perform one of the following actions:

  • If some Kafka data is present on the stream cluster and you want to keep that data, disregard the message because this is expected behavior.
  • If no Kafka data exists on the stream cluster, or if you want to remove the existing data, clean the Kafka metadata by running the following SQL commands, and then retry replacing the node (an equivalent JDBC sketch follows this list):
    DELETE FROM pr_data_stream_nodes;

    DELETE FROM pr_data_stream_sessions;
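
Equivalently, the two statements can be run from application code. The following JDBC sketch assumes a hypothetical PostgreSQL connection URL and credentials; replace them with the details of the database that holds the pr_data_stream_nodes and pr_data_stream_sessions tables, or simply run the SQL above in your database client.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CleanStreamMetadata {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL and credentials; replace with your own values.
            // Requires the matching JDBC driver on the classpath.
            String url = "jdbc:postgresql://db-host:5432/pega";
            try (Connection conn = DriverManager.getConnection(url, "pega_user", "password")) {
                conn.setAutoCommit(false);
                try (Statement stmt = conn.createStatement()) {
                    // Remove the stream node and session metadata so that the node can be replaced.
                    stmt.executeUpdate("DELETE FROM pr_data_stream_nodes");
                    stmt.executeUpdate("DELETE FROM pr_data_stream_sessions");
                }
                conn.commit();
            }
        }
    }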

 

To view the main outline for this article, see Kafka as a streaming service.

Published February 21, 2019 — Updated March 6, 2019
