
Creating a Kafka data set

You can create an instance of a Kafka data set in the Pega Platform to connect to a topic in the Kafka cluster. Topics are categories where the Kafka cluster stores streams of records. Each record in a topic consists of a key, value, and a time stamp. You can also create a new topic in the Kafka cluster from the Pega Platform and then connect to that topic.
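The record structure described above can be sketched as a simple data shape (illustrative only; the field names below are not a real Kafka client API):

```python
from dataclasses import dataclass

# Minimal sketch of a Kafka record: a key, a value, and a time stamp.
# Hypothetical names for illustration, not an actual client class.
@dataclass
class KafkaRecord:
    key: str        # used for partitioning; may be empty
    value: str      # the payload, for example a serialized customer event
    timestamp: int  # epoch milliseconds assigned when the record is produced

record = KafkaRecord(
    key="customer-42",
    value='{"event": "call"}',
    timestamp=1700000000000,
)
```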

Use a Kafka data set as a source of events (for example, customer calls or messages) that are used as input for Event Strategy rules that process data in real time.

Note: You can connect to an Apache Kafka cluster version 0.10.0.1 or later.

Perform the following steps to create a Kafka data set in the Pega Platform that represents a topic in the Kafka cluster:

  1. In Designer Studio, click + Create > Data Model > Data Set.
  2. Provide the data set label and identifier.
  3. From the Type list, select Kafka.
  4. Provide the ruleset, Applies to class, and ruleset version of the data set.
  5. Click Create and open.
  6. In the Connection section, in the Kafka configuration instance field, select an existing Kafka cluster record (Data-Admin-Kafka class) or create a new one (for example, when no records are present) by clicking the Open icon.
  7. Check whether the Pega Platform is connected to the Kafka cluster by clicking Test connectivity.
  8. In the Topic section, specify the topic to connect to by selecting an existing topic or by entering a name for a new topic.

    Note: By default, the name of the topic is the same as the name of the data set. If you enter a new topic name, that topic is created in the Kafka cluster only if the ability to automatically create topics is enabled on that Kafka cluster.

  9. Optional: In the Partition Key(s) section, define the data set partitioning. By configuring partitioning, you ensure that records with related keys are sent to the same partition. If no partition keys are set, the Kafka data set assigns records to partitions randomly. Perform the following steps to define partitioning of the Kafka data set:
    1. Click Add key.
    2. In the Key field, press the Down Arrow key to select a property to be used by the Kafka data set as a partitioning key.
    Note: By default, the available properties to be used as keys correspond to the properties of the Applies To class of the Kafka data set.
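The partition-key behavior in the step above can be sketched as follows. This is a simplified stand-in: Kafka's default partitioner hashes the serialized key with Murmur2, while this sketch uses CRC32 for a stable, dependency-free illustration. The point it demonstrates is that records with the same key always map to the same partition.

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    # CRC32 is deterministic across runs (unlike Python's built-in hash),
    # so the same key always maps to the same partition number.
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Records keyed by the same customer land in the same partition,
# which keeps related records together for downstream processing.
p1 = partition_for("customer-42", 6)
p2 = partition_for("customer-42", 6)
```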

  10. Optional: To read historical Kafka records, select the Read from beginning check box in the Advanced section. With this option selected, each real-time data flow run that references a Data Flow rule with this Kafka data set as the source also analyzes all records that were in the topic before the run started. If this check box is cleared, only data that enters the Kafka data set after the data flow run starts is analyzed. Select this option when you want the Pega Platform to analyze all historical data that you accumulated in the Kafka cluster.
  11. Click Save.
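The effect of the Read from beginning setting in step 10 can be sketched with a toy model of a topic as a list of records. The helper below is hypothetical, not a Pega or Kafka API; it only illustrates the difference between starting at the earliest retained record and starting at the run's own offset.

```python
def records_seen(topic, run_start_offset, read_from_beginning):
    # With Read from beginning, the run starts at offset 0 (all retained
    # records); otherwise it sees only records from its start offset on.
    begin = 0 if read_from_beginning else run_start_offset
    return topic[begin:]

topic = ["rec0", "rec1", "rec2"]   # records already in the topic
run_offset = len(topic)            # the data flow run starts here
topic += ["rec3", "rec4"]          # records arriving after the run starts

with_history = records_seen(topic, run_offset, read_from_beginning=True)
live_only = records_seen(topic, run_offset, read_from_beginning=False)
```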