Triggering a real-time event with the Event Stream service

Updated on August 4, 2022

Use the out-of-the-box Event Stream service to trigger Events and respond to them immediately. This method is recommended if the client calls are coming from within the cluster that is running Pega Customer Decision Hub.
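For illustration, a client inside the cluster might push an event payload to a service endpoint that feeds the Event Stream. This is only a minimal sketch: the endpoint URL and the payload field names are assumptions for the example, not the documented Pega API, and authentication is omitted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerEventSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical event payload; field names are assumptions for this sketch.
        String payload = """
                {
                  "EventName": "AccountOpened",
                  "CustomerID": "CUST-0001",
                  "EventTimestamp": "2022-08-04T10:15:30Z"
                }""";

        // Hypothetical endpoint; replace with the service URL configured in your environment.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://cdh.example.com/events/trigger"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}
```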

You can create real-time runs for data flows that have a data set that can be streamed in real time as the primary input. Data flow runs that are initiated through the Data Flows landing page process the data by using the checked-in instance of the Data Flow rule and the rules that it references.
  1. Log in to Pega Customer Decision Hub as an operator with access to Dev Studio.
  2. Click Configure > Decisioning > Decisions > Data Flows > Real-time processing.
  3. On the Real-time processing tab, click New.
  4. Associate a Data Flow rule with the data flow run by doing the following steps:
    1. In the Applies to field, select the event class, PegaMKT-Data-Event.
    2. In the Access group field, select PegaNBAM:Agents as the access group context for the data flow run.
    3. In the Data flow field, select the ProcessFromEventSource Data Flow rule.
    4. In the Service instance name field, select RealTime.
  5. Optional: Configure the number of threads for the data flow nodes.
    The recommended default value is 5. The number of threads should not exceed the number of cores on the data flow nodes; check your hardware specifications to determine the maximum number of threads that your nodes support. For a quick way to apply this rule, see the thread-count sketch after this procedure.
  6. Optional: To keep the run active and restart it automatically after every modification, select Manage the run and include it in the application, and then select the ruleset.
    If you move the ruleset to a new environment, the run moves with the ruleset and remains active.
  7. Optional: Specify any activities that you want to run before the data flow starts or after the data flow run has completed.
    1. Expand the Advanced section.
    2. In the Additional processing section, perform the following actions:
      • Specify a preprocessing activity to run before the data flow starts.
      • Specify a postprocessing activity to run after the data flow run completes.
  8. Optional: In the Resilience section, specify the data flow run resilience settings for resumable or non-resumable data flow runs. You can configure the following resilience settings:
    Record failure
    • Fail the run after more than x failed records – Terminate the data flow run and mark it as failed when the number of failed records reaches or exceeds the configured threshold. If the threshold is not reached, the data flow run finishes with errors. The default value is 1000 failed records.
    Node failure
    • Restart the partitions on other nodes – For non-resumable data flow runs, transfer the processing to the remaining active Data Flow service nodes. The starting point is based on the first record in the data partition. With this setting enabled, each record can be processed more than once.
    • Fail the entire run – For non-resumable data flow runs, terminate the data flow run and mark it as failed when a Data Flow service node fails. This setting provides backward compatibility with previous versions of Pega Platform.
    Snapshot management
    • Create a snapshot every x seconds – For resumable data flow runs, specify how often to create a snapshot of the data flow run state. The default value is 5 seconds.
  9. Click Done.
    Result: Data flow nodes are required to start the run. If the service contains no nodes in the cluster, a message is displayed with a link to the Services landing page, where you can add nodes.
  10. To analyze the lifecycle of the run and troubleshoot potential issues, in the Run details tab of the data flow run, click View Lifecycle Events.
    Result: In the window that opens, each event includes a list of details, such as the reason, which you can analyze to better understand the event or to debug an issue. You can use the Actions menu to export the events to a single file.
    Note: By default, events from the last 10 days are displayed. You can change this value by editing the dataflow/run/lifecycleEventsRetentionDays dynamic data setting.
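As referenced in step 5, the thread count per node should not exceed the node's core count. The following sketch shows one way to cap a configured value at the number of cores that the JVM reports; the default of 5 comes from this procedure, and everything else is illustrative.

```java
public class ThreadCountCheck {
    public static void main(String[] args) {
        int recommendedDefault = 5; // default from step 5 of this procedure
        int cores = Runtime.getRuntime().availableProcessors();

        // The thread count per data flow node should not exceed the core count.
        int threads = Math.min(recommendedDefault, cores);
        System.out.printf("Cores: %d, configured threads: %d%n", cores, threads);
    }
}
```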
What to do next:

When an event is triggered, the system determines whether the referenced event is valid, enabled, and available (that is, the date and time are valid). If so, any campaigns that are associated with the event are initiated. The payload for the event (including the customer ID) is propagated to the campaign, from there to the strategy, and finally to the action. Refer to the Multi-Channel Campaigns chapter for more information on associating Events with Campaigns.
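The validity check described above can be pictured as in the sketch below. The class and field names are illustrative stand-ins, not Pega's internal implementation.

```java
import java.time.Instant;

public class EventValiditySketch {
    // Illustrative stand-in for an event definition; not a Pega class.
    record EventDefinition(boolean valid, boolean enabled,
                           Instant availableFrom, Instant availableTo) {
        boolean isTriggerable(Instant now) {
            // An event fires only if it is valid, enabled, and within its availability window.
            return valid && enabled
                    && !now.isBefore(availableFrom)
                    && !now.isAfter(availableTo);
        }
    }

    public static void main(String[] args) {
        EventDefinition def = new EventDefinition(true, true,
                Instant.parse("2022-01-01T00:00:00Z"),
                Instant.parse("2022-12-31T23:59:59Z"));
        System.out.println("Triggerable now: " + def.isTriggerable(Instant.now()));
    }
}
```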

Pega Customer Decision Hub 8.5 includes the CDHEventSource data set and the related ProcessFromEventSource data flow, which you can use to trigger real-time events. In addition, you can configure an external Kafka cluster for event processing, as described in Configuring Kafka to process real-time events.
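If you route events through an external Kafka cluster, a minimal producer might look like the sketch below. The broker address, topic name, and payload fields are assumptions for the example; the topic must match the one that your event data set is configured to read.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Hypothetical event payload keyed by customer ID; field names are illustrative.
        String key = "CUST-0001";
        String value = "{\"EventName\":\"AccountOpened\",\"CustomerID\":\"CUST-0001\"}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name is an assumption; use the topic that your event data set reads from.
            producer.send(new ProducerRecord<>("cdh-events", key, value));
            producer.flush();
        }
    }
}
```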
