Creating a batch run for data flows

You can create batch runs to make simultaneous decisions for large groups of customers or in cases where the primary input of a data flow is not a data set that can be streamed in real time. Data flow runs that are initiated through the Data Flows landing page process data in the access group context. They always use the checked-in instance of the data flow rule and the referenced rules.

  1. In the header of Dev Studio, click Configure > Decisioning > Decisions > Data Flows > Batch processing.
  2. On the Batch processing tab, click New.
  3. Associate a Data Flow rule with the data flow run:
    • In the Applies to field, press the Down Arrow key and select the class to which the Data Flow rule that you want to run applies.
    • In the Access group field, press the Down Arrow key and select an access group context for the data flow run.
    • In the Data flow field, press the Down Arrow key and select the Data Flow rule that you want to run. The available rules are limited by your selection in the Applies to field.
    • In the Service instance name field, select Batch.
  4. Optional: Specify any activities that you want to run before the data flow starts or after the data flow run has completed.
    1. Expand the Advanced section.
    2. In the Additional processing section, specify the pre-processing and post-processing activities for the run.
  5. Optional: Specify the data flow run resilience settings for resumable or non-resumable data flow runs.
    Data flow source
    In a resumable data flow run, the source of the referenced Data Flow is a Stream, Kafka, or Database Table data set. All other data set types can be sources of non-resumable data flow runs only.
    Data flow resumption
    Resumable runs can be paused and resumed. In the case of a node failure, the active data partitions are transferred to the remaining functional nodes, and processing resumes from the last correctly processed record ID that was captured in a snapshot. For non-resumable runs, no snapshots are taken because the order of the incoming records cannot be guaranteed. Therefore, the starting point for non-resumable data flow runs is the first record in each partition.

    You can configure the following resilience settings:

    • Record failure:
      • Fail the run after more than x failed records – Terminate the data flow run and mark it as failed when the threshold for the allowed total number of failed records is reached or exceeded. If the threshold is not reached, the data flow run finishes with errors. The default value is 1000 failed records.
    • Node failure:
      • Resume on other nodes from the last snapshot – For resumable data flow runs, transfer the processing to the remaining active Data Flow service nodes. The starting point is the last record ID that was processed before the most recent snapshot of the data flow run was saved. With this setting enabled, each record can be processed more than once.
      • Restart the partitions on other nodes – For non-resumable data flow runs, transfer the processing to the remaining active Data Flow service nodes. The starting point is based on the first record in the data partition. With this setting enabled, each record can be processed more than once.
      • Skip partitions on the failed node – For batch mode data flow runs, do not analyze the data that resides on the failed Data Flow service node. The run will be completed without all records being processed but each record that is successfully processed as a result of this data flow run is processed only once.
      • Fail the entire run – Terminate the data flow run and mark it as failed when a Data Flow service node fails. This setting provides backward compatibility with previous Pega Platform versions.
    • Snapshot management:
      • Create a snapshot every x seconds – For resumable data flow runs, specify the elapsed time between snapshots of the data flow run state. The default value is 5 seconds.
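    The interaction between the failed-record threshold, snapshots, and resumption points can be sketched as a minimal model. This is an illustration only, not Pega Platform code: the `BatchRun` and `PartitionState` classes and all field names here are hypothetical stand-ins for the behavior described above.

    ```python
    # Hypothetical model of batch-run resilience semantics; not a Pega API.
    from dataclasses import dataclass, field

    @dataclass
    class PartitionState:
        last_snapshot_id: int = 0   # last record ID captured in a snapshot

    @dataclass
    class BatchRun:
        failure_threshold: int = 1000   # "Fail the run after more than x failed records"
        resumable: bool = True          # True for Stream, Kafka, or Database Table sources
        failed_records: int = 0
        partitions: dict = field(default_factory=dict)
        status: str = "In progress"

        def record_failure(self):
            # Reaching or exceeding the threshold terminates the run as failed.
            self.failed_records += 1
            if self.failed_records >= self.failure_threshold:
                self.status = "Failed"

        def snapshot(self, partition, record_id):
            # Snapshots are taken only for resumable runs.
            if self.resumable:
                self.partitions.setdefault(partition, PartitionState()).last_snapshot_id = record_id

        def resume_point(self, partition):
            # Resumable runs restart from the last snapshot; non-resumable
            # runs restart from the first record in the partition.
            if self.resumable and partition in self.partitions:
                return self.partitions[partition].last_snapshot_id
            return 0
    ```

    For example, a run with `failure_threshold=3` stays "In progress" after two failed records and becomes "Failed" on the third, while `resume_point` returns the last snapshotted record ID for a resumable run and 0 otherwise.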
  6. Optional: For Data Flow rules that reference an Event Strategy rule, configure the state management settings.
    1. Expand the Event strategy section.
    2. Optional: Modify the Event emitting option. By default, when the data flow run stops, all the incomplete Tumbling windows in the Event Strategy rule emit the events that they have collected.
    3. In the State management section, specify the persistence type:
      • Memory - This persistence type keeps the event strategy state in running memory and writes the output to a destination when the data flow finishes running. The data is processed faster, but it can be lost if a system failure occurs.
      • Database - This persistence type periodically replicates the state of an event strategy to the Cassandra database in the Decision Data Store and stores it as key-value pairs. With this persistence type, if a system failure occurs, you can fully restore the state of the event strategy and continue processing data.
    4. In the Target cache size field, specify the maximum size of the cache for the state management data. The default value is 10 megabytes.
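    The trade-off between the two persistence types, and the default behavior of incomplete tumbling windows emitting their events when the run stops, can be sketched as follows. This is a hedged illustration, not Pega code: the classes, the per-event replication, and the dictionary standing in for the Decision Data Store are all simplifying assumptions.

    ```python
    # Hypothetical model of event-strategy state persistence; not a Pega API.

    class TumblingWindow:
        """Collects events until the window closes or the run stops."""
        def __init__(self):
            self.events = []

        def collect(self, event):
            self.events.append(event)

        def emit(self):
            # Emit everything collected so far and reset the window.
            out, self.events = self.events, []
            return out

    class EventStrategyState:
        def __init__(self, persistence="Memory", store=None):
            self.persistence = persistence   # "Memory" or "Database"
            # Dictionary standing in for the Decision Data Store (Cassandra).
            self.store = store if store is not None else {}
            self.window = TumblingWindow()

        def on_event(self, key, event):
            self.window.collect(event)
            if self.persistence == "Database":
                # Replicate state as key values; simplified to every event
                # here, whereas the real replication is periodic.
                self.store[key] = list(self.window.events)

        def on_run_stop(self):
            # Default: incomplete windows emit collected events on stop.
            return self.window.emit()

        def restore(self, key):
            # Only Database persistence survives a system failure;
            # Memory state is lost with the process.
            if self.persistence == "Database":
                self.window.events = list(self.store.get(key, []))
    ```

    Under this model, a new `EventStrategyState` sharing the same store can restore a Database-persisted window after a simulated failure, while a Memory-persisted window comes back empty.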
  7. Click Done.
  8. In the Run details window that opens, click Start to run your data flow. The Run details window displays the progress and statistics of your data flow work item.