Making decisions in data flow runs

Creating a batch run for data flows
Create batch runs for your data flows to make simultaneous decisions for large groups of customers. You can also create a batch run for data flows with a non-streamable primary input, for example, a Facebook data set.

Creating a real-time run for data flows
Provide your decision strategies with the latest data by creating real-time runs for data flows with a streamable data set source, for example, a Kafka data set.

Creating an external data flow run
Specify where external data flows run, and manage and monitor their runs, on the External processing tab of the Data Flows landing page. External data flows run in an external environment (data set) that is referenced by a Hadoop record on Pega Platform.

Monitoring single case data flow runs
View and monitor statistics for data flow runs that are triggered in single case mode by the DataFlow-Execute method. Check the number of invocations of single case data flow runs to evaluate system usage for licensing purposes. Analyze run metrics to support performance investigations when service-level agreements (SLAs) are breached.

Changing the data flow failure threshold
On the Real-time processing and Batch processing tabs, you can view the number of errors that occurred during stream and non-stream data processing. Click the number in the # Failed records column to open the data flow errors report and determine the cause of each error. When the number of errors reaches the data flow failure threshold, the data flow run fails.
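The failure-threshold behavior described above can be sketched as follows. This is an illustrative model only, not Pega Platform code: the function and record names are hypothetical, and it simply shows a run that counts failed records and fails once the configured threshold is reached.

```python
# Illustrative sketch of a data flow failure threshold (not Pega code).
# The run processes records one by one, counts failures, and fails the
# whole run once the failure count reaches the configured threshold.

def run_data_flow(records, process, failure_threshold):
    """Process records; raise RuntimeError when failures reach the threshold."""
    failed = 0
    for record in records:
        try:
            process(record)
        except Exception:
            failed += 1  # this record would appear under "# Failed records"
            if failed >= failure_threshold:
                raise RuntimeError(
                    f"Data flow run failed: {failed} errors reached the threshold"
                )
    return failed  # run completes; failures stayed below the threshold

# Hypothetical processor that rejects records missing a customer ID.
def process(record):
    if "customer_id" not in record:
        raise ValueError("missing customer_id")
```

A run with a few bad records completes and reports the failure count, while a run whose failures reach the threshold stops with an error, mirroring how the # Failed records column and the failure threshold interact.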