Data Flow rule form
Data flows provide a design experience based on selecting the patterns that you apply, starting from a source and ending in a destination. Unlike strategies and process flows, you do not add components or shapes directly to the data flow. Instead, you define patterns by clicking the add icon that appears when you focus on a shape, and then selecting from the instructions that you can use to create paths that lead to the destination. As a result, the visual layout of the data flow is determined by the sequence of instructions, or execution points, and as you define the properties of each shape, the data flow calculates the connection to the destination.
You can toggle how the labels of the shapes are displayed on the data flow canvas by selecting Names or Classes in the data flow toolbar:
Source is the standard entry point of a data flow. A source defines the data that the data flow reads.
A data flow contains one primary source. This primary source can originate from:
Abstract: the entry point accepts records in the data class at run time.
Data flow: the entry point is based on another data flow, which can belong to the same class as this data flow or to a different class.
Only a data flow that has an abstract destination can be used as the primary input in another data flow.
Data set: the entry point is based on the data defined in a data set in the data flow class.
Report definition: the entry point is based on a report definition. The scope of this report definition is the applies to class of the data flow. To distribute a data flow run across multiple nodes, select the Enable distributed run check box and specify the Partition Key.
Note: If you want to use a report definition that performs aggregation (generates a database GROUP BY statement), create a property in the class on which the report is defined. The property must have the same name as the aggregate in the report definition.
The compose pattern allows you to combine data from two sources into a page or page list property. This pattern requires a property to match data between the two sources. The starting data point is the shape from which you start combining data. Once the compose path is available, you define the secondary source by providing an existing data set or another data flow.
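The combination performed by the compose pattern can be sketched in plain Python. This is an illustrative analogy only, not product code: the `compose` helper, the `CustomerID` match property, and the `Orders` page list name are all assumed for the example.

```python
# Illustrative sketch of compose semantics: records from a secondary
# source are embedded into a page-list property of each primary record,
# matched on a shared key property ("CustomerID" is an assumed example).
def compose(primary, secondary, match_key, page_list):
    # Index secondary records by the match property for lookup.
    index = {}
    for rec in secondary:
        index.setdefault(rec[match_key], []).append(rec)
    for rec in primary:
        # Matching secondary records become an embedded page list;
        # records with no match receive an empty page list.
        rec[page_list] = index.get(rec[match_key], [])
    return primary

customers = [{"CustomerID": "C1", "Name": "Ada"}]
orders = [{"CustomerID": "C1", "Total": 30},
          {"CustomerID": "C1", "Total": 12}]
result = compose(customers, orders, "CustomerID", "Orders")
# result[0]["Orders"] now holds both matching order records
```

The key point the sketch illustrates is that compose nests the secondary records inside the primary record rather than flattening them into one row.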
The convert pattern allows you to take data in one class and transfer it to another, overwriting properties that are identical by name, or explicitly mapping properties that do not share the same name between the source and target. The data that you pass is determined by the starting shape for the convert path. Once the convert path is available, you define its properties by selecting the target class and specifying how to handle property mapping between the source and target.
The mapping of properties between the source and target can be handled automatically, or you can explicitly define how to map source and target properties.
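The two mapping modes can be sketched as follows. This is an illustrative analogy under assumed names (`convert`, `FullName`, `CustomerName`), not product code:

```python
# Illustrative sketch of convert semantics: each source record is
# rewritten into the target class. Same-named properties are copied
# automatically; an explicit map renames properties that differ.
def convert(records, explicit_map=None):
    # explicit_map: {source_property: target_property} for names
    # that differ between the source and target classes.
    explicit_map = explicit_map or {}
    out = []
    for rec in records:
        target = {}
        for name, value in rec.items():
            if name in explicit_map:
                target[explicit_map[name]] = value  # explicit mapping
            else:
                target[name] = value  # automatic same-name copy
        out.append(target)
    return out

converted = convert([{"FullName": "Ada", "Age": 36}],
                    {"FullName": "CustomerName"})
# "Age" carries over by name; "FullName" is remapped to "CustomerName"
```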
The event strategy pattern allows you to reference and run an event strategy in a data flow.
Configuring the event strategy pattern
The filter pattern allows you to specify filter conditions and apply them to each element of the input flow. The output flow consists of those elements that satisfy the filter conditions.
Configuring the filter pattern
In the left field, specify the name of a property to be used by the filter.
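The filtering behavior can be sketched in plain Python. The helper name, property names, and condition representation below are assumptions made for illustration:

```python
# Illustrative sketch of filter semantics: a record continues along
# the flow only when every configured condition holds for it.
def filter_pattern(records, conditions):
    # conditions: list of (property, predicate) pairs, mimicking the
    # left-field property and its comparison in the rule form.
    return [rec for rec in records
            if all(pred(rec[prop]) for prop, pred in conditions)]

people = [{"Name": "Ada", "Age": 36}, {"Name": "Bob", "Age": 15}]
adults = filter_pattern(people, [("Age", lambda v: v >= 18)])
# Only records satisfying the condition reach the output flow
```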
This pattern allows you to reference an instance of the Free Text Model rule to deliver text analytics capabilities to users via data flows.
The merge pattern allows you to combine data in the primary and secondary data paths into a single track. This pattern requires the sources to be in the same class, matching conditions between the primary and secondary sources, and configuration to handle data mismatches. The starting data point is the shape from which you start merging data. Once the merge path is available, you define the secondary source by providing an existing data set or another data flow.
When records do not match, you can exclude the source component results that do not satisfy the merge condition. If one of the specified properties does not exist, the value of the other property is not included in the class that stores the merge results.
In cases of data mismatch, you can select which source is leading:
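The merge behavior, including the leading-source choice and the option to exclude non-matching records, can be sketched as below. The helper and property names are illustrative assumptions:

```python
# Illustrative sketch of merge semantics: primary and secondary records
# in the same class are combined on a matching key. When both sources
# supply different values for the same property, the leading source's
# value is kept; unmatched primary records can optionally be excluded.
def merge(primary, secondary, key, leading="primary",
          exclude_no_match=False):
    sec_index = {rec[key]: rec for rec in secondary}
    out = []
    for rec in primary:
        match = sec_index.get(rec[key])
        if match is None:
            if not exclude_no_match:
                out.append(dict(rec))  # keep unmatched primary record
            continue
        # Later dict wins in Python, so the leading source goes last.
        merged = {**match, **rec} if leading == "primary" else {**rec, **match}
        out.append(merged)
    return out

primary = [{"ID": 1, "City": "Oslo"}, {"ID": 2, "City": "Lund"}]
secondary = [{"ID": 1, "City": "Bergen", "Tier": "Gold"}]
merged = merge(primary, secondary, "ID",
               leading="secondary", exclude_no_match=True)
# The mismatching "City" value comes from the leading (secondary)
# source, and the unmatched record with ID 2 is excluded
```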
The strategy pattern is designed to run a strategy on the data, whether or not that data was combined earlier in the flow. The data that you pass to the strategy is determined by the starting shape for the strategy path. When the strategy path is available, you select the strategy to run and the mode in which to run it. In a data flow, each strategy can run in only one mode.
Output strategy results
Unfold this section and select a class where you want to store strategy results.
If you change the default output class, map the properties from the strategy result class to the properties of the class that you select.
The destination pattern defines the data point that you write to; it is the standard output point of a data flow. Every data flow defines one or more destinations that output all results, or only the results that meet a given condition.
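The conditional routing of results to one or more destinations can be sketched in plain Python. The `route` helper and the property names are assumptions made for the example, not product code:

```python
# Illustrative sketch of destination semantics: every record is written
# to each destination whose condition it satisfies; a destination with
# no condition receives all results.
def route(records, destinations):
    # destinations: list of (condition, sink) pairs; condition is a
    # predicate over a record, or None for an unconditional destination.
    for rec in records:
        for condition, sink in destinations:
            if condition is None or condition(rec):
                sink.append(rec)

high_value, audit_log = [], []
route(
    [{"Name": "Ada", "Score": 0.9}, {"Name": "Bob", "Score": 0.2}],
    [(lambda rec: rec["Score"] > 0.5, high_value),  # conditional
     (None, audit_log)],                            # receives everything
)
```

Note that a single record can reach several destinations, since each destination's condition is evaluated independently.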
The types of data representation you can use in a destination are: Abstract, Activity, Case, Data flow, and Data set.
The type of data set determines the write operation performed by the destination: Database tables, Decision Data Stores, Adaptive Decision Manager (ADM) by using the pxAdaptiveAnalytics data set, Interaction History (IH) by using the pxInteractionHistory data set, or a Visual Business Director (VBD) data source.
When your data set is a database table, you need to specify the save options: