Data Flow rule form
You can use the Data Flows tab on the Data Flow form to design the path that data takes from a source to a destination. When you apply a pattern to the canvas and provide instructions, you can control which connections the data follows to the destination at run time. This approach is different from adding individual shapes to the canvas, which you can use to configure process flows or strategies.
You can toggle the way that you visualize the labels of the shapes in the data flow canvas by selecting Names or Classes in the data flow toolbar.
Use the source pattern to define the data that the data flow reads. A source is the standard entry point of a data flow.
A data flow contains one primary source. This primary source can originate from:
Abstract – The connection point for other data flows (nested data flows). It can represent the input page to be used when invoking either Save or Process operations.
Data flow – The entry point is based on another data flow. This data flow can be under the same data flow class or another class.
When you select a data flow as the source or destination of other data flows, you create data flow chains.
Only a data flow that has an abstract destination can be used as the primary input in another data flow.
Data set – The entry point is based on the data defined in a data set in the data flow class.
Report definition – The entry point is based on a report definition. The scope of this report definition is the applies to class of the data flow. To distribute a data flow run across multiple nodes, select the Enable distributed run check box and specify the Partition Key, as illustrated in the sketch after the note below.
Note: To use an aggregate report definition that generates a database group by statement, create a property in the class on which the report is defined. The property must have the same name as the aggregate in the report definition.
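The following Python sketch is a conceptual illustration only, not part of the rule form: it shows how a partition key can be used to spread source records across processing nodes during a distributed run. The function, record, and property names are hypothetical.

```python
# Conceptual sketch: distributing source records across nodes by hashing
# the value of a partition key. Names and logic are hypothetical and only
# illustrate the idea behind a distributed data flow run.
from collections import defaultdict

def partition_records(records, partition_key, node_count):
    """Assign each record to a node based on its partition key value."""
    partitions = defaultdict(list)
    for record in records:
        node = hash(record[partition_key]) % node_count
        partitions[node].append(record)
    return partitions

customers = [
    {"CustomerID": "C-1", "Region": "East"},
    {"CustomerID": "C-2", "Region": "West"},
    {"CustomerID": "C-3", "Region": "East"},
]
# Distribute the run across two nodes, partitioning on CustomerID.
print(partition_records(customers, "CustomerID", 2))
```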
Use the compose pattern to combine data from two sources into a page or page list property. This pattern requires a property to match data between the two sources. The starting data point is the shape from which you start combining data. After the compose path is available, you can define the secondary source by providing an existing data set or another data flow.
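As a rough analogy, the compose pattern behaves like enriching each primary record with the matching secondary records under a single page list property. The Python sketch below assumes simple dictionaries; the property and data names are hypothetical.

```python
# Conceptual analogy of the compose pattern: for every primary record, collect
# the secondary records whose match property has the same value and store them
# in a page list property. All names are hypothetical.
def compose(primary, secondary, match_property, target_property):
    for record in primary:
        record[target_property] = [
            s for s in secondary if s[match_property] == record[match_property]
        ]
    return primary

customers = [{"CustomerID": "C-1"}, {"CustomerID": "C-2"}]
orders = [
    {"CustomerID": "C-1", "Amount": 120},
    {"CustomerID": "C-1", "Amount": 80},
]
# Each customer now carries its matching orders in the Orders page list.
print(compose(customers, orders, "CustomerID", "Orders"))
```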
Use the convert pattern to change the class of the incoming data pages to another class. The following conversion modes are available:
Class conversion either maps properties with identical names automatically, or relies on manual mapping for properties that do not share the same name between the source and the target.
The convert pattern operates on the data determined by the starting shape for the convert path. After the convert path is available, you define its properties by selecting the target class and specifying how to handle property mapping between the source and the target.
The mapping of properties between the source and target can be handled automatically, or you can manually define how to map source and target properties.
Configuring the convert pattern
With manual mapping, you create expressions, for example, Set .pyName equal to .pySubjectName. In this way, you control which properties are overwritten when they do not have the same name but represent the same data.
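The Python sketch below is a conceptual analogy of that mapping behavior, assuming simple dictionaries: properties with matching names are copied automatically, and manual mapping expressions handle the remaining properties. All names are hypothetical.

```python
# Conceptual analogy of the convert pattern: copy same-named properties
# automatically and apply manual mapping expressions such as
# "Set .pyName equal to .pySubjectName" for the rest. Names are hypothetical.
def convert(source_page, target_properties, manual_mapping):
    target_page = {}
    for prop in target_properties:
        if prop in manual_mapping:            # manual mapping expression
            target_page[prop] = source_page[manual_mapping[prop]]
        elif prop in source_page:             # automatic mapping by name
            target_page[prop] = source_page[prop]
    return target_page

source = {"pySubjectName": "Anna", "pyAge": 34}
# pyAge maps automatically; pyName is mapped manually from pySubjectName.
print(convert(source, ["pyName", "pyAge"], {"pyName": "pySubjectName"}))
```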
By adding this shape in a data flow, you can apply complex data transformations on the top-level clipboard page through data transform rules. In this shape, you can reference data transform rules that belong to the following classes:
When a data transformation is complete, the Data Transform shape propagates the modified pages to the next shape in the flow.
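Conceptually, the shape applies a transformation to the top-level page and then passes the modified page downstream, roughly as in the hypothetical Python sketch below.

```python
# Conceptual sketch: a data transform modifies the top-level page before the
# shape propagates it to the next shape in the flow. Names are hypothetical.
def apply_transform(page, transform):
    transform(page)     # modify the incoming page in place
    return page         # propagate the modified page downstream

def set_full_name(page):
    page["FullName"] = page["FirstName"] + " " + page["LastName"]

print(apply_transform({"FirstName": "Anna", "LastName": "Kowalska"}, set_full_name))
```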
Configuring the data transform pattern
Use the event strategy pattern to reference and run an event strategy in a data flow.
Configuring the event strategy pattern
Use the filter pattern to define and apply a filter to the incoming data. The output consists of only those records that satisfy the filter conditions.
Configuring the filter pattern
In the left field, enter the name of a property that is evaluated by the filter.
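As a conceptual analogy, the filter keeps only the records for which every configured condition evaluates to true, as in the hypothetical Python sketch below.

```python
# Conceptual analogy of the filter pattern: only records that satisfy all
# filter conditions continue to the next shape. Property names are hypothetical.
def filter_records(records, conditions):
    return [r for r in records if all(condition(r) for condition in conditions)]

records = [{"Age": 17, "Country": "PL"}, {"Age": 42, "Country": "PL"}]
adults = filter_records(
    records,
    [lambda r: r["Age"] >= 18, lambda r: r["Country"] == "PL"],
)
print(adults)   # only the second record passes both conditions
```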
Use the Text Analyzer pattern to reference an instance of the Text Analyzer rule and apply text analysis to incoming records that contain text.
Use the merge pattern to combine data in the primary and secondary data paths of a data flow into a single track. The starting data point is the shape from which you start merging data. Once the merge path is available, you define the secondary source by providing an existing data set or another data flow.
When data does not match, you can exclude the source component results that do not match the merge condition. If one of the specified properties does not exist, the value of the other property is not included in the class that stores the merge results.
In cases of data mismatch, you can select which source is leading:
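The following Python sketch is a conceptual analogy of the merge behavior, assuming simple dictionaries: records that share a match property value are combined into one record, non-matching records can be excluded, and the leading source decides whose values win when both sources define the same property. All names are hypothetical.

```python
# Conceptual analogy of the merge pattern. Records from the primary and
# secondary paths that share a match property value are combined; the leading
# source wins on conflicting properties. Names are hypothetical.
def merge(primary, secondary, match_property, leading="primary", exclude_unmatched=True):
    secondary_by_key = {s[match_property]: s for s in secondary}
    merged = []
    for record in primary:
        match = secondary_by_key.get(record[match_property])
        if match is None:
            if not exclude_unmatched:
                merged.append(dict(record))
            continue
        # Later keys win: put the leading source last in the merge.
        combined = {**match, **record} if leading == "primary" else {**record, **match}
        merged.append(combined)
    return merged

primary = [{"CustomerID": "C-1", "Segment": "Gold"}]
secondary = [{"CustomerID": "C-1", "Segment": "Silver", "Churn": 0.2}]
# With the primary source leading, Segment stays "Gold".
print(merge(primary, secondary, "CustomerID", leading="primary"))
```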
Use the strategy pattern to run a strategy based on combined or not combined data. The data that you pass to run the strategy is determined by the starting shape for the strategy path. When the strategy path is available, you select the strategy to run and the mode in which to run it. In a data flow, each strategy can run in one mode only.
You can use the following modes for running a strategy:
Output strategy results
Expand this section and select the class in which you want to store strategy results.
When you change the default output class, map the properties from the strategy result class to the properties of the class that you select.
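As a rough illustration of that mapping, the hypothetical Python sketch below copies properties from a strategy result onto the properties of a custom output class; the property names are assumptions, not the actual strategy result schema.

```python
# Conceptual sketch: mapping strategy result properties onto the properties of
# a custom output class. The property names are hypothetical.
RESULT_TO_OUTPUT = {"pyName": "OfferName", "pyPropensity": "Propensity"}

def map_result(strategy_result):
    return {target: strategy_result[source] for source, target in RESULT_TO_OUTPUT.items()}

print(map_result({"pyName": "GoldCard", "pyPropensity": 0.37}))
```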
Use the destination pattern to define the data point that the data flow writes to. A destination is the standard output point of a data flow. Every data flow defines one or more destinations that output all results, or only the results that meet a given condition.
You can use the following destination types:
When you select a data flow as the source or destination of other data flows, you create data flow chains.
The type of data set determines the write operation that the destination performs. Destinations can write to database tables, Decision Data Stores, Adaptive Decision Manager (ADM) by using the pxAdaptiveAnalytics data set, Interaction History (IH) by using the pxInteractionHistory data set, or a Visual Business Director (VBD) data source. If your data set type is Decision Data Store (DDS), you can define the period of time for which to store the data.
When your data set is a database table, you must specify the save options:
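The save options themselves are configured on the form; as a conceptual illustration only, the hypothetical Python sketch below contrasts an insert-only write with a write that also updates existing records.

```python
# Conceptual sketch of database-table save options, assuming two typical modes:
# insert-only, or insert new and update existing records. The table is modeled
# as a dict keyed by a unique ID; all names are hypothetical.
def write(table, records, key, update_existing=True):
    for record in records:
        if record[key] in table and not update_existing:
            continue                      # insert-only: skip existing rows
        table[record[key]] = record       # insert new or overwrite existing

table = {"C-1": {"CustomerID": "C-1", "Segment": "Silver"}}
write(table, [{"CustomerID": "C-1", "Segment": "Gold"},
              {"CustomerID": "C-2", "Segment": "Bronze"}], "CustomerID")
print(table)   # C-1 is updated, C-2 is inserted
```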
By adding the Branch pattern in a data flow, you can perform the following actions:
You insert the Branch pattern by adding multiple Destination shapes in a data flow. You can add only one Branch shape in a data flow. The Branch shape radiates connectors that lead to each Destination shape that you created. In each of those connectors, you can add Filter, Convert, and Data Transform shapes to apply processing instructions that are specific only to the destination that the connector leads to.
Caution: When you delete the Branch pattern, you remove all additional destination patterns and the patterns that are associated with each branch.
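As a conceptual analogy, the Branch shape fans the incoming records out to several destinations, with each branch applying its own processing first, as in the hypothetical Python sketch below.

```python
# Conceptual analogy of the Branch pattern: each branch applies its own
# processing step (for example a filter) before writing to its destination.
# Branch names, steps, and destinations are hypothetical.
def run_branches(records, branches):
    outputs = {}
    for name, (process, destination) in branches.items():
        destination.extend(process(records))
        outputs[name] = destination
    return outputs

high_value, everyone = [], []
branches = {
    "HighValue": (lambda rs: [r for r in rs if r["Amount"] > 100], high_value),
    "All":       (lambda rs: list(rs), everyone),
}
print(run_branches([{"Amount": 50}, {"Amount": 150}], branches))
```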