Creating a Stream data set

Process a continuous data stream of events (records) by creating a Stream data set.

You can test how data flow processing is distributed across Data Flow service nodes in a multinode decision management environment by specifying the partition keys for Stream data set and by using the load balancer provided by Pega. For example, you can test whether the intended number and type of partitions negatively affect the processing of a Data Flow rule that references an event strategy.
  1. In the header of Dev Studio, click Create > Data Model > Data Set.

  2. In the Data Set Record Configuration section of the Create Data Set tab, define the data set by performing the following actions:

    1. In the Label field, enter the data set label.

      The identifier is automatically created based on the data set label.
    2. Optional:

      To change the automatically created identifier, click Edit, enter an identifier name, and then click OK.

    3. From the Type list, select Stream.

  3. In the Context section, specify the ruleset, applicable class, and ruleset version of the data set.

  4. Click Create and open.

  5. Optional:

    To create partition keys for testing purposes, in the Stream tab, in the Partition key(s) section, perform the following actions:

    Create partition keys for Stream data sets only in application environments where the production level is set to 1 - Sandbox, 2 - Development, or 3 - Quality assurance. For more information, see Specifying the production level.
    1. Click Add key.

    2. In the Key field, press the Down arrow key, and then select a property to use as a partition key.

      The available properties are based on the applicable class of the data set, which you defined in step 3.
    3. To add more partition keys, repeat steps 5.a through 5.b.

    For more information on when and how to use partition keys in a Stream data set, see Partition keys for Stream data sets.
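The key idea behind partition keys is that all records sharing the same key value are routed to the same stream partition, so their relative order is preserved. As an illustrative sketch only (the Stream service uses its own internal hash function, and `assign_partition` here is a hypothetical helper, not a Pega API), hash-based routing works like this:

```python
import zlib

def assign_partition(key: str, num_partitions: int) -> int:
    """Map a partition-key value to one of num_partitions partitions.

    Illustrative only: records that share a key value always hash to
    the same partition, which preserves their processing order.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Two events for the same customer land in the same partition:
p1 = assign_partition("CUST-1001", 6)
p2 = assign_partition("CUST-1001", 6)
assert p1 == p2
```

This is why a poorly chosen key (for example, one with very few distinct values) can concentrate load on a handful of partitions and slow down the data flow runs you are testing.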
  6. Optional:

    To disable basic authentication for your Stream data set, perform the following actions:

    1. Click the Settings tab.

    2. Clear the Require basic authentication check box.

      The REST and WebSocket endpoints are secured by using the Pega Platform common authentication scheme. Each post to the stream requires authentication with your user name and password. By default, the Require basic authentication check box is selected.
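When basic authentication is enabled, an external client must send an Authorization header with each post. A minimal sketch of building such a request follows; the endpoint URL, operator ID, and password here are placeholders (copy the actual REST endpoint shown on the data set's Stream tab), and the request is constructed but not sent:

```python
import base64
import json
import urllib.request

# Placeholder endpoint -- substitute the REST URL shown on the
# Stream tab of your data set in Dev Studio.
url = "https://pega.example.com/prweb/api/stream/MyStreamDataSet"

# One event record, shaped to match the data set's applicable class.
record = {"CustomerID": "CUST-1001", "EventType": "page_view"}

# Basic authentication: base64-encoded "operator:password" pair.
credentials = base64.b64encode(b"operator.id:password").decode("ascii")
request = urllib.request.Request(
    url,
    data=json.dumps(record).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + credentials,
    },
    method="POST",
)

# urllib.request.urlopen(request) would send the record; it is omitted
# here so the sketch stays self-contained.
```

If you clear the Require basic authentication check box, clients can omit the Authorization header, which is only appropriate in low production-level environments.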
  7. Confirm your settings by clicking Save.

  8. Optional:

    To populate the Stream data set with external data, perform one of the following actions:

    Choice: Use an existing Pega REST service

    1. In the navigation panel of Dev Studio, click Records > Integration-Connectors > Connect REST.

    2. Select a Pega REST service.

    3. Configure the settings in the Methods tab.

    Choice: Create a Pega REST service

    1. Create a Connect REST rule.

    2. Configure the settings in the Methods tab.

  • Partition keys for Stream data sets