
Pega Cloud cloning limitations in Customer Decision Hub environments

This content applies only to Cloud environments.

For clients running Customer Decision Hub (CDH), when Pega Cloud Services clones a client's Staging environment as part of the Pega Cloud software upgrade process, the cloning process preserves the client application and data from the Staging environment, with the following exceptions:

Decision data store (DDS) datasets

Decision data store (DDS) datasets in the CDH environment are not copied when the client's Staging environment is cloned during the upgrade process; however, the datasets are re-created in the cloned environment on first use. The standard product features in the client application continue to work using these newly created datasets.

Real time data flow runs and Interaction History (IH) summaries

Depending on the type of data flow and IH summary used in your CDH environment, the behavior of these data flows and summaries varies:

  • Managed real-time data flows and materialized IH aggregates automatically restart on the cloned environment.
  • Unmanaged real-time data flows (not marked as managed) and non-materialized IH aggregates do not restart automatically; instead, clients must restart the real-time data flows that populate the IH aggregates.

Adaptive Decision Manager (ADM) models

ADM models are stored in the relational database and cached in DDS datasets. Because the DDS datasets serve only as a cache, cloning does not impact these models; the caches are regenerated when the cloned environment starts.

Visual Business Director (VBD)

All VBD column families are re-created when needed.

Client-defined DDS data sets

Client-defined DDS data sets are created when the cloned environment starts, but they are empty. If the data in these DDS datasets is needed, clients must do one of the following:

  • Re-create the data set from the current database data.
  • Copy the data set from the originating system; the method depends on the size of the data set:
    • For small datasets, clients use the export and import facility available from the data set's action menu. The export option generates a CSV file on your desktop, and the import option reads the local CSV file and inserts its rows into the empty data set on the clone.
    • For larger datasets, clients create a data flow that reads the data from the source data set and writes it to a file data set that resides on one of the shared environment repositories. To load this data into the empty data set on the cloned environment, clients create a second data flow that reads from that file data set and writes to the empty data set on the clone.
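Conceptually, the small-dataset path above is a CSV round trip: serialize the records on the source environment, then parse the same CSV into the empty data set on the clone. The following Python sketch is a hypothetical illustration of that round trip only; the record fields are invented examples, and the actual export and import in Pega are performed from the data set's action menu, not through code like this.

```python
import csv
import io

# Hypothetical records standing in for rows of a client-defined DDS data set
# on the originating (source) environment. Field names are invented examples.
source_records = [
    {"CustomerID": "C-1001", "Segment": "Gold", "Propensity": "0.42"},
    {"CustomerID": "C-1002", "Segment": "Silver", "Propensity": "0.17"},
]

def export_to_csv(records):
    """Serialize records to CSV text (stand-in for the data set's export action)."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

def import_from_csv(csv_text):
    """Parse CSV text back into records (stand-in for the import action on the clone)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# Export on the source environment, then import into the empty data set on the clone.
csv_payload = export_to_csv(source_records)
cloned_records = import_from_csv(csv_payload)
assert cloned_records == source_records  # the round trip preserves the data
```

The large-dataset path follows the same shape, except the CSV payload is written to and read from a file data set on a shared environment repository rather than a local desktop file.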

In both cases, consider the sensitivity of the data being copied into the cloned environment; exporting Production data is not allowed.

