
Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-126129 · Issue 569666

PropertyToColumnMap made more robust

Resolved in Pega Version 8.1.9

The DF_ProcessEmails dataflow was intermittently failing with a StageException error. This was traced to schema changes being propagated asynchronously by system pulse, which appear to have caused PropertyToColumnMap to cache a stale schema. To resolve this, if the property mapping is not found on the first attempt, the system will make a second attempt to retrieve the mapping. Additional logging has also been added for better diagnostics.
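
As a general illustration of this kind of fix (not Pega's internal implementation), the sketch below retries a cache lookup once after refreshing the cache; the SchemaCache interface and all names are hypothetical:

```java
import java.util.Map;
import java.util.Optional;

// Minimal sketch of a "retry once after refreshing the cache" pattern, analogous
// to the fix described above. All class and method names here are hypothetical.
final class ColumnMappingLookup {

    /** Hypothetical cache abstraction used only for this sketch. */
    interface SchemaCache {
        Optional<Map<String, String>> lookup(String className);
        void refresh(String className);
    }

    private final SchemaCache cache;

    ColumnMappingLookup(SchemaCache cache) {
        this.cache = cache;
    }

    Map<String, String> getMapping(String className) {
        Optional<Map<String, String>> mapping = cache.lookup(className);
        if (mapping.isEmpty()) {
            // The cached schema may be stale (e.g. a schema change propagated
            // asynchronously), so refresh and try exactly one more time.
            cache.refresh(className);
            mapping = cache.lookup(className);
        }
        return mapping.orElseThrow(() ->
                new IllegalStateException("No property-to-column mapping for " + className));
    }
}
```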

INC-128385 · Issue 564521

Behavior made consistent between SSA and legacy engines

Resolved in Pega Version 8.1.9

There was a behavioral disparity between the legacy execution engine and the SSA engine where the latter was not creating a new page when the index was one above the size of the page list. This has now been corrected in order to make the SSA behavior fully backward compatible with the legacy engine, i.e. a new blank page will be added to the list if the index is one above the size of the list.
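
A simplified illustration of the backward-compatible rule (not the engine code itself): if the requested index is exactly one past the end of the list, a new blank page is appended; anything further out remains an error. The class and representation below are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the legacy-compatible indexing rule described above.
// A "page" is represented here as a simple Map; names are illustrative only.
final class PageListAccess {

    static Map<String, Object> getOrAppend(List<Map<String, Object>> pageList, int index) {
        if (index >= 1 && index <= pageList.size()) {
            return pageList.get(index - 1);           // 1-based index, existing page
        }
        if (index == pageList.size() + 1) {
            Map<String, Object> blankPage = new HashMap<>();
            pageList.add(blankPage);                  // one past the end: append a blank page
            return blankPage;
        }
        throw new IndexOutOfBoundsException("Index " + index
                + " is more than one past the end of the page list");
    }
}
```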

INC-129222 · Issue 568530

Handling improvements for commit logs

Resolved in Pega Version 8.1.9

The ADM commit log logs the number of unconsumed messages that are going to expire. In certain circumstances, it could also include in that count unconsumed messages that were not going to expire. Because those messages were not expired and removed, the ADM commitlogs table grew larger than expected, the environment ran out of disk space, and performance issues were seen. To resolve this, a new adm_commitlog.adm_responses_commit_log_date_tiered table has been created with a default_time_to_live of 24 hours, and DateTieredCompactionStrategy has been configured with max_window_size_seconds and tombstone_compaction_interval both set to 24 hours.
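
For reference, table settings of this kind can be expressed in CQL along the following lines. This is a sketch only: the column layout is a placeholder rather than Pega's actual schema, and the 24-hour values are written in seconds (86400) as Cassandra expects.

```java
import com.datastax.oss.driver.api.core.CqlSession;

// Sketch only: a date-tiered commit-log table with a 24-hour TTL, mirroring the
// settings described above. The columns are placeholders, not Pega's schema.
public final class CreateDateTieredCommitLog {

    private static final String DDL =
            "CREATE TABLE IF NOT EXISTS adm_commitlog.adm_responses_commit_log_date_tiered ("
          + "  partition_id text,"
          + "  response_time timestamp,"
          + "  payload blob,"
          + "  PRIMARY KEY (partition_id, response_time))"
          + " WITH default_time_to_live = 86400"        // 24 hours, in seconds
          + " AND compaction = {"
          + "   'class': 'DateTieredCompactionStrategy',"
          + "   'max_window_size_seconds': 86400,"
          + "   'tombstone_compaction_interval': 86400}";

    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            session.execute(DDL);
        }
    }
}
```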

INC-132976 · Issue 580685

Performance improvements for Test Strategy data flow

Resolved in Pega Version 8.1.9

In the Test Strategy panel under Single case -> "Settings", selecting the "Data flow" option and choosing the CustomerData data flow was taking an excessive amount of time to run on a system with an extremely large database. To improve performance, two areas have been addressed: 1) the default behavior for record key suggestions in the test panel has been modified to collect only the ID, as the additional data is not necessary at that time; 2) a DSS has been added that will opt out of reading and collecting the customer IDs in order to minimize data stored on the clipboard.
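
As a rough illustration of the first change (fetching only the keys needed for suggestions rather than full records), the sketch below selects a single ID column. The keyspace, table, column, and class names are hypothetical; this is not the Pega implementation.

```java
import java.util.ArrayList;
import java.util.List;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

// Illustration of the "collect only the ID" idea: fetch just the key column for
// suggestions instead of whole records. Table and column names are hypothetical.
final class RecordKeySuggestions {

    static List<String> suggestKeys(CqlSession session, int limit) {
        List<String> ids = new ArrayList<>();
        // Selecting a single column keeps the clipboard footprint small compared
        // with pulling back every property of each record.
        for (Row row : session.execute(
                "SELECT customer_id FROM data.customers LIMIT " + limit)) {
            ids.add(row.getString("customer_id"));
        }
        return ids;
    }
}
```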

INC-138037 · Issue 586593

Strategy handling updated for very large systems using IH summary

Resolved in Pega Version 8.1.9

When a Strategy in a Real-time dataflow used IH Summary on a system with more than 5000 groups for one eventKey, the message "Error retrieving aggregates from Cassandra KVS" intermittently appeared. Investigation showed that when the number of result rows was greater than the FETCH_SIZE (set to 5000), another read to Cassandra was required and an exception was generated. To resolve this, updates have been made so that instead of returning maps, the system returns iterators and converts them to maps on the calling thread.
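
A rough sketch of the difference, using the DataStax Java driver's paged result sets for illustration (this is not the Pega internals, and the query and column names are hypothetical): collecting everything into a map forces all pages to be fetched up front, while handing back an iterator lets the calling thread pull additional pages past the fetch size as it consumes the rows.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

// Illustration of "return an iterator, build the map on the calling thread".
final class AggregateReader {

    // Eager version: materializes every row (and therefore every page) immediately.
    static Map<String, Long> readAllAsMap(CqlSession session, String eventKey) {
        Map<String, Long> counts = new HashMap<>();
        ResultSet rs = session.execute(
                "SELECT group_key, event_count FROM ih.aggregates WHERE event_key = ?", eventKey);
        for (Row row : rs) {
            counts.put(row.getString("group_key"), row.getLong("event_count"));
        }
        return counts;
    }

    // Lazy version: the caller drives iteration, so pages beyond the fetch size
    // are only requested (on the calling thread) as the iterator is consumed.
    static Iterator<Row> readAsIterator(CqlSession session, String eventKey) {
        return session.execute(
                "SELECT group_key, event_count FROM ih.aggregates WHERE event_key = ?", eventKey)
                .iterator();
    }
}
```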

SR-D92734 · Issue 553412

Simulation can take Data flow type as destination

Resolved in Pega Version 8.1.9

Support has been added for using a Data flow as a simulation target and for using a data transform in the simulation input.

SR-D16934 · Issue 493705

External Cassandra nodes listed in DDS cluster

Resolved in Pega Version 8.4

Nodes of an external Cassandra cluster were not listed in the DDS cluster except for the first one in the host list, and when that only listed Cassandra node was restarted, the status on the DDS cluster landing page did not return to “NORMAL” afterwards. In addition, even though other Cassandra nodes were up and running, the external Cassandra cluster was reported as unreachable. This was an unintended side effect of work done on the landing page to reflect the real state of the nodes after some were killed and restarted, and has been corrected by refining the equals() and hashCode() methods for the DDS member info in order to better differentiate the external Cassandra nodes.
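
As a general illustration of this kind of refinement (not the actual Pega class), member identity can be based on the node's host address, port, and whether it is external, so that distinct external Cassandra nodes never compare as equal:

```java
import java.util.Objects;

// Sketch of value-based identity for a cluster-member descriptor, so that two
// different external Cassandra nodes are never treated as the same member.
// The class and field names are illustrative only.
final class DdsMemberInfo {

    private final String hostAddress;
    private final int port;
    private final boolean external;

    DdsMemberInfo(String hostAddress, int port, boolean external) {
        this.hostAddress = hostAddress;
        this.port = port;
        this.external = external;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (!(other instanceof DdsMemberInfo)) {
            return false;
        }
        DdsMemberInfo that = (DdsMemberInfo) other;
        return port == that.port
                && external == that.external
                && hostAddress.equals(that.hostAddress);
    }

    @Override
    public int hashCode() {
        return Objects.hash(hostAddress, port, external);
    }
}
```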

SR-D22686 · Issue 493519

IH summaries working with external Cassandra

Resolved in Pega Version 8.4

Summaries were not working for external Cassandra. This was an issue with the IH summary component using the aggregated dataset as a reference, and this fix contains several components to improve this function. An issue where the IH aggregates dataset did not materialize when DDS was external Cassandra has been resolved by modifying the code that checks DDS availability. A filter that was used to match “” did not work when pre-aggregation was off; this was due to the IH Browse operation being done outside of a Data Flow context, and has been fixed. The IH Summary shape not working properly in a strategy when it referenced an aggregate dataset with pxInteractionID as part of the group-by properties was traced to the shape generating a pxInteractionID value when executing the strategy, and has been resolved by excluding pxInteractionID from the group keys in the IH Summary shape.
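
A simplified illustration of the last point (excluding a per-execution identifier from the grouping keys so that strategy results line up with the stored aggregates); the class and method names are hypothetical:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch: drop pxInteractionID from the group-by keys before aggregating, since
// its value is generated per strategy execution and would prevent results from
// matching the stored aggregates. Names other than pxInteractionID are illustrative.
final class GroupKeySelection {

    static Set<String> effectiveGroupKeys(List<String> configuredKeys) {
        Set<String> keys = new LinkedHashSet<>(configuredKeys);
        keys.remove("pxInteractionID");
        return keys;
    }
}
```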

SR-D24880 · Issue 494175

CEP bucket size increased to 1 minute

Resolved in Pega Version 8.4

Frequent DSM Data Flow errors were observed on production nodes with very high event rates. This was traced to the default CEP bucket size of 1 second for a 31-day window, which was not sufficient for high-volume use and could create an event strategy window large enough to hit the Cassandra column size limit (a 31-day window at 1-second granularity implies roughly 2.7 million buckets, versus about 44,640 at 1-minute granularity). To resolve this, the default bucket size has been increased from 1 second to 1 minute.

SR-D26010 · Issue 500166

Modified VBD insertion logic to improve handling

Resolved in Pega Version 8.4

An issue with not being able to launch an additional VBD node was traced to two processes inserting into VBD with different field signatures, which triggered unnecessary object creation. This was amplified when adding a second node, as the objects were serialized. To resolve this, the insertion logic has been modified to avoid creating new data containers and field descriptors when a new field is included or when a measurement has a smaller data type than the previous container.
