
Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-C1507 · Issue 344564

Existing Kafka topic name used for connection

Resolved in Pega Version 7.4

When running a Kafka dataflow, Pega was using the dataset name instead of the topic name for the topic connection. This has been fixed; in addition, the first character of the dataset name is now forced to a capital letter to ensure proper matching.
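
For illustration, the sketch below uses plain Apache Kafka client code with hypothetical dataset and topic names; it is not Pega's internal implementation, but it shows why the two names must not be conflated when opening the connection:

    // Illustrative sketch only; the dataset/topic names are hypothetical.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TopicNameExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            String datasetName = "CustomerEvents";   // rule name inside Pega
            String topicName   = "customer-events";  // actual Kafka topic

            // The defect: connecting with datasetName instead of topicName.
            // The fix: always resolve the configured Kafka topic name.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>(topicName, "key-1", "payload"));
            }
        }
    }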

SR-C2352 · Issue 347253

Data join optimized for performance

Resolved in Pega Version 7.4

The data join implementation has been modified in order to improve performance for ADE applications built on Pega and based on DSM.

SR-C5585 · Issue 347734

Correct error messages shown for failed data flow records

Resolved in Pega Version 7.4

When running the data flow, failed records were returned with incorrect error messages because the original exception generated by CassandraBrowseByKeysOperations was not propagated to the caller. This has been fixed by attaching the original exception as the cause of the exception thrown to the caller.
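
The general fix pattern is the standard Java one of attaching the underlying exception as the cause of the rethrown one. A minimal sketch, with hypothetical class names:

    // Minimal sketch of the fix pattern; class names are hypothetical.
    public class BrowseByKeys {
        public Object browse(String key) {
            try {
                return queryCassandra(key);
            } catch (RuntimeException original) {
                // Before the fix: the original exception was swallowed and a
                // generic error surfaced to the caller.
                // After the fix: the cause travels with the rethrown exception.
                throw new IllegalStateException(
                    "Browse by keys failed for key: " + key, original);
            }
        }

        private Object queryCassandra(String key) {
            throw new RuntimeException("Cassandra read timeout");
        }
    }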

SR-C6486 · Issue 349824

Check added for Kafka dataflow error handling

Resolved in Pega Version 7.4

When an event dataflow fails because too many errors were detected, it can be continued. However, the "continue" button had to be used twice: the first time, the dataflow failed again and the "Input record" count increased more than expected (for instance, if two incorrect events were sent to the event flow, the count increased by 3 instead of 1); the second time, the event dataflow resumed correctly. Investigation showed that when a bad record was inserted into Kafka, a dummy error record was generated that carried no partition or position information, so the data flow could not update the partition table correctly. To correct this, the partition and position information is now set on the error record. The data flow execution has also been updated so that when onError is called, a check is performed to assess whether the error originated on the primary source and an input record is present; if so, the partition table is updated from that record.
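
A minimal sketch of the described check, using hypothetical types (these are not Pega classes):

    import java.util.HashMap;
    import java.util.Map;

    public class PartitionTracker {
        // partition id -> last committed position
        private final Map<Integer, Long> partitionTable = new HashMap<>();

        public void onError(ErrorRecord error) {
            // Only advance the partition table when the failure came from the
            // primary source and the failing input record is known, so a bad
            // Kafka record is counted exactly once.
            if (error.fromPrimarySource && error.inputRecord != null) {
                partitionTable.put(error.inputRecord.partition,
                                   error.inputRecord.position);
            }
        }

        public static class ErrorRecord {
            final boolean fromPrimarySource;
            final InputRecord inputRecord; // now carries partition/position

            ErrorRecord(boolean fromPrimarySource, InputRecord inputRecord) {
                this.fromPrimarySource = fromPrimarySource;
                this.inputRecord = inputRecord;
            }
        }

        public static class InputRecord {
            final int partition;
            final long position;

            InputRecord(int partition, long position) {
                this.partition = partition;
                this.position = position;
            }
        }
    }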

SR-C691 · Issue 349064

VBD reworked to retrieve IH DB metadata

Resolved in Pega Version 7.4

Campaign dashboard performance was slow when accessed via the Navigation bar button. This occurred when the PRPC user configured for the app server had restricted access to schema metadata, leaving VBD unable to build the SQL query used to synchronize the Actuals data set with Interaction History. As a workaround, vbd/useTableMapping = true could be set in the prconfig file or as a DSS (owning ruleset: Pega-DecisionEngine) to force VBD to load the IH database column metadata using the Pega table info mapping instead of the database connection API. The issue has now been fixed by reworking VBD to use Pega class/database mapping rules instead of JDBC DatabaseMetaData to retrieve the IH fact/dimension table columns.
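
A simplified sketch of the two lookup strategies: the first uses the standard java.sql.DatabaseMetaData API (which fails under restricted schema access), while the mapping-based lookup is a stand-in for Pega's class/database mapping rules, not the real API:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class IhColumnLookup {

        // Old approach: ask the JDBC driver, which fails when the database
        // user has no access to schema metadata.
        static List<String> viaJdbcMetadata(Connection conn, String table)
                throws SQLException {
            List<String> columns = new ArrayList<>();
            try (ResultSet rs = conn.getMetaData()
                    .getColumns(null, null, table, null)) {
                while (rs.next()) {
                    columns.add(rs.getString("COLUMN_NAME"));
                }
            }
            return columns;
        }

        // New approach (simplified): read the columns from an application-level
        // mapping that requires no extra database privileges.
        static List<String> viaClassMapping(Map<String, List<String>> mapping,
                                            String table) {
            return mapping.getOrDefault(table, List.of());
        }
    }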

SR-C8023 · Issue 350229

Updated dataflow used after pause/continue

Resolved in Pega Version 7.4

When using an event dataflow with a Kafka dataset as the source, running the flow, pausing it, and importing updated rules resulted in the resumed flow still using the old rules. This was a known limitation in data flow metrics management: when a run was resumed, the previous metrics were "merged" with the new metrics, and in cases where the structure of the data flow changed between pause and resume, the merge failed silently and the new metrics were not saved to the database. Metrics management has now been updated to merge metrics correctly: if the data flow structure changed (e.g. shapes were added or removed), stage metrics are cleared after the data flow is run, as they cannot be matched with the new structure, while all other metrics (e.g. number of processed records, throughput) are properly resumed from the "paused" position.
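
A hypothetical sketch of the resume-time merge rule described above (the types and fields are illustrative only):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class RunMetrics {
        long processedRecords;                            // run-level counter
        Map<String, Long> stageMetrics = new HashMap<>(); // per-shape counters

        // Merge metrics from the paused run into the resumed run.
        void resumeFrom(RunMetrics paused, Set<String> currentShapeIds) {
            // Run-level counters always carry over from the paused position.
            this.processedRecords = paused.processedRecords;

            if (currentShapeIds.equals(paused.stageMetrics.keySet())) {
                // Structure unchanged: stage metrics can be matched and resumed.
                this.stageMetrics.putAll(paused.stageMetrics);
            }
            // Structure changed (shapes added/removed): stage metrics stay
            // cleared because they cannot be matched to the new shapes.
        }
    }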

SR-D16934 · Issue 493705

External Cassandra nodes listed in DDS cluster

Resolved in Pega Version 8.4

Nodes of an external Cassandra cluster were not listed in the DDS cluster except for the first one in the host list, and when the "only" listed Cassandra node was restarted, the status on the DDS cluster landing page did not return to "NORMAL" afterwards. In addition, even though the other Cassandra nodes were up and running, the external Cassandra cluster was reported as unreachable. This was an unintended side effect of work done on the landing page to reflect the real state of the nodes after some were killed and restarted, and has been corrected by refining the equals() and hashCode() methods of the DDS member info in order to better differentiate the external Cassandra nodes.
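
As an illustration of the kind of refinement described, a value type whose equals()/hashCode() include the node address keeps distinct external nodes from collapsing into a single entry. The field names here are assumptions, not Pega's actual DDS member info class:

    import java.util.Objects;

    public final class DdsMemberInfo {
        private final String hostAddress;
        private final int port;
        private final String status;

        public DdsMemberInfo(String hostAddress, int port, String status) {
            this.hostAddress = hostAddress;
            this.port = port;
            this.status = status;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof DdsMemberInfo)) return false;
            DdsMemberInfo other = (DdsMemberInfo) o;
            // Including hostAddress and port differentiates the nodes; status
            // is deliberately excluded so a restarted node maps back to the
            // same entry.
            return port == other.port && hostAddress.equals(other.hostAddress);
        }

        @Override
        public int hashCode() {
            return Objects.hash(hostAddress, port);
        }
    }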

SR-D22686 · Issue 493519

IH summaries working with external Cassandra

Resolved in Pega Version 8.4

Summaries were not working for external Cassandra. This was an issue with the IH summary component using the aggregated dataset as a reference, and this fix contains several components to improve this function. An issue where the IH aggregates dataset did not materialize when DDS was external Cassandra has been resolved by modifying the code that checks DDS availability. A filter that was used to match "" did not work when pre-aggregation was off; this was due to the IH Browse operation being done outside of a DF context, and has been fixed. The IH Summary shape not working properly in a strategy when it referenced an aggregate dataset with pxInteractionID as part of the group-by properties was traced to the shape generating a pxInteractionID value when executing the strategy, and has been resolved by excluding pxInteractionID from the group keys in the IH Summary shape.
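
A minimal sketch of the last fix, assuming a simple list of configured group-by keys (not the actual IH Summary implementation):

    import java.util.ArrayList;
    import java.util.List;

    public class GroupKeyFilter {
        // pxInteractionID is dropped from the group-by keys before
        // aggregation, since its generated value would break the grouping.
        static List<String> effectiveGroupKeys(List<String> configuredKeys) {
            List<String> keys = new ArrayList<>(configuredKeys);
            keys.removeIf("pxInteractionID"::equals);
            return keys;
        }
    }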

SR-D24880 · Issue 494175

CEP bucket size increased to 1 minute

Resolved in Pega Version 8.4

Frequent DSM Data Flow errors were observed on production nodes with very high event rates. This was traced to the default bucket size for CEP being 1 second over a 31-day window; under high-volume use, this could create an event strategy window large enough to hit the Cassandra column size limit. To resolve this, the default bucket size has been increased from 1 second to 1 minute.
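
The arithmetic behind the change: a 31-day window at 1-second buckets means 31 × 24 × 3600 = 2,678,400 buckets, versus 44,640 at 1-minute buckets, a 60-fold reduction in entries per window. A quick check:

    // Back-of-the-envelope check of the bucket counts implied above.
    public class BucketMath {
        public static void main(String[] args) {
            long windowSeconds = 31L * 24 * 3600;       // 31-day window
            long oneSecondBuckets = windowSeconds;      // 2,678,400 buckets
            long oneMinuteBuckets = windowSeconds / 60; // 44,640 buckets
            System.out.println("1s buckets: " + oneSecondBuckets);
            System.out.println("1m buckets: " + oneMinuteBuckets);
        }
    }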

SR-D26010 · Issue 500166

Modified VBD insertion logic to improve handling

Resolved in Pega Version 8.4

An issue with not being able to launch an additional VBD node was traced to two processes inserting into VBD with different field signatures, triggering unnecessary object creation; this was amplified when a second node was added because the objects were serialized. To resolve this, the insertion logic has been modified to avoid creating new data container/field descriptors when a new field is included or when a measurement arrives with a smaller data type than the previous container.
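
A hypothetical sketch of the reuse check described above (the cache and field names are illustrative, not the actual VBD internals):

    import java.util.LinkedHashSet;
    import java.util.Set;

    public class FieldDescriptorCache {
        private final Set<String> knownFields = new LinkedHashSet<>();

        // Returns true only when new descriptors had to be created.
        boolean register(Set<String> incomingFields) {
            if (knownFields.containsAll(incomingFields)) {
                // Existing descriptors already cover this record (including a
                // narrower measurement type); reuse them instead of creating
                // and later serializing new container objects.
                return false;
            }
            knownFields.addAll(incomingFields);
            return true;
        }
    }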
