Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D54430 · Issue 518291

Updated VBD Statistics rendering to compensate for Google Chrome changes

Resolved in Pega Version 8.4

The Statistics overlay was not rendering for the VBD planner in Google Chrome. Investigation showed that Google Chrome (v77) was misfiring one of the mouse events due to changes in the browser, and the event handling has been updated to resolve this issue.

SR-D54602 · Issue 517309

Prconfigs added to support Cassandra Speculative Based Execution

Resolved in Pega Version 8.4

In order to achieve high availability for Cassandra, prconfigs have been added to support speculative execution. The available prconfigs enable the policy, set the maximum number of executions, and set the delay before the next execution is launched. The prconfigs are:
dnode/cassandra_speculative_execution_policy
dnode/cassandra_speculative_execution_policy/max_executions
dnode/cassandra_speculative_execution_policy/delay
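As an illustration, these could be set in prconfig.xml using the standard env-entry format; the values shown (enabling the policy, three executions, a 100 ms delay) are illustrative assumptions, not recommended defaults:

```xml
<!-- Illustrative values only: enable Cassandra speculative execution -->
<env name="dnode/cassandra_speculative_execution_policy" value="true" />
<!-- Assumed maximum number of speculative executions per query -->
<env name="dnode/cassandra_speculative_execution_policy/max_executions" value="3" />
<!-- Assumed delay (ms) before the next speculative execution is launched -->
<env name="dnode/cassandra_speculative_execution_policy/delay" value="100" />
```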

SR-D60121 · Issue 525492

All interactions visible in "Latest Responses" for ADM

Resolved in Pega Version 8.4

Interactions were not visible in the "Latest Responses" section of the Model Management landing page for Adaptive models if the requests were stored on multi-node systems. This was traced to the system fetching the latest responses using a list of nodes built with a version of deployment.getClusterState(tools) that returned only the ADM server nodes rather than all ADM nodes, both client and server. To resolve this, the system has been updated to use ServiceRegistry to obtain all of the data flow nodes and fetch the latest responses from each of them.
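As a conceptual sketch only, the change amounts to asking a service registry for every data flow node and aggregating the latest responses from each, rather than relying on a node list that covers only the server nodes. The ServiceRegistryClient and NodeClient types below are hypothetical stand-ins, not the actual Pega internal API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only; these interfaces are illustrative stand-ins, not Pega APIs.
interface ServiceRegistryClient {
    List<String> getNodesForService(String serviceName);
}

interface NodeClient {
    List<String> fetchLatestResponses(String nodeId);
}

class LatestResponsesAggregator {
    private final ServiceRegistryClient registry;
    private final NodeClient client;

    LatestResponsesAggregator(ServiceRegistryClient registry, NodeClient client) {
        this.registry = registry;
        this.client = client;
    }

    // Collect latest responses from every data flow node (client and server),
    // instead of a cached list that only contained server nodes.
    List<String> collect() {
        List<String> all = new ArrayList<>();
        for (String nodeId : registry.getNodesForService("DataFlow")) {
            all.addAll(client.fetchLatestResponses(nodeId));
        }
        return all;
    }

    public static void main(String[] args) {
        ServiceRegistryClient registry = service -> List.of("node-1", "node-2");
        NodeClient client = nodeId -> List.of(nodeId + ": response");
        System.out.println(new LatestResponsesAggregator(registry, client).collect());
    }
}
```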

SR-D60268 · Issue 521467

Performance and thread-handling improvements for SSA

Resolved in Pega Version 8.4

The SecureRandom class was used internally by SSAExecutionContext, indirectly via UUID generation. Because this exhibited performance issues on some Linux environments, UUID has been replaced with a static AtomicLong. In addition, a memory leak was observed when the strategy (SSA) execution resulted in an exception, and the strategy template has been modified to gracefully shut down the VM under all circumstances. Thread-safety measures have also been made more fine-grained to reduce the thread contention that was seen while borrowing the SSAInterpreter object from the SSAInterpreterPool.
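A minimal sketch of the kind of substitution described, assuming the identifiers only need to be unique within a single JVM (the class and method names are illustrative):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: replacing UUID-based IDs (which rely on SecureRandom and can be
// slow on entropy-starved Linux systems) with a static AtomicLong counter.
final class ExecutionContextIds {

    // UUID.randomUUID() uses SecureRandom under the hood.
    static String uuidBasedId() {
        return UUID.randomUUID().toString();
    }

    // A static AtomicLong is cheap and thread-safe, but only unique per JVM.
    private static final AtomicLong COUNTER = new AtomicLong();

    static long counterBasedId() {
        return COUNTER.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(uuidBasedId());
        System.out.println(counterBasedId());
    }
}
```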

SR-D69028 · Issue 528974

Deadlock in static initialization of IntList resolved

Resolved in Pega Version 8.4

A JVM deadlock was seen related to the static initialization of a subclass field in the class com.pega.decision.strategy.ssa.runtime.collections.api.IntList. Thread dumps showed threads in RUNNABLE state that were parked waiting for class initialization, and this was traced to a missed Sonar alert about code that was not safe under multi-threading. To resolve this, the initialization handling has been updated to prevent the potential deadlock.
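The failure mode is the classic static-initialization deadlock in Java; the sketch below illustrates the pattern and is not the actual IntList code. If two threads trigger initialization of the two classes at the same time, each holds one class's initialization lock while waiting for the other, and thread dumps show the threads as RUNNABLE even though they are parked:

```java
// Illustrative only: circular static initialization that can deadlock.
class A {
    static int value;
    static { value = B.value + 1; } // A's static initializer touches B
}

class B {
    static int value;
    static { value = A.value + 1; } // B's static initializer touches A
}

public class StaticInitDeadlockDemo {
    public static void main(String[] args) {
        // Started together, these threads can deadlock inside class initialization.
        new Thread(() -> System.out.println("A.value = " + A.value)).start();
        new Thread(() -> System.out.println("B.value = " + B.value)).start();
    }
}
```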

SR-D41730 · Issue 508144

TTL value correctly passed for Adaptive Event store

Resolved in Pega Version 8.4

The ADM table was growing because the Time to Live (TTL) for entries in the Adaptive Event Store was not being propagated, so the entries were never cleaned out. This was traced to the TTL field on the data flow not being checked, which caused the TTL value to be supplied as zero and resulted in no expiration. This has been corrected.
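In Cassandra, a TTL of zero means the record never expires, which is why the unchecked field led to unbounded growth. A minimal sketch of the kind of guard involved, with illustrative table and method names rather than the actual data flow code:

```java
// Illustrative only: a TTL of 0 in Cassandra means "never expire",
// so the TTL clause should only be applied when a positive value is configured.
final class EventStoreWrite {

    static String buildInsert(String table, int ttlSeconds) {
        String cql = "INSERT INTO " + table + " (id, payload) VALUES (?, ?)";
        if (ttlSeconds > 0) {
            // Only append USING TTL when a real expiration was supplied.
            cql += " USING TTL " + ttlSeconds;
        }
        return cql;
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("adm_events", 0));      // no expiration
        System.out.println(buildInsert("adm_events", 86400));  // expires after one day
    }
}
```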

SR-D90367 · Issue 556687

Cleanup enhanced for long pyEditElement names

Resolved in Pega Version 8.5

A pyEditElement error relating to decision data was seen multiple times in a stack trace. Research showed that while the utility worked as expected for decision data rules with names of fewer than 30 characters, the pyEditElement section truncated the name of the decision data for longer names. This meant that decision data with the name SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv would be saved as SampleIssueandSampleGroupT for the pyEditElement section. Because of this, the utility failed the match and did not clean up the pyEditElement section. To resolve this, the cleanup utility has been updated to handle pyEditElement sections of decision data with longer names. Additional logging has also been added to improve debugging.
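A simplified sketch of the matching problem; the 30-character limit and the helper names are assumptions for illustration:

```java
// Illustrative only: an exact-name match fails when the stored pyEditElement
// section name was truncated; matching on the truncated form recovers it.
final class SectionNameMatch {

    private static final int LIMIT = 30; // assumed truncation length

    static String truncate(String name) {
        return name.length() <= LIMIT ? name : name.substring(0, LIMIT);
    }

    static boolean matches(String ruleName, String storedSectionName) {
        // Compare truncated forms instead of the full rule name.
        return truncate(ruleName).equals(storedSectionName);
    }

    public static void main(String[] args) {
        String rule = "SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv";
        String stored = truncate(rule);
        System.out.println(rule.equals(stored));   // false: exact match fails
        System.out.println(matches(rule, stored)); // true: truncated match succeeds
    }
}
```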

SR-D71621 · Issue 533296

Real time processing picks up correct datetime for Capture Response records

Resolved in Pega Version 8.5

A Realtime Data Flow for the Capture Response flow was configured with a strategy shape set to load previous decisions made within the past 7 days. Once this Realtime Data Flow was started, attempting to Capture Response for decisions made after that startup timepoint did not work. This was traced to the InteractionID being written with global properties for the datetimes, and has been resolved by making those datetime properties local so that the start and end times are not cached and the time range is calculated relative to "now".
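A minimal sketch of the underlying pitfall, with illustrative names rather than the actual strategy code: a time window captured once at startup goes stale, while computing it per evaluation keeps the range anchored to the current time.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative only: caching a "last 7 days" window at startup versus
// computing it each time it is needed.
final class DecisionTimeWindow {

    private static final Duration LOOKBACK = Duration.ofDays(7);

    // Stale approach: the window is fixed at class-initialization time,
    // so decisions made after startup fall outside it.
    static final Instant CACHED_END = Instant.now();
    static final Instant CACHED_START = CACHED_END.minus(LOOKBACK);

    // Correct approach: compute the window relative to "now" on every call.
    static boolean inRecentWindow(Instant decisionTime) {
        Instant end = Instant.now();
        Instant start = end.minus(LOOKBACK);
        return !decisionTime.isBefore(start) && !decisionTime.isAfter(end);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(10); // simulate a decision made after startup
        Instant decisionTime = Instant.now();
        boolean cached = !decisionTime.isBefore(CACHED_START) && !decisionTime.isAfter(CACHED_END);
        System.out.println("cached window contains new decision: " + cached);
        System.out.println("fresh window contains new decision: " + inRecentWindow(decisionTime));
    }
}
```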

SR-D85558 · Issue 548286

Handling added for prolonged Heartbeat Update Queries

Resolved in Pega Version 8.5

After restart, the pyFTSIncrementalIndexer queue size had hundreds of thousands of entries even though it was empty prior to the restart. Investigation traced this to a job scheduler that checked all of the database connections every day at 1 EST using a list that contained some connections which did not exist. Checking those invalid connections caused other update queries to queue and wait, resulting in the heartbeat update query taking longer than its default beat. This caused a split-brain issue wherein other nodes considered the long-executing node to be dead and triggered a rebalance, while the node itself continued to execute partitions in the belief that it was healthy, resulting in duplicate processing of records. To resolve this, a failsafe has been added: while updating the heartbeat in the Service Registry, a node will enter safe mode when the update query takes longer than the default beat.
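A simplified sketch of the failsafe described; the class name and the 30-second beat interval are illustrative assumptions:

```java
import java.time.Duration;

// Illustrative only: if writing the heartbeat takes longer than the beat
// interval, other nodes may already consider this node dead, so the node
// enters a safe mode instead of continuing to process partitions normally.
final class HeartbeatFailsafe {

    private static final Duration BEAT_INTERVAL = Duration.ofSeconds(30); // assumed

    private volatile boolean safeMode = false;

    void beat(Runnable heartbeatUpdateQuery) {
        long start = System.nanoTime();
        heartbeatUpdateQuery.run(); // e.g. the update against the service registry table
        Duration elapsed = Duration.ofNanos(System.nanoTime() - start);

        if (elapsed.compareTo(BEAT_INTERVAL) > 0) {
            // The update outlived the beat; assume the cluster may have rebalanced.
            safeMode = true;
        }
    }

    boolean isSafeMode() {
        return safeMode;
    }
}
```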

SR-D66397 · Issue 530333

ADM out-of-sync corrected for multi-datacenter Cassandra cluster

Resolved in Pega Version 8.5

After setting up a multi-datacenter configuration for a Cassandra cluster that consisted of six nodes in datacenter 1 and three nodes in datacenter 2, failover testing revealed a mismatch in the number of ADM models stored in each datacenter. The mismatch was observed mostly in the number of records present in the "adm_scoringmodel" and "adm_response_commit_log_date_tiered" tables.

When Cassandra nodes are down, the other nodes in the cluster store hints (records to be written) for the down nodes. When those nodes come back online, the hints are replayed to them and the data is written. Hints are kept for 3 hours, so if a node comes back up within 3 hours the data is recovered and repairs are not required. However, the gc_grace_seconds value for the tables that were getting out of sync across the two datacenters was set to zero. The "gc_grace_seconds" attribute is not used only as the time before tombstones are removed; it also sets the TTL for records written to the system.hints table. That meant that when hints were written for the ADM tables for the nodes that were down, they expired immediately and were never played back when the terminated nodes restarted and rejoined the cluster.

This has been resolved with this fix for all customers new to this release. Existing customers already on v7.3 or higher will need to complete the following local change: connect to the Cassandra cluster using cqlsh in the Pega Cassandra distribution, then run
ALTER TABLE adm_commitlog.adm_response_commit_log_date_tiered WITH gc_grace_seconds = 86400;
to change the setting from zero to the equivalent of one day - the same length of time that the data in the table lives for. This means that any hints written can still be used to replay data to another node while the data itself is alive. It also means, however, that given a constant load, a day's worth of expired ADM event data will always be present on disk, because the tombstones can now not be cleaned up for a day.
