Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-143121 · Issue 610732

Timeout for loading predictors made configurable

Resolved in Pega Version 8.3.6

When an extremely large number of predictors was used, the report definition pzADMPredictorsFilter timed out because loading the predictors from the database exceeded the allowed time threshold. This has been resolved by marking the rule as editable so that the threshold can be customized as needed.

INC-148899 · Issue 615702

Adaptive models update correctly

Resolved in Pega Version 8.3.6

Some models had the recorded responses column updated, but the models themselves (the number of Positive, Negative, and Processed Responses) were not updated. Investigation showed that deleting the modelRuleConfiguration through the stateManager/client did not delete the modelFactories related to that configuration, so the update issue occurred if a new configuration came in with a different algorithm. This has been resolved by resetting the configuration according to its factory in that specific case.

INC-155822 · Issue 618267

Locking added to avoid null pointer error for auto-populate property

Resolved in Pega Version 8.3.6

After the auto-populate property "OrgProduct", which referred to a data page, was configured, heavy system load led to the property not being properly initialized, resulting in a WrongModeException and a NullPointerException. To resolve this, the system has been updated to lock the requestor while Queue Processors execute their activity, which prevents race conditions and concurrent modifications when other threads access the same requestor.
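
As an illustration of the locking pattern described here, a minimal Java sketch; the class and method names are hypothetical, not Pega internals:

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the locking pattern described above; Requestor-level
// locking is illustrated with a plain ReentrantLock.
public class RequestorLockExample {
    private final ReentrantLock requestorLock = new ReentrantLock();

    // Runs a queue-processor activity while holding the requestor lock, so
    // concurrent threads cannot modify the same requestor and leave an
    // auto-populate property partially initialized.
    public void runActivity(Runnable queueProcessorActivity) {
        requestorLock.lock();
        try {
            queueProcessorActivity.run();
        } finally {
            requestorLock.unlock();
        }
    }
}
```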

INC-157629 · Issue 626633

Duplicate key exception resolved for adaptive model

Resolved in Pega Version 8.3.6

During the model snapshot update, a DuplicateKeyException was generated while trying to insert a record into the predictor table. This did not affect the model's learning, but it did appear in the model monitoring report. The issue was traced to a local scenario of having the same outcome values defined on the model with different cases (Accept and accept). All predictors used in an Adaptive model are inserted into the model monitoring tables as part of the monitoring job: because the monitoring tables are not case sensitive, this led to a unique constraint exception when there were multiple IH predictors with the same name. To resolve this, validation has been added that skips adding duplicates from new responses.
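
A minimal Java sketch of the kind of case-insensitive duplicate check described above (illustrative names only, not the actual monitoring-job code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Illustrative only: filters out predictor names that differ only by case
// before they are inserted into a case-insensitive monitoring table.
public class PredictorDeduplicator {
    public static List<String> skipCaseInsensitiveDuplicates(List<String> predictorNames) {
        Set<String> seen = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        List<String> unique = new ArrayList<>();
        for (String name : predictorNames) {
            if (seen.add(name)) {        // add() returns false for "accept" after "Accept"
                unique.add(name);
            }
        }
        return unique;
    }

    public static void main(String[] args) {
        // "Accept" and "accept" collide in a case-insensitive table, so only one is kept.
        System.out.println(skipCaseInsensitiveDuplicates(List.of("Accept", "accept", "Churn")));
    }
}
```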

INC-160331 · Issue 628710

ML model continue tag error changed to debug logging

Resolved in Pega Version 8.3.6

Log entries were seen indicating "The continue tag is found without a start tag for window Line1 1234567890 in text". This appeared to be coming from the MLCommand Java class. Investigation showed that this occurred when the ML model considered a portion of the text (a token) to be a continuation of an entity because it had not found the starting part of the entity. Because this situation is not currently treated as a valid output, the logging level for this case has been changed from error to debug.
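
A minimal sketch of the logging change, assuming an SLF4J-style logger (illustrative class and call site, not the actual MLCommand source):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: the continuation-without-start case is expected model
// output, so it is now reported at DEBUG rather than ERROR.
public class ContinueTagLogging {
    private static final Logger LOG = LoggerFactory.getLogger(ContinueTagLogging.class);

    void onContinueTagWithoutStart(String window, String text) {
        // Previously logged with LOG.error(...), flooding the logs.
        LOG.debug("The continue tag is found without a start tag for window {} in text {}", window, text);
    }
}
```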

SR-69015 · Issue 619995

Unescaping characters implemented for expressions

Resolved in Pega Version 8.3.6, Resolved in Pega Version 8.4.4, Resolved in Pega Version 8.5.3, Resolved in Pega Version 8.6

An issue where expression builder statements were evaluated differently at runtime than at testing has been resolved. Pega Platform expressions with String literals (that is, sequences of characters enclosed in quotation marks) now unescape characters in strategy shapes such as Set Property or Filter.
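
A minimal Java sketch of what unescaping a string literal means in practice, using Apache Commons Text for illustration; this is not the expression engine's actual implementation:

```java
import org.apache.commons.text.StringEscapeUtils;

// Illustrative only: an escaped literal such as "line1\nline2" should evaluate
// to the same two-line string at runtime as it does when tested in the
// expression builder.
public class UnescapeExample {
    public static void main(String[] args) {
        String literalAsAuthored = "line1\\nline2";            // the escaped form stored in the rule
        String evaluated = StringEscapeUtils.unescapeJava(literalAsAuthored);
        System.out.println(evaluated);                         // prints two lines
    }
}
```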

INC-126796 · Issue 561533

Modifications to getFunctionalServiceNodes process

Resolved in Pega Version 8.2.7

The count of Interaction History write-related threads was increasing rapidly, and a stack trace indicated "waiting on condition" and "java.lang.Thread.State: WAITING (parking)" errors. Investigation showed that this was due to getFunctionalServiceNodes using Hazelcast to determine node status by making a service request on an installation with a very large number of nodes, causing thread locking. To resolve this, the implementation has been updated to avoid calling getFunctionalServiceNodes on save of Interaction History, using Cassandra instead and calling getFunctionalServiceNodes only on the master node rather than on all nodes.

INC-128385 · Issue 564519

Behavior made consistent between SSA and legacy engines

Resolved in Pega Version 8.2.7

There was a behavioral disparity between the legacy execution engine and the SSA engine: the latter was not creating a new page when the index was one above the size of the page list. This has been corrected to make the SSA behavior fully backward compatible with the legacy engine, that is, a new blank page is added to the list if the index is one above the size of the list.
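
A minimal Java sketch of the backward-compatible behavior, using plain maps as stand-in pages rather than the engine's actual page classes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: when a page list of size N is referenced at index N + 1,
// a new blank page is appended instead of failing, matching the legacy engine.
public class PageListExample {
    public static Map<String, Object> getOrAppendPage(List<Map<String, Object>> pageList, int index) {
        if (index == pageList.size() + 1) {
            pageList.add(new HashMap<>());   // append a new blank page
        }
        return pageList.get(index - 1);      // page list indexes are 1-based
    }

    public static void main(String[] args) {
        List<Map<String, Object>> customers = new ArrayList<>();
        getOrAppendPage(customers, 1).put("Name", "First");   // list grows from 0 to 1
        getOrAppendPage(customers, 2).put("Name", "Second");  // index one above size: blank page added
        System.out.println(customers);
    }
}
```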

INC-128898 · Issue 564690

Updated precondition checks for Tumbling Time data in event strategies

Resolved in Pega Version 8.2.7

Tumbling Time data keys in event strategies were not being properly handled when certain window configurations were used. This has been resolved by turning off the optimization of Cassandra reads for small windows when the window size is not known upfront and is set dynamically (size set by property). In addition, an issue with Cassandra timeouts after a dataflow run had been running for several months has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
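
A conceptual Java sketch of applying a time to live to tumbling-window state (illustrative only; the actual window state lives in Cassandra, not an in-memory list):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: records in a tumbling time window carry a time to live
// so that stale state does not accumulate over months of continuous
// dataflow processing. Class and field names are illustrative.
public class TumblingWindowWithTtl {
    static class Event {
        final Instant arrivedAt = Instant.now();
        final String payload;
        Event(String payload) { this.payload = payload; }
    }

    private final Duration timeToLive;
    private final List<Event> window = new ArrayList<>();

    TumblingWindowWithTtl(Duration timeToLive) { this.timeToLive = timeToLive; }

    void add(String payload) {
        evictExpired();
        window.add(new Event(payload));
    }

    // Drops events older than the configured time to live.
    private void evictExpired() {
        Instant cutoff = Instant.now().minus(timeToLive);
        window.removeIf(e -> e.arrivedAt.isBefore(cutoff));
    }

    int size() {
        evictExpired();
        return window.size();
    }
}
```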

SR-D82727 · Issue 547723

Improved management for table pr_log_dataflow_events

Resolved in Pega Version 8.2.7

The Lifecycle event table was sometimes growing too large. The additional strain of database transaction volume caused poor performance on the Dataflow tier and led to cluster instability and time-consuming cluster restarts. Due to problems in one of the Pulse tasks, the Pulse thread was not processing single-case metrics properly, causing the unbounded single-case queue to grow. This has been addressed by switching to a fixed queue size, configurable with the DSS dnode/single_case_queue_size. The default value is 4000, and changing it requires a system restart. An error is logged every 1000 queue misses, and metrics are dropped if the queue is full. In addition, the Pulse task frequency has been adjusted to prevent interference with other Pulse tasks, and the task is triggered only if a run is system-paused for a long interval. Rebalances now have a failsafe if something unexpected happens while pausing the run, and if the cluster becomes unstable, the lifecycle event logs may be disabled with dataflow/run/events/persist.
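
A minimal Java sketch of the fixed-size queue behavior described above (illustrative only; in the product the capacity comes from the dnode/single_case_queue_size DSS, default 4000):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: a bounded queue for single-case metrics that drops
// entries when full and logs an error every 1000 misses, as described above.
public class SingleCaseMetricsQueue {
    private static final Logger LOG = LoggerFactory.getLogger(SingleCaseMetricsQueue.class);

    private final ArrayBlockingQueue<Object> queue;
    private final AtomicLong misses = new AtomicLong();

    // Capacity would be read from the dnode/single_case_queue_size DSS (default 4000).
    public SingleCaseMetricsQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    public void offerMetric(Object metric) {
        if (!queue.offer(metric)) {                 // queue full: the metric is dropped
            long missed = misses.incrementAndGet();
            if (missed % 1000 == 0) {               // log once per 1000 misses
                LOG.error("Single-case metrics queue full; {} metrics dropped so far", missed);
            }
        }
    }
}
```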
