Out Of Memory in Production
Users are intermittently getting Out-Of-Memory ("OOM") errors on our system:
2015-10-13 07:16:07,969 [_1.0_user,maxpri=10]] [ STANDARD] [ ] (ternal.async.PassivationDaemon) ERROR - Failed to process passivation queue:
Java heap space
at com.pega.pegarules.priv.factory.ByteArrayFactory.newProduct (ByteArrayFactory.java:79)
Steps to Reproduce
Normal Production Use of System.
Root Cause
A defect in Pegasystems' code or rules.
The Smart Investigate application is creating invalid History rows.
For instance, a typical work item key would be "CUSTOMCLASS-WORK M-1", but under certain circumstances Smart Investigate truncates the key to "CUSTOMCLASS-WORK".
The OOM error occurs when a large number of these rows accumulate in the database.
Run the following SQL to identify the number of invalid rows in your Database:
SELECT * FROM
(SELECT pxhistoryforreference, COUNT(*) AS testcount
FROM pc_history_CUSTOMCLASS
GROUP BY pxhistoryforreference
ORDER BY testcount DESC)
WHERE rownum <= 5;
(Replace 'CUSTOMCLASS' with the specific Class you are interested in.)
The query shows the top five work items with the most associated history rows.
In one affected environment, more than 2 million history rows were associated with a single invalid work item.
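The diagnostic query above can be sketched end to end in Python, using SQLite as a stand-in for the production database. The table name (pc_history_customclass) and the key-format check are assumptions for illustration; SQLite also uses LIMIT in place of Oracle's rownum.

```python
import re
import sqlite3

# SQLite stand-in for the production database; table/column names are
# assumptions modelled on the article's query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pc_history_customclass (pxhistoryforreference TEXT)")

# Valid keys look like "CUSTOMCLASS-WORK M-1"; the defect truncates the
# key to "CUSTOMCLASS-WORK", so many rows pile up under one invalid key.
rows = (["CUSTOMCLASS-WORK"] * 100
        + ["CUSTOMCLASS-WORK M-1"] * 3
        + ["CUSTOMCLASS-WORK M-2"] * 5)
conn.executemany("INSERT INTO pc_history_customclass VALUES (?)",
                 [(r,) for r in rows])

# Equivalent of the article's query: top 5 keys by history-row count
# (LIMIT replaces Oracle's "WHERE rownum <= 5").
top5 = conn.execute(
    "SELECT pxhistoryforreference, COUNT(*) AS testcount "
    "FROM pc_history_customclass "
    "GROUP BY pxhistoryforreference "
    "ORDER BY testcount DESC "
    "LIMIT 5"
).fetchall()
print(top5)

# Hypothetical sanity check: a well-formed key ends in a work-item id
# ("<letters>-<digits>" after a space); truncated keys fail this pattern.
valid = re.compile(r".+ [A-Z]+-\d+$")
invalid = [(key, n) for key, n in top5 if not valid.match(key)]
print(invalid)  # the truncated key surfaces here with its row count
```

In this sketch the truncated key dominates the counts, which is exactly the signature to look for in the real query's output.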
Resolution
Hfix-24662 prevents the invalid rows from being created.
Users must also check whether the following rules have been customized:
If they have been customized, the "pyIDNotBlank" precondition must be added to each "History-Add" method in each of them.
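The intent of that precondition can be sketched as follows. This is not Pega rule syntax; it is a hypothetical Python model of what "pyIDNotBlank" guards against: History-Add must be skipped when the work item's pyID is blank, so no history row is ever written under a truncated key. The function and field names mirror the Pega ones but are illustrative.

```python
def py_id_not_blank(work_item: dict) -> bool:
    # Hypothetical equivalent of the pyIDNotBlank precondition:
    # allow History-Add only when pyID is present and non-empty.
    return bool(work_item.get("pyID", "").strip())

def history_add(work_item: dict, history_rows: list) -> None:
    # Sketch of a History-Add step guarded by the precondition.
    if not py_id_not_blank(work_item):
        return  # skip: a blank pyID would create an invalid history row
    history_rows.append({"pxHistoryForReference": work_item["pzInsKey"]})
```

With the guard in place, an item whose pyID was lost to the truncation defect produces no history row, while a well-formed item is recorded normally.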