
This content has been archived and is no longer being updated. Links may not function; however, this content may be relevant to outdated versions of the product.

Support Article

One node of a cluster does not work



In a multi-node system running on separate servers, one node appears hung, degrading performance across the whole cluster.
No errors or exceptions are present that would indicate a hang. However, the ALERT log shows multiple PEGA0026 alerts raised while obtaining connections from the JDBC connection pool, several with high KPI values.

Error Messages

2015-04-13 06:02:02,617 GMT*7*PEGA0026*4053*100*
2015-04-13 06:02:02,618 GMT*7*PEGA0026*4045*100*
2015-04-13 06:02:02,618 GMT*7*PEGA0026*4045*100*
2015-04-13 06:02:02,619 GMT*7*PEGA0026*4045*100*
2015-04-13 06:02:02,621 GMT*7*PEGA0026*4045*100*
2015-04-13 06:02:02,621 GMT*7*PEGA0026*3492*100*
2015-04-13 06:02:14,865 GMT*7*PEGA0026*148*100*
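PEGA0026 is the alert raised when acquiring a database connection takes longer than the configured threshold. Assuming the star-delimited layout in the lines above is timestamp, severity, message ID, observed time in milliseconds, and threshold in milliseconds (an inference from the sample, not official documentation), the entries can be parsed and triaged with a short script:

```python
# Parse star-delimited ALERT-log entries like the PEGA0026 lines above.
# Assumed field layout (inferred from the sample, not from documentation):
# timestamp*severity*messageID*observed_ms*threshold_ms*

def parse_alert(line):
    timestamp, severity, msg_id, observed, threshold = line.rstrip("*").split("*")
    return {
        "timestamp": timestamp,
        "severity": int(severity),
        "id": msg_id,
        "observed_ms": int(observed),
        "threshold_ms": int(threshold),
    }

sample = [
    "2015-04-13 06:02:02,617 GMT*7*PEGA0026*4053*100*",
    "2015-04-13 06:02:02,621 GMT*7*PEGA0026*3492*100*",
    "2015-04-13 06:02:14,865 GMT*7*PEGA0026*148*100*",
]

alerts = [parse_alert(line) for line in sample]

# Flag entries far above threshold (e.g. more than 10x) as likely signs
# of connection-pool starvation rather than ordinary slowness.
severe = [a for a in alerts if a["observed_ms"] > 10 * a["threshold_ms"]]
```

Observed times of 3-4 seconds against a 100 ms threshold, as in the log above, point at contention for pooled connections rather than slow individual queries.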

Steps to Reproduce

There is no specific use case to reproduce this issue.

Root Cause

The root cause of this problem is in a third-party product. A review of the connection pool settings (defined at the Cluster scope, meaning that all nodes in the cluster share a single pool of database connections) found that Max Connections needed to be increased and the Purge policy needed to be changed to limit the impact of a single failed connection.


This issue is resolved by making the following change to the operating environment:

Tune the connection pool to increase Max Connections, and change the Purge policy so that a single failed connection does not invalidate the rest of the pool.
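The effect of the Purge policy can be illustrated with a toy model (a conceptual sketch only, not Pega or application-server code; the policy names `EntirePool` and `FailingConnectionOnly` follow common application-server terminology such as WebSphere's):

```python
# Conceptual illustration: why the purge policy matters when one pooled
# connection fails. This is a toy model, not Pega or app-server code.

class ToyPool:
    def __init__(self, size, purge_policy="EntirePool"):
        # Represent connections as simple IDs; a real pool holds JDBC objects.
        self.connections = set(range(size))
        self.purge_policy = purge_policy

    def on_connection_failure(self, conn_id):
        if self.purge_policy == "EntirePool":
            # Every pooled connection is discarded and must be re-established,
            # stalling all requestors on the pool at once.
            self.connections.clear()
        else:  # "FailingConnectionOnly"
            # Only the bad connection is discarded; the rest keep serving.
            self.connections.discard(conn_id)

# With a cluster-scoped pool of 10 connections, compare the two policies
# after a single connection (ID 3) fails:
aggressive = ToyPool(10, "EntirePool")
aggressive.on_connection_failure(3)

targeted = ToyPool(10, "FailingConnectionOnly")
targeted.on_connection_failure(3)
```

Under `EntirePool`, one bad connection empties the whole pool, which is consistent with an entire node stalling while every requestor waits to re-establish a connection; `FailingConnectionOnly` leaves the remaining connections in service.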

Published June 12, 2015 - Updated October 8, 2020
