
This content has been archived and is no longer being updated. Links may not function; however, this content may be relevant to outdated versions of the product.

Support Article

Migrate Rules Schema fails with Buffer underflow error

SA-9936

Summary



A Buffer underflow error occurs when migrating an already upgraded Rules Schema from one database instance to another using the migrate.sh utility with the settings configured in migrateSystem.properties.

Deployment scenario
  • Oracle 11gR2
  • WebLogic 11
  • Java SE 7

Error Messages



[java] Thu Apr 30 20:16:36 CEST 2015 (INFO): PegaBulkMover: Loaded 403000 out of 403843 Rows
[java] Thu Apr 30 20:16:39 CEST 2015 (INFO): Processed 25 of 51 table(s)
[java] Thu Apr 30 20:16:39 CEST 2015 (INFO): Encountered exception during processing!
[java] Buffer underflow.
[java] at com.esotericsoftware.kryo.io.Input.require(Input.java:156)
[java] at com.esotericsoftware.kryo.io.Input.readInt(Input.java:337)
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.loadDatabaseTable(PegaBulkMover.java:971)
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.process(PegaBulkMover.java:542)
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.main(PegaBulkMover.java:212)
[java]
[java] PegaBulkMover:: SQLException loading Database.
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.loadDatabaseTable(PegaBulkMover.java:1016)
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.process(PegaBulkMover.java:542)
[java] at com.pega.pegarules.util.deploy.PegaBulkMover.main(PegaBulkMover.java:212)
[java]

BUILD FAILED
/cs/appsrv/pega_media/scripts/migrateSystem.xml:583: The following error occurred while executing this line:
/cs/appsrv/pega_media/scripts/migrateSystem.xml:33: Java returned: 1


Steps to Reproduce



1. Complete the settings in migrateSystem.properties.
2. Run migrate.sh.

Root Cause



The root cause of this problem is a defect or misconfiguration in the operating environment: the source server was still running when the migrate script was run. As a result, there is a discrepancy between the "Rows to Dump" value (the output of select count(*)) and the "Exported Rows for table" value (the actual number of rows exported).

The migrate script works by first exporting the table data into a .dmp file (the UNLOAD operation of PegaBulkMover) and then importing it by reading the .dmp file (the LOAD operation of PegaBulkMover).
On the export side of the logs, the following data is recorded for every table exported:

1. "Rows in <table name> to Dump" values. This value is set by the output of "Select Count(*) from tablename" query.
2.  PegaBulkMover: Exported <rowcount> rows for table <table name> requiring <file size> of space value. This value is set by the final value of the counter set in the loop for resultset of "Select * from tablename" query.
 
For some tables there is a mismatch between these values.
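
As an illustration of how such a mismatch can be spotted, the following sketch scans an UNLOAD log and reports tables whose two counts disagree. It is a minimal example only: the log file path and the regular expressions are assumptions based on the message formats quoted above, not part of the product.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: flag tables where the "Rows ... to Dump" count and the
// "Exported ... rows for table ..." count disagree. The regular expressions
// below are assumptions based on the log messages quoted in this article.
public class ExportLogCheck {
    private static final Pattern TO_DUMP =
            Pattern.compile("Rows in (\\S+) to Dump\\D*(\\d+)");
    private static final Pattern EXPORTED =
            Pattern.compile("Exported (\\d+) rows for table (\\S+)");

    public static void main(String[] args) throws IOException {
        Map<String, Long> expected = new HashMap<>();
        Map<String, Long> exported = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher d = TO_DUMP.matcher(line);
            if (d.find()) expected.put(d.group(1), Long.parseLong(d.group(2)));
            Matcher e = EXPORTED.matcher(line);
            if (e.find()) exported.put(e.group(2), Long.parseLong(e.group(1)));
        }
        for (Map.Entry<String, Long> entry : expected.entrySet()) {
            Long actual = exported.get(entry.getKey());
            if (!entry.getValue().equals(actual)) {
                System.out.println("Mismatch for " + entry.getKey()
                        + ": counted " + entry.getValue() + ", exported " + actual);
            }
        }
    }
}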

Hence, at import time (the LOAD operation), if the actual number of rows exported is less than the value returned by count(*), the .dmp file contains less data than expected. The import operation will nevertheless continue to read ahead in the file, causing the Buffer underflow error.

These values must be the same for the import to work correctly.
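
To see why reading ahead fails, the failure mode can be reproduced in miniature. The sketch below is a simplified analogy using plain Java streams rather than the Kryo serialization that PegaBulkMover uses: the file header claims more rows than were actually written, so the reader runs out of data partway through.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Simplified analogy, not PegaBulkMover itself: the header claims 5 rows
// (the count(*) taken while the server was still running), but only 3 rows
// were exported, so the reader runs past the end of the file, much as
// Kryo's Input.readInt() reports "Buffer underflow" during the LOAD step.
public class UnderflowDemo {
    public static void main(String[] args) throws IOException {
        File dump = File.createTempFile("table", ".dmp");
        dump.deleteOnExit();

        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(dump))) {
            out.writeInt(5);                  // expected row count from count(*)
            for (int i = 0; i < 3; i++) {     // rows changed underneath: only 3 exported
                out.writeInt(i);
            }
        }

        try (DataInputStream in = new DataInputStream(new FileInputStream(dump))) {
            int expected = in.readInt();
            for (int i = 0; i < expected; i++) {
                in.readInt();                 // throws EOFException on the 4th row
            }
        }
    }
}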


Resolution



Shut down the source server when performing the export operation so that data is not updated while the export package is built.

Published October 8, 2020
