Dataflow failure due to Cassandra Timeout
Summary
Dataflow failures started in the production environment with the exception shown below. Recent changes to the environment include:
- A new application that does not use any DDS facilities was added.
- A Visual Business Director (VBD) node was added.
- HFix-31496, which upgrades the Cassandra driver version, was also installed recently.
Caused by: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.OperationTimedOutException: [/<your IP>:9042] Timed out waiting for server response
... 32 more
Caused by: com.datastax.driver.core.exceptions.OperationTimedOutException: [/<your IP>:9042] Timed out waiting for server response
Steps to Reproduce
Run a data flow.
Root Cause
A defect in Pegasystems' code or rules.
The issue is caused by two sets of problems introduced by two separate hotfixes that were deployed as dependencies of another hotfix. These hotfixes improved dataflow execution performance, thereby overloading the Cassandra servers.
Issue 1: No retries are performed after a timeout, so an OperationTimedOutException is thrown. HFix-31496 upgrades the Cassandra driver from 2.1.9 to 3.1.2. This upgrade changes the driver's default behavior: it no longer retries after a timeout unless the CQL query is marked as idempotent.
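The driver-side change can be illustrated with a minimal sketch in plain Java (hypothetical helper names, not the actual DataStax driver API): after a timeout, an operation is retried only when the caller has explicitly marked it idempotent, mirroring the 3.x default behavior.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

// Sketch of the 3.x retry gate: a timed-out query is retried only if it
// was explicitly flagged idempotent (illustrative, not the DataStax API).
public class IdempotentRetry {

    public static <T> T execute(Callable<T> query, boolean idempotent, int maxRetries)
            throws Exception {
        int attempts = 0;
        while (true) {
            try {
                return query.call();
            } catch (TimeoutException e) {
                attempts++;
                // Non-idempotent queries are never retried: the driver cannot
                // know whether the server applied the write before timing out.
                if (!idempotent || attempts > maxRetries) {
                    throw e;
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        Callable<String> flaky = () -> {
            if (++calls[0] < 3) {
                throw new TimeoutException("Timed out waiting for server response");
            }
            return "ok";
        };
        // Succeeds on the third attempt because the query is marked idempotent.
        System.out.println(execute(flaky, true, 5));
    }
}
```

Under the pre-3.x defaults, the equivalent of `idempotent` was effectively always true, which is why the upgrade alone was enough to surface OperationTimedOutException in dataflow runs.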
Issue 2: Too much read load on the Cassandra servers caused them to lose visibility of other nodes in the cluster. This caused Cassandra to store hinted handoffs locally. Saving and compacting the hinted handoffs triggered garbage collection, which in turn caused JVM pauses and hence timeout errors.
HFix-35785 changes the DDS dataset to improve performance: DDS reads now occur in parallel instead of one by one.
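The effect of that hotfix can be pictured with a small sketch (hypothetical names, not Pega's implementation): reads that previously ran one by one are submitted concurrently, and a fixed-size pool is one way to bound the number of in-flight reads so the cluster is not overwhelmed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: serial DDS reads become parallel reads. Unbounded parallelism is
// what overloaded Cassandra; a fixed-size pool caps concurrent reads.
public class ParallelReads {

    // Hypothetical stand-in for a single DDS record read.
    static String readRecord(int key) {
        return "record-" + key;
    }

    public static List<String> readAll(List<Integer> keys, int maxParallel)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(maxParallel);
        try {
            // Submit all reads; at most maxParallel run at the same time.
            List<Future<String>> futures = new ArrayList<>();
            for (int key : keys) {
                futures.add(pool.submit(() -> readRecord(key)));
            }
            // Collect results in submission order.
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The resolution steps below work on the same principle: rather than changing the parallel-read code, they shrink the amount of work each run puts in flight.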
Resolution
Apply HFix-37220. The following configuration changes are also required:
1. Increase the timeout values by updating prconfig.xml on the nodes below.
   a. On all DDS nodes: <env name="dnode/yaml/write_request_timeout_in_ms" value="60000" />
   b. On all DF nodes: <env name="dnode/cassandra_read_timeout_millis" value="50000" />
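For reference, the two settings above would sit in the respective prconfig.xml files roughly as follows (the surrounding pegarules root element is shown here on the assumption of a standard prconfig.xml layout):

```xml
<!-- prconfig.xml on DDS nodes: raise the Cassandra write request timeout -->
<pegarules>
  <env name="dnode/yaml/write_request_timeout_in_ms" value="60000" />
</pegarules>

<!-- prconfig.xml on Data Flow (DF) nodes: raise the client read timeout -->
<pegarules>
  <env name="dnode/cassandra_read_timeout_millis" value="50000" />
</pegarules>
```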
2. Reduce the DDS read load on the Cassandra servers by executing the dataflow from an activity and setting the following properties on the RunOptions page.
- Tune the values below based on whether your dataflow execution is Cassandra-intensive or strategy-intensive.
- You can determine this from the '% of total time' column on the dataflow run item progress page.
- Higher values for DDS dataset components mean the run is Cassandra-intensive.
- Reduce pyBatchSize and pyNumberOfRequestors for those dataflow runs that are Cassandra-intensive.
Set .pyNumberOfRequestors to 8.
Set .pyBatchSize to 50.
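As a rough tuning aid (an assumed model, not a documented Pega formula): if the in-flight DDS read pressure scales with the number of requestors times the batch size, the suggested values cap it as follows.

```java
// Back-of-the-envelope estimate of in-flight records per dataflow run.
// Assumed model: requestors * batchSize (not an official Pega formula).
public class LoadEstimate {
    static int inFlightRecords(int requestors, int batchSize) {
        return requestors * batchSize;
    }

    public static void main(String[] args) {
        // Suggested settings: 8 requestors with a batch size of 50.
        System.out.println(inFlightRecords(8, 50));
    }
}
```

Lowering either value reduces the estimate proportionally, which is why both properties are reduced together for Cassandra-intensive runs.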
Published January 24, 2018 - Updated October 8, 2020