

Support Article

Unable to save records to Cassandra

SA-83806

Summary



Not all records are saved to an external Cassandra database after messages are read from Kafka through a data flow; the record count in the destination is lower than the number of messages read from the source.


Error Messages



No functional DDS nodes available after waiting 30 seconds
Unable to obtain session. Cassandra hasn't been started or is unavailable


Steps to Reproduce

  1. Run a data flow that reads messages from a Kafka stream through a data set.
  2. Save the records to an external Cassandra database through another data set.


Root Cause



The source data from Kafka contains duplicate records, and Cassandra stores only distinct records: a write that reuses an existing primary key overwrites the earlier row instead of adding a new one. Hence the difference in record count.
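For illustration, the following is a minimal Python sketch using the DataStax cassandra-driver; the contact point, keyspace name (demo_keyspace), and table name (events) are assumptions for the example, not values taken from this article. It shows how two writes with the same primary key collapse into a single row:

```python
# Minimal sketch (assumed contact point, keyspace, and table names) showing why
# duplicate records collapse into one row: Cassandra treats an INSERT that reuses
# an existing primary key as an upsert, overwriting the previous row.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])             # assumption: locally reachable node
session = cluster.connect("demo_keyspace")   # assumption: keyspace already exists

session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        event_id text PRIMARY KEY,
        payload  text
    )
""")

# Write the same primary key twice, as a Kafka topic with duplicates would.
session.execute("INSERT INTO events (event_id, payload) VALUES ('A-1', 'first copy')")
session.execute("INSERT INTO events (event_id, payload) VALUES ('A-1', 'second copy')")

rows = list(session.execute("SELECT event_id, payload FROM events WHERE event_id = 'A-1'"))
print(len(rows))           # 1 -- the second write replaced the first row
print(rows[0].payload)     # 'second copy'

cluster.shutdown()
```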


Resolution



This is the explanation for the reported behavior:

The difference in record count between the source (Kafka) and the destination (Cassandra) arises because only distinct values are persisted to the Cassandra table.

For a quick test, write the results to a database table instead: the count of distinct values in that table matches the record count in the Cassandra keyspace.
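As a sketch of that verification under assumed names (a kafka-python consumer, a source topic called events, and the same hypothetical keyspace and table as in the earlier sketch), an alternative way to run the same comparison is to count the distinct record keys on the Kafka side and compare the result with the row count in the Cassandra table:

```python
# Sketch of the verification step (assumed topic, keyspace, and table names):
# count distinct record keys in the source topic and compare with the number of
# rows actually persisted in Cassandra.
from kafka import KafkaConsumer
from cassandra.cluster import Cluster

consumer = KafkaConsumer(
    "events",                           # assumption: source topic name
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,           # stop once the topic has been drained
)
distinct_keys = {msg.key for msg in consumer}

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_keyspace")
cassandra_count = session.execute("SELECT COUNT(*) FROM events").one()[0]

print(f"distinct source keys: {len(distinct_keys)}")
print(f"rows in Cassandra:    {cassandra_count}")
```

If the two numbers match, the shortfall is explained by duplicates in the source rather than by failed writes.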

Published December 2, 2021
