Support Article
Cassandra timeout error during Data flow run
SA-66816
Summary
A batch data flow run fails with a Cassandra read timeout error.
Error Messages
com.pega.dsm.dnode.api.dataflow.StageException: Exception in stage: StageName at com.pega.dsm.dnode.api.dataflow.StageException.create(StageException.java:39) at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageOutputSubscriber.onError(DataFlowStage.java:512) at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageInputSubscriber.onError(DataFlowStage.java:380) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.onError(DataObservableImpl.java:305) at com.pega.dsm.dnode.api.stream.DataSubscriber.onError(DataSubscriber.java:60) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.onError(DataObservableImpl.java:305) at com.pega.dsm.dnode.api.stream.DataObservables$6$1.onError(DataObservables.java:142) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.onError(DataObservableImpl.java:305) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.subscribe(DataObservableImpl.java:344) at com.pega.dsm.dnode.impl.stream.DataObservableImpl.subscribe(DataObservableImpl.java:40) at com.pega.dsm.dnode.api.stream.DataObservables$6.emit(DataObservables.java:134) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.subscribe(DataObservableImpl.java:338) at com.pega.dsm.dnode.impl.stream.DataObservableImpl.subscribe(DataObservableImpl.java:40) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$3.emit(DataObservableImpl.java:161) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.subscribe(DataObservableImpl.java:338) at com.pega.dsm.dnode.impl.stream.DataObservableImpl.subscribe(DataObservableImpl.java:40) at com.pega.dsm.dnode.api.dataflow.DataFlow$3.run(DataFlow.java:417) at com.pega.dsm.dnode.api.dataflow.DataFlow$3.run(DataFlow.java:411) at com.pega.dsm.dnode.util.PrpcRunnable.execute(PrpcRunnable.java:52) at com.pega.dsm.dnode.impl.dataflow.DataFlowThreadContext$1.run(DataFlowThreadContext.java:161) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at com.pega.dsm.dnode.util.PrpcRunnable$1.run(PrpcRunnable.java:44) at com.pega.dsm.dnode.util.PrpcRunnable$1.run(PrpcRunnable.java:41) at com.pega.dsm.dnode.util.PrpcRunnable.execute(PrpcRunnable.java:52) at com.pega.dsm.dnode.impl.prpc.PrpcThreadFactory$PrpcThread.run(PrpcThreadFactory.java:109) Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:88) at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:25) at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37) at com.datastax.driver.core.ArrayBackedResultSet$MultiPage.prepareNextRow(ArrayBackedResultSet.java:313) at com.datastax.driver.core.ArrayBackedResultSet$MultiPage.isExhausted(ArrayBackedResultSet.java:269) at 
com.datastax.driver.core.ArrayBackedResultSet$1.hasNext(ArrayBackedResultSet.java:143) at com.pega.dsm.dnode.impl.dataset.cassandra.CassandraDataEmitter.processResults(CassandraDataEmitter.java:69) at com.pega.dsm.dnode.impl.dataset.cassandra.CassandraBrowseAllRecordsOperation$6.emit(CassandraBrowseAllRecordsOperation.java:159) at com.pega.dsm.dnode.impl.dataset.cassandra.CassandraDataEmitter.emit(CassandraDataEmitter.java:45) at com.pega.dsm.dnode.impl.stream.DataObservableImpl$SafeDataSubscriber.subscribe(DataObservableImpl.java:338) ... 21 more Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) at
.
.
.
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:321) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1280) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:890) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:564) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:505) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:419) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:391) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145) at java.lang.Thread.run(Thread.java:748) Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded) at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:62) at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:277) at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:257) at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88) ... 31 more
Steps to Reproduce
- Configure a data flow to read from one Cassandra (C*) dataset and write to another.
- Use a source dataset that contains more than 100 million records, where each record has about 280 properties.
- Run the data flow.
Root Cause
A defect or configuration issue in the operating environment.
Resolution
- Apply HFix-46940.
- Configure the following Dynamic System Settings (DSS). An illustrative sketch of the token-range read pattern behind the second setting follows the list.
  - Owning ruleset: Pega-Engine
    Setting purpose: prconfig/dnode/cassandra_use_extended_token_aware_policy/default
    Value: true
  - Owning ruleset: Pega-Engine
    Setting purpose: prconfig/dnode/dds_partitioner_class/default
    Value: com.pega.dsm.dnode.impl.dataset.cassandra.TokenRangePartitioner
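The dds_partitioner_class setting directs the data flow to browse the Cassandra dataset one token range at a time rather than as a single full scan, which keeps each read small enough to complete before the timeout. For illustration only, the following is a minimal sketch of a token-range-partitioned read using the DataStax Java driver 3.x; the contact point, keyspace, table, and partition key names are hypothetical, and this is not the Pega implementation.

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.TokenRange;

public class TokenRangeBrowseSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // Hypothetical table my_keyspace.my_table with partition key "pk".
            PreparedStatement ps = session.prepare(
                "SELECT * FROM my_keyspace.my_table WHERE token(pk) > ? AND token(pk) <= ?");

            // Browse the table one token range at a time instead of in one full scan.
            for (TokenRange range : cluster.getMetadata().getTokenRanges()) {
                // unwrap() splits ranges that wrap around the token ring.
                for (TokenRange subRange : range.unwrap()) {
                    BoundStatement bound = ps.bind()
                        .setToken(0, subRange.getStart())
                        .setToken(1, subRange.getEnd());
                    for (Row row : session.execute(bound)) {
                        // Process each record here.
                    }
                }
            }
        }
    }
}

With both settings applied, the extended token-aware policy is intended to route each per-range query to a replica that owns that range, so one slow or unresponsive replica is less likely to time out the whole browse.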
Published December 4, 2018 - Updated December 2, 2021