Cassandra error: Too many open files


The Cassandra process might crash with an error indicating that there are too many open files. Perform the following task to diagnose issues with querying, saving, or synchronizing data, and then correct the errors.

The root cause is that the Cassandra process has hit a system-imposed limit on the maximum number of open files. The following snippet shows an example of the error message:

Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
/path/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/
mc_txn_flush_8bdc78f0-7d48-11e9-9b2e-0f78ea2b6c2b.log: Too many open files
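Before changing any limits, you can confirm that the process is close to its limit by comparing its open descriptor count against the limit recorded for it in /proc. This is a minimal sketch for Linux; it uses the current shell's PID (`$$`) as a stand-in, and the `pgrep -f CassandraDaemon` lookup shown in the comment assumes the default Cassandra main class name.

```shell
# Count open file descriptors for a process via the Linux /proc interface.
# $$ (the current shell) is used as a stand-in; for Cassandra, substitute
# its PID, e.g.: pid=$(pgrep -f CassandraDaemon)
pid=$$
echo "Open descriptors: $(ls /proc/$pid/fd | wc -l)"
grep "Max open files" /proc/$pid/limits
```

If the descriptor count is near the "Max open files" value, the process is about to hit the limit described above.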
  1. On Linux, enter the following commands in a Unix shell to check the limits on the number of open files:

    • To check the hard limit, enter ulimit -Hn

      Only the root user can raise this limit, but any process can lower it.

    • To check the soft limit, enter ulimit -Sn

      Any process can lower this limit, or raise it up to the hard limit.

  2. Change the limit on the maximum number of open files, depending on your business needs.

    Do not raise the limit on open files above 100,000. For more information about changing open file limits, see the Apache Cassandra documentation.
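The checks in step 1 and the change in step 2 can be sketched for a single shell session as follows. Note that `ulimit` changes apply only to the current shell and its child processes; the specific values shown are illustrative.

```shell
# Check the current limits for this shell.
ulimit -Hn   # hard limit: only root can raise it
ulimit -Sn   # soft limit: can be lowered, or raised up to the hard limit
# Raise the soft limit to the hard limit for this session:
ulimit -Sn "$(ulimit -Hn)"
```

To make a limit change persistent across logins, many Linux distributions use `/etc/security/limits.conf` (or a systemd unit's `LimitNOFILE=` setting for services); consult your distribution's documentation for the appropriate mechanism.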
