
This content has been archived and is no longer being updated. Links may not function; however, this content may be relevant to outdated versions of the product.

Support Article

Login error: "Cannot save Data-Admin-OperatorID"

SA-83424

Summary



Pega Platform 8.2.1 is installed on Apache Tomcat 8.5.39 on Linux, with a PostgreSQL 9.6.12 database instance.
After the installation, it is not possible to log in to the application using the default Administrator ID and temporary password.





Error Messages



[il INITIALIZE_SEARCH] [STANDARD] [ ] [ ] (Manager.PegaSearchProviderImpl) ERROR   - Failed to initialize full text search functionality for this node.

com.pega.platform.search.searchmanager.FTSInitializationException: Failed to initialize full text search for this node.
    at com.pega.platform.search.internal.ESSearchProviderEmbedded.initializeElasticSearchNode(ESSearchProviderEmbedded.java:204) ~[search.jar:?]
 
Caused by: java.lang.IllegalStateException: Failed to create node environment
    at org.elasticsearch.node.Node.<init>(Node.java:268) ~[elasticsearch-5.6.9.jar:?]
 
Caused by: java.io.IOException: failed to test writes in data directory [/data/tmp/PegaSearchIndex/nodes/0/indices/fvoOijQmSJO163rlP_7bUA/_state] write permission is required
    at org.elasticsearch.env.NodeEnvironment.tryWriteTempFile(NodeEnvironment.java:1081) ~[elasticsearch-5.6.9.jar:?]
    at org.elasticsearch.env.NodeEnvironment.assertCanWrite(NodeEnvironment.java:1049) ~[elasticsearch-5.6.9.jar:?]
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:278) ~[elasticsearch-5.6.9.jar:?]

Caused by: java.nio.file.AccessDeniedException: /data/tmp/PegaSearchIndex/nodes/0/indices/fvoOijQmSJO163rlP_7bUA/_state/.es_temp_file
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:1.8.0_201]
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:1.8.0_201]

[jsse-nio-8443-exec-3] [STANDARD] [ ] [PegaRULES:8] (sm.VirtualTableAssemblyHandler) ERROR pega-my.web.work|127.0.0.1  - Failed to compile com.pegarules.generated.activity.ra_action_pzloadauthenticationpolicies_843598b81a72746c251794b14b639076, pzInsKey = RULE-OBJ-ACTIVITY DATA-ADMIN-SYSTEM-AUTHPOLICIES PZLOADAUTHENTICATIONPOLICIES #20180713T132639.226 GMT; see class's compile log file.

[jsse-nio-8443-exec-5] [STANDARD] [ ] [PegaRULES:8] (.authentication.Authentication) ERROR pega-my-web.work|127.0.0.1  - Cannot save Data-Admin-Operator-ID instance to your_database.
com.pega.pegarules.pub.PRRuntimeException: Error occurred while executing forward chaining on page '', using rule: {pxObjClass=Rule-Obj-Activity, pyClassName=Data-Admin-Operator-ID, pyActivityName=UpdateOperatorID}
com.pega.pegarules.pub.PRRuntimeException: Error occurred while executing forward chaining on page '', using rule: {pxObjClass=Rule-Obj-Activity, pyClassName=Data-Admin-Operator-ID, pyActivityName=UpdateOperatorID}
    at com.pega.pegarules.exec.internal.declare.infengine.ChainingEngineUtilImpl.runActivity(ChainingEngineUtilImpl.java:264) ~[prprivate.jar:?]
    at com.pega.pegarules.exec.internal.declare.infengine.TriggerImpl.evaluateNetworks(TriggerImpl.java:304) ~[prprivate.jar:?]
    at com.pega.pegarules.data.internal.access.DatabaseImpl.performTriggers(DatabaseImpl.java:6782) ~[prprivate.jar:?]

Caused by: com.pega.pegarules.pub.generator.FirstUseAssemblerException: Failed to compile generated Java com.pegarules.generated.activity.ra_action_updateoperatorid_3ec78c493c9f6ee0b44992755a481034: 
    at com.pega.pegarules.generation.internal.vtable.asm.VirtualTableAssemblyHandler.logErrorsAndThrowException(VirtualTableAssemblyHandler.java:839) ~[prprivate.jar:?]

 [PegaRULES-Batch-5] [STANDARD] [ ] [PegaRULES:8] (internal.async.AgentQueue) ERROR   - Problem queue Pega-SearchEngine #0: System-Queue-FTSIncrementalIndexer.pzFTSIncrementalIndexer will restart in 240000 ms
[StreamServer.Default] [STANDARD] [ ] [ ] (dsm.kafka.Kafka) ERROR   - Failed to start Kafka on 1 attempt, kafka log
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/tomcat/kafka-1.1.0.3/logs/controller.log (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>

[StreamServer.Default] [STANDARD] [ ] [ ] (rvice.operation.StartOperation) ERROR   - Cannot start service [StreamServer.Default]
com.pega.dsm.dnode.api.StreamServiceException: Unable to start Kafka broker. Last state was: NotConnected
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/tomcat/kafka-1.1.0.3/logs/controller.log (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init

ERROR Disk error while locking directory /opt/tomcat/kafka-data (kafka.server.LogDirFailureChannel)
java.io.FileNotFoundException: /opt/tomcat/kafka-data/.lock (Permission denied)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at kafka.utils.FileLock.<init>(FileLock.scala:33)
    at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:231)
    at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:229)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    at kafka.log.LogManager.lockLogDirs(LogManager.scala:229)
    at kafka.log.LogManager.<init>(LogManager.scala:96)
    at kafka.log.LogManager$.apply(LogManager.scala:933)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:237)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
[2019-07-03 14:23:04,306] ERROR Shutdown broker because all log dirs in /opt/tomcat/kafka-data have failed (kafka.log.LogManager)


Steps to Reproduce

  1. Install Pega Platform 8.2.1.
  2. Log in to the application using the default Administrator ID and temporary password provided during installation.


Root Cause



Directories such as the search index directory (indices) and the Kafka data and log directories were created by the root user, so the Tomcat process, which runs as a non-root user, cannot write to them.
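The ownership problem can be confirmed from the shell. A minimal diagnostic sketch, using the directory paths taken from the stack traces above (the exact paths on your system may differ):

```shell
#!/bin/sh
# Report the owner, group, and mode of each directory named in the errors above.
# If the owner column shows "root" while Tomcat runs as a non-root account,
# the write failures in the logs follow directly.
missing=0
for dir in /data/tmp/PegaSearchIndex /opt/tomcat/kafka-1.1.0.3/logs /opt/tomcat/kafka-data; do
    if [ -d "$dir" ]; then
        # e.g. "root root 755 /data/tmp/PegaSearchIndex"
        stat -c '%U %G %a %n' "$dir"
    else
        echo "not present: $dir"
        missing=$((missing + 1))
    fi
done
```

Changing ownership to the Tomcat service account (for example, `chown -R tomcat:tomcat <dir>`, substituting your actual service account) is an alternative to deleting the directories, but the Resolution below lets Tomcat re-create them with the correct owner.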


Resolution



Make the following changes to the operating environment:
  1. Clear the affected directories and temporary files, so that they are re-created by the user that runs Tomcat.
  2. Redeploy prweb.war.
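The steps above can be sketched as a shell script. It runs as a dry run by default because the commands are destructive; the directory paths are taken from the errors above, and the deployment path and WAR location are assumptions to adapt to your environment (the search index and Kafka data are rebuilt on restart):

```shell
#!/bin/sh
# Dry-run sketch of the cleanup; set DRY_RUN=0 only after verifying every path.
DRY_RUN=${DRY_RUN:-1}
planned=""
run() {
    planned="$planned$*;"
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# 1. Clear the directories and temporary files created with the wrong owner.
run rm -rf /data/tmp/PegaSearchIndex
run rm -rf /opt/tomcat/kafka-data
run rm -rf /opt/tomcat/kafka-1.1.0.3/logs

# 2. Redeploy prweb.war so Tomcat (not root) re-creates what it needs.
#    The source and target paths here are placeholders for your deployment.
run cp prweb.war /opt/tomcat/webapps/
```

Run the script (and subsequently Tomcat itself) as the Tomcat service account, not as root, so the re-created directories get the correct owner.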

Published December 2, 2021
