Monitoring your case archival and purge process

Updated on May 11, 2022

View the completion of your archiving and purge jobs in a single view by using the Log-ArchivalSummary class and its associated log files. Alternatively, view the pr_metadata table, which holds the pyArchiveStatus value and other information about all records that the three types of archival jobs process.

Monitoring with the Log-ArchivalSummary class

Open the Log-ArchivalSummary class with the following procedure:
  1. In the navigation pane of Dev Studio, click App.
  2. On the Classes tab, in the Search field, enter Log-ArchivalSummary, and then select the Log-ArchivalSummary class.
  3. Review the following columns to confirm the success of the archival process:
Task
The name of the job running through the Job Scheduler. For more information about the jobs used during a case archival and purge process, see Case archiving and purging overview.

This column includes the following types of tasks:

  • Crawler - The step of the pyPegaArchiver job or the pzPerformArchive activity that confirms case eligibility for archiving and purging.
  • Copier - The step of the pyPegaArchiver job or the pzPerformArchive activity that copies and zips case data to Pega Cloud File Storage.
  • Indexer - The pyPegaIndexer job or the pzPerformIndex activity that indexes case data for Elasticsearch in Pega Cloud File Storage.
  • Purger - The pyPegaPurger job or the pzPerformPurge activity that deletes copied and indexed case data from the Pega Platform database.
  • Expunger - The pyPegaExpunger job or the pzPerformExpunge activity that deletes archived cases from Pega Cloud File Storage.

pyTaskEndTime
The time when the job finished.

pyCaseProcessed
The number of cases that the job successfully processed.

pyCasesUnsuccessful
The number of cases that the job failed to process.

The Log-ArchivalSummary class holds 365 days' worth of entries; entries older than this limit are purged.

To specify how long to retain entries, run the TrimLog activity:

  1. In the navigation pane of Dev Studio, click Records > Technical > Activity.
  2. In the Setting Purpose column, click the filter icon and filter the column by entering TrimLog.
  3. Select TrimLog.
  4. Select Actions > Run.
  5. In the daysAgo field, enter the number of days that you want to retain Log-ArchivalSummary class entries.
  6. Click Run.

Monitoring your archival progress using logs

Review the log entries for the archiving and purge jobs found in the PegaRULES.log.

For more information about accessing log files, see Viewing logs.
  • Search for Archival-CaseCrawler, Archival-CaseCopier, Archival-Purger, or Archival-Indexer to view detailed log messages for the respective events of each job.
    • The PegaRULES.log file lists five sample cases for each batch that you define in the archive setting dataarchival/batchSize, to show where Pega Platform is archiving eligible cases in your backlog.
    • The log message Final status for file <filepath> confirms the completion of an archival job and includes the file path of the corresponding .zip archive file in Pega Cloud File Storage.
  • Search for the ERROR and INFO_FORCED log levels for archiving and purge jobs to find failures during your archival and purge process. Failures use the following format to show where the archiving and purge process did not succeed:

    <Log level> - <timestamp> <Archival-CaseCrawler, Archival-CaseCopier, Archival-Purger, or Archival-Indexer job> : <CaseType>-<CaseTypeID>
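As a minimal sketch of scanning for such failures outside the log viewer, the following Python snippet filters lines that match the format above. The sample log lines, timestamps, and case identifiers are hypothetical; only the job names and log levels come from this article, and your actual PegaRULES.log layout may differ slightly.

```python
import re

# Hypothetical sample log lines that follow the failure format described above.
LOG_LINES = [
    "ERROR - 2022-05-11 10:15:02 Archival-CaseCopier : Work-MyCase-C-1001",
    "INFO_FORCED - 2022-05-11 10:15:03 Archival-Purger : Work-MyCase-C-1002",
    "INFO - 2022-05-11 10:15:04 Some-Other-Logger : unrelated message",
]

# Match only archival and purge job entries at the ERROR or INFO_FORCED level.
FAILURE_PATTERN = re.compile(
    r"^(ERROR|INFO_FORCED) - (\S+ \S+) "
    r"(Archival-(?:CaseCrawler|CaseCopier|Purger|Indexer)) : (.+)$"
)

def find_failures(lines):
    """Return (level, timestamp, job, case) tuples for matching log entries."""
    return [m.groups() for line in lines if (m := FAILURE_PATTERN.match(line))]

for level, timestamp, job, case in find_failures(LOG_LINES):
    print(level, job, case)
```

Adjust the regular expression to your environment's exact timestamp and separator conventions before relying on it.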

Monitoring your progress using queries to the pr_metadata table

Use the Query Runner to run an SQL statement on the pr_metadata table. For more information about Query Runner, see Running SQL queries on Pega Cloud.

The SQL query results contain the value of pyArchiveStatus. The following table shows when each status occurs during the archival process:

  1. Archive-Ready (status occurs after the Crawler step of the pyPegaArchiver job): The record is pending the Copier step of the pyPegaArchiver job.

  2A. Archived (status occurs after the pyPegaArchiver job): The record is copied to Pega Cloud File Storage.

  2B. Archive-External (status occurs after the pyPegaArchiver job): The record is an external attachment, and its reference is copied to Pega Cloud File Storage.

  2C. Archive-Shared (status occurs after the pyPegaArchiver job): The record is shared between cases and might not be eligible for the archival process.

  2D. Archive-Failed (status occurs after the pyPegaArchiver job): The archival process failed for the record.

  3A. Pending-Purge (status occurs after the pyPegaIndexer job): The record has been indexed into Elasticsearch and is pending the purge process.

  3B. Indexing-Failed (status occurs after the pyPegaIndexer job): The indexing of the record into Elasticsearch failed.

  4. <empty pr_metadata table> (occurs after the pyPegaPurger job): The record has been purged from the Pega database.
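To illustrate the kind of status query you might run through Query Runner, the following Python sketch builds an in-memory stand-in for the pr_metadata table with sqlite3 and counts records per pyArchiveStatus. The pyObjectID column, table contents, and exact schema are assumptions for illustration; only pr_metadata and pyArchiveStatus come from this article, and the real table lives in your Pega database.

```python
import sqlite3

# In-memory stand-in for the pr_metadata table; the real table is queried
# through Query Runner against your Pega Cloud database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pr_metadata (pyObjectID TEXT, pyArchiveStatus TEXT)")
conn.executemany(
    "INSERT INTO pr_metadata VALUES (?, ?)",
    [
        ("CASE-1", "Archive-Ready"),   # awaiting the Copier step
        ("CASE-2", "Archived"),        # copied to Pega Cloud File Storage
        ("CASE-3", "Archived"),
        ("CASE-4", "Pending-Purge"),   # indexed, awaiting the Purger job
    ],
)

# Count records in each archival state to gauge overall progress.
rows = conn.execute(
    "SELECT pyArchiveStatus, COUNT(*) FROM pr_metadata "
    "GROUP BY pyArchiveStatus ORDER BY pyArchiveStatus"
).fetchall()
for status, count in rows:
    print(status, count)
```

A dwindling count of Archive-Ready and Pending-Purge rows over successive runs indicates that the crawler, indexer, and purger jobs are keeping up with your backlog.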
