Working with the PegaRULES database — Common tasks

Working with the PegaRULES database


This topic presents general guidance for database administration tasks that may arise in a Process Commander system supporting anywhere from dozens to thousands of application users.

Because Data-Admin-DB-Table and Data-Admin-DB-Name instances control how Process Commander finds objects within the PegaRULES database, changes to the database table structure must be tightly coordinated with changes to these data instances.

Limitations

Advanced: More specific guidance cannot be provided here, because details may depend on:

Planning tablespace

The recommended minimum tablespace for the PegaRULES database is 1 gigabyte. Some production sites have databases with more than 20 gigabytes of tablespace. However, the 17,000+ standard rules require only about 300 megabytes. Most space in your PegaRULES database is required for work objects, history, and attachments.

Consult the Pega Developer Network article PRKB-12464, How to estimate disk space requirements for a PegaRULES Database (Oracle), for more information on disk space planning.
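As a rough illustration of the planning arithmetic above, the sketch below estimates total tablespace from per-row sizes. Apart from the roughly 300 megabyte baseline for the standard rules stated in this topic, every figure (the per-row kilobyte sizes and the example volumes) is a hypothetical input for illustration, not a published sizing number:

```python
# Back-of-envelope tablespace estimate. The per-row sizes below are
# illustrative assumptions only; consult PRKB-12464 for real planning.
RULE_BASE_MB = 300  # approximate space for the 17,000+ standard rules


def estimate_tablespace_mb(work_objects, history_rows, attachments,
                           work_kb=8, history_kb=2, attachment_kb=200):
    """Rough PegaRULES tablespace estimate, in megabytes."""
    data_mb = (work_objects * work_kb
               + history_rows * history_kb
               + attachments * attachment_kb) / 1024
    return RULE_BASE_MB + data_mb


# Hypothetical volumes: 500,000 work objects, 5 million history rows,
# 50,000 attachments. Work objects, history, and attachments dominate.
print(round(estimate_tablespace_mb(500_000, 5_000_000, 50_000)))
```

As the topic notes, the rules themselves are a small fraction of the total; the estimate is dominated by work objects, history, and attachments.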

Note: To provide good performance, Process Commander can use a single database account but multiple connections, in a pool. You can implement pooled connections using a JNDI datasource and application server facilities. Alternatively, you can use Process Commander facilities. See How to limit database connections to the PegaRULES database.
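The pooling idea can be sketched as follows: a fixed set of reusable connections, all opened under one database account, handed out and returned through a queue. Here sqlite3 stands in for the PegaRULES database, and SimplePool is a hypothetical helper for illustration, not a Process Commander or application server facility:

```python
import queue
import sqlite3


class SimplePool:
    """Minimal connection pool: one database account, N reusable connections."""

    def __init__(self, dsn, size=5):
        self._free = queue.Queue(maxsize=size)
        for _ in range(size):
            # All connections use the same account/DSN, opened up front.
            self._free.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self, timeout=30):
        # Blocks (up to timeout) when every pooled connection is in use.
        return self._free.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse instead of closing it.
        self._free.put(conn)


pool = SimplePool(":memory:", size=2)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone()[0])  # → 1
pool.release(conn)
```

In practice a JNDI datasource managed by the application server provides this behavior, with validation and timeout handling that this sketch omits.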

Note: Database accesses that support service rules and escalation depend on the current date and time setting on the database server. Make sure the date and time of the database server are synchronized with the date and time on the application server node or nodes.

Accommodating table growth

To see a current record count for a table:

  1. Select Tools > System Management application to start the System Management application.
  2. Select a node, and then select the Advanced > Database Table Information menu item.
  3. Select the radio button corresponding to the table and click List Record Count.
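The record count reported by the steps above corresponds to a simple SQL COUNT(*) query. The sketch below shows the equivalent query against a stand-in table; sqlite3 and the two-column pr_other schema here are illustrative assumptions, not the actual PegaRULES schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical stand-in for a PegaRULES table; real tables have more columns.
conn.execute("CREATE TABLE pr_other (pzInsKey TEXT PRIMARY KEY, pxObjClass TEXT)")
conn.executemany("INSERT INTO pr_other VALUES (?, ?)",
                 [(f"KEY-{i}", "Data-Example") for i in range(3)])

# The equivalent of the System Management application's record count
count = conn.execute("SELECT COUNT(*) FROM pr_other").fetchone()[0]
print(count)  # → 3
```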

In production settings, the tables listed below are in general likely to grow to contain 100,000 or more rows, depending on the application design or pattern of use:

These tables typically show stable or slowly growing row counts:

Tables with highest and lowest turnover rates

These tables ordinarily have the highest rate of insert, update, and delete operations:

These tables ordinarily have a low rate of insert, update, and delete operations:

Addressing truncated property values

In rare situations, data values entered or created in the application may contain more characters than the width of the corresponding column in the database table. When the system detects this condition while committing the object to the database table, it:

Tip: Review system logs regularly to identify such situations. To prevent loss of important data, change the column width, or modify the property definition or other validation to restrict the value length.

A few properties, when they hold values longer than system limits, cause the Commit method to fail. Properties with pz or px prefixes are designed to be exposed as columns, so truncation of long values is not appropriate.
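A pre-commit length check of the kind the Tip suggests can be sketched as follows. The column names and widths here are hypothetical, and this is an illustration of the validation idea, not how Process Commander itself performs the check:

```python
# Hypothetical column widths; real widths come from the table schema.
COLUMN_WIDTHS = {"pyLabel": 64, "pyDescription": 128}


def find_truncations(row):
    """Return (column, actual_length, limit) for values exceeding the width."""
    return [(col, len(val), COLUMN_WIDTHS[col])
            for col, val in row.items()
            if col in COLUMN_WIDTHS and len(val) > COLUMN_WIDTHS[col]]


row = {"pyLabel": "x" * 100, "pyDescription": "ok"}
print(find_truncations(row))  # one violation: pyLabel at 100 chars, limit 64
```

Running such a check before committing lets the application reject or shorten oversized values rather than discovering truncation in the logs afterward.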

Creating a separate table for a child class

Classes derived from a parent class through pattern inheritance share the parent class's table, except as otherwise specified. Thus, the system allows the table for an entire hierarchy of classes to be defined in one place.

However, you can associate a child class with a table different from its parent by creating a separate Data-Admin-DB-Table instance for the child. This may be desirable for performance, backup, space, or other reasons. The table you specify for a child class overrides the table defined for the parent class.

For example, the standard schema places instances of the Data-Party-Com class — which identify business entities — in the pr_data table, through the Data-Admin-DB-Table instance named Data-. If your application includes thousands or millions of such records, it may be desirable to place them in a dedicated table. To create a dedicated table:

  1. Using database tools, create a new table (call it myco_businesses) within the PegaRULES database. Make certain the schema includes the six properties needed in every table, and that the property pzInsKey is identified as the unique key. Identify additional columns for other properties to be exposed.
  2. Move all rows of the pr_data table that have Data-Party-Com as the value of the pxObjClass column into the new table.
  3. Add a Data-Admin-DB-Table instance named Data-Party-Com that references the myco_businesses table.
  4. Run the Column Populator utility to populate values for exposed columns from the BLOB column (Storage Stream).

Tip: When you create a new table in the PegaRULES database, include a database constraint that identifies the pzInsKey column as the primary key of the table.
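Steps 1 and 2 above can be sketched in SQL terms as follows. The sqlite3 database and the two-column schema are illustrative stand-ins (a real PegaRULES table needs all the required columns, and you would use your database vendor's tools), with pzInsKey declared as the primary key per the Tip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for pr_data; real tables carry more required columns.
conn.execute("CREATE TABLE pr_data (pzInsKey TEXT PRIMARY KEY, pxObjClass TEXT)")
conn.executemany("INSERT INTO pr_data VALUES (?, ?)",
                 [("BIZ-1", "Data-Party-Com"), ("OTH-1", "Data-Other")])

# Step 1: new table, with pzInsKey as the primary key (per the Tip above)
conn.execute("CREATE TABLE myco_businesses "
             "(pzInsKey TEXT PRIMARY KEY, pxObjClass TEXT)")

# Step 2: move matching rows, then remove them from the old table
conn.execute("INSERT INTO myco_businesses "
             "SELECT * FROM pr_data WHERE pxObjClass = 'Data-Party-Com'")
conn.execute("DELETE FROM pr_data WHERE pxObjClass = 'Data-Party-Com'")

print(conn.execute("SELECT COUNT(*) FROM myco_businesses").fetchone()[0])  # → 1
```

Steps 3 and 4 (the Data-Admin-DB-Table instance and the Column Populator utility) are performed inside Process Commander, not in SQL.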

Replicated data, split data, and transactions

In general, Process Commander data can be replicated to support high-volume production requirements, reporting, or other needs. Database vendor facilities that implement replication do not affect Process Commander operation. Separate server nodes can access tables in separate, but synchronized, database instances.

For example, in a production system the rules change only slowly and frequently used rules become cached. Work objects, work object history, and work object attachments may grow to require much more space than rules. You can consider splitting the rules into a separate database (or databases) from the larger objects. See Pega Developer Network article PDNPRKB-19641 Benefits of placing rules in a separate database from transaction data for a discussion of the advantages and disadvantages of such splits.

Caution: Never replicate the pr_sys_updatescache or pr_sys_locks tables. All nodes in a multinode Process Commander system must access a single copy of these two tables.

Trimming and purging

Trimming or purging of tables can be done directly with an SQL query tool. In a development system, this approach is fast and often appropriate.

Tip: As a best practice in a production setting, implement trimming and purging operations through activities whenever practical, either run on request or run automatically by an agent. This approach may be slower and more resource-intensive, but it helps ensure database integrity, handles errors, and causes internally maintained Index- instances to be deleted appropriately by Declare Index rules.

You can use the standard activity Log-.TrimLog to purge instances of one concrete class derived from the Log- base class, retaining instances less than N days old, where N is a parameter to the activity. This activity records the number of records trimmed in the Pega log file. (Log instances are ordered by date and time.)
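The trimming logic of an activity like Log-.TrimLog amounts to deleting rows older than a cutoff date and reporting how many were removed. A minimal sketch, using sqlite3 and a hypothetical pr_log schema as stand-ins for the real tables:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
# Hypothetical log table; real Log- tables carry more columns.
conn.execute("CREATE TABLE pr_log (pzInsKey TEXT PRIMARY KEY, "
             "pxCreateDateTime TEXT)")
now = datetime(2024, 1, 31)
conn.executemany("INSERT INTO pr_log VALUES (?, ?)",
                 [("OLD-1", (now - timedelta(days=90)).isoformat()),
                  ("NEW-1", (now - timedelta(days=5)).isoformat())])

days_to_keep = 30  # corresponds to the activity's N parameter
cutoff = (now - timedelta(days=days_to_keep)).isoformat()

# Delete everything older than the cutoff; rowcount is the trimmed total
cur = conn.execute("DELETE FROM pr_log WHERE pxCreateDateTime < ?", (cutoff,))
conn.commit()
print(cur.rowcount)  # number of records trimmed → 1
```

Running this logic through an activity rather than raw SQL additionally lets Declare Index rules clean up associated Index- instances, as the Tip above explains.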

The Pega-RULES agent normally runs the standard activity Code-.SystemCleaner once each day to perform housekeeping tasks. If desired, you can add a similar activity to another agent in your application to purge outdated information and recover table space. Make sure the new agent has access to the needed RuleSet Versions, and take care not to purge records that are referenced in other tables.

Purging and archiving work objects

Work object details are stored as rows of eight different tables (or more than eight if you have placed separate work pools in separate tables). To purge work objects or move them to an archive, it is important to remove all the pieces, and to remove only those work objects that are no longer active.

The Purge/Archive wizard purges or archives work objects and their details. See About the Purge/Archive wizard.

For a description of the tables involved, see Purging and Archiving Work Objects, a V5.2 document on the Pega Developer Network. (This document also describes older SQL-based approaches to purging, as the Purge/Archive wizard was not available in V5.2.)

Batching insert, delete, and update operations

The optional batchUpdates setting in the prconfig.xml file enables the use of prepared statements to apply database updates in batches during a Commit operation. This setting is most beneficial for improving the performance of bulk processing.

See Commit method for details.
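The batching idea can be illustrated as follows: one prepared statement is compiled once and applied to many parameter rows in a single call, rather than issuing one round trip per row. Here sqlite3's executemany stands in for the JDBC batch mechanism that the batchUpdates setting governs, and the table schema is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical stand-in table; real PegaRULES tables have more columns.
conn.execute("CREATE TABLE pr_other (pzInsKey TEXT PRIMARY KEY, pyLabel TEXT)")

rows = [(f"KEY-{i}", f"label {i}") for i in range(100)]

# One prepared statement, applied to the whole batch in a single call,
# instead of 100 individually issued INSERT statements.
conn.executemany("INSERT INTO pr_other VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM pr_other").fetchone()[0])  # → 100
```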

Accessing a modified database schema

If a database administrator directly updates a PegaRULES database table while the system is running, you do not need to stop and restart the servers. Open and resave the affected Data-Admin-DB-Table data instances to access the modified table schema. (This is not necessary when you modify the database table through the Modify Database Schema wizard.)

Pega Developer Network articles

These additional Pega Developer Network articles provide information on database administration tasks:

Definitions: bulk processing, Column Populator utility, Storage Stream
Related topics: About the RuleSet Delete tool
About the Purge/Archive wizard
