On the journey to the autonomous enterprise, you must identify bottlenecks and inefficiencies hindering people's work to improve things for them and their customers. Once you know where those inefficiencies are, you can automate and streamline that work in Pega, perhaps most powerfully through Predictive Analytics and AI-powered decisioning. Join us for our final CLSA event of 2023 to learn how Pega Process Mining helps you find the problems and inefficiencies in your current solution, how Pega Process AI can revolutionize how you architect your application to "fix" those problems, and how these technologies move your organization toward the autonomous enterprise.
- Slides from this webinar: Find it, Fix it - Evolving your apps with Process Mining
- Slides from this webinar: Find it, Fix it - Evolving your apps with Process AI
Process Mining: Connecting to Pega
Process Mining: Find the problem
Process AI: Case Classification
Process AI: Missing SLA
Questions & Answers
Below is the list of questions. Click a link to jump to the section of interest.
|How are the event logs different from the Pega system logs?
Event logs are what Process Mining ingests and analyzes to discover processes. They can come from any source, including Pega logs. At a minimum, the log needs a few fields per event, typically a case identifier, an activity name, and a timestamp.
Pega logs can be one of the data sources for Process Mining. Typically, the event logs reflect business activities and state changes in the business process that are of interest for the analysis. However, it is not the system logs that are used, but the history table entries.
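As an illustration of what such an event log looks like, here is a minimal sketch in Python. The column names (`case_id`, `activity`, `timestamp`) and the sample data are hypothetical, not a Pega artifact; actual connector or file layouts may differ.

```python
import csv
import io

# Hypothetical minimal event log: each row is one event in one case.
# The three columns below are the typical minimum for process mining.
EVENT_LOG = """case_id,activity,timestamp
C-1001,Create claim,2023-10-02T09:15:00
C-1001,Review claim,2023-10-02T11:40:00
C-1001,Approve claim,2023-10-03T08:05:00
C-1002,Create claim,2023-10-02T10:00:00
C-1002,Reject claim,2023-10-02T16:30:00
"""

REQUIRED_FIELDS = {"case_id", "activity", "timestamp"}

def validate_event_log(csv_text: str) -> list[dict]:
    """Check that the log carries the minimum fields needed for discovery."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = REQUIRED_FIELDS - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"event log missing required fields: {missing}")
    return list(reader)

events = validate_event_log(EVENT_LOG)
print(f"{len(events)} events loaded")  # 5 events across 2 cases
```

Any extract, whether from Pega history tables or a third-party system, can be shaped into this form before loading.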
|Does the Process Mining Pega server need to be on Cloud or will it work for an on prem server as well?
Currently, Process Mining runs in the cloud. It can take data from an on-premises Pega environment after addressing connectivity requirements (firewalls, authentication, and so on).
However, we are having internal discussions about enabling it to run on-premises. If you have any specific demands, we would love to hear them and discuss the use cases.
|It appears the Pega Platform and Pega Process Mining run as independent stacks. If that is true, how is data pushed from the Pega Platform application to Process Mining?
Yes, Pega Platform and Process Mining are independent stacks. Data can be imported from Pega Platform to Process Mining through the Pega Connector demonstrated in this session.
While the architecture does allow for streaming data analytics, in practice Pega Process Mining is primarily intended to work in batch mode, ingesting historical data periodically.
|Some of our applications hold sensitive data that would not be allowed to cloud servers. Is there a way to select which data is loaded into Process Mining?
You can select which data is ingested into Process Mining, so you can avoid loading sensitive data. You can also obfuscate sensitive data through different means.
It depends on how you load the data. Whether you use the Pega Connector or a file extracted through BIX, you can filter on the Pega side which fields are sent to Process Mining. Also, since the Pega Connector uses a case-type-level Data Page to gather data, you can further refine in the Data Page which data is sent.
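The idea of filtering fields before they leave your environment can be sketched generically. This is not Pega Connector or BIX code; it is a minimal, assumption-laden Python illustration where the column names (`ssn`, `customer_name`) are invented for the example.

```python
import csv
import io

# Illustrative names for columns we do NOT want to send to Process Mining.
SENSITIVE_COLUMNS = {"ssn", "customer_name", "date_of_birth"}

def filter_columns(csv_text: str, drop: set[str]) -> str:
    """Return a copy of the extract with the sensitive columns removed."""
    reader = csv.DictReader(io.StringIO(csv_text))
    kept = [c for c in (reader.fieldnames or []) if c not in drop]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=kept, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in kept})
    return out.getvalue()

extract = """case_id,activity,timestamp,ssn,customer_name
C-1,Create claim,2023-10-02T09:15:00,123-45-6789,Jane Doe
C-1,Approve claim,2023-10-03T08:05:00,123-45-6789,Jane Doe
"""

filtered = filter_columns(extract, SENSITIVE_COLUMNS)
print(filtered.splitlines()[0])  # case_id,activity,timestamp
```

In a real deployment this filtering would happen in the Data Page or BIX extract definition rather than in a post-processing script, but the effect is the same: sensitive columns never reach the cloud.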
|What is the licensing model for Process Mining?
Process Mining is licensed based on the volume of cases you wish to analyze at any point in time during the license term. The product has a robust big-data architecture that supports analysis of millions of cases, so you can continuously optimize and evolve your workflows. Please contact your Pega Account Executive for detailed information.
|The more history entries developers add to our case processing configuration, the more history Process Mining can analyze. Is this a correct statement?
The Pega Connector uses the history tables, since that is the most common way of understanding a case lifecycle: where actions occur and where the flow advances. However, any other source of information could be used, depending on the scenario you need to analyze. As long as the data contains activities and timestamps, it can be loaded through the file connectors.
|If my case is created outside of Pega, and Pega is used as a business rules engine (headless, no UX), would we be able to benefit from using Process Mining to understand the case lifecycle? Or is it restricted to events that happen only within the Pega user interface?
Process Mining is source agnostic. You can load data from any system through different connectors, so you can load data from Pega and others. You can even combine events from different systems.
Even if Pega is used 'head-less' (no UI), the case instances have a corresponding work history table and hence data can be pulled from case and history tables for analysis.
|How does Pega determine the base line of process efficiencies of different source data?
Once data is loaded and digested in Process Mining, it all becomes part of the same analysis, so the analysis itself is independent of the source; what matters is how you ingest the data.
From the analysis perspective, it is up to the analyst/BA to determine what the baseline for process efficiency or inefficiency should be. It also depends on the case type: it can be known SLAs or business expectations, a predefined 'ideal' process path (see the 'conformance checking' use case), or past process performance (say, the previous month). Such reference points can be explicitly set in Process Mining to assist the analysis.
|How can we identify and evaluate Pega Process Mining to fit different use cases, when data being analyzed is confidential?
The main information Process Mining needs to detect inefficiencies is the events themselves (actions or activities): the event name and the moment when each happened.
Any other information from the case can be added to help with the analysis, since it can explain why a case reached a given assignment, so the more relevant information you add to Process Mining, the better. In cases where confidential or sensitive information must be protected, a bottom-up analysis can be used, where you start with less information in the analysis. The LBA/analyst can decide which information is used and loaded into Process Mining.
|What are the config changes that are needed to obfuscate PII data when ingesting data into Process Mining?
Obfuscating data with the anonymization feature is easily done whenever a new data source is created, by selecting which columns in the data source need to be anonymized. It is also possible to obfuscate on the Pega side if security is a concern. For instance, when using the Pega Connector, we can modify the case Data Page and/or use Pega rules, such as ABAC, to obfuscate fields.
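One common obfuscation approach is pseudonymization: replacing each PII value with a stable, irreversible token so that cases can still be grouped and traced without exposing the underlying value. The sketch below is a generic Python illustration of that idea, not Pega's anonymization implementation; the salt and token format are assumptions for the example.

```python
import hashlib

def pseudonymize(value: str, salt: str = "demo-salt") -> str:
    """Replace a PII value with a stable, irreversible token.

    The same input always maps to the same token, so the analysis can
    still correlate events belonging to the same person or case without
    ever seeing the raw value. The salt should be kept secret so tokens
    cannot be reversed by brute-forcing known inputs.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return f"anon-{digest[:12]}"

# Same person yields the same token; different people yield different ones.
a = pseudonymize("Jane Doe")
b = pseudonymize("Jane Doe")
c = pseudonymize("John Smith")
assert a == b and a != c
print(a)
```

Whether this transformation happens in the Process Mining wizard or on the Pega side, the key property is that the raw PII never leaves the source system.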
|How do you configure the anonymization feature?
When creating a new source, in step four of the wizard, "Column transformation", you can click the button to the right of each column name to review more options; that menu includes options to anonymize or remove the selected column.
|How do we get the model process in Pega Process Mining? Do we rebuild it manually or there is a way to import it from source system?
You can import the model in BPMN format, or you can use the Process Mining tool to create a model based on your data, edit it, and save it in BPMN format for further analysis.
|Will Process Mining work if my Pega application is built using Theme Cosmos or UI Kit?
Yes. For Process Mining, the relevant information is the platform version your application runs on. The Pega Connector uses the DX API and supports applications on 8.7 and higher. For earlier releases, data is extracted using BIX.
|How is a score calculated for a predictor in the demo examples? Is it based on historical data, with statistical mathematics (AUC, etc.) applied to that data?
It depends on the model used in the prediction. If the model is a predictive model, it is trained on historical data. If the model is adaptive, the learning and the scores you get are based on the real-time responses to the model.
|Can Process AI run on client cloud?
Yes, Process AI can run on a client cloud.
|For the Case Management prediction authored in Prediction Studio, did this auto-create the Prediction field in the case data model for the Claim case type, or does the application author have to add the Prediction field explicitly and point that at what was authored in Prediction Studio?
The prediction field was auto-created. It is something we improved recently; before Infinity '23, the prediction would need to be added to the Settings tab of the case type.
|When authoring the Case Management model in Prediction Studio, the Claim case type was pre-selected in Prediction Studio with no opportunity to choose a different case type. Is that because there was only one case type in the application, or is there some other aspect of marking the case types as eligible for Case Management predictions?
Correct. If more case types were defined, they would also appear in the dropdown.