Pegasystems Inc.
Last activity: 3 Nov 2025 10:47 EST
Q3 2025 Pega as-a-Service Release & Roadmap: Your Questions Answered
Thank you to everyone who joined our Q3 2025 Pega Cloud release and roadmap webinar! The session generated fantastic questions from our community, and Satya Mishra, Senior Director of Product Management, provided comprehensive answers that we're excited to share with you.
Missed the live session? You can watch the full recording here to see the complete discussion, including the live demonstration of MyPega Cloud's new self-service features.
Prefer text format? You can read our blog post that covers this webinar content.
Below you'll find the key questions and detailed answers from our Ask-Me-Anything session. These insights will help you understand how to leverage the latest Pega Cloud capabilities and plan for upcoming innovations.
Enhanced Disaster Recovery
Q: When exactly will clients be able to test disaster recovery for themselves, and how will the process work?
A: The self-service disaster recovery testing feature is planned for release by the end of 2025. Here's how the process will work:
Since the testing process involves some downtime, you'll create a clone of your production environment rather than testing on the live system. You'll need to provide advance notice (approximately 7 days, though the final timeframe is still being determined) to allow Pega to set up synchronization between your primary and secondary regions for the cloned environment.
Once synchronization is established and the clone is ready, you'll be able to log into MyPega Cloud and trigger the failover test with a simple click. The system will collect all the data during the test and generate a detailed report showing the actual RPO (Recovery Point Objective) and RTO (Recovery Time Objective) achieved during the test. You can then submit this report to your internal audit committees to demonstrate compliance with disaster recovery requirements.
This is a full simulation—an actual failover of a cloned environment from the primary region to the secondary region—giving you complete confidence in the disaster recovery process without any risk to your production systems.
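To make the report metrics concrete, here is a minimal sketch (not Pega's actual reporting code) of how achieved RPO and RTO can be derived from three timestamps recorded during a failover test; the function name and the sample timestamps are hypothetical:

```python
from datetime import datetime, timedelta

def compute_rpo_rto(last_replicated_commit, disaster_declared, secondary_serving):
    """Achieved RPO = window of data at risk due to replication lag;
    achieved RTO = time until the secondary region served traffic."""
    rpo = disaster_declared - last_replicated_commit
    rto = secondary_serving - disaster_declared
    return rpo, rto

# Hypothetical timestamps from a failover test:
rpo, rto = compute_rpo_rto(
    last_replicated_commit=datetime(2025, 10, 1, 9, 55),
    disaster_declared=datetime(2025, 10, 1, 10, 0),
    secondary_serving=datetime(2025, 10, 1, 11, 40),
)
print(rpo)  # 0:05:00 -> within the 15-minute production RPO target
print(rto)  # 1:40:00 -> within the 150-minute production RTO target
```

In a real test report the timestamps would come from replication and failover telemetry, but the arithmetic is the same: RPO measures data exposure, RTO measures downtime.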
Q: How does the disaster recovery process work when a real disaster occurs?
A: Pega's monitoring systems continuously track regional health across all availability zones. When monitoring flashes red, indicating severe problems, we immediately open a bridge with AWS or GCP to determine whether this is a short-term issue that will recover quickly or a prolonged regional outage.
Once we establish that we're experiencing a regional failure with no chance of fast recovery, Pega initiates the failover process for all clients who have purchased the Enhanced Disaster Recovery option. Your application will become operational in the secondary region within the defined RTO:
- Production environments: RPO of 15 minutes, RTO of 150 minutes (2.5 hours)
- Non-production environments: RPO and RTO of 24 hours
The failover can be triggered by Pega after consultation with you, or if you've granted permission in advance, Pega can initiate the failover automatically without waiting for confirmation. Once the secondary region is active, we'll work with you to confirm that your application is serving workloads correctly and all functionality has been restored.
Q: How closely does the secondary database follow the primary database in the disaster recovery setup?
A: Most of the time, the secondary database is fully synchronized with the primary. However, we specify an RPO (Recovery Point Objective) of 15 minutes for production environments, which means any potential data loss is limited to at most 15 minutes.
In practical terms, if synchronization has fully caught up when a disaster occurs, you won't lose any data. But if the disaster happens while the system is still synchronizing, you would lose no more than 15 minutes' worth of data. This represents an excellent balance between data protection and system performance.
Q: What is expected from clients during the disaster recovery testing process?
A: The only expectation from clients is the initial configuration and setup. This primarily involves establishing network connectivity to your secondary site and keeping it ready for use.
Once this initial setup is complete, there are no ongoing expectations. When you're ready to test, you simply give Pega the go-ahead through MyPega Cloud, and the failover process initiates. During a real disaster scenario, Pega would be on an active bridge call with you, and we'd make the failover decision together with all necessary artifacts and approvals documented.
After the failover (whether test or real), we do appreciate confirmation from you that your application is able to service workloads and is running normally from a functional perspective. While Pega can monitor that the environment is up and running from an infrastructure standpoint, your confirmation that the application functionality is fully restored helps us gain complete confidence that the recovery was successful.
Regional Expansion and Compliance
Q: Can you elaborate on Canada Protected B status and how it impacts Pega Cloud?
A: Canada Protected B is a compliance boundary similar to FedRAMP in the United States. Just as US government agencies require FedRAMP certification, Canadian organizations in government and regulated industries require Canada Protected B certification to use cloud services.
Obtaining this certification requires Pega to follow a rigorous process: architect the underlying infrastructure to meet compliance boundary regulations, submit all required evidence, and undergo a formal certification process. One of the key requirements is country-specific support, meaning Pega must employ people within Canada (or the designated region) to service clients in that compliance boundary.
This is different from our commercial Pega Cloud offering, which operates across 41 regions globally. Compliance boundaries like Canada Protected B, EU Service Boundary (Sovereign Cloud), and US FedRAMP require additional architectural considerations, regional support teams, and certification processes. Once we achieve these certifications, it opens up new markets and enables Pega to serve clients who have strict regulatory requirements that mandate certified cloud providers.
The same principle applies to the EU Service Boundary we're launching in partnership with AWS in 2026, which will serve European markets with strict data sovereignty requirements.
Observability and Monitoring
Q: Is there anything on the roadmap for more granular alerts for specific background processes in PDC?
A: Absolutely! In 2026, Pega is investing heavily in observability capabilities. We envision comprehensive health and availability dashboards that will provide a holistic view of all your environments and applications.
Imagine a dashboard with visual indicators (red or green dots) for various feature areas across each environment: application availability, background processing, queue processors, job schedulers, search indexing, and MQ listeners. The dashboard will turn red when it detects problems in any of these specific areas.
Behind the scenes, we'll be performing massive correlation between various data sources—network layer information, storage metrics, Kubernetes infrastructure data, platform logs, exception data, and PDC monitoring data. By correlating information from all these layers, we'll be able to accurately detect issues and pinpoint exactly which feature area is experiencing problems.
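The correlation idea can be illustrated with a minimal sketch. This is not Pega's implementation; the signal sources and feature-area names below are assumed for illustration, and the rule shown (any failing signal turns an area red) is deliberately simplistic:

```python
# Hypothetical signals collected from different layers for each feature area.
SIGNALS = {
    "queue processors": {"platform_logs": "ok", "pdc_alerts": "ok"},
    "job schedulers":   {"platform_logs": "error", "pdc_alerts": "ok"},
    "search indexing":  {"platform_logs": "ok", "pdc_alerts": "ok"},
}

def dashboard(signals):
    # A feature area turns red if any contributing signal reports a problem.
    return {
        area: ("red" if any(v != "ok" for v in checks.values()) else "green")
        for area, checks in signals.items()
    }

print(dashboard(SIGNALS))
# {'queue processors': 'green', 'job schedulers': 'red', 'search indexing': 'green'}
```

A production system would weigh and cross-check signals rather than OR-ing them, but the shape is the same: many low-level sources reduced to one status per feature area.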
The next phase will enable you to subscribe to events from these dashboards, so you'll receive notifications about issues that matter to you. But we won't stop at just alerting you to problems—we'll also provide diagnostic information showing where the problem is and remediation guidance on how to fix it.
Going a step further, if we detect that an issue is caused by a Pega infrastructure problem, we'll automatically create a support incident so our operations team can resolve it without you needing to investigate or log a ticket. We're building an agentic diagnosis system—think of it as a swarm of specialized agents monitoring your database health, infrastructure health, and application health, all communicating with each other to identify root causes and determine whether the issue requires client action or Pega intervention.
Q: Is there a solution on the roadmap for MQ listeners not automatically reactivating after a client-provided message queue experiences a short outage?
A: This is an important issue that we're aware of. From an observability perspective, our 2026 dashboard enhancements will definitely detect these kinds of problems—when MQ listeners stop functioning, it will appear on the health dashboard.
We're also planning to implement self-healing actions where appropriate. However, we need to be careful that automated remediation doesn't create recursive loops or mask underlying issues. We have an internal program to analyze patterns of detection and remediation to understand whether we can prevent these issues from occurring in the first place, rather than just reacting to them.
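The loop-avoidance concern can be made concrete with a generic sketch of bounded reconnection, the usual pattern for reviving a listener after a short broker outage. This is not Pega's remediation code; the function and the simulated queue are assumptions for illustration:

```python
import time

def reconnect_listener(connect, max_attempts=5, base_delay=1.0):
    """Retry a listener connection with exponential backoff and a hard cap,
    so automated remediation cannot loop forever on a persistent fault."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # surface the fault for alerting instead of retrying forever
            time.sleep(base_delay * (2 ** attempt))

# Simulated queue that recovers after two failed attempts:
state = {"failures_left": 2}
def flaky_connect():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise ConnectionError("broker unavailable")
    return "connected"

print(reconnect_listener(flaky_connect, base_delay=0.01))  # connected
```

The hard attempt cap is the point: transient outages self-heal, while persistent faults escalate to monitoring rather than being masked by endless retries.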
That said, if this is a widespread issue affecting many clients, I highly encourage you to log this feedback through our official support channels so it can be prioritized for the platform engineering teams. This sounds like either a feature gap or a bug that should be addressed at the core platform level.
You're also welcome to post this question in the Expert Circle community, where we can connect you directly with platform teams and infrastructure teams to get a comprehensive answer and potentially accelerate a solution.
Data Management and Archival
Q: What kind of archiving capabilities does Pega Cloud have, and what strategies do you recommend to clients?
A: Case archival is an extremely important strategy for both performance optimization and cost management. Here's how it works and why it matters:
When you create cases in Pega, they're stored in your database. Any attachments associated with those cases are stored in cloud file storage. Over time, as your case volume grows, your database consumption increases. Database storage is one of the highest costs in a Pega Cloud subscription, while cloud file storage is comparatively negligible.
I highly recommend using the platform-provided archival tools to archive cases that are no longer actively being worked on. The specific timeline should be based on your organization's compliance policies—for example, cases that have been closed for five years.
When you archive cases, they're moved from the database to cloud file storage. You still retain read-only access to these cases in your application, but you cannot edit or modify them—which is typically fine since these are closed cases anyway. The benefits are substantial:
- Performance improvement: When you reduce the volume of data in case tables, you also reduce the associated index data and other linkages required for loading data pages into memory. This can significantly improve application performance.
- Cost savings: Moving cases from expensive database storage to inexpensive cloud file storage dramatically reduces your storage costs.
- Table optimization: Reducing the volume of data in case tables has a cumulative effect: it reduces data in multiple related tables, making queries faster and more efficient.
This is a best practice that every Pega Cloud client should implement based on their specific compliance and data retention requirements.
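The cost side of the argument is simple arithmetic. The per-GB rates below are hypothetical placeholders, not actual Pega Cloud or cloud-provider pricing; the sketch only shows the shape of the saving when archived case data moves from database storage to file storage:

```python
# Hypothetical monthly storage rates, for illustration only.
DB_RATE_PER_GB = 0.25    # relational database storage, per GB per month
FILE_RATE_PER_GB = 0.02  # cloud file storage, per GB per month

def monthly_saving(archived_gb):
    """Saving from moving archived case data out of the database."""
    return round(archived_gb * (DB_RATE_PER_GB - FILE_RATE_PER_GB), 2)

print(monthly_saving(500))  # 115.0 -> per month for 500 GB archived
```

Whatever the real rates are, the saving scales linearly with archived volume, which is why archival pays off most for long-running, high-volume case types.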
Join the Conversation
These were just some of the excellent questions from our community. If you have additional questions about the Q3 release, upcoming roadmap items, or specific use cases for your organization, please continue the discussion here on our Pega-as-a-Service Expert Circle for ongoing engagement with product teams and cloud experts.
Want to dive deeper? Watch the full webinar recording to see Satya's live demonstration of the new MyPega Cloud self-service upgrade journey and to hear additional context around these answers.
We look forward to your continued participation in future sessions!