Pegasystems Inc.
Last activity: 22 Dec 2025 12:05 EST
Webinar Q&A - Mastering Pega Deployment Manager Pipelines Part 2
Thank you to everyone who joined our advanced webinar on Deployment Manager pipeline configuration! The session generated fantastic questions from the community, and our expert panel, Madhuri Vasa (Product Manager for Deployment Manager), Ram Yadavalli (Architect for Deployment Manager), and Meenakshi Nayak (Senior Manager, Cloud Product Management), provided in-depth answers to help you optimize your deployment practices.
Missed the session? Watch the full recording here: Mastering Pega Deployment Manager Pipelines Part 2
Below you'll find all the questions and answers from the live session. The discussion covered critical topics including error handling, artifact management, rollback capabilities, dependency management, and advanced pipeline configuration strategies.
Q1: Is the system going to prompt suggestions in case of errors during deployment in the latest PDM release?
Answered by: Madhuri Vasa
Yes, where possible we surface root-cause analysis and suggested next steps. With the help of generative AI, we're bringing this capability to Deployment Manager, and it will evolve to provide even more intelligent suggestions over time.
Q2: Is it mandatory to have at least one artifact in the production repository?
Answered by: Madhuri Vasa & Tihomir Petrovic
For dependencies to appear as dependency suggestions, yes, we look for a production-ready artifact. The artifact needs to be ready to ship—tested and approved. This ensures that the complete product you're deploying has all dependencies properly validated and available for global reuse. We've seen teams try to use artifacts from intermediate stages such as SIT, staging, or pre-production, but this doesn't align with best practices. The module on which you're dependent should already be tested and published for global reuse.
Q3: What about rollback if there are issues in the code that has been moved?
Answered by: Madhuri Vasa & Ram Yadavalli
We provide comprehensive rollback options. Within the deployment pipeline, we offer an intermediate action to roll back if any task fails post-deployment. Additionally, the deploy application task automatically creates a restore point that you can access from the deployment output parameters.
There are two rollback scenarios:
- During deployment: If there's a failure during post-deployment tasks, you can choose to roll back, and we auto-pick the restore point—no manual input needed.
- After deployment completes: If you need to roll back at the environment level (for example, a production issue arises after deployment has closed), you can specify the restore point of your most stable build and apply the rollback to the specific environment.
Importantly, we support only application-level rollback, which provides clean isolation without unintended impacts on other applications' rules.
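The two rollback scenarios above can be sketched as follows. This is a conceptual illustration only, not Pega's API; all names (`Deployment`, `pick_restore_point`, the status strings) are hypothetical stand-ins for the behavior described:

```python
# Conceptual sketch (hypothetical names, not Pega's API) of the two
# rollback scenarios described above.
from dataclasses import dataclass, field

@dataclass
class Deployment:
    status: str                       # e.g. "FAILED_POST_DEPLOY" or "CLOSED"
    output_params: dict = field(default_factory=dict)

def pick_restore_point(deployment, restore_point_id=None):
    """Return the restore point to roll back to."""
    if deployment.status == "FAILED_POST_DEPLOY":
        # During deployment: auto-pick the restore point that the
        # deploy application task recorded as an output parameter.
        return deployment.output_params["restore_point"]
    # Deployment already closed: the caller must name the restore
    # point of the most stable build explicitly.
    if restore_point_id is None:
        raise ValueError("closed deployment: a restore point must be specified")
    return restore_point_id
```

The key distinction is who supplies the restore point: the pipeline itself mid-deployment, or the operator for an environment-level rollback after the deployment has closed.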
Q4: Is there any way in Deployment Manager to deploy a master pipeline and have all dependent pipelines deploy automatically in the given sequence mentioned in the master pipeline?
Answered by: Meenakshi Nayak & Tihomir Petrovic
We recommend releasing applications independently, and a master pipeline approach conflicts with this recommendation. While we understand that in complex ecosystems you might want to move multiple independent applications together, this approach will slow you down. Your lead time increases, dependencies and testing become interdependent, and you lose agility.
Instead, we recommend being as independent as possible with each pipeline and testing independently. You can achieve similar coordination using our dependency features and global deployments. By using dependencies, you're working with already-published, approved artifacts, and Deployment Manager will deploy them sequentially. This provides much more control and confidence that you're using the right artifacts, especially when different teams own different modules or applications.
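The dependency-based coordination described above can be sketched in a few lines. This is purely illustrative (the artifact fields and function names are hypothetical, not Pega's API): dependencies must be already-published, approved artifacts, and they deploy sequentially before the consuming application:

```python
# Conceptual sketch (hypothetical names, not Pega's API): coordinating
# releases through dependencies rather than a master pipeline.

def plan_deployment(app, dependency_artifacts):
    """Return the deploy order: approved dependencies first, app last."""
    for artifact in dependency_artifacts:
        # Only production-ready, approved artifacts qualify as dependencies.
        if not artifact.get("approved"):
            raise RuntimeError(f"{artifact['name']} is not production-ready")
    # Dependencies deploy sequentially, then the consuming application.
    return [a["name"] for a in dependency_artifacts] + [app]
```

This mirrors the answer to Q7 below: dependent applications always deploy first, so the consuming application never lands on an environment missing its dependencies.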
Q5: Can you confirm whether we can create a restore point ourselves as part of a deployment via Deployment Manager?
Answered by: Madhuri Vasa & Tihomir Petrovic
Restore points are automatically created—there's nothing manual that you need to do. You'll see the restore point as an output parameter of the deploy application task.
If you're performing a rollback within the same deployment (when something fails), we automatically pick the restore point without waiting for your input. However, if your deployment is closed and you have an issue in production requiring an environment-level rollback, you'll need to specify which restore point to use—typically your most stable build. In that case, you provide the restore point identifier when initiating the rollback at the environment level.
Q6: Will we get the recording of this session?
Answered by: Meenakshi Nayak
Yes, the webinar is being recorded and will be automatically transcribed for follow-up posts on the community. In a couple of days, we'll publish the replay of the webinar, demo videos, a community blog, and the results of this Q&A discussion on the same community page.
Q7: Do dependent pipelines first deploy their own changes and then the main application will deploy?
Answered by: Tihomir Petrovic
Yes, always. We deploy dependent applications first, followed by the main application deployment. This ensures that all dependencies are in place before the consuming application is deployed.
Q8: If you configure your merges to also trigger a deployment to QA, how do you mitigate the risk of breaking QA with bugs? Would you recommend that approach always?
Answered by: Madhuri Vasa & Tihomir Petrovic
This is actually a feature, not a risk! We want early failure and bug detection. Depending on your route-to-live structure, QA might be your initial environment for running unit tests and integration testing of merged code changes. If it fails, that's excellent—you're detecting issues at an early stage. It's much better to fail on a single branch than after twenty merges.
The key is that this isn't just a merge, it's a gated merge. Before a rule is even merged, we run unit tests and perform conflict checks. Your rules aren't just getting merged blindly. As long as your unit tests are strong enough, you won't have these situations in the first place. This falls back to the best practice of having proper unit tests before you merge.
You could merge for a full sprint and then deploy once to QA, but when it breaks, it becomes much more difficult to find the root cause and identify which story was breaking the tests. Having every single merge deployed gives you at least an initial check—a smoke test to verify that integration works in general before checking advanced functional impacts.
This is a shift-left mindset: detecting bugs early, failing early, reacting quickly, and isolating errors smoothly. Give it a try! If you don't like it, you can disable it at any time. It's worth trying if you want to speed up your release cycle.
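The gated-merge flow described above can be summarized in a short sketch. This is a conceptual illustration with hypothetical names, not Pega's implementation; the check functions stand in for the unit-test run and conflict check that happen before a rule is merged:

```python
# Conceptual sketch of a gated merge (hypothetical names, not Pega's
# API): unit tests and conflict checks run before the branch is merged,
# so failures surface on a single branch instead of after many merges.

def gated_merge(branch, run_unit_tests, check_conflicts, merge):
    """Merge a branch only if tests pass and no conflicts are found."""
    if not run_unit_tests(branch):
        return "REJECTED: unit tests failed"
    if check_conflicts(branch):
        return "REJECTED: conflicts detected"
    merge(branch)  # only reached when both gates pass
    return "MERGED"
```

The point of the gate is that a rejected branch never reaches QA at all, and an accepted merge triggers an immediate smoke-test deployment, keeping the failure isolated to one change.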
Q9: Is it possible to check in Deployment Manager if any dependent code (for example, a new property in another ruleset) is missed from being added to the product file?
Answered by: Tihomir Petrovic & Meenakshi Nayak
Currently, we don't have checks in place for missing rules or rulesets from dependent code (built-on applications). However, this is great input for future enhancements, and we're collecting such ideas for the product roadmap.
That said, if something goes wrong during deployment of a built-on dependency, that deployment will fail, which fails the whole deployment process. Additionally, we strongly recommend versioning your applications and packaging them properly. When you version and package applications correctly, missing rule references are automatically addressed, with a few exceptions such as data instances. This approach saves significant time and reduces errors.
Q10: In the previous session, someone mentioned that it's possible to configure CI/CD to get only new rules from a given date. Could you please showcase how to configure it?
Answered by: Tihomir Petrovic
This relates to handling applications with large histories where packages become too big. The approach involves performing a major schema deployment once, then continuing with cumulative deployments instead of packaging the whole history down to ruleset 01.01.01.
In the product rule, you have options to start packaging cumulatively from a specific ruleset version (for example, 04.01.01), including whatever you want—data objects, rule history, etc. Then, in the inclusion and exclusion options, you configure settings to automatically package new ruleset versions.
There's also a related discussion about quality checks, such as guardrail and rule-override checks. For those, you can configure a start date in the task settings to limit which rule changes are considered. This speeds up task execution by not re-checking the whole rule base and potentially running into timeouts.
As one community member (Suresh Chintapalli) suggested, you can also create report definitions to pull all rules modified after a particular date and include that report definition in the product rule. However, be cautious with date-based filtering—you never know if someone goes back and changes older rules, so you might not pick up all necessary rulesets. This approach works best for specific use cases like data instances or non-versioned rules.
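The date-based selection that the report-definition approach performs can be sketched as a simple filter. This is illustrative only (the rule records and function name are hypothetical, not a Pega data model), and it carries the same caveat stated above: edits made to older rules are missed if their update date falls before the cutoff, so the technique fits non-versioned rules and data instances best:

```python
# Illustrative sketch (hypothetical data model): select rules whose
# last update falls after a cutoff date, as a report definition
# included in the product rule would.
from datetime import date

def rules_modified_after(rules, cutoff):
    """Return names of rules updated strictly after the cutoff date."""
    # Caveat from the discussion: anything updated on or before the
    # cutoff is silently excluded, even if it later matters.
    return [r["name"] for r in rules if r["updated"] > cutoff]
```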
Q11: Are you planning to move PDM to Pega Cloud version 3 any time soon?
Answered by: Meenakshi Nayak & Tihomir Petrovic
On Pega Cloud 3, we have Deployment Manager as a service. You'll have a way to opt for it once you're on Pega Cloud 3. For more information, check out the documentation: Deployment Manager Service Documentation
Additional Context from the Session
The panel emphasized several best practices throughout the discussion:
- Versioning strategy: Use major versions for application releases and minor versions for features/enhancements
- Development layers: Structure applications with test layers and developmental layers to prevent unintentional changes
- Automated testing: Strong unit tests during gated merges prevent issues from entering pipelines
- Dependency intelligence: Let Deployment Manager automatically recommend dependencies rather than configuring manually
- Independent releases: Maintain pipeline independence to maximize agility and reduce lead time