

90 Mins

CLSA Community: Adopting DevOps on your project

This event originally took place on July 26, 2023


CLSA webinar - DevOps roadmap: Zero to Hero
Video duration: 1:35:00


Join the conversation and find related content for this event on the Pega Support Center post!

Don't know how to get started with working DevOps into your project? Go from Zero to Hero in this webinar by learning techniques to apply valuable DevOps capabilities to your existing projects.

This webinar shares a Pega Consulting LSA's real-life experience of building a DevOps roadmap that progressively adopts DevOps over a three-to-six-month period on a project that had been in flight long beforehand.

New to DevOps? This webinar will guide you through the critical components of the roadmap as we go, including:

  • Modular architecture (Componentization)
  • Development best practices
  • Application versioning
  • CI/CD pipelines
  • Application quality
  • Test automation


Downloadable slides & related events

Technology-independent best practices discussed

Related Pega Academy missions

Pega Documentation

Pega Marketplace components


Questions & Answers

Modular architecture


How do we achieve modularity in Pega applications, given that rules need to belong to rulesets in either:

  • An application; or
  • A component application; or
  • A new application which uses built-ons

Pega's suggestion is to create each Module as an Application rule.

The Modular Enterprise Reuse Foundation mission in Pega Academy provides more insight.

For a Microjourney, is the recommendation to have one implementation application built on multiple Modules?

The expectation is that there will be an application that implements the microjourney. This is where you typically see the case type(s).

This top-level Application accesses the reusable functionality it needs by adding the relevant Modules as built-on Applications.

How are component and implementation applications different?

The focus of Modules is to contain collections of rules that address a specific business, integration, or utility purpose. 

A Module is implemented as a Pega Application that is a collection of such rules that supply building blocks for production-ready, workflow automation solutions.

Should these modular applications be Component applications? One strong reason to build Modules as Applications (and not as Components) is because Deployment Manager does not manage the deployment of Components.

Can Modules contain case types?

Can I use multiple Modules (with different case types), without stacking one on the other?

Based on Pega's Modular Situational Layer Cake approach, Module applications contain features packaged as a Flow saved on a Work- class. When marked as relevant records, Citizen Developers can access the feature from case types they help author in App Studio.

Journey and Microjourney applications define the case types and are further composed of Modules through the built-on Applications configured. A common use case is to build one Journey application on top of another when specialization is needed at a regional, channel-based, or line-of-business level.

What is the recommended level of granularity when splitting a monolith application into multiple smaller Modules?

Does it impact the main application performance when multiple smaller applications are used to deploy the full functionality of the top-level application?

The right level of granularity is always difficult when designing a modular architecture. There is no one correct answer.

Any architectural decision requires a trade-off analysis. A solution for one project may not work for another. Some guidelines you can follow to make the right decision about granularity:

  • Follow the Single Responsibility Principle: each Module should have a single, well-defined responsibility.
  • Consider team structure and collaboration: different teams own different Modules; each Module is owned by one team; no shared ownership.
  • Avoid coupling between Modules: so you can build, test, deploy and scale a Module independently from other Modules.
  • Avoid excessive numbers of Modules which can make the system more difficult to understand and navigate.

Do we have any guidelines on:

  • When to include functionality in the Enterprise layer; and
  • When to build an independent Module (which may add additional overhead)?

The Enterprise layer is very lightweight, only holding common assets that are genuinely applicable across the enterprise like UI styling, security and authentication.

Any sort of business or technical function should be encapsulated in its own Module.

What about the overhead required for maintenance of the multiple pipelines for all the Modules?

It is true that following a modular architecture design creates more artefacts to manage, both in application setup and in maintenance.

Maintaining a monolithic application architecture incurs overhead as well. Consider the effort required to support parallel development for multiple, distributed development teams, or to test the whole application each time a new change is introduced.

How to do a functional regression test of the Module which does not have a case type and life cycle?

Is it expected to create a sample implementation application for this Module?

We usually run both functional testing and end-to-end testing only on Journey and Microjourney applications.

A Module contains different artefacts like Data Pages, Data Transforms and Connectors to fulfil the features they are designed to provide to consuming applications. Features in a Module are typically tested with unit and integration testing capabilities available from Pega Unit.

Is Pega's modular architecture the same as front-end application development for custom-built apps?

Modular architecture in Pega is different from the separation between presentation, business, and data layers.

Modular architecture in Pega is an alternative approach to how rules and features (including UI, process, and data) are packaged, promoting better reusability across the different Pega applications implemented for a client.

Must monolithic applications convert to a modular architecture to use DevOps?

Modular architecture is not required to make use of DevOps. Pega prescribes modular architecture as a best practice to make applications less complex, promote reuse, and make them easier to govern, scale, and deliver.

DevOps remains available on existing, monolithic application architectures. The associated risk is that issues with features in the monolith are not found until the pipeline for the entire application reaches the associated quality gate. Applying DevOps at the Module level surfaces these issues earlier in the development cycle, because the Module pipeline catches them as the Module is readied for use by the consuming applications.


Development best practices

For each Module, do we need to create a distinct DEV > QA > PROD app structure?

Yes, this application structure is required for each Module, as each Module has its own software lifecycle.

You should be able to build, test, and deploy each Module independently.

What is the recommended approach to create DEV and QA application when there is a Framework application available in the stack?

Create a DEV > QA > FW application structure, to manage any changes to the Framework application independently from any of its implementations.

Each different implementation of your Framework also needs its own DEV > QA > PROD application structure.

If all versions of the application are packaged into one Product, we may overwrite non-versioned rules on deployment to other environments. How can I avoid that?

This can be problematic mainly if you merge changes in your development environment, but do not want to promote immediately to higher environments. Unfortunately, there is no perfect solution to manage non-versioned rules.

One approach I have followed is configuring the application's Product rule to exclude those rules, and creating a dedicated Product rule for non-versioned rules, deployed "on demand" using a different deployment pipeline.

App Studio development generally results in rule changes being saved to the top-most ruleset in the stack.

Can App Studio be used to author features for a Module when the Module's rulesets are not the top-most in the stack?

App Studio works on the application that is in context, creating the rules in the rulesets for the current application. If the current application that is the author's focus is the Module application - rather than an application that uses the Module - then changes the author makes are applied to the rulesets in the Module application.

Branch development preferences - available in App Studio and Dev Studio - also allow the author to select one of the built-on applications as the target for where rule changes are stored.

So, either switch to the Module application you need to add functionality to, or use branch development preferences to select the appropriate Module. 

Bear in mind that the responsibility for making changes to a Module may lie with a different team from the one authoring the top-level application that uses the Module.

If we have multiple branches in a release, how can we merge all of them together and promote?

Merge each branch one at a time, as all the quality gates for each branch need to be performed independently. Then use the multi-speed deployment feature of Deployment Manager to promote many releases at nearly the same time.


Application versioning and Gitflow

Question Answer

About managing the Enterprise layer up to only a major version number: does that mean that even though new rules will be added over time and published to production, the application version will stay the same?

It feels like that defeats the whole purpose of having version management in place?

Following the major-version-only format for the Enterprise layer application reduces the need for development teams to frequently update the version numbers of all dependent applications.

When adopting a modular architecture, expect to have many Module applications that build on your Enterprise layer. Every time a new version of your Enterprise application is created, there will be significant configuration effort to update all the Modules to build on the new Enterprise layer version.

If your Enterprise layer is a very lightweight application containing genuinely enterprise-wide reusable rules, these rules do not change very often.

Remember that when you change version 01-02-03 of a rule and save it in 01-05-01, the existing Enterprise application rule can be reconfigured to point to ruleset version 01-05-01 (or any future ruleset version matching 01-05-xx) as part of its ruleset stack without having to change the version number of the application rule.

Following this approach, only backwards-incompatible changes trigger an increment to your Enterprise layer application version number.
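The wildcard matching described above can be sketched in Python. This is illustrative only and does not reflect Pega's actual version-resolution logic: the idea is that an application stack entry such as "01-05" matches any patch version ("01-05-xx"), so new patch ruleset versions need no application change.

```python
def ruleset_matches(entry: str, version: str) -> bool:
    """Return True if a concrete ruleset version (e.g. '01-05-03')
    satisfies an application stack entry such as '01-05' (any patch,
    i.e. '01-05-xx') or an exact '01-05-03'.

    Sketch only -- illustrates the matching idea, not Pega's
    actual resolution logic.
    """
    entry_parts = entry.split("-")
    version_parts = version.split("-")
    # Every part stated in the entry must match; unstated trailing
    # parts (e.g. the patch) act as wildcards.
    return entry_parts == version_parts[: len(entry_parts)]


# The application rule keeps pointing at '01-05' while rules are
# saved into new patch versions, so no application bump is needed:
assert ruleset_matches("01-05", "01-05-01")
assert ruleset_matches("01-05", "01-05-07")
assert not ruleset_matches("01-05", "01-06-01")
```

Only a backwards-incompatible change, which moves rules to a new major or minor ruleset version outside the wildcard, would force an update to the application rule.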

In case of a greenfield application, can the Gitflow approach to application version numbering be followed for the Enterprise layer when introducing new features?

I recommend following the "major version only" format for Enterprise layer applications.

If you decide to follow the Gitflow approach for your Enterprise application as well, consider the impact on all the teams responsible for managing dependent Modules and applications each time you increment the version of the Enterprise application.

In the Gitflow versioning model, where are fixes to current production issues developed?

You have a "hotfix" application in your Development environment to implement fixes to bugs found in Production that need to be fast-tracked back to Production. This approach depends on the existence of dedicated merge and deployment pipelines for the hotfix application that manage both continuous integration and continuous deployment processes for hotfixes.

Is Gitflow the only recommended tool for application versioning, or can we use other tools as well?

Gitflow is not a tool; Gitflow is an approach to version management. The Gitflow standard for application versioning is an option I recommend based on my experience, both Pega and non-Pega.

  • Consider it as an option for new projects.
  • Consider it as an option for existing projects where managing hotfixes, new development and stable releases in parallel is proving challenging.
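As a sketch of what a Gitflow-inspired policy means for Pega's MM-mm-pp version format, the function below maps a change type to a version increment. The change-type names and the policy itself are assumptions for illustration, not Pega or Gitflow identifiers:

```python
def next_version(current: str, change: str) -> str:
    """Compute the next Pega-style application version (MM-mm-pp)
    under a Gitflow-inspired scheme: 'hotfix' bumps the patch,
    'release' bumps the minor, 'breaking' bumps the major.

    Illustrative sketch only; the actual versioning policy is a
    project decision.
    """
    major, minor, patch = (int(p) for p in current.split("-"))
    if change == "hotfix":
        patch += 1
    elif change == "release":
        minor, patch = minor + 1, 0
    elif change == "breaking":
        major, minor, patch = major + 1, 0, 0
    else:
        raise ValueError(f"unknown change type: {change}")
    return f"{major:02d}-{minor:02d}-{patch:02d}"


assert next_version("01-04-02", "hotfix") == "01-04-03"
assert next_version("01-04-02", "release") == "01-05-00"
assert next_version("01-04-02", "breaking") == "02-00-00"
```

The value of such a convention is that hotfixes, new development, and stable releases each map to a predictable, non-conflicting version stream.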


CI/CD pipelines and Deployment Manager


Should a Product specify the full application ruleset stack or be a "delta" package of just the newest ruleset versions?

The recommendation is always package the complete application.

Avoid incremental or "delta" packaging as this is more difficult to construct a new environment from.

If we have the same server for hotfixes and regular dev work, what is the best merge policy?

For regular development: use the New ruleset version merge policy. 

For hotfix development: use a distinct merge pipeline configured with an application rule that specifies ruleset versions all the way down to patch version. Then use the Highest existing ruleset version merge policy.

How can we manage the deployment of environment-specific rules and data between environments? For example, authentication profiles and dynamic system settings.

If we include these inside the application-wide Product, we risk overriding instances in a higher environment that were deployed earlier and have since been updated with the appropriate value for that environment.

For instances like these that need to be configured manually per environment, avoid overriding the locally updated rules in higher environments by configuring the Bypass updates for class instances on import option on the Product rule.

This and other techniques are discussed at Deployment Manager best practices - Prescriptive application release life cycle - Management of environment configuration.

To re-run the unit tests in a Staging environment, we would need to deploy them to that Staging environment. The unit tests would not need to go all the way to Production though.

Can Deployment Manager prevent the deployment of unit test rules to Production?

That is one of the reasons for the DEV > QA > PROD application setup discussed in Development best practices. This channels the creation of the test rules into rulesets defined only on the QA app.

In the definition of the deployment pipeline, you can configure that the QA application be deployed only as far as the Staging environment. This enables the pipeline to run the tests from the QA app in Staging, but not continue the deployment of the test assets any further.

Where in the deployment pipeline should the Validate Test Coverage task reside?

For each environment that performs tests, verify test coverage by having the following tasks in the pipeline for that environment, in this order:
  1. Enable test coverage
  2. Run all your automated tests (and manual tests, if any)
  3. Validate test coverage
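The ordering rule above can be expressed as a small check. The task names here are illustrative placeholders, not actual Deployment Manager task identifiers:

```python
# Required relative order: enable coverage before the tests run,
# validate coverage afterwards. Names are illustrative only.
REQUIRED_ORDER = ["enable-test-coverage", "run-tests", "validate-test-coverage"]


def coverage_tasks_ordered(tasks: list[str]) -> bool:
    """Return True if all three coverage-related tasks are present in
    the stage's task list and appear in the required relative order."""
    positions = [tasks.index(t) for t in REQUIRED_ORDER if t in tasks]
    return len(positions) == 3 and positions == sorted(positions)


assert coverage_tasks_ordered(
    ["enable-test-coverage", "run-tests", "validate-test-coverage"])
assert not coverage_tasks_ordered(
    ["run-tests", "enable-test-coverage", "validate-test-coverage"])
```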
Do we have any best practices or approaches for automating the technical verification after production deployments or other environmental changes (such as operating system patching)?

Each client and project has its own requirements for post-Production deployment verification. In my opinion, you have two options in general to configure your Production deployment pipeline:
  1. For a manual verification, add a manual step to the pipeline, assigned to the team responsible for performing this verification.
  2. If you are able to automate this verification somehow, you can add a custom task to your pipeline, or delegate this responsibility to a third-party deployment orchestrator to perform this verification.
Can we integrate with third-party orchestration tools (Jenkins, Azure DevOps) to trigger Pega deployments or third-party tests?

There are two approaches to integrating Pega Deployment Manager with an orchestrator, depending on whether Deployment Manager is the main orchestrator or not.
  • If Deployment Manager is the main orchestrator, and you need to run third-party tests like end-to-end UI Selenium tests, you need to delegate out to a third-party orchestrator. Pega has an OOTB integration with Jenkins, and for any other solution you can build a custom task.
  • If Deployment Manager is the secondary orchestrator, you are able to trigger Pega deployments from a third-party orchestrator using Pega's REST API.
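As a rough sketch of the second approach, an external orchestrator could assemble an HTTP request against Deployment Manager's REST API. The endpoint path, payload shape, and auth scheme below are placeholders, not the real contract; consult the Deployment Manager API documentation for your Pega version:

```python
import json
from urllib.request import Request


def build_trigger_request(base_url: str, pipeline_id: str, token: str) -> Request:
    """Assemble (but do not send) a request to start a deployment
    pipeline from an external orchestrator.

    The path, payload, and Bearer auth scheme are assumptions for
    illustration -- check the Deployment Manager REST API docs for
    the actual endpoint contract.
    """
    url = f"{base_url}/pipelines/{pipeline_id}/deployments"  # hypothetical path
    body = json.dumps({"description": "Triggered by external orchestrator"})
    return Request(
        url,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
        method="POST",
    )


req = build_trigger_request("https://dm.example.com/api", "main-pipeline", "TOKEN")
assert req.get_method() == "POST"
assert "main-pipeline" in req.full_url
```

An orchestrator such as Jenkins or Azure DevOps would send this request as one pipeline step, then poll for the deployment result before proceeding.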
Is there any documentation on how to build custom deployment tasks? For example, integrating with Azure DevOps pipelines?

See the Custom Tasks article on Pega Docs.
Can Deployment Manager roll back failed deployments?

You have the option in Pega Deployment Manager to perform a rollback on any deployment that encounters an error or fails for any reason.

If you need to perform a manual verification after deployment, add a manual step for final approval in the Production stage of the deployment pipeline, so that if the verification checks fail, you can roll back the deployment.
Can we run tasks asynchronously in a Deployment Manager pipeline?

Yes. Deployment Manager supports custom tasks with callbacks, enabling asynchronous task execution.
Is there any possibility to choose the environment to which I can deploy the package?
Like we have "Change stage" in case management.

Do you mean dynamically? Per pipeline, you can define which environment(s) you will deploy to.

The pipeline definition standardizes what environments a package is deployed to, so as to reduce the likelihood that the Production environment is the first environment in any deployment to have exercised the rulebase that is deployed there.

Does Deployment Manager need to use a repository type supplied by Pega Platform, or can we use a custom repository type?

You can use a custom repository as well. An example is the Nexus repository component on Pega Marketplace.

See Pega Docs for information on creating custom repository types.


Application quality

Does Pega have a Rule Coverage tool, which measures how much configuration in a release is covered by the testing strategy?

Pega Platform has a test coverage capability, available from the Application Quality menu, which measures the coverage of your test strategy - both manual and automated.

Compared to how test coverage is typically measured in high-code platforms, Pega measures coverage at the rule level. A rule is considered covered in Pega if it is executed at least once, regardless of how many of its available logic paths were executed. This differs from high-code platforms like Java, which can measure coverage at the line-of-code level; there, 100% coverage of a function means that all of its available logic paths were executed.

What is the impact on deployment of the informational guardrail warnings about no test cases being implemented for some rules in our application?

As the guardrail warnings for no test cases are informational, they do not impact the Guardrail Score and will therefore not cause a deployment to fail based on missing a Guardrail Score threshold.

An absence of Pega Unit test cases on some rules will likely impact your Test Coverage unless these rules are covered by other Pega Unit or manually-executed test cases. This increases the risk that a deployment fails due to insufficient test coverage.

A good guideline to aim for is a healthy Guardrail Score, ever-increasing Test Coverage, and 100% pass rate on all Pega Unit tests.


Testing automation


If integration tests are configured with Pega Unit, then they are also run when the Merge pipeline runs the unit tests before merging. How can we guard against the Merge failing because an integration test fails?

We have two approaches to perform integration testing:

  1. Create a copy of your Production environment and fill it with anonymized data. Then run the real API against this data.
  2. Mock the behavior of the API and validate the mocks during your DevOps lifecycle.

My recommendation - specifically because of scenarios like the one described in this question - is to mock APIs instead of running real API requests. Generally, the goal of an integration test is to verify the behaviour of your Pega rules given the expected responses (success and failure) from the downstream API. These expected responses can be mocked.

Performing integration tests against real APIs is very costly: to build, to maintain, and to run as part of every merge.

If running integration tests against real APIs is genuinely required, I recommend running them every night to avoid impacting development team productivity. Do these in addition to tests against mock APIs, not instead of them.
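The mocking approach can be illustrated outside Pega with Python's unittest.mock. The fetch_customer wrapper and the response shape are hypothetical stand-ins for a connector and its simulated responses; the point is that the test exercises your own mapping and error-handling logic without a live API:

```python
from unittest import mock


# Hypothetical wrapper around a downstream REST connector; in a Pega
# context the equivalent would be a simulated connector response.
def fetch_customer(client, customer_id):
    """Call the downstream API and map its response to our model."""
    response = client.get(f"/customers/{customer_id}")
    if response["status"] != 200:
        raise RuntimeError("downstream failure")
    return {"id": response["body"]["id"],
            "name": response["body"]["name"].title()}


# Mock the client so the test verifies behaviour against the
# *expected* success and failure responses, without a real API call.
client = mock.Mock()
client.get.return_value = {"status": 200,
                           "body": {"id": 7, "name": "ada lovelace"}}
assert fetch_customer(client, 7) == {"id": 7, "name": "Ada Lovelace"}

client.get.return_value = {"status": 503, "body": None}
try:
    fetch_customer(client, 7)
    raise AssertionError("expected a failure")
except RuntimeError:
    pass  # failure path handled as expected
```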

What is the Simul8r component?

The Simul8r component, available on Pega Marketplace, provides a solution for mocking APIs as discussed above. It is not part of the Pega Platform.

Simul8r is a low-code approach to simulating your integrations, introducing a new "Simulation" rule type, and providing a simple UI for capturing all the API responses you would like to simulate.

What are the guidelines for deployment and unit testing of applications developed by low-code Citizen Developers?

Do we have any support from Deployment Manager to review the changes made by citizen developers?

The Pega Unit testing capability is only available from Dev Studio, so unit testing is not a Citizen Developer responsibility.

If you are working with a hybrid development model - where both Citizen Developers and more experienced Pega developers work together - Citizen Developers can implement their changes in App Studio using branches, and System Architects:

  • Perform branch reviews in Dev Studio where review of Citizen Developer changes is needed
  • Add Pega Unit tests to rules where these add value.

The configuration created by Citizen Developers in App Studio is often auto-generated, such as UI and Flow step configuration, and there may not always be material benefit from attaching Pega Unit tests to it. Evaluate this on a case-by-case basis: as When rule and Data Transform configuration emerge as App Studio capabilities, some of this configuration will benefit from Pega Unit test cases.


Do you want to leave some feedback about the event?
Please leave a comment at our Support Center post for this event.
