This content has been archived and is no longer being updated.

Links may not function; however, this content may be relevant to outdated versions of the product.

Choosing the Kubernetes-based services for your deployment

Updated on May 12, 2020

Pega supports using load balancing and logging services that already exist in your enterprise or environment. In this case, you must disable the corresponding Pega-provided service so that it does not start.

Details about your options for load balancing and logging services are listed below. After you review your choices here, make your choices by editing the appropriate sections in the values.yaml file as described in Deploying Pega Platform with Helm.
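After editing the values.yaml file, you apply your choices when you install the chart. As a sketch, a Helm installation might look like the following; the repository URL is the public Pega Helm charts repository, and the release and namespace names are illustrative:

```shell
# Add the Pega Helm chart repository (URL is the public pega-helm-charts repo).
helm repo add pega https://pegasystems.github.io/pega-helm-charts

# Install using your customized values.yaml; "mypega" is an example
# release name and namespace.
helm install mypega pega/pega --namespace mypega -f values.yaml
```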

Support for load balancers

The Pega Platform application web nodes require a load balancer, which is dependent on the type of environment hosting your Pega Platform deployment. See the following table to choose the appropriate best practice load balancer configuration for your type of environment:

Environment

Best practice load balancer configuration

For

  • Open-source Kubernetes
  • Google GKE
  • Pivotal PKS
  • Azure AKS

Enable Traefik by setting traefik.enabled to true in the values.yaml file in order for the deployment to automatically configure the Traefik settings.

Traefik automatically routes network traffic to the appropriate Kubernetes ingress based on the domain name. For configuration details, see the Traefik documentation for Global Configuration.

You may disable Traefik in the Helm Charts in order to use your own load balancer that has been configured to support the requirements detailed in the Platform Support Guide, including cookie-based session affinity.
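As a sketch, the relevant values.yaml fragment might look like the following; only the traefik.enabled key is confirmed by this article, and the surrounding structure is assumed from typical Helm chart layouts:

```yaml
# Enable the Pega-provided Traefik load balancer. Set to false to use
# your own load balancer that supports cookie-based session affinity.
traefik:
  enabled: true
```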

OpenShift

Disable Traefik by setting traefik.enabled to false in the values.yaml file so that the deployment does not configure Traefik and ignores it when you deploy using OpenShift. After deployment, the built-in HAProxy Template Router in OpenShift directs domain name traffic appropriately in the OpenShift environment.

Note: The OpenShift HAProxy supports session affinity by using the roundrobin load balancing strategy.
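For OpenShift, the corresponding values.yaml fragment might look like the following sketch; only the traefik.enabled key is confirmed by this article:

```yaml
# OpenShift: disable Traefik so that the built-in HAProxy Template
# Router handles domain name routing instead.
traefik:
  enabled: false
```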

Amazon EKS

The default load balancer for Amazon EKS deployments does not support cookie-based session affinity; as a best practice, use Traefik or configure your own load balancer.

Enable Traefik by setting traefik.enabled to true in the values.yaml file in order for the deployment to automatically configure the Traefik settings. 

Traefik automatically routes network traffic to the appropriate Kubernetes ingress based on the domain name. For configuration details, see the Traefik documentation for Global Configuration.

Support for logging using Elasticsearch-Fluentd-Kibana (EFK)

The deployment can configure logging for the Pega Platform application, which depends on the type of environment hosting your Pega Platform deployment. See the following table to choose the appropriate best practice logging configuration for your type of environment:

Environment

Best practice logging configuration

For

  • Kubernetes
  • Pivotal PKS

Enable the Elasticsearch-Fluentd-Kibana (EFK) stack by setting elasticsearch.enabled, kibana.enabled, and fluentd-elasticsearch.enabled to true in the values.yaml file. EFK is a standard logging stack that is provided as an example for ease of getting started. For the configuration options available for each component, see their Helm charts:

Elasticsearch: https://github.com/helm/charts/tree/master/stable/elasticsearch/values.yaml

Kibana: https://github.com/helm/charts/tree/master/stable/kibana/values.yaml

Fluentd: https://github.com/helm/charts/tree/master/stable/fluentd-elasticsearch/values.yaml
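A minimal values.yaml fragment for enabling the EFK stack might look like the following sketch; the key names are from this article, and the nesting is assumed from typical Helm chart layouts:

```yaml
# Enable the example EFK logging stack. Set all three keys to false in
# environments that provide their own logging (OpenShift, EKS, GKE, AKS).
elasticsearch:
  enabled: true
kibana:
  enabled: true
fluentd-elasticsearch:
  enabled: true
```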

OpenShift

Disable EFK by setting elasticsearch.enabled, kibana.enabled, and fluentd-elasticsearch.enabled to false in the values.yaml file. For OpenShift deployments, you configure aggregate monitoring. For details, see the OpenShift documentation for aggregating container logs.

Amazon EKS

Disable EFK by setting elasticsearch.enabled, kibana.enabled, and fluentd-elasticsearch.enabled to false in the values.yaml file. For Amazon EKS deployments, configure logging with an EFK stack as described by AWS. For details, see the Amazon EKS Workshop article Implement logging with EFK.

Google GKE

Disable EFK by setting elasticsearch.enabled, kibana.enabled, and fluentd-elasticsearch.enabled to false in the values.yaml file. Stackdriver is the default monitoring tool for Google Cloud Platform (GCP), supports GKE, and is the recommended monitoring tool for Pega deployments in GKE. Stackdriver monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver collects metrics, events, and metadata from Google Cloud Platform resources, ingests that data, and generates insights through dashboards, charts, and alerts.

For details, see the Stackdriver Logging documentation. To enable Stackdriver in your GKE environment, see Stackdriver support in GKE configurations.

Azure AKS

Disable EFK by setting elasticsearch.enabled, kibana.enabled, and fluentd-elasticsearch.enabled to false in the values.yaml file. For Pega deployments in Azure AKS, configure monitoring using the Azure Monitor tools that are built into AKS. Log data collected by Azure Monitor is stored in a Log Analytics workspace, which is based on Azure Data Explorer. It collects telemetry from a variety of sources and uses the query language from Data Explorer to retrieve and analyze data.

For details, see the Azure Monitor documentation. To enable Azure Monitor in your AKS environment, see Azure Monitor support in AKS configurations.

Support for node HPA settings and a corresponding metrics server

Pega supports autoscaling case processing in your deployment using the Horizontal Pod Autoscaler (HPA) of Kubernetes. For details, see the Kubernetes documentation for Horizontal Pod Autoscaler.

Deployments of Pega Platform support setting autoscaling thresholds based on CPU utilization and memory resources for a given pod in the deployment. The default CPU utilization and memory resource capacity thresholds are based on Pega testing of applications under heavy loads. You can customize the thresholds to match your workloads by changing targetAverageCPUUtilization and targetAverageMemoryUtilization in the values.yaml file. These targets are based on your initial cpuRequest and memRequest configuration.
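An illustrative values.yaml fragment for these thresholds might look like the following sketch; the key names are from this article, while the nesting and the numeric values are assumptions to adapt to your own workload:

```yaml
# Example autoscaling thresholds, expressed relative to the pod's
# cpuRequest and memRequest settings. Tune these to your workload.
hpa:
  targetAverageCPUUtilization: 70      # percent of cpuRequest
  targetAverageMemoryUtilization: 85   # percent of memRequest
```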

Autoscaling in Kubernetes requires the use of a metrics server, a cluster-wide aggregator of resource usage data.

Best practices for configuring the metrics server for HPA autoscaling in supported environments

Environment

Best practice metrics server configuration

Open-source Kubernetes

Enable the Pega-provided metrics service by setting metric-server.enabled to true in the values.yaml file, unless a metrics server has already been supplied.

For

  • Amazon EKS
  • Google GKE
  • Pivotal PKS
  • Azure AKS

Disable the Pega-provided metrics service by setting metric-server.enabled to false in the values.yaml file, because a metrics server is installed in the cluster by default.
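As a sketch, the corresponding values.yaml fragment might look like the following; the metric-server.enabled key is from this article, and the nesting is assumed from typical Helm chart layouts:

```yaml
# Enable the Pega-provided metrics server on open-source Kubernetes.
# Set to false on EKS, GKE, PKS, or AKS, where the cluster already
# supplies a metrics server by default.
metric-server:
  enabled: true
```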
