Connection tab


From the Connection tab, define all the connection details for the Hadoop host.

Before you can connect to an Apache HBase or HDFS data store, upload the relevant client JAR files to the application container that hosts Pega Platform. For more information, see the Pega Community article JAR files dependencies for the HBase and HDFS data sets.
  1. In the Connection section, specify a master Hadoop host. This host must contain the HDFS NameNode and the HBase master node.
  2. Optional: To configure settings for the HDFS connection, select the Use HDFS configuration check box.
  3. Optional: To configure settings for the HBase connection, select the Use HBase configuration check box.
  4. Optional: Enable running external data flows on the Hadoop record.
    You can configure Pega Platform to run predictive models directly on a Hadoop record with an external data flow. Through Pega Platform, you can view the input for the data flow and its outcome.

    Using the Hadoop infrastructure lets you process large amounts of data directly on the Hadoop cluster and reduces data transfer between the Hadoop cluster and Pega Platform.
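Under the hood, the master-host, HDFS, and HBase settings above correspond to standard Hadoop client properties. A minimal sketch of the equivalent client-side configuration, assuming a hypothetical master host named `hadoop-master.example.com` that runs both the HDFS NameNode and the HBase master (the hostname and port are illustrative, not values from this article):

```xml
<!-- Hypothetical core-site.xml fragment: point HDFS clients
     at the NameNode running on the master Hadoop host.
     8020 is the conventional NameNode RPC port. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop-master.example.com:8020</value>
</property>

<!-- Hypothetical hbase-site.xml fragment: the ZooKeeper quorum
     that HBase clients use to locate the HBase master. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop-master.example.com</value>
</property>
```

The same host appears in both fragments because, as noted in step 1, the master Hadoop host must contain both the HDFS NameNode and the HBase master node.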
