Configuring Digital Event Persistence Services for HDFS
With Digital Event Persistence services, you can store events in an Apache Hadoop Distributed File System and Hive storage engine, Cloudera distribution (HDFS CDH) 5.3.0.
To use HDFS as the storage engine for Digital Event Persistence, you must first configure the Hadoop cluster by deploying the custom Hive SerDe and Joda Date/Time libraries from your Digital Event Persistence installation. For more information about how to configure HDFS for use with Digital Event Persistence services, see Configuring HDFS for Digital Event Persistence.
You can either specify static values in the configuration fields, or use dynamic service configuration to persist events to different storage destinations based on the content of the events. You can specify dynamic values in the Name Node URI, Database, Hive Server URI, and User Id fields. To specify a variable, enclose its name between $ characters, for example $host$.
For more information about adding dynamic service configuration to a digital event type, see Adding Dynamic Service Information to Digital Event Types.
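The following minimal Java sketch illustrates the substitution semantics only. The class, method, and event field names are hypothetical; Digital Event Services resolves placeholders internally, and this is not product code.

import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderExample {
    // Matches $name$ placeholders, as used in dynamic service configuration.
    private static final Pattern VAR = Pattern.compile("\\$([A-Za-z0-9_]+)\\$");

    // Replaces each placeholder with the value of the same-named event field.
    static String resolve(String template, Map<String, String> eventFields) {
        Matcher m = VAR.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = eventFields.get(m.group(1));
            // Leave the placeholder untouched if the event has no such field.
            m.appendReplacement(sb,
                Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // An event whose "host" field selects the target HDFS cluster.
        Map<String, String> event = Map.of("host", "hdfs-node1.example.com");
        System.out.println(resolve("hdfs://$host$:8020", event));
        // Prints: hdfs://hdfs-node1.example.com:8020
    }
}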
To create a Digital Event Services service of type Digital Event Persistence for HDFS
1. In Command Central, navigate to Environments > Instances > All > instance_name > Digital Event Services > Configuration.
2. Select Event Persistence from the drop-down menu.
3. Click the add icon, and then select HDFS CDH 5.3.0 as the service type.
4. Specify values for the following fields:
Service Name
Required. The name of the new service. Specify a unique service name that starts with a letter. Valid separator characters are periods (.) and dashes (-). The service name is not case-sensitive.
Note:  
You cannot rename an existing service. If you want to modify the service name, you must delete the existing service and create a new one with a different name.
Service Description
Optional. A description of the new service.
Name Node URI
Required. Supports dynamic service configuration. The URI of the Name Node in the HDFS cluster. Specify the Name Node URI as follows: hdfs://host:port, where host is the host name of the server, and port is the port on which the server listens for incoming requests.
The default value is hdfs://localhost:8020.
You can use dynamic service configuration to specify the host, for example: hdfs://$host$:port.
Maximum File Size (MB)
Required. The HDFS block size in megabytes. The default value is 65.
Hive Server URI
Required. Supports dynamic service configuration. The URI of the Apache Hive Server. Specify the server URI as follows: jdbc:hive2://host:port, where host is the host name of the server, and port is the port on which the server listens for incoming connection requests.
The default value is jdbc:hive2://localhost:10000.
You can use dynamic service configuration to specify the Hive Server, for example: jdbc:hive2://$host$:port.
Database
Required. Supports dynamic service configuration. The name of the Hive database.
You can use dynamic service configuration to specify all or part of the database name, for example: $database_name$.
Warehouse Location
Required. The location of the Hive warehouse. The default value is /user/hive/warehouse.
User Id
Required. Supports dynamic service configuration. The username for the Hive user account.
You can use dynamic service configuration to specify the user ID, for example: $userid$.
Password
Required. The password for the Hive user account.
Batch Size
Required. The number of events that are written to HDFS in a single write operation. The default value is 10000.
Note:  
If the HDFS service queues a full batch of events before the batch write timer expires, the service immediately persists all queued events to HDFS without waiting for the timer. This behavior is illustrated in the first sketch after this procedure.
Batch Write Timer (sec)
Required. The frequency, in seconds, at which queued events are written to HDFS. The default value is 15.
Note:  
If the batch write timer expires before the HDFS service queues a full batch of events, all currently queued events are persisted to HDFS.
5. Optionally, click Test to verify that your configuration is valid.
Note:  
When using dynamic service configuration, the Test button cannot establish a connection to HDFS, because variable values are resolved only at run time. Field validation, however, works as expected. To verify connectivity outside Command Central, see the second sketch after this procedure.
6. Save your changes.
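The interaction between Batch Size and Batch Write Timer (sec) can be summarized as: a write occurs as soon as a full batch accumulates or when the timer fires, whichever comes first. The following Java sketch illustrates only that behavior; it is not product code, and all names in it are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BatchWriterSketch {
    private final int batchSize;                      // corresponds to Batch Size
    private final List<String> queue = new ArrayList<>();
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    BatchWriterSketch(int batchSize, long timerSeconds) {
        this.batchSize = batchSize;
        // Corresponds to Batch Write Timer (sec): periodically flush whatever is queued.
        timer.scheduleAtFixedRate(this::flush, timerSeconds, timerSeconds, TimeUnit.SECONDS);
    }

    synchronized void add(String event) {
        queue.add(event);
        if (queue.size() >= batchSize) {
            flush(); // a full batch is written immediately, without waiting for the timer
        }
    }

    synchronized void flush() {
        if (queue.isEmpty()) return;
        System.out.println("Writing " + queue.size() + " events to HDFS in one operation");
        queue.clear();
    }

    public static void main(String[] args) throws InterruptedException {
        BatchWriterSketch writer = new BatchWriterSketch(3, 15);
        for (int i = 0; i < 7; i++) writer.add("event-" + i); // two full batches flush at once
        Thread.sleep(16_000); // the remaining event is flushed when the timer fires
        writer.timer.shutdown();
    }
}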
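When dynamic service configuration prevents the Test button from connecting, you can still check the resolved endpoints outside Command Central. The following is a minimal sketch, assuming the default endpoints shown above, the Hive JDBC driver (hive-jdbc) and the Hadoop client libraries on the classpath, a database named default, and placeholder credentials; adjust all of these to your environment.

import java.net.URI;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Check the Name Node URI (default hdfs://localhost:8020) by probing
        // the default warehouse location from the Warehouse Location field.
        try (FileSystem fs = FileSystem.get(new URI("hdfs://localhost:8020"),
                                            new Configuration())) {
            System.out.println("Warehouse exists: "
                + fs.exists(new Path("/user/hive/warehouse")));
        }

        // Check the Hive Server URI (default jdbc:hive2://localhost:10000).
        // "default", "hiveuser", and "secret" are placeholder values.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "hiveuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            System.out.println("Connected to Hive; listing tables:");
            while (rs.next()) {
                System.out.println("  " + rs.getString(1));
            }
        }
    }
}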