Configuring Event Persistence Services for HDFS
With Event Persistence services, you can store events in an Apache Hadoop Distributed File System (HDFS) and Hive storage engine, Cloudera distribution (CDH) 5.3.0.
To use HDFS as the storage engine for Event Persistence, you must first configure the Hadoop cluster by deploying the custom Hive SerDe and Joda Date/Time libraries from your Event Persistence installation. For more information about how to configure HDFS for use with Event Persistence services, see Using HDFS with Event Persistence.
To create a Digital Event Services service of type Event Persistence for HDFS:
1. In Command Central, navigate to Environments > Instances > All > instance_name > Digital Event Services > Configuration.
2. Select Event Persistence from the drop-down menu.
3. Click the add icon, and then select HDFS CDH 5.3.0 for the service type.
4. Specify values for the following fields:
Parameter
Description
Service Name
Required. The name of the new service. Specify a unique service name that starts with an alphanumeric character. Valid separator characters are periods (.) and dashes (-). The service name is not case-sensitive.
Note:  
You cannot rename an existing service. If you want to modify the service name, you must delete the existing service and create a new one with a different name.
Service Description
Optional. A description of the new service.
Name Node URI
Required. The URI of the Name Node in the HDFS cluster as follows: hdfs://host:port, where host is the host name of the server, and port is the port on which the server listens for incoming requests. The default value is hdfs://localhost:8020.
Maximum File Size (MB)
Required. The HDFS block size in megabytes. The default value is 65.
Hive Server URI
Required. The URI of the Apache Hive Server as follows: jdbc:hive2://host:port, where host is the host name of the server, and port is the port on which the server listens for incoming connection requests. The default value is jdbc:hive2://localhost:10000.
Database
Required. The name of the Hive database.
Warehouse Location
Required. The location of the Hive warehouse. The default value is /user/hive/warehouse.
User Id
Required. The username for the Hive user account.
Password
Required. The password for the Hive user account.
Batch Size
Required. The number of events written to HDFS in a single write operation. The default value is 10000.
Note:  
If the HDFS service queues a full batch of events before the batch write timer expires, the service immediately persists all queued events to HDFS.
Batch Write Timer (sec)
Required. Batch write frequency in seconds. The default value is 15.
Note:  
If the batch write timer expires before the HDFS service queues a full batch of events, all currently queued events are persisted to HDFS.
5. Optionally, click Test to verify that your configuration is valid. A standalone connectivity check is sketched after this procedure.
6. Save your changes.
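If you want to confirm outside Command Central that the Name Node URI and Hive Server URI you plan to enter are reachable, the following Java sketch can serve as a standalone check before you click Test. It is not part of Digital Event Services; it only exercises the standard Hadoop FileSystem API and the Hive JDBC driver against the default values from the parameter table above. The class name, database name, user, and password are placeholders that you must replace with your own values.

import java.net.URI;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Standalone connectivity check for the two URIs used by an HDFS CDH 5.3.0
// Event Persistence service. All values below are defaults or placeholders.
public class HdfsHiveConnectivityCheck {

    public static void main(String[] args) throws Exception {
        String nameNodeUri   = "hdfs://localhost:8020";        // default Name Node URI
        String hiveServerUri = "jdbc:hive2://localhost:10000"; // default Hive Server URI
        String database      = "default";                      // placeholder Hive database
        String user          = "hive";                         // placeholder Hive user
        String password      = "";                             // placeholder password

        // 1. Verify that the HDFS Name Node is reachable and the default
        //    warehouse location exists.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(URI.create(nameNodeUri), conf)) {
            boolean warehouseExists = fs.exists(new Path("/user/hive/warehouse"));
            System.out.println("Name Node reachable; warehouse present: " + warehouseExists);
        }

        // 2. Verify that the Hive Server accepts JDBC connections with the
        //    given database and credentials.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn =
                DriverManager.getConnection(hiveServerUri + "/" + database, user, password)) {
            System.out.println("Hive connection established: " + !conn.isClosed());
        }
    }
}

If both checks succeed with the values you intend to enter in the service configuration, the Test action in Command Central should also be able to reach these endpoints; if either check fails, resolve the cluster-side problem before saving the service.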
