Configuring an API Gateway Cluster
Configuring an API Gateway cluster requires the following:
Configuring an Integration Server cluster
Configuring an Event Data Store cluster
Configuring a Terracotta Server array
Configuring a load balancer
Configuring ports
Integration Server Configuration
API Gateway's cluster implementation is built upon the Integration Server's cluster support. In contrast to Integration Server clustering, API Gateway does not require a database that is shared across the cluster nodes. For information on Integration Server clustering, see the webMethods Integration Server Clustering Guide.
1. Add the following entries to the Install-Dir/IntegrationServer/instances/default/config/server.cnf file:
watt.server.cluster.aware=true
watt.server.cluster.name=APIGatewayTSAcluster
watt.server.cluster.tsaURLs=TSA host:TSA port
watt.server.terracotta.license.path=path to TSA license file
2. Extend the wrapper.conf with an additional Java parameter in the Install-Dir/profiles/IS_default/configuration/custom_wrapper.conf file:
wrapper.java.additional.xx=-Dtest.cluster.withDerby=true
where xx denotes any free additional Java parameter number.
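For illustration only, the same entries with placeholder values filled in might look as follows; the TSA host names and port, the license file path, and the free parameter number 20 are assumptions for a hypothetical environment, not values prescribed by this guide:
watt.server.cluster.aware=true
watt.server.cluster.name=APIGatewayTSAcluster
watt.server.cluster.tsaURLs=tsahost1:9510,tsahost2:9510
watt.server.terracotta.license.path=/opt/softwareag/common/conf/terracotta-license.key
wrapper.java.additional.20=-Dtest.cluster.withDerby=true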
Note: Ensure that you apply the same Integration Server configuration changes to all cluster nodes.
For additional details, see webMethods Integration Server Clustering Guide.
Event Data Store Configuration
Each API Gateway cluster node includes an Event Data Store instance for storing run-time assets and configuration items. An Event Data Store instance is a non-clustered Elasticsearch node. For a cluster configuration, the Event Data Store instances must also be clustered using the standard Elasticsearch clustering properties, by modifying the SAG_root/EventDataStore/config/elasticsearch.yml file on each instance. The cluster name must be specified and the cluster nodes must be configured. For more information, see https://www.elastic.co/guide/en/elasticsearch/guide/current/important-configuration-changes.html and https://www.elastic.co/guide/en/elasticsearch/reference/2.3/index.html.
A sample configuration looks as follows:
cluster.name: "SAG_EventDataStore"
network.host: 0.0.0.0
http.port: 9240
transport.tcp.port: 9340
node.master: true
discovery.zen.ping.unicast.hosts: ["apigateway1:9340","apigateway2:9340",
"apigateway3:9340"]
Cluster Health
The health of the Event Data Store cluster can be checked using the following URL:
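The exact URL depends on your installation; assuming the HTTP port 9240 from the sample configuration above and one of the cluster hosts as a placeholder, it would look like this:
http://apigateway1:9240/_cluster/health?pretty=true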
Cluster health response example:
{
"cluster_name" : "SAG_EventDataStore",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 11,
"active_shards" : 22,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
The response shows the status of the cluster and the number of its nodes. The following is a sample response showing an unhealthy cluster status:
{
"cluster_name" : "SAG_EventDataStore",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 11,
"active_shards" : 15,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 7,
"delayed_unassigned_shards" : 7,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 68.18181818181817
}
Here the Event Data Store cluster state is yellow, and the number of nodes indicates that a cluster node is missing. An unhealthy cluster state can be caused by communication problems between the cluster nodes. To recover from an unhealthy state, restart the Integration Server that runs the API Gateway with the missing node. The restart forces the Event Data Store instance to rejoin the cluster.
For details on cluster health, see https://www.elastic.co/guide/en/elasticsearch/guide/current/_cluster_health.html.
Terracotta Server Array Configuration
API Gateway requires a Terracotta Server array installation. For more information, see the webMethods Integration Server Clustering Guide and the Terracotta documentation located at http://www.terracotta.org/
Load Balancer Configuration
A custom load balancer can be used for an API Gateway cluster. The following example uses the nginx load balancer.
On a Linux machine, the load balancer configuration file /etc/nginx/nginx.conf is as follows:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;

    upstream apigateway {
        server daefermion4:5555;
        server daefermion4:5556;
        server daefermion4:5557;
    }

    server {
        listen 8000;
        location / {
            proxy_pass http://apigateway;
        }
    }
}
Use sudo nginx to start nginx and sudo nginx -s reload to reload its configuration after changes. In a test environment, the nginx-debug binary can be used for more detailed debug output. The load balancer port must also be opened in the firewall protecting the host on which nginx is running.
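For example, on a host using firewalld, opening the listen port 8000 and checking that requests reach API Gateway through nginx might look as follows; firewalld and curl are assumptions about the environment, not requirements of API Gateway:
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
curl -I http://localhost:8000/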
Ports Configuration
By default, API Gateway provides synchronization of the port configuration across API Gateway cluster nodes. If you do not want the ports to be synchronized across API Gateway cluster nodes, set the portClusteringEnabled parameter, available under Username > Administration > General > Extended settings in API Gateway, to false.
Note: When this parameter is set to true, all the existing port configurations except the diagnostic port (9999) and the primary port (5555) are removed.
Synchronization of the ports configuration does not cover temporary disconnects of a node; to get such a node synchronized again, you must restart it. Also, as long as you do not remove a port configuration, the port can be re-synchronized by performing another update on the same configuration. To activate ports synchronization, do the following:
1. Set the portClusteringEnabled parameter to true.
2. Restart all the cluster nodes.