Reconfiguring Other Runnables
After configuring Elasticsearch, you must reconfigure the other runnables and specify the order in which they must be started.
1. Reconfigure the Kibana runnable on all three nodes as follows:
on n1 reconfigure kibana_s kibanacfg.elasticsearch.username="esUserName"
kibanacfg.elasticsearch.password="esPassword"
kibanacfg.elasticsearch.hosts="[\"http://machine1:esHTTPport\",\"http://machine2:esHTTPport\",
\"http://machine3:esHTTPport\"]" -zookeeper.connect.string
on n2 reconfigure kibana_s kibanacfg.elasticsearch.username="esUserName"
kibanacfg.elasticsearch.password="esPassword"
kibanacfg.elasticsearch.hosts="[\"http://machine1:esHTTPport\",\"http://machine2:esHTTPport\",
\"http://machine3:esHTTPport\"]" -zookeeper.connect.string
on n3 reconfigure kibana_s kibanacfg.elasticsearch.username="esUserName"
kibanacfg.elasticsearch.password="esPassword"
kibanacfg.elasticsearch.hosts="[\"http://machine1:esHTTPport\",\"http://machine2:esHTTPport\",
\"http://machine3:esHTTPport\"]" -zookeeper.connect.string
Note:
In the above three commands, replace machine1, machine2, and machine3 with the names or IP addresses of your machines, and esHTTPport with the Elasticsearch HTTP port. You can retrieve the HTTP port of an Elasticsearch runnable by running the show runnable elastic_s command in ACC. esUserName and esPassword are the user name and password of the Elasticsearch instance you configured for the high availability setup in Step 3 of the previous section.
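For example, with the hypothetical host names apihost1, apihost2, and apihost3, an Elasticsearch HTTP port of 9201, and the credentials esadmin/esSecret (placeholder values chosen for illustration, not product defaults), the command for n1 would look like this:
on n1 reconfigure kibana_s kibanacfg.elasticsearch.username="esadmin"
kibanacfg.elasticsearch.password="esSecret"
kibanacfg.elasticsearch.hosts="[\"http://apihost1:9201\",\"http://apihost2:9201\",
\"http://apihost3:9201\"]" -zookeeper.connect.string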
2. Reconfigure the PostgreSQL database on n1, so that it knows about all zookeeper cluster members and accepts connections from all locations, by running the following command:
on n1 reconfigure postgres_s -zookeeper.connect.string
+postgresql.listen_addresses = "'*'"
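Note:
In ACC reconfigure commands, a parameter prefixed with "+" is added to the runnable's configuration, and a parameter prefixed with "-" is removed from it (this description of the +/- prefixes reflects general ACC semantics). Here, removing zookeeper.connect.string lets the runnable obtain the connect string for the zookeeper cluster automatically, and adding postgresql.listen_addresses = '*' makes PostgreSQL accept connections on all network interfaces.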
The only kind of scaling that is currently possible with PostgreSQL is scaling across tenants. The data of a single tenant always resides in a single database instance, so the load that a tenant creates on the database must be handled by that one instance. At the same time, a tenant's database instance is a single point of failure for that tenant: if the tenant's database goes offline, the tenant becomes unusable until the database is available again. For production use on mission-critical systems in particular, where high availability is of interest, this approach is not an ideal solution. Because API Portal does not support a highly available configuration using the PostgreSQL runnable, you have to use an external DBMS, such as Oracle or MS SQL, which offers mechanisms for clustering and high availability. For details on configuring external databases, see Configuring API Portal with External Databases.
3. Define two cloudsearch instances on the nodes n2 and n3, where each one belongs to a different data center:
on n2 reconfigure cloudsearch_s -zookeeper.connect.string
+zookeeper.application.instance.datacenter = n2
on n3 reconfigure cloudsearch_s -zookeeper.connect.string
+zookeeper.application.instance.datacenter = n3
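To verify that the datacenter property has been applied, you can display each runnable's configuration, for example with the same show runnable command used above for Elasticsearch (a sketch; the exact output format depends on your ACC version):
on n2 show runnable cloudsearch_s
on n3 show runnable cloudsearch_s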
4. Reconfigure the apiportalbundle runnable on the nodes n2 and n3 as follows:
on n2 reconfigure apiportalbundle_s -zookeeper.connect.string
on n3 reconfigure apiportalbundle_s -zookeeper.connect.string
5. Reconfigure the loadbalancer runnables on all three nodes to point to all three zookeeper cluster members as follows:
on n1 reconfigure loadbalancer_s -zookeeper.connect.string
on n2 reconfigure loadbalancer_s -zookeeper.connect.string
on n3 reconfigure loadbalancer_s -zookeeper.connect.string
API Portal can be accessed through multiple host names. If one of the load balancers fails, the application can be accessed through the other available load balancers. You can skip the rest of this step if an external load balancer is not required. If you use an external load balancer, Software AG recommends placing a highly available load balancer (HA LB) in front of the loadbalancer runnables. To do this, add the following to the loadbalancer runnable configuration:
*HTTPD.servername to specify the hostname or IP address of the HA load balancer
*HTTPD.zookeeper.application.instance.port to specify the port on which the HA load balancer receives the HTTP/HTTPS requests
*zookeeper.application.instance.scheme to specify the scheme (http or https)
For example:
on n1 reconfigure loadbalancer_s HTTPD.servername=HOSTNAME
HTTPD.zookeeper.application.instance.port=PORT
zookeeper.application.instance.scheme=SCHEME
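For instance, with a hypothetical HA load balancer reachable at halb.example.com and receiving HTTPS requests on port 443 (illustrative values only), the command would be:
on n1 reconfigure loadbalancer_s HTTPD.servername=halb.example.com
HTTPD.zookeeper.application.instance.port=443
zookeeper.application.instance.scheme=https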
6. Change the startup order of the runnables by running the following commands:
ACC+ n1>on n1 set runnable.order = "zoo0 < (elastic_s, kibana_s, postgres_s)
< loadbalancer_s"
ACC+ n1>on n2 set runnable.order = "zoo0 < (elastic_s, kibana_s)
< cloudsearch_s < apiportalbundle_s < loadbalancer_s"
ACC+ n1>on n3 set runnable.order = "zoo0 < (elastic_s, kibana_s)
< cloudsearch_s < apiportalbundle_s < loadbalancer_s"