Setting up API Portal HA setup with External Database (Oracle)
This procedure describes in detail how to set up API Portal for high availability (HA) with an external Oracle database.
Ensure the following before you start configuring the setup:
*API Portal is installed, but no runnables are started.
*You have a running Oracle instance.
To set up API Portal HA with an external database (Oracle):
1. Add worker nodes to ACC.
a. Start ACC.
b. Execute the add node command for each of the worker nodes. The basic syntax of the add node command is as follows:
add node logicalNodeName ipAddressOrHostname [@agentPort] agentUsername agentPassword
Replace logicalNodeName with the logical node name you want to assign to that node; you will later use this name to refer to the node in ACC commands.
2. Create a 3-node environment.
a. On machine1, create a nodelist.pt file in the folder C:\SoftwareAG\API_Portal\server, which contains the following lines:
add node n1 machine1 @18011 Clous g3h31m
add node n2 machine2 @18011 Clous g3h31m
add node n3 machine3 @18011 Clous g3h31m
set current node n1
b. Replace machine1, machine2, and machine3 with the names or IP addresses of your machines.
c. Run the following command:
C:\SoftwareAG\API_Portal\server>acc\acc.bat -n
C:\SoftwareAG\API_Portal\server\nodelist.pt -c
C:\SoftwareAG\API_Portal\server\generated.apptypes.cfg
This creates an ensemble between the instances in the cluster.
d. To view the 3-node cluster in ACC, run the command:
ACC+ n1>list nodes
The output lists all three nodes of the cluster, each listening for REST calls on port 18011:
n1 : machine1 (18011) OK
n2 : machine2 (18011) OK
n3 : machine3 (18011) OK
3. Clean up the unnecessary runnables by running the following commands in ACC to deconfigure the runnables on all three nodes:
ACC+ n1>on n1 deconfigure zoo_s
ACC+ n1>on n1 deconfigure cloudsearch_s
ACC+ n1>on n1 deconfigure apiportalbundle_s
ACC+ n1>on n1 deconfigure postgres_s
ACC+ n1>on n2 deconfigure zoo_s
ACC+ n1>on n2 deconfigure postgres_s
ACC+ n1>on n3 deconfigure zoo_s
ACC+ n1>on n3 deconfigure postgres_s

4. Create a zookeeper cluster in ACC by running the following commands:
ACC+ n1>on n1 add zk
ACC+ n1>on n2 add zk
ACC+ n1>on n3 add zk
ACC+ n1>commit zk changes
After you start the zoo runnables, you can view the configuration by using the following ACC command:
ACC+ n1>list zk instances
3 Zookeeper instances:
Node InstID MyID State   Cl Port Port A Port B Type
n1   zoo0   1    STARTED 14281   14285  14290  Master
n2   zoo0   2    STARTED 14281   14285  14290  Master
n3   zoo0   3    STARTED 14281   14285  14290  Master
5. Reconfigure the three elasticsearch runnables to form a cluster through ACC by running the following commands:
ACC+ n1>on n1 reconfigure elastic_s
+ELASTICSEARCH.node.name=machine1
+ELASTICSEARCH.cluster.name=apiportal
+ELASTICSEARCH.discovery.zen.ping.unicast.hosts="machine2:esTCPport,
machine3:esTCPport"
+ELASTICSEARCH.discovery.zen.minimum_master_nodes=2 -zookeeper.connect.string
+ELASTICSEARCH.index.number_of_replicas=1
-ELASTICSEARCH.sonian.elasticsearch.zookeeper.client.host

ACC+ n1>on n2 reconfigure elastic_s
+ELASTICSEARCH.node.name=machine2
+ELASTICSEARCH.cluster.name=apiportal
+ELASTICSEARCH.discovery.zen.ping.unicast.hosts="machine1:esTCPport,
machine3:esTCPport"
+ELASTICSEARCH.discovery.zen.minimum_master_nodes=2 -zookeeper.connect.string
+ELASTICSEARCH.index.number_of_replicas=1
-ELASTICSEARCH.sonian.elasticsearch.zookeeper.client.host

ACC+ n1>on n3 reconfigure elastic_s
+ELASTICSEARCH.node.name=machine3
+ELASTICSEARCH.cluster.name=apiportal
+ELASTICSEARCH.discovery.zen.ping.unicast.hosts="machine1:esTCPport,
machine2:esTCPport"
+ELASTICSEARCH.discovery.zen.minimum_master_nodes=2 -zookeeper.connect.string
+ELASTICSEARCH.index.number_of_replicas=1
-ELASTICSEARCH.sonian.elasticsearch.zookeeper.client.host
Note:
In the three commands above, replace machine1, machine2, and machine3 with the names or IP addresses of your machines, and esTCPport with the Elasticsearch TCP port.
6. To validate the elasticsearch cluster, execute the following command:
validate elasticsearch cluster
This displays the following message:
Found 3 Elasticsearch instances in one
cluster across all currently registered nodes.
There were no errors.
7. To set the same user name and password for the Elasticsearch runnable on all three nodes, execute the following commands:
on n1 reconfigure elastic_s +ELASTICSEARCH.aris.api.user.name="<username>"
+ELASTICSEARCH.aris.api.user.password="<password>"
on n2 reconfigure elastic_s +ELASTICSEARCH.aris.api.user.name="<username>"
+ELASTICSEARCH.aris.api.user.password="<password>"
on n3 reconfigure elastic_s +ELASTICSEARCH.aris.api.user.name="<username>"
+ELASTICSEARCH.aris.api.user.password="<password>"
For example,

on n1 reconfigure elastic_s +ELASTICSEARCH.aris.api.user.name="portal"
+ELASTICSEARCH.aris.api.user.password="manager"
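To double-check the cluster from outside ACC, you can also call Elasticsearch's standard _cluster/health endpoint. A minimal sketch, assuming machine1, the Elasticsearch HTTP port of your installation (esHTTPport is a placeholder), and the credentials just set:

curl -u portal:manager "http://machine1:esHTTPport/_cluster/health?pretty"

A healthy three-node cluster reports "number_of_nodes" : 3 and a "status" of "green".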
8. Reconfigure the kibana runnable on all three nodes as follows:
on n1 reconfigure kibana_s -zookeeper.connect.string
on n2 reconfigure kibana_s -zookeeper.connect.string
on n3 reconfigure kibana_s -zookeeper.connect.string
9. Define two cloudsearch instances on the nodes n2 and n3, each belonging to a different data center:
on n2 reconfigure cloudsearch_s -zookeeper.connect.string
+zookeeper.application.instance.datacenter = n2
on n3 reconfigure cloudsearch_s -zookeeper.connect.string
+zookeeper.application.instance.datacenter = n3
10. Reconfigure the apiportalbundle runnable on the nodes n2 and n3 as follows:
on n2 reconfigure apiportalbundle_s -zookeeper.connect.string
on n3 reconfigure apiportalbundle_s -zookeeper.connect.string
11. Reconfigure the loadbalancer runnables on all three nodes to point to the zookeeper cluster as follows:
on n1 reconfigure loadbalancer_s -zookeeper.connect.string
on n2 reconfigure loadbalancer_s -zookeeper.connect.string
on n3 reconfigure loadbalancer_s -zookeeper.connect.string
API Portal can be accessed through multiple hostnames. If one of the loadbalancers fails, the application can still be accessed through the other available loadbalancers. You can skip the rest of this step if an external load balancer is not required. If you use an external load balancer, Software AG recommends placing a highly available load balancer (HA LB) in front of the loadbalancer runnables. To do this, add the following to the loadbalancer runnable configuration:
*HTTPD.servername, the hostname or IP address of the HA loadbalancer
*HTTPD.zookeeper.application.instance.port, the port on which the HA loadbalancer receives the HTTP/HTTPS requests
*zookeeper.application.instance.scheme, the scheme (http or https)
For example:
on n1 reconfigure loadbalancer_s HTTPD.servername=HOSTNAME
HTTPD.zookeeper.application.instance.port=PORT
zookeeper.application.instance.scheme=SCHEME
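With hypothetical values, for an HA load balancer reachable at halb.example.com that accepts HTTPS requests on port 443, the command would look as follows:

on n1 reconfigure loadbalancer_s HTTPD.servername=halb.example.com
HTTPD.zookeeper.application.instance.port=443
zookeeper.application.instance.scheme=https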
12. Change the startup order of the runnables by running the following commands:
ACC+ n1>on n1 set runnable.order = "zoo0 < (elastic_s, kibana_s)
< loadbalancer_s"
ACC+ n1>on n2 set runnable.order = "zoo0 < (elastic_s, kibana_s)
< cloudsearch_s < apiportalbundle_s < loadbalancer_s"
ACC+ n1>on n3 set runnable.order = "zoo0 < (elastic_s, kibana_s)
< cloudsearch_s < apiportalbundle_s < loadbalancer_s"
13. Create a file startupScript.bat under C:\SoftwareAG\API_Portal\server. Copy the following content into the file.
#
# start Zookeeper Ensemble
#
on n1 start zoo0
on n2 start zoo0
on n3 start zoo0
on n1 wait for STARTED of zoo0
on n2 wait for STARTED of zoo0
on n3 wait for STARTED of zoo0
#
# start ElasticSearch Cluster
#
on n1 start elastic_s
on n2 start elastic_s
on n3 start elastic_s
on n1 wait for STARTED of elastic_s
on n2 wait for STARTED of elastic_s
on n3 wait for STARTED of elastic_s
#
# start Kibana
#
on n1 start kibana_s
on n2 start kibana_s
on n3 start kibana_s
on n1 wait for STARTED of kibana_s
on n2 wait for STARTED of kibana_s
on n3 wait for STARTED of kibana_s
#
# start CloudSearch
#
on n2 start cloudsearch_s
on n3 start cloudsearch_s
on n2 wait for STARTED of cloudsearch_s
on n3 wait for STARTED of cloudsearch_s
#
# start API Portal Bundle
#
on n2 start apiportalbundle_s
on n3 start apiportalbundle_s
on n2 wait for STARTED of apiportalbundle_s
on n3 wait for STARTED of apiportalbundle_s
#
# finally, start loadbalancer
#
on n1 start loadbalancer_s
on n2 start loadbalancer_s
on n3 start loadbalancer_s
on n1 wait for STARTED of loadbalancer_s
on n2 wait for STARTED of loadbalancer_s
on n3 wait for STARTED of loadbalancer_s
14. To configure envset.bat, log in to the machine where the Oracle server is running and go to the directory containing the script files downloaded from the ARIS Download Center. The scripts are in the folder download_root_folder\ARIS.xxx.DatabaseScripts\DatabaseScripts\Design&ConnectServer\oracle.
15. Open the envset.bat file, modify the following fields, and save the file (a filled-in sample follows the list):
*SET CIP_ORA_BIN_PATH=Path where sqlplus.exe can be found (for example, C:\app\username\product\11.2.0\dbname\BIN)
*SET TARGET_HOST=DB server name (machine on which the Oracle server is running)
*SET TARGET_PORT=Port (port on which the Oracle server listens, for example 1521)
*SET TARGET_SERVICE_NAME=Service name (name of the Oracle service, for example XE for Oracle 11g)
*SET CIP_INSTALL_USER=User name (database administrator user name)
*SET CIP_INSTALL_PWD=Password (database administrator password)
*SET CIP_TS_DATA=Tablespace name (tablespace that already exists in the database)
*SET CIP_APP_USER=User name (user that will be used by the application, for example dbuser)
*SET CIP_APP_PWD=Password (password of the application user, for example dbuser123)
*SET CIP_TENANT_SCHEMA_PWD=Password (password used for tenant schemas, for example dbuser123)
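A filled-in envset.bat might look as follows; all values are illustrative and must be replaced with the details of your own Oracle installation:

SET CIP_ORA_BIN_PATH=C:\app\username\product\11.2.0\dbname\BIN
SET TARGET_HOST=oradbserver01
SET TARGET_PORT=1521
SET TARGET_SERVICE_NAME=XE
SET CIP_INSTALL_USER=system
SET CIP_INSTALL_PWD=admin_password
SET CIP_TS_DATA=USERS
SET CIP_APP_USER=dbuser
SET CIP_APP_PWD=dbuser123
SET CIP_TENANT_SCHEMA_PWD=dbuser123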
16. Before running the database scripts, ensure that the Oracle query tool (sqlplus) is available in the command prompt. Then run the envset.bat file.
17. Run the cip_create_app_user.bat file. This creates the application user that was specified in the envset.bat file.
For Oracle 12c and Oracle 19c, make the following changes so that the script files run without errors:
a. To avoid the error ORA-65096: invalid common user or role name during schema creation, open the cip_create_empty_tenant_schema.sql and cip_create_app_user.sql files and add the following line after "set verify off":
alter session set "_ORACLE_SCRIPT"=true;
b. If a complex password policy is enabled in the database and the application user password does not comply with it, an error message is displayed while creating the tenant schema. To avoid this, open the cip_create_empty_tenant_schema.sql file and add the following after "BEGIN":
EXECUTE IMMEDIATE 'ALTER PROFILE default LIMIT
PASSWORD_VERIFY_FUNCTION null';
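Taken together, the two edits sit in cip_create_empty_tenant_schema.sql roughly as follows; the elided lines stand for the rest of the shipped script, which stays unchanged:

set verify off
alter session set "_ORACLE_SCRIPT"=true;
...
BEGIN
EXECUTE IMMEDIATE 'ALTER PROFILE default LIMIT
PASSWORD_VERIFY_FUNCTION null';
...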
18. To create the database schemas for the tenants default and master, run the following commands on a command line:
*cip_create_schema_for_tenant.bat CIP_MASTER
*cip_create_schema_for_tenant.bat CIP_DEFAULT
Note:
You can use a tool such as DbVisualizer to ensure that the schemas are created.
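Alternatively, you can verify the schemas directly from sqlplus. A minimal check, assuming the administrator credentials and connection details from envset.bat (illustrative values):

sqlplus system/admin_password@//oradbserver01:1521/XE
SQL> SELECT username FROM all_users WHERE username IN ('CIP_MASTER', 'CIP_DEFAULT');

Both schema names should be returned.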
19. Switch to the machine where API Portal is installed. Add the JDBC driver to the API Portal classpath.
a. Start API Portal Cloud Controller.
b. Run the following command:
enhance apiportalbundle_s with commonsClasspath
local file "location of ojdbc file"
Example:
ACC+ localhost>enhance apiportalbundle_s with commonsClasspath
local file "C:/jdbc/jar/ojdbc8.jar"
20. Register the external service database.
a. In API Portal Cloud Controller, run the following command:
register external service db url="jdbc:oracle:thin:@
servername:port/servicename"
driverClassName=oracle.jdbc.OracleDriver jmxEnabled=true maxActive=100
maxIdle=15 logAbandoned=true rollbackOnReturn=true maxWait=10000
removeAbandoned=false defaultAutoCommit=false
username=application_username password=application_user_password
host=servername
Replace servername, port, and servicename with the connection details of your Oracle server, and application_username and application_user_password with the application user credentials defined in envset.bat.
An external service identifier is returned once the command is executed, for example the service ID db0000000000.
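You can review the registered service with the list external services ACC command; the command name is taken from the standard ACC external-service command set, so verify its availability in your ACC version:

ACC+ localhost>list external services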
b. Run the following commands to assign the service to the default and master tenants:
assign tenant default to service db0000000000
com.aris.cip.db.schema=CIP_DEFAULT
assign tenant master to service db0000000000
com.aris.cip.db.schema=CIP_MASTER
21. Execute the startup script by running the following command from the command prompt:
C:\SoftwareAG\API_Portal\server>acc\acc.bat -n
C:\SoftwareAG\API_Portal\server\nodelist.pt -c
C:\SoftwareAG\API_Portal\server\generated.apptypes.cfg -f
C:\SoftwareAG\API_Portal\server\startupScript.bat
On Linux, use acc/acc.sh instead of acc\acc.bat.
22. Ensure the HA setup is successfully running and all the runnables are started.
In the default installation, access the User Management Component (UMC) at http://machine name/umc, the ARIS Document Storage (ADS) at http://machine name/ads, the collaboration component at http://machine name/collaboration, and the API Portal at http://machine name.
You can see that the Elasticsearch cluster consists of three nodes, that the cluster name is apiportal, and that the master node is indicated by a solid star. Each index is split into three parts, called shards, which are replicated once and distributed across the three available nodes. If one node goes offline, the remaining nodes can still serve the complete index, making this a fail-over setup.
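You can confirm this shard distribution through Elasticsearch's standard cat APIs. A minimal sketch, again assuming machine1, the esHTTPport placeholder, and the credentials set in step 7:

curl -u portal:manager "http://machine1:esHTTPport/_cat/nodes?v"
curl -u portal:manager "http://machine1:esHTTPport/_cat/shards?v"

The first call lists the three cluster members and marks the elected master with an asterisk; the second shows each index's primary shards (p) and replicas (r) spread across the three nodes.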