Adding Terracotta Clustering to Quartz Scheduler
This document describes how to add Terracotta clustering to an application that is using Quartz Scheduler. Use this installation if you have been running your Quartz Scheduler application:
on a single JVM, or
on a cluster using JDBC-Jobstore.
To set up the cluster with Terracotta, you will add a Terracotta JAR to each application and run a Terracotta Server Array. Except as noted in this document, you can continue to use Quartz in your application as specified in the Quartz documentation.
Prerequisites
JDK 1.6 or higher.
BigMemory Max 4.0.2 or higher. Download the kit and run the installer on the machine that will host the Terracotta Server.
All clustered Quartz objects must be serializable. For example, if you create your own Trigger types or store custom objects in a JobDataMap, those classes must implement java.io.Serializable (see the sketch below).
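For illustration, a minimal sketch of a custom class that can safely be placed in a clustered JobDataMap. The ReportSettings class and its fields are hypothetical:

import java.io.Serializable;

// Hypothetical example: any custom type stored in a JobDataMap or used as
// part of a clustered Trigger must implement Serializable so that the
// Terracotta Server Array can replicate it across the cluster.
public class ReportSettings implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String reportName;

    public ReportSettings(String reportName) {
        this.reportName = reportName;
    }

    public String getReportName() {
        return reportName;
    }
}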
Step 1: Install Quartz Scheduler
For guaranteed compatibility, use the JAR files included with the Terracotta kit you are installing. Mixing JARs from different kit versions may cause errors or unexpected behavior.
To install the Quartz Scheduler in your application, add the following JAR files to your application's classpath:
${TERRACOTTA_HOME}/quartz/quartz-ee-<quartz-version>.jar, where <quartz-version> is the current version of Quartz (2.2.0 or higher).
${TERRACOTTA_HOME}/common/terracotta-toolkit-runtime-ee-<version>.jar, which contains the Terracotta client libraries, where <version> is the current version of the Terracotta Toolkit JAR (4.0.2 or higher).
If you are using a WAR file, add these JAR files to its WEB-INF/lib directory.
Note: Most application servers (or web containers) should work with this installation of the Quartz Scheduler. However, note the following: GlassFish application server – You must add <jvm-options>-Dcom.sun.enterprise.server.ss.ASQuickStartup=false</jvm-options> to domain.xml.
Step 2: Configure Quartz Scheduler
The Quartz configuration file (quartz.properties by default) must be on your application's classpath. If you are using a WAR file, add the Quartz configuration file to WEB-INF/classes or to a JAR file that is included in WEB-INF/lib.
Add Terracotta Configuration
To be clustered by Terracotta, your application requires the following settings in quartz.properties:
# If you use the jobStore class TerracottaJobStore,
# Quartz Where will not be available.
org.quartz.jobStore.class = org.terracotta.quartz.EnterpriseTerracottaJobStore
org.quartz.jobStore.tcConfigUrl = <path/to/Terracotta/configuration>
The property org.quartz.jobStore.tcConfigUrl must point the client (or application server) at the location of the Terracotta configuration.
Note: In a Terracotta cluster, the application server is also known as the client.
The client must load the configuration from a file or a Terracotta server. If loading from a server, give the server's hostname and its tsa-port (9510 by default), found in the Terracotta configuration. The following example shows a configuration that is loaded from the Terracotta server on the local host:
# If you use the jobStore class TerracottaJobStore,
# Quartz Where will not be available.
org.quartz.jobStore.class = org.terracotta.quartz.EnterpriseTerracottaJobStore
org.quartz.jobStore.tcConfigUrl = localhost:9510
To load Terracotta configuration from a Terracotta configuration file (tc-config.xml by default), use a file path or URI. For example, if the Terracotta configuration file is located on myHost.myNet.net at /usr/local/TerracottaHome, use the full URI along with the configuration file's name:
# If you use the jobStore class TerracottaJobStore,
# Quartz Where will not be available.
org.quartz.jobStore.class = org.terracotta.quartz.EnterpriseTerracottaJobStore
org.quartz.jobStore.tcConfigUrl = file://myHost.myNet.net/usr/local/TerracottaHome/tc-config.xml
If the location of the Terracotta configuration changes later, you must update the value of org.quartz.jobStore.tcConfigUrl to match.
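Once quartz.properties is configured, no Terracotta-specific code is needed to obtain a scheduler. A minimal sketch, assuming quartz.properties is on the classpath; the class name SchedulerBootstrap is hypothetical:

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerBootstrap {
    public static void main(String[] args) throws Exception {
        // StdSchedulerFactory reads quartz.properties from the classpath,
        // including the TerracottaJobStore settings shown above.
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();

        // ... define and schedule jobs as usual ...

        // Pass true to wait for running jobs to complete before shutdown.
        scheduler.shutdown(true);
    }
}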
Scheduler Instance Name
A Quartz scheduler has a default name configured by the following quartz.properties property:
org.quartz.scheduler.instanceName = QuartzScheduler
Setting this property is not required. However, you can use it to differentiate between two or more scheduler instances, each of which then receives a separate store in the Terracotta cluster.
Using different scheduler names isolates job stores within the Terracotta cluster (each name identifies a logically unique scheduler instance). Using the same scheduler name allows scheduler instances on different nodes to share the same job store in the cluster.
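The following sketch illustrates the effect. The scheduler names, the localhost server address, and the newScheduler helper are hypothetical:

import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerNames {
    // Hypothetical helper: the instanceName determines which job store
    // the scheduler uses in the Terracotta cluster.
    static Scheduler newScheduler(String instanceName) throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", instanceName);
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.threadCount", "5");
        props.setProperty("org.quartz.jobStore.class",
                "org.terracotta.quartz.EnterpriseTerracottaJobStore");
        props.setProperty("org.quartz.jobStore.tcConfigUrl", "localhost:9510");
        return new StdSchedulerFactory(props).getScheduler();
    }

    public static void main(String[] args) throws Exception {
        // Different names: each scheduler gets its own job store.
        Scheduler orders = newScheduler("OrdersScheduler");
        Scheduler reports = newScheduler("ReportsScheduler");

        // The same name on two nodes would instead share one job store.
        orders.start();
        reports.start();
    }
}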
Step 3: Start the Cluster
1. Start the Terracotta server:
On UNIX/Linux
[PROMPT] ${TERRACOTTA_HOME}/server/bin/start-tc-server.sh &
On Microsoft Windows
[PROMPT] ${TERRACOTTA_HOME}\server\bin\start-tc-server.bat
2. Start the application servers.
3. To monitor the servers, start the Terracotta Management Console:
On UNIX/Linux
[PROMPT] ${TERRACOTTA_HOME}/tools/management-console/bin/start-tmc.sh &
On Microsoft Windows
[PROMPT] ${TERRACOTTA_HOME}\tools\management-console\bin\start-tmc.bat
Step 4: Edit the Terracotta Configuration
This step shows you how to run clients and servers on separate machines and add failover (High Availability). You will expand the Terracotta cluster by doing the following:
Moving the Terracotta server to its own machine
Creating a cluster with multiple Terracotta servers
Creating multiple application nodes
These tasks bring your cluster closer to a production architecture.
Procedure:
1. Shut down the Terracotta cluster.
On UNIX/Linux
[PROMPT] ${TERRACOTTA_HOME}/server/bin/stop-tc-server.sh
On Microsoft Windows
[PROMPT] ${TERRACOTTA_HOME}\server\bin\stop-tc-server.bat
2. Create a Terracotta configuration file called tc-config.xml with contents similar to the following:
<?xml version="1.0" encoding="UTF-8"?>
<con:tc-config xmlns:con="http://www.terracotta.org/config">
<servers>
<mirror-group group-name="default-group">
<!-- Sets where the Terracotta server can be found.
Replace the value of host with the server's IP address. -->
<server host="%i" name="Server1">
<offheap>
<enabled>true</enabled>
<maxDataSize>512M</maxDataSize>
</offheap>
<tsa-port>9510</tsa-port>
<jmx-port>9520</jmx-port>
<data>terracotta/data</data>
<logs>terracotta/logs</logs>
<data-backup>terracotta/backups</data-backup>
</server>
<!-- If using a mirror Terracotta server, also referred to as an
ACTIVE-PASSIVE configuration, add the second server here. -->
<server host="%i" name="Server2">
<offheap>
<enabled>true</enabled>
<maxDataSize>512M</maxDataSize>
</offheap>
<tsa-port>9511</tsa-port>
<data>terracotta/data-dos</data>
<logs>terracotta/logs-dos</logs>
<data-backup>terracotta/backups-dos</data-backup>
</server>
</mirror-group>
<update-check>
<enabled>false</enabled>
</update-check>
<garbage-collection>
<enabled>true</enabled>
</garbage-collection>
<restartable enabled="true"/>
</servers>
<!-- Sets where the generated client logs are saved on clients. -->
<clients>
<logs>terracotta/logs</logs>
</clients>
</con:tc-config>
3. Install BigMemory Max on a separate machine for each server you configure in tc-config.xml.
4. Copy the tc-config.xml to a location accessible to the Terracotta servers.
5. Edit the org.quartz.jobStore.tcConfigUrl property in quartz.properties to list both Terracotta servers, using each server's tsa-port as configured in tc-config.xml: org.quartz.jobStore.tcConfigUrl = <server.1.ip.address>:9510,<server.2.ip.address>:9511
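For reference, the complete quartz.properties for the clustered setup might now look like the following sketch (the scheduler name and thread count are illustrative):

# Hypothetical complete example; adjust names and addresses to your setup.
org.quartz.scheduler.instanceName = MyClusteredScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.threadPool.threadCount = 5
org.quartz.jobStore.class = org.terracotta.quartz.EnterpriseTerracottaJobStore
org.quartz.jobStore.tcConfigUrl = <server.1.ip.address>:9510,<server.2.ip.address>:9511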
6. Copy quartz.properties to each application node and ensure that it is on your application's classpath. If you are using a WAR file, add the Quartz configuration file to WEB-INF/classes or to a JAR file that is included in WEB-INF/lib.
7. Start the Terracotta server in the following way, replacing "Server1" with the name you gave your server in tc-config.xml:
On UNIX/Linux
[PROMPT] ${TERRACOTTA_HOME}/server/bin/start-tc-server.sh \
-f <path/to/tc-config.xml> -n Server1 &
On Microsoft Windows
[PROMPT] ${TERRACOTTA_HOME}\server\bin\start-tc-server.bat ^
-f <path\to\tc-config.xml> -n Server1
If you configured a second server, start that server in the same way on its machine, entering its name after the -n flag. The second server to start up becomes the mirror. Any other servers you configured will also start up as mirrors.
8. Start all application servers.
9. Start the Terracotta Management Console and view the cluster.