Sample Code for Quartz Scheduler Where
In this example, a cluster has Terracotta clients running Quartz Scheduler on the following hosts: node0, node1, node2, and node3. These hostnames are used as the instance IDs of the scheduler instances because the following properties are set in quartz.properties:
org.quartz.scheduler.instanceId = AUTO
#This sets the hostnames as instance IDs:
org.quartz.scheduler.instanceIdGenerator.class = org.quartz.simpl.HostnameInstanceIdGenerator
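With these properties in place, each scheduler started on a host picks up that host's name as its instance ID. The following sketch shows the standard Quartz way of creating and starting such a scheduler instance; it assumes quartz.properties is on the classpath and is not specific to Quartz Scheduler Where.

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerStartup {
    public static void main(String[] args) throws SchedulerException {
        // Reads quartz.properties from the classpath; with
        // HostnameInstanceIdGenerator configured, the instance ID
        // resolves to this host's name (for example, "node1").
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();
        System.out.println("Started scheduler instance: "
                + scheduler.getSchedulerInstanceId());
    }
}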
Setting quartzLocality Properties
quartzLocality.properties has the following configuration:
org.quartz.locality.nodeGroup.slowJobs = node0, node3
org.quartz.locality.nodeGroup.fastJobs = node1, node2
org.quartz.locality.nodeGroup.allNodes = node0, node1, node2, node3
org.quartz.locality.nodeGroup.slowJobs.triggerGroups = slowTriggers
org.quartz.locality.nodeGroup.fastJobs.triggerGroups = fastTriggers
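With this mapping, triggers are routed by their trigger group: any trigger created in the slowTriggers group fires only on node0 or node3, and any trigger in the fastTriggers group fires only on node1 or node2, with no locality code in the job itself. A minimal sketch using the standard Quartz builders (the job and trigger names are hypothetical):

// A plain Quartz trigger placed in the "slowTriggers" trigger group;
// quartzLocality.properties routes it to the slowJobs node group (node0, node3).
Trigger nightlyTrigger = newTrigger()
    .forJob("nightlyReportJob")
    .withIdentity("nightlyReportTrigger", "slowTriggers")
    .withSchedule(simpleSchedule()
        .withIntervalInHours(24)
        .repeatForever())
    .build();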
Creating Locality-Aware Jobs and Triggers
The following code snippet uses Quartz Scheduler Where to create locality-aware jobs and triggers.
// Note the static imports of builder classes that define a Domain Specific Language (DSL).
import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;
import static org.quartz.locality.LocalityTriggerBuilder.localTrigger;
import static org.quartz.locality.NodeSpecBuilder.node;
import static org.quartz.locality.constraint.NodeGroupConstraint.partOfNodeGroup;
import org.quartz.JobDetail;
import org.quartz.locality.LocalityTrigger;
// Other required imports...
// Using the Quartz Scheduler fluent interface, or the DSL.
/***** Node Group + OS Constraint
Create a locality-aware job that can be run on any node
from nodeGroup "group1" that runs a Linux OS:
*****/
LocalityJobDetail jobDetail1 =
    localJob(
        newJob(myJob1.class)
            .withIdentity("myJob1")
            .storeDurably(true)
            .build())
    .where(
        node()
            .is(partOfNodeGroup("group1"))
            .is(OsConstraint.LINUX))
    .build();

// Create a trigger for myJob1:
Trigger trigger1 = newTrigger()
    .forJob("myJob1")
    .withIdentity("myTrigger1")
    .withSchedule(simpleSchedule()
        .withIntervalInSeconds(10)
        .withRepeatCount(2))
    .build();
// Create a second job:
JobDetail jobDetail2 = newJob(myJob2.class)
    .withIdentity("myJob2")
    .storeDurably(true)
    .build();
/***** Memory Constraint
Create a locality-aware trigger for myJob2 that will fire on any
node that has a certain amount of free memory available:
*****/
LocalityTrigger trigger2 =
    localTrigger(newTrigger()
        .forJob("myJob2")
        .withIdentity("myTrigger2"))
    .where(
        node()
            // Fire on any node in allNodes
            // with at least 100MB of free memory.
            .is(partOfNodeGroup("allNodes"))
            .has(atLeastAvailable(100, MemoryConstraint.Unit.MB)))
    .build();
/***** A Locality-Aware Trigger For an Existing Job
The following trigger will fire myJob1 on any node in the allNodes group
that's running Linux:
*****/
LocalityTrigger trigger3 =
    localTrigger(newTrigger()
        .forJob("myJob1")
        .withIdentity("myTrigger3"))
    .where(
        node()
            .is(partOfNodeGroup("allNodes")))
    .build();
/***** Locality Constraint Based on Cache Keys
The following job detail sets up a job (cacheJob) that will be fired on the node
where myCache has, locally, the most keys specified in the collection myKeys.
After the best match is found, missing elements will be faulted in.
If these types of jobs are fired frequently and a large amount of data must
often be faulted in, performance could degrade. To maintain performance, ensure
that most of the targeted data is already cached.
*****/
// myCache is already configured, populated, and distributed.
Cache myCache = cacheManager.getEhcache("myCache");
// A Collection is needed to hold the keys for elements targeted by cacheJob.
// The following assumes String keys.
Set<String> myKeys = new HashSet<String>();
... // Populate myKeys with the keys for the target elements in myCache.
// Create the job that will do work on the target elements.
LocalityJobDetail cacheJobDetail =
    localJob(
        newJob(cacheJob.class)
            .withIdentity("cacheJob")
            .storeDurably(true)
            .build())
    .where(
        node()
            .has(elements(myCache, myKeys)))
    .build();
Notice that trigger3, the third trigger defined, overrode the partOfNodeGroup constraint of myJob1. Where triggers and jobs have conflicting constraints, the trigger's constraints take priority. However, because trigger3 did not provide an OS constraint, it did not override the OS constraint in myJob1. If any of the constraints in effect (whether set on the trigger or the job) are not met, the trigger goes into an error state and the job is not fired.
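To complete the example, the jobs and triggers defined above are registered with the scheduler through the standard Quartz API. This is a minimal sketch; it assumes the scheduler instance obtained at startup and that LocalityJobDetail and LocalityTrigger can be passed wherever JobDetail and Trigger are expected.

// Store the durable jobs, then schedule the triggers that reference them.
scheduler.addJob(jobDetail1, false);      // myJob1: Linux nodes in group1
scheduler.addJob(jobDetail2, false);      // myJob2: no constraints of its own
scheduler.addJob(cacheJobDetail, false);  // cacheJob: routed by cache-key locality

scheduler.scheduleJob(trigger1);  // plain trigger for myJob1
scheduler.scheduleJob(trigger2);  // memory-constrained trigger for myJob2
scheduler.scheduleJob(trigger3);  // node-group-constrained trigger for myJob1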
Using CPU-Based Constraints
The CPU constraint allows you to run jobs on machines with adequate processing power:
...
import static org.quartz.locality.constraint.CpuConstraint.loadAtMost;
...
// Create a locality-aware trigger for someJob.
LocalityTrigger trigger =
    localTrigger(newTrigger()
        .forJob("someJob")
        .withIdentity("someTrigger"))
    .where(
        node()
            // fire on any node in allNodes
            // with at most the specified load:
            .is(partOfNodeGroup("allNodes"))
            .has(loadAtMost(0.80)))
    .build();
The load constraint refers to the CPU load (a standard *NIX load measurement) averaged over the last minute. A load average below 1.00 indicates that the CPU is likely to execute the job immediately. The smaller the load, the freer the CPU, though setting a threshold that is too low could make it difficult for a match to be found.
Other CPU constraints include CpuConstraint.coresAtLeast(int amount), which specifies a node with a minimum number of CPU cores, and CpuConstraint.threadsAvailableAtLeast(int amount), which specifies a node with a minimum number of available threads.
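These constraints can be combined on a single node specification in the same way the examples above chain .is() and .has(). A minimal sketch, assuming static imports for coresAtLeast and threadsAvailableAtLeast alongside loadAtMost (the job and trigger names are hypothetical):

LocalityTrigger cpuBoundTrigger =
    localTrigger(newTrigger()
        .forJob("encodeVideoJob")
        .withIdentity("encodeVideoTrigger"))
    .where(
        node()
            .is(partOfNodeGroup("allNodes"))
            // Require a lightly loaded node with enough cores and spare threads.
            .has(loadAtMost(0.50))
            .has(coresAtLeast(4))
            .has(threadsAvailableAtLeast(8)))
    .build();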
Note:  
If a trigger cannot fire because it has constraints that cannot be met by any node, that trigger will go into an error state. Applications using Quartz Scheduler Where with constraints should be tested under conditions that simulate those constraints in the cluster.
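One way to verify this in a test is to check the trigger's state through the standard Quartz API after scheduling. The sketch below assumes that the error state described above is surfaced as the standard TriggerState.ERROR.

import org.quartz.Trigger.TriggerState;
import org.quartz.TriggerKey;

// After scheduling, confirm the trigger found a node that satisfies its constraints.
TriggerState state = scheduler.getTriggerState(new TriggerKey("someTrigger"));
if (state == TriggerState.ERROR) {
    // No node matched the constraints; relax them or add capacity before relying on this trigger.
    System.err.println("someTrigger is in ERROR state: constraints could not be met");
}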
This example showed how memory and node-group constraints are used to route locality-aware triggers and jobs. For example, trigger2 is set to fire myJob2 on a node in a specific group ("allNodes") with a specified minimum amount of free memory. A constraint based on operating system (Linux, Microsoft Windows, Apple OSX, and Oracle Solaris) is also available.