Planning a Configuration
Most deployments require at least a little planning before you begin the configuration process, in order to avoid missteps or even the need to start over.
If you have not already done so, please familiarize yourself with the material presented in Configuration Terms and Concepts.
Naming and Addressing
Naming Servers and Stripes
Because Terracotta deployments typically involve at least two servers, and often many more, you should put some planning into how you name them. Good names keep the nodes and stripes easily identifiable in configuration, in management commands, and in monitoring views. Naming them is not actually necessary; servers and stripes you do not name are assigned auto-generated names, but names that are meaningful to you will likely be more helpful.
Some things to consider when deciding upon the scheme for naming stripes and nodes:
*You may want to include within the name of a stripe or node something that hints at its purpose, such as whether it is part of a development, test, or production environment. For example "DevStripe-A".
*You may want to include within a node's name something related to the name of the host upon which the server runs. On the other hand, in dynamic/container environments you may want to purposely avoid this.
*If you expect to change your TSA topology in the future (i.e. to add or remove stripes from the cluster, or to add or remove servers from stripes), you may want to avoid sequential numbering in the names, such as "server-1" or "stripe-1", because over time you may end up with gaps or other oddities in the numbering. Such gaps work fine, but can confuse users trying to form a mental map of the topology.
As you form your cluster, the cluster itself can also be named. It makes good sense to use a name that clearly identifies its purpose, e.g. "MyApp-PROD-TSA" or "MyApp-DEV-TSA".
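As a minimal sketch of how such names might appear in a 10.7 config properties file (the stripe and node names below are hypothetical, and exact property names may vary with your installation):

    cluster-name=MyApp-PROD-TSA
    stripe.1.stripe-name=ProdStripe-A
    stripe.1.node.1.name=prod-a-node-1
    stripe.1.node.2.name=prod-a-node-2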
Addressing Servers
As you plan the set of servers you will need, and where they will be deployed, it is wise to make a clear listing of the host names (or addresses) and ports to be used by each server, and to keep it handy during the configuration process.
You have the choice of addressing servers by hostname or by IP address. Using host names (that can be resolved by DNS) is preferable. Note that you can also specify bind-address (for port) and group-bind-address (for group-port) to remove any ambiguity about which IP address each port will be opened on (the default is to open the ports on all of the host's addresses).
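A sketch of per-node addressing entries in the config properties format (the host name, the local address, and the use of the default ports 9410 and 9430 are assumptions for illustration):

    stripe.1.node.1.hostname=tc-node1.example.com
    stripe.1.node.1.port=9410
    stripe.1.node.1.group-port=9430
    # optional: restrict which local address each port is opened on
    stripe.1.node.1.bind-address=10.0.0.15
    stripe.1.node.1.group-bind-address=10.0.0.15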
Data Consistency and Availability
One of the most important actions in planning your configuration is determining which guarantees you would like to favor in a failover situation.
Please refer to Failover Tuning for a full discussion of that feature.
The well-known CAP theorem dictates that, in a failover situation, the TSA must sacrifice some guarantees in order to preserve the others.
If you plan to store data in the TSA and have its integrity protected with priority, you should strongly consider using the consistency setting for the cluster's failover-priority setting.
If you plan only to cache data in the TSA, you may prefer to use the availability setting for failover-priority.
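For illustration, the cluster-wide setting might look like one of the following (the optional numeric suffix on consistency, giving the number of configured external voters, is shown as an assumption):

    # favor data consistency; the suffix is the external voter count
    failover-priority=consistency:1
    # or, for purely cached data, favor availability
    failover-priority=availability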
In either case, the likelihood that the cluster will ever need to compromise on either data availability or data consistency can be greatly reduced by careful choices and resourcing related to High Availability.
High Availability
High Availability (HA) of the Terracotta Server Array is achieved through the use of mirror servers within each stripe, and the optional use of "voters".
When an active server is shut down or fails, the other stripe members become eligible to be elected as the new active server for that stripe's set of data. If no other members of the stripe are running, the stripe's data is unavailable, which typically results in the complete unavailability of the Terracotta cluster until the stripe is back online.
As you plan your TSA configuration, you should consider what levels of service are required, and plan the proper number of servers per stripe and any requisite external voters (to assist with tie-breaking during elections when quorum is not otherwise present).
For more information on these topics, see Active and Passive Servers, Electing an Active Server, and Failover Tuning (including discussion of External Voters).
Storage and Persistence Resources
In-Memory Storage
As part of planning for your configuration, you need to put some thought into how you will organize the storage of your data.
Typically, data is stored within "offheap resources" which represent pools of memory reserved from the underlying operating system. You configure one or more offheap resources, giving each a name and a size (such as 700MB or 512GB, etc.). After your cluster is up and running, you can create Caches and Datasets for storing your data, and as you do so, you will need to indicate which offheap resource will be used by each.
There is nothing inherently wrong with defining only one offheap resource and having all Datasets and Caches use it. However, some users may find it useful to reserve particular amounts of memory for particular Datasets or Caches.
Note that the total amount of memory available in a particular offheap resource is the configured size of the resource multiplied by the number of stripes in the TSA, because the configured amount applies to each server. Thus if you configure an offheap resource named 'primary' with a size of 50GB, and your TSA has 3 stripes, then you will be able to store a total of 150GB of data (including any related secondary indexes) within the 'primary' offheap resource.
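A sketch of how the pools from the example above might be declared (the names and sizes are illustrative):

    # each server reserves these pools; with 3 stripes, 'primary' holds 3 x 50GB = 150GB cluster-wide
    offheap-resources=primary:50GB,secondary:4GB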
See also the topic Necessarily Equivalent Settings in the section Configuration Terms and Concepts.
Disk Storage and Persistence
Most users want at least some of their data to be persisted, in other words durable across restarts of the servers in the TSA. Terracotta's FRS and Hybrid features provide this capability.
FRS (Fast Restartable Store) is a transaction log structured in such a way that it can be very efficiently replayed upon server restart, in order to recover all of the stored data as it existed when the server went down. (Note that passive/mirror servers would instead sync the latest state of the data from the active server). When FRS is enabled, data writes (additions, updates, deletions) are recorded in FRS, but all data reads (gets and queries) occur within memory.
Hybrid storage mode utilizes FRS capabilities, but also expands storage capacity to include the disk, not just memory. In Hybrid mode, memory (offheap resources) is used to store keys, pointers/references and search indexes (for extremely fast resolution of lookups and queries), while values are read from disk, so that memory does not need the capacity to contain them all. As with FRS, all data is recovered to its last state when the server restarts.
For your configuration planning, you should note that both FRS and Hybrid features require a location on disk where they can store the data. Because data is written to disk when modifications occur, the speed of the disk is a major factor in the latency and throughput of Dataset and Cache operations, and in the speed of server restarts. Many users find it beneficial to dedicate a highly performant file system to FRS/Hybrid data, while having the server use a different file system for storing configuration, logs, etc. Some users find it useful to have multiple file system paths (e.g. mount points) for storing different sets of data (different Caches or Datasets), both for performance and for organizational (e.g. backup) purposes.
Locations for user data storage are specified with the data-dirs configuration property, which can contain a comma-separated list of one or more data directories. Each data-dir has an identifying name that is used in the configuration of Caches and Datasets to enable persistence of the data put into them. Recall that an equivalent set of data-dirs (with the same names) should exist on all nodes of the cluster.
Your planning should consider which filesystem path(s) you will use for data persistence (if any), and which names you will use to identify each of those locations.
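For example, a node might split its persistent data across a fast device and a larger-capacity device (the data-dir names 'main' and 'bulk' and the mount points are hypothetical):

    stripe.1.node.1.data-dirs=main:/mnt/fast-ssd/prod-a-node-1/data,bulk:/mnt/capacity/prod-a-node-1/data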
See also the topic Necessarily Equivalent Settings in the section Configuration Terms and Concepts.
Config and Metadata Directories
Terracotta servers require locations for storing their internal configuration (set with the config-dir property), and their state metadata (set with the metadata-dir property).
For each server instance, you will want to make sure these locations are always available to the server (ideally on a local disk).
You may want to name the directories after the server node's name (or similar) to help keep things organized and clear for yourself and others who administer the system.
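Following that naming suggestion, a node's metadata location might look like this in the config properties format (the path is hypothetical; the configuration directory, set with config-dir, can follow the same scheme):

    stripe.1.node.1.metadata-dir=/var/lib/terracotta/prod-a-node-1/metadata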
Backup Directory
In order to use the data backup feature of Terracotta, you will need to configure a location for the backup to be written to. This is done with the backup-dir config property.
The location should ideally be performant (so that the backup files can be written quickly, with minimal impact on the server), and large enough to contain the backup to be made, plus any previously made backups that you have not removed.
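For example (the mount point is hypothetical, again keyed to the node name):

    stripe.1.node.1.backup-dir=/mnt/backup/prod-a-node-1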
See also: Backup, Restore and Data Migration.
Logging Directory
You should also put some planning into where your server's logs will be written. This is configured with the log-dir property.
Like the server's metadata directory and config directory, the log directory should be available to the server at all times, and its path and name should make clear, to you and to others administering the system, which server's logs it contains.
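For example (the path is hypothetical):

    stripe.1.node.1.log-dir=/var/log/terracotta/prod-a-node-1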
Security
Your configuration planning should also consider whether you wish to enable security features on your cluster. Security features include encryption of network communications via TLS/SSL, and authentication, authorization and auditing (AAA) features.
If so, you will need to become familiar with these features in order to plan your configuration properly. See also: Security Core Concepts and Cluster Security.
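As a rough sketch only (these property names are assumptions based on the 10.7 dynamic-config format; consult the security documentation before relying on them), a secured cluster might carry settings like:

    # enable TLS/SSL for connections
    ssl-tls=true
    # file-based authentication
    authc=file
    stripe.1.node.1.security-dir=/etc/terracotta/security/prod-a-node-1
    stripe.1.node.1.audit-log-dir=/var/log/terracotta/prod-a-node-1/audit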
Public Addresses
Will the clients need to address the servers differently than the servers address each other (such as due to being within a managed container environment that has an "internal" network)?
If so, you may want to review whether hostnames will resolve to legal addresses both inside and outside of the containers, and whether you need to use the public-address configuration setting on your servers.
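In the 10.7 config properties format, a node's public address is typically expressed as a hostname/port pair alongside its internal address; the names below are illustrative assumptions:

    stripe.1.node.1.hostname=tc-node1.internal
    stripe.1.node.1.port=9410
    # the address advertised to clients outside the internal network
    stripe.1.node.1.public-hostname=tc-node1.example.com
    stripe.1.node.1.public-port=9410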
See also: Terracotta in Network Environments with Subnets.