Active and Passive Servers
Introduction
Terracotta Servers run in one of two modes: active or passive. Each mode is described below.
Active servers
Within a given stripe of a cluster, there is always an active server. A server in a single-server stripe is always the active server. A multi-server stripe will only ever have one active server at a given point in time.
The active server is the server with which clients communicate directly. The active server independently relays these messages on to the passive servers.
How an active server is chosen
When a stripe starts up, or a failover occurs, the online servers perform an election to decide which one will become the active server and lead the stripe. For more information about elections, see the section Electing an Active Server.
How clients find the active server
Clients will attempt to connect to each server in the stripe, and only the active server will accept the connection.
The client will continue to interact only with this server until the connection is broken. If a failover has occurred, it then attempts the other servers in the stripe to locate the new active server. For more information about failover, see the section Failover.
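For illustration, the clustered Ehcache 3 client that ships with Terracotta takes a connection URI listing every server in the stripe; the client library locates the active server and handles reconnection after a failover. The host names, port, and cache manager name below are placeholders; this is a minimal sketch, not a complete application:

    import java.net.URI;

    import org.ehcache.PersistentCacheManager;
    import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
    import org.ehcache.config.builders.CacheManagerBuilder;

    public class StripeClient {
        public static void main(String[] args) {
            // List every server in the stripe; only the active server
            // accepts the connection, so the client finds it automatically.
            URI stripeUri = URI.create("terracotta://server-1:9410,server-2:9410/my-application");

            PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                    .with(ClusteringServiceConfigurationBuilder.cluster(stripeUri).autoCreate())
                    .build(true);

            // ... create and use clustered caches here ...

            cacheManager.close();
        }
    }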
Responsibilities of the active server
The active server differs from passive servers in that it receives all messages from the clients. It is then responsible for sending responses back to the calling clients.
Additionally, the active server is responsible for replicating the messages that it receives to the passive servers.
When a new server joins the stripe, the active server is responsible for synchronizing its internal state to the new server before telling it to enter a standby state. This state means that the new server is a valid candidate to become the new active server in the case of a failover.
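As a purely illustrative sketch of this flow (StripeMessage, PassiveChannel, and Response are hypothetical names, not part of the Terracotta API), the active server's handling of a client message can be pictured as:

    import java.util.List;

    final class ActiveServerSketch {
        private final List<PassiveChannel> passives;

        ActiveServerSketch(List<PassiveChannel> passives) {
            this.passives = passives;
        }

        /** Handle one client message: replicate to every passive, apply locally, respond. */
        Response handle(StripeMessage message) {
            for (PassiveChannel passive : passives) {
                passive.replicate(message); // independent relay to each passive
            }
            return apply(message);          // produce the response sent back to the client
        }

        private Response apply(StripeMessage message) { /* domain logic */ return new Response(); }
    }

    interface PassiveChannel { void replicate(StripeMessage message); }
    final class StripeMessage {}
    final class Response {}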
Passive servers
Any stripe of a cluster which has more than one running server will contain passive servers. While there is only one active server per stripe, there can be zero, one, or several passive servers.
Passive servers go through multiple states before being available for failover:
UNINITIALIZED
This passive server has just joined the stripe and has no data.
SYNCHRONIZING
This passive server is receiving the current state from the active server. It has some of the stripe data but not yet enough to participate in failover.
STANDBY
This passive server contains the stripe data and is a candidate to become the active server in the case of a failover.
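These states can be summarized as a simple Java enum; the type below is illustrative only, not a Terracotta API type:

    /** Illustrative mirror of the passive-server lifecycle states. */
    enum PassiveState {
        UNINITIALIZED,  // just joined the stripe; holds no data
        SYNCHRONIZING,  // receiving the active server's current state
        STANDBY         // holds the stripe data; eligible to become active on failover
    }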
Passive servers only communicate with the active server, not with each other, and not with any clients.
How a server becomes passive
When a stripe starts up and a server fails to win the election, it becomes a passive server.
Additionally, a newly-started server that joins an existing stripe which already has an active server will become a passive server.
Responsibilities of the passive server
The passive server has far fewer responsibilities than the active server. It receives messages only from the active server, and does not communicate directly with other passive servers or with any clients interacting with the stripe.
Its key responsibility is to be ready to take over the role of the active server in the case that the active server crashes, loses power/network, or is taken offline for maintenance/upgrade activities.
All the passive server does is apply the messages that come from the active server, whether these are the initial state-synchronization messages sent when the passive server first joined or the ongoing replication of new messages. Because the passive server applies the same messages as the active server, its state is considered consistent with that of the active server.
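A hypothetical sketch of that apply loop, where ReplicationStream and StripeMessage are illustrative names rather than Terracotta APIs:

    // The passive side consumes the active server's stream and applies each
    // message in arrival order, keeping its state consistent with the active's.
    final class PassiveServerSketch {
        void run(ReplicationStream fromActive) throws InterruptedException {
            while (true) {
                StripeMessage message = fromActive.take(); // blocks until the active sends
                apply(message);                            // same effect as on the active
            }
        }
        private void apply(StripeMessage message) { /* mutate local state */ }
    }

    interface ReplicationStream { StripeMessage take() throws InterruptedException; }
    final class StripeMessage {}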
Lifecycle of the passive server
When a passive server first joins a stripe and determines that its role will be passive, it is in the UNINITIALIZED state.
If it is a restartable server and also discovers existing data from a previous run, it makes a backup of that data for safety reasons. Refer to the section Clearing Automatic Backup Data for more details.
Refer to the section Restarting a Stripe for information on the proper order in which to restart a restartable stripe.
From here, the active server begins sending it messages to rebuild the active server's current state on the passive server. This puts the passive server into the SYNCHRONIZING state.
Once the entire active state has been synchronized to the passive server, the active server tells it that synchronization is complete and the passive server now enters the STANDBY state. In this state, it receives messages replicated from the active server and applies them locally.
If the active server goes offline, only passive servers in the STANDBY state can be considered candidates to become the new active server.
Clearing Automatic Backup Data
After a passive server is restarted, it may retain artifacts from previous runs: if the server is restartable, any existing data from a previous run is backed up for safety, even in the absence of restartable cache managers. There is no limit on the number of backup copies retained. Over time, and with frequent restarts, these copies may consume a substantial amount of disk space, and it may be desirable to free up that space.
Backup rationale: If, after a full shutdown, an operator inadvertently starts the stripe members in the wrong order, the new active server could initialize itself from the possibly incomplete data of a previous passive server, resulting in data loss. This situation can be mitigated by (1) ensuring that all servers are running and (2) quiescing the cluster, prior to taking the backup. This ensures that all members of the stripe contain exactly the same data.
Clearing backup data manually: The old fast restart and platform files are backed up under the server's data directories, in the formats terracotta.backup.{date&time}/ehcache/ and backup-platform-data-{date&time}/platform-data respectively. Simply change to the data root directory and remove the backups.
It may be desirable to keep the latest backup copy. In that case, remove all the backup directories except the one with the latest timestamp.
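As an example, a small utility along the following lines could prune all but the newest copy of each backup family. This is a sketch only, assuming the backup directories sit directly under the data root and that their embedded timestamps sort lexicographically from oldest to newest:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class ClearOldBackups {

        public static void main(String[] args) throws IOException {
            Path dataRoot = Paths.get(args[0]);
            // Prune each backup family independently, keeping only its newest copy.
            pruneAllButLatest(dataRoot, "terracotta.backup.");
            pruneAllButLatest(dataRoot, "backup-platform-data-");
        }

        static void pruneAllButLatest(Path dataRoot, String prefix) throws IOException {
            try (Stream<Path> entries = Files.list(dataRoot)) {
                List<Path> backups = entries
                        .filter(p -> p.getFileName().toString().startsWith(prefix))
                        .sorted()   // timestamped names sort oldest-first
                        .collect(Collectors.toList());
                for (Path old : backups.subList(0, Math.max(0, backups.size() - 1))) {
                    deleteRecursively(old);
                }
            }
        }

        static void deleteRecursively(Path root) throws IOException {
            try (Stream<Path> walk = Files.walk(root)) {
                walk.sorted(Comparator.reverseOrder()) // delete children before parents
                    .forEach(p -> {
                        try {
                            Files.delete(p);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
            }
        }
    }

Run it with the data root directory as its single argument, after verifying which backup copies are safe to delete.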
