For BigMemory Max 4.0, Quartz Scheduler, and Terracotta Web Sessions
The Terracotta Server Array (TSA) provides the platform for Terracotta products and the backbone for Terracotta clusters. A Terracotta Server Array can vary from a basic two-node tandem to a multi-node array providing configurable scale, high performance, and deep failover coverage.
The main features of the Terracotta Server Array include:
The 4.0 TSA is an in-memory data platform, where all data is kept in memory, providing faster, more consistent, and more predictable access to data. With resource management, if you have more data than memory available, the TSA protects itself from exceeding its limit through data eviction and throttling. In most cases, it recovers and returns to its normal working state automatically. In addition, three systems are available to protect data: the Fast Restart feature, active-mirror server groups, and backups.
BigMemory's Fast Restart feature is now integrated into the TSA, providing crash resilience with quick recovery, plus a consistent record of the entire in-memory data set, no matter how large. For more information, refer to Fast Restartability.
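Fast Restartability is enabled in the server configuration file. The following is a minimal sketch of a tc-config.xml that turns it on; the hostname, server name, and data path are illustrative assumptions, not values from this documentation:

```xml
<!-- Sketch: enabling Fast Restartability (hostname, name, and path are assumptions) -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="localhost" name="Server1">
      <!-- Directory where the server keeps its data, including the Fast Restart log -->
      <data>/opt/terracotta/server-data</data>
    </server>
    <!-- Writes a consistent, recoverable record of the in-memory data set -->
    <restartable enabled="true"/>
  </servers>
</tc-config>
```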
The new implementation has no option for temporary disk storage. All data handled by the TSA is in-memory only. With no overflow or swapping to server disks, the TSA-managed data set is always exactly what is in memory. (Note that localTempSwap continues to be an option for unclustered BigMemory Go.)
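For contrast, the following is a sketch of how unclustered BigMemory Go can still use temporary disk swap; the cache name and sizes are illustrative assumptions:

```xml
<!-- Sketch: ehcache.xml fragment for unclustered BigMemory Go
     (cache name and sizes are assumptions) -->
<ehcache maxBytesLocalOffHeap="2g">
  <cache name="exampleCache"
         maxBytesLocalHeap="64m">
    <!-- localTempSwap allows overflow to a temporary disk store;
         this option does not apply to TSA-managed (clustered) data -->
    <persistence strategy="localTempSwap"/>
  </cache>
</ehcache>
```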
Resource management provides better control over the TSA's in-memory data through time, size, and count limitations. This enables automatic handling of, and recovery from, near-memory-full conditions. For more information, refer to Automatic Resource Management.
Based upon user-configured time, size, and count limitations, the TSA's three-pronged eviction strategy works automatically to ensure predictable behavior when memory becomes full. For more information, refer to Eviction.
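The three kinds of limits can be seen in a clustered cache configuration. The following fragment is a sketch only; the cache name and values are illustrative assumptions:

```xml
<!-- Sketch: the three eviction limits on a clustered cache (values are assumptions) -->
<cache name="exampleCache"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="86400"
       maxEntriesInCache="1000000">
  <!-- time limits (TTI/TTL) and a count limit; the size limit comes from
       the memory configured on the Terracotta servers -->
  <terracotta/>
</cache>
```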
Improvements to provide continuous availability of data include flexibility in server startup sequencing, better utilization of extra mirrors in mirror groups, multi-stripe backup capability, optimizations to bulk load, and performance improvements for data access on rejoin. In addition, the TSA no longer uses Oracle Berkeley DB, enabling in-memory data to be ready for use much more quickly after any planned or unplanned restart.
The expanded TMC replaces the Developer Console and Operations Center as the integrated platform for monitoring, managing, and administering all Terracotta deployments. There is also support for additional REST APIs for management and monitoring. For more information, start with Terracotta Management Console.
Terracotta servers now support Active Directory (AD) and Lightweight Directory Access Protocol (LDAP) authentication, and Terracotta clients support a custom SecretProvider. For more information, refer to Securing Terracotta Clusters and Setting up LDAP-based Authentication.
DSO configuration has been deprecated, and the tc-config file has a new format. Most of the elements are the same, but the structure is revised. For more information, refer to Terracotta Configuration Reference.
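As a rough sketch of the revised format, a 4.x tc-config.xml might look like the following; hostnames, ports, paths, and sizes here are illustrative assumptions, so consult the Terracotta Configuration Reference for the authoritative schema:

```xml
<!-- Sketch: revised (non-DSO) tc-config format (values are assumptions) -->
<tc-config xmlns="http://www.terracotta.org/config">
  <servers>
    <server host="host1" name="Server1">
      <data>/opt/terracotta/server-data</data>
      <!-- renamed ports in the revised format -->
      <tsa-port>9510</tsa-port>
      <jmx-port>9520</jmx-port>
      <tsa-group-port>9530</tsa-group-port>
      <!-- in-memory (off-heap) storage for TSA-managed data -->
      <offheap>
        <enabled>true</enabled>
        <maxDataSize>4g</maxDataSize>
      </offheap>
    </server>
  </servers>
  <clients>
    <logs>logs-%i</logs>
  </clients>
</tc-config>
```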
The major components of a Terracotta installation are the following:
Note: This documentation may refer to a Terracotta server instance as L2, and a Terracotta client (the node running your application) as L1. These are the shorthand references used in Terracotta configuration files.
A Terracotta cluster has the following functional characteristics:
| For more about | Go to |
|---|---|
| Architecture | Terracotta Server Array Architecture |
| High Availability | Configuring Terracotta Clusters for High Availability |
| Configuration | Working with Terracotta Configuration Files |
| Resource Management | TSA Operations – Automatic Resource Management |
| Live Data Backup | TSA Operations – Distributed In-memory Data Backup |