JGroups Cluster Security

This page describes cluster security in Atoti's distributed architecture: how to control which JGroups nodes are allowed to join a cluster, and how to encrypt the JGroups traffic between cluster members. User authentication itself is unchanged in a distributed deployment; see the note at the end of this page.

Throughout this page, the terms "distributed cube", "cluster member", and "node" are used interchangeably. They all refer to a single JGroups process participating in the cluster, in line with JGroups terminology.

note

Most of the material on this page is a distilled version of the JGroups 5.x security chapter, with Atoti-specific context added. The JGroups manual remains the authoritative reference for protocol-level details.

Cluster security covers two concerns:

  • preventing unauthorized JGroups nodes from joining the cluster;
  • preventing an eavesdropper on the network from making sense of the JGroups messages exchanged between cluster members.

JGroups addresses the first concern through its AUTH protocol and the second through its ENCRYPT family of protocols (SYM_ENCRYPT, ASYM_ENCRYPT). Both are already part of the JGroups library that Atoti uses for its distributed cluster management.

The AUTH protocol

The AUTH element is placed in the JGroups protocol stack XML (by convention, protocol.xml). It intercepts JOIN_REQUEST messages and rejects any node whose token does not satisfy the configured criteria.

In JGroups 5.x, all token properties in the XML must be prefixed with auth_token.. The auth_class attribute identifies the AuthToken implementation to use.

The following snippet shows the RegexMembership token used in the Atoti sandbox configuration. It permits only nodes whose IP address matches the given regular expression:

<AUTH auth_class="org.jgroups.auth.RegexMembership"
      auth_token.match_string="^127.*"
      auth_token.match_ip_address="true"
      auth_token.match_logical_name="false"/>

This configuration is suitable for local development, where all cluster members run on 127.x.x.x. It does not provide a shared secret and is not appropriate for production.

The JGroups 5.x AUTH reference is at http://www.jgroups.org/manual5/#AUTH.

JGroups built-in token types

| Token class | Description | Notes |
| --- | --- | --- |
| RegexMembership | Filters by IP address or logical node name using a regular expression | No shared secret; suitable for development only |
| FixedMembershipToken | Allows an explicit list of addresses | Requires known, stable addresses |
| X509Token | Asymmetric-key token backed by a keystore | Suitable for production |
| Krb5Token | Kerberos-based token validated against a KDC | Suitable for Kerberos-backed infrastructures |
warning

Simple hash-based token schemes are vulnerable to replay attacks on any sniffable network. The hashed bytes travel in the clear, so an attacker who captures a single JOIN_REQUEST can retransmit it verbatim to gain cluster membership, with no need to reverse the hash. For production environments, always combine <AUTH> with an <ENCRYPT> protocol (SYM_ENCRYPT or ASYM_ENCRYPT) so that the auth token and all cluster traffic are encrypted on the wire. Prefer a keystore-backed token such as X509Token.

For this same reason, JGroups removed both MD5Token and SimpleToken in 5.0, and Atoti 6.1 removed AtotiAuthToken from distribution-common. Neither library now ships a default AuthToken implementation.

Writing a custom AuthToken

To create a custom token class in Java, follow these steps:

  1. Extend org.jgroups.auth.AuthToken.
  2. Declare configurable properties using @Property(name = "…") and setters. The name value must match the XML attribute name after the auth_token. prefix.
  3. Implement authenticate(AuthToken token, Message msg), returning true only when the incoming token proves shared identity with the local node.
  4. Implement writeTo(DataOutput out) and readFrom(DataInput in) to serialize the token across the wire.
  5. Implement size() to return the serialized byte length of the token.
  6. Implement getName() returning getClass().getName() (required by JGroups).
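The steps above can be sketched as follows. The class name SharedSecretToken and its secret property are hypothetical, and comparing a plain shared string is only an illustration of the mechanics, not a production-grade scheme: as the warning above explains, such a token is replayable unless the stack also encrypts traffic.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.jgroups.Message;
import org.jgroups.annotations.Property;
import org.jgroups.auth.AuthToken;
import org.jgroups.util.Util;

/**
 * Hypothetical AuthToken that compares a shared secret string.
 * Configured in the stack XML as:
 *   <AUTH auth_class="com.example.SharedSecretToken"
 *         auth_token.secret="..."/>
 */
public class SharedSecretToken extends AuthToken {

    // The name must match the XML attribute after the auth_token. prefix
    @Property(name = "secret")
    protected String secret;

    public void setSecret(String secret) {
        this.secret = secret;
    }

    @Override
    public String getName() {
        return getClass().getName(); // required by JGroups
    }

    @Override
    public boolean authenticate(AuthToken token, Message msg) {
        // Accept the JOIN_REQUEST only if the incoming token carries the same secret
        return token instanceof SharedSecretToken
                && secret != null
                && secret.equals(((SharedSecretToken) token).secret);
    }

    @Override
    public void writeTo(DataOutput out) throws IOException {
        Util.writeString(secret, out); // serialize the token for the wire
    }

    @Override
    public void readFrom(DataInput in) throws IOException {
        secret = Util.readString(in);
    }

    @Override
    public int size() {
        return Util.size(secret); // serialized byte length
    }
}
```

The @Property name ("secret") is what makes the auth_token.secret XML attribute bind to this field.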

Encrypting cluster traffic

The AUTH protocol controls who may join. It does not protect the content of cluster messages, and it does not protect the auth token itself from replay if the transport is unencrypted. Adding an <ENCRYPT> protocol to the stack addresses both problems: the auth exchange is encrypted on the wire, and all subsequent cluster messages are encrypted.

JGroups provides two ENCRYPT variants:

  • SYM_ENCRYPT: all cluster members share the same symmetric key, loaded from a JCEKS keystore. The keystore must be distributed to each node by a secure means before startup.
  • ASYM_ENCRYPT: the cluster coordinator generates a symmetric session key and distributes it to joining nodes using asymmetric encryption. No pre-shared keystore is needed, but joining members must accept the coordinator's key, which requires a careful trust policy.

Protocol stack example with SYM_ENCRYPT and AUTH

note

JGroups XML files are written bottom-up; see Protocol stack basics for the full explanation. Whenever this page says a protocol is "below" another in the stack, it means the protocol appears earlier in the XML file.

Stack position of the two security protocols, per JGroups recommendation:

  • AUTH sits directly below pbcast.GMS.
  • SYM_ENCRYPT (and equivalently ASYM_ENCRYPT) sits directly below pbcast.NAKACK2, so that retransmission buffers hold plaintext and each retransmission is re-encrypted by SYM_ENCRYPT.

In the XML, <SYM_ENCRYPT> is declared immediately before <pbcast.NAKACK2>, and <AUTH> is declared immediately before <pbcast.GMS>. Even though SYM_ENCRYPT sits several layers below AUTH in the stack, the auth token is still encrypted on the wire: on outgoing messages AUTH adds its header, the reliability protocols process the message, and SYM_ENCRYPT then wraps the whole frame; on incoming messages, SYM_ENCRYPT decrypts first, before any higher protocol sees the plaintext.

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">

    <TCP bind_port="16484" bind_addr="127.0.0.1"/>
    <TCPPING initial_hosts="${jgroups.tcpping.initial_hosts:127.0.0.1[16484]}"
             async_discovery="true"
             port_range="1"/>

    <!-- Encrypt all cluster traffic using a shared AES key from a JCEKS keystore.
         Placed directly under NAKACK2 so retransmission buffers hold plaintext. -->
    <SYM_ENCRYPT keystore_name="encrypt-keystore-name.jceks"
                 store_password="keystore-password"
                 alias="cluster-key"
                 sym_algorithm="AES"/>
    <pbcast.NAKACK2/>

    <!-- Admit only nodes that can encrypt the shared auth_value with the cluster certificate -->
    <AUTH auth_class="org.jgroups.auth.X509Token"
          auth_token.auth_value="change-me-to-a-strong-secret"
          auth_token.keystore_type="JKS"
          auth_token.keystore_path="auth-keystore-path.jks"
          auth_token.keystore_password="auth-keystore-password"
          auth_token.cert_alias="cluster-cert"
          auth_token.cipher_type="RSA"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"/>
</config>

The snippet above keeps only the protocols needed to anchor the AUTH and ENCRYPT placement rules: the transport, discovery, NAKACK2, and GMS. A real deployment also needs reliable unicast (UNICAST3), stability (STABLE), merge recovery (MERGE3), view-change synchronization (BARRIER), failure detection (FD_SOCK, FD_HOST, VERIFY_SUSPECT, …), and possibly flow control and fragmentation (UFC, MFC, FRAG4). Refer to the JGroups manual for the full list and their recommended placements.

Using ASYM_ENCRYPT instead

ASYM_ENCRYPT requires no pre-shared keystore. Replace SYM_ENCRYPT with:

<ASYM_ENCRYPT asym_keylength="2048"
              asym_algorithm="RSA"
              change_key_on_leave="true"/>

The JGroups ENCRYPT reference is at http://www.jgroups.org/manual5/#ENCRYPT.

User authentication

User authentication is unchanged in a distributed deployment: each node runs the same Spring Security flow it would in a standalone setup. The only cross-node requirement is that every node resolves the same username to the same identity, typically by pointing all nodes at a shared directory service (LDAP, Active Directory) or by deploying an identical IUserDetailsService configuration on each node.
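As an illustration of the "identical configuration on each node" option, a minimal in-memory setup using the standard Spring Security API might look like the following, deployed unchanged on every node. This is a generic Spring Security sketch; how it is wired into Atoti's IUserDetailsService is deployment-specific.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;

@Configuration
public class SharedUsersConfig {

    /**
     * The same users, passwords, and roles must be declared on every node so
     * that a given username resolves to the same identity cluster-wide.
     */
    @Bean
    public UserDetailsService userDetailsService() {
        return new InMemoryUserDetailsManager(
                User.withUsername("admin")
                    .password("{noop}change-me") // {noop} = plain text, dev only
                    .roles("ADMIN")
                    .build());
    }
}
```

A shared directory service (LDAP, Active Directory) achieves the same consistency without duplicating user definitions.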