Install a Primary/Replica cluster

Hardware prerequisites

Make sure you have checked the Cluster prerequisites before you start configuring your servers.


Install Axway Decision Insight (DI) with primary and replica nodes when you want to balance the load on your installation, either because a large number of concurrent users access the deployment or because you want to distribute a large volume of computations over several nodes. This feature, called distributed computing, is enabled by default.

A primary/replica cluster is composed of:

  • One primary node (PN) – absorbs data and executes pre-computings and computings.
  • One or more replica nodes (RN) – execute computings and respond to end users' requests to display dashboards.

A node refers to a complete installed instance of DI on a server. RNs receive a complete copy of the database from the PN, which gives them autonomy to run queries and do computings.

Important:  All nodes within a DI cluster must use the same version of DI.


If you do not want to use distributed computing, you can disable it:

  • Disable the distributed computing feature on the primary node with the following property: com.systar.krypton.distributedcomputing.primaryComputingEnable=false
  • Disable the computing capabilities for each individual replica with the following property: com.systar.krypton.distributedcomputing.replicaComputingEnable=false
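For instance, assuming these properties are set in each node's conf/ properties file (the exact file name depends on your installation), a setup with distributed computing fully disabled would contain:

```properties
# On the primary node: do not execute computings on the PN
com.systar.krypton.distributedcomputing.primaryComputingEnable=false

# On each replica node: do not execute computings on this RN
com.systar.krypton.distributedcomputing.replicaComputingEnable=false
```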

Install the cluster

When setting up a primary/replica architecture, you must install an instance of DI on each node. Installation depends on the topology you want to deploy.

You must comply with the following guidelines, regardless of the topology:

  • The conf/jvm.conf configuration file must be identical on all nodes.
  • In the conf/ configuration files of all nodes, the only parameters that may differ are those related to:
    • ports
    • primary/replica configuration
    • authentication (required only on the primary node)

Topologies for Flex based UI

The corresponding topologies are described in the page Topologies for Flex based UI.

Topology for HTML5 based UI

The corresponding topology is described in the page Topology for HTML5 based UI.

Synchronization status between primary/replicas

The PN dispatches its most current Transaction Time to all RNs. Each replica compares this value with its own most current Transaction Time.

To determine whether a replica is correctly synchronized or is running late compared to the primary node, a threshold is defined:

Synchronization max lag threshold
com.systar.calcium.maxLag=2000
# In milliseconds, default value is 2000ms

The default value can be overridden in the node's configuration file.

Get the synchronization status of a replica node


On each node, a URL is available to monitor the synchronization status between the primary and the replicas:

Synchronization status url
http(s)://<Replica host/IP address on internode network>/heartbeat/synchronizationStatus


On each node, a JMX metric is available:

SynchronizationStatus attribute through JMX
com.systar:type=com.systar.calcium.impl.distributed.ClusterDataClientMXBean,name=calcium.clusterDataClient // Attribute SynchronizationStatus


The response is a JSON message. For example:

Synchronization status response example
{"status": "SYNCHRONIZED", "lag": 300}


  • status: the status of the RN. If the lag value is greater than the max lag threshold, the status is LATE; otherwise, it is SYNCHRONIZED.
  • lag: the lag, in milliseconds, between the PN and the current RN, obtained by comparing the most current Transaction Time of the PN with that of the RN.
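As an illustration, the threshold rule above can be sketched as follows. This is a hypothetical client-side check, not part of DI, and the parsing is deliberately minimal; a real client would use a JSON library:

```java
// Sketch: classify a replica's synchronization status from the heartbeat
// response, applying the documented rule (lag > maxLag => LATE).
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SyncStatusCheck {
    // Default threshold from com.systar.calcium.maxLag, in milliseconds.
    static final long DEFAULT_MAX_LAG_MS = 2000;

    // Extract the numeric "lag" field from a response such as
    // {"status": "SYNCHRONIZED", "lag": 300}
    static long parseLag(String json) {
        Matcher m = Pattern.compile("\"lag\"\\s*:\\s*(\\d+)").matcher(json);
        if (!m.find()) throw new IllegalArgumentException("no lag field: " + json);
        return Long.parseLong(m.group(1));
    }

    static String classify(long lagMs, long maxLagMs) {
        return lagMs > maxLagMs ? "LATE" : "SYNCHRONIZED";
    }

    public static void main(String[] args) {
        String response = "{\"status\": \"SYNCHRONIZED\", \"lag\": 300}";
        System.out.println(classify(parseLag(response), DEFAULT_MAX_LAG_MS)); // SYNCHRONIZED
        System.out.println(classify(2500, DEFAULT_MAX_LAG_MS));               // LATE
    }
}
```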

Change maxLag value using JMX

A getter/setter is available through JMX to change the MaxLag value:

MaxLag attribute through JMX
com.systar:type=com.systar.calcium.impl.distributed.ClusterDataClientMXBean,name=calcium.clusterDataClient // Attribute MaxLag

Changes are applied in memory only. After the node restarts, the configured com.systar.calcium.maxLag value is used again.
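The get/set pattern can be sketched with the standard JMX API. To keep the example self-contained, it registers a stand-in MBean (an assumption; only the MaxLag attribute is modeled) under the documented ObjectName on the local platform MBeanServer. Against a real DI node you would instead obtain an MBeanServerConnection through JMXConnectorFactory.connect with the node's JMX service URL:

```java
// Sketch: reading and changing the MaxLag attribute via JMX.
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MaxLagJmx {
    // Stand-in for DI's ClusterDataClientMXBean (assumption: MaxLag only).
    public interface ClusterDataClientMXBean {
        long getMaxLag();
        void setMaxLag(long maxLag);
    }

    public static class ClusterDataClient implements ClusterDataClientMXBean {
        private volatile long maxLag = 2000; // default, in milliseconds
        public long getMaxLag() { return maxLag; }
        public void setMaxLag(long maxLag) { this.maxLag = maxLag; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
            "com.systar:type=com.systar.calcium.impl.distributed.ClusterDataClientMXBean,"
            + "name=calcium.clusterDataClient");
        server.registerMBean(new ClusterDataClient(), name);

        // Read the current value, then change it (in memory only):
        System.out.println(server.getAttribute(name, "MaxLag")); // 2000
        server.setAttribute(name, new Attribute("MaxLag", 5000L));
        System.out.println(server.getAttribute(name, "MaxLag")); // 5000
    }
}
```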
