Install a HA cluster

To install a HA cluster, you must first install a standard primary/replica cluster. For more information, see Install a Primary/Replica cluster. Once you're done, follow the Activate HA/DR on existing nodes guidelines to turn your primary/replica cluster into a HA cluster.

About HA clusters

Install Axway Decision Insight (DI) with HA main and backup nodes when you want high availability or disaster recovery on your installation.

The HA main node acts as a primary node. It supports distributed computing over several replica nodes. See Install a Primary/Replica cluster for more details on this feature.

The HA backup node is dedicated to data replication. It cannot be used as a replica for distributed computing.

A HA cluster is composed of:

  • One HA main node – Absorbs data and executes pre-computings and computings. 
  • One HA backup node – Backs up data.
  • Optionally, one or many replica nodes (RN) – Execute computings and respond to end-users' requests to display dashboards.

A node refers to a complete installed instance of DI on a server.

Important: All nodes within a DI cluster must use the same version of DI.

Topology


For more details, see Topology for HA cluster.
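
As a rough orientation, and assuming a reverse proxy in front of the cluster as in the HAProxy example later in this page, a typical deployment looks like this (a sketch, not a reference diagram):

HA topology sketch
                     end users
                         |
            reverse proxy / load balancer
             /           |           \
      replica node  replica node  HA main node <--replication--> HA backup node

The replica nodes and the HA main node serve queries; only the HA main node absorbs data, and the HA backup node only replicates it.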

Install the cluster

Hardware prerequisites

Make sure you have checked the Cluster prerequisites before you start configuring your servers.

When setting up a HA architecture, you must install an instance of DI on each node. Installation depends on the topology you want to deploy.

You must comply with the following guidelines, regardless of the topology:

  • The conf/jvm.conf configuration file must be identical on all nodes.
  • In the conf/platform.properties configuration file of all nodes, the only parameters that can be different are the ones related to:
    • ports
    • hosts

Parameter                      Main node value                          Backup node value
com.systar.electron.type       HA                                       HA
com.systar.electron.host       Host / IP of the local host (main)       Host / IP of the local host (backup)
com.systar.electron.ha.host    Host / IP of the remote host (backup)    Host / IP of the remote host (main)
com.systar.electron.ha.token   Password of your choice                  Password of your choice
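
For illustration, here is a minimal sketch of these entries, assuming the main node runs at 10.0.0.10 and the backup node at 10.0.0.11 (the hosts and the token value are placeholders to adapt to your environment):

platform.properties – main node
com.systar.electron.type=HA
# Host / IP of the local host (this node)
com.systar.electron.host=10.0.0.10
# Host / IP of the remote host (the backup node)
com.systar.electron.ha.host=10.0.0.11
# Password of your choice (placeholder)
com.systar.electron.ha.token=changeMe

platform.properties – backup node
com.systar.electron.type=HA
# Host / IP of the local host (this node)
com.systar.electron.host=10.0.0.11
# Host / IP of the remote host (the main node)
com.systar.electron.ha.host=10.0.0.10
# Password of your choice (placeholder)
com.systar.electron.ha.token=changeMe

The host parameters are mirrored between the two nodes; the type and the token are set to the same value on both.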

A HA node always starts as a backup. Use the Switch backup to main procedure to activate the main node.

Synchronization status between HA nodes

The HA main node dispatches its most current Transaction Time to the HA backup node, which compares this value with its own most current Transaction Time.

To determine whether the backup is correctly synchronized or lagging behind the main node, a threshold is defined:

Synchronization max lag threshold
# In milliseconds, default value is 60,000ms
com.systar.calcium.ha.maxLag=60000 

To override this setting, change the default value in the conf/platform.properties file of all nodes in your HA cluster.

Get the status of a HA node

HTTP

On each node, a URL is available to monitor the HA status of the node:

Synchronization status URL
http(s)://<Node host/IP address on internode network>/heartbeat/ha

Response

The response value is a JSON message. For example:

Synchronization status response example
{"status": "STARTED", "type": "MAIN"}

where:

  • status: the lifecycle status of the node. The possible values are STARTING, STARTED, or STOPPING.
  • type: the type of HA node. The possible values are:
    • MAIN if the node is installed as HA with the marker file <install>/var/data/electron/MAIN,
    • BACKUP if the node is installed as HA without the marker file,
    • NONE if the node is not installed as HA.

The HTTP response status is OK (200) if the node is an active HA main (that is, the response value is STARTED + MAIN), and SERVICE_UNAVAILABLE (503) otherwise.
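
For example, you could check a node with curl (the host and port are placeholders for the node's address on the internode network):

Checking the HA status with curl
# Returns 200 with {"status": "STARTED", "type": "MAIN"} on the active HA main node,
# and 503 on any other node
curl -i http://10.0.0.10:8080/heartbeat/ha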

Example

We use HAProxy as a reverse proxy and load balancer, as described in Topology for HA cluster.

The URIs that can be redirected to the replica nodes are the queries that only retrieve data from the server (no absorption).

In this example, the following URIs are redirected to the replica nodes:

  • pagelet execution queries: /rest/ui/dashboards/dashboard-.*/pagelets/pagelet-.*

  • parameters queries: /rest/ui/dashboards/dashboard-.*/pagelets/parameters-.*

Other queries are redirected to the active HA main node.

Configuration file

haproxy.conf
global
    maxconn 2000

defaults
    mode    http
    retries 3
    option redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000

# Proxy URL to use for FLEX replicas configuration
listen flex_proxy
    bind 10.0.0.1:1443
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server node1 10.0.0.12:8080 track queries/replica1
    server node2 10.0.0.13:8080 track queries/replica2
 
# Define a group of servers for the UI query requests
frontend front
    bind 10.0.0.1:443
    mode http
    # Redirect all parameters and pagelet queries to the Replica servers
    acl url_api path_reg ^/hvp/rest/ui/dashboards/dashboard-.*/pagelets/(pagelet|parameters)-
    use_backend queries if url_api
    # Other queries are redirected to primary group
    default_backend primary

# Accept Replica node if heartbeat declares it is SYNCHRONIZED
backend queries
    mode http
    option httpchk GET /hvp/heartbeat/synchronizationStatus/status HTTP/1.1\r\nHost:10.0.0.1
    http-check expect string "SYNCHRONIZED"
    server replica1 10.0.0.12:8080 check
    server replica2 10.0.0.13:8080 check
    server ha1 10.0.0.10:8080 track primary/ha1
    server ha2 10.0.0.11:8080 track primary/ha2
 
# Accept HA node if heartbeat declares it is HA MAIN (HTTP reply 200)
backend primary
    mode http
    option httpchk GET /hvp/heartbeat/ha HTTP/1.1\r\nHost:10.0.0.1
    server ha1 10.0.0.10:8080 check
    server ha2 10.0.0.11:8080 check
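
Before reloading HAProxy, you can check the syntax of the configuration file (the path is a placeholder):

Validate the configuration
haproxy -c -f /etc/haproxy/haproxy.conf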