Topologies for Flex-based UI

Introduction

You can use three main topologies when installing a primary/replica cluster. Each topology addresses different needs. To switch from one topology to another, you must update the settings for all nodes.

  1. Native topology – The reference deployment topology; uses internal load balancing, no additional software needed.
  2. Dedicated load balancer – Uses a third-party load balancer (software or hardware) to dispatch client requests.
  3. Dedicated proxy – Runs behind a proxy for higher security; uses internal load balancing.

Install the cluster

Native topology

Description

With this topology, the client browser randomly selects one of the available RNs. If that RN becomes unavailable, the client browser automatically selects another RN and falls back to the PN if no RN is available.


The IP addresses shown in the diagram are examples only; you can use any network as long as the IP addresses are different.

Limitations

  • All nodes must use either HTTP or HTTPS; the two cannot be mixed.
  • All nodes must have an empty context root.

Installation

Install each node (Install a node) and define the following settings, either:

  • through the installer, or
  • manually in the conf/platform.properties file.

Primary node

platform.properties
com.systar.electron.type=PRIMARY
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.gluon.clusterId=<Generated cluster id, like 00000007-001-000>

In this topology, an empty context root is required. Check that you have the following parameter configured:

platform.properties
com.systar.boson.http.contextRoot=/

To disable the distributed computing feature on the primary node (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.primaryComputingEnable=false

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key
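
Taken together, a complete primary node configuration for this topology might look like the following sketch. The host, cluster id, and key path are the example values used above; adjust them to your environment.

platform.properties
# Example primary node configuration (native topology) - values are illustrative
com.systar.electron.type=PRIMARY
com.systar.electron.host=172.16.5.10
com.systar.gluon.clusterId=00000007-001-000
com.systar.boson.http.contextRoot=/
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key
# Optional: uncomment to disable distributed computing on the primary node
# com.systar.krypton.distributedcomputing.primaryComputingEnable=false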

Replica node

platform.properties
com.systar.electron.type=REPLICA
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.electron.localhost=<Replica host/IP address on internode network, eg: 172.16.5.1X>
com.systar.gluon.clusterId=<Primary cluster id, like 00000007-001-000>

In this topology, an empty context root is required, and you must configure the proxy URL with the replica's interface on the default network. Ensure that you have the following parameters configured:

platform.properties
com.systar.boson.http.contextRoot=/
com.systar.boson.http.proxyUrl=<Replica URL on default network, eg: http://10.0.0.100>

To disable computing on a replica node (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.replicaComputingEnable=false

To disable distributed computing completely, this parameter must be set to false on all replicas.

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=<Copy of primary encryption.key>
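
Similarly, a complete replica node configuration for this topology might look like the following sketch; 172.16.5.11 and http://10.0.0.100 are placeholder addresses for this replica on the internode and default networks.

platform.properties
# Example replica node configuration (native topology) - values are illustrative
com.systar.electron.type=REPLICA
com.systar.electron.host=172.16.5.10
com.systar.electron.localhost=172.16.5.11
com.systar.gluon.clusterId=00000007-001-000
com.systar.boson.http.contextRoot=/
com.systar.boson.http.proxyUrl=http://10.0.0.100
# Path to a copy of the primary node's encryption.key
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key
# Optional: uncomment to disable computing on this replica
# com.systar.krypton.distributedcomputing.replicaComputingEnable=false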

Dedicated load balancer topology

Description

In this environment, all the client browser requests sent to the replica nodes are dispatched by a dedicated load balancer (software or hardware). If an RN becomes unavailable, it is the responsibility of the load balancer to transfer the requests to another RN.

The IP addresses shown in the diagram are examples only; you can use any network as long as the IP addresses are different.

Limitations

  • The PN and the load balancer must use either HTTP or HTTPS; the two cannot be mixed.
  • The PN and the load balancer must not have any context root.
  • Any new RN must be declared in the load balancer.

Installation

Install each node (Install a node) and define the following settings, either:

  • through the installer, or
  • manually in the conf/platform.properties file.


Primary node

platform.properties
com.systar.electron.type=PRIMARY
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.gluon.clusterId=<Generated cluster id, like 00000007-001-000>

In this topology, an empty context root is required. Ensure that you have the following parameter configured:

platform.properties
com.systar.boson.http.contextRoot=/

To disable the distributed computing feature on the PN (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.primaryComputingEnable=false

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key

Replica node

platform.properties
com.systar.electron.type=REPLICA
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.electron.localhost=<Replica host/IP address on internode network, eg: 172.16.5.1X>
com.systar.gluon.clusterId=<Primary cluster id, like 00000007-001-000>

In this topology, an empty context root is required, and you must configure the load balancer URL as the proxy URL on the default network:

platform.properties
com.systar.boson.http.contextRoot=/
com.systar.boson.http.proxyUrl=<Load balancer URL on default network, eg: http://10.0.0.100>

To disable computing on an RN (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.replicaComputingEnable=false

To disable distributed computing completely, this parameter must be set to false on all replicas.

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=<Copy of primary encryption.key>
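
The resulting replica configuration is the same as in the native topology, except that the proxy URL now points to the load balancer rather than to the replica itself. A sketch with illustrative values, where http://10.0.0.100 stands in for the load balancer address on the default network:

platform.properties
# Example replica node configuration (dedicated load balancer topology) - values are illustrative
com.systar.electron.type=REPLICA
com.systar.electron.host=172.16.5.10
com.systar.electron.localhost=172.16.5.11
com.systar.gluon.clusterId=00000007-001-000
com.systar.boson.http.contextRoot=/
# URL of the load balancer on the default network
com.systar.boson.http.proxyUrl=http://10.0.0.100
# Path to a copy of the primary node's encryption.key
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key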

Advanced load balancer configuration

You can configure the load balancer to fall back on the PN if no RN is available. On each node, a heartbeat page indicates whether the node is running. For example: http://replica-b.axway.int/heartbeat.

Caution: The heartbeat page of an RN does not indicate whether the node is overloaded.

Dedicated proxy topology

Description

In this environment, the PN and RNs are accessed through a single proxy. As in the native topology, the client browser handles the load balancing between the replica nodes: if an RN becomes unavailable, the client browser automatically selects another RN, then falls back to the PN if no RN is available.

The IP addresses shown in the diagram are examples only; you can use any network as long as the IP addresses are different.

Limitations

  • All nodes must use different context roots.

Installation

Install each node (Install a node) and define the following settings, either:

  • through the installer, or
  • manually in the conf/platform.properties file.


Primary node

platform.properties
com.systar.electron.type=PRIMARY
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.gluon.clusterId=<Generated cluster id, like 00000007-001-000>

In this topology, you must configure the proxy URL on the default network and a context root for the primary node:

platform.properties
com.systar.boson.http.proxyUrl=<Proxy URL on default network, eg: http://10.0.0.1>
com.systar.boson.http.contextRoot=<Context root for the Primary>

To disable the distributed computing feature on the PN (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.primaryComputingEnable=false

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key
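
As a sketch, a primary node configuration for this topology might look as follows. The proxy URL http://10.0.0.1 and the context root /primary are hypothetical examples; each node must use its own context root.

platform.properties
# Example primary node configuration (dedicated proxy topology) - values are illustrative
com.systar.electron.type=PRIMARY
com.systar.electron.host=172.16.5.10
com.systar.gluon.clusterId=00000007-001-000
# URL of the proxy on the default network
com.systar.boson.http.proxyUrl=http://10.0.0.1
# Hypothetical context root dedicated to the primary node
com.systar.boson.http.contextRoot=/primary
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key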

Replica node

platform.properties
com.systar.electron.type=REPLICA
com.systar.electron.host=<Primary host/IP address on internode network, eg: 172.16.5.10>
com.systar.electron.localhost=<Replica host/IP address on internode network, eg: 172.16.5.1X>
com.systar.gluon.clusterId=<Primary cluster id, like 00000007-001-000>

In this topology, you must configure the proxy URL on the default network and a context root for this replica node:

platform.properties
com.systar.boson.http.proxyUrl=<Proxy URL on default network, eg: http://10.0.0.1>
com.systar.boson.http.contextRoot=<Context root for this Replica>

To disable computing on an RN (configurable only in platform.properties):

platform.properties
com.systar.krypton.distributedcomputing.replicaComputingEnable=false

To disable distributed computing completely, this parameter must be set to false on all replicas.

The database encryption key should be shared between nodes:

platform.properties
com.systar.titanium.encryptionKeyFile=<Copy of primary encryption.key>
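
And a matching replica node sketch, again with hypothetical values; note that its context root (/replica-a here) differs from the one used by the primary node:

platform.properties
# Example replica node configuration (dedicated proxy topology) - values are illustrative
com.systar.electron.type=REPLICA
com.systar.electron.host=172.16.5.10
com.systar.electron.localhost=172.16.5.11
com.systar.gluon.clusterId=00000007-001-000
# URL of the proxy on the default network
com.systar.boson.http.proxyUrl=http://10.0.0.1
# Hypothetical context root dedicated to this replica
com.systar.boson.http.contextRoot=/replica-a
# Path to a copy of the primary node's encryption.key
com.systar.titanium.encryptionKeyFile=${com.systar.platform.conf.dir}/encryption.key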
