About multi-node architecture

This topic describes the Transfer CFT multi-node feature, which provides horizontal scalability for increased transfer flow capacity.

A node is a Transfer CFT runtime running on a host. A set of nodes is called a Transfer CFT cluster; in an IBM i environment, the cluster can run on a single host.


Transfer CFT in multi-node architecture requires:

  • One license key per node, with the cluster option enabled.
  • A node manager, provided by Transfer CFT, that monitors every node and checks that it is active. If a node goes down, the node manager detects the inactivity and takes over that node's activity.
  • A shared file system, so that multiple nodes can access the same files using the same configuration. The shared disk stores the communication media, configuration, partner definitions, data flows, and internal datafiles. The shared data includes parameter files and configuration settings.

Service descriptions


Copilot runs two services: the node manager and the UI server.

Node manager

The node manager monitors all nodes that are part of the Transfer CFT multi-node environment. The monitoring mechanism is based on locks provided by the resource queuing system.

Typically, when a node is not running correctly, the node manager tries to start it locally.
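The lock-based monitoring described above can be illustrated with a minimal sketch. This is a hypothetical model, not Transfer CFT's actual resource queuing system: it assumes each running node holds an exclusive lock on its own lock file on the shared file system, so the node manager can detect a dead node by successfully acquiring that node's lock.

```python
import fcntl
import os

def node_is_alive(lock_path: str) -> bool:
    """Return True if some process currently holds the node's lock file.

    A live node keeps an exclusive lock on its lock file; if the node dies,
    the operating system releases the lock, and a non-blocking probe succeeds.
    """
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        # Non-blocking attempt: success means no one holds the lock,
        # i.e. the node is down; BlockingIOError means the node is alive.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.flock(fd, fcntl.LOCK_UN)
        return False
    except BlockingIOError:
        return True
    finally:
        os.close(fd)
```

A node manager loop would call `node_is_alive` periodically for each node's lock file and trigger a local restart when the probe reports the node as down.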

CFTCOM dispatcher

For outgoing calls, you can set the CFTCOM dispatcher either to use round-robin load balancing or to define a one-to-one relationship between a partner and a node. A one-to-one relationship ensures that, for any given partner, the transfers are kept in the correct chronological order. In the unified configuration, set the variable to one of the following values:

  • Round robin: round_robin (default)
  • One-to-one: node_affinity
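The difference between the two policies can be sketched as follows. The policy names `round_robin` and `node_affinity` come from the product, but the class below is an illustrative model, not the actual CFTCOM dispatcher implementation.

```python
import itertools
from zlib import crc32

class Dispatcher:
    """Hypothetical model of the two CFTCOM dispatch policies."""

    def __init__(self, nodes, policy="round_robin"):
        self.nodes = list(nodes)
        self.policy = policy
        self._cycle = itertools.cycle(self.nodes)

    def pick_node(self, partner: str):
        if self.policy == "node_affinity":
            # One-to-one: the same partner always maps to the same node,
            # preserving the chronological order of that partner's transfers.
            return self.nodes[crc32(partner.encode()) % len(self.nodes)]
        # round_robin (default): spread outgoing calls evenly across nodes.
        return next(self._cycle)
```

With `node_affinity`, repeated calls for the same partner name always return the same node; with `round_robin`, successive calls walk through the node list regardless of partner.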

Transfer CFT files

All runtime data are stored on a shared file system.

The following internal datafiles are shared between nodes:

  • Parameter internal datafile (CFTPARM)
  • Partners internal datafile (CFTPART)
  • PKI base (CFTPKI)
  • Main communication media file (CFTCOM)
  • Unified Configuration (UCONF)

The following internal datafiles are node-specific, and the filename is flagged with the node identifier:

  • Catalog (..CATALOG.N00,..CATALOG.N01,...)
  • Communication media file (..COM.N00, ..COM.N01,...)
  • Log files (..LOG1.N00, ..LOG2.N00, ..LOG1.N01, ..LOG2.N01, ...)
  • Account files (..ACCNT1.N00, ..ACCNT2.N00, ..ACCNT1.N01, ..ACCNT2.N01, ...)
Note When using multi-node architecture, the allocated space in the catalog file is 10% greater than when working in a standalone Transfer CFT.
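The naming scheme above can be summarized in a short sketch. The file prefixes are illustrative placeholders; actual paths and prefixes depend on your installation.

```python
# Shared datafiles keep a single name; node-specific datafiles carry
# an .Nxx suffix built from the two-digit node identifier.

SHARED = ["CFTPARM", "CFTPART", "CFTPKI", "CFTCOM", "UCONF"]

def node_files(node_id: int) -> list[str]:
    """Return the node-specific datafile names for a given node identifier."""
    suffix = f"N{node_id:02d}"
    return [
        f"CATALOG.{suffix}",
        f"COM.{suffix}",
        f"LOG1.{suffix}", f"LOG2.{suffix}",
        f"ACCNT1.{suffix}", f"ACCNT2.{suffix}",
    ]
```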


Node recovery

If the node manager detects a failure, the node is restarted and completes all transfer requests that were active when the failure occurred.

Transfer recovery

When a node receives an incoming request (a transfer receive, restart, acknowledgement, or negative acknowledgement) and the corresponding transfer record cannot be found in its own catalog, the node requests the transfer record from the other nodes through the CFTPRX task.

Possible scenarios include:

  • If another node has the catalog record, the node retrieves it and performs the transfer.
  • If no nodes have the record, an error is returned.
  • If any one of the nodes does not respond, the requesting node continues to retry all nodes until the session's timeout. Once the timeout is reached, the node ends the connection. After this, the remote partner retries the request according to its retry parameters.
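The three scenarios above can be modeled with a short sketch. This is a hypothetical simplification of the recovery lookup, where `query_node` stands in for the CFTPRX exchange: it returns the record, returns `None` when the queried node answered "not found", or raises `TimeoutError` when the node did not respond.

```python
import time

def find_record(transfer_id, nodes, query_node, timeout=30.0):
    """Look up a missing catalog record on the other nodes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        all_answered = True
        for node in nodes:
            try:
                record = query_node(node, transfer_id)
            except TimeoutError:
                all_answered = False        # node did not respond; keep retrying
                continue
            if record is not None:
                return record               # another node has the record
        if all_answered:
            raise LookupError(transfer_id)  # no node has the record: error
        time.sleep(0.5)
    return None  # session timeout: end the connection; the partner retries
```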

In the case of node failure during the transfer recovery process, the catalog record is locked in both catalogs until both nodes are available for recovery.


Additionally, note the following restrictions:

  • There is only one communication medium, and it must be of type FILE.
  • The only supported network protocol is TCP/IP.
  • Console interface commands apply only to one specific node.
  • Bandwidth control is calculated per node.
  • Accounting statistics are generated per node.
  • Duplicate file detection is not supported.
