Using NFSv4 as a shared file system

The recommendations in this topic apply to a Transfer CFT multi-node, multi-host architecture based on an NFSv4 shared file system. To implement a Transfer CFT active/active architecture using NFS, version 4 is mandatory, because NFSv4 can detect host failures (unlike NFSv3). Because host failure detection is possible, Transfer CFT can restart a failed host's nodes on another host when necessary.

To implement active/active Transfer CFT you must use NFSv4 for the Transfer CFT runtime directory, which contains internal data such as the catalog, log, communication file, etc. Other versions of NFS are not supported for the runtime directory. For file exchanges, you can use either NFSv4 or v3. NFSv3 is not described in this document.

Define NFS as the shared file system

Execute the following command to enable the Transfer CFT internal data files to reside on an NFSv4 file system.
Enter the nfs value in lower case:

CFTUTIL uconfset id=cft.multi_node.shared.filesystem.type, value=nfs
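You can then display the value to confirm the setting. This sketch uses CFTUTIL's LISTUCONF command (assumed available in your Transfer CFT version; check your CFTUTIL documentation):

CFTUTIL listuconf id=cft.multi_node.shared.filesystem.type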

Required NFSv4 mount options

Define the NFS version

If version 4 is not your NFS subsystem's default, you must specify version 4 when defining the mount options. Depending on your OS, use either the vers or nfsvers option.
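For example, on a Linux client the mount command might look like the following sketch, where nfsserver and the paths are placeholders for your environment:

mount -t nfs -o vers=4 nfsserver:/export/cft_runtime /opt/axway/cft/runtime

On platforms that use the nfsvers option instead, specify nfsvers=4.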

Set the hard and nointr options

Mount NFSv4 using the hard and nointr options. The intr mount option should not be available for NFSv4, but if you are in doubt, explicitly specify the nointr option.

Define file locking

Because Transfer CFT uses POSIX file locking services to synchronize shared files, make sure that the NFS clients report these locks to the NFS server. Depending on the NFS client, the corresponding option to tune may be called local_lock, llock, or nolock. Do not enable the local locking option.
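On a Linux client you can verify the effective locking option by inspecting the mounted file system; nfsstat -m and /proc/mounts are Linux-specific:

nfsstat -m
grep nfs4 /proc/mounts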

Set the cto option

NFS implements a weak data consistency model called "close-to-open consistency", or cto. This means that when a file is closed on a client, all modified data associated with the file is flushed to the server. If your NFS clients allow tuning this behavior, make certain that the cto option is set.

Mount options summary

The following table summarizes the recommended NFSv4 mount options. Note that depending on the OS platform, only one of the three locking options should be available.

Correct option               Incorrect option
vers=4 (or nfsvers=4)        not specified, or a version lower than 4
hard (default)               "soft" specified
nointr (not the default)     "intr" specified
llock not specified          "llock" specified
lock (default)               "nolock" specified
local_lock=none (default)    any other value specified
cto (default)                "nocto" specified
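Putting the recommended options together, a Linux mount command for the Transfer CFT runtime directory might look like the following sketch (the server name and paths are placeholders; on systems that use llock or nolock rather than local_lock, substitute the appropriate option from the table):

mount -t nfs -o vers=4,hard,nointr,local_lock=none,cto nfsserver:/export/cft_runtime /opt/axway/cft/runtime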

Synchronous versus asynchronous option

To improve performance, NFS clients and NFS servers can delay file write operations in order to combine small file IOs into larger file IOs. You can enable this behavior on the NFS clients, NFS servers, or on both, using the async option. The sync option disables this behavior.


On the client side, use the mount command to specify the async/sync option.


The NFS client treats the sync mount option differently than some other file systems. If neither sync nor async is specified (or if async is specified), the NFS client delays sending application writes to the server until any of the following events occur:

  • Memory limitations force reclaiming of system memory resources.
  • Transfer CFT explicitly flushes file data (PeSIT synchronization points, for example).
  • Transfer CFT closes a file.

This means that under normal circumstances, data written by Transfer CFT may not immediately appear on the server that hosts the file.


If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to Transfer CFT. This provides greater data cache coherence among clients, but at a significant cost to performance.


On the server side, use the exports command to specify the async/sync option (NFS server export table).


The async option allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage (the disk drive, for example), even if the client is set to sync. This option usually improves performance, however data may be lost or corrupted in the case of an unclean server restart, such as an NFS server crash.

This possible data corruption is not detectable at the time of occurrence, because the async option instructs the server to lie to the client, telling the client that all data was written to stable storage (regardless of the protocol used).


The sync option enables replies to requests only after the changes have been committed to stable storage.
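On a Linux NFS server, for example, the sync/async choice is made per export in /etc/exports; the export path and client subnet below are placeholders:

/export/cft_runtime   192.168.1.0/24(rw,sync,no_subtree_check)

After editing /etc/exports, apply the change with exportfs -ra.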

Note: For more information on these options, refer to the NFS mount and export options in the UNIX man pages (for example, nfs(5) and exports(5)).

Synchronous / asynchronous option impact

Client   Server   Internal data                   Transferable data             Performance
Sync     Sync     1                               1                             Low
Sync     Async    2 (secure the NFS server)       2 (secure the NFS server)     Medium
Async    Sync     1 (if cft.server.catalog.       1 (when using sync points)    Medium - high
Async    Async    3                               3                             High


  • 1 = Secure
  • 2 = Fairly secure
  • 3 = Not secure
  • Internal data = Transfer CFT runtime files, such as the catalog
  • Transferable data = Files exchanged using Transfer CFT

Tuning NFSv4 locking for node failover

The NFSv4 locking lease period affects the delay Transfer CFT requires to detect a node failure. The default value for this parameter is typically 90 seconds. On systems where this parameter is tunable, configuring a shorter value can significantly reduce the Transfer CFT node failover time.
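On a Linux NFS server, for instance, the NFSv4 lease time can be read and set through procfs; the value must be changed while the NFS service is stopped, and 30 seconds here is only an illustrative value:

cat /proc/fs/nfsd/nfsv4leasetime
echo 30 > /proc/fs/nfsd/nfsv4leasetime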

Troubleshoot an NFS lock daemon issue with no error message

When transferring files that are located on a Network File System, an NFS locking issue (lockd) may occur if the correct port is not open on the firewall.

Symptom:

  • Flow transfers hang in phase T and phasestep C, with a timeout but no error message.

Remedy:

  • Check that the correct port for the lockd service is open on the firewall (default=4045).
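On Linux and UNIX systems you can confirm which port the lock manager actually registered by querying the portmapper with rpcinfo, where nfsserver is a placeholder for your NFS server host:

rpcinfo -p nfsserver | grep nlockmgr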

Related Links

  • Multi-node unified configuration parameters
  • Tuning the database cache