Replicate API Portal nodes

This section describes how to configure the API Portal nodes for high availability.

Prerequisites

  1. Host the Joomla! files on shared storage. To avoid further editing of the Apache configuration, it is recommended to keep the default path:
    mv /opt/axway/apiportal/htdoc /opt/axway/apiportal/htdoc_temp
    mkdir /opt/axway/apiportal/htdoc
    mount -t nfs shared.storage.server.int:/mnt/myapiportal /opt/axway/apiportal/htdoc
    shopt -s dotglob
    mv /opt/axway/apiportal/htdoc_temp/* /opt/axway/apiportal/htdoc/
    shopt -u dotglob

    You can adjust the NFS options or make any other changes you need.
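
    For example, if you want the mount to persist across reboots, you can add an entry like the following to /etc/fstab (the server, share path, and options shown here are illustrative only):

    shared.storage.server.int:/mnt/myapiportal /opt/axway/apiportal/htdoc nfs defaults 0 0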

    Note   For NFS, make sure that the UID of the wwwrun user and the GID of the www group that the Apache server on the API Portal uses are present on the shared storage and have the same IDs as on the API Portal nodes.
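
    For example, the following commands print the numeric IDs to compare on the API Portal nodes and on the storage server (wwwrun and www are the default names used here; substitute your own Apache user and group if they differ):

    id -u wwwrun
    getent group www
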
  2. Back up the current Joomla! database with mysqldump. If neither API Portal installation has a custom setup, you can back up from either node. Use the following command, and copy the resulting file to both machines for later use:
    mysqldump -u root --opt joomla > /tmp/api_portal_joomla_db.sql
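
    For example, you can copy the dump to the other node with scp (replace the placeholder with the host name or IP address of the other API Portal node):

    scp /tmp/api_portal_joomla_db.sql root@<other node>:/tmp/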

  3. Create a directory on both API Portal nodes for the binary database log file. This log is used for the replication. To create the directory and set the permissions, enter the following commands:
    mkdir /var/log/mysql/
    chown -R mysql /var/log/mysql/

Configure the nodes for replication

  1. Go to the mysql config file on the primary node, add the following settings, and comment out or remove the bind-address line:
    server-id = 1
    log_bin = /var/log/mysql/mysql-bin.log
    binlog_do_db = joomla
    # bind-address = 127.0.0.1

    For security purposes, the database is bound to localhost (127.0.0.1) by default. If the bind-address is not specified, the database server (mysqld) listens on all interfaces (for example, * or 0.0.0.0). This is required for the replication interconnection between the two nodes.
  2. Restart the database server on the primary node:
    /etc/init.d/mysqld restart
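
    After the restart, you can verify that mysqld is no longer bound only to 127.0.0.1, for example with ss (port 3306 is the MySQL default; adjust it if you use a different port):

    ss -ltn | grep 3306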

  3. Create a database user and password on the primary node:
    mysql -u root << EOF
    create user '<user name>'@'%' identified by '<password>';
    grant replication slave on *.* to '<user name>'@'%';
    flush privileges;
    EOF

    For example:

    mysql -u root << EOF
    create user 'replicator'@'%' identified by 'SECRET123'; 
    grant replication slave on *.* to 'replicator'@'%';
    flush privileges;
    EOF
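
    If you want to make sure that the new user can log in over the network, you can, for example, connect with it from the secondary node (10.0.1.6 stands for the primary node IP address, as in the later examples):

    mysql -u replicator -p -h 10.0.1.6 -e "select 1;"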

  4. Go to the mysql config file on the secondary node, and repeat the steps. Note that the server-id on the secondary node is different (server-id = 2).
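
    For reference, the mysql config file on the secondary node would contain settings like the following; only the server-id value differs from the primary node:

    server-id = 2
    log_bin = /var/log/mysql/mysql-bin.log
    binlog_do_db = joomla
    # bind-address = 127.0.0.1
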
    Tip   When creating the database user on the secondary node, use a different user name and password than on the primary node. If you later need to manually search through the logs to identify an issue, this makes it easier to see which node is affected.

Replicate the nodes

  1. Stop the Apache server on both nodes to ensure no application is writing to the database:
    /etc/init.d/apache2 stop

  2. Check that you have the same database on both nodes.
  3. Enter the following command to import the SQL dump file:
    mysql -u root joomla < /tmp/api_portal_joomla_db.sql
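
    To confirm that both nodes now contain the same set of tables, you can, for example, compare the table count of the joomla database on each node:

    mysql -u root -N -e "select count(*) from information_schema.tables where table_schema = 'joomla';"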

  4. Go to the mysql command line on the primary node, enter the command show master status, and take a note of the name and position of the master log file:
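    The output shown below is only illustrative; the file name and position reported on your system will differ:

    mysql -u root -e "show master status;"
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000003 |      120 | joomla       |                  |
    +------------------+----------+--------------+------------------+
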
    The position and file values may vary. In this example, the master log file is mysql-bin.000003 and the position is 120. These details are used to feed the replication engine on the secondary node.
  5. Go to the secondary node, and enter the following command to set up the replication from the primary node:
    mysql -u root << EOF
    stop slave;
    CHANGE MASTER TO MASTER_HOST = '<primary node IP address>', MASTER_USER = '<user name>', MASTER_PASSWORD = '<password>', MASTER_LOG_FILE = '<master log file>', MASTER_LOG_POS = <position>;
    start slave;
    EOF

    The user name and password are the ones you created on the primary node. For example:

    mysql -u root << EOF
    stop slave;
    CHANGE MASTER TO MASTER_HOST = '10.0.1.6', MASTER_USER = 'replicator', MASTER_PASSWORD = 'SECRET123', MASTER_LOG_FILE = 'mysql-bin.000003', MASTER_LOG_POS = 120;
    start slave;
    EOF

  6. Go to the primary node, and enter the following command to complete the two-way replication:
    mysql -u root << EOF
    stop slave;
    CHANGE MASTER TO MASTER_HOST = '<secondary node IP address>', MASTER_USER = '<user name>', MASTER_PASSWORD = '<password>', MASTER_LOG_FILE = '<master log file>', MASTER_LOG_POS = <position>;
    start slave;
    EOF

    The user name and password are the ones you created on the secondary node. For example:

    mysql -u root << EOF
    stop slave;
    CHANGE MASTER TO MASTER_HOST = '10.0.1.5', MASTER_USER = 'replicator', MASTER_PASSWORD = 'SECRET123', MASTER_LOG_FILE = 'mysql-bin.000001', MASTER_LOG_POS = 423;
    start slave;
    EOF
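
    After both commands have run, you can check the replication threads on each node; Slave_IO_Running and Slave_SQL_Running should both report Yes:

    mysql -u root -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"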

Test the replication

You can test that the replication between the nodes works before you continue configuring the HA deployment.

  1. Create a dummy table on the primary node:
    mysql -u root << EOF
    use joomla;
    create table joomla.justatest (id varchar(7));
    EOF

  2. Go to the secondary node and check if the table exists.
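
    For example, the following command prints the table name if the replication has created it on the secondary node:

    mysql -u root -e "show tables in joomla like 'justatest';"
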
  3. Delete the table on the secondary node:
    mysql -u root << EOF
    use joomla;
    describe justatest;
    drop table justatest;
    EOF

  4. Check the primary node to see if the table disappeared there.

Secure the root user

After you finish testing the configuration, complete the final steps:

  1. Enter the following to start Apache:
    /etc/init.d/apache2 start

  2. Because the database is now listening on all interfaces, set a password for the root user. On the mysql command line, enter the following:
    UPDATE mysql.user SET Password=PASSWORD('<your secure password>') WHERE User='root';
    FLUSH PRIVILEGES;
