Upgrading

Upgrading the ArrowCluster on-premise deployment

As of ArrowCloud 1.5.2, you can perform on-premise upgrades.

ArrowCluster does not manage the nodes in a VMware cluster, so to upgrade a cluster you need to prepare a certain number of new nodes with the new image in advance. According to the configuration at https://github.com/appcelerator/arrowcloud-deploy/blob/master/config/hosts_mapping.json, the number is:

  • development: 1
  • production: 4

When the cluster is deployed, a JSON file is generated for later upgrade use; it contains the information needed to connect to the ArrowCloud admin node. The file name follows the admin-<cluster>-<env>-<timestamp>.json format.

INFO[2016-05-11 23:42:45] Please use the following file for managing/upgrading the cluster. 
INFO[2016-05-11 23:42:45] /Users/yjin/appcelerator/arrowcloud-deploy/bin/admin-weitest-development-20160511T234245Z.json


The JSON file should look something like this:

{
  "admin_host": [
    "184.169.207.20",
    "54.177.153.66"
  ],
  "env": "development",
  "hosts_for_upgrade": [],
  "ssh": {
    "pem": "/Users/yjin/dev-auto.pem",
    "port": 22,
    "username": "ubuntu"
  }
}
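Before using the file, you can sanity-check it by loading it and confirming that the fields shown above are present. This helper is an illustrative sketch, not part of the arrowcluster tooling; the field names are taken from the sample file:

```python
import json

# Field names as they appear in the generated admin-<cluster>-<env>-<timestamp>.json
REQUIRED_FIELDS = {"admin_host", "env", "hosts_for_upgrade", "ssh"}
REQUIRED_SSH_FIELDS = {"pem", "port", "username"}

def check_admin_file(path):
    """Load the admin JSON file and verify the expected fields exist."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_FIELDS - cfg.keys()
    if missing:
        raise ValueError("missing fields: " + ", ".join(sorted(missing)))
    ssh_missing = REQUIRED_SSH_FIELDS - cfg["ssh"].keys()
    if ssh_missing:
        raise ValueError("missing ssh fields: " + ", ".join(sorted(ssh_missing)))
    return cfg
```

Running the check before an upgrade catches a truncated or hand-edited file early, rather than partway through the upgrade.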


To upgrade a cluster, follow these steps:

  1. Launch the required number of new nodes, as noted above. The new nodes must be launched from an image of the target version.
  2. Fill in the IPs of the new nodes in the hosts_for_upgrade field of admin-<cluster>-<env>-<timestamp>.json.

    Note: The SSH credentials for the new nodes must be the same as for the old nodes.

    {
      "admin_host": [
        "184.169.207.20",
        "54.177.153.66"
      ],
      "env": "development",
      "hosts_for_upgrade": ["54.219.65.209","54.177.2.200"],
      "ssh": {
        "pem": "/Users/yjin/dev-auto.pem",
        "port": 22,
        "username": "ubuntu"
      }
    }
  3. Run the following command to configure and run the new version in parallel with the old one:

    $ cd ~/Workspace/go/arrowcloud-deploy
    $ ./bin/arrowcluster upgrade -c bin/admin-weitest-development-20160511T234245Z.json


    • Note: The new version and the old version will be running in parallel. To test the new version, point your domain at the IPs of the new nodes. You can do this by adding entries to /etc/hosts on your laptop.

  4. Activate the new version once it has been tested and is working well.

    $ cd ~/Workspace/go/arrowcloud-deploy
    $ ./bin/arrowcluster upgrade activate -c bin/admin-weitest-development-20160511T234245Z.json
  5. Show current master/slave versions:

    $ cd ~/Workspace/go/arrowcloud-deploy
    $ ./bin/arrowcluster upgrade status -c bin/admin-weitest-development-20160511T234245Z.json
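While the two versions run in parallel, one way to point a test machine at the new nodes is to generate /etc/hosts-style entries for them. A minimal sketch — the IPs are the example values used earlier, and the domain is a placeholder to replace with your cluster's actual domain:

```shell
# Print /etc/hosts entries mapping a test domain to the new nodes.
# NEW_NODES and DOMAIN are example values; substitute your own.
NEW_NODES="54.219.65.209 54.177.2.200"
DOMAIN="myapp.example.com"
for ip in $NEW_NODES; do
  printf '%s\t%s\n' "$ip" "$DOMAIN"
done
```

Append the printed lines to /etc/hosts on your laptop (this requires root) and remove them after testing, so normal DNS resolution resumes.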

Note: To finally make the new version live, you will need to associate the public IPs of the old nodes with the new nodes.

Upgrade production cluster

API Runtime Services 1.6.0.sp1 introduced a mandatory requirement of three (3) dedicated swarm manager virtual machines (VMs) in a production cluster. If you have a production deployment on a 1.6.0 on-premise cluster, you will need to add the additional swarm manager hosts by following these steps:

  1. Prepare three additional hosts that meet the same requirements used when initially setting up the cluster. Three swarm managers should be running; the services currently running on the manager nodes will be moved to the three new hosts.
  2. Use the arrowcluster add-host command to provision the three new hosts.
  3. Use ssh to connect to one of the manager nodes and then use the docker node ls -f role=manager command to find all the swarm managers.
  4. Execute the docker node update <node-id> --availability=drain command for each manager node.
  5. Wait about two minutes and then execute the arrowcluster verify postinstall command to verify all services are working well.

Note: You should use arrowcluster 1.6.1 for this procedure.
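Steps 3 and 4 above can be sketched as a small shell loop using the standard Docker swarm CLI (run it on one of the manager nodes after connecting via SSH):

```shell
# Drain every swarm manager node so the service tasks running on them
# are rescheduled onto the other (newly added) hosts.
drain_managers() {
  for node in $(docker node ls -f role=manager -q); do
    docker node update --availability drain "$node"
  done
}
# On a manager node, run:  drain_managers
# Then confirm the managers show AVAILABILITY "Drain" with:
#   docker node ls -f role=manager
```

Draining (rather than removing) the managers keeps the swarm's raft quorum intact while moving workloads off the manager VMs.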

ArrowCluster Tools

Several tools are available as sanity checks before and after installation. See ArrowCluster Tool Commands#Upgrade for more details.

Related Links