
Upgrade Digital Access component from 6.0.0 and above to 6.0.5 and above

This article is valid for upgrade from Digital Access 6.0.0 and above to 6.0.5 or above 

This article describes how to upgrade the Smart ID Digital Access component from version 6.0.0 and above to 6.0.5 and above, for a single node appliance as well as for a High Availability (HA) or distributed setup.

You only need to perform these steps once to set up the system to use docker and swarm. Once this is done, future upgrades will be much easier.

There are two options, described below, for upgrading from 6.0.0 - 6.0.4 to 6.0.5 and above:

  1. Migrate - This section describes how to upgrade, as well as migrate, the Digital Access instance from the appliance to a new Virtual Machine (VM) by exporting all data/configuration files with the help of the provided script. (Recommended)
  2. Upgrade - This section describes how to upgrade Digital Access in the existing appliance to the newer version.

Download latest updated scripts

If you downloaded the upgrade.tgz file before 29 October 2021, make sure to download it again to get the latest updated scripts.

To upgrade from 6.0.5 to 6.0.6 and above, follow:

Migrate Digital Access

  1. Make sure that the required resources (memory, CPU and hard disk) are available on the new machines.
  2. Install docker and xmlstarlet. The upgrade/migrate script will install them if they are not already installed, but for an offline upgrade (no internet connection on the machine) you must install the latest versions of docker and xmlstarlet before running the migration steps.
  3. The following ports must be open to traffic to and from each Docker host participating in an overlay network (see the example firewall commands after this list):
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  4. For High Availability setup only:
    1. Make sure you have a similar set of machines, since the placement of services will be the same as on the existing setup. For example, if you have two appliances in the High Availability setup, you must have two new machines to migrate the setup to.
    2. Identify the nodes, as the new setup must have an equal number of machines. Create a mapping of machines from the old setup to the new setup.
    3. In docker swarm deployment, one machine is the manager node (node on which the administration service runs) and other nodes are worker nodes.
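
As an illustration, the swarm ports listed in step 3 above could be opened like this on a host that uses firewalld. This is only a sketch; firewalld is an assumption and the commands must be adapted to whichever firewall your machines actually use.

  Open swarm ports (example, assuming firewalld)
  sudo firewall-cmd --permanent --add-port=2377/tcp
  sudo firewall-cmd --permanent --add-port=7946/tcp
  sudo firewall-cmd --permanent --add-port=7946/udp
  sudo firewall-cmd --permanent --add-port=4789/udp
  sudo firewall-cmd --reload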

 Steps on existing appliance/setup
  1. Copy upgrade.tgz to all nodes, and extract the .tgz file.

    Extract
    tar -xzf upgrade.tgz
  2. Run upgrade.sh to migrate the files/configuration from the existing setup. It will create a .tgz file, upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz. Copy this file to the new machine (an example copy command is shown after these steps).

    Run the below commands with --manager on the manager node and --worker on the worker nodes:

    Run upgrade script
    sudo bash upgrade/scripts/upgrade.sh --manager --export_da (on node running administration-service)
    sudo bash upgrade/scripts/upgrade.sh --worker --export_da  (on all other worker nodes)

    After running the commands above, you will be asked: "Do you wish to stop the existing services ?- [y/n]". It is recommended to select y for yes. The same configuration and database settings will be copied over to the new setup, so the new instance may end up connecting to the same database if the database settings and other configurations are not modified before starting the services.

    If you select n for no, the services on the older existing machines will not stop.

    The system will now create a dump of the locally running PostgreSQL database. Note that the database dump will only be created in the admin service node.
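
As an example, the generated migration archive could be copied to the new machine with scp. The user name and target host below are placeholders, not values defined by this guide.

  Copy migration archive to the new machine (example)
  scp upgrade/scripts/da_migrate_6.0.x.xxxxx.tgz <user>@<new machine IP>:/tmp/
  # On the new machine, the file is later placed inside upgrade/scripts/ as described in the steps below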



 Steps on the new setup
  1. Copy upgrade.tgz to all nodes, and extract the .tgz file.

    Extract
    tar -xzf upgrade.tgz
  2. Edit the configuration files as described in the Edit configuration files section below. (Only applicable for High Availability or distributed setup.)

  3. Place the da_migrate_6.0.x.xxxxx.tgz file inside the scripts folder, upgrade/scripts/.
  4. Run the upgrade script to import files/configuration from the older setup and upgrade to the latest version.
    1. Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If the tag is not correct, update it.
    2. Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This will reduce the script run time and also the downtime of the system.

    3. Note: In case of an offline upgrade, load the Digital Access docker images onto the machine (see the image load example after these steps). Also, if you are using the internal postgres, load the postgres:9.6-alpine image on the manager node.
    4. Pull images
      sudo bash upgrade/scripts/pull_image.sh
    5. Run the import command:

      1. On the manager node

        Run import command on manager node
        sudo bash upgrade/scripts/upgrade.sh --manager --import_da   
        (on node running administration-service)
        
      2. To set up Docker swarm, provide your manager node host IP address.
      3. In case you are using an external database, select No to skip postgres installation.

        (Only applicable for High Availability or distributed setup)

        The script prints a token in the output. This token will be used while setting up worker nodes.

        Example:
        docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

        Here the token part is SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq and the ip:port part is 192.168.253.139:2377.

        If you cannot find the token in the upgrade script output on the manager node, get the cluster join token by running this command:

        Get cluster join token (Only applicable for High Availability or distributed setup)
        sudo docker swarm join-token worker
      4. On worker nodes

        Run import command on worker nodes (Only applicable for High Availability or distributed setup)
        sudo bash upgrade/scripts/upgrade.sh --worker --import_da --token <token value> --ip-port <ip:port>  
        (on all other worker nodes)
        
    6. Follow the on-screen messages and complete the upgrade. Check the logs for any errors. During the upgrade, it will extract the da_migrate_6.0.x.xxxxx.tgz file and recreate the same directory structure as on the older setup.
    7. On the manager node, it will install the PostgreSQL database as a docker container and import the database dump from the older machine.
    8. After the scripts are executed, the .tgz file will still be there. Delete it once it is confirmed that the upgrade process has been completed successfully.
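
For the offline case mentioned in the note above, the images can be saved on a machine that has internet access and then loaded on the target nodes. This is a minimal sketch; the .tar file names are arbitrary and the image names/tags must match what is configured in versiontag.yml.

  Load Digital Access images for offline upgrade (example)
  # On a machine with internet access: save the required images to tar files
  sudo docker save -o policy-service.tar nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
  sudo docker save -o postgres.tar postgres:9.6-alpine
  # Copy the tar files to the target node, then load them
  sudo docker load -i policy-service.tar
  # postgres is only needed on the manager node when the internal database is used
  sudo docker load -i postgres.tar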



 Edit configuration files

Navigate to the docker-compose folder (<path to upgrade folder>/docker-compose) and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml

For each service, add one section in the docker-compose.yml file.

Change the values for the following keys:

  • Service name
  • Hostname
  • Constraints

For example, if you want to deploy two policy services on two nodes you will have two configuration blocks as shown in the example below.  

  policy:
    # configure image tag from versiontag.yaml
    hostname: policy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

  policy1:
    # configure image tag from versiontag.yaml
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         #If you need to set constraints using node name
         #- node.hostname ==<node name> 
         # use node label
         [node.labels.da-policy-service1 == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

network.yml

For each service, add network configuration in the network.yml file. If you want to deploy two policy services on two nodes you will have two blocks of configuration as shown below.

  • Service name: The service name must be identical to the one used in docker-compose.yml.

Example:

policy:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

policy1:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

Also, make sure all the listeners that are used for access point load balancing are exposed in network.yml.

versiontag.yml

Add one line for each service in this file.

For example, if you have two policy services with the names policy and policy1, you will have one entry (service name and image) for each of them.

Example:

policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
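
Optionally, before continuing, you can check that the edited YAML is well formed and that the service names line up across the three files. The command below is only a sketch: it assumes Docker Compose is installed and that the files are complete Compose files; the upgrade scripts may merge them differently.

  Validate edited configuration files (optional example)
  cd <path to upgrade folder>/docker-compose
  sudo docker-compose -f docker-compose.yml -f network.yml -f versiontag.yml config > /dev/null && echo "YAML OK"
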
 Verify and identify nodes
  1. Verify if all nodes are part of the cluster by running this command:

    Verify if all nodes are part of cluster
    sudo docker node ls

  2. Identify the node IDs of the manager and worker nodes where the services will be distributed.

    Identify nodes
    sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
  3. Output from this command:

    {ready  192.168.86.129}

    The IP address will help to identify the Digital Access node.
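
To map all node IDs to their roles, states and IP addresses in one step, a small loop over the node list can be used. This is a convenience sketch only and is not part of the upgrade scripts.

  List role, state and IP for all nodes (example)
  for id in $(sudo docker node ls -q); do
    sudo docker node inspect --format '{{ .ID }} {{ .Spec.Role }} {{ .Status.State }} {{ .Status.Addr }}' "$id"
  done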


 Add new labels for each service

Add new labels for each service that you want to run. You can choose any names, but make sure they match what you have defined in the constraints section of the docker-compose.yml file.

  1. Use these commands to add a label for each service:

    Commands to add labels
    sudo docker node update --label-add da-policy-service=true <manager node ID>
    sudo docker node update --label-add da-authentication-service=true <manager node ID>
    sudo docker node update --label-add da-administration-service=true <manager node ID>
    sudo docker node update --label-add da-access-point=true <manager node ID>
    sudo docker node update --label-add da-distribution-service=true <manager node ID>
    sudo docker node update --label-add da-policy-service1=true <worker node ID>
    sudo docker node update --label-add da-access-point1=true <worker node ID>
    
  2. Deploy your Digital Access stack using this command. 

    Verify that the required images are available on the nodes. Then run the start-all.sh script on the manager node.

    Deploy Digital Access stack
    sudo bash /opt/nexus/scripts/start-all.sh
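
Before or right after deploying, you can confirm that each node received the intended labels. This is only a convenience check, using the node IDs noted earlier.

  Verify node labels (example)
  sudo docker node inspect --format '{{ .Description.Hostname }}: {{ json .Spec.Labels }}' <manager node ID>
  sudo docker node inspect --format '{{ .Description.Hostname }}: {{ json .Spec.Labels }}' <worker node ID>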


 Do updates in Digital Access Admin
  1. Log in to Digital Access Admin. If you use an internal database for configurations, provide the host machine IP address to connect to the databases (HAG, Oath, Oauth).

  2. Publish the configurations.
  3. Change the internal host and port for each added service according to the docker-compose.yml and network.yml files.
  4. Go to Manage System > Distribution Services and select “Listen on All Interfaces” for the ports that are to be exposed.

  5. Go to Manage System > Access Points and provide the IP address instead of the service name. Also, enable the "Listen on all Interfaces" option.

  6. If you want to enable the XPI and SOAP services, the expose port ID should be 0.0.0.0 in Digital Access Admin.

  7. If there is a host entry for DNS on the appliance, you need to provide a corresponding host entry in the docker-compose file.
  8. Redeploy the services using this command on the manager node.

    Restart
    sudo bash /opt/nexus/scripts/start-all.sh


 Upgrade Digital Access

 Prerequisites
  1. The following ports must be open to traffic to and from each Docker host participating in an overlay network:
    1. TCP port 2377 for cluster management communications.
    2. TCP and UDP port 7946 for communication among nodes.
    3. UDP port 4789 for overlay network traffic.
  2. Make sure there is a backup/snapshot of the machine before starting the upgrade.
 Preparations before upgrade
  1. Copy upgrade.tgz to the manager node (node where administration service is running) and all worker (other) nodes.
  2. Extract the tar file on all nodes.

    Extract
    tar -xzf upgrade.tgz
  3. Download DA docker images on all machines (OPTIONAL)
    1. Run the script pull_image.sh on all machines (in case of HA or distributed mode). This script downloads the docker images for all DA services with the version mentioned in versiontag.yml, which helps reduce the downtime of the upgrade. If you instead let the upgrade script download the images, it will download only the images each node needs, based on the configuration in the docker-compose file. For instance, if you have a setup of two VMs/appliances, where one runs the admin, policy, authentication, distribution and access point services and the other runs policy and authentication, then the upgrade script downloads only the required images on the respective machines.

      Pull images
      sudo bash upgrade/scripts/pull_image.sh
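
To confirm that the images were downloaded with the tag given in versiontag.yml, you can list them on each node. This is only a quick check and is not required by the upgrade.

  Check downloaded images (example)
  sudo docker images | grep smartid-digitalaccess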



 Edit configuration files

Before starting the upgrade, it is important to edit the configuration files based on your setup. This is required in case of a high availability or a distributed mode setup.

Only the configuration files in the manager node (machine running the admin service) need to be configured in the below way.

Navigate to the docker-compose folder (/upgrade/docker-compose) and edit these files:

  • docker-compose.yml
  • network.yml
  • versiontag.yml

docker-compose.yml


For each service, add one section in the docker-compose.yml file.

Below is an example of how to set up the policy service on two different nodes; policy and policy1 are the service names of these services. Similar changes also have to be made for the other services.

It is important to get the constraint node labels correct. Node management in swarm is done by assigning labels (metadata) to the nodes.

  policy:
    hostname: policy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         [node.labels.da-policy-service == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

  policy1:
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
         [node.labels.da-policy-service1 == true ]
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
       max-size: 10m

network.yml

For each service, add network configuration in the network.yml file. This file specifies the network used and the ports exposed by each service.

Below is an example of how to set the network configuration for the two policy services defined in the docker-compose.yml file above; policy and policy1 are the service names of these services.

Similar changes will also have to be replicated for other services as well.

policy:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

policy1:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

Also, make sure all the listeners that are used for access point load balancing are exposed in network.yml.

versiontag.yml

versiontag.yml must also be updated with the docker image tags of all services. This file determines which version of the DA images will be downloaded.

Below is an example for the policy services policy and policy1. Similar entries will also be present for the other services.

policy:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.1.x.xxxxx

policy1:
  image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.1.x.xxxxx
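
A quick way to review all image tags in one place and spot a mismatched entry is to grep the file. The path placeholder follows the convention used earlier in this article.

  List image tags in versiontag.yml (example)
  grep -n 'image:' <path to upgrade folder>/docker-compose/versiontag.yml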

Upgrade manager node (node running admin service) 

 Run upgrade script

To upgrade the manager node:

  1. Run the upgrade script with this command line option:

    Run upgrade script
    sudo bash upgrade/scripts/upgrade.sh --manager
  2. Provide this machine's IP address while setting up swarm. This will make this machine the manager node in the swarm.
  3. Select No for the postgres installation in case you want to use an external database.

 Get Cluster Join token (only applicable for High Availability or distributed setup)
  1. Get the cluster join token by running this command on manager node: 

    Get cluster join token
    sudo docker swarm join-token worker

    The output of the above command will be:

Output

docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

Keep a note of the token, IP and port from the above output. These are used by the worker nodes to join this swarm (as workers).

<token> = SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq

<ip:port> = 192.168.253.139:2377

 Upgrade worker nodes

  • Run the upgrade script on each worker node with the command line options below, replacing the token and ip:port values with those from the step above.

    Run upgrade script on worker node
    sudo bash upgrade/scripts/upgrade.sh --worker --token <token> --ip-port <ip:port>


 Manager node: Verify all nodes in swarm cluster
  1. Once the upgrade is done on all worker nodes, go to the manager node and verify that all nodes are part of the cluster by running the command below.
    Verify that the nodes in the swarm cluster show a "Ready" status and "Active" availability, and make a note of the manager and worker node IDs.
    The node whose Manager Status shows "Leader" is the manager node; the rest are worker nodes.

    Verify if all nodes are part of cluster
    sudo docker node ls

    Note down the node IDs for all nodes, as they will be required in further steps.

 Add labels for each service and start DA

Add label metadata for each service that you are running. Make sure these labels match what is defined in the constraints section (node.labels) of the docker-compose.yml file.

Labels provide a way of managing services on multiple nodes.

  1. Below is an example of updating the node labels for one manager node and one worker node. Repeat for additional worker nodes.

    Commands to add labels
    sudo docker node update --label-add da-policy-service=true <manager_node_ID>
    sudo docker node update --label-add da-authentication-service=true <manager_node_ID>
    sudo docker node update --label-add da-administration-service=true <manager_node_ID>
    sudo docker node update --label-add da-access-point=true <manager_node_ID>
    sudo docker node update --label-add da-distribution-service=true <manager_node_ID>
    sudo docker node update --label-add da-policy-service1=true <worker_node_ID>
    sudo docker node update --label-add da-access-point1=true <worker_node_ID>

    Run the command below to inspect each node and check which labels have been added. This is to make sure the correct service labels have been added on the different nodes.

    Inspect node labels
    sudo docker node inspect <manager_node_ID>    # do the same for the worker nodes as well

    The Labels section in the output shows the services that are labelled as per the above step. When start-all.sh is run, swarm reads the labels and brings up the docker containers for those services accordingly.
    These labels indicate which services will run on which nodes.

  2. Make sure the password for the reporting database is correct, as previously set in the administration service's customize.conf file.
  3. Now run the start-all.sh script on the manager node to start the configured services on all nodes.

    Deploy Digital Access stack
    sudo bash /opt/nexus/scripts/start-all.sh
  4. Log in to Digital Access Admin. If you use an internal database for configurations, provide the host machine IP address to connect to the databases (HAG, Oath, Oauth).
  5. Also, change the Internal Host for all services to the respective service name instead of an IP address/127.0.0.1. These service names should match the hostname values in the docker-compose.yml file.
  6. Change the access point ports (Portal port - 10443).

  7. Publish the configurations.
  8. Change the "Internal Host" and port for each added service according to the docker-compose.yml and network.yml files.

  9. If there is any host entry for DNS on the appliance, provide a corresponding host entry in the docker-compose file.
  10. If you want to enable the XPI and SOAP services, the expose port ID should be 0.0.0.0 in Digital Access Admin.

  11. Restart the services using this command on the manager node:

    Deploy Digital Access stack
    sudo bash /opt/nexus/scripts/start-all.sh
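
After start-all.sh has been run, the state of the deployed services can be checked from the manager node. This is a sketch; the service names shown by the commands depend on your deployment.

  Check service status (example)
  sudo docker service ls
  # If a service does not reach the expected replica count, inspect its tasks and recent logs
  sudo docker service ps <service name>
  sudo docker service logs --tail 100 <service name>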
