Navigate to the docker-compose folder (<path to upgrade folder>/docker-compose) and edit these files:
- docker-compose.yml
- network.yml
- versiontag.yml
docker-compose.yml

For each service, add one section in the docker-compose.yml file. Change the values for the following keys:
- Service name
- Hostname
- Constraints

For example, if you want to deploy two policy services on two nodes, you will have two configuration blocks as shown in the example below.
  policy:
    # configure image tag from versiontag.yaml
    hostname: policy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          #If you need to set constraints using node name
          #- node.hostname ==<node name>
          # use node label
          - node.labels.da-policy-service == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m
  policy1:
    # configure image tag from versiontag.yaml
    hostname: policy1
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          #If you need to set constraints using node name
          #- node.hostname ==<node name>
          # use node label
          - node.labels.da-policy-service1 == true
      resources:
        limits:
          cpus: "0.50"
          memory: 512M
        reservations:
          cpus: "0.10"
          memory: 128M
    volumes:
      - /opt/nexus/config/policy-service:/etc/nexus/policy-service:z
      - /etc/localtime:/etc/localtime
      - /etc/timezone:/etc/timezone
    logging:
      options:
        max-size: 10m
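The placement constraints in the example above reference the node labels da-policy-service and da-policy-service1, which must be set on the target nodes before the services can be scheduled. A minimal sketch of adding them with the standard Docker CLI once the swarm has been set up (later in this procedure); node1 and node2 are placeholders for your actual node host names:

  # Run on the manager node; list the node names first if needed.
  sudo docker node ls

  # Label the nodes that should run each policy service (node1/node2 are placeholders).
  sudo docker node update --label-add da-policy-service=true node1
  sudo docker node update --label-add da-policy-service1=true node2

  # Verify that the labels were applied.
  sudo docker node inspect node1 --format '{{ .Spec.Labels }}'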
network.yml

For each service, add network configuration in the network.yml file. For example, if you want to deploy two policy services on two nodes, you will have two blocks of configuration as shown below. Change the value of:
- Service name: The service name must be identical to the one used in docker-compose.yml.
Example:

  policy:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay
  policy1:
    ports:
      - target: 4443
        published: 4443
        mode: host
    networks:
      - da-overlay

Also, make sure all the listeners that are used for access point load balancing are exposed in network.yml (see the check below).
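To confirm after deployment that a published listener port is actually exposed on a node, you can check where the task runs and whether the port is bound; a minimal sketch, assuming port 4443 from the example above and standard Linux tooling (service names may be prefixed with a stack name in your deployment):

  # From the manager node: list the services and see where each task is placed.
  sudo docker service ls
  sudo docker service ps policy

  # On the node running the task: confirm the published port is bound.
  sudo ss -tlnp | grep 4443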
versiontag.yml

Add one line for each service in this file as well. For example, if you have two policy services named policy and policy1, you will have two lines, one for each service.

Example:
  policy:
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx
  policy1:
    image: nexusimages.azurecr.io/smartid-digitalaccess/policy-service:6.0.x.xxxxx

Place the da_migrate_5.13.tgz file inside the scripts folder, upgrade/scripts/. Run the upgrade script to import files/configuration from the older setup and upgrade to the latest version.
- Although the upgrade script installs docker and pulls the images from the repository, it is recommended to install docker and pull the images before running the upgrade. This reduces the script run time and the downtime of the system.
- Verify the Digital Access tag in the versiontag.yml file (<path to upgrade folder>/docker-compose/versiontag.yml). The same tag will be installed as part of the upgrade. If the tag is not correct, update it manually (see the check below).
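A quick way to check which tag will be installed is to print the image lines from the file; a minimal sketch, using the placeholder path from above:

  grep 'image:' '<path to upgrade folder>/docker-compose/versiontag.yml'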
Run the script pull_image.sh to pull images on all nodes. Note: In case of an offline upgrade, also load the DA docker images onto each machine (see the sketch after the command below). If you are using the internal postgres, load the postgres:9.6-alpine image on the manager node.
  sudo bash upgrade/scripts/pull_image.sh
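For an offline upgrade, where the nodes cannot reach the image registry, load the images from archives instead of pulling them; a minimal sketch, where the archive file names are placeholders for files you have transferred to the nodes:

  # On every node: load the Digital Access images from a local archive.
  sudo docker load -i <da-images-archive>.tar

  # On the manager node only, when using the internal postgres:
  sudo docker load -i <postgres-9.6-alpine-archive>.tar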
Run the import command:

On the manager node (the node running the administration service), run:

  sudo bash upgrade/scripts/upgrade.sh --manager --import_da
- To set up Docker Swarm, provide your manager node host IP address.
- In case you are using an external database, select No to skip the postgres installation.
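If you are unsure which host IP address to provide for the Docker Swarm setup, the addresses assigned to the manager node can be listed with standard Linux tooling; a minimal sketch:

  # Pick the address that is reachable from the worker nodes.
  hostname -I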
Note (only applicable for High Availability or distributed setup):

The script prints a token in the output. This token will be used while setting up the worker nodes.

Example:

  docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377

Here, the token value is SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq and the ip:port is 192.168.253.139:2377.
If you cannot find the token in the upgrade script output on the manager node, get the cluster join token by running this command:

  sudo docker swarm join-token worker
On worker nodes (only applicable for High Availability or distributed setup), run the import command on all other worker nodes:

  sudo bash upgrade/scripts/upgrade.sh --worker --import_da --token <token value> --ip-port <ip:port>
Follow the screen messages and complete the upgrade. Check the logs for any errors. During the upgrade, the script extracts the da_migrate_5.13.tgz file and recreates the same directory structure as on the older setup. On the manager node, it installs the PostgreSQL database as a docker container and imports the database dump from the older machine. After the scripts have been executed, the .tgz file will still be there; delete it once it is confirmed that the upgrade process has completed successfully.
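Once the upgrade has completed, one way to confirm that all nodes have joined the swarm and that the services run with the expected number of replicas is to use the standard swarm listing commands; a minimal sketch, run on the manager node:

  # All nodes should be listed with STATUS Ready.
  sudo docker node ls

  # All services (for example policy and policy1) should show the expected replica counts.
  sudo docker service ls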