- Created by Former user, last modified by Ylva Andersson on Apr 07, 2022
This article describes how to run Smart ID Digital Access component in distributed mode.
Distributed mode is used when the different functions of the Digital Access component are distributed across several virtual appliances. A typical case is when you want to enforce access in one appliance (PEP, Policy Enforcement Point) and process authorization and authentication requests in another (PDP, Policy Decision Point). In this case you need two appliances: one that runs the access point and another that runs the other Digital Access component services.
Administration service limitations
There can be only one administration service in a node network. Nodes running other services must be connected to the administration service node. Once a service has successfully connected to an administration service, it cannot easily be switched to another appliance's administration service.
- Manager node is the node that hosts the administration service.
- Worker node is a node that hosts other services, not running the administration service.
- Make sure that 1003 is available as both user ID and group ID.
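A quick way to confirm that 1003 is unused on a node (a sketch using getent, which prints a matching entry when the ID is already taken):

```shell
# Verify that 1003 is free as both a user ID and a group ID.
# getent exits 0 (and prints an entry) when the ID is in use.
da_id=1003
if ! getent passwd "$da_id" >/dev/null && ! getent group "$da_id" >/dev/null; then
  status="free"
else
  status="in use"
fi
echo "uid/gid $da_id is $status"
```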
Prerequisites
The following prerequisites apply:
- Two Digital Access components with services and Docker Swarm available
- The following ports must be open to traffic to and from each Docker host participating in an overlay network:
- TCP port 2377 for cluster management communications
- TCP and UDP port 7946 for communication among nodes
- UDP port 4789 for overlay network traffic
- For more details refer to: https://docs.docker.com/network/overlay/
- Keep a note of the IP addresses of the nodes where the access point is running.
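The TCP part of the port prerequisites can be spot-checked from a peer node with a quick bash probe (a sketch; the peer address is the example IP used in this article, and the UDP ports 7946 and 4789 need a separate check):

```shell
# Rough reachability probe for the swarm TCP ports from one node to a
# peer, using bash's /dev/tcp. Only covers TCP; UDP must be tested
# with other tooling.
peer=192.168.253.139   # example address from this article
results=""
for port in 2377 7946; do
  if timeout 2 bash -c "echo > /dev/tcp/$peer/$port" 2>/dev/null; then
    results="$results $port:open"
  else
    results="$results $port:closed"
  fi
done
echo "TCP check on $peer:$results"
```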
Step-by-step instruction
Get token and stop services - manager node
- SSH to the node running the administration service, that is, the manager node.
Get the cluster join token by running this command. The token is used for joining worker nodes to the manager node.
Get token:
sudo docker swarm join-token worker
The output of the command will look like this:
Output:
docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377
Stop the running services.
Stop services:
sudo docker stack rm <your da stack name>
Join as worker nodes
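The join step itself runs on each worker node, using the command printed by the manager. A minimal sketch, with the token and address taken from the example output above (substitute your own values):

```shell
# Run on each worker node. The token and manager address are the
# example values from the "Get token" output; substitute your own.
token=SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq
manager_addr=192.168.253.139:2377
# Guarded so the sketch is harmless on hosts without Docker installed.
if command -v docker >/dev/null 2>&1; then
  sudo docker swarm join --token "$token" "$manager_addr"
fi
```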
At manager node
- SSH to the manager node.
Remove the label for all services that are not required on this node.
Remove label:
sudo docker node update --label-rm da-access-point <node ID>
Verify that all nodes are part of the cluster by running this command.
Verify nodes:
sudo docker node ls
Identify the node IDs, manager and worker, where the services will be distributed.
Identify nodes:
sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l
Output: {ready 192.168.86.129} - the IP address helps you identify the DA node
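As a convenience, the per-node inspect step can be complemented by listing every node's ID, status and role in one pass (a sketch; docker node ls supports Go-template formatting):

```shell
# Print ID, hostname, status and manager role for every swarm node
# in one pass, instead of inspecting nodes one at a time.
fmt='{{.ID}}  {{.Hostname}}  {{.Status}}  {{.ManagerStatus}}'
# Guarded so the sketch is harmless on hosts without Docker installed.
if command -v docker >/dev/null 2>&1; then
  sudo docker node ls --format "$fmt"
fi
```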
Update the labels for each service that you want to run on worker nodes. <worker node ID> is the ID of the node on which the service will run.
Commands to update labels:
sudo docker node update --label-add da-policy-service=true <worker node ID>
sudo docker node update --label-add da-access-point=true <worker node ID>
Deploy your stack using this command. To run the command, your working directory must be the docker-compose directory.
Deploy DA stack:
sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>
Here:
- docker stack deploy is the command to deploy services as a stack.
- The --compose-file flag is used to provide the file name of the base docker-compose file.
- -c is short for the --compose-file flag. It is used to provide override files for docker-compose.
- <your da stack name> is the name of the stack. You can change it based on your requirements.
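For reference, the node labels set earlier typically steer services via placement constraints inside the compose files. A minimal sketch with illustrative service names (the actual DA service names in docker-compose.yml may differ):

```yaml
# Sketch of compose placement constraints driven by node labels.
services:
  access-point:          # illustrative name
    deploy:
      placement:
        constraints:
          - node.labels.da-access-point == true
  policy-service:        # illustrative name
    deploy:
      placement:
        constraints:
          - node.labels.da-policy-service == true
```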
This article is valid for Digital Access 6.2 and later.
Related information