Note
titleUnder update

This article is currently under update and will be finished within the coming weeks.

This article

This article describes how to run Smart ID Digital Access component in distributed mode.

Distributed mode is used when the different functions of the Digital Access component are distributed across several virtual appliances. A typical case is when you want to enforce access in one appliance (PEP, Policy Enforcement Point) and process the authorization and authentication requests in another (PDP, Policy Decision Point). In this case you need two appliances: one that runs the access point and another that runs the remaining Digital Access component services.

Administration service limitations

There can be only one administration service in a node network. The appliance that runs the administration service can be toggled to and from distributed mode. When toggling from distributed mode, only the services running locally on the appliance can be part of the node network. Toggling an appliance with no administration service to or from distributed mode generally does not make sense, since there is no local administration service. Nodes running other services should be connected to the administration service node. Once a service has successfully connected to an administration service, that service cannot easily be switched to work with another appliance's administration service.

Log in on all hosts and go through the basic setup. Do not run the Administration Service UI setup wizard on an appliance that will not run a local Administration service. Make a note of each host's network IP address, which the other hosts will use to communicate with it.

This can be viewed in the console under "modify interfaces".
Expand
titleConfigure distributed mode
Expand
titleOn virtual appliance
Note
  • Manager node is the node that hosts the administration service.
  • Worker node is a node that hosts other services, not running the administration service.

Prerequisites

  • Log in to Digital Access Admin of the host that will run the Administration service.
  • Go to Manage System. Here you can add, remove and configure the services: Administration service (configure, not add/remove), Access point, Policy service, Authentication service and Distribution service according to your preferred setup. As the services must be able to communicate with each other, you must set them to listen on the host's network IP address, overriding the default 127.0.0.1:
    1. Set the value Internal Host to an external IP address.
    2. Make a note of the Service ID for all services, including the new services that have been created.
    3. When configuring the Policy service make sure to also configure XPI:REST.
  • Go to Manage Resource Access and select the api resource.
  • Select Edit Resource Host…
  • Configure the same IP address as you configured under XPI:REST.
  • If the Administration service, Policy service(s) and/or Authentication service(s) are to be spread out over multiple hosts, then the built-in default internal database cannot be used due to it being reachable only on the loopback adapter (127.0.0.1). Consequently an external database has to be used that can be reached by the hosts running these services.
    1. Go to Manage System > Database Service to configure it, see also Database service in Digital Access.
  • If multiple Authentication services are to be used, then the built-in default OATH database cannot be used for the same reason as above.
    1. Go to Manage System > OATH Configuration.
    2. Select Configure Database Connection.
  • Click Publish.
  • Log on to the host running the Administration service and disable the services that this host should not run.

    Expand
    titleOn Orchestrator
    1. This is the IP address of the host network.

    Expand
    titleOn virtual appliance
    1. In the console, select 2) Detailed server setup. A list of local services is displayed.

    2. Select each service that shall be deactivated. Answer the questions (first question is "Should this service be enabled?") with No.

    Expand
    titleOn Orchestrator

    For each service that should be disabled, run the following command (the example disables the policy service):

    Code Block
    docker exec orchestrator hagcli -s policy-service -o disable

    Log on to the host running the Administration service and enable distributed mode.

    Expand
    titleOn virtual appliance
    1. Select 6) Activate distributed mode to toggle to “distributed mode”.

    Expand
    titleOn Orchestrator

    Run the following command:

    Code Block
    docker exec orchestrator hagcli -s distributed-service -o enable
    Log on to the other host(s) not running the Administration service and disable all services you do not want to run on each host.

    Expand
    titleOn virtual appliance
    1. Select Detailed server setup in the console.
    Expand
    titleOn Orchestrator
    For each service that should be disabled, run the following command (the example disables the policy service):
    Code Block
    docker exec orchestrator hagcli -s policy-service -o disable


    Since the Administration service is not hosted on this host, each enabled service needs to point to the external Administration service.

    Expand
    titleOn virtual appliance
    1. Select Detailed server setup in the console.
    2. Disable the Administration Service and answer the question about where to find the Administration service. This automatically updates all Administration service IP addresses in LocalConfiguration.xml.
    Expand
    titleOn Orchestrator
    1. Change the IP address of the Administration Service for each service enabled on this host:
      1. Open LocalConfiguration.xml in /opt/nexus/primary/<service>/config/LocalConfiguration.xml.
      2. Search for the Administration Service section.
      3. Change the value of mHost to the external IP address of the Administration Service.
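    The mHost edit above can also be scripted. The snippet below is only a sketch: the element layout shown is an assumed illustration, not the actual LocalConfiguration.xml schema, and it is demonstrated on a sample file in /tmp rather than the real /opt/nexus path. Verify the exact structure in your own file before changing it.

```shell
# Illustration only: the markup below is an assumption, not the real
# LocalConfiguration.xml schema. Demonstrated on a /tmp sample file.
ADMIN_IP=192.168.253.139            # external IP of the Administration service
SAMPLE=/tmp/LocalConfiguration.xml

cat > "$SAMPLE" <<'EOF'
<LocalConfiguration>
  <AdministrationService>
    <mHost>127.0.0.1</mHost>
  </AdministrationService>
</LocalConfiguration>
EOF

# Replace the loopback address with the external Administration service IP
sed -i "s|<mHost>127.0.0.1</mHost>|<mHost>${ADMIN_IP}</mHost>|" "$SAMPLE"
grep mHost "$SAMPLE"
```

    When editing the real file, back it up first (for example with cp) and restart the affected service afterwards so the new address is picked up.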
  • The Activate distributed mode option can be used as a convenience on an appliance to quickly set all IP address fields to a given value, and their port and node id to the default values:
    1. In the console, select 2) Detailed server setup.
    2. Then select 6) Activate distributed mode.
  • To further manually configure any service on this appliance,

    Expand
    titleOn virtual appliance
    1. Select 2) Detailed server setup, and select the service to modify and answer the questions.
    Expand
    titleOn Orchestrator
    1. Open LocalConfiguration.xml in opt/nexus/primary/<service>/config/LocalConfiguration.xml 
    2. Change the id values in the <id> element and the mId attribute to the number you got when adding the new service node in Digital Access Admin.
    This article is valid for Smart ID 20.06
    Prerequisites

    The following prerequisites apply:

    • Two Digital Access components with services and docker swarm available 
    • The following ports shall be open to traffic to and from each Docker host participating on an overlay network:
      • TCP port 2377 for cluster management communications
      • TCP and UDP port 7946 for communication among nodes
      • UDP port 4789 for overlay network traffic
    • For more details refer to: https://docs.docker.com/network/overlay/
    • Keep a note of IP addresses of nodes where access point is running.
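    If the hosts are protected by a local firewall, the ports listed above must be opened before the nodes can form an overlay network. A minimal sketch assuming the hosts run firewalld (an assumption; adapt the commands to whatever firewall your hosts actually use):

```shell
# Sketch assuming firewalld; adapt to your firewall of choice.
sudo firewall-cmd --permanent --add-port=2377/tcp   # cluster management
sudo firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
sudo firewall-cmd --permanent --add-port=7946/udp
sudo firewall-cmd --permanent --add-port=4789/udp   # overlay network traffic
sudo firewall-cmd --reload
```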

    Step-by-step instruction

    Get token and stop services - manager node

    Expand
    titleGet cluster join token
    1. SSH to the node running the administration service, that is, the manager node.
    2. Get the cluster join token by running this command. The token is used for joining worker nodes to the manager node.

      Code Block
      titleGet token
      sudo docker swarm join-token worker

      The output of the command will look like:

      Panel
      titleOutput

      docker swarm join --token SWMTKN-1-5dxny21y4oslz87lqjzz4wj2wejy6vicjtqwq33mvqqni42ki2-1gvl9xiqcrlxuxoafesxampwq 192.168.253.139:2377



    Expand
    titleStop services
    1. Stop the running services.

      Code Block
      titleStop services
      sudo docker stack rm <your da stack name>


    Join as worker nodes

    For instructions on how to join the worker nodes to the cluster, see Set up high availability for Digital Access component.

    At manager node

    Expand
    titleRemove labels, verify and identify nodes
    1. SSH to manager node.
    2. Remove label for all services which are not required on this node.

      Code Block
      titleRemove label
      sudo docker node update --label-rm da-accesspoint <node ID>


    3. Verify that all nodes are part of the cluster by running this command.

      Code Block
      titleVerify if all nodes are part of cluster
      sudo docker node ls


    4. Identify the node IDs, manager and worker, across which the services will be distributed.

      Code Block
      titleIdentify nodes
      sudo docker node inspect --format '{{ .Status }}' h9u7iiifi6sr85zyszu8xo54l

      Output: {ready  192.168.86.129} - the IP address helps to identify the DA node
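      To avoid inspecting nodes one at a time, steps 3 and 4 can be combined. The loop below is a sketch using standard docker node inspect Go-template fields:

```shell
# List ID, hostname, state, and address for every node in the swarm
for id in $(sudo docker node ls -q); do
  sudo docker node inspect \
    --format '{{ .ID }} {{ .Description.Hostname }} {{ .Status.State }} {{ .Status.Addr }}' "$id"
done
```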


    Expand
    titleUpdate labels for each service
    1. Update the labels for each service that you want to run on worker nodes.
       <node ID> is the ID of the node on which the service will run.

      Code Block
      titleCommands to update labels
      sudo docker node update --label-add da-policy=true <node ID> 
      sudo docker node update --label-add da-authentication=true <node ID> 
      sudo docker node update --label-add da-accesspoint=true <node ID> 
      sudo docker node update --label-add da-distribution=true <node ID>
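
      After updating the labels, it is worth verifying that each node carries the labels you expect before deploying the stack:

```shell
# Show the labels currently set on a node
sudo docker node inspect --format '{{ .Spec.Labels }}' <node ID>
```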


    2. Deploy your stack using this command. Run the command from the docker-compose directory.

      Code Block
      titleDeploy DA stack
      sudo docker stack deploy --compose-file docker-compose.yml -c network.yml -c versiontag.yml <your da stack name>

      Here: 

      • docker stack deploy is the command to deploy services as a stack.
      • The --compose-file flag is used to provide the file name of the base docker-compose file.
      • -c is short for the --compose-file flag. It is used to provide override files for docker-compose.
      • <your da stack name> is the name of the stack. You can change it based on your requirements.
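
      Once the stack is deployed, you can check that the services were scheduled onto the intended nodes:

```shell
# List the services in the stack and their replica counts
sudo docker stack services <your da stack name>

# Show which node each service task was placed on
sudo docker stack ps <your da stack name>
```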