
This article is new for Certificate Manager 8.10.4.

This article describes how to install Smart ID Certificate Manager (CM) server components using Podman compose.

Prerequisites

  • A supported database server must be installed/available

  • License file must be available

  • Podman version 4.9.4 or later

  • RedHat Enterprise Linux 9.4 or Rocky Linux 9.4

  • Administrator's Workbench, Registration Authority, and Certificate Controller clients from the CM distributable package

Step-by-step instructions

Pre-configuration

There are a few pre-configuration steps required before CM can be deployed. To prepare the deployment with an initial configuration, follow the steps in the sections below.

CM image archive files

The Podman images of Certificate Manager are stored in the images directory under the distributable. These image files may be uploaded to a local private container registry with controlled and limited access, but shall not be distributed to any public container registry.

For local use, the images can be loaded with the following commands:

podman image load --input images/cf-server.tar
podman image load --input images/pgw.tar

CM license file

Create a license directory in the cm deployment directory and place the CM license files inside it. In this article, the license directory is mounted as a read-only bind volume into the cf-server container, which runs the certificate factory server.
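As a sketch, the bind mount could be declared in docker-compose.yml as follows. The container-side path /opt/cm/server/license is an assumption; use the path expected by your cf-server image:

```yaml
cf-server:
    image: "localhost/smartid/certificatemanager/cf-server:<tag>"
    volumes:
        # Read-only bind mount of the local license directory into the
        # container. The target path /opt/cm/server/license is an assumed
        # example, not necessarily the path used by the shipped image.
        - ./license:/opt/cm/server/license:ro,z
```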

Deployment directory

When deploying with podman-compose, the name of the directory containing the distributable deployment files dictates the prefix of the name of each container deployed from the docker-compose.yml file. Most parameters in the docker-compose.yml file can be changed to suit special needs; for example, the ports exposed by the containers can be changed to different ports if required.

In this guide it is assumed that the name of this directory is cm, and each container name will hence be prefixed with:

cm_

This is also the directory from which the deployment will be done, and all configuration for the containers is placed inside it.

Initialize the CM deployment

Copy the cm directory from inside the deployment directory in the container release distributable to the preferred storage path for the deployment files. The cm directory name will be used to prefix the container names when deploying with podman-compose, and may be changed to a different name. This is useful, for example, when deploying multiple instances of any of the container images.

Enter the cm directory, where the docker-compose.yml file is located, and create the containers and volumes by running the below command:

podman-compose up --no-start

This prevents the containers from starting immediately, but still sets up all the necessary volumes and the network.

Add CM database connection

Before continuing this step, follow the steps for the corresponding database from one of the following pages:

To add the CMDB connection to the CM configuration, a JDBC connection must be added.

Add the following --cm-param flags to the cf-server command property in the docker-compose.yml file to make the CF container start with a correctly configured JDBC connection:

cf-server:
    image: "localhost/smartid/certificatemanager/cf-server:<tag>"
    command: [
        "5009",
        "combo",
        "--cm-param", "Database.name=<jdbc-connection-string>",
        "--cm-param", "Database.user=<user>",
        "--cm-param", "Database.password=<password>",
        "--cm-param", "Database.connections=20"
    ]

Alternatively, add the JDBC connection manually to the cm.conf file in the following way.

The CF container needs to be started once to initialize the volumes with the configuration files. This first start of CF will fail because no JDBC connection is configured yet.

podman-compose start cf-server

The configuration volume may then be mounted on the Podman host so that the changes to cm.conf can be made. This command outputs the path on which the volume is mounted:

podman volume mount cm_cf-server-config
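With the volume mounted, the same parameters that the --cm-param flags set can be written directly into cm.conf. A sketch using the parameter names from the compose example above; the exact syntax should be checked against the comments in cm.conf itself:

```properties
# JDBC connection for the CM database (values are placeholders)
Database.name = <jdbc-connection-string>
Database.user = <user>
Database.password = <password>
Database.connections = 20
```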

Then restart the CF container by running the below command again:

podman-compose start cf-server

Monitor the container with the podman inspect command until it reaches the "Running" state.

Post-configuration

Accessing the CM containers using the CM clients

At this point the CF is ready to accept connections on the exposed CF container port, so it is now possible to connect using the Administrator's Workbench and Registration Authority clients. These clients can be installed from the CM distributable zip package.

There might be firewall rules blocking the exposed container ports in the Podman host.

Configuring the CM server application container

At this point the deployment will have configuration, certificates, and other persistent data on volumes mounted in the CF containers. To make changes in any of the configuration files, or just to copy files, the volumes need to be accessed either from inside the containers or by mounting the volumes elsewhere.

For example, to edit configuration in Certificate Manager from inside the container:
podman exec -ti cm_cf-server_1 bash

This starts a new shell inside the cm_cf-server_1 container, which allows you to edit cm.conf and other files. It is also possible to mount a named volume on the host in case the container cannot start properly for some reason.

The volumes can also be reached from the local file system in most cases by viewing the container volume mount list using the below command:
podman inspect cm_cf-server_1

Initializing the Protocol Gateway container

The Protocol Gateway container, here named pgw, is based on an Apache Tomcat version 10 image and contains a configuration for a minimal deployment. The Protocol Gateway servlets are deployed but none of them are started.

For HTTPS in Tomcat, server.xml references a default PKCS#12 TLS server token file, "protocol-gateway-tls.p12", together with its password, but the token file itself is not included. It must be issued and then uploaded to the Tomcat configuration directory (or to a different volume-backed path configured in server.xml).
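As an illustrative sketch (not the shipped server.xml), a Tomcat 10 HTTPS connector referencing such a PKCS#12 file could look like this; the port and the exact attribute values are assumptions:

```xml
<Connector port="8443" SSLEnabled="true"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150">
    <SSLHostConfig>
        <!-- PKCS#12 TLS server token; path relative to the Tomcat base directory -->
        <Certificate certificateKeystoreFile="conf/protocol-gateway-tls.p12"
                     certificateKeystoreType="PKCS12"
                     certificateKeystorePassword="<password>"/>
    </SSLHostConfig>
</Connector>
```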

To initialize the volumes from the Protocol Gateway image data, the pgw container should be started up, and once Tomcat has started, the container should be stopped again:

podman-compose start pgw  
podman logs -f cm_pgw_1
podman-compose stop pgw 

Configuring the Protocol Gateway container

The pgw container has two volumes by default:

  • cm_pgw-config-gw

  • cm_pgw-config-tomcat

The container should be configured while in the stopped state; one way to modify the configuration in these volumes is to access the container's file system from the Podman host.

To configure the Protocol Gateway servlets in the pgw container, follow the Install Protocol Gateway document, with the difference that all configuration operations must be done in the configuration volumes mounted in the pgw container.

Examples of how to edit the configuration in the volumes. These commands output the path on which the volume is mounted:

podman volume mount cm_pgw-config-tomcat
podman volume mount cm_pgw-config-gw

The containerized deployment is rootless by default, so executing the podman unshare command is not required. Normally, after the container has started, the volumes are already mounted.

Make sure that the owner and permissions of any copied files into the volume directories are correct.
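A minimal sketch of checking permissions after copying a file into a volume path. Here VOLUME_PATH is a hypothetical stand-in for the real mount path reported by "podman volume mount"; changing ownership additionally requires the owner expected by the container image:

```shell
# Stand-in for the mounted volume directory (hypothetical example).
VOLUME_PATH="$(mktemp -d)"

# Copy a configuration file into the volume path, then tighten permissions
# so it is readable by owner and group only.
echo "start=true" > "$VOLUME_PATH/ping.properties"
chmod 640 "$VOLUME_PATH/ping.properties"

# Verify the resulting permission bits (GNU coreutils stat).
PERMS="$(stat -c '%a' "$VOLUME_PATH/ping.properties")"
echo "$PERMS"
```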

Another possibility is to start a new container using a suitable image (such as the pgw image) which mounts the volumes required for copying files and changing configuration.

Example:

# Run an interactive container which mounts the pgw volumes
podman run --rm --network cm_cmnet --user 0 \
        --entrypoint /bin/bash \
        -v cm_pgw-config-gw:/pgwy-conf:z \
        -v cm_pgw-config-tomcat:/tomcat-conf:z \
        -ti smartid/certificatemanager/pgw:latest

Instead of "latest", use the same pgw image tag as in the deployment files.

Enabling the pgw container health check

For the health check to pass, the Ping servlet requires a valid virtual registration officer token from the previous step, and a ping procedure must also be configured for the Ping servlet in CM.

Once the token is configured and the ping procedure is available, the Ping servlet must be started by setting the "start=true" parameter in the below properties file inside the pgw container:

/var/cm-gateway/conf/ping.properties

Alternatively, the volume can be mounted on the host or a different container and can be edited from there.
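The relevant line in ping.properties is the start parameter named in the text; other properties in the file are left as configured:

```properties
# Enable the Ping servlet once the token and ping procedure are in place.
start=true
```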

Warnings will be logged in the Protocol Gateway if the Ping servlet does not refer to a Ping procedure in Certificate Manager. See the Technical Description document for configuring this.

Starting the pgw container

Once the configuration has been edited the pgw container can be started:

podman-compose start pgw

This is the required minimum configuration for setting up the Protocol Gateway. Additional volumes for possible output directories or HSM/other libraries, additional configuration, or web applications may be added if so required.

Connecting to services running on the Podman host

While it is not recommended, there might be situations where a connection from a container to the Podman host machine is needed. As Podman uses the slirp4netns network driver by default, there is no routing configured to reach the Podman host's localhost/127.0.0.1 address. To achieve this, the special IP address 10.0.2.2 can be used to reach the Podman host's localhost, after adding the following configuration depending on the deployment type:

Deployment type: podman-compose
Location: Under the container object
Configuration: network_mode: "slirp4netns:allow_host_loopback=true"
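In docker-compose.yml terms, this could look like the following sketch for the cf-server service (the service shown is just an example); the host's loopback services are then reachable from the container at 10.0.2.2:

```yaml
cf-server:
    image: "localhost/smartid/certificatemanager/cf-server:<tag>"
    # Allow the container to reach services on the Podman host's loopback
    # interface via the special address 10.0.2.2.
    network_mode: "slirp4netns:allow_host_loopback=true"
```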

Preventing shutdown of containers

Podman containers that belong to a user session will shut down after the user's session ends, i.e. when the user logs out of the Podman host machine.

An easy way to prevent the shutdown is by enabling lingering via loginctl:

loginctl enable-linger <user>

HSM configuration

HSM libraries are by default stored in the directory /opt/cm/server/bin, which is also backed by a volume by default. However, the libraries can be placed in another location in the container, for example one referenced by the LD_LIBRARY_PATH environment variable inside the container. The HSM vendor's documentation should indicate where its configuration is located.

It is recommended to create additional volumes for both the library and its configuration, so that they are persistent and can be upgraded to newer versions.
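A sketch of such additional volumes in docker-compose.yml; the volume names and container paths here are assumptions and should be adapted to the HSM vendor's documentation:

```yaml
cf-server:
    image: "localhost/smartid/certificatemanager/cf-server:<tag>"
    environment:
        # Assumed example: point the dynamic linker at the HSM library volume.
        - LD_LIBRARY_PATH=/opt/hsm/lib
    volumes:
        # Hypothetical volumes keeping the HSM library and its configuration
        # persistent across container upgrades.
        - hsm-lib:/opt/hsm/lib
        - hsm-config:/opt/hsm/etc
```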

The CM configuration files have documentation for the parameters where an HSM library should be configured. To test and configure an HSM for use with CM, the "hwsetup" tool can be used. See Initialize Hardware Security Module for use in Certificate Manager for more details.

Troubleshooting

The container logs can be monitored using the "podman logs" command to narrow down any issues that might occur. If any of the containers fail to start, it is commonly necessary to access the configuration inside the container.

A simple way to handle this is to start another container mounting the volumes and overriding the image's entry point, for example:

podman run --rm --network cm_cmnet --user 0 --entrypoint /bin/bash -v cm_pgw-config-tomcat:/tomcat-cfg:z -ti smartid/certificatemanager/pgw:8.10.3-1

Even if the faulty container is down or unsuccessfully trying to restart, this temporary container allows for editing the configuration on the mounted volumes, and files can be copied between them and the Podman host.
