Info

This article is new for Certificate Manager and applies to CM version 8.10.4x and later versions.

This article describes how to install Smart ID Certificate Manager (CM) server components using quadlets. The sections below on prerequisites, preconfiguration, the CM image, and the CM license are shared with Deployment using Podman compose.

Prerequisites

  • A supported database server must be installed/available

  • License file must be available

  • Podman version 4.9.4 or later

  • Administrator's Workbench, Registration Authority, and Certificate Controller clients from the CM distributable package.

CM and PGW installation steps

CM image archive files

The Podman images of Certificate Manager are stored in the images directory under the distributable. These image files may be uploaded to a local private container registry with controlled and limited access, but shall not be distributed to any public container registry.

For local use, the images can be loaded with the commands below:

Code Block
podman image load --input images/cf-server-image-<version>.tar
podman image load --input images/pgw-image-<version>.tar

CM license file

Create a license directory in the cm deployment directory and place the CM license files inside it. In this article, the license directory is mounted as a read-only bind file system volume for the cf-server container, which runs the Certificate Factory server.
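
As an illustration, and assuming the deployment directory is ~/cm (the path and license file name are placeholders), the directory can be prepared like this:

Code Block
mkdir -p ~/cm/license
cp /path/to/<cm-license-file> ~/cm/license/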

Deployment directory

When deploying with quadlets, the name of the directory in which the distributable deployment files are located is dictated by the user running the container. It maps to the following directory:

...

Initialize the CM deployment

Before continuing with the CM deployment on quadlets, follow the steps for the corresponding database from one of the following pages:

To handle CM on Podman in a production system, it is recommended to create quadlets for each container. Example quadlets can be found in the quadlets directory inside the distributable cm directory.
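
For orientation, a minimal .container quadlet has roughly the shape sketched below. This is only an illustration: the image reference, volume definition, and container path are assumptions, and the example quadlets shipped in the quadlets directory should be used as the actual starting point. The published port 5009 matches the CF port used later in this article.

Code Block
# cf-server.container (illustrative sketch only, not the shipped example)
[Unit]
Description=CM Certificate Factory server

[Container]
# image reference is an assumption; use the CM image loaded or pulled earlier
Image=localhost/cf-server-image:<version>
# use the CM network defined in cmnet.network
Network=cmnet.network
# volume and container path are placeholders for the shipped definitions
Volume=cf-server-config.volume:<container-config-path>
PublishPort=5009:5009

[Install]
WantedBy=default.target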

...

The license directory containing the CM license files must also be copied to the above directory.

CM containers are by default configured with an internal bridged network. This means that CM cannot access anything outside its own network, which can be an issue with, for example, cloud HSMs or an external CMDB database.

To enable outgoing external connectivity from this network, the parameter in the cmnet.network file should be changed as below:

Code Block
Internal=no

If the above setting is used, the outgoing connectivity from the containers in this network should be hardened with additional firewall rules outside the containers.
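
For context, the parameter lives in the [Network] section of the quadlet .network file. A minimal sketch, which only shows this setting and should not replace the shipped cmnet.network, is:

Code Block
# cmnet.network (illustrative sketch)
[Network]
# allow outgoing external connectivity from the CM network
Internal=no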

Once the .container, .volume, .network, and license files are in place, they can be loaded into systemd using:

...
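
As a general quadlet note rather than a CM-specific instruction: for rootless deployments, quadlet files are typically placed under ~/.config/containers/systemd/, and systemd generates the corresponding services after a reload. A minimal sketch:

Code Block
systemctl --user daemon-reload
# verify that the generated units are visible
systemctl --user list-unit-files 'cf-server*'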

The deployment procedure for CM with quadlets is essentially the same as for podman-compose, except that the containers are managed by systemd. Volumes only need to be started once, but may be restarted if they are removed for any reason.

...

Code Block
systemctl --user start cf-server-bin-volume
systemctl --user start cf-server-certs-volume
systemctl --user start cf-server-config-volume
systemctl --user start cf-server

The container images should be pulled or loaded manually before starting any of the containers with systemd. Because the systemd services lack a TTY (a virtual text console), containers can run into startup problems: the systemctl command will appear to time out while halting and waiting for console user input. Pulling the images manually with "podman pull" is preferable, since it permits user input in case an option needs to be selected.
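
For example, if the images have been uploaded to a local private registry, they can be pulled in advance; the registry host and repository names below are placeholders:

Code Block
podman pull <registry-host>/cf-server-image:<version>
podman pull <registry-host>/pgw-image:<version>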

...

Containers running from quadlets/systemd will be removed when the systemd service is stopped. Any data on the container not stored on volumes will be lost. 

CM database installation

Before continuing with the CM deployment on quadlets, follow the steps for the corresponding database from one of the following pages:

Add the CM database connection

To add the CMDB connection to the CM configuration, a JDBC connection must be added. This can be done in two ways: either by updating the cf-server.container file or by editing the cm.conf file in the systemd-cf-server-config volume. See the steps for the two alternatives below.

Containers should not be running while making changes to the configuration files.

Connection string examples for the supported databases:

Code Block
Database.name = jdbc:oracle:thin:@//<host>:<port>/CMDB
Database.name = jdbc:postgresql://<host>:<port>/cmdb
Database.name = jdbc:sqlserver://<host>:<port>;databaseName=CMDB;encrypt=false;trustServerCertificate=true
Database.name = jdbc:mysql://<host>:<port>/CMDB?permitMysqlScheme=true&allowPublicKeyRetrieval=true&sessionVariables=transaction_isolation='READ-COMMITTED'
Database.name = jdbc:mariadb://<host>:<port>/CMDB?sessionVariables=tx_isolation='READ-COMMITTED'

The following parameters need to be updated:

  • Database.name

  • Database.user

  • Database.password

  • Database.connections

Edit cf-server.container

Add the following --cm-param flags to the /quadlets/cf-server.container Exec property to make the CF container start with a correctly configured JDBC connection:

Code Block
Exec=5009 combo --cm-param Database.name=<jdbc-connection-string>\
 --cm-param Database.user=<user> --cm-param Database.password=<password> --cm-param Database.connections=20

...

OR:

Edit the cm.conf file in the following way.

The CF container needs to be started once to initialize the volumes with the configuration files. This initial start of CF will fail because no JDBC connection is configured yet.
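
As an illustration, the resulting entries in cm.conf could look like the sketch below, here using the PostgreSQL connection string format from the examples above; all values are placeholders:

Code Block
Database.name = jdbc:postgresql://<host>:<port>/cmdb
Database.user = <user>
Database.password = <password>
Database.connections = 20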

...

Code Block
systemctl --user restart cf-server

Connecting to services running on the Podman host

There might be situations where a connection from a container to the Podman host machine is needed. As Podman uses the slirp4netns driver by default for rootless containers, there is no routing directly available to reach the Podman host's localhost/127.0.0.1 address. To achieve this, the special IP address 10.0.2.2 can be configured in the CM configuration files to reach the Podman host's localhost. Enable this by adding the following configuration to the container files:

Location: In the [Container] section
Configuration: Network=slirp4netns:allow_host_loopback=true
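
As an example of how the address is then used, a CMDB running on the Podman host could be reached by pointing the JDBC connection string at 10.0.2.2 instead of localhost; the database type and port are placeholders:

Code Block
Database.name = jdbc:postgresql://10.0.2.2:<port>/cmdb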

Post-configuration

Accessing the CM containers using the CM clients

...

There might be firewall rules on the Podman host blocking the exposed container ports.

...

At this point, the deployment will have configuration, certificates, and other persistent data on volumes mounted in the CF containers. To change any of the configuration files, or just to copy files, the volumes need to be accessed either from inside the containers or by mounting the volumes elsewhere.
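
One way to locate the volume data from the host, sketched here for the cf-server configuration volume and assuming a rootless setup, is:

Code Block
# show where the volume data lives on the host
podman volume inspect systemd-cf-server-config --format '{{ .Mountpoint }}'
# for rootless Podman, enter the user namespace before accessing that path
podman unshare ls <mountpoint-from-above>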

...

To initialize the volumes from the Protocol Gateway image data, the pgw container should be started up, and once Tomcat has started, the container should be stopped again:

PGW image deployment

Code Block
podman image load --input images/pgw-image-<version>.tar
Code Block
systemctl --user start pgw
podman logs -f pgw
systemctl --user stop pgw

Configuring the Protocol Gateway container

The pgw container has two volumes by default:

Code Block
systemd-pgw-config-gw
systemd-pgw-config-tomcat

The systemd-pgw-config-gw volume contains configuration related to PGW, including configuration of the different certificate issuance protocols that PGW supports.

The systemd-pgw-config-tomcat volume contains configuration related to Tomcat, including configuration of the different connectors that Tomcat should listen on.

...

It is possible to edit the files from within the pgw container. However, this is not recommended due to the limited number of utility tools available inside the container.

For more information on how to configure PGW, see Initial configuration of Protocol Gateway.

The template file with standard configurations of Protocol Gateway can be found in cm_clients_<version>.zip, provided with the client installation package.

Enabling the pgw container health check

...

Once the token is configured and the ping procedure is available, the Ping servlet must be started by setting the "start=true" parameter in the ping.properties file in the systemd-pgw-config-gw volume.

Warnings will be logged in the Protocol Gateway if the Ping servlet does not refer to a Ping procedure in CM. See the Technical Description document for configuring this.
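
As a hedged sketch of how the container-level health check could be wired up in the pgw quadlet, a HealthCmd directive can probe the Ping servlet. The URL, port, and availability of curl inside the image are assumptions and must be adapted to the actual Tomcat connector and servlet configuration:

Code Block
# in the [Container] section of pgw.container (illustrative only)
HealthCmd=curl -fsk https://localhost:<port>/<ping-servlet-path>
HealthInterval=30s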

...

This is the required minimum configuration for setting up the Protocol Gateway. Additional volumes for possible output directories or HSM/other libraries, additional configuration, or web applications may be added if required.
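
For example, an extra output directory could be bind mounted into the pgw container by adding another Volume line to its quadlet; the host and container paths below are placeholders:

Code Block
# additional bind mount in the [Container] section of pgw.container (paths are examples)
Volume=<host-output-dir>:<container-output-dir>:Z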

Connecting to services running on the Podman host

While it is not recommended, there might be situations where a connection from a container to the Podman host machine is needed. As Podman uses the slirp4netns driver by default, there is no routing directly available to reach the Podman host's localhost/127.0.0.1 address. To achieve this, the special IP address 10.0.2.2 can be used to reach the Podman host's localhost, by adding the following configuration depending on the deployment type:

Deployment type: Quadlets
Location: In the [Container] section
Configuration: Network=slirp4netns:allow_host_loopback=true

The sections on preventing shutdown, HSM configuration, and troubleshooting are shared with Deployment using Podman compose.