
OCSP deployment using Quadlets

This article is new for Nexus OCSP Responder 6.2.7-1.

This article describes how to install the Nexus OCSP Responder server using quadlets.

Prerequisites

  • License file must be available

  • Podman version 4.9.4 or later

  • Red Hat Enterprise Linux 9.4 or Rocky Linux 9.4

Step-by-step instructions

Pre-configuration

There are a few pre-configuration steps required before OCSP can be deployed. To prepare the deployment with an initial configuration, follow the configuration steps in the below sections.

OCSP image archive file

The Podman image of Nexus OCSP Responder is stored in the image directory under the distributable. The image file may be uploaded to a local private container registry with controlled and limited access, but shall not be distributed to any public container registry.

For local use, the image can be read with the below command:

podman image load --input image-6.2.7-1/ocsp-6.2.7-1.tar
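After loading, it can be verified that the image is present in the local image store:

```shell
# List local images matching the repository name
podman image ls ocsp
```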

Deployment directory

When deploying using quadlets, the directory in which the deployment files from the distributable must be placed is determined by the user running the container. For a rootless deployment it maps to the following directory:

$HOME/.config/containers/systemd/

OCSP license file

Create a license directory in the ocsp deployment directory and place the OCSP license file inside it. In this article, the license directory is bind-mounted read-only into the ocsp container.

Example with a license file called ocsp.license:

$HOME/.config/containers/systemd/license/ocsp.license
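The directory and file can be put in place as follows (the source path of the license file is an example):

```shell
# Create the license directory in the quadlet deployment directory
mkdir -p $HOME/.config/containers/systemd/license

# Copy the license file into it (source path is an example)
cp /path/to/ocsp.license $HOME/.config/containers/systemd/license/ocsp.license
```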

Initialize the OCSP deployment

To handle OCSP on Podman in a production system, it is recommended to create quadlets for each container. Example quadlets can be found in the quadlets directory inside the distributable ocsp directory.

For a rootless deployment, the .container, .volume, and .network files from the quadlets directory need to be copied to the quadlet deployment directory described above ($HOME/.config/containers/systemd/), assuming that the current user is the operator for the container deployment.
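Assuming the distributable was unpacked into a local ocsp directory (path is an example), the copy could look like:

```shell
# Quadlet directory for rootless (per-user) deployments
QUADLET_DIR=$HOME/.config/containers/systemd
mkdir -p $QUADLET_DIR

# Copy the example quadlet files from the distributable (path is an example)
cp ocsp/quadlets/*.container ocsp/quadlets/*.volume ocsp/quadlets/*.network $QUADLET_DIR/
```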

Once the .container, .volume, .network, and license files are in place they can be loaded into systemd using:
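For a rootless deployment, the quadlet files are loaded by reloading the user's systemd daemon:

```shell
systemctl --user daemon-reload
```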

This creates a systemd service for each container and volume, which can then be started accordingly.

The deployment procedure for OCSP with quadlets is essentially the same as for podman-compose, with the exception that the containers are managed by systemd. Volumes only need to be started once, but may be restarted if they are removed for any reason.

Examples (for rootless deployment):
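As a sketch, the commands might look like the following. The unit names are examples: quadlet derives them from the file names, so a file named ocsp.container would typically become ocsp.service, and ocsp.volume would become ocsp-volume.service; the actual names depend on the files shipped in the quadlets directory.

```shell
# Start the volume service once, then the container service
systemctl --user start ocsp-volume.service
systemctl --user start ocsp.service

# Check status and follow the service logs
systemctl --user status ocsp.service
journalctl --user -u ocsp.service -f
```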

The container images should be pulled or loaded manually before starting any of the containers via systemd. Because no TTY (virtual text console) is available, container startup can halt while waiting for console user input, causing the systemctl command to appear to time out. Pulling the images manually with "podman pull" is therefore preferable, as it permits user input in case an option needs to be selected.

Any changes to files in the systemd .container, .volume, .network files or even local bind volumes in the systemd directory require a daemon reload:
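For a rootless deployment:

```shell
# Reload unit definitions after editing quadlet files
systemctl --user daemon-reload

# Restart the affected service afterwards (example unit name)
systemctl --user restart ocsp.service
```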

Containers running from quadlets/systemd will be removed when the systemd service is stopped. Any data on the container not stored on volumes will be lost. 

Post-configuration

Connecting to services running on the Podman host

While it is not recommended, there may be situations where a container needs to connect to the Podman host machine. Because Podman uses the slirp4netns network driver by default, there is no route from the container to the host's localhost/127.0.0.1 address. Instead, the special IP address 10.0.2.2 can be used to reach the host's localhost, after adding the following configuration depending on the deployment type:

Deployment type:  Quadlets
Location:         In the [Container] section
Configuration:    Network=slirp4netns:allow_host_loopback=true
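In a quadlet .container file, the setting might look like this (the image tag and other entries are illustrative):

```ini
[Container]
Image=ocsp:6.2.7-1
# Allow the container to reach the host's localhost via 10.0.2.2
Network=slirp4netns:allow_host_loopback=true
```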

Preventing shutdown of containers

Podman containers that belong to a user session are shut down when the user's session ends, i.e. when the user logs out of the Podman host machine.

An easy way to prevent the shutdown is by enabling lingering via loginctl:
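Lingering can be enabled for the operator user with:

```shell
# Keep the user's systemd services running after logout
loginctl enable-linger $USER
```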

HSM configuration

HSM libraries are by default stored in the directory /opt/ocsp/bin, which is also backed by a volume by default. They can, however, be placed in another location in the container, for example one referenced by the LD_LIBRARY_PATH environment variable inside the container. Consult the HSM vendor's documentation for where its configuration should reside.

It is recommended to create additional volumes for both the library and its configuration, so that they are persistent and can be upgraded to newer versions.
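As a sketch, such volumes could be declared in the quadlet .container file; the volume names and container paths below are hypothetical, and the correct locations should be taken from the HSM vendor's documentation:

```ini
[Container]
# Hypothetical named volumes for the HSM library and its configuration
Volume=hsm-lib.volume:/opt/hsm/lib
Volume=hsm-conf.volume:/opt/hsm/conf
# Make the library location visible to the OCSP process
Environment=LD_LIBRARY_PATH=/opt/hsm/lib
```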

The OCSP configuration files have documentation for the parameters where a HSM library should be configured. To test and configure a HSM for use with OCSP, the "hwsetup" tool can be used.

Troubleshooting

The container logs can be monitored with the "podman logs" command in order to narrow down any issues that might occur. If a container fails to start, it is often necessary to access the configuration inside the container.

A simple way to handle this is to start another container mounting the volumes and overriding the image's entry point, for example:
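Such a temporary container might be started as follows. The image tag, volume name, and shell path are examples; quadlet-created named volumes are typically prefixed with systemd-, so check the actual name with "podman volume ls".

```shell
# Temporary shell container on the same volume, overriding the entrypoint
podman run -it --rm \
  --entrypoint /bin/bash \
  --volume systemd-ocsp:/opt/ocsp \
  ocsp:6.2.7-1
```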

Even if the faulty container is down or unsuccessfully trying to restart, this temporary container allows for editing the configuration on the mounted volumes, and files can be copied between them and the Podman host.

Copyright 2024 Technology Nexus Secured Business Solutions AB. All rights reserved.
Contact Nexus | https://www.nexusgroup.com | Disclaimer | Terms & Conditions