OCSP deployment using Podman compose
This article is new for Nexus OCSP Responder 6.2.7-1.
This article describes how to install Nexus OCSP Responder server using Podman compose.
Prerequisites
License file must be available
Podman version 4.9.4 or later
RedHat Enterprise Linux 9.4 or Rocky Linux 9.4
Step-by-step instructions
Pre-configuration
There are a few pre-configuration steps required before OCSP can be deployed. To prepare the deployment with an initial configuration, follow the configuration steps in the below sections.
OCSP image archive files
The Podman image of Nexus OCSP Responder is stored in the image directory under the distributable. This image file may be uploaded to a local private container registry with controlled and limited access, but shall not be distributed to any public container registry.
For local use, the image can be loaded with the below command:
podman image load --input image-6.2.7-1/ocsp-6.2.7-1.tar
OCSP license file
Create a license directory in the ocsp deployment directory and place the OCSP license file inside it. In this article, the license directory will be mounted as a read-only bind file system volume for the ocsp container.
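For example, assuming the distributable has been unpacked and the license file is named license.xml (the actual file name may differ), the directory can be prepared like this:

```shell
# Path to the OCSP license file obtained from Nexus (example name, adjust as needed)
LICENSE_FILE=license.xml

# Create the license directory inside the ocsp deployment directory
mkdir -p ocsp/license

# Copy the license file into it; it will later be bind-mounted read-only
cp "$LICENSE_FILE" ocsp/license/
```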
Deployment directory
When deploying using podman-compose, the name of the directory containing the distributable deployment files dictates the prefix of the names of the containers deployed from the docker-compose.yml file. In general, most parameters in docker-compose.yml can be changed to suit special needs; for example, the ports exposed by the containers can be changed to different ports if required.
In this guide it is assumed that the name of this directory is ocsp, and each container name will hence be prefixed with:
ocsp_
This is also the directory from which the deployment is done, and all configuration for the containers is placed inside it.
Initialize the OCSP deployment
Copy the ocsp directory from inside the deployment directory in the container release distributable to the preferred storage path for the deployment files. The ocsp directory name will be used to prefix the container names when deploying with podman-compose, and may be changed to a different name; this can be suitable, for example, when deploying multiple instances of any of the container images.
Enter the ocsp directory where the docker-compose.yml file is located, and create the container and volumes by running the below command:
podman-compose up --no-start
This prevents the containers from starting immediately, but still sets up all the necessary volumes and the network. The default list of volumes is:
bin: $HOME/.local/share/containers/storage/volumes/ocsp_ocsp-bin/_data/
certs: $HOME/.local/share/containers/storage/volumes/ocsp_ocsp-certs/_data/
config: $HOME/.local/share/containers/storage/volumes/ocsp_ocsp-config/_data/
cils: $HOME/.local/share/containers/storage/volumes/ocsp_ocsp-cils/_data/
crls: $HOME/.local/share/containers/storage/volumes/ocsp_ocsp-crls/_data/
Post-configuration
Configuring the OCSP server application container
At this point the deployment will have configuration, certificates, and other persistent data on volumes mounted in the OCSP container. To make changes in any of the configuration files, or just copy files, the volumes need to be accessed either inside the containers or directly from the host machine.
For example, to edit configuration in OCSP from inside the container:
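A command of this kind (the shell path inside the image is an assumption) could be:

```shell
# Open an interactive shell inside the running OCSP container
podman exec -it ocsp_ocsp_1 /bin/sh
```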
This will start a new shell inside the ocsp_ocsp_1 container, which allows you to edit ocsp.conf and other files. It is also possible to mount a named volume on the host in case the container cannot start properly for some reason.
The volumes can also be reached from the local file system in most cases by viewing the container volume mount list using the below command:
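For example, podman inspect can print the source and destination of each mount for the container:

```shell
# Print the volume mounts (host source -> container destination) of the OCSP container
podman inspect ocsp_ocsp_1 \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
```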
Connecting to services running on the Podman host
While it is not recommended, there might be situations where a connection from a container to the Podman container host machine is needed. As Podman uses the slirp4netns driver by default, there is no routing configured to reach the Podman host's localhost/127.0.0.1 address directly. Instead, the special IP address 10.0.2.2 can be used to reach the Podman machine's localhost, after adding the following configuration depending on the deployment type:
| Deployment type | Location | Configuration |
| --- | --- | --- |
| podman-compose | Under the container object | |
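As a sketch (not taken from the distribution's files; the host name is an assumption), the container object in docker-compose.yml could map a host name to the special address like this:

```yaml
# Hypothetical excerpt of docker-compose.yml
services:
  ocsp:
    extra_hosts:
      # Resolves to the Podman host's localhost under slirp4netns
      - "host.containers.internal:10.0.2.2"
```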
Preventing shutdown of containers
Podman containers that belong to a user session will shut down after the user's session ends, i.e. when the user logs out of the Podman host machine.
An easy way to prevent the shutdown is by enabling lingering via loginctl:
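For example, for the user running the containers:

```shell
# Allow this user's services and containers to keep running after logout
loginctl enable-linger "$USER"
```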
HSM configuration
HSM libraries are by default stored in the directory /opt/ocsp/bin, which is also backed by a volume by default. However, they can be configured to reside in another location in the container, which can be pointed to by, for example, the LD_LIBRARY_PATH environment variable inside the container. The configuration location for the HSM should be taken from its provided documentation.
It is recommended to create additional volumes for both the library and its configuration, so that they are persistent and can be upgraded to newer versions.
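As a sketch (the volume names and container paths are assumptions, not taken from the distribution), such volumes could be declared in docker-compose.yml like this:

```yaml
# Hypothetical additional volumes for an HSM library and its configuration
services:
  ocsp:
    volumes:
      - hsm-lib:/opt/hsm/lib
      - hsm-config:/opt/hsm/etc
    environment:
      # Make the HSM library discoverable inside the container
      - LD_LIBRARY_PATH=/opt/hsm/lib

volumes:
  hsm-lib:
  hsm-config:
```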
The OCSP configuration files have documentation for the parameters where an HSM library should be configured. To test and configure an HSM for use with OCSP, the "hwsetup" tool can be used.
Troubleshooting
The container logs can be monitored using the "podman logs" command in order to narrow down any issues that might occur. If any of the containers fail to start, it is often necessary to access the configuration inside the container.
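For example, to follow the log output of the OCSP container:

```shell
# Follow the log output of the OCSP container
podman logs --follow ocsp_ocsp_1
```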
A simple way to handle this is to start another container mounting the volumes and overriding the image's entry point, for example:
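A sketch of such a temporary container (the image name is an assumption based on the release version):

```shell
# Start a throwaway container that mounts the same volumes as the OCSP
# container, overriding the entry point with a plain shell
podman run -it --rm \
  --volumes-from ocsp_ocsp_1 \
  --entrypoint /bin/sh \
  ocsp:6.2.7-1
```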
Even if the faulty container is down or unsuccessfully trying to restart, this temporary container allows for editing the configuration on the mounted volumes, and files can be copied between them and the Podman host.
Copyright 2024 Technology Nexus Secured Business Solutions AB. All rights reserved.