How to use NVIDIA containers on the ULHPC

In this knowledge nugget, we will use GROMACS, which is provided by NVIDIA as a container. Other tools are also available on the NVIDIA website.

Setup your environment

In this section, we will:

  • Start an interactive session on a regular node (GPU acceleration is not needed at this step)
  • Build a Singularity container from an NVIDIA Docker container, as Docker is not supported on the HPC

To do so, you can follow the steps below:

# Request an interactive session for 2h
si -t120
# Load the singularity module
module load tools/Singularity
# Build your gromacs container from the NVIDIA Docker container
# N.B.: You only need to do it once for each version
singularity build gromacs-2020_2.sif docker://nvcr.io/hpc/gromacs:2020.2

If everything went fine, you are now in possession of a Singularity container with GROMACS, stored in your home directory.
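
As a quick sanity check, you can print the GROMACS version shipped in the container. This is a minimal sketch and assumes gmx is on the container's default PATH; if it is not found this way, use the run command from the next section instead:

# Print the GROMACS version from inside the container
singularity exec gromacs-2020_2.sif gmx --version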

Test your environment

To ensure that your container is working properly, first request an interactive session on a GPU node and load the Singularity module.

# Request an interactive session on a GPU node for 2h
si-gpu -t120
# Load the singularity module
module load tools/Singularity
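
Before running the tests, you can optionally check that the GPU is visible from inside the container; with the --nv flag, Singularity exposes the host's nvidia-smi to the container:

# Optional: verify that the GPU is visible inside the container
singularity exec --nv gromacs-2020_2.sif nvidia-smi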

Then, to test, you can either:

  • Run the container and play with GROMACS if you already know how to use it
  • Run an NVIDIA test script

Run the container and play with GROMACS

You can run the container via the following command. Adapt the container name if necessary:

# Run your container: you will end up INSIDE the container 
# containing the GROMACS software
singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd gromacs-2020_2.sif
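
Once inside the container, GROMACS commands are available directly. As an illustration, assuming your mounted host directory contains an input file named topol.tpr (a placeholder for your own input), a GPU-accelerated run could look like this:

# Inside the container: check the installation, then run on the GPU
gmx --version
gmx mdrun -nb gpu -s topol.tpr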

NVIDIA test script

NVIDIA provides a small test script, which can be found on their GitHub. The content of the script is copied below. Adapt the container filename and the work folder name if necessary:

#!/bin/bash
set -e
# Usage: ./water.sh {GPU_COUNT} {IMG_NAME}

# Script arguments
GPU_COUNT=${1:-1}
SIMG=${2:-"${PWD}/gromacs-2020_2.sif"}

# Set number of OpenMP threads
export OMP_NUM_THREADS=${OMP_NUM_THREADS:-1}

# Create a directory on the host to work within
mkdir -p ./work
cd ./work

# Download benchmark data
DATA_SET=water_GMX50_bare
wget -c ftp://ftp.gromacs.org/pub/benchmarks/${DATA_SET}.tar.gz
tar xf ${DATA_SET}.tar.gz

# Change to the benchmark directory
cd ./water-cut1.0_GMX50_bare/1536

# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIMG}"

# Prepare benchmark data
${SINGULARITY} gmx grompp -f pme.mdp

# Run benchmark
${SINGULARITY} gmx mdrun \
                 -ntmpi ${GPU_COUNT} \
                 -nb gpu \
                 -ntomp ${OMP_NUM_THREADS} \
                 -pin on \
                 -v \
                 -noconfout \
                 -nsteps 5000 \
                 -s topol.tpr
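
Assuming the script above is saved as water.sh next to your container, you can launch the benchmark as follows; the GPU count, thread count, and image path are illustrative and should match your interactive session:

# Make the script executable and run it with 1 GPU and 4 OpenMP threads
chmod +x water.sh
OMP_NUM_THREADS=4 ./water.sh 1 ${PWD}/gromacs-2020_2.sif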